Professor Dr. Martin Schulz has been invited to deliver the opening keynote at ISC 2026, bringing decades of experience in supercomputing architecture, large-scale parallel systems, and emerging computing technologies. As high performance computing (HPC) enters the post-Moore era, Schulz is well-positioned to discuss how increasingly heterogeneous and hybrid systems, including emerging quantum technologies, are reshaping the design and capabilities of next-generation supercomputers.
Schulz has dedicated his career to the intersection of supercomputing architecture, software, and large-scale scientific applications. He is a Professor of Computer Architecture and Parallel Systems at the Technical University of Munich (TUM) and serves on the board of the Leibniz Supercomputing Centre (LRZ). In these roles, he helps transform innovative computing technologies from research concepts into production systems used by scientists and industry.
Much of his current work explores the challenges and opportunities of the post-Moore era, a time when the traditional pace of transistor scaling is slowing. Rather than relying on smaller transistors for performance gains, future computing systems will increasingly depend on integrating a range of novel technologies into a single system. More specifically, Schulz studies how to design and program systems that combine CPUs, GPUs, specialized accelerators, advanced memory technologies, and emerging quantum processors.
Another important focus of his work is integrating quantum computing with classical HPC. Instead of treating quantum devices as standalone machines, Schulz advocates hybrid environments in which quantum processors act as accelerators within established supercomputing workflows. This approach allows quantum hardware to be used alongside classical simulations, optimization algorithms, and data analysis running on large HPC systems.
Schulz also plays a key role in the global HPC software ecosystem, serving as Chair of the MPI Forum, the international body responsible for evolving the Message Passing Interface (MPI) standard. MPI remains the dominant programming model for distributed-memory supercomputing and enables applications to scale across tens of thousands of processors.
Before joining TUM, Schulz spent many years at Lawrence Livermore National Laboratory, where he worked on performance analysis tools and large-scale parallel computing systems. His contributions have been recognized with numerous honors, including the ACM Gordon Bell Prize. Schulz brings both a global perspective and extensive technical expertise to his role, and his work is integral to the evolution of supercomputing for the next generation of scientific discovery.
Schulz’s opening keynote, titled “HPC: A Heterogeneous Future,” will take place on Tuesday, June 23, 2026, in Hamburg, Germany.
By Nages Sieslack
Few individuals have influenced high performance computing (HPC) as profoundly as Jack Dongarra. Over a career spanning more than four decades, his work has underpinned modern scientific discovery, supporting everything from climate modeling and physics simulations to artificial intelligence.
In 2021, Dongarra received the ACM A.M. Turing Award, often described as the “Nobel Prize of computing,” in recognition of his contributions to numerical algorithms and high-performance software. The ACM cited his role in developing methods and libraries that allowed scientific software to keep pace with dramatic changes in computing hardware over successive generations. Indeed, Dongarra’s work has been adapted across multiple generations of architectures, from early vector systems to HPC clusters to today’s heterogeneous, AI-accelerated supercomputers.
His influence began early. In the late 1970s, Dongarra developed LINPACK, a software library designed for solving systems of linear equations efficiently. In 1992, he led the development of LAPACK, extending those capabilities to new computer architectures. These tools became foundational to scientific computing. As Dongarra has observed, linear algebra sits “at the heart of many scientific and engineering applications,” and his work ensured those applications could run efficiently on emerging machines.
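To make concrete the class of problem LINPACK and LAPACK address, here is a toy dense solver for Ax = b using Gaussian elimination with partial pivoting. This is a didactic sketch only; the production libraries implement blocked, cache-aware, and far more numerically careful variants of the same idea.

```python
# Toy illustration of dense linear-system solving (the problem class
# behind LINPACK/LAPACK), not the libraries' actual algorithms.

def solve(A, b):
    """Solve Ax = b for square A (nested lists) via Gaussian elimination."""
    n = len(A)
    # Build the augmented matrix [A | b].
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest pivot magnitude.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate entries below the pivot.
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # Back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

x = solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])  # 2x+y=3, x+3y=5 -> x=0.8, y=1.4
```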
In 1993, LINPACK became the basis for the TOP500 list, the global ranking of the world’s fastest supercomputers. What began as an effort to create a consistent way to measure performance evolved into one of the most widely recognized benchmarks in computing. The TOP500 did more than rank systems; it helped define progress, offering a transparent way to understand how architectures were evolving and where the performance frontier lay.
Dongarra’s career has spanned academia, national laboratories, and international collaborations. At the University of Tennessee, he founded the Innovative Computing Laboratory, which became a leading center for high performance computing research. He also maintained deep ties with Oak Ridge National Laboratory, home to some of the world’s most powerful supercomputers, and has held other positions, including a visiting professorship at the University of Manchester. His contributions have been recognized globally through election to the U.S. National Academy of Sciences, the National Academy of Engineering, and the Royal Society.
Yet Dongarra has consistently emphasized that computing advances are rarely the result of isolated breakthroughs. Instead, they emerge from sustained collaboration between mathematicians, computer scientists, and hardware engineers. His career reflects that intersection, bridging theory and practical implementation at a time when computing was evolving from specialized research equipment into essential scientific infrastructure.
Even now, his influence continues. Though formally retired from teaching, Dongarra remains active in research and mentorship at the University of Tennessee. In recent months, he has traveled extensively, speaking to students and researchers about the future of extreme-scale computing and artificial intelligence. In Tunisia, he met with young researchers exploring computational science. In China and Singapore, he delivered lectures on how numerical algorithms continue to shape modern AI and simulation. These appearances reflect his enduring role not only as a pioneer, but as a guide to the next generation.
That perspective will inform his closing keynote at ISC High Performance 2026 in Hamburg, where he will speak on “HPC in Transition.” Given that few figures have witnessed and shaped so many phases of computing’s evolution, his insights into what lies ahead promise to be enlightening.
At the core of Leil’s platform is Host-Managed Shingled Magnetic Recording (HM-SMR) technology, a high-density HDD architecture widely used by hyperscalers. Unlike conventional drives, HM-SMR is optimized for sequential data patterns, enabling greater areal density and lowering cost per terabyte. Leil complements this hardware approach with an intelligent software layer that manages how and where data is written, aligning workloads with the physical characteristics of the drives.
This software-defined architecture reduces write amplification, optimizes data placement, and selectively powers down idle disks. According to the company, for suitable workloads, this can deliver energy savings of up to 70% compared to traditional always-on storage systems. The result is lower operational expenditure, reduced cooling requirements, and a smaller environmental footprint.
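The power-down effect described above can be sketched with back-of-envelope arithmetic. The wattages and activity ratio below are purely hypothetical illustrations, not Leil's or any vendor's figures:

```python
# Hypothetical model: daily energy for a disk pool where idle disks are
# spun down. Per-disk wattages are illustrative, not vendor specs.

def energy_kwh(n_disks, active_fraction, active_w=8.0, idle_w=0.5, hours=24):
    """Daily energy (kWh) when only a fraction of disks stay spinning."""
    n_active = n_disks * active_fraction
    n_idle = n_disks * (1 - active_fraction)
    return (n_active * active_w + n_idle * idle_w) * hours / 1000.0

always_on = energy_kwh(1000, active_fraction=1.0)  # every disk spinning
managed = energy_kwh(1000, active_fraction=0.2)    # 80% powered down
savings = 1 - managed / always_on                  # fraction of energy saved
```

With these assumed numbers, the managed pool saves roughly three quarters of the always-on energy, which is the same ballpark as the up-to-70% figure the company cites for suitable workloads.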
The Leil platform is aimed at a variety of challenging storage environments, including HPC, AI pipelines, research data repositories, media archives, and large-scale backup, all of which place a premium on sequential throughput, capacity density, and cost efficiency. The platform’s modular design allows organizations to scale capacity incrementally, avoiding forklift upgrades, while maintaining performance across petabyte-scale deployments.
Leil’s distributed file system, SaunaFS, is currently being used to support HPC infrastructure at Assam Agricultural University in India. In addition, the platform integrates with several backup and data protection stacks, including Veeam, Acronis, Veritas (including NetBackup), and Rubrik, making it relevant for large-scale enterprise and research deployments.
In November last year, Leil secured €1.5 million in seed funding. Recently, Leil’s HDD-native storage management solution was recognized with the StorageNewsletter Special Jury Prize, highlighting industry acknowledgment of its approach to high-density storage. The award also underscores growing recognition of software-driven HDD architectures as a viable alternative for large-scale data environments.
At ISC 2026, you can expect to see how HDD-native, software-driven systems can help organizations balance accelerating data growth, budget constraints, and sustainability targets. For attendees evaluating next-generation storage architectures, Leil presents a viable alternative for modern data-intensive environments. Visit them at Booth #M30.
Amanda Randles, the Alfred Winborne Mordecai and Victoria Stover Mordecai Associate Professor at Duke University, gained widespread recognition at ISC as the recipient of the Jack Dongarra Early-Career Award in 2024. After serving as the ISC Research Paper program chair last year, she returns to demonstrate how HPC is moving medicine from reactive treatment to a new era of proactive, patient-specific care in a keynote titled “HPC for Vascular Digital Twins.”
Over the last few years, Randles’ work has become synonymous with extreme-scale biomedical simulation. In her keynote address, she will demonstrate how HPC enables the creation of patient-specific vascular digital twins. These models integrate medical imaging, physiological data, and large-scale blood flow simulations into dynamic, high-fidelity representations of the human circulatory system.
In her abstract, Randles explains that this technology could reshape healthcare by moving beyond “snapshot” analyses. Unlike static models, vascular digital twins capture the dynamic nature of physiology. This requires sustained simulation across thousands to millions of cardiac cycles, the management of massive multimodal datasets, and rapid analysis needed for clinical work.
Randles will discuss how GPU-accelerated supercomputing and extreme-scale parallelism make this kind of modeling feasible.
Her talk will further explore the convergence of:
About Amanda Randles
Amanda Randles is the Director of the Duke Center for Computational and Digital Health Innovation. Her research integrates HPC, machine learning, and biophysical simulation to advance patient-specific care. Her contributions have been recognized with numerous distinctions, including the ACM Prize in Computing, the NIH Pioneer Award, the NSF CAREER Award, and the ACM Grace Murray Hopper Award. Prior to her academic career, she worked as a software engineer at IBM on the Blue Gene supercomputing team.
Randles received her Ph.D. in Applied Physics from Harvard University, an M.S. in Computer Science from Harvard, and a B.A. in Computer Science and Physics from Duke.
Join ISC High Performance 2026 in #ConnectingTheDots
ISC 2026 returns to the Congress Center Hamburg from June 22 – 26 for its 41st edition. Since its inception in 1986, it has been recognized as the world’s oldest and Europe’s most attended event for the HPC community, and increasingly for AI and quantum professionals interested in performance, energy efficiency, and cost-effectiveness.
Nages Sieslack
[email protected]
The ISC tutorials take place on June 22 and complement the conference program by emphasizing practical skills, performance optimization, emerging software models, reproducibility, and the integration of AI and quantum technologies into scientific workflows.
As HPC evolves at the intersection of simulation, data, and intelligent systems, we believe the 2026 tutorials provide attendees with the opportunity to deepen their technical expertise and reflect on the growing convergence of HPC, AI, and quantum technologies shaping next-generation research infrastructures.
The complete list of tutorials offered is now available on the ISC website, with detailed descriptions set to be published on March 25, when registration opens.
Reduced Pricing to Support the Next Generation
To broaden participation and support the next generation of researchers and technology leaders, we are reducing tutorial pricing across all registration categories.
New tutorial rates are:
“By lowering financial barriers, ISC aims to make advanced technical training more accessible and to support the development of the next generation of HPC, AI, and quantum computing experts. We hope that students, doctoral candidates, postdoctoral researchers, and early-career professionals will take advantage of the opportunity,” said Tanja Gruenter, Head of ISC Program Team.
ISC 2026 looks forward to welcoming participants from around the world to engage, learn, and advance their expertise at the forefront of scientific computing.
We think one of those moments may be found this year at the booth of a company called Alice & Bob (G40, Hall H).
Founded in Paris in 2020 by physicists Théau Peronnin and Raphael Lescanne, Alice & Bob is becoming one of Europe’s most talked-about quantum computing startups. With €130 million in funding and 180 employees, the company is building quantum computers that can operate reliably enough to perform useful work – not just in laboratory conditions, but in real-world computing environments.
Why This Distinction Matters
For decades, the field has been held back by a single, stubborn challenge: errors, or to be more specific, error correction. Qubits, the building blocks of quantum computers, are extraordinarily fragile. The slightest disturbance caused by thermal noise, electromagnetic interference, or imperfections in hardware can disrupt calculations. Correcting those errors is possible, but at an enormous cost. Conventional approaches may require thousands, or even millions, of physical qubits to produce a single reliable logical qubit.
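The overhead in the paragraph above can be made concrete with textbook surface-code bookkeeping. This scaling is a generic illustration, not a figure from Alice & Bob or any specific machine:

```python
# Illustrative only: in a standard surface code, a logical qubit at code
# distance d uses about d**2 data qubits plus roughly as many ancilla
# qubits for syndrome measurement, i.e. ~2*d**2 physical qubits total.

def physical_per_logical(d):
    return 2 * d ** 2  # data qubits + measurement ancillas, approximately

# A hypothetical algorithm needing 100 logical qubits at distance 25:
per_logical = physical_per_logical(25)  # over a thousand physical qubits each
total = 100 * per_logical               # six-figure physical-qubit counts
```

Even at this modest code distance, the multiplier is over a thousand physical qubits per logical qubit, which is why hardware that suppresses errors at the source changes the economics so sharply.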
Alice & Bob’s approach challenges that equation at its foundation. The company’s technology is built around the “cat qubit,” named after Schrödinger’s famous thought experiment in quantum mechanics. Unlike conventional designs that focus heavily on correcting errors after they occur, cat qubits are engineered to suppress a certain type of error from the outset. Specifically, the cat qubit significantly reduces the number of “bit-flip” errors that naturally occur in conventional qubits. By reducing error rates at the hardware level, the architecture could dramatically lower the number of physical qubits required to build useful quantum systems.
For those of us in high performance computing, this is not just a scientific curiosity; it is a question of feasibility.
If quantum computing is to become a practical tool alongside classical supercomputers, it must be engineered into systems that are scalable, reliable, and economically viable. Hardware-efficient approaches like Alice & Bob’s could make that transition significantly more achievable.
The company’s rapid growth reflects the urgency of this challenge. With operations in Paris and Boston and an expanding engineering team, Alice & Bob is positioning itself as a contender in the race to build fault-tolerant quantum processors. Its work is not happening in isolation, but as part of a broader shift toward hybrid computing environments where quantum accelerators may one day complement traditional processors.
For HPC center directors, system architects, and researchers thinking about the future of computing infrastructure, Alice & Bob represents something important: a company focused on solving the practical constraints that have so far limited the usefulness of quantum computing.
The cat qubit is not a finished story. But it is a credible attempt to overcome one of quantum computing’s most persistent barriers.
And that alone makes Alice & Bob one of the most interesting companies to watch at ISC this year.
“The Next Generation Committee marks an important step in the systematic development of the ISC program to make it more accessible for younger attendees,” said Colleen Sheedy, People and Organization Development Manager at ISC Group. “Our objective is not only to invite students and young professionals to participate in ISC, but to involve them more actively in shaping ideas, formats, and connections within the community – within a clear and sustainable framework.”
A structured bridge between generations
The committee’s creation follows sustained feedback from younger attendees who expressed a need for stronger orientation, greater visibility, and more opportunities to contribute. While these attendees have long provided valuable input informally, the Next Generation Committee formalizes this engagement, establishing a structured channel for emerging talent to connect with the ISC community.
Positioned as a connector between generations, the committee advances three core objectives: enhancing the visibility of young talent, reducing barriers for new participants, and ensuring the relevance of ISC content for students and entry-level professionals.
The Next Generation Committee operates within a clearly defined scope, combining selected operational contributions with an advisory function. Its activities include:
The committee will be officially introduced at ISC 2026. During this initial phase, members will focus on observation, community engagement, and identifying key needs and opportunities. From 2027 onwards, the first jointly developed measures will be implemented as ISC evolves its offerings for the next generation of HPC practitioners.
Committee Members
Investing in the future of the HPC community
With the Next Generation Committee, ISC reinforces its long-term commitment to community development and talent cultivation in HPC. Beyond technical excellence, the initiative aims to foster orientation, networking, and professional development, enabling young talents to find their place and voice within the ISC ecosystem.
Unlike conventional processors, which generate heat and energy loss as electrons move through transistors, photonic computing uses light to perform mathematical operations. This fundamental shift enables significantly lower power consumption and substantially reduced waste heat, addressing two of the most pressing challenges facing modern data centers.
Photonics has long been explored in academic research, but Q.ANT, a Stuttgart-based company, distinguishes itself by bringing the technology into real-world HPC environments. Its systems have already been deployed in operational settings, demonstrating that photonic computing is no longer confined to laboratories and can support everyday scientific and AI workloads.
At the core of Q.ANT’s platform is the Native Processing Server (NPS), a rack-mounted system designed to integrate seamlessly with existing HPC and AI infrastructures. Rather than replacing CPUs or GPUs, the NPS acts as a photonic accelerator, taking over especially energy-intensive tasks such as matrix operations and AI inference. This hybrid approach allows organizations to enhance performance while leveraging their existing hardware investments.
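The hybrid pattern described above can be sketched as a simple dispatch: route the energy-intensive matrix multiply to an accelerator backend when one is attached, and fall back to the host CPU otherwise. The `backend` object here is a hypothetical stand-in for illustration, not Q.ANT's actual API.

```python
# Sketch of accelerator offload with a CPU fallback. Any object exposing
# a matmul(a, b) method could serve as the (hypothetical) backend.

def matmul(a, b, backend=None):
    """Multiply matrices a (m x k) and b (k x n), given as nested lists."""
    if backend is not None:
        return backend.matmul(a, b)  # offload the hot path to the accelerator
    # Classical fallback: plain triple-loop matrix multiply on the CPU.
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

c = matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])  # no accelerator -> CPU path
```

The point of the pattern is that callers keep one entry point while the system decides, per operation, where the work runs, which is how an accelerator can be introduced without rewriting application code.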
Equally critical is, of course, compatibility. Q.ANT’s solutions are designed to work alongside established software stacks and data-center architectures, lowering barriers to adoption and accelerating practical impact. For HPC operators, this means that photonic acceleration can be introduced incrementally, without disruptive changes to infrastructure or workflows.
Sustainability by Design
What makes Q.ANT particularly relevant in today’s HPC landscape is that sustainability is not an add-on, but a core architectural principle. By computing natively with light and using analog photonic circuits, Q.ANT’s systems can achieve substantial energy-efficiency gains for targeted workloads. Fewer electrical conversions, reduced cooling requirements, and lower overall power consumption directly translate into a smaller carbon footprint per computation.
For HPC centers facing mounting pressure to meet climate targets while continuing to scale performance, this represents a meaningful shift. Instead of simply expanding power-hungry digital infrastructure, photonic acceleration offers a pathway to increasing computational capability without proportionally increasing emissions.
As the post-Moore era forces the HPC community to rethink how performance gains are achieved, Q.ANT presents a compelling vision: faster computation with radically lower energy consumption. By harnessing light as a computing medium, the company is not only pushing technological boundaries but also helping to redefine what sustainable high performance computing could look like in the age of AI. Q.ANT is an ISC bronze sponsor and will be present on the ISC High Performance show floor at booth F40.
At its launch in April 2023, Eviden accounted for more than €5 billion in annual revenue, giving it immediate scale and a top-tier position among Europe’s digital and advanced computing players. By separating from Atos’ traditional IT services, Eviden became the group’s growth engine, focused on data-intensive and security-critical workloads.
The company’s foundations run deep. Through Atos and its predecessor, Bull, Eviden inherited decades of expertise in supercomputing, mission-critical systems, and secure digital infrastructure. Its technologies support national supercomputing centers, scientific research programs, defense systems, and large public-sector IT systems across Europe.
Eviden’s significance goes beyond technology – as Europe reevaluates its dependence on non-European hyperscalers, the company has become a key player in the push for digital sovereignty. Few European firms can deliver end-to-end solutions spanning AI systems, sovereign cloud platforms, cybersecurity, and HPC. This makes Eviden a strategic partner for governments, research institutions, and regulated industries seeking greater control over data and infrastructure.
Looking ahead, Eviden is positioning itself around AI-optimized supercomputing, hybrid and sovereign cloud architectures, and secure data environments tailored to European regulatory requirements. Market forecasts suggest its revenue could grow to the €6-7 billion range over the medium term, fueled by AI adoption and increased public investment in advanced computing.
Eviden’s trajectory will test whether Europe can successfully scale and sustain its own digital champions at a time when computing power, data, and trust are becoming strategic assets.
We are happy to have them as an ISC 2026 platinum sponsor. They will be exhibiting at booth K30.
Since the publication of this blog, the business unit previously referred to as Eviden has been renamed Bull.