ELISA https://elisa.tech Enabling Linux In Safety Critical Applications

What to Expect from the ELISA Project at Embedded World 2026 https://elisa.tech/ambassadors/2026/03/06/what-to-expect-from-the-elisa-project-at-embedded-world-2026/ Fri, 06 Mar 2026 19:23:06 +0000

The ELISA Project will be participating in the upcoming Embedded World Exhibition & Conference, taking place March 10–12, 2026 at Messezentrum Nürnberg, Germany.

Established in 2003, Embedded World has become one of the most important annual gatherings for the global embedded systems community. The event combines a large industry exhibition with a world-class conference program that bridges applied research and real-world industrial applications.

For the ELISA Project community, this event offers an opportunity to connect with engineers, researchers, and organizations working to enable safe use of Linux in safety-critical systems.

ELISA at Embedded World 2026

At this year’s event, the ELISA Project will engage with attendees through:

  • A conference session discussing approaches for assessing the safe usage of Linux

  • On-site discussions with ELISA ambassadors and community members

  • Opportunities to connect with companies building Linux-based safety-critical systems

If you are developing systems where safety, reliability, and open source intersect, this is a great chance to learn more about how the ELISA Project is advancing safety practices around Linux.

Conference Session: Assessing Safe Usage of Linux

A key highlight will be a talk by Kate Stewart from the Linux Foundation.

Approaches on Assessing Safe Usage of Linux

📅 March 10, 2026
⏱ 11:30 (30 minutes)

Linux has become one of the most widely used operating systems across industries—from deeply embedded devices in automotive, aerospace, and medical systems to servers powering global financial infrastructure.

While there are established mechanisms for maintaining and distributing security updates, the question remains:

After applying fixes and updates, how can we demonstrate that a Linux-based system is still safe to use in regulated environments?

In this session, Kate Stewart will explore:

  • Current approaches within the ELISA Project to evaluate Linux in the context of functional safety
  • Methods to support analysis and verification of Linux-based systems
  • Opportunities for automation and collaboration across the ecosystem
  • Emerging best practices for organizations building safety-critical Linux systems

The talk will provide insight into how the community is working to make Linux viable for safety-certified environments.

Learn more about the Embedded World Conference here.

Meet the ELISA Community

In addition to the conference session, several ELISA Project ambassadors and contributors will be attending Embedded World, including Philipp Ahmann (ETAS GmbH), Nicole Pappler (Alektometis), and Simone Weiß (Linutronix), along with many other members of the ELISA Project ecosystem.

They will be available throughout the event to discuss:

  • The ELISA Project’s mission and roadmap
  • Collaboration opportunities
  • Safety practices for Linux-based systems
  • How organizations can participate in the project

Let’s Connect

If you are attending Embedded World and are already working on Linux-based safety-critical applications, or are interested in learning more about the ELISA Project and its goals for 2026, we encourage you to connect with the team during the event.

You can:

  • Reach out directly to ELISA ambassadors onsite
  • Or contact the project team ([email protected]) to schedule a meeting

Embedded World is a fantastic opportunity to exchange ideas, learn from industry leaders, and explore how open source and safety engineering can evolve together. See you there!

What do you mean when you say…? https://elisa.tech/ambassadors/2026/03/04/what-do-you-mean-when-you-say/ Wed, 04 Mar 2026 08:30:26 +0000 https://elisa.tech/?p=3896

This blog post, “What Do You Mean When You Say…? Introducing the ELISA Glossary for Safety-Critical Open Source”, was written by Simone Weiß, Linutronix.

You’re reading a blog post, and three sentences in, you encounter a term and wonder, “What does the author mean when they say that?” You could research it, but you keep reading, telling yourself, “I’ll figure it out later.” We’ve all been there.

The world of embedded and safety-critical open source uses specific terms that can make it hard to understand what’s meant. That’s why we created the ELISA Glossary—a single place for all those terms.

Take a look at the glossary here:
https://directory.elisa.tech/glossary/index.html

What Is the ELISA Glossary?

The ELISA Glossary is a collection of definitions for terms that frequently come up in the ELISA project. Each entry tries to provide not just the theoretical meaning but also how the term is used within ELISA.

You’ll find definitions covering:

  • Safety and certification concepts
  • Embedded and real-time software terms
  • Open-source processes and tools
  • Standards, specifications, and compliance-related language

The glossary is useful for things like:

  • Reading an ELISA blog post and needing a quick refresher
  • Joining a new working group and encountering unfamiliar terms
  • Ensuring consistent language across documents and discussions

The glossary is a work in progress. As tools evolve, standards shift, and best practices change, the glossary will continue to grow. We rely on community feedback: if there’s a term you think should be added or a definition that needs refinement, let us know!

Why the Glossary?

The ELISA Project brings together engineers, safety experts, and organizations working on Linux-based safety-critical systems. This diverse mix of industry, standards, and technical backgrounds is one of ELISA’s strengths—but it also means we use a language that’s not always obvious to newcomers, occasional contributors, or even long-time members diving into new topics.

Since ELISA began, we’ve created:

  • Technical documentation
  • Working group deliverables
  • Presentations

Certain terms pop up again and again, which is where the ELISA Glossary comes in—to help make those terms easier to understand, reference, and use consistently.

Explore the ELISA Glossary

https://directory.elisa.tech/glossary/index.html

Clear language may not solve all the challenges in safety-critical software, but it sure makes collaboration easier.

Enabling Linux in Safety Applications (ELISA) Project Expands Premier Membership with NVIDIA https://elisa.tech/announcement/2026/02/26/enabling-linux-in-safety-applications-elisa-project-expands-premier-membership-with-nvidia/ Thu, 26 Feb 2026 17:12:34 +0000 https://elisa.tech/?p=3891

SAN FRANCISCO, February 26, 2026 – Today, the ELISA (Enabling Linux in Safety Applications) Project announced that NVIDIA has joined as a Premier member and will contribute to advancing the use of Linux in safety-critical and regulated systems. Hosted by the Linux Foundation, ELISA is an open source initiative focused on creating a shared set of elements, processes, and tools to help companies develop and certify Linux-based safety-critical applications and systems.

As software-defined and AI-enabled systems become increasingly central to industries such as automotive, robotics, industrial automation and aerospace, ensuring the safety, reliability, and compliance of Linux-based platforms is more important than ever.

“Linux plays a foundational role in modern, software-defined systems, including those that must meet stringent safety requirements,” said Kate Stewart, Vice President of Dependable Embedded Systems at the Linux Foundation. “NVIDIA’s leadership in accelerated computing, AI, and software platforms brings deep technical expertise to the ELISA community. Their engagement will help drive forward scalable, safety-focused approaches to using Linux in increasingly complex systems.”

NVIDIA joins existing Premier members Boeing and Red Hat.

ELISA Project General Members include AISIN, Arm, Bosch, Canonical, Codethink, Elektrobit, EMQ, Honda, Huawei, Linutronix, Lynx Software Technologies, Nissan Motor Corporation and Wind River. Associate members include Automotive Grade Linux, KernelCI, the Institute of Aircraft Systems Engineering and the Regensburg University of Applied Sciences. Learn more about membership here.

Safety-Critical Software

Open Source Summit North America, scheduled for May 18-20 in Minneapolis, Minnesota, will host a Safety-Critical Software track that features technical sessions, case studies, and cross-industry collaboration initiatives presented by ELISA Project members, ambassadors and contributors. Register here for early-bird pricing by March 24.

About the Linux Foundation

The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, ONAP, OpenChain, OpenSSF, PyTorch, RISC-V, SPDX, Zephyr, and more. The Linux Foundation focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org. The Linux Foundation has registered trademarks and uses trademarks.

For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

For more information:

Maemalynn Meanor

The Linux Foundation

Recap of ELISA Working Group and Special Interest Group Annual Updates 2026 https://elisa.tech/ambassadors/2026/02/25/recap-of-elisa-working-group-and-special-interest-group-annual-updates-2026/ Wed, 25 Feb 2026 16:52:32 +0000 https://elisa.tech/?p=3881

On February 11–12, the ELISA Project community gathered for the 2026 Working Group (WG) and Special Interest Group (SIG) Annual Updates. Over two focused sessions, group leads shared key milestones from 2025, current technical priorities, and what lies ahead in 2026, along with concrete opportunities for collaboration and contribution.

The annual updates serve as a checkpoint for the project: a moment to reflect on progress, align on priorities, and welcome new contributors into the work of advancing Linux in safety-critical systems.

The first day opened with an ELISA Project overview from Technical Steering Committee Chair Philipp Ahmann (ETAS), highlighting overall progress and reinforcing ELISA’s mission to define and maintain common elements, processes, and tools that support safety certification for Linux-based systems.

The first day highlighted progress across ELISA’s core Working Groups:

Open Source Engineering Process – Paul Albertella (Codethink) shared updates on process alignment and best practices to support safety certification efforts.

Systems and Automotive – Philipp Ahmann discussed advancements in aligning Linux with functional safety requirements for automotive and system-level applications.

Safety Architecture – Gabriele Paoloni (Red Hat) presented ongoing architectural work supporting safety use cases.

Linux Features for Safety-Critical Systems – Alessandro Carminati (NVIDIA) outlined kernel and feature-level progress enabling dependable Linux deployments.

The second day focused on use-case driven Working Groups and SIGs:

Aerospace – Matthew Weber (The Boeing Company) shared updates on Linux in aerospace systems.

Space Grade Linux – Ramon Roche (The Linux Foundation) discussed the evolution of Space Grade Linux and its relationship with ELISA.

BASIL & Tools WG Evolution – Luigi Pellecchia (Red Hat) highlighted progress in tooling and traceability efforts.

Lighthouse SIG – Philipp Ahmann provided insights into cross-domain collaboration and coordination.

The event concluded with closing reflections and a forward-looking discussion on collaboration opportunities in 2026.

Continuing the Work

The WG & SIG Annual Updates are more than a status review; they are a coordination point for the year ahead. As Linux adoption in safety-critical systems continues to expand across automotive, aerospace, industrial, and emerging domains, ELISA remains committed to open collaboration, practical tooling, and shared technical foundations.

Thank you to all speakers, contributors, and attendees who helped make the 2026 updates a success.

We look forward to another year of advancing Linux in safety-critical environments together.

ELISA Project at FOSDEM 2026: Advancing Open Source in Safety-Critical Systems https://elisa.tech/ambassadors/2026/01/28/elisa-project-at-fosdem-2026-advancing-open-source-in-safety-critical-systems/ Wed, 28 Jan 2026 08:30:42 +0000 https://elisa.tech/?p=3855

As open source software continues to move deeper into safety-critical systems, FOSDEM provides a unique space for the conversations that need to happen between developers, safety engineers, maintainers, and industry stakeholders. For the Enabling Linux in Safety Applications (ELISA) project, FOSDEM 2026 is an opportunity to engage directly with the open source community, share practical progress, and collaborate on the challenges of using Linux in systems where failure can have serious consequences.

ELISA’s mission is to make it easier for organizations to build and certify Linux-based safety-critical applications: systems whose failure could result in loss of human life, significant property damage, or environmental harm. By bringing these discussions to FOSDEM, ELISA helps connect real-world safety and certification needs with the developers and projects building the software at the core of these systems.

What ELISA Is Working On

ELISA brings together companies, developers, and safety experts to define and maintain a shared set of tools, processes, and best practices that help organizations demonstrate that Linux-based systems can meet functional safety requirements. Rather than positioning Linux as a standalone “safety solution,” ELISA focuses on how Linux can be used as a component within safety-critical systems, supported by appropriate system-level mitigations, documentation, and evidence.

A key part of this work is collaboration with certification authorities and standardization bodies across multiple industries. By engaging early and openly, ELISA helps clarify expectations around certification pathways, safety arguments, and compliance, reducing uncertainty for both developers and assessors. This approach enables reuse, transparency, and consistency across domains such as automotive, aerospace, railways, industrial automation, and medical systems.

ELISA at FOSDEM 2026

FOSDEM 2026 offers an ideal environment to continue these conversations. As a free, community-driven event that brings together thousands of open source developers from around the world, it allows ELISA to connect directly with the people building and maintaining the software used in safety-critical products.

Throughout the weekend, ELISA Project Ambassadors will be actively participating across the event: giving talks, joining technical discussions, and engaging with contributors in multiple developer rooms. Attendees can also meet the ELISA team at the Linux Foundation Europe stand (Building K, Level 2, Group A), where they will be available to discuss ongoing work, community activities, and ways to get involved in the project.

Several members of the ELISA Technical Steering Committee (TSC) will be present as well, providing an opportunity for in-depth conversations around safety concepts, certification challenges, and cross-industry collaboration.

Session Highlight:

Code, Compliance, and Confusion: Open Source in Safety-Critical Products

This talk examines the growing use of open source software in functionally safe systems, including platforms such as Linux, Zephyr, Xen, and automotive middleware. It looks at both the progress made in recent years and the persistent barriers to adoption, from certification uncertainty and fragmented governance to common misunderstandings around safety responsibility and system architecture. Learn more.

BOF/Unconference

In addition to talks, ELISA-related topics will be discussed in Birds of a Feather (BoF) sessions, which offer a more informal space for discussion and idea exchange.

One BoF will focus on Linux & Open Source Software for safety applications in Railways, exploring how large-scale reuse and collaborative development can support the sector’s growing software needs while meeting strict safety requirements. The discussion will also consider whether there is sufficient momentum to form a foundation-backed initiative to support OSS adoption in railways.

Another BoF, Safety-Critical Linux: Challenges across industries, will bring together participants from automotive, aerospace, medical devices, robotics, and rail. The session will explore shared challenges such as documentation, tooling, certification, and system design, and identify opportunities where cross-industry collaboration could reduce duplication and improve outcomes.

Join the Conversation at FOSDEM

FOSDEM 2026 is an opportunity to move beyond theory and engage in practical, technical discussions about open source in safety-critical systems. Whether you are building software, assessing safety cases, or defining certification strategies, ELISA invites you to take part in the conversations, meet the community, and help shape how Linux and open source software are used in systems that demand the highest levels of trust and reliability.

We look forward to connecting with you in Brussels.

Recap of ELISA Project at Linux Plumbers Conference: Tokyo, Japan 2025 https://elisa.tech/blog/2026/01/21/recap-of-elisa-project-at-linux-plumbers-conference-tokyo-japan-2025/ Wed, 21 Jan 2026 18:41:13 +0000 https://elisa.tech/?p=3834

The ELISA Project participated in the Linux Plumbers Conference (LPC) 2025, held December 11–13 at Toranomon Hills Forum in Tokyo (with hybrid remote access). The event brought together developers working in the core areas of Linux for technical discussions and collaboration.

ELISA at the Safe Systems with Linux Microconference

ELISA community members joined kernel developers during the Safe Systems with Linux Microconference to explore how Linux can better support safety-critical and high-integrity systems. The microconference focused on progress around traceability, requirements, testing, and scalable verification to support more dependable kernel development.

Session Highlights:

Aspects of Dependable Linux Systems – Kate Stewart (Linux Foundation), Philipp Ahmann (ETAS GmbH, a Bosch company)

Kate and Philipp discussed how Linux is increasingly used in safety-critical and regulated industries that rely on dependable and robust software. They explained that these industries follow formal standards for requirements, verification, and change management, but that such standards are not well known within the open source kernel community. The session noted that while the Linux kernel already embodies many good development practices, important artifacts like requirements, tests, and documentation are not yet connected in a structured way. The speakers argued for shared approaches rather than isolated company efforts to make Linux safer and easier to analyze in complex systems, and encouraged collaboration on improving traceability, clarity, and maintainability to support dependable Linux-based systems.

NVIDIA Approach for Achieving ASIL B Qualified Linux: minimizing expectations from upstream kernel processes – Igor Stoppa (NVIDIA)

In this talk, Igor Stoppa presented NVIDIA’s approach for achieving ASIL-B qualified Linux while minimizing the impact on upstream kernel developers and processes. Unlike traditional safety strategies that require modifying or qualifying large parts of the kernel, NVIDIA proposes mechanisms that isolate and contain safety-relevant components so the wider kernel does not need to be safety-qualified. The approach focuses on reducing dependencies, avoiding burdens on maintainers, and enabling qualification without requiring upstream developers to become safety experts. Igor outlined techniques such as resource partitioning, thread capabilities, and memory pools to ensure verifiable safety behavior without intrusive kernel changes. The goal is to support safety use cases in automotive and robotics while keeping upstream integration feasible and low-friction.
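To make the isolation idea concrete, here is a minimal conceptual model of a dedicated memory pool, sketched in Python for readability. It is not NVIDIA's actual mechanism (which lives in the kernel); it only illustrates the property the talk relies on: a safety-relevant component draws from its own preallocated blocks, so allocation failures are bounded and never spill into the shared allocator.

```python
# Conceptual model only: a safety component allocates from its own
# preallocated pool rather than the shared heap, so its memory
# behavior stays bounded and analyzable.
class MemoryPool:
    def __init__(self, nblocks):
        # All blocks are reserved up front; nothing is requested later.
        self.free = list(range(nblocks))

    def alloc(self):
        if not self.free:
            return None  # bounded failure, never touches the shared heap
        return self.free.pop()

    def release(self, block):
        self.free.append(block)
```

The point of the sketch is the failure mode: exhaustion is local to the component and detectable, rather than a side effect on the rest of the system.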

Applying Program Verification to Linux Kernel Code: Challenges, Practices, and Automation – Keisuke Nishimura

In this talk, Keisuke Nishimura presented ongoing work on applying deductive program verification to Linux kernel code, with a focus on the task scheduler. He explained that while the kernel is increasingly gaining specifications, checking that implementations satisfy them still relies heavily on manual effort. Using case studies, he showed how formal verification of scheduler functions can uncover real semantic bugs and increase confidence in functional correctness. The talk also covered practical challenges, such as writing formal specifications, handling loops with invariants, and preparing minimal, verifiable code extracted from large kernel files. Keisuke concluded by outlining automation efforts for code extraction and invariant inference, with the goal of making formal verification a more scalable and practical part of the Linux kernel development process.
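As a small illustration of the loop-invariant idea the talk touches on, here is a Python sketch (not the kernel scheduler code) in which the invariant that a deductive tool would prove statically is checked as a runtime assertion instead:

```python
def binary_search(xs, target):
    """Find the index of target in sorted xs, or return -1."""
    lo, hi = 0, len(xs)
    while lo < hi:
        # Loop invariant (asserted at runtime here; a deductive tool
        # would prove it once, statically): if target is present, its
        # index lies in [lo, hi).
        assert all(x < target for x in xs[:lo])
        assert all(x > target for x in xs[hi:])
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return -1
```

Writing down the invariant is exactly the kind of manual effort the talk describes: once stated, it makes the argument for functional correctness checkable rather than implicit.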

Defining and maintaining requirements in the Linux Kernel – Chuck Wolber, Gabriele Paoloni (Red Hat), Kate Stewart (Linux Foundation)

Last year in Vienna, the speakers of this talk held a session on “improving kernel design documentation and involving experts”. Following this, the ELISA Architecture Working Group drafted an initial template for software requirements definition and started documenting the expected behaviour of different functions in the TRACING subsystem.

The work also included reviewing and adopting a framework for formally specifying kernel APIs.

This session aimed to present the latest updates and involve the experts to define the best next steps for having a path to introduce and maintain requirements in the kernel.

The discussion focused on how to document code, show value, address maintainer comments, and link requirements to tests and other verification measures.

KUnit Testing Insufficiencies – Matthew Whitehead (The Boeing Company)

This talk examined the limitations of KUnit when testing small, isolated units of Linux kernel code for high-integrity applications. Matthew Whitehead showed how the current KUnit approach struggles with scalability, system-state dependence, and the lack of built-in mocking or faking needed for low-level testing. Because KUnit tests are built into the kernel, they require full kernel builds, multiple kernels for large test sets, and slow write–execute–observe cycles. He demonstrated how creating isolated tests often requires patches, duplicated code, and extensive setup, which leads to high maintenance costs. The session highlighted the need for unit test capabilities that support out-of-tree compilation, user-space execution, and automatic integration of mocks.

Exploring possibilities for integrating StrictDoc with ELISA’s requirements template approach for the Linux kernel – Tobias Deiminger (Linutronix GmbH)

This talk demonstrated how ELISA’s proposed Linux kernel requirements template could be realized using the StrictDoc model and tooling. Tobias Deiminger showed how StrictDoc can parse requirement templates inlined in Linux source code, merge them with sidecar metadata files, and render traceable documents linking requirements, code, and tests. He highlighted that StrictDoc already fulfills most ELISA needs, including SPDX-REQ tags and structured traceability, while gaps remain around hash-based drift detection. The presentation included a live walkthrough using a demo repository and discussed StrictDoc’s broader model (requirements, design, tests, user stories) compared to ELISA’s current low-level focus. The talk concluded with the proposal that StrictDoc add hash generation and compatibility tweaks, while ELISA could list StrictDoc as a reference tool for kernel developers.
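The inline-tag idea can be illustrated with a few lines of Python. The tag format below is a simplified stand-in (the real ELISA template and StrictDoc grammar are richer), but it shows the basic step of harvesting requirements from source comments so they can be merged with metadata and rendered into traceable documents:

```python
import re

# Hypothetical inline tag format, for illustration only; the actual
# ELISA template and StrictDoc syntax differ.
REQ_RE = re.compile(r"SPDX-REQ:\s*(?P<uid>[A-Z0-9-]+)\s+(?P<text>.+)")

def extract_requirements(source: str) -> dict:
    """Scan source-code comments for inline requirement tags and
    return {uid: text}, the raw material for a traceability document."""
    reqs = {}
    for line in source.splitlines():
        m = REQ_RE.search(line)
        if m:
            # Drop comment-closing characters left at the end of the line.
            reqs[m.group("uid")] = m.group("text").strip().rstrip(" */")
    return reqs

demo = """
/* SPDX-REQ: TRACE-001 The tracer shall record events in order. */
int trace_event(void) { return 0; }
"""
```

A real tool would additionally merge these entries with sidecar metadata and hash the tagged code to detect drift, which is the gap the talk identifies.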

BASIL: Open Source Traceability for Safety-Critical Systems – Luigi Pellecchia

This talk introduces BASIL – The FuSa Spice, a web-based tool that helps manage traceability for large, fast-evolving projects like the Linux kernel. Luigi Pellecchia explains how safety standards require traceability across requirements, code, tests, documentation, and test results, but these artifacts are spread across many repositories and CI systems (e.g., Linux Test Project, man-pages, CKI, KernelCI). BASIL proposes “traceability as code”: a single configuration file defines which repositories to scan, how to extract work items (requirements, tests, results), and how they relate to each other. From this, BASIL can automatically build and update traceability matrices, integrate data from external test infrastructures, and export results in formats such as SPDX. The session shows how this approach makes traceability and compliance more repeatable, automatable, and sustainable for the Linux kernel ecosystem.
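The "traceability as code" concept can be sketched independently of BASIL's actual configuration format, which is not reproduced here. In this hypothetical Python model, a single declarative structure names the artifacts and their links, and a small builder derives a traceability matrix plus a list of uncovered requirements from it:

```python
# Hypothetical, simplified stand-in for a traceability configuration;
# BASIL's real config instead names repositories to scan and rules
# for extracting work items.
CONFIG = {
    "requirements": {"REQ-1": "Scheduler shall be deterministic"},
    "tests": {"TEST-A": ["REQ-1"], "TEST-B": []},  # test -> covered reqs
}

def build_matrix(config):
    """Return ({requirement: [tests covering it]}, [uncovered requirements])."""
    matrix = {req: [] for req in config["requirements"]}
    for test, reqs in config["tests"].items():
        for req in reqs:
            matrix.setdefault(req, []).append(test)
    uncovered = [r for r, tests in matrix.items() if not tests]
    return matrix, uncovered
```

Because the matrix is derived from the config rather than maintained by hand, re-running the builder after every change is what makes the compliance evidence repeatable.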

 

The discussions at LPC 2025 made it clear that building safer and more dependable Linux-based systems is a shared challenge and a shared opportunity. Across all sessions, common themes emerged: improving traceability, defining clearer requirements, strengthening testing practices, and exploring scalable approaches to verification. These conversations reflect exactly what ELISA is working toward: enabling the broader community to confidently use Linux in safety-critical and high-integrity environments.

 

If you are interested in these topics, we invite you to learn more about the ELISA Project and get involved. Learn more about the ELISA project and working groups.

Recap: ELISA Project at Open Source Summit Seoul Korea 2025 https://elisa.tech/blog/2025/12/17/recap-elisa-project-at-open-source-summit-seoul-korea-2025/ Wed, 17 Dec 2025 17:06:30 +0000 https://elisa.tech/?p=3793

The Open Source Summit 2025, held on November 4–5 in Seoul, South Korea, brought together a global community of developers, engineers, policymakers, and open source leaders to advance collaboration across the ecosystem. As one of the most comprehensive gatherings in open source, the event created space for meaningful dialogue across technical and strategic domains.

The ELISA Project participated as part of the Safety-Critical Software Track, contributing to discussions at the intersection of open source development and safety standards. This track highlighted the growing role of open source in regulated and safety-sensitive environments, where reliability, transparency, and compliance are essential.

Session Highlights:

Driving Safety Forward: Lessons Learned From Deploying OSS in Real-world Automotive – Jaylin Yu, EMQ

Driving Safety Forward: Lessons Learned From Deploying OSS in Real-world Automotive was presented by Jaylin Yu from EMQ and focused on practical experience deploying open source software in mass-production vehicles. The session examined how OSS can meet automotive safety and security expectations when combined with strong community engagement, academic collaboration, and production-driven validation.

Examples included MQTT-based remote diagnostics, actor-based system design, and the use of advanced stateful fuzzing techniques to uncover concurrency, race conditions, and protocol-level issues. Jaylin highlighted how software supply-chain decisions and dependency misuse can escalate into system-wide failures in safety-critical environments.

The talk also explored post-deployment challenges such as suspend-to-RAM behavior, file-descriptor exhaustion, time synchronization, and observability gaps in Linux-based systems. Overall, the session delivered field-tested guidance for building secure, traceable, and reliable OSS-based software-defined vehicle platforms.

DO-330 Qualification of Enhanced LLVM Structural Coverage Tool – Minji Park & Seojin Kim, The Boeing Company

DO-330 Qualification of Enhanced LLVM Structural Coverage Tool was presented by Minji Park and Seojin Kim from The Boeing Company and focused on qualifying an open source structural coverage tool for use in safety-critical avionics software.

The session explained why structural coverage is mandatory under RTCA DO-178C and how verification tools themselves must be qualified under RTCA DO-330 to produce trusted evidence. The speakers described Boeing’s efforts to qualify an enhanced LLVM coverage (llvm-cov) tool, targeting statement, decision, and modified condition/decision coverage (MC/DC) required for higher software assurance levels. The session covered key details including how line and branch coverage were aligned with DO-178C objectives through source formatting, pipeline instrumentation, and toolchain integration.

The talk also outlined the determination of Tool Qualification Level (TQL 5), required qualification artifacts, and validation and verification activities needed to support certification. The session concluded with challenges of qualifying open source tools such as version changes, object code coverage, and regulatory submission and how Boeing is addressing them to enable compliant use of OSS in avionics systems.
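For readers unfamiliar with MC/DC, the criterion the qualified tool measures can be demonstrated in a few lines of Python: for each condition in a decision, MC/DC requires a pair of test vectors in which only that condition changes and the decision outcome flips, showing that condition's independent effect. The decision below is invented for illustration, not taken from the Boeing toolchain:

```python
from itertools import product

def decision(a, b, c):
    # Example decision with three conditions.
    return a and (b or c)

def mcdc_pairs(cond_index, vectors):
    """Pairs of test vectors demonstrating the independent effect of
    one condition: only that condition differs between the two
    vectors, and the decision outcome flips."""
    pairs = []
    for v1, v2 in product(vectors, repeat=2):
        differs = [i for i in range(3) if v1[i] != v2[i]]
        if differs == [cond_index] and decision(*v1) != decision(*v2):
            pairs.append((v1, v2))
    return pairs

all_vectors = list(product([False, True], repeat=3))
# MC/DC is satisfied by a test set in which every condition has at
# least one such demonstrating pair.
```

Exhaustive vectors trivially satisfy MC/DC; the engineering problem the tool addresses is measuring whether a much smaller, realistic test set still does.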

Introduction and Consideration of Temporal Partitioning in Avionics With an Open Source Ecosystem – Haesun Kim & Gihwan Kwon, The Boeing Company

Introduction and Consideration of Temporal Partitioning in Avionics With an Open Source Ecosystem was presented by Haesun Kim and Gihwan Kwon from The Boeing Company and examined how ARINC 653 enables safe and deterministic operation in integrated modular avionics (IMA) systems.

The session introduced the motivation for adopting ARINC 653, comparing traditional federated avionics architectures with IMA approaches that rely on strict temporal and spatial partitioning. Key technical details covered the ARINC 653 two-tier scheduling model, including module-level scheduling across partitions and rate-monotonic process scheduling within each partition.

The speakers discussed gaps between ARINC 653 requirements and current open-source operating systems, highlighting challenges in scheduling, process management, and health monitoring. The talk concluded with Boeing’s ongoing collaboration with open-source communities and future work to bridge these gaps and enable compliant, safety-critical avionics systems built on open-source technologies.
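The two-tier model can be sketched as a toy scheduler in Python. The partition windows and process periods below are invented for illustration; real ARINC 653 configuration is considerably more involved (major frame timing, health monitoring, inter-partition communication):

```python
# Toy model of ARINC 653 two-tier scheduling (illustration only):
# tier 1 is a fixed, repeating major frame of partition windows;
# tier 2 picks, within the active partition, the ready process with
# the shortest period (rate-monotonic priority).
MAJOR_FRAME = [("P1", 2), ("P2", 2)]  # (partition, window length in ticks)

PROCESSES = {
    "P1": [{"name": "nav", "period": 10}, {"name": "log", "period": 50}],
    "P2": [{"name": "io", "period": 20}],
}

def schedule(ticks):
    """Return the name of the process chosen at each tick."""
    frame = [part for part, width in MAJOR_FRAME for _ in range(width)]
    timeline = []
    for t in range(ticks):
        partition = frame[t % len(frame)]        # tier 1: fixed windows
        ready = PROCESSES[partition]
        chosen = min(ready, key=lambda p: p["period"])  # tier 2: rate-monotonic
        timeline.append(chosen["name"])
    return timeline
```

The fixed outer frame is what gives temporal partitioning its determinism: no process in one partition can consume another partition's window, regardless of load.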

Smarter Code, Sneakier Risks: Supply Chain Security in the Age of AI – Lavakush Biyani, Harness

Smarter Code, Sneakier Risks: Supply Chain Security in the Age of AI was presented by Lavakush Biyani from Harness and examined how AI-powered coding tools are reshaping software development while introducing new supply chain security risks. The session explained how AI-generated code can unknowingly introduce vulnerabilities through insecure patterns, outdated libraries, or hallucinated dependencies that attackers can exploit.

The session covered real-world examples of dependency confusion, AI-suggested non-existent packages, and the reuse of vulnerable dependency versions due to limited model context. The speakers introduced practical detection techniques such as analyzing code changes, generating AI Bills of Materials (AIBOMs), tracking dependency drift, and monitoring build behavior.

The session concluded with guidance on integrating these security checks into CI/CD pipelines, enabling DevSecOps teams to manage AI-driven risks without slowing development velocity.
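Two of the checks described, flagging dependencies absent from a known index ("hallucinated" packages) and detecting version drift between lockfile snapshots, reduce to simple set and dictionary operations. The sketch below uses a hard-coded stand-in for a real registry query:

```python
# KNOWN_INDEX stands in for a live package-registry lookup.
KNOWN_INDEX = {"requests", "numpy", "flask"}

def hallucinated(deps):
    """Declared dependencies that do not exist in the package index,
    the pattern behind AI-suggested non-existent packages."""
    return sorted(set(deps) - KNOWN_INDEX)

def drift(old_lock, new_lock):
    """Packages whose pinned version changed between two lockfile
    snapshots: {package: (old_version, new_version)}."""
    return {p: (old_lock[p], v) for p, v in new_lock.items()
            if p in old_lock and old_lock[p] != v}
```

In a CI/CD pipeline, either result being non-empty would gate the build for review rather than fail it outright, keeping the check from slowing development velocity.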

Detecting Double Free With BPF – Bojun Seo, LG Electronics

Detecting Double Free With BPF was presented by Bojun Seo from LG Electronics and addressed the challenges of detecting double free vulnerabilities in C and C++ programs, particularly in production and embedded environments.

The session explained why traditional tools such as Valgrind and AddressSanitizer often struggle in real-world systems due to high overhead and their tendency to alter memory behavior, leading to hard-to-reproduce Heisenbugs. The session also covered a novel detection approach using BPF and uprobes to trace memory allocation and deallocation events without modifying the target process’s memory footprint.

The tool tracks allocation counters and captures stack traces in BPF maps, reporting double frees with significantly lower runtime and memory overhead. Through live demonstrations and real code examples, the talk showed how this lightweight BPF-based approach improves reliability and practicality for detecting double free errors in performance-sensitive embedded systems.

Telco Supply Chain Security: Implementing ISO 18974 & SBOM – Haksung Jang, SK Telecom

Telco Supply Chain Security: Implementing ISO/IEC 18974 & SBOM was presented by Haksung Jang from SK Telecom and focused on managing growing software supply chain risks in the rapidly evolving telecom industry.

The talk explained how increased reliance on open source in 5G, cloud-native, and software-defined networks has amplified dependency complexity and reduced visibility, creating serious security challenges. Key technical details covered the adoption of ISO/IEC 18974 (Open Source Security Assurance) as a standardized framework for vulnerability management, governance, and third-party assurance across telecom supply chains.

The session highlighted SBOM implementation using standards such as SPDX and CycloneDX, emphasizing automated generation, validation, and integration into CI/CD pipelines to enable rapid vulnerability response and regulatory compliance. Drawing from SK Telecom’s real-world OSPO experience and OpenChain Telco Work Group activities, the talk provided practical guidance on policy design, supplier collaboration, and building a trusted, standards-based telecom software ecosystem.

Key Takeaways:

The ELISA Project’s presence at Open Source Summit Seoul 2025 showed how open source is now essential in safety-critical and regulated systems.

Across automotive, avionics, embedded, AI, and telecom sessions, speakers demonstrated that open source can meet strict safety and security requirements when supported by strong processes and standards. Talks highlighted the importance of verification, deterministic system design, and low-overhead runtime analysis for real-world deployments. Supply chain security emerged as a shared priority, with SBOMs, AIBOMs, and international standards enabling visibility and trust.

Overall, the sessions reinforced that safety, security, and open collaboration must advance together.

What’s Next?

If you are interested in shaping this work, we invite you to join ELISA working groups and contribute to advancing safety practices in open source together.

Schrödinger’s test: The /dev/mem case https://elisa.tech/ambassadors/2025/12/10/schrodingers-test-the-dev-mem-case/ Wed, 10 Dec 2025 20:50:20 +0000 https://elisa.tech/?p=3783

This blog was originally published by Alessandro Carminati, Principal Software Engineer at Red Hat, on his personal blog and is republished here with permission.

Why I Went Down This Rabbit Hole

Back in 1993, when Linux 0.99.14 was released, /dev/mem made perfect sense. Computers were simpler, physical memory was measured in megabytes, and security basically boiled down to: “Don’t run untrusted programs.”

Fast-forward to today. We have gigabytes (or terabytes!) of RAM, multi-layered virtualization, and strict security requirements… And /dev/mem is still here, quietly sitting in the kernel, practically unchanged… A fossil from a different era. It’s incredibly powerful, terrifyingly dangerous, and absolutely fascinating.

My work on /dev/mem is part of a bigger effort by the ELISA Architecture working group, whose mission is to improve Linux kernel documentation and testing. This project is a small pilot in a broader campaign: build tests for old, fundamental pieces of the kernel that everyone depends on but few dare to touch.

In a previous blog post, “When kernel comments get weird”, I dug into the /dev/mem source code and traced its history, uncovering quirky comments and code paths that date back decades. That post was about exploration. This one is about action: turning that historical understanding into concrete tests to verify that /dev/mem behaves correctly… Without crashing the very systems those tests run on.

What /dev/mem Is and Why It Matters

/dev/mem is a character device that exposes physical memory directly to userspace. Open it like a file, and you can read or write raw physical addresses: no page tables, no virtual memory abstractions, just the real thing.

Why is this powerful? Because it lets you:

  • Peek at firmware data structures,
  • Poke device registers directly,
  • Explore memory layouts normally hidden from userspace.

It’s like being handed the keys to the kingdom… and also a grenade, with the pin halfway pulled.

A single careless write to /dev/mem can:

  • Crash the kernel,
  • Corrupt hardware state,
  • Or make your computer behave like a very expensive paperweight.

For me, that danger is exactly why this project matters. Testing /dev/mem itself is tricky: the tests must prove the driver works, without accidentally nuking the machine they run on.

STRICT_DEVMEM and Real-Mode Legacy

One of the first landmines you encounter with /dev/mem is the kernel configuration option STRICT_DEVMEM.

Think of it as a global policy switch:

  • If disabled, /dev/mem lets privileged userspace access almost any physical address: kernel RAM, device registers, firmware areas, you name it.
  • If enabled, the kernel filters which physical ranges are accessible through /dev/mem. Typically, it only permits access to low legacy regions, like the first megabyte of memory where real-mode BIOS and firmware tables traditionally live, while blocking everything else.

Why does this matter? Some very old software, like emulators for DOS or BIOS tools, still expects to peek and poke those legacy addresses as if running on bare metal. STRICT_DEVMEM exists so those programs can still work, but without giving them carte blanche access to all memory.

So when you’re testing /dev/mem, the presence (or absence) of STRICT_DEVMEM completely changes what your test can do. With it disabled, /dev/mem is a wild west. With it enabled, only a small, carefully whitelisted subset of memory is exposed.
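The difference is easy to picture as a toy predicate. This is only a model of the policy described above, not the kernel’s real check (which lives in architecture-specific devmem_is_allowed() logic, and whose exact whitelisted ranges vary by platform):

```python
# Toy model of the STRICT_DEVMEM policy: with the option enabled, only
# the low legacy region (here, the first megabyte) stays reachable
# through /dev/mem; with it disabled, everything is exposed.
# Illustration only -- NOT the kernel's actual devmem_is_allowed() rules.

LEGACY_LIMIT = 1 << 20  # 1 MiB: where real-mode BIOS/firmware tables live

def devmem_access_allowed(phys_addr: int, strict_devmem: bool) -> bool:
    """Return True if a /dev/mem access at phys_addr would be permitted."""
    if not strict_devmem:
        return True  # "wild west": everything is exposed
    return phys_addr < LEGACY_LIMIT

# The VGA text buffer at 0xB8000 stays reachable either way; kernel RAM
# up at 256 MiB is filtered out once STRICT_DEVMEM is enabled.
print(devmem_access_allowed(0xB8000, strict_devmem=True))    # True
print(devmem_access_allowed(256 << 20, strict_devmem=True))  # False
print(devmem_access_allowed(256 << 20, strict_devmem=False)) # True
```

A test harness has to probe both sides of that boundary to know which of the two worlds it is running in.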

A Quick Note on Architecture Differences

While /dev/mem always exposes what the kernel considers physical memory, the definition of physical itself can differ across architectures. For example, on x86, physical addresses are the real hardware addresses. On aarch64 with virtualization or secure firmware, EL1 may only see a subset of memory through a translated view, controlled by EL2 or EL3.

In practice, STRICT_DEVMEM delegates this filtering to architecture-specific rules: each architecture decides which physical address ranges may legitimately be accessed from userspace through /dev/mem. So the same test can observe different behavior depending not only on the configuration option, but also on the architecture underneath.

32-Bit Systems and the Mystery of High Memory

On most systems, the kernel needs a direct way to access physical memory. To make that fast, it keeps a linear mapping: a simple, one-to-one correspondence between physical addresses and a range of kernel virtual addresses. If the kernel wants to read physical address 0x00100000, it just uses a fixed offset, like PAGE_OFFSET + 0x00100000. Easy and efficient.

But there’s a catch on 32-bit kernels: The kernel’s entire virtual address space is only 4 GB, and it has to share that with userspace. By convention, 3 GB is given to userspace, and 1 GB is reserved for the kernel, which includes its linear mapping.

Now here comes the tricky part: Physical RAM can easily exceed 1 GB. The kernel can’t linearly map all of it: there just isn’t enough virtual address space.

The extra memory beyond the first gigabyte is called highmem (short for high memory). Unlike the low 1 GB, which is always mapped, highmem pages are mapped temporarily, on demand, whenever the kernel needs them.

Why this matters for /dev/mem: /dev/mem depends on the permanent linear mapping to expose physical addresses. Highmem pages aren’t permanently mapped, so /dev/mem simply cannot see them. If you try to read those addresses, you’ll get zeros or an error, not because /dev/mem is broken, but because that part of memory is literally invisible to it.

For testing, this introduces extra complexity:

  • Some reads may succeed on lowmem addresses but fail on highmem.
  • Behavior on a 32-bit machine with highmem is fundamentally different from a 64-bit system, where all RAM is flat-mapped and visible.

Highmem is a deep topic that deserves its own article, but even this quick overview is enough to understand why it complicates /dev/mem testing.
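As a rough sketch: assuming the classic x86 3G/1G split, only about the first 896 MB of RAM is permanently linear-mapped (the exact limit is configuration-dependent, since part of the 1 GB kernel window is reserved for vmalloc and fixmaps). The visibility rule then looks like:

```python
# Simplified model of 32-bit lowmem vs. highmem visibility. The 896 MB
# figure is the traditional x86 default with a 3G/1G split; it is an
# approximation, not a universal constant.
LOWMEM_LIMIT = 896 << 20

def visible_via_linear_map(phys_addr: int, ram_size: int) -> bool:
    """True if the physical address sits in always-mapped lowmem,
    i.e. in the region /dev/mem can actually reach on 32-bit."""
    return phys_addr < min(ram_size, LOWMEM_LIMIT)

# On a 32-bit box with 2 GB of RAM:
print(visible_via_linear_map(100 << 20, ram_size=2 << 30))   # True: lowmem
print(visible_via_linear_map(1536 << 20, ram_size=2 << 30))  # False: highmem
```

A 64-bit kernel effectively has no such limit, which is exactly why the same test can pass there and fail on 32-bit hardware.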

How Reads and Writes Actually Happen

A common misconception is that a single userspace read() or write() call maps to one atomic access to the underlying device. In reality, the VFS layer and the device driver may split your request into multiple chunks, depending on alignment and page boundaries.

Why does this happen?

  • Many devices can only handle fixed-size or aligned operations.
  • For physical memory, the natural unit is a page (commonly 4 KB).

When your request crosses a page boundary, the kernel internally slices it into:

  1. A first piece up to the page boundary,
  2. Several full pages,
  3. A trailing partial page.

For /dev/mem, this is a crucial detail: A single read or write might look seamless from userspace, but under the hood it’s actually several smaller operations, each with its own state. If the driver mishandles even one of them, you could see skipped bytes, duplicated data, or mysterious corruption.

Understanding this behavior is key to writing meaningful tests.
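The slicing logic itself is simple enough to sketch in a few lines of Python (page size hardcoded to the common 4 KB for illustration; this models the chunking, it is not the kernel’s code):

```python
PAGE_SIZE = 4096  # common page size; real code should query the system

def split_into_chunks(offset: int, count: int, page_size: int = PAGE_SIZE):
    """Split a read/write request into the page-bounded pieces the
    kernel would process: a partial head, full pages, a partial tail."""
    chunks = []
    while count > 0:
        # A single chunk never crosses the next page boundary.
        room_in_page = page_size - (offset % page_size)
        n = min(count, room_in_page)
        chunks.append((offset, n))
        offset += n
        count -= n
    return chunks

# A 10 KB request starting 512 bytes into a page becomes three
# operations: a 3584-byte head, one full page, and a 2560-byte tail.
print(split_into_chunks(512, 10 * 1024))
# [(512, 3584), (4096, 4096), (8192, 2560)]
```

Each tuple is one internal operation with its own state, and any one of them is a place where a driver bug could hide.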

Safely Reading and Writing Physical Memory

At this point, we know what /dev/mem is and why it’s both powerful and terrifying. Now we’ll move to the practical side: how to interact with it safely, without accidentally corrupting your machine or testing in meaningless ways.

My very first test implementation kept things simple:

  • Only small reads or writes,
  • Always staying within a single physical page,
  • Never crossing dangerous boundaries.

Even with these restrictions, /dev/mem testing turned out to be more like defusing a bomb than flipping a switch.

Why “success” doesn’t mean success (in this very specific case)

Normally, when you call a syscall like read() or write(), you can safely assume the kernel did exactly what you asked. If read() returns a positive number, you trust that the data in your buffer matches the file’s contents. That’s the contract between userspace and the kernel, and it works beautifully in everyday programming.

But here’s the catch: We’re not just using /dev/mem; we’re testing whether /dev/mem itself works correctly.

This changes everything.

If my test reads from /dev/mem and fills a buffer with data, I can’t assume that data is correct:

  • Maybe the driver returned garbage,
  • Maybe it skipped a region or duplicated bytes,
  • Maybe it silently failed in the middle but still updated the counters.

The same goes for writes: A return code of “success” doesn’t guarantee the write went where it was supposed to, only that the driver finished running without errors.

So in this very specific context, “success” doesn’t mean success. I need independent ways to verify the result, because the thing I’m testing is the thing that would normally be trusted.

Finding safe places to test: /proc/iomem

Before even thinking about reading or writing physical memory, I need to answer one critical question:

“Which parts of physical memory are safe to touch?”

If I just pick a random address and start writing, I could:

  • Overwrite the kernel’s own code,
  • Corrupt a driver’s I/O-mapped memory,
  • Trash ACPI tables that the system kernel depends on,
  • Or bring the whole machine down in spectacular fashion.

This is where /proc/iomem comes to the rescue. It’s a text file that maps out how the physical address space is currently being used. Each line describes a range of physical addresses and what they’re assigned to.

Top-level entries describe things like System RAM, reserved firmware areas, or a device’s memory-mapped registers; indented entries describe sub-ranges within them.

By parsing /proc/iomem, my test program can:

  1. Identify which physical regions are safe to work with (like RAM already allocated to my process),
  2. Avoid regions that are reserved for hardware or critical firmware,
  3. Adapt dynamically to different machines and configurations.

This is especially important for multi-architecture support. While examples here often look like x86 (because /dev/mem has a long history there), the concept of mapping I/O regions isn’t x86-specific. On ARM, RISC-V, or others, you’ll see different labels… But the principle remains exactly the same.

In short: /proc/iomem is your treasure map, and the first rule of treasure hunting is “don’t blow up the ship while digging for gold.”
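A minimal sketch of that parsing step. The embedded /proc/iomem excerpt is illustrative (made-up addresses, not output from a real machine), but it follows the file’s actual line format:

```python
import re

# Illustrative /proc/iomem excerpt; the real file is root-readable
# text in the same shape. Indented lines are sub-ranges of the entry
# above them.
SAMPLE_IOMEM = """\
00000000-00000fff : Reserved
00001000-0009ffff : System RAM
000a0000-000fffff : Reserved
00100000-3fffffff : System RAM
  01000000-01ffffff : Kernel code
40000000-400fffff : PCI Bus 0000:00
"""

def system_ram_ranges(iomem_text: str):
    """Yield (start, end) for every top-level 'System RAM' range."""
    for line in iomem_text.splitlines():
        if line.startswith(" "):  # skip sub-ranges
            continue
        m = re.match(r"([0-9a-f]+)-([0-9a-f]+) : (.*)", line)
        if m and m.group(3) == "System RAM":
            yield int(m.group(1), 16), int(m.group(2), 16)

for start, end in system_ram_ranges(SAMPLE_IOMEM):
    print(f"RAM: {start:#x}-{end:#x}")
```

Knowing where System RAM is, however, is only half the job: the test must still restrict itself to RAM it actually owns, which is where pagemap comes in next.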

The Problem of Contiguous Physical Pages

Up to this point, my work focused on single-page operations. I wasn’t hand-picking physical addresses or trying to be clever about where memory came from. Instead, the process was simple and safe:

  1. Allocate a buffer in userspace, using mmap() so it’s page-aligned,
  2. Touch the page to make sure the kernel really backs it with physical memory,
  3. Walk /proc/self/pagemap to trace which physical pages back the virtual address in the buffer.

This gives me full visibility into how my userspace memory maps to physical memory. Since the buffer was created through normal allocation, it’s mine to play with; there’s no risk of trampling over the kernel or other userspace processes.
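Step 3 boils down to decoding 64-bit pagemap entries: per the kernel’s pagemap documentation, bit 63 flags a present page and bits 0–54 hold the page frame number. A sketch of the decoding, fed a synthetic entry rather than a real pagemap read:

```python
import struct

PAGE_SIZE = 4096
PFN_MASK = (1 << 55) - 1   # bits 0-54: page frame number
PRESENT = 1 << 63          # bit 63: page is present in RAM

def decode_pagemap_entry(raw8: bytes):
    """Decode one 64-bit /proc/self/pagemap entry into
    (present, physical_address). A real run reads 8 bytes at file
    offset (vaddr // PAGE_SIZE) * 8; note that on recent kernels the
    PFN field reads as zero without CAP_SYS_ADMIN, so root is needed."""
    (entry,) = struct.unpack("<Q", raw8)
    present = bool(entry & PRESENT)
    pfn = entry & PFN_MASK
    return present, pfn * PAGE_SIZE if present else None

# Synthetic entry: present page backed by PFN 0x1234 -> phys 0x1234000.
raw = struct.pack("<Q", PRESENT | 0x1234)
print(decode_pagemap_entry(raw))   # (True, 0x1234000)
```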

This worked beautifully for basic tests:

  • Pick a single page in the buffer,
  • Run a tiny read/write cycle through /dev/mem,
  • Verify the result,
  • Nothing explodes.

But then came the next challenge: What if a read or write crosses a physical page boundary?

Why boundaries matter

The Linux VFS layer doesn’t treat a read or write syscall as one giant, indivisible action. Instead, it splits large operations into chunks, moving through pages one at a time.

For example:

  • I request 10 KB from /dev/mem,
  • The first 4 KB comes from physical page A,
  • The next 4 KB comes from physical page B,
  • The last 2 KB comes from physical page C.

If the driver mishandles the transition between pages, I’d never notice unless my test forces it to cross that boundary. It’s like testing a car by only driving in a straight line: Everything looks fine… Until you try to turn the wheel.

To properly test /dev/mem, I need a buffer backed by at least two physically contiguous pages. That way, a single read or write naturally crosses from one physical page into the next… exactly the kind of situation where subtle bugs might hide.

And that’s when the real nightmare began.

Why this is so difficult

At first, this seemed easy. I thought:

“How hard can it be? Just allocate a buffer big enough, like 128 KB, and somewhere inside it, there must be two contiguous physical pages.”

Ah, the sweet summer child optimism. The harsh truth: modern kernels actively work against this happening by accident. It’s not because the kernel hates me personally (though it sure felt like it). It’s because of its duty to prevent memory fragmentation.

When you call brk() or mmap(), the kernel:

  1. Uses a buddy allocator to manage blocks of physical pages,
  2. Actively spreads allocations apart to keep them tidy,
  3. Reserves contiguous ranges for things like hugepages or DMA.

From the kernel’s point of view:

  • This keeps the system stable,
  • Prevents large allocations from failing later,
  • And generally makes life good for everyone.

From my point of view? It’s like trying to find two matching socks in a dryer while it is drying them.

Playing the allocation lottery

My first approach was simple: keep trying until luck strikes.

  1. Allocate a 128 KB buffer,
  2. Walk /proc/self/pagemap to see where all pages landed physically,
  3. If no two contiguous pages are found, free it and try again.

Statistically, this should work eventually. In reality? After thousands of iterations, I’d still end up empty-handed. It felt like buying lottery tickets and never even winning a free one.

The kernel’s buddy allocator is very good at avoiding fragmentation. Two consecutive physical pages are far rarer than you’d think, and that’s by design.
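The adjacency check at the heart of this lottery is trivial; what is rare is it ever returning a hit. A sketch, operating on PFN lists as they might come back from pagemap (the values below are made up):

```python
def find_contiguous_pair(pfns):
    """Return the index of the first buffer page whose physical frame
    is immediately followed by the next page's frame, or None."""
    for i in range(len(pfns) - 1):
        if pfns[i] is not None and pfns[i + 1] == pfns[i] + 1:
            return i
    return None

# The usual outcome: the allocator scattered the frames...
print(find_contiguous_pair([0x512, 0x9a0, 0x133, 0x7ff]))   # None
# ...and the rare lucky draw, where pages 1 and 2 are adjacent.
print(find_contiguous_pair([0x512, 0x133, 0x134, 0x7ff]))   # 1
```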

Trying to confuse the allocator

Naturally, my next thought was:

“If the allocator is too clever, let’s mess with it!”

So I wrote a perturbation routine:

  • Allocate a pile of small blocks,
  • Touch them so they’re actually backed by physical pages,
  • Free them in random order to create “holes.”

The hope was to trick the allocator into giving me contiguous pages next time. The result? It sometimes worked, but unpredictably. 4k attempts gave me >80% success. Not reliable enough for a test suite where failures must mean a broken driver, not a grumpy kernel allocator.

The options I didn’t want

There are sure-fire ways to get contiguous pages:

  • Writing a kernel module and calling alloc_pages().
  • Using hugepages.
  • Configuring CMA regions at boot.

But all of these require special setup or kernel cooperation. My goal was a pure userspace test, so they were off the table.

A new perspective: software MMU

Finally, I relaxed my original requirement. Instead of demanding two pages that are both physically and virtually contiguous, I only needed them to be physically contiguous somewhere in the buffer.

From there, I could build a tiny software MMU:

  • Find a contiguous physical pair using /proc/self/pagemap,
  • Expose them through a simple linear interface,
  • Run the test as if they were virtually contiguous.

This doesn’t eliminate the challenge, but it makes it practical. No kernel hacks, no special boot setup, just a bit of clever user-space logic.
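A simplified illustration of the idea (not the actual devmem_test implementation): back the “MMU” with an ordinary buffer, and pretend pagemap has already told us which two pages are physically adjacent:

```python
class SoftMMU:
    """Tiny 'software MMU': present two pages that are physically
    contiguous, but found at arbitrary offsets in the buffer, as one
    linear two-page window for the test to work through."""

    def __init__(self, buffer: bytearray, page_offsets, page_size=4096):
        # page_offsets: byte offsets, within the buffer, of the two
        # virtual pages whose backing frames are physically adjacent.
        self.buf = buffer
        self.pages = page_offsets
        self.page_size = page_size

    def _locate(self, linear_off):
        page, off = divmod(linear_off, self.page_size)
        return self.pages[page] + off

    def read(self, linear_off, n):
        # Naive byte loop -- fine for a test -- that transparently
        # crosses the seam between the two underlying pages.
        return bytes(self.buf[self._locate(linear_off + i)]
                     for i in range(n))

PAGE = 4096
buf = bytearray(8 * PAGE)
# Pretend pagemap said the pages at offsets 5*PAGE and 2*PAGE are
# physically contiguous; stamp them so a boundary-crossing is visible.
buf[5 * PAGE:6 * PAGE] = b"A" * PAGE
buf[2 * PAGE:3 * PAGE] = b"B" * PAGE
mmu = SoftMMU(buf, [5 * PAGE, 2 * PAGE], PAGE)
print(mmu.read(PAGE - 2, 4))   # b'AABB': the read crosses the seam
```

The real test does the analogous thing with physical addresses fed to /dev/mem, so a single operation is forced across the page boundary.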

From Theory to Test Code

All this theory eventually turned into a real test tool, because staring at /proc/self/pagemap is fun… but only for a while. The test lives here:

github.com/alessandrocarminati/devmem_test

It’s currently packaged as a Buildroot module, which makes it easy to run on different kernels and architectures without messing up your main system. The long-term goal is to integrate it into the kernel’s selftests framework, so these checks can run as part of the regular Linux testing pipeline. For now, it’s a standalone sandbox where you can:

  • Experiment with /dev/mem safely (on a test machine!),
  • Play with /proc/self/pagemap and see how virtual pages map to physical memory,
  • Try out the software MMU idea without needing kernel modifications.

And expect it to still be a work in progress.

Recap: ELISA Workshop – Munich, Germany 2025 https://elisa.tech/blog/2025/12/03/recap-elisa-workshop-munich-germany-2025/ Thu, 04 Dec 2025 01:03:12 +0000 https://elisa.tech/?p=3772

The ELISA Workshop Munich 2025 took place November 18-20 at the Red Hat office in Grasbrunn, Germany, bringing together project members, contributors, and industry partners for three days of focused collaboration.

Welcome & Introductions Gabriele Paoloni, Red Hat; Kate Stewart, Linux Foundation; Philipp Ahmann, ETAS GmbH

The ELISA Workshop opened with a welcome note from organizers who introduced logistics, guidelines, and expectations for collaboration, including the code of conduct and Chatham House Rule options. Participants from industry, academia, and open source communities briefly introduced themselves, reflecting a diverse range of expertise in safety-critical systems, Linux engineering, certification, and research.

Ask Me Anything – New Contributor Onboarding Gabriele Paoloni, Red Hat; Philipp Ahmann, ETAS GmbH

The “Ask Me Anything about ELISA or the Use of OSS in Safety-Critical Applications” session, led by Gabriele Paoloni and Philipp Ahmann, offered participants an open space to address foundational questions about applying Linux and open source software in safety-critical systems. The conversation clarified why live Q&A remains valuable beyond static FAQs, explored the challenges of using Linux in complex safety contexts, and outlined how ELISA approaches requirements, standards, tooling, and system understanding. 

The session also highlighted common misconceptions, such as the idea of producing a “safe Linux”, and reinforced the importance of context, collaboration, and evolving industry practices when integrating OSS into safety-relevant applications.

Research questions and publication directions of Aerospace WG Martin Halle, Hamburg University of Technology – Institute of Aircraft Systems Engineering, Matthew Weber, Boeing

This session outlined key research questions for the Aerospace Working Group, focusing on where Linux is currently used in aerospace and space systems, how regulations affect its adoption, and which topics should lead to future white papers. The speakers also introduced shared use cases and tools supporting this work and invited contributors with domain expertise to help advance upcoming publications.

Towards Practical Program Verification for the Linux Kernel Keisuke NISHIMURA, Inria

The session “Towards Practical Program Verification for the Linux Kernel,” presented by Keisuke Nishimura, Jean-Pierre Lozi, and Julia Lawall, introduced foundational concepts of deductive program verification and demonstrated their application through a case study on a Linux kernel function. The speakers highlighted challenges in specifying correct behavior, automating loop invariants, and preparing verification-ready code, and outlined research efforts aimed at making large-scale kernel verification more practical.

Towards a More Sustainable and Secure Software Tooling in Free/Libre Open Source Software Environments Stefan Tatschner, Fraunhofer AISEC

The session “Towards a More Sustainable and Secure Software Tooling in Free/Libre Open Source Software Environments”, presented by Dr. Stefan Tatschner (Fraunhofer AISEC), explored how software sustainability and security intersect in FLOSS ecosystems. Building on his PhD work, Dr. Tatschner discussed how vague or overly complex specifications and fragmented development practices can lead to inconsistent, insecure implementations, illustrated through studies of QUIC stacks and X.509 libraries. He showed how dependency analysis and graph-based metrics can help identify critical projects whose health has a disproportionate impact on the ecosystem.

Introducing SW Requirements in the Linux kernel development process: status and next steps Gabriele Paoloni, Red Hat; Kate Stewart, Linux Foundation; Chuck Wolber, Boeing

The session “Introducing SW Requirements in the Linux Kernel Development Process: Status and Next Steps”, presented by Gabriele Paoloni (Red Hat), Kate Stewart (Linux Foundation), and Chuck Wolber (Boeing), explored how to bring structured software requirements into the Linux kernel’s distributed, maintainer-driven development model. The speakers highlighted gaps in existing documentation and explained how missing explicit intent increases technical debt and complicates safety and certification work. They proposed testable, SPDX-based requirement annotations that live alongside the code to improve clarity, traceability, and review. The talk also summarized feedback from kernel maintainers and outlined ongoing experiments and next steps to refine the approach and drive broader adoption.

Exploring possibilities for integrating StrictDoc with ELISA’s requirements template approach for the Linux kernel Tobias Deiminger, Linutronix; Stanislav Pankevich, Reflex Aerospace

The session “Exploring Possibilities for Integrating StrictDoc with ELISA’s Requirements Template Approach for the Linux Kernel”, presented by Tobias Deiminger (Linutronix GmbH) and Stanislav Pankevich (Reflex Aerospace GmbH), demonstrated how the StrictDoc tool can support structured, traceable requirements workflows for kernel development. The speakers introduced StrictDoc’s capabilities, showed how it is already used at Linutronix for certification-driven projects, and walked through a live prototype integrating SPDX-based requirements directly from kernel source files. They highlighted how StrictDoc can link requirements, code, and tests while enabling validation and drift detection. The session emphasized that such tooling could strengthen documentation quality, improve traceability, and complement ELISA’s efforts to introduce maintainable requirements practices into the kernel ecosystem.

Architectures for Linux in Railway Safety Applications Florian Wühr, Red Hat; Daniel Weingaertner, Red Hat

The session “Architectures for Linux in Railway Safety Applications”, presented by Florian Wühr and Dr. Daniel Weingärtner (Senior Software Engineers, Red Hat EMEA Field CTO Office), explored how Linux-based platforms can be used in modern railway safety systems. They outlined Red Hat’s involvement in the “AutomatedTrain” research project and discussed applying high-performance, Linux-based platforms for autonomous and safety-related rail use cases. The talk covered relevant safety standards and SIL levels, key certification and interoperability challenges in Europe, and compared architectural options (containers, hypervisors, redundancy/diversity) for mixed-criticality railway applications.

Hypervisors are scary, so why use them for enabling Linux for Safety Applications Aqib Javaid, Elektrobit

The session explained why hypervisors, though often viewed as complex or risky, are valuable for enabling Linux in safety-critical systems. Aqib Javaid clarified common misconceptions such as hypervisors being slow or unusable for safety and showed how modern hardware support and open-source options like Xen and L4 make them practical and certifiable. He demonstrated how hypervisors provide strong isolation and allow a small safety monitor to supervise Linux, adding protection without modifying the kernel.

Open Functional Safety: Safety-Qualified Lifecycle with Sphinx Christopher Zimmer, innotec GmbH

The session “Open Functional Safety: Safety-Qualified Lifecycle with Sphinx” was presented by Christopher Zimmer (innotec GmbH). He showed how an open-source toolchain centered on Sphinx can support a full, safety-qualified development lifecycle for smaller companies and open source projects that can’t afford heavy commercial tooling. The talk also outlined how to classify and qualify such tools so they can be used in standards-compliant functional safety workflows.

AGL SDV SoDeV Insights Naoto Yamaguchi, AISIN; Harunobu Kurokawa, Renesas

The session “AGL SDV SoDeV Insights,” presented by Naoto Yamaguchi (AISIN) and Harunobu Kurokawa (Renesas), shared progress on Automotive Grade Linux’s Software-Defined Vehicle initiative. The speakers outlined SoDeV’s goal of decoupling hardware and software using open-source technologies like hypervisors, VirtIO, and unified HMI frameworks to enable reusable, scalable in-vehicle software. They also discussed early prototypes, planned architecture, and open challenges, particularly around safety and integrating monitoring in virtualized systems.

Best Practices in Open Source and Standards – Evaluation of Example Projects Simone Weiss, Linutronix

The session presented work from ELISA’s WG Lighthouse OSS on identifying open-source “best practices” and mapping them to quality/safety standards. Simone showed how a common evaluation template was applied to Xen and Yocto, revealing both strong governance/CI practices and recurring issues like fragmented documentation, and outlined plans for a maturity model to rate project process quality.

Beyond the OS: What else is required for safe automotive applications? Isaac Trefz, Elektrobit

The session “Beyond the OS: What Else Is Required for Safe Automotive Applications?” highlighted that making Linux safe is only one part of building a safety-compliant automotive system. Isaac Trefz (Elektrobit) explained that safe applications also require qualified compilers and libraries, safe IPC, reliable rendering paths, hypervisors, hardware support, and proper monitoring/watchdog mechanisms. Using examples like telltales and ADAS functions, he showed how these system-level elements must work together.

BASIL Luigi Pellecchia, Red Hat

The session “BASIL,” presented by Luigi Pellecchia (Red Hat), introduced BASIL as a tool for managing traceability across requirements, code, and tests in safety-critical projects. Luigi highlighted recent updates, including improved SPDX SBOM export, graphical traceability views, expanded test-framework support, and a new AI-assisted requirement generator. He also outlined a proposal for a configurable traceability scanner that pulls structured data from multiple repositories, aiming to simplify and standardize traceability workflows in open-source safety development.

Continuous Compliance in Safety-Critical Open Source Projects Rinat Shagisultanov, InfoMagnus

The session “Continuous Compliance in Safety-Critical Open Source Projects,” presented by Rinat Shagisultanov (InfoMagnus), showed how safety-annotated SBOMs, built on SPDX 3 and its emerging safety profile, can automate functional-safety traceability. Rinat explained how tools like BASIL generate these SBOMs and how the OpenCC platform performs semantic diffs, impact analysis, and audit logging inside CI/CD pipelines.

Industry Safety Level(s) vs. Aerospace Use Cases Matthew Weber, Boeing

The session “Industry Safety Level(s) vs. Aerospace Use Cases,” presented by Matthew Weber (Boeing), explained how civil aerospace develops and certifies aircraft software using DO-178C safety levels (DAL A–E), and how these compare conceptually to ASIL/SIL levels in other industries. He walked through the aircraft lifecycle, showed how safety levels drive required artifacts and rigor, and illustrated everything with example use cases and early Linux-based demos (like a safety-aware “cabin light” and NASA CFS-based scenarios).

Linux Virtual Address Space Safety Alessandro Carminati, Red Hat

The session “Linux Virtual Address Space Safety,” presented by Alessandro Carminati (Red Hat), explored how Linux’s virtual memory design, especially Virtual Memory Areas (VMAs) and the global linear mapping, creates subtle safety risks in mixed-criticality systems. He walked through the VMA lifecycle, showed how the linear map lets kernel and user pages sit side by side (enabling accidental cross-domain corruption), and reviewed current defenses and why they’re aimed at security and debugging rather than deterministic functional safety.

Behind the Scenes: Elisa Yocto meta-layer and the ELISA CI infrastructure Sudip Mukherjee, Codethink

The session “Behind the Scenes: ELISA Yocto Meta-Layer and the ELISA CI Infrastructure,” presented by Sudip Mukherjee (Codethink), gave a concise behind-the-scenes look at how ELISA’s Yocto meta-layer and CI system are built and maintained. Sudip explained how the team created a standardized Docker-based build environment, added nightly CI builds, shared sstate caching, and automated testing with QEMU and OpenQA. He also highlighted ongoing work to keep the AGL-based demo app building reliably and invited other working groups to adopt the shared CI to ensure reproducible, stable builds.

The SPDX Safety Profile Release Candidate – towards standardised safety supply chain documentation Nicole Pappler, AlektoMetis

The session “The SPDX Safety Profile Release Candidate – Towards Standardised Safety Supply Chain Documentation” by Nicole Pappler (AlektoMetis) presented the new SPDX 3.1 safety profile, which extends the core SPDX model with safety-specific concepts like requirements, verifications, and evidence links. Nicole explained how this enables standardized, machine-readable safety documentation across the software supply chain, improving traceability, impact analysis, and compliance for safety-critical industries using open source.

Drawing an open source safety-critical landscape Philipp Ahmann, ETAS GmbH

The session “Drawing an Open Source Safety-Critical Landscape” by Philipp Ahmann (ETAS) outlined the need for a clear map of the growing ecosystem of safety-critical open source projects. Philipp proposed building a structured landscape covering OSs, hypervisors, tools, frameworks, simulators, and industry domains to show how projects relate, where they fit, and where gaps or collaboration opportunities exist. The goal is to give the community a central, easy-to-navigate view of safety-critical open source efforts.

In short:

The Munich workshop highlighted the rapid progress and growing cohesion of the safety-critical open source ecosystem. Over three days, contributors shared tools, research, architectures, requirements approaches, and CI practices, all reinforcing that using Linux in regulated environments requires aligned methods, clear documentation, traceability, and strong cross-community collaboration.

With active participation from industry, academia, and open-source projects, the workshop wrapped up with renewed momentum and a shared commitment to push ELISA’s technical work forward.

Note: Presentation slides can be accessed here.

Would you like to see the photos from the meetup? Check here.

Check out the workshop playlist on the ELISA YouTube channel.

Interested in hosting the next ELISA workshop?

The ELISA Project hosts workshops on a regular basis to gather the project community, accelerate technical collaboration and output, and plan for future goals. The workshops are intended as a technical community collaboration forum to advance the mission of the ELISA Project. More specifically, the workshop series provides an avenue to:

  • Explore ideas about approaches, processes, tooling, and testing that can be incorporated into building safety-critical applications and systems  
  • Exchange perspectives and feedback from the Linux kernel, safety, and other adjacent open source project communities
  • Provide updates about the various Working Groups’ current activities and priorities and future roadmaps
  • Enable real-time collaboration to make more accelerated progress on current work streams 
  • Define and articulate near-term technical goals and priorities
  • Educate and onboard new community members
  • Activate and increase engagement and contributions from a broader range of contributors

The workshops are generally held in person to facilitate more open discussions and real-time collaboration. Virtual access can be provided if there is sufficient interest.

Contact us to discuss hosting a workshop.

]]>
Getting Started with the ELISA Project: What You Need to Know https://elisa.tech/blog/2025/11/26/getting-started-with-the-elisa-project-what-you-need-to-know/ Wed, 26 Nov 2025 17:32:04 +0000 https://elisa.tech/?p=3755

What Is the Enabling Linux In Safety Applications (ELISA) Project?

The Enabling Linux In Safety Applications (ELISA) Project aims to make it easier for companies to build and certify Linux-based safety-critical applications: systems whose failure could result in loss of human life, significant property damage, or environmental damage. ELISA members are working together to define and maintain a common set of tools and processes that can help companies demonstrate that a specific Linux-based system meets the necessary safety requirements for certification. ELISA is also working with certification authorities and standardization bodies in multiple industries to establish how Linux can be used as a component in safety-critical systems.

Project participants collaborate closely with other open source projects that have a safety-critical analysis focus, such as the Xen Project and the Zephyr Project. In addition, ELISA community members interact with open source projects of safety-critical relevance and comparable system architecture, such as Automotive Grade Linux, SOAFEE, and the SDV Working Group. Beyond those, there has been outreach to and interaction with the Yocto Project, SPDX, Real-Time Linux, and Linaro communities.

Working Groups and Technical Structure

The Project is made up of horizontal Working Groups such as Safety Architecture, Linux Features, Tool Investigation, Open Source Engineering Process, and Systems, as well as vertical, use-case-based Working Groups in the Aerospace, Automotive, and Medical Devices domains. These Working Groups collaborate to produce an exemplary reference system. Linux features, architecture, and code improvements are integrated into the reference system directly, while the tooling and engineering process work supports reproducible product creation. The Medical, Automotive, Aerospace, and future use-case Working Groups can then strip the reference system down to their use case demands.

The Project’s Technical Steering Committee (TSC) oversees the Working Group activities and coordinates cross-Working-Group collaboration to drive the technical direction of the Project. You can interact with the TSC by subscribing to its public forum and attending its biweekly meeting, which is open to the public by default. The mission of the Project is to define and maintain a common set of elements, processes, and tools that can be incorporated into Linux-based, safety-critical systems amenable to safety certification.

Ways to Participate in the ELISA Project

There are many ways to connect with the key participants of the Project. You can join the regularly scheduled meetings, such as the biweekly Technical Forum meeting, the public Technical Steering Committee meeting, or the public meetings of any Working Group. Another way to connect is to subscribe to the mailing list and introduce yourself there. To facilitate closer interaction, you can also attend upcoming events, such as the biannual in-person ELISA Workshop, where members and contributors gather to collaborate and plan for future goals.

Participation is open to anyone – individuals, employees of non-member companies, and members of the ELISA Project. The project offers access to public meetings, mailing lists, events, GitHub and many other public resources.

Quick links:

Ambassadors and Community Support

ELISA Ambassadors are technical leaders who are passionate about the mission of the ELISA Project, recognized for their expertise in functional safety and Linux kernel development, and willing to help others learn about the community and how to contribute. The Ambassador Program brings these leaders together to educate others on the mission and goals of the ELISA Project, raise awareness, promote Working Group analysis results, engage with the safety and Linux kernel communities, and onboard new contributors. Ambassadors are qualified to speak on behalf of the ELISA Project at conferences and meetups, contribute tutorials and blogs, and help mentor new contributors.

ELISA Ambassadors are positioned as thought leaders through ELISA Project and Linux Foundation channels and gain visibility in the open source community. Would you like to know more? Check here.

Membership and Collaboration

As with all open source collaborative projects hosted by The Linux Foundation, participation is open to individuals, non-members, and member companies. Companies join the ELISA Project to demonstrate thought leadership, build alliances, define processes and best practices, and support services such as governance, project management, infrastructure, tooling, events, and marketing.

Non-profit organizations, open source projects and government entities are welcome to join the Project as Associate Members. Associate Members contribute research, code development, documentation, and collaboration with Working Groups. A prospective Associate Member will be asked to provide evidence or plans for contribution before approval by the Governing Board.

If you are interested in membership, visit the Join ELISA page or contact: [email protected]

Next Steps

The ELISA Project brings together a diverse community of contributors who share a common goal: enabling the use of Linux in safety-critical applications through open collaboration, shared processes, and industry engagement. Whether you join meetings, participate in Working Groups, contribute code and documentation, follow events, or connect through the mailing lists and Ambassador Program, there are many accessible ways to get involved. By engaging with the community, you can help advance the tools, practices, and understanding needed to support Linux in safety-critical systems and contribute to the ongoing progress of the Project.

]]>