KernelCI Foundation
https://kernelci.org
Ensuring the quality, stability and long-term maintenance of the Linux kernel
Tue, 03 Mar 2026 15:24:26 +0000
KernelCI – 2025.Q4 updates
https://kernelci.org/blog/2026/03/03/kernelci-2025-q4-updates/
Tue, 03 Mar 2026 15:24:03 +0000
The KernelCI community had a great finish to the year. Since our last update here, we tackled some interesting features based on community feedback. See the detailed updates in the sections below.
New KernelCI TSC
KernelCI has announced its new Technical Steering Committee (TSC) composition following the annual election process. The committee, which guides the technical direction of the project and ensures cohesion across the Linux kernel testing ecosystem, now consists of seven members: Ben Copeland (Linaro) serving as TSC Chair, Denys Fedoryshchenko (Collabora) as Infrastructure Committee Lead, and community-elected members Greg KH (Linux Foundation), Gustavo Padovan (Collabora), Mark Brown (Arm), Minas Hambardzumyan (Texas Instruments), and Yogesh Lal (Qualcomm).
This diverse group brings together expertise from major hardware vendors, infrastructure specialists, and kernel maintainers, with the current term running until October 31, 2026. The TSC meets bi-weekly in open sessions to discuss technical decisions, roadmap planning, and ensure KernelCI continues to effectively support the Linux kernel community’s testing needs.
KernelCI meeting public calendar
As you may have noticed, the KernelCI community is growing: a new TSC, new working groups, and more community members, with newcomers arriving frequently. To make our meetings easier to find, we created a public calendar announcing all of them to the community. Through it, you can register for any KernelCI community meeting. Recordings are also available for those who want to watch a meeting at a later time.
Dashboard improvements
This quarter was heavily focused on the health of the project.
We introduced monitoring of API requests and system resources. On top of that, backend test coverage increased significantly, from 40% to nearly 70%, including some benchmark tests.
There is still plenty to be done, and we intend to continue refining the CI/CD of our project, further automating the delivery of future enhancements.
The main functional achievement was reaching the last milestone of KCIDB-ng, moving the ingestion of KCIDB submissions closer to the Django backend.
This allowed the team to decouple the KCIDB schema from the dashboard database, which gives us more freedom to normalize the data, making the API faster and reducing the storage footprint.
We are still meeting every two weeks in the KernelCI Dashboard Working Group to explore ideas. One experiment we ran this quarter added a few simple filters and columns about “Labs” to some of the tables.
Pull-mode for labs
As part of the work in our Labs Working Group, the team has been extending the Maestro interface to allow labs to pull all the information they need about tests KernelCI can execute. That includes kernel builds, test rootfs images, and anything else organizations need to run KernelCI tests on their own setup, whatever lab technology they are using.
We believe that pull-mode will enable a multitude of labs that could not participate in KernelCI before. With pull-mode, a lab can sit behind a firewall, which is quite common in corporate environments. It is also lab-technology agnostic: all the lab has to do is listen to the Maestro API and then send results to our KCIDB API.
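As a rough illustration of the pull-mode flow, the sketch below maps a job description pulled from Maestro into a KCIDB-style test object ready for submission. The field names and id conventions here are assumptions for illustration, not the real Maestro or KCIDB API:

```python
# Hypothetical sketch of a pull-mode lab client step: translate a job
# pulled from Maestro into a KCIDB-style test object. Field names and
# endpoint conventions are illustrative assumptions, not the real APIs.

def job_to_kcidb_test(job, origin, status):
    """Map a pulled Maestro job description to a KCIDB-style test object."""
    return {
        "id": f"{origin}:{job['id']}",   # KCIDB ids are origin-prefixed
        "origin": origin,                # the lab submitting the result
        "build_id": job["build_id"],     # links the test to its kernel build
        "path": job["test_path"],        # e.g. "boot" or "kselftest.timers"
        "status": status,                # PASS / FAIL / ERROR ...
    }

# A lab would poll Maestro, run the job locally, then submit this object.
job = {"id": "job-42", "build_id": "maestro:b1", "test_path": "boot"}
result = job_to_kcidb_test(job, origin="mylab", status="PASS")
```

In a real lab, the surrounding loop would fetch the kernel binaries referenced by the job, execute the test with whatever lab technology is in use, and POST the resulting object to the KCIDB API.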
Linaro donating tuxmake, tuxrun, tuxlava
The tuxmake, tuxrun and tuxlava repositories are now live under the KernelCI GitHub namespace.
KernelCI is the Linux kernel’s upstream testing and continuous integration ecosystem. The project has started to use Tuxmake and TuxRun to power its build and test pipelines. Moving these tools under the KernelCI namespace brings them into an active community that will help evolve them alongside the kernel itself.
Linaro engineers will continue as co-maintainers and active contributors. All three projects retain their MIT License, so existing users and contributors will see no disruption.
More reliable event delivery: Added an optional mode that lets clients reconnect and catch up on missed events instead of losing them.
Better event discovery: Expanded filtering options so users can find specific events more precisely.
Faster event queries: Added database optimizations that speed up common event lookups.
Maintenance reliability: Fixed a cleanup issue so old data can be removed as intended.
Platform/tooling updates: Upgraded the database version and refreshed development tooling to improve stability.
Build/CI automation was expanded with a new production workflow and tweaks to container image build automation.
Tooling/container configs were refreshed to newer toolchains and base images, with many older variants removed and patches updated.
A large legacy build configuration catalog was removed, alongside updates to runtime and rootfs configuration data.
A new pull-based lab runtime was added, including callback parsing, log handling, and result mapping; runtime configuration was extended to support it.
LAVA handling was hardened for retry scenarios, missing logs, and infrastructure failure reporting.
Storage uploads gained retry logic for network and server-side failures.
Forecast reporting was upgraded to generate an HTML report in addition to console output.
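The retry logic mentioned above for storage uploads can be sketched as a generic retry-with-exponential-backoff helper. This is a simplified illustration of the pattern, not the project's actual implementation; the `flaky_upload` callable is a stand-in for a real network upload:

```python
import time

def upload_with_retry(upload, attempts=4, base_delay=0.5,
                      transient=(ConnectionError, TimeoutError)):
    """Retry an upload callable on transient network/server errors,
    doubling the delay between attempts (exponential backoff)."""
    for attempt in range(attempts):
        try:
            return upload()
        except transient:
            if attempt == attempts - 1:
                raise                      # out of retries: surface the failure
            time.sleep(base_delay * (2 ** attempt))

# Example: an upload that fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky_upload():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network failure")
    return "stored"

outcome = upload_with_retry(flaky_upload, base_delay=0.01)
```

Backoff keeps a briefly overloaded storage server from being hammered, while non-transient errors still fail fast because they are not in the `transient` tuple.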
kci-dev improvements
kci-dev continued evolving as a well-packaged CLI suite that Linux distributions and labs can ship, focused on helping engineers analyze KernelCI results and triage problems quickly from the terminal.
We had several improvements to the project in the past quarter:
Released kci-dev v0.1.10, adding Arch Linux packaging support and more workflow/polish fixes, including Debian package build workflow updates and multiple fixes in issues/validation reporting and build-node filtering.
Released kci-dev v0.1.9, with additional triage and UX refinements on top of the previously reported Q3 feature set: moved the detect workflow under results issues, added a command to fetch new issues for a checkout, and restructured the issues command group for clearer usage.
Improved validation and reporting ergonomics: enabled list views for boots validation, added runtime fields to boot/test JSON output, and fixed boot-origin filtering and other result-selection edge cases.
We will keep working on making KernelCI easier for the community to benefit from. From greater stability to an improved Web Dashboard and a more complete kci-dev CLI, there’s much more to enhance in KernelCI for everyone. A big thank you to the entire KernelCI community for making this progress possible! Talk to us at [email protected], on the #kernelci IRC channel at Libera.Chat, or through our Discord server!
Contributors to this blog post: Arisu Tachibana, Denys Fedoryshchenko, Gustavo Padovan, Minas Hambardzumyan and Tales Aparecida.
Announcing the new KernelCI Technical Steering Committee (TSC)
https://kernelci.org/blog/2026/01/20/announcing-the-new-kernelci-technical-steering-committee-tsc/
Tue, 20 Jan 2026 14:58:11 +0000
We are pleased to announce the composition of the new KernelCI Technical Steering Committee (TSC). The TSC plays a vital role in guiding the technical direction of the KernelCI project, ensuring its continued growth and effectiveness in supporting the Linux kernel community.
New TSC Composition
Following our annual election process and in accordance with our project charter, the TSC is now composed of the following members:
Ben Copeland, Linaro (TSC-voted member) – TSC Chair
Denys Fedoryshchenko, Collabora (Infrastructure Committee Lead – appointed position)
Greg KH, Linux Foundation (community-elected member)
Gustavo Padovan, Collabora (community-elected member)
Mark Brown, Arm (community-elected member)
Minas Hambardzumyan, Texas Instruments (community-elected member)
Yogesh Lal, Qualcomm (community-elected member)
The current term for community-elected and TSC-voted members runs until October 31, 2026. As specified in our charter, the Infrastructure Committee Lead position is appointed and follows different term rules. Ben Copeland will serve as the TSC Chair.
About the TSC
The Technical Steering Committee is responsible for:
Making important technical decisions about the project’s direction
Discussing the general roadmap and design principles
Ensuring cohesion across the project
Participating in votes on critical matters
Contributing to the project through code, reviews, documentation, and community engagement
We maintain a policy that no more than two members from the same organization may serve on the TSC simultaneously, ensuring diverse representation across the community.
Looking Ahead
This diverse group brings together expertise from across the kernel testing ecosystem, representing major hardware vendors, infrastructure specialists, and kernel maintainers. Their combined experience will be invaluable as KernelCI continues to evolve its infrastructure and expand testing coverage.
The TSC meets bi-weekly to discuss current topics and make decisions. These meetings are an important part of how we coordinate the technical aspects of the project and ensure we’re meeting the needs of the Linux kernel community. The meetings are open and listed on our calendar.
We thank all TSC members for their service and commitment to improving kernel quality and stability through comprehensive testing.
Get Involved
If you’re interested in contributing to KernelCI or learning more about our work, visit:
KernelCI Welcomes Arm and Qualcomm as Premier Members
https://kernelci.org/blog/2025/11/12/kernelci-welcomes-arm-and-qualcomm-as-premier-members/
Wed, 12 Nov 2025 22:21:49 +0000
We are thrilled to announce that two industry leaders, Arm and Qualcomm, have joined KernelCI as Premier members. This marks a significant milestone in our mission to ensure the quality, stability, and long-term maintenance of the Linux kernel through comprehensive testing across the broadest possible range of hardware platforms.
Both companies bring extensive expertise and resources that will significantly strengthen KernelCI’s testing ecosystem. Arm’s deep understanding of processor architecture and its commitment to open-source development align perfectly with our goal of standardizing hardware testing across diverse platforms. Arm has been involved with the KernelCI community for many years already, so it is great to see them step up and join as a Premier member. Meanwhile, Qualcomm’s proven track record in mobile and emerging computing platforms, combined with its existing contributions as a test result submitter to our common results database, demonstrates its ongoing commitment to kernel quality and stability.
“Arm’s commitment to the Linux kernel community is rooted in the belief that open collaboration drives long-term innovation,” said Mark Hambleton, SVP Software at Arm. “Our role in KernelCI reflects the importance we place on scalable, transparent kernel validation across the broad ecosystem of Arm-based solutions, with the aim of improving software quality and accelerating upstream development from cloud to edge.”
“KernelCI is a cornerstone of upstream Linux kernel development, it enables open-source developers to easily validate and test on a large number of different platforms, including Qualcomm’s, ensuring consistency and quality across the entire ecosystem”, said Leendert van Doorn, Qualcomm SVP of Engineering. “It is a capability that is front and center for Qualcomm’s increasing reliance on upstream enablement.”
The addition of Arm and Qualcomm as Premier members comes at an exciting time for KernelCI. Our new infrastructure has dramatically expanded what KernelCI and the ecosystem we are building around it can do, so KernelCI is well-positioned to leverage the expertise and resources these new Premier members bring. Their involvement will help us expand testing coverage, improve hardware validation processes, and ultimately deliver better Linux kernel quality to the entire open-source community. We look forward to collaborating with both organizations as we continue to grow KernelCI’s impact on upstream kernel development.
KernelCI – 2025.Q3 updates
https://kernelci.org/blog/2025/10/23/kernelci-2025-q3-updates/
Thu, 23 Oct 2025 12:33:18 +0000
The KernelCI community continues to make great progress on multiple fronts of the project. Since our last update here, we have had two in-person events, a few technical achievements, and new working groups launched. See the detailed updates in the sections below.
KernelCI Workshop in Amsterdam
In the last week of August, we hosted our first KernelCI workshop in a while. We had 12 participants in person and a few more who joined us online. The workshop was a great way to kick-start some important discussions with the community about the new KernelCI architecture.
During the workshop, we discussed how to fulfill maintainers’ use cases, dealing with KernelCI lab issues, improving data quality and regression identification, the KCIDB transition, RISC-V support, and more. Check our notes and the full video recording.
Transition out of legacy KCIDB
We finally completed our transition away from the legacy KCIDB. This was a very important step for the new KernelCI architecture. KCIDB is our common database for results: on one end it exposes an API to receive test results, while on the other it is a PostgreSQL database. The submission API has been supplanted by KCIDB-ng, with no changes to the results submission JSON schema. Submitters still send the same files, just to a different API endpoint. KCIDB-ng is a fast Rust-based API that receives and stores the JSON result files.
We’re transforming the KCIDB project into a more versatile, cloud-agnostic solution that can be deployed on-premise when needed. This flexibility allowed us to migrate KCIDB to Azure seamlessly. Additionally, we’ve rewritten portions of KCIDB in Rust, which has resolved longstanding performance bottlenecks.
All the other responsibilities were handed over to our Dashboard, which is now responsible for processing the JSON files and ingesting their data into our PostgreSQL database. The schema of the database remained the same as the legacy one.
Core Infrastructure
Our new infrastructure has enabled us to significantly reduce complexity by implementing proper DevOps practices. Throughout Q3, we improved our deployment systems. In daily operations, this translates to production updates that require minimal supervision and minimal downtime – what previously took hours now takes less than one minute. We’ve also developed our own storage solution that gives us greater control over data costs by maintaining a “hot” cache on VMs while storing longer-term data in object storage with lifecycle policies and more economical storage classes.
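The "hot" cache idea described above can be sketched as a simple tiered lookup: serve an artifact from local VM storage when it is cached, and fall back to object storage otherwise. This is an illustrative sketch, not the project's storage code; `cold_fetch` stands in for an object-storage GET:

```python
import tempfile
from pathlib import Path

def fetch_artifact(name, hot_dir, cold_fetch):
    """Serve an artifact from the 'hot' VM cache if present; otherwise
    fetch it from object storage via cold_fetch() and warm the cache."""
    cached = hot_dir / name
    if cached.exists():
        return cached.read_bytes()
    data = cold_fetch(name)       # stands in for a blob-storage GET
    cached.write_bytes(data)      # warm the cache for subsequent requests
    return data

# Example: the second request is served from the cache, so the
# (costly) cold storage is only hit once.
hot = Path(tempfile.mkdtemp())
cold_hits = []
def cold(name):
    cold_hits.append(name)
    return b"kernel-log-data"

first = fetch_artifact("boot.log", hot, cold)
second = fetch_artifact("boot.log", hot, cold)
```

The economics follow directly: frequently requested artifacts stay on cheap-to-read local disks, while lifecycle policies can move rarely touched objects into more economical storage classes.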
There was also an effort to enhance monitoring and alerting systems to proactively prevent issues before they affect users. This includes implementing more granular metrics and establishing alerts for critical performance indicators.
These infrastructure improvements have collectively enabled us to scale more effectively, reduce operational costs, and enhance overall reliability. We remain committed to continuing our infrastructure investments to support our growing demands and ensure we can meet the needs of our expanding user base.
New Labs WG
As a great outcome of the KernelCI Workshop, we created the Labs Working Group (WG) to discuss the challenges of connecting test labs to KernelCI.
The Labs WG is already a pretty busy space, with over 10 people joining our bi-weekly sync calls. The current focus includes improving the Maestro API so labs can pull test information from KernelCI, adding support for Labgrid, and evaluating dashboards for lab metrics.
The team identified several key challenges that prevent labs from connecting to KernelCI. The primary obstacle is that many labs cannot expose their APIs to the public internet due to strict security policies. To address this limitation, we proposed a pull-mode architecture that reverses the traditional workflow – allowing labs to pull test jobs from KernelCI rather than having KernelCI push jobs directly to them. This approach enables labs to maintain their existing security policies while still actively participating in KernelCI testing. We’re currently developing the protocols and implementations necessary to support this pull-mode architecture.
Dashboard WG
Inspired by the Labs WG, we started the KernelCI Dashboard Working Group (first invite), gathering users and the development team from ProFUSION to talk about bugs and feature prioritization.
In two meetings, with 8 attendees, we defined an action plan to improve the performance of the website, which was affecting user experience; stopped the development of a feature that did not spark joy; and set the next target for the team: handling hardware data from labs. That will require more discussion about how the data could be used by users.
kci-dev improvements
kci-dev has evolved from a small command-line tool into a well-packaged suite of tools that Linux distributions can ship, aimed at analyzing KernelCI results and helping engineers triage problems quickly.
We had many improvements to the project in the past quarter:
Added Debian/RPM packaging and OBS workflow/service automation so labs can build and publish updates automatically, ensuring every client gets the same reproducible toolchain.
Added results compare command for commit-to-commit regression detection (with tables + JSON output as well).
Consolidated results issues group to list/show issues and fetch related builds/tests.
Improved tree-level views with tree-report.
Added code coverage information through new maestro coverage flow (currently only for chromiumos trees) with per-day buckets, a graph view, and report-info helpers, making stability/coverage trends far clearer.
Improved validation tooling filters (arch filter; better build selection; accounting for build/job retries during validation). The validation commands let us verify that the tests Maestro is running land properly in KCIDB.
Added a --history option to results summary.
Fixed results hardware list and added more filters.
Hardware Information Registry
One of the challenges discussed in various KernelCI forums has been the inconsistency of hardware platform names reported from different testing labs, making it challenging to find/sort/filter results. To address this, a new YAML-based schema has been proposed by Minas Hambardzumyan and is currently in review. The schema organizes information into lists of platforms, processors, and vendors — providing a path to standardization of the reported platform names and adding traceability to product/vendor web pages for more information.
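To make the proposed structure concrete, here is a small sketch of what the registry data might look like once parsed, together with a cross-reference check. The field names are illustrative assumptions; the actual schema is still under review:

```python
# Illustrative shape for the proposed hardware registry: platforms
# reference processors, which reference vendors. Field names here are
# assumptions, since the actual YAML schema is still in review.
registry = {
    "vendors": [{"id": "ti", "name": "Texas Instruments"}],
    "processors": [{"id": "am62x", "vendor": "ti"}],
    "platforms": [{"id": "sk-am62", "processor": "am62x"}],
}

def dangling_references(reg):
    """Return ids of processors/platforms that reference unknown entries,
    the kind of consistency check a registry linter could run in CI."""
    vendors = {v["id"] for v in reg["vendors"]}
    processors = {p["id"] for p in reg["processors"]}
    bad = [p["id"] for p in reg["processors"] if p["vendor"] not in vendors]
    bad += [p["id"] for p in reg["platforms"] if p["processor"] not in processors]
    return bad
```

Keeping the lists cross-referenced by id is what gives labs a single canonical platform name to report, while the vendor entries provide the traceability to product pages mentioned above.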
Final Thoughts
We will keep working on making KernelCI easier for the community to benefit from. From greater stability to an improved Web Dashboard and a more complete kci-dev CLI, there’s much more to enhance in KernelCI for everyone. A big thank you to the entire KernelCI community for making this progress possible! Talk to us at [email protected], on the #kernelci IRC channel at Libera.Chat, or through our Discord server!
Contributors to this blog post: Arisu Tachibana, Denys Fedoryshchenko, Gustavo Padovan, Minas Hambardzumyan and Tales Aparecida.
Announcing the KernelCI Labs Working Group
https://kernelci.org/uncategorized/2025/09/18/kernelci-labs-working-group-announcement/
Thu, 18 Sep 2025 13:34:19 +0000
Following productive discussions at the KernelCI Workshop 2025 in Amsterdam, we’re excited to announce the formation of a new Labs Working Group (WG) to tackle the evolving challenges of lab testing infrastructure in KernelCI.
Why a Labs WG?
As KernelCI continues to grow and mature, the complexity of managing hardware testing laboratories has become increasingly apparent. While we currently support LAVA as our primary backend, the community has expressed strong interest in expanding our capabilities and addressing various infrastructure challenges that impact testing reliability and efficiency.
The Labs Working Group will serve as a focused forum for improving how KernelCI integrates with and manages testing laboratories, ensuring we can scale effectively while maintaining the quality and reliability our users expect.
Our Focus Areas
The Labs WG has identified several key areas where concentrated effort will deliver significant value to the KernelCI ecosystem:
Labgrid Integration: Design and implement labgrid backend support alongside our existing LAVA integration, providing more flexibility for lab operators
CI Ecosystem Labs: Enhance support for vendor labs (TI, Qualcomm, and others) through improved events API and shared rootfs images
Alternative Backends: Explore integration with other lab management systems like Tuxtest, particularly for virtualized testing environments
Lab Health Dashboard: Create comprehensive tools to monitor lab status and quickly identify infrastructure issues
Failure Classification: Improve our ability to detect and report infrastructure failures separately from actual test failures, reducing noise in test results
Lab Admin Guidelines: Develop standards for lab administrators covering device type differentiation and configuration, making it easier to access the same hardware across multiple labs
Job Prioritization: Implement smarter job scheduling algorithms to make better use of available hardware resources
Dependency Management: Develop systems to avoid running dependent tests when prerequisites fail (for example, skipping NFS-dependent tests if NFS boot fails)
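The dependency-management idea in the last item can be sketched as a small scheduling helper: given known results for prerequisite jobs, split the pending tests into those worth running and those to skip. This is an illustrative sketch of the policy, not an actual KernelCI scheduler:

```python
def plan_with_dependencies(pending, deps, results):
    """Split pending tests into runnable vs skipped, skipping any test
    whose prerequisite is already known to have failed.
    deps maps a test to its prerequisite; results holds known outcomes."""
    runnable, skipped = [], []
    for test in pending:
        prereq = deps.get(test)
        if prereq and results.get(prereq) == "FAIL":
            skipped.append(test)   # e.g. skip NFS tests when the NFS boot failed
        else:
            runnable.append(test)
    return runnable, skipped

# Example: the NFS boot failed, so NFS-dependent kselftests are skipped,
# while tests depending on the plain boot still run.
deps = {"kselftest-nfs": "boot-nfs", "kselftest-timers": "boot"}
results = {"boot": "PASS", "boot-nfs": "FAIL"}
runnable, skipped = plan_with_dependencies(
    ["kselftest-nfs", "kselftest-timers"], deps, results)
```

Skipping doomed jobs this way frees board time for tests that can still produce a meaningful signal, which is exactly the resource-usage goal of the prioritization items above.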
Meeting Schedule and Participation
Based on community feedback during the Amsterdam workshop, we recognize the need to accommodate participants across different time zones, particularly those on the West Coast. Our initial meeting schedule will be:
We invite all interested community members to participate in this working group, especially those who:
Run or manage hardware testing labs
Have experience with LAVA, labgrid, or other lab management systems
Work with vendor lab infrastructure
Have specific use cases or requirements for hardware testing
Whether you’re a seasoned lab administrator or someone interested in learning more about hardware testing infrastructure, your perspective and contributions are valuable to this effort.
Next Steps
If you’re interested in contributing to the Labs Working Group or have specific use cases and requirements to discuss, please reach out to us on the KernelCI mailing list, Discord or join our upcoming meetings.
Together, we’ll build more robust, flexible, and efficient testing capabilities that will benefit the entire Linux kernel development community!
June 2025 updates – Linaro and ELISA joined KernelCI!
https://kernelci.org/blog/2025/06/23/june-2025-updates-linaro-and-elisa-joined-kernelci/
Mon, 23 Jun 2025 20:38:58 +0000
A lot has happened in KernelCI since our last blog update. The community continues to grow, with Linaro and ELISA joining as members and more contributors and companies adding test results. The infrastructure as a whole continues to evolve. And we are in the first stages of developing the kernelci.yml test plan standard.
Linaro and ELISA joined us as members
Linaro is joining as a Premier member and ELISA as an Associate member. We thank them both for their commitment to taking part in the KernelCI community and joining us in our mission to ensure the quality, stability and long-term maintenance of the Linux kernel.
“Linaro is excited to be rejoining the KernelCI project. KernelCI’s mission to provide Linux Kernel developers with testing at scale across a diverse set of platforms is key to ensuring the long term quality, reliability and security of the Linux kernel. Linaro looks forward to helping KernelCI to grow and become an even more valuable resource.” said Grant Likely, Linaro CTO.
“Linking the requirements to the tests will enable more efficiency in regression testing,” said Kate Stewart, Vice President of Dependable Embedded Systems at the Linux Foundation. “Being able to connect the traceability between code, requirements and tests will get us closer to improving the code coverage and quality of the Linux kernel images. The ELISA Project is focusing on kernel requirements and is looking forward to working with the KernelCI community to make the regression testing more effective over time.”
Automated Testing Summit coming up
KernelCI is hosting the Automated Testing Summit (ATS) 2025 in Denver, CO, USA, co-located with the Open Source Summit North America. The agenda is out, and a presentation from KernelCI with the latest project updates is on the schedule.
There is still time to sign up and meet us there. It is a hybrid event, so both in-person and virtual attendees are welcome.
Qualcomm, RISC-V International and Texas Instruments submitting results
KernelCI gained data from three new submitters: Qualcomm, RISC-V International and Texas Instruments have all connected their test systems to KernelCI. In our architecture, they are part of the CI ecosystem. They start by listening for new build events from Maestro, then download the built kernel binaries and artifacts from Maestro. With the kernel and artifacts, they can execute the testing in their own environment – sometimes hidden behind a firewall. When tests are completed, they submit complete results to KCIDB, which then become accessible through our Dashboard and kci-dev.
.kernelci.yml test plan
We are proposing to introduce a standardized .kernelci.yml file in upstream kernel repositories to help the KernelCI community automatically discover and configure testing for each kernel tree. Part of the goal is also to transfer ownership of the test plan to maintainers, by keeping these files close to them or inside their subsystem folder.
This YAML file would specify the branches to be tested, the kernel configs to build, the tests to execute, and so on, enabling project maintainers to directly declare their KernelCI preferences. The main benefit is reducing the manual effort and guesswork currently involved in onboarding new trees to KernelCI, ultimately making kernel testing more scalable, transparent, and easier to maintain for both KernelCI maintainers and kernel developers.
KCIDB-ng
With the amount of test result data received by KernelCI every day growing, KCIDB started to show signs of wear. To address its limitations, including a previous implementation heavily dependent on specific Google Cloud technologies, we created kcidb-ng. The new project brings a system that is easy to deploy locally for development and can run in any cloud environment. It also greatly simplified the ingestion process.
Essentially, we have an API that receives JSON files with the test result content and stores them in a spool directory. This entry point was written in Rust for efficiency. Then an ingester loops on the server, taking the files and ingesting them into PostgreSQL.
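The spool-directory handoff between the API and the ingester can be sketched in a few lines: one pass of the loop loads each spooled JSON submission, hands it to the storage layer, and deletes the file. This is an illustrative sketch in Python (the real ingester differs), with a list standing in for PostgreSQL:

```python
import json
import tempfile
from pathlib import Path

def ingest_spool(spool, store):
    """One pass of the ingester loop: load each spooled JSON submission,
    hand it to the storage layer, and remove the file. Returns the count."""
    count = 0
    for path in sorted(spool.glob("*.json")):
        store(json.loads(path.read_text()))  # real system: INSERT into PostgreSQL
        path.unlink()                        # done: drop the spooled file
        count += 1
    return count

# Example: the API has spooled two submissions; one ingester pass
# drains the directory.
spool = Path(tempfile.mkdtemp())
(spool / "a.json").write_text(json.dumps({"builds": [{"id": "b1"}]}))
(spool / "b.json").write_text(json.dumps({"tests": [{"id": "t1"}]}))
stored = []
ingested = ingest_spool(spool, stored.append)
```

Decoupling receipt from ingestion this way lets the fast Rust entry point acknowledge submissions immediately, while the database work proceeds at its own pace.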
Additionally, kcidb-ng comes with logspec integration out of the box, so data from any origin can be parsed to generate insights about build and test failures and to create KCIDB issue objects. Issue objects are the bridge between seeing a test failure and being able to report it as a regression to the community.
Right now, we are working with all KCIDB origins to move them over to the new API.
Strengthening core infra
Behind the scenes, we’ve been working hard to optimize our infrastructure costs and performance. Our build times on Azure build cluster improved dramatically from 88 to 17 minutes after migrating to modern D8lds_v6 instances, while actually reducing costs. We also implemented a caching solution for linux-firmware that cut our data egress costs by over 95% – from a projected $69k annually down to manageable levels. These optimizations mean faster feedback for developers and more sustainable operations for the project. Additionally, we’ve begun migrating KCIDB components to more cost-effective cloud services, starting with kcidb-rest, which will help us maintain reliable service while keeping infrastructure costs under control.
Final thoughts
We will keep working on making KernelCI easier for the community to benefit from. From greater stability to an improved Web Dashboard and a more complete kci-dev CLI, there’s much more to enhance in KernelCI for everyone. Big thank you to the entire KernelCI community for making this progress possible!Talk to us at [email protected] , #kernelci IRC channel at Libera.chat or through our Discord server!
Exploring the KernelCI Dashboard
https://kernelci.org/blog/2025/06/03/exploring-the-kernelci-dashboard/
Tue, 03 Jun 2025 14:09:58 +0000
The KernelCI project is a critical initiative in the Linux kernel development ecosystem, providing automated testing and continuous integration for kernel builds. In this blog post, we’ll explore the current features of the KernelCI Dashboard at https://dashboard.kernelci.org.
KernelCI Dashboard Overview
The current KernelCI Dashboard provides a comprehensive view of kernel testing activities, organizing information into three primary sections: Trees, Hardware, and Issues. This structure allows developers to easily navigate between different aspects of kernel testing. In this web dashboard, it’s possible to see the results of CI systems from different maintainers, checking their status, details, history and issues, while gathering valuable insight to work on a better and more reliable kernel.
Trees
The Trees section serves as the main entry point to the dashboard, displaying a tabular view of the kernel trees, branches, and commits being tested. Dashboard visitors can then select the one they are interested in – a tree is a fork of the Linux kernel with a specific branch and a specific URL.
> Image: The Tree Listing page. It shows the website menu on the left, a header on top with a search bar and a table listing the 10 trees with the most number of tests, each row has the tree information as well as counts for how many builds, boots and tests were executed on it.
With options for sorting and searching for a specific tree, it is possible to check the details of a desired tree, where a user can see its configurations, architectures, and hardware, along with a history of the builds, boots and tests.
This allows maintainers to see the successes, failures and other results of the CI systems, and to focus their attention on what matters most to them. A preview of a build or test can be seen right on this page, with the build or test output shown, failures and errors highlighted, and a list of the issues triggered by it.
> Video: Navigating to the mainline Tree Details. There are cards with information about the specifications used, graphs for the result status and history of results, and a list of boot tests. A boot is selected and its output and issues are shown.
Builds and Tests
Each build and test also has a lot of data that a maintainer can find useful. By pressing “View more details”, the user is directed to a page with more information about that specific item, such as the platform it was tested on, the history of a test result, the artifacts (logs or result files) the test or build produced, and miscellaneous data. For a build, it is also possible to see every test executed on it, with links to the details of those tests.
> Video: A boot test details page. There are sections with basic information, as well as the history of its results, miscellaneous data from it, and the files it produced as artifacts.
> Video: A page with details of a build. There are sections with basic information, miscellaneous data, output files, and a table listing the tests that were performed on it.
Hardware
The Hardware section focuses on the physical devices used for testing, providing insights into how different kernels perform across various hardware platforms. Maintainers of such platforms may want to see the test results for that hardware independently of which tree they came from. For that purpose, the dashboard also contains a tab listing the hardware platforms that tests were run on.
The listing is similar to that of trees, but on the details pages it is possible to see all the trees that contributed testing on that hardware, as well as to enable or disable the display of results from each tree.
> Video: Navigation from the hardware listing page to a hardware details page. The details page shows a table at the top listing the trees that tested or had builds on that hardware, with their corresponding commit and counting of total builds, boots and tests.
Beyond narrowing results to the trees run on a specific hardware platform, or to the hardware exercised by a specific tree, it is also possible to filter on any of the card items by clicking on them or by using the Filters button, giving better visibility of the items a user is interested in.
Issues
The Issues section is dedicated to tracking problems identified during testing, making it easier for developers to identify and address failures.
An issue groups builds or tests whose results had a certain status, contained a certain message in their logs, or matched other conditions. In the dashboard, it is possible to list the most recent issues and see when and where they appeared, much like the other listing pages.
From that page, or from links throughout the dashboard, a user can navigate to a page with details of that issue, including the first result the issue was detected in, the specific data describing how the issue arose, and a list of every incident of that issue, whether builds or tests.
> Video: Navigating from the issue listing to an issue details page. The detailed page shows sections of the issue’s information, its first incident, the specifications of its error, miscellaneous data, and a table with builds that triggered that issue.
Closing thoughts
The current KernelCI Dashboard provides a powerful interface for monitoring, analyzing, and troubleshooting kernel testing results. Its comprehensive features make it an essential tool for kernel developers, distributions, and hardware vendors who rely on Linux kernel stability and compatibility.
The dashboard allows users to inspect CI results for trees, hardware, builds, and tests, checking for specific issues, filtering for certain configurations, and looking over the results they care about. Coupled with detailed pages, interactions, and CI results history, it provides better tools for specific use cases, enhanced visualization, and improved troubleshooting capabilities. With a redesigned interface and easy shareability and filtering, the Dashboard can address the needs of different users in the kernel development ecosystem, from maintainers to lab operators.
Whether you’re a kernel developer tracking your patches, a distribution maintainer ensuring stability, or a hardware vendor verifying compatibility, the KernelCI Dashboard (both current and future versions) offers the insights and tools needed to ensure Linux kernel quality across the ecosystem.
Users are encouraged to report bugs and suggestions to [email protected] to help improve this vital project.
We’d like to thank ProFUSION for their contributions to this project as a supplier to the KernelCI Project. This blog post was written by them.
KernelCI – February 2025 updates
https://kernelci.org/blog/2025/02/20/kernelci-february-2025-updates/
Thu, 20 Feb 2025 16:43:23 +0000
For those of you who were at Linux Plumbers Conference last September, you received a great amount of updates from the KernelCI project. However, September is long past now, so it is time for a new round of updates.
In summary, over the past few months we successfully stabilized our new infrastructure and shut down our legacy system. We also developed logspec for parsing test logs, made more progress on the new Web Dashboard, launched kci-dev for kernel developers, and began testing regression notifications via email. Let’s now dive into specific topics for a short update!
Web Dashboard development progress
While we believe the Web Dashboard still has a lot to evolve, it is already able to provide results information for many use cases. The development team, funded by the KernelCI project, is continuously improving the dashboard based on user feedback. We invite everyone to use our new dashboard and share any feedback they may have.
logspec – new log parsing skills
logspec is a new log parser designed to recognize contextual nuances. It is able to match patterns around build failures, boot issues, NULL pointer dereferences, kernel panics, kernel BUGs, UBSAN warnings, and more. So far we have integrated it in Maestro, where it sits on the Maestro->KCIDB bridge and creates new issues in KCIDB when it finds any of the tracked patterns in the logs. Here are examples of a build issue and a boot issue. logspec is a new project, but it is already helping the community find key information quickly. We hope to evolve it over the next few years.
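To illustrate the general idea, here is a minimal sketch in Python of the kind of log pattern matching logspec performs. The pattern names, regular expressions, and output format here are purely illustrative and do not reflect logspec's actual rules or data model.

```python
import re

# Hypothetical signatures for a few of the error classes mentioned above.
# Real logspec uses richer, context-aware rules; these are simplified examples.
PATTERNS = {
    "null_pointer_dereference": re.compile(r"Unable to handle kernel NULL pointer dereference"),
    "kernel_panic": re.compile(r"Kernel panic - not syncing"),
    "kernel_bug": re.compile(r"kernel BUG at"),
    "ubsan_warning": re.compile(r"UBSAN:"),
}

def extract_issues(log_text: str) -> list[dict]:
    """Scan a test log line by line and return one record per matched pattern."""
    issues = []
    for lineno, line in enumerate(log_text.splitlines(), start=1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                issues.append({"type": name, "line": lineno, "text": line.strip()})
    return issues

log = """\
[    1.234] Booting Linux...
[    2.345] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000008
[    2.346] Kernel panic - not syncing: Fatal exception
"""

for issue in extract_issues(log):
    print(issue["type"], "at log line", issue["line"])
```

A real parser also has to track context (for example, which test was running when the error appeared) so that the resulting issue can be attached to the right build or test in KCIDB.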
Announcing kci-dev cmdline
A few months ago, we started kci-dev, our command line tooling for kernel developers and maintainers. kci-dev is still in beta, but its core features are taking shape:
sending specific test requests to Maestro: today we support testing any commit on any tree/branch available in KernelCI’s Maestro.
running bisections: we have experimental tooling that allows us to run bisections for some of the regressions we find.
fetching test results from the dashboard: today, we have basic support to fetch test results and download logs through the `kci-dev results` command.
Example run to fetch summary of results:
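A rough sketch of such a run follows; the exact subcommands and flags may differ between kci-dev releases, so treat the options below as illustrative rather than authoritative, and check `kci-dev results --help` for the current interface.

```shell
# Illustrative only: flag names are assumptions, not a reference for kci-dev.
# Fetch a summary of results for a given tree, branch, and commit.
kci-dev results --giturl https://git.kernelci.org/pub/scm/linux/kernel/git/torvalds/linux.git \
                --branch master \
                --commit <commit-sha>
```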
We invite developers to try it out and give us feedback on the functionalities they want to see in kci-dev.
Experimenting with results notifications
Over the past month, we’ve been developing kernelci-notifications, which is already capable of watching for new KCIDB issues and generating notifications for build and boot regressions filed in KCIDB. Currently, we are experimenting with issues created by Maestro, but the notification system is being designed to work with any CI system submitting data to KCIDB. You can see examples of issues reported here. An excerpt follows as well:
KCIDB
KCIDB receives increasing amounts of kernel test data every day. As the project grows, so do its challenges. We are currently looking at improving the performance of KCIDB with long-term needs in mind. A few weeks ago, the KernelCI project published an RFQ for specialist DBA support for KCIDB.
More hardware support
In recent weeks, we have seen Pengutronix enable their lab in KernelCI’s Maestro and MediaTek add their newest Genio hardware through the Collabora lab. As we speak, Qualcomm is working to enable more of their labs in KernelCI too.
New test infrastructure continues to stabilize
We continue improving our core infrastructure. Recently, we added automatic deployment of Maestro through GitHub Actions, and we also created kernelci-storage, a solution that helps the KernelCI ecosystem store test artifacts.
Final thoughts
We will keep working on making KernelCI easier for the community to benefit from. From greater stability to an improved Web Dashboard and a more complete kci-dev CLI, there’s much more to enhance in KernelCI for everyone. A big thank you to the entire KernelCI community for making this progress possible! Talk to us at [email protected], in the #kernelci IRC channel on Libera.Chat, or through our Discord server!
Sustaining long-term support for Linux mainline based products
https://kernelci.org/news/2024/09/13/hw-vendors-meet-in-vienna/
Fri, 13 Sep 2024 18:24:10 +0000
We all know that introducing and supporting hardware in Linux mainline is challenging. At KernelCI, we want to understand how we can help move the ecosystem further upstream.
The challenges come from many angles:
Invest in upstreaming and maintain the hardware support in Linux (and other projects) over the long run
Take IP protection into consideration, making sure that no critical information is shared in the open
Time to market pressures for the product release
Test mainline based products (continuously), making sure:
The hardware remains functional
No regressions appear
Maintain test infrastructure, so testing can be timely and efficient
Interact with the kernel community to improve the state of the art and fix issues
And so much more
These challenges come from both technical and business aspects. On the one hand, there are still a lot of difficulties in interacting with the upstream community to implement drivers, review patches, and solve regressions. On the other hand, business stakeholders and decision-makers have a hard time understanding upstream practices, which contributes to insufficient investment to bring enough knowledge and resources for effective participation in upstream. KernelCI aims to help Hardware Vendors address both the technical and business obstacles.
With that in mind, KernelCI wants to start a discussion with all interested Hardware Vendors to share our experiences and pain points. We could then look ahead at how we can collaborate on improving the kernel integration processes and facilitate the maintenance of stable kernels in the long run. Stability and security needs are growing exponentially, with so much of our global infrastructure depending on the hardware and software we build.
If you are in Vienna, Austria on September 18th for Open Source Summit Europe and/or Linux Plumbers Conference, we invite you to join our in-person discussion to happen from 1:30pm to 3pm at LPC Room 1.34. If you have any questions or comments, please contact us at [email protected].