The post Airy3D and Lattice to Showcase Compact, Integrated Humanoid and Robotic 3D Vision Demo at Embedded World 2026 appeared first on 2026 Embedded Vision Summit.
The demo combines Airy3D’s DepthIQ technology with a compact, low-power Lattice CrossLink-NX FPGA to enable high-quality depth perception in extremely small form factors while minimizing system cost, power consumption, and compute load. By offloading depth processing to the Lattice FPGA, the solution frees system resources and enables efficient integration alongside an application processor.
Showcased on a humanoid robotic hand, the solution demonstrates how single-sensor, passive 3D vision in a compact form factor can perceive objects at very close range and be integrated at the edge of robotic systems—opening new possibilities for robotic grasping, in-hand manipulation, and end-effector-mounted perception.
“In robotics and humanoid systems, moving limbs can create self-occlusion and reduce perception reliability in dynamic environments,” said Jean-Sebastien Landry, Sr. Director, Product and Strategic Partnerships at Airy3D. “By eliminating occlusion and minimum-Z constraints, DepthIQ enables reliable near-field perception and supports camera placement directly within robotic hands and other space-constrained areas.”
“Developers building next-generation humanoids need architectural flexibility at the edge,” said Karl Wachswender, Senior Principal System Architect, Industrial, at Lattice Semiconductor. “By accelerating depth processing with industry-leading low-power, small-sized Lattice FPGAs, this solution by Airy3D allows system designers to either reduce overall system cost or redeploy compute resources toward higher-value AI tasks.”
To experience the new possibilities of robotics and humanoid applications at the edge, powered by a compact and compute-efficient 3D vision solution, visit Airy3D in the Lattice Semiconductor booth (Hall 4, booth #528) at Embedded World 2026.
For more information about DepthIQ and Airy3D technology, visit www.airy3d.com.
About Airy3D
Airy3D is a pioneer in single-sensor 3D vision, transforming standard 2D image sensors into advanced depth-sensing devices through its patented technology. Airy3D enables compact, power-efficient, and cost-effective 3D perception for applications including robotics, automotive, industrial automation, and consumer devices.
Media Contact
Jean-Sebastien Landry
Sr. Director, Product and Strategic Partnerships
[email protected]
The post News Highlights from Embedded Vision Summit 2025 appeared first on 2026 Embedded Vision Summit.
He also covered techniques that enable robots to determine appropriate actions in novel situations. The key to deploying these models at the edge is in making the VLMs smaller and more efficient while retaining accuracy.
At the summit, over 1,200 attendees had the choice of learning from some 85 presentations and 65 exhibitors and exploring the latest in embedded vision. In this article, we highlight a selection of the announcements from the event that showcase some of the advances in enabling embedded vision.
One of the winners of the 2025 Edge AI and Vision Product of the Year Awards, in the edge AI computers or boards category, was MemryX, which just two weeks earlier had joined the National Semiconductor Hub in Saudi Arabia and received a strategic investment from NEOM, one of the Middle East’s largest digital infrastructure projects, through the Neom Investment Fund.
Technology analyst firm BDTI, which also organizes the Embedded Vision Summit, recently carried out a hands-on evaluation of MemryX’s MX3 M.2 AI accelerator module and said the product was exceptionally easy to use while providing good performance and consuming little power. For the evaluation, BDTI installed the M.2 module in an x86 Linux PC, downloaded and compiled several neural network models using the MemryX tools, ran these networks on the module and measured inference performance and power consumption. They also developed a small stand-alone example application that takes images from a USB webcam and runs object detection on them using the module.
“It is the first AI accelerator we’ve encountered for which both the hardware and the software ‘just works.’ We were particularly impressed by the scope of models tested in MemryX’s Model eXplorer website, and the fact that the software tools are sufficiently capable that MemryX doesn’t need to provide its own model zoo of modified models,” BDTI said. “Rather, models can be easily compiled and achieve good performance from their original source code form.” The full evaluation (17 pages) can be found here.
South Korea-based Nota AI, which just filed for an IPO listing in South Korea, showcased its collaboration with Qualcomm Technologies at the Embedded Vision Summit. The company emphasized the optimization of Nota AI’s proprietary AI model optimization platform, NetsPresso, for use with the Qualcomm AI Hub. Nota AI’s CTO, Tae-Ho Kim, detailed how the integrated platforms significantly streamline the workflow for developing and deploying AI models on edge devices.

Nota AI showed off its NetsPresso Optimization Studio, an enhancement to its AI model optimization platform that offers users an intuitive, visual interface designed to simplify AI model optimization. Developers can quickly visualize critical layer details and model performance required for efficient quantization, enabling rapid, data-driven decisions based on actual device performance metrics, according to Nota AI.
Also featured at the show was the Nota Vision Agent (NVA), a generative AI-based video analytics solution. NVA enables real-time video event detection, natural language video search and automated report generation, helping enterprise users maximize situational awareness and operational efficiency. The solution has already proven its commercial viability through a recent supply agreement with the Dubai Roads and Transport Authority.
Meanwhile, SiMa.ai said that it had collaborated with Wind River on an integrated hardware and software solution for “next generation edge AI.”
“Edge AI is the next gold rush and creating significant opportunities across robotics, industrial automation, medical, automotive, and aerospace and defense,” said SiMa.ai founder and CEO Krishna Rangasayee. “Together, with Wind River’s long-standing expertise and market success, SiMa.ai is now jointly delivering the industry’s best edge AI platform that leads in performance, power-efficiency and ease-of-use, addressing all AI needs, including GenAI.”
In the company’s announcement, SiMa.ai said its MLSoC platform is integrated with the eLxr project, an enterprise-grade Debian derivative, with commercial support provided by Wind River’s eLxr Pro, allowing developers to easily customize and shorten time to production. “This integrated solution combines the freedom of open source with enterprise-grade security, stability and compliance,” the company’s announcement added.
Vision Components presented its new VC MIPI “bricks” system at the Embedded Vision Summit, a modular system based on perfectly matching components, comprising camera modules, accessories and services, all the way through to ready-to-use MIPI cameras and complete embedded vision systems. Development kits from PHYTEC with NXP i.MX 8M Plus and i.MX 8M Mini processors are also part of the VC MIPI bricks system.

The modular system is matched to the more than 50 VC MIPI cameras and the requirements of industrial projects. The product includes FPC and coax cables for flexible connection, as well as the various VC Power SoM FPGA accelerators for image pre-processing. On request, the cameras are also available ready-to-use, with optics fully assembled and calibrated.
The post NAMUGA debuts Stella-2 solid-state lidar powered by Lumotive appeared first on 2026 Embedded Vision Summit.
Lumotive announced that NAMUGA Co., Ltd. will introduce its first solid-state 3D lidar sensor, Stella-2, which uses Lumotive’s Light Control Metasurface (LCM) technology, at the upcoming Embedded Vision Summit in California.
Stella-2 offers a compact design, software-configurable features, and outdoor-capable performance for use in commercial robotics and industrial automation. It is suitable for applications such as autonomous vacuum cleaners, lawnmowers, warehouse robots, and industrial equipment.
Stella-2: Designed for robotics and autonomy
Available in integratable sensor module and sealed enclosure formats, Stella-2 simplifies 3D perception for a wide range of robotic platforms.
The launch of Stella-2 reflects the deepening collaboration between Lumotive and NAMUGA, driven by a shared mission to democratize high-performance 3D sensing through scalable, software-defined solutions. Embedded Vision Summit marks a major milestone in bringing this vision to market.
For more information, please visit lumotive.com.
The post Leopard Imaging to Showcase Advanced Computer Vision Cameras Powered by Qualcomm RB3 Gen 2 and Ride 4 at Embedded Vision Summit appeared first on 2026 Embedded Vision Summit.
The demonstration will highlight Leopard Imaging’s LI-IMX676-FLEX-105H, a 12MP camera running on Qualcomm’s RB3 Gen 2 platform, and the LI-OX03F10-VS775S, an advanced ASA camera running on the Qualcomm Ride 4 platform. The demos are designed to unleash new potential in robotics, AIoT, automotive, and autonomous systems development.
The Leopard Imaging LI-IMX676-FLEX-105H camera delivers high-resolution image capture and accurate depth perception—critical for navigation, obstacle detection, and autonomous decision-making. Leveraging the compute power of Qualcomm’s QCS6490 processor, the system supports sophisticated 3D vision applications in real time while maintaining a low power footprint.
The Qualcomm Dragonwing RB3 Gen 2 platform enables developers to push boundaries, thanks to its 5th generation AI Engine delivering up to 12 TOPS of performance. The platform supports up to three concurrent camera inputs, 8K video capture, and a suite of popular AI frameworks such as TensorFlow Lite, ROS, and Edge Impulse.
Leopard Imaging’s integration with the RB3 Gen 2 platform comes with a fully optimized development kit that simplifies prototyping and deployment. The kit includes pre-integrated drivers for cameras, sensors, and motor controls, and supports high-speed interfaces such as MIPI CSI and PCIe. The AI/ML Developer Workflow enables seamless model optimization and deployment, empowering developers to bring innovative solutions to market faster.
The LI-OX03F10-VS775S camera, integrated with Qualcomm’s Ride 4 platform, brings advanced vision sensing to the automotive edge. Its color CMOS sensor enables accuracy in varied lighting conditions—making it ideal for automotive applications such as surround view systems, rearview cameras, and autonomous driving.
Qualcomm’s Ride 4 platform is purpose-built for next-generation automotive compute, delivering scalable performance, functional safety, and advanced AI processing for ADAS and automated driving. With support for multi-modal sensor fusion, real-time decision-making, and ASIL-D compliance, Ride 4 empowers developers to create intelligent vehicle systems that meet the stringent demands of the automotive industry.
Leopard Imaging’s development kits are optimized for Qualcomm’s platforms, offering pre-integrated drivers, MIPI CSI and PCIe interfaces, and streamlined AI/ML workflows for faster time to market.
Leopard Imaging and Qualcomm will showcase live demos at the Embedded Vision Summit at the Santa Clara Convention Center, May 21–22, at Booth #700. To arrange a meeting at the event, please contact [email protected].
About Leopard Imaging Inc.
Founded in 2008, Leopard Imaging is a global leader in high-definition embedded cameras and AI-based imaging solutions. Specializing in core technologies that enhance image processing, Leopard Imaging serves various industries, including automotive, aerospace, drones, IoT, and robotics. Offering both original equipment manufacturer (OEM) and original design manufacturer (ODM) services, as well as high-quality manufacturing capabilities in both the U.S. and offshore, Leopard Imaging provides customized camera solutions for some of the most prestigious organizations worldwide. Leopard Imaging holds quality management certifications such as IATF16949 for the automotive industry and AS9100D for the aerospace industry, ensuring the highest standards in its products and services.
Press Contact
Cathy Zhao
The post Plainsight Introduces Open Source OpenFilter for Scalable Computer Vision AI appeared first on 2026 Embedded Vision Summit.
Plainsight will demonstrate OpenFilter at the Embedded Vision Summit, where it will be showcased at booth #518. CEO Kit Merker will present on Thursday, May 22, with a talk titled “Beyond the Demo: Turning Computer Vision Prototypes into Scalable, Cost-Effective Solutions.”
“OpenFilter has revolutionized how we deploy vision AI for our manufacturing and logistics clients,” said Priyanshu Sharma, Senior Data Engineer at BrickRed Systems. “With its modular filter architecture, we can quickly build and customize pipelines for tasks like automated quality inspection and real-time inventory tracking, without having to rewrite core infrastructure. This flexibility has enabled us to deliver robust, scalable solutions that meet our clients’ evolving needs, while dramatically reducing development time and operational complexity.”
OpenFilter directly addresses the challenges enterprises face when working to deploy AI computer vision in production. Its frame deduplication and priority scheduling reduce GPU inference costs, and its advanced abstractions reduce deployment timelines from weeks to days. Its extensible architecture future-proofs investments, making it easy to adapt to audio, text, and multimodal AI, and positioning OpenFilter as a foundational platform for scalable, agentic computer vision systems.
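Plainsight has not published the internals referenced above, but the frame-deduplication idea itself is straightforward. The sketch below (illustrative only, not OpenFilter’s actual implementation) hashes incoming frames and drops any frame whose content is identical to the previous one, so a mostly static camera feed generates far fewer inference calls:

```python
# Sketch of frame deduplication for a vision pipeline (illustrative only --
# not OpenFilter's actual implementation). Frames whose bytes match the
# previous frame are dropped before they reach the GPU inference step.
import hashlib

def dedup_frames(frames):
    """Yield only frames that differ from the immediately preceding frame."""
    last_digest = None
    for frame in frames:
        digest = hashlib.sha256(frame).hexdigest()
        if digest != last_digest:
            last_digest = digest
            yield frame

# A mostly-static camera feed: 6 frames, but only 3 changes of content.
feed = [b"frame-A", b"frame-A", b"frame-A", b"frame-B", b"frame-B", b"frame-A"]
unique = list(dedup_frames(feed))
print(len(unique))  # 3 -- inference cost is halved on this feed
```

A production system would typically use a perceptual hash with a similarity threshold rather than an exact byte hash, so that sensor noise does not defeat the deduplication.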
Traditional computer vision projects often stall due to fragmented tooling and scalability challenges; OpenFilter is designed to address these directly across a range of use cases.
“Filters are the building blocks for operationalizing vision AI,” said Andrew Smith, CTO of Plainsight. “Instead of wrestling with brittle pipelines and bespoke infrastructure, developers can snap together reusable components that scale from prototypes to production. It’s how we make computer vision feel more like software engineering – and less like science experiments.”
“OpenFilter is a leap forward for open source, giving developers and data scientists a powerful, collaborative platform to build and scale computer vision AI,” said Chris Aniszczyk, CTO, CNCF. “Its modular design and permissive Apache 2.0 license make it easy to adapt solutions for everything from agriculture and manufacturing to retail and logistics, helping organizations of all types and sizes unlock the value of vision-based AI.”
“OpenFilter is the abstraction the AI industry has been waiting for. We’re making it possible for anyone – not just experts – to turn camera data into real business value, faster and at lower cost,” said Plainsight CEO Kit Merker. “By treating vision workloads as modular filters, we give developers the power to build, scale, and update applications with the same ease and flexibility as modern cloud software. This isn’t just about productivity, it’s about democratizing computer vision, unlocking new use cases, and making AI accessible and sustainable for every organization. We believe this is the foundation for the next wave of AI-powered transformation.”
OpenFilter is available today under the Apache 2.0 license. Enterprises can join the Early Access Program for the commercial version of OpenFilter. Learn more at plainsight.ai.
Plainsight empowers businesses to unlock actionable insights from visual data. Its open source and commercial solutions serve industries like manufacturing, agriculture, and security, prioritizing privacy, scalability, and responsible AI. Headquartered in Kirkland, Washington, Plainsight is backed by distributed systems pioneers from Amazon, Google, and Microsoft.
The post BrainChip CTO to Present on Architectural Innovation for Low-Power AI at the Embedded Vision Summit appeared first on 2026 Embedded Vision Summit.
Dr. Lewis’ presentation, titled “State-Space Models vs Transformers at the Extreme Edge: Architectural Choices for Low Power AI,” will examine the trade-offs and opportunities in designing AI models for environments with severe energy and compute constraints.
In his session, Dr. Lewis will explore how State-Space Models (SSMs)—including BrainChip’s proprietary Temporal Event-based Neural Networks (TENNs)—offer significant advantages over traditional Transformer architectures in edge scenarios. Unlike Transformers, which rely on energy-intensive read-write memory architectures, SSMs support read-only operations that reduce total system power and enable the use of novel memory types. With fewer MAC (multiply-accumulate) units required, SSMs further minimize energy consumption and chip area. The talk will also highlight methods for migrating from Transformer-based models like LLaMA to SSMs using distillation, preserving accuracy while dramatically improving efficiency.
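The core contrast can be sketched in a few lines. The toy code below (illustrative only, not BrainChip’s TENNs) compares the per-step state of causal attention, whose key/value cache grows with sequence length, against a linear state-space recurrence x_t = A·x_{t-1} + B·u_t, whose state stays fixed-size no matter how long the stream runs:

```python
# Toy comparison: per-step memory of attention vs. a state-space model (SSM).
# Illustrative only -- real SSMs (S4, Mamba, TENNs) use learned, structured
# parameterizations; here A and B are fixed scalars applied per dimension.

D = 4  # model/state dimension

def attention_step(kv_cache, u):
    """Causal attention keeps every past token: the KV cache grows by one
    entry per step, so memory is O(t) in the sequence length."""
    kv_cache.append(list(u))
    # (real attention would now score u against all cached keys)
    return kv_cache

def ssm_step(x, u, A=0.9, B=0.1):
    """Linear SSM recurrence x_t = A*x_{t-1} + B*u_t: the state is one
    fixed-size vector, so memory is O(1) regardless of sequence length."""
    return [A * xi + B * ui for xi, ui in zip(x, u)]

seq = [[1.0] * D for _ in range(1000)]  # 1000 dummy input tokens

cache, state = [], [0.0] * D
for u in seq:
    cache = attention_step(cache, u)
    state = ssm_step(state, u)

print(len(cache))  # 1000 -- attention state grew with the sequence
print(len(state))  # 4    -- SSM state stayed fixed
```

The fixed-size, read-mostly state is what allows SSM hardware to trade the read-write KV cache of a Transformer for simpler memory and fewer MAC units, as described above.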
Attendees of the Summit can visit BrainChip at booth #716 to see live demonstrations of the company’s latest advancements in Edge AI technology, including innovations in on-chip language processing, ultra-low power inference, and Akida-powered solutions.
“State-space models are redefining the limits of real-time, low-power intelligence at the edge,” said Dr. Lewis. “This presentation will show how BrainChip is delivering scalable, efficient, and highly interactive AI solutions for today’s most demanding embedded environments.”
This participation reinforces BrainChip’s leadership in the rapidly growing field of real-time streaming Edge AI, bringing highly efficient compute to devices in aerospace, automotive, robotics, consumer electronics, and wearables.
About BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY)
BrainChip is the worldwide leader in Edge AI on-chip processing and learning. The company’s first-to-market, fully digital, event-based AI processor, Akida, uses neuromorphic principles to mimic the human brain, analyzing only essential sensor inputs at the point of acquisition and processing data with unmatched efficiency, precision, and energy economy.
BrainChip’s Temporal Event-based Neural Networks (TENNs) build on State-Space Models (SSMs) with time-sensitive, event-driven frameworks that are ideal for real-time streaming applications. These innovations make low-power Edge AI deployable across industries such as aerospace, autonomous vehicles, robotics, industrial IoT, consumer devices, and wearables. BrainChip is advancing the future of intelligent computing, bringing AI closer to the sensor and closer to real-time.
Explore more at www.brainchip.com
Follow BrainChip:
Twitter: https://www.twitter.com/BrainChip_inc
LinkedIn: https://www.linkedin.com/company/7792006
Media Contact:
Mark Smith
JPR Communications
818-398-1424
The post Airy3D Announces the Support of DepthIQ for Qualcomm Dragonwing RB3 Gen 2 and RB5 Platforms appeared first on 2026 Embedded Vision Summit.
Montreal, Canada – May 20th, 2025 – Airy3D, a leader in 3D sensing solutions, today announced that its DepthIQ software development kit (SDK) is now supported on the Qualcomm Dragonwing RB3 Gen 2 and RB5 platforms. This milestone enables developers and OEMs to easily integrate DepthIQ’s passive depth sensing capabilities into a wide range of intelligent devices powered by Qualcomm Technologies’ industry-leading edge processors.
The Dragonwing RB3 Gen 2 and RB5 platforms are widely adopted across robotics, industrial IoT, smart cameras, and edge AI devices—markets that increasingly demand compact, cost-effective, and low-power 3D vision. By adding native support for these platforms, DepthIQ empowers embedded systems to deliver high-quality and high-resolution depth maps in the smallest form factor, without the need for active components or specialized hardware.
“We are excited to extend DepthIQ’s capabilities to the Dragonwing RB3 Gen 2 and RB5 platforms, which are central to many of our target markets,” said Jean-Sébastien Landry, Director of Product Management at Airy3D. “This integration allows developers to bring robust 3D vision to applications like robotics navigation, smart retail, industrial monitoring, and biometric security—while preserving image quality, minimizing system complexity, and reducing power consumption.”
Live Demo at Embedded Vision Summit
Airy3D will showcase its DepthIQ SDK running on the Dragonwing RB5 platform at the Embedded Vision Summit in Santa Clara, California, from May 21–23, 2025. Airy3D will also receive a Product of the Year award in the Edge AI camera and sensor category at the event.
To learn more about DepthIQ and Airy3D technology, visit: https://www.airy3d.com/.
About Airy3D
Airy3D creates simple, comprehensive depth solutions by focusing on people and technology, as well as partners and customers, to solve problems that deeply impact our industry and society. Airy3D is dedicated to revolutionizing how machines perceive the world. This ambition is driven by a commitment to simplicity, versatility, and affordability, ensuring that Airy3D’s solutions are accessible and transformative for industries worldwide.
Airy3D media enquiries contact: [email protected]
Qualcomm branded products are products of Qualcomm Technologies, Inc. and/or its subsidiaries. Qualcomm and Qualcomm Dragonwing are trademarks or registered trademarks of Qualcomm Incorporated.
The post Leopard Imaging and Sony Semiconductor Solutions Collaborate to Showcase LI-IMX454 Multispectral Cameras at Automate and Embedded Vision Summit appeared first on 2026 Embedded Vision Summit.
Leopard Imaging launched the LI-USB30-IMX454-MIPI-092H camera, which delivers high-resolution imaging across diverse lighting spectrums, powered by Sony’s advanced IMX454 multispectral image sensor. Unlike conventional RGB sensors, Sony’s IMX454 integrates eight distinct spectral filters directly onto the photodiode array, allowing the camera to capture light across 41 wavelengths from 450 nm to 850 nm in a single shot using Sony’s dedicated signal processing—without the need for mechanical scanning or bulky spectral elements.
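To make the 8-filters-to-41-bands idea concrete, the sketch below linearly interpolates eight hypothetical filter readings, at assumed center wavelengths, onto a 41-point grid from 450 nm to 850 nm in 10 nm steps. The wavelengths and readings here are made up for illustration, and real spectral reconstruction (as in Sony’s proprietary pipeline) is far more sophisticated than simple interpolation:

```python
# Simplified sketch: expand 8 filter-channel readings into a 41-band spectrum
# by linear interpolation. The filter center wavelengths and readings below
# are invented for illustration; the real IMX454 pipeline relies on Sony's
# dedicated signal processing, not interpolation.

# Assumed center wavelengths (nm) of the 8 spectral filters, and readings.
centers = [450, 500, 550, 600, 650, 700, 780, 850]
readings = [0.12, 0.30, 0.55, 0.62, 0.48, 0.35, 0.22, 0.10]

def interpolate_spectrum(centers, readings, start=450, stop=850, step=10):
    """Linearly interpolate sparse filter readings onto a dense wavelength grid."""
    spectrum = []
    for wl in range(start, stop + 1, step):
        # Find the pair of filter centers that brackets this wavelength.
        for i in range(len(centers) - 1):
            if centers[i] <= wl <= centers[i + 1]:
                t = (wl - centers[i]) / (centers[i + 1] - centers[i])
                spectrum.append(readings[i] + t * (readings[i + 1] - readings[i]))
                break
    return spectrum

bands = interpolate_spectrum(centers, readings)
print(len(bands))  # 41 wavelength samples spanning 450 nm to 850 nm
```

Note that 450 nm to 850 nm in 10 nm steps yields exactly the 41 wavelength samples quoted in the announcement.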
Multispectral imaging has historically been underutilized due to cost and complexity. With the LI-IMX454, Leopard Imaging and Sony aim to democratize access to this powerful technology by offering a compact, ready-to-integrate solution for a wide range of industries: from industrial inspection to medical diagnostics, precision agriculture, and many more.
“We’re excited to collaborate with Sony to bring this next-generation imaging solution to market,” said Bill Pu, President and Co-Founder of Leopard Imaging. “The LI-IMX454 cameras not only deliver high-resolution multispectral data but also integrate seamlessly with AI and machine vision systems for intelligent decision-making.”
The collaboration also incorporates Sony’s proprietary signal processing software, optimized to support key functions essential to multispectral imaging: defect correction, noise reduction, auto exposure control, robust non-RGB based classification, and color image generation.
Leopard Imaging and Sony will showcase live demos of the LI-IMX454 cameras at both Automate (Huntington Place, Booth #8000, May 12–13) and the Embedded Vision Summit (Santa Clara Convention Center, Booth #700, May 21–22). To arrange a meeting at either event, please contact [email protected].
About Leopard Imaging Inc.
Founded in 2008, Leopard Imaging is a global leader in high-definition embedded cameras and AI-based imaging solutions. Specializing in core technologies that enhance image processing, Leopard Imaging serves various industries, including automotive, aerospace, drones, IoT, and robotics. Offering both original equipment manufacturer (OEM) and original design manufacturer (ODM) services, as well as high-quality manufacturing capabilities in both the U.S. and offshore, Leopard Imaging provides customized camera solutions for some of the most prestigious organizations worldwide. As an NVIDIA Elite Partner, Leopard Imaging holds quality management certifications such as IATF16949 for the automotive industry and AS9100D for the aerospace industry, ensuring the highest standards in its products and services.
Press Contact
Cathy Zhao
SOURCE Leopard Imaging Inc.
The post Lattice to Showcase Innovative Edge AI and Vision Solutions at Embedded Vision Summit 2025 appeared first on 2026 Embedded Vision Summit.
About Lattice Semiconductor
Lattice Semiconductor (NASDAQ: LSCC) is the low power programmable leader. We solve customer problems across the network, from the Edge to the Cloud, in the growing Communications, Computing, Industrial, Automotive, and Consumer markets. Our technology, long-standing relationships, and commitment to world-class support let our customers quickly and easily unleash their innovation to create a smart, secure, and connected world.
For more information about Lattice, please visit www.latticesemi.com. You can also follow us via LinkedIn, X, Facebook, YouTube, WeChat or Weibo.
Lattice Semiconductor Corporation, Lattice Semiconductor (& design), and specific product designations are either registered trademarks or trademarks of Lattice Semiconductor Corporation or its subsidiaries in the United States and/or other countries. The use of the word “partner” does not imply a legal partnership between Lattice and any other entity.
GENERAL NOTICE: Other product names used in this publication are for identification purposes only and may be trademarks of their respective holders.
Media Contact:
Sophia Hong
Lattice Semiconductor
Phone: 408-268-8786
[email protected]
The post Vision Components at Embedded Vision Summit: Plug&play MIPI Vision, AI Award and Price Reduction appeared first on 2026 Embedded Vision Summit.
At the Embedded Vision Summit, PerPlant will receive the AI Innovation Award for its Smart Farming Sensor, which was developed using VC MIPI Camera Modules. In the run-up to the event, VC also announces a 12 percent price reduction for its VC MIPI IMX900 Cameras.
Vision Components at Embedded Vision Summit in Santa Clara, California: Booth #620
For further information, visit: www.mipi-modules.com
From cameras to complete embedded vision systems
The VC MIPI Bricks system is perfectly matched to the more than 50 VC MIPI Cameras and the requirements of industrial projects. It includes FPC and coax cables for flexible connection as well as the various VC Power SoM FPGA accelerators for image pre-processing. On request, the cameras are also available ready-to-use, with optics, fully assembled and calibrated. VC also undertakes individual adaptations of the modules.
For complete, customized embedded vision systems and development support from the initial idea through to integration, VC works with Notavis, part of the VC Group. Notavis specializes in the development of embedded vision systems.
Development kit for VC MIPI Cameras and i.MX 8M Plus / 8M Mini
Part of the VC MIPI Bricks system are the two PHYTEC phyBOARD for VC MIPI development kits with NXP i.MX 8M Plus and i.MX 8M Mini processor, which enable functional tests and direct start into development. They support all VC MIPI Cameras, including functions such as the processing of image data in the ISPs of the i.MX 8M Plus processor and edge and AI computing with the integrated NPU units. Both kits are available from PHYTEC and VC. The desired VC MIPI Camera can be freely selected.
Smart farming sensor with VC MIPI Cameras receives the AI Innovation Award
At the Embedded Vision Summit, PerPlant is honored with the AI Innovation Award in the Agriculture category for its PerPlant Insight Sensor, which was developed on the basis of an embedded vision system with VC MIPI Cameras. The smart farming sensor can be installed on any tractor within 30 minutes. It enables AI-based control of fertilization and pesticide application in real time and without an external computing unit.
Jan-Erik Schmitt, Vice President of Sales at Vision Components says:
“We are proud that this project was developed with cameras and support from Vision Components. The sensor opens up the benefits of smart farming to numerous users and contributes to environmental protection and greater sustainability. We wish PerPlant continued success with this outstanding project.”
Price reduction for VC MIPI IMX900 and support in LibCamera
Vision Components is reducing the price of its latest camera module, the VC MIPI IMX900, by around 12 percent with immediate effect, passing on to its customers the cost savings in purchasing and production made possible by high demand.
Another new feature, first disclosed at the Robotics Summit, is support for the VC MIPI Camera Modules in the LibCamera open-source library. This makes it even easier to commission VC MIPI Cameras on all processor platforms. Vision Components also provides its own camera drivers, with an extended range of functions, as source code free of charge. As a result, VC MIPI Bricks ensures comprehensive compatibility of hardware and software and fast, simple, cost-effective integration.

Caption: At Embedded Vision Summit 2025, Vision Components presents its new VC MIPI Bricks system of perfectly matching camera modules, accessories and services, right through to ready-to-use MIPI cameras and complete embedded vision systems.
About Vision Components
Vision Components is a leading manufacturer of embedded vision systems with over 25 years of experience. The product range extends from versatile MIPI camera modules to freely programmable cameras with ARM/Linux and OEM systems for 2D and 3D image processing. The company was founded in 1996 by Michael Engel, inventor of the first industrial-grade intelligent camera. VC operates worldwide, with sales offices in the USA, Japan, and UAE as well as local partners in over 25 countries.
Company contact
Vision Components GmbH
Jan-Erik Schmitt
+49 7243 216 7-0
Ottostraße 2 | 76275 Ettlingen