Motovis, a provider of embedded AI autonomous driving technology, and Xilinx announced that the two companies are collaborating on a solution that pairs the Xilinx Automotive (XA) Zynq® system-on-chip (SoC) platform with Motovis’ convolutional neural network (CNN) IP for the automotive market, specifically for forward camera systems’ vehicle perception and control.
Xilinx automotive success
Xilinx has shown steady unit shipment growth over the last fifteen years. The company doubled its automotive units shipped from 6.7 million in FY2007 to 12.8 million in FY2015, and grew further to 17.7 million units in FY2021. In total, Xilinx has shipped an impressive 209 million automotive devices, more than 85 million of which went into ADAS (Advanced Driver Assistance Systems).
The company supplies its chips and software to Tier-1s such as Continental, Magna, Veoneer, Hitachi, ZF, and Neusoft Reach. It also works with OEMs such as BYD, Subaru, Daimler, and Weltmeister. Due to its deep involvement in automotive adaptive technologies, including AI, radar, domain control, sensors, and LiDAR, Xilinx also cooperates closely with Baidu Apollo, Pony.ai, Ouster, MiniEye, and RoboSense. These are only the publicly announced customers; many more companies use Xilinx parts in existing or future cars but prefer not to be named.
Xilinx ADAS and AD focus areas include radar, front-facing, side, and rear cameras, domain controllers, in-cabin cameras, LiDAR, full display mirrors, and surround-view cameras.
The solution builds upon Xilinx’s corporate initiative to provide customers with robust platforms to enhance and speed development.
Forward camera systems are a critical element of advanced driver assistance systems because they provide the advanced sensing capabilities required for safety-critical functions, including lane-keeping assistance (LKA), automatic emergency braking (AEB), and adaptive cruise control (ACC). The solution, which is available now, supports a range of parameters necessary for the European New Car Assessment Program (NCAP) 2022 requirements by utilizing convolutional neural networks to achieve a cost-effective combination of low latency image processing, flexibility, and scalability.
The European New Car Assessment Program (NCAP) keeps adapting its testing requirements to motivate the industry to invest more in safety features for both car occupants and pedestrians.
“This collaboration is a significant milestone for the forward camera market as it will allow automotive OEMs to innovate faster,” said Ian Riches, vice president for the Global Automotive Practice at Strategy Analytics. “The forward camera market has tremendous growth opportunity, where we anticipate almost 20 percent year-on-year volume growth over 2020 to 2025. Together, Xilinx and Motovis are delivering a highly optimized hardware and software solution that will greatly serve the needs of automotive OEMs, especially as new standards emerge and requirements continue to grow.”
Forward camera solution
The forward camera solution scales across the 28nm and 16nm XA Zynq SoC families using Motovis’ CNN IP, a unique combination of optimized hardware and software partitioning capabilities with customizable CNN-specific engines that host Motovis’ deep learning networks – resulting in a cost-effective offering at different performance levels and price points. The solution supports image resolutions of up to eight megapixels. For the first time, OEMs and Tier-1 suppliers can now layer their feature algorithms on Motovis’ perception stack to differentiate and future-proof their designs.
Most cars today have 1-megapixel cameras, with the most recent models moving to 2 to 4 megapixels. Future models will get 8-megapixel cameras, which will provide more data for the CNN models and enable the neural network to detect objects at a much greater distance.
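The link between resolution and detection distance comes down to simple pinhole-camera geometry: at a fixed field of view, an object at a given distance covers proportionally more pixels on a higher-resolution sensor, giving the CNN more to work with. A minimal sketch of that arithmetic, using assumed figures (a 50° horizontal field of view, a 0.5 m-wide pedestrian, and 1280-px vs. 3840-px sensor widths standing in for roughly 1 MP and 8 MP cameras):

```python
import math

def pixels_on_target(h_res, hfov_deg, obj_width_m, dist_m):
    """Approximate width in pixels of an object seen by a pinhole camera."""
    # Focal length in pixels, derived from horizontal resolution and field of view
    focal_px = (h_res / 2) / math.tan(math.radians(hfov_deg) / 2)
    # Projected object width in pixels at the given distance
    return focal_px * obj_width_m / dist_m

# Hypothetical sensors: ~1 MP (1280 px wide) vs ~8 MP (3840 px wide), same 50° HFOV
for h_res in (1280, 3840):
    px = pixels_on_target(h_res, 50.0, 0.5, 100.0)  # 0.5 m-wide pedestrian at 100 m
    print(f"{h_res}-px-wide sensor: {px:.1f} px on target")
```

Under these assumptions the 8 MP sensor puts roughly three times as many pixels on the same pedestrian at 100 m, which is why higher resolution translates directly into earlier, longer-range detection.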
The Xilinx Zynq, available and deployed today, serves cameras up to 1 MP. The Zynq MPSoC, also available today, covers up to 2 MP, while a more advanced version of the Zynq MPSoC that covers 8 MP cameras will appear in vehicles and platforms launching in the 2024/2025 timeframe.
The upcoming Versal AI Edge, targeting vehicles arriving in the 2025/2026 timeframe, will support more than 8 megapixels.
“We are extremely pleased to unveil this new initiative with Xilinx and to bring to market our CNN forward camera solution. Customers designing systems enabled with AEB and LKA functionality need efficient neural network processing within an SoC that gives them flexibility to implement future features easily,” said Dr. Zhenghua Yu, CEO, Motovis. “With Motovis’ customizable deep learning networks and the Xilinx Zynq platform’s ability to host CNN-specific engines that provide unmatched efficiency and optimization, we’re helping to future-proof the design to meet customer needs.”
Market forces continue to drive the adoption of forward-looking camera systems to adhere to global government mandates and consumer watch groups – including The European Commission General Safety Regulation, the National Highway Traffic Safety Administration, and the NCAP. All three have issued formal mandates or strong guidance regarding automakers’ implementations of LKA and AEB in new vehicles produced between 2020-2025 and onward.
“Expanding our XA offering with a comprehensive solution for the forward camera market puts a cost-optimized, high-performance solution in the hands of our customers. We’re thrilled to bring this to life and drive the industry forward,” said Willard Tu, senior director of Automotive, Xilinx. “Motovis’ expertise in embedded deep learning and how they’ve optimized neural networks to handle the immense challenges of forward camera perception puts us both in a unique position to gain market share, all while accelerating our OEM customers’ time to market.”