
Perception stack

Can you ever have enough compute?

Robotics perception is a heavy task even for today’s most powerful processors. It generates a substantial amount of data, and that data must be processed in real time – or close to it – for the robot to make informed decisions while in motion.

That means developers usually must compromise somewhere: either accept fewer features or settle for weaker, slower perception.

But what if you could process perception tasks at the edge? Your primary processing unit could then concentrate on vehicle performance while the robot still makes quick decisions.

This is precisely what we did. The ifm perception stack manages perception processing for obstacle detection in a user-friendly platform while reducing friction in the development cycle.

In this article, we’ll discuss our approach to separating the perception stack from the primary processor and providing dedicated edge compute for perception.

Perception stack challenges

Multiple cameras have become the standard for autonomous vehicles. Whether it’s a small, bidirectional vehicle or a larger one with a substantial operating volume, several cameras are needed to cover all areas around the vehicle.

However, this solution presents the challenge of integrating multiple modalities and processing massive amounts of perception data.

Every additional 3D imaging unit added to a robot for enhanced perception demands more processing power from the primary processor. Eventually, upgrades become necessary, incurring additional costs.
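As a rough illustration of why, consider a back-of-envelope estimate in Python. The resolution, bytes per pixel, and frame rate below are assumptions for illustration, not O3R specifications:

    # Back-of-envelope estimate of raw 3D camera data rates.
    # All numbers are illustrative assumptions, not product specs.
    WIDTH, HEIGHT = 224, 172   # assumed time-of-flight resolution (pixels)
    BYTES_PER_PIXEL = 6        # assumed: three 16-bit channels (e.g., X, Y, Z)
    FPS = 20                   # assumed frame rate

    def raw_rate_mb_s(num_cameras: int) -> float:
        """Raw data rate in MB/s for identical 3D cameras."""
        return num_cameras * WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS / 1e6

    for n in (1, 3, 6):
        print(f"{n} camera(s): ~{raw_rate_mb_s(n):.1f} MB/s of raw data")

Every camera you add multiplies that load, and the primary processor still has to run detection on top of it.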

Meanwhile, integrating multiple cameras to perform the same task requires synchronization, so the vehicle responds to consistent information from the various devices on the same timescale.
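One common approach, sketched minimally here (this is a generic technique, not the O3R implementation), is to pair each frame from one sensor with the nearest-in-time reading from another and reject pairs whose timestamps differ by more than a tolerance:

    # Minimal nearest-timestamp pairing of two sensor streams.
    # Each reading is a (timestamp_seconds, payload) tuple; both
    # lists are sorted by timestamp. Tolerance is application-specific.
    from bisect import bisect_left

    def pair_by_timestamp(stream_a, stream_b, tolerance=0.02):
        """Yield (a, b) pairs whose timestamps differ by <= tolerance."""
        b_times = [t for t, _ in stream_b]
        for t_a, payload_a in stream_a:
            i = bisect_left(b_times, t_a)
            # Check the neighbors on either side of the insertion point.
            candidates = [j for j in (i - 1, i) if 0 <= j < len(stream_b)]
            if not candidates:
                continue
            j = min(candidates, key=lambda k: abs(b_times[k] - t_a))
            if abs(b_times[j] - t_a) <= tolerance:
                yield (t_a, payload_a), stream_b[j]

    # Example: camera frames at ~30 Hz paired with LiDAR scans at ~10 Hz.
    cam = [(0.000, "frame0"), (0.033, "frame1"), (0.066, "frame2")]
    lidar = [(0.001, "scan0"), (0.101, "scan1")]
    for (ta, frame), (tb, scan) in pair_by_timestamp(cam, lidar):
        print(f"matched {frame} @ {ta:.3f}s with {scan} @ {tb:.3f}s")

In practice, frameworks such as ROS provide ready-made approximate-time synchronizers that do this across many topics at once.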

In addition to 3D cameras, these devices may include 2D cameras, 2D and 3D LiDAR, ultrasonic sensors, and more.

Now, you are not just dealing with the challenges of integrating multiple cameras. You are working with a combination of devices that span different modalities, technologies, and underlying sensing physics. Bringing all of these elements together adds complexity that must be addressed.

Taking the load off your processor

We developed a dedicated compute unit that provides processing pipelines and tools for multi-camera, multimodal use cases.

This Vision Processing Unit (VPU) is well-suited for tasks where real-time processing and low power consumption are critical. It significantly lightens the load on the primary processor. 

Now, you’re sending only actionable data to the primary processor rather than generating that data on the processor itself.

This isn’t achievable with smart cameras. 
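To make the idea of actionable data concrete, here is a hypothetical sketch: instead of streaming full depth frames, an edge unit sends a compact list of detected obstacles. The field names and thresholds below are illustrative, not ifm’s actual output format:

    # Hypothetical compact output from an edge perception unit.
    # Field names and thresholds are illustrative, not ifm's format.
    from dataclasses import dataclass

    @dataclass
    class Obstacle:
        x_m: float         # position in the vehicle frame, meters
        y_m: float
        z_m: float
        radius_m: float    # bounding-sphere radius
        confidence: float  # detection confidence, 0..1

    # A handful of obstacles is tens of bytes per cycle; a raw depth
    # frame is hundreds of kilobytes. The primary processor only
    # consumes the summary:
    obstacles = [Obstacle(1.8, -0.4, 0.1, 0.25, 0.97)]
    stop = any(o.x_m < 2.0 and o.confidence > 0.9 for o in obstacles)
    print("slow down" if stop else "path clear")

The primary processor’s control loop stays small and predictable because it never touches raw sensor frames.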

Furthermore, our partnership with NVIDIA allows us to offer best-in-class solutions for edge perception. You can use ifm or non-ifm products, and you’ll have sufficient computing power available to handle the heavy lifting in perception tasks.

Split system with easy integration

We streamlined our approach to processing by creating a two-piece system dedicated to compute, without reinventing the wheel.

Recognizing the need for both GPU and CPU in vision and perception tasks, we kept the connection methodologies and connectors industry-standard and simple to assemble.

We focused on the VPU, refining a flexible software interface built on embedded Linux and Docker. This strategy allows customers to easily deploy and access code using familiar tools like ROS1 and ROS2.
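As a sketch of what that integration can look like, a minimal ROS 2 node on the vehicle’s primary computer might subscribe to the VPU’s preprocessed output. The topic name and message type here are placeholders, not the actual O3R interface:

    # Minimal ROS 2 (rclpy) subscriber consuming edge-perception output.
    # '/vpu/obstacles' and the String payload are placeholders, not the
    # actual O3R topic or message definition.
    import rclpy
    from rclpy.node import Node
    from std_msgs.msg import String

    class ObstacleListener(Node):
        def __init__(self):
            super().__init__('obstacle_listener')
            self.create_subscription(
                String, '/vpu/obstacles', self.on_obstacles, 10)

        def on_obstacles(self, msg: String):
            # React to preprocessed obstacle data instead of raw frames.
            self.get_logger().info(f'obstacles: {msg.data}')

    def main():
        rclpy.init()
        rclpy.spin(ObstacleListener())
        rclpy.shutdown()

    if __name__ == '__main__':
        main()

One benefit of the Docker-based interface is that a node like this can be developed on a laptop and deployed to the embedded target without changes.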

What faster processing means for you

  • Real-time responsiveness: Quick decisions and reactions to the environment improve obstacle avoidance and dynamic navigation. More sophisticated navigation algorithms quickly adapt to changes in the surroundings and find optimal paths.
  • Increased autonomy: More accurate and detailed interpretation of sensor data helps a mobile robot perform more complex tasks autonomously, reducing the need for constant human intervention.
  • Multitasking capability: The robot handles various tasks simultaneously, processing sensor data, performing computations, and executing control commands without significant delays.
  • Energy efficiency: Quick task completion allows the robot to spend more time in low-power idle states, conserving energy when high computational power is unnecessary.


Gain a competitive edge

Ready to learn more about how ifm’s O3R perception platform can give you more processing power without sacrificing performance? Fill out the form or email Tim McCarver directly at tim.mccarver@ifm.com.

 

 
