Nov 15, 2019
Qualcomm products mentioned within this post are offered by Qualcomm Technologies, Inc. and/or its subsidiaries.
One of the exciting aspects of technology is seeing how different fields converge to provide cool new solutions. As a robotics developer, I’m very excited to see how artificial intelligence (AI) is finding its way into the field of robotics.
Traditionally, robotics has focused on solving mechanical problems: controlling actuators, gathering input from sensors, making devices more programmable and self-reliant, and so on. At the same time, AI has typically focused on making software smarter, allowing machines to make independent decisions, and mimicking the way humans learn and improve. With advances in both fields, the integration of AI with robotics is now providing the building blocks for truly intelligent and realistic robots.
And with the robotics industry projected to grow from $48.7 billion in 2019 to over $75 billion in 2024, it’s definitely an exciting field to be in. So, let’s take a look at how AI is augmenting the robotics industry, and some things to consider for adding AI to your next robotics project.
How AI is making robots smarter
When we think of AI-driven robots, we often picture the robotic humanoids featured in blockbuster films that are either assisting humans, or have run amok with evil intentions. In reality, robots can be found helping out in all different industries and form factors.
A key driving factor behind this is the creation of new algorithmic models that help identify high-value data from which to make decisions. Machine learning, computer vision, and gesture and emotion recognition, in combination with sophisticated sensors, advanced actuators, and powerful edge processors, are allowing robots to accomplish more tasks than ever before. To give you an idea of just how innovative and beneficial these new AI-equipped robots are becoming, here are a few examples to get you thinking about what you could create:
- Assembly and manufacturing: AI is allowing robots to perform constant/real-time corrections and to learn optimal paths for producing products. Moreover, they’re able to do this while performing repetitious and often dangerous tasks, where a human’s focus might degrade.
- Packaging: AI can help optimize the motions that a robot’s mechanical components make to package items, and can also inform decisions about how items should be packaged in the first place.
- Autonomous vehicles: AI, most notably machine vision, is helping the next generation of vehicles become more autonomous. While their form factor is not what we picture when we think of a robot, the AI in a vehicle’s computer system can intelligently control steering, braking, and acceleration, while anticipating hazardous situations before they occur.
- Customer service: AI is helping to drive various forms of robots with natural language processing that can assist humans. For example, robots can be used to deliver drinks to hotel rooms, or pizzas to homes, while letting recipients ask questions to get information. Moreover, such robots are able to collect information such as customer preferences, environmental considerations, etc., which can be used to optimize subsequent deliveries.
As you work with your AI and robotics development team, here are some development considerations:
AI frameworks – AI has traditionally been a hard problem implemented only by those who specialize in it. Nowadays, numerous frameworks have democratized AI such that almost any developer can integrate AI without needing to be an AI specialist. In the field of robotics, AI can be used for everything from real-time decision making, to path finding and object detection. Currently, the most widely used frameworks are TensorFlow and Caffe2, and ONNX is a popular open-source exchange format that allows deep learning models to be shared between a variety of frameworks. Developers looking to add AI to their robotics projects should become familiar with these three resources.
Training and data – with data being a core building block of AI, and of machine learning in particular, there are a number of considerations around optimizing an AI model while striking a balance between accuracy and the ability to handle the general input found in the real world.
One risk with deep learning is the introduction of biases during training, which can come from many sources. Ultimately, the dataset used to train our models is finite. The data we choose to train with, including its context and its very availability, represents some degree of bias, since we use it to stand in for more general cases. Bias is also inherently introduced as we optimize for speed and strike a balance between intelligence and overgeneralization.
Identifying and understanding bias is important not only so that robots function correctly, but also to ensure that robots don’t produce second-order effects such as behaviors that reinforce social inequalities, unfair prejudgments, etc.
Developers should spend time identifying the potential sources for bias based on what their intentions are for the robot, and place a strong focus on selecting the appropriate training data. For more information check out Setting up your Machine Learning Project for Success.
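One measurable source of bias is a skewed class distribution in the training set. The sketch below, a minimal illustration in plain Python (the function name and the 0.2 threshold are assumptions for demonstration, not a recommendation from any particular framework), flags classes that are badly underrepresented relative to the largest class:

```python
from collections import Counter

def class_balance_report(labels, warn_ratio=0.2):
    """Summarize class frequencies in a training set and flag classes
    whose count falls below warn_ratio of the largest class.
    A skewed distribution is one common, measurable source of bias."""
    counts = Counter(labels)
    largest = max(counts.values())
    report = {}
    for cls, n in counts.items():
        report[cls] = {
            "count": n,
            "share": n / len(labels),
            "underrepresented": n < warn_ratio * largest,
        }
    return report

# Hypothetical pick-and-place dataset, heavily skewed toward one object type
labels = ["box"] * 90 + ["cylinder"] * 8 + ["bag"] * 2
report = class_balance_report(labels)
# Here "cylinder" and "bag" are flagged; a robot trained on this data may
# handle boxes well but fail on the objects it rarely saw during training.
```

A check like this is only a starting point; bias can also hide in the context the data was collected in, not just in the label counts.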
Sensors – sensors are the bridge between the real world and the digital world, and are how robots detect objects and their surrounding environment. The term sensor broadly encompasses any device that captures some aspect of the real world. This includes barometers, altimeters, and GPS, as well as devices you may not typically think of as sensors, like cameras and microphones. Developers should choose their sensors carefully, taking into consideration factors like target accuracy, operating environment, and performance degradation over the sensor’s lifespan.
In the context of AI, sensor fusion can be employed to provide a range of capabilities, from filtering out noisy data to making predictions in unpredictable environments. In addition, simultaneous localization and mapping (SLAM) techniques, which use AI in conjunction with sensor data, can be employed to help robots navigate and interact with objects.
5G connectivity and edge computing – the evolution of 5G connectivity is converging with advances in powerful edge computing platforms to give developers more options than ever as to where data is processed. 5G, for example, incorporates the idea of edge clouds, which bring cloud processing closer to the edge. In conjunction with powerful mobile platforms, developers can choose to run inference (i.e., putting a trained AI model to use in the real world) on the device at the edge, or offload it to the edge cloud. This architecture can be particularly useful for robots operating in private networks, such as in factories, where analytics and decisions need to be made quickly. Moreover, this is making robots more self-reliant.
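The on-device versus edge-cloud choice can even be made per request. The sketch below is a hypothetical heuristic, not a 5G or vendor API: the edge cloud is assumed to run the model faster, but only pays off when the network round trip still fits the deadline. All names and numbers are illustrative assumptions:

```python
def choose_inference_target(model_latency_ms, deadline_ms,
                            network_rtt_ms, cloud_speedup=4.0):
    """Decide where to run inference for one request.
    The edge cloud is faster per inference (cloud_speedup) but adds a
    network round trip; run locally when offloading would be slower or
    would miss the deadline. Purely an illustrative heuristic."""
    cloud_total = network_rtt_ms + model_latency_ms / cloud_speedup
    local_total = model_latency_ms
    if cloud_total < local_total and cloud_total <= deadline_ms:
        return "edge-cloud", cloud_total
    return "on-device", local_total

# A fast private-network link makes offloading worthwhile...
target, latency = choose_inference_target(
    model_latency_ms=80, deadline_ms=100, network_rtt_ms=15)
# ...while a slow link keeps inference on the robot itself.
```

In a factory’s private 5G network the round trip is short and stable, which is why this architecture is attractive there.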
Battery life and thermal characteristics – maintaining battery and thermal efficiency is an ongoing and important issue for robots, especially when the safety of humans must be maintained. The problem is further compounded when additional processing is added: AI inference can require a lot of processing power, which typically leads to increased battery consumption and heat. Developers should look to employ platforms on which inference can be offloaded to specialized processors where possible. On the Qualcomm® Snapdragon™ 845 mobile platform, the Qualcomm® Hexagon™ DSP is equipped with special vector processing capabilities, making it more efficient than a general-purpose processor for many AI operations.
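Offloading usually follows a probe-and-fall-back pattern: try the most power-efficient accelerator first, and fall back toward the CPU when it can’t run the model (for example, because of unsupported layers). The sketch below illustrates the pattern only; the runtime names are illustrative strings, not an actual SDK enum:

```python
def pick_runtime(available, preferred=("dsp", "gpu", "cpu")):
    """Pick the most power-efficient runtime the device actually
    supports, falling back toward the CPU. Mirrors the common pattern
    of probing an accelerated runtime before committing to it."""
    for runtime in preferred:
        if runtime in available:
            return runtime
    raise RuntimeError("no supported runtime found")

# A device whose DSP path is unavailable for this model falls back to GPU
runtime = pick_runtime(available={"gpu", "cpu"})
```

The fall-back order encodes the battery/thermal trade-off directly: the specialized processor comes first precisely because it does the same inference for less power and heat.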
How to get started
Developers looking to start incorporating AI into their robotics projects should check out the Qualcomm® Robotics RB3 Platform, which is based on the Snapdragon 845 mobile platform and supports on-device machine learning, computer vision, and cellular connectivity. For added inspiration, we have published a project designed to help you get started with Amazon AWS RoboMaker and the Qualcomm Robotics RB3 Development Kit. Developers can also employ the Qualcomm® Neural Processing SDK for artificial intelligence (AI), which provides hardware-accelerated processing via the Hexagon DSP, Qualcomm® Adreno™ GPU, and Qualcomm® Kryo™ CPU. The SDK also supports models from TensorFlow, Caffe2, and the ONNX format.
In addition, we are also collaborating with alwaysAI, a developer platform that fast-tracks the creation and deployment of computer vision apps on edge devices. alwaysAI recently demonstrated real-time object detection on the Robotics RB3 platform, so stay tuned for more news on this exciting collaboration effort.