What is autonomy? Autonomy is the ability to make your own decisions. In humans, autonomy allows us to do the most meaningful, not to mention meaningless, tasks. This includes things like walking, talking, waving, opening doors, pushing buttons and changing light bulbs. In robots, autonomy is really no different.
Autonomous robots, just like humans, also have the ability to make their own decisions and then perform an action accordingly. A truly autonomous robot is one that can perceive its environment, make decisions based on what it perceives and/or has been programmed to recognize, and then actuate a movement or manipulation within that environment. With respect to mobility, for example, these decision-based actions include but are not limited to the following basics: starting, stopping, and maneuvering around obstacles in its path.
But before discussing what truly makes a robot autonomous, let’s discuss one of the most common misconceptions surrounding robots today.
For the last 10 to 15 years, the idea of robotics has largely meant teleoperated mobile robots equipped with cameras, used to get eyes on something out of reach. For example, mobile robots with cameras are used to explore underground mines, flying robots (a.k.a. drones) are used to survey areas inaccessible to humans, and underwater robots are used to search for and discover shipwrecks in the deepest parts of our oceans. While this use of robots has proven incredibly effective over the years, these examples in no way represent truly autonomous robots.
In reality, the term “robot” has been co-opted over and over again throughout the years by overzealous marketers who want their customers to think their product is some sort of sophisticated artificial intelligence. The term has also been oversimplified and is often used interchangeably with what are essentially pre-programmed machines, not to mention automated actuators like robotic arms or motion control systems. The best example of this can be found in the automobile industry.
Going back even further in time, the classic industrial machines you’d find on a car manufacturer’s assembly line are chronically mislabeled as robots. In reality, while they are amazing feats of engineering, they are remarkably similar to milling machines that operate under computer numerical control (CNC).
Unlike a truly autonomous robot, these industrial machines are pre-programmed to perform a repetitive movement. They are not able to react. For example, what would happen if one of these so-called robots, responsible for installing spare tires in the trunks of cars, encountered a situation in which a trunk was shut? Would the “robot” know not to install the tire? Probably not. Instead, the machine would continue to perform its programmed task and would most likely end up smashing the tire right through the trunk lid. If this machine were a truly autonomous robot, it would know not to install the tire, because it would perceive the situation and recognize that the trunk was not in fact open.
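To make the distinction concrete, here is a minimal sketch in Python of the difference. The function names (trunk_is_open, install_tire) are hypothetical placeholders for illustration, not any real controller API:

```python
# Hypothetical sketch: the difference between a pre-programmed machine and an
# autonomous robot is a perception check before actuation. The names below
# (trunk_is_open, install_tire) are illustrative placeholders, not a real API.

def trunk_is_open() -> bool:
    """Stand-in for a sensor reading, e.g. a vision system or a limit switch."""
    return False  # simulate the trunk being shut

def install_tire() -> None:
    print("Installing spare tire.")

def preprogrammed_cycle() -> None:
    # A classic assembly-line machine repeats its motion no matter what.
    install_tire()  # would smash the tire into a closed trunk lid

def autonomous_cycle() -> None:
    # An autonomous robot perceives first, then decides whether to act.
    if trunk_is_open():
        install_tire()
    else:
        print("Trunk is closed; holding position and reporting the problem.")

if __name__ == "__main__":
    autonomous_cycle()
```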
So what is a good example of an autonomous robot?
The Roomba is easily the most prolific truly autonomous robot on the market today. Despite costing only a few hundred dollars, the Roomba is capable of making decisions and taking action based on what it perceives in its environment. It can be placed in a room and left alone, and it will do its job without any help or supervision from a person.
Instead, a set of sensors allows the Roomba to perceive its environment, make a decision based on those perceptions, and then take the appropriate action. Simply put, if a Roomba comes across a chair, it can sense the chair and avoid it by changing direction. Moreover, the Roomba has a dirt detector, and when it senses dirt it spends more time cleaning that area. So the Roomba’s autonomy extends beyond navigation and mobility: it can decide to clean an area more thoroughly because it perceives that the floor is dirty and can use its actuators to clean it.
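As a rough illustration only, and not iRobot’s actual software, that behavior can be sketched as a simple sense-decide-act loop. The sensor checks and state fields below are invented for the example:

```python
import math
import random

# A rough sketch of Roomba-style behavior, not iRobot's actual software.
# The sensor checks and state fields below are invented for illustration.

def bump_detected() -> bool:
    return random.random() < 0.2   # simulate occasionally hitting a chair leg

def dirt_detected() -> bool:
    return random.random() < 0.1   # simulate the dirt detector firing

def clean_step(state: dict) -> None:
    # Perceive, then decide.
    if bump_detected():
        state["heading"] = random.uniform(0, 2 * math.pi)  # turn away from the obstacle
    elif dirt_detected():
        state["dwell_steps"] = 5                           # linger to clean this spot

    # Actuate: either spot-clean in place or drive forward along the heading.
    if state["dwell_steps"] > 0:
        state["dwell_steps"] -= 1
    else:
        state["x"] += 0.1 * math.cos(state["heading"])
        state["y"] += 0.1 * math.sin(state["heading"])

state = {"x": 0.0, "y": 0.0, "heading": 0.0, "dwell_steps": 0}
for _ in range(200):
    clean_step(state)
```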
The autonomous behavior described above rests on three key concepts: perception, decision, and actuation.
Perception:
For people, this is mostly our five senses. Eyes, ears, skin, hair, and many other biological mechanisms are used to perceive the world. For a robot, perception means sensors. Laser scanners, stereo vision cameras (eyes), bump sensors (skin and hair), force-torque sensors (muscle strain), and even spectrometers (smell) are used as input devices for a robot. For people and robots alike, we can now also count other kinds of information inputs, like the endless supply of data from the internet; in fact, one might think of the internet of things as an endless sea of sensors with very long wires reaching back to the robots that might use them.
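To sketch the idea in software (every sensor name and reading below is invented for illustration), perception often amounts to reading each input device and merging the results into one picture of the world for the decision layer to reason about:

```python
from dataclasses import dataclass

# Illustrative only: sensor names and readings are made up to show how raw
# inputs might be gathered into a single "perceived world" snapshot.

@dataclass
class WorldSnapshot:
    nearest_obstacle_m: float   # e.g. from a laser scanner
    bumper_pressed: bool        # e.g. from a bump sensor ("skin")
    gripper_force_n: float      # e.g. from a force-torque sensor ("muscle strain")
    weather_ok: bool            # e.g. an internet data feed treated as a sensor

def perceive() -> WorldSnapshot:
    # In a real robot these would be driver calls; here they are placeholders.
    return WorldSnapshot(
        nearest_obstacle_m=1.8,
        bumper_pressed=False,
        gripper_force_n=4.2,
        weather_ok=True,
    )

snapshot = perceive()
print(snapshot)
```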
Decision:
For humans, it’s our brain that makes most of the decisions, or in some cases our “gut” or even our nervous system. Our brains make higher-level decisions, about where we want to walk, for example. But sometimes our biology supersedes our brains, and our bodies react to things before our brains even know what’s happening. Those reflexive behaviors, like eyelids closing faster than a flying piece of debris, operate faster than our brains, and without their permission, to keep us safe. Autonomous robots have a similar decision-making structure. The “brain” of a robot is usually a computer, and it makes decisions based on what its mission is and what information it receives along the way. But robots also have a capability similar to the human nervous system, where their safety systems operate faster than the brain and without its permission; in fact, in robots, the brain operates with the permission of the safety system. In an autonomous robot, we call that “neurological” system an embedded system; it operates faster and with higher authority than the computer that is executing a mission plan and parsing data. This is how the robot can decide to stop if it notices an obstacle in its way, if it detects a problem with itself, or if its emergency-stop button is pressed.
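That two-level structure can be sketched very roughly in code. The toy below is an assumption on our part, not any particular robot’s architecture, but it shows the safety check running on every cycle and holding veto power over the mission planner:

```python
from dataclasses import dataclass

# A toy sketch of the two-level decision structure described above: a mission
# "brain" proposes actions, and a safety layer (standing in for the embedded
# system) runs first and holds veto power. Field names are illustrative.

@dataclass
class Status:
    obstacle_ahead: bool
    internal_fault: bool
    estop_pressed: bool

def safety_check(status: Status) -> bool:
    """Return True only if it is safe to keep moving. Runs every cycle,
    before and with higher authority than the mission planner."""
    return not (status.obstacle_ahead or status.internal_fault or status.estop_pressed)

def mission_planner() -> str:
    # Higher-level decision, e.g. "drive toward the next waypoint."
    return "drive_forward"

def control_cycle(status: Status) -> str:
    if not safety_check(status):
        return "stop"              # the safety layer overrides the brain
    return mission_planner()       # otherwise the brain's decision stands

print(control_cycle(Status(obstacle_ahead=False, internal_fault=False, estop_pressed=False)))
print(control_cycle(Status(obstacle_ahead=True,  internal_fault=False, estop_pressed=False)))
```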
Actuation:
People have actuators called muscles. They take all kinds of shapes and perform all kinds of functions, from grabbing a cup of coffee to beating our hearts and pumping blood. Robots can have all kinds of actuators too, and a motor of some kind is usually at the heart of the actuator. Whether it’s a wheel, linear actuator, or hydraulic ram, there’s always a motor converting energy into movement. The endless permutations of actuators provide a lifetime of joy and fascination for the people who create and work with them.
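As one small, hedged example of what actuation looks like in software, suppose a differential-drive robot with two wheel motors (an assumption; the paragraph above doesn’t name a specific layout). The decision layer’s “move this way” command ultimately becomes individual motor speeds:

```python
# Illustrative only: a differential-drive robot is assumed here as one common
# case; the article does not describe a specific actuator layout.

WHEEL_RADIUS_M = 0.05   # assumed wheel radius
WHEEL_BASE_M = 0.30     # assumed distance between the two drive wheels

def wheel_speeds(linear_mps: float, angular_rps: float) -> tuple[float, float]:
    """Convert a desired body motion into left/right wheel angular speeds (rad/s)."""
    left = (linear_mps - angular_rps * WHEEL_BASE_M / 2) / WHEEL_RADIUS_M
    right = (linear_mps + angular_rps * WHEEL_BASE_M / 2) / WHEEL_RADIUS_M
    return left, right

# "Drive forward at 0.2 m/s while turning gently left."
print(wheel_speeds(0.2, 0.3))
```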
So an autonomous robot is one that decides on its own what action to take, based on the information it has perceived.
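Condensing the three sections above into one illustrative pseudo-loop (every function here is a placeholder for the components sketched earlier), the whole idea fits in a few lines:

```python
# A condensed, illustrative perceive-decide-act loop. Every function is a
# placeholder standing in for the components sketched earlier in the article.

def perceive() -> dict:
    return {"obstacle_ahead": False, "estop_pressed": False}

def decide(world: dict) -> str:
    if world["estop_pressed"] or world["obstacle_ahead"]:
        return "stop"
    return "drive_forward"

def actuate(action: str) -> None:
    print(f"actuating: {action}")

for _ in range(3):          # in a real robot this loop runs continuously
    actuate(decide(perceive()))
```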
If you have a project or idea that requires a more thorough understanding of autonomous robotics, contact us today. We would love to learn more about your project and discuss the solutions available from Stanley.
Get News & Resources Delivered to your inbox.
Sign Up