by Peter Corke

Welcome to this series about robots and the future, based on my Robot Academy series of online lectures.

In this first article I want to do something a bit different and talk a little less about technology, algorithms and software, and get into some bigger picture ideas about robots and where they fit into society—why robots are important.

Recently we have developed a new class of robots we call service robots (or field robots), a type of robot that performs a service for a human being—such as cleaning my floor, in the case of automated vacuum cleaners. These vacuum cleaner robots have become phenomenally popular, with sales around 10 million in a decade. Manufacturing robots have been around for nearly 60 years, yet there’s only about a million of them; but in just over 10 years, we have 10 million robot vacuum cleaners! The reason is that these robots are much cheaper than manufacturing robots. They’re the sort of thing that a person can afford and would like to have, because they provide real value. They perform useful tasks around the house.

So this is very much the future of robots—having robots that work with us, which are both low-cost and useful. Of course, in the future, there’s going to be many other sorts of robots. Perhaps the next one that we encounter en masse will be the self-driving car. The Google self-driving car project has got a lot of press, and all the major automotive companies are working on them now. Some people believe the technology might be with us by the year 2020. Time will tell. But I believe this is the next robot technology we will all encounter quite soon on an everyday basis.

Why robots? What’s the big picture reason for having robots in the world? If you look at the big problems that our planet faces right now, they come primarily from population growth. And if we look at projections for the human population over the coming decades, we can also expect growth in the robot population. By the year 2020 it’s believed there will be a hundred million robots at work on the planet.

While a graduate student at Stanford, Victor Scheinman developed an electrically powered arm for the MIT Artificial Intelligence Laboratory. The prototype was considered the first truly flexible robotic arm and had an identifiable shoulder, elbow and wrist. Later versions of the MIT arm became known as the Programmable Universal Manipulator for Assembly, or PUMA, and these are the most common electric arms in the world.

Now, not all of these are going to be the classic manufacturing robots. Many of them are going to be robot cars, or robot vacuum cleaners . . . perhaps robot gardeners or robot maids, who knows what. But this is the sort of new class of robot that’s emerging and their numbers are just going to increase, perhaps exponentially.

Let’s consider some of the big picture reasons why I believe we need robots. We know there are big problems facing society now, and increasingly in coming decades. As I’ve mentioned, a major problem is population growth—there are more people on the planet. It means we need to provide more food, which will require more transportation and infrastructure. There will be more cars on the road; more people; more things going from one place to another. And when this population of ours gets older there’s going to be a need for more health care too.

As the population ages, we are going to have an imbalance in age ratios. There will be a greater number of older people and fewer younger people to support them. This is a problem that is confronting almost every country on the planet today. A profound problem. I don’t think we really have a handle on it yet, or a good solution for what to do. But I do believe that robots will have an important part to play here.

The other problem we’re facing is climate change. As the climate changes, it’s going to have real impacts on where people are able to live, the availability of water, and the amount of food that we can produce. These are very big picture problems that are all very complex and interrelated. Here again, I believe that robots can play a very positive role in ameliorating some of these problems.

Another point I’d like to make, and I think it’s an important one, concerns the ethical considerations around robotics. Now, I’m not an ethicist, I’m a roboticist, but I’m going to talk to an ethicist later in this series when I will try to unpack some of the issues about ethics that apply to robotics. We are going to have a discussion—a bit of an Ethics 101—where we will learn some fundamental principles of ethics, and then ask some questions. For example, is it appropriate for robots to look after old people or young people? What are the ethical issues with robots driving cars on our roads? What are the issues around invasion of privacy? What about robots and jobs? Every time I give a talk about robots, somebody asks the question: “Aren’t robots going to take away jobs?” I don’t think it’s that simple.

Often also when I give a public talk, somebody in the audience asks me about artificial intelligence (AI), though it’s not a subject I feel particularly comfortable talking about. I consider myself a roboticist, someone who builds machines, whereas AI to me is a completely separate discipline, a sub-discipline within computer science.

Intelligence, of course, is also a very difficult concept to pin down, which I think has something to do with the historical context. Back in the 18th and 19th centuries, people were building machines that they called automata: very ingenious clockwork machines filled with gears and cams and so on. An early and very famous example of this class of machine was a digesting duck, a clockwork machine that you could feed with small pellets it would appear to eat; the pellets would pass through the digestive tract of the clockwork duck and eventually be excreted. At that time, people considered that this machine had life-like properties. Whether it was intelligent or not, I’m not sure, but people thought it was very life-like.

Later automata became even more intricate machines that could seemingly do intelligent things. And perhaps at that time, people looked at these machines and considered them to be intelligent because they did the sorts of things that intelligent beings (such as humans) are able to do quite readily.

In the late 1940s and early fifties, William Grey Walter built a number of cybernetic devices—most famously his cybernetic tortoise. This was a machine that contained relays and vacuum tube technology and was able to exhibit light-seeking behavior. So, if you put a light on the floor, this tortoise would move towards the light and stop when it got there. Again, this was a seemingly intelligent behavior—a behavior that is seen in many very simple organisms.
At that time, therefore, something like the cybernetic tortoise was perceived as being intelligent. Looking at it from our current viewpoint it seems rather primitive, even a little sad, but back then I think it was a significant breakthrough.
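
Walter’s light-seeking behavior can be described very compactly in modern terms. The sketch below is purely illustrative (a Braitenberg-style reactive controller, not a model of Walter’s relay and vacuum tube circuitry), and the sensor and motor functions are hypothetical placeholders supplied by the caller.

```python
# Illustrative sketch only: a reactive light-seeking loop in the spirit of
# Grey Walter's tortoise. The sensor and motor functions are hypothetical
# placeholders; this is not Walter's actual design.

def seek_light(read_left, read_right, set_wheel_speeds, bright_enough=0.9):
    """Steer toward the brighter side and stop when the light is close.

    set_wheel_speeds(left_speed, right_speed) drives a differential robot.
    """
    while True:
        left = read_left()    # brightness reading in [0, 1]
        right = read_right()  # brightness reading in [0, 1]

        if max(left, right) >= bright_enough:
            set_wheel_speeds(0.0, 0.0)  # near the light: stop
            return

        # Cross-coupling: the wheel opposite the brighter sensor runs faster,
        # which turns the robot toward the light.
        set_wheel_speeds(0.2 + 0.8 * right, 0.2 + 0.8 * left)
```

Simple reactive rules like this, with no model of the world at all, are exactly the kind of behavior that looked intelligent to observers at the time.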

The term artificial intelligence was probably first introduced in a research proposal dated 1955 from a bunch of very eminent people, amongst them John McCarthy and Marvin Minsky, both very well-known names in the AI community.

By 1955, digital computers had become a reality, and this group of scientists were interested in exploring what computers could do beyond straightforward numerical calculation. So they were interested in things like neural networks, theories of computation. . . . Could we use computers to do natural language processing? Could they perform acts of creativity?

What you’ll notice from looking at the list of topics they proposed to investigate in their summer project is there’s no mention of robotics and no mention of computer vision. The reason for that is that those two things had not yet been invented.

The first industrial robots were developed by Unimation Inc., a company that wasn’t even formed until 1956, one year after that AI research proposal. Unimation delivered its first robot in 1961, and several other robots appeared in quite rapid succession shortly after, developed in laboratories at Stanford University and MIT. Chief amongst these robot designers was Victor Scheinman, who designed the very famous Stanford arm in 1968 and also the Orm (“orm”, I believe, is Norwegian for snake).

Computer vision really got its start in 1964 with the PhD work of Larry Roberts at MIT. He was interested in understanding how computers could interpret a blocks world. A camera attached to a computer would take a gray-scale image of a scene and, with various sorts of early image processing algorithms, find the edges and then fit a model to them. Larry Roberts went on to do really great things in the creation of the Internet.
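
Roberts’ name survives in one of the earliest edge detectors, the Roberts cross operator, which estimates the image gradient from differences over 2x2 blocks of pixels. Here is a minimal modern sketch of that idea using NumPy (obviously not the tooling of the 1960s); it is illustrative only.

```python
import numpy as np

def roberts_cross_edges(image):
    """Edge strength of a gray-scale image via the Roberts cross operator,
    one of the earliest gradient-based edge detectors. `image` is a 2-D
    array; the result is one pixel smaller in each dimension."""
    img = np.asarray(image, dtype=float)
    # Diagonal differences over each 2x2 neighbourhood.
    gx = img[:-1, :-1] - img[1:, 1:]
    gy = img[:-1, 1:] - img[1:, :-1]
    return np.hypot(gx, gy)  # gradient magnitude, large at edges

# Usage: edges = roberts_cross_edges(gray_image). Thresholding the result gives
# edge points that a later stage could fit line or block models to, broadly in
# the spirit of Roberts' blocks-world pipeline.
```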
So then, what is intelligence? I can look in the dictionary and get a definition of intelligence, but to me, this is not particularly helpful. I think we assume that human beings are intelligent, and therefore the behaviors manifested by human beings count as intelligent.

But we know that there are many animals that can exhibit intelligent behavior, and intelligent behavior is being found in ever simpler animals too. Once upon a time we believed it was perhaps only the great apes that could exhibit some sort of intelligence, but now scientists have found quite intelligent behaviors in all manner of species. Even birds are able to fashion simple tools to help them achieve particular tasks.
A definition I particularly like comes from Rodney Brooks, which is that intelligence is in the eye of the beholder. So if you’re looking at a machine doing some task and you think it’s intelligent, then it is intelligent.

Perhaps the best known definition of intelligence is the Turing test that was proposed by the English mathematician, Alan Turing, in a 1950 paper entitled “Computing Machinery and Intelligence”. In that particular paper, he proposed something called the imitation game, and of course that’s the name of the recent movie about the life of Alan Turing.

The game has three players: A is a computer that is being tested for intelligence, B is a real human being, and C is the judge. The judge doesn’t know which of A or B is the computer and so asks questions of A and B to try and determine which is which. In order to mask the really obvious cues about which of A or B is the computer, the game is generally played with keyboard and screen communication, so there’s no spoken communication. The judge types questions on a keyboard, and A or B responds with text displayed on a screen. Now if the judge, after asking sufficient questions, is unable to determine which of A or B is the computer, then the computer is deemed to be intelligent, because the judge, who is intelligent, is unable to tell the difference between it and a real human being.
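
To make the structure of the game concrete, here is a toy sketch in code. The canned computer player and the console interaction are invented purely for illustration; a real test assumes a far more capable candidate program and many more questions.

```python
import random

def computer_reply(question):
    """Stand-in 'computer' player with a trivially canned answer (illustration only)."""
    return "That is an interesting question; let me think about it."

def human_reply(question):
    """Stand-in 'human' player; a person types the answer at the console."""
    return input(f"(hidden human) {question}\n> ")

def imitation_game(n_questions=5):
    """Toy version of Turing's imitation game. The judge types questions,
    sees answers from players A and B without knowing which is the computer,
    then guesses which one is the machine."""
    players = {"A": computer_reply, "B": human_reply}
    if random.random() < 0.5:  # hide which label is the computer
        players = {"A": human_reply, "B": computer_reply}

    for _ in range(n_questions):
        question = input("(judge) Ask your question:\n> ")
        for label in ("A", "B"):
            print(f"{label}: {players[label](question)}")

    guess = input("(judge) Which player is the computer, A or B?\n> ").strip().upper()
    if players.get(guess) is computer_reply:
        print("Correct: the judge identified the computer.")
    else:
        print("Wrong: the computer was not identified.")

if __name__ == "__main__":
    imitation_game()
```

Roughly speaking, the computer passes if, over many such sessions, judges do no better than chance at picking it out.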


The QUT Robot Academy provides free-to-use undergraduate-level learning resources for robotics and robotic vision.

The content was developed for two 6-week MOOCs that ran in 2015 and 2016, which in turn were based on courses taught at QUT. The MOOC content is now available as individual lessons (over 200 videos, each less than 10 minutes long) or as a masterclass (a collection of videos, around 1 hour in duration, previously a MOOC lecture). Unlike a MOOC, all lessons are available all the time. Although targeted at undergraduate level, around 20% of the lessons require no more than general knowledge, and the required level of knowledge (on a 5-point scale) is indicated for each lesson.

robotacademy.net.au