People really, really like the idea of robots. The notion of an artificially intelligent being is powerfully attractive, considering how often it crops up in movies and fiction. Meanwhile, increasing sophistication and technological advances in robotics have given us robotic toys, robotic vacuum cleaners, and robotic space exploration. But somehow, these tangible examples don't come close to the capabilities of C-3PO, HAL, or even Marvin the Paranoid Android.
Look at it this way: if I want you to cross a room to a bowl of fruit sitting on a table and then select an orange from the fruit contained therein, all I have to do is ask you to do it. You're capable of figuring out how to cross the room, identify the bowl, home in on an orange, and retrieve same. If you're trying to get a machine to do it, you have to figure out how to get the machine to cross the room (no mean feat in and of itself), identify the bowl (again, tricky), identify an orange (really, really stinking tricky), and retrieve same (which is, and I know you'll be so surprised to hear this, more difficult than you might think). Basically, it's easy for us because we've already got the hardware and software to fulfill that task.
Mobile robots cannot navigate their circumscribed universe without sensors to give them feedback. At the low end, this can entail encoders or ultrasonic sensors. At the higher end (think the DARPA Grand Challenge), we're talking about a panoply of sensors including, but not limited to, laser scanners, ultrasonics, GPS, accelerometers or gyroscopes, and cameras. Take a moment to look at some of the white papers from the DARPA Grand Challenge participants—you'll note the massive amounts of computing power required to assess the sensor data and use it to navigate and control the vehicle.
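To give a feel for what even the low-end version of that feedback loop involves, here's a minimal sketch of differential-drive odometry: turning raw wheel-encoder tick counts into an estimate of where the robot is. The tick count, wheel radius, and axle length values are illustrative assumptions, not the specs of any particular robot.

```python
import math

# Illustrative robot parameters (assumed, not from any real platform).
TICKS_PER_REV = 360   # encoder ticks per wheel revolution
WHEEL_RADIUS = 0.03   # wheel radius in meters
AXLE_LENGTH = 0.20    # distance between the two drive wheels, in meters

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Advance the robot's (x, y, heading) estimate from encoder ticks."""
    dist_per_tick = 2 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    d_left = left_ticks * dist_per_tick
    d_right = right_ticks * dist_per_tick
    d_center = (d_left + d_right) / 2.0          # distance traveled by the midpoint
    d_theta = (d_right - d_left) / AXLE_LENGTH   # change in heading (radians)
    # Midpoint approximation: move along the average of old and new headings.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta

# Both wheels turn one full revolution: the robot drives straight ahead
# by one wheel circumference (about 0.188 m) with no change in heading.
x, y, theta = update_pose(0.0, 0.0, 0.0, 360, 360)
```

Even this toy version drifts in practice (wheels slip, floors aren't flat), which is exactly why the high-end vehicles pile on GPS, gyros, and laser scanners to correct the estimate.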
The Holy Grail
One of the holy grails for mobile robotics is to incorporate machine vision and use it to acquire useful information. It's been an uphill slog so far, but a recent story in Electronics Weekly describes a project in which researchers managed to get a robot to identify simple objects from camera images. This is really, really clever. Sure, we're still light years away from creating a Commander Data, but I can't wait to see what researchers figure out next.
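To show why even "simple" object identification is hard, here's a toy color-threshold classifier that decides whether an image patch looks orange-like. The threshold values and the sample pixel data are made up for illustration; this is nothing like the researchers' actual method, and real vision systems need far more than raw RGB rules to cope with lighting, shadows, and clutter.

```python
# Toy sketch: classify a patch of (R, G, B) pixels as "orange-like"
# if enough pixels pass a crude color rule. All thresholds are
# illustrative assumptions.

def is_orange_pixel(r, g, b):
    """Crude rule: strong red, moderate green, little blue."""
    return r > 150 and 50 < g < 150 and b < 100

def looks_like_orange(pixels, min_fraction=0.5):
    """True if at least min_fraction of the pixels pass the color rule."""
    hits = sum(1 for p in pixels if is_orange_pixel(*p))
    return hits / len(pixels) >= min_fraction

# A mostly-orange patch (7 orange-ish pixels, 3 dark background pixels).
patch = [(230, 120, 30)] * 7 + [(40, 40, 40)] * 3
looks_like_orange(patch)  # → True
```

The moment the lighting shifts or an apple shows up, a rule like this falls apart, which is why getting a robot to reliably identify objects from images is newsworthy at all.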
Note: Sebastian Thrun, the engineer behind Stanford's winning DARPA Grand Challenge autonomous robot, will be giving a keynote address at Sensors Expo this June. Mark your calendars!