During a recent visit to the US, while roaming around a mall I happened upon a stall of Roombas. A Roomba is a vacuum cleaning robot. It has enough intelligence (and the required sensors, actuators and so on) to navigate around a room, avoiding obstacles like chairs, cables, stairs and cats, and use a built-in vacuum cleaner to clean the carpets. Once done it makes its way back to its base station and charges itself. It does all this without any human intervention. They cost a couple of hundred dollars, which is less than most mid-range phones, though I doubt anybody actually uses one as their primary means of carpet cleaning. I’ve read that they aren’t really all that good at it: they don't work well across multiple rooms, they tend to miss corners and they get stuck from time to time.
Obviously, a Roomba isn’t what we thought we’d have by 2009 way back in the 1960s (say). Back then we thought we’d have intelligent humanoids who could do almost all that we take for granted in humans. And we kinda assumed we’d be far enough along for interesting evening gossip to consist of tales of Mrs Parker’s ‘special’ relationship with her new mark IV (again, say). But then again, we also assumed we’d have rocket ships to Jupiter, ray guns and flying cars. Sadly, as a prudish Victorian would remark on all that (especially Mrs Parker): Alas! ’Twas not to be!
So what happened? Where exactly did all those wonderful dreams fall through? Have we just not worked hard enough? Is the KGB responsible for it all? Or did we just hope for a little too much in our naiveté? I’d wager on the last one.
See, building a robot that works in the real world is just not easy. There are too many little things that keep popping up that are very difficult to solve (especially given the constraints that current, available hardware places on us). Take navigation, for example. A four-year-old can walk from one end of a room to the other without bumping into too many things. So can a rat. A million-dollar bot, though, might still have problems, especially if you throw a shoe in its way once it's off. Now why is that the case? I think it's because of the way we’ve approached robotics so far.
When we’re asked to create a robot that can solve a problem, we tend to approach it too mathematically. We give it data (or it collects data) and we then try to calculate the best probable path (to take the example of path finding) using that data. But what is the best path? Given a top-down view of a room, a sort of God’s-eye view, I’m sure there is one. But you never get that sort of view. What you have to work with is a rat’s-eye view. From that perspective the only way forward is to make ‘educated guesses’. For example: ‘That's a shoe in front of me. Shoes are small and light. I can move it out of the way or try to go around it.’ As opposed to: ‘Sensor #4 reports obstacle. can’t…move… 01010101. B.S.O.D.’ The first example illustrates how we tend to work, and that's how a robot will need to work in order to get around.
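To make the contrast concrete, here’s a toy sketch of what those ‘educated guesses’ might look like in code. The Obstacle class, the thresholds and the actions are all made up purely for illustration; the point is just rule-of-thumb reasoning from a rat’s-eye view rather than demanding the one mathematically best path.

```python
# A toy sketch of 'educated guess' obstacle handling. Everything here
# (the Obstacle class, the thresholds, the actions) is hypothetical.

from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str         # what the robot thinks it is, e.g. 'shoe', 'chair', 'cat'
    weight_kg: float  # rough estimate from sensors
    width_m: float    # rough footprint

def decide(obstacle: Obstacle) -> str:
    """Make an educated guess instead of insisting on a perfect map."""
    if obstacle.kind == "cat":
        return "wait"                # cats move themselves, eventually
    if obstacle.weight_kg < 0.5:
        return "push it aside"       # small and light, like a shoe
    if obstacle.width_m < 0.6:
        return "go around it"        # narrow enough to skirt
    return "re-plan from here"       # big and immovable: back off, try another route

print(decide(Obstacle(kind="shoe", weight_kg=0.3, width_m=0.3)))  # -> push it aside
```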
This is where all those interesting terms that scientists love to sprinkle on their papers come in: neural networks, genetic algorithms and the rest. But what exactly are these? Essentially, they are models of how things in nature work: neural networks model brains and genetic algorithms model evolution. They abstract things and give you a solution (all proven mathematically) but don’t tell you exactly how they got there. For example, a neural network might help a robot recognise a shoe even if it's placed at an odd angle, is upside down and is actually one of those silly things from Prada (as opposed to the sneaker it was originally shown and trained to recognise as a ‘shoe’). They offer some hope because they work well where brute computation won’t. But they don’t sit well with me.
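For the curious, here is roughly what one of these looks like in miniature: a toy genetic algorithm evolving a bitstring towards all ones (the classic ‘OneMax’ exercise, nothing to do with shoes or robots, and every number below is arbitrary). It shows the flavour: the process reliably converges on a good answer without ever spelling out how it got there.

```python
# A minimal genetic algorithm sketch on the 'OneMax' toy problem:
# evolve a bitstring towards all ones via selection, crossover and mutation.

import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.02

def fitness(genome):
    return sum(genome)  # more ones = better

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]              # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(max(fitness(g) for g in population), "out of", GENOME_LEN)
```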
Why don’t they? Because of their nature: they are black boxes. They give you a solution, but not the same one every time and not always the best one. And that just won’t do. I want my robots to work the same every time; to be quick, efficient and straightforward while still having that ‘intuitive fuzziness’ we ascribe to living things (and which the aforementioned ‘natural algorithms’ may give us). Why? Because that’s what robots are for, damnit! To do all the stupid little things that suck up our time and to free us from the drudgery we would otherwise have to go through ourselves. Like the laundry, making that perfect cup of coffee and cleaning out the kitty litter. We do those things well-ish because our brains consist of one huge neural network which can handle things in the real world, unlike a hard-coded system. But we don’t do them perfectly for the same reason: neural networks are fuzzy. We can guess that we’ve added just about enough sugar, but we don't know for sure. A robot ought to be able to measure that to the microgram.
So what’s the solution? I think it’ll involve some sort of hybrid: a robot with a traditional hard-coded ‘core’ that can call on ‘softer’, biologically inspired modules to do specific things (for example, call an ‘object recogniser’ to figure out exactly what it is that's blocking its path, or call a ‘path finder’ iteratively to figure out how to get to the socket on the wall without waking the annoying kitty). But that’s just me thinking aloud now. If you want to know for sure, just get back to me in a few years. I’ll have it down pat by then :)
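Sketching that thought a little further (and this is purely hypothetical: the module names and interfaces are invented for illustration), the hybrid might look like a plain, predictable control loop that hands the fuzzy sub-problems off to pluggable ‘soft’ modules:

```python
# A rough sketch of the hybrid idea: a hard-coded, deterministic core that
# delegates fuzzy sub-problems to pluggable 'soft' modules. The classes below
# are stand-ins for whatever learned models a real robot would wrap.

class ObjectRecogniser:
    def identify(self, sensor_blob):
        # Stand-in for a learned model (e.g. a neural network) that labels obstacles.
        return "shoe"

class PathFinder:
    def next_step(self, position, goal, known_obstacles):
        # Stand-in for an iterative, 'fuzzy' planner called again whenever things change.
        return "forward"

class RobotCore:
    """The deterministic core: simple, predictable, the same every time."""
    def __init__(self):
        self.recogniser = ObjectRecogniser()
        self.planner = PathFinder()

    def step(self, position, goal, sensor_blob, obstacle_detected):
        if obstacle_detected:
            label = self.recogniser.identify(sensor_blob)   # ask a soft module *what* it is
            if label in ("shoe", "sock"):
                return "nudge it aside"
        return self.planner.next_step(position, goal, [])   # ask another soft module *how* to proceed

print(RobotCore().step(position=(0, 0), goal=(3, 4), sensor_blob=b"...", obstacle_detected=True))
```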
Where does that leave us then? Hopefully, in a few years (or decades if you’re pessimistic) we’ll have a model III to weed the lawn, a type 7 to make us meals and a mark IV to... err… well, ask Mrs Parker. And then we’ll finally have enough time and freedom to look at the big picture and contemplate whatever it is we want to. We’ll be free to dream and build and do all that we never could if we had to go through the chore of keeping ourselves alive and ticking.
I, for one, can’t wait for the day I get to say hello to that plethora of little friends-to-be.