The sensor revolution has been underway for a decade, so, again, this isn't a forecast; it has already happened. Of course, the easiest forecasts to make are the ones that have already happened.
But way back in the '90s I made the observation that each decade is shaped by a different foundational technology. The arrival of cheap microprocessors in the late '70s set us up for the processing decade of the '80s, and the poster child of that decade was the personal computer, a device cheap enough to put a processor on everybody's desk. The '90s were shaped by a fundamentally different technology, and that was cheap lasers.
Communication lasers arriving in the '80s gave us CD-ROMs, they gave us dark fibre and the like. And it was really clear that the '90s were going to be an access decade, defined not by what our devices processed for us, like the '80s, but by what our devices connected us to. Nobody could specifically forecast the arrival of the World Wide Web, but it was really clear that that was the kind of space we were headed towards.
Then in the early '90s we got RFID chips, and we started getting vastly cheaper sensors, whether MEMS-based accelerometers, temperature and pressure sensors, or video cameras, which are just specialized sensors. So it was very clear that the arrival of cheap sensors was going to lead to a third age: in the '80s we built our computers, in the '90s we networked our computers together, and in this decade what we've been doing is giving our computers eyes, ears and other sensory organs, and asking them to observe and manipulate the world on our behalf.
And just as the poster child of the '80s was the personal computer, and the poster child of the access decade was the World Wide Web, the obvious poster child for a world of ubiquitous sensors will be robots. So there is a robot revolution waiting in the wings.
That doesn't mean super-intelligent robots are just around the corner. We've already demonstrated that you don't need a lot of intelligence to make robots do interesting things. Think of the Roomba vacuum cleaner of 2003: it's just a bunch of capacitors and transistors pretending to be a robot. That thing has no smarts.
Also look at the history of the DARPA Grand Challenge. March 2004, the first robotic Grand Challenge: twenty teams, a race across the Mojave Desert for a million-dollar prize, and they all failed near the start. One robot got about seven miles in, and everybody said: "Oh, that was a disaster." Almost exactly 18 months later, one doubling period of Moore's Law, came the second Grand Challenge, in 2005. And in that race, five robots finished, and 22 of the 23 robots got farther along the course than the robot that had got farthest in the first race.
Then in November 2007 we had the third Grand Challenge, the Urban Challenge, and we demonstrated that robots understand the California vehicle code better than human beings do. And then, just two weeks ago, it finally leaked out from Google, something several of us knew about but couldn't talk about until it leaked: Google has been quietly driving robotic cars all over the highways in California. They have driven 140,000 miles with robots on public highways, and there have been no accidents. So everywhere you look, there are sensor-rich devices and things that look like robots slipping into our lives.