Daniel Faggella / Op Ed
Posted on: October 15, 2015 / Last Modified: October 15, 2015
In 1637, philosopher René Descartes put forth his proposition of “I think, therefore I am” as proof of one’s existence. In 2015, however, while modern technology may tout the thought capabilities of IBM’s Watson supercomputer and Google’s humanoid robots, cognitive roboticist and author Dr. Mark Bickhard believes true cognition and intelligence are still far from fully developed.
“Cognitive robotics is an orientation to cognition that holds that cognition arises in systems that actually interact in the world, not just process inputs from the world,” Bickhard said. “Robots are a prime example of that. Animals would also be examples of this. Things that actually interact with the world are crucial to cognition, according to this orientation.”
Following that, one might cite IBM’s current TV spot featuring the company’s Watson computer interacting with singer Bob Dylan as proof of a computer’s or robot’s ability to think. However, according to Bickhard, true interaction with the world requires sensory interactions based on embodiments and representations built over millions of years.
“I would argue that something like Watson could not be capable of thinking for two reasons,” he said. “One, it doesn’t interact. More fundamentally, the interactions have to be normative. The interactions have to be able to succeed or fail. The anticipations have to be correct or incorrect. Correct/incorrect is a normative concept and computers simply aren’t normative.”
For Bickhard, the most fundamental sense of normativity is a sense of function. As an example, he cites the human kidney, whose blood-filtering function is normative: a kidney is dysfunctional if it fails to filter blood or filters it poorly. And that forms the basis of his argument about the cognitive abilities of a computer.
“I don’t think computers could ever have normativity,” Bickhard said. “To me, normativity arises in a thermodynamic sense. It arises from systems that are far-from-thermodynamic equilibrium, like a candle flame. These are different from other sorts of systems, like a rock.”
Isolate a far-from-equilibrium system like a candle flame and it will run out of oxygen or wax, go to equilibrium, and thereby cease to exist, Bickhard said. By contrast, a bacterium that swims up a sugar gradient in search of food will eventually drift back down the gradient but, unlike the candle, it can sense the difference. It will tumble as it goes down the gradient until it starts heading back up again; in this way, it self-maintains. It is this self-maintenance, this dependence on far-from-equilibrium thermodynamic conditions, that Bickhard believes computers and robots lack, keeping them from normativity regardless of the physical embodiment they ultimately take.
“Robots, as we currently conceive them, aren’t far from equilibrium. They have a battery that’s charged and, as the battery runs down, they go closer to equilibrium,” Bickhard said. “It’s because of this dependence on far-from-equilibrium thermodynamic conditions that computers or robots will never be able to (self-maintain).”
Steady and Slow Robot Evolution May Win the Race
Looking to the future, Bickhard still sees limitations ahead. The comparison he draws is with a rock: isolate a rock, and it goes to thermodynamic stability and can sit at equilibrium for billions of years. The problem with systems such as computers and robots, he believes, is that every part of them is like a rock; each is made from metal or some other component that is also not far from equilibrium.
“One of the things cognitive robotics has come across is, we can’t build and design robots fresh,” Bickhard said. “They’re simply too complicated and we have to build robots that have to learn to get around in the world, just like children and infants do.”
There is a sense, Bickhard says, that robots can start having a stake in the world by maintaining what they’ve learned and what they’ve become through that learning. That awareness, he believes, will come through modeling human evolution.
“This doesn’t preclude the possibility of one day building these types of systems that are far-from-equilibrium,” Bickhard said. “Future robots might have a stake in the world, in which case there would be a difference between successfully indicating what they could do and unsuccessfully indicating what they could do.”
Today’s Watson and Google’s robots show how much technology has evolved since Descartes made his philosophical proposition. However, as Dr. Bickhard illustrates, for robots and computers to develop the capabilities for sensory interaction and continued deep learning, a lot more evolution is still needed.
About the Author:
Dan Faggella is a graduate of UPENN’s Master of Applied Positive Psychology program, as well as a national martial arts champion. His work focuses heavily on emerging technology and startup businesses (TechEmergence.com), and the pressing issues and opportunities of augmenting consciousness. His articles and interviews with philosophers and experts can be found at SentientPotential.com.