By Johan Steyn, 28 September 2021
Published by Business Day: https://www.businesslive.co.za/bd/opinion/columnists/2021-09-28-johan-steyn-ai-is-only-starting-out-and-yet-to-reach-puberty/
“What kind of work does your dad do?” My son had a few friends over and I was listening with interest to them with one ear while trying to focus on a Zoom call with the other. “He fixes computers.” I had to smile. How do you explain artificial intelligence (AI) and intelligent automation to a seven-year-old?
The next morning my son asked: “Daddy, are computers smarter than us?” We were getting ready for school and he was trying to make “butterfly wings” while tying his shoelaces. What a question. Well, I thought, as he tied his laces hundreds of millions of nerve impulses were streaming in and out of his brain, from the tendon organs and muscle spindles in his limbs to the retina, otolith organs and semicircular canals in his head.
Can AI tie its shoes? Is AI smarter than my son, or all humans for that matter? “It is comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.” Hans Moravec penned those words in 1988. As an adjunct faculty member at the Robotics Institute of Carnegie Mellon University in Pittsburgh, he, together with Rodney Brooks and Marvin Minsky, articulated what became known as “Moravec’s paradox”.
The fundamental differences between human and artificial intelligence are easy to see: carbon-based biological brains and silicon-based digital computers have been tuned for quite different kinds of work. Because of these differences, using our own minds as a model, analogy or basis for reasoning about AI can be deeply misleading, and it invites false assumptions about the relative ability of humans and machines to do difficult jobs.
By nine months babies are learning to relate pictures to real objects; before their first birthday an image of a toy can teach a child something about the toy itself. The time and effort it takes to teach a youngster what a cat is, compared with what a computer needs for the same recognition task, illustrates the learning gap. A toddler can identify a cat simply by looking at one, whereas a machine must analyse huge volumes of data, estimated at about 10-million images, to reach the same conclusion.
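To make that gap concrete, here is a purely illustrative toy sketch in Python (my own example, not code from any of the systems mentioned in this column): a trivial “nearest-centroid” classifier separating two synthetic classes. Its accuracy climbs only as the pile of labelled examples grows, which is the crude statistical sense in which today’s machines “learn” to recognise a cat.

```python
# Toy illustration only: accuracy of a simple classifier as a function of
# how many labelled examples it is trained on. The "images" here are just
# synthetic feature vectors; real image recognition needs far richer models
# and far more data (the column cites an estimate of ~10-million images).
import numpy as np

rng = np.random.default_rng(0)
DIM = 64
CAT_CENTRE = rng.standard_normal(DIM)   # stand-in for the true "cat" features
DOG_CENTRE = rng.standard_normal(DIM)   # stand-in for the "not a cat" features

def make_data(n_per_class):
    """Draw noisy synthetic 'images' around each class centre."""
    cats = CAT_CENTRE + 4.0 * rng.standard_normal((n_per_class, DIM))
    dogs = DOG_CENTRE + 4.0 * rng.standard_normal((n_per_class, DIM))
    X = np.vstack([cats, dogs])
    y = np.array([1] * n_per_class + [0] * n_per_class)
    return X, y

def train_nearest_centroid(X, y):
    """'Training' is simply averaging the labelled examples of each class."""
    return X[y == 1].mean(axis=0), X[y == 0].mean(axis=0)

def accuracy(model, X, y):
    cat_c, dog_c = model
    pred = (np.linalg.norm(X - cat_c, axis=1) <
            np.linalg.norm(X - dog_c, axis=1)).astype(int)
    return float((pred == y).mean())

X_test, y_test = make_data(2000)
for n in (5, 50, 500, 5000):
    model = train_nearest_centroid(*make_data(n))
    print(f"{2 * n:>6} labelled examples -> test accuracy "
          f"{accuracy(model, X_test, y_test):.2f}")
```

The point of the sketch is not the particular algorithm but the shape of the curve: the machine only gets better by being fed more labelled data, while the toddler needs a single glance.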
In 2016 a great deal of excitement about AI was generated when AlphaGo, developed by Google’s DeepMind Technologies, became the first computer program to defeat a world champion, Lee Sedol, at the ancient Chinese board game of Go (weiqi). Training an AI to play Go at that level cost an estimated $25m. The number of legal board positions in Go has been estimated at about 2.1 x 10¹⁷⁰, vastly more than the estimated number of atoms in the observable universe, about 1 x 10⁸⁰.
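For readers who want to sanity-check the scale of those two figures, a back-of-the-envelope comparison using the rounded estimates quoted above takes only a few lines of Python:

```python
import math

# Rough estimates as quoted in the column.
go_positions = 2.1e170        # estimated legal board positions in Go
atoms_in_universe = 1e80      # estimated atoms in the observable universe

ratio = go_positions / atoms_in_universe
print(f"Go has roughly 10^{math.log10(ratio):.0f} times more legal positions "
      "than there are atoms in the observable universe.")   # about 10^90
```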
Human-like intelligence is the gold standard for AI, and discussions on such topics as trustworthiness, explainability and ethics are marked by implicit anthropocentric and anthropomorphic concepts. No matter how advanced AI agents get in terms of intelligence and autonomy, they are likely to remain unconscious robots or specialised devices that assist humans in specific, complex jobs for the foreseeable future.
AlphaGo reached a significant milestone in computer engineering. But did it know it had won? Did it realise the importance of its accomplishment? Did it revel in and celebrate its victory? The answer is simply no. AI, for now, does not share consciousness with humans.
AI today is like a young person who has yet to reach puberty: what we have is artificial narrow intelligence. But in time AI will grow into a teenager. Artificial general intelligence will be capable of thinking, comprehending, learning and applying intelligence in much the same way that humans do.
• Steyn is the chair of the special interest group on artificial intelligence and robotics with the Institute of Information Technology Professionals of SA. He writes in his personal capacity.