What Can’t AI Do?
A problem from econometrics illustrates the difference between artificial and human intelligence. Understanding tacit knowledge and the limits of AI is crucial to deploying it effectively and fairly.
Written by Edward Hearn
Published on Aug. 11, 2021

One of the few truly lucid thought experiments ever carried out by econometricians, the “red bus-blue bus” problem illustrates a central drawback of using statistical estimation to quantify the probability that a person makes a specific choice when faced with several alternatives. As the thought experiment goes, imagine that you’re indifferent between taking either a car or a red bus to work. Owing to your indifference, an estimate of your probability of picking either option is a coin flip: There is a 50 percent chance that you take the car and a 50 percent chance that you take the red bus. Thus, your odds of selection are one-to-one.
Now, introduce a third transportation choice in two different scenarios and assume the traveler remains indifferent among the alternatives. In the first scenario, a new train route opens, so the alternatives facing the apathetic traveler are car, red bus and train. The estimated probabilities are now one-third car, one-third red bus and one-third train. As in the two-choice scenario, the odds are even: one-to-one-to-one.
In the second scenario, the third option is not a train but a blue bus. The choice facing the traveler is thus to take a car, to take a red bus or to take a blue bus. Is there any real difference between taking a red bus and taking a blue bus? No, it’s effectively the same choice. The probabilities should therefore break down as 50 percent car, 25 percent red bus and 25 percent blue bus, with odds of two-to-one-to-one.
This is because the actual choice is exactly the same as in the original two-choice scenario: taking a car versus taking a bus. In other words, a red bus and a blue bus represent the same choice; the color of the bus is irrelevant to the traveler’s decision. So, the probability that the apathetic traveler selects either the red or the blue bus is simply one-half of the probability that the person takes a bus. The method by which these probabilities are estimated, however, is incapable of recognizing such irrelevant alternatives. The algorithm codes car, red bus and blue bus as one-to-one-to-one, just as in the scenario with the train.
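To make the failure concrete, here is a minimal sketch of the kind of choice model the red bus-blue bus story is usually told about: a multinomial logit, which is simply a softmax over the utility of each alternative. The utility values below are illustrative assumptions; any set of equal utilities gives the same result.

```python
import math

def logit_probs(utilities):
    """Choice probabilities under a simple multinomial logit (a softmax over utilities)."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [round(e / total, 3) for e in exps]

# An indifferent traveler assigns every alternative the same utility (0.0 is illustrative).
print(logit_probs([0.0, 0.0]))       # car, red bus            -> [0.5, 0.5]
print(logit_probs([0.0, 0.0, 0.0]))  # car, red bus, train     -> [0.333, 0.333, 0.333]
print(logit_probs([0.0, 0.0, 0.0]))  # car, red bus, blue bus  -> [0.333, 0.333, 0.333]

# The model treats the two buses as distinct alternatives, so it can never recover the
# intuitive split of 0.5 car, 0.25 red bus, 0.25 blue bus. A human sees one real choice
# (car versus bus); the algorithm sees three.
```

Econometricians call this blind spot the independence of irrelevant alternatives: the model assumes that adding a new option never changes the relative odds among the existing ones, which is exactly the assumption the blue bus violates.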
Tacit Knowledge
Chemist and philosopher Michael Polanyi used the term “tacit knowledge” for the knowledge a person draws on to achieve a quantifiable or commonly understood outcome by performing a task that can’t be codified as a repeatable rule. He draws a distinction between this type of knowledge and abstract knowledge, which is describable, rule-bound and repeatable. Tacit knowledge is difficult or impossible to formally express because humans developed the skills that comprise it evolutionarily, prior to the advent of formal methods of communication. As a result, training AI to carry out tasks that require tacit knowledge is extremely difficult.
Algorithmic Shortcomings
The red-bus/blue-bus (non-)choice is a good example of how algorithmic computation can fail. In their raw forms, models cannot distinguish subtleties of linguistic description that human beings have little or no trouble grasping. To a person, it feels intuitive that the red bus and the blue bus are the same transportation alternative, and equally intuitive that introducing a train, unlike introducing a blue bus, genuinely changes the choice set. Describing why the bus color is irrelevant as a programmable rule in an algorithmic process, on the other hand, is exceedingly difficult. Why is this the case?
This riddle is an example of Polanyi’s paradox, named after Michael Polanyi. The paradox, simply stated, is “We know more than we can tell.” More completely, it reads, “We know more than we can tell, i.e., many of the tasks we perform rely on tacit, intuitive knowledge that is difficult to codify and automate.” Polanyi’s paradox comes into play any time an individual can do something but cannot describe how they do it.
In this instance, “doing something” means achieving a quantifiable or commonly understood outcome by performing a task that can’t be codified as a repeatable rule. Polanyi names this type of human performance “tacit knowledge.” He draws a distinction between it and abstract knowledge, which is describable, rule-bound and repeatable.
Economist David Autor uses Polanyi’s paradox to explain why machines have not taken over all human careers. He suggests that, if automation were not confined to the abstract realm of knowledge, machines would have usurped all human tasks and human employment would have plummeted since the 1980s. Automation has not led to this outcome because it requires specifying exact rules that tell computers what tasks to perform. Tacit knowledge, however, is difficult or impossible to formally express because humans developed the skills that comprise it evolutionarily, prior to the advent of formal methods of communication.
Evolutionary Skills
Tacit, indescribable skills are the crux of another paradox formalized by researchers Hans Moravec, Rodney Brooks and Marvin Minsky. “Moravec’s paradox” states, in compact form, that
We should expect the difficulty of reverse-engineering any human skill to be roughly proportional to the amount of time that skill has been evolving in animals.
The oldest human skills are largely unconscious, and so, appear to us to be effortless.
As a result, we should expect skills that appear effortless to be difficult to reverse-engineer, but skills that require effort may not necessarily be difficult to engineer at all.
Paradoxically, mental reasoning and abstract knowledge require very little computation, but sensorimotor skills, future-outcome visualization, and perceptual inference require vast amounts of computational resources. As Moravec stated in his book on this subject, “It’s comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”
Tying Polanyi’s and Moravec’s paradoxes together: humans developed abstract thinking only over the last few thousand years, and it feels difficult to our species precisely because it is so evolutionarily new. By contrast, humans have developed tacit, intuitive but indescribable skills over the entire course of our evolutionary history. They are grounded in our surroundings, acquired through experience, and predate explication.
The Future of AI Is Complementary
For artificial intelligence, then, these paradoxes spell out a counterintuitive conclusion that leads to a fundamental question of resource allocation. If the skills that are simplest for humans are the most challenging for machines and, further, if those tacit skills are difficult or impossible to codify, then the simplest tasks humans perform subconsciously will require massive amounts of time, effort and resources to teach to machines.
The easier a skill is for a human to perform, the harder it is to describe and, consequently, the harder it is for machines to replicate. The main economic question, then, is whether it is worth developing AI to perform intuitive human tasks. Why invest ever greater resources to develop AI that performs ever simpler tasks?
This suggests a natural slowing of general AI development. Even though Moore’s Law points to a trillion-fold increase in computer-processing power, the logic by which we communicate with computers has not changed much since the 1970s. When the opportunity cost of research into AI that lets machines perform ever simpler human tasks becomes too high, development will slow as diminishing returns set in.
Ideally, as Autor suggests, the future of AI lies in its complementarities with human skills rather than its substitutability for them. For instance, up until the computing revolution of the 1970s and 1980s, statisticians employed veritable armies of graduate students to hand-process reams of paper-based data into summary statistics like means, medians and standard deviations. With the advent of electronic calculators and, later, computers, statistics that formerly required hours or days of human effort could be computed in seconds.
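To see how stark that change is, here is a small sketch of the same work done in code. The dataset is invented for the example; the point is only that summary statistics over a hundred thousand observations now take a fraction of a second on any modern machine.

```python
import random
import statistics

# A hypothetical stand-in for reams of hand-recorded data: 100,000 observations.
data = [random.gauss(mu=50, sigma=10) for _ in range(100_000)]

# Summary statistics that once took teams of graduate students hours or days.
print("mean:  ", statistics.mean(data))
print("median:", statistics.median(data))
print("stdev: ", statistics.stdev(data))
```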
With this change in computational means, machines were able to complement teams of statistical researchers by absorbing the students’ low-level, repeatable arithmetic chores. This freed up a massive amount of time for statisticians and their students to work together on more nebulous, open-ended statistical problems, the very kind that demand the creative thinking computers do not do well. The current view of AI and its interaction with human capabilities needs a serious rethink in terms of the kinds of problems it’s being developed to address. After all, do we really need AI to be able to tell us that red buses are the same as blue buses?