A.I. Progress and Delusion
So many books and videos on artificial intelligence. Where to start? If only we had an algorithm to help us decide…
No surprise, though, that an Economic Thinking post might start with insights and analysis from economists…
Economist Tyler Cowen (of Marginal Revolution and Marginal Revolution University) offers an overview essay, published in 2019 in the collection The Economics of Artificial Intelligence: An Agenda (University of Chicago Press): Neglected Open Questions in the Economics of Artificial Intelligence (link to pdf). Excerpt:
Most analyses of automation focus on the production function, but the new and cheaper outputs resulting from automation have distributional effects as well. For instance, the Industrial Revolution made food cheaper and more reliable in supply, in addition to mechanizing jobs in the factory and in the fields. A new, larger, cheaper and more diverse book market was created, and so on. Artificial intelligence, in turn, holds out the prospect of lowering prices for the outputs that can be produced by the next generation of automation. Imagine education and manufactured goods being much cheaper because we produced them using a greater dose of smart software. The upshot is that even if a robot puts you out of a job or lowers your pay, there will be some recompense on the consumer side.
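Cowen's consumer-side point can be made concrete with a toy calculation. Below is a minimal Python sketch using entirely hypothetical numbers (a 10 percent cut in nominal pay alongside a 20 percent fall in the prices of the goods that worker buys); how much of the pay cut is offset depends, of course, on how much of the worker's actual consumption basket gets cheaper.

```python
# Hypothetical illustration of Cowen's point: automation can cut nominal pay
# yet still raise real purchasing power if it makes consumption goods cheaper.
# All numbers below are made up for illustration.

nominal_pay_before = 50_000       # annual pay before automation
nominal_pay_after = 45_000        # pay falls 10% after automation
price_index_before = 1.00         # price level of the goods the worker buys
price_index_after = 0.80          # automation cuts those prices by 20%

real_income_before = nominal_pay_before / price_index_before
real_income_after = nominal_pay_after / price_index_after

print(f"Real income before: {real_income_before:,.0f}")   # 50,000
print(f"Real income after:  {real_income_after:,.0f}")    # 56,250
# Despite the pay cut, real purchasing power rises because outputs got cheaper.
```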
Economist (and college debater) Gary N. Smith is the author of the 2018 Oxford University Press book The AI Delusion (link to Amazon, where you can “look inside”):
We are told that computers are smarter than humans and that data mining can identify previously unknown truths, or make discoveries that will revolutionize our lives. Our lives may well be changed, but not necessarily for the better. Computers are very good at discovering patterns, but are useless in judging whether the unearthed patterns are sensible because computers do not think the way humans think.
We fear that super-intelligent machines will decide to protect themselves by enslaving or eliminating humans. But the real danger is not that computers are smarter than us, but that we think computers are smarter than us and, so, trust computers to make important decisions for us.
The AI Delusion explains why we should not be intimidated into thinking that computers are infallible, that data-mining is knowledge discovery, and that black boxes should be trusted.
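Smith's warning about pattern discovery is easy to demonstrate for yourself: search enough random data and apparently strong correlations turn up by chance alone. A minimal Python sketch (my illustration, not an example from the book) generates pure noise and counts how many "impressive" correlations a data-mining pass would find:

```python
import numpy as np

# Data-mining demo: search pure noise for "significant" correlations.
# With enough candidate variables, some pairs will look strongly related
# by chance alone -- the pattern is real in the sample, meaningless in the world.
rng = np.random.default_rng(0)
n_obs, n_vars = 50, 200
data = rng.standard_normal((n_obs, n_vars))   # 200 unrelated random series

corr = np.corrcoef(data, rowvar=False)        # all pairwise correlations
mask = ~np.eye(n_vars, dtype=bool)            # ignore the diagonal
strong = np.abs(corr[mask]) > 0.4             # "impressive-looking" pairs

print(f"Pairs examined: {mask.sum() // 2}")
print(f"'Strong' correlations found in noise: {strong.sum() // 2}")
# The computer finds the patterns; it cannot tell you they are nonsense.
```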
Gary N. Smith discusses The AI Delusion in this 2018 Claremont McKenna College presentation, and mentions at the beginning: “Went to Harvey Mudd College…was on the debate team at CMC for four years…”
Also, from Intelligence Squared, the IQ2 debate Don’t Trust The Promise Of Artificial Intelligence:
As technology rapidly progresses, some proponents of artificial intelligence believe that it will help solve complex social challenges and offer immortality via virtual humans.
Debaters may consider USFG A.I. reform that would regulate bias, with the idea that A.I. systems used by Google, Facebook, and others shouldn’t be biased against particular ideologies or, say, religious beliefs. But it turns out that bias, in the inductive sense of steering a learner toward some hypotheses over others, is essential for useful A.I., as explained in this presentation by machine learning researcher George Montañez (also at Harvey Mudd College): Can we make machines in our image? (Walter Bradley Center talk, 2018). George Montañez’s presentation begins here.
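The indispensability of inductive bias can be shown with a toy construction (mine, not an example from the talk). The Python sketch below enumerates every Boolean function of three bits that is consistent with a few training examples and shows that, for a learner with no preference among hypotheses, every unseen input is perfectly ambiguous:

```python
from itertools import product

# Why "bias" (in the inductive-learning sense) is essential, not optional:
# a learner with no preference among hypotheses cannot generalize at all.
# Toy setup: target concepts are Boolean functions of 3 bits,
# i.e. a 0/1 label for each of the 8 possible inputs.

inputs = list(product([0, 1], repeat=3))
train = {(0, 0, 0): 0, (0, 1, 1): 1, (1, 0, 1): 1}   # three labeled examples
unseen = [x for x in inputs if x not in train]

# Enumerate every hypothesis (all 2^8 labelings) consistent with the training set.
consistent = []
for labels in product([0, 1], repeat=len(inputs)):
    h = dict(zip(inputs, labels))
    if all(h[x] == y for x, y in train.items()):
        consistent.append(h)

print(f"Consistent hypotheses: {len(consistent)}")    # 2^5 = 32
for x in unseen:
    votes = sum(h[x] for h in consistent)
    print(f"{x}: {votes} of {len(consistent)} consistent hypotheses predict 1")
# Every unseen input is predicted 1 by exactly half of the consistent hypotheses,
# so an unbiased learner has no basis for any prediction beyond the training set.
```

Only by preferring some hypotheses over others (simpler ones, smoother ones, ones matching prior knowledge) does a learner gain any power to generalize, which is the sense in which bias is a feature of useful A.I. rather than a defect to be regulated away.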