Regulate or Certify Emerging A.I. Technologies?
For the Stoa policy topic, how do U.S. Federal Government (USFG) and Non-Government Organization (NGO) approaches compare?
Resolved: The United States Federal Government should substantially reform the use of Artificial Intelligence technology.
In addition to private-sector, for-profit companies, there is a range of Non-Government Organizations (NGOs): clubs, charities, and churches; universities, foundations, and other nonprofits; and many technology certification and standards groups.
For the A.I. topic, the history and potential of Underwriters Laboratories are instructive. In What Does It Mean to Certify an AI Product as Safe? (Dataversity, July 16, 2018), James Kobielus suggests that the fear many have of A.I. technologies resembles the fear of electricity when it was first developed and introduced:
Electricity frightened a lot of people when it first entered their lives in the late 19th century. Responding to those concerns, most parts of the civilized world instituted regulations over electric utilities. At the same time, the private sector spawned electrical testing and certification groups such as Underwriters Laboratories (UL).
Thanks to safeguards such as these, we needn’t worry about having 10 zillion volts shoot through our bodies the next time we plug in a toaster. Electricity is a natural phenomenon that can be detected, controlled, and neutralized before it does harm. But how in the world can any testing lab certify that some capability as versatile as AI doesn’t stray beyond its appointed function…
But product testing and certification must keep pace with technological innovation. Given that we’re now living in the digital age, it’s not inconceivable that consumers might someday rely on this or other trusted organizations to certify that some AI-powered gadget can’t accidentally (or deliberately, when disabled by evil people) disable our home fire alarms and carbon monoxide detectors. To their credit, UL has progressively expanded the range of consumer-product safety issues it addresses beyond electrical and fire hazards. It now also tests for water and food safety, environmental sustainability, and hazardous substances in a wide range of products.
Kobielus next recommends that “UL and equivalent organizations around the world institute safety testing for AI-equipped products.” He describes categories of testing and certification: AI Rogue Agency, AI Instability, AI Sensor Blindspots, AI Privacy Vulnerabilities, AI Adversarial Exposure, AI Algorithmic Inscrutability, and AI Liability Obscurity.
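To make one of these categories concrete, here is a minimal, hypothetical sketch (in Python) of what a single automated check in such a certification suite might look like, probing “AI Instability” by verifying that a model’s output does not swing sharply under small input perturbations. Everything in it, the toy model, the thresholds, and the pass criterion, is an illustrative assumption, not an actual UL procedure.

```python
# Hypothetical sketch only: a toy "AI Instability" check of the kind a
# certification suite might run. The model, thresholds, and pass criterion
# are illustrative assumptions, not actual UL test procedures.

import numpy as np


def toy_model(x: np.ndarray) -> np.ndarray:
    """Stand-in for the AI component under test (hypothetical)."""
    # A simple logistic score over the input features.
    return 1.0 / (1.0 + np.exp(-x.sum(axis=-1)))


def check_stability(model, inputs: np.ndarray, epsilon: float = 0.01,
                    tolerance: float = 0.05, trials: int = 100) -> bool:
    """Pass if small random input perturbations (up to +/- epsilon) never
    move the model's score by more than `tolerance` on any test input."""
    rng = np.random.default_rng(0)
    baseline = model(inputs)
    for _ in range(trials):
        noise = rng.uniform(-epsilon, epsilon, size=inputs.shape)
        if np.any(np.abs(model(inputs + noise) - baseline) > tolerance):
            return False  # unstable response: fail the certification check
    return True


if __name__ == "__main__":
    test_inputs = np.random.default_rng(42).normal(size=(50, 8))
    verdict = check_stability(toy_model, test_inputs)
    print("AI Instability check:", "PASS" if verdict else "FAIL")
```

A real certification program would presumably pair many such checks, one or more per category above, with documentation requirements and field-failure reporting.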
See also Underwriting AI Safety (Datanami, July 2, 2018), UL Joins Leading Body on Artificial Intelligence (UL press release, May 15, 2019), and the Partnership on AI website.
For more discussion of the certification vs. regulation debate, see Another Four Falsehoods About the Free Market (FEE.org, February 20, 2014), with UL discussed under the first falsehood:
1. The Free Market Must Be Regulated
This one never seems to get old. I wish it would just die. In truth, I agree with it, but in a different sense than it’s usually meant.

Many people say that if the government doesn’t regulate, say, the purity of bottled water, Poland Springs would be free to poison us if they could profit from it. While standards of safety or purity in some form would undoubtedly emerge on the market without government—something like Consumer Reports or Underwriters Laboratories—it’s the last part of the accusation that actually does the heavy lifting, even in our present unfree market. Profit-seeking regulates! Outside of markets that have been criminalized or heavily regulated by government, the gains from cheating and hurting others are risky and relatively rare. Legitimate business people are ready to scoop up the dissatisfied (or sick) customers of other legitimate businesspeople by giving them better value for the dollar, burnishing their reputations in the process. When’s the last time you went to a restaurant that had a reputation for poor quality, much less for food poisoning?
And on safe financial assets and toasters, see Safe Toasters and Toxic Financial Assets (FEE.org, October 27, 2011). Would A.I.-enhanced toasters be safe without new federal regulations? Who secures the safety and reliability of today’s toasters?
… the author has apparently never looked at the back of his toaster. If he did, he would have noticed that what actually assures him that his toaster won’t explode is not the regulatory power of the federal (or any) government, but the competition in the private sector. Specifically, most small appliances in the United States have the stamp of quality assurance from Underwriters Laboratories (UL), which is a private (nonprofit) organization that has tested such appliances for over a hundred years. UL is unaffiliated with the government and provides this quality assurance so manufacturers can say to their customers that their toasters or clock radios or televisions are safe.
The future of regulation: Principles for regulating emerging technologies (Deloitte, June 19, 2018) explains:
Emerging technologies such as artificial intelligence (AI), machine learning, big data analytics, distributed ledger technology, and the Internet of Things (IoT) are creating new ways for consumers to interact—and disrupting traditional business models. It’s an era in which machines teach themselves to learn; autonomous vehicles communicate with one another and the transportation infrastructure; and smart devices respond to and anticipate consumer needs.
In the wake of these developments, regulatory leaders are faced with a key challenge: how best to protect citizens, ensure fair markets, and enforce regulations while allowing these new technologies and businesses to flourish.
More from Deloitte: 5 Principles for Regulating Emerging Technologies (WSJ Risk & Compliance Journal).
And from the Brookings Institution: Soft law as a complement to AI regulation (Brookings AI Governance, July 31, 2020):
While the dialogue on how to responsibly foster a healthy AI ecosystem should certainly include regulation, that shouldn’t be the only tool in the toolbox. There should also be room for dialogue regarding the role of “soft law.” As Arizona State University law professor Gary Marchant has explained, soft law refers to frameworks that “set forth substantive expectations but are not directly enforceable by government, and include approaches such as professional guidelines, private standards, codes of conduct, and best practices.”