Output of Craiyon* for the prompt “A cyborg version of Hunter S. Thompson”, generated July 24, 2022
(Hunter S. Thompson was an American journalist-author who founded the gonzo journalism movement.)
5 December 2022
© European Union. Source: European Commission. CC BY 4.0
(Photos supplied by Guidepost from various sources)
Excerpts
What Is AI?
The OECD defines an AI system [as] “a machine-based system that is capable of influencing the environment by producing an output (predictions, recommendations, or decisions) for a given set of objectives. It uses machine and/or human-based data and inputs to (i) perceive real and/or virtual environments; (ii) abstract these perceptions into models through analysis in an automated manner (e.g., with machine learning), or manually; and (iii) use model inference to formulate options for outcomes. AI systems are designed to operate with varying levels of autonomy” (OECD 2019).*
As an illustration of the practical use of AI, consider asking AI to give its own definition of AI.
The following prompt was submitted to a modern, trained AI engine called Generative Pre-Trained Transformer 3 (GPT-3): “Write a paragraph describing what AI is and what it can do.” The output that followed from GPT-3 was: “AI stands for artificial intelligence. AI is the result of applying cognitive science techniques to artificially create something that performs tasks that only humans can perform, like reasoning, natural communication, and problem-solving. AI is important because it has the potential to help humans become better at completing cognitive tasks, and to automate tasks that are currently difficult or impossible for humans to do . . .”
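As a rough sketch of how such a prompt could be submitted programmatically, the snippet below uses the OpenAI Python library's legacy completions interface; the model name, token limit, and temperature are illustrative assumptions rather than details given in the report.

# Minimal sketch: submitting the report's prompt to a GPT-3-style model.
# Assumes the legacy OpenAI Python library (openai < 1.0); the model name,
# token limit, and temperature are illustrative choices, not from the report.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # user-supplied credentials

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3-family model (assumed)
    prompt="Write a paragraph describing what AI is and what it can do.",
    max_tokens=150,            # room for a short paragraph
    temperature=0.7,           # moderate randomness in the output
)

print(response.choices[0].text.strip())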
This illustration highlights the transformative power of AI technology. While many previous technological advances led to the augmentation or automation of routine tasks, AI has the power to augment or automate a much wider variety of tasks, ones previously thought possible only for humans to complete. . .
Socially Optimal Applications of AI
Many policy problems facing governments require making decisions under uncertainty, but AI – combined with the large data sets to which governments have access – has given policymakers new tools to tackle that uncertainty. . .
David James Henry of the Writers Guild of America pickets during the labor dispute that began on 2 May 2023, demanding that the Alliance of Motion Picture and Television Producers use AI only as a tool for scriptwriting and not as a way to take writers’ jobs away
. . .There are real costs posed by AI to society that – as noted by Acemoglu (2021) – are all the more important to understand and confront because of “AI’s promising and wide-reaching potential”. Examples that stem directly from AI’s control of information include privacy violations, the creation of anti-competitive environments, and behavioral manipulation by machine learning techniques that enable companies to identify and exploit biases and vulnerabilities that consumers themselves do not recognize. Further, there is the direct risk that workers will be displaced by AI via excessive automation, as there is no guarantee that the current pace of the development of AI tools will achieve the socially optimal mix of automation and augmentation of tasks. Finally, there are a number of clear ways that AI has exacerbated social problems, including issues of discrimination and concerns about the functioning of democratic governments.
There is substantial evidence . . . that AI has introduced and perpetuated racial or other forms of bias, both through issues with the underlying datasets used to make decisions, and through unintentional or seemingly benign decisions made by algorithm designers. AI can also negatively affect how societies communicate on issues fundamental to the functioning of democracies; for example, echo chambers in social media can propagate false information and polarize society. . . There is a central role for governments in studying, monitoring, and regulating AI, as evidenced by the United States AI Bill of Rights and the European Commission Artificial Intelligence Act. . .
Failures in AI: Bias in Health Care
While AI algorithms have shown great promise in their ability to apply data to social and economic problems, there are many cases where they can exacerbate existing societal inequities. Obermeyer et al. (2019) examine the use of algorithms to determine which patients are “high-risk” and therefore receive additional resources and attention from care providers. The authors find that the algorithm they studied assigned poor patients lower risk scores than equally ill richer patients. This was because the algorithm used health cost as a proxy for health need, and poor patients generate lower costs than rich ones, potentially because of barriers in accessing care or bias they face in the health care system. The use of this proxy introduced bias into the algorithm and resulted in poor patients losing access to additional help they would otherwise have received. . .
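To make the mechanism concrete, the toy sketch below (in Python, with invented numbers; it is not Obermeyer et al.’s data or model) shows how a risk score built on health cost as a proxy for health need ranks an equally ill but lower-spending patient as lower risk.

# Toy illustration of proxy bias: using health-care cost as a stand-in for
# health need. All patients, numbers, and the scoring rule are invented for
# illustration; this is not the algorithm studied by Obermeyer et al. (2019).

patients = [
    # (label, true_illness_burden, annual_health_cost_usd)
    ("poor patient (faces access barriers)", 8, 3_000),
    ("rich patient (full access to care)",   8, 12_000),
]

def cost_proxy_risk_score(annual_cost, max_cost=15_000):
    """Higher predicted spending -> higher 'risk' -> more extra resources."""
    return annual_cost / max_cost

for label, illness, cost in patients:
    score = cost_proxy_risk_score(cost)
    print(f"{label}: illness burden={illness}, cost-based risk score={score:.2f}")

# Both patients are equally ill, but the poorer patient's lower spending yields
# a much lower risk score, so the extra care resources go to the richer patient.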
Adoption of AI in the European Union
Danish enterprises are the largest users of AI among EU countries. (Photo: Cafe Sundbyvester, Copenhagen*)
The overall trends of firm-level AI adoption appear similar in the European Union to those in the United States: in 2021, 8 percent of all enterprises with more than 10 employees used AI technology. . . Larger firms were more likely to use some form of AI technology, with 28 percent of firms with more than 250 employees reporting its use. The survey also showed that firms most often used AI to automate workflows, employ machine learning, or analyze written language (3 percent of firms in each case). The overall story is similar to that told by survey records from the prior year: in 2020, 7 percent of enterprises in the EU reported using AI. Some common uses of the technology were the analysis of large data sets via machine learning and the deployment of chatbots (2 percent of firms in both cases). With these data, we can also see the distribution of AI usage across EU member states. In 2021, Denmark reported the largest share of enterprises employing AI, at 24 percent. Portugal (17 percent), Finland (16 percent), and Luxembourg and the Netherlands (both 13 percent) came next. . .
The Impact of AI on Work
. . .As AI continues to evolve and work its way into a wide variety of applications, its potential gains for society are enormous. . . AI’s benefits could span industries, providing workers with time for new tasks and firms with greater speed and accuracy through automation. . .
AI has the potential to increase productivity, create new jobs, and raise living standards. However, by its very nature of performing “non-routine” tasks formerly thought to be strictly the domain of humans, AI is likely to disrupt large swaths of jobs and tasks. This may lead to difficult adjustments for workers as jobs are redesigned or required skills change. . .
This poses several challenges for policymakers. . . Much of the development and adoption of AI is intended to automate work instead of augmenting it. Private firms advancing AI technology are likely to do so in a direction that maximizes profits, which may not be the socially desirable direction. [Moreover,] AI increases the ability of employers to monitor workers.
In sum, while the potential benefits of AI for labor markets are numerous, unfettered AI could also result in a less democratic and less fair labor market.
Conclusions
The use of AI undoubtedly presents many opportunities to positively transform the economy. The last decade has seen incredible advances in natural-language processing and computer vision, enabling new applications of AI to tasks previously thought to be firmly in the domain of humans.
Firms are rapidly adopting AI around the world for its ability to scale and lower costs, to absorb and process enormous amounts of data, and to help make better decisions, often assisted by humans. And this process of transition is likely to create new jobs that never would have existed without AI.
At the same time, AI poses several challenges. Huge swaths of the workforce are likely to be exposed to AI, in the sense that AI can now address nonroutine tasks, including tasks in high-skill jobs that until now had never been threatened by any kind of automation. The primary risk of AI to the workforce is in the general disruption it is likely to cause to workers, whether they find that their jobs are newly automated or that their job design has fundamentally changed. The additional risk of AI is that it may lead firms . . . to violate existing laws about bias, fraud, or antitrust, exposing themselves to legal or financial risk, and inflicting economic harm on workers and consumers. . . Detecting and addressing these violations is far from a simple task. This presents governments with a clear agenda on how to guide AI development in a positive direction.
Spain has launched a pilot regulatory sandbox on AI. (Photo: Pavillion of the Council of Ministers, Moncloa Palace, seat of Spain’s Central Government)
. . . Firms that utilize AI are not freed of the responsibility of abiding by antifraud, antitrust, and antidiscrimination laws, as well as workplace safety and health regulations. It should be a principal goal of policymakers to make sure that government institutions are well-equipped to investigate and enforce these laws when necessary. Doing so is not a straightforward process. . . The goal is to create the appropriate incentives so that firms develop fairer algorithms that abide by national laws. . . Well-designed algorithms have the potential to actually reduce instances of bias, and firms have voiced a desire to use algorithms to address instances of discrimination. However, without the proper oversight and regulation, this potential for positive transformative change is unlikely to be realized.
Governments are already moving toward more effective regulation of the impact of AI. In October 2022, Spain launched a pilot regulatory sandbox on AI. . . to connect policymakers with AI developers and adopters. It is expected to generate easy-to-follow best-practice guidelines for companies, including small and medium-size enterprises and start-ups, to stimulate the development of AI and reduce barriers to its adoption, in compliance with the future European Commission Artificial Intelligence Act. Further, the US has announced an initiative to create a “Bill of Rights” for AI covering many areas, such as consumer protections and equity of opportunity in employment, education, housing and finance, and health care.
*Ed’s note: Underscored words provide links to the original sources of this EU report. To access these sources, click on the title of the report.
———
Images
Featured image/craiyon.com (a free AI art generator), PD via Wikimedia Commons
Picketer David James Henry/David James Henry, CC BY-SA 4.0 via Wikimedia Commons
Guidepost’s montage–Poor man/Arturo Avila, partial face blot-out supplied, CC BY 2.0 via Flickr. Rich man/Librarygroover, CC BY 2.0 via Flickr. Cropped.
Cafe Sundbyvester/Orf3us, CC BY 3.0 via Wikimedia Commons. (*Note: Cafe Sundbyvester simply serves as a sample of Danish enterprises. The use of the photo does not imply it is one of the 24% of Danish enterprises that use AI.)
Moncloa Palace/Yeray Diaz Zbida, CC BY 2.0 via Flickr
Texts, prints, photos and other illustrative materials depicted in GUIDEPOST have been either contributed by the authors of each published work or, to the Magazine’s good-faith knowledge, are in the public domain or otherwise benefit from the allowances of Articles 9(2), 10, 10(bis), and applicable others of the Berne Convention for the Protection of Literary and Artistic Works.