Big Tech’s stranglehold on artificial intelligence must be regulated
The technology is too important to be left in the clutches of Silicon Valley
Google CEO Sundar Pichai has suggested, more than once, that artificial intelligence (AI) will affect humanity's development more profoundly than our harnessing of fire. He was speaking, of course, of AI as a technology that gives machines or software the ability to mimic human intelligence and complete ever more complex tasks with little or no human input.
You may laugh off Pichai's comparison as the usual Silicon Valley hype, but the company's dealmakers aren't laughing. Since 2007, Google has bought at least 30 AI companies working on everything from image recognition to more human-sounding computer voices, more than any of its Big Tech peers. One of those acquisitions, DeepMind, which Google bought in 2014, has just announced that it can predict the structure of every protein in the human body from the amino acid sequences encoded in our DNA, an achievement that could fire up numerous breakthroughs in biological and medical research. Those breakthroughs will only happen, of course, if Google allows broad access to DeepMind's knowledge. The good news is that Google has decided it will. But there is a "but."
For one, Google isn't the only gatekeeper whose decisions will largely determine the direction AI technology takes. The roster of companies snatching up AI startups globally is also dominated by the familiar Big Tech names that so often accompany the search and advertising giant: Apple, Facebook, Microsoft, and Amazon. In 2016, this group, along with Chinese mega-players such as Baidu, spent $20 billion to $30 billion out of an estimated global total of $26 billion to $39 billion on AI-related research, development, and acquisitions. With dominance in search, social media, online retail, and app stores, these companies have near-monopolies on user data. Via their fast-growing and increasingly ubiquitous cloud services, Google, Microsoft, Amazon, and their Chinese counterparts are setting the stage to become the primary AI suppliers to everyone else. (In fact, AI-as-a-service is already a $2 billion-a-year industry and is expected to grow at an annual rate of 34 percent.) According to soon-to-be-released research from my team at Digital Planet, U.S. corporations' AI talent is intensely concentrated as well: The median number of AI employees among the top five (Amazon, Google, Microsoft, Facebook, and Apple) is about 18,000, while the median for companies six through 24 is about 2,500. The numbers drop significantly from there.
AI's potential is both large and widespread: from driving efficiency gains and cost savings across almost every industry to revolutionary impacts in education, agriculture, finance, national security, and other fields. We have just seen an example of the many AI-enabled changes underway: Lockdown restrictions imposed in the wake of the Covid-19 pandemic led many organisations to introduce bots and automation to replace humans. At the same time, AI could create new jobs and enhance productivity. In other ways, too, AI has two faces: It sped up the development and rollout of Covid vaccines by predicting the spread of infections at a county-by-county level to inform site selection for clinical trials, and it helped social media companies flag fake news without having to employ human editors. But AI-optimised algorithms in search and social media also created echo chambers for anti-vaxxer conspiracy theories by targeting the most vulnerable. There are growing concerns about ethics, fairness, privacy, surveillance, social justice, and transparency in AI-aided decision-making. Critics warn that democracy itself could be threatened if AI runs amok.
In other words, the mix of positives and negatives puts this potent new suite of technologies on a knife-edge. Can we be confident that a handful of companies that have already lost public trust will take AI in the right direction? The business models driving these companies give ample reason for worry. For advertising-driven companies such as Google and Facebook, it is clearly beneficial to elevate content that travels fast and draws attention (as misinformation usually does) and to micro-target that content by harvesting user data. Consumer product companies, such as Apple, will be motivated to prioritise AI applications that help differentiate and sell their most profitable products, which is hardly a way to maximise AI's beneficial impact.
Yet another challenge is the prioritisation of innovation resources. The shift online during the pandemic has delivered outsized profits to these companies and concentrated even more power in their hands. They can be expected to try to maintain that momentum by prioritising the AI investments most aligned with their narrow commercial objectives while ignoring the myriad other possibilities. In addition, Big Tech operates in markets with economies of scale, so there is a tendency towards big bets that can waste tremendous resources. Who remembers IBM's Watson initiative? It aspired to become the universal, go-to digital decision tool, especially in healthcare, and it failed to live up to the hype, as did the trendy driverless car initiatives of Amazon and Google parent Alphabet. While failures, false starts, and pivots are a natural part of innovation, expensive failures driven by a few enormously wealthy companies divert resources away from more diversified investment across a range of socially productive applications.
Despite AI's growing importance, U.S. policy on how to manage the technology is fragmented and lacks a unified vision. It also appears to be an afterthought, with lawmakers more focused on Big Tech's anti-competitive behaviour in its main markets: search, social media, and app stores. This is a missed opportunity, because AI has the potential for much deeper societal impact than search, social media, and apps.
There are three kinds of action policymakers should consider to free AI from the clutches of Big Tech. First, increase public investment in AI. Second, establish mechanisms to steer AI away from harmful uses and to protect consumer privacy. Third, given the concentration of AI in a handful of Big Tech players, adapt the antitrust machinery to make it more forward-looking. That would mean anticipating the risks of a small group of large companies steering a technology with such wide-ranging applications, and establishing a system of carrots and sticks to get that steering right. Such proactive regulation will have to happen even as policymakers ultimately rely on the same companies to lead AI's development, given their scale, technical knowledge, and market access.
While the federal budget request for 2022 includes $171 billion for public research and development, the budget does not specify the amount to be spent on AI. According to some estimates, federal AI research will get $6 billion, with an additional $3 billion allocated for external AI-related contracts. In 2020, one key federal agency, the National Science Foundation, spent $500 million on AI and collaborated with other agencies on awarding another $1 billion to 12 institutes and public-private partnerships. Budget allocations for 2021 include $180 million to be spent on new AI research institutes and an additional $20 million on studying AI ethics. Other federal departments, such as Energy, Defense, and Veterans Affairs, have their own AI projects underway. In August 2020, for example, the Department of Energy allocated $37 million over three years to fund research and development of AI to handle data and operations at the department's scientific user facilities. All these numbers are dwarfed by those of Big Tech.
In addition to public investment in AI, there is a need to envision AI's future uses and regulate current investments. Provisions of the U.S. National Defense Authorization Act are intended to ensure that AI is developed ethically and responsibly. The National Institute of Standards and Technology has been tasked with developing a framework for managing AI risk. The Government Accountability Office has also released reports highlighting the risks associated with facial recognition and forensic algorithms used for public safety, and it has provided an accountability practices framework to help federal agencies and others use AI responsibly. However, all of these guidelines need to be integrated into a more formal regulatory framework.
Given that the vast majority of AI investment and talent is concentrated in a small handful of companies, the emerging Biden antitrust revolution can play a key role. The administration is taking aim at Big Tech's crushing dominance of social media, search, app stores, and online retail. Many of these markets and their structures may be hard to change as the tech companies act preemptively to tighten their grip, as I have previously described in Foreign Policy. The AI market, however, is still emerging and potentially malleable. The major tech companies can be given incentives to prioritise societally beneficial AI applications and to open up their data, platforms, and products for broader public use. To gain access to these AI vaults, the U.S. government could use the leverage created by the multiple antitrust actions being considered against Big Tech. The historical precedent of Bell Labs offers inspiration: The 1956 federal consent decree against the Bell System, which then held a national monopoly over U.S. telecommunications, kept the company intact, but in exchange Bell Labs was required to license all its patents royalty-free to other businesses. This use of public leverage led to a burst of technological innovation across multiple sectors of the economy.
You may or may not agree with Pichai's claim that AI's impact on humankind will be comparable to that of harnessing fire, but he made another comment that is much harder to argue with: "[Fire] kills people, too." To its credit, Google-owned DeepMind is providing open access to more than 350,000 protein structures. At the same time, it remains unclear whether Google gave life sciences companies within Alphabet's corporate empire proprietary early access to this protein treasure trove and, if so, how those companies might use it.
If the emerging world of AI is dominated by a handful of companies without public oversight and engagement, we run two risks: We deny others access to the tools to light their own fires, and we could burn down parts of the social fabric if those companies wield fire the wrong way. If we succeed in creating new mechanisms to avoid these risks, AI could be even bigger than fire.
Bhaskar Chakravorti is the dean of global business at Tufts University's Fletcher School of Law and Diplomacy. He is the founding executive director of Fletcher's Institute for Business in the Global Context, where he established and chairs the Digital Planet research program.
Disclaimer: This article first appeared on Foreign Policy, and is published by special syndication arrangement.