Google’s decision to no longer explicitly rule out the use of its AI in weapons systems rightly raises questions about the value of voluntary commitments and principles made by companies.
The world is going crazy! That is the conclusion observers of the latest political upheavals are likely to draw. At the Munich Security Conference, it became clear that the anchors of stability of past decades, such as the transatlantic NATO alliance, may soon be a thing of the past. The world is in a state of upheaval, and a small number of people are using their power to shape social and political change. Google’s change to its AI principles fits this picture: the company now permits its artificial intelligence to be used in weapons systems, a use it had previously explicitly ruled out. In today’s technocratic world, the heads of the large digital tech giants shape political discourse. Elon Musk, for example, has secured Donald Trump’s trust through money and skilful manoeuvring. On behalf of the US president, he is now turning the American executive branch upside down, making decisions at breakneck speed that have serious consequences for people all over the world – as when Musk cancelled US development aid practically overnight. When commentators speak of an ‘AI coup’, they are not simply being pessimistic.
Google’s decision to no longer explicitly exclude the use of its AI in weapons systems rightly raises the question of what voluntary promises and principles made by companies are actually worth. One thing is clear: within the framework of applicable law, Google is free to use its AI for the development and operation of weapons. However, Google’s about-turn also makes it clear that companies are willing to throw ethical concerns overboard as soon as they expect economic benefits. This does not mean that ethical commitments by large corporations are nothing more than a marketing exercise: many companies take their ethical and moral responsibility in the development of artificial intelligence seriously and lead by example. But especially with sensitive new technologies that will undoubtedly transform our society, compliance with minimum ethical standards should not be left for commercial players to decide for themselves. Instead, ethical standards must be ensured across all sectors and companies – through binding regulation. Whether the EU’s AI Act will prove a suitable means of achieving this remains to be seen.
The Musk case in the US already shows that when tech giants amass too much power, no democratic system is safe from them. This applies not only to the USA but also to Europe. The heavyweights of the digital world already exert significant influence on legislative processes; Meta alone currently employs more than 40 lobbyists in Brussels. If Europeans want to prevent companies from ruthlessly pushing through their own interests, there is no way around greater diversity in the digital space. To strengthen diversity and fairness in digital markets, European legislators have passed the Digital Markets Act (DMA). This regulatory push is an important building block, but on its own it is not enough to protect Europe’s citizens, researchers and businesses from monopolists in the digital space. What is needed, in addition, are European solutions that are fully available to business and science, keeping Europe competitive and enabling innovation.
In our new section ‘My opinion’, we share comments and opinions from the Open Search Foundation team. Today, Leopold Beer – a research fellow in the PriDI project – comments on Google’s decision to make its AI applications available for weapons development in future.