
AI is Seeping into Politics

by ccadm



As we know, artificial intelligence is affecting most aspects of our lives, and politics is no exception. AI algorithms have grown far more sophisticated over time, and deepfakes now pose a serious threat to the very fabric of the society we live in. From voice generation to video and image generation, AI is being used extensively, and it is hard to find a media house that is not using it in one way or another these days.


AI’s effect on democratic processes

The widespread use of deepfakes has already made its mark on advertising, Hollywood, and election campaigns, so the question is: will AI have an impact on this year's US elections? The answer may be yes or no. But Taiwan's elections this year offered an example of foreign actors at work, with fake social media accounts spreading deepfakes to manipulate public opinion.

As we know, the US leads in artificial intelligence innovation and adoption, and US political parties, with their massive campaign funds, are adopting it rapidly as well. For context, consider how Donald Trump's 2016 campaign made use of Facebook ads.

Tools like GPT-4 can be used to create content with very little human intervention, and the results are often strikingly convincing. Fake videos and audio calls are two examples. A recent robocall mimicking Biden's voice was a wake-up call to the deep problems AI might bring to the table.


Regulatory measures

Regulations alone cannot handle this menace, as foreign actors outside a country's regulatory authority also play a role in these events. This brings us to another question: is a global governance initiative possible to curtail the sinister side of AI? It's a difficult question in a world as divided and hostile as it is today.

A good example is the global financial system, but AI seems like a different animal to tame. Mustafa Suleyman, a co-founder of DeepMind (acquired by Google) and now the head of Microsoft AI, offered a blueprint for an international initiative to regulate AI tech. Together with Ian Bremmer of Eurasia Group, he presented a model inspired by “the macroprudential role played by global financial institutions such as the Financial Stability Board, the Bank for International Settlements, and the International Monetary Fund.”

The model rests on two elements: first, a governing body would be created to regularly assess AI's impact; second, the US and China would find common ground and build guardrails overseen by third parties.

 
Geopolitical factors

Comparing AI systems to nuclear weapons is not a perfect analogy. But Suleyman and Bremmer wrote,

“AI systems are not only infinitely easier to develop, steal and copy than nuclear weapons; they are controlled by private companies, not governments.”

Source: Foreign Affairs

Such a global structure would certainly help prevent non-state actors from leveraging advanced AI models for unethical practices. Unfortunately, the states themselves are focused on preventing each other from achieving AI excellence, with the US, allied with the EU, trying to stop China from getting the latest tech. But is it possible to stop a country as powerful and resilient as China? Maybe we are living in our own delusional worlds.

At the end of the day it is business, and tech corporations will find a way to reach China given its vast market. And China, though not yet able to build the latest chips itself, will find a way forward through in-house research, given the level of technological expertise it already has.

This brings us to the point that swift regulation should be put in place, because alongside the threats, AI holds huge potential to ease our lives in the near future, from entertainment to workflow management and from medicine to space science. The possibilities are endless if AI is handled with care and diligence.

Original story at Bloomberg




