Reflections from Columbia Institute of Global Politics Summit on "AI’s Impact on the 2024 Global Elections":

Computers don't make decisions. Humans make decisions and those decisions are amplified by AI.
(From left) Gillian Tett, Columnist and Editorial Board, Financial Times (moderator); Maria Ressa, Nobel Peace Prize-winning journalist, cofounder, CEO, and President of Rappler, and IGP Carnegie Distinguished Fellow; Věra Jourová, Vice-President for Values and Transparency, European Commission; and Hillary Rodham Clinton, Professor of International and Public Affairs, Columbia University, 67th Secretary of State and former Senator from New York, and IGP Faculty Advisory Board Chair

I was quite moved by former Secretary of State Hillary Clinton's personal account of her experience with misinformation during the 2016 election. Her warning that the advancement of artificial intelligence (AI) will make that experience "look primitive" set the tone for the conference. She recounted her experience while speaking on a panel of women leaders, all of whom have been subjected to trolling and who described the horrors of being threatened online.

"I don't think any of us understood it. I did not understand it. I can tell you, my campaign did not understand it. Their, you know, the so-called ‘Dark Web’ was filled with these kinds of memes and stories and videos of all sorts…portraying me in all kinds of… less than flattering ways," Clinton said. "And we knew something's going on, but we didn't understand the full extent of the very clever way in which it was insinuated into social media."

Some other takeaways from the Columbia Institute of Global Politics Summit on "AI’s Impact on the 2024 Global Elections":

• In the conversation on generative AI and the global election cycle, we need to think critically about "free speech" and understand its limitations, especially regarding content produced by generative AI.

• Ultimately, the regulations that govern what is illegal offline need to be applied to what is illegal online.

• Tech companies' algorithms favor extreme content to boost engagement (and thus profits), and these companies need to be held accountable for the damage caused by mass exposure to disinformation.

A brilliant conversation among thought leaders in the field.
