
Israeli Firm Attempted to Disrupt Indian Elections, Promoted Anti-BJP Agenda: OpenAI

June 1, 2024

New Delhi: OpenAI, the creator of ChatGPT, said it acted within 24 hours to disrupt the “deceptive” use of artificial intelligence (AI) in a covert operation that sought to influence the ongoing Lok Sabha elections.

This influence campaign, called “Zero Zeno”, was run by STOIC, a political campaign management firm in Israel.

The threat actors attempted to leverage OpenAI’s language models for tasks such as generating comments, articles and social media profiles that criticised the ruling BJP and praised the Congress, the company, led by CEO Sam Altman, said.

“In May, the network began generating comments that focused on India, criticised the ruling BJP party and praised the opposition Congress party. We disrupted some activity focused on the Indian elections less than 24 hours after it began,” OpenAI said.

OpenAI said it banned a cluster of accounts operated from Israel that were being used to generate and edit content for an influence operation that spanned X, Facebook, Instagram, websites, and YouTube.

“This operation targeted audiences in Canada, the United States and Israel with content in English and Hebrew. In early May, it began targeting audiences in India with English-language content,” the company said.

Responding to the report, the BJP called it a “dangerous threat” to democracy.

“It is absolutely clear and obvious that @BJP4India was and is the target of influence operations, misinformation and foreign interference, being done by and/or on behalf of some Indian political parties,” said Minister of State for Electronics and IT Rajeev Chandrasekhar.

“This is very dangerous threat to our democracy. It is clear vested interests in India and outside are clearly driving this and needs to be deeply scrutinized/investigated and exposed. My view at this point is that these platforms could have released this much earlier, and not so late when elections are ending,” he added.

OpenAI said it has disrupted five covert operations in the last three months that sought to use its models in support of deceptive activity across the internet. “Our investigations into suspected covert influence operations (IO) are part of a broader strategy to meet our goal of safe AI deployment,” the company said.

(Except for the headline, this story has not been edited by The Kashmir Monitor staff and is published from a syndicated feed.)
