OpenAI, the developer of ChatGPT, reported that it intervened within 24 hours to halt the “deceptive” use of artificial intelligence in a covert operation aimed at influencing the ongoing Indian general elections.
The campaign, named “Zero Zeno”, was orchestrated by STOIC, a political campaign management firm based in Israel.
According to OpenAI, the threat actors used its language models to generate comments, articles, and social media profiles that criticised the ruling BJP and praised the opposition Congress party. The disclosure came in the company's report on covert influence operations.
“In May, the network began generating comments that focused on India, criticised the ruling BJP party and praised the opposition Congress party. We disrupted some activity focused on the Indian elections less than 24 hours after it began,” OpenAI said.
OpenAI reported that it banned a group of accounts based in Israel that were being used to create and modify content for an influence operation across X, Facebook, Instagram, various websites, and YouTube.
“This operation targeted audiences in Canada, the United States and Israel with content in English and Hebrew. In early May, it began targeting audiences in India with English-language content,” the company said.
Responding to the report, the BJP described the operation as a “dangerous threat” to democracy.
“It is absolutely clear and obvious that @BJP4India was and is the target of influence operations, misinformation and foreign interference, being done by and/or on behalf of some Indian political parties,” said Minister of State for Electronics and IT Rajeev Chandrasekhar.
“This is very dangerous threat to our democracy. It is clear vested interests in India and outside are clearly driving this and needs to be deeply scrutinized/investigated and exposed. My view at this point is that these platforms could have released this much earlier, and not so late when elections are ending,” he added.
OpenAI announced that it had disrupted five covert influence operations in the past three months that attempted to use its models to support deceptive activity across the internet.
“Our investigations into suspected covert influence operations (IO) are part of a broader strategy to meet our goal of safe AI deployment,” it said.