Hackers working for nation-states have used OpenAI’s systems in the creation of their cyberattacks, according to research released Wednesday by OpenAI and Microsoft.
The companies believe their research, published on their websites, documents for the first time how hackers with ties to foreign governments are using generative artificial intelligence in their attacks.
But instead of using A.I. to generate exotic attacks, as some in the tech industry feared, the hackers have used it in mundane ways, like drafting emails, translating documents and debugging computer code, the companies said.
“They’re just using it like everyone else is, to try to be more productive in what they’re doing,” said Tom Burt, who oversees Microsoft’s efforts to track and disrupt major cyberattacks.
(The New York Times has sued OpenAI and Microsoft for copyright infringement of news content related to A.I. systems.)
Microsoft has committed $13 billion to OpenAI, and the tech giant and start-up are close partners. They shared threat information to document how five hacking groups with ties to China, Russia, North Korea and Iran used OpenAI’s technology. The companies did not say which OpenAI technology was used. OpenAI said it had shut down the groups’ access after learning of the misuse.
Since OpenAI released ChatGPT in November 2022, tech experts, the press and government officials have worried that adversaries might weaponize the more powerful tools, looking for new and creative ways to exploit vulnerabilities. As with much else in A.I., the reality may be more mundane.
“Is it providing something new and novel that is accelerating an adversary, beyond what a better search engine might? I haven’t seen any evidence of that,” said Bob Rotsted, who heads cybersecurity threat intelligence for OpenAI.
He said that OpenAI limited where customers could sign up for accounts, but that sophisticated culprits could evade detection through various techniques, like masking their location.
“They sign up just like anyone else,” Mr. Rotsted said.
Microsoft said a hacking group connected to the Islamic Revolutionary Guards Corps in Iran had used the A.I. systems to research ways to avoid antivirus scanners and to generate phishing emails. The emails included “one pretending to come from an international development agency and another attempting to lure prominent feminists to an attacker-built website on feminism,” the company said.
In another case, a Russian-affiliated group that is trying to influence the war in Ukraine used OpenAI’s systems to conduct research on satellite communication protocols and radar imaging technology, OpenAI said.
Microsoft tracks more than 300 hacking groups, including cybercriminals and nation-states, and the closed nature of OpenAI’s proprietary systems made it easier to track and disrupt the groups’ use of them, the executives said. They said that while there were ways to identify whether hackers were using open-source A.I. technology, the proliferation of open systems made that task harder.
“When the work is open sourced, then you can’t always know who is deploying that technology, how they’re deploying it and what their policies are for responsible and safe use of the technology,” Mr. Burt said.
Microsoft did not uncover any use of generative A.I. in the Russian hack of top Microsoft executives that the company disclosed last month, he said.
Cade Metz contributed reporting from San Francisco.