
Hackers infiltrate OpenAI, fueling security and geopolitical concerns


A security breach at OpenAI has exposed just how lucrative a target AI companies are for hackers.

The breach, which occurred early last year and was recently reported by the New York Times, involved a hacker gaining access to the company’s internal messaging systems. 

The hacker lifted details from employee discussions about OpenAI’s latest technologies. Here’s what we know:

  • The breach occurred early last year and involved a hacker accessing OpenAI’s internal messaging systems.
  • The hacker infiltrated an online forum where OpenAI employees openly discussed the company’s latest AI technologies and developments.
  • The breach exposed internal discussions among researchers and employees but did not compromise the code behind OpenAI’s AI systems or any customer data.
  • OpenAI executives revealed the incident to employees during an all-hands meeting at the company’s San Francisco offices in April 2023 and informed its board of directors.
  • The company chose not to disclose the breach publicly, as it believed that no information about customers or partners had been stolen and that the hacker was a private individual with no known ties to a foreign government.
  • Leopold Aschenbrenner, a former OpenAI technical program manager, sent a memo to the company’s board of directors following the breach, arguing that OpenAI was not doing enough to prevent foreign governments from stealing its secrets.
  • Aschenbrenner, who claims he was fired for leaking information outside the company, stated in a recent podcast that OpenAI’s security measures were insufficient to protect against foreign actors’ theft of key secrets.
  • OpenAI has disputed Aschenbrenner’s characterization of the incident and its security measures, stating that his concerns did not lead to his separation from the company.

Who is Leopold Aschenbrenner?

Leopold Aschenbrenner is a former safety researcher on OpenAI’s superalignment team. That team, which focused on the long-term safety of advanced artificial general intelligence (AGI), recently fell apart when several high-profile researchers left OpenAI.

Among them was co-founder Ilya Sutskever, who recently formed a new company named Safe Superintelligence Inc.

Aschenbrenner penned an internal memo last year detailing his concerns about OpenAI’s security practices, which he described as “egregiously insufficient.”

He circulated the memo among reputable experts outside the company. Weeks later, OpenAI suffered the data breach, so he shared an updated version with board members. Shortly after, he was fired from OpenAI.

“What might also be helpful context is the kinds of questions they asked me when they fired me… the questions were about my views on AI progress, on AGI, the appropriate level of security for AGI, whether the government should be involved in AGI, whether I and the superalignment team were loyal to the company, and what I was up to during the OpenAI board events,” Aschenbrenner revealed in a podcast.

“Another example is when I raised security issues—they would tell me security is our number one priority,” Aschenbrenner stated. “Invariably, when it came time to invest serious resources or make trade-offs to take basic measures, security was not prioritized.”

OpenAI has disputed Aschenbrenner’s characterization of the incident and its security measures. “We appreciate the concerns Leopold raised while at OpenAI, and this did not lead to his separation,” responded Liz Bourgeois, an OpenAI spokeswoman.

“While we share his commitment to building safe A.G.I., we disagree with many of the claims he has since made about our work.”

AI companies are becoming prime hacking targets

AI companies are an undeniably attractive target for hackers because of the colossal volume of valuable data they hold.

This data falls into three main categories: high-quality training datasets, user interaction records, and sensitive customer information.

Just consider the value of any one of those categories.

For starters, training data is the new oil. While it’s relatively easy to retrieve from public datasets like LAION, that data has to be checked, cleaned, and augmented, which is highly labor-intensive.

AI companies sign huge contracts with data firms operating primarily in Africa, Asia, and South America, whose employees process and refine vast quantities of data.
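
As a rough illustration of what that cleaning work involves, here is a minimal sketch that filters hypothetical LAION-style image-text records. The `url` and `caption` field names are assumptions made for the example, not the dataset’s actual schema, and real pipelines go much further, deduplicating images, scoring caption-image relevance, and screening unsafe content.

```python
import re

# Minimal sketch of dataset cleaning, assuming each record is a dict with
# hypothetical "url" and "caption" fields, roughly like a LAION-style
# image-text pair. Illustrative only, not a production pipeline.

MIN_CAPTION_WORDS = 3
URL_PATTERN = re.compile(r"^https?://", re.IGNORECASE)

def clean_records(records):
    seen_captions = set()
    for record in records:
        url = (record.get("url") or "").strip()
        caption = (record.get("caption") or "").strip()

        # Drop records with missing or malformed URLs.
        if not URL_PATTERN.match(url):
            continue
        # Drop very short or boilerplate captions ("IMG_1234", "photo", ...).
        if len(caption.split()) < MIN_CAPTION_WORDS:
            continue
        # Drop exact-duplicate captions to reduce redundancy.
        key = caption.lower()
        if key in seen_captions:
            continue
        seen_captions.add(key)

        yield {"url": url, "caption": caption}

if __name__ == "__main__":
    sample = [
        {"url": "https://example.com/cat.jpg", "caption": "A tabby cat sleeping on a windowsill"},
        {"url": "not-a-url", "caption": "A tabby cat sleeping on a windowsill"},
        {"url": "https://example.com/img.jpg", "caption": "IMG_1234"},
    ]
    print(list(clean_records(sample)))  # only the first record survives
```

Even this toy version shows why the work doesn’t scale for free: every extra quality rule is more code, more review, and more human judgment applied across billions of records.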

Then, we’ve got to consider the data AI companies collect from their tools – the user records. 

These records are particularly risky where business data is shared with AI tools, especially code and other forms of intellectual property.

A recent cybersecurity report found that over half of people’s interactions with chatbots like ChatGPT include sensitive, personally identifiable information (PII). Another found that 11% of employees share confidential business information with ChatGPT.
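
To see why those numbers matter, consider a deliberately naive sketch of scrubbing a prompt before it is sent to a chatbot. The regexes and the redact() helper are illustrative assumptions, not a vetted data-loss-prevention tool, and they also show the limits of the approach: emails and phone numbers are easy to catch, while names, addresses, and proprietary code generally are not.

```python
import re

# Naive pre-submission PII scrubbing. Patterns are illustrative only and will
# miss most sensitive content (names, addresses, source code, trade secrets).

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\b(?:\+?\d{1,3}[\s.-]?)?(?:\(?\d{3}\)?[\s.-]?)\d{3}[\s.-]?\d{4}\b")

def redact(prompt: str) -> str:
    # Replace obvious identifiers with placeholders before the prompt leaves
    # the organization's boundary.
    prompt = EMAIL.sub("[EMAIL]", prompt)
    prompt = PHONE.sub("[PHONE]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Email jane.doe@acme.com or call 415-555-0123 about the Q3 forecast."
    print(redact(raw))
    # -> "Email [EMAIL] or call [PHONE] about the Q3 forecast."
```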

Plus, as more businesses integrate AI tools into their operations, they often need to grant those tools access to internal databases, further escalating the risk.

As the AI arms race intensifies, with countries like China rapidly closing the gap with the US, the risks associated with AI technology will only continue to expand. 

Beyond these whispers from OpenAI, we’ve not seen evidence of any high-profile breaches yet, but it’s probably only a matter of time. 
