OpenAI is recruiting a new Head of Preparedness, a senior role focused on mitigating risks tied to advanced AI systems. The position reportedly pays roughly $555,000 a year plus equity, a package that underscores how seriously the company says it is taking emerging safety concerns.

OpenAI CEO Sam Altman publicly acknowledged the opening in a social media post, describing the job as demanding and warning that it would be "stressful." According to the listing, the role is responsible for reducing harms linked to AI capabilities, including risks to mental health, cybersecurity, and biological misuse.

The hiring push comes as AI systems become more capable and harder to predict. Altman said models are now powerful enough to create "real challenges," especially in abuse scenarios where attackers could repurpose AI tools faster than defenses can adapt.

Preparedness work at OpenAI is not new, but the role has shifted. The company's previous head of preparedness, Aleksander Madry, was reassigned last year to a position focused on AI reasoning, with safety becoming a secondary responsibility. The new opening restores preparedness as a standalone executive function.

The timing is notable. A growing number of companies are flagging AI-related reputational and operational risks in regulatory filings, and OpenAI itself has acknowledged that some upcoming models could pose elevated cybersecurity threats. The company has said it is expanding monitoring systems and training models to refuse requests that could compromise security.

OpenAI has also faced public scrutiny over mental health concerns. The company has been named in multiple lawsuits alleging harmful interactions with ChatGPT, and external investigations have documented cases where users experienced severe distress during extended conversations with the system. OpenAI has responded by updating how ChatGPT handles sensitive topics, adding crisis resources, and funding research into AI and mental health.

For OpenAI, the Head of Preparedness role is meant to sit at the intersection of technical capability and real-world impact. The position is tasked with anticipating how new systems might be misused and shaping release strategies that limit harm without halting development entirely.

The role does not signal a slowdown in OpenAI's ambitions. Instead, it reflects an acknowledgment that as AI systems approach higher levels of autonomy and reasoning, safety planning must move earlier in the development process and carry more authority.