
OpenAI is hiring a Head of Preparedness. Or, in other words, someone whose primary job is to think about all the ways AI could go horribly, horribly wrong. In a post on X, Sam Altman announced the position by acknowledging that the rapid improvement of AI models poses "some real challenges." The post goes on to specifically call out the potential impact on people's mental health and the dangers of AI-powered cybersecurity weapons.
The job listing says the person in the role would be responsible for:
"Tracking and preparing for frontier capabilities that create new risks of severe harm. You will be the directly responsible leader for bui …