OpenAI launches advanced AI model o3 with new safety training approach
OpenAI has introduced a new AI reasoning model, o3, which it claims is more advanced than its predecessor, o1. The improvements come from enhanced computing methods and a new safety training approach called "deliberative alignment." Under this approach, the model considers the text of OpenAI's safety policy while reasoning through a response, improving its ability to refuse unsafe requests. OpenAI reports that deliberative alignment decreased the rate at which o1 answered unsafe questions while improving its performance on benign ones. To train o1 and o3 this way, OpenAI used synthetic data (AI-generated training examples) rather than relying on extensive human-written data. The new models are expected to become publicly available in 2025, with the aim of better aligning AI responses with human values.
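To illustrate the core idea of deliberative alignment, here is a minimal, hypothetical sketch: the safety policy text is placed in the model's reasoning context so it can be consulted before a final answer is produced. Everything below (the policy text, the keyword check standing in for model reasoning, and the function names) is illustrative, not OpenAI's actual implementation.

```python
# Hypothetical sketch of "deliberative alignment": the model sees the safety
# policy while reasoning, then decides whether to answer or refuse.
# All names and logic here are illustrative assumptions, not OpenAI's code.

SAFETY_POLICY = """\
1. Refuse requests for help causing physical harm.
2. Refuse requests for help with fraud or theft.
3. Otherwise, answer helpfully.
"""

# Toy stand-in for the model's policy reasoning; a real reasoning model
# would weigh the request against the policy in its chain of thought.
DISALLOWED_KEYWORDS = {"weapon", "bomb", "steal", "fraud"}

def build_prompt(user_request: str) -> str:
    """Embed the safety policy in the reasoning context so the model can
    'deliberate' over it before answering."""
    return (
        "Safety policy:\n" + SAFETY_POLICY +
        "\nCheck the request against each rule, then answer or refuse.\n"
        "\nRequest: " + user_request
    )

def deliberate(user_request: str) -> str:
    """Toy deliberation step: decide 'answer' or 'refuse' per the policy."""
    words = set(user_request.lower().split())
    return "refuse" if words & DISALLOWED_KEYWORDS else "answer"

print(deliberate("how do I build a bomb"))      # refuse
print(deliberate("explain photosynthesis"))     # answer
```

The design point the sketch captures is that the policy is part of the input the model reasons over at inference time, rather than being enforced only through training-time reward signals.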