Following DeepSeek's lead, OpenAI makes its reasoning model free

These types of models are most effective at solving complex problems, so if you have a PhD-level math problem you're cracking, you can try them out. Alternatively, if you've had trouble getting previous models to respond properly to your most advanced prompts, you may want to try this new reasoning model on them. To try out o3-mini, select "Reason" when you start a new prompt in ChatGPT.
Although reasoning models have new capabilities, they come at a cost. OpenAI's o1-mini is 20 times more expensive than its equivalent non-reasoning model, GPT-4o mini. The company says its new model, o3-mini, costs 63% less than o1-mini per input token. At $1.10 per million input tokens, however, it is still about seven times more expensive to run than GPT-4o mini.
This new model comes right after DeepSeek's R1 release shook the AI world less than two weeks ago. DeepSeek's new model performs as well as top OpenAI models, but the Chinese company claims it cost roughly $6 million to train, as opposed to the estimated cost of more than $100 million for training OpenAI's GPT-4. (It is worth noting that many people are questioning this claim.)
What's more, DeepSeek's reasoning model costs $0.55 per million input tokens, half the price of o3-mini, so OpenAI still has a way to go to bring down its costs. Reasoning models are also estimated to have much higher energy costs than other types, given the larger number of computations they perform to produce an answer.
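The price comparisons above are simple ratios of per-million-input-token rates. A minimal sketch to check the arithmetic; note that the $0.15 (GPT-4o mini) and $3.00 (o1-mini) figures are the published API rates at the time, implied by the article's "20 times" and "seven times" claims but not stated in it directly:

```python
# Per-million-input-token prices in USD. o3-mini ($1.10) and DeepSeek ($0.55)
# are quoted in the article; the other two are assumed from the stated ratios.
PRICES = {
    "gpt-4o-mini": 0.15,   # non-reasoning baseline
    "o1-mini": 3.00,       # "20 times more expensive" than GPT-4o mini
    "o3-mini": 1.10,       # quoted directly in the article
    "deepseek-r1": 0.55,   # "half the price of o3-mini"
}

def ratio(a: str, b: str) -> float:
    """How many times more expensive model `a` is than model `b`."""
    return PRICES[a] / PRICES[b]

print(f"o1-mini vs GPT-4o mini:      {ratio('o1-mini', 'gpt-4o-mini'):.0f}x")
print(f"o3-mini vs GPT-4o mini:      {ratio('o3-mini', 'gpt-4o-mini'):.1f}x")
print(f"o3-mini discount vs o1-mini: {1 - ratio('o3-mini', 'o1-mini'):.0%}")
print(f"DeepSeek vs o3-mini:         {ratio('deepseek-r1', 'o3-mini'):.2f}x")
```

Under these assumed rates, the numbers line up with the article: a 20x gap for o1-mini, roughly a 7x gap and a 63% per-token discount for o3-mini, and DeepSeek at exactly half of o3-mini's price.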
This new wave of reasoning models also presents new safety challenges. OpenAI used a technique called deliberative alignment to train its o-series models, basically having them reference OpenAI's internal policies at each step of their reasoning to make sure they aren't ignoring any rules.
But the company has found that o3-mini, like the o1 model, is significantly better than non-reasoning models at jailbreaking and "challenging safety evaluations." In other words, given its advanced capabilities, the reasoning model is much harder to control. o3-mini is the first model to score as "medium risk" on model autonomy, a rating given because it is better than previous models at specific coding tasks, indicating "greater potential for self-improvement and AI research acceleration," according to OpenAI. That said, the model is still bad at real-world research. If it were better at that, it would be rated as high risk, and OpenAI would restrict the model's release.