OpenAI has released two open-weight language models built for advanced reasoning and efficient enough to run on consumer hardware, marking the company's first open-weight release since GPT-2 in 2019.
The larger model, gpt-oss-120b, can run on a single GPU, while the compact gpt-oss-20b runs directly on a personal computer. OpenAI says both models match the performance of its proprietary o3-mini and o4-mini, excelling in coding, competition math and health-related queries.
"One of the things that is unique about open models is that people can run them locally," said OpenAI co-founder Greg Brockman. Because the trained parameters are provided, developers can fine-tune the models and deploy them behind their own firewalls without needing access to the original training data.
Open-weight models differ from fully open-source AI, which also shares source code and training methods. This approach strikes a balance between accessibility and practicality, empowering startups, researchers and independent developers to explore AI capabilities without heavy infrastructure.
The release comes as competition in open AI models heats up. Meta's Llama models once led the pack until DeepSeek, from the Chinese mainland, released a cost-effective reasoning model earlier this year. OpenAI's new offerings aim to narrow the gap and fuel innovation across global tech hubs.
Trained on a text-only dataset focused on science, math and coding, these models support versatile, real-world applications. While direct benchmarks against rivals like DeepSeek-R1 are pending, OpenAI's claims suggest a milestone for portable AI.
With free access to these advanced models, young professionals, students and entrepreneurs worldwide can experiment and build on cutting-edge AI, unlocking fresh opportunities for collaboration, creativity and growth.
Reference(s):
"OpenAI releases free, downloadable models in competition catch-up," cgtn.com