Qualcomm Technologies, Inc. has announced the Qualcomm AI200 and AI250, accelerator cards and full racks purpose-built for AI inference in the data center. Built on Qualcomm's AI technology, the products are designed to deliver rack-scale performance and high memory capacity for fast, efficient AI inference.
The Qualcomm AI200 introduces a purpose-built rack-level AI inference solution designed for low total cost of ownership (TCO) and optimized performance on large language and multimodal models. Each accelerator card supports up to 768 GB of memory, providing the capacity to serve very large models at lower cost.
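To put the 768 GB figure in context, here is a rough back-of-the-envelope sketch; the model sizes and precisions below are illustrative assumptions, not Qualcomm specifications:

```python
# Illustrative only: rough weight-memory footprint for hypothetical model sizes.
# Ignores KV cache, activations, and runtime overhead, which add to the total.

BYTES_PER_PARAM = {"fp16": 2.0, "fp8": 1.0, "int4": 0.5}

def weight_memory_gb(num_params: float, precision: str) -> float:
    """Approximate memory needed just to hold the model weights, in GB."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

for params in (70e9, 180e9, 400e9):          # assumed parameter counts
    for precision in ("fp16", "fp8"):
        gb = weight_memory_gb(params, precision)
        print(f"{params/1e9:.0f}B params @ {precision}: ~{gb:,.0f} GB of weights")
```

Under these assumptions, a single 768 GB card could hold the weights of a model with several hundred billion parameters without sharding them across many devices, which is the capacity argument behind the cost claim.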
The Qualcomm AI250 will debut with a new memory architecture based on near-memory computing, delivering substantially higher effective memory bandwidth at much lower power consumption. This improves hardware utilization and efficiency while preserving the performance and cost characteristics customers expect for AI inference workloads.
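Effective memory bandwidth matters because, during autoregressive decoding, each generated token typically requires streaming the full set of model weights, so throughput is often bound by bandwidth rather than compute. A minimal sketch, using purely hypothetical bandwidth and model-size numbers (not Qualcomm figures):

```python
# Illustrative only: memory-bandwidth-bound decode throughput estimate.
# Assumes batch size 1 and that each output token requires reading all weights.

def tokens_per_second(bandwidth_gb_s: float, weight_gb: float) -> float:
    """Upper bound on single-stream decode rate when memory-bound."""
    return bandwidth_gb_s / weight_gb

weight_gb = 140.0  # e.g. a 70B-parameter model at FP16 (assumption)
for bandwidth in (1_000.0, 4_000.0, 10_000.0):  # hypothetical GB/s figures
    rate = tokens_per_second(bandwidth, weight_gb)
    print(f"{bandwidth:>8,.0f} GB/s -> ~{rate:.0f} tokens/s per stream")
```

Raising effective bandwidth lifts this ceiling directly, which is why a memory-architecture change can translate into better utilization and lower cost per generated token.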
Both the AI200 and AI250 racks feature direct liquid cooling for thermal efficiency, scale-up and scale-out connectivity, and confidential computing to keep AI workloads secure, with rack-level power consumption of up to 160 kW.
Durga Malladi, SVP & GM, Technology Planning, Edge Solutions & Data Center, Qualcomm Technologies, said: “With AI200 and AI250, we are redefining what is possible for AI in large data centers. These solutions help customers deploy advanced AI at lower cost while keeping their workloads secure and flexible. Our software stack makes it easy for companies to onboard and scale their AI models quickly.”
Qualcomm’s AI software stack spans from the application layer down to the system software layer and is optimized for AI inference. It supports leading machine-learning frameworks and simplifies model onboarding for developers, including easy deployment of models from Hugging Face. The stack also ships with tools, libraries, and services that help enterprises adopt and operationalize AI.
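The announcement does not document the stack’s deployment API, so the sketch below only shows the standard Hugging Face `transformers` flow that such seamless onboarding would wrap; the model name is an arbitrary public example.

```python
# Minimal sketch of the standard Hugging Face flow a deployment stack would wrap.
# Requires: pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # arbitrary small public model for illustration

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Explain rack-scale AI inference in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```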
Qualcomm AI200 and AI250 are expected to be commercially available in 2026 and 2027, respectively.