DeepSeek-R1 AI Model Now Available on Azure AI Foundry and GitHub

February 26, 2025

DeepSeek, a leading Chinese AI company, has announced that its DeepSeek-R1 AI model is now available on Microsoft's Azure AI Foundry platform and on GitHub, shortly after its app became the most downloaded on the App Store. The move is designed to bring cutting-edge AI into everyday business operations, driving both efficiency and innovation. DeepSeek-R1 is steadily gaining momentum, particularly in the U.S. financial sector, and its availability on these platforms is expected to broaden the adoption of advanced AI technologies.

DeepSeek-R1 is now part of a vast catalog of over 1,800 models in the Azure AI Foundry, offering developers and businesses a comprehensive resource for AI deployment. By adding DeepSeek-R1 to Azure AI Foundry, Microsoft demonstrates its dedication to supporting the integration of advanced AI with an emphasis on reliability, scalability, and safety. Additionally, the model’s presence on GitHub opens doors for collaborative development and innovation within the global tech community.

Integration into Azure AI Foundry and GitHub

One of the primary advantages of utilizing the DeepSeek-R1 model on Azure AI Foundry is the accelerated pace at which developers can experiment with and incorporate AI into their workflows. The platform’s robust infrastructure enables quick deployments while adhering to stringent service-level agreements (SLAs) and security standards. Furthermore, built-in assessment tools within Azure AI Foundry facilitate the efficient comparison of outputs, performance benchmarking, and the scaling of AI-driven applications.

To incorporate DeepSeek-R1 into your projects via Azure AI Foundry, start by logging into your Azure account and opening the model catalog. Search for DeepSeek-R1 within the catalog and open its model card. Next, click the Deploy button to generate an endpoint and API key, which can be used with various client applications. This straightforward process lets users put DeepSeek-R1's capabilities to work with minimal technical hurdles.
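As a rough illustration, the endpoint and API key produced by the deployment step can be called from any HTTP client. The URL path, `api-key` header, and payload shape below follow the usual Azure AI chat-completions convention and are assumptions rather than details from this article; the environment-variable names are hypothetical. A minimal stdlib-only sketch:

```python
import json
import os
import urllib.request

def build_chat_request(endpoint, api_key, prompt):
    """Build an HTTP request for a deployed chat model on Azure AI Foundry.

    The "/chat/completions" path and "api-key" header follow the common
    Azure AI inference convention (an assumption, not from this article).
    """
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }
    return urllib.request.Request(
        url=endpoint.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "api-key": api_key},
        method="POST",
    )

# Only send a real request when credentials are supplied; the variable
# names below are hypothetical placeholders.
endpoint = os.environ.get("AZURE_AI_ENDPOINT")
api_key = os.environ.get("AZURE_AI_KEY")
if endpoint and api_key:
    req = build_chat_request(endpoint, api_key, "Summarize DeepSeek-R1 in one sentence.")
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())
        print(reply["choices"][0]["message"]["content"])
```

Swapping in an SDK client is straightforward, but keeping the request construction separate from the network call makes the sketch easy to adapt and to test offline.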

Using the DeepSeek-R1 model on Copilot+ PCs involves downloading the AI Toolkit VS Code extension. By clicking the Download button, users can install the DeepSeek model locally on their devices. Copilot+ PCs will soon receive an optimized version of the model in the ONNX QDQ format, readily accessible from the AI Toolkit's model catalog. This format enables compatibility and enhanced performance on Copilot+ PCs' NPUs, ensuring efficient AI application execution. In addition to the ONNX format, users can also explore the cloud-hosted source model available in Azure AI Foundry.
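Once a distilled DeepSeek model is installed locally, an ONNX QDQ build is typically executed through ONNX Runtime, which can dispatch to the Copilot+ PC's NPU when the Qualcomm QNN execution provider is available and fall back to the CPU otherwise. The sketch below shows only that provider-selection logic; the opt-in flag and model filename are hypothetical placeholders, not official names:

```python
import os

def pick_providers(available):
    """Prefer the NPU-backed QNN execution provider, falling back to CPU."""
    preferred = ["QNNExecutionProvider", "CPUExecutionProvider"]
    chosen = [p for p in preferred if p in available]
    return chosen or ["CPUExecutionProvider"]

# Opt in explicitly, since this branch needs the onnxruntime package and a
# local model file; "RUN_LOCAL_DEEPSEEK" is a hypothetical flag.
if os.environ.get("RUN_LOCAL_DEEPSEEK"):
    import onnxruntime as ort

    providers = pick_providers(ort.get_available_providers())
    # "deepseek-r1-distill.onnx" is a placeholder path, not an official filename.
    session = ort.InferenceSession("deepseek-r1-distill.onnx", providers=providers)
```

Listing providers in preference order mirrors how ONNX Runtime itself falls through to the next provider when an operator cannot run on the NPU.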

Although DeepSeek's distilled Qwen 1.5B model cannot yet be mapped directly to the NPU because of its dynamic input shapes and behaviors, optimizations are underway to ensure compatibility. Microsoft's adoption of the ONNX QDQ optimized format underscores its commitment to scaling AI models across a range of NPUs. For those interested in running distilled versions of DeepSeek-R1 locally on Copilot+ PCs, step-by-step guidance is available on the official Microsoft website, helping users fully harness DeepSeek-R1's capabilities.
