January 12, 2026

  Analyst Report

Foundry Gets New Name, Anthropic Models


1,407 words · Time to read: 8 min
by Greg DeMichillie


  • Azure AI Foundry is now called Microsoft Foundry.
  • Although Microsoft is an investor in OpenAI, Foundry now supports models from competitors such as Anthropic.
  • Foundry IQ, one of the new features, aims to provide a unified layer for AI agents to access enterprise data sources securely.

Azure AI Foundry, Microsoft’s Web-based tool for professional developers building AI-powered applications and agents, is getting a series of improvements along with a new name: Microsoft Foundry. The new features aim to broaden the scope of Foundry by supporting large language models (LLMs) from companies other than OpenAI and by making it easier to integrate enterprise data with LLM-based apps. A single tool set that works regardless of which model is used will make Foundry more attractive to developers and more valuable to organizations. The new name may signal Microsoft’s long-term intent to expand beyond Azure, but in the medium term, it’s just another in a long line of product renames that cause customer confusion.

Yet Another Rename

This is not the first time Foundry has been renamed. It was originally introduced as Azure AI Studio in Nov. 2023, renamed to Azure AI Foundry in Nov. 2024, and renamed again to Microsoft Foundry in Nov. 2025. Microsoft has a long history of overlapping and changing product names, particularly in fast-moving areas such as AI. (See “Related Copilot Terms” for just a few.) But even setting that aside, customers would be forgiven for thinking that a product getting a new name every year might indicate the lack of a long-term product strategy.

In Microsoft’s defense, the entire tech industry is scrambling to harness the potential of AI and LLMs and, in the long run, capabilities will matter more than how many names a product went through. But in the medium term, AI is complicated enough for customers to keep on top of and constant renaming doesn’t help. Nevertheless, customers should expect Microsoft to continue to experiment with products, names, and features until it finds a combination that works in the market.

Going Beyond OpenAI

Beyond the rename, Foundry has also gained important new capabilities. Most notably, it now supports popular models from Anthropic, currently in preview. The new models are:

  • Claude Haiku, the fastest and lowest-cost model, designed for quick tasks that require less reasoning
  • Claude Sonnet, a model particularly well-regarded for its ability to perform software coding tasks
  • Claude Opus, Anthropic’s most powerful model

As with other AI models, pricing is based on the number of tokens processed, both input and output. (For more information, see “Understanding Tokens and Context Windows.”) Pricing varies by model, but Opus is significantly more expensive, as shown in the following table (all prices are in USD).

Model                    Price per 1M input tokens    Price per 1M output tokens
Haiku 4.5                $1                           $3
Sonnet 4.5               $3                           $15
Opus 4.1                 $15                          $75
GPT 5.2 (from OpenAI)    $1.75                        $14
Table shows the cost of various Anthropic models compared with the latest OpenAI model. Note that Microsoft offers many variations of OpenAI’s models, with price varying by size and location.
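To see what these per-million-token prices mean for an individual request, the arithmetic can be sketched as below. The prices are the preview list prices from the table above and may change; the model labels are illustrative shorthand, not official API identifiers.

```python
# Estimate per-request cost from published per-1M-token prices (USD).
# Model labels are shorthand for this sketch, not real API model IDs.
PRICES_PER_M = {               # (input price, output price) per 1M tokens
    "haiku-4.5":  (1.00, 3.00),
    "sonnet-4.5": (3.00, 15.00),
    "opus-4.1":   (15.00, 75.00),
    "gpt-5.2":    (1.75, 14.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single model call."""
    in_price, out_price = PRICES_PER_M[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# A request with 10,000 input tokens and 2,000 output tokens:
print(f"{estimate_cost('sonnet-4.5', 10_000, 2_000):.2f}")  # 0.06
print(f"{estimate_cost('opus-4.1', 10_000, 2_000):.2f}")    # 0.30
```

At these rates, the same call costs five times as much on Opus as on Sonnet, which is why matching the model to the task matters at scale.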

Adding Sonnet makes Foundry more appealing in a segment where AI is already showing tangible value—helping developers write and understand application code. Besides delivering new features, the addition of Anthropic is important because it shows Microsoft expanding beyond OpenAI, despite having invested over US$13 billion in the company. It also comes at a time when OpenAI’s lead in overall model capability appears to be slipping. Claude Sonnet is often cited by developers as being able to produce high-quality, production-ready code, while Google’s Gemini appears to be leading in so-called multimodal tasks (tasks that involve text, image, and video).

For customers, having a single tool set that can be used across models makes Foundry an attractive way to develop agents and applications using whichever model makes sense for a specific use case.
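The “whichever model makes sense” pattern can be sketched as a thin routing layer. Everything here is an assumption for illustration: the model names, the routing categories, and the `call_model` stub stand in for a real inference client, not any actual Foundry API.

```python
# Illustrative sketch: route each task category to the model best suited
# for it, the kind of pattern a single cross-model tool set enables.
# Model names and the call_model stub are assumptions, not Foundry APIs.
ROUTES = {
    "coding":  "claude-sonnet-4.5",   # well-regarded for production code
    "quick":   "claude-haiku-4.5",    # fast, low-cost tasks
    "deep":    "claude-opus-4.1",     # hardest reasoning tasks
    "general": "gpt-5.2",             # default general-purpose model
}

def call_model(model: str, prompt: str) -> str:
    # Stub standing in for a real inference client call.
    return f"[{model}] response to: {prompt}"

def run(task_kind: str, prompt: str) -> str:
    model = ROUTES.get(task_kind, ROUTES["general"])
    return call_model(model, prompt)

print(run("coding", "Refactor this function"))
```

The point of the sketch is that the application code above the routing table never changes when a new model is added, which is the portability argument for a single tool set.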

One important caveat for customers outside the United States: Unlike the OpenAI models, Anthropic services are available only through network endpoints in the East US 2 and West US regions. This may change as the service becomes generally available, but until then, developers outside the U.S. must carefully consider the additional latency when selecting an AI model.

Foundry IQ: Simplifying Access to Enterprise Data

Foundry IQ, a new component in Foundry, aims to eliminate the complex effort of connecting AI applications to corporate data to help enterprises develop solutions faster.

For enterprises, the real value of AI doesn’t appear until they are able to infuse AI systems with their own corporate data. Even relatively simple use cases, such as summarizing documents, become more valuable when systems understand a company’s business, customers, and markets.

The most common way of combining LLMs with private data is retrieval-augmented generation (RAG). But RAG applications can be hard to build. They require specific development skills as well as new types of databases not normally found in enterprises, such as vector databases. In addition, developers must ensure that RAG applications respect data access and security rules to avoid accidentally showing a user information they should not have access to.
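The retrieve-then-generate shape of a RAG application can be shown in a few lines. This is a self-contained toy: real systems use learned embeddings and a vector database, whereas here a bag-of-words vector and cosine similarity stand in so the pattern runs without external services.

```python
# Minimal sketch of the RAG pattern: retrieve the most relevant private
# documents, then augment the prompt before it goes to the LLM.
# Toy embedding (word counts) stands in for real model embeddings.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

DOCS = [
    "Q3 revenue grew 12 percent driven by cloud services",
    "The onboarding checklist covers badge access and laptop setup",
    "Vector databases store embeddings for similarity search",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    # The augmented prompt, not the raw question, is what the LLM sees.
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How did revenue grow last quarter?"))
```

The hard parts the article describes, such as production-grade indexing, vectorization, and access control, live in the retrieval step, which is exactly the layer Foundry IQ proposes to centralize.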

Foundry IQ, currently in preview, hopes to solve this problem. Inside every AI agent or application, there is a layer of software that interacts with corporate data sources, helping to organize the data and present it in a way that LLMs can work with. Foundry IQ aims to replace all of that with a centralized set of “knowledge bases”, which are reusable collections of data that are ready to be incorporated into AI applications. The knowledge base then takes care of difficult but common tasks such as indexing and vectorization. By eliminating much of this complex and error-prone, yet critically important, glue code, Foundry IQ could help enterprises develop AI applications significantly faster. 

Critically, Microsoft promises that Foundry IQ will respect existing permissions, ensuring that AI applications don’t accidentally leak sensitive information. But that is easier said than done. Corporate data lives in many places with many formats and many access controls. Challenges will include:

  • Identifying all the relevant corporate data to be used with LLMs
  • Cleaning up inconsistent schema
  • Defining the meaning of that data in a way the LLM can understand
  • Maintaining all this as the business changes, old systems are retired, and new systems are brought on 
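The permission-respecting behavior Microsoft promises amounts to filtering retrieved documents against the caller’s existing access controls before anything reaches the model. The sketch below makes that concrete; the group-based ACL model is a deliberate simplification of real enterprise permissions, which span many systems and formats.

```python
# Sketch of permission-aware retrieval: filter documents against the
# caller's ACLs *before* the LLM sees them. The group-based ACL model
# here is a simplification for illustration.
DOCS = [
    {"id": "earnings-draft", "text": "Unreleased Q4 numbers...",
     "allowed": {"finance"}},
    {"id": "handbook", "text": "Expense policy...",
     "allowed": {"finance", "everyone"}},
]

def retrieve_for(user_groups: set[str], query: str) -> list[dict]:
    # Permission filter first; relevance ranking would follow in a
    # real system.
    return [d for d in DOCS if d["allowed"] & user_groups]

# An employee outside finance never sees the earnings draft, so the
# model cannot leak it no matter how it is prompted.
visible = retrieve_for({"everyone"}, "What were Q4 earnings?")
print([d["id"] for d in visible])  # ['handbook']
```

Doing this filtering in one centralized layer, rather than in every application’s glue code, is the core of the Foundry IQ pitch, and also why the challenges listed above (finding, cleaning, and maintaining the data and its permissions) are so consequential.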

In addition to integrating with services built using the open standard Model Context Protocol (MCP), Foundry IQ currently supports several common Microsoft data sources:

  • Microsoft 365 SharePoint
  • Fabric
  • OneLake
  • Azure Blob Storage
  • Azure AI Search

Over time, Microsoft will certainly expand the list to include more relational data sources, such as Azure SQL and Cosmos DB.

Although Foundry IQ tries to address a real problem in AI development, this entire space is evolving rapidly. There are no guarantees that newer approaches won’t come along and cause further disruption. MCP appears to be gaining traction with support from Google, Microsoft, AWS, and others. But it was introduced by Anthropic only in Nov. 2024, adopted by Microsoft just four months later in Mar. 2025, and is now a foundational element of Foundry IQ despite being barely a year old.

Directions Recommends

Take a look at Anthropic. The addition of Anthropic brings more leading-edge models to Foundry and shows Microsoft’s willingness to bring whatever models customers demand, regardless of their business relationship with OpenAI. 

Keep an eye on Foundry IQ. The entire space is evolving rapidly, but the problem Foundry IQ aims to solve is real. If it is successful, it could make it significantly easier for enterprises to develop AI applications.

Be prepared for more churn. Like many tech companies, Microsoft is experimenting rapidly to find the right combination of products, pricing, and features to unlock the value of AI for enterprise customers. This process will continue to be messy with overlapping product launches, followed by renaming and consolidation. 

Resources

For a deeper look at Foundry and its features, see the Directions report “AI Foundry: New Hub for AI Application Development.”

For more information on Model Context Protocol, see the Directions report “Emerging Protocols Could Standardize AI Integration.”

For a discussion of Microsoft’s other IQ initiatives, see the blog post “CIO Talk: Microsoft Gets IQ.” 

Greg brings with him over two decades of engineering, product, and GTM experience. He has held leadership positions at premier tech companies such as Microsoft, Google, AWS, and Adobe.