Top 10 Copilot Studio updates in October 2025 (and why they matter)


In October 2025, Microsoft Copilot Studio introduced a series of powerful updates that push the boundaries of what enterprise AI agents can accomplish. These enhancements range from major new capabilities, such as allowing AI agents to visually interact with software or chat on WhatsApp, to numerous quality-of-life improvements that help those who build and manage these agents work more efficiently. In this article, we highlight the top Copilot Studio improvements of October 2025 (ordered by overall impact) and break down how each update can benefit makers, end-users, and organisations alike.

 

Automate UI Tasks with Computer Use (Public Preview)

Copilot Studio’s new “computer use” capability allows an AI agent to perform actions in an app’s interface, such as navigating a web dashboard, much like a human user.

Copilot agents can now operate apps and websites through a virtual mouse and keyboard, thanks to the new computer use feature (currently in public preview). This means if you describe a task in natural language, the agent can complete it by clicking buttons, typing text, and navigating the UI of software, even when no API or official integration is available. It’s a game-changer for automating processes like data entry, report generation, or information gathering in legacy and third-party applications where automation wasn’t previously possible.

The public preview comes with several enhancements that make UI automation enterprise-ready. For example, it includes a cloud-hosted browser (powered by Windows 365) so agents can run web tasks without the maker having to configure a local machine. It also provides ready-made workflow templates to help makers get started quickly on common UI tasks. Importantly, Copilot Studio offers secure credential management for logging into applications, plus allow-list controls so admins can restrict agents to interacting only with approved sites and software. These safeguards help keep automated UI actions secure, resilient, and compliant in enterprise scenarios. Because computer use leverages built-in vision and reasoning, the agent can even adapt if an interface changes, making the automation robust against minor UI updates. In short, computer use dramatically broadens the scope of tasks Copilot agents can handle, allowing organisations to automate virtually any software workflow with confidence in security and reliability.

 

WhatsApp Channel for Copilot Studio (General Availability)

With WhatsApp support now generally available, organisations can deploy Copilot agents to chat with users on the world’s most popular messaging app.

Microsoft has officially launched the WhatsApp channel for Copilot Studio, which is now generally available. WhatsApp is the world’s most-used messaging platform, with over 2.7 billion users globally, so this integration allows companies to reach customers on a channel they use every day. Copilot Studio also stands out as the only enterprise-grade AI agent platform with native WhatsApp deployment – in just a few clicks, makers can publish an AI agent to a WhatsApp phone number and start engaging users with rich, interactive chats.

This update significantly improves customer engagement and convenience. Users can now interact with AI assistants in a familiar, trusted chat app, removing friction from support or service experiences. By meeting customers where they already are, organisations can accelerate time-to-market for new bot services and strengthen customer relationships through seamless, personalised conversations. On the back end, the WhatsApp channel supports enterprise needs like phone-number-based user authentication and secure media messaging, so companies can expand into this channel without compromising on compliance or privacy.

Overall, having WhatsApp as a native channel greatly expands an agent’s reach and impact, enabling AI-powered assistance in one of the most preferred communication platforms worldwide.

 

Test and Enrich Prompts in the Prompt Builder (Preview)

Copilot Studio’s prompt design tool received a major upgrade with new capabilities to systematically test and improve prompts (now in preview). In the past, tuning an AI prompt often involved trial and error; now makers have built-in tools to validate and refine prompts more rigorously. The prompt builder allows you to create bulk prompt test cases – you can upload a list of sample user inputs, auto-generate variations, pull in real user queries from logs, or write cases manually.

For each test case, you can define custom success criteria (e.g. does the answer use a friendly tone? include a key phrase? follow a JSON format), covering what matters most for your scenario. The studio then runs the agent against all these cases and provides accuracy scores and detailed results per case, so you can quickly pinpoint which prompts or responses need work.
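
To make the idea concrete, here is a minimal sketch of what bulk test cases with custom success criteria amount to, written as plain Python rather than in the Copilot Studio test interface; `run_agent`, the sample inputs, and the criteria are all hypothetical stand-ins for your own agent and checks.

```python
import json

def run_agent(user_input: str) -> str:
    # Hypothetical stand-in: replace with a call to your agent's test endpoint.
    return '{"status": "placeholder answer about our refund policy"}'

def is_valid_json(text: str) -> bool:
    """One possible success criterion: does the answer follow a JSON format?"""
    try:
        json.loads(text)
        return True
    except ValueError:
        return False

# Each test case pairs a sample user input with the criteria that matter for
# your scenario (tone, key phrases, output format, and so on).
test_cases = [
    {
        "input": "What is your refund policy?",
        "criteria": [
            lambda answer: "refund" in answer.lower(),  # mentions the topic
            lambda answer: len(answer) < 800,           # stays concise
        ],
    },
    {
        "input": "Return my order details as JSON.",
        "criteria": [is_valid_json],
    },
]

def evaluate(cases) -> float:
    """Run every case and return an overall accuracy score (0.0 to 1.0)."""
    passed = sum(
        all(check(run_agent(case["input"])) for check in case["criteria"])
        for case in cases
    )
    return passed / len(cases)

print(f"Accuracy: {evaluate(test_cases):.0%}")
```

In Copilot Studio itself you define the cases and criteria in the prompt builder and the accuracy scores are computed for you; the sketch only shows the shape of the evaluation.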

Another enhancement is that the prompt builder now supports Power Fx formulas directly within prompts. This means you can inject dynamic data or calculations into a prompt – for example, the current date, a value looked up from the user’s profile, or the result of a Power Fx expression – all without leaving the prompt editor. These features let you create more context-aware and precise prompts while keeping the authoring experience simple.

Together, the new testing and formula integration reduce rework and guesswork in prompt engineering. Makers can iterate faster with greater confidence that their prompts will perform reliably in real-world conversations, which ultimately means end-users get more accurate and well-structured answers.

 

File Groups as Agent Knowledge (General Availability)

File grouping in Copilot Studio allows makers to combine many documents into a single knowledge source, so agents can draw answers from extensive enterprise content more effectively.

Organising an agent’s knowledge base is now much easier with File Groups, a feature that reached general availability in October. File groups let makers organise large sets of uploaded documents into logical collections that the agent treats as a single knowledge source. Instead of managing dozens or hundreds of individual files, you can bundle related documents (for example, a set of policy PDFs or product manuals) into one named group – significantly reducing clutter and maintenance overhead. The agent will then search and reference the group as one unit, ensuring answers draw from the most relevant content without you having to manually curate each file every time.

This feature greatly improves scalability: up to 25 file groups per agent are supported, encompassing as many as 12,000 files in total for a given agent’s knowledge base. Makers can also attach variable-based instructions to each file group to fine-tune how that content should be used by the AI (for example, prioritising newer documents over older ones in responses).
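
As a rough mental model (this is not the Copilot Studio API, just an illustration of the structure and the limits quoted above), a file group pairs a named set of documents with its own usage instructions:

```python
from dataclasses import dataclass, field

MAX_GROUPS_PER_AGENT = 25      # limits cited above
MAX_FILES_PER_AGENT = 12_000

@dataclass
class FileGroup:
    name: str                                          # e.g. "HR policies"
    files: list[str] = field(default_factory=list)     # uploaded document names
    instructions: str = ""                             # e.g. "Prefer the most recently dated document."

@dataclass
class AgentKnowledge:
    groups: list[FileGroup] = field(default_factory=list)

    def add_group(self, group: FileGroup) -> None:
        # Enforce the per-agent limits before accepting another group.
        total_files = sum(len(g.files) for g in self.groups) + len(group.files)
        if len(self.groups) >= MAX_GROUPS_PER_AGENT or total_files > MAX_FILES_PER_AGENT:
            raise ValueError("Exceeds the per-agent file group or file limits.")
        self.groups.append(group)
```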

By structuring knowledge into groups, Copilot Studio agents can deliver more accurate, context-rich answers while makers have a cleaner, more organised way to manage content sources. In short, file groups help scale up an agent’s knowledge to enterprise levels without overwhelming the builder or compromising on relevance.

 

Reusable Component Collections (General Availability)

Managing and reusing conversational components across multiple bots or environments is now far more efficient with reusable component collections, which are generally available as of this update. This capability allows teams to package parts of an AI agent – including its topics (dialogue flows), knowledge sources, custom actions, and entities – into a collection that can be shared or moved between agents and environments. In practice, Copilot Studio’s Solution Explorer lets you export an entire agent (or just selected components) as a solution, then import that solution into another environment or agent project. All the grouped components come along in one go, maintaining their configurations and relationships.

This brings proper application lifecycle management to AI agents. Teams can maintain a library of tested, approved components and reuse them in new agents rather than rebuilding similar logic from scratch. For example, a company might create a standard “Customer Verification” topic with associated actions and then include that in every customer-facing agent via a collection, ensuring consistent behaviour. It also means you can promote an agent from a development environment to production in a predictable way – the entire packaged solution moves together, reducing errors.

These reusable collections lead to a more predictable and consistent approach to managing changes at scale, since updates to a component (say, improving an FAQ answer set) can be propagated to all agents that use that collection. Ultimately, this feature accelerates deployment and fosters best practices: organisations can spin up new copilots faster by leveraging existing building blocks, and governance is easier because core components remain uniform across the enterprise.

 

Enable End Users to Upload Files in Agent Flows

Copilot Studio agents can now accept file uploads from end users during a conversation, which eliminates previous workarounds and unlocks richer scenarios. In practical terms, an AI agent built with Copilot Studio could prompt a user to attach a document or image (for example, “Please upload the invoice PDF for analysis”), and the user can upload the file directly into the chat interface. The agent then can process or act on that file within the same flow – for instance, reading the content and summarising it, extracting data and entering it into a system, or forwarding it to a backend process for further handling.

Under the hood, when a file is uploaded, the agent collects the file and its metadata (filename, content type, etc.) and passes them into the agent’s flow or a connected Power Automate workflow. This means the file can be fed into other tools, connectors, or even custom code for processing, all orchestrated by the Copilot agent. The ability for users to provide a file on the spot streamlines many business processes. For example, an HR bot could accept a resume document and automatically parse and store its details, or a support bot could take in a log file or screenshot from a user and route it to the right technician.
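
To illustrate that hand-off, the hypothetical handler below shows how a downstream step might route a file based on the metadata the agent passes along; the function name and routing rules are invented for the example, not part of Copilot Studio or Power Automate.

```python
def handle_uploaded_file(filename: str, content_type: str, data: bytes) -> str:
    """Hypothetical handler: route an uploaded file based on its metadata."""
    if content_type == "application/pdf" and "invoice" in filename.lower():
        return "Queued for invoice data extraction."
    if filename.lower().endswith((".log", ".txt")):
        return "Attached to the support ticket for a technician to review."
    if content_type.startswith("image/"):
        return "Forwarded to the screenshot triage queue."
    return "Stored for manual review."

# Example: the agent passes along what the user uploaded in the chat.
print(handle_uploaded_file("october-invoice.pdf", "application/pdf", b"%PDF-1.7 ..."))
```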

Previously, handling such scenarios often required clunky alternatives (like asking users to email a file or use a separate form). Now it’s a seamless part of the chat experience, reducing friction and manual steps. By enabling file uploads, Copilot Studio agents can better support real-world workflows that involve documents or images – making these AI assistants far more useful in day-to-day operations where file exchange is necessary.

 

Advanced Code Interpreter (General Availability)

Another powerful addition is the advanced code interpreter, which is now generally available as a built-in tool in Copilot Studio. This feature allows Copilot agents to generate and execute Python code on the fly to fulfil user requests or perform complex tasks. In essence, a maker can write a prompt like “Calculate the correlation between these sales figures and plot the result,” and the agent will automatically produce Python code to do so – then run that code behind the scenes and return the result (say, a chart or calculation) to the user. The maker or expert can review and fine-tune the generated code if needed, and even save these code-based prompts as reusable actions. All of this happens within the Copilot Studio environment, with no need to set up external servers or leave the prompt builder interface.
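
For a request like the one above, the interpreter’s generated code might look roughly like this sketch (the exact code will differ from run to run, and the figures here are invented for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative data standing in for the figures supplied in the conversation.
ad_spend = np.array([1.2, 1.8, 2.5, 3.1, 3.6, 4.0])
sales = np.array([10.4, 12.1, 15.8, 17.2, 19.0, 21.5])

# Pearson correlation between the two series.
correlation = np.corrcoef(ad_spend, sales)[0, 1]

# Plot the relationship and report the correlation in the title.
plt.scatter(ad_spend, sales)
plt.xlabel("Ad spend")
plt.ylabel("Sales")
plt.title(f"Correlation: {correlation:.2f}")
plt.savefig("correlation.png")   # the agent returns this chart to the user
```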

With its general release, the code interpreter has been enhanced to support direct interactions with enterprise data. Notably, the prompt builder can now use natural language to perform CRUD operations (create, read, update, delete) on Microsoft Dataverse tables via generated Python code. In practice, that means an agent could, for example, take a user’s input and, behind the scenes, create a new record in a Dataverse database, or fetch and update records, just by virtue of the prompt’s instructions. The code interpreter can also generate custom visualisations or complex outputs as part of the agent’s response, further extending what kinds of questions the agent can handle (like producing a graph or performing statistical analysis on the fly). This embedded scripting capability gives experienced makers and developers much more flexibility to tailor an agent’s behaviour.
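
For comparison, the equivalent “create” and “read” operations performed outside the Studio against the Dataverse Web API look roughly like the sketch below, with a placeholder organisation URL and access token; inside Copilot Studio, the agent’s connection and the code interpreter handle this plumbing for you.

```python
import requests

# Placeholders: inside Copilot Studio the agent's connection supplies these.
ORG_URL = "https://yourorg.crm.dynamics.com"
TOKEN = "<access-token>"

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Content-Type": "application/json",
}

# Create a new account record (the "C" in CRUD) via the Dataverse Web API.
new_account = {"name": "Contoso Ltd", "telephone1": "0123 456 789"}
response = requests.post(f"{ORG_URL}/api/data/v9.2/accounts", json=new_account, headers=headers)
response.raise_for_status()

# Read back the most recently created accounts (the "R" in CRUD).
query = f"{ORG_URL}/api/data/v9.2/accounts?$select=name&$top=3&$orderby=createdon desc"
print(requests.get(query, headers=headers).json()["value"])
```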

Scenarios that once required building a separate app or Azure Function can now be handled inside Copilot Studio with a few lines of prompt. Enabling the code interpreter for an agent is as simple as toggling it on, and it can be restricted to specific prompts or enabled across the entire agent, depending on needs. Overall, the advanced code interpreter brings the full power of Python’s ecosystem into Copilot Studio – empowering makers to implement highly customised logic and data processing within their AI agents, without leaving the platform.

 

Agents Client SDK for Native App Integration

Microsoft introduced a new Agents Client SDK that allows developers to embed Copilot Studio agents directly into their own desktop or mobile applications. This means end-users can interact with a Copilot agent right inside an app they already use – whether it’s a line-of-business app on Windows, or a mobile app on Android/iOS – instead of having to switch to Teams or a web chat. The SDK supports rich multimodal conversations within the app, starting with text-based chat and Adaptive Cards (interactive UI cards) for input/output. In the near future, it will also support additional modalities like voice, images, video, and context sharing to enable even more natural interactions in embedded scenarios (these features are on the roadmap).

By using the client SDK, organisations can integrate AI assistance into existing workflows at the point of need. For example, a field service mobile app could have an embedded Copilot agent that the technician chats with to get troubleshooting steps, without ever leaving the app’s interface. Developers can tailor the agent’s capabilities to the app’s context, and users benefit from not having to context-switch into a separate chatbot interface. This opens up new workflow possibilities – any app can become an intelligent assistant for its specific domain or tasks. The SDK provides platform-specific libraries and documentation (for Windows, iOS, and Android) to streamline the embedding process.

Crucially, since this integration is supported by Microsoft, companies can trust that embedded agents adhere to the same security and compliance standards as other Copilot deployments. In short, the Agents Client SDK makes it possible to bring the power of Copilot agents into everyday apps, creating smoother user experiences and unlocking AI-driven workflows exactly where users are most engaged.

 

Create MCP Connectors Directly in Copilot Studio (Public Preview)

It’s now much easier to hook up custom tools, AI services, and data sources to your Copilot agents, thanks to a new point-and-click experience for MCP connector creation (currently in public preview). MCP (Model Context Protocol) is an open protocol that lets AI agents interface with external tools, data sources, and model services exposed by MCP servers. Before this update, integrating an MCP connector often required manual setup or custom development. Now, makers can simply provide the host URL of an existing MCP server directly within Copilot Studio, and the platform handles the rest of the connection automatically. In a matter of minutes, your agent can be linked to an external model or knowledge source, without writing any integration code.
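
For context, an MCP server is simply a service that exposes tools (and, optionally, resources such as files) over the protocol. A minimal server built with the open-source MCP Python SDK (the `mcp` package) might look like the sketch below; once it is hosted at an HTTP-reachable address, that host URL is what you would paste into Copilot Studio. The transport name and hosting details are assumptions that can vary between SDK versions.

```python
from mcp.server.fastmcp import FastMCP

# A tiny MCP server exposing one tool that an agent could call.
mcp = FastMCP("warranty-lookup")

@mcp.tool()
def warranty_status(serial_number: str) -> str:
    """Return the warranty status for a product serial number (demo data)."""
    demo = {"SN-1001": "In warranty until 2026-03-01", "SN-1002": "Out of warranty"}
    return demo.get(serial_number, "Unknown serial number")

if __name__ == "__main__":
    # Serve over HTTP so Copilot Studio can reach the host URL
    # (transport names vary between SDK versions).
    mcp.run(transport="streamable-http")
```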

This update also adds support for MCP resources like files and images, not just text streams. In practice, that means an agent could send a file (e.g., an image uploaded by a user) to a custom vision AI model via MCP and get back an analysis, all through the Studio interface. By making third-party AI integrations more turnkey and expanding the types of data those integrations can handle, Microsoft is enabling makers to design richer and more flexible agent experiences without getting bogged down in technical setup. Essentially, Copilot Studio is bridging to a growing ecosystem of AI services seamlessly.

For IT teams and advanced users, this feature ensures you can leverage specialised models (proprietary NLP engines, industry-specific AI services, etc.) in your Copilot agents with minimal hassle, all while maintaining security and manageability. Makers can focus on building impactful agent logic, trusting that the platform will tap into the power of MCP connectors smoothly behind the scenes.

 

New Analytics and Impact Metrics

Finally, Copilot Studio received a suite of new analytics features to help measure how agents are performing and the impact they’re having. Both makers and administrators now have deeper visibility into usage patterns, user satisfaction, and business value generated by their copilots. Here are some of the key analytics improvements:

  • Generative AI Question Themes (Preview): The analytics dashboard can now automatically group users’ generative AI questions from the past week into thematic categories. For each theme, it shows how often those questions come up and how well the agent’s answers address them. This helps identify the most common topics users ask about and reveals any themes where the agent may not be performing optimally (so makers know where to focus improvements).
  • Unanswered Query Insights (Preview): Copilot Studio will highlight themes of questions that the agent could not answer, shown right in the analytics view. This makes it easy for makers to spot frequently unmet needs or knowledge gaps without sifting through transcripts. By seeing patterns in unanswered questions, you can prioritise adding content or training the agent in those areas.
  • Active Users Metric (GA): A new metric tracks how many unique users are engaging with each agent, with views for daily and monthly active users. This goes beyond simple session counts to show actual user reach and engagement trends over time. It’s valuable for understanding adoption – for instance, whether usage is growing and which days or times see peak activity. (This metric is available for agents that use authenticated users.)
  • Consumption and Limits Tracking (GA): The analytics now display each agent’s monthly Copilot credit usage and its configured limit (if your organisation set a quota). Makers and admins no longer have to switch to the Power Platform admin centre to monitor consumption; they can see if an agent is nearing its limit and adjust or request more capacity proactively.
  • ROI Analysis (GA): Perhaps most impressively, there’s a new tool to track the ROI (return on investment) of your autonomous agents. You can define what a successful run is “worth” in your own terms – for example, saving 10 minutes of an employee’s time might equal $X saved. The system then aggregates these values for all runs in the selected period, so you get an automatic calculation of time or money saved thanks to the Copilot agent. This lets teams quantify the business value their AI agents are delivering in real terms.
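
As a back-of-the-envelope illustration of that last point, with invented numbers: if one successful autonomous run is worth 10 minutes of an employee’s time at an assumed $40 per hour, the per-run and monthly values fall out directly.

```python
minutes_saved_per_run = 10        # value you assign to one successful run
hourly_rate = 40.0                # assumed fully loaded cost per hour ($)
successful_runs_this_month = 1_250

value_per_run = (minutes_saved_per_run / 60) * hourly_rate
monthly_value = value_per_run * successful_runs_this_month

print(f"≈ ${value_per_run:.2f} per run, ${monthly_value:,.0f} this month")
# ≈ $6.67 per run, $8,333 this month
```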

 

Taken together, these analytics enhancements give organisations a much clearer picture of an agent’s adoption, effectiveness, and business impact. Teams can use the data to fine-tune agent behaviour (by addressing areas with poor performance or unanswered questions), improve user engagement strategies, and confidently demonstrate the tangible ROI of their Copilot Studio solutions. This level of insight helps guide smarter investments in AI capabilities and ensures that Copilot agents continue to deliver value where it matters most.

 


 

In summary, the October 2025 updates to Microsoft Copilot Studio deliver a mix of game-changing capabilities and thoughtful improvements for builders. From expanded automation reach (allowing AI agents to operate any software UI or chat in new channels) to tools that make development and governance easier (prompt testing, knowledge grouping, component reuse, rich analytics), Microsoft is addressing the needs of all its users – the enterprise IT teams, the makers, and the end customers.

These enhancements not only broaden what Copilot agents can do but also make it simpler to create, deploy, and measure effective AI solutions at scale. Copilot Studio is evolving into an even more powerful enterprise AI platform that empowers organisations to innovate with AI agents while maintaining the control and insight needed to drive real business outcomes.

If you’re eager to see these new Copilot Studio capabilities in action, join one of our upcoming Agent in a Day sessions — hands-on workshops designed to help you build, test, and deploy your own Copilot agents with real business impact. Led by our certified experts, these sessions are the perfect way to explore features like prompt optimisation, file-based knowledge, and automated UI flows — all in a guided, practical environment. Reserve your spot now and start creating copilots that transform the way your organisation works.

Stellium

October 22, 2025