Microsoft Copilot's new AI terms have sent shockwaves across the world

Microsoft has updated Copilot's terms, emphasizing that the AI should not be relied on for decision-making due to potential inaccuracies. While still usable for work, the changes highlight growing concerns over AI reliability and shift responsibility to users.

[Image: Microsoft Copilot AI — for entertainment purposes only]

Microsoft has rolled out new terms and conditions for Copilot, stating that the AI should be treated as being for entertainment purposes only.

The new terms have created tension among Copilot users and raised eyebrows about how the tool is being presented to them.

Microsoft has released a statement saying that Copilot is only for entertainment purposes and should not be used for decision-making, as the AI may fabricate incidents or situations rather than state facts, and it advises users to use the AI at their own risk.

Can Copilot still be used for work?

Yes, Copilot can still be used for work, as it has been associated with Windows 11 and Microsoft 365.

But users want it to be more reliable and authoritative, precisely because it is integrated into tools like Windows 11 and Microsoft 365. Since the company issued the new Copilot terms, there has been backlash across the tech community.

Despite the ongoing controversy around Copilot, Microsoft has released new updates for Copilot Cowork, which builds on Anthropic's Claude Cowork and can handle complex, long-running, multi-step tasks beyond simple queries. Copilot Cowork is available through Microsoft's Frontier Program and expands Copilot's ability to manage Microsoft 365 workflows.

Copilot now includes multi-model support that aims to reduce errors, improve output quality, and both generate and validate results.

Why did Microsoft rephrase Copilot's terms?

People are increasingly treating AI as a decision-maker and even a comforter. But AI can make mistakes, and a flawed answer could steer a user in the wrong direction and create serious problems.

The updated terms of use appear to shift some responsibility for potential inaccuracies generated by Copilot onto users. According to Microsoft's official website, these changes were introduced in October last year.

This move reflects broader concerns about large language models (LLMs) such as OpenAI's GPT and Anthropic's Claude. While these systems have improved over time, they can still produce "hallucinations," where incorrect or fabricated information is presented as fact. Microsoft's revised terms suggest the company remains cautious about the reliability of AI-generated content.

So, Microsoft once again clarifies that Copilot AI should be used within certain limitations and not as a decision-making authority.

Written by

Maheswari

With a background in Literature, she brings strong creative writing skills and clarity to her work in content writing. Her academic foundation enables her to present news in a simple, engaging, and reader-friendly manner. She is passionate about covering current affairs in India and Tamil Nadu, along with science-related topics that explain innovations and discoveries in an accessible way. She believes in delivering accurate, clear, and responsible information to audiences. Her focus is on simplifying complex subjects while maintaining credibility and journalistic integrity. Through her writing, she aims to inform and educate readers with meaningful and trustworthy content.
