News

Microsoft to Assume Legal Risks of Copilot GenAI Assistant

The company says it will defend commercial customers sued over Copilot output and pay any resulting judgments, provided they used its built-in guardrails

Microsoft says it will assume the legal risks of copyright challenges that might arise from developers' use of its Copilot AI-powered assistants. The Redmond software giant made the announcement to calm fears amid a swirl of generative AI lawsuits clouding the technology's short-term future.

"As customers ask whether they can use Microsoft's Copilot services and the output they generate without worrying about copyright claims, we are providing a straightforward answer: yes, you can, and if you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved," said Microsoft in a Sept. 7 post,  "Microsoft announces new Copilot Copyright Commitment for customers."

Copilot-related lawsuits, like this class action filed in January, appeared not long after the dawn of the generative AI era last fall.

One angle of such lawsuits is intellectual property (IP) infringement (see this suit). Microsoft seeks to ease the concerns of users worried about being mired in legal complications simply because they used a Copilot AI assistant, which the company has infused throughout a wide swath of its products and services, including its flagship Windows OS.

"While these transformative tools open doors to new possibilities, they are also raising new questions," Microsoft said. "Some customers are concerned about the risk of IP infringement claims if they use the output produced by generative AI. This is understandable, given recent public inquiries by authors and artists regarding how their own work is being used in conjunction with AI models and services."

Although Microsoft will defend customers against IP infringement claims, it emphasized that it claims no IP rights of its own in the outputs of its Copilot services.

Specifically, Microsoft said that if a third party sues a commercial customer for copyright infringement over the use of its Copilot assistants or the output they generate, the company will defend the customer and pay any adverse judgments or settlements that result, provided the customer used the guardrails and content filters Microsoft has built into its products.

The (truncated) reasons why the company is taking on this risk include:

  • We believe in standing behind our customers when they use our products. We are charging our commercial customers for our Copilots, and if their use creates legal issues, we should make this our problem rather than our customers' problem.
  • We are sensitive to the concerns of authors, and we believe that Microsoft rather than our customers should assume the responsibility to address them. Even where existing copyright law is clear, generative AI is raising new public policy issues and shining a light on multiple public goals. We believe the world needs AI to advance the spread of knowledge and help solve major societal challenges.
  • We have built important guardrails into our Copilots to help respect authors' copyrights. We have incorporated filters and other technologies that are designed to reduce the likelihood that Copilots return infringing content. These build on and complement our work to protect digital safety, security, and privacy, based on a broad range of guardrails such as classifiers, metaprompts, content filtering, and operational monitoring and abuse detection, including that which potentially infringes third-party content. Our new Copilot Copyright Commitment requires that customers use these technologies, creating incentives for everyone to better respect copyright concerns.
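
Microsoft doesn't detail how those filters work internally, but the general shape of an output guardrail is easy to illustrate. The Python sketch below is a toy, not Microsoft's implementation: it scans generated text for markers (license headers, copyright notices) that might suggest verbatim reproduction of third-party code, and withholds anything flagged. The marker list and function names are hypothetical, made up for illustration.

    import re

    # Hypothetical markers whose presence in generated text might suggest
    # verbatim reproduction of licensed code (illustrative, not Microsoft's list).
    LICENSE_MARKERS = [
        r"GNU General Public License",
        r"SPDX-License-Identifier",
        r"Copyright \(c\) \d{4}",
    ]

    def looks_potentially_infringing(generated_text: str) -> bool:
        """Return True if the text contains any marker that warrants review."""
        return any(re.search(pattern, generated_text) for pattern in LICENSE_MARKERS)

    def filter_output(generated_text: str) -> str:
        """Withhold flagged output; pass everything else through unchanged."""
        if looks_potentially_infringing(generated_text):
            return "[output withheld: possible third-party licensed content]"
        return generated_text

    if __name__ == "__main__":
        sample = "# Copyright (c) 2019 Example Corp\ndef add(a, b):\n    return a + b"
        print(filter_output(sample))  # prints the withheld-output notice

A production system would layer simple checks like this with the classifiers, metaprompts, and operational monitoring named in the post quoted above.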

Today's announcement follows a June post in which Microsoft detailed its AI customer commitments. Those include:

  • First, we will share what we are learning about developing and deploying AI responsibly and assist you in learning how to do the same.
  • Second, we are creating an AI Assurance Program to help you ensure that the AI applications you deploy on our platforms meet the legal and regulatory requirements for responsible AI.
  • Third, we will support you as you implement your own AI systems responsibly, and we will develop responsible AI programs for our partner ecosystem.

About the Author

David Ramel is an editor and writer at Converge 360.