Bad news for Microsoft. The United States House of Representatives has barred its staff from using Copilot, the artificial intelligence assistant built by the tech giant, according to a report shared last Friday by Axios.
The ban reflects lawmakers' concerns about potential security gaps. Catherine Szpindor, the House's chief administrative officer, described Copilot as a risk to users because it could leak House data to cloud services that have not been approved.
Microsoft responded by stressing that it plans to roll out new government-focused tools this summer, intended to address these security worries. A company spokesperson told Axios that government users have stricter data security requirements and that the upcoming offerings will accommodate them.
“Weak cybersecurity”
Yet this issue is not the only security concern surrounding Microsoft. A government-commissioned report, ordered by President Joe Biden to investigate the surge of cyberattacks linked to China against senior U.S. officials, places the tech giant in a troubling light.
The Washington Post reported that the review accuses Microsoft of weak cybersecurity practices, a lax corporate culture, and a deliberate lack of transparency regarding what the company knew about the origins of the data breach affecting the White House.
Ban on ChatGPT
Copilot, Microsoft’s virtual AI assistant, is capable of generating text, translating languages, and answering user questions in a conversational manner. Its abilities are powered by GPT-4, the same technology behind the widely used ChatGPT.
The House previously restricted internal use of the OpenAI chatbot last June. In that instance, offices were allowed limited access to the paid version, while the free version was barred over similar security risks.
The current developments come as lawmakers weigh enabling advanced AI tools against safeguarding sensitive government data. Officials stress that any deployed AI tools must meet stringent security standards and must not expose critical information to untrusted networks or commercial cloud services.
Industry observers note that government users often require additional safeguards: strict data-handling protocols, on-premises or tightly controlled cloud environments, and clear transparency about how AI systems process and store information. These considerations are particularly acute for high-level communications, intelligence matters, and policy deliberations involving sensitive data and geopolitical risk.
Microsoft has signaled a willingness to adapt its offerings for government customers, emphasizing that security requirements are not negotiable and that collaboration with public sector bodies will guide the design of future tools. The company argues that responsible deployment can coexist with powerful productivity features, provided appropriate guardrails, comprehensive auditing, and ongoing oversight are in place.
Analysts suggest that the evolving regulatory and legislative landscape around AI in the United States will likely shape how firms like Microsoft, OpenAI, and other AI developers structure access, permissions, and data governance. The goal, many say, is to create a safe path to innovation that protects national security while allowing agencies to harness AI capabilities for more efficient operations and informed decision-making.