Corporate AI use restrictions rise as data security concerns grow


Apple has limited the use of ChatGPT among its employees, a restriction that extends to other artificial intelligence tools, in a move aimed at preventing the potential leakage of confidential information, according to documents reviewed by a prominent newsroom source. The policy also restricts access to Copilot, GitHub's code-completion tool. Both services have ties to Microsoft, which owns GitHub and is a major investor in OpenAI, the maker of ChatGPT. Apple is reportedly developing similar AI technology of its own, which underscores a broader corporate caution around exposing sensitive data to externally hosted models.

The company worries that conversations between employees and language models may expose confidential material. Providers of these services often use chat data to train and improve their models, and a model can inadvertently memorize and reproduce sensitive information included in prompts. In practice, material pasted into a chat about a confidential topic could later surface in responses shown to other users or be visible to the developers who maintain the service.

Other restrictions

Earlier this year, a major AI provider temporarily took ChatGPT offline after a bug exposed some users' chat histories to other users. In response, the provider introduced an option to disable chat history, though some data continues to be retained for a period for system improvements and safety checks. This incident, alongside regulatory actions in certain regions, has prompted companies to adopt stricter controls on how AI tools are used within their organizations.

Apple is not alone in adopting cautious policies. A number of large enterprises have started restricting the use of AI search and generation tools by employees to protect corporate data. The trend reflects a broader concern about data governance and the potential for information leakage as AI tools become more embedded in everyday work. In sectors such as finance and enterprise IT, firms are weighing the balance between productivity gains from AI and the need to safeguard proprietary knowledge and customer information. Industry observers note that these precautions may evolve as newer safeguards and governance frameworks are rolled out, helping firms manage risk without stifling innovation.
