The use of artificial intelligence systems, including generative AI tools such as ChatGPT, worries senior managers at many companies because of the risks involved. Their concern centers on possible leaks of confidential information and loss of control over the business, even as implementation plans move forward and employees already use the technology in ways that formal rules do not always cover.
Generative AI has become a valuable asset for organizations, enabling them to automate processes and a broad range of tasks. Yet senior leadership remains wary of its security implications and the risks it could pose to the business.
These concerns align with findings from a Kaspersky survey of Spanish IT leaders, which shows that only 19 percent of respondents have begun discussing formal rules to govern generative AI use. At the same time, 96 percent report that their employees use such systems regularly, and 95 percent say there is a need to monitor how AI is employed to guard against critical security risks and data leaks. Fear of financial losses also runs through the data: 64 percent express that concern.
A separate Kaspersky study of Spanish business users found that 25 percent of those who use ChatGPT at work do not know what happens to the data they enter into the tool. This underscores the need for awareness and clear rules within companies to protect information. As the cybersecurity firm points out, ChatGPT and similar services can store details such as IP addresses, browser type, user settings, and data on frequently used features.
However, nearly half of the surveyed companies (45.5 percent) report no internal rules governing ChatGPT use. Among those that do have policies, 19 percent say the rules are unclear, 7 percent say they are clear but not followed, and only 27 percent say the rules are both clear and followed.
These insights conflict with management aspirations. About half of managers intend to adopt generative artificial intelligence and use it to automate tasks: roughly 46 percent plan to integrate AI into their own routines, and 43 percent expect employees to use it to boost productivity.
Despite these ambitions, 16 percent of workers in Spain believe that protecting privacy in the workplace is not important. And when asked about sharing private data with the chatbot, 31 percent considered it important not to do so, yet admitted to having done it anyway at times.
Risks of ChatGPT
ChatGPT asserts that information provided by users is not shared with third parties. The platform aims to improve itself and deliver the most accurate responses possible by using collected data to refine the language model and enhance the user experience.
Still, it is worth noting that ChatGPT's privacy policy covers only data from registered users of the platform and does not specify how personal data from other sources might be used to generate the text it returns, a potential gap in privacy coverage highlighted by a senior security researcher at Kaspersky.
Cybercriminals are already leveraging the tool to craft malicious code or phishing messages that appear legitimate. For this reason, experts advise against entering sensitive personal or corporate information that could end up exposed to attackers.
For organizations, the recommendation is to establish internal policies for AI use, raise employee awareness about which data must not be shared, and provide cybersecurity training. Caution is also advised with links from unfamiliar websites, since malicious programs may redirect users to phishing sites.
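As a concrete illustration of the "don't share certain data" advice, the sketch below shows one way a company might screen prompts before they leave the corporate network for an external AI service. It is a minimal, hypothetical example: the patterns, function names, and sample prompt are illustrative assumptions, not drawn from Kaspersky's guidance, and a production deployment would rely on vetted data-loss-prevention tooling rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only (hypothetical); real deployments would use a
# vetted DLP library with rules tuned to the organization's own data.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(text: str) -> tuple[str, list[str]]:
    """Replace likely-sensitive substrings with placeholders before the
    prompt is sent out, and return the redacted text together with the
    list of categories that were triggered (useful for an audit trail)."""
    hits = []
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            hits.append(label)
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text, hits

if __name__ == "__main__":
    prompt = "Summarize the complaint from ana.garcia@example.com at 203.0.113.7"
    clean, triggered = redact_prompt(prompt)
    print(clean)      # placeholders instead of the raw identifiers
    print(triggered)  # ['email', 'ipv4'] -> candidates for monitoring logs
```

In practice, a filter like this would sit in a proxy or browser extension between employees and the chatbot, with the triggered categories feeding the kind of usage monitoring the surveyed IT leaders say they need.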
Kaspersky emphasizes the value of reliable security software, such as a premium cybersecurity suite, that protects against viruses and monitors for data leaks in real time. It also suggests managing credentials with a dedicated password manager to reduce risk.
In practical terms, companies are encouraged to create clear guidelines for generative AI use, communicate them effectively to employees, and implement ongoing cybersecurity education. Regular audits of data handling, access controls, and incident response plans help ensure that AI tools support business goals without compromising security. The overarching message is simple: harness AI with guardrails, not without them, to preserve trust and resilience in an era of rapid digital transformation. (Source: Kaspersky insights, 2023–2024.)