Neural networks are stolen for three main reasons: to close the gap with competitors, to steal personal data, and to shorten the process of training a model of one's own, Oleg Rogov, head of the “Reliable and Secure Intelligent Systems” research group at the AIRI Artificial Intelligence Research Institute, told socialbites.ca.
“One of the main reasons for theft is to close the gap with competitors or gain an advantage in a particular area. Additionally, stealing neural networks allows attackers to skip lengthy research and development stages such as architecture design, training, and testing,” Rogov said.
The expert also noted that theft can provide access to confidential information, such as banking details, biometric records, or other sensitive data processed by neural networks.
“Stolen models are often modified: attackers use special techniques to make it difficult to establish a direct link between the stolen model and its original source,” the scientist said.
To detect such theft, digital watermarks are embedded in neural networks.
Learn more about how digital watermarks work, who steals neural networks and why, and whether a stolen AI model or fragments of code can be identified using such watermarks in Rogov's full report for socialbites.ca.
Previously, researchers in Russia came up with a way to halve the energy consumption of neural networks like ChatGPT.