Researchers at the NTI Competence Center “Trusted Interaction Technologies” in Russia have announced a method aimed at depersonalizing personal data while preserving its utility. The development was described to socialbites.ca by Ruslan Permyakov, deputy director of the center, which is based at the Tomsk State University of Control Systems and Radioelectronics (TUSUR). The announcement centers on a mathematical framework designed to quantify data quality during the depersonalization process and to guide developers in balancing privacy with usefulness.
Permyakov described a specialized mathematical model that evaluates boundary conditions for data transformation. The model explicitly addresses how to retain the intrinsic value of data while reducing the risk that the data could be re-identified. In practical terms, it provides a structured way for engineers to calibrate the degree of abstraction or generalization so that a dataset can represent multiple individuals without exposing unique identifiers or linking back to a single person.
According to the scientist, the approach ensures that data is not rendered so generic that it loses its value to analysts, researchers, or policymakers. At the same time, it prevents the data from remaining tightly coupled to a specific person, which would raise concerns about privacy breaches. The delicate balance is achieved by selecting parameters that coarsen details enough to obscure individual connections while preserving patterns and trends that are meaningful for analysis and service design.
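The article does not name a specific algorithm, but the balance it describes, coarsening details until each record could plausibly describe several people, is closely related to the well-known k-anonymity criterion. The sketch below illustrates that idea under assumptions of my own: the records, the `generalize` parameters, and the quasi-identifier fields (age and postal code) are all hypothetical, not taken from the researchers' model.

```python
from collections import Counter

# Hypothetical records: age and postal code are quasi-identifiers that,
# in combination, could link a row back to a single person.
records = [
    {"age": 34, "zip": "634050", "diagnosis": "flu"},
    {"age": 36, "zip": "634051", "diagnosis": "flu"},
    {"age": 35, "zip": "634055", "diagnosis": "cold"},
    {"age": 52, "zip": "634029", "diagnosis": "cold"},
    {"age": 58, "zip": "634021", "diagnosis": "flu"},
    {"age": 55, "zip": "634024", "diagnosis": "flu"},
]

def generalize(record, age_bucket=10, zip_digits=4):
    """Coarsen quasi-identifiers: bucket the age, truncate the postal code."""
    low = (record["age"] // age_bucket) * age_bucket
    return (f"{low}-{low + age_bucket - 1}", record["zip"][:zip_digits] + "**")

def k_anonymity(records, **params):
    """Smallest group size over the generalized quasi-identifier values.

    k = 1 means at least one record is unique and re-identifiable;
    larger k means every row blends in with at least k - 1 others.
    """
    groups = Counter(generalize(r, **params) for r in records)
    return min(groups.values())

# Near-raw values: every record is unique (k = 1).
print(k_anonymity(records, age_bucket=1, zip_digits=6))   # → 1
# Coarsened values: each generalized row now stands for 3 people (k = 3).
print(k_anonymity(records, age_bucket=10, zip_digits=4))  # → 3
```

Tuning `age_bucket` and `zip_digits` is one concrete form of the parameter selection the researchers describe: coarse enough to obscure individual links, fine enough to keep the data analytically useful.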
Permyakov emphasized that the goal is to make it impossible to associate the resulting information with any particular individual. This capability, if scaled, could support the creation of services that assemble datasets according to predefined criteria. In practical terms, organizations could generate databases that maintain statistical usefulness while offering stronger protections for personal data across various applications, from health analytics to social science research. The work reflects a broader push within the Russian research community to align data handling practices with evolving privacy expectations and regulatory norms, while still enabling data-driven innovation. The information was shared as part of ongoing dialogues about privacy-preserving computation and its potential to unlock new use cases for big data under strict governance models.
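One way to check the "statistical usefulness" claim is to compare an aggregate statistic before and after coarsening. The snippet below is a minimal illustration with made-up ages, not the researchers' quality metric: it replaces each exact age with the midpoint of a ten-year bucket and measures how much a simple mean shifts.

```python
# Hypothetical ages; exact values would be hidden after depersonalization.
ages = [34, 36, 35, 52, 58, 55]

def bucket_midpoint(age, width=10):
    """Replace an exact age with the midpoint of its generalization bucket."""
    low = (age // width) * width
    return low + width / 2

exact_mean = sum(ages) / len(ages)
coarse_mean = sum(bucket_midpoint(a) for a in ages) / len(ages)

# Relative error as a rough utility-loss score: values near 0 mean the
# coarsened data still supports the same aggregate analysis.
utility_loss = abs(coarse_mean - exact_mean) / exact_mean
print(round(utility_loss, 3))
```

For this toy sample the bucket midpoints happen to reproduce the mean exactly, so the loss is zero; in general, the score grows as buckets widen, which is the trade-off the framework is meant to manage.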
As a practical takeaway, the researchers suggest that users who wish to further obscure their contact details in communications might consider enabling caller-ID blocking (number hiding) features on their smartphones. This recommendation appears alongside the depersonalization framework as part of a broader approach to safeguarding personal identifiers in everyday digital interactions, reinforcing the idea that multiple layers of privacy measures can work together to reduce exposure in real-world settings.