A Practical Look at AI, Information Integrity, and Regulation in North America


At the St. Petersburg International Economic Forum (SPIEF) 2024, artificial intelligence, new technologies, and the lessons learned from deploying them take center stage. The questions on the table: how should the current AI landscape be assessed, what risks come with these tools, and how can their practical potential be measured in real-world use across markets in Canada and the United States?

One company in this space actively builds and manages digital footprints on major search platforms such as Yandex and Google, both inside Russia and abroad. The discussion narrows to AI as a driver of content creation. The phenomenon itself is not new: content has always been produced and distributed, sometimes accurate, sometimes flawed, occasionally deceptive. What stands out today is that AI, especially for text and visuals, has dramatically lowered the barrier to content production, enabling almost anyone to generate material. Articles and images now appear at unprecedented speed, and the stream keeps flowing whether anyone wants it to or not.
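To make that lowered barrier concrete, the sketch below shows roughly how little code mass-producing article drafts takes. It assumes the openai Python client and an OPENAI_API_KEY set in the environment; the model name, topics, and prompts are illustrative assumptions, not details from the discussion.

```python
# A minimal sketch of AI-driven article production, assuming the openai
# Python client (v1.x) and an OPENAI_API_KEY set in the environment.
# Model name, topics, and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

topics = ["company earnings", "sanctions list updates", "executive biography"]

for topic in topics:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system", "content": "You write short news-style articles."},
            {"role": "user", "content": f"Write a 200-word article about {topic}."},
        ],
    )
    print(response.choices[0].message.content)
```

A dozen lines yield an unbounded stream of plausible-sounding text, which is precisely the dynamic described here.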

That ease of production, paired with rapid distribution, creates a pressing problem: information about individuals or organizations can become unreliable or inaccurate, not necessarily through malice, but because it mutates as it circulates. ChatGPT and similar systems are not immune to errors, as many users have seen firsthand.

Understanding what is true in the current landscape is increasingly challenging. The trend is visible every day, and questions about reliability and truth loom large.

— How can one spot reliable information and identify the associated risks in today’s digital environment?

— The risks are real. Consider a recent legal case: a participant on a Forbes list faced sanctions that were later called into question. The authorities’ stated reasons leaned on information from Wikipedia, a source of limited evidentiary weight, and on a Forbes article from 2012. On that material, officials concluded that sanctions were warranted.

What happened next? In court, the sanctioned individual clarified his current biography and supported his case with links from the web rather than traditional documents. The episode underscored how what is written online, how it is presented, and where it is sourced can tip important legal decisions. The distinction between truth and rumor, and how search engines reflect that distinction, often proves pivotal.

— Does online information act as a tool for manipulation?

— Yes. Information has long been a vehicle for influence, and digital platforms have multiplied its reach. The current moment makes this tool even more potent, and more dangerous.

Artificial intelligence has materially lowered the entry barrier for creating and distributing content. Digitalization is a long-running trend, and technological progress continues apace. Safeguards, however, lag behind, especially in cybersecurity and information integrity. As long as that gap persists, misinformation spreads more easily, affecting not just individuals but businesses and reputations as well. Even prominent figures on business lists may overlook the need to manage this risk, which complicates efforts to protect truth and trust online.

— What steps can help manage this situation?

— The past offers a contrast: content could once be pulled from libraries or kiosks, but removal is no longer feasible on the internet. The online world has existed for decades, and a vast amount of information will persist indefinitely. The crucial issue is therefore not eliminating fake content but ensuring an abundance of accurate, reliable material, and attending to how it is framed, published, and perceived by audiences.

Another common concern from clients is that those who need to know the truth already do. Yet regulatory measures, legal costs, and sanctions can still be expensive and time-consuming, with no certainty that adverse actions will be reversed. In the modern reality, outcomes depend on the ongoing narrative and the credibility of sources in the public domain.

— How should one counter disinformation online? The process is indeed complex.

— The first assumption to drop is that the internet can be cleaned completely. It cannot. Everything posted online leaves a trace and is replicated across platforms. The internet’s thirty-year history shows not only the presence of misinformation but also vast stores of accurate data. The key question becomes: who tells the story, where, and by what means?
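That replication is easy to observe mechanically. The sketch below uses word shingles and Jaccard similarity, a standard near-duplicate detection technique (not a method named in the discussion), to flag near-verbatim copies of a story; the sample texts and shingle size are illustrative.

```python
# A minimal sketch of near-duplicate detection via word shingles and
# Jaccard similarity. Sample texts and shingle size k are illustrative.
import re

def shingles(text: str, k: int = 3) -> set[tuple[str, ...]]:
    """Return the set of k-word shingles of a lowercased text."""
    words = re.findall(r"\w+", text.lower())
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Size of the intersection divided by the size of the union."""
    return len(a & b) / len(a | b) if a | b else 0.0

original = "The executive was added to the sanctions list in 2012."
repost = "In 2012 the executive was added to the sanctions list."

score = jaccard(shingles(original), shingles(repost))
print(f"similarity: {score:.2f}")  # 0.60 here; unrelated texts score near 0.00
```

Reposts and scrapes of the same story surface this way even years after publication, which is why "cleaning" the internet is not a realistic goal.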

— If a false story surfaces, and one releases a counter-narrative, will the misinformation stop circulating?

— The obligation is to keep publishing the correct version and to ensure it stays in the public domain. That ongoing visibility is essential, because information ecosystems are dynamic: success today means keeping the accurate narrative accessible to audiences, partners, and customers even as the conversation around it evolves.
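Maintaining that visibility is an ongoing task, and even a simple monitor helps. The sketch below, with hypothetical URLs and a hypothetical canonical statement, checks that the authoritative pages are still reachable and still carry the agreed wording; in practice such a check would run on a schedule.

```python
# A minimal sketch of monitoring the visibility of a canonical narrative.
# The URLs and the statement text are hypothetical placeholders.
import requests

CANONICAL_STATEMENT = "The 2012 designation was reviewed and lifted."
PAGES_TO_WATCH = [
    "https://example.com/press/statement",
    "https://example.com/about/biography",
]

def check_visibility() -> None:
    for url in PAGES_TO_WATCH:
        try:
            response = requests.get(url, timeout=10)
        except requests.RequestException as exc:
            print(f"UNREACHABLE {url}: {exc}")
            continue
        if response.ok and CANONICAL_STATEMENT in response.text:
            print(f"OK          {url}")
        else:
            print(f"MISSING     {url} (status {response.status_code})")

if __name__ == "__main__":
    check_visibility()  # in practice, run on a schedule (cron or a CI job)
```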

The case study of sanctions illustrates a broader point: care must be taken to present a trustworthy version of events with credible sources. This approach helps ensure that, regardless of how information disperses, the truth has a solid, public-facing record.

— Is AI intensifying the spread of fake content online?

— AI is driving rapid growth in both content volume and distribution. The effect is not inherently aligned with any particular outcome; it simply accelerates the process. Many people can generate material today, and AI is a key enabler across text, visuals, and beyond. This trend is here to stay, feeding a world where content creation is ubiquitous and increasingly hybrid, blending machine output with human input in substantial ways.

— What remedies can help in this environment? Do regulatory steps matter?

— Regulation is likely to evolve, and venues like SPIEF are where policymakers explore appropriate rules and technological safeguards. Yet content creation often remains a hybrid effort, with AI and human collaboration shaping the final product. Distinguishing machine-produced content from human-authored material is not always straightforward, and policy alone cannot resolve the nuanced dynamics of originality and authorship. Ideology, as much as law, informs the path forward.

In practical terms, editorial workflows often blend machine-assisted drafting with human refinement. A piece may be half AI-generated and half human-authored; a visual may rely on automated tools for the background while a human artist polishes the focal elements, or vice versa. The issue is not automation as such; it is how authorship is defined and how content is attributed. For now, the answer lies in transparent practices and robust verification rather than an absolute barrier to automation.
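One lightweight form of that verification, sketched below under stated assumptions rather than as any standard the discussion endorses, is to publish a cryptographic digest of the canonical text so that any circulating copy can be checked against it. The normalization rules and the sample statement are illustrative.

```python
# A minimal sketch of content verification: publish the SHA-256 digest of
# a canonicalized text so readers can confirm a copy is unaltered.
# The normalization rules and sample statement are illustrative assumptions.
import hashlib
import unicodedata

def canonical_digest(text: str) -> str:
    """Collapse whitespace, normalize Unicode, and hash with SHA-256."""
    normalized = unicodedata.normalize("NFC", " ".join(text.split()))
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

statement = "The 2012 designation was reviewed and lifted."
published_digest = canonical_digest(statement)  # publish next to the text

# Later, anyone can verify a circulating copy against the published digest.
copy = "The 2012  designation was reviewed and lifted."  # extra space only
print(canonical_digest(copy) == published_digest)  # True: content unchanged

altered = "The 2012 designation was upheld."
print(canonical_digest(altered) == published_digest)  # False: text altered
```

A digest only proves the words are unchanged, not that they are true; fuller provenance schemes attach signed metadata about who published what and when.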

There is no settled answer yet. The core idea remains: present a clear, verifiable version of events and ensure it is accessible to the public. That, more than anything, is what maintains trust in an increasingly crowded information landscape.
