Sam Altman, co-founder and CEO of OpenAI, the company behind ChatGPT, indicated that the firm could exit the European market if the European Union's new AI regulations prove incompatible with its products. The remarks came during a European tour aimed at easing regulatory concerns, at a moment when leaders and regulators across Europe are trying to balance innovation with safeguards for citizens and businesses. Altman's tone suggested uncertainty about whether the current framework could be fully met, given the technical and practical difficulty of aligning a sophisticated language model with a broad, binding set of rules that is still being refined.
During an appearance at University College London, Altman acknowledged having given substantial feedback on the drafting of the regulation, which could be approved in the coming months. The proposed law classifies uses of artificial intelligence by purpose and attaches risk-based obligations to each category. Large language models like those powering ChatGPT are expected to fall under a high-risk category, triggering stringent compliance measures around transparency, risk management, and how the chatbot uses data. This classification would place OpenAI and its partners under tighter governance, with independent oversight and stricter reporting practices intended to ensure public trust and operational accountability. The public discussion underscored the push-pull between rapid AI advancement and the safeguards needed to protect users and society.
Altman did not shy away from the practical implications. He said publicly that the team might determine it cannot meet certain requirements, a decision that could lead to pausing activities in Europe if what is technically feasible does not align with regulatory demands. Progress, he suggested, depends on what can actually be achieved while maintaining a path toward responsible deployment. Delivered in London, the remarks shed light on how international efforts to shape AI governance are being negotiated in real time.
Pressure tour
The regulatory dialogue has pushed Altman to travel across Europe and engage with policymakers to discuss the path forward. The visits included high-level discussions with leaders across member states, among them a meeting with Spain's prime minister as well as conversations with other European and British officials. The outcomes of these meetings remain uncertain; observers note that the talks reflect a broader strategic effort to position AI developers within a regulatory framework that values safety and innovation in equal measure. The impression left by these discussions was mixed, leaving stakeholders to assess whether the talks will yield a clearer path for product deployment or additional obstacles that could slow progress.
In describing the regulation itself, OpenAI's chief executive argued that the law has entered a final stage but emphasized that the nuances and details will ultimately determine its impact. He called for a regulatory posture that sits between traditional European strategies and American approaches, signaling a desire for a balanced model that protects people without stifling technological progress. The intention, as expressed, was to encourage a constructive regulatory climate rather than to resist oversight altogether, with an emphasis on practical governance that could be implemented without compromising global competitiveness or innovation ecosystems.

The travel also highlighted OpenAI's broader strategic interest in establishing a European hub for research and engineering, separate from the regulatory question itself. Poland emerged as a potential site for such an office, reflecting practical considerations about talent access, infrastructure, and regional collaboration. The aim would be to foster European research partnerships and contribute to a robust AI research community on the continent, though observers noted that any decision would hinge on regulatory clarity and incentives attractive enough for a multinational technology company.
Regulatory surge
Interestingly, the tour did not include a stop in Italy, a country that temporarily blocked access to ChatGPT amid concerns that the platform might violate the General Data Protection Regulation. The restriction was lifted after the provider implemented changes, including features allowing users to opt out of data retention between sessions. The episode illustrates how regulatory authorities have responded to concerns over data handling and privacy, and how firms may adapt their products to align with evolving legal expectations. The Italian case stands as a microcosm of the broader regulatory landscape, where caution, user rights, and corporate innovation must be navigated in tandem.