OpenAI, the US-based maker of ChatGPT, has suspended API access for ByteDance, the company behind the social platform TikTok. The suspension follows allegations that ByteDance quietly used OpenAI's technology to help train a competing large language model, one that could pose a challenge to ChatGPT itself. The news was reported by people familiar with the matter.
Reporters' investigations unearthed internal ByteDance documents indicating that the team relied on OpenAI's API during the development of a rival project, code-named Seed, and that this usage occurred at multiple stages of the development cycle. The findings suggest a pattern of API activity consistent with attempts to build a comparable tool, raising questions about how ByteDance manages external AI resources in its product pipeline.
OpenAI's usage policies spell out a clear expectation: API customers may not use the service to create AI models that compete with OpenAI's own offerings. In a public statement, the company said that all API clients must adhere to its usage policies to ensure the technology is used responsibly. ByteDance's access is currently suspended, and OpenAI indicated that if ByteDance's usage is found to violate the policy, the company will be asked to change its practices or face a permanent block from the API (Source: OpenAI policy updates and public statements).
According to the reports, ByteDance used the API at various stages to prototype and accelerate the development of Seed, a project described in internal communications as a potential competitor to existing AI chat services. ByteDance has not publicly commented on the specifics of its internal workflows, but industry observers note that such use cases fuel debate over how AI tools should be shared among competitors and how platform providers balance access with competitive safeguards.
Observers say the situation underscores the growing complexity of AI governance across major tech ecosystems. When cross-border teams rely on external AI providers, clear boundaries and compliant usage practices become essential for safeguarding fair competition and protecting user interests. Proponents of stricter enforcement argue that API providers must vigilantly monitor how clients deploy models trained with their services, particularly when those models could directly challenge the provider's own offerings. Critics, however, caution against overreach that could stifle legitimate innovation or collaboration.
Some commentators have noted that the episode mirrors broader tensions in the AI landscape, where rapid experimentation sometimes collides with policy constraints. As companies continue to explore how external AI tools can accelerate product development, the balance between openness and control remains a central question for policymakers, developers, and platform operators alike. The industry will be watching closely to see how OpenAI and ByteDance resolve these tensions and what precedents emerge for API usage in future product cycles.