A Canadian tribunal has ruled that Air Canada must pay passenger Jake Moffatt CA$812 after determining that the airline’s chatbot provided incorrect guidance about a potential discount on a flight. The decision underscores how AI tools used by travel brands can become binding representations of a company, especially when customers rely on them for important purchases.
The events unfolded in November 2022, when Moffatt needed to fly to attend his grandmother’s funeral. He turned to the chatbot on Air Canada’s website for help selecting a ticket. The bot advised him to purchase regular round-trip fares for CA$1,630 and then seek a partial refund of CA$380 per flight segment, noting that a bereavement discount was available. If accurate, that guidance would have meaningfully reduced the total cost of the trip.
However, the bereavement discount applied only to bookings made by phone or at an Air Canada office, not through the website, and it could not be applied retroactively to tickets that had already been purchased. Moffatt therefore did not receive the expected refund and felt misled by the chatbot’s advice. This set the stage for a legal dispute over what responsibility a company bears for information provided by its automated customer support tools.
Moffatt filed a claim seeking reimbursement of the CA$880 difference between the fare he paid and the discounted price he believed was available. Air Canada argued that it bore no liability for the chatbot’s error and that Moffatt should have verified the fare rules before completing his purchase. The tribunal rejected that position, concluding that the chatbot acted as a representative of the airline and that Moffatt had reasonably relied on the information it supplied.
As a result, the tribunal ordered Air Canada to pay Moffatt CA$812, covering damages, interest, and fees, confirming that a company can be held accountable for guidance delivered by its chatbot. The ruling is a landmark reminder for travelers and companies alike in a digital age where automated tools increasingly influence price and policy interpretations in real time.
More broadly, questions about the accountability of AI assistants in customer service have been growing. Many travelers expect online bots to be accurate and transparent about fare rules, refunds, and exclusions, and regulators in Canada and the United States are watching how these tools are deployed. The case illustrates the importance of clear disclosures and easy access to human support whenever AI-generated information could affect a purchase decision.
Experts note that as artificial intelligence continues to expand across industries, there will be ongoing scrutiny over when automated guidance becomes binding. To protect consumers, companies may increasingly need to state explicit limits on AI-generated information and ensure that critical decisions can be reviewed with a human agent. The Moffatt decision may prompt airlines and other travel providers to audit chatbot outputs and to improve disclaimers, validation processes, and fallback procedures to prevent similar disputes.
For consumers, the episode reinforces a basic precaution: if a chatbot offers a critical price or policy detail, verify it through official channels, especially when the savings hinge on specific booking conditions. When in doubt, or when a potential loss is at stake, getting confirmation in writing or speaking with a human representative can avert costly misinterpretations.
Source: CBC