The Federal Trade Commission (FTC) is ramping up its efforts to combat deceptive uses of AI with its new initiative, "Operation AI Comply." Recent enforcement actions highlight the growing compliance risks facing companies that use artificial intelligence in ways that mislead consumers. Cases against Rytr and DoNotPay reveal how quickly AI misuse can trigger regulatory scrutiny. As AI adoption accelerates, businesses must prioritize compliance to avoid hefty penalties and reputational damage. Understanding AI compliance risk is crucial for navigating this evolving landscape.
In a remarkably short period of time, generative artificial intelligence (AI) has worked its way into countless business models, creating tremendous opportunities for growth. However, this rapid adoption has also introduced a significant AI compliance risk for companies. As is typically the case with new technological innovations, the dark side of AI has emerged, with some businesses embracing unethical practices that put them at risk of regulatory scrutiny and legal consequences.
This has not gone unnoticed by the Federal Trade Commission (FTC), which recently announced an enforcement initiative targeting multiple companies accused of using generative AI to facilitate deceptive conduct that harms consumers. Armed with yet another catchy name, "Operation AI Comply," the FTC has filed suit against several companies accused of using AI tools to trick, mislead, or defraud consumers by churning out fake reviews, making exaggerated claims about AI-powered tools, or promising easy money through AI-driven schemes.
But even with this new enforcement push, AI is evolving so fast that it is anyone's guess whether the FTC can keep pace.
In its complaint against Rytr, the FTC highlighted a significant AI compliance risk by accusing the AI-driven content creation service of enabling its subscribers to generate thousands of false product reviews, which were then posted on various platforms to manipulate consumer purchase decisions. The FTC charged Rytr with violating the FTC Act by providing users the means to create deceptive content, an unfair business practice that resulted in a flood of fake reviews harming both consumers and honest competitors.
The FTC also took action against DoNotPay, which describes its service as the "world’s first robot lawyer." According to the FTC’s complaint, the company promised consumers that its AI would allow them to “sue for assault without a lawyer” and “generate perfectly valid legal documents in no time,” and could otherwise serve as a far cheaper but equally effective substitute for a human attorney.
The FTC maintains that all of these claims were false and that the company failed to deliver on any of its lofty AI promises, to the detriment of its subscribers, many of whom apparently relied on the inadequate contracts and ineffective legal filings generated by the company's AI.
DoNotPay agreed to settle the FTC's claims by paying $193,000 and warning subscribers about the limitations of its service.
The FTC also filed suit against the owners of an online AI-driven business opportunity service called Ascend Ecom ("Ascend") for violations of the FTC Business Opportunity Rule and the Consumer Review Fairness Act. According to the FTC's complaint, Ascend employed a sales pitch that falsely claimed its business model was powered by AI and enabled consumers to quickly earn thousands of dollars in passive income from online sales.
Consumers were charged tens of thousands of dollars to start online stores on Amazon, Walmart, Etsy, and TikTok, and thousands more for inventory, only to discover that none of Ascend's "AI-powered" e-commerce claims were legitimate. This misuse of AI in its business model illustrates a severe AI compliance risk. Ascend and its owners are accused of defrauding consumers out of more than $25 million, and the seriousness of these accusations led a federal court to issue an order halting their operations and placing the business under the control of a receiver.
Finally, the FTC charged Empire Holdings Group LLC and its owners with maintaining a business opportunity scheme that falsely promised consumers an "AI-powered Ecommerce Empire," achievable by participating in training programs costing almost $2,000 or by purchasing a "done for you" online storefront for $35,000. The company's marketing encouraged consumers to "Skip the guesswork and start a million-dollar business today" by harnessing the "power of artificial intelligence" and the company's proven strategies to make $10,000 in passive monthly income.
The FTC's action was apparently triggered by numerous consumer complaints that stores purchased from the company made little or no money and that the company refused to issue promised refunds. The court overseeing the matter also halted the company's operations and placed the business under the control of a receiver.
For those requiring additional proof that the FTC is serious about protecting consumers from businesses that misuse AI, Operation AI Comply puts AI compliance risk at the forefront of the agency's enforcement agenda. These actions demonstrate that businesses ignoring AI compliance risks are likely to face significant legal challenges. If the allegations against companies like Rytr and DoNotPay are proven true, the misuse of AI can quickly escalate into regulatory action: Rytr's tool allegedly facilitated fraudulent reviews, while DoNotPay's so-called "robot lawyer" failed to live up to its promises, underscoring the dangers of neglecting AI compliance risk management.
As thousands of companies race to capitalize on AI’s potential, they must address AI compliance risk proactively. Those marketing AI capabilities must ensure their claims are realistic and compliant, focusing on what AI can truly deliver rather than hyping speculative benefits. Failure to manage AI compliance risk can lead to swift intervention by regulators like the FTC, ultimately putting both businesses and their reputations at risk.