– Online product reviews have become a battleground for modern AI, with generative AI capable of creating human-like reviews facing off against AI trained to detect fake reviews.
– Saoud Khalifah, the founder and CEO of Fakespot, a startup that uses AI to detect fraudulent reviews, reports a surge in AI-generated fake reviews and says the models are now capable of writing about anything.
– The distinction between real and fake reviews has become less clear, and because the technology to detect fraudulent reviews is still a work in progress, the widespread availability of advanced AI tools on the internet is raising concerns.
– The Federal Trade Commission (FTC) is proposing a new rule to crack down on fraudulent reviews, banning fake reviews, paid reviews, and other deceptive practices, with substantial fines for violators.
– The extent to which AI-generated content is being used by bad actors is uncertain, raising concerns about the use of chatbots to create various forms of fake content online.
– There are indications that AI-generated reviews are already prevalent, with some Amazon reviews betraying AI involvement by opening with the phrase "As an AI language model..."
– Amazon, among other online sellers, has been combating fake reviews with a combination of human investigators and AI, but AI-generated reviews that are authentic and don't violate its guidelines are allowed on the platform.
– The question remains whether detection AI can outsmart the AI generating fake reviews, since distinguishing AI-generated content from human-written content is still a difficult problem.
– Studies indicate that humans struggle to detect reviews written by AI, and efforts are underway to develop AI systems to detect AI-generated content, even within AI companies themselves.
– Consumer advocates are concerned about the potential consequences, as 90% of consumers rely on reviews while shopping online, and dishonest businesses can produce large numbers of real-sounding reviews using AI in a matter of seconds.