Hello and welcome to Eye on AI.
Today we exclusively present new research that shows how generative AI is driving the proliferation of fake reviews online, tricking users into downloading malware-infected apps, and deceiving advertisers into placing their ads in bad apps.
DoubleVerify's fraud analysis team, which provides advertisers, marketplaces and publishers with tools and research to detect fraud and protect their brands, today released research describing how generative AI tools are being used at scale to create fraudulent app reviews faster and more easily than ever before. Researchers told Eye on AI that they found tens of thousands of fake AI-generated reviews powering thousands of bad apps across major app stores, including the Apple App Store, Google Play Store, and connected TV app stores.
“[AI is] essentially allowing the scale of fake reviews to grow exponentially,” Gilit Saporta, senior director of fraud analysis at DoubleVerify, told Eye on AI.
Fake reviews have been a problem online for years, especially on e-commerce platforms like Amazon. Earlier this month, the FTC finalized rules banning fake reviews and related deceptive practices, including buying reviews, misrepresenting real reviews on a company's own website, and buying fake social media followers and engagement.
According to DoubleVerify, the finalized rules also explicitly prohibit AI-generated reviews, which have increasingly flooded Amazon, TripAdvisor, and every other site where reviews can be found since generative AI tools became readily available. In their new findings, the company's fraud researchers explain how generative AI is causing an already widespread problem to explode, especially in app stores. The company identified that in 2024, the number of apps with fake AI-generated reviews more than tripled compared to the same period in 2023. Saporta said that while some reviews contain obvious phrases that indicate they are AI-generated (“I'm a language model”), others look genuine and are difficult for users to spot. Only through extensive review analysis could her team spot other subtleties that indicate AI generation, such as phrases being repeated multiple times.
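The repeated-phrase signal Saporta describes can be approximated with a simple n-gram counter across a batch of reviews. The sketch below is purely illustrative; the function name, phrase length, and threshold are my own assumptions, not DoubleVerify's actual methodology:

```python
from collections import Counter

def repeated_phrases(reviews, n=5, min_count=3):
    """Count n-word phrases that recur across a set of reviews.

    Identical phrasing repeated across supposedly independent
    reviewers is one subtle sign of AI-generated review farms.
    Thresholds here are illustrative assumptions only.
    """
    counts = Counter()
    for text in reviews:
        words = text.lower().split()
        for i in range(len(words) - n + 1):
            counts[" ".join(words[i:i + n])] += 1
    # Keep only phrases that appear suspiciously often
    return {phrase: c for phrase, c in counts.items() if c >= min_count}
```

In practice a real detection pipeline would normalize punctuation, compare against a baseline of legitimate reviews, and weight by reviewer account signals, but even this naive counter surfaces verbatim boilerplate shared across reviews.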
Malicious apps legitimized by AI reviews typically install malware on users' devices to collect data, or request intrusive permissions, such as permission to run in the background undetected.
Saporta said many of these apps “target the most vulnerable segments of society,” including seniors (magnifying glass apps, flashlight apps) and children (popular kids' mobile games promising free coins, gems, etc.). Other specific apps that DoubleVerify found to have a high number of AI-generated reviews include Wotcho TV in the Fire TV app, My AI Chatbot in the Google Play Store, and Brain E-Books in the Google Play Store.
DoubleVerify also found that malicious apps that host audio content rely heavily on AI-generated reviews. Because advertisers pay a premium for audio ads, this technique depends on making the app appear legitimate to both users and advertisers. Once downloaded, these apps install malware that simulates audio playback or plays audio in the background without the user's knowledge (draining the battery and increasing data usage). This allows the app creators to fraudulently charge advertisers for ad impressions no one actually hears.
In some cases, bad app creators themselves use tools like ChatGPT to quickly generate 5-star reviews, or they outsource the task to gig economy workers. One sign to watch out for is if an app has 90% 5-star reviews, 10% 1-star reviews, and nothing in between.
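That lopsided rating distribution is easy to check for programmatically. Here is a minimal sketch; the 85%/5% thresholds are illustrative assumptions on my part, not a documented detection rule:

```python
def is_polarized(ratings):
    """Flag a review distribution matching the suspicious pattern
    described above: almost all 5-star, a sliver of 1-star, and
    essentially nothing in between. Thresholds are illustrative.

    `ratings` maps star value (1-5) to review count.
    """
    total = sum(ratings.values())
    if total == 0:
        return False
    five_share = ratings.get(5, 0) / total
    one_share = ratings.get(1, 0) / total
    middle_share = 1 - five_share - one_share
    return five_share >= 0.85 and one_share >= 0.05 and middle_share <= 0.05

# An app with 900 five-star and 100 one-star reviews trips the check;
# a naturally spread distribution does not.
print(is_polarized({5: 900, 1: 100}))                          # True
print(is_polarized({5: 500, 4: 300, 3: 100, 2: 50, 1: 50}))    # False
```

Genuine apps tend to accumulate 2-, 3-, and 4-star reviews over time, which is why the empty middle of the distribution is the tell rather than the 5-star share alone.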
“I think it would be very hard for someone who doesn't know that an app shows suspicious patterns to spot reviews created by AI,” Saporta said, adding that app stores are aware of the issue and that DoubleVerify is working with them to flag problematic apps.
AI companies tout that generative AI models make writing easier, but that ease comes at a cost. The surge in AI-generated reviews mirrors how AI tools have made it faster and easier for hackers to create convincing phishing emails. Educators say students are outsourcing writing tasks to ChatGPT, and recruiters say they are overwhelmed by a flood of low-quality resumes created by AI tools. DoubleVerify has also tracked how bad actors are using AI to create shell e-commerce websites for companies that don't actually exist.
Technology often aims to lower the barriers to entry, but can it be lowered too much?
Now for some more AI news.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com
AI in the News
California lawmakers overwhelmingly approve sweeping AI bill. That's according to The New York Times. As Jeremy Kahn wrote in Tuesday's newsletter, the bill, called SB 1047, aims to prevent catastrophic harm caused by AI and has divided the AI community. After being watered down following lobbying from the tech industry, the version of the bill approved yesterday would require AI companies to test the safety of their largest AI models before release and would allow state attorneys general to sue model developers for significant harm caused by their technology. If passed, the bill could become de facto regulation across the United States, which still lacks national rules on AI. For more on the bill, see the helpful explainer from Fortune's Jen Blythe.
Nvidia beats expectations with record second-quarter profit. As Fortune's Alexei Oreskovich reported, the company posted revenue of $30 billion for the three months ended July 28, up 122% from a year earlier and well ahead of Wall Street's already bullish forecast of $28.9 billion. More importantly, Nvidia confirmed that customer orders for its next-generation Blackwell AI chips will begin in the fourth quarter. Rumors had been swirling that the chips' launch might be delayed, adding pressure ahead of yesterday's earnings report.
Uber is partnering with Wayve to develop mapless self-driving cars. The two companies today announced a partnership, along with a strategic investment in Wayve from Uber. Wayve is among the self-driving companies taking a newer "mapless" approach that leans heavily on AI and is designed to let self-driving cars operate without geofencing restrictions. With Uber's backing, Wayve plans to accelerate its work with automakers to power consumer vehicles with Level 2+ advanced driver assistance and Level 3 automated driving capabilities. It also plans to work toward Uber's future Level 4 self-driving cars (the level at which a vehicle can drive itself fully without human intervention in certain situations). Uber cofounder and former CEO Travis Kalanick began publicly talking about replacing drivers with self-driving cars in 2014, and the company invested more than $1 billion in self-driving technology before eventually selling its self-driving division to Aurora. Last week, I wrote about the role AI will play in self-driving cars (and why autonomous vehicle companies aren't riding the AI hype). You can read the article here.
A new study provides the first empirical evidence that LLMs exhibit racialized linguistic stereotypes. Researchers from Stanford University, Oxford University, and the University of Chicago, in a paper published yesterday in Nature, detail how AI language models show particular bias against speakers of African-American English. Specifically, the researchers explain that there is a discrepancy between what the language models overtly say about African-Americans and what they covertly associate with them through dialect. They also argue that current techniques aimed at reducing racial bias in language models (such as human preference alignment) actually exacerbate that discrepancy by obscuring racism in LLMs. Finally, they found that the models are more likely to "suggest that African-American English speakers will hold less prestigious jobs, be convicted of crimes, or be sentenced to death than speakers of Standard American English." These are big judgments to leave in the hands of LLMs, and they are exactly the types of use cases many lawmakers are trying to regulate. For example, the EU AI Act designates employment and judicial procedures as "high-risk" areas, subjecting AI deployment there to stricter guardrails and transparency requirements.
Fortune on AI
Google is now using AI to facilitate staff meetings, and employees say the AI asks softball questions —Marco Quiroz-Gutierrez
Wall Street's AI darling Supermicro delays earnings release amid scrutiny from short sellers —Will Daniel
Meta has given up on making custom chips for its upcoming AR glasses —Kali Hays
Klarna is cutting 1,800 employees and hopes AI will make their roles obsolete —Ryan Hogg
Why Honeywell is betting so big on the AI era —John Kell
AI Calendar
September 10-11: AI Conference, San Francisco
September 10-12: AI Hardware and AI Edge Summit, San Jose, CA
September 17-19: Dreamforce, San Francisco
September 25-26: Meta Connect, Menlo Park, CA
October 22-23: TEDAI, San Francisco
October 28-30: Voice & AI, Arlington, VA
November 19-22: Microsoft Ignite, Chicago, IL
December 2-6: AWS re:Invent, Las Vegas, NV
December 8-12: Neural Information Processing Systems (NeurIPS) 2024, Vancouver, British Columbia
December 9-10: Fortune Brainstorm AI, San Francisco (register here)
Eye on AI Numbers
1,914
That's the number of AI companies with B2B business models that have received equity investments so far in 2024 (as of August 9), according to CB Insights. Meanwhile, the number of AI companies with B2C business models that have done so is just 214.
The figure continues a trend (the data shows B2B AI deals have significantly outnumbered B2C deals for at least the past four years) and reflects the strategies being adopted by major AI companies. OpenAI's ChatGPT, for example, is the company's most visible consumer-facing product, yet its investments and acquisitions are primarily geared toward accelerating its enterprise strategy. This makes sense, since enterprises have far more of the resources needed to experiment with AI.