That White Guy Complaining About Jobs at Tim Hortons? He’s AI-Generated
A series of controversial TikTok videos featuring a white man upset about not getting hired at Tim Hortons has now been taken down—but not before racking up serious attention and backlash. Turns out, the man in the videos? He’s not real. He’s AI.

The fake persona named "Josh" stirred controversy on TikTok
The videos showed a young white man named “Josh” venting about Canada’s job market. In one clip, he claims he couldn’t get a job because “all the jobs are taken by immigrants,” and even says he was asked if he spoke Punjabi when applying to Tim Hortons.
But “Josh” isn’t a real person. He’s an AI-generated avatar created with Google’s Veo 3 video generation tool. According to CBC News, a Canadian company called Nexa created the character and used him in a series of “fake-fluencing” videos to market its recruitment services.
The videos were subtle but effective. Viewers saw “Josh” in realistic urban settings, sipping coffee and sharing complaints. While some spotted the fakery—like mismatched hands and nonsense text in the background—others believed the character was real, engaging with him in the comments.
TikTok removed the videos after CBC’s investigation

TikTok removed the videos, stating they violated community guidelines because they failed to clearly label the content as AI-generated. A small watermark from Google Veo was present, but TikTok said it wasn’t enough.
Under TikTok’s policy, realistic AI-generated videos must carry obvious labels—captions, watermarks, or stickers—that make the nature of the content clear.
Still, it wasn’t the misleading visuals that caused the most alarm—it was the racially charged messaging. One video directly blames Indian immigrants for taking jobs, echoing far-right rhetoric under the guise of satire.
Marketing experts and academics have condemned the campaign. York University professor Markus Giesler called it “highly unethical” and “unlike anything I’ve ever seen.”
What is fake-fluencing—and why it matters now

The “Josh” campaign is a clear example of fake-fluencing—a growing trend where brands use AI-generated avatars to seem relatable or viral, without revealing that the people aren’t real.
Some brands use AI influencers for harmless, fun promotions. But this case crossed a line. It pushed racially divisive narratives just to get attention.
Nexa’s founder, Divy Nayyar, told CBC the videos were meant to “connect” with job seekers and “have fun” with stereotypes. Many viewers, however, found the content tone-deaf and harmful.
Nayyar claimed the videos should’ve been “obviously AI” to viewers using common sense. But with AI video tools like Veo 3 becoming more realistic—syncing speech, gestures, and facial movements—many experts fear we’re heading into dangerous territory.
How real is too real?
AI videos have become harder to detect. Tools like Veo 3 generate lip-synced dialogue and near-photorealistic visuals, making it difficult for casual viewers to know what’s authentic and what’s engineered to provoke.
Even marketing professor Marvin Ryder admitted the ad fooled him at first—he only realized the truth after a closer look.
As AI content grows, so does the risk of manipulation, disinformation, and emotional exploitation—especially when targeting vulnerable or frustrated groups online.
TikTok says it will continue enforcing its AI disclosure rules. But as AI tools get more powerful, experts warn that platforms, regulators, and users alike will need stronger safeguards to tell fact from fiction.
What do you think—should TikTok and other platforms require clearer AI disclosure, or does it fall on viewers to spot what’s fake?
More…
- https://www.cbc.ca/news/ai-generated-fake-marketing-1.7578772
- https://www.reddit.com/r/technology/comments/1lvim9c/that_white_guy_who_cant_get_a_job_at_tim_hortons
- https://news.ycombinator.com/item?id=44510331
