Some creators are noticing that their videos look off—shadows are over‑punchy, edges too crisp, and footage sometimes takes on a “plastic,” almost oil‑painting vibe.
Musicians Rick Beato (5 M subs) and Rhett Shull (700 k+ subs) were among the first to raise alarms. Shull compared Shorts posted on Instagram versus YouTube and said the latter looked "smoothened," as if some unwanted "AI upscaling" had been applied. He worries the effect could lead viewers to think he's using AI to produce, or even deepfake, his videos, and that it could erode trust in his content.
YouTube admitted it’s running an experiment on select Shorts — using “traditional machine learning” to unblur, denoise, and improve clarity, much like computational photography on smartphones. But importantly, creators weren’t informed—and there’s still no opt‑out.
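For context, the kind of "traditional" enhancement YouTube describes is closer to classical image processing than to generative models. The sketch below is purely illustrative and not YouTube's actual pipeline: it runs a non-local-means denoise followed by an unsharp-mask sharpen with OpenCV, the sort of clarity boost that, pushed too far, can also produce the over-crisp, "plastic" look creators describe. File names are hypothetical.

```python
# Illustrative only: a classical denoise-and-sharpen pass, NOT YouTube's pipeline.
# Aggressive settings like these tend to flatten grain and exaggerate edges.
import cv2

def enhance_frame(frame_bgr):
    # Non-local-means denoising smooths noise (and, as a side effect, fine texture).
    denoised = cv2.fastNlMeansDenoisingColored(frame_bgr, None, 10, 10, 7, 21)
    # Unsharp mask: subtract a blurred copy to boost edge contrast.
    blurred = cv2.GaussianBlur(denoised, (0, 0), sigmaX=3)
    return cv2.addWeighted(denoised, 1.5, blurred, -0.5, 0)

if __name__ == "__main__":
    frame = cv2.imread("input_frame.png")   # hypothetical input frame
    cv2.imwrite("enhanced_frame.png", enhance_frame(frame))
```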
One multimedia artist, who deliberately crafted a VHS-style, washed-out aesthetic, found the charm of his "grainy, authentic" videos stripped away by these filters. "YouTube's filter obscured this labor-intensive quality," he said on Reddit. Digital ethics experts argue there's a world of difference between user-controlled phone filters and platform-side modifications applied without creators' knowledge.
This move raises larger concerns about authenticity, transparency, and creative control. Despite YouTube's insistence that it's not using generative AI, critics point out that the effects echo diffusion-based upscaling models, and the lack of consent or disclosure stings.
YouTube has said it’s working on an opt‑out option — but only after the backlash.