Outrage Warps Reality

How mass paranoia and uninformed outrage will hurt the Fediverse, and what we can learn from the drama surrounding VLC's AI subtitles and Google's newest service.

(Characters drawn by @Ikaika; Neofox by @Volpeon; VLC logo by VideoLAN; SafetyCore logo by Google)
ℹ️
Information about this blog post
This blog post was written by two people. You might say it's an experiment, because it's the first time I'm doing that on here. You can look at the pictures to see which parts were written by whom. I'd love to hear feedback! ❤️
Steffo Head

The Fediverse is a social network built with federation in mind. It consists of many servers running different pieces of software that can all talk to each other – all so that no single entity can shut down or enshittify everything. Even though the Fediverse is made up of many different projects, there's one that is seen the most and is basically the first thing people think of when talking about the Fediverse: Mastodon.

Mastodon has a similar structure to Twitter, with posts of max. 500 characters (by default), polls, images, and more. Sadly, Mastodon (and the Fediverse) didn't only copy the great things from Twitter... it copied some bad aspects as well. Most importantly for this blog post: the way drama forms – now with added fediblock spiciness. 🌶️

A sticker of my dragon, representing the "This is Fine." meme.
(Sticker by @Ikaika)
Finnley Head

The Fediverse is quite a technical place. There is a disproportionate number of computer scientists, users of open-source software, furries, and so on. With this mix come opinions that deviate from the “mainstream”. You stand out if you use something that doesn’t adhere to the standards set by the community, and the backlash can be harsh. Some users reported being publicly criticized for not using Linux, even though that was completely off-topic to their original post.

However, the mistrust of major companies is understandable. Shady advertising and unnecessary AI bloat have fed the confusion and hate; sometimes it’s just terrible wording that makes people overreact. That overreaction can turn into outrage and misunderstandings, especially when authors exaggerate the issues – intentionally or unintentionally. Steffo and I are stepping into the role of duty solicitors to shine a light on how VLC and Google were mistreated by the community.

The VLC AI drama

Steffo Head

It all started with VideoLAN (the creators of the VLC media player) showing off the latest feature for VLC at CES 2025: automatic AI subtitling and translation, all running locally and offline in real time. They're using OpenAI's Whisper for transcription and Meta's SeamlessM4T for translation. Both models are open source, and the demo showed them both working great!
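To make the transcribe-then-translate idea concrete, here's a toy sketch of the pipeline shape – to be clear, this is not VLC's actual code: the `transcribe` and `translate` functions are hypothetical stand-ins for Whisper and SeamlessM4T, and only the SRT timestamp formatting is the real deal.

```python
# Toy sketch: audio -> timed segments -> translated SubRip (.srt) subtitles.
# transcribe() and translate() are invented placeholders for Whisper and
# SeamlessM4T; only the SRT formatting below is genuine.

def format_srt_time(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rest = divmod(ms, 3_600_000)
    m, rest = divmod(rest, 60_000)
    s, ms = divmod(rest, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def transcribe(audio_path: str) -> list[dict]:
    # Placeholder for a speech-recognition model call; a real
    # implementation would return timed segments from the audio track.
    return [{"start": 0.0, "end": 2.5, "text": "Hello, world!"}]

def translate(text: str, target_lang: str) -> str:
    # Placeholder for a translation model call.
    return text

def to_srt(segments: list[dict], target_lang: str) -> str:
    lines = []
    for i, seg in enumerate(segments, start=1):
        lines.append(str(i))
        lines.append(f"{format_srt_time(seg['start'])} --> "
                     f"{format_srt_time(seg['end'])}")
        lines.append(translate(seg["text"], target_lang))
        lines.append("")  # blank line separates SRT cues
    return "\n".join(lines)

print(to_srt(transcribe("movie.mkv"), "de"))
```

The interesting part is that both model calls run locally, so the whole chain works offline – exactly what the demo emphasized.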

With AI-powered subtitles, people who are hard of hearing can finally watch videos that don't have proper subtitles. Even though I'm not the target group for this, I think I can still say... that's a huge step towards more accessible tech!

A sticker of my dragon, in the "Whooo!" pose from Homer Simpson.
(Sticker by @Ikaika)

Now, this blog post is about the Fediverse, an accessible and tech-focused social media... That sounds like this new feature should be very welcomed, right?

Finnley Head

It was kinda welcomed, but at the same time, the fallout from using the buzzword had already begun.

See, I was really happy about the demo. Using on-device processing to make content more accessible for others is something wonderful. Recognition models might be the next real step towards inclusiveness. Sure, they aren't 100% accurate, but they take a lot of load off the volunteers who create and correct captions. The implementation could create a push towards better models with fewer problems.

The technology isn't exactly new. The same thing is used on many Android phones, for example the Google Pixel or the Samsung lineup: a button below the volume slider enables live captions, and it works surprisingly well. Adobe Premiere Pro does an equally good job at automatically generating subtitles, like the ones you see in short-form content ("Shorts", "Reels", etc.). (Note: I'm not talking about the stylistic choice of only showing 1–3 words per line.) Windows 11 has had OS-wide live captions since 2023 as well. Of course, you see the occasional hiccup, but that is laughably easy to fix for someone checking for errors.

While a big part of the Fediverse welcomed the change for the sake of inclusion, some people dismissed VLC as a bad actor, loudly announcing that they were moving away from the app. Many others followed without really giving it a thought.

Using a buzzword was the worst thing that VLC could have done at that moment. The unwanted bloat, terrible marketing, and misuse of the word “AI” led people to jump to quick conclusions without a single thought about the actual use. On the Fediverse and in many other FOSS-oriented communities, we've reached a point where “AI” has the same connotation as asbestos: as innovative, safe, and well-thought-out as your use of the funky fibers might be, you'd have a really hard time selling it.

The sourcing of training data is another point. How do you prove that data was sourced ethically? And, on the other hand, can you even build a reliable model from ethically sourced data with open licenses? Remember that – except for the translation part – this isn't about generation, it's about recognition, and data classification is one of the things ML has genuinely excelled at. With that in mind, the discourse about ethically sourced data is less clear-cut than it might seem.

⚠️
Whisper doesn't seem to be the right choice anyway, if this Ars Technica article is anything to go by. The Transformer-based model was trained on captions from good old YouTube, which aren't always the best – classic garbage in, garbage out. It also sometimes hallucinates what it considers to be missing context.
Steffo Head

I've personally tried Whisper multiple times, and it worked without issues for me. Sure, Whisper might not be the best for conversations, but things like videos and movies are what it was trained on. Of course, in the end it'll never be 100% accurate – like Finnley said – but I still think it's better than nothing.

Don't forget, VideoLAN could still switch to another transcription model that's trained on better data, so I think we should wait and see how this whole thing turns out in the end. Claiming that VLC wants to "destroy hand-made subtitles" is absolutely not justified and definitely not helping the situation.

A sticker of my dragon being annoyed.
(Sticker by @Ikaika)
Finnley Head

The outrage as a whole isn't justified. We don't know anything about resource usage; we hardly know anything about the overall implementation! They just showed a demo of what might be. People who speculate and then present their "findings" as facts hurt the discourse badly. In the end, nobody came out a winner – neither VLC, nor the supporters, nor the haters.

The Android SafetyCore drama

Steffo Head

Google published the app "Android System SafetyCore" to the Play Store on January 22nd, 2025, and (somehow) automatically installed it on more than one billion devices. The listing in the Play Store itself didn't say a lot, which sadly is typical for system components on the Play Store.

To be fair, SafetyCore got a small blog post explaining what it's actually for and what it's actually doing. It just... wasn't linked on the actual Play Store page. And the description that is on the Play Store isn't much help either...

Android System SafetyCore is a system service that provides safety features for Android devices.

The same goes for the screenshots of the app: just white images, one of which contains the logo of the app. Looks official and professional, yeah yeah.

A screenshot showing 4 "screenshots" like above mentioned.
The actual "screenshots" of Android System SafetyCore. I'm not kidding.
A sticker of my dragon facepalming himself.
(Sticker by @Ikaika)

Now, starting with Android 10, Google has been putting some important system components on the Play Store so they can be updated more often than through full Android system updates (Project Mainline). It's a good idea that failed because people didn't like seeing updates for apps they never installed. (Fair – normally, that would sound like malware.)

Like I said, system components often don't have good descriptions of what they actually do - and with Google being Google, people didn't trust these apps at all. The same happened with SafetyCore.

But hey, that's just the people who aren't technically inclined... I'm sure the Fediverse handled this way better, right? I mean, the rollout of the first system components went over so well, so why should this one be any different?

Finnley Head

In one of the first posts I could find relating to SafetyCore, Steffo replied with something I couldn't have said better myself: FUD – fear, uncertainty, doubt. The news had broken weeks before (!), but only after the service rolled out – probably with an update – did things go nucular [sic!].

A quick deep dive into Google's blog article

Before I analyze the response by the community, I want to go on a quick tangent and discuss the blog article by Google, written by some crack engineers, including "Sr. Product Manager Google Messages and RCS Spam and Abuse" Alberto Pastor Nieto (what a job title – that probably doesn't fit on LinkedIn).

The article is about how Google wants to improve safety and wellbeing in Google Messages, the default app for SMS and RCS (Rich Communication Services). The first points are about spam and scams: Google wants to protect users from fraudulent messages, such as fake package-delivery warnings trying to bait you into clicking some link. Google claims to use on-device ML to classify those messages and move them into the spam folder; the same goes for suspicious links on their own. Also, messages from unknown international numbers can be blocked if you opt in for that. So far, so unspectacular.

The spicy part is found in the fourth bullet point: Google wants to introduce on-device ML into the Messages app to classify images containing nudity. It is used either to blur such an image when received or to warn the user when they are about to send or forward one – it serves as a "speed bump". Google specifically says that the classification happens on the device and that the results will not be forwarded unless you report them. The feature is opt-in for adults and opt-out for minors.

With those new use cases, new models have to be shipped and kept up to date. That's why SafetyCore exists: it provides the models to classify messages and images in the ways explained above. Instead of sending information to some server, SafetyCore provides the endpoint to "talk" to the on-device classification and returns the results.
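Put differently, the architecture Google describes is a plain "local function call" pattern. Here's a minimal toy sketch of it – every name below is invented for illustration and has nothing to do with SafetyCore's real API; the point is only that the image goes into a local classifier and nothing but a verdict comes back.

```python
# Toy illustration of on-device classification. All names are made up;
# this is NOT SafetyCore's real API. Note there is no network code at
# all: the image bytes never leave the process.

from dataclasses import dataclass

@dataclass
class Verdict:
    sensitive: bool
    confidence: float

def classify_locally(image_bytes: bytes) -> Verdict:
    # Stand-in for the on-device ML model. A real implementation would
    # run a neural net over the decoded image; this dummy heuristic
    # just keeps the sketch runnable.
    looks_sensitive = len(image_bytes) % 2 == 0
    return Verdict(sensitive=looks_sensitive, confidence=0.9)

def handle_incoming_image(image_bytes: bytes) -> str:
    """What a messaging app would do with the verdict."""
    verdict = classify_locally(image_bytes)
    if verdict.sensitive:
        return "blurred"  # blur until the user explicitly taps through
    return "shown"
```

Nothing in this flow requires uploading the image anywhere – reporting a result would be a separate, user-initiated step.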

The Fediverse's reaction

This time, the Fediverse wasn't so kind. The news spread through what seemed to be a game of "telephone" (or "Chinese whispers" if you are British). Oversimplifications warped the news from "Google uses on-device ML to warn you before you open nudes in Messages" into "Google scans all images on your device". Some users speculated that the feature would also be applied to every picture taken, while others started to scream that Google was violating EU law, specifically the GDPR. Of course, those posts were happily boosted all over the place.

💢
Rant:
Seriously, how cooked do you have to be to spread that before checking it? It's a thing we would totally expect from Google, that's out of the question.
But be straight with me: were you outraged when Apple introduced a similar – and notably way more powerful – feature, a scanner for CSAM? This is far milder and is in place to protect you from unwanted dick pics and the like. Also, it's opt-in for the great majority of you anyway.

The situation got so bizarre that even GrapheneOS - you know, the chaps with the philosophy "phone OS, no Google" - had to step in and tell everybody to stop yapping. To me, that is pretty telling about the current state of the community.

Post by @GrapheneOS@grapheneos.social

When everything that isn't open source, or anything that comes from a major company, is considered malicious, who can you really trust? The FOSS community isn't safe either, as various attacks on repositories have proven – such as in February 2024, when a malicious backdoor was introduced into XZ Utils. I know that having a black box in front of you isn't exactly trust-inspiring, but in my opinion, one should at least take a closer look at what this black box does and doesn't do, instead of throwing it into the same ocean as other discourses to drown.

Trust is something you have to work for; mistrust is something you earn through terrible decisions. Google has earned its fair share of mistrust, not going to deny that. This reaction from the community, however, was silly at best. Prejudgment combined with the spread of false information created an atmosphere that was just uncalled-for. I expected better of this community, and while some authors edited their posts to correct what they said, the damage was already done.

Conclusion

Steffo Head

So, what did we learn from these two situations?

I'd say: check before you post or boost. Sure, it's not always possible, and sure, boosting can be meant as a nice gesture, but spreading false information is one of the main reasons there's so much drama on the Fediverse. If you've heard something, or saw something weird, check whether there's more info you don't know yet. If a post calls something out, check its sources, or search for them yourself (in your favorite search engine) to double-check the information. Boosting misinformation or rage bait hurts the Fediverse.

Finnley Head

You can't fit complicated matters and discourses into a standardized frame of 500 characters. It's simply not possible. This polarizing environment is counterproductive for everyone, with microblogging catalyzing the dynamic. It's an issue we inherited from Twitter, and I think we should be better than that. We should strive to be a community that doesn't need fact-checkers or community notes. Think before you act.

Love with your heart, use your head for everything else.
— Captain Disillusion