By: Scott Reid
November 20, 2024
With Bluesky's userbase skyrocketing from nine million in September to more than 20 million in the days following the U.S. elections, how it moderates content and handles misinformation will become increasingly important.
Here, Logically Facts examines what we know about Bluesky's approach to content moderation, misinformation, and verification.
Bluesky is built on the open-source AT Protocol and was designed so that no single company controls your entire experience and algorithms. The principle is that social networking is too important for one company or person to be in full control.
It was founded in 2019 by then-Twitter CEO Jack Dorsey to explore decentralized technologies. However, it is now fully independent of X and Dorsey, with Jay Graber as its CEO.
In a recent thread, Jay Graber underlined the idea behind Bluesky. (Source: Bluesky/Screenshot)
Bluesky's approach to content moderation is similar to that of other social media networks in one regard but entirely different in another.
We'll start with the similarities. Bluesky has a moderation team that enforces its community guidelines. The team addresses content covered by law, such as child sexual abuse material, as well as issues like harassment and extremist content.
Users can also self-moderate by blocking and muting accounts and by creating blocklists that others can subscribe to. They can also sign up for additional moderation services on top of the existing framework.
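To make that concrete, here is a minimal sketch (in TypeScript) of how a client could declare such a subscription at the protocol level. It assumes the AT Protocol convention of listing subscribed labeler identifiers (DIDs) in an "atproto-accept-labelers" header on requests to Bluesky's public AppView; the labeler DID shown is a made-up placeholder rather than a real service.

```typescript
// A minimal sketch of how a client could opt in to a third-party moderation
// (labeler) service. We assume the AT Protocol convention of listing
// subscribed labeler DIDs in an "atproto-accept-labelers" header on requests
// to the public AppView; the labeler DID below is a made-up placeholder.
const subscribedLabelers = ["did:plc:examplelabeler1234"];

async function fetchProfileWithLabels(actor: string): Promise<unknown[]> {
  const res = await fetch(
    `https://public.api.bsky.app/xrpc/app.bsky.actor.getProfile?actor=${actor}`,
    { headers: { "atproto-accept-labelers": subscribedLabelers.join(", ") } }
  );
  const profile = await res.json();
  // Labels applied by the subscribed services arrive alongside the profile
  // data; the client then decides whether to hide, blur, or warn.
  return profile.labels ?? [];
}

fetchProfileWithLabels("bsky.app").then(console.log);
```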
Bluesky made an active decision to get involved in election integrity issues, and its rules ban "voter suppression" and the sharing of "misleading content about election processes," among other things.
The rules also prohibit impersonation, though accounts posing as prominent politicians, such as Conservative leader Kemi Badenoch and Reform U.K. leader Nigel Farage, have appeared and fooled some journalists.
BBC Verify journalist Shayan Sardarizadeh clarifies that this account is not run by the U.K. Conservative leader of the opposition, Kemi Badenoch. (Source: Bluesky/Screenshot).
There aren't specific rules against misinformation in general, however.
Speaking to Newsweek in August, Paul Frazee, product developer and protocol engineer at Bluesky, said the platform hopes to add a feature similar to X's Community Notes and that users can "subscribe to moderation decisions from other organizations they trust."
In a blog post, Bluesky suggests as an example that someone could set up a "spider shield" moderation service that blocks photos of spiders for people with arachnophobia. You would subscribe to that service, and any labeled spider pictures would disappear. You could also report unlabeled spider photos to that moderator.
Taking that a stage further, someone could theoretically set up a moderation service that hides political content or, indeed, one that applies fact-checking labels.
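As a rough illustration, a single label emitted by such a service looks something like the sketch below, based on the AT Protocol's published label schema as we understand it; the account identifiers, post URI, and timestamp are invented placeholders.

```typescript
// A simplified view of a single label, based on the AT Protocol's published
// label schema. All identifiers below are invented placeholders.
interface Label {
  src: string;   // DID of the moderation service that applied the label
  uri: string;   // the post or account the label refers to
  val: string;   // the label value, e.g. "spider" for the spider-shield example
  cts: string;   // timestamp the label was created
  neg?: boolean; // set when a previously applied label is being retracted
}

const exampleLabel: Label = {
  src: "did:plc:spidershieldexample",                        // hypothetical labeler
  uri: "at://did:plc:someuser/app.bsky.feed.post/3kexample", // hypothetical post
  val: "spider",
  cts: "2024-11-20T12:00:00.000Z",
};

// A client subscribed to this labeler would see the "spider" value attached to
// the post and hide or blur it according to the user's settings.
console.log(exampleLabel);
```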
Within settings, users are also given options to switch warning labels for different categories on or off. These categories include a label for misinformation, which implies that warning labels tackling false claims can be switched off.
A screenshot of the settings within Bluesky (Source: Bluesky/Screenshot/Modified by Logically Facts)
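As a hedged illustration of what "switching a warning label off" means in practice, the client-side logic works roughly like this; the preference shape and category names below are invented for illustration and are not Bluesky's actual settings schema.

```typescript
// A hedged illustration of per-category label settings on the client side.
// The preference shape and the category names are invented for illustration
// and are not Bluesky's actual settings schema.
type Visibility = "hide" | "warn" | "show";

const labelPreferences: Record<string, Visibility> = {
  misinformation: "warn", // switching this to "show" removes the warning entirely
  spider: "hide",
};

function visibilityFor(postLabels: string[]): Visibility {
  // The strictest matching preference wins: hide > warn > show.
  let result: Visibility = "show";
  for (const label of postLabels) {
    const pref = labelPreferences[label];
    if (pref === "hide") return "hide";
    if (pref === "warn") result = "warn";
  }
  return result;
}

console.log(visibilityFor(["misinformation"])); // "warn" unless switched off
```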
Bluesky has been contacted for comment.
Meanwhile, Logically Facts is monitoring claims on Bluesky, and you can read a couple of our recent fact-checks here and here.
There is a form of verification on Bluesky, but it's very different from that on other social media platforms, and you have to do it yourself.
When you sign up, your username shows up as a domain. If you use the default one offered by Bluesky, your username will end in bsky.social.
However, you can use your own domain or one run by the company you work for. For example, U.S. news outlet NPR uses npr.org.
Bluesky allows you to use your own domain in your username, in this instance npr.org (Source: Bluesky/Screenshot/Modified by Logically Facts).
The theory is that this handle would allow you to move seamlessly between different apps that use the same protocol.
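Because the handle is just a domain, anyone can check the claim behind it. The sketch below shows one of the resolution paths described in the AT Protocol documentation, as we understand it: fetching the account identifier (DID) the domain publishes at a well-known URL. The other path is a DNS TXT record at _atproto.<domain>.

```typescript
// A minimal sketch of one half of AT Protocol handle resolution: asking the
// domain itself which account (DID) it claims. The other documented path is a
// DNS TXT record published at _atproto.<domain>.
async function resolveHandleViaHttps(handle: string): Promise<string | null> {
  const res = await fetch(`https://${handle}/.well-known/atproto-did`);
  if (!res.ok) return null;
  const did = (await res.text()).trim();
  // A valid response is simply the account's DID as plain text, e.g. "did:plc:..."
  return did.startsWith("did:") ? did : null;
}

// For example, resolveHandleViaHttps("npr.org") should return NPR's account DID
// only if that domain genuinely vouches for the account.
resolveHandleViaHttps("npr.org").then(console.log);
```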
However, this is very different from the form of verification most people will be familiar with — the verified tick still used by Instagram, Facebook, and TikTok — and many users may not know what to look out for.
Freelance tech journalist Ewan Spence told Logically Facts that this "technical" setup, which may have been appropriate when Bluesky was a smaller project, may not be sufficient for an ever-growing userbase.
He said, "Now it's reached the mainstream, asking a large organization to act as the authenticator service for every service and person is burdensome on it and is such a large technological overhead to be cost-effective for anything other than a small business. And that assumes you trust the website that is doing the verification."
Social media companies have historically had difficulty balancing their wish to promote free speech with ensuring their platforms are safe for users. Balancing such considerations was a key reason for creating Bluesky and its different approach to moderation.
However, deputy director of the NYU Stern Center for Business and Human Rights, Paul Barrett, told Logically Facts, "BlueSky will need to add staff and improve its automated systems as its user base — and content volume and variety — continues to grow. It needs to learn from the mistakes of platforms like Facebook, which did not add sufficient moderation capacity as they grew exponentially."
And Ewan Spence raised the spiraling number of moderation reports Bluesky has recently received from users, stating, "In general, moderation requires a huge dataset, a huge amount of automation but also a huge amount of people who are checking the false positives."
Pointing to the rapid growth in demand for Bluesky versus the number of staff already at the service, he added, "They will be short-staffed, no matter how the moderation service works. There is going to be a lag in getting the resources required to accommodate the increase in growth, and this critical period of growth when Bluesky has moments to establish itself is the point when its moderation service is at its most vulnerable."