Moderation in a Public Commons
June 23, 2023
by The Bluesky Team
The goal of Bluesky is to turn social media into a shared public commons. We don’t want to own people’s social graphs or communities. We want to be a tool that helps communities own and govern themselves.
The reason we focus on communities is that for an open commons to work, there needs to be some sort of structure that protects the people who participate. Safety can’t just be left up to each individual to deal with on their own. The burden this puts on people — especially those who are most vulnerable to online abuse and harassment — is too high. It also doesn’t mirror how things work in the real world: we form groups and communities so that we can help each other. The moderation tooling we’re building takes into account how social spaces are formed and shaped through communities.
Today, we’re publishing some proposals for new moderation and safety tooling. The first focuses on user lists and reply controls, which can be used for community-driven moderation. The second will focus on moderator services and how they can handle problems that small communities can’t. The third is for hashtags, which are not directly related to moderation but can have a large effect on customizing what you see. (We wanted to include this to distinguish between the labeling proposal, which is intended to address moderation, and mechanisms for discovery.)
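To give a flavor of the user lists proposal, here is a rough TypeScript sketch of how a community-maintained list might be applied as a mute list when rendering a feed. The record shape and field names here are illustrative assumptions, not the lexicon defined in the proposal itself.

```typescript
// Hypothetical shape of a community-maintained user list record.
// These field names are assumptions for illustration, not the proposal's lexicon.
interface UserList {
  name: string;            // e.g. "Known spam accounts"
  purpose: 'mute' | 'block';
  createdAt: string;       // ISO 8601 timestamp
  members: string[];       // DIDs of the listed accounts
}

interface FeedPost {
  uri: string;
  authorDid: string;
  text: string;
}

// Apply subscribed lists as mute lists: drop posts whose authors appear
// on any list with a 'mute' purpose before the feed is rendered.
function applyMuteLists(feed: FeedPost[], lists: UserList[]): FeedPost[] {
  const muted = new Set<string>(
    lists.filter((l) => l.purpose === 'mute').flatMap((l) => l.members),
  );
  return feed.filter((post) => !muted.has(post.authorDid));
}
```

The idea is that the list itself is a shared, subscribable object, so the work of curating it can be pooled across a community instead of being repeated by every individual.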
These proposals are published on GitHub, and we’d love to hear your feedback and discussion there. In the rest of this post, we want to first share some insight into why we believe a public commons is important for social media.
Why build a public commons?
We’re a public benefit company that was formed with the mission “to develop and drive large-scale adoption of technologies for open and decentralized public conversation.” We believe that public conversations, which form the basis of a democratic society, should be held in spaces that can be a shared public commons.
A company is an efficient structure for building out a cohesive vision of how things should work, but locking users into our systems would be antithetical to our mission. An open commons can’t be governed at the sole discretion of one global company. We offer services like professional moderators so that we can help protect people and provide a good experience, but we shouldn’t exert total control over everyone’s experience, for all time, with no alternative. Users should be able to walk away from us without walking away from their social lives.
We’re building in decentralization because we observed that business interests and the open web have a habit of coming into conflict. Third-party developers often get locked out. Moderation policies come into conflict with the diverse interests and needs of different groups of users. Ads push towards algorithms that optimize for engagement. It’s a systemic problem that keeps playing out as centralized social media companies rise and fall.
Some might think this isn’t a big deal. Maybe social media is like a club, and the place can be happening one day and dead the next. Would it be so bad if people just moved on to the next joint...? The problem is that people put their lives and livelihoods into these networks. When a network shuts down, people lose out on a lot: they lose their connections, they lose their creative work, and they lose their places of business. Social networks are not just products; they’ve become digital homes.
Even when things are working correctly on social platforms, there are weird dynamics caused by people’s relationships being mediated by a single company. The Internet is pretty obviously real life in the sense that its management has real-world consequences. When these places control our identities and our ability to connect and to make money, having no way out from the founding company is a precarious situation. The power difference is daunting.
The goal of Bluesky is to rebuild social networking so that there’s no lock-in to the founding company, which in this case is us. We can try to provide a cohesive, enjoyable experience, but there’s always an exit. Users can move their accounts to other providers. Developers can run their own connected infrastructure. Creators can keep access to their audiences. We hope this helps break the cycle of social media companies coming into conflict with the open web.
What should this feel like?
Social media has a reputation for being emotionally chaotic. Some days it can be fulfilling, and other days it can be a box of horrors. If we’re going to succeed in our mission of creating social media that operates as a sustainable public commons, these tools need to not just be good in theory, but actually help create a better social space than what has come before. Our goal for investing in moderation tooling and processes is to create a better public commons for conversations to take place in. Perhaps it can even be a happy, healthy, and predominantly enjoyable space.
Here’s some of what we think it takes to create a great experience:
A great experience should be simple to use. It shouldn’t be overly complex, and there should be sensible defaults and well-run entry points. If things are going well, the average user shouldn’t have to notice what parts are decentralized, or how many layers have come together to determine what they see. However, if conflict arises, there should be easy levers for individuals and communities to pull so that they can reconfigure their experience.
A great experience should recognize that toxicity is not driven only by bad actors. Good intentions can create runaway social behaviors that then create needless conflict. The network should include ways to downregulate behaviors – not just amplify them.
A great experience should respect the burden that community management can place on people. Someone who sets out to help protect others can quickly find themselves responsible for a number of difficult choices. The tooling that’s provided should take into account ways to help avoid burnout.
A great experience should strike a balance between creating friendly spaces and over-policing one another. The impulse to protect can sometimes degrade into nitpicking. We should drive towards norms that feel natural and easy to observe.
A great experience should reflect the diversity of views within the network. Decisions that are subjective should be configurable. Moderation should not force the network into a monoculture.
Finally, a great experience should remember that social networking can be pleasant one day and harsh the next. There should be ways to react to sudden events or shifts in your mood. Sometimes you need a way to be online but not be 100% available.
What’s our development process?
During the private beta, we’ve been actively responding to user feedback. For example, we recently introduced an optional “context” field that lets people add a few lines of text to a report, so that moderators have the context they need to make a decision. With these upcoming proposals, we’re looking for feedback ahead of time so we can get to the best outcomes quickly and prepare for opening up the network.
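As a rough illustration, here is a minimal sketch of how a client might submit a report carrying that optional free-text context alongside a structured reason type. Treat the XRPC endpoint and field names below as an approximation for illustration rather than as reference documentation.

```typescript
// Minimal sketch: file a moderation report that includes the optional
// free-text context alongside a structured reason type.
// The endpoint and field names approximate the AT Protocol's
// moderation-report lexicon and are shown here for illustration only.
async function reportPost(
  serviceUrl: string,
  accessJwt: string,
  postUri: string,
  postCid: string,
  context: string,
): Promise<void> {
  const res = await fetch(`${serviceUrl}/xrpc/com.atproto.moderation.createReport`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${accessJwt}`,
    },
    body: JSON.stringify({
      reasonType: 'com.atproto.moderation.defs#reasonRude',
      reason: context, // the few lines of free text shown to moderators
      subject: {
        $type: 'com.atproto.repo.strongRef',
        uri: postUri,
        cid: postCid,
      },
    }),
  });
  if (!res.ok) {
    throw new Error(`report failed: ${res.status}`);
  }
}
```

The structured reason type lets tooling triage reports, while the free-text field gives human moderators the extra context described above.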
The proposals we’re publishing have been in the works for a while; we focused on moderation and safety from the earliest moments of developing Bluesky and the AT Protocol, even if they didn’t always appear to be front and center. We’ve been fleshing out our ideas over the last few months through the practical experience of engaging with users during the Bluesky app’s beta. In designing these, we researched existing tooling implemented on other sites, talked with users from different communities on the app, consulted with trust & safety advisors, and above all, constantly asked ourselves what tools would reduce harm, protect people, and decrease the likelihood of bad outcomes.
Decisions such as who has the right to speak or moderate an online space are intrinsically controversial. And it isn’t just about platforms making moderation decisions. For example, the ability to moderate replies to your own threads introduces complex tradeoffs around safety, counterspeech, and accountability. Replies are an area where some of the nastiest forms of abuse can happen, so we felt that per-thread moderation was a good idea. But there are downsides, too: for example, if a thread contains misinformation, giving reply controls to the author means they might use them to suppress corrections from other users. Our hypothesis, reflected in the moderation proposals we’re sharing, is that giving users more tools to protect themselves from harassment is worth some downsides, like not always having the record corrected in the replies.
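To make this concrete, here is a hypothetical sketch of the kind of check a per-thread reply control could perform. The rule names and data shapes are invented for illustration; the actual proposal defines its own mechanism.

```typescript
// Hypothetical reply-control rules an author might attach to a thread.
// The rule names and shapes are invented for illustration.
type ReplyRule =
  | { type: 'mentioned' }                       // accounts @-mentioned in the root post
  | { type: 'following' }                       // accounts the thread author follows
  | { type: 'list'; listMembers: Set<string> }; // accounts on a chosen user list

interface ThreadContext {
  authorDid: string;
  mentionedDids: Set<string>;
  authorFollows: Set<string>;
}

// A reply is allowed if the replier is the thread author or satisfies at
// least one rule. In this sketch, an empty rule set means author-only replies.
function isReplyAllowed(
  replierDid: string,
  thread: ThreadContext,
  rules: ReplyRule[],
): boolean {
  if (replierDid === thread.authorDid) return true;
  return rules.some((rule) => {
    switch (rule.type) {
      case 'mentioned':
        return thread.mentionedDids.has(replierDid);
      case 'following':
        return thread.authorFollows.has(replierDid);
      case 'list':
        return rule.listMembers.has(replierDid);
    }
  });
}
```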
We’re presenting proposals for community discussion and developer feedback to help develop a rough consensus around these approaches — these tools won’t be effective unless people actually use them. They’re a set of tools that are meant to overlap, work together, and provide the building blocks for new ideas.
Updated Community Guidelines
In addition to working on moderation tooling, we’ve also been developing new policies and processes to moderate the services we run. As we explain in the community guidelines:
Bluesky Social is one service built on top of the AT Protocol. While we won’t always be able to control everything that happens on other services that use the AT Protocol, we take our responsibility to the users of Bluesky Social seriously. Our community guidelines are designed to promote a safe and enjoyable experience. To achieve this goal, we moderate the content on Bluesky Social, and you can also adjust content filters within the app according to your preferences.
The servers themselves are an important site of moderation, and we want our policies to provide a safe and enjoyable experience for users who enter the network through our services. If the moderation tooling we’re building is like the building blocks and foundations of a city, the norms and policies we’re setting are how we’re actually going to run the city.
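The guidelines excerpt above also mentions adjustable in-app content filters. As a rough sketch, assuming posts can carry moderation labels and users set a per-label visibility preference, the client-side decision might look something like this (label names and preference values are illustrative):

```typescript
// Illustrative client-side content filtering based on labels.
// Label names and the show/warn/hide preference values are assumptions
// about how an in-app filter could work, not a spec.
type Visibility = 'show' | 'warn' | 'hide';

interface LabeledPost {
  uri: string;
  labels: string[]; // e.g. ['graphic-violence']
}

type LabelPrefs = Record<string, Visibility>;

// Return the strictest preference among a post's labels, defaulting to 'show'.
function resolveVisibility(post: LabeledPost, prefs: LabelPrefs): Visibility {
  let result: Visibility = 'show';
  for (const label of post.labels) {
    const pref = prefs[label] ?? 'show';
    if (pref === 'hide') return 'hide';
    if (pref === 'warn') result = 'warn';
  }
  return result;
}
```

In this sketch, a “warn” result might put the post behind a click-through while “hide” drops it from the feed entirely; moderation under the community guidelines applies independently of these per-user settings.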
For the Bluesky servers we run (that is, the Bluesky app you’ve been using up to this point), we’re publishing updated Community Guidelines next week, linked from our Terms of Service, which include the following:
- Treat others with respect. For example, no:
  - Threats of violence or physical harm, or content that encourages, promotes, or glorifies violence against people, groups, or animals
  - Repeated harassment or abuse directed at a specific person or group
  - Encouraging self-harm or suicide
  - Promoting hate or extremist conduct that targets people or groups based on their race, gender, religion, ethnicity, nationality, disability, or sexual orientation
  - Depictions of excessive violence, torture, dismemberment, or non-consensual sexual activity
  - Misleading impersonation of other individuals, organizations, or entities
The actions that we take to enforce these guidelines will include removing individual posts and suspending or removing users from our services.
In the future, other services built on the AT Protocol might choose to adopt their own community guidelines that differ from ours. In a federated model, each server has discretion over what it chooses to serve and whom it chooses to connect to. If you disagree with a server’s moderation policies, you can take your account, social connections, and data to another service.
Proposals
We’ve created a new GitHub repository to host some of these proposals and streamline discussion; you can find the links to them there.
These proposals, and our implementation of them, are not the end of what is possible in an open network — they are part of a process of development that will go through multiple iterations. Other companies or communities can also help set norms and build tooling, but we are investing in a best-effort first pass at creating a safe, pleasant, resilient public social commons. Your feedback, both in the app and through GitHub, is appreciated. Thanks for joining us on this journey.