Bluesky is a buzzy new invite-only Twitter lookalike that was supposed to provide a much-needed reprieve from the toxic social media ecosystem. But by the time I joined Bluesky in early May, I was wondering if the party was over. For the uninitiated, Bluesky started out as a decentralized social media experiment at Twitter in 2019 before spinning out on its own last year, with former Twitter CEO Jack Dorsey joining as a board member. (By "decentralized," the company means it creates open-source protocols for building social apps, of which Bluesky Social is one.) In recent months, following Elon Musk's Twitter takeover, the site has become a playground for media, political, and technology figures who think they can usher a new platform into the mainstream. Outlets like Wired and Rolling Stone have highlighted the app as a comfortable alternative to Twitter. It started out as a playful, goofy, free-spirited environment, much like early Twitter, but with posts called "skeets" ranging from quaint blue-sky photos to nudity.
However, an exchange between Bluesky CEO Jay Graber and Bluesky users suggests that the app has yet to really grapple with the difficult content moderation issues that have disrupted other social media platforms. It all started when multiple users requested that a user with the handle @commie.cafe be removed. The user allegedly engaged in harmful behavior, including calling trans women names, harassing them, and divulging their personal information. In response to user concerns, Graber wrote that "blocks prevent interaction." That response led many to question why the company doesn't take user concerns seriously and act more proactively against users accused of harmful behavior on other platforms.
“How many people need to be directly alerted to the presence of a dangerous and harmful person before you stop watching and take action?” one user replied to Graber.
"A lot of people are scared and worried here, especially considering how poorly Twitter has handled this problem for years. Don't be Twitter, be better," another user replied to Graber. The user accused of transphobic behavior appears to no longer be on the platform. Bluesky did not respond to requests for comment on the incident, nor did it respond to questions about whether the company had taken any action against the user.
The episode not only provided a glimpse into Bluesky's approach to content moderation, but also raised questions about whether the company will take steps to maintain its good vibes. The nine-person team is building a platform with a waiting list of 1.9 million email addresses from people hoping to join the more than 72,000 users in the app's invite-only beta. But as users flock to the app for its potential as a Twitter alternative, some early adopters wonder whether the platform can remain a welcome relief from harassment, hate speech, and graphic content, or whether it will end up repeating its predecessor's mistakes.
The company has remained quiet about its plans to address these issues, beyond the details posted on its FAQ page. I sent a company spokesperson a list of questions about whether the company prioritizes users from marginalized backgrounds, how it enforces its content moderation policies, and what investments it is making in content moderation. The spokesperson said everyone is "keeping their heads down and getting to work" and would not be available for interviews.
Bluesky says on its site that it plans to use automatic filtering, manual admin actions, and community labeling to moderate the platform. In addition to basic filtering for objectionable content, the company wants to allow users and developers to add additional filters and other moderation controls. In a separate post, Graber said developers running their own servers will be able to set their own content moderation policies at the server and community level, calling it a system that gives people more direct control. The company did not say whether it plans to hire more human moderators or take additional steps to protect users from marginalized communities, especially as its user base grows.
Twitter, Instagram, Facebook, and other prominent social media platforms made the mistake of underestimating the extent to which dangerous rhetoric online can lead to harm offline, said Yoel Roth, the former head of trust and safety at Twitter and a technology policy researcher at the University of California, Berkeley. And while a fully localized approach isn't practical as content moderation expands overseas, Roth said he expects the next generation of social platforms to take seriously what did and didn't work for their veteran predecessors. "One of the promises of federated platforms like Bluesky is that they can give people more choice about what's allowed and what's not allowed," Roth said, referring to Bluesky's idea of letting clients operate independently of the platform itself. "But you still have to draw the line somewhere for the things that aren't okay anywhere. That's the battleground of content moderation."
When it comes to AI assisting with some content moderation functions, Milo Dietrich, a senior researcher at the Center for Monitoring, Analysis and Strategy, said the technology cannot be trusted to work on its own at scale, as other social platforms have learned. Roth agreed, saying that if Bluesky is using AI as part of content moderation, the company should test those tools before building an entire moderation strategy around them. Allowing developers to create their own interfaces and set their own boundaries for content could also lead to unintended consequences. For example, if a user posts nonconsensual sexual images, those posts may be de-indexed and no longer viewable by Bluesky users, but the images would still reside on someone's personal server, remaining available and potentially spreading across the internet. It is unclear whether that is an ethically or legally sufficient solution, said Sol Messing, an associate professor at New York University's Center for Social Media and Politics and a former data science leader at Twitter.