
Clubhouse Working On 'Report Rooms' Feature, But Will It Solve Moderation Issues?

Clubhouse is working on a new safety feature that will allow users to report a Room if the listed topic is found to be violating community guidelines. The platform already has a set of rules in place that advise users to refrain from conversations that are hateful, discriminatory, or abusive in nature. To enforce them, Clubhouse offers a host of user-facing tools, such as the ability to mute, block, or kick out miscreants from an ongoing live audio session.

Users can already report an audience member in real time while a conversation is in progress, and there's an option to report a past incident after the room has ended. However, the situation becomes trickier if a room has been created solely to spread misinformation or hateful content and is populated with like-minded people who are unlikely to report any attendee for violating the rules. If that room happens to be private, the chances of discovering and reporting the unhealthy Clubhouse conversations are even lower.

Related: How Clubhouse Created An Audio Craze & What The Future Holds

In the past few months, multiple incidents of Clubhouse chats discussing COVID-19 conspiracies and anti-Semitic discourse have been reported, prompting digital activists to demand more robust tools for reporting such Rooms and conversations. It now looks like a solution might be in the works. Jane Manchun Wong recently shared screenshots of an in-development Clubhouse feature that lets users report a Room if its listed topic appears objectionable. Based on the screenshots, Clubhouse users will soon have two options when they come across a problematic Room: hide it or report it. If they choose to report a Room, they will be asked to specify a reason, which could be anything from harassment and incitement to violence to hate speech.

On the surface, the ability to report Rooms appears to be a meaningful addition, and if handled well, it could go a long way toward keeping the platform safe from the content issues that have haunted the likes of Facebook and Twitter. However, if the name of a Room does not indicate its true purpose, the upcoming feature is likely to be rendered almost useless. Furthermore, the ability to report Rooms will not solve all of Clubhouse's content moderation problems, especially the challenges that come with moderating audio in real time. There is also a risk that a targeted 'reporting attack' with a political or ideological agenda could be launched against a Room, sabotaging a seemingly productive conversation before it even starts. If that sounds familiar, that's because a similar online 'review bombing' phenomenon is very real and has become a huge problem for publishers and content creators lately. Lastly, the feature is still under development, which means there's a possibility that reporting a Room never makes it out of the experimental stage.

Clubhouse says its team promptly investigates reported incidents, going as far as temporarily retaining the audio recording to identify problematic content before taking appropriate punitive action. However, that's not always enough, primarily because audio is an altogether different medium from text, pictures, and video. For example, platforms often put systems in place that block problematic hashtags or automatically flag abusive or harmful words. Similarly, machine learning models can be trained to identify violent or explicit visual content, and they are already in use across multiple platforms.
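To make that idea concrete, here is a minimal, hypothetical sketch of the kind of blocklist check used for text posts. The terms, function names, and matching logic are illustrative only and do not reflect how Clubhouse or any specific platform actually implements this.

```python
# Toy illustration of a text blocklist check; not any platform's real system.
import re

# Hypothetical blocklist. Real systems rely on large, curated, multilingual lists
# plus trained classifiers rather than exact string matching.
BLOCKED_TERMS = {"#examplebannedtag", "exampleslur"}

def flag_text(post: str) -> list:
    """Return any blocked terms found in a post, using a naive lowercase tokenizer."""
    tokens = re.findall(r"#?\w+", post.lower())
    return [t for t in tokens if t in BLOCKED_TERMS]

if __name__ == "__main__":
    hits = flag_text("Check out #ExampleBannedTag tonight!")
    print(hits)  # ['#examplebannedtag'] -> the post would be routed to review
```

Even this crude approach works for text because the content arrives as searchable strings; live audio offers no such shortcut.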

With audio, the first step is building AI-assisted systems with natural language processing capabilities that can flag harmful words or phrases in real time, and do so across multiple languages and varied accents. Once that system is up and running, a team of human moderators that can act swiftly on machine-generated reports becomes a necessity. However, the technology for identifying and reporting harmful audio content is not as mature as that used for text and images. On top of this, Clubhouse would also need to scale fast enough, both in terms of technology and manpower, to handle the issue before the platform gets overwhelmed with controversial content. All of that is easier said than done.
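A rough sketch of the pipeline described above might look like the following. This is purely an assumption-laden illustration: transcribe_chunk() stands in for a real streaming speech-to-text service, score_transcript() stands in for a trained multilingual classifier, and none of the names correspond to Clubhouse's actual systems.

```python
# Hypothetical real-time audio moderation pipeline: transcribe a short audio chunk,
# score the transcript, and queue anything suspicious for human review.
from dataclasses import dataclass
from queue import Queue
from typing import Optional

@dataclass
class ModerationAlert:
    room_id: str
    transcript: str
    reason: str

review_queue = Queue()  # consumed by human moderators, who make the final call

def transcribe_chunk(audio_chunk: bytes) -> str:
    # Placeholder: a production system would call a streaming speech-to-text model here.
    return "this chunk happens to contain an example harmful phrase"

def score_transcript(text: str) -> Optional[str]:
    # Placeholder: a trained NLP model would return a policy label with a confidence score.
    harmful_phrases = {"example harmful phrase"}
    return "hate_speech" if any(p in text.lower() for p in harmful_phrases) else None

def moderate_chunk(room_id: str, audio_chunk: bytes) -> None:
    transcript = transcribe_chunk(audio_chunk)
    reason = score_transcript(transcript)
    if reason:
        # Machine-generated report only; a human moderator reviews before any action.
        review_queue.put(ModerationAlert(room_id, transcript, reason))

if __name__ == "__main__":
    moderate_chunk("room-123", b"\x00" * 16000)  # fake audio chunk
    print(review_queue.qsize())  # 1, because the placeholder transcript was flagged
```

The hard parts are exactly the pieces stubbed out here: accurate streaming transcription across languages and accents, and a classifier reliable enough that human moderators aren't buried in false alarms.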

Yes, investors have recently poured billions of dollars into the company to help it develop the necessary infrastructure and live up to its hyped potential. At the same time, there's no dearth of startup pundits worried that Clubhouse's aspirations of becoming 'the next social media phenomenon' are following a predictable path and will soon come crashing down. Before those lofty business ambitions can be realized, Clubhouse needs to demonstrate that it can handle misinformation and hateful conduct better than the bigger sharks in the social media ocean. All of that has to happen while fending off increased competition from the likes of Twitter Spaces, which seems to be evolving at a breakneck pace. Clubhouse is adding new users at an impressive rate, especially in markets like India, which is definitely good news for stakeholders. However, Clubhouse needs to match that growth with an equally efficient content moderation system.

Next: Is Clubhouse Already Failing?

Source: Jane Manchun Wong/Twitter



