Anti-Social Media: Designing for Accountability


One of the key issues facing social media today is the prevalence of antisocial behavior. It degrades the user experience, and in many cases it is the site’s own design that unintentionally encourages it.

What exactly do we mean by antisocial behavior? Three design failures exemplify the issue:

  1. Dissuading users from connecting with people different from them
  2. Shielding users from disparate viewpoints
  3. Failing to prevent toxic in-groups and the spread of harassment

The problem is that most social media platforms currently operate using these strategies:

  1. They suggest friends/connections based on who you already know (a friends-of-friends heuristic; see the sketch after this list).
  2. They suggest content based on the content you already consume.
  3. They ban users individually based on reported transgressions, using either an algorithm or a group of moderators.
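
As a concrete illustration of the first strategy, here is a minimal friends-of-friends recommender of the kind these suggestion systems typically rest on. It is a sketch under assumed data structures (`friends` maps each user to their friend set), not any platform’s actual code:

```python
from collections import Counter


def suggest_friends(user: str, friends: dict[str, set[str]], k: int = 5) -> list[str]:
    """Rank non-friends by how many mutual friends they share with `user`.

    Because this heuristic only ever looks at the existing network,
    every suggestion it makes reinforces that network.
    """
    user_friends = friends.get(user, set())
    counts: Counter[str] = Counter()
    for friend in user_friends:
        for friend_of_friend in friends.get(friend, set()):
            if friend_of_friend != user and friend_of_friend not in user_friends:
                counts[friend_of_friend] += 1
    return [candidate for candidate, _ in counts.most_common(k)]
```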

These strategies have produced the following unintended outcomes.

  1. The creation of entrenched “in-groups” and “out-groups,” which are often pitted against one another.
  2. The formation and strengthening of social media “bubbles,” which keep users from encountering viewpoints that challenge their beliefs.
  3. The reinforcement of an us-vs.-them mentality toward authority figures, especially among those who have been censured by moderators.

How can we remedy this?

Clearly, the current methods of curbing antisocial behavior are either failing or producing unwanted side effects. This is because the communities these platforms create lack accountability.

The appearance of accountability is there: users who break community rules are banned or suspended. But this does not force them to change how they act toward the out-group. Instead, the out-group (and the community as a whole) should be able to see and rate users based on the content they produce. Three design changes make this possible:

  1. Create tracking metrics for each user that represent how helpful their comments and content are to the community at large.
  2. Make these metrics visible to the community, so that every action a user takes can be seen by both the in-group and the out-group.
  3. Use these metrics to detect when in-groups and out-groups are forming (a minimal sketch follows this list).
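
To make this concrete, here is what such a metric and a crude group detector might look like. Everything here is a hypothetical illustration (the `Rating` record, the +1/-1 rating scale, and the clustering-by-mutual-upvotes heuristic), not any platform’s actual system:

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Rating:
    rater_id: str    # who rated the content
    author_id: str   # who produced the content
    value: int       # +1 for helpful, -1 for unhelpful


def helpfulness_scores(ratings: list[Rating]) -> dict[str, float]:
    """Aggregate ratings into a public, per-author score in [-1, 1]."""
    by_author: dict[str, list[int]] = defaultdict(list)
    for r in ratings:
        by_author[r.author_id].append(r.value)
    # A real system would also weight by rater reputation and decay old ratings.
    return {author: sum(vals) / len(vals) for author, vals in by_author.items()}


def detect_groups(ratings: list[Rating]) -> list[set[str]]:
    """Crude in-group detection: users who uprate each other land in the same
    cluster (connected components over positive-rating edges)."""
    adjacency: dict[str, set[str]] = defaultdict(set)
    for r in ratings:
        if r.value > 0:
            adjacency[r.rater_id].add(r.author_id)
            adjacency[r.author_id].add(r.rater_id)
    groups: list[set[str]] = []
    seen: set[str] = set()
    for user in list(adjacency):
        if user in seen:
            continue
        group: set[str] = set()
        stack = [user]
        while stack:
            current = stack.pop()
            if current in group:
                continue
            group.add(current)
            stack.extend(adjacency[current] - group)
        seen |= group
        groups.append(group)
    return groups
```

A production system would of course need rater-reputation weighting, time decay, and a real community-detection algorithm; the point is only that the raw signals already exist.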

Even with these changes in place, site designers can take a more proactive approach to combating the in-group/out-group mentality. For example, the site can suggest connections to people who are “one demographic off,” so to speak: someone with the same job but from a different generation, or someone who shares your political views but lives in a different country. Most of these signals already exist; they are what social media platforms use to suggest similar people to you today.
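
A “one demographic off” matcher is easy to sketch. The `Profile` fields below are hypothetical stand-ins for whatever demographic signals a platform actually tracks:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Profile:
    user_id: str
    job: str
    generation: str
    country: str
    politics: str


def one_off_candidates(user: Profile, pool: list[Profile]) -> list[Profile]:
    """Return candidates who match `user` on every demographic field
    except exactly one ("one demographic off")."""
    fields = ("job", "generation", "country", "politics")
    matches = []
    for candidate in pool:
        if candidate.user_id == user.user_id:
            continue
        mismatches = sum(getattr(candidate, f) != getattr(user, f) for f in fields)
        if mismatches == 1:
            matches.append(candidate)
    return matches
```

The comparison a platform already runs to find people who match on every field just needs its threshold relaxed from zero mismatches to exactly one.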

The site can also promote content that, while created by a different in-group, has been shown to be popular with the community as a whole. For example, perhaps a post made and shared heavily in the tech community could be promoted to those working in other fields, if non-tech users react positively to it.
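
A minimal version of that promotion rule, assuming each reaction is already tagged with the reacting user’s group, might look like this (the function name, the 70% threshold, and the minimum sample size are all illustrative):

```python
def should_promote_outside(home_group: str,
                           reactions_by_group: dict[str, list[int]],
                           min_outside: int = 20,
                           threshold: float = 0.7) -> bool:
    """Promote a post beyond its home group once enough out-group readers
    have reacted and most of their reactions are positive.

    `reactions_by_group` maps a reader's group to their reaction values
    (+1 positive, -1 negative).
    """
    outside = [value
               for group, values in reactions_by_group.items()
               if group != home_group
               for value in values]
    if len(outside) < min_outside:
        return False  # not enough out-group signal yet
    positive = sum(1 for value in outside if value > 0)
    return positive / len(outside) >= threshold
```

Under these assumed numbers, a post from the tech community would be promoted to non-tech users once at least 20 of them have reacted and at least 70% of those reactions are positive.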

For some social media platforms, it’s largely about the bottom line, and these changes may seem costly and time-consuming to enact. What does the company gain? For one thing, the public relations boost from these changes would be enormous. That good PR would attract a wider user base, especially among minority and marginalized groups, since the likelihood of toxic communities and harassment would be lower.

Ultimately, these changes can result in a more diverse, pleasant, and expansive social media community. And isn’t that what social media is all about?


Image credit: http://sloanreview.mit.edu/content/uploads/2013/03/cfos-anti-social-tendencies-may-be-changing-1000.jpg