GLAAD RELEASES SECOND ANNUAL SOCIAL MEDIA SAFETY INDEX: ALL PLATFORMS RECEIVE FAILING GRADES ON LGBTQ SAFETY

July 13, 2022

New Platform Scorecard evaluates LGBTQ safety, privacy, and expression on Facebook, Instagram, Twitter, TikTok, and YouTube; all platforms receive scores under 50 out of 100

40% of LGBTQ adults and 49% of transgender and nonbinary people do not feel welcomed and safe on social media

GLAAD’s 2022 Social Media Safety Index advisory committee includes representatives from Stanford, Harvard Law School, and Media Matters for America, as well as Nobel Prize laureate Maria Ressa and author and activist ALOK.

July 13, 2022 – GLAAD, the world’s largest lesbian, gay, bisexual, transgender, and queer (LGBTQ) media advocacy organization, today announced the findings of its second annual Social Media Safety Index (SMSI), a report on LGBTQ user safety across five major social media platforms: Facebook, Instagram, Twitter, YouTube, and TikTok.

For the full report: http://glaad.org/smsi

The 2022 SMSI introduces a Platform Scorecard developed by GLAAD in partnership with Ranking Digital Rights and Goodwin Simon Strategic Research. The Platform Scorecard uses twelve LGBTQ-specific indicators to generate numeric ratings of each platform’s record on LGBTQ safety, privacy, and expression. A listing of the indicators appears below. After reviewing the platforms on measures such as explicit protections from hate and harassment for LGBTQ users, gender pronoun options on profiles, and prohibitions on advertising that could be harmful or discriminatory to LGBTQ people, all five platforms scored under 50 out of a possible 100:

  • Instagram: 48%
  • Facebook: 46%
  • Twitter: 45%
  • YouTube: 45%
  • TikTok: 43%

Detailed scores and a full list of Platform Scorecard indicators are available in the full report. Indicators include:

  • The company should disclose a policy commitment to protect LGBTQ users from harm, discrimination, harassment, and hate on the platform.
  • The company should disclose an option for users to add pronouns to user profiles.
  • The company should disclose a policy that expressly prohibits targeted deadnaming and misgendering of other users.
  • The company should clearly disclose what options users have to control the company’s collection, inference, and use of information related to their sexual orientation and gender identity.
  • The company should disclose training for content moderators, including those employed by contractors, that trains them on the needs of vulnerable users, including LGBTQ users.

“Today’s political and cultural landscapes demonstrate the real-life harmful effects of anti-LGBTQ rhetoric and misinformation online,” said GLAAD President and CEO Sarah Kate Ellis. “The hate and harassment, as well as misinformation and flat-out lies about LGBTQ people, that go viral on social media are creating real-world dangers, from legislation that harms our community to the recent threats of violence at Pride gatherings. Social media platforms are active participants in the rise of an anti-LGBTQ cultural climate, and their only response should be to urgently create safer products and policies, and then enforce those policies.”

GLAAD also released new data from a May 2022 study conducted with Community Marketing & Insights. 84% of LGBTQ adults agree there are not enough protections on social media to prevent discrimination, harassment, or disinformation. 40% of all LGBTQ adults, and 49% of transgender and nonbinary people, do not feel welcomed and safe on social media. Additionally, the newly released 2022 ADL Online Hate and Harassment report found that 66% of LGBTQ users experienced harassment online, with 54% of LGBTQ users reporting severe harassment including sustained harassment, stalking, or doxxing.

The Social Media Safety Index (SMSI) was created with support from Craig Newmark Philanthropies, the Gill Foundation, and Logitech. In addition to the Platform Scorecard, GLAAD’s SMSI provides specific recommendations to each platform to improve LGBTQ safety. Additional trends reported in the SMSI include:

  • Anti-LGBTQ rhetoric on social media translates into real-world harm, including increased reported levels of severe harassment of LGBTQ users compared to 2021.
  • The problem of anti-LGBTQ hate speech and misinformation continues to be a public health and safety issue. Viral misinformation and inaccuracies have been cited as drivers of many of the nearly 250 anti-LGBTQ bills introduced in states around the country this year. Platforms are largely meeting this dangerous misinformation with inaction and often do not enforce their own policies regarding such content.
  • Issues such as the promotion of so-called “conversion therapy,” targeted misgendering and deadnaming, and a lack of true transparency reporting persist on several platforms. Only some platforms prohibit targeted misgendering and the promotion of conversion therapy; these actions need to be prohibited across the industry.
  • Companies possess the tools they need to effectively curb anti-LGBTQ hate and rhetoric but instead are prioritizing profit over LGBTQ safety and lives.

Recommendations across platforms include:

  • Improve the design of algorithms that currently circulate and amplify harmful content, extremism, and hate.
  • Train moderators to understand the needs of LGBTQ users, and to moderate across all languages, cultural contexts, and regions.
  • Be transparent with regard to content moderation, community guidelines and terms of service policy implementation, and algorithm designs.
  • Strengthen and enforce existing community guidelines and terms of service that protect LGBTQ people and others.
  • Respect data privacy, especially where LGBTQ people are vulnerable to serious harms and violence. This includes ceasing the practice of targeted surveillance advertising, in which companies use powerful algorithms to recommend content to users in order to maximize profit.

The May 2021 inaugural edition of the Index was the first, and remains the only, tech-industry baseline of LGBTQ user safety. Over the past year, GLAAD has worked with platforms and applauded major achievements within the tech accountability space, including TikTok’s March 2022 amendment to its community guidelines, which added an explicit prohibition against targeted misgendering and deadnaming, per the 2021 SMSI’s recommendation. As noted in this year’s SMSI, no such prohibition exists on Facebook, Instagram, or YouTube.

Congressional hearings, alarming research findings of the spread of misinformation, and massive media coverage have laid bare the urgent need for independent regulatory oversight of these companies — with virtually universal agreement about the need for industry-wide transparency and accountability. The GLAAD SMSI adds LGBTQ recommendations to this necessary and urgent dialogue.

To create the Social Media Safety Index, GLAAD convened an Advisory Committee of thought leaders to advise on industry and platform-specific recommendations in the Index. Committee members include ALOK, author, performer, and media personality; Lucy Bernholz, Ph.D., Director, Digital Civil Society Lab at Stanford University; Alejandra Caraballo, Esq., Clinical Instructor, Cyberlaw Clinic, Berkman Klein Center for Internet & Society at Harvard Law School; Jelani Drew-Davi, Director of Campaigns, Kairos; Liz Fong-Jones, Principal Developer Advocate for SRE & Observability, Honeycomb; Evan Greer, Director, Fight for the Future; Leigh Honeywell, CEO and Co-Founder, Tall Poppy; Maria Ressa, Journalist & CEO, Rappler; Tom Rielly, Founder, TED Fellows program, Digital Queers, and PlanetOut.com; Brennan Suen, Deputy Director of External Affairs, Media Matters for America; and Kara Swisher, contributing writer and host of the Sway podcast at The New York Times.

“All platforms should follow the lead of TikTok and Twitter and should immediately incorporate an explicit prohibition against targeted misgendering and deadnaming of transgender and non-binary people into hateful conduct policies,” said GLAAD’s Senior Director of Social Media Safety, Jenni Olson. “This recommendation remains an especially high priority in our current landscape where anti-trans rhetoric and attacks are so prevalent, vicious, and harmful. We also urge these companies to effectively moderate such content and to enforce these policies.”

Prior to today’s release, GLAAD held briefings with each platform named in the Social Media Safety Index to review issues that LGBTQ users face and the recommendations described in the report. Through a series of presentations at conferences and events, GLAAD will continue an ongoing dialogue about LGBTQ safety among tech industry leaders throughout 2022 and beyond. GLAAD will also spotlight new and existing safety issues facing LGBTQ users in real time, both to the platforms and to the press and public.

###