Alphabet and GLAAD Are Using AI to Create An Inclusive Space for LGBTQ+ People

Sometimes videos go viral. The person who uploads a video to, say, YouTube doesn't even need much of a following for it to garner enough attention to be shared over and over and to attract more comments, positive and negative, than any one person could sift through. Last May, this happened to the Gay and Lesbian Alliance Against Defamation, better known as GLAAD.

The organization posted a video of actress Debra Messing accepting GLAAD's Excellence in Media Award for her work advocating for equality in the film and television industry. In her speech, Messing called on the Trump administration to "do right" by the LGBTQ+ community by removing certain staff members and focusing on laws that reflect equality. After GLAAD posted the video to its YouTube channel, it was flooded with negative comments about Messing and her speech, part of a coordinated effort to "attack" the video with "vile hate speech," said Jim Halloran, GLAAD's chief digital officer.

The experience made such an impression on GLAAD's board that the organization resolved to do something about how LGBTQ+ content is treated online. This week, GLAAD announced a partnership with Google's parent company, Alphabet, to better understand how artificial intelligence interprets that content.

AI Has Learned Hatred

Because minority-related content – content by or about women, people of color, and the LGBTQ+ community – tends to draw more negative attention than content seen as mainstream or "socially acceptable," artificial intelligence systems trained on that attention have begun to categorize the content itself as negative and to deem words and phrases associated with these groups "bad."
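To make the failure mode concrete, here is a minimal sketch in Python, using a handful of made-up comments rather than any real platform data. Because the identity term co-occurs mostly with abusive examples in this toy training set, a naive classifier learns the word itself as a toxicity signal.

```python
# Illustrative only: synthetic comments, not any platform's real training data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

comments = [
    "you are all disgusting trash",        # abusive
    "gay people are disgusting",           # abusive, identity term present
    "being gay is wrong, get help",        # abusive, identity term present
    "great video, thanks for sharing",     # benign
    "loved this interview, so inspiring",  # benign
    "proud to be gay and happy",           # benign, identity term present
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = flagged as abusive by human raters

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(comments)
model = LogisticRegression().fit(X, labels)

# Because "gay" appears in two of the three abusive examples, the model
# typically assigns the word itself a positive ("more toxic") weight,
# even though the word is neutral on its own.
weight = model.coef_[0][vectorizer.vocabulary_["gay"]]
print(f"learned weight for 'gay': {weight:+.3f}")
```

The particular classifier is beside the point: any model trained on labels in which identity terms correlate with abuse will absorb the same association unless the training data is rebalanced.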

“The internet is such a vital resource for the LGBTQ+ community, especially for young people finding connection,” Halloran said. “That lifeline is under attack.”

In March 2017, YouTube came under fire for censoring LGBTQ+ content. Videos were being filtered into Restricted Mode even though they contained nothing that breached YouTube's guidelines (drugs and alcohol, sex, adult content, violence, and so on).


“After a thorough investigation, we started making several improvements to Restricted Mode. On the engineering side, we fixed an issue that was incorrectly filtering videos for this feature, and now 12 million additional videos of all types — including hundreds of thousands featuring LGBTQ+ content — are available in Restricted Mode,” the company stated in a blog post.

The issue was eventually fixed, but not before the real reason it existed came to light. YouTube's own algorithm had learned to classify this content as negative by tracking trends in the comments that such videos received from users. In effect, it had learned to hate.

Alphabet and GLAAD Take On AI with ‘Jigsaw’

GLAAD has opted to work with Jigsaw, a division of Alphabet, to help the parent company train its AI systems to make these judgments correctly. The organization will teach the algorithm which phrases are acceptable to the LGBTQ+ community and which are considered derogatory. The partners see this as the most effective way to keep the web safe for users who identify as LGBTQ+ while preserving their access to information on sites like YouTube.
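The article doesn't name the specific Jigsaw system being retrained, but Jigsaw's publicly documented comment-scoring service is the Perspective API, so a sketch of scoring a comment with it gives a sense of what such a pipeline consumes. The API key below is a placeholder; real use requires a Google Cloud key with the API enabled.

```python
# Sketch of scoring a comment with Jigsaw's Perspective API.
# Assumption: Perspective is the relevant Jigsaw system; the article
# does not name it. API_KEY is a placeholder.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

payload = {
    "comment": {"text": "Proud to be gay and happy."},
    "languages": ["en"],
    "requestedAttributes": {"TOXICITY": {}},
}

response = requests.post(URL, json=payload)
response.raise_for_status()
score = response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"toxicity: {score:.2f}")  # 0.0 (likely benign) to 1.0 (likely toxic)
```

Early versions of Perspective were widely reported to score plainly benign statements of identity, such as "I am gay," as toxic – exactly the learned association this partnership aims to unlearn.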

Source: CNN Money

Once Jigsaw has a better data set of positive LGBTQ+ content, its models will be able to make a considered judgment about which content to surface, rather than suppressing everything related to the LGBTQ+ community that might attract negative comments. Halloran and his team hope this will also help the video platform showcase LGBTQ+ people in a more positive light.
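What that surfacing decision might look like is sketched below; everything here is hypothetical, with score_toxicity standing in for a trained model such as the ones sketched above. The key design choice is to score the video's own content and each comment separately, so a hostile comment section no longer drags the video itself into Restricted Mode.

```python
# Hypothetical sketch: judge the video on its own content and moderate
# hostile comments individually, instead of restricting the video because
# its comment section turned toxic.
THRESHOLD = 0.8  # assumed cutoff; a real system would tune this per surface

def moderate(video_description: str, comments: list[str], score_toxicity) -> dict:
    return {
        # Restrict the video only if its *own* content scores as toxic...
        "surface_video": score_toxicity(video_description) < THRESHOLD,
        # ...and hide abusive comments one by one, not the video they target.
        "visible_comments": [c for c in comments if score_toxicity(c) < THRESHOLD],
    }

# Toy usage with a keyword heuristic standing in for the real model:
decision = moderate(
    "Debra Messing accepts GLAAD's Excellence in Media Award",
    ["vile hate speech here", "congratulations, well deserved"],
    score_toxicity=lambda text: 0.9 if "vile" in text else 0.1,
)
print(decision)  # video surfaced, one comment hidden
```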

Social Media Sites Should Take Notice

Facebook, along with its co-founder Mark Zuckerberg, has landed in hot water several times over the past few years for letting online trolls and fake accounts push disinformation and propaganda. Twitter has likewise heard from consumer groups concerned that it isn't doing enough to combat cyberbullying and the harmful content circulating in its community. Instagram isn't exempt either; it has been linked to poor mental health among younger users.

According to Halloran, this is a long process with many layers, not something that can be tackled and changed overnight – and much of it comes down to money. "Before we can expect tech companies to be incentivized to [make changes], we have to have a conversation about what their financial models are and how they're making money," he said.


A Twitter spokeswoman said the company is working hard to create a safe space and to evolve as online bullying evolves.

“In the last year alone, we updated our rules and more clearly communicated what content is allowed on Twitter,” she said. “We took action to enforce these new policies across our platform and implemented technology to stop people from seeing abusive content and block bad actors. While there will always be opportunities to refine and improve our approach, we are proud of the determined team working tirelessly on these issues to make Twitter safer.”

Facebook has also made strides toward inclusivity in recent years. In 2014 the site added 50 pre-populated gender identities for profiles (while also letting users write in their own), and in 2015 it added a rainbow filter for profile pictures during LGBTQ+ Pride Month.

“Our commitment and support of the LGBTQ community has been unwavering. From our support of marriage equality and bullying prevention, to the many product experiences that we’ve brought to life, we are proud of our attention to the LGBTQ experience on Facebook, often thanks to the many LGBTQ people and allies who work here,” a spokesman said.


At the end of the day, it doesn't matter how many options I have to self-identify on Facebook, or how many policies Twitter puts in place to create a safer space (especially when it tends to ignore some of its own rules depending on who you are).

If young people cannot depend on sites that make billions of dollars catering to them to also give them access to the information they need – information being classified as "restricted" simply because of its target audience – then we need to take a long, hard look at what we're actually doing with technology and work on creating a more equal cyber community.
