I Spent Money to be Abused on Instagram. – Noteworthy — The Journal Blog



An Executive Summary

I run a nonprofit that uses social media to build a digital community among marginalized people.

Instagram's targeted-audience tool is supposed to help people running small businesses and nonprofits find the people who feel most connected to our work.

But when I used this tool, instead of helping me find new connections, it exposed both me and my community to a flood of hate.

I would like to do something about that.

I am a User Experience Researcher.

One of my many roles is helping to qualitatively train a machine learning algorithm to correctly interpret and categorize search queries so that it delivers more relevant results.

It's my job to train the machine by doing the labor a computer never can: wrestling with the human experience and the nuances of navigating the internet. I focus on people, not numbers.

I also run a small nonprofit.

It's called If You Want It, LTD (better known as Gender is Over!). What began as a simple, personal t-shirt project between a friend and me gained a lot of visibility and evolved into something much bigger.

Today it’s a community. We’re a group of people who, together, envision a future wherein coercive gendering ceases to exist; a world where individuals of every gender feel self-determination and body sovereignty in both their expression and in their communities.

If You Want It also operates as a foundation of sorts — hosting events, running fundraising initiatives and distributing money to grassroots, intersectional LGBTQ networks while seeking folks across the globe who might find a home in our perspective.

The project is three years old and has donated around $30,000 to other grassroots nonprofits to date. We have a modest, but growing, platform on Instagram, Twitter, Tumblr and Facebook. And my labor landed me on the cover of Time magazine in March 2017.

Where My Two Jobs Overlap

The algorithms I work on in my day job are pretty innocuous and non-manipulative. Every human who interacts with them is treated the same, regardless of who they are, where they come from, or where they’ve been on the web.

However, when I merge my increasing understanding of algorithms with my social justice and digital community work, the conversation shifts. Drastically.

Like all Instagram accounts, the exposure that my Gender is Over! profile receives is at the mercy of algorithmic decision-making. The project is shaped by who sees and interacts with our social media. In many circumstances this is an overwhelmingly good thing. But not always.

What Happened to Me

Towards the end of 2016 I decided it was worth spending $10 or $20 on Instagram's ad platform to reach folks beyond my current network for Gender is Over! If You Want It. I do not have any kind of advertising budget.

Running an ad was not a hasty decision. I’m eternally skeptical about forcibly selling people products and ideas that they don’t need.

However, it became clear to me that I have a genuine opportunity to find folks in areas outside of my one-to-two degrees of separation. Perhaps these would be folks who could benefit from the project the most. Closeted queer teens in rural Trump-voting towns or trans folks struggling to find community might actually want to find the project, but they may lack the resources and tools to get there on their own. If they didn’t already know the project existed, and no one close to them knew either, how were they to find it?

I ran my first advertisement right after the 2016 Presidential election, to a general audience. The platform provides tools that let you target people with particular interests, but I thought Instagram would be good enough at picking people for me.

Figure 1: This is the advertisement that people saw
Figure 2: This is the audience who saw it

The campaign didn’t get much traction. A person or two @ mentioned a friend to tag them in the post. Another person or two said that accounts like mine are why Trump was elected. The experience was unremarkable.

A month or two later I decided to take a chance and run an ad again. Keeping in mind that my goal was to reach like-minded folks, I thought that using Instagram's audience selector tool would be an effective way to improve my reach. That's why they built the tool, after all, right?

I set up a very simple text based ad (Instagram has since been more aggressive about ensuring ads have little text on the picture) and picked a price of $10. Then, I chose an audience of people who were interested in subject matter along these lines:

Figure 3: The “Interested” Audience: Gender, Gender Identity, Genderqueer, MTV, LGBTQ Nation, Teen Vogue, Laura Jane Grace, Laverne Cox, Janet Mock

Within 24 hours the trolls rolled in. I can’t remember what exactly they said to or about me, but it was dramatic enough to make me pull the ad without hesitation and end the campaign.

The experience was intense. It left me feeling vulnerable, a little bit afraid, and simultaneously frustrated that my message wouldn’t be able to reach other folks like bigger brands could. And then it got me thinking:

Why, when using Instagram’s tools to send my message to more relevant folks, was I abused in a way I had never been before?

I decided it was time to conduct an experiment.

The goal

of the study would be to begin to suss out whether using Instagram's targeted marketing actually put my account at greater risk of receiving abuse.

I figured

that if the evidence was there on my nonprofit's account, then other small social-justice nonprofits that wanted to run advertisements might experience the same thing. And if others were experiencing it too, then maybe something could be done about it.

The Set Up (My Methodology)

I began with one control — for a baseline — plus a test. Actually, I followed up with two tests — but that will make this story longer and more complicated so I’m going to save it for now. Spoiler alert, it involves the intersection of toxic masculinity and trolls.

I wanted as few variables as possible, so, save for the launch date of the advertisement and the audience segmentation, every other part of the setup was identical. Design-wise, I used the same ad that I'd frantically pulled down just a month prior (see figure 1).

Once live, I monitored my account multiple times a day, logging the comments I received on the ad as well as on other photos that I believed to be related to the ad.

Figure 4: The Setup
Figure 5: The live ad

The Results

As anticipated, the test ad with a target audience got significantly more traction.

Despite running for the same duration, and my shelling out the same amount of money, the LGBTQ-focused ad was seen 468 more times (about 33% more) than the general-audience ad, and received 8 more comments (about 22% more).

The commentary on both campaigns skewed decidedly negative. But compared to the 84.62% of negative comments on the control ad, the ad seemingly targeted toward the LGBTQ community saw exclusively (100%) negative commentary.

Figure 6: The Results, Visualized
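The percentages above can be sanity-checked with a couple of lines of arithmetic. Note that the absolute baseline counts below are my own back-solved assumptions for illustration, since only the deltas (468 impressions, 8 comments) and percentages are reported:

```python
def pct_increase(control: float, test: float) -> float:
    """Relative increase of `test` over `control`, in percent."""
    return (test - control) / control * 100.0

def pct_negative(negative: int, total: int) -> float:
    """Share of negative comments, in percent."""
    return negative / total * 100.0

# Hypothetical baselines, back-solved from the reported figures:
control_impressions = 1418                    # assumed; 468 more is then ~33%
test_impressions = control_impressions + 468  # the targeted ad's impressions

print(round(pct_increase(control_impressions, test_impressions)))  # 33
print(round(pct_negative(33, 39), 2))  # 84.62, e.g. 33 negative of 39 total
```

Any pair of counts in the same ratios reproduces the reported percentages; the point is only that the "33% more" and "84.62% negative" figures are consistent with small whole-number comment counts.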

The Reactions

Content warning: abusive language, mentions of suicide, verbal abuse, anti-Semitism, mentions of self-harm.

Figure 7: Total Chaos

My Biggest Takeaway: Mutual Distaste, or, Trolling the Trolls?

More than receiving strictly hateful commentary, there was a noticeable saturation of folks who seemingly wanted me off their feed just as badly as I wanted not to be there in the first place.

Right now I’m not here to change hearts and minds — I’m simply here to find other folks who already think like me and want a sense of community. Why, then, did the algorithm pair us together in a pit of mutual despair?

Ensuing Questions

As with most good qualitative research projects, I concluded my study with a lot of thoughts, a couple of additional hypotheses, and even more questions than I had to begin with.

Did Facebook and Instagram create a platform to identify and distribute targeted ad campaigns without paying any attention to, or choosing to ignore, qualitative engagement metrics?

If someone leaves an explicitly anti-LGBTQ comment, do they really count as someone “interested” in LGBTQ subject matter? What in the code, specifically, caused this to happen?

Does Instagram have any reason to fix the problem as long as matching accounts to ad campaigns keeps turning a profit? Even if small nonprofits like mine burn money while trying to connect with potential supporters?

Do the people who make and run this platform understand that their computer code can genuinely hurt the people who engage with it?

Do greed, stakeholders, the bottom line, and capitalism in general take away any incentive for social media platforms to prioritize the mental health of their customers over the economic benefit from trolls and abusers?

Distrust of capitalism and abusive power structures aside (not that those are so easily swept aside), I generally believe that humans at Instagram are taking concerted efforts to both positively affect small product and service based businesses and serve customers sponsored content that might actually align with their interests. I’ve witnessed it first-hand, discovering tiny local brands I never would have known about otherwise.

But — what about spaces that challenge the patriarchy, heteronormativity, sexism, transphobia, the wage gap, the racist foundations of America, fatphobia or xenophobia? Or what about any other vulnerable space where standing up and being visible puts an actual target on one’s back, subjecting a person to an onslaught of abuse or the risk of having sensitive information disseminated across the web?

Social media platforms must take responsibility for recognizing their incredible impact on society: answering to injustice, preventing abuse and the circulation of lies passed off as news, and creating space for marginalized people to stand tall, be centered, and be heard without risk of emotional or physical violence.

Where Do I Go From Here?

Obviously this was a very small study. So what does that mean? As a researcher that means that I’ve got to get more data!

Do you or someone you know run a small social-justice nonprofit? Has something similar happened to you, or have you avoided running an ad because you’re afraid it would? Do you want to help find patterns by letting me run a small research study, or by sitting down to talk about your experiences? Want to hear more?

Contact me.