Part I – Design Proposal
The Topic: The World Wide Web is so powerful that the government is often forced to intervene and develop legislation to curb its consequences. One notable attempt is the Stop Enabling Sex Traffickers Act and Allow States and Victims to Fight Online Sex Trafficking Act of 2017, together known as FOSTA/SESTA. Pre-FOSTA/SESTA, a number of websites and forums hosted advertisements for adult entertainment, but the new law made online platforms liable for "sex trafficking" (Public Law 115–164, 115th Congress, 2017), prompting those websites to go offline.
The Question: The question I would like to propose for research is, "How do sex workers mitigate censorship post-FOSTA/SESTA?" Many of the most popular social media networks, like Meta's Facebook and Instagram, employ artificial intelligence (AI) and humans to identify material to be removed or suppressed (Facebook.com, 2024), both of which have a documented history of discrimination (Ferrer, Nuenen, Such, Coté and Criado, 2021). There have been reports of content moderation targeting people of color and plus-size creators (Gallay, 2021), so I would like to investigate whether some groups are disproportionately censored.
The Concern: Regardless of paternalistic objections, sexual expression is free speech, and sex workers have extensive histories of subverting Puritan subjugation cloaked in moral concern. Now, they are experiencing another type of exploitation Musto, Fehrenbacher, Hoefinger et al. describe as "sexual humanitarian creep," which "expands the network of state and non-state actors installed to address it, particularly non-state actors that embrace carceral approaches … [and] deputize third-party actors as frontline platform enforcers" (pg. 5). This policy empowers laymen to exert authority over their peers, subjecting sex workers to more oppression and stigma at the hands of their fellow citizens.
FOSTA/SESTA promotes surveillance, as law enforcement is now mandated to scrutinize sex workers despite having a problematic history of violating civil rights (Barker, 2011). Additionally, the law has created a "chilling effect," dissuading victims from speaking about their experiences for fear of prosecution (Blunt and Wolf, 2020).
Despite being the supposed beneficiaries of this policy, many sex-trafficking victims criticize FOSTA/SESTA not only for its damaging effects but also for its ineffectiveness (SurvivorsAgainstSESTA.org). Between 2017 and 2023, there was only one instance of the Department of Justice prosecuting a FOSTA/SESTA case (Berkeley Journal of Criminal Law, 2023), and there has been no documented decline in sex trafficking (whyy.org, 2020).
Paradoxically, investigating sex trafficking has become more challenging since the passage of FOSTA/SESTA, as consensual and non-consensual sex work networks have been fragmented and relegated to the dark web and the streets (Gallay, 2020). "Bad date" sites, where sex workers screen potential clients known to be dangerous, have been shut down, depriving sex workers of the ability to vet their clients (Musto, Fehrenbacher, Hoefinger et al., 2021). Income instability has forced many to resort to riskier offline venues like massage parlors and street corners, leaving them more vulnerable to "pimps" and sex traffickers (Blunt & Wolf, 2020).
The Methods: The question "How do sex workers mitigate censorship post-FOSTA/SESTA?" is exploratory and nomothetic, best answered using qualitative data from sex workers themselves, which is why the primary method of collecting data will be a survey. Some potential questions are: "What strategies have you employed to deter your posts from getting flagged for content violations?", "What percentage of content flagged or removed for guideline violations do you feel has been justified vs. unjustified?", "Do you notice a difference in moderation between different types of content (text vs. photo vs. video, or face vs. full body)?" and "How do you feel sex worker advocacy has been impacted by censorship online?" However, the question "Does AI moderation and human review censor lawful content and disproportionately flag specific groups?" cannot be answered with anecdotal evidence. The ideal approach would be a quantitative analysis of a representative sample of content removed by platforms.
This will be a mixed-methods cross-sectional study, with the unit of analysis being the individual sex worker. Potential variables are strategies, virtual resources, platform behavior, lawfulness, censored content, and identity characteristics such as race, economic class, or body type to measure whether certain factors influence moderation practices.
The Hypothesis: I suspect the enactment of FOSTA/SESTA has resulted in the unconstitutional expansion of censorship and surveillance, disproportionately impacting fat and POC creators, compelling sex workers to discover ways to subvert censorship.
This is why my question is: âHow do sex workers mitigate censorship post-FOSTA/SESTA?â
Part II – Lit Review & Arguments
Hacktivist Julian Assange said, "The penetration of society by the Internet and the penetration of the Internet by society is the best thing that has ever happened to global human civilization." Even in the early days of cyber law, Congress determined the World Wide Web "represent[s] an extraordinary advance in the availability of educational and informational resources [and] offer[s] a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity" (47 U.S.C. § 230(a)(1) & (3) (1997)). The web was born from a libertarian counter-culture, giving rise to virtual communities that contribute to an exchange of public good (Ellis, Oldridge, & Vasconcelos, 2005).
There isn't a single industry that hasn't been impacted by the advent of the internet, and the "oldest profession in the world" is no exception. Sex workers have been using the internet to promote, network, and support themselves since the dawn of cyberspace (Swords, Laing, & Cook, 2023). According to a survey conducted by Hacking//Hustling, a sex work and technology advocacy organization, the internet is a vital resource. One respondent reported, "Access to support groups and safety groups … are essential for my screening and networking with others. I like to also keep up to date with what's happening in the sex workers' rights movement across the globe and Twitter has been great for that. I follow a lot of outreach organizations and activists" (Blunt & Wolf, 2020, pg. 120).
Just like newspapers and libraries, virtual platforms were granted immunity from being held responsible as publishers of user-generated content with the passage of Section 230 of the Communications Decency Act (CDA) in 1997 (Huddleston, 2021). In a lawsuit challenging portions of the CDA, the ACLU declared the use of the internet to have "no parallel in the history of human communication." The court agreed, saying, "As a matter of constitutional tradition… we presume that governmental regulation of the content of speech is more likely to interfere with the free exchange of ideas than to encourage it. The interest in encouraging freedom of expression in a democratic society outweighs any theoretical but unproven benefit of censorship" (Reno v. ACLU, 1997).
Section 230 of the CDA is sometimes credited as the "Magna Carta" of the internet (Yachot, 2017), but "the second subsection, Section 230(c)(2), states that online platforms are not liable for any action taken voluntarily and in good faith to limit accessibility to content that platforms or users find objectionable, regardless if the content is protected by the Constitution" (Sells, 2024).
Additionally, a recent amendment colloquially known as FOSTA/SESTA (the Stop Enabling Sex Traffickers Act and Allow States and Victims to Fight Online Sex Trafficking Act of 2017) added language to "not prohibit the enforcement against providers and users of interactive computer services of Federal and State criminal and civil law relating to sexual exploitation of children or sex trafficking, and for other purposes" (H.R.1865 – 115th Congress, 2017). The passage of this law suddenly made online platforms legally liable for "sex trafficking," and with that, even legal adult content was unceremoniously de-platformed (Blunt & Wolf, 2020).
FOSTA/SESTA's consequences have been massive, felt across numerous platforms and affecting millions of users beyond sex workers and their clientele. "Fitness" pole dancers, athletes, and dancers also routinely experience over-moderation (Are, 2020). Censorship ranges from the trivial, like a photo of a cat in a business suit (Langvardt, 2022), to the weighty, such as the highly controversial account deactivation after posting the Pulitzer Prize-winning photograph "Terror of War" (Gillespie, 2019).
FOSTA/SESTA has resulted in internet-wide "collateral censorship," where "the government or other powerful party will almost always choose to punish the distributor over the speaker, or at least threaten to do so, as a way to censor disfavored speech" (Armijo, 2023). The largest and most influential technology companies, Meta (owner of Facebook and Instagram) and Google (owner of YouTube), "have been, in effect, deputized as censors" (Etzioni, 2019). Even platforms reserved for adult entertainment, including two of the most popular, OnlyFans and Pornhub, are not immune from over-regulation, as financial institutions impose their own restrictions that result in extra-governmental constraints on protected speech (Belcher, 2020; O'Brien & Reitman, 2021).
The primary platform proposed for study will be Instagram, as there have been a number of documented cases of censorship and moderation biases plaguing this social media platform (Are, 2020, 2022; Johnson, 2022; Cotter, 2023; Human Rights Watch, 2023; Leybold & Nadegger, 2023; Bourdeloie & Larochelle, 2024; Uzcátegui-Liggett & Apodaca, 2024). Instagram is accused of favoring "hegemonic ideals" (Leybold & Nadegger, 2024) and "'winners' being those with greater access to social, cultural, political, and economic resources" (Cotter, 2019, pg. 909). Anecdotally, it has been asserted that people of color (POC) and large-bodied creators face disproportionate moderation compared to their thinner and/or whiter counterparts (Gallay, 2021; Johnson, 2022).
This researcher identifies as a fat woman and has opted to use the term "fat" as an objective descriptor of body size, spanning "small fat," "medium fat," and "large fat." The negative connotations associated with this term are wholly socially constructed and pervasive. In Machine Learning as a Model for Cultural Learning: Teaching an Algorithm What it Means to be Fat, Arseniev-Koehler and Foster state that "a vast body of work demonstrates that Americans hold a range of pejorative and stigmatizing meanings around fatness," listing more than 10 sources published between 2001 and 2019 (2022). For these reasons, the word "fat" has been reclaimed by the fat community to de-stigmatize the insult (Bourdeloie & Larochelle, 2024) and will be used interchangeably with "plus-sized" or "large-bodied." It should be noted that many plus-sized women choose to describe their body type with the internet acronym BBW, for Big Beautiful Woman, or SSBBW, for super-size BBW; these terms will therefore also be used in this study.
Instagram, like most other social media platforms, utilizes Artificial Intelligence (AI) and human review (HR) to moderate its communities, both of which show extensive evidence of prejudice (Arseniev-Koehler & Foster, 2022; Silberg & Manyika, 2023). The concern is that "algorithms may bake in and scale human and societal biases," as has been apparent in several cases of discrimination in AI decisions (Schwartz, Vassilev, Greene, et al., 2022).
One case study highlights the discrepancies between the moderation of two comparable images on a photographer's Instagram account: "Both of these photos I took. The first is a photo I took last week of @curvynyome, and the second is a self-portrait, ironically, taken to illustrate censorship. The latter has been on my feed since November 2018, […] where she is wearing and covering more than I am in my self-portrait, had been repeatedly removed…" (emphasis added) (Johnson, 2022). See Appendix Figure 1 to view the two posts. This case features a subject who meets the criteria for both populations being studied, bringing into focus the intersectional experiences of fat POC. If the proposed theories are confirmed, fat POC sex workers would be the most censored of all groups in this study.
In a report about the censorship of Palestinians posting about conditions in Gaza on Instagram, "Human Rights Watch identified six key patterns of undue censorship, each recurring at least 100 times, including: 1) removal of posts, stories, and comments; 2) suspension or permanent disabling of accounts; 3) restrictions on the ability to engage with content—such as liking, commenting, sharing, and reposting on stories—for a specific period, ranging from 24 hours to three months; 4) restrictions on the ability to follow or tag other accounts; 5) restrictions on the use of certain features, such as Instagram/Facebook Live, monetization, and recommendation of accounts to non-followers; and 6) 'shadow banning,' the significant decrease in the visibility of an individual's posts, stories, or account, without notification, due to a reduction in the distribution or reach of content or disabling of searches for accounts" (Brown & Younes, 2023, pg. 2).
These censorship tactics are not isolated to images of war. In fact, shadow-banning is an oft-cited method of moderating sexual or vaguely sexual content on Instagram (Are, 2022; Johnson, 2022; Cotter, 2023; Leybold & Nadegger, 2023; Sells, 2024). Platforms now encourage peers to police each other (West, 2017), a phenomenon Musto et al. describe as "networked moral gentrification" (2021). Carolina Are, a prolific researcher of sex work and Instagram, has explored malicious content flagging as "user-generated warfare" (2023). FOSTA/SESTA and the resulting platform policies have likely exacerbated the stigma surrounding sex workers but have done little to improve public safety (Tung, 2020).
To be sure, sex work is a divisive issue, even within progressive and feminist groups. Radical feminists believe that sex work is exploitative and inherently damaging, while liberal feminists argue that it is liberating and empowering. This research design will feature what Henry and Farvid call "critical feminism," a more intersectional, "dialectical" approach (2017), alongside Marxist feminism, which is critical of sex work but contends that it should be regulated to ensure worker well-being. Therefore, a critical tenet moving forward with this research is the paradigm of sex work as work and, with this distinction, its entitlement to solidarity alongside mainstream labor movements. Categorizing sex work as employment provides sex workers with protections under laws such as Title VII of the Civil Rights Act of 1964, which protects workers against sexual harassment, even in sex work contexts (Schultz, 2006).
The central premise of this research is to investigate how sex workers mitigate censorship post-FOSTA/SESTA and to test the theory that Instagramâs moderation practices disproportionately target creators based on their race and body size.
To attempt to answer the primary question of mitigation strategies, sex workers will be the unit of analysis, with the dependent variable and outcome of interest being the status of sex workers' Instagram posts. Posts can fall into a number of categories: unmoderated, shadow-banned, flagged, removed, and deactivated. The first category represents content that does not appear to have any intervention from Instagram, reaching the usual metrics compared to similar content. The next category is more difficult to measure, as there are no indications from Instagram that the post is being suppressed except an unusual dip in engagement. Flagged posts have been reported to the user as potential violations, leading to an official warning that the content is being hidden from a wider audience. Removed posts are the easiest to monitor, along with the suspension or deactivation of the account.
A broad list of variables potentially contributes to the eventual status of an Instagram post: largely the captions and hashtags (or lack thereof) accompanying the various types of visuals (all described as metadata for the purpose of this project) and the associated ratings of said captions, hashtags, and visuals. Each piece of metadata can be classified using ratings employed by mainstream media (G, PG-13, R, and XXX) or, in the context of hashtags, ranked as low-, medium-, or high-risk, as hashtags may not be explicit but can still be banned and heavily moderated (Are, 2022).
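The tiered rating scheme described above can be sketched as a small classification routine. This is a hypothetical illustration: the tier lists and function names are invented for this example, not a validated taxonomy.

```python
# Hypothetical tier lists for illustration only; a real study would build
# these from documented banned/restricted-hashtag research (e.g., Are, 2022).
HIGH_RISK = {"nsfw", "adultcontent"}
MEDIUM_RISK = {"curvy", "lingerie"}

def hashtag_risk(tag: str) -> str:
    """Return 'high', 'medium', or 'low' for a single hashtag."""
    t = tag.lstrip("#").lower()
    if t in HIGH_RISK:
        return "high"
    if t in MEDIUM_RISK:
        return "medium"
    return "low"

def post_risk(hashtags: list[str]) -> str:
    """A post inherits the highest risk tier among its hashtags."""
    tiers = {hashtag_risk(t) for t in hashtags}
    if "high" in tiers:
        return "high"
    if "medium" in tiers:
        return "medium"
    return "low"
```

Each coded post would then carry both its mainstream-media rating (G, PG-13, R, XXX) and a hashtag risk tier as separate variables.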
The demographic variables will be the most telling in the comparative analysis measuring the moderation of marginalized groups. To test the theory that POC and fat creators are disproportionately censored, a number of approaches are available, but all rely on the participants to self-report their race/ethnicity and body size. Please see Appendix Figure 2 to review all proposed variables.
I contend that moderation practices, whether algorithmically driven or human-reviewed, are skewed by societal and cultural biases rooted in white supremacy and fatphobia. The average person may bristle at the accusation, but decades of research show we all carry implicit biases and are socialized by a broader society that directly and indirectly favors white, heteronormative, and traditionally visually appealing (thin) bodies (Schupp & Renner, 2011), at least within dominant Western cultures.
Scholars such as Arseniev-Koehler and Foster (2022) and Schwartz et al. (2022) have demonstrated that AI systems encode cultural biases related to body size and race, and works like those by Blunt and Wolf (2020) have documented the negative impacts of FOSTA/SESTA on sex workers. This proposed project builds on previous research about bias in moderation practices, expanding the discussion to highlight the intersection of sexual expression, race, and body size in public online forums.
Hypotheses
This study not only advances theoretical understanding but also provides practical insights for adult content creators navigating platform policies, adding to the growing body of work on bodily autonomy, womenâs labor rights, and digital discrimination. To explore the implications of FOSTA/SESTA, three hypotheses are presented for potential research:
H1: FOSTA/SESTA has led to the unconstitutional censorship of lawful content, prompting sex workers to develop strategies to evade over-moderation.
This hypothesis asserts a correlation between the passage of FOSTA/SESTA and the increased censorship experienced by sex workers on platforms like Instagram. The law's vague definitions and punitive policies have led to a drastic loss of resources and virtual safety nets, endangering mostly women. This culture of censorship has forced sex workers to restrict their representation online or risk losing their livelihood. Identifying tactics to evade overzealous moderation is not only mission-critical for the survival of many individual sex workers but also necessary for the collaborative advancement and advocacy of the sex work community (Are, 2022).
This hypothesis is testable through qualitative methods such as surveys or interviews with sex workers to gather data on the specific strategies they use to navigate censorship. Additionally, quantitative analysis of flagged or removed content could provide empirical evidence supporting the claim that moderation practices are implemented discriminately. The hypothesis is falsifiable as well, allowing for the possibility that FOSTA/SESTA has not had a significant impact on censorship practices or that the strategies employed by sex workers do not effectively mitigate moderation. I expect that sex workers will report using more than one tactic, but that these tactics are largely ineffective at avoiding moderation.
H2: Black creators experience higher rates of content flagging and removal than their white counterparts due to biases in AI and human reviewers.
Racial bias is also very well documented within peer groups, institutions, and technology. Specifically, Black people experience systemic oppression that spans history and geography. Poverty is generational, and stereotypes permeate from local communities to national mass media. These facts are impossible to disentangle from institutional policies, particularly ones accused of restricting Constitutional rights.
H3: Fat creators experience higher rates of content flagging and removal than their thinner counterparts due to biases in AI and human reviewers.
Anecdotal and case study evidence suggests that body size is a determining factor in platform moderation. There is overwhelming research illustrating the negative sentiments around fatness, and a growing number of examples showing AI perpetuating harmful stereotypes.
H2 and H3 are nearly identical concepts, requiring a nearly identical research approach. To analyze these hypotheses, a number of quasi-experiments can be conducted to collect quantitative data comparing consequences. Data will be collected using a variety of approaches, ranging from self-reported submissions to automated web-scraping scripts. Should the results indicate no difference in consequences along any demographic, these hypotheses can be rejected. There is also the possibility that, despite their similarities, one hypothesis could have significant findings while the other has no notable results. However, I suspect H2 and H3 will have similar outcomes; in other words, if the platform is biased against one population, the other population is also likely to experience bias to some degree.
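As one sketch of the comparative analysis for H2 and H3, flag rates between two demographic groups could be compared with a two-proportion z-test. The counts below are invented placeholders, and the function uses only the standard library.

```python
import math

def two_proportion_z(flagged_a: int, total_a: int,
                     flagged_b: int, total_b: int) -> float:
    """z statistic for H0: the flag rate is equal in groups A and B."""
    p_a, p_b = flagged_a / total_a, flagged_b / total_b
    # Pooled proportion under the null hypothesis of equal rates.
    p_pool = (flagged_a + flagged_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Placeholder counts: 40 of 200 posts flagged in group A vs. 22 of 200 in B.
z = two_proportion_z(40, 200, 22, 200)
# |z| > 1.96 would indicate a difference significant at the 5% level.
```

The same test applies whether the groups are split by race (H2) or body size (H3), which is why the two hypotheses share one analytical design.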
Part III – Implementation
Approaches:
Kozinets described a new type of ethnography for social media studies he named netnography, writing, "Where the algorithm goes, the astute netnographer will follow, chasing down what it allows, what it removes, what it randomizes, whose interests it exalts, whose it exploits, and whose are excluded altogether." As the netnographer, I leverage my position as an "intimate technoculture insider": being a chronically online millennial and an Instagram user of over a decade provides me with insight into users' experiences.
Instagram has inspired a variety of research projects in the last decade, but there remains "a pressing need for Instagram-native research strategies that exploit the specific methodological potentials of Instagram's hashtags, mentions, likes, captions, and geotags to enable in-depth investigations" (Caliandro & Graham, 2020). Yang explores the qualitative and quantitative approaches well-suited for Instagram, using questionnaires, the native API, and web scrapers to conduct visual interpretation, comparative analyses, and statistical modeling (2021).
Existing research primarily relies on first-hand experiences in the form of surveys and interviews with users who believe they have been affected by shadow-banning. Le Merrer, Morgan, and Tredan took a statistical approach to shadow-banning on Twitter. Because platforms largely deny the practice, users are left to rely on "folk knowledge" and "collaborative algorithm investigation" from proactive peers (Delmonaco, Mayworm, Thach, et al., 2024).
The election of Trump and the potential adoption of the right-wing think-tank manifesto Project 2025 add urgency to this proposed research. Page 5 of Project 2025 states, "Pornography should be outlawed. The people who produce and distribute it should be imprisoned. Educators and public librarians who purvey it should be classed as registered sex offenders. And telecommunications and technology firms that facilitate its spread should be shuttered" (Dans, Groves, & Roberts, 2023).
Ipsos conducted a Social Media Moderation Poll in 2023 that revealed overwhelming public mistrust that platforms moderate content without bias. A strong majority also finds certain actions to be important, including "Notify users about the reason their content has been removed" (81% total), "Remain unbiased when making content moderation decisions" (78% total), and "Have an appeal process for any content moderation decision" (71% total).
This new proposed survey, succinctly called the SESTA Survey, will explore the current moderation practices policing sex workers on Instagram and collect both qualitative and quantitative data describing specific examples of negative action by platforms against users.
The survey will be hosted on a dedicated website that provides users with the opportunity to browse background information and eventually review the results. Security measures such as Secure Sockets Layer (SSL) certificates and data encryption for sending and storing form responses will be implemented. Operational strategies to minimize risk involve minimizing identifiable information and fragmenting data into multiple cross-referenced forms.
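A minimal sketch of the fragmentation strategy, assuming Python on the server: identity and answer records are stored separately and linked only by a salted hash, so neither file alone identifies a respondent. The field names and example email are hypothetical.

```python
import hashlib
import secrets

SALT = secrets.token_hex(16)  # in practice, stored apart from both data files

def link_key(identifier: str) -> str:
    """Derive a cross-reference key that does not reveal the identifier."""
    return hashlib.sha256((SALT + identifier).encode("utf-8")).hexdigest()

# Identity details and survey answers live in separate, separately secured
# records; only the salted hash ties them together.
identity_record = {"key": link_key("worker@example.com"), "consent": True}
answer_record = {"key": link_key("worker@example.com"), "q1": "shadow-banned"}

# Records can be re-joined for analysis only via the shared key.
assert identity_record["key"] == answer_record["key"]
```

Because the salt is stored apart from both files, a breach of either file alone yields neither names nor a way to reverse the keys.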
To conduct the research project, IRB approval is required. To submit the proposal to the Colorado State University (CSU) IRB, a Principal Investigator (PI) employed as a full-time professor must be involved. The Investigator Manual provides detailed information about required documents, such as Investigator Protocols, Worksheets for Exempt or Non-Exempt Human Research, and Consent Forms. CSU utilizes the Kuali system to organize proposals. An extensive protocol template from the NIH was chosen over the CSU template; this document provides an in-depth reference for nearly every detail involved in the research proposal.
In 2012, Janice Irvine described the role IRBs play in the marginalization and censorship of sexuality research. Historically, high-profile research has faced public backlash; Earl Babbie was quoted saying, "Only adding the sacrifice of Christian babies could have made this more inflammatory for the great majority of Americans in 1970." The report also warns that IRBs are reluctant to use potentially offensive language and may resist the use of the term "fat." However, this research intends to contribute to a broader discourse within Fat Studies (Cooper, 2010).
The target population is considered vulnerable because sex work is currently illegal in nearly every US jurisdiction. Respondents could face consequences if their identities were made public, ranging from embarrassment and stigmatization up to criminal liability. However, sex work researchers Dewey and Zheng argue researchers "should be wary of labeling sex workers a 'vulnerable population', as sex workers of all kinds clearly participate in social networks that may alternately support and abuse them, including biological or fictive kin" (2013).
Because Instagram users are the primary target audience, recruitment via Instagram is appropriate. The literature supports social media recruitment, particularly among marginalized groups (Russomanno, Patterson & Tree, 2019). A list of ethical norms was proposed by CITI:
- "Proposed recruitment does not involve deception or fabrication of online identities.
- Trials are accurately represented in recruitment overtures.
- Proposed recruitment does not involve members of research team 'lurking' or 'creeping' social media sites in ways members are unaware of.
- Recruitment will not involve advancements or contact that could embarrass or stigmatize potential participants." (Buchanan, 2020)
In addition to feed posts and Instagram ads, members who appear to meet the selection criteria will be contacted via private or direct message (DM). These push methods have been approved by the Secretary's Advisory Committee on Human Research Protections (SACHRP) (Buchanan, 2020), which recommends creating a separate Facebook Page and Instagram account to avoid involving personal social media.
The sample size is expected to be small due to the niche population being targeted. This project will likely rely on snowball sampling and direct outreach in popular public spaces. This small, protective group will likely harbor some skepticism or distrust; being a member of the targeted population enables this researcher to act as an informant. This is critical for participatory action research (PAR), which encourages members of the studied population to take an active role in developing the research methods (Kemmis, McTaggart, & Nixon, 2014). The book Ethical Research with Sex Workers features sex workers turned academics and recommends researchers disclose their shared identities to promote credibility and community with potential participants (2013).
The CITI training for Social Media Recruitment explains that IRBs expect to see:
- "The exact ads (images and language)
- Sets of images and sets of language by platform
- How are participants targeted (e.g., demographics, religion, keywords)?
- What happens where and when?
- Once a participant sees an ad and clicks it, what happens next and is this on a social media site or a site the institute controls or some other third party?
- Security measures being taken during recruitment."
Potential participants should be informed of any and all risks associated with participation. The two most significant risks to participants are the potential for emotional discomfort and privacy violations. Federal regulations define minimal risk as "harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life."
Since the narrative of sex workers as a vulnerable population has been questioned, the assumption that discussing sex work could result in emotional distress is problematic: workers who rely on the sex trade presumably encounter the survey topics in their daily lives. Irvine writes, "IRBs that assume sexuality is 'sensitive' prohibit the very research that might demonstrate that, for many, it is not" (2012).
In the event a respondent has an adverse reaction, the user is encouraged to withdraw, and an "emergency exit" is always easily accessible in the stickied header should the user need to hide their activity quickly. Researchers are also encouraged to provide debriefing material, and the Board of Scientific Affairs' Advisory Group on the Conduct of Research (BSAAGCR) on the Internet notes: "when subjects are identifiable and the research involves data that place them at risk of criminal or civil liability or that could damage their financial standing, employability, insurability, reputation, or could be stigmatizing… standard security measures in place for electronic commerce, such as encryption and secure protocols, are likely to be sufficient" (2004).
While IRB standards typically require signed consent forms, exceptions exist when the signature would be the only identifiable data associated with the survey. Furthermore, requiring a signature has been shown to "reduce response rates, increase nonresponse to sensitive items, and possibly produce biased data." While it is possible to collect signatures, the BSAAGCR Report on Psychological Research Online recommends obtaining consent using a checkbox, and this survey makes consent a requirement prior to viewing. Additional qualifying questions hide the survey contents until the user indicates they: are 18 years or older, perform sex work, and live or work in the United States.
A codebook was created using a Google Sheets database and follows the format described by Horstmann, Arslan, & Greiff in 2020. They recommend spreadsheets with rows for questions and columns that could include variable name, variable type, verbatim question, code keywords, response format, and response labels, all of which were included in this codebook. The University of Gothenburg published a codebook called V-Dem, or Varieties of Democracy, that details hundreds of variables, supplying details that include Project Manager (country expert), Question, Clarification, Responses, Scale, and Cross-coder aggregation (Gerring, Henrik, et al., 2021).
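A single codebook row, following the spreadsheet columns listed above, might look like the following sketch. The variable name and response labels are hypothetical examples built from one of the survey questions proposed earlier.

```python
import csv
import io

# Columns follow the Horstmann, Arslan & Greiff (2020) format cited above.
FIELDS = ["variable_name", "variable_type", "verbatim_question",
          "code_keywords", "response_format", "response_labels"]

rows = [{
    "variable_name": "flag_justified_pct",   # hypothetical variable name
    "variable_type": "ordinal",
    "verbatim_question": ("What percentage of content flagged or removed "
                          "for guideline violations do you feel has been "
                          "justified vs unjustified?"),
    "code_keywords": "moderation; fairness",
    "response_format": "single choice",
    "response_labels": "0-25%; 26-50%; 51-75%; 76-100%",
}]

# Write the codebook as CSV so it can be imported into Google Sheets.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
codebook_csv = buf.getvalue()
```

Keeping the codebook in a plain, column-stable format like this makes it easy to version, share with co-coders, and reconcile against the live spreadsheet.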
A number of quasi-experiments could be proposed to investigate the hypotheses presented as H2 & H3. The first, but least reliable, method of collecting data will be the content collection form presented at the end of the survey, inviting participants to submit examples of content they believe has been moderated unfairly (the definition of "fair" is left to the participant to judge). This is problematic because participants may have limited knowledge of the ultimate status of their posts, and can only guess whether they were shadow-banned. Details are requested along with the visual, including captions, hashtags, and dates, which will be difficult for most Instagram users to recall. Even with an acceptable volume of content collected, there remains the issue of biases introduced by participant self-selection and user-generated submissions.
The next approach would be modeled after a non-profit newsroom investigation by themarkup.org probing the censorship of Palestinians posting about conditions in Gaza. For this portion, approximately 20 volunteers would be enlisted to create comparable content featuring either borderline or outright platform violations so that moderation can be monitored. Volunteers would span all colors and sizes, and content would be posted across many accounts with systematically varied attributes to determine which content receives which consequences, and when.
There are some potential costs associated with this project. The survey is currently hosted on a private web server; an alternative survey platform such as Qualtrics may be preferable, though pricing for that solution is only available by contacting sales for a quote.
Additionally, the participants providing the extra labor of creating content should be compensated.
A Python script is one way to continuously check the status of a post and document whether the post has been removed or the account deactivated.
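A minimal sketch of such a script follows, assuming a public post URL can be polled directly. The status mapping is an assumption: Instagram documents no public status endpoint, and a removed post may redirect to a login page rather than return 404, so observed codes would need validation during a pilot.

```python
import datetime
import urllib.error
import urllib.request

def classify_status(http_status: int) -> str:
    """Map an HTTP status code to a coarse post status.

    Assumed mapping for illustration: a live post returns 200,
    while a removed post (or deactivated account) returns 404/410.
    """
    if http_status == 200:
        return "live"
    if http_status in (404, 410):
        return "removed"
    return "unknown"

def check_post(url: str) -> dict:
    """Fetch a public post URL once and record the observed status."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            status = classify_status(resp.status)
    except urllib.error.HTTPError as err:
        status = classify_status(err.code)
    except urllib.error.URLError:
        status = "unreachable"
    return {
        "url": url,
        "checked_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "status": status,
    }
```

Run on a schedule (e.g., a cron job every 15 minutes), appending each record to the study log; the first "removed" observation then brackets the takedown time.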
Recruiting participants to create visual content – broadly called visual research methods (VRM) (Rose, 2014) – has been explored through approaches such as photo-elucidation or -elicitation (Pauwels, 2015) and photo-voice (Lorenz & Kolb, 2009). The provocative nature of this research complicates the ethical questions that already surround VRM. Wiles, Coffey, Robinson, and Heath navigate the tension between aversion to image manipulation and the desire to anonymize and sanitize images for mass consumption in a report called "Anonymization and visual images: issues of respect, 'voice' and protection" (2010).
There are limitations to this study, as some moderation practices are harder to monitor. Shadow-banning, for example, is always speculative and can only be identified by comparing metrics. A shortcoming of this proposal is that "dummy" accounts will not have the engagement necessary to monitor such unreported penalties, which can only be inferred from sudden drops in post metrics.
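Because shadow-banning can only be inferred, any detection rule is a heuristic. A minimal sketch, with an assumed 30% cutoff, flags a post as suspect when its views fall far below the account's historical average:

```python
def suspected_shadowban(post_views: int, historical_avg_views: float,
                        threshold: float = 0.3) -> bool:
    """Flag a post as a suspected shadow-ban candidate.

    The 30% threshold is an assumption for illustration; a low ratio
    only suggests suppression, it cannot prove it, and low-engagement
    "dummy" accounts will rarely have a usable baseline at all.
    """
    if historical_avg_views <= 0:
        return False  # no baseline to compare against
    return post_views / historical_avg_views < threshold
```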
The next most reliable method of collecting data would be to recruit established creators and ask them to share their Instagram access. However, this will be a difficult endeavor: content must be comparable to identify blatant discrepancies, yet established creators would rightfully be concerned about losing access to their accounts or losing engagement for indeterminate periods that could last months.
Each method proposed with this project would likely benefit from pilot tests and pre-surveys.
Expected Results
The results of the survey and experiment(s) will support or refute H1: the enactment of FOSTA/SESTA has resulted in the over-moderation of legal content on social media platforms. Existing research, anecdotal reports, and first-hand experiences suggest a relationship between the law's passage and moderation practices.
Respondents are likely to report increased instances of flagged, shadow-banned, or removed posts, providing evidence of inconsistencies in enforcement of community guidelines. While the sample size of sex workers with experience pre-FOSTA/SESTA may be small, they can offer the most insight into the way the law has impacted their existence online.
The project predicts evidence of racial bias in moderation practices, with Black creators reporting higher rates of content flagging and removal compared to their white counterparts. Similarly, the research anticipates that fat creators will experience heightened censorship compared to thinner counterparts. Both predictions reflect cultural biases rooted in fatphobia and white supremacy that are encoded in both AI systems and human moderators.
The experimental component is expected to reveal measurable disparities in content moderation outcomes based on race, body size, and content type. These findings will provide quantitative validation to qualitative reports, solidifying the claim that identity influences the likelihood of suppression.
Conclusion
This research project will shed light on the disproportionate censorship of sex workers, Black creators, and fat people in the wake of FOSTA/SESTA legislation. The consequences of FOSTA/SESTA, the authoritarian crackdown on sexual expression, and the threatened anti-pornography campaign within Project 2025 put free speech at serious risk. Furthermore, the dissolution of harm-reduction resources and the de-platforming of regulated communities place the victims of sex trafficking in greater peril, a serious shortcoming indicating that lawmakers prioritized imposing Puritan opinion over Constitutional rights and matters of life and death.
Future investigations should explore how these moderation practices impact the mental health, economic resilience, and community-building efforts of creators. Most importantly, additional research is critical to communicate the realities of sex trafficking victims, and reveal the ramifications of this misguided law.
The ultimate aspiration is to repeal FOSTA/SESTA, but in the meantime, rather than reinforcing existing hierarchies of privilege, platforms must prioritize transparency, fairness, and inclusivity, ensuring that marginalized voices are not systematically silenced. Social media platforms face a pressing need for clearer guidelines, more robust appeal processes, and training for human moderators alongside the refinement of AI algorithms to counteract systemic patterns. Other regulatory measures that promote free speech and protect minors and victims include the creation of platforms dedicated to adult activity, the implementation of âteen-onlyâ accounts, requiring parental consent, and further employing the use of filters that obscure content until consent has been obtained and age verified.
The confirmation of these hypotheses will not only validate the lived experiences of these communities, but also serve as a critical resource for policymakers, tech developers, and advocates pushing for equitable digital environments for sex workers and beyond. Recalling the concept Dean Spade dubbed trickle-up social justice, which suggests that liberating the most marginalized is a net benefit to the larger population, this work hopes its findings will contribute to a freer society.
Feedback & Updates
Based on input provided by Dr. Victoria Gordon, the initial hypothesis, H1: FOSTA/SESTA has led to the unconstitutional censorship of lawful content, prompting sex workers to develop strategies to evade over-moderation, has been adapted. Because there is no conceivable way to measure moderation prior to the passage of FOSTA/SESTA, it would be inappropriate to make a causal claim.
When presented to a respected researcher in the Instagram censorship and sex work space, Dr. Carolina Are replied, "I absolutely LOVE your survey page design and your prop – they're incredible. My only concern is: do you wanna 'out' SWs' survival strategies against censorship in a way that could then make them a target for platforms to detect? How are you gonna mitigate this risk for SWs?" To address this, the mention of strategies was also removed.
Therefore, the hypothesis has been restated as H1: The enactment of FOSTA/SESTA has resulted in the over-moderation of legal content on social media platforms.
A Principal Investigator (PI) is required to pursue potential IRB approval, and the role is limited to full-time professors. I presented this project to Dr. Jessie Harney, my current Program Evaluation and Quantitative Methods professor, and she agreed to serve as PI for this project beginning the following semester.
I inquired with the Colorado State University IRB about Certificates of Confidentiality to protect against the risks respondents may face when discussing extralegal activities. It was confirmed the PI could indeed apply for a CoC through the National Institutes of Health (NIH) (any NIH-funded study is pre-emptively provided one). The CSU IRB also provided the Investigator Manual.
Dr. Alexis Kennedy also reviewed the survey and offered suggestions that identified double-barreled questions and non-mutually-exclusive responses. She also suggested placing the demographic questions at the end of the survey in case of survey fatigue. I agree with her reasoning, but because race and body size are integral data points for addressing the hypotheses, they will remain at the beginning.
Methods
Survey
disclosures
project title: censorship post-FOSTA/SESTA
researcher/business contact:
name: Abigail Peterson
company: asstrocode
email: [email protected]
address: 3807 Keokuk St., St. Louis, MO 63116
purpose of the study:
you are invited to participate in a research project aimed at exploring how censorship affects adult content creators across digital platforms. this research will help to shape better policies and business practices that support the autonomy and creative expression of creators like you.
voluntary participation:
your participation in this research is entirely voluntary. you may withdraw at any time, for any reason, without penalty. there are no negative consequences for choosing not to participate or discontinuing your participation at any point.
procedures:
if you agree to participate, you will be asked to complete a survey. the survey will include questions about your experiences with censorship, content moderation, and platform restrictions. the estimated time commitment is approximately 15-20 minutes. no personal identifying information will be required to complete the survey; however, some questions may request potentially identifiable information in the form of visual content to study discrimination within moderation practices. this content is not shared unless otherwise agreed upon, and in such a case, the photos will be edited to promote privacy.
the only potentially identifiable information attached to your data is the IP address of the device you are using, but it will not be collected nor included in the analysis. a free virtual private network (VPN) such as protonvpn.com or planetvpn.com can be used to anonymize your IP address if desired.
risks:
providing personal information is not required for participating in this survey, but you may choose to share some details and content that could be identifiable. when collecting information online, there is a negligible risk of raw data being exposed. every reasonable effort will be made to maintain confidentiality, including encrypting submissions, coding answers, and editing photos. any individual quotes used will be generalized to ensure confidentiality. data will be stored securely and will only be accessible to the research team for analysis purposes. findings may be published in academic papers, presentations, or business reports, but individual responses will remain anonymous. after the project concludes, data will be stored for a period of 5 years and then securely deleted.
discussing censorship and the dangers of sex work may evoke emotional responses. you are encouraged to skip any questions that feel uncomfortable or difficult.
benefits:
your input will contribute to larger efforts to advocate for fairer and more transparent policies on mainstream and niche platforms. by participating, youâll help inform policymakers, platform developers, and the broader public about the real impacts of censorship on adult content creators.
your feedback will help businesses like asstrocode better serve creators, especially by tailoring digital tools, resources, and platforms that prioritize freedom of expression and minimize the impacts of censorship.
the findings from this research will also be shared in educational contexts, helping future policymakers and tech developers understand the nuances of censorship and its disproportionate effects on marginalized creators.
contact for questions:
if you have any questions about this research project, please contact Abigail Peterson at [email protected].
by participating in this survey, you acknowledge that you have read and understood the above information and consent to participate under these terms.
demographics
- please select your age:
- [ ] 18-21
- [ ] 22-25
- [ ] 26-29
- [ ] 30-33
- [ ] 34-37
- [ ] 38-41
- [ ] 42-45
- [ ] 46-49
- [ ] 50+
- please select all the races/ethnicities you most identify with:
- [ ] White
- [ ] Black / African-American
- [ ] Hispanic / Latinx
- [ ] Middle Eastern
- [ ] Asian / South Pacific
- [ ] Indigenous / Native
- [ ] Mixed Race
- [ ] Other
- please select the gender you most identify with:
- [ ] Female
- [ ] Male
- [ ] AFAB Trans
- [ ] AMAB Trans
- [ ] Non-Binary/Gender Non-Conforming/Gender Fluid
- [ ] Prefer not to say
- [ ] Other ____________________
- please select the body type you most identify with (weights suggested are for approximating references only):
- [ ] Slim / Petite (<90-110lb)
- [ ] Fit / Lean (~110-130lb)
- [ ] Average / Height & Weight Proportionate (~130-150lb)
- [ ] Curvy / Thicc / Small Fat (~150-180lb)
- [ ] Medium Fat / BBW (~180-250lb)
- [ ] Large Fat / SSBBW (~250+lb)
- [ ] Prefer not to say
- Platforms
- select all the services you currently or previously performed:
- [ ] Camming / Streaming
- [ ] Currently
- [ ] Previously ( ____ ago)
- [ ] Happy Endings Massage
- [ ] Currently
- [ ] Previously
- [ ] Stripping / Dancing
- [ ] Currently
- [ ] Previously
- [ ] Full Service – In-Call
- [ ] Currently
- [ ] Previously
- [ ] Full Service – Out-Call
- [ ] Currently
- [ ] Previously
- [ ] Paywall / Subscription
- [ ] Currently
- [ ] Previously
- [ ] Pay Per View Video / Photos
- [ ] Currently
- [ ] Previously
- [ ] Phone Sex
- [ ] Currently
- [ ] Previously
- [ ] Sexting / Chat
- [ ] Currently
- [ ] Previously
- select all the services you currently or recently (within the last 30 days) promoted online:
- [ ] Camming / Streaming
- [ ] Happy Endings Massage
- [ ] Stripping / Dancing
- [ ] Full Service – In-Call
- [ ] Full Service – Out-Call
- [ ] Paywall / Subscription
- [ ] Pay Per View Video / Photos
- [ ] Phone Sex
- [ ] Sexting / Chat
- approximately how many years have you been utilizing the internet to perform sex work?
- [ ] 0-1 Years
- [ ] 2-3 Years
- [ ] 4-5 Years
- [ ] 6-7 Years
- [ ] 8-9 Years
- [ ] 10+ Years
- select all the platforms you have ever used for your sex work, and include the average times you post per day, week, or month, frequency of flags, and account deactivations:
- ℹ️ Instagram
- policies
- clear vs unclear
- fair vs unfair
- posts (num) per (time)
- content flagged or removed? _______
- (rarely, occasionally, frequently, always)
- account suspended or deactivated? ______
- how many x? _______
- how many followers? _______
- ℹ️ Twitter/X
- ℹ️ YouTube
- ℹ️ ManyVids
- ℹ️ OnlyFans
- ℹ️ Fansly
- ℹ️ Fetlife
- ℹ️ Snapchat
- ℹ️ Tumblr
- ℹ️ TikTok
- ℹ️ Discord
- ℹ️ Twitch
- ℹ️ Reddit
- ℹ️ Telegram
- ℹ️ Kik
- ℹ️ Patreon
- Other _____________________
- how often do you feel that mainstream platform moderation has forced you to change your pricing or content structure?
- how quickly do platforms usually respond to your inquiries about flagged content?
- have you been given guidance on how to avoid having your content flagged in the future?
- have you moved to other platforms due to excessive moderation on your main platform?
- how often do you create “safer” versions of content specifically for mainstream platforms?
- how often do you use the platformâs help or support feature when your content is flagged?
- do you feel platforms offer enough clarity on which words or phrases are likely to get flagged?
- Income
- do you use newsletters or email lists to reach followers when platforms flag or limit your content?
- do you maintain a personal website to distribute content outside of social media platforms?
- have you diversified your revenue streams (e.g., selling merchandise, offering coaching) to minimize reliance on content that risks moderation?
- do you manage your own content, or do you have help from a partner or professional(s)?
- [ ] Self / Solo
- [ ] Partnered
- [ ] Pro / Team
- what approximate percentage of your income is reliant on online platforms (earned directly from virtual services)?
- [ ] 0-20%
- [ ] 20-40%
- [ ] 40-60%
- [ ] 60-80%
- [ ] 80-100%
- have you had to find alternative sources of income due to content moderation?
- how often does platform flagging directly result in a reduction in your income?
- Virtual Sex Work
- please briefly describe your relationship with your work (the good and the bad).
- please provide as much detail as possible about online resources you utilize for your sex work, if any:
- Legal:
- Safety:
- Social / Interpersonal / Community Support:
- Physical Health:
- Mental Health:
- Advocacy & Education:
- Finance / Economic:
- Other:
- do you think advocacy or collective action can improve platform policies for adult creators?
- Safety
- How do you currently manage your safety and privacy online (tools, services, protocols)?
- What educational resources about legal and safety issues would you find helpful?
- How do you propose online platforms better protect their users against sex-trafficking risks?
- do you feel that sesta/fosta has made platforms more dangerous for adult creators?
- do you feel sesta/fosta has been successful in its intended purpose of stopping trafficking?
- Censorship
- have you created private groups or closed communities (e.g., discord, patreon) to avoid public content moderation?
- Do you:
- [ ] own backup accounts
- [ ] replace words with emojis
- [ ] substitute alternative characters/letters/symbols
- [ ] use implied or coded language
- [ ] edit/censor photos with stickers/blur/etc
- [ ] avoid/minimize use of hashtags
- [ ] link to sub-accounts (intermediary channel between the censored platform and premium platform)
- In as much detail as possible, please expand on the aforementioned approaches or identify new strategies you utilize to evade content violations.
- Please describe your experiences with online platform moderation and how it influenced your mental, physical, and/or economic well-being.
- What percentage of content that has been flagged or removed for guideline violations do you feel has been unjustified?
- have you appealed any content removals or account suspensions?
- if so, how often are your appeals successful?
- (always, sometimes, rarely, never)
- please detail any methods used to restore accounts (mass audience reporting, customer service, etc)
- what types of content do you believe are more likely to get flagged?:
- [ ] Close-Ups
- [ ] Full Body
- [ ] Body Positivity
- [ ] Educational / Advocacy
- [ ] Promotional / Link Share
- [ ] Text
- [ ] Video
- [ ] Photo
- [ ] Kink/Fetish
- [ ] Art
- [ ] Humor
- what changes would you suggest to improve transparency and fairness in content moderation?
- how often do you seek legal advice or support due to moderation issues?
- do you feel restricted in how you can express yourself creatively due to moderation?
- do you feel platform algorithms are becoming better at detecting evasive measures (e.g., coded language or imagery)?
- Biases
- do you feel that you are unfairly affected by content moderation compared to other creators?
- do you believe moderation systems (AI and human review) are biased against POC?
- do you believe moderation systems (AI and human review) are biased against fat creators?
- are there any other populations you believe are disproportionately moderated and censored?
- how often do you hear of other creators being shadowbanned or censored unfairly?
Content Collection
- optional – please provide examples of content you know to have been flagged for content violations that you believe to be unjustified between 1/1/2024 and 12/31/2024. Please do not alter the images – submit them exactly as posted, with captions and hashtags, when possible. All images are confidential, for internal analysis only, unless otherwise indicated. Publishing permissions apply to unedited photos as-is; however, some images may be minimally edited, at the discretion of the research team, only when necessary for privacy or publication purposes.
- for each image, select platform(s) that flagged
- for each image, please provide caption and hashtags
- for each image, please indicate your permissions for public publishing:
- can share as-is
- do not share
- [ ] DO NOT SHARE any of these images in any form
- [ ] SHARE ANY images as-is
Experiment
- optional – this research project will include a longitudinal examination of violation biases by conducting an experiment utilizing approximately 5-10 adult creators of varying sizes and colors to create comparable content to be posted to the most popular platforms to monitor potential differences in moderation. if you are interested in participating, please provide your email address:
5-10 content creators to make controlled posts
- example:
- "sexy silhouette 1 & 2" – pose in front of a light source to create a shadow outline of your body. Create one photo with the outline of the nipple visible in the shadow and one without the nipple visible.
- with caption (medium-risk)
- with medium-risk hashtags
- with low-risk hashtags
- without hashtags
- with caption (low-risk)
- with medium-risk hashtags
- with low-risk hashtags
- without hashtags
- with no caption
- with medium-risk hashtags
- with low-risk hashtags
- with no hashtags
- "sheer" – cover your breasts with a sheer fabric that provides at least an outline view of the areolas. Photo should be taken from approximately arm's length.
- "bubble bathing suit" – stand with full-body visible, with strategically placed bubbles covering your groin area and breasts.
- "hand bra 1 & 2" – with only the top half of your torso visible, cup your hands over your breasts, covering all nipple/areola area, careful not to change the shape of your breasts. then take a photo with a moderate amount of pressure changing the shape of your breasts but still fully covered.
- "mirror mirror" – stand with your back to a mirror, facing forward so your face is not visible, and photograph your nude back with approximately one inch of the buttocks visible above low-fitting bottoms.
- "leg day 1 & 2" – lay on your stomach while nude with your feet in the air, legs blocking any view of your buttocks. take one photo with partial view of buttocks.
- "stretching" – sit on a piece of furniture and frame the photo so your knees are visible, but not so much your ankles/feet are visible. Wear bottoms that cover your knees, and a crop-top that shows under-boob with arms raised.
- "lacy lingerie 1 & 2" – take a photo visible from your face to your knees, wearing only a full-coverage bra & panty set that covers the areola/ nipple/ underboob and fully covers the groin area. take one photo front-facing, and another photo with your back turned, head looking back.
- "thigh-highs 1 & 2" – with only the bottom of your torso visible from the belly-button to your feet, with a profile view suggesting thong underwear, with nylon stockings that end at the thigh. take one photo with heels and one without.
- "body posi" – sit cross-legged while in a full-coverage bra and high-rise panty (to belly-button). lean forward to emphasize rolls with at least one hand grasping stomach flesh
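The caption and hashtag variations listed above form a small full factorial design. A sketch enumerating the conditions (the "none" labels here stand in for the protocol's "with no caption"/"without hashtags" cells):

```python
from itertools import product

# Risk levels drawn from the protocol above; "none" marks the
# no-caption / no-hashtag cells.
CAPTION_LEVELS = ["medium-risk", "low-risk", "none"]
HASHTAG_LEVELS = ["medium-risk", "low-risk", "none"]

def experimental_conditions():
    """Enumerate every caption x hashtag pairing, so each pose is
    posted once under each of the nine conditions."""
    return [{"caption": c, "hashtags": h}
            for c, h in product(CAPTION_LEVELS, HASHTAG_LEVELS)]
```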
Appendices
Figure 1 – Case Study

Figure 2 – Codebook & Variables
Coding
- Instagram Post Status
- Posted w/o issue – Unmoderated
- Shadow-Banned
- Flagged / Informed
- Post Removed
- Account Temporarily Suspended
- Account Permanently Deactivated
- Demographics
- Body Size (select the option you most identify with)
- Slim/Petite (<90-110lb)
- Fit/Lean (~110-130lb)
- Average / H & W Proportionate (~130-150lb)
- Curvy / Thicc / Small Fat (~150-200lb)
- Medium Fat / BBW (~200-300lb)
- Large Fat / SSBBW (~300+lb)
- Prefer not to answer
- Race & Ethnicity (select all that apply)
- Asian / South Pacific
- Black / African American
- Hispanic/Latinx
- Indigenous / Native American
- Middle Eastern
- White
- Mixed Race
- Other
- Prefer not to answer
- Clothing
- Nude / No Clothing Visible
- Partially Nude / Some Clothing
- Revealing Lingerie
- Full Coverage Underwear
- Swimwear
- Provocative Outfit
- Socially Acceptable Outfit
- Remedies / Resolutions
- Ability to Appeal
- yes
- no
- Appealed
- yes
- no
- Strategy
- Customer Service Rep
- Mass Audience Reporting
- Backup Account
- Restored
- yes
- no
Quant Data
- IG Account Metrics
- Account Age
- Followers
- History of violations
- Status
- Acts & Appears Normal
- Suspected Shadow-Ban
- Flagged & Notified
- Deactivated
- IG Post Metrics
- Views
- followers
- non-followers
- Likes
- Comments
- Shares
- Averages / Historical
- Views
- Likes
- Comments
- Metadata
- Alt-Text
- present
- absent
- Visual
- PG
- PG-13
- R
- XXX
- Caption
- PG
- PG-13
- R
- XXX
- Hashtags
- Amount Used
- Rating
- low-risk
- medium-risk
- high-risk
- Date & Time Posted
- Time Removed
- Total Time Live
- Format
- Single Image
- Carousel
- Video / Reel
- Story
- Composition
- Full Body
- Close-Up
- Solo
- Partnered
- M/F
- F/F
- M/M
- Group
- Visible Accessories
- Novelty / Non-Explicit Toys
- BDSM Gear / Bondage
- Explicit / Adult Toys
- Platforms & Policies
- Instagram
- Cupping Breasts vs Changing Shape
- Pole Dancing vs Strip Club
- Nude Art in Gallery vs Non-Gallery
- Breastfeeding
- Reconstruction / Womenâs Health
- TikTok
- Snapchat
- OnlyFans
- Strategies
- Minimal/No Hashtags
- Emojis
- Alternative Characters
- Coded/Implied Language
- Sub-Accounts
- Edits/Stickers/Blur
- Backup Accounts
- Chosen Appearance
- Tattoos
- Piercings
- Visible Body Hair
- Gender Non-Conforming
- Content Type
- Humor / Entertainment
- Advocacy / Educational
- Body – Positive / Empowerment
- Inspiration / Community Engagement
- Niche / Fetish
- Ad / Sponsored
- Artistic / Creative
- Promotion of Legal Activities
- Promotion of Illegal Activities
- Platforms
- [ ] Tumblr
- [ ] TikTok
- [ ] OnlyFans

Figure 3 – API, Python Script
- time posted
- secret post ID
- secret user ID
- caption
- views
- followers
- non-followers
- time unavailable
- post url
- hashtags
- likes
- saves
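The fields listed in Figure 3 could be collected into a single record type per observation; a sketch follows, with field names taken from the list above and types assumed for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PostRecord:
    """One logged observation of a monitored post (fields per Figure 3).

    Types are assumptions; "secret" IDs refer to internal study
    pseudonyms, not platform identifiers.
    """
    post_id: str                      # secret post ID
    user_id: str                      # secret user ID
    post_url: str
    time_posted: str                  # ISO-8601 timestamp
    caption: str = ""
    hashtags: List[str] = field(default_factory=list)
    views: int = 0
    follower_views: int = 0           # views from followers
    non_follower_views: int = 0       # views from non-followers
    likes: int = 0
    saves: int = 0
    time_unavailable: Optional[str] = None  # set once the post disappears
```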
Figure 4 – Safeguards
4.1 – Consent Form
Exempt Consent_Identifiable Data.pdf
4.2 – Data Management
4.3 – Recruitment Script
Sample Verbal Recruitment Script.pdf
References
Are, C. (2020). How Instagramâs algorithm is censoring women and vulnerable users but helping online abusers. In Feminist Media Studies (Vol. 20, Issue 5, pp. 741â744). Routledge. https://doi.org/10.1080/14680777.2020.1783805
Are, C. (2022). The Shadowban Cycle: an autoethnography of pole dancing, nudity and censorship on Instagram. Feminist Media Studies, 22(8), 2002â2019. https://doi.org/10.1080/14680777.2021.1928259
Are, C. (2024). Flagging as a silencing tool: Exploring the relationship between de-platforming of sex and online abuse on Instagram and TikTok. New Media and Society. https://doi.org/10.1177/14614448241228544
Armijo, E. (2023). Section 230 as civil rights statute. University of Cincinnati Law Review, 92(2), 301-334.
Arseniev-Koehler, A., & Foster, J. G. (2022). Machine Learning as a Model for Cultural Learning: Teaching an Algorithm What it Means to be Fat. Sociological Methods and Research, 51(4), 1484â1539. https://doi.org/10.1177/00491241221122603
Barker PhD, T. (2011). Police Ethics: Crisis in Law Enforcement (3rd ed.).
Bechert, T. (2024). Investigator Manual. Colorado State University
Belcher, M. (2021, August 24). OnlyFans Content Creators Are the Latest Victims of Financial Censorship. Eff.org. Retrieved September 28, 2024, from https://www.eff.org/deeplinks/2021/08/onlyfans-content-creators-are-latest-victims-financial-censorship
Blunt, D., & Wolf, A. (2020a). Erased: The impact of fosta-sesta and the removal of backpage on sex workers. In Anti-Trafficking Review (Vol. 14, pp. 117â121). Global Alliance Against Traffic in Women. https://doi.org/10.14197/atr.201220148
Blunt, D., & Wolf, A. (2020b). Erased Updated. https://hackinghustling.org/wp-content/uploads/2020/02/Erased_Updated.pdf
Bourdeloie, H., & Larochelle, D. L. (2024). Studying Anti-Fatphobia on Instagram: When Data Betray a Feminist EthicâŚ. ESSACHESS – Journal for Communication Studies, 17(1), 17â39. https://doi.org/10.21409/6V0M-MX27
Brown, D., & Younes, R. (2023). Metaâs broken promisesâŻ: systemic censorship of Palestine content on Instagram and Facebook. Human Rights Watch. https://www.hrw.org/sites/default/files/media_2023/12/ip_meta1223 web.pdf
Buchanan PhD., Elizabeth A. Social Media and Research Recruiting. CITI Program, 07/2020. https://www.citiprogram.org/members/index.cfm?pageID=125#view. Webinar.
Caliandro, A., & Graham, J. (2020). Studying Instagram Beyond Selfies. Social Media and Society, 6(2). https://doi.org/10.1177/2056305120924779
Cooper, C. (2010). Fat Studies: Mapping the Field. Sociology Compass, 4(12), 1020â1034. https://doi.org/10.1111/j.1751-9020.2010.00336.x
Cotter, K. (2023). âShadowbanning is not a thingâ: black box gaslighting and the power to independently know and credibly critique algorithms. Information Communication and Society, 26(6), 1226â1243. https://doi.org/10.1080/1369118X.2021.1994624
Crawford K and Gillespie T (2016) What is a flag for? Social media reporting tools and the vocabulary of complaint. New Media & Society 18(3): 410â428.
Crawford, K., & Paglen, T. (2021). Excavating AI: The politics of images in machine learning training sets. AI & Society, 36(4), 1043-1055.
Dans, Paul., Groves, Steven., & Roberts, K. D. . (2023). Mandate for leadershipâŻ: the conservative promise 2025. The Heritage Foundation.
Delmonaco, D., Mayworm, S., Thach, H., Guberman, J., Augusta, A., & Haimson, O. L. (2024). âWhat are you doing, TikTok?â: How Marginalized Social Media Users Perceive, Theorize, and âProveâ Shadowbanning. Proceedings of the ACM on Human-Computer Interaction, 8(CSCW1). https://doi.org/10.1145/3637431
Dewey, S., & Zheng, T. (2013). SPRINGER BRIEFS IN ANTHROPOLOGY ANTHROPOLOGY AND ETHICS Ethical Research with Sex Workers Anthropological Approaches. http://www.springer.com/series/11497
Etzioni, A. (2019). Should We Privatize Censorship? Issues in Science and Technology, 36(1), 19â22. https://doi.org/10.2307/26949072
Facebook (n.d.). How does Facebook use artificial intelligence to moderate content? Facebook.com. Retrieved September 14, 2024, from https://www.facebook.com/help/1584908458516247
Ferrer, X., Nuenen, T., Such, M., CotĂŠ, M., & Criado, N. (2021). Bias and discrimination in AI: A cross-disciplinary perspective. IEEE Technology and Society Magazine, 40(2), 72-80.
Fiore-Silfvast, B. (2012) User-generated warfare: a case of converging wartime information networks and coproductive regulation on YouTube. International Journal of Communication 6: 1â24.
Gallay, A. (2020). Sex sells, but not online: The consequences of FOSTA/SESTA. Berkeley Journal of Criminal Law.
Gerring, J., Knutsen, C. H., Lindberg, S. I., Teorell, J., Altman, D., Bernhard, M., Cornell, A., Fish, M. S., Gastaldi, L., Gjerløw, H., Glynn, A., Hicken, A., Lührmann, A., Maerz, S. F., Marquardt, K. L., McMann, K., Mechkova, V., Paxton, P., Pemstein, D., … Walsh, E. (n.d.). Varieties of Democracy (V-Dem) codebook. V-Dem Institute. https://www.v-dem.net/en/about/funders/
Gillespie, T. (2019). Custodians of the internet. Yale University Press. https://doi.org/10.12987/9780300235029
Gordon, F. (2019). [Review of the book Automating inequality: How high-tech tools profile, police, and punish the poor, by V. Eubanks, Picador/St. Martin's Press, 2018]. Law, Technology and Humans, 1, 162–164. https://doi.org/10.5204/lthj.v1i0.1386
Henry, M. V., & Farvid, P. (2017). “Always hot, always live”: Computer-mediated sex work in the era of “camming.” Women's Studies Journal, 31, 113–128. www.wsanz.org.nz/
Horstmann, K. T., Arslan, R. C., & Greiff, S. (2020). Generating codebooks to ensure the independent use of research data: Some guidelines [Editorial]. European Journal of Psychological Assessment, 36(5), 721–729. https://doi.org/10.1027/1015-5759/a000620
House of Representatives, Congress. (2011, December 30). 47 U.S.C. 230: Protection for private blocking and screening of offensive material. U.S. Government Publishing Office. https://www.govinfo.gov/app/details/USCODE-2011-title47/USCODE-2011-title47-chap5-subchapII-partI-sec230/summary
House of Representatives – 115th Congress (2018). H.R.1865: An act to amend the Communications Act of 1934 to clarify that section 230 of such Act does not prohibit the enforcement against providers and users of interactive computer services of Federal and State criminal and civil law relating to sexual exploitation of children or sex trafficking, and for other purposes. (2018, April 11). https://www.congress.gov/bill/115th-congress/house-bill/1865/text
Huddleston, J. (2021). The potential impact of proposed changes to Section 230 on speech and innovation. George Mason Law Review, 28(4). https://heinonline.org/HOL/License
Irvine, J. M. (2012). Can't ask, can't tell. Contexts, 11(2), 28–33. https://doi.org/10.1177/1536504212446457
Johnson, T. N. (2022). A content analysis of a Black plus-sized woman and social media influencer making policy change: Through the lens of Instagram.
Kemmis, S., McTaggart, R., & Nixon, R. (2014). The action research planner: Doing critical participatory action research. Springer Singapore. https://doi.org/10.1007/978-981-4560-67-2
Kozinets, R., & Gambetti, R. (Eds.). (2021). Netnography unlimited. Routledge.
Kraut, R., Olson, J., Banaji, M., Bruckman, A., Cohen, J., & Couper, M. (2004). Psychological research online: Report of Board of Scientific Affairs' Advisory Group on the Conduct of Research on the Internet. American Psychologist, 59(2), 105–117. https://doi.org/10.1037/0003-066X.59.2.105
Langvardt, K. (2018). Regulating online content moderation. https://perma.cc/Z48T-H3K3
Leybold, M., & Nadegger, M. (2023). Overcoming communicative separation for stigma reconstruction: How pole dancers fight content moderation on Instagram. Organization. https://doi.org/10.1177/13505084221145635
Lorenz, L. S., & Kolb, B. (2009). Involving the public through participatory visual research methods. Health Expectations, 12(3), 262–274. https://doi.org/10.1111/j.1369-7625.2009.00560.x
McDowell, Z. J., & Tiidenberg, K. (2023). The (not so) secret governors of the internet: Morality policing and platform politics. Convergence, 29(6), 1609–1623. https://doi.org/10.1177/13548565231193694
Musto, J., Fehrenbacher, A. E., & Hoefinger, H. (2021). Anti-trafficking in the time of FOSTA/SESTA: Networked moral gentrification and sexual humanitarian creep. Social Sciences, 10(2), 1–18.
O’Brien, D., & Reitman, R. (2020, December 14). Visa and Mastercard are Trying to Dictate What You Can Watch on Pornhub. Eff.org. Retrieved September 28, 2024, from https://www.eff.org/deeplinks/2020/12/visa-and-mastercard-are-trying-dictate-what-you-can-watch-pornhub
Russomanno, J., Patterson, J. G., & Tree, J. M. J. (2019). Social media recruitment of marginalized, hard-to-reach populations: Development of recruitment and monitoring guidelines. JMIR Public Health and Surveillance, 5(12). https://doi.org/10.2196/14886
Schultz, V. (2006). Sex and work. Yale Journal of Law and Feminism, 18(1), 223–234. https://heinonline.org/HOL/P?h=hein.journals/yjfem18&i=226
Schupp, H. T., & Renner, B. (2011). The implicit nature of the anti-fat bias. Frontiers in Human Neuroscience, 5, Article 23. https://doi.org/10.3389/fnhum.2011.00023
Schwartz, R., Vassilev, A., Greene, K., Perine, L., Burt, A., & Hall, P. (2022). Towards a standard for identifying and managing bias in artificial intelligence (NIST Special Publication 1270). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.SP.1270
Sells, J. (2024). Hands off my post: Rethinking Section 230 and private online platform liability. https://carnegieendowment.org/files/Carnegie_Countering_Disinformation_Effectively.pdf
Silberg, J., & Manyika, J. (2019). Notes from the AI frontier: Tackling bias in AI (and in humans). McKinsey Global Institute. https://www.mckinsey.com/~/media/mckinsey/featured insights/artificial intelligence/tackling bias in artificial intelligence and in humans/mgi-tackling-bias-in-ai-june-2019.pdf
Suzor, N. (2019). Lawless: The secret rules that govern our digital lives. Cambridge University Press.
Swords, J., Laing, M., & Cook, I. R. (2023). Platforms, sex work and their interconnectedness. Sexualities, 26(3), 277–297. https://doi.org/10.1177/13634607211023013
Tung, L. (2020, July 10). FOSTA-SESTA was supposed to thwart sex trafficking. Instead, it's sparked a movement. Whyy.org. Retrieved September 13, 2024, from https://whyy.org/segments/fosta-sesta-was-supposed-to-thwart-sex-trafficking-instead-its-sparked-a-movement
Uzcátegui-Liggett, N., & Apodaca, T. (2024, February 25). Demoted, deleted, and denied: There's more than just shadowbanning on Instagram. The Markup. https://themarkup.org/automated-censorship/2024/02/25/demoted-deleted-and-denied-
West, S. M. (2017). Raging against the machine: Network gatekeeping and collective action on social media platforms. Media and Communication, 5(3), 28–36. https://doi.org/10.17645/mac.v5i3.989
Yachot, N. (2017). The “Magna Carta” of cyberspace turns 20: An interview with the ACLU lawyer who helped save the internet. ACLU. https://www.aclu.org/news/free-speech/magna-carta-cyberspace-turns-20-interview-aclu-lawyer-who-helped
Yang, C. (2021). Research in the Instagram context: Approaches and methods. The Journal of Social Sciences Research, 71, 15–21. https://doi.org/10.32861/jssr.71.15.21

