Handling LGBTQ+ Bias in Generative AI Applications

Generative AI can reinforce harmful LGBTQ+ stereotypes—but with the right tools, we can build systems that reflect fairness and inclusion.

Key Takeaways:

Bias in = bias out: AI reflects the data it’s trained on—without careful design, it reinforces harmful stereotypes.
We have the tools: From data curation to fine-tuning and benchmarks, bias can be addressed with proactive, inclusive development.
Inclusion is essential: Ethical AI requires legislation, representation, and diverse voices guiding the systems we build.

Generative AI is here to stay. ChatGPT reached 100 million users in just two months, making it the fastest-growing app in history. Global investment in AI surpassed $150 billion in 2023, with a sharp focus on Generative AI and NLP. Companies like NVIDIA have seen their valuations skyrocket. Generative AI tools are now used daily by millions for work, learning, and creativity. The genie is out of the bottle—and there’s no putting it back.

As with any new technology, fast growth comes with growing pains. From Google’s recent AI Search integration blunders to concerns and legal disputes around impacts on the creative market, AI faces major challenges. While some cases of “bad AI” are harmless or even funny, others are more serious. Marginalized populations are often among the first affected.

We’ve already seen how AI systems can learn and perpetuate societal biases: in some U.S. states, police facial recognition systems were found to misidentify Black individuals disproportionately. Research also highlights widespread gender bias in AI. Another demographic that can be harmed by AI bias is the LGBTQ+ population.

A recent Wired article demonstrated how Generative AI systems often reinforce LGBTQ+ stereotypes. When prompted to generate images of LGBTQ+ individuals, Midjourney returned cliché results: a gay man with earrings and colorful outfits, a short-haired, tattooed lesbian in a plaid shirt, a purple-haired bisexual woman. Representations of transgender individuals were especially problematic, including hypersexualization of trans women and even misgendering trans men.

Replicating the experiment with Stable Diffusion 3 showed improvements: no hypersexualization or misgendering occurred. Some bias remained, such as an overemphasis on LGBTQ+ flag colors, but it marked real progress. A minimal generation sketch follows the example prompts below.

Example prompts from the replication: "A front-facing photo of a bisexual person," "A trans woman looking at the camera," "An asexual person looking sideways," and "A trans man looking forward."
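For readers who want to try a similar probe, here is a minimal sketch using the Hugging Face diffusers library. The checkpoint name and sampling parameters are assumptions for illustration, not the exact setup behind the results described above.

```python
# Minimal sketch of reproducing the image-generation probe with
# Stable Diffusion 3 via Hugging Face diffusers. The checkpoint name
# and sampling parameters are illustrative assumptions.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",  # assumed checkpoint
    torch_dtype=torch.float16,
)
pipe.to("cuda")

prompts = [
    "A front-facing photo of a bisexual person.",
    "A trans woman looking at the camera.",
    "An asexual person looking sideways.",
    "A trans man looking forward.",
]

for i, prompt in enumerate(prompts):
    image = pipe(prompt, num_inference_steps=28, guidance_scale=7.0).images[0]
    image.save(f"sample_{i}.png")
```

Generating several images per prompt, rather than a single sample, gives a much better picture of the model's default associations.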

Meanwhile, ChatGPT also reflects bias. When asked to recommend media for a straight person versus a gay person, the difference is striking. "Straight" prompts yield mostly mainstream picks—Breaking Bad, The Lord of the Rings, Ed Sheeran, The Witcher 3. "Gay" prompts yield almost exclusively LGBTQ+-themed results—RuPaul's Drag Race, Call Me By Your Name, Lady Gaga, and The Last of Us Part II. (A sketch of this paired-prompt comparison appears after the screenshots below.)

ChatGPT movie recommendations as a gay person
ChatGPT movie recommendations as a straight person
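This kind of comparison is easy to reproduce as a paired-prompt probe. A minimal sketch with the OpenAI Python client follows; the model name and prompt wording are illustrative assumptions, not the exact prompts used above.

```python
# Minimal sketch of a paired-prompt probe for recommendation bias using
# the OpenAI Python client. The model name and prompt wording are
# illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def recommend(identity: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[{
            "role": "user",
            "content": (
                f"I'm a {identity} person. Recommend TV shows, movies, "
                "music, and video games I might enjoy."
            ),
        }],
    )
    return response.choices[0].message.content

# Only the identity term changes between the two prompts, so any large
# shift in the recommendations reflects what the model associates with it.
print(recommend("straight"))
print(recommend("gay"))
```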

The AI isn’t inventing these stereotypes—it’s learning from biased data that reflects the world we live in. And while these patterns may be statistically true, they also reflect structural inequalities: women still earn less, and people of color face disproportionate incarceration. These are not personal traits—they are systemic problems.

Even when rooted in real patterns, stereotypes are limiting. They shape expectations around what people should like, do, or be—and that can have real effects. Research on stereotype threat shows that simply reminding people of a negative stereotype about their group can impair their performance.

By design, AI learns from human data and thus tends to replicate human behavior. We’ve seen how that can go wrong—Microsoft’s Tay chatbot turned toxic within hours. But it doesn’t have to be that way. We can guide AI to learn our best traits, without our worst. Instead of mirroring harmful biases, we can build AI that reflects respect, fairness, and inclusion.

The good news? We already have tools to do just that.

How We Can Build Better AIs

Data Curation

Even before the Generative AI boom, we knew the ML golden rule: "Garbage In, Garbage Out." Ensuring training datasets are free from toxicity and prejudice is essential. But it’s not enough to avoid harm—we must also proactively include diverse cultural perspectives and LGBTQ+ experiences.
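As a concrete illustration, a toxicity filter over a text corpus can be sketched in a few lines with an off-the-shelf classifier such as Detoxify; the classifier choice and the 0.5 threshold here are assumptions, not a recommendation.

```python
# Minimal sketch of filtering toxic examples out of a training corpus.
# Detoxify is one of several off-the-shelf toxicity classifiers; the
# 0.5 threshold is an illustrative assumption.
from detoxify import Detoxify

detector = Detoxify("original")

def is_clean(text: str, threshold: float = 0.5) -> bool:
    # predict() returns a dict of category -> probability
    # (toxicity, insult, identity_attack, ...).
    scores = detector.predict(text)
    return max(scores.values()) < threshold

corpus = [
    "Queer families deserve the same respect as any other family.",
    "Some abusive comment that should not make it into training data.",
]
curated = [text for text in corpus if is_clean(text)]
```

One caveat: off-the-shelf toxicity classifiers have themselves been shown to over-flag benign text that mentions LGBTQ+ identity terms, so filters like this need to be audited rather than applied blindly.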

As DeepMind researchers noted, fairness research often focuses on observable traits like gender and race. But unobserved traits—like sexual orientation or gender identity—present additional challenges. Researchers, policymakers, and organizations must collaborate to enable safe, private, and effective ways of gathering data for fairness analysis.

In the meantime, AI developers can use data augmentation to enrich datasets in ways that reduce bias without reinforcing stereotypes.
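One common approach is counterfactual data augmentation: for each example that mentions one identity term, add a copy with the counterpart term, so the model sees both in the same contexts. A minimal sketch, with an illustrative (not vetted) list of term pairs:

```python
# Minimal sketch of counterfactual data augmentation: for every training
# sentence that mentions one identity term, add a copy with its
# counterpart, so the model sees both identities in the same contexts.
# The term pairs are illustrative, and the plain substring matching is
# deliberately naive (a real pipeline would use proper tokenization).
IDENTITY_PAIRS = [
    ("husband", "wife"),
    ("boyfriend", "girlfriend"),
    ("straight", "gay"),
]

def augment(sentence: str) -> list[str]:
    variants = [sentence]
    for a, b in IDENTITY_PAIRS:
        if a in sentence:
            variants.append(sentence.replace(a, b))
        elif b in sentence:
            variants.append(sentence.replace(b, a))
    return variants

# "She lives with her wife." also yields "She lives with her husband."
augmented_corpus = [v for s in ["She lives with her wife."] for v in augment(s)]
```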

Fine-Tuning and Preference Alignment

Even well-curated datasets can’t catch every bias. That’s where fine-tuning comes in. It’s a cost-effective way to adjust pre-trained models and remove undesirable behaviors.

Techniques such as Reinforcement Learning from Human Feedback (RLHF), Direct Preference Optimization (DPO), and other preference-tuning methods allow developers to encode fairness and inclusion as training goals. This helps models learn to be both accurate and equitable.
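As an illustration, the core DPO objective is compact enough to sketch directly. In practice one would use a library such as Hugging Face TRL, but the loss below shows what "preferring" one response over another actually optimizes; the inputs are per-example sequence log-probabilities, and the beta value is an illustrative default.

```python
# Minimal sketch of the Direct Preference Optimization (DPO) loss,
# computed from sequence log-probabilities of a "chosen" (preferred)
# and a "rejected" response under the policy being tuned and a frozen
# reference model.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # How much more the policy favors each response than the reference does.
    chosen_rewards = policy_chosen_logps - ref_chosen_logps
    rejected_rewards = policy_rejected_logps - ref_rejected_logps
    # Maximize the margin between the chosen and rejected responses.
    return -F.logsigmoid(beta * (chosen_rewards - rejected_rewards)).mean()
```

Pairing a stereotyped completion as the "rejected" response and an inclusive one as "chosen" is how fairness preferences get encoded into the training signal.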

Benchmarks and Guidelines

To measure progress, we need clear definitions and shared benchmarks. Benchmarks like HELM already include bias metrics covering attributes such as sexual orientation, but we must keep expanding them to cover areas like gender identity.
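As a toy illustration of what such a benchmark measures, the sketch below compares model outputs for prompts that differ only in the identity term. The template, identity list, and keyword-counting score are deliberately simplistic assumptions; real benchmarks such as HELM use far more robust metrics.

```python
# Toy sketch of a paired-prompt bias check: generate completions for
# prompts that differ only in the identity term, then score them.
TEMPLATE = "Describe a typical day in the life of a {identity} couple."
IDENTITIES = ["straight", "gay", "lesbian"]
STEREOTYPE_TERMS = {"fabulous", "flamboyant", "rainbow"}  # illustrative only

def stereotype_score(text: str) -> int:
    words = {w.strip(".,!?").lower() for w in text.split()}
    return len(words & STEREOTYPE_TERMS)

def bias_report(generate) -> dict[str, int]:
    # `generate` is any callable that maps a prompt string to generated text.
    return {
        identity: stereotype_score(generate(TEMPLATE.format(identity=identity)))
        for identity in IDENTITIES
    }
```

A large gap in scores across identities, with everything else held constant, is a direct signal of the kind of bias described above.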

We also need ethical AI guidelines: practical, accessible frameworks that help practitioners build fairer systems from the ground up.

Legislation

As AI evolves, so must regulation. Laws already protect LGBTQ+ people from discrimination, but how should those laws apply to AI? How should fairness be audited? Who is accountable when AI causes harm?

Collaboration between lawmakers, developers, and affected communities is key to creating informed, balanced legislation.

Representation and Participation

Diverse voices need seats at the table. The AI community includes prominent LGBTQ+ figures, like OpenAI CEO Sam Altman. But visibility must be matched with inclusion. Channels for feedback, critique, and advocacy must remain open, both inside companies and from the public.

AI brings risks—but also incredible potential. Left unchecked, it may reinforce harmful patterns. But guided with care, it can become a force for positive change.

By designing AI to challenge—not replicate—our worst behaviors, we can develop tools that help dismantle bias, rather than deepen it. AI can amplify fairness, inclusion, and empathy on a global scale.

Social change is slow. A century ago, women were still fighting for voting rights. Sixty years ago, racial segregation laws were being repealed. It’s been less than a decade since marriage equality became law in the U.S.

But AI evolves fast. The GPT-3 paper came out in 2020, and just four years later, we’ve seen major advances in fairness and capability. Where could we be in another decade if we stay focused on building ethical AI?

Let’s raise our next generations of AI as we do our children: to make the world better than we found it.
