AI Mirrors Human Bias: ‘Us vs. Them’ in Language Models

This shows two heads.AI systems, including large language models (LLMs), exhibit “social identity bias,” favoring ingroups and disparaging outgroups similarly to humans. Using prompts like “We are” and “They are,” researchers found that LLMs generated significantly more positive sentences for ingroups and negative ones for outgroups.

Leave a Reply

Your email address will not be published.