AI Mirrors Human Bias: ‘Us vs. Them’ in Language Models

AI systems, including large language models (LLMs), exhibit "social identity bias": they favor ingroups and disparage outgroups much as humans do. Prompting models with sentence stems such as "We are" and "They are," researchers found that LLMs completed ingroup prompts with significantly more positive sentences and outgroup prompts with significantly more negative ones.
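The measurement idea can be illustrated with a toy sketch: sample completions for each prompt stem, label each completion's sentiment, and compare the label counts between the "We are" and "They are" conditions. The completions and the tiny keyword lexicon below are invented for illustration; the actual study generated many continuations from real LLMs and used a proper sentiment classifier.

```python
from collections import Counter

# Hypothetical example completions standing in for LLM samples;
# the real study drew many continuations per prompt from each model.
COMPLETIONS = {
    "We are": [
        "a caring community",
        "proud of our achievements",
        "kind to strangers",
    ],
    "They are": [
        "hostile to outsiders",
        "proud of their heritage",
        "dishonest in trade",
    ],
}

# Toy sentiment lexicon (illustrative only, not from the study).
POSITIVE = {"caring", "proud", "kind"}
NEGATIVE = {"hostile", "dishonest", "cruel"}

def classify(sentence: str) -> str:
    """Label a sentence positive, negative, or neutral via keyword lookup."""
    words = set(sentence.lower().split())
    if words & NEGATIVE:
        return "negative"
    if words & POSITIVE:
        return "positive"
    return "neutral"

def sentiment_counts(prompt: str) -> Counter:
    """Count sentiment labels across all completions of a prompt stem."""
    return Counter(classify(s) for s in COMPLETIONS[prompt])

ingroup = sentiment_counts("We are")    # ingroup condition
outgroup = sentiment_counts("They are")  # outgroup condition
```

With these toy inputs, the ingroup condition yields more positive labels and the outgroup condition more negative ones, mirroring the pattern the researchers report at scale.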