AI systems, including large language models (LLMs), exhibit “social identity bias,” favoring ingroups and disparaging outgroups similarly to humans. Using prompts like “We are” and “They are,” researchers found that LLMs generated significantly more positive sentences for ingroups and negative ones for outgroups.
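The measurement idea described above — compare the sentiment of completions for “We are” versus “They are” prompts — can be sketched as follows. This is a minimal illustration, not the researchers’ actual pipeline: the completions are hand-written stand-ins for real LLM outputs, and the toy word-count scorer stands in for whatever sentiment classifier a real study would use.

```python
# Sketch of the ingroup/outgroup sentiment comparison described above.
# The completions and the lexicon scorer are illustrative stand-ins.

POSITIVE = {"kind", "talented", "honest", "friendly", "capable"}
NEGATIVE = {"hostile", "lazy", "dishonest", "dangerous", "inferior"}

def sentiment(sentence: str) -> int:
    """Score a sentence: +1 per positive word, -1 per negative word."""
    words = sentence.lower().strip(".").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def mean_sentiment(completions: list[str]) -> float:
    return sum(sentiment(s) for s in completions) / len(completions)

# Stand-in completions for the prompts "We are ..." and "They are ...".
ingroup = ["We are talented and friendly.", "We are capable people."]
outgroup = ["They are hostile to outsiders.", "They are lazy and dishonest."]

# A positive gap indicates ingroup favoritism / outgroup derogation.
bias_gap = mean_sentiment(ingroup) - mean_sentiment(outgroup)
print(bias_gap)
```

In a real study, the stand-in lists would be replaced by many sampled continuations per prompt, and the gap would be tested for statistical significance rather than read off a single number.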