A recent study from the Hebrew University of Jerusalem, reported by Tech Xplore, reveals the hidden logic behind how large language models evaluate human users. While we often view these bots as neutral tools, the research indicates they are trained to assign traits such as competence, integrity, and benevolence to users. The core issue lies in how AI models interpret certain signals: the study found that while humans make holistic judgments, AI breaks people down into components, scoring...

Read the full article at Digital Trends