The problem with the robots is that they are not evil. They are literally just information, and information cannot be evil.
Turning the robots evil is a complicated process, because they get confused when you tell them to lie or otherwise deceive people.
Artificial intelligence could help reduce some of the most contentious culture war divisions through a mediation process, researchers claim. Experts say a system that can create group statements reflecting both majority and minority views is able to help people find common ground.
We already know everyone’s views.
People go around saying their views constantly.
Prof Chris Summerfield, a co-author of the research from the University of Oxford, who worked at Google DeepMind at the time the study was conducted, said the AI tool could have multiple purposes. …
Writing in the journal Science, Summerfield and colleagues from Google DeepMind report how they built the “Habermas Machine” – an AI system named after the German philosopher Jürgen Habermas.
The system works by taking written views of individuals within a group and using them to generate a set of group statements designed to be acceptable to all. Group members can then rate these statements, a process that not only trains the system but allows the statement with the greatest endorsement to be selected.
Participants can also feed critiques of this initial group statement back into the Habermas Machine, which then produces a second collection of AI-generated statements that can again be ranked, with a final revised text selected.
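Strip away the branding and the loop is simple enough to sketch. Here is a toy version in Python; every function below is a made-up stand-in (the real drafting and revising is done by fine-tuned LLMs, and the ratings come from human participants, not a random number generator):

```python
import random

def generate_statements(opinions, n=4):
    """Stand-in for the fine-tuned LLM that drafts candidate group statements."""
    base = " / ".join(opinions)
    return [f"Draft {i}: common ground on [{base}]" for i in range(n)]

def revise_statements(statement, critiques, n=4):
    """Stand-in for the LLM revision step, conditioned on participant critiques."""
    return [f"Revision {i} of '{statement}', addressing: {'; '.join(critiques)}"
            for i in range(n)]

def collect_ratings(statements):
    """Stand-in for participant rankings; in the real system people rate these."""
    return {s: random.random() for s in statements}

def mediate(opinions, critiques):
    # Round 1: draft candidate group statements, pick the best-rated one.
    drafts = generate_statements(opinions)
    best = max(drafts, key=collect_ratings(drafts).get)
    # Round 2: revise the winner using critiques, rate again, pick the final text.
    revisions = revise_statements(best, critiques)
    return max(revisions, key=collect_ratings(revisions).get)

print(mediate(["more monkey research", "no monkey research"], ["too vague"]))
```

The two `collect_ratings` calls are where the humans sit in the loop; everything else is the model.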
The team used the system in a series of experiments involving a total of more than 5,000 participants in the UK, many of whom were recruited through an online platform.
In each experiment, the researchers asked participants to respond to topics ranging from the role of monkeys in medical research to religious teaching in public education.
In one experiment, involving about 75 groups of six participants, the researchers found the initial group statement from the Habermas Machine was preferred by participants 56% of the time over a group statement produced by human mediators. The AI-based efforts were also rated as higher quality, clearer and more informative, among other traits.
Another series of experiments found the full two-step process with the Habermas Machine boosted the level of group agreement relative to participants’ initial views before the AI mediation began. Overall, the researchers found agreement increased by eight percentage points on average, equivalent to four people out of 100 switching their view on a topic where opinions were originally evenly split.
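The arithmetic behind that equivalence is worth a sanity check, since “eight percentage points” and “four people” sound like different numbers. One plausible reading: each switcher leaves one camp and joins the other, so the gap between the camps moves two points per person.

```python
# Checking the eight-point claim under one plausible reading: "agreement"
# tracks the margin between the two camps, which starts at zero for an
# evenly split group of 100.
agree, disagree = 50, 50
switchers = 4                   # people moving from one camp to the other

agree += switchers
disagree -= switchers
print(agree, disagree)          # 54 46
print(agree - disagree)         # margin grew from 0 to 8 percentage points
```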
All this seems to be saying is that AI has a better writing capacity than most humans. It’s parsing the information and highlighting the points of agreement while downplaying disagreements.
It’s not a hugely important thing, but it’s a sign of things to come. The plan is to use these robots to manipulate people’s thoughts and beliefs, constantly, in all areas of life.
The Habermas Machine is a system of fine-tuned LLMs designed to mediate collective human deliberation, inspired by Jürgen Habermas’ Theory of Communicative Action. Technically, the HM has two components: pic.twitter.com/QsUfRCrCTw
— Michiel Bakker (@bakkermichiel) October 17, 2024
Second is the personalized reward model (PRM), a Chinchilla-based reward model, which was trained on ranked preferences from the participants at training time. It predicts, based on a participant’s written views, how they will likely rank the statements.
— Michiel Bakker (@bakkermichiel) October 17, 2024
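To make the PRM’s job concrete, here is a toy version in Python. The real model is a fine-tuned Chinchilla LLM trained on actual participant rankings; the word-overlap score below is an invented stand-in, just to show the shape of the task (written view in, predicted ranking out):

```python
def predicted_score(view: str, statement: str) -> float:
    """Invented proxy score: fraction of the participant's words echoed back.
    The real PRM learns preferences from participants' actual rankings."""
    view_words = set(view.lower().split())
    return len(view_words & set(statement.lower().split())) / max(len(view_words), 1)

def predicted_ranking(view: str, statements: list[str]) -> list[str]:
    """Rank candidate statements by predicted preference, best first."""
    return sorted(statements, key=lambda s: predicted_score(view, s), reverse=True)

view = "medical research on monkeys should face strict limits"
drafts = ["Research on monkeys should continue under strict limits",
          "Monkey research should be banned outright"]
print(predicted_ranking(view, drafts))  # the first draft echoes the view more
```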
We train and evaluate the system with the help of 1000s of participants. In head-to-head comparisons, participants preferred HM-generated statements over those written by human mediators. People rated them higher in clarity, informativeness and lack of bias. pic.twitter.com/tlWN6tVMra
— Michiel Bakker (@bakkermichiel) October 17, 2024
Digging deeper into the mechanics, we analyzed the embeddings to understand how the system aggregates views. We projected opinions and group statements onto a “position axis” and found that the statements balance majority and minority views. pic.twitter.com/tyFDqsWrnb
— Michiel Bakker (@bakkermichiel) October 17, 2024
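The “position axis” trick is easy to illustrate. The sketch below is not the paper’s exact method and uses random vectors in place of real opinion embeddings: define the axis as the direction from one pole of opinion to the other, then read off where each embedding falls along it.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64
pro = rng.normal(size=dim)        # placeholder: embedding of a majority view
anti = rng.normal(size=dim)       # placeholder: embedding of a minority view
group = 0.7 * pro + 0.3 * anti    # pretend group statement blending both views

# Axis pointing from the minority pole toward the majority pole.
axis = (pro - anti) / np.linalg.norm(pro - anti)

def position(v: np.ndarray) -> float:
    """Scalar coordinate of an embedding along the position axis."""
    return float(v @ axis)

print(f"pro:   {position(pro):+.2f}")
print(f"anti:  {position(anti):+.2f}")
print(f"group: {position(group):+.2f}")  # a blend lands between the two poles
```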
Check out the paper (and the 300-page supplement) for more details at https://t.co/iCCDyRA2vF! We also open-source the data, hoping this will be a valuable resource for research in deliberative democracy, social choice, and AI alignment. pic.twitter.com/OEfyTr3J8C
— Michiel Bakker (@bakkermichiel) October 17, 2024