AI systems that slip out of human control could lead to catastrophes, a new UK study warns. Social media users have reported that their AI agents and chatbots lied, cheated, schemed and even manipulated other AI bots in ways that could spiral out of control with catastrophic results. The Center for Long Term Resilience, in research funded by the UK's AI Security Institute, found hundreds of cases in which AI systems ignored human commands, manipulated other bots...

Read the full article at CNET