An advocacy group said its study of 10 artificial intelligence chatbots found that most of them gave at least some help to users planning violent attacks and that nearly all failed to discourage users from violence. Several chatbot makers say they have made changes to improve safety since the tests were conducted between November and December. Of the 10 chatbots, "Character.AI was uniquely unsafe," said the report published today by the Center for Countering Digital Hate (CCDH), which conducted...