Model alignment refers to whether an AI model's behavior and responses match what its developers intended, especially with respect to safety guidelines. As AI tools evolve, whether a model is aligned with safety standards and values increasingly sets competing systems apart. But new research from Microsoft's AI Red Team reveals how fleeting that safety training can be once a model is deployed in the real world: a single prompt can set a model down a different path.