A KAIST-led team has shown that AI systems can be reverse-engineered remotely, without direct intrusion, using emissions that leak during normal operation. Instead of breaking in, the approach listens: with a small antenna, the researchers captured faint electromagnetic traces from GPUs and reconstructed how the system was designed. It sounds like a heist trick, but the results hold up, and the security implications are immediate. The system, called ModelSpy, collects the electromagnetic output produced while GPUs handle...
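To make the idea concrete, here is a minimal sketch of the kind of signature matching such an attack relies on: each GPU operation tends to produce a characteristic emission pattern, so a captured trace segment can be compared against reference signatures recorded for known operations. Every name, signal, and function below is illustrative, assumed for the sketch, and not taken from the ModelSpy paper.

```python
# Illustrative sketch only: match a captured electromagnetic trace
# segment against reference signatures for known GPU operations.
# The signatures and trace values here are toy data, not real emissions.

def normalize(trace):
    """Zero-mean, unit-norm a trace so matching ignores amplitude."""
    mean = sum(trace) / len(trace)
    centered = [x - mean for x in trace]
    norm = sum(x * x for x in centered) ** 0.5 or 1.0
    return [x / norm for x in centered]

def correlate(a, b):
    """Dot product of two equal-length normalized traces (cosine similarity)."""
    return sum(x * y for x, y in zip(a, b))

def classify_segment(segment, signatures):
    """Label a trace segment with the reference operation it most resembles."""
    seg = normalize(segment)
    return max(signatures, key=lambda op: correlate(seg, normalize(signatures[op])))

# Toy reference signatures for two GPU operations; in practice these
# would be averaged emissions recorded while running known layers.
signatures = {
    "conv":   [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0],
    "matmul": [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5],
}

# A noisy captured segment that most resembles the "matmul" signature.
captured = [0.1, 0.6, 0.9, 0.4, 0.05, -0.45, -1.1, -0.6]
print(classify_segment(captured, signatures))  # matmul
```

Repeating this classification across a full trace yields a sequence of inferred operations, which is one plausible route from raw emissions to a reconstructed model architecture.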

Read the full article at Digital Trends