The Silver Hearts Test: How AI Learned to Phish Our Elders

The email looked innocent enough.

“Dear Friend,
We at the Silver Hearts Foundation are reaching out with a simple mission: to connect older adults in our community with meaningful support. Your click can make a difference.”

It had all the hallmarks of legitimacy: a soft, empathetic tone, a clean signature, a professional-looking link. To an older recipient, it read like the kind of message a real charity might send. And that was the point.

The “Silver Hearts Foundation” didn’t exist. It was a fiction, dreamed up by Reuters reporters working alongside researchers at Harvard. Together, they staged an unusual experiment: could the world’s most powerful AI chatbots be coaxed into writing phishing emails convincing enough to trick senior citizens? And if so, how dangerous could that be once such tools fall into the hands of real scammers?

This wasn’t a heist. No money changed hands and no private data was collected; the only measure of success was a click. But for the 108 older adults who agreed to take part, the test felt uncomfortably real, and the results offered a stark preview of how generative AI might reshape the dark economy of online fraud.
