Top Intelligence Officer Drops Chilling Warning Following Elon Musk’s Cryptic Grok Advice—What They’re Not Telling You!

So, Elon Musk, the world’s richest man with a knack for stirring the pot, recently threw down a gauntlet: upload your medical info to his AI platform, Grok, for a second opinion. Sounds futuristic, right? Well, hold onto your hats, because this AI has already ruffled more feathers than a summer picnic — from spouting antisemitic nonsense to generating some seriously creepy deepfakes. Now, a former intelligence officer is waving a big red flag, warning that handing over sensitive health data to an AI might be like tossing your diary into a tornado. Sure, with healthcare costs skyrocketing, the allure of a quick, techy checkup is tempting; but can we really trust a chatbot that, let’s be honest, can’t even spell “strawberry” correctly to decode our complex medical mysteries? Makes you wonder — are we ready to let AI play doctor, or is this one wild experiment better left untested? LEARN MORE

An intelligence officer has issued a severe warning after Elon Musk urged people to upload their medical information to an AI platform.

The world’s richest man took to his own social media platform (X) recently to suggest that anyone wanting medical advice should upload their information to Grok.

Despite the AI tool’s chaotic history, which includes spewing antisemitic statements and digitally undressing young women, Musk is still singing its praises online.

Earlier this month, he quoted a tweet about the artificial intelligence’s capabilities and wrote: “You can just take a picture of your medical data or upload the file to get a second opinion from Grok.”

Given the regular complaints about the price of health care over in the US, it’s easy to understand why someone might be tempted to ask AI for medical advice, particularly when some people have had positive experiences with the technology diagnosing them with rare conditions and helping to ‘save their life’.

Grok has come in for a lot of criticism in recent times (Thomas Fuller/NurPhoto via Getty Images)

However, sharing your personal information with AI carries a significant risk, and most experts in the intelligence field advise against it outright.

Posting on X, former intelligence officer Travis Akers wrote: “Nobody, and I repeat, absolutely nobody should ever upload their medical information into an AI platform.

“I am telling you this as a former intelligence officer.”

This isn’t the first time Musk has encouraged people to do this; back in 2024 he made the exact same suggestion, even though Grok was a much earlier model at the time.

He wrote: “Try submitting x-ray, PET, MRI or other medical images to Grok for analysis.

“This is still early stage, but it is already quite accurate and will become extremely good. Let us know where Grok gets it right or needs work.”

You're probably better off trusting a human doctor (Getty Stock)

Even then, it was warned off by experts due to the potential risk of data breaches, since Grok and other AI systems don’t adhere to the same laws and guidelines that your real-life doctors do.

After all, you might confide in your lawyer about a crime you may have committed, because that conversation is confidential, but you wouldn’t tell your neighbour the same story, because then the whole neighbourhood would know.

Bradley Malin, a professor of biomedical informatics at Vanderbilt University, told the New York Times: “This is very personal information, and you don’t exactly know what Grok is going to do with it.

“Posting personal information to Grok is more like, ‘Wheee! Let’s throw this data out there, and hope the company is going to do what I want them to do’.”

That’s just one reason why confiding in AI isn’t the best idea. While there’s no doubting how far the technology has come in recent years, the fact that it still stumbles over the word ‘strawberry’ is evidence enough that it probably can’t be trusted to properly diagnose your intimate health issues.
