Deepfake Generals Are Now a Real Threat. Spot One in 30 Seconds
In the early months of 2026, four senior Indian military officers, including two Army Chiefs and the Chief of Defence Staff, were turned into deepfakes.
In March, a video of General Upendra Dwivedi went viral on X. In the clip, the Army Chief appeared to admit that India had shared the location of an Iranian warship with Israel before a US submarine sank it. The Press Information Bureau confirmed within hours that the clip was AI-generated. The original speech was from the Raisina Dialogue, where the Chief had spoken about Pakistan and about India's security challenges. The deepfake had nothing to do with the original.
Sources: PIB Fact Check on X, Fact Crescendo report
Days later, a second video began circulating. This one targeted former Army Chief General Manoj Pande (Retd.). In the fake clip, he appeared to say that supporting Israel had cost Indian Army lives, that Israeli instructors were training Indian soldiers to dehumanise certain communities, and that a revolt within the forces was possible. None of it was real. The original video was a calm talk about future security challenges.
Source: PIB Fact Check warning, March 16
By April, the same trick had been used against Brigadier Neeraj Khajuria, who had spoken publicly about Operation Sindoor and anti-drone operations in the Rann sector. His real interview was edited with AI-generated audio to make him appear to criticise the government. Then came Chief of Defence Staff General Anil Chauhan. The pattern was clear.
Source: BOOM Live fact check
The New Playbook
Notice what these incidents share. The attackers no longer need to build a video from scratch. They take a real interview a senior officer has given, at a conference or on a news channel, and they replace only the audio. The lip movements are tweaked just enough to match the new words. The face is real. The setting is real. The uniform is real. Only the words are fake.
This is much harder to spot than a fully fake video. And it travels further on WhatsApp and X, because it carries the trust of real footage.
It is also cheap to make. The same AI tools that let a teenager dub a Hollywood movie into Bhojpuri in twenty minutes will let a hostile cell put new words into the mouth of an Army Chief in an afternoon.
The Indian government has responded with speed. In February 2026, the IT Ministry cut the takedown window for AI-generated fake content from 36 hours to just two to three hours. PIB Fact Check now flags major military deepfakes within hours.
Source: RT India report on the new rules
But the gap between upload and debunk is exactly the window the propaganda needs. By the time PIB clears the air, the clip has already done its damage in a thousand WhatsApp groups.
Which means the most important defence tool is no longer a government agency. It is you.
The 30-Second Test
Before you forward any video of a senior military or government figure saying something shocking, run this five-step test. It takes less time than reading this paragraph.
One. Check the lips.
Pause the video. Move frame by frame through the most controversial sentence. AI lip-syncing almost always slips somewhere. A word arrives a fraction of a second before the lips move. A mouth stays slightly open during a hard sound. A smile lingers one beat too long. Your eyes know what real speech looks like. Trust them.
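If you are comfortable with a little code, you can go one step further than scrubbing on a phone screen. The sketch below, in Python with the OpenCV package, saves every frame of the suspect sentence as an image so you can flip through the lips at your own pace. The file name and the timestamps are placeholders for your own clip, not part of any official tool.

```python
import cv2

# Placeholder inputs: the suspect clip, and rough start/end times
# (in seconds) of the most controversial sentence.
VIDEO = "suspect_clip.mp4"
START_S, END_S = 42.0, 47.0

cap = cv2.VideoCapture(VIDEO)
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if metadata is missing

# Jump to the start of the sentence, then save every frame until the end,
# so the lip movements can be inspected one image at a time.
cap.set(cv2.CAP_PROP_POS_MSEC, START_S * 1000)
saved = 0
while True:
    ok, frame = cap.read()
    pos_s = cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0
    if not ok or pos_s > END_S:
        break
    cv2.imwrite(f"frame_{saved:04d}_{pos_s:.2f}s.png", frame)
    saved += 1

cap.release()
print(f"Saved {saved} frames at roughly {fps:.0f} fps. Step through them slowly.")
```

Look for the same slips described above: lips that lag a hard sound, or a mouth frozen mid-word.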
Two. Check the voice.
Real human speech has breath. It has small pauses, swallowed words, tiny coughs. An AI voice is smoother than a real one. Almost too clean. If a senior officer sounds like a polished news anchor reading from a script when he is supposed to be speaking freely at a conference, be suspicious.
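That "almost too clean" quality is partly measurable. The rough Python sketch below uses the librosa package to count the silent gaps in a clip's audio track (it assumes the audio has already been pulled out of the video, for example with ffmpeg). The file name and the 30 dB silence threshold are guesses to tune for your material, and a low pause count is one more red flag, not a verdict.

```python
import librosa

# Placeholder input: the audio track extracted from the suspect clip.
y, sr = librosa.load("suspect_audio.wav", sr=16000)

# Split into voiced chunks; anything 30 dB below peak counts as silence.
voiced = librosa.effects.split(y, top_db=30)  # (start, end) sample indices

# Measure the gaps between voiced chunks. Live conference speech is full
# of short pauses and breaths; minutes of speech with almost none is
# one more reason to be suspicious.
gaps = [(b[0] - a[1]) / sr for a, b in zip(voiced[:-1], voiced[1:])]
total_s = len(y) / sr
long_pauses = sum(1 for g in gaps if g > 0.15)

print(f"{len(voiced)} voiced segments over {total_s:.1f} s of audio")
print(f"{long_pauses} pauses longer than 150 ms")
print(f"silence: {sum(gaps):.1f} s ({100 * sum(gaps) / total_s:.0f}% of the clip)")
```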
Three. Check the source.
Where did this video first appear? If the earliest version comes from an unknown account, a fresh handle with few followers, a Telegram channel you have never heard of, or a foreign news outlet you cannot verify, the chance of it being fake just jumped sharply. Real statements by Army Chiefs are first published by ANI, PTI, the official Indian Army handle, or the channel that hosted the original event.
Four. Reverse search a frame.
This is the most powerful step, and most people skip it. Take a screenshot of any clear frame from the video. Open Google Lens on your phone. Search by image. Nine times out of ten, the original, unaltered video will appear in the results, usually from months earlier and with completely different content. The moment you find it, you have proof.
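Under the hood, reverse image search matches compact fingerprints of images rather than raw pixels, which is why it survives re-encoding and resizing. Once Lens has led you to a candidate original, you can confirm the match yourself with a perceptual hash. The sketch below uses the Pillow and imagehash Python packages; the file names and the distance threshold are placeholders, not a forensic standard.

```python
from PIL import Image
import imagehash

# Placeholder inputs: one frame from the suspect clip, one frame from
# the video you believe is the unaltered original.
suspect = imagehash.phash(Image.open("suspect_frame.png"))
original = imagehash.phash(Image.open("original_frame.png"))

# Perceptual hashes change little under resizing, re-encoding and mild
# compression, so the same footage hashes to nearly the same value.
distance = suspect - original  # Hamming distance between the two hashes
print(f"hash distance: {distance}")

if distance <= 8:  # rough threshold; tune for your material
    print("Same footage. The video is real; ask whether the audio was swapped.")
else:
    print("Frames differ substantially; not obviously the same footage.")
```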
Try it here: Google Lens
Five. Check PIB Fact Check.
The official Indian government channels for verifying suspicious content are easy to reach. The X handle is @PIBFactCheck. The WhatsApp number is +91 8799711259. If a military deepfake is already spreading, PIB has very likely already flagged it.
The whole test takes thirty seconds once you have practised. The first time may take two minutes. Either way, it is shorter than the regret of forwarding hostile propaganda to your unit's WhatsApp group.
The Deeper Point
Military deepfakes are not made to fool experts. They are made to fool the average forwarder. The target is not the journalist or the intelligence officer. The target is the retired havildar in a village, the schoolteacher who admires the Army, the cousin in the Gulf who follows defence news on Telegram.
The goal is not to convince every viewer. The goal is to make enough viewers doubt enough things, often enough, that trust in the Indian Army takes a small hit every time.
This is information warfare at industrial scale, and it is now running against India every week.
The good news is that the same technology that makes the fakes has also given us the tools to catch them. Google Lens is free. PIB Fact Check is free. Your own eyes, slowed down on any phone, are free. The thirty seconds you spend before hitting forward is the cheapest contribution to national security you will ever make.
The bad news is that the next deepfake is already being made somewhere. Probably of someone you respect. Probably saying something that will make you angry.
Slow down. Run the test. Then decide.