A Single Pixel Can Flip the Verdict: AI-Based ISR
Sindoor's Quiet Premise
In October 2025, Lt Gen Rajiv Kumar Sahni told reporters that Operation Sindoor was India's first AI-enabled operation. Twenty-three indigenous applications knitted the kill chain together. ECAS for electronic intelligence, TRINETRA for the common operational picture, Project Sanjay for sensor fusion, Anuman 2.0 for weather. Reported targeting accuracy: 94 per cent.
That number deserves a hard look. It rests on AI models that read camera frames, satellite images and sensor feeds the way a human reads a photograph. A decade of research has shown this layer can be fooled. Sometimes by a single changed pixel, sometimes by a sticker on a road. The trick works in ways a human watching the same screen will not catch.
How AI Sees, and Why It Can Be Fooled
The AI models inside almost every modern surveillance camera, drone payload and satellite analytics tool belong to a family called convolutional neural networks (CNNs). A CNN does not understand what a tank or a soldier is. It runs the picture through millions of small mathematical operations and produces a label. "Tank", "truck", "person". That maths can be unstable in a way human vision is not. A tiny change in the input can swing the answer from one label to another, even when the picture looks identical to us.
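To make "millions of small mathematical operations" concrete, here is a minimal sketch of the inference step, assuming a standard PyTorch and torchvision setup. The model (an ImageNet-trained ResNet-50) and the file name are illustrative, not what any fielded system runs.

```python
# Minimal CNN inference: a picture goes in, a single label comes out.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing: resize, crop, normalise.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

img = preprocess(Image.open("frame.jpg")).unsqueeze(0)  # 1 x 3 x 224 x 224
with torch.no_grad():
    logits = model(img)                   # the millions of small operations
    probs = torch.softmax(logits, dim=1)  # collapsed into class scores
    conf, label = probs.max(dim=1)        # the one label the system acts on
print(label.item(), conf.item())
```

Nothing in this pipeline checks whether the picture makes sense. The label and the confidence score are all the downstream system sees.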
The One-Pixel Result
The cleanest demonstration is a 2017 paper by Su, Vargas and Sakurai, titled One Pixel Attack for Fooling Deep Neural Networks. They used an automated search, differential evolution, to find, for each test image, the single pixel whose colour change would do the most damage. Changing that one pixel made the AI misclassify roughly 68 per cent of test images, often with high confidence in the wrong answer. A horse became a frog. A ship became a car. To a human eye, the picture had not changed.
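The search itself is short enough to sketch. The paper's method was differential evolution, which SciPy provides; the `model` callable below, assumed to return a vector of class probabilities, is a placeholder, and the hyperparameters are illustrative.

```python
# One-pixel attack in the spirit of Su, Vargas and Sakurai (2017):
# evolve a single (x, y, r, g, b) change that collapses the model's
# confidence in the true class. `model` is a placeholder classifier.
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(model, image, true_class, img_size=32):
    def perturb(candidate):
        x, y, r, g, b = candidate
        adv = image.copy()
        adv[int(x), int(y)] = [r, g, b]   # change exactly one pixel
        return adv

    def fitness(candidate):
        # Lower confidence in the true class is better for the attacker.
        return model(perturb(candidate))[true_class]

    bounds = [(0, img_size - 1), (0, img_size - 1),   # pixel position
              (0, 255), (0, 255), (0, 255)]           # new colour
    result = differential_evolution(fitness, bounds, maxiter=75, popsize=10)
    return perturb(result.x), result.fun  # adversarial image, leftover confidence
```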
Crossing into the Physical World
In 2019, Tencent's Keen Security Lab placed three small, ordinary-looking stickers on a road and caused a Tesla on Autopilot to swerve into the wrong lane. No human driver would have read those stickers as lane markings. The car's AI did. Later work has extended the same kind of attack to AI models inside drones and ground robots: printed cloth patches that make a detector see a tank as a tree, or fail to see a person at all.
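The patches behind such physical attacks are typically optimised with an expectation-over-transformation trick: the candidate patch is pasted into training images at random positions, so that it keeps fooling the model from different viewpoints. A minimal PyTorch sketch, with the model, patch size and placement logic all illustrative:

```python
# Train a printed patch that pushes a classifier toward a chosen wrong
# label wherever the patch lands in the frame. Sizes are illustrative.
import torch
import torch.nn.functional as F

def train_patch(model, images, labels, wrong_class, size=50, steps=500):
    patch = torch.rand(3, size, size, requires_grad=True)  # starts as noise
    opt = torch.optim.Adam([patch], lr=0.01)
    for _ in range(steps):
        batch = images.clone()
        # Paste at a random location so the patch survives viewpoint changes.
        x = torch.randint(0, images.shape[2] - size, (1,)).item()
        y = torch.randint(0, images.shape[3] - size, (1,)).item()
        batch[:, :, x:x + size, y:y + size] = patch.clamp(0, 1)
        # Push every image toward the attacker's chosen wrong label.
        loss = F.cross_entropy(model(batch),
                               torch.full_like(labels, wrong_class))
        opt.zero_grad(); loss.backward(); opt.step()
    return patch.detach().clamp(0, 1)  # print this, stick it on the object
```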
The Indian ISR Stack Is Vision-Heavy
What is already deployed or being inducted, drawn from publicly reported programmes:
- Prajna, an AI-enabled satellite imaging system from DRDO's Centre for Artificial Intelligence and Robotics (CAIR), was handed over to the Ministry of Home Affairs (MHA) in April 2026 for counter-terror and border management.
- The AI Incubation Centre at Bharat Electronics Limited (BEL) nurtures indigenous AI under Atmanirbhar Bharat. CAIR reports more than 75 AI-based defence products in its portfolio.
- The Army AI Research and Incubation Centre (AARIC), Bengaluru, established 2024, coordinates AI projects with DRDO, academia and industry.
- Tonbo Imaging supplies thermal modules with built-in AI classifiers for sniper scopes and UAV payloads. Tata Advanced Systems, with Israel's Elbit, is working on loitering munitions that pick out targets in flight.
- Quadcopters from DRDO's R&DE(E) lab, Pune, carry vision-based navigation tuned to enemy vehicle signatures along the LAC.
How an Adversary Could Use This Against Us
The threat is not theoretical. Adversaries with strong AI research bases, China most obviously, already know this literature. Pakistan can buy or partner for capability.
1. The Disappearing Convoy
An adversary studies the AI models commonly used on Indian surveillance drones. These are not classified, because the underlying research is open. They generate a printed pattern. To a human, it looks like an unusual decal or patterned tarpaulin. They drape it over an armoured vehicle, a logistics truck, a missile launcher. A drone passes overhead. To the soldier watching the feed, the vehicle is plainly visible. To the AI doing automated detection, the vehicle does not exist. No alert. No bounding box. The convoy moves while our automated layer reports an empty road.
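In code terms, such a pattern is trained against a suppression objective. A sketch, assuming a hypothetical `detector` that returns an objectness score for each candidate bounding box:

```python
# Disappearing-convoy objective: minimise the strongest detection so no
# box clears the alerting threshold. `detector` is hypothetical.
def suppression_loss(detector, patched_image):
    objectness = detector(patched_image)  # one score per candidate box
    # Driving the maximum down drags every detection below threshold:
    # no bounding box, no alert.
    return objectness.max()
```

Gradient descent on the patch pixels against this loss, with the same random-placement trick as above, yields the printed pattern.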
2. Ghost Targets
The reverse attack is equally useful. Patterns can be designed to make an AI detector see a tank or a person where none exists. Scatter them across a sector and our alerting systems begin reporting movement everywhere. Analysts chase ghosts. Aviation assets get vectored to empty grid squares. Real movement, when it happens, is buried in the noise. Denial-by-overload, paid for in printed cloth.
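The objective is the mirror image of the suppression loss above, with the same hypothetical `detector`:

```python
# Ghost-target objective: reward detections where nothing exists.
def ghost_loss(detector, patched_empty_scene):
    objectness = detector(patched_empty_scene)
    # Minimising the negative maximises it: the patch learns to conjure
    # bounding boxes out of empty ground.
    return -objectness.max()
```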
3. Spoofed Markings for Route-Clearance UGVs
The Army's roadmap includes unmanned ground vehicles for route clearance, robots that drive ahead of a column to find IEDs. They use cameras and laser-based sensors. The Tencent Tesla work showed that a few small marks on the road, dismissed by a human as wear, can redirect a vision-based path planner. On a contested route, this becomes a way to push a UGV onto a pre-planted mine, or simply to halt it.
4. Adversarial Camouflage Against Satellite ISR
Prajna scans satellite imagery to flag suspicious activity in border regions. The same family of attack works on these models. An adversary does not need to hack the feed. He builds the deception on the ground. Camouflage netting printed with patterns designed to confuse the classifier. Decoy structures whose geometry triggers a false reading. A real airfield disguised as a quarry. The analyst sees AI assessments saying "nothing of interest" while a brigade quietly forms up. The satellite-era version of maskirovka, and the AI layer is what makes it cheap.
Why This Is Asymmetric
The cost to the attacker is small. Printed cloth, a tin of paint, some netting. The perturbation does not have to look strange to a human. Our soldiers and analysts cannot spot it just by looking harder. The failure happens inside the AI, and the screen presents a clean output regardless.
What Should Be in the Specification
The intuitive defence is to add more sensors. Radar, laser-based mapping, thermal, signals intelligence. But fusion is not a magic word. If the system simply averages the verdicts of different sensors, an attacker who fools the dominant one (usually the camera) can still tip the result.
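A toy illustration with made-up confidence numbers shows how one fooled sensor tips a weighted average while a robust rule survives:

```python
# Three sensors vote on "vehicle present"; the camera dominates the weights.
camera, radar, thermal = 0.02, 0.80, 0.75   # camera fooled by a patch
average = 0.6 * camera + 0.2 * radar + 0.2 * thermal  # vision-heavy weighting

print(f"weighted average: {average:.3f}")  # 0.322, below a 0.5 alert threshold
print(f"median fusion:    {sorted([camera, radar, thermal])[1]:.2f}")  # 0.75, alert fires
```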
Four things worth baking into procurement specifications:
- Adversarial training as a written requirement. Models should be trained against known attack patterns, as a clause in the trial directive (a minimal sketch of what that training step involves follows this list).
- Input sanitisation. A camera frame or satellite tile that looks statistically unlike anything in the training data is a flag, not a feature.
- Human-in-the-loop thresholds for lethality. The Army's roadmap already gestures at explainable AI for command decision-support. AI should provide evidence and a confidence score, not a verdict.
- Red-teaming as a procurement deliverable. An independent team should attempt physical-world adversarial attacks before a system is fielded, on the supplier's dime.
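For the first item on that list, the training step itself is well understood in the open literature. A minimal sketch of FGSM-style adversarial training (after Goodfellow et al., 2014), with the model, data and perturbation budget as placeholders:

```python
# One adversarial training step: generate the worst-case small
# perturbation, then train on clean and perturbed images together.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, eps=0.03):
    # 1. FGSM: nudge each pixel by eps in the direction that hurts most.
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    adv = (images + eps * grad.sign()).clamp(0, 1).detach()

    # 2. Train on both versions so the model stops trusting the trick.
    optimizer.zero_grad()
    total = (F.cross_entropy(model(images.detach()), labels)
             + F.cross_entropy(model(adv), labels))
    total.backward()
    optimizer.step()
    return total.item()
```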
The Takeaway
Sindoor showed what AI can do for the Indian ISR cycle when it works. The adversarial literature shows what happens when an adversary works on it. Every camera, every imagery tile, every sensor feed in the chain is now an attack surface. The perturbation that flips its verdict does not have to be visible, large, or digital. Adversarial robustness needs to move from the academic margin into trial directives. The cost of treating it as a corner case is paid at the worst possible time.