
Why Sindoor Did Not Produce an Indian Lavender or Gospel


Briefing journalists in Delhi on 6 October 2025, Lt Gen Rajiv Kumar Sahni, then Director General of Electronics and Mechanical Engineers and, during Operation Sindoor, the Army's Director General of Information Systems, gave the clearest public account so far of what the Army's AI stack actually did between 7 and 10 May 2025. The AI was fed twenty-six years of historical data on Pakistani military movements. It generated heat maps. It produced predictive models of where particular machines, a gun position or a missile unit, would be along the border. The reported accuracy was 94 per cent. The platform pulled feeds from drones, radars and satellites onto a single commander's screen.
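
The briefing describes outputs, not mechanics, so the sketch below is purely illustrative: the grid, the synthetic sighting history and the frequency-based scoring are assumptions standing in for whatever the Army actually built, and nothing here reproduces the 94 per cent figure.

```python
# Toy sketch of a "heat map + predictive geolocation" layer. Everything here
# (grid size, synthetic sightings, equipment classes) is invented for
# illustration; the deployed system's data and model are not public.
import numpy as np

rng = np.random.default_rng(0)
GRID = (50, 80)  # coarse grid over a notional border sector

# Synthetic historical sightings (row, col, equipment_class), standing in for
# "twenty-six years of movement data".
history = np.column_stack([
    rng.integers(0, GRID[0], 5000),
    rng.integers(0, GRID[1], 5000),
    rng.integers(0, 3, 5000),        # 0 = gun position, 1 = missile unit, 2 = radar
])

def heat_map(equipment_class: int) -> np.ndarray:
    """Per-cell relative frequency of past sightings for one equipment class."""
    pts = history[history[:, 2] == equipment_class]
    counts = np.zeros(GRID)
    np.add.at(counts, (pts[:, 0], pts[:, 1]), 1)
    return counts / counts.sum()

def most_likely_cells(equipment_class: int, k: int = 5):
    """Top-k cells by historical likelihood: the predictive first cut."""
    hm = heat_map(equipment_class)
    top = np.argsort(hm, axis=None)[::-1][:k]
    return [tuple(int(x) for x in np.unravel_index(i, GRID)) for i in top]

print(most_likely_cells(equipment_class=1))  # candidate cells for a missile unit
```

The real layer presumably conditions on time, terrain, weather and order-of-battle context rather than raw frequency; the point of the sketch is only that the thing being scored is an equipment class and a map cell.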

That is a substantial AI footprint. It is also, importantly, not Lavender. It is not Gospel. The Indian Army did not surface, after Sindoor, a system that scored individual human beings on a probability of being a militant, or a recommender that nominated buildings as objectives at saturation rate. The shape of what India built is its own. The shape of what India did not build is itself an argument about doctrine.

What Sindoor's AI stack actually was

Three named systems and one named app are now in the public record around Sindoor.

TRINETRA is the Army's Common Operational Picture platform, designed to give commanders a single fused view drawn from sensors and formations across the force. ECAS, the Electronic Intelligence Collation and Analysis System, sits in the SIGINT space and helps identify threats out of an electronic intelligence feed. Anuman 2.0 is the Army's weather forecasting app, used during Sindoor to enable precision engagement in difficult terrain. The unnamed predictive layer Lt Gen Sahni described, twenty-six years of movement data and machine-class location prediction, sits alongside these as the targeting analytic.

The architecture this implies is recognisable from the open literature on Western C4ISR programmes. It is multi-source sensor fusion, predictive geolocation of military objects, and decision support to commanders. It is squarely within the laws of armed conflict tradition of attacking military objectives. The unit of analysis is a piece of military equipment, a formation, an installation. The unit of analysis is not a person.
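
A minimal sketch of what that unit of analysis looks like as data, with invented field names and values rather than anything from TRINETRA's actual schema:

```python
# Illustrative fused-track record and a naive fusion step. Field names, sources
# and the weighting are assumptions for explanation, not the platform's design.
from dataclasses import dataclass, field

@dataclass
class Track:
    track_id: str
    object_class: str                 # "gun_position", "missile_unit" - equipment, not a person
    lat: float
    lon: float
    confidence: float
    sources: list = field(default_factory=list)   # e.g. ["uav", "radar", "satellite"]

def fuse(reports: list) -> Track:
    """Confidence-weighted position plus the union of contributing sensors."""
    w = sum(r.confidence for r in reports)
    return Track(
        track_id=reports[0].track_id,
        object_class=reports[0].object_class,
        lat=sum(r.lat * r.confidence for r in reports) / w,
        lon=sum(r.lon * r.confidence for r in reports) / w,
        confidence=min(1.0, w),
        sources=sorted({s for r in reports for s in r.sources}),
    )

# One fused track on a commander's screen, built from drone, radar and satellite reports.
print(fuse([
    Track("T-17", "missile_unit", 32.71, 74.85, 0.6, ["uav"]),
    Track("T-17", "missile_unit", 32.72, 74.84, 0.3, ["radar"]),
    Track("T-17", "missile_unit", 32.70, 74.86, 0.5, ["satellite"]),
]))
```

The weighted average stands in for whatever estimator the real platform uses; the argument is carried by the record type, which tracks a missile unit, not a named individual.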

What Lavender and Gospel are, briefly

Israel's Lavender and Gospel, the most operationally tested AI targeting stack on earth, occupy a different quadrant.

Gospel is an object recommender that nominates buildings as military objectives at high throughput. Lavender is a person classifier that scores adult males in Gaza on a probability of being a militant, using social-graph features, communications metadata and movement patterns. Where's Daddy?, the third element, is a geofence that fires when a Lavender-flagged person enters a tagged building. The reporting that surfaced the stack, principally in +972 Magazine in April 2024, also surfaced the doctrinal choices around it: a roughly ten per cent acknowledged error rate, twenty-second human review windows, casualty thresholds set in advance by command, and unguided munitions used against AI-nominated junior operatives.

None of this is, narrowly, an autonomous weapon. The drift is in the procedure, not the algorithm. That is the layer at which India has, so far, made different choices.

The shape of what India did not build

Read against the Israeli stack, Sindoor's architecture is a set of deliberate omissions.

India did not field, or has not disclosed, a person classifier that scores individuals on probability of combatant status. The targeting analytic the Army has described works on military equipment and formations, on the question of where a gun or a missile unit is, not on whether a particular human being should be killed.

India did not field a high-throughput building nominator with a residential geofence on top of it. The Army's AI fed a Common Operational Picture for commanders to act on, not a queue of family homes timestamped to occupancy. Indian fires struck nine sites associated with terrorist infrastructure across Pakistan and Pakistan-occupied Kashmir. The order of magnitude is small. The outcome is not consistent with a Gospel-style recommender running at saturation rate.

India did not, on available evidence, compress human review to seconds per nomination. The 72-hour operational window the Defence Minister has cited for Sindoor is short by historical standards but long by Lavender standards. There is room inside it for the deliberate, file-based review that Indian doctrine has traditionally required.

India did not, finally, attach pre-authorised collateral damage thresholds to target categories. Indian doctrine requires case-by-case proportionality assessment. The reported permissions inside the Israeli stack, of fixed civilian casualty ceilings tied to seniority of the target, have no public Indian analogue.

Why these choices fit Indian conditions

Three structural facts make the Israeli model a poor fit for India.

The first is legal exposure. India's domestic review architecture, through the Armed Forces Tribunal and writ jurisdiction, is structurally more demanding than the environment in which the Israeli stack operates. A person classifier with a ten per cent error rate, deployed at population scale, is hard to reconcile with the standard of review that Indian courts can be expected to apply.
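
The arithmetic behind that claim is worth making explicit. The ten per cent error rate and the twenty-second review window are the figures from the reporting cited above; the nomination volume and the file-review time below are hypothetical round numbers chosen only to show the scale.

```python
# Back-of-envelope arithmetic only; the nomination volume and file-review time
# are assumptions, not reported figures.
nominations = 30_000              # hypothetical flagged individuals at population scale
error_rate = 0.10                 # reported share of nominations that are wrong
seconds_per_quick_review = 20     # reported review window per nomination
hours_per_file_review = 2         # hypothetical deliberate, file-based review

wrongly_flagged = nominations * error_rate
quick_review_hours = nominations * seconds_per_quick_review / 3600
file_review_hours = nominations * hours_per_file_review

print(f"wrongly flagged individuals: {wrongly_flagged:,.0f}")              # 3,000
print(f"total review time at 20 s each: {quick_review_hours:,.0f} hours")  # ~167
print(f"total review time, file-based: {file_review_hours:,.0f} hours")    # 60,000
```

Under these assumptions, every one of the wrongly flagged individuals is a potential Armed Forces Tribunal or writ proceeding, and the review time needed to handle nominations deliberately is hundreds of times longer than the compressed window allows.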

The second is the operational profile. The Indian Army's AI problem is not the saturation strike on a dense urban adversary. It is fused situational awareness across long, contested land borders with two adversaries and an internal counter-terror responsibility. A fused operational picture, predictive geolocation of military objects and heat maps are the right primitives for that problem. A person classifier and a residential geofence are not.

The third is institutional. The Army AI Research and Incubation Centre (AARIC) at Bengaluru, established in 2024 in partnership with DRDO, academia and industry, is the institutional vehicle through which the Army's AI work is being shaped. Its focus on partnerships with the Indian AI ecosystem points towards the C4ISR and decision support quadrant, not towards person-centric targeting.

The doctrine gap that remains

The absence of an Indian Lavender or Gospel is not the same thing as a published Indian doctrine on AI-assisted target nomination. It is a set of choices visible in what was built, not a written rulebook. That gap matters as the AI footprint inside the force deepens through the Year of Networking and Data Centricity and the Army's 2026 to 2027 AI roadmap.

The questions Lavender forced into the open will arrive in Indian conditions too. What is the floor on human review time per nomination class? What features may not be used in a target nomination model? How is automation bias engineered against, not just managed? What is the standing red team arrangement for a deployed military AI system? What does proportionality assessment look like when the model is doing the first cut?
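
One way to force those questions is to require that each answer exist as an explicit, auditable parameter rather than an operator habit. A minimal sketch of what writing that down might look like, with every class name and value invented for illustration, since no such Indian document is public:

```python
# Sketch only: the questions above recast as written doctrine parameters.
# All names and values here are invented; nothing below reflects actual Indian doctrine.
from dataclasses import dataclass

@dataclass(frozen=True)
class NominationDoctrine:
    nomination_class: str          # e.g. "fixed_installation", "mobile_launcher"
    min_review_minutes: int        # floor on human review time per nomination
    prohibited_features: tuple     # features a nomination model may not use
    case_by_case_proportionality: bool
    red_team_interval_days: int    # standing red-team cadence for the deployed model

DOCTRINE = {
    "fixed_installation": NominationDoctrine("fixed_installation", 120, ("communications_metadata",), True, 90),
    "mobile_launcher": NominationDoctrine("mobile_launcher", 30, ("communications_metadata",), True, 90),
}

def review_floor(nomination_class: str) -> int:
    """Minimum human review time, in minutes, for a nomination class."""
    return DOCTRINE[nomination_class].min_review_minutes

print(review_floor("mobile_launcher"))  # 30
```

The substance of such a table is for the Army and its lawyers to settle; the point is that once it is written down it can be audited, red-teamed and argued over.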

The takeaway

Sindoor was India's first AI-enabled operation, in the Army's own description. The fact that its AI stack does not look like Lavender or Gospel is not a coincidence. It reflects legal exposure, an operational profile and an institutional gravity that all push Indian work towards fused situational awareness and decision support, and away from person-centric targeting at scale.

The recommendation is straightforward. India should write down the doctrine it has so far been practising. The choices visible in Sindoor are defensible ones. They will only remain in place if they are made explicit before the architecture changes around them.


#Operation Sindoor #AI targeting #Indian Army #AARIC #TRINETRA #military doctrine #Lavender #Gospel