Entering an underground operations room feels like stepping into the future, yet the future is already here: it has not arrived with the roar of weapons, but with the silent pervasiveness of artificial intelligence. It has seeped into global strategy like a shadow, altering what we see before it alters what we do. Scholars from the Bulletin of the Atomic Scientists warn that the real danger is not AI acting — but AI distorting.
Every era rewrites its hierarchies, and this one does so with data. Power is no longer measured in missiles or armoured divisions, but in analytical depth, informational velocity, predictive capability. From the Indo‑Pacific to the Baltic, states observe each other through fogged glass, where the fear of falling behind becomes a detonator in itself. In a world where a millisecond can flip the balance, perception is already half the war.
The paradox is that the machine does not err — we lend it our errors. The more coherent a system appears, the more seductive it becomes. And the more seductive it becomes, the more we risk believing it infallible. AI “hallucinations” — phantom correlations, invented patterns, overconfident judgments — are not mere glitches. They are cognitive distortions that infiltrate the minds of decision‑makers. An optimistic model may interpret a provocation as accidental. A pessimistic one may read hesitation as the first sign of aggression. The algorithm does not think: it makes us think differently.
And behind this distortion lies an entire ecosystem of cognitive biases, silent saboteurs shaping strategic vision long before the algorithm enters the room:
Confirmation bias: we search for data that confirms what we already believe, feeding the machine with our certainties instead of our doubts.
Anchoring bias: the first number, the first satellite image, the first intercepted message becomes an anchor that drags every later interpretation with it.
Availability bias: what is vivid, recent, or dramatic feels more likely, even when it is not.
Selective perception: we see what we expect to see, filtering reality through experience and fear.
Overconfidence bias: the belief that our models, our analysts, our systems cannot be wrong, until they are.
Groupthink: teams converge on the same interpretation, suppressing dissent, reinforcing blind spots.
Sunk cost fallacy: once a narrative is built, we cling to it because abandoning it feels like admitting defeat.
Hindsight bias: after an event occurs, it suddenly appears “obvious,” blinding us to the complexity that preceded it.
When these biases seep into automated systems, they do not disappear — they scale.
The most unsettling example is Lavender, the system used by the IDF to identify potential militants.
Lavender had an estimated error rate of around 10%. It reportedly generated about 37,000 potential human targets based on correlations and patterns detected by the algorithm. A 10% margin of error means that at least 3,700 people may have been misclassified, with direct consequences for human lives and civilian infrastructure. Analyses report that human operators often spent only a few seconds verifying the targets suggested by the AI, further reducing effective human oversight.
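Because the civilian toll scales linearly with the error rate, the arithmetic deserves to be made explicit. A minimal sketch using only the figures reported above; the alternative rates are hypothetical, included solely to show how sensitive the human cost is to the model's accuracy:

```python
# Arithmetic behind the Lavender figures cited above. The 37,000 target count
# and the ~10% rate come from the reporting quoted in the text; the other
# rates are hypothetical, for sensitivity comparison only.
TARGETS_GENERATED = 37_000

for error_rate in (0.05, 0.10, 0.15):
    misclassified = int(TARGETS_GENERATED * error_rate)
    print(f"error rate {error_rate:.0%}: ~{misclassified:,} people misclassified")

# At the reported ~10% rate this yields ~3,700 people, the figure in the text.
```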
Implications of the bias
A 10% error rate in a high‑intensity war context is not a technical detail: it is a systemic bias that results in:
Large‑scale loss of civilian lives
Destruction of homes and non‑military infrastructure
Strategic damage, because hitting false targets can:
◦ fuel radicalization
◦ weaken international legitimacy
◦ compromise real intelligence
◦ entangle and intensify war-crime legal proceedings, with further delegitimating effects (“I obeyed the algorithm”).
Here, bias is not theory: it is lives, homes, geopolitical consequences spreading like shockwaves.
Meanwhile, war grows lighter, almost ethereal. Drones, robots and autonomous systems promise fewer casualties. But when no one dies, the moral threshold of conflict collapses. The political cost seems to evaporate, to become abstract. The temptation to strike first grows. Europe saw a glimpse of this when five unidentified drones flew over the French ballistic missile submarine base at Île Longue in Brittany: a silent, remote, pilotless penetration, yet more eloquent than the intrusion of an adversary’s combat aircraft. These drones had no significant attack capability compared to the pass of a multirole fighter or a nuclear-capable bomber (a common occurrence during the Cold War, and still today), but their message was strong and unmistakable. It is the sign of an age in which war does not roar; it whispers.
The true defence, then, is not against AI but against ourselves. We must slow down, impose uncertainty labels, simulate adversary reactions, build independent Red Teams and ensure nuclear decisions remain human. AI must not sprint; we must relearn how to walk through the fog.
At the same time, European intelligence is undergoing a quiet metamorphosis. OSINT and ADINT, once auxiliary tools, are now essential ones*. The flood of public data has become a strategic arsenal, but privacy norms force new ethics, new methods, new caution. It is a cultural revolution before it is a technological one.
Yet at the centre of everything remains the analyst. Their mind is the true battlefield. Uncertainty is the rule, ambiguity the raw material. And in this fog, cognitive biases are the enemy within: shaping the algorithm long before the algorithm shapes the world.
To survive complexity, time-tested analytic tools become essential again: ACH (Analysis of Competing Hypotheses) forces analysts to weigh competing assumptions and how they evolve, while Red Teaming creates the setting in which analysts confront what they would rather ignore. In ADINT, this means simulating adversaries, anticipating their moves, probing vulnerabilities, even when it is uncomfortable.
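A minimal sketch of the ACH discipline, purely illustrative: the hypotheses, evidence items and consistency scores below are invented (loosely inspired by the Île Longue incident above), but the ranking logic is the genuine core of ACH, which favours the hypothesis contradicted by the least evidence rather than the one supported by the most.

```python
# A minimal, hypothetical sketch of ACH (Analysis of Competing Hypotheses).
# Hypotheses, evidence items and scores are invented for illustration; a real
# ACH matrix is built and argued over by analysts, not hard-coded.
CONSISTENT, NEUTRAL, INCONSISTENT = "C", "N", "I"

hypotheses = ["accidental incursion", "deliberate probe", "third-party deception"]

# Each evidence item is scored against every hypothesis, in the order above.
matrix = {
    "coordinated flight pattern":      [INCONSISTENT, CONSISTENT, CONSISTENT],
    "no attack payload detected":      [CONSISTENT, CONSISTENT, CONSISTENT],  # fits all: non-diagnostic
    "transponders disabled":           [INCONSISTENT, CONSISTENT, NEUTRAL],
    "no actor claimed responsibility": [NEUTRAL, CONSISTENT, CONSISTENT],
}

# ACH's core rule: rank hypotheses by how much evidence CONTRADICTS them;
# the hypothesis with the fewest inconsistencies survives best.
inconsistencies = {
    h: sum(scores[i] == INCONSISTENT for scores in matrix.values())
    for i, h in enumerate(hypotheses)
}

for h, count in sorted(inconsistencies.items(), key=lambda kv: kv[1]):
    print(f"{h}: {count} piece(s) of inconsistent evidence")
```

Note that evidence consistent with every hypothesis carries no diagnostic weight; chasing it is precisely the trap confirmation bias sets.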
AI is not an antagonist. It is a mirror. It amplifies what we are, for better or worse. In a world suspended between information and illusion, vigilance and paranoia, true wisdom is not seeing farther but seeing more clearly. As long as humans remain at the centre of judgment, technology can be a beacon. If we abandon that centre, whether out of haste or hubris, the machine will cease to be an ally and become a multiplier of our mistakes. History regularly shows that human mistakes have always been the most devastating.
*OSINT (Open-Source Intelligence), despite a tradition stretching back millennia, experienced a contemporary surge after 1989 through a grassroots reform driven by specialists from the CIA, DIA, INR, FBI and law enforcement. It quickly spread to NATO and some partner countries from 1991 onwards. ADINT (Advertising Intelligence) has no official definition and may be considered a subset of OSINT. ADINT can also be used offensively, using seemingly innocuous advertising to infiltrate target devices. Israel has placed serious export controls on ADINT software (Editor's note).