Eric Schmidt, the former CEO of Google, claims he can improve the US military’s integration of artificial intelligence (AI) into its operations. But is AI the panacea Schmidt claims? And if so, is he the person best suited to lead the way?
In a culture known for complex problems and inherent inefficiencies, the US military would seem ideally suited for the kind of cutting-edge solutions offered by artificial intelligence.
The US military – like its counterparts in China, Russia, India and elsewhere – has, over the past decade, invested heavily in research and development projects designed to bring AI to the battlefield in support of intelligence collection and analysis, logistical support, autonomous warfighting capabilities, healthcare, and cybersecurity. Indeed, in this computer-driven age, there isn’t an aspect of military operations where AI hasn’t been investigated as a potential enhancement.
While the US military hasn’t ignored AI and its utility, Eric Schmidt, the former CEO of Google, believes it has pursued it in a highly inefficient manner. And he is convinced he is the man who can best help the US military solve its AI challenges.
The arrogance and hubris exhibited by Schmidt – who has never served in the military and as such is unfamiliar with the ethos, culture and operational realities associated with organizing, training and leading millions of men and women for war – has rubbed some senior US military officers the wrong way. However, his wonkish approach to problem solving, combined with the inherent attractiveness of technology-driven solutions, has found an audience within the civilian component of the US defense establishment, whose members have invited Schmidt to sit on several advisory boards involved in the pursuit of AI-driven solutions to military problem sets.
Past attempts to incorporate Schmidt’s digital-centric philosophies into realms driven by the human condition have proven ineffective. In 2013, Schmidt co-authored a book, ‘The New Digital Age’, with Jared Cohen. Cohen had helped spearhead an effort built around a soft-power philosophy known as ‘digital democracy’, in which the US sought to exploit perceived digital commonalities (i.e., the integration of social media platforms, the use of electronic and computer-driven communications and, most tellingly, the notion that these digital interfaces exposed the youth of foreign societies to American culture, ingraining a predilection for American values that supplanted those of their own cultures and societies) for the purpose of altering the political makeup of areas such as the Middle East.
‘Digital democracy’ drove the US’ support of the Iranian opposition in 2009, the Arab Spring movement of 2010, and the so-called Syrian revolution of 2011. The failure of ‘digital democracy’ to bring about the desired change highlights the risks associated with seeking to digitally manipulate human emotions and values.
The biggest lesson learned from the abject failure of ‘digital democracy’ is that there is no algorithm that can replicate the incoherent complexities of human emotions. Schmidt’s Google experience is one where algorithms are written and applied to better comprehend complex data-driven problems. As data is accumulated and incorporated, AI can be used to automatically update and upgrade these algorithms, allowing for increased efficiencies.
This approach works in a relatively static environment, where assumptions of shared goals and objectives can be built into the algorithms used. This was the fundamental flaw with ‘digital democracy’: politics is not static, but rather dynamic, driven more by the unpredictable vagaries of human emotions than quantifiable data.
In his timeless tract on military matters, the Prussian military philosopher Carl von Clausewitz observed that war was but a “continuation of politics by other means.” Given the inherent relationship between politics and war, it can be extrapolated that the human complexities which proved fatal to ‘digital democracy’ would similarly undermine any effort to use AI to guide and direct decision-making in times of human conflict.
Eric Schmidt’s success in using algorithms to discern intent and desire on the part of consumers to better guide product placement has revolutionized advertising and sales, both online and in traditional brick-and-mortar establishments. This success has prompted Schmidt and other innovators to embrace the promise of AI-driven solutions for military applications.
There is a fundamental flaw in this approach – when Google was seeking to make sense of the ‘big data’ produced by the online consumer experience, there was no opposing force seeking to disrupt, mislead or otherwise defeat its effort. War is an inherently adversarial process, and like the French embrace of the Maginot Line to keep German armies from invading France, any AI-driven algorithm can be defeated by simply redefining the terms of the conflict.
From a technological standpoint alone, AI-driven applications have already been shown to be easily spoofed, whether by altering the painted lines of a road to force Tesla’s AI-driven car into oncoming traffic, or by convincing AI-controlled software that an image of a turtle was, in fact, a rifle.
Beyond the fact that an enemy in a time of war would constantly be seeking ways to defeat any AI-driven operation, the inherent incompatibility between the logic of data-driven AI and the illogic of human emotion makes an overreliance on AI during times of war a self-defeating proposition. One need only examine the US experience in Afghanistan, where a technologically sophisticated American military, having incorporated AI into nearly every aspect of its warfighting capabilities, has not been able to defeat the relatively unsophisticated forces of the Taliban. No amount of big-data manipulation can overcome the fact that US cultural norms will never mesh with Pashtun tribal reality. The US’ failure in Afghanistan is ‘digital democracy’ writ large, the difference between computer-driven artificialities and boots-on-the-ground reality.
US Air Force Colonel John Boyd, considered one of the great military thinkers of modern times, compressed the complexities of military decision-making into a brutally simple formula he called the ‘OODA loop’. The four components of the OODA loop – Observe, Orient, Decide and Act – at first seem perfectly suited for AI-driven enhancements. But a closer look at what Boyd was capturing in his model only underscores the reality that, at the end of the day, the human factor is the dominating force when it comes to the taking of human life in conflict.
Boyd spoke of “the senses” and “mental perspectives” that guided the “physical playing out of decisions.” No algorithm can ever be written that captures the visceral, gut-driven realities of decision-making during times of war. The key to military victory, according to the tenets of Boyd’s OODA loop, is to get inside the opponent’s decision-making cycle, catching them responding to situations that have already changed because of actions already taken. Against an AI-driven opponent, one will always be able to make the car drive into oncoming traffic, or the computer see a turtle as a rifle. By the time the algorithm adapts, it will be too late; the sensors collecting the data the AI needs will have been destroyed or spoofed, the power sources to the computers cut, and a bayonet driven into the heart of the operator by an opponent driven more by human sense, mental perspective, and physical action.
This is the reality of war that Eric Schmidt and civilian dilettantes like him will never understand, caught up as they are in a data-driven world that is as far removed from the modern battlefield as the Earth is from Mars.
Scott Ritter is a former US Marine Corps intelligence officer. He served in the Soviet Union as an inspector implementing the INF Treaty, on General Schwarzkopf’s staff during the Gulf War, and from 1991-1998 as a UN weapons inspector. Follow him on Twitter @RealScottRitter