Source: ISN
The Coming Human-Machine Forecasting Revolution in Foreign Policy?
Advances in human-machine forecasting may revolutionize foreign policy, argues Regina Joseph. But as the US continues to innovate in the field, a wary Europe recoils from US-led data initiatives. Will that continue, she wonders, or will human-machine forecasting bring the two allies back together again?
By Regina Joseph for ISN
Big talk may characterize the promises of this century’s burgeoning “big data” transformation, but in the world of foreign policy an altogether quieter revolution is taking place in the growing sub-field of futures studies.
Machine learning and human interaction are rapidly being fused into novel vehicles that yield new tools as well as surprising accuracy in strategic foresight and analysis. Together, nerds and wonks are collaboratively developing metrics to drive accountability, a result that could fundamentally change the way politicians and public alike view geostrategic decisions. On the plus side, this accelerating trend may help shatter the calcified policy-and-punditry establishment that failed to call events such as the Arab Spring or the economic crises in Europe and the US. On the negative side, these advances expose the widening gap in data-based research between America and Europe, its largest trading partner and strategic ally; left unaddressed on both sides, that gap will only further weaken the transatlantic relationship.
The US’ strong embrace of data science in the last five years alone has allowed information mapping and human-machine forecasting to evolve into a uniquely American export. The genesis and expansion of the global digital era could not have happened without early computer science breakthroughs by such stateside organizations as IBM and the Defense Advanced Research Projects Agency (DARPA), or the more recent dominance of entities such as Google, Microsoft and Amazon. Money is the crux: although R&D budgets in the US may be in relative decline, the US still outspends every other nation on innovation, which has been a key factor in entrenching its first-mover advantage. Moreover, the US cultural insistence on cost-benefit analysis behind decision-making, versus Europe’s preference for the precautionary principle, places a high value on information-driven performance.
In the latest human-machine forecasting advances, statistical techniques like regression analysis typically provide the mathematical backbone of the data crunching, with human input layered over that foundation. The human layer takes a few distinct forms: behavioral modeling, which can use scenario analysis and even games to understand how cultural norms will affect strategic outcomes; and crowdsourcing and social computing, which apply computational methods to examine how groups of people and things interact and what results from that interaction. Machine learning, oriented around mining value from big data sets, is being combined with human judgment via approaches like neural networks and Bayesian probability to generate new prediction algorithms and platforms.
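To make that fusion concrete, here is a minimal sketch of one common way to pool a statistical model’s probability with a crowd of human forecasts: averaging in log-odds space. Every number, weight and question in it is invented for illustration, and it reproduces no real platform’s method.

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def inv_logit(x):
    """Convert log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def fuse_forecasts(machine_p, human_ps, machine_weight=0.5):
    """Pool a model's probability with a crowd of human forecasts.

    Averaging in log-odds space is one standard way to combine
    probability estimates; the weight here is illustrative, not a
    tuned value from any real system.
    """
    crowd = sum(logit(p) for p in human_ps) / len(human_ps)
    fused = machine_weight * logit(machine_p) + (1 - machine_weight) * crowd
    return inv_logit(fused)

# Hypothetical question: "Will country X hold elections by June?"
machine_estimate = 0.30              # e.g., output of a regression model
crowd_estimates = [0.55, 0.60, 0.40] # individual human forecasters
print(round(fuse_forecasts(machine_estimate, crowd_estimates), 3))  # 0.404
```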
The most visible proponents of the revolution in information science and new human-machine methodologies, while American, have global reputations. Michael Bloomberg, the former mayor of New York City (a metropolis long maligned as an ungovernable basket case of crime and insolvency), championed the use of predictive analytics and metrics to generate better information about public services and thereby draw a direct line between costs and results. Through quantitative and visual mapping methods, especially during crises like Hurricane Sandy in 2012, Bloomberg not only advocated new ways for information to improve and verify strategic decisions but also cemented his data-obsessive reputation by becoming a leading figure in the development of “smart cities,” a reputation he intends to burnish globally via his “laboratory” initiative, Bloomberg Associates.
Nate Silver, who correctly called Congressional and Presidential election results even when media reports and polls suggested different outcomes, is yet another superstar in predictive data science and forecasting.
Prediction markets have been active since the early 2000s, especially in the financial world, where platforms such as the now-defunct Irish company Intrade sought to capitalize on crowdsourced forecasts of geopolitical and economic issues. Although Intrade proved short-lived after running afoul of US commodities trading laws, the use of crowdsourcing and data analytics will continue to grow and expand, both as a function of how the digital world itself is changing and because almost every sector will, out of financial necessity, demand accountability through verifiable metrics.
As the Internet grows from its origins into an increasingly mobile, device-dominated environment in which social connections and semantic mapping techniques collide, the concept of connective intelligence will characterize what some call Web 3.0. Among the key attributes of this phase is the trend towards open-source platforms, a result of Web 3.0’s characteristic transparency predicated on publicly available data.
This is a crucial development in allowing human-machine forecasting to proliferate: traditional intelligence and geopolitical analysis tend to be stovepiped within agencies and, externally, protectively guarded by states and their defense complexes. Classified analysis is rarely shared; when it is, the sharing happens strictly on a bilateral, ally-to-ally basis. But open, crowdsourced prediction platforms can offer credible and accurate alternatives to some (not all) types of intelligence gathering in aid of preventing global crises and conflicts, allowing a cooperatively networked multilateralism to emerge. The best open-source geostrategic foresight systems, by virtue of their transparency, can potentially mitigate mistrust, a necessity given the diplomacy-compromising track record of NSA spying missions. Perhaps even more importantly, they can be metrically evaluated on accuracy, establishing the kind of accountability budget-restricted nations must demand.
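The usual yardstick behind that kind of accountability is the Brier score: the squared error between a probability forecast and what actually happened, with lower scores better. (IARPA’s forecasting tournament, discussed below, scores forecasters on a variant of it.) A minimal sketch for binary questions, using an invented track record:

```python
def brier_score(forecast_p, outcome):
    """Brier score for a binary question: squared error between the
    forecast probability and the outcome (1 if the event happened,
    0 if not). Lower is better; a permanent 50/50 hedge earns 0.25."""
    return (forecast_p - outcome) ** 2

# Invented track record: (probability given, what actually happened)
record = [(0.8, 1), (0.3, 0), (0.6, 0), (0.9, 1)]
mean_brier = sum(brier_score(p, o) for p, o in record) / len(record)
print(round(mean_brier, 3))  # 0.125: one auditable number per forecaster
```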
Europe has dipped a toe in the water: in December 2013, the European External Action Service (the foreign policy arm of the EU) convened its first high-level conference on linking global crisis and situation rooms, the so-called “nerve centers” where states analyze potential security risks and threats so that appropriate diplomacy or tactics may be deployed. The conference made clear that better forecasting indices around conflict are indeed needed, and that international intransigence towards multilateral information-sharing remains a considerable impediment to networking individual states’ risk assessment efforts. If German chancellor Angela Merkel’s still-embryonic proposal to ring-fence European data networks at the state level, a response to US spying, advances to a heavy-handed end, the EEAS’ aspirations may be pushed that much further out of reach.
Meanwhile in the US, advances in human-machine predictive platforms continue. Georgetown University’s Kalev Leetaru is garnering plaudits for his co-development of the Global Database of Events, Language and Tone (GDELT), a catalog of human behavior extracted from an array of media that can be used to generate maps of a wide variety of trends, patterns and activities. Visualizations generated from this database track an incredible array of situations, whether riots and insurgencies, the geography of a single tweet, or political instability. A GDELT co-founder, Pennsylvania State University professor Philip Schrodt, was a principal in the development of the Conflict and Mediation Event Observations (CAMEO) event-coding scheme in the early 2000s and is now part of the Political Instability Task Force (PITF), a team putting together a worldwide atrocities event data codebook. DARPA, for its part, has sought to test the possibility of removing human bias from geostrategic analysis by financing the development of an automated conflict-prevention forecast platform known as the Integrated Crisis Early Warning System (ICEWS). And the Department of Defense’s Minerva Initiative may further expand on the predictive platform advances already made.
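For a flavor of what such event data involve: each record is one coded event (who did what to whom, where and when, filed under a CAMEO category), and much downstream analysis reduces to filtering and counting those codes. The sketch below assumes a simplified, hypothetical extract with named columns; the real GDELT files are far wider and tab-delimited.

```python
import csv
from collections import Counter

# CAMEO root code "14" covers protest events. The file layout below
# (a header row with date, cameo_code, country columns) is an
# assumption for illustration, not GDELT's actual format.
PROTEST_ROOT = "14"

def protest_counts(path):
    """Count protest-coded events per country in a simplified extract."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["cameo_code"].startswith(PROTEST_ROOT):
                counts[row["country"]] += 1
    return counts

# e.g. protest_counts("events.csv").most_common(5) -> unrest hotspots
```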
The US government’s prominent investment in this area is not confined to DARPA and the DoD. In 2011, through its R&D channel known as the Intelligence Advanced Research Projects Activity (IARPA, a sibling to DARPA), it began funding an ongoing four-year experiment to identify the best and most accurate futures forecasters in the world, a tournament known as the Aggregative Contingent Estimation (ACE) program. ACE is the first scientific test of how the human dimension stacks up against, and augments, predictive machine learning.
The winner of the ACE program has been the Good Judgment Project, led by University of Pennsylvania professors Philip Tetlock and Barbara Mellers and University of California, Berkeley professor Don Moore. Tetlock’s work in political forecasting (captured in his bestselling book Expert Political Judgment: How Good Is It? How Can We Know?) served as one of the bases for the IARPA project, which aims to establish not only quantifiable measurements of accuracy but also methods by which forecasting accuracy can be taught. Good Judgment’s group of volunteer forecasters (full disclosure: I am one of Good Judgment’s superforecasters and have participated in the tournament since its inception), statisticians, advisors and observers (including such notables as Nobel laureate Daniel Kahneman, whose work on heuristics and bias dovetails with Tetlock’s) has already garnered considerable attention in the US, not only by beating the best computer algorithms at predicting geopolitical events but also by keeping its performance accuracy consistent and verifiable.
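One aggregation insight to emerge from the Good Judgment research is “extremizing”: because each forecaster holds only part of the available information, a simple average of their probabilities tends to sit too close to 50%, and pushing the pooled estimate outward improves accuracy. A minimal sketch, with the exponent chosen purely for illustration (in the published work it is fitted to historical data):

```python
import math

def extremized_mean(probabilities, a=2.5):
    """Average the forecasts in log-odds space, then scale by an
    extremizing exponent a > 1. The value 2.5 is illustrative;
    real systems fit this parameter to past tournament data."""
    log_odds = [math.log(p / (1.0 - p)) for p in probabilities]
    pooled = a * sum(log_odds) / len(log_odds)
    return 1.0 / (1.0 + math.exp(-pooled))

crowd = [0.60, 0.65, 0.70]               # individually cautious forecasts
print(round(extremized_mean(crowd), 3))  # 0.826: pushed past every member
```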
David Ignatius of the Washington Post has linked Europe’s surveillance concerns with the promise that open-source platforms like Good Judgment hold. But he also makes the point that the US has as much to lose as Europe if neither fully engages the fresh thinking of new strategists and technologists in the tribal, incestuous foreign policy world, a world whose recent forecasting track record has been dubious at best. Other political scientists are noting the need for change: Harvard University professor Stephen Walt, who has raised concerns over the US’ current structural and domestic obstacles to crafting coherent strategy, has echoed Ignatius’ call and expanded on it at far greater length; and Tufts University professor Daniel Drezner has decried the peril that inaccurate, or just dead wrong, punditry in the political sphere can create.
As new advances in quantifiable futures forecasting appear on the horizon, both the US and Europe may yet find a way to repair and strengthen ties by taking advantage of these channels together. But given current circumstances and without a sea-change in approach, predicting beyond a 50% probability whether these two allies will overcome existing obstacles to do so could be a reckless bet.
Regina Joseph is the founder of Sibylink (www.sibylink.com), a Netherlands-based think tank consultancy devoted to future security foresight, as well as a Senior Research Fellow at the Clingendael Institute (www.clingendael.nl) in The Hague.