by Roger Spitz

After working with countless decision-makers and interpreting the next-order impacts of our world’s rapidly accelerating rate of change, I believe humanity is at a crossroads. Evolutionary pressure prioritizes relevance, and that pressure may soon reach our strategic decision-making.

As a society, we must fundamentally adapt the education system (Spitz, 2020), prioritizing experimentation and discovery and instilling curiosity and comfort with uncertainty, starting in the playground and spreading all the way to our boardrooms. If we do not improve our ability to evolve in a nonlinear world, we could find human decision-making sidelined by algorithms: blindsided by increasing complexity while machines gradually learn to move up the decision value chain.

AAA often denotes the highest achievement. Those with finance backgrounds will recognize AAA as the highest level of creditworthiness; in academia, it evokes the top marks of alphabetical grading scales. The UNDP has used “Anticipatory, Adaptive and Agile” in the context of governance (Wiesen, 2020), as have esteemed colleagues in their recent article “Triple-A Governance: Anticipatory, Agile and Adaptive” (Ramos, Uusikyla, & Luong, 2020).

Stephen Hawking (2000) called the 21st century “the century of complexity.” Against that backdrop, we have for some time used AAA, “Anticipatory, Antifragile and Agility,” to define what humans should develop to improve their abilities as the world becomes more complex. This need to enhance our capabilities is all the more pressing as machines learn fast and take on increasingly higher-level human functions.

While the term anticipatory is intimately related to foresight, for our AAA taxonomy we borrow the definition of antifragile from Taleb (2012): “Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better.” And we use “agility” in the context of the Cynefin framework (Snowden & Boone, 2007), looking at properties such as our ability to be curious, innovative, and experimental, knowing how to amplify or dampen our evolving behaviors depending on feedback, and thus allowing instructive patterns to emerge, especially in complex adaptive systems.

Decision-Making: No Longer a Human Exclusive

Decision-making for key strategic topics (like investments, research and development (R&D), and mergers and acquisitions (M&A)) currently mandates human involvement, typically through Chief Executive Officers, leadership teams, boards, shareholders, and governments. Looking forward, the question is not how much machines will augment human decision-making but whether in time humans will remain involved in the process at all.

Through machine learning (ML) and natural language processing (NLP), the capabilities of artificial intelligence (AI) in strategic decision-making are improving rapidly, while human capacities in this area may not necessarily be progressing. It could even be the opposite: while machines are deemed by many to augment humans in a positive way, the Pew Research Center cautions that AI could reduce individuals’ cognitive, social, and survival skills: “People’s deepening dependence on machine-driven networks will erode their abilities to think for themselves [and] take action independent of automated systems” (Anderson, Cohn, & Rainie, 2018).

There are many decision cycle models, including the much-admired OODA loop[1] (Observe, Orient, Decide, Act).

At its core, we frame decision-making as following a simple, three-step process:

  1. Detect and collect intelligence.
  2. Interpret the information.
  3. Make and implement decisions.
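For illustration, this three-step frame can be reduced to a few lines of code. The following is a minimal, hypothetical Python sketch; the signals, threshold, and actions are invented and do not model any real system:

```python
# Minimal sketch of the three-step decision frame (illustrative only;
# the signal values, threshold, and actions are hypothetical).

def detect_and_collect(sources):
    """Step 1: gather raw signals from the available sources."""
    return [signal for source in sources for signal in source]

def interpret(signals):
    """Step 2: turn raw signals into an assessment (here, a naive average)."""
    return sum(signals) / len(signals) if signals else 0.0

def decide_and_implement(assessment, threshold=0.5):
    """Step 3: commit to an action based on the assessment."""
    return "act" if assessment > threshold else "wait"

sources = [[0.2, 0.9], [0.7]]  # hypothetical intelligence feeds
assessment = interpret(detect_and_collect(sources))
print(decide_and_implement(assessment))  # -> "act"
```

The point of the sketch is that each step feeds the next: starve step 1 or step 2 and step 3 degrades, no matter how good the decision rule is.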

Every one of these steps is essential to a successful outcome. Consider two failures in this process: poor intelligence (failure at step 1) led to the Bay of Pigs invasion, while ineffective interpretation (failure at step 2) contributed to Israel’s surprise at the 1973 October War.

Step 3 is sometimes harder to isolate. Making and implementing decisions also includes deciding not to set up a system to detect or collect intelligence in the first place, or limiting investment in the resources needed to interpret such information. One could argue that the lack of preparation behind the improvised governmental responses to COVID-19 was a failure at all three steps.

Corporate history is littered with examples of leadership teams whose cognitive biases led to poor decisions that extrapolated the past into linear predictions. This often results from humans finding it difficult to process “exponential” trends (which initially do not seem to grow fast) and being oblivious to next-order implications.
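A small numerical illustration of why exponential trends deceive (the growth rates below are arbitrary, chosen only to show the shape of the curves):

```python
# Linear growth (10 units per period) vs. exponential growth (doubling
# from 1): the exponential looks negligible early on, then suddenly
# dominates, overtaking the linear trend around period 6 here.
for period in range(0, 13, 3):
    linear = 10 * period
    exponential = 2 ** period
    print(f"period {period:2d}: linear={linear:4d}  exponential={exponential:5d}")
```

For the first few periods the linear trend looks safely ahead; by period 12 the exponential is more than thirty times larger. Leaders extrapolating from the early periods are blindsided by the later ones.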

Telecom operators had the option to innovate in over-the-top (OTT) technologies rather than relying on historic cash cows like text messaging and international calls. Choosing the cash cows paved the way for new players like Skype, WeChat, and WhatsApp to lead with disruptive exponential technologies. In the same vein, Verizon acquired the video conferencing platform BlueJeans in April 2020 as a late defensive move, prompted by the pandemic and the explosion of Zoom, rather than in anticipation of the strategic need for an enterprise-grade video conferencing platform for the future of work (remote), health (telemedicine), and education (online learning). The pandemic accelerated this need, but a proper grasp of the first two decision-making steps should have led Verizon to make those strategic moves years earlier instead of playing catch-up with Zoom today. Similarly, Disney only woke up in 2017, when it acquired control of BAMTech for streaming technology, having let Netflix dominate the space for many precious years.

In 2011, Vincent Barabba wrote, “In essence we alerted the management team that change in the capturing of images through digital technologies was coming, and that they had a decade to prepare for it.” Despite on-target market assessments, Kodak did not make the correct strategic decisions (Barabba, 2011).

Given the speed and scale of change, the question of whether and how we can enhance our decision-making capabilities is a legitimate one.

Machines are moving up the decision-making value chain

Today, humans primarily use AI for insights, but AI’s skills could surpass human abilities at every step of the process. AI is already improving in predictive analytics, steadily moving along the spectrum from describing and predicting toward prescriptive outcomes that recommend specific options.
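To make the predictive-to-prescriptive distinction concrete: predictive analytics estimates what may happen, while prescriptive analytics recommends an option by optimizing over those estimates. A minimal sketch, with scenarios and payoffs invented purely for illustration:

```python
# Predictive: estimated probabilities of each scenario (stand-ins for a
# model's output). Prescriptive: recommend the option with the best
# expected payoff under those estimates.
scenario_probs = {"demand_up": 0.6, "demand_down": 0.4}  # predictive output
payoffs = {                                              # hypothetical payoffs
    "expand": {"demand_up": 10, "demand_down": -6},
    "hold":   {"demand_up":  3, "demand_down":  1},
}

def expected_payoff(option):
    return sum(p * payoffs[option][s] for s, p in scenario_probs.items())

recommendation = max(payoffs, key=expected_payoff)
print(recommendation)  # -> "expand" (expected payoff 3.6 vs. 2.2)
```

Prediction stops at the probabilities; prescription turns them into a recommended action.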

This movement up the value chain is in part fueled by exponential technologies:

  • Machines are archetypally used for optimization, automating processes and repetitive tasks.
  • We are finding them in augmentation roles as well, where they lend their greater processing power to perceive and learn (such as in radiology).
  • AI is even tackling the formerly human-mandated domain of creativity: Google Arts recently partnered with British choreographer Wayne McGregor to train an AI to choreograph dances (Leprince-Ringuet, 2018).

A significant advantage AI has over humans comes from stacked innovation platforms that scale rapidly, wherein massive amounts of networked data provide ever-deeper insights through signal detection, trend interpretation, and pattern recognition, at scale and on unstructured data. ML can also unearth non-intuitive information and connections, while NLP is effective at extracting meaning from unstructured text.
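As a toy illustration of surfacing patterns in unstructured text, a few lines of scikit-learn can cluster short documents by vocabulary. This is a deliberately tiny sketch with invented snippets; the production systems described below operate on vastly larger and noisier corpora:

```python
# Toy signal detection over unstructured text: TF-IDF features plus
# k-means clustering group documents a human might not have sorted.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [  # hypothetical snippets from news feeds and social posts
    "pneumonia cases reported near seafood market",
    "more pneumonia cases and respiratory admissions reported",
    "new energy drink brand trending with athletes",
    "energy drink brand mentions surging among gym goers",
]
X = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # e.g. [0 0 1 1]: health signals vs. consumer-brand signals
```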

AI’s current superiority at detection and collection, with scale helping interpretation

AI already surpasses human ability in trend detection and in signal and pattern recognition for unstructured data at scale:

  • One company, BlueDot, used NLP and ML to flag the COVID-19 outbreak before the US Centers for Disease Control and Prevention did.
  • Another company, Social Standards, scrapes Instagram and Twitter to detect emerging local brands and competitors before they reach peak visibility.
  • The geospatial analytics company Orbital Insight mines digital imagery to predict crop yields or construction rates of Chinese buildings.

Algorithm-augmented predictive insights drive decision-making

Going a step further than analytics-driven decision support, AI accelerates “infinite” simulations, evaluations, and developments, reducing the cost of testing for major R&D efforts and drug discovery:

  • Halicin was the first antibiotic discovered using AI; the model identified molecules that can treat even formerly untreatable bacterial strains.
  • The OCD medication DSP-1181 was the first AI-designed drug molecule to enter phase 1 clinical trials. Thanks to ML, researchers completed in one year what would normally have taken several.

In the future, will AI perform autonomous, prescriptive strategic decision-making?

AI is currently tasked with decision-assistance, not autonomous strategic decision-making. Why? The situation is beyond “complicated”.

Decision-making in complex situations is neither humans’ nor AI’s strength. In Dave Snowden’s Cynefin framework (Snowden & Boone, 2007), the complex domain involves unknown unknowns, where there are no right answers and cause and effect can only be established retrospectively. So if there is solace to be found in humanity’s poor performance here, it is that machines cannot currently do better: AI’s comfort zone is the complicated domain, where there is a range of right answers, the unknowns are known, and causality can be analyzed, so the problem plays well to data.

Most applications of predictive interpretation are a joint effort (augmentation) between humans and AI. As AI progresses exponentially, the human role may shrink in a number of areas over time.

In analyzing the trend of machine involvement, one thing is clear: AI is playing a greater role in every step of the decision process. It is starting to take over areas that we previously thought were too important to entrust to machines or required too much human judgment:

  • In 2017, software from J.P. Morgan completed 360,000 hours of legal due diligence work in seconds.
  • A mere two years later, in late 2019, Seal Software (acquired in early 2020 by DocuSign) demonstrated software that helps automate the creative side of legal work, suggesting negotiation points and even preparing the negotiations themselves.
  • EQT Ventures’ proprietary ML platform Motherbrain made more than $100 million in portfolio company investments by monitoring over 10 million companies, its algorithms taking data from dozens of structured and unstructured sources to identify patterns.
  • A German startup called intuitive.ai delivers AI solutions to foster informed strategic management decisions, while UK-based startup 9Q.ai is developing “Complex AI” to optimize multi-objective strategic decision-making in real time, including for the management consultancy sector.

As we are seeing with the current crisis, the extent of international failures in preparation (such as ignoring warnings from the US’s own intelligence community, Bill Gates, and the World Economic Forum) is just the tip of the iceberg; our responses also lack the problem-solving frameworks the situation requires. So currently, neither humans nor AI perform well in complex systems. And few leaders embrace the experimental mode, which requires curiosity, creativity, and diverse perspectives to allow unpredictable but instructive patterns to emerge.

Will we rise to the challenge of accelerating, disruptive, and unpredictable complex times? AI will certainly keep learning, even beyond the complicated domain, as algorithms learn to cope with problems that lack a clear range of right answers:

  • Matthew Cobb (2020) provides a detailed examination of whether our brain is a computer, spanning the views of Gary Marcus (“Computers are, in a nutshell, systematic architectures that take inputs, encode and manipulate information, and transform their inputs into outputs. Brains are, so far as we can tell, exactly that.”) and those neuroscientists who consider that, even if this were true, “reverse engineering” the brain may not be a given.
  • AI is developing fast in handling complexity, with progress in key areas such as artificial neural networks, which are broadly inspired by the biological neural networks that constitute brains and excel at pattern recognition (a toy sketch of such a network follows this list). In his seminal textbook on AI, Russell acknowledges the views of philosophers who believe AI will never succeed, while expanding on how intelligent agents reason logically with knowledge, including decision-making in uncertain environments, and on the role of artificial neural networks in generating the knowledge intelligent agents need to make decisions (Russell & Norvig, 2020).
  • There are of course limitations to what AI can do today, partly due to the data itself, and even more so in complex systems (“Data means more information, but also means more false information” (Taleb, 2013)). In The Black Swan, Taleb (2007) warns against misuses of big data, including the “rearview mirror” (confirmation mistaken for causality), an instance of poor reasoning in which a narrative built around the data yields a history that looks clearer than empirical reality. He also flags “silent evidence”: one cannot rely on experiential observations alone to reach a valid conclusion, since missing data, spurious correlations, and previously unobserved events can have tremendous impact.
  • Earlier this year, Ragnar Fjelland (2020) wrote “Why general artificial intelligence will not be realized.” While acknowledging major milestones in AI research (including DeepMind’s AlphaGo in deep reinforcement learning), he argues that such systems lack flexibility and find it difficult to adapt to changes in their environment. Like Taleb, he focuses on correlation versus causality, and on AI’s lack of understanding, a major limitation today.
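To ground the neural-network point above, here is the toy sketch promised: a two-layer network learning XOR, a pattern no single linear rule captures. This is a pedagogical example in NumPy, not a claim about how production systems are built:

```python
# Toy artificial neural network: one hidden layer learns XOR by
# gradient descent on a mean-squared-error loss.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # hidden -> output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)              # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # backpropagation
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out
    b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```

Nothing in the code “understands” XOR; the network merely recognizes the pattern, which is precisely the gap between pattern recognition and understanding that Fjelland highlights.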

As AI continues to develop, machines could become increasingly legitimate in autonomously making strategic decisions, where today humans still have the edge. If humans fail to become sufficiently AAA, rapidly learning machines could surpass our abilities. They do not have to reach general artificial intelligence or become exceptional at handling complex systems, just better than us.

How Humans can Remain Relevant

To remain relevant, humans must become increasingly anticipatory and antifragile, with agility.

Anticipatory

Taleb (2007) uses the famous example of Black Swans to describe unforeseeable events with large impacts. In many cases, however, the more apt metaphor is the Gray Rhino (Wucker, 2016).

Gray Rhino events are highly probable and obvious, yet we still fail to respond. Perhaps we are in denial or pass the buck. We may diagnose the danger half-heartedly… and then panic when it’s too late. COVID-19 was a Gray Rhino.

The leadership of companies, countries, and organizations is often not caught short by Black Swans but unable or unwilling to prepare for Gray Rhinos. Knowing your Black Swans from your Gray Rhinos is key to becoming more anticipatory, and to no longer being trampled. In addition, leadership must:

  • Learn to qualify weak signals and interpret the next-order impacts of change, connecting the shifting dots with action-triggers.
  • Beware of relying on statistical risks that assume a stable and predictable world.
  • Understand the ramifications of exponential change (which moves “gradually then suddenly”), as the world is not a linear evolution from the past. Remember Amara’s law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
  • Practice visioning: map out plausible futures, with the agency to realize our preferred option. Short- and long-term strategic decision-making are both needed today, prioritizing innovation as well as trial and error.
  • Embrace “pre-mortem” analysis to identify threats and weaknesses via the hypothetical presumption of failure in the near-future.

Antifragile

Continuing with Snowden’s Cynefin framework: the complex systems we are creating must at least be resilient to shocks and changes, or better still benefit from them; otherwise we could find them crushed.

Drawing analogies with Taleb’s Antifragile (Taleb, 2012): fragile systems are damaged by disorder, receiving more downside than upside from shocks. Excessive debt is a fragile strategy. So are stock buy-backs, common as they are. Financial theory, predicated on a stable and orderly world, tells a company not to hoard cash, but cash can be a life preserver in unpredictable times.

Antifragile systems strengthen from disorder: shocks and errors make them stronger rather than breaking them. Silicon Valley, for example, responds well to pressure. Its experimental, fluid mindset allows it to rapidly find new solutions; it innovates and evolves, strengthening under natural-selection-like pressure. It is no coincidence that Silicon Valley holds some $450 billion in cash.
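Taleb’s distinction can be made concrete with a toy simulation: under zero-mean shocks, a concave payoff loses on average while a convex payoff gains. The payoff curves here are arbitrary; the sketch only illustrates the asymmetry:

```python
# Fragile vs. antifragile as concavity vs. convexity: under zero-mean
# shocks, the concave payoff loses on average and the convex one gains
# (Jensen's inequality in miniature).
import random

random.seed(42)
shocks = [random.gauss(0, 1) for _ in range(100_000)]

fragile = lambda x: -x * x      # concave: more downside than upside
antifragile = lambda x: x * x   # convex: more upside than downside

for name, payoff in [("fragile", fragile), ("antifragile", antifragile)]:
    avg = sum(payoff(s) for s in shocks) / len(shocks)
    print(f"{name:12s} average payoff under shocks: {avg:+.2f}")
# fragile      -> roughly -1.00
# antifragile  -> roughly +1.00
```

The shocks average to zero, yet the two systems fare very differently: the asymmetry of the response, not the shocks themselves, determines who gains from disorder.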

Many of our economic systems and companies are fragile, having followed the formulaic “strategic playbook” of optimization and hyper-efficiency in a world they presumed linear and predictable. When shocks or chaos strike, they buckle. If we are to remain relevant (i.e., not see our strategic decision-making substituted by machines), we must create innovative social and economic networked ecosystems that strengthen under stress.

Agility

Our centralized, hierarchical organizations are not nimble. Most move slowly, repeating the same actions they have always taken. These strategies do not respond well to constantly changing circumstances.

As GE’s Sue Siegel said in 2018, “The pace of change will never be as slow as it is today.” As the world accelerates exponentially, we must develop agility by:

  • Understanding the entire system better, given the unpredictability and interdependencies of its moving parts, where the whole is more than the sum of the parts.
  • Developing emergent behaviors (amplified or dampened to move us in the right direction), experimenting and tinkering to fail fast and allow instructive patterns to emerge.
  • Leveraging on liminality for transformation: use the in-between liminal spaces of uncertainty to drive creative destruction (Schumpeter, 1942) and disruptive innovation (Christensen, 1997).
  • Decentralizing, allowing functional redundancies within an ecosystem to substitute for failures.
  • Harnessing curiosity, creativity, and diverse perspectives to go against the grain, because today’s standard knowledge will never solve tomorrow’s surprises. Cross-fertilization favors T-shaped profiles that couple deep expertise with broad experience, move naturally between disciplines, and create new combinations in a world where patterns are hard to interpret and generalists flourish (Epstein, 2019).

We must create lean, nimble cells that attack problems independently, influencing leverage points to create attractors for emergence. Inspired by nature itself, such agile strategies have arisen in all sorts of areas, from lean startups to guerrilla fighters.

Looking Forward

Thus far, humans have excelled at decision-making, but our comparative advantage may not necessarily continue.

Our current mental frameworks may not be versatile enough to navigate and manage constant, unpredictable change as AI evolves fast, including in the field of Emotion AI (also known as Affective Computing or Artificial Emotional Intelligence), where startups such as Affectiva build systems that recognize, interpret, simulate, and react to human emotion.

As the world and its systems get more complex, there are a number of options for the future of strategic decision-making, including:

  1. Humans adapt and improve our decision-making, becoming more AAA, so we can continue to add value when partnering with machines. Here AI provides insights that augment decisions, helping us make more informed choices and uncover new opportunities without replacing humans.
  2. Humans fail to adapt to our increasingly complex world and instead find ourselves marginalized or substituted in the key process of decision-making, which could be taken entirely out of our hands.

There may be virtues in a future where we are relieved not only of the more mundane repetitive tasks but also of the pressures and responsibilities of decision-making. However, this raises the question of choice: do we proactively decide on our position in the value chain, or do we have a given spot imposed on us?

Ultimately it is an existential question of agency, as evolutionary pressure dictates that the best decision-makers will be the ones who survive. If we do not fundamentally redesign our education and strategic frameworks to create more AAA leaders, we may see that choice made for us.

The dystopian alternative: from C-Suite to A-Suite?

I would prefer a world where human decisions continue to propel our species forward, and we should consider what it takes to make that world more likely. If we do not, our current C-suite of leaders might find itself replaced by an A-suite (of algorithms).

Using Curry and Hodgson’s (2008) three horizons model, our possible futures are:

  • Now: our present embedded with “Pockets of Future.”
  • Hyper-Augmentation: smart, algorithm-augmented predictive decision-making is matched with AAA humans, creating a symbiotic human-machine partnership.
  • AI Future: prescriptive AI autonomously evaluates the range of potential options (consequences, payoffs, and so on), selecting preferred decisions based on optimized returns (quality of outcomes, speed, cost, risk, and so on) without necessarily involving humans.

This third horizon of AI Future paves the way for new prescriptive decision-making models:

  • Decentralized Autonomous Organizations (DAOs): self-organizing collectives determine and execute smart contracts, enabling frictionless, automated cooperation at a collective level. Armed with smart data, insights from analytics, and ML’s predictive capabilities, DAOs make optimized decisions.
  • Swarm AI: large groups augment their collective intelligence by forming swarms in real time.
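To illustrate the swarm idea in miniature, here is a basic particle swarm optimization: simple agents sharing their best findings converge on an answer no single agent started with. This is a generic sketch of swarm intelligence, not the proprietary systems behind commercial “Swarm AI” platforms:

```python
# Minimal particle swarm optimization: agents share their best positions
# and the swarm converges on the minimum of a simple objective.
import random

random.seed(1)
objective = lambda x: (x - 3.0) ** 2  # hypothetical target: minimum at x = 3

n = 20
positions = [random.uniform(-10, 10) for _ in range(n)]
velocities = [0.0] * n
personal_best = positions[:]
global_best = min(positions, key=objective)

for _ in range(50):
    for i in range(n):
        # Pull each particle toward its own best and the swarm's best.
        velocities[i] = (0.5 * velocities[i]
                         + 1.5 * random.random() * (personal_best[i] - positions[i])
                         + 1.5 * random.random() * (global_best - positions[i]))
        positions[i] += velocities[i]
        if objective(positions[i]) < objective(personal_best[i]):
            personal_best[i] = positions[i]
    global_best = min(personal_best, key=objective)

print(round(global_best, 3))  # converges near 3.0
```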

Adopting AAA can give us more agency over our futures. The longer we wait, the greater the risk that we are moved further down the decision value chain than our preferred future envisions.

About the Author

Roger Spitz is the Founder & CEO of Techistential (Foresight Strategy & Futures Intelligence) and founding Chairman of the Disruptive Futures Institute. He is an advisor and speaker on artificial intelligence. Roger has spent two decades leading investment banking businesses; as Head of Technology M&A, he advised CEOs, founders, boards, shareholders, and decision-makers of companies globally.

References

Anderson, J., Cohn, S., & Rainie, L. (2018, December 10). Artificial intelligence and the future of humans. Pew Research Center.

Barabba, V. P. (2011). The Decision Loom: A Design for Interactive Decision-making in Organizations. Triarchy Press.

Christensen, C. (1997). The Innovator’s Dilemma. Harvard Business Review Press.

Cobb, M. (2020). The idea of the brain: The past and future of neuroscience. Basic Books.

Curry, A., & Hodgson, A. (2008). Seeing in multiple horizons: Connecting futures to strategy. Journal of Futures Studies, 13(1), 1-20.

Epstein, D. (2019). RANGE: Why generalists triumph in a specialized world. Macmillan.

Fjelland, R. (2020). Why general artificial intelligence will not be realized. Humanities and Social Sciences Communications, 7, Article 10. https://doi.org/10.1057/s41599-020-0494-4

Hawking, S. (2000, January 23). Interview. San Jose Mercury News.

Leprince-Ringuet, D. (2018, December 17). Google’s latest experiment teaches AI to dance like a human. Wired UK. https://www.wired.co.uk/article/google-ai-wayne-mcgregor-dance-choreography

Ramos, J., Uusikyla, I., & Tuan Luong, N. (2020, April 3). Triple-A Governance: Anticipatory, Agile and Adaptive. Journal of Futures Studies. https://jfsdigital.org/2020/04/03/triple-a-governance-anticipatory-agile-and-adaptive/

Russell, S. J., & Norvig, P. (2020). Artificial intelligence: A modern approach (4th ed.). Pearson.

Schumpeter, J. A. (1942). Capitalism, Socialism and Democracy. Harper & Brothers.

Snowden, D. J., & Boone, M. E. (2007, November). A leader’s framework for decision making. Harvard Business Review.

Spitz, R. (2020, March 17). Innovation starts when you fall off the edge of the playground. MIT Technology Review. https://insights.techreview.com/innovation-starts-when-you-fall-off-the-edge-of-the-playground/

Taleb, N. N. (2007). Incerto: Vol. 2. The Black Swan: The Impact of the Highly Improbable. Random House.

Taleb, N. N. (2012). Incerto: Vol. 4. Antifragile: Things That Gain From Disorder. Random House.

Taleb, N. N. (2013, February 8). Beware the Big Errors of ‘Big Data’. Wired. https://www.wired.com/2013/02/big-data-means-big-errors-people/

Wiesen, C. (2020, May 14). Anticipatory, Adaptive and Agile Governance is key to the response to COVID-19. UNDP. https://www.asia-pacific.undp.org/content/rbap/en/home/presscenter/articles/2020/anticipatory–adaptive-and-agile-governance-is-key-to-the-respon.html

Wucker, M. (2016). The Gray Rhino: how to recognize and act on the obvious dangers we ignore. St Martin’s Press.

  1. Developed by USAF colonel and military strategist John Boyd.