By Stephen Plimmer

Preparing for AI with the 'futures cone'

Updated: Jan 7

2023 was the year that AI moved from trend reports to the mainstream economy. Every week has seemed to produce another key AI news story. The EU passed the first AI regulation this past week, but the future remains shrouded in both excitement and worry. We suggest a framework (the Futures Cone) to help organisations to anticipate both expected and unexpected events over the next few years, as the trajectory becomes clearer.



It's the 2020s, so AI turns 80

2023 was the year that AI moved from trend reports to the mainstream economy. Every week has seemed to produce another key AI news story: a major investment, a new product launch, a new discovery or an eye-opening innovation. In the week of writing this post, the EU reached an agreement to pass laws to regulate AI for the first time, including banning emotional surveillance and enforcing the disclosure of data sources. However, the future remains shrouded in conjecture and uncertainty, raising the question of how organisations can form mid- and longer-term plans in which AI is likely to play a critical role. In this article, we look at the Futures Cone as a tool to help organisations with strategic thinking.


AI is not new. Philosophical ideas about creating new knowledge from existing concepts using mechanical methods are traceable to the 1300s


Our modern concept of “intelligent machines” is often deemed to have originated from events in the 1940s, including the machines that broke the Enigma code. An influential and visionary 1945 essay by Vannevar Bush called “As We May Think” imagined a machine for building new human knowledge from existing knowledge. The 1940s, incidentally, provided a vibrant decade for many other inventions that remain integral to modern life: the jeep, the colour TV system, the kidney dialysis machine, the atomic bomb and the microwave oven.


The term “AI” itself is attributed to the scientist John McCarthy at what could be considered the world’s first AI conference, the Dartmouth Workshop in 1956, where academics met to discuss a cluster of emerging ideas of the time related to “thinking machines”.


The first chatbot, ELIZA, came in 1964.


“Expert systems” emerged into mainstream commercial applications through the 1970s and 1980s. The decisions of human experts were coded into rules that computers could then apply to new problems, with the aim of making the predictions the human expert would have made. Early applications were often in medical diagnostics. By the 1980s, these methods were used by most Fortune 500 companies.


The first robot designed to recognise and express emotions, Kismet, arrived in 1997.


Expert systems morphed into so-called “rule-based systems”, used in popular business software created by the likes of Oracle, SAP and Siebel in the 2000s.


The latest incarnation of AI is Generative AI: the aim is not so much prediction as the generation of new content from existing patterns in a wide range of data types (numeric, text, image and sound).


While Netflix took 3.5 years to reach 1m users, the seminal Generative AI product, ChatGPT, took 5 days to reach the same milestone, and just two months to pass 100m users. By the end of 2023, ChatGPT had an estimated 200m global users.


Correspondingly, AI applications have exploded in creative domains, like content-writing, image-creation, research and innovation-idea generation. 


Implications of Generative AI

The modus operandi of Generative AI means that AI can now teach itself, and indeed teach itself how to teach itself, since learning processes are themselves simply patterns in data. This raises the possibility of exponential rates of machine learning, and of capabilities that we can’t presently predict, particularly as computing power increases. At the same time, humans can no longer necessarily decipher how such algorithms have produced their results.


Regulators in China, the UK and US are moving quickly to follow the EU’s lead and combat the foreseeable threats, but it is a game of catch-up. There has already been a series of high-profile events highlighting the vulnerabilities of AI. During 2023, we saw the first case of a US man, Robert James, wrongfully detained based on an AI error. A new platform called Genderify, purporting to identify gender from a person’s name and title, was shut down within five days after exhibiting a range of biases. An application designed for the UK passport office was shut down after it was found that women with dark skin were twice as likely to have their applications rejected.


Evident threats exist from hackers. In 2023, Russian hackers were held responsible for attacks on US Microsoft Teams users. As AI is increasingly trusted to control everything from cars to energy systems, the possibility emerges of large-scale social and economic disruption. A recent study at North Carolina State University highlighted critical vulnerabilities in such systems by showing how hackers could feed AI systems data that confused them, preventing manual overrides.

 


There is no long-term certainty that regulators will stay ahead of the technology, as it is impossible to regulate for scenarios that no one has yet envisaged. Even if we have faith that developed countries will manage to create effective regulations, there is no guarantee that all parts of the world, particularly increasingly sophisticated political extremist groups, will adhere to the same policies and ethics.


The futures cone

Right here and now, the longer-term (10+ years) possibilities, opportunities and risks of AI span a broad range of scenarios. For those wanting to prepare their industries and organisations for an uncertain world, traditional forecasting methods are inadequate.


We know this because plentiful studies demonstrate that our ability to predict the future is limited and flawed: not just because of our biases, but because our “cognitive eyesight” fades with the complexity of interacting trends, which give rise to improbable events and turning points in current trends.


To illustrate: in 2017, Luke Muehlhauser reviewed a study of expert forecasts made in 1967 about which innovations would have emerged by 2000. Only 45% of them had come true, despite the belief at the time that 90-95% would. Some forecasts were remarkably prescient, such as predictions of “inexpensive high-capacity, worldwide, regional, and local (home and business) communication (perhaps using satellites, lasers, and light pipes)” and “direct broadcasts from satellites to home receivers.” Notably bad forecasts predicted “human hibernation for relatively extensive periods (months to years)” and “the use of nuclear explosives for excavation and mining.”


We need different tools and approaches, namely those offered by the disciplines of foresight and futures thinking, which account for probabilities of different future scenarios. “The Futures Cone”, as per Fig 1, credited to Hancock and Bezold (1994), is one such tool.


Fig 1: The Futures Cone



The Futures Cone conceptualises the notion that everything beyond the present moment is a “potential” future. Some futures are more likely than others, based on what we see in the present moment and what we know is unchanging about human nature, patterns of technological progress and the political behaviour in different sorts of society. As time evolves, uncertainties compound each other so that more scenarios become possible.
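This compounding of uncertainty can be illustrated with a small Monte Carlo sketch (a hypothetical illustration, not part of the original framework): a trend index grows by an expected amount each year plus a random shock, and because the shocks compound, the spread of possible outcomes widens with the time horizon, tracing out the cone.

```python
import random

def outcome_spread(years, n_paths=5000, drift=0.05, noise=0.10, seed=1):
    """Spread (max - min) of simulated outcomes for a trend index that
    grows by an expected `drift` each year plus a random shock of
    +/- `noise`. Shocks compound multiplicatively year on year."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_paths):
        value = 1.0
        for _ in range(years):
            value *= 1.0 + drift + rng.uniform(-noise, noise)
        finals.append(value)
    return max(finals) - min(finals)

# The cone: the further ahead we look, the wider the range of outcomes.
spreads = [outcome_spread(y) for y in (1, 5, 10)]
assert spreads[0] < spreads[1] < spreads[2]
```

The drift and noise values here are arbitrary; the point is only that compounding shocks widen the envelope of possible outcomes as the horizon extends.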


The Projected Future is the “baseline” forecast, extrapolating current trends into the future. It is considered the most probable.


Probable Futures are those we think of as “likely” to happen. We recognise that there is a range of probable outcomes, reflecting uncertainties in current trends such as product-adoption rates, prices or government taxes.


The Plausible Future is what could happen based on our current understanding of the world, and appreciation of how one thing can lead to another.


The Possible Future is what might potentially happen, but requires knowledge that we don’t yet possess to determine its likelihood.  


The Preposterous Future was an idea introduced by Joseph Voros in homage to futurist James Dator. Dator noted that many eventualities had appeared preposterous just a relatively short time (years or decades) before materialising, and proposed that “any useful idea about the future should appear ridiculous”.


The Preferable Future covers some of the types above and is the one people would think “ought to happen”. It relies more on an emotional value-judgement and less on analysis. People may disagree on what future is “preferable”. 


Applying the Futures Cone to AI

When thinking about the impact of AI over the coming decade, we can imagine AI-scenarios for different domains e.g. business functions, organisations, industries, the wider world of work - or even for ourselves and other individuals. 


While there are already trends to show adoption rates, processing capabilities and investment rates for instance, accurate mid- and longer-term predictions become impossible for several reasons: 


  • The growth of AI is not linear: the more people adopt it, the more useful it becomes; the more useful it becomes, the more people participate in its development; the more people participate, the better and more appealing the services launched; and so yet more people adopt it, and so on. Each step in this loop introduces its own uncertainty, compounding the overall uncertainty in the technology's diffusion speed.

  • The trends in different AI domains interact (e.g. medicine and insurance, energy and transport), as well as interacting with other evolving technology areas, such as cloud computing and IoT sensor networks. This all means that new factors are increasingly likely to affect AI applications in any single given sector.

  • As the technology develops, we are not sure what friction it will face from regulators and lobbyists who seek to impose restrictions.

  • Just as with the previous trends, once one scenario emerges, it is not clear what scenarios will follow. For instance, as noted, scientists in 1967 predicted that there would be affordable communication devices in homes and businesses. They did not predict that extensive use of such devices would be linked to mental-health problems in the young, or to controversy about users influencing elections.
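The self-reinforcing adoption loop in the first bullet above is the “imitation” effect of classic technology-diffusion models. Here is a minimal sketch using the Bass diffusion model, with illustrative (not forecast) parameter values:

```python
def bass_adoption(p=0.03, q=0.38, periods=15):
    """Bass diffusion model. In each period,
    new adopters = (p + q * adopted) * (1 - adopted),
    where p is adoption independent of others ('innovation') and
    q is adoption driven by the existing user base ('imitation',
    the feedback loop described above). Market size normalised to 1."""
    adopted, history = 0.0, []
    for _ in range(periods):
        new = (p + q * adopted) * (1.0 - adopted)
        adopted += new
        history.append(adopted)
    return history

curve = bass_adoption()
# Growth accelerates while the user base feeds back on itself,
# then slows as the market saturates: the familiar S-curve.
```

Small uncertainties in p and q translate into large uncertainty in when the steep part of the S-curve arrives, which is exactly why mid-term adoption forecasts diverge so widely.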


Scenarios in the AI future

There is already a range of possibilities arising for AI, and probably no two people will agree on where they lie on the spectrum of futures, from Projected to Preposterous. Here is one view:


Projected futures

Projected AI scenarios involve trends that we have already seen in the development of computational speed, AI cognitive capability, the decline in data processing costs, and the rise in access to the internet. 


For instance, we have seen that all of these factors have led organisations with contact centres to adopt chatbots as a way to communicate with customers without the need for call-centre agents (Contact Babel). Customers get faster responses to simpler requests and queries, while companies save staff costs. There are reasonably consistent trends underpinning adoption: trends in user trust and satisfaction, the application of chatbots to wider classes of query, the relative costs of chatbots versus human agents, and the speed at which organisations adopt new technology.


A projected scenario is that customers continue to communicate more with AI chatbots, and less with human agents, when dealing with companies.


Similarly, relatively "safe" projected trends include the increased use of AI to augment human decisions in other business areas where it has already established a foothold, e.g.:


  • Personalised marketing

  • Responsive websites

  • Financial forecasts

  • Demand forecasts for stock buying

  • Research for content writing

  • Research for report writing

  • Energy management

  • Vehicle and fleet routing

  • Staff training

  • Meeting management

  • Authentication on IT systems and buildings

  • Cyber security

  • Recruitment


Probable

We don’t know exactly how quickly companies will invest in the new technology, how their competitors will react, how quickly entrepreneurs can bring new products to market and how disruptive those products might be, or how quickly new innovations might emerge.


Even with our present knowledge of both current market trends and the general laws that govern technology evolution, and human buying behaviour, the further into the future we look, the more the uncertainty grows.


However, it is probable that:


  • Investment in R&D and new product development (NPD) will grow strongly, with businesses allocating a larger percentage of their costs to AI.

  • Competition will drive adoption in most businesses: organisations stand to gain productivity benefits from adopting AI, so those that survive will necessarily use it to remain competitive, and those that don’t will be more likely to fail due to a higher cost base. It is probable that leading organisations will lead in exploiting AI, and vice versa.

  • Larger companies will generally be at an advantage over current smaller companies, but new smaller companies could pick off profits of larger ones.

  • Based on the analyses of different work tasks (as per a recent UK Government study in November) at least 20% of the tasks done by staff today will be done with AI by 2030.


Plausible

There is then a class of futures termed “plausible”: not probable, but with a reasonable chance of occurring should other foreseeable trends or events arise. For instance, we are seeing a rise in geopolitical tensions between the West and East, and also a rise in cyber attacks on government and company data.


If this trend accelerates, it could create scenarios with both public and private sector organisations significantly expanding their investments in AI to defend their critical digital assets.


New AI product markets may be created, or may grow faster than we imagine, such as AI human work assistants or biometric security systems. History tells us to expect a surge of investment before suppliers consolidate.


The role of AI in future healthcare is an area with many plausible scenarios. AI requires large amounts of data to make predictions and design new drugs. Soon, society may be faced with the choice of allowing AI machines access to private medical data, and even the DNA of relatives, to improve outcomes at the expense of privacy and security. Such debates could polarise opinion and lead to two-tier health outcomes.


Other plausible scenarios:


  • Multiple controversies will surround the use of accurate AI-generated data about citizens by private organisations, much of which citizens don’t know about themselves.

  • Customers interact less with brands, as customers trust that brands are able to predict their needs without needing to tell them. Citizens trust AI to execute decisions ranging from ordering the weekly shopping to switching insurance providers.


Possible

Possible futures require knowledge we don’t have today. Some fundamental change may be needed before they can break through, or a chain of events must unfold first. As such, we generally see them as improbable, but most people could imagine how they might come to arise.


If geopolitical tensions rise further, we know from the history of the Cold War that Government expenditure can grow rapidly over the space of just a few years, as nations respond to each other in a ratchet effect. Military wars were accompanied by efforts to unsettle the populations of opponents. Future wars could be psychological, fought using AI.


In the business world, Possible scenarios are:


  • Brands use sophisticated psychological marketing “warfare”, using AI to damage competitors’ reputations through subliminal messaging to customers.

  • Polarised political groups use AI to attack and discredit opponents.

  • A Chief AI Algorithm is appointed to company boards.

  • All staff are equipped with personalised AI-helpers at their induction, which help them behave according to brand values.

  • Brands and celebrities routinely face attacks from deep-fake imitations of their advertisements, websites and communications, which confound millions.

  • AI generated entertainment becomes as popular as human entertainment.

  • Most citizens take out insurance against damage or loss caused by AI.

  • Businesses routinely run system level simulations to evaluate business cases.


Preposterous 

No shortage of films remind us that AI-driven robots could take over the world, perhaps enslaving the human race!


Axes of uncertainty

To manage this blizzard of possible scenarios, futurists tend to subsume the plethora of possible eventualities into a handful of scenarios differentiated by more fundamental, macro-level uncertainties.


One of the key uncertainties surrounds whether AI will most affect the jobs of higher-skilled knowledge workers or lower-skilled manual workers. The UK’s Department for Work and Pensions has been analysing trends to 2030 to predict the future of work, and foresees a shift towards higher-skilled knowledge work and the increasing automation of lower-skilled manual jobs. Transportation and manufacturing jobs are deemed at the highest risk of automation.


Another concerns the speed at which AI capabilities will develop compared to human capabilities. A 2022 poll of 352 global experts by Katja Grace and colleagues asked when AI will have human-level capabilities. While the mean answer was 2075, there was a c.130-year spread in the answers, as per Figure 2.


Figure 2: Predictions by experts on the point of time when Human-level AI exists




A question often related to this uncertainty is the ultimate impact of AI on society. Our “projected” future would assume that regulators will try to minimise the risks with AI, but perhaps it is only “plausible” that they manage to mitigate them all given the historic capabilities of regulators to mitigate risks in other fields, like online banking or gambling.



Two of these uncertainties about AI’s impact on society are plotted in the matrix in Fig 3, below: 


The horizontal axis concerns the extent that the benefits (wealth, opportunity, agency) are accrued by a small range of centralised actors versus the wider population. The vertical axis concerns whether the direction of technology and the impacts on society are regulated and controlled by the policies of major institutions and organisations, or from citizens themselves.


Fig 3: Scenarios and indicators of pathways towards them. Concentric circles denote the Future Cone.



Social media is an example of a social and technological trend that has, until now, largely been controlled by citizens themselves, though companies exert certain controls and influences, and governments are increasingly looking to apply regulations. Both citizens and corporations have been recipients of shifting opportunities.


We can anticipate a power battle with AI where there is no outright winner between these different directions: a move in one direction is met by a correction. However, should the direction of travel continue along one direction uncorrected, it can lead to a more extreme (and hopefully “Preposterous”) scenario.


A utopian view of AI would be that AI elevates people from poverty, grants personalised global healthcare, fuels innovation, entertains us royally and works in service of human progress. Benefits accrue across society.


A dystopian view of AI would be that it further concentrates wealth and power into the hands of a relative few, who use it to accrue yet more control, with governments and/or corporations abusing their power.


History suggests we will see elements of all of these trajectories at different times. Hopefully, as systems correct themselves, we will remain within the set of scenarios that create more good than harm.


Scenario planning as an insurance policy

The scenarios for any particular organisation and their industry are analogous to those for society and the economy. There is uncertainty about the speed that AI will affect any organisation, through its impact on the economic dynamics across different industries. There is uncertainty about the wider societal and economic climate in which businesses will operate, and the winners and losers.


We certainly believe that significant change is “projected” in most industries: at least 16% of UK businesses are already using AI (ONS), a sevenfold increase over ten years (Forbes). Most forecasts suggest that most companies will be using AI by 2030. A survey by the Conference Board in the US found that a majority of American workers (56%) use GenAI on work tasks. Though the figure was much lower in the UK at the mid-point of 2023 (8%), this is expected to change quickly, with more than half of UK citizens having heard of Generative AI (Deloitte).


Applying the Futures Cone and scenario matrix to any given industry, key axes of uncertainty include the extent to which AI will augment or replace human work, and whether the rise of AI will ultimately see the strengthening of existing incumbents or the birth of new companies and product-markets, as per Figure 4.


Fig 4: Scenarios and indicators of pathways towards them for a given industry. Concentric circles denote the Future Cone.



There are different strategies that organisations might deploy in response to AI, relying more or less on AI as a fundamental part of their operating model. Just as “pure” e-commerce retailers threatened traditional high-street retailers, there is the potential for “pure” AI-based companies, with just a handful of employees and the majority of the core product or service created and delivered by AI. They will take on traditional companies in which AI is used more to augment human work than to replace it.


The struggles of some online-only companies (e.g. the estate agency Purplebricks or the web company MySpace) show that pursuing the most disruptive business model does not guarantee survival.


Assuming that the world will exist outside of extremities, and businesses will continue to be the means by which services are provided to citizens, horizon scanning and scenario-planning can help most businesses. Those that recognise the range of futures within their own Future Cone will be better placed and more ready to respond when events invariably occur that are outside of today’s Projected Future pathway.


In summary

In summary, it is too early to confidently predict how AI is going to affect organisations between now and 2030 and certainly beyond that point. Some AI scenarios of the recent past, such as the rise of the AI chatbot, are now a reality and we can anticipate the trend continuing. A host of other scenarios remain probable, plausible and possible. History suggests that one or two scenarios that are currently considered “Preposterous” are likely to come to pass, even though the great majority won't.


This uncertainty should motivate rather than deter an organisation from exploring the range of scenarios it might face. Identifying future opportunities and risks is akin to turning on the headlights of a car in gloomy conditions: pre-empting the possibility of hitting an obstacle or missing a turning gives an organisation the critical advantage of time to make reasoned and intelligent decisions.


One very probable scenario for AI is that, at some point, many businesses will experience events that had seemed improbable to those that had not been paying attention to the road ahead.


How we can help

Read more about our Map and Dive services, which offer insight into general or specific future market scenarios for your organisation, such as the impact of AI, to identify opportunities, risks and options to respond.





   









