Article
Marco Bevolo1*, Dennis Draeger2
1Researcher, Academy for Leisure and Events, Breda University of Applied Sciences, The Netherlands
2Director of Foresight, Shaping Tomorrow, Hawke’s Bay, New Zealand
Abstract
This paper examines the paradigm of “Automated Liminality,” exploring how AI-induced ‘hallucinations’ may serve as a springboard for unparalleled creativity in foresight and futures research. The authors present a historical overview of AI and its evolving role, focusing in particular on large language models (LLMs) and their capability to forge new pathways in generating future knowledge. They outline the resistance within the foresight field towards LLMs and propose three transformative ways in which these tools can contribute to liminal space creation: LLMs’ inherent otherness, their capacity to synthesize disparate data, and the simplification of the foresight process for broader demographics.
Foresight and futures research are identified as interdisciplinary and inherently liminal; hence, systematic and intuitive methodologies are equally valued. The paper challenges the technological unemployment narrative by presenting a balanced view of AI’s impact on jobs and the amplification of human capabilities. Through critical analysis, the paper suggests that LLMs’ ‘hallucinations’ or inaccuracies can be redefined as unforeseen scenarios providing crucial contributions to future envisioning. Furthermore, it examines LLMs as ‘other’ entities that, although not human, can catalyze creative thought unbounded by human limitations.
Addressing the democratization of the foresight process, AI’s incorporation is shown to facilitate broad participation, promote critical and creative thinking, and expedite the process. The authors conclude by arguing for a narrative focused on human agency within the future of LLMs, advocating for the crucial role of futurists in framing these tools as complements to – not replacements for – the human element in strategic foresight.
Keywords
Large Language Models, Futures Research, Creativity, Technological Unemployment, Human-AI Collaboration.
Introduction
With this contribution, the authors argue that artificial intelligence (AI) and especially generative AI such as large language models (LLMs) can help to make foresight and futures research even more liminal by adding new avenues for creating future knowledge and insights about the future. Since there is still some resistance to using LLMs within the foresight and futures field, we look at a historical perspective of AI as well as its application in the field before exploring three modes of liminal space creation enabled by LLMs: the otherness of LLMs, the ability to connect disparate data, and how they could simplify the foresight process for a larger demographic.
Foresight and futures research are relatively recent multidisciplinary fields, stretching from the human sciences to design (Bevolo, 2023-b), in perpetual transition, and might be conceptualized as inherently liminal (Thomassen, 2014) in the complex process of uncovering knowledge about the future, or at least of co-creating concepts based on different hypotheses and scenarios (Walker, 2019). It is a domain where systematic analysis might contribute valuable output and meaningful outcomes as much as individual intuition and charisma do (Bevolo, 2023-a, in: Dannemand Andersen et al., 2023). Although methodological equivalence might sometimes apply, unlike the historical sciences (Staley, 2007) or political sciences (George & Bennett, 2005), futures studies work on a domain that does not yet exist; hence backtracing (George & Bennett, 2005) or documentary verification of presumed facts is not feasible for futurists.
A short historical perspective of AI and Foresight
Since the public release of ChatGPT (then based on GPT-3.5) in 2022, online media has seen another recurrence of discussions about technological unemployment (Keynes, 1930). While robots continue to replace humans in the manufacturing sector, there is evidence that the overall impact of robots and AI on individuals may be a net positive considering rebounding local wages (Chung, 2023). Likewise, LLMs could play a role in enhancing the manufacturing workforce (Kernan Freire et al., 2023).
A recent survey from The Adecco Group (2024) shows that 41% of the 2,000 C-suite executives surveyed expect their workforce to decrease due to AI over the next five years. The World Economic Forum (2023), however, expects a net increase in jobs due to LLMs. The effects on white-collar jobs, while potentially devastating, could be tempered with a change in narrative, enhancing human work rather than replacing it (Wang, 2023). So, concerns about technological unemployment may still be premature, especially since the matter is less about technological capabilities and more about the human pursuit of maximum profits (Ebert, 2023).
Strategic futurists have not been immune to this discussion either. Opinions about whether AI could replace futurists have circulated on LinkedIn, listservs, and other online media. The authors argue that futurists, while having no direct necessity or professional obligation to adopt AI as a working tool, have a responsibility to at least communicate alternate narratives in which AI is used as a tool by humans rather than as a replacement for humans (Datta & Dickerson, 2023; Gebru & Torres, 2024).
Automation was introduced to the field of foresight as early as 2015 by at least one online platform, which automated horizon scanning to a high degree (Kehl et al., 2020). The company behind it was launched in 2003 for collaborative foresight and has been applying AI in a variety of ways since 2015. By automating the basic research of futures/foresight work, the platform augmented how the process could be done (Fergnani & Jackson, 2019).
In 2022, this platform launched new services in the generative AI space with applications for human collaboration, using a variety of technologies to organize thinking about the future. The platform thus continues to augment how the process can be done rather than trying to replace either the process or the people doing it.
One of the authors engaged in a critical review of how machine learning (ML) systems might connect to design processes for Design Futures (Bevolo & Amati, 2020), concluding that AI might be suitable for scanning but that, at the time of that analysis, generative AI could not yet enable feedback loops between prototyping and subsequent cycles of Design Futures processes, especially in applied projects. However, the speed of development and the quality of improvement in the design and delivery of automated foresight require systematic and regular monitoring.
Given the rise of ChatGPT and derivative systems, the relationship between automated and human foresight and futures exploration is an area of liminality between digital efficiency and personal intuition, as the underlying question is whether LLMs might complement or even replace futures practitioners and futurist scholars. Ultimately, the authors believe this is not the correct possibility to explore because it frames AI within the dichotomy of the dystopian versus utopian visions so common during periods of Cultural Lag (Fisher & Wright, 2006). More importantly, this dichotomy unnecessarily constrains philosophical dialogue (Albrecht, 2017).
Looking at foresight practitioners and consultants, more urgent questions revolve around what value strategic futurists can gain from LLMs. This, of course, depends on design processes and delivery formats, and ultimately on the rationale of costs versus benefits for clients. By focusing on these challenges, a necessary assumption is that futurists will remain part of the process while asserting the value of human leadership within it.
Automating Connections
As explored in a peer-reviewed publication by one of the authors (Bevolo & Amati, 2020), ML systems had already proven their value in scanning manifestations and generating trend clusters with increasing precision and meaningfulness. However, LLMs generate information that is either irrational or false when they favor novelty (Rawte et al., 2023; Zhang et al., 2023). These hallucinations negatively impact accuracy (Mukherjee & Chang, 2023). LLMs have no linguistic intent because they are not actually intelligent and do not understand their inputs or even their outputs (O’Neill & Connor, 2023). The output can thus be vague because the machine has reached its useful limit. Even when LLMs are accurate, the output may do nothing more than summarize the training data, favoring utility over novelty. If the LLM focuses too much on utility, the output is accurate but not intellectually stimulating (Mukherjee & Chang, 2023).
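Where exactly an LLM sits between utility and novelty can, in practice, be nudged through its sampling parameters. The following is a minimal sketch only, assuming the OpenAI Python client and an illustrative model name: the same prompt is run at a low temperature (favoring utility) and at a high temperature (favoring novelty), yielding output on either side of the accuracy-creativity trade-off discussed below.

```python
# A minimal sketch of the novelty-utility trade-off via sampling temperature.
# Assumes the OpenAI Python client (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; the model name is illustrative only.
from openai import OpenAI

client = OpenAI()

PROMPT = "List three weak signals that could reshape urban leisure by 2040."

for temperature in (0.2, 1.2):
    # Low temperature favors utility: safe summaries close to the training data.
    # High temperature favors novelty: more surprising, less reliable output.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": PROMPT}],
        temperature=temperature,
    )
    print(f"--- temperature={temperature} ---")
    print(response.choices[0].message.content)
```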
These limitations have often been experienced and reviewed as “errors,” perhaps reflecting a positivist and linear research epistemology. However, as demonstrated by one of the authors in an ESOMAR paper (Bevolo & Price, 2006), in futures research and foresight apparent “errors” might lead to unforeseen scenarios and concepts, hence playing a key role in delivering true value (e.g., Black Swans; Taleb, 2007). This is because the act of envisioning the future intrinsically includes the necessity of embracing paradoxes while exploring the unknown.
Likewise, the limitations to accuracy can positively impact creativity: LLMs can produce texts that recombine disparate material in intriguing ways. Some people interpret the machine’s output as mere pseudo-creativity (Runco, 2023), but such views are inherently biased (Magni et al., 2023).
The trade-off of accuracy for creativity is one reason for the field to use LLMs as collaboration tools. Humans can collaboratively verify the accuracy of output while adding greater, real-world detail. Human collaborators then become machine-human collaborators who “make large corpora accessible to qualitative scholarship” (Feuston & Brubaker, 2021). More research needs to be done on the utility of machine creativity in enhancing human creativity, especially within the context of human collaboration (Seeber et al., 2020), because the future of futures research and foresight lies in this new form of active dialog between irrational synthesis by automated LLM systems and human creativity. Indeed, it is an LLM’s most popularly derided feature, hallucinations, that presents the greatest benefit to futures and foresight liminality, because they expose the distinct otherness of LLMs.
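As a concrete illustration of such machine-human collaboration, the sketch below (plain Python; the generate() helper is a hypothetical stand-in for any LLM call) lets human collaborators keep, edit, or drop machine-drafted statements, so that hallucinations are either discarded or consciously repurposed as provocations.

```python
# A minimal sketch of a human-in-the-loop review pass over LLM output.
# generate() is a hypothetical placeholder for any LLM call; the point is the
# division of labor: the machine drafts, humans verify and add real-world detail.

def generate(prompt: str) -> list[str]:
    # Placeholder returning machine-drafted trend statements.
    return [
        "Synthetic tourism replaces 20% of long-haul travel by 2035.",
        "Municipal AI ombudsmen become a standard public-sector role.",
    ]

def review(candidates: list[str]) -> list[dict]:
    """Ask a human collaborator to verify each machine-drafted statement."""
    verified = []
    for statement in candidates:
        verdict = input(f"Keep, edit, or drop? '{statement}' [k/e/d]: ").strip().lower()
        if verdict == "k":
            verified.append({"text": statement, "source": "llm"})
        elif verdict == "e":
            verified.append({"text": input("Your edit: "), "source": "human+llm"})
        # "d" silently drops statements judged to be useless hallucinations.
    return verified

if __name__ == "__main__":
    print(review(generate("Draft trend statements on automated foresight.")))
```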
Machines as otherness for liminality
Techno-optimists would have us divorce humanity further from nature to become one with the machine to put an end to death (Bostrom, 2008; Wernaart, 2022). However, in doing so, humanity would surrender agency to machines that have no agency independent of humanity (Diéguez, 2023). In fact, some groups of techno-optimists can be considered antihuman (Sagikyzy & Uyzbayeva, 2024). Meanwhile, techno-pessimists highlight the negative effects of AI (Tunç & Öcal, 2023; Wernaart, 2022). Yet, between these extremes is a moderate group of post-luddites who call for more responsible uses of digital technologies (Rauch, 2019).
This entire spectrum is associated with anthropomorphic metaphors (Li & Suh, 2021). Both techno-optimists and techno-pessimists speak of AI as if it held agency to act independently upon society. Adopting these metaphors illustrates the theory of Cultural Lag currently at work over LLMs as they personify the hopes and fears of different social groups (Fisher & Wright, 2006).
While Artificial General Intelligence (AGI) is often personified with human characteristics such as artificial suffering (Li, 2024), LLMs are tools and not a replacement for human intelligence (Xu, 2021; Chubb et al., 2021). More to the point, these metaphors ignore the reality of otherness that AI continues to embody.
Even though nature has been othered (Scarano, 2024), technology generally and LLMs specifically are inherently other (Olivier, 2021). They are not human, and yet they can behave as if they were. They do not understand what they read or write, and they often derive conclusions in ways that their creators do not entirely understand (Coeckelbergh, 2020). So, the authors argue that we should reframe the general narrative to think of them in this way, as machines with human-programmed agency, even if some scenarios suggest they could become more than that (Gebru & Torres, 2024).
However, by not being human, and therefore not being constrained by judgment, emotion, or political will, LLMs might help to navigate the issues that workshop participants usually encounter (e.g., idea blocks, linear thinking, biased vision, team politics), and enable practitioners, scholars, and clients to more quickly identify unexpected knowledge gaps, or to increase the ratio of known unknowns versus known knowns. Ultimately, given the current VUCA nature of our everyday reality, they might well help to uncover unknown unknowns and even to categorize unknown knowns (i.e., assumptions).
LLMs also occupy a liminal space between genius forecasting and expert forecasting. Because LLMs function by averaging the words used by a multitude of sources, they can serve as generic agglomerations of various experts. The authors argue that even this virtual plurality is still better than myopia or the inability to generate any idea at all, such as in the context of creative blocks within workshops or writing. It all depends on the mindset people bring when leveraging the otherness of automated systems. However, the greatest value of LLM hallucinations and otherness lies in their potential for democratizing foresight.
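To make this “virtual plurality” tangible, the sketch below (again assuming the OpenAI Python client; the personas, question, and model name are all illustrative) runs the same question through several expert personas and collects the divergent answers, approximating a panel where none is available.

```python
# A minimal sketch of an LLM as a "virtual plurality" of experts.
# Assumes the OpenAI Python client; personas and model name are illustrative.
from openai import OpenAI

client = OpenAI()

PERSONAS = [
    "a demographer focused on ageing societies",
    "a climate adaptation engineer",
    "a historian of failed technologies",
]

QUESTION = "Name one overlooked driver of change in coastal cities by 2050."

for persona in PERSONAS:
    # Each system prompt casts the model as a different generic expert,
    # trading a single averaged answer for a spread of perspectives.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {"role": "system", "content": f"Answer briefly as {persona}."},
            {"role": "user", "content": QUESTION},
        ],
    )
    print(f"[{persona}]\n{response.choices[0].message.content}\n")
```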
Democratizing the Foresight Process
There are many barriers to business model innovation, and strategic foresight has been proposed as a path through them (Moqaddamerad & Ali, 2024). However, organizational foresight has many barriers of its own (Mortensen et al., 2021). AI may help to address these barriers to foresight, and therefore to innovation, by diverting users away from their short-term biases and focusing them instead on engagement with the foresight process.
From a foresight workshop perspective, AI can be used to reduce the time needed to get through the foresight process. It can therefore help facilitators involve more people and further democratize the process. AI can also help facilitators customize the workshop to participants’ needs in real time.
A key aspect of AI-supported facilitation is critical thinking. Since humans commonly devalue the output of LLMs even when that output is objectively better than a human’s (Yin et al., 2024), participants in a workshop will engage in critical thinking about the AI output more readily than about the strategic foresight process itself.
Uniting the participants in critical thinking against the AI can help make them complicit in the AI-enabled foresight process. If the facilitator generates a list of trends and asks the participants to evaluate items on the list, the participants may be more likely to voice criticism when they know the trends were generated by an AI and not a human. A competent facilitator may even uncover the assumptions and biases of the participants, either through the biases exposed in the LLM’s output or by questioning participant responses against the objective benchmark of LLM output.
The only politics at play in criticizing an AI is an individual’s vested interest in showing that they are smarter than the machine. Therefore, more participants may engage with the machine’s output. The facilitator can then switch participants to creative thinking by asking how the list should be edited and what should be added.
A similar process can be followed with scenario generation, all while generating content that is specifically relevant to the ephemeral situation of the workshop: the individuals gathered in that group at that particular time. The facilitator can ask participants what to search for and how to search for it, further involving them in the process. Likewise, AI can manage much of the analytical work during the workshop, keeping participants’ time focused on knowledge creation (Dufva & Ahlqvist, 2015).
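The sketch below illustrates what such real-time, workshop-specific generation could look like. It is an assumption-laden example rather than a prescribed method: the OpenAI Python client, the prompt wording, and the model name all stand in as illustrations.

```python
# A minimal sketch of real-time, workshop-specific scenario generation.
# Assumes the OpenAI Python client; prompt wording and model are illustrative.
from openai import OpenAI

client = OpenAI()

def draft_scenario(trends: list[str], participant_focus: str) -> str:
    """Turn participant-selected trends into a short scenario for critique."""
    prompt = (
        "Write a 150-word scenario for 2040 that combines these trends: "
        + "; ".join(trends)
        + f". Center it on {participant_focus}."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# During the workshop, the facilitator feeds in whatever this particular group
# surfaced, so the output stays specific to the ephemeral situation.
print(draft_scenario(
    trends=["remote-first municipalities", "water-indexed insurance"],
    participant_focus="a mid-sized European coastal city",
))
```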
This combination of critical thinking with creative thinking makes room for liminal space by transitioning participants between different modes of thinking.
Conclusions
Although the future of LLMs is still developing, with a variety of plausible scenarios, the authors contend that futurists should communicate these futures with a focus on human agency, accountability, and responsibility. Considering the AI platform’s work to date, and factoring in the inevitability of its further evolution, the authors conclude that one way LLMs are augmenting the foresight process is by making it even more liminal, that is, suspended between human creativity and automated processes. LLMs are enabling foresight to become more flexible, anticipatory, and effective in facing today’s and tomorrow’s increasingly liminal condition through the otherness of the AI, the automation of disparate connections, and the simplification of the foresight process for novices.
References
Adecco Group. (2024). Leading through the great disruption: How a human-centric approach to AI creates a talent advantage. Retrieved April 12, 2024, from https://discover.adeccogroup.com/Business-Leaders-2024_Global-Report
Dannemand Andersen, P., Bevolo, M., Ilevbare, I., Malliaraki, E., Popper, R., & Spaniol, M. J. (2023). Technology Foresight for Public Funding of Innovation: Methods and Best Practices (Vesnic Alujevic, L., Farinha, J., & Polvora, A., Eds.). Publications Office of the European Union, Luxembourg. doi:10.2760/759692, JRC134544.
Bevolo, M. (2023-a). Individual intuition and communicative charisma as enablers of futures and envisioning: “genius forecasting” versus “forecasting geniuses”, pp. 18-26 in: Dannemand Andersen, P., Bevolo, M., Ilevbare, I., Malliaraki, E., Popper, R. and Spaniol, M.J., Technology Foresight for Public Funding of Innovation: Methods and Best Practices, Vesnic Alujevic, L., Farinha, J. and Polvora, A. editor(s), Publications Office of the European Union, Luxembourg, 2023, doi:10.2760/759692, JRC134544.
Bevolo, M. (2023-b). Design Futures in A Wider Context: Historical Reflections, Potential Directions, and Challenges Ahead. International Journal of Design and Allied Sciences, 2 (2), 27-41.
Bevolo, M., & Amati, F. (2020). The Potential Role of AI in Anticipating Futures from a Design Process Perspective: From the Reflexive Description of “Design” to a Discussion of Influences by the Inclusion of AI in the Futures Research Process. World Futures Review, 12(2), 198-218. https://doi.org/10.1177/1946756719897402
Bevolo, M. & Price, N. (2006). Towards the evolutions and revolutions in futures research. ESOMAR BrandMatters conference, New York, March 2006.
Chubb, J., et al. (2021). Speeding up to keep up: Exploring the use of AI in the research process. AI & Society, 37(4), 1439-1457. https://doi.org/10.1007/s00146-021-01259-0
Coeckelbergh, M. (2020). Artificial intelligence, responsibility attribution, and a relational justification of explainability. Science and engineering ethics, 26(4), 2051-2068.
Datta, T., & Dickerson, J. P. (2023). Who’s Thinking? A Push for Human-Centered Evaluation of LLMs using the XAI Playbook. arXiv preprint arXiv:2303.06223.
Dufva, M., & Ahlqvist, T. (2015). Knowledge creation dynamics in foresight: A knowledge typology and exploratory method to analyse foresight workshops. Technological Forecasting and Social Change, 94, 251-268.
Ebert, N. (2023). From Keynes’ Possibilities to Contemporary Precarities: Reflections on the Origins of Our Economically and Politically Precarious Times. Sociology Lens, 36(2), 185-197.
Fergnani, A., & Jackson, M. (2019). Extracting scenario archetypes: A quantitative text analysis of documents about the future. Futures & Foresight Science, 1(2), e17.
Feuston, J. L., & Brubaker, J. R. (2021). Putting Tools in Their Place: The Role of Time and Perspective in Human-AI Collaboration for Qualitative Analysis. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1-25.
Fisher, D. R., & Wright, L. M. (2006). On utopias and dystopias: Toward an understanding of the discourse surrounding the Internet. Journal of Computer-Mediated Communication. https://doi.org/10.1111/j.1083-6101.2001.tb00115.x
Gebru, T., & Torres, É. P. (2024). The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence. First Monday.
George, A. L., & Bennett, A. (2005). Case Studies and Theory Development in the Social Sciences. Cambridge, MA: MIT Press.
Kehl, W., Jackson, M., & Fergnani, A. (2020). Natural Language Processing and Futures Studies. World Futures Review, 12(2), 181-197. https://doi.org/10.1177/1946756719882414
Kernan Freire, S., Foosherian, M., Wang, C., & Niforatos, E. (2023, July). Harnessing large language models for cognitive assistants in factories. In Proceedings of the 5th International Conference on Conversational User Interfaces (pp. 1-6).
Keynes, J. M. (1930). “Economic Possibilities for our Grandchildren”. Essays in Persuasion. Harcourt Brace. 358-373.
Li, M., & Suh, A. (2021, January). Machinelike or humanlike? A literature review of anthropomorphism in AI-enabled technology. In 54th Hawaii International Conference on System Sciences (HICSS 2021) (pp. 4053-4062).
Li, O. (2024). Should we develop AGI? Artificial suffering and the moral development of humans. AI and Ethics, 1-11.
Magni, F., et al. (2023). Humans as Creativity Gatekeepers: Are We Biased Against AI Creativity?. Journal of Business and Psychology, 1-14.
Moqaddamerad, S., & Ali, M. (2024). Strategic foresight and business model innovation: The sequential mediating role of sensemaking and learning. Technological Forecasting and Social Change, 200, 123095.
Mortensen, J. K., et al. (2021). Barriers to developing futures literacy in organisations. Futures, 132, 102799.
Mukherjee, A., & Chang, H. (2023). Managing the Creative Frontier of Generative AI: The Novelty-Usefulness Tradeoff. California Management Review.
O’Neill, M., & Connor, M. (2023). Amplifying Limitations, Harms and Risks of Large Language Models. arXiv preprint arXiv:2307.04821.
Rauch, J. (2019). What is a Luddite, Really? What Is Technology? Conference, Portland, Oregon. https://www.academia.edu/38676716/What_is_a_Luddite_Really
Rawte, V., et al. (2023). The Troubling Emergence of Hallucination in Large Language Models—An Extensive Definition, Quantification, and Prescriptive Remediations. arXiv preprint arXiv:2310.04988.
Runco, M. A. (2023). AI can only produce artificial creativity. Journal of Creativity, 33(3), 100063.
Sagikyzy, A., & Uyzbayeva, A. (2024). Between Utopia and Reality (Modern Transhumanism Theories and Posthumanism). In Post-Apocalyptic Cultures: New Political Imaginaries After the Collapse of Modernity (pp. 77-95). Cham: Springer International Publishing.
Scarano, F.R. (2024). The Modern Divorce Between Nature and Culture. In: Regenerative Dialogues for Sustainable Futures. Sustainable Development Goals Series. Springer, Cham. https://doi.org/10.1007/978-3-031-51841-6_5
Seeber, I., et al. (2020). Machines as teammates: A research agenda on AI in team collaboration. Information & Management, 57(2), 103174.
Staley, D. (2007). History and Future. Plymouth, UK: Lexington Books.
Taleb, N. N. (2007). The Black Swan: The Impact of the Highly Improbable (2nd ed.). London, UK: Random House.
Thomassen, B. (2014). Liminality and the Modern. Farnham, UK, Ashgate.
Tunç, Y. E., & Öcal, A. T. (2023). Neo-Luddism in the Shadow of Luddism. Perspectives on Global Development and Technology, 22(1-2), 5-26. https://doi.org/10.1163/15691497-12341649
Walker, G. M. (2019). Exploring a designed liminality framework: Learning to create future-orientated knowledge (Doctoral dissertation, Murdoch University).
Wang, Y. (2023). The Large Language Model (LLM) Paradox: Job Creation and Loss in the Age of Advanced AI. https://www.techrxiv.org/doi/full/10.36227/techrxiv.24085824.v1
Wernaart, B. (Ed.), (2022), Moral Design and Technology, Wageningen (NL), Wageningen Academic Publishers.
World Economic Forum. (2023). Future of Jobs Report 2023. Retrieved April 12, 2024, from https://www3.weforum.org/docs/WEF_Future_of_Jobs_2023.pdf
Yin, Y., et al. (2024). AI can help people feel heard, but an AI label diminishes this impact. Proceedings of the National Academy of Sciences, 121(14), e2319112121.
Zhang, Y., et al. (2023). Siren’s song in the AI ocean: a survey on hallucination in large language models. arXiv preprint arXiv:2309.01219.