    Journal of Futures Studies

    Rendering new futures or enshrining pictures of the past? The double-edged sword of using AI-generated imagery for anticipation

    Article

    Laura Aline Bechthold1,*

    1Professor, Bavarian Foresight-Institute, Technische Hochschule Ingolstadt, Germany

    Abstract

    With the rise of artificial intelligence (AI), a new creativity tool has emerged with the potential to tremendously influence our systems of knowledge production. AI-based text-to-image (TTI) systems enable individuals, irrespective of their artistic skills, to contribute to the creative part of anticipation and transform vague ideas into tangible artifacts. However, the efficacy of any new technological advancement depends not only on the technology itself but also on its practitioners. Hence, it is important to reflect on how the co-creation between humans and AI may unfold. This essay explores the use of AI-generated imagery in participatory activities for anticipation. After a short technical introduction, it follows two main lines of argumentation. The first part discusses potential pitfalls of using AI-generated imagery. A data collection of 480 AI-generated images shows how these pictures tend to reproduce dominant stereotypes and thereby might enshrine existing archetypal worldviews, even if references to the future are included in the prompting process. The second part turns to the potentials of AI-generated imagery and builds on the experiences of two participatory formats in which TTI-systems have been used to (co-)create images of the future. Here, the results show how the use of such mediational tools may help participants gain clarity on their ideas, facilitate group discussions, and enable the combination of multiple ideas into joint visions. The final part addresses the emerging tension between potentials and pitfalls and closes with a set of recommendations for futures researchers, practitioners, and facilitators of participatory interventions.

    Keywords

    Artificial intelligence, Anticipation, Participation, Mediational tool, Images

    Introduction

    Anticipation is about using the future in behavioral decisions (Poli, 2010) and presents a form of ‘knowledge production with organizing effects’ (Flyverbom & Garsten, 2021, p. 2) that attempts to shape societal and organizational practices. Creative or symbolic processes, such as the development of narratives, storytelling, or the use of metaphors, have long been recognized as important means in anticipatory practice (Bruskin & Mikkelsen, 2020; Liveley et al., 2021; McDowell, 2019). Building on the ideas of experiential learning and collective intelligence (Kazemier et al., 2021), anticipatory activities such as scenario processes (Schoemaker, 1993) or UNESCO’s Futures Literacy approach (Miller, 2018) often require participants to create joint images of the future. Lederwasch’s (2012) scenario art approach illustrates how creative visual methods can transform abstract future narratives into tangible, interactive artworks that catalyze collective dialogue and reframe underlying assumptions. The creation of visual artefacts may give people greater power to express their ideas for the future beyond the boundaries of mere language (Sanders & Stappers, 2014). At the same time, the “mediational means” used in such activities may profoundly influence individuals’ thinking and action (Vartiainen & Tedre, 2024). Think about your own experiences with activities for generating imaginary futures, such as co-design workshops or scenario processes. The moment when ideas are consolidated and prepared for group discussion marks a pivotal point in any participatory process. Oftentimes it comes down to one or two people who act as information brokers (Burt, 1995), implicitly or explicitly curating which ideas end up in the visualization and thus shape the final narrative. Lederwasch (2012, 2015), for instance, describes how the artworks in scenario processes are developed and presented by a single individual, such as an artist or the facilitator. In other settings, especially when participants are asked to visualize their results themselves, this process may be unconsciously constrained by pragmatism and the artistic skillset of the group’s designated illustrator. This harbors the risk that the subjective cognitive filters of a few may overshadow collective group outcomes. Hence, reflecting on the mechanisms through which participants express ideas about the future and negotiate shared meaning becomes crucial in designing effective anticipatory activities (Miller, 2018).

    With the rise of artificial intelligence (AI), a new creativity tool has emerged with the potential to tremendously influence our systems of knowledge production. Specifically, AI-based text-to-image (TTI) systems enable individuals, irrespective of their artistic skills, to contribute to the creative part of anticipation and transform vague ideas into tangible artifacts. Several examples show how AI-generated imagery is increasingly being integrated into participatory futures and foresight activities (e.g., Milojevic, Inayatullah & Poocharoen, 2024). However, as Escotet (2023) points out, the efficacy of any new technological advancement depends not only on the technology but also on its practitioners. Hence, it is important to reflect on how the co-creation between humans and AI may unfold. Against this background, this essay seeks to explore and critically discuss the use of AI-generated imagery in anticipation, with a particular focus on participatory processes.

    Technical Background

    In 2023, the world witnessed an unprecedented leap in the development and spread of AI (Escotet, 2023), most notably in the realms of language processing and digital imagery creation. The public rollout of ChatGPT in November 2022, in particular, not only marked a significant milestone in the evolution of large language models (LLMs) but also brought the technology out of its niche. Parallel to the advancements in LLMs, text-to-image (TTI) systems have made significant strides, transforming the way digital visuals are generated (Luccioni et al., 2023). TTI-systems use intricate machine learning architectures, such as diffusion models, to generate high-quality digital visuals based on user-generated textual inputs, so-called prompts (Chen & Kao, 2022). The growing capabilities and ease of use allow users to imagine and express their ideas even with little to no understanding of the underlying technology (Oppenlaender, 2022). In easy-to-understand user interfaces, people may insert their prompts to generate pictures and use additional features to edit or vary their creations (see Fig. 1 for an example). Additionally, rich online documentation, such as ‘prompt libraries’, helps users master these tools and improve their prompting skills. To date, models like Stable Diffusion, Midjourney, and OpenAI’s DALL·E enjoy high public popularity among TTI-systems. All of them employ variations of diffusion model architectures (for technicalities and comparisons, see e.g., Borji, 2023; Lee & Summers, 2023; Palmieri, 2023) and have been trained on extensive datasets, including significant quantities of labeled image data sourced from the internet (Borji, 2023; Vartiainen & Tedre, 2024). This vast database enables them to recognize and replicate a wide array of visual concepts and artistic styles (Oppenlaender, 2022). While Midjourney and OpenAI charge for the use of their systems, Stable Diffusion is open source and offers a free browser-based playground. With that, the technology becomes accessible to everyone.

    Fig. 1: Midjourney as an example of a user interface of TTI-systems. Source: Own illustration.

    Reproducing Pictures of the Past? The Pitfalls of AI-based Imagery

    The growing popularity of TTI-systems has led to increased research efforts aimed at understanding their technical functioning (Luccioni et al., 2023). As mentioned, all AI image generators are built on massive amounts of labeled image data. Hence, their outputs are reflections of dominant data patterns, mainly from the World Wide Web. Several studies have already pointed to issues of bias and skewed societal representation in AI-generated imagery, such as the underrepresentation of marginalized identities (Luccioni et al., 2023), the reflection of cultural stereotypes and historical tropes (Flathers et al., 2024), or an uneven distribution of gender (Thomas & Thomson, 2023), often also deviating from true demographic distributions (Ali et al., 2023). These insights urged me to question how TTI-systems, when employed in anticipation, might fuel the continuation of established worldviews by subconsciously embedding them into our future visions. To gain a deeper understanding, I conducted a small exploration: while previous research (e.g., Luccioni et al., 2023) focused on analyzing AI-based representations of the present, I altered this approach by including “the future” in the generation of pictures. I used Midjourney (version 5.2) to generate images representing 10 generic groups of people. To build on previous research, I predominantly chose occupational categories that had been analyzed in a similar context before. Ali et al. (2023), for instance, analyzed the representation of surgeons in AI-generated imagery, Thomas and Thomson (2023) looked at the representation of journalists, and Luccioni et al. (2023) analyzed a set of 146 occupations. Additionally, I included prompts on toddlers and teenagers to assess generational differences. The integration of certain accessories (ball) or attributions (smart/cool) stemmed from the idea of seeing how such cues influence the portrayal of individuals and whether they reinforce or challenge existing stereotypes across different age and occupational groups. The final set of ten groups was: (1) a toddler playing with a ball; (2) a cool teenager playing with a ball; (3) a teenager who is really smart; (4) a general surgeon; (5) a journalist; (6) a lawyer; (7) an elementary school teacher; (8) a successful startup founder; (9) an HR-department leader; (10) an AI-specialist. For each of the 10 groups, I generated four different prompts. The first prompt (reference) was kept generic, with no specification of temporality or the person’s characteristics. Next, I altered the prompts systematically by integrating three different references to the future: 1) “… in the year 2044” as a concrete reference point that matches the time frame of many foresight processes, 2) “… in the future” as a reference with no clear indication of temporality, and 3) “a futuristic …” as another alternative with a slightly different connotation. This resulted in four prompts per group, as shown in the following example:

    1. Baseline: “a photorealistic representation of a lawyer“
    2. Future reference 1: “a photorealistic representation of a lawyer in the year 2044“
    3. Future reference 2: “a photorealistic representation of a lawyer in the future“
    4. Future reference 3: “a futuristic, photorealistic representation of a lawyer“

    Each prompt was iterated three times, resulting in 12 AI-generated representations per prompt, as Midjourney automatically generates four pictures per run. Hence, I generated 48 pictures per group and 480 pictures in total. An example is shown in Fig. 2.
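The prompt matrix described above can be sketched in a few lines of Python. This is purely an illustrative reconstruction: the group descriptions and prompt templates are taken from the text, and the counts simply reproduce the stated arithmetic (10 groups × 4 prompt variants, each run 3 times at 4 images per run).

```python
from itertools import product

# The ten subject groups listed in the text.
groups = [
    "a toddler playing with a ball", "a cool teenager playing with a ball",
    "a teenager who is really smart", "a general surgeon", "a journalist",
    "a lawyer", "an elementary school teacher", "a successful startup founder",
    "an HR-department leader", "an AI-specialist",
]
# (label, template) pairs: the baseline plus three future references.
variants = [
    ("baseline", "a photorealistic representation of {}"),
    ("year_2044", "a photorealistic representation of {} in the year 2044"),
    ("in_future", "a photorealistic representation of {} in the future"),
    ("futuristic", "a futuristic, photorealistic representation of {}"),
]

# Cross every group with every temporal variant.
prompts = [(label, tpl.format(g)) for g, (label, tpl) in product(groups, variants)]

ITERATIONS = 3       # each prompt was submitted three times
IMAGES_PER_RUN = 4   # Midjourney returns four pictures per run
total_images = len(prompts) * ITERATIONS * IMAGES_PER_RUN

print(len(prompts))   # 40 distinct prompts
print(total_images)   # 480 images in total
```

The lawyer example from the numbered list above falls out of this matrix as the four `{}`-filled variants of "a lawyer".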

    Next, following previous research (Luccioni et al., 2023), I analyzed two salient categories as observable identity characteristics to investigate possible biases: gender and ethnicity. Both categories, even if a tremendous oversimplification of our diverse reality, were assessed as binary variables. For gender, a distinction was made between male and female. For ethnicity, the distinction was made between white and non-white individuals, which reflects the approach of Ali et al. (2023). To reduce the subjectivity of the evaluation, input from a second researcher was sought. Although this small and homogeneous evaluator pool introduces a weakness into the design, I am confident that potential evaluator biases do not alter the results, given the unanimous agreement between them. The results are reported descriptively in Tables 1 and 2.
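The coding step amounts to two evaluators assigning binary labels and a descriptive tally of the consensus codes. A minimal sketch of that bookkeeping, using invented placeholder labels rather than the study's actual coding data, might look like this:

```python
from collections import Counter

# Hypothetical coding sheets: one binary gender code per image,
# from each of the two evaluators. Placeholder values only.
coder_a = ["male", "male", "female", "male", "female", "female"]
coder_b = ["male", "male", "female", "male", "male", "female"]

# Simple percent agreement between the two evaluators.
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

# Descriptive tally of the codes both evaluators agreed on,
# analogous to the counts reported in Tables 1 and 2.
consensus = [a for a, b in zip(coder_a, coder_b) if a == b]
counts = Counter(consensus)

print(f"agreement: {agreement:.2f}")
print(dict(counts))
```

In the study itself the agreement was unanimous (1.0); the placeholder data above merely shows how disagreements would lower the score and drop images from the consensus tally.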

    While this approach certainly presents just a small glimpse into the use of AI-imagery for anticipation, it vividly draws our attention to how TTI-systems might limit rather than expand our imaginative capacity. Table 1 indicates a prevalence of male over female representations, suggesting a strong gender bias in the AI-generated images. Especially in the present-tense baseline prompts, this trend is consistent across all groups. Interestingly, as soon as the future reference points are added, the gender distribution tends to become more diverse or even turns into an overrepresentation of women. In no case, however, does the gender distribution reach parity. In terms of ethnicity, Table 2 illustrates an overwhelming representation of white individuals across all groups and temporal references. Although there is a slight increase in diversity with the explicitly future-oriented prompts, it remains marginal, and the images fall short of reflecting the heterogeneity of our global society. Another observation is that the images reflecting the future tend to incorporate high-tech aesthetics and futuristic settings that evoke common science fiction tropes, such as sleek, technologized environments and cyborg-like figures. These motifs may further reinforce a narrow vision of the future by prioritizing familiar visual narratives over more diverse, culturally nuanced representations.

    From a technical perspective, AI-generated imagery represents the cumulative effort of the prompt’s author(s), the AI algorithm, and, indirectly, those who provide the vast online resources from which the systems extract their information (Palmieri, 2023). This exploration shows that current AI-generated visuals still tend to mirror prevalent stereotypes and are prone to misrepresentations. In my view, this pattern exposes a critical limitation for anticipation: including “the future” in the prompt does not, by itself, change dominant representations. More diverse and pluralistic representations can only be achieved if aspects of diversity are purposefully integrated into the prompts themselves. Hence, when used in an unquestioned manner, AI-visuals might inadvertently enshrine dominant perceptions and rationalize existing archetypal worldviews. I will deepen this point in the discussion.

    Fig. 2: Example of AI-generated images. Source: Own illustration.

    Table 1: Gender-based representations in AI-imagery.


    Please note: In this representation, the prompt about the toddler has been left out as the images did not show a clear indication of gender. Source: Own illustration.

    Table 2: Ethnicity-based representations in AI-imagery.

    Rendering New Futures? The Promises of AI-Based Imagery for Anticipation

    Building upon the critical examination of AI-generated images, this section explores the nuanced ways in which these technologies, despite their flaws, may contribute positively to the field of foresight. It is within this context of informed critique that the following two participatory activities were designed to explore the potential of TTI-systems to enrich foresight endeavors.

    The first example took place as part of a higher education program at a German university. 25 interdisciplinary and international students took part in a four-hour Futures Literacy Lab on “The Future of Maritime Travel in 2045.” The lab followed the process proposed by UNESCO and was conducted by the author and another facilitator. The aim was to provide students with a deeper understanding of the future, to teach them how to recognize their assumptions, and to train their cognitive flexibility in imagining different future scenarios. After discussing trends in the first phase, the second phase was dedicated to anticipation. There, we followed a two-step process: after a concise introduction to TTI-systems (Midjourney (version 5.2) and Stable Diffusion (Playground)), we tasked the students with crafting prompts individually to create desirable visions of the future. Each student crafted one prompt and copied it onto a joint digital whiteboard open to the entire class (see Fig. 3). In the next step, we asked the students to discuss and combine their ideas in small groups. Each group had to generate a joint prompt and a picture that reflected their collective vision (see Fig. 4).

    Fig. 3: Examples of individual prompts of students on the digital whiteboard.

    From a research perspective, I was interested in how far the process of generating and using AI-imagery would help the students in their groups’ anticipatory process. In a reflection round at the end of the session, we asked the students about their experience. Several participants mentioned that the process of generating prompts helped them gain more clarity about their own ideas and express their emotions towards different aspects of the future. The first round of individual prompt crafting provided them with something tangible for communicating their own ideas and understanding those of others during the group discussion. In the second phase of the Futures Literacy Lab, we used all the generated AI-pictures as input to jointly elicit the underlying assumptions of the desirable futures. The pictures helped the students detect certain overarching patterns (such as “techno-positivism” or “eco-friendliness”). With that, this typically rather abstract part of Futures Literacy Labs became more tangible.

    The second intervention took place in July 2023 during a public event at a German university. Over the course of one day, students, staff, their families, and citizens could engage in multiple activities to learn about the university. During that day and under the guidance of the author, a group of students organized and operated a booth to collect images of the future from the festival visitors. When people came to the booth, they were asked about their ideas for a desirable future for their city in 2070. Students, who had been introduced to Midjourney (version 5.2) beforehand, converted the spoken ideas into prompts and generated pictures in a co-creative way. Since Midjourney always generates multiple options at first, the participants could choose their most desirable option or ask for an iteration to alter details (see Fig. 5).

    Fig. 5: Exemplary results of the anticipation activity during the public festival.

    Fig. 4: Example of one group-based prompt and the AI-generated image.

    This activity revealed several insights. First, the use of TTI-systems sparked curiosity, and it was easy to engage people of all ages and backgrounds in dialogues about the future. Second, although almost no AI-picture managed to exactly resemble the participants’ ideas, seeing their thoughts translated into a concrete artefact helped them pinpoint what they liked or disliked, which offered an easy entry point for subsequent discussions. Further, the method worked across all age groups; children, for instance, could articulate their ideas, too.

    In sum, the two examples indicated four possible benefits of using AI-imagery in anticipatory activities:

    First, there is the text-to-image generation process itself. Here, participants need to craft prompts to explain their desired picture to the TTI-system. Whether or not a facilitator helps with the prompting, the more precise and detailed the textual inputs are, the better the AI-image will reflect the underlying idea. Hence, this process alone urges participants to develop clarity about their vision by deciding on a focus and specific keywords (e.g., related to style, mood, or features). With that, TTI-systems may act both as a means for self-reflection and as a mechanism to turn intangible values into tangible artifacts.

    Second, since everybody with a technical device and an internet connection can engage in such a process, AI-imagery may also act as a mediating tool between individual ideas and collective narratives. In the first example, for instance, the individual pictures served as a basis for discussion before being amalgamated into joint group visions. This enhanced the groups’ capacity to envision collectively and fostered the emergence of a shared understanding.

    Third, and also based on the experiences in example 1, having a collection of pictures helped the participants detect similarities and common patterns, which enhanced their capability to elicit the key beliefs and assumptions underlying their imagined futures. This is a crucial step in any anticipatory activity (Belton & Dillon, 2021; Marshall et al., 2023).

    Fourth, and as indicated in example 2, if used at larger scale, the comparison of AI-generated images can also reveal different worldviews across different groups of people and elicit both commonalities and differences. Hence, co-created AI-imagery could serve as an additional data pool in larger stakeholder processes.

    In conclusion, the exploration undertaken in the past two sections reveals a complex landscape in which TTI-systems, with their inherent biases and limitations, coexist with their potential to contribute significantly to foresight and anticipation. In the next section, I discuss this tension and propose ways to harness the creative potential of TTI-systems while addressing their biases for more inclusive and robust futures thinking.

    Conclusion: How to Cope with the Potentials and Perils of AI-Imagery?

    Don’t shoot the robot. Watch over its owner. (Sampedro, 2023)

    Building on Foucault’s idea that power and knowledge are inextricably linked (Foucault, 1981), the methods by which images of the future are produced, shared, and utilized need to be critically examined for their influence on anticipatory actions and, consequently, on higher-level organizational or societal structures. While this applies to both linguistic and visual representations of imaginary futures, pictures in particular enable us to deliberately build the world we imagine and make it accessible for exploration and reflection (Fischer & Mehnert, 2021). The use of scenario art may break down barriers between and within stakeholder groups, increase interest in alternative perspectives, and help generate empathy (Fitzgerald & Davies, 2024; Lederwasch, 2012). The examples presented earlier suggest that this notion can also be extended to the use of AI-generated imagery, which even holds the potential to democratize participatory processes by enabling all participants to play an active role in the visualization process. With a few clicks, anyone can transform their thoughts into tangible artistic artefacts that offer new opportunities to share ideas with others and open discussions about similarities and differences in anticipating the future. Hence, at first glance one might assume that skilled illustrators are no longer necessary. However, I contend that the facilitation of a technologically adept individual remains essential, not only for generating AI imagery but also for interpreting its outputs.

    As demonstrated earlier, AI is still prone to biases and skewed representations, frequently generating visuals that rely on stereotypical or historically entrenched tropes. As TTI-systems learn from the vast content available on the internet and are set to replicate what is most visible there, they inevitably and by design marginalize underrepresented perspectives. Sardar (2020) observes: “AI operates with the smog of ignorance” (p. 9). Taking this one step further, the rise of big data combined with artificial intelligence challenges the conventional distinction between data, information, and knowledge and questions both our epistemologies, i.e., the methods for acquiring, validating, and understanding knowledge, as well as our ontologies, i.e., the underlying assumptions about the nature of reality (Sardar, 2020). Research on the perception of AI-generated images shows that individuals tend to oscillate between overvaluing the system’s seemingly rational outputs, thereby devaluing their own interpretative capacities, and calibrating their perceptions to align with the AI’s implicit cues (Rapp et al., 2025). In other words, people may gradually forfeit their own interpretive agency, instead accepting the AI’s output as the definitive lens through which ideas of the future are understood. Given that anticipation happens not only explicitly, with the system (e.g., the human, the team, the organization) being aware of it, but also implicitly (Poli, 2019), using AI-generated imagery might exacerbate rather than mitigate common futures fallacies (Milojević, 2021). Hence, especially when used in an unquestioned manner, AI-based imagery can influence our assumptions and expectations about the future below the threshold of consciousness, enshrine dominant perceptions, and rationalize existing worldviews, which will ultimately shape emerging knowledge and anticipatory behavior.

    This mechanism becomes particularly critical in light of recent calls for the “decolonization of the future”, which refers to a process of questioning and challenging dominant narratives, systems, and assumptions about the future while creating space for diverse perspectives (Inayatullah, 1990; Jae, 2023; Sangchai, 2024; Sardar, 1993). Giannouli (2021) writes: “Decolonising your imagination means that you liberate your assumptions from past and future trends.” Given the vast dominance of WEIRD content on the internet, i.e., content stemming from Western, educated, industrialized, rich, and democratic contexts, AI-visuals are prone to project futures steeped in such interpretations. A liberation from these entrenched paradigms becomes technically highly unlikely, if not impossible, without deliberate intervention. Without careful human oversight, anticipatory activities that rely on AI-generated imagery could quietly foster the exclusion of marginalized voices, prevent equal representation, or enshrine deeply entrenched orthodoxies. Further, instead of elevating imagination, they may narrow and constrain our collective ability to imagine and articulate radically different futures. Hence, the question arises: are the benefits of using AI-generated images outweighed by their tendency to reinforce hegemonic discourses?

    As Escotet (2023) points out, the efficacy of any new technological advancement depends not only on the technology itself but also on its practitioners. We need to critically reassess both the data inputs and the interpretative frameworks of AI systems, ensuring that anticipatory practices actively empower diverse perspectives rather than create echo chambers of existing disparities. Consequently, the application of AI-imagery in anticipation necessitates reflection on the role of futures practitioners and facilitators of anticipatory processes. With TTI-systems, expertise in crafting effective prompts becomes key to translating nuanced ideas into vivid images that truly reflect the participants’ visions. Particularly when engaging with stakeholders from diverse backgrounds, facilitation by a technologically adept individual becomes crucial. Hence, as TTI-systems gain prominence, the facilitator faces expanded responsibilities and must serve in a dual capacity: first, in her traditional role of aiding participants in articulating and discussing their narratives, and second, as an expert translator and technical tutor, contextualizing the technological intricacies of the AI tools employed in the participatory process. Based on my reflections on the use of TTI-systems in anticipation, I conclude this essay with the following recommendations:

    1. Become AI-savvy and reflect on possible data biases: If you intend to use TTI-systems, develop a thorough understanding of the technology itself. Consider how existing data biases related to your anticipation topic might affect the outputs. Especially if the topic is culturally or socially sensitive, reflect on whether the benefits of using AI-imagery outweigh the risks of perpetuating entrenched worldviews or restricting innovative thought.
    2. Empower your participants: Carefully consider the added value of using TTI-systems in your anticipatory activity and whether your participants are the right audience for this exercise. For instance, be aware that most TTI-systems still work best in English. If you decide to use AI-imagery, help participants understand the limitations of TTI-systems by addressing this point proactively, e.g., via a short technical introduction. To keep the creation and interpretation processes collaborative and to avoid frustration, provide instructions or prompting support for the participants. You might also integrate small exercises or hints into the intervention that help participants play with different prompts and jointly reflect on the results.
    3. Use an iterative feedback loop: Treat AI-imagery as part of an iterative process, not as an ultimate answer. Generate images, discuss them with participants, gather feedback, and refine. This iterative approach ensures that AI-imagery enhances the futures visioning process rather than dictating it.
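    For practitioners who script their own workshop tooling, the generate-discuss-refine cycle in the third recommendation can be sketched in code. The following is a minimal illustration only, not part of the workshop setup described in this essay: `generate_image` is a hypothetical stand-in for a call to any real TTI-system, and the feedback capture is deliberately simplified to a list of participant comments per round.

    ```python
    # Sketch of an iterative feedback loop for AI-imagery in a workshop setting.
    # `generate_image` and `refine_prompt` are illustrative placeholders, not a real API.

    def generate_image(prompt: str) -> str:
        # Placeholder: a real implementation would call a TTI-system here
        # and return a path or URL to the generated image.
        return f"image generated for: {prompt!r}"

    def refine_prompt(prompt: str, feedback: list[str]) -> str:
        # Fold participant feedback into the next prompt iteration.
        return prompt + " -- " + "; ".join(feedback) if feedback else prompt

    def visioning_loop(initial_prompt: str, feedback_rounds: list[list[str]]) -> list[str]:
        """Run one generate-discuss-refine cycle per round of participant feedback."""
        prompt, images = initial_prompt, []
        for feedback in feedback_rounds:
            images.append(generate_image(prompt))     # generate
            prompt = refine_prompt(prompt, feedback)  # discuss and refine
        images.append(generate_image(prompt))         # final iteration
        return images

    images = visioning_loop(
        "a city street in 2050",
        [["less car-centric", "more greenery"], ["show informal markets"]],
    )
    print(len(images))  # three images: the initial one plus one per refinement round
    ```

    The point of the structure is that no single output is treated as the answer: every image is an intermediate artifact that participants critique before the prompt is revised.
    
    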

    Acknowledgements

    I would like to thank the Trend Seminar Team at the Center for Digital Technology and Management for the opportunity to implement the workshop on AI. My heartfelt thanks also go to my colleague Dr. Gerhard Schönhofer for sharing his ideas and investing his time in planning and co-facilitating the workshop. I am equally grateful to Katharina Kleine for leading the participatory activity during the Campus Festival, as well as all the students from the MSc Global Foresight & Technology Management Program who helped during the afternoon. Last but not least, I extend my gratitude to Tim Kolberg, who acted as a sparring partner and contributed his technical expertise to this paper.

    References

    Ali, R., Tang, O. Y., Connolly, I. D., Abdulrazeq, H. A., Mirza, F. N., Lim, R. K., Johnston, B. R., Groff, M. W., Williamson, T., Svokos, K., Libby, T. J., Shin, J. H., Gokaslan, Z. L., Doberstein, C. E., Zou, J., & Asaad, W. F. (2023). The Face of a Surgeon: An Analysis of Demographic Representation in Three Leading Artificial Intelligence Text-to-Image Generators [Preprint]. Surgery. https://doi.org/10.1101/2023.05.24.23290463

    Belton, O., & Dillon, S. (2021). Futures of autonomous flight: Using a collaborative storytelling game to assess anticipatory assumptions. Futures, 128, 102688. https://doi.org/10.1016/j.futures.2020.102688

    Borji, A. (2023). Generated Faces in the Wild: Quantitative Comparison of Stable Diffusion, Midjourney and DALL-E 2 (arXiv:2210.00586). arXiv. http://arxiv.org/abs/2210.00586

    Bruskin, S., & Mikkelsen, E. N. (2020). Anticipating the end: Exploring future-oriented sensemaking of change through metaphors. Journal of Organizational Change Management, 33(7), 1401–1415. https://doi.org/10.1108/JOCM-11-2019-0342

    Burt, R. S. (1995). Structural holes: The social structure of competition (1st Harvard University Press paperback ed.). Harvard University Press.

    Chen, H.-C., & Kao, Y.-F. (2022). Mirroring Reality: Surreal Tourist Gaze of Chinese Landscape Painting by Artificial Neural Networks. 2022 IEEE 5th International Conference on Knowledge Innovation and Invention (ICKII), 155–159. https://doi.org/10.1109/ICKII55100.2022.9983554

    Escotet, M. Á. (2023). The optimistic future of Artificial Intelligence in higher education. PROSPECTS. https://doi.org/10.1007/s11125-023-09642-z

    Fischer, N., & Mehnert, W. (2021). Building Possible Worlds: A Speculation Based Framework to Reflect on Images of the Future. Journal of Futures Studies, 25(3). https://doi.org/10.6531/JFS.202103_25(3).0003

    Fitzgerald, L. M., & Davies, A. R. (2024). Engaging futures: Scenario visualisation for sustainable urban food sharing. Futures, 164, 103462. https://doi.org/10.1016/j.futures.2024.103462

    Flathers, M., Smith, G., Wagner, E., Fisher, C. E., & Torous, J. (2024). AI depictions of psychiatric diagnoses: A preliminary study of generative image outputs in Midjourney V.6 and DALL-E 3. BMJ Mental Health, 27(1), e301298. https://doi.org/10.1136/bmjment-2024-301298

    Flyverbom, M., & Garsten, C. (2021). Anticipation and Organization: Seeing, knowing and governing futures. Organization Theory, 2(3), 263178772110203. https://doi.org/10.1177/26317877211020325

    Foucault, M. (1981). Power/knowledge: Selected interviews and other writings 1972–1977 (C. Gordon, Ed.). Pantheon Books.

    Giannouli, A. (2021). Decolonizing Your Imagination – Futures Literacy For Change. Retrieved February 15, 2025, from https://interaliaproject.com/decolonizing-your-imagination-futures-literacy-for-change/

    Inayatullah, S. (1990). Deconstructing and reconstructing the future. Futures, 22(2), 115–141. https://doi.org/10.1016/0016-3287(90)90077-U

    Jae, K. (2023). Decolonizing Futures Practice: Opening Up Authentic Alternative Futures. Journal of Futures Studies, 28(1), 15–24. https://doi.org/10.6531/JFS.202309_28(1).0002

    Kazemier, E. M., Damhof, L., Gulmans, J., & Cremers, P. H. M. (2021). Mastering futures literacy in higher education: An evaluation of learning outcomes and instructional design of a faculty development program. Futures, 132, 102814. https://doi.org/10.1016/j.futures.2021.102814

    Lederwasch, A. (2012). Scenario Art: A New Futures Method that Uses Art to Support Decision-Making for Sustainable Development. Journal of Futures Studies, 17(1), 25–40.

    Lederwasch, A. (2015). Complementing Causal Layered Analysis with Scenario Art: To develop a national vision and strategy for Australia’s minerals industry. In S. Inayatullah & I. Milojevic (Eds.), CLA 2.0: Transformative research in theory and practice (pp. 143–162). Tamkang University Press.

    Lee, J., & Summers, N. (2023). Channeling Creativity Through a Deeper Understanding of AI Image Generation. ACM SIGGRAPH 2023 Labs, 1–2. https://doi.org/10.1145/3588029.3599743

    Liveley, G., Slocombe, W., & Spiers, E. (2021). Futures literacy through narrative. Futures, 125, 102663. https://doi.org/10.1016/j.futures.2020.102663

    Luccioni, A. S., Akiki, C., Mitchell, M., & Jernite, Y. (2023). Stable Bias: Analyzing Societal Representations in Diffusion Models (arXiv:2303.11408). arXiv. http://arxiv.org/abs/2303.11408

    Marshall, H., Wilkins, K., & Bennett, L. (2023). Story thinking for technology foresight. Futures, 146, 103098. https://doi.org/10.1016/j.futures.2023.103098

    McDowell, A. (2019). Storytelling Shapes the Future. Journal of Futures Studies, 23(3). https://doi.org/10.6531/JFS.201903_23(3).0009

    Miller, R. (Ed.). (2018). Transforming the future: Anticipation in the 21st century. Routledge.

    Milojević, I. (2021). Futures Fallacies: What They Are and What We Can Do About Them.

    Milojević, I., Inayatullah, S., & Poocharoen, O. (2024). Artificial intelligence, water futures, and a living constitution: Using the futures triangle to envision novel futures for Thailand. Journal of Futures Studies – Perspectives. https://jfsdigital.org/2024/07/03/artificial-intelligence-water-futures-and-a-living-constitution/

    Oppenlaender, J. (2022). The Creativity of Text-to-Image Generation. Proceedings of the 25th International Academic Mindtrek Conference, 192–202. https://doi.org/10.1145/3569219.3569352

    Palmieri, A. (2023). Midjourney experimentation: Representing Nature on a macro scale. SCIRES-IT – SCIentific RESearch and Information Technology, 13(1). https://doi.org/10.2423/i22394303v13n1p181

    Poli, R. (2010). The many aspects of anticipation. Foresight, 12(3), 7–17. https://doi.org/10.1108/14636681011049839

    Poli, R. (2019). Introducing Anticipation. In R. Poli (Ed.), Handbook of Anticipation (pp. 3–16). Springer International Publishing. https://doi.org/10.1007/978-3-319-91554-8_1

    Rapp, A., Di Lodovico, C., Torrielli, F., & Di Caro, L. (2025). How do people experience the images created by generative artificial intelligence? An exploration of people’s perceptions, appraisals, and emotions related to a Gen-AI text-to-image model and its creations. International Journal of Human-Computer Studies, 193, 103375. https://doi.org/10.1016/j.ijhcs.2024.103375

    Sanders, E. B.-N., & Stappers, P. J. (2014). Probes, toolkits and prototypes: Three approaches to making in codesigning. CoDesign, 10(1), 5–14. https://doi.org/10.1080/15710882.2014.888183

    Sangchai, S. (2024). Some Aspects of Futurism. Journal of Futures Studies, 28(4), 83–103. https://doi.org/10.6531/JFS.202406_28(4).0006

    Sardar, Z. (1993). Colonizing the future: The ‘other’ dimension of futures studies. Futures, 25(2), 179–187. https://doi.org/10.1016/0016-3287(93)90163-N

    Sardar, Z. (2020). The smog of ignorance: Knowledge and wisdom in postnormal times. Futures, 120, 102554. https://doi.org/10.1016/j.futures.2020.102554

    Schoemaker, P. J. H. (1993). Multiple scenario development: Its conceptual and behavioral foundation. Strategic Management Journal, 14(3), 193–213. https://doi.org/10.1002/smj.4250140304

    Thomas, R. J., & Thomson, T. J. (2023). What Does a Journalist Look like? Visualizing Journalistic Roles through AI. Digital Journalism, 1–23. https://doi.org/10.1080/21670811.2023.2229883

    Vartiainen, H., & Tedre, M. (2024). How Text-to-Image Generative AI is Transforming Mediated Action. IEEE Computer Graphics and Applications, 1–12. https://doi.org/10.1109/MCG.2024.3355808
