    Journal of Futures Studies

    Envisioning the Futures of Language Education in the Era of Artificial Intelligence

    Article

    Georgios P. Georgiou1,2*

    1Department of Languages and Literature, University of Nicosia, Nicosia, Cyprus

    2Phonetic Lab, University of Nicosia, Nicosia, Cyprus

    Abstract

    This study offers a futures-oriented synthesis of artificial intelligence (AI) in language education by applying Dator’s Four Futures, namely, Continued Growth, Collapse, Discipline, and Transformation, to five high-impact areas: (1) process automation, (2) the rise of self-paced learning, (3) the replacement of humans by robots, (4) educator job displacement, and (5) the evolution of pronunciation pedagogy. A semi-structured double-sourced literature scan (2018–2025) across major academic databases and policy portals was conducted, prioritizing empirical studies, systematic reviews/meta-analyses, and official guidance/standards. The results indicate that Continued Growth yields augmentation across all five areas, namely automation of routine processes, stronger self-paced tutoring, teacher-orchestrated robot support, role drift (not mass displacement), and intelligibility-aligned pronunciation feedback, while leaving governance gaps. Collapse produces failures across the same areas, including over-automation, opaque self-paced systems that widen inequities, cost-driven teacher substitution, job hollowing, and accent-based harms that erode trust. Discipline is enabled by auditable guardrails applied system-wide, such as human-in/on-the-loop, privacy-by-design, transparent documentation, and bias auditing. Transformation reconfigures learning and assessment into immersive, agentic, multimodal ecosystems across all five areas, but is preferable only under fairness-first engineering and intelligibility-centered pedagogy. The study contributes a decision-ready roadmap for shifting AI adoption from ad-hoc procurement to managed capability, specifying institutional safeguards and evolving professional roles to secure equitable, intelligibility-aligned futures. Through the integration of evidence, scenario analysis, and actionable governance mechanisms, the study offers institutions a practical framework to prevent Collapse dynamics and enable Discipline and Transformation pathways.

    Keywords

    Artificial Intelligence, Education, Futures, Language

    Introduction

    Artificial intelligence (AI) has moved from the periphery of computer-assisted language learning to the center of mainstream practice, reshaping how languages are taught, learned, and assessed (Linderoth et al., 2024). Recent syntheses chart a rapid expansion of empirical work on generative and data-driven tools in language education, with studies reporting gains in feedback timeliness, writing support, and oral practice, alongside recurring concerns about validity, transparency, and bias (Li et al., 2025; Lee et al., 2025; Weng & Fu, 2025). In writing, large-scale reviews of automated writing evaluation document consistent though uneven benefits for accuracy and revision quality, contingent on pedagogy and meaningful human oversight (Dizon & Gayed, 2024; Karatay & Kataray, 2024). For speaking and pronunciation, meta-analytic and review evidence indicates that automatic speech recognition (ASR)-mediated practice can improve segmental and suprasegmental performance when embedded in thoughtfully designed tasks (Liu, 2025; Ngo et al., 2024). In parallel, a substantial literature warns that ASR accuracy varies across dialects, accents, gender, and age, creating risks for equity if such systems are used uncritically in feedback or assessment (Tatman, 2017).

    Within this context, education systems are beginning to codify governance. In Europe, the AI Act classifies many educational uses as high-risk, introducing lifecycle risk management, documentation, and bias-testing requirements, with phased enforcement now unfolding across the sector (Piquart, 2025). Complementary guidance for teachers in the EU stresses practical ethics (purpose limitation, data minimization, and explainability) to ensure AI augments rather than replaces human judgment (European Commission, 2022). These policy shifts resonate with long-standing insights in pronunciation research that privilege intelligibility and comprehensibility over nativeness, underscoring the need to align technological affordances with communicative goals and learner identities (Levis, 2018).

    From a futures perspective, debates about intelligent machines and educational robotics are not entirely new. As early as the 1970s, Toffler’s Future Shock and The Third Wave identified automation, robotics, and accelerated social change as defining pressures on education, work, and everyday life (Toffler, 1970, 1980). Futures scholars have also long explored the social, legal, and ethical implications of robots and AI, including explicit discussions of whether robots might one day hold rights or quasi-personhood (Inayatullah, 2001; McNally & Inayatullah, 1988; Wurah, 2017). Dator’s (2001) introduction to the symposium on The Rights of Robots further demonstrates that questions about whether robots should be included within legal and ethical communities have been a core concern in futures thinking about law and technology for decades. In addition, Dator has used alternative-futures frameworks to examine emerging technologies and their societal implications, including AI and robots, for several decades (Dator, 1989, 2009). Seen against this historical backdrop, contemporary concerns about AI in language education extend a much older conversation about automation, labor, and rights into the specific domains of language learning, teaching, and assessment.

    This study offers an original, futures-oriented examination of AI in language education by combining a futures framework with a systematic, double-sourced evidence base to map plausible trajectories across five high-impact areas. Its novelty lies in moving beyond tool-by-tool evaluations to a scenario methodology that integrates empirical research, policy guidance, and a claim-evidence matrix to stress-test assumptions for validity, equity, and governance. The core aim is to clarify where current trends could lead and to specify concrete guardrails that make preferable futures practicable. The study’s key contribution is a decision-ready synthesis that links evidence to policy and practice, reframing AI adoption from ad-hoc procurement to managed capability, and articulating the evolving professional roles (e.g., coach, assessor-validator, data steward, experience designer) needed to realize equitable, intelligibility-centered outcomes.

    Methodology

    This research builds on prior applications of futures methodologies to AI contexts that emphasize participatory and anticipatory scenario development (Farrow, 2020; Inayatullah, 2008). More specifically, this paper employs Dator’s (2009) Four Futures as a methodological framework to envision the futures of language education across five key areas in the AI era, including process automation, the rise of self-paced learning, the replacement of humans by robots, educator job displacement, and the evolution of pronunciation pedagogy (see Figure 1). These five areas were chosen because they reflect the significant and transformative impacts of AI on language education. They encompass key areas of change, offering a focused yet comprehensive view of the future shaped by emerging technologies.

    Jim Dator’s (2009) four generic futures, namely, Continued Growth, Collapse, Discipline, and Transformation, offer a framework for thinking about how the future might unfold. The Continued Growth scenario assumes current trends in the economy and technology will persist, bringing progress and development. In contrast, Collapse envisions systemic failure due to overreach, inequality, or environmental degradation, leading to societal breakdown. Discipline suggests a future where societies adopt ethical limits and governance to ensure sustainability and social balance. Finally, Transformation imagines a radical shift driven by revolutionary technologies, environmental changes, or changes in human consciousness, leading to entirely new ways of living. Together, these archetypes help explore the full range of possibilities for navigating an uncertain future.

    We organized the analysis with Dator’s four futures and examined the same five areas in each scenario. We conducted a semi-structured literature scan (2018–2025) across Scopus, Web of Science, ERIC, Google Scholar, and policy portals (UNESCO, OECD, US DoE) using predefined strings (e.g., “AI AND language education”, “robot-assisted language learning”, “automated scoring AND fairness”, “ASR AND accent bias”, “pronunciation AND intelligibility”). Inclusion criteria were as follows: empirical studies, systematic reviews/meta-analyses, and official policy/standards; written in English; and relevance to at least one area. We recorded each source in a spreadsheet and applied light thematic tags. Tagged statements were then mapped to archetypes to form the four scenarios, checking for causal plausibility and cross-scenario contrast. High-stakes claims (e.g., fairness, labor) were double-sourced (research + policy). A claim-evidence matrix and the large table in the Results section document how sources support each scenario element.
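    The tag-and-map workflow described above can be pictured as a small script. The sketch below is purely illustrative (the records, field names, and tag labels are hypothetical, not the study's actual tooling): each source carries light thematic tags, and the double-sourcing rule for high-stakes claims requires both a research source and a policy source before a claim enters the matrix.

```python
from collections import defaultdict

# Hypothetical records from the literature scan: each source gets a
# type (empirical, review, policy) and light thematic tags.
sources = [
    {"id": "Floden2025",   "type": "empirical", "tags": ["automation", "fairness"]},
    {"id": "UNESCO2023",   "type": "policy",    "tags": ["automation", "labor"]},
    {"id": "Koenecke2020", "type": "empirical", "tags": ["fairness"]},
]

# Claim-evidence matrix: thematic tag -> supporting sources.
matrix = defaultdict(list)
for src in sources:
    for tag in src["tags"]:
        matrix[tag].append(src)

def double_sourced(tag):
    """High-stakes claims (e.g., fairness, labor) must be backed by
    both research (empirical/review) and policy sources."""
    kinds = {s["type"] for s in matrix[tag]}
    return ("policy" in kinds) and (kinds & {"empirical", "review"} != set())

print(double_sourced("automation"))  # research + policy both present
print(double_sourced("fairness"))    # research only, so the claim needs a policy source
```

    A check like this makes the double-sourcing criterion auditable rather than informal: any scenario element resting on a single source type is flagged before it enters the results table.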

    Fig. 1. Core areas of AI influence in language education

    Results

    Scenario 1 – Continued Growth

    Automation in processes

    AI continues to take over routine, scale-dependent tasks such as item generation, low-stakes grading/feedback, plagiarism detection, and dashboards, while teachers retain authority over consequential decisions (placement, high-stakes grading, progression). Policy guidance frames this as human-in-the-loop adoption to protect validity, transparency, and privacy (Memarian & Doleck, 2024; UNESCO, 2023/2025). Empirical work shows large language models (LLMs) can approximate human grading on some university exams, but performance varies across tasks and prompts, reinforcing the need for teacher moderation, documentation, and audits (Flodén, 2025). With the capacity to automate administrative duties, create exams, conduct assessments, detect plagiarism, and provide feedback (Renkema & Tursunbayeva, 2024), AI offers educators a powerful tool that could liberate them from administrative burdens and redirect their focus toward meaningful student engagement and instructional innovation.
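    The human-in-the-loop principle can be expressed as a simple moderation rule. This is an illustrative sketch only (the scores, item names, and tolerance are hypothetical, not drawn from any cited system): automated scores remain advisory, a teacher re-marks a sample of items, and any item whose AI score diverges beyond a tolerance is escalated back to the teacher for the consequential decision.

```python
def moderate(ai_scores, human_spot_checks, tolerance=1.0):
    """Return item ids whose AI score diverges from a human spot-check
    by more than `tolerance` (or was never scored); these go back to
    the teacher. AI output is advisory, never final."""
    flagged = []
    for item_id, human in human_spot_checks.items():
        ai = ai_scores.get(item_id)
        if ai is None or abs(ai - human) > tolerance:
            flagged.append(item_id)
    return flagged

ai = {"essay-1": 7.5, "essay-2": 4.0, "essay-3": 8.0}
checks = {"essay-1": 7.0, "essay-2": 6.5}   # teacher re-marks a sample
print(moderate(ai, checks))  # ['essay-2']: diverges by 2.5 points
```

    Logging which items were flagged, and why, also produces the audit trail that documentation-oriented guidance calls for.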

    The rise of self-paced learning

    In this scenario, AI technologies steadily improve and become seamlessly integrated into self-paced learning ecosystems (Saikrishna, 2025). Mainstream platforms embed adaptive tutors that sequence content and provide real-time, personalized feedback, yielding measurable gains in achievement when access and digital literacy conditions are met (Dong et al., 2025). Continued growth, therefore, improves the reach and timeliness of feedback, but the same dynamics demand explainability to learners and visibility to educators so model decisions can be questioned and corrected (Memarian & Doleck, 2024; UNESCO, 2023/2025).

    Will robots replace human educators?

    Evidence from reviews and trials in robot-assisted language learning (RALL) shows positive cognitive/affective effects when robots are used for bounded, repetitive tasks (pronunciation drills, turn-taking practice, oral testing), orchestrated by the teacher (Belpaeme et al., 2018; Deng et al., 2024; de Wit et al., 2018; Engwall & Lopes, 2022; Randall, 2019). This pattern supports task-level substitution (robots handling drill-heavy routines) rather than role-level replacement of teachers. Programs that scale responsibly also address classroom governance (consent for audio/video capture, cultural-pragmatic fit), keeping the human educator in charge of pedagogy and relationships.

    Will language educators lose their jobs?

    Under continued growth, the labor signal is role drift, not mass displacement. AI offloads scale-dependent work (feedback triage, practice scheduling, item drafting) while teachers lean into coaching, discourse/culture mediation, assessment validation, and data stewardship (UNESCO, 2023/2025); thus, the teacher’s role shifts to that of a coach and facilitator, focusing on human-centered skills like communication, empathy, and deep cultural insights. This augmentation framing aligns with human-oversight policy and avoids over-automation pitfalls flagged in the assessment literature (Baker & Hawn, 2022).


    The evolution of pronunciation pedagogy

    Pronunciation support in this path aligns with the shift from nativeness to intelligibility/comprehensibility as the primary target, consistent with the phonological descriptors in the Council of Europe’s Common European Framework of Reference for Languages (CEFR) Companion Volume and recent practice (Berns & van Vuuren, 2023; Council of Europe, 2020). Reviews of AI/computer-assisted pronunciation training (CAPT) tools further suggest that automated pronunciation feedback can aid learning when embedded in sound pedagogy, while calling for transparent evaluation across accents and contexts (Liu & Ab Rahman, 2025; Vančová, 2023). To keep growth preferable, institutions should require intelligibility-aligned rubrics, accent-diverse evaluation, and disclosure of known limitations.

    Scenario 2 – Collapse

    Automation in processes

    In a collapse trajectory, systems over-automate assessment and administration without adequate oversight. Institutions lean on automated scoring and analytics to cut costs, but variability across tasks and populations produces erratic, contestable results, undermining legitimacy. Recent evidence finds LLMs can approximate human grading in some contexts while also showing non-trivial divergence, a pattern that demands precisely the human moderation the system now withholds (Flodén, 2025). Meanwhile, the educational AI literature documents structural pathways for algorithmic bias (e.g., skewed training data, construct under-representation), which, without robust controls, translate into unfair classifications or feedback (Baker & Hawn, 2022). The policy consensus that AI should remain human-in-/on-the-loop is ignored or underfunded, creating a perception that automated decisions are final rather than advisory (UNESCO, 2023/2025).

    The rise of self-paced learning

    Under collapse, self-paced platforms expand unevenly. Connectivity, device scarcity, and digital-skills gaps widen existing inequities, reproducing advantages for already-privileged learners (Farahani & Ghasemi, 2024). UNESCO’s Global Education Monitoring work shows that access to technology exacerbates educational inequality when not matched with inclusion policies; the pandemic made these gaps visible and durable (UNESCO GEM, 2023). OECD analyses similarly warn that digital transformation, absent governance and procurement standards, amplifies divides and bias (OECD, 2023). In this scenario, adaptive systems also lack transparency for learners and teachers, leading to uninterpretable recommendations, automation complacency, and disengagement.

    Will robots replace human educators?

    Cost-cutting deployments push social robots/agentic avatars into roles they are not ready to fill: unproctored oral testing, pastoral support, and classroom management. Reviews of RALL show benefits for bounded, teacher-orchestrated tasks; when robots are scaled as teacher substitutes, learning and relational quality suffer (Deng et al., 2024; Randall, 2019). Ethical and governance gaps intensify the damage, as privacy-sensitive audio and video capture in classrooms proceeds with weak consent and unclear data-retention practices, fueling backlash (Johnston, 2023; Lutz et al., 2021). As Selwyn (2019) cautions, robotization cannot replicate the social, moral, and cultural work of teaching; yet in the collapse path, systems try, and trust erodes.

    Will language educators lose their jobs?

    Budget pressure plus techno-solutionism generates hiring freezes and casualization, with AI/robot deployments rationalized as efficiency measures. Frey and Osborne (2017) estimated that nearly 47% of existing jobs could be automated within two decades, a figure that underscores the gravity of the transformation underway. OECD’s Digital Education Outlook highlights how governance and procurement shape digital ecosystems; in a collapse path, those levers are weak, so labor protections and reskilling are de-prioritized (OECD, 2023). The result is role hollowing, with less time for high-value relational work and greater pressure to supervise brittle systems, driving attrition and loss of tacit expertise. UNESCO’s guidance, which stresses human oversight and capacity-building, is not implemented at scale, compounding legitimacy problems for institutions (UNESCO, 2023/2025).

    The evolution of pronunciation pedagogy

    Pronunciation technologies, rushed into high-stakes use, re-entrench prestige norms and penalize minoritized accents. Systematic reviews show promise for ASR-supported pronunciation training in EFL, but they also call for transparent, accent-aware evaluation, which is missing here (Liu & Ab Rahman, 2025). Outside education, benchmark studies demonstrate large racial disparities in ASR error rates, a red flag for any automated feedback loop touching multilingual and non-prestige varieties (Koenecke et al., 2020). Without pluricentric design, the resulting feedback homogenizes learner speech and undermines identity, counter to contemporary assessment directions that emphasize intelligibility/comprehensibility (Berns & van Vuuren, 2023; Council of Europe, 2020). In short, the collapse scenario imagines AI-driven pronunciation systems that fail outright or are marred by technical limitations, cultural resistance, misuse, and ethical concerns about linguistic homogenization (Díaz-Domínguez, 2020).

    Scenario 3 – Discipline

    Automation in processes

    In a Discipline trajectory, institutions adopt AI with strong safeguards, including human-in-/on-the-loop decision-making for consequential assessment, documented model behavior, privacy-by-design, and routine bias auditing. International guidance explicitly frames AI as assistive, not substitutive, requiring transparency to educators and learners and sustained capacity-building (UNESCO, 2023/2025; US Department of Education, 2023). Operationally, this is implemented via risk frameworks (e.g., NIST AI RMF 1.0) and technical documentation such as model cards and datasheets that specify intended use, evaluation across subgroups, and limits (Gebru et al., 2021; Mitchell et al., 2019; NIST, 2023). The result is automation for scale-dependent tasks (item generation, low-stakes feedback, analytics) with teachers retaining authority over placement, progression, and final grades, an approach that directly addresses well-documented algorithmic bias pathways in education (Baker & Hawn, 2022).

    The rise of self-paced learning

    Self-paced tools remain central, but discipline means they are governed by explainability and educator oversight. Platforms must expose why a task or hint is recommended and give instructors levers to override or adapt pathways (UNESCO, 2023/2025; US Department of Education, 2023). System-level governance (interoperability, data minimization, role-based access) is prioritized to foster trust and reduce lock-in (OECD, 2023). This combination preserves the personalization gains reported in recent syntheses while reducing black-box risk. In addition, AI acts as a complement to human instruction, offering adaptive support while educators and institutions maintain oversight. Learners retain control over their data and pace of progression, and AI tools are held to standards of fairness and inclusivity (Butt, 2024).

    Will robots replace human educators?

    Robots and embodied agents are used as aides under codes of practice, not as teacher replacements. Evidence shows robots can improve bounded, repetitive tasks (pronunciation drills, turn-taking, oral practice) when teacher-orchestrated but not as wholesale substitutes (Belpaeme et al., 2018; Woo et al., 2021). Ethical protocols address consent for audio/video capture, data minimization, and storage/retention; privacy research on social robots underscores that concerns measurably affect use intentions, so proactive safeguards are essential (Johnston, 2023; Lutz et al., 2021). In short, robots extend reach where appropriate, while authority over pedagogy and relationships remains human.

    Will language educators lose their jobs?

    Discipline emphasizes role evolution with protections, explicitly tying AI adoption to workload relief, upskilling, and recognition of new roles (AI pedagogy designer, assessment validator, data steward). This aligns with global guidance that stresses human oversight and educator capacity as preconditions for deployment (UNESCO, 2023/2025; OECD, 2023). Governance and procurement standards curb techno-solutionism and help avoid labor hollowing, while preserving teachers’ relational and cultural work that automation cannot replace.

    The evolution of pronunciation pedagogy

    Pronunciation in a Discipline future follows intelligibility/comprehensibility-first targets consistent with the CEFR Companion Volume phonological descriptors and pluricentric aims (Council of Europe, 2020). Systems are audited for accent fairness before classroom use, reflecting evidence of substantial disparities in ASR performance across speaker groups; documentation must disclose subgroup results and mitigation steps (Koenecke et al., 2020). Reviews of AI-supported pronunciation training note benefits when tools are pedagogically integrated and evaluated transparently across accents and contexts (Vančová, 2023).
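    The accent-fairness audit described here can be made concrete with a small sketch (the sample data and group names are hypothetical; the metric is the standard word error rate, i.e., word-level edit distance over reference length): compute WER per accent group and disclose a worst-to-best disparity ratio before a tool reaches the classroom.

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def subgroup_audit(samples):
    """samples: accent group -> list of (reference, ASR hypothesis) pairs.
    Returns mean WER per group plus a worst/best disparity ratio."""
    report = {g: sum(wer(r, h) for r, h in pairs) / len(pairs)
              for g, pairs in samples.items()}
    rates = sorted(report.values())
    report["disparity_ratio"] = (rates[-1] / rates[0]) if rates[0] else float("inf")
    return report

samples = {
    "Group A": [("good morning class", "good morning class"),
                ("open your books", "open you books")],
    "Group B": [("good morning class", "good morning glass")],
}
print(subgroup_audit(samples))  # Group B's WER is twice Group A's (disparity_ratio == 2.0)
```

    Publishing such subgroup metrics alongside a tool, as model-card-style documentation recommends, lets institutions set an acceptance threshold on the disparity ratio rather than trusting a single aggregate accuracy figure.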

    Scenario 4 – Transformation

    Automation in processes

    In a Transformation trajectory, automation does not just speed up existing processes; it reconfigures them. Assessment becomes continuous and context-aware as agentic systems generate tasks on the fly, capture multimodal evidence (speech, gesture, gaze), and produce explainable, formative feedback embedded in authentic scenarios. Educators curate workflows and set guardrails, while co-creative design pipelines (generative AI + learning analytics) spin up new materials and assessments with learners as partners (Urmeneta & Romero, 2025; Weatherly, 2025). This aligns with a growing strand of research on AI-supported co-creation in education, moving from consumption to participatory production.

    The rise of self-paced learning

    Learning shifts from modules to immersive, situated practice across mixed/virtual reality (VR) spaces with embodied AI interlocutors. Systematic reviews show AR/VR can enhance language learning, especially vocabulary, interactional fluency, and motivation, when designs leverage interactivity and context; recent work extends this to social VR with LLM-driven agents capable of on-the-spot role-plays and feedback (Christou et al., 2025; Haoming & Wei, 2024; Llanos-Ruiz et al., 2025; Pan et al., 2025; Schorr et al., 2024). In a transformation path, these affordances become routine, immersing learners in target scenarios (e.g., clinics, airports, negotiations) where they receive just-in-time linguistic and pragmatic support.

    Will robots replace human educators?

    Embodied agents (social robots and extended reality (XR)-based embodied conversational agents (ECAs)) move from tutors to co-creators of experiences. Current evidence indicates ECAs can drive immersion, reduce anxiety, and adaptively scaffold interaction; XR-native ECAs and classroom robots become more capable as perception and dialogue improve (Belpaeme et al., 2018; Du & Daniel, 2024; Han & Ryu, 2025; Yang et al., 2025). Even so, research and governance communities emphasize that teachers must retain authority over goals, ethics, and culture; transformation should expand human capacity rather than replace it.

    Will language educators lose their jobs?

    Roles are reinvented rather than removed. As learning migrates into lived, AI-mediated contexts, demand rises for experience designers, ethics/data stewards, assessment validators, and community partners who localize content and ensure cultural fit. Global policy guidance continues to emphasize rights, oversight, and capacity-building, which are increasingly critical as classrooms dissolve into networks of learning experiences (UNESCO, 2025; OECD, 2023). Institutions that tie deployment to upskilling and workload redesign retain talent and legitimacy; those that do not risk trust erosion even in a technologically advanced landscape.

    The evolution of pronunciation pedagogy

    Pronunciation targets evolve toward AI-optimized intelligibility configurable to context (e.g., discipline, interlocutor, register), while retaining pluricentric respect for varieties. This builds on (not replaces) the intelligibility/comprehensibility emphasis already embedded in the CEFR Companion Volume and recent pronunciation/CAPT reviews (Amrate & Tsai, 2025; Council of Europe, 2020; Liu & Ab Rahman, 2025). However, any shift that relies on ASR or CAPT must confront well-evidenced accent-related performance disparities documented across commercial ASR systems and newer models by reporting subgroup metrics, diversifying datasets, and auditing bias-mitigation efforts (Graham & Roll, 2024; Kheir et al., 2023; Koenecke et al., 2020). Transformation is preferable only if intelligibility-first design goes hand-in-hand with fairness-first engineering. Table 1 summarizes the evidence and implications across Dator’s Four Futures for AI in language education.

    Table 1: Scenario evidence and practice/policy implications across Dator’s Four Futures in language education.

Scenario: Continued Growth

Area: Automation in processes
Central claim: AI augments teachers by automating routine, scale-dependent tasks, with a human in the loop for high-stakes decisions.
Supporting evidence: LLMs can approximate human grading but require teacher moderation and audits (Flodén, 2025). Policy guidance frames AI as assistive to protect validity and privacy (Memarian & Doleck, 2024; UNESCO, 2023/2025). AI automates administrative duties and feedback (Renkema & Tursunbayeva, 2024).
Implications for practice/policy: Teachers’ roles shift towards coaching and instructional innovation. Policy must ensure human oversight protocols and transparency in automated processes.

Area: The rise of self-paced learning
Central claim: Adaptive tutors in mainstream platforms provide personalized, real-time feedback, improving achievement where access and digital literacy are met.
Supporting evidence: Embedding adaptive tutors yields measurable gains (Dong et al., 2025). Growth demands explainability to learners and visibility for educators (UNESCO, 2023/2025).
Implications for practice/policy: Digital equity must be addressed. Platforms must provide dashboards for educators to monitor and correct AI-driven learning pathways.

Area: Will robots replace human educators?
Central claim: Robots are used for bounded, repetitive tasks (e.g., drills) orchestrated by the teacher; task-level substitution, not role replacement.
Supporting evidence: Reviews and trials show positive cognitive/affective effects for teacher-orchestrated, bounded tasks (Belpaeme et al., 2018; Deng et al., 2024; Randall, 2019).
Implications for practice/policy: Classroom governance (consent, cultural fit) is essential. Teachers remain in charge of pedagogy and relationships.

Area: Will language educators lose their jobs?
Central claim: The labor market sees role drift, not mass displacement; AI offloads scale-dependent work, and teachers focus on human-centered skills.
Supporting evidence: The teacher’s role shifts to coach, culture mediator, and data steward (UNESCO, 2023/2025). Augmentation aligns with policy and avoids over-automation pitfalls (Baker & Hawn, 2022).
Implications for practice/policy: Investment in teacher upskilling for new roles. Job descriptions evolve to emphasize coaching, empathy, and cultural mediation.

Area: The evolution of pronunciation pedagogy
Central claim: Pronunciation support aligns with intelligibility/comprehensibility over nativeness; automated feedback is embedded in sound pedagogy.
Supporting evidence: Alignment with CEFR Companion Volume descriptors (Council of Europe, 2020). Automated feedback aids learning but requires transparent evaluation across accents (Liu & Ab Rahman, 2025; Vančová, 2023).
Implications for practice/policy: Institutions should mandate intelligibility-aligned rubrics and accent-diverse evaluation of AI tools.
Scenario: Collapse

Area: Automation in processes
Central claim: Over-automation without oversight leads to erratic, unfair results, undermining system legitimacy and validity.
Supporting evidence: LLM grading shows significant divergence without moderation (Flodén, 2025). Algorithmic bias pathways lead to unfair classifications (Baker & Hawn, 2022). Human-in-the-loop policy is ignored (UNESCO, 2023/2025).
Implications for practice/policy: Erosion of trust in educational institutions. Automated decisions are perceived as final, leading to contestable and unfair outcomes for students.

Area: The rise of self-paced learning
Central claim: Uneven expansion widens educational inequality; adaptive systems lack transparency, leading to complacency and disengagement.
Supporting evidence: Technology access exacerbates inequality without inclusion policies (UNESCO GEM, 2023). Digital transformation amplifies divides absent governance (OECD, 2023).
Implications for practice/policy: The digital divide becomes a central equity crisis. Privileged learners pull further ahead, while others are left behind with uninterpretable systems.

Area: Will robots replace human educators?
Central claim: Cost-cutting deploys robots as teacher substitutes, a role for which they are unready, damaging learning quality and trust.
Supporting evidence: Benefits are lost when robots are scaled as substitutes (Deng et al., 2024; Randall, 2019). Ethical gaps, such as privacy issues, fuel backlash (Johnston, 2023; Lutz et al., 2021).
Implications for practice/policy: Trust in educational technology erodes. Robotization fails to replicate the social and moral work of teaching.

Area: Will language educators lose their jobs?
Central claim: Budget pressures lead to hiring freezes and casualization, hollowing out professional roles and driving attrition and loss of expertise.
Supporting evidence: High risk of job automation (Frey & Osborne, 2017). Labor protections are de-prioritized (OECD, 2023). UNESCO guidance on capacity-building is not implemented.
Implications for practice/policy: Teachers experience deskilling and increased pressure to manage brittle AI systems, leading to burnout and an exodus from the profession.

Area: The evolution of pronunciation pedagogy
Central claim: Rushed technologies re-entrench prestige accents and penalize minoritized varieties, counter to intelligibility principles.
Supporting evidence: ASR systems show large racial disparities in error rates (Koenecke et al., 2020). Accent-aware evaluation is lacking (Liu & Ab Rahman, 2025). Systems may fail due to technical limits or ethical concerns (Díaz-Domínguez, 2020).
Implications for practice/policy: Pronunciation feedback homogenizes learner speech and undermines linguistic identity, reinforcing linguistic discrimination and bias.
Scenario: Discipline

Area: Automation in processes
Central claim: AI is adopted with strong guardrails: human-in-the-loop review, bias auditing, and transparency for consequential decisions.
Supporting evidence: Implementation via risk frameworks (NIST AI RMF) and documentation such as datasheets and model cards (Gebru et al., 2021; Mitchell et al., 2019). Addresses algorithmic bias pathways (Baker & Hawn, 2022).
Implications for practice/policy: Requires institutional investment in governance. Teachers retain final authority over high-stakes decisions such as progression and grading.

Area: The rise of self-paced learning
Central claim: Self-paced tools are governed by explainability and educator oversight; system-level governance prioritizes trust and reduces lock-in.
Supporting evidence: Platforms must expose recommendation rationales and allow instructor overrides (UNESCO, 2023/2025; US Department of Education, 2023). Interoperability and data minimization are key (OECD, 2023). AI is a complement to human instruction (Butt, 2024).
Implications for practice/policy: Preserves personalization gains while mitigating “black-box” risks. Learners retain data control, and tools are held to fairness standards.

Area: Will robots replace human educators?
Central claim: Robots are used as aides under strict ethical codes of practice, not as replacements; authority remains with the teacher.
Supporting evidence: Evidence supports bounded, teacher-orchestrated tasks (Belpaeme et al., 2018). Proactive privacy safeguards are essential (Johnston, 2023; Lutz et al., 2021).
Implications for practice/policy: Ethical protocols for consent and data management are mandatory. Robots extend reach where appropriate without supplanting human relationships.

Area: Will language educators lose their jobs?
Central claim: Role evolution is protected; AI adoption is tied to workload relief, upskilling, and recognition of new, higher-order roles.
Supporting evidence: Global guidance stresses human oversight and educator capacity as preconditions (OECD, 2023; UNESCO, 2023/2025). Governance curbs techno-solutionism.
Implications for practice/policy: Teachers’ relational and cultural work is preserved and valued. Career pathways expand into AI pedagogy design and data stewardship.

Area: The evolution of pronunciation pedagogy
Central claim: Systems are audited for accent fairness before deployment, following intelligibility-first targets and pluricentric aims.
Supporting evidence: Alignment with CEFR descriptors (Council of Europe, 2020). Documentation must disclose ASR performance across subgroups and mitigation steps (Koenecke et al., 2020). Benefits accrue when tools are transparently evaluated (Vančová, 2023).
Implications for practice/policy: Mandatory pre-deployment bias audits and transparency reports. Rubrics and tools must be designed for linguistic diversity and fairness.
Scenario: Transformation

Area: Automation in processes
Central claim: AI reconfigures assessment into continuous, context-aware, and multimodal processes; educators and learners become co-creators.
Supporting evidence: Agentic systems generate tasks and provide formative feedback in authentic scenarios (Weatherly, 2025). A move towards AI-supported co-creation (Urmeneta & Romero, 2025).
Implications for practice/policy: Educators curate AI workflows and guardrails. Assessment becomes a dynamic, participatory process rather than a periodic event.

Area: The rise of self-paced learning
Central claim: Learning migrates to immersive VR/AR spaces with embodied AI interlocutors, enabling situated practice in target scenarios.
Supporting evidence: AR/VR enhances vocabulary, fluency, and motivation (Haoming & Wei, 2024; Schorr et al., 2024). Social VR with LLM-driven agents allows realistic role-plays (Christou et al., 2025; Pan et al., 2025).
Implications for practice/policy: Demand rises for experience designers and ethics stewards. Learning is defined by lived experience in simulated, context-rich environments.

Area: Will robots replace human educators?
Central claim: Embodied agents (robots, XR ECAs) evolve from tutors to co-creators of learning experiences, expanding human capacity.
Supporting evidence: ECAs drive immersion and adaptively scaffold interaction (Han & Ryu, 2025; Yang et al., 2025). Teacher authority over goals and ethics remains essential.
Implications for practice/policy: Teacher roles are reinvented, not replaced; teachers become orchestrators of complex AI-mediated experiences and guardians of ethical and cultural norms.

Area: Will language educators lose their jobs?
Central claim: Roles are radically reinvented; demand rises for experience designers, ethics stewards, validators, and community partners.
Supporting evidence: As classrooms dissolve into networks of experiences, policy stresses rights and oversight (OECD, 2023; UNESCO, 2025). Upskilling is critical for legitimacy.
Implications for practice/policy: Institutions that tie technology deployment to workload redesign and continuous learning will retain talent and trust.

Area: The evolution of pronunciation pedagogy
Central claim: Pronunciation targets evolve towards context-configurable, AI-optimized intelligibility while respecting pluricentric varieties.
Supporting evidence: Builds on the intelligibility emphasis in the CEFR and CAPT reviews (Amrate & Tsai, 2025; Liu & Ab Rahman, 2025). ASR accent disparities must be confronted with fairness-first engineering (Graham & Roll, 2024; Koenecke et al., 2020).
Implications for practice/policy: Transformation is preferable only if intelligibility-first design goes hand-in-hand with fairness-first engineering. Requires rigorous, ongoing auditing of AI systems for accent bias.

    Conclusion and policy implications

    Across the four futures, the arc of AI in language education is governed less by technical ceilings than by choices about validity, equity, teacher capacity, and data rights. Recent empirical work on automated scoring confirms both promise and variability; LLM raters can approximate human judgments on certain writing dimensions, yet reliability and construct coverage remain uneven across prompts and student populations, emphasizing the need for human moderation and explicit validity arguments (Liew & Tang, 2024; Pack, 2024; Tang et al., 2024; Zhao & Jian, 2025). These findings align with newer sectoral syntheses showing that gains from adaptive/AI-supported assessment are contingent on transparency and educator oversight rather than automation alone (Garzón et al., 2025; OECD, 2024).

    Policy should therefore shift from ad-hoc adoption to managed capability. First, require formal institutional AI management systems to identify risks (accent-conditioned errors, accessibility gaps), assign controls, and monitor residual risk over time; unlike tool-specific checklists, management-system standards make responsibilities auditable across programs and vendors. Second, before procurement or scale-up, conduct and publish algorithmic impact assessments tied to the stakes of the decision and include educator/student contestation routes and documentation of model scope, data provenance, and known failure modes (e.g., prompt sensitivity, subgroup performance). Third, make explainability operational by ensuring that educator dashboards expose why a recommendation, score, or hint was issued and allow overrides, and that students can see criteria, evidence, and next-step rationales. These steps are consistent with recent national and multilateral guidance that centers human oversight and professional learning as prerequisites for credible adoption (OECD, 2024).
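The second and third steps imply a concrete routing rule: consequential or uncertain automated scores go to a human, and the rationale travels with the score. A minimal sketch (the thresholds, field names, and pass mark below are hypothetical illustrations, not drawn from any cited framework):

```python
from dataclasses import dataclass

@dataclass
class ScoredSubmission:
    student_id: str
    ai_score: float        # 0-100, from an automated rater
    ai_confidence: float   # 0-1, the rater's self-reported confidence
    rationale: str         # surfaced on the educator dashboard
    needs_human_review: bool = False

def route(sub: ScoredSubmission, pass_mark: float = 60.0,
          band: float = 5.0, min_confidence: float = 0.8) -> ScoredSubmission:
    """Flag scores near a consequential boundary, or issued with low
    confidence, for human moderation before they are released."""
    near_boundary = abs(sub.ai_score - pass_mark) <= band
    low_confidence = sub.ai_confidence < min_confidence
    sub.needs_human_review = near_boundary or low_confidence
    return sub
```

The same record structure supplies the explainability hook: because the rationale and review flag are stored alongside the score, an instructor override is always possible and auditable.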

    Equity demands special attention where speech and identity meet assessment. Contemporary analyses and audits show that accent remains conceptually mishandled in parts of the ASR literature and that system performance can vary by dialect and speaker group, with clear implications for pronunciation feedback and speaking assessment (Prinos et al., 2024). Institutions should mandate accent-aware evaluation (publish subgroup metrics), intelligibility-aligned rubrics, and independent audits whenever ASR/CAPT tools inform grading or placement. In parallel, learning analytics research continues to surface student concerns about data use and surveillance; program-level policies should implement strict purpose limitation, retention controls, and options to opt out of nonessential tracking while preserving pedagogical usefulness (Karimov et al., 2024).
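Purpose limitation and retention controls of this kind can be enforced mechanically at the record level. A sketch (the field names, retention window, and opt-out semantics are hypothetical assumptions for illustration):

```python
from datetime import datetime, timedelta, timezone

# Fields judged pedagogically necessary; everything else is nonessential telemetry.
ESSENTIAL_FIELDS = {"student_id", "task_id", "score", "submitted_at"}
RETENTION = timedelta(days=180)  # hypothetical program-level retention window

def minimize(record, opted_out):
    """Drop expired records, keep essential fields, and retain nonessential
    telemetry only when the student has not opted out of tracking."""
    if datetime.now(timezone.utc) - record["submitted_at"] > RETENTION:
        return None  # past retention: delete outright
    kept = {k: v for k, v in record.items() if k in ESSENTIAL_FIELDS}
    if not opted_out:
        kept.update({k: record[k] for k in record.keys() - ESSENTIAL_FIELDS})
    return kept
```

Running every analytics record through a filter like this preserves pedagogical usefulness (scores and tasks survive) while making opt-out and retention limits defaults rather than afterthoughts.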

    Finally, durable capacity determines which future materializes. Education systems reporting the healthiest outcomes invest in teacher development and workload redesign alongside technology, treating educators as co-designers of prompts, tasks, and assessment workflows rather than downstream users (OECD, 2024). Concretely, institutions can tie any enterprise AI purchase to ring-fenced professional-learning time, require human-in/on-the-loop protocols for consequential assessment, and stress-test assessment regimes against current student practices (e.g., widespread gen-AI use) to keep validity, fairness, and curriculum aims aligned (Weale, 2025). As Inayatullah (2023) reminds us, the challenge is not only to manage the tools of AI but to reshape the meanings we attach to them, that is, how educators, learners, and institutions narrate intelligence, authorship, and trust in the age of machine participation. These measures make the Discipline and Transformation futures practicable, preventing the slippage into Collapse where over-automation, opacity, and inequity erode trust.

    Acknowledgements

    This study was supported by the Phonetic Lab of the University of Nicosia.

    References

    Deng, Q., Fu, C., Ban, M., & Iio, T. (2024). A systematic review on robot-assisted language learning for adults. Frontiers in Psychology, 15, 1471370. http://dx.doi.org/10.3389/fpsyg.2024.1471370

    Díaz-Domínguez, A. (2020). How futures studies and foresight could address ethical dilemmas of machine learning and artificial intelligence. World Futures Review, 12(2), 169–180. http://dx.doi.org/10.1177/1946756719894602

    Dizon, G., & Gayed, J. M. (2024). A systematic review of Grammarly in L2 English writing contexts. Cogent Education, 11(1), 2397882. https://doi.org/10.1080/2331186X.2024.2397882

    Dong, L., Fan, K., Li, X., & Zhou, M. (2025). Examining the effect of artificial intelligence on students’ academic performance: A meta-analysis. Computers & Education: Artificial Intelligence, 6, 100260. https://doi.org/10.1016/j.caeai.2025.100400

    Du, J., & Daniel, B. K. (2024). Transforming language education: A systematic review of AI-powered chatbots for English as a foreign language speaking practice. Computers and Education: Artificial Intelligence, 6, 100230. https://doi.org/10.1016/j.caeai.2024.100230

    Engwall, O., & Lopes, J. (2022). Interaction and collaboration in robot-assisted language learning for adults. Computer Assisted Language Learning, 35(5–6), 1273–1309. http://dx.doi.org/10.1080/09588221.2020.1799821

    European Commission. (2022). Ethical guidelines on the use of AI and data in teaching and learning for educators. Publications Office of the European Union. https://op.europa.eu/en/publication-detail/-/publication/d81a0d54-5348-11ed-92ed-01aa75ed71a1

    Farahani, M., & Ghasemi, G. (2024). Artificial intelligence and inequality: Challenges and opportunities. Qeios, 9, 78–99. https://doi.org/10.32388/7HWUZ2

    Farrow, E. (2020). Organisational Artificial Intelligence Future Scenarios: Futurists Insights and Implications for the Organisational Adaptation Approach, Leader and Team. Journal of Futures Studies, 24(3), 1–15. https://doi.org/10.6531/JFS.202003_24(3).0001

Flodén, J. (2025). Grading exams using large language models: A comparison between human and AI grading of exams in higher education using ChatGPT. British Educational Research Journal, 51(1), 201–224. https://doi.org/10.1002/berj.4069

    Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254–280. http://dx.doi.org/10.1016/j.techfore.2016.08.019

Garzón, J., Patiño, E., & Marulanda, C. (2025). Systematic review of artificial intelligence in education: Trends, benefits, and challenges. Multimodal Technologies and Interaction, 9(8), 84. https://doi.org/10.3390/mti9080084

Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2021). Datasheets for datasets. Communications of the ACM, 64(12), 86–92. https://doi.org/10.1145/3458723

    Graham, C., & Roll, N. (2024). Evaluating OpenAI’s Whisper ASR: Performance analysis across diverse accents and speaker traits. JASA Express Letters, 4(2). http://dx.doi.org/10.1121/10.0024876

    Han, D., & Ryu, J. (2025). Exploring Embodied Conversational Agents in Extended Reality for Language Learning. Immersive Learning Research-Practitioner, 38–42. https://doi.org/10.56198/8r82g517

    Haoming, L., & Wei, W. (2024). A systematic review on vocabulary learning in AR and VR gamification context. Computers & Education: X Reality, 4, 100057. https://doi.org/10.1016/j.cexr.2024.100057

    Inayatullah, S. (2001). The rights of robot: Inclusion, courts and unexpected futures. Journal of Future Studies, 6(2), 93–102. https://jfsdigital.org/wp-content/uploads/2014/06/062-A08.pdf

    Inayatullah, S. (2008). Six pillars: Futures thinking for transforming. Foresight, 10(1), 4–21. http://doi.org/10.1108/14636680810855991

    Inayatullah, S. (2023, January 23). Deconstructing ChatGPT | Part 1. Journal of Futures Studies. https://jfsdigital.org/2023/01/23/deconstructing-chatgpt-part-1/

    ISO/IEC. (2023). ISO/IEC 42001:2023—Artificial intelligence—Management system. International Organization for Standardization.

Johnston, S.-K. (2023). Privacy considerations of using social robots in education: Policy recommendations for learning environments. United Nations, Department of Economic and Social Affairs, Sustainable Development.

    Karatay, Y., & Karatay, L. (2024). Automated writing evaluation use in second language classrooms: A research synthesis. System, 123, 103332. https://doi.org/10.1016/j.system.2024.103332

    Karimov, A., Saarlela, M., Aliyev, S., & Baker, R. (2024). Ethical considerations and student perceptions of engagement data in learning analytics. In HICSS-57 Proceedings. Penn Center for Learning Analytics.

    Kheir, Y. E., Ali, A., & Chowdhury, S. A. (2023). Automatic Pronunciation Assessment–A Review. arXiv preprint arXiv:2310.13974. https://doi.org/10.48550/arXiv.2310.13974

    Koenecke, A., Nam, A., Lake, E., Nudell, J., et al. (2020). Racial disparities in automated speech recognition. Proceedings of the National Academy of Sciences, 117(14), 7684–7689.

    Lee, S., Choe, H., Zou, D., & Jeon, J. (2025). Generative AI (GenAI) in the language classroom: A systematic review. Interactive Learning Environments, 1–25. https://doi.org/10.1080/10494820.2025.2498537

    Levis, J. M. (2018). Intelligibility, oral communication, and the teaching of pronunciation. Cambridge University Press.

    Li, B., Tan, Y. L., Wang, C., & Lowell, V. (2025). Two years of innovation: A systematic review of empirical generative AI research in language learning and teaching. Computers and Education: Artificial Intelligence, 100445. https://doi.org/10.1016/j.caeai.2025.100445

    Liew, P. Y., & Tan, I. K. (2024). On automated essay grading using large language models. In Proceedings of the 2024 8th International Conference on Computer Science and Artificial Intelligence (pp. 204–211).

    Linderoth, C., Hultén, M., & Stenliden, L. (2024). Competing visions of artificial intelligence in education—A heuristic analysis on sociotechnical imaginaries and problematizations in policy guidelines. Policy Futures in Education, 22(8), 1662–1678. https://doi.org/10.1177/14782103241228900

    Liu, Y., & Ab Rahman, F. (2025). A systematic literature review of research on automatic speech recognition in EFL pronunciation. Cogent Education, 12(1), 2466288. https://doi.org/10.1080/2331186X.2025.2466288

    Llanos-Ruiz, D., Abella-García, V., & Ausín-Villaverde, V. (2025). Virtual Reality in Higher Education: A Systematic Review Aligned with the Sustainable Development Goals. Societies, 15(9), 251. https://doi.org/10.3390/soc15090251

    Lutz, C., Mahlke, S., & Tamò-Larrieux, A. (2021). Do privacy concerns about social robots affect use intentions? Frontiers in Robotics and AI, 8, 627958. https://doi.org/10.3389/frobt.2021.627958

    McNally, P., & Inayatullah, S. (1988). The rights of robots: Technology, culture and law in the 21st century. Futures, 20(2), 119–136. https://doi.org/10.1016/0016-3287(88)90019-5

    Memarian, B., & Doleck, T. (2024). Human-in-the-loop in artificial intelligence in education: A review and entity-relationship (ER) analysis. Computers in Human Behavior: Artificial Humans, 2(1), 100053. https://doi.org/10.1016/j.chbah.2024.100053

    Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., … & Gebru, T. (2019). Model cards for model reporting. In Proceedings of the conference on fairness, accountability, and transparency (pp. 220–229).

Ngo, T. T. N., Chen, H. H. J., & Lai, K. K. W. (2024). The effectiveness of automatic speech recognition in ESL/EFL pronunciation: A meta-analysis. ReCALL, 36(1), 4–21. https://doi.org/10.1017/S0958344023000113

    NIST. (2023). AI Risk Management Framework (AI RMF 1.0). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.AI.100-1

    OECD. (2023). Digital Education Outlook 2023: Towards an Effective Digital Education Ecosystem. OECD Publishing. https://doi.org/10.1787/c74f03de-en

    OECD. (2024). Education Policy Outlook 2024: Reshaping Teaching into a Thriving Profession from ABCs to AI. OECD Publishing. https://doi.org/10.1787/dd5140e4-en

    Pack, A., Barrett, A., & Escalante, J. (2024). Large language models and automated essay scoring of English language learner writing: Insights into validity and reliability. Computers and Education: Artificial Intelligence, 6, 100234. http://dx.doi.org/10.1016/j.caeai.2024.100234

Pan, M., Kitson, A., Wan, H., & Prpa, M. (2025). ELLMA-T: An embodied LLM-agent for supporting English language learning in social VR. In Proceedings of the 2025 ACM Designing Interactive Systems Conference (pp. 576–594).

    Piquard, A. (2025, February 2). Artificial intelligence: The first measures of the European AI Act regulation take effect. Le Monde. https://www.lemonde.fr/en/pixels/article/2025/02/02/artificial-intelligence-the-first-measures-of-the-european-ai-act-regulation-take-effect_6737691_13.html

    Prinos, K., Patwari, N., & Power, C. A. (2024, June). Speaking of accent: A content analysis of accent misconceptions in ASR research. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (pp. 1245–1254).

    Randall, N. (2019). A survey of robot-assisted language learning (RALL). ACM Transactions on Human-Robot Interaction, 9(1), 1–36. http://dx.doi.org/10.1145/3345506

    Renkema, M., & Tursunbayeva, A. (2024). The future of work of academics in the age of Artificial Intelligence: State-of-the-art and a research roadmap. Futures, 163, 103453. http://dx.doi.org/10.1016/j.futures.2024.103453

    Saikrishna, M. B. (2025). Reimagining educator roles for adaptive learning: insights from the Gioia methodology in a tech-driven future. On the Horizon: The International Journal of Learning Futures, 33(1), 116–125. http://dx.doi.org/10.1108/OTH-11-2024-0076

    Schorr, I., Plecher, D. A., Eichhorn, C., & Klinker, G. (2024). Foreign language learning using augmented reality environments: a systematic review. Frontiers in Virtual Reality, 5, 1288824. http://dx.doi.org/10.3389/frvir.2024.1288824

    Selwyn, N. (2019). Should robots replace teachers?: AI and the future of education. John Wiley & Sons.

    Tang, X., Chen, H., Lin, D., & Li, K. (2024). Harnessing LLMs for multi-dimensional writing assessment: Reliability and alignment with human judgments. Heliyon, 10(14). http://dx.doi.org/10.1016/j.heliyon.2024.e34262

    Tatman, R. (2017, April). Gender and dialect bias in YouTube’s automatic captions. In Proceedings of the first ACL workshop on ethics in natural language processing (pp. 53–59).

    Toffler, A. (1970). Future Shock. Random House.

    Toffler, A. (1980). The Third Wave. William Morrow.

    UNESCO Global Education Monitoring Report Team. (2023). Technology in education: A tool on whose terms? (GEM 2023). UNESCO. https://www.unesco.org/gem-report/en/publication/technology

    UNESCO. (2023). Guidance for generative AI in education and research. UNESCO. https://www.unesco.org/en/articles/guidance-generative-ai-education-and-research

    UNESCO. (2025). AI and the right to education: Protecting learners’ rights (Digital Learning Week 2025). UNESCO. https://www.unesco.org/en/articles/ai-and-education-protecting-rights-learners

    Urmeneta, A., & Romero, M. (2025, August). AI as a Creative Partner: A PRISMA Review of AI’s Role in Supporting Creativity in Education. Frontiers in Education, 10, 1602151. http://dx.doi.org/10.3389/feduc.2025.1602151

US Department of Education. (2023). Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. U.S. Department of Education. https://www.ed.gov/sites/ed/files/documents/ai-report/ai-report.pdf

    Vančová, H. (2023). AI and AI-powered tools for pronunciation training. Journal of Language and Cultural Education, 11(3), 12–24. http://dx.doi.org/10.2478/jolace-2023-0022

    Weale, S. (2025, February 25). UK universities warned to ‘stress-test’ assessments as 92% of students use AI. The Guardian. https://www.theguardian.com/education/2025/feb/26/uk-universities-warned-to-stress-test-assessments-as-92-of-students-use-ai

    Weatherly, C. A. (2025). Friend or Foe? Enhancing Creativity in K-12 Education Through AI. Thinking Skills and Creativity, 102030. http://dx.doi.org/10.1016/j.tsc.2025.102030

    Weng, Z. & Fu, Y. (2025). Generative AI in language education: Bridging divide and fostering inclusivity. International Journal of Technology in Education (IJTE), 8(2), 395–420. http://dx.doi.org/10.46328/ijte.1056

    Wurah, A. (2017). We Hold These Truths to Be Self-Evident, That All Robots Are Created Equal. Journal of Futures Studies, 22(2), 61–74. http://doi.org/10.6531/JFS.2017.22(2).A61

    Yang, F. C., Acevedo, P., Guo, S., Choi, M., & Mousas, C. (2025). Embodied Conversational Agents in Extended Reality: A Systematic Review. IEEE Access, 13, 79085. http://dx.doi.org/10.1109/ACCESS.2025.3566698

    Zhao, J., Chapman, E., & Sabet, P. G. (2024). Generative AI and educational assessments: A systematic review. Education Research and Perspectives, 51, 124–155. http://dx.doi.org/10.70953/ERPv51.2412006

    The Journal of Futures Studies,

    Graduate Institute of Futures Studies

    Tamkang University

    Taipei, Taiwan 251

    Tel: 886 2-2621-5656 ext. 3001

    Fax: 886 2-2629-6440

    ISSN 1027-6084
