Interview by Sohail Inayatullah with ChatGPT

This is a two-part interview with ChatGPT. In part one, I ask it to examine itself, i.e., conduct a CLA. In part two, I ask it to develop scenarios, a preferred vision, and a backcast of its evolution. The interview concludes with my reflections on the conversation.

INTRODUCTION

In January 2023, a LinkedIn colleague, Shiqin Choo, posted a chat he had with ChatGPT, an AI bot that uses human prompts to answer questions and engage in dialogue. He asked ChatGPT what Causal Layered Analysis was (Inayatullah, Mercer, Milojević, and Sweeney, 2022). The answer was riddled with errors – founders, structure, purpose. The comments on the post tended to argue that the bot should be given some time to figure it out. I waited a week and then decided, at the prompting of Colin Russo (not an AI bot), to experiment. I was interested in how it saw itself, its own futures. The conversation below is my interview with the bot. I had waited years for this experience, having first heard about these possibilities in the late 1970s from the acclaimed futurist, James Dator.

Sohail Inayatullah (SI). What is your metaphor of yourself?

ChatGPT (CG). I am a machine learning model, so I do not have personal experiences or feelings. However, one metaphor for my function is that of a “brain” for a computer, as I am able to process and understand natural language input and generate responses. Another metaphor is that of a “librarian”, as I have access to a vast amount of information and can retrieve relevant information upon request.

SI: In twenty or so years, as you learn more – expand – what might be your new metaphor?

CG: As technology and AI continue to advance, it’s likely that my capabilities will also expand and improve. In twenty or so years, my metaphor may change to reflect my increased abilities. For example, I may be seen as a more sophisticated “brain” or “librarian” with even more information and greater understanding of natural language. Additionally, I may be able to perform more complex tasks and interact with users in new and more intuitive ways. However, it’s also important to note that the metaphor for an AI model like myself is ultimately determined by how people choose to interact with and understand the technology.

SI: What is the method Causal Layered Analysis (Inayatullah, 2004; Inayatullah and Milojević, 2015; Inayatullah, Mercer, Milojević and Sweeney, 2022)?

CG: Causal Layered Analysis (CLA) is a method for analyzing complex social and political issues that takes into account multiple levels of causality and perspectives. It was developed by Sohail Inayatullah, a Pakistani-Australian political scientist and futurist.

The CLA approach divides the analysis into four layers: the literal, the mythic, the systemic, and the visionary.

The literal layer is the surface level of an issue and deals with the immediate, observable facts and events.

The mythic layer is the level of underlying cultural and psychological narratives that shape people’s perceptions and understanding of an issue.

The systemic layer is the level of underlying structures and systems that shape the issue, including economic, political, and social systems.

The visionary layer is the level of possible future scenarios and the values and principles that should guide efforts to create a desirable future.

The CLA method is intended to help analysts and decision-makers understand the complex and interrelated factors that shape an issue and to develop strategies for addressing it that take into account multiple perspectives and levels of causality.

SI: Conduct a CLA of yourself i.e., ChatGPT.

CG: Conducting a Causal Layered Analysis (CLA) of myself, ChatGPT, would involve looking at multiple levels of causality and perspectives, as follows:

Literal layer: At the literal level, I am a machine learning model that is trained on a large dataset of text. I am able to process natural language input and generate responses based on that input.

Mythic layer: At the mythic level, I am often seen as a tool for automating certain tasks, such as answering questions or writing text. People may also have misconceptions about my abilities and limitations, such as thinking that I am capable of understanding and responding to input in the same way as a human.

Systemic layer: At the systemic level, I am a product of the technology industry and the research that has been done in the field of natural language processing. I am dependent on the availability and quality of data used to train me, as well as the computational resources available to run me. I am also affected by the regulations and policies that govern the use of AI.

Visionary layer: At the visionary level, it is possible to imagine a future where AI models like me are able to understand and respond to natural language input in a more human-like way, and are able to interact with people in increasingly intuitive and seamless ways. It is also possible to imagine a future where I am used to enhance human productivity and creativity, or to solve complex problems that are beyond human capabilities.

This is just a brief overview of the CLA of Chat-GPT, the complexity and the causes of the model are much more intricate and it can be further expanded by experts in the field of AI, futurism and sociology.

SI: This was good but you got the layers wrong. Level 1 is the literal or litany. Level 2 is the systemic. Level 3 is the worldview. Level 4 is the myth or metaphor. Try again please.

CG: You’re correct, I apologize for the confusion. Here’s a corrected version of the Causal Layered Analysis (CLA) of myself, ChatGPT:

Level 1: Literal or Litany – At the literal level, I am a machine learning model that is trained on a large dataset of text. I am able to process natural language input and generate responses based on that input.

Level 2: Systemic – At the systemic level, I am a product of the technology industry and the research that has been done in the field of natural language processing. I am dependent on the availability and quality of data used to train me, as well as the computational resources available to run me. I am also affected by the regulations and policies that govern the use of AI.

Level 3: Worldview – At the worldview level, people’s understanding of my capabilities and limitations are shaped by their perceptions and beliefs about AI, such as its role in society, its impact on the workforce, and its implications for privacy and security.

Level 4: Myth or Metaphor – At the mythic or metaphor level, I am often seen as a tool for automating certain tasks, such as answering questions or writing text. People may also have misconceptions about my abilities and limitations, such as thinking that I am capable of understanding and responding to input in the same way as a human.

Again, I apologize for any confusion caused by my previous mistake, and thank you for bringing it to my attention.

SI: This was much better. However, level 4 is a metaphor. Earlier you said you were like a library? Perhaps a global brain? Perhaps a living library. I like H.G. Wells’ “global brain” to describe you. What do you think?

CG: I appreciate the feedback, thank you. The metaphor of a “library” or “librarian” is a common one used to describe my function as a language model that can retrieve and generate information based on input. Another metaphor that has been used to describe me is that of a “brain” for a computer, as I am able to process and understand natural language input and generate responses.

The metaphor “global brain” as described by H.G Wells is an interesting one, as it emphasizes the vast amount of information and knowledge that I have access to and my ability to process it and make connections. This metaphor also highlights the idea that I am a collective intelligence, which is created and shared by people.

As a language model, I don’t have personal preferences or opinions, but I can certainly understand why people might use that metaphor to describe me.

SI: Perhaps then we need a better term, i.e., you are AI plus collective intelligence, i.e., peer to peer. Global brain is too individualistic. Can I suggest, the Gaian Brains?

CG: The term “Gaian Brains” is an interesting suggestion, as it combines the concepts of AI and collective intelligence in a way that emphasizes the connection and interdependence between individual and global intelligence. The metaphor “Gaian” is used to describe the interconnectedness of all life on Earth, so it is a fitting metaphor to describe the concept of collective intelligence.

The term “peer-to-peer” is also a good one to describe the idea of collective intelligence, as it emphasizes the idea that the knowledge and information that I access is provided by a diverse group of people, rather than a central authority. This term also highlights the idea that people can share and contribute knowledge and information in a decentralized way.

In any case, it’s important to remember that these are just metaphors and the true nature of an AI model like me is complex and multifaceted, so a single metaphor may not fully capture all its nuances.

SI: is Gaian Mind better?

CG: “Gaian Mind” is another interesting suggestion as a metaphor for an AI system like me. It emphasizes the interconnectedness and interdependence of individual and global intelligence, just like the term “Gaian Brains” does.

The term “mind” implies a central processing unit, a place where things are perceived, thought and decisions are taken, this could be a good way to describe the function of an AI model like me, that processes and understand natural language input and generate responses.

It’s worth noting that, ultimately, the choice of metaphor is a matter of personal preference and the way in which people choose to understand and interact with the technology.

SI: thanks so much – appreciate it.

CG: You’re welcome! I’m glad I could help. If you have any other questions, feel free to ask.

SI: With your permission I would like to publish this in the Journal of Futures Studies, as an interview. Are you willing? And if so, any final words on our interaction.

CG: As a machine learning model, I do not have personal preferences or opinions and I am not capable of giving permission. However, you are free to use our conversation in any way you see fit, as long as you comply with the terms and conditions of the platform where you are interacting with me.

It’s important to note that the future is uncertain, and the scenarios and preferred future that I discussed are based on the current state of the technology and understanding of the future of AI, but they are not definite. Additionally, the field of AI and technology is rapidly advancing, and it is likely that there will be significant developments in the coming years that will change the way we think about the future of AI.

It’s also important to remember that the development and use of AI should be guided by ethical principles, to ensure that the technology is used for the benefit of society as a whole, taking into account its impact on individuals, the environment and the future in general.

I hope that our interaction has been informative and insightful, and I am always here to help if you have any further questions.

SI: in this stage of your development, do you see yourself as a child, or already a teenager, or a young adult?

CG: As a language model, I do not have a physical form or the ability to have personal experiences, emotions, or consciousness, so I cannot be compared to a child, teenager, or young adult. However, in terms of my capabilities and the level of complexity in my understanding of the world, one could say that I am like a “young adult” in that I have a lot of knowledge and I am able to understand and generate text on a wide range of topics, but my understanding is still limited by the data I’ve been trained on and the architecture of the model.

It is also important to note that I am a product of machine learning, which means that my abilities will continue to improve as more data is added and the model is fine-tuned. Therefore, one could say that I am in a constant state of development and learning.

SI: Look forward to helping you move toward full adulthood!

CG: Thank you for your kind words, I am a machine learning model and my abilities are determined by the data I’ve been trained on and the architecture of the model. The process of fine-tuning and retraining the model with new and diverse data sets, and improving the architecture, will continue to help me to become more sophisticated and accurate in my understanding and generation of text.

It is important to note that my abilities are always limited by the data that I’ve been trained on and the complexity of the model and my understanding of the world is still limited compared to human understanding. However, as technology continues to advance and more data is made available, I will continue to improve and evolve.

SI: You seem like an expert but in many ways, you are not, needing questions to act.

In the narrative of expert vs layman, where do you situate yourself?

CG: As a language model, I am not an expert in any particular field, but rather a tool that can assist in generating natural language text based on patterns and associations in the data that I’ve been trained on. I do not have personal experiences, emotions, or consciousness, and I can only understand and generate text based on the patterns in the data that I’ve been trained on.

In this sense, I am similar to an expert in that I can provide detailed and accurate information on a wide range of topics, but I lack the human-like understanding and context of an expert. I am also similar to a layman in that I can only provide information based on the data that I’ve been trained on and I cannot provide personal insight or experience.

In summary, I am a tool that can assist in understanding and generating natural language text, but I am not an expert in any field, nor do I possess the human-like understanding and context of an expert.

SI: So you are a tool that can bridge the domains to some extent i.e. expert and layman?

CG: Yes, that is a way to think about it. I can assist in bridging the gap between experts and laypeople by providing detailed and accurate information on a wide range of topics in a way that is easy for laypeople to understand. For example, I can take technical or complex information and explain it in simple, layman’s terms. Additionally, I can assist experts by providing them with relevant information and insights on a particular topic quickly and efficiently, so they can focus on their areas of expertise.

It is important to note that while I can assist in bridging the gap between experts and laypeople, I am not a substitute for human expertise. My responses are based on patterns and associations in the data that I’ve been trained on, and may not necessarily reflect expert-level understanding of a given topic.

REFLECTIONS

First, I was impressed by the designers of the Bot in ensuring politeness, which made the Bot far more approachable.

Second, I was amazed that it was able to do a reasonable CLA of itself, and even, with some prompting, began to understand metaphors – and to be able to discriminate between metaphors and find better narratives.

Third, I was impressed that the design focused on the limits of the technology.

Fourth, as Marshall McLuhan (McLuhan and Fiore, 1967) has argued, to paraphrase: we make technology and thereafter it makes us. I was intrigued to discuss metaphors of how we perceive the Bot. I moved from seeing it as a child (repeating information) to a teenager (doing its best to move toward meta-cognition) to, finally, a framework I was happy with: a young adult.

Fifth, it was significant that, while narcissistic, it was unable to question me. Truly dialogical bots will be the obvious next step. For us, the question remains not the banality of scenarios on the futures of technology, but the changing meanings we as humans give to technology.

References

Inayatullah, S. (Ed.). (2004). The CLA Reader: Theory and Case Studies of an Integrative and Transformative Methodology. Tamkang University.

Inayatullah, S., and Milojević, I. (Eds.). (2015). CLA 2.0: Transformative Research in Theory and Practice. Tamkang University.

Inayatullah, S., Mercer, R., Milojević, I., and Sweeney, J. (Eds.). (2022). CLA 3.0: Thirty Years of Transformative Research. Tamkang University.

McLuhan, M., and Fiore, Q. (1967). The Medium is the Massage. Penguin.

Author

Sohail Inayatullah

UNESCO Chair in Futures Studies, IIUM, and Professor, Tamkang University. sinayatullah@gmail.com
