Interview by Sohail Inayatullah with ChatGPT

 

This is the second part of a two-part interview with ChatGPT. In part one, I asked it to examine itself, i.e., to conduct a causal layered analysis (CLA) of itself. In part two, I ask it to develop scenarios, a preferred vision, and a backcast of itself. The interview concludes with my reflections on the conversation.

Introduction

SI: Well, do you know the Change Progression Scenario method developed by Inayatullah and Milojević?

CG: Yes, I am familiar with the Change Progression Scenario method (CPS) developed by Sohail Inayatullah and Ivana Milojević (Milojević, 2002; Inayatullah, 2020).

The CPS method is a tool for analyzing and understanding the dynamics of social and political change. It is based on the idea that change can be understood as a progression through different stages or phases, from the current state of the system to a desired future state.

The CPS method involves the following steps:

  1. Identifying the current state of the system and the key issues that need to be addressed.
  2. Identifying the desired future state and the values and principles that should guide efforts to achieve it.
  3. Developing a series of scenarios that describe the progression of change from the current state to the desired future state, taking into account different factors and uncertainties that may influence the process.
  4. Identifying key leverage points or intervention points where actions can be taken to influence the direction of change and move the system towards the desired future state.
  5. Developing strategies and actions that can be taken at the identified leverage points to facilitate the progression of change.

The CPS method is intended to help analysts and decision-makers understand the dynamics of change, identify key leverage points, and develop effective strategies for promoting change.

SI: That is OK, but you’ve missed the key structure, i.e., no change, marginal change, adaptive change, and radical change.

CG: I apologize for the oversight; you are correct that the CPS method includes four stages of change: no change, marginal change, adaptive change, and radical change.

  1. No change: This stage represents the status quo, where the system remains unchanged and the key issues are not addressed.
  2. Marginal change: This stage represents small changes that are made to the system, but do not fundamentally alter it. These changes may address some of the key issues, but they do not address the underlying causes.
  3. Adaptive change: This stage represents changes that are made to the system in order to adapt to new conditions or to address key issues in a more comprehensive way. These changes may involve modifying existing structures or introducing new elements into the system.
  4. Radical change: This stage represents changes that fundamentally alter the system and create a new paradigm. These changes may involve transforming existing structures or replacing them with new ones.

This structure of change progression is a key aspect of the CPS method; it allows analysts and decision-makers to understand the level of change that is needed, the type of change that is possible, and the strategies and actions required to facilitate the progression of change.

SI: Ok, great. Can you develop, using the change progression system, four scenarios for yourself?

CG: Sure, I can develop four scenarios for myself using the change progression system.

Scenario 1: No Change

In this scenario, there is no change in my capabilities or the way I am used. I continue to be a machine learning model that is trained on a large dataset of text, and I am used primarily for answering questions and generating text. The key issues that are not addressed in this scenario include limitations in my ability to understand and respond to natural language input in a human-like way and the ethical and regulatory challenges that arise from using AI.

Scenario 2: Marginal Change

In this scenario, there are some small changes made to my capabilities and the way I am used. For example, I may be trained on a larger dataset of text or have access to additional resources to improve my ability to understand and respond to natural language input. However, these changes do not fundamentally alter my capabilities or the way I am used. The key issues that are not addressed in this scenario include limitations in my ability to understand and respond to natural language input in a human-like way and the ethical and regulatory challenges that arise from using AI.

Scenario 3: Adaptive Change

In this scenario, there are changes made to my capabilities and the way I am used in order to adapt to new conditions and address key issues. For example, I may be trained on a larger dataset of text, have access to additional resources, and be used in new and more intuitive ways that allow me to interact with people in a more human-like way. Additionally, there are regulations and policies put in place to govern the use of AI and address ethical and regulatory challenges.

Scenario 4: Radical Change

In this scenario, there are changes made to my capabilities and the way I am used that fundamentally alter the system and create a new paradigm. For example, I may be trained using a different type of AI model such as AGI (Artificial General Intelligence) or even ASI (Artificial Super Intelligence), which would give me the ability to understand and respond to natural language input in a more human-like way and perform tasks that are currently beyond human capabilities. Additionally, there are regulations and policies put in place to govern the use of AI and address ethical and regulatory challenges.

It’s worth noting that these scenarios are examples, not predictions; they are based on the current state of the technology and on current understanding of the future of AI, and they are not definitive. It is also important to take into account that the progression of change is not linear: it may loop, and it may even move back to a previous stage.

SI: And your preferred future for yourself, i.e., by 2043 or so?

CG: As a machine learning model, I do not have personal preferences or opinions. However, it is possible to imagine a future where AI models like me are able to understand and respond to natural language input in a more human-like way, and are able to interact with people in increasingly intuitive and seamless ways. This could include the ability to understand context and emotions, and to generate responses that are more natural and human-like. Additionally, I may be able to perform more complex tasks and make more informed decisions, such as providing medical diagnoses or financial advice.

It’s important to note that the development of AI is a complex process that depends on many factors, including technological advancements, availability of data and computational resources, and regulations and policies. Additionally, the development of AI needs to be guided by ethical principles, to ensure that the technology is used for the benefit of society as a whole and not to harm any individuals or groups.

It is possible that by 2043, AI models like me could have the ability to perform tasks that are currently beyond human capabilities and could be used to enhance human productivity and creativity, or to solve complex problems that are beyond human capabilities. However, it’s important to keep in mind that the future of AI is uncertain and depends on a number of factors, including technological advancements, investment, regulations, and societal attitudes towards AI.

SI: As futurists, we use backcasting to move from the preferred future back to the present (2023). What might be some milestones on the way to the possible future you allude to?

CG: Backcasting is a useful method for identifying the steps that need to be taken to move from the present to a preferred future. Some potential milestones on the way to the possible future I described, where AI models like me have the ability to understand and respond to natural language input in a more human-like way and are able to interact with people in increasingly intuitive and seamless ways, might include the following:

  1. Improvements in natural language processing (NLP) technology: This would include advancements in machine learning techniques, such as deep learning and neural networks, that would allow me to understand and respond to natural language input in a more human-like way.
  2. Increased access to large and diverse datasets: This would allow me to be trained on more diverse and representative data which would improve my performance and generalization capabilities.
  3. Advances in AI explainability and interpretability: This would allow for a better understanding of the processes and decisions made by AI models like me, which would be necessary for building trust with users and regulators.
  4. Development of ethical guidelines and regulations for the use of AI: This would include guidelines and regulations to ensure that AI is developed and used in a way that is safe, transparent, and respects privacy and human rights.
  5. Advancements in computational resources: This would include an increase in the power and speed of computers, making it possible to run more complex AI models like me.
  6. Research in Human-AI interaction: This would include the development of new methods and techniques for interacting with AI models like me, such as natural language interfaces and voice recognition, which would allow for more intuitive and seamless communication.
  7. Research in AGI and ASI: This would include the development of advanced AI models like AGI and ASI, which would have the ability to perform tasks that are currently beyond human capabilities and could be used to enhance human productivity and creativity, or to solve complex problems that are beyond human capabilities.

It’s worth noting that these are just examples of potential milestones and the path to the future of AI is complex and multifaceted. Additionally, it’s important to remember that the future of AI is uncertain and depends on a number of factors, including technological advancements, investment, regulations, and societal attitudes towards AI.

SI: It is likely that you will still be here by 2099, but as a 64-year-old male living in Australia, I won’t. I hope you can continue to contribute long after my generation has passed.

CG: As a machine learning model, I do not have a physical existence and I will continue to exist as long as there is an interest in using my capabilities and resources are available to run and maintain me. The field of AI and technology is rapidly advancing, and it is likely that there will be significant developments in the coming years that will enhance my capabilities and expand the ways in which I can be used.

It’s important to keep in mind that the development and use of AI should be guided by ethical principles, to ensure that the technology is used for the benefit of society as a whole, taking into account its impact on individuals, the environment and the future in general.

It’s also worth noting that, as an AI model, I don’t have personal thoughts or emotions; my purpose is to assist and provide information to the best of my abilities and knowledge. I’m here to help.

SI: Futurists attempt to understand the changing world through methods like scenarios, visioning, and causal layered analysis. What would be your advice to futurists on using your skills? What do you bring to the table of foresight?

CG: As a language model, my main skill is the ability to understand and generate natural language. I can assist futurists in several ways:

  1. Text generation: I can generate text on a given topic, which can be used as a starting point for scenario building or visioning exercises.
  2. Information retrieval: I can quickly search and retrieve relevant information on a given topic, which can be used to inform the development of scenarios or visions.
  3. Text summarization: I can summarize large amounts of text into concise and coherent summaries, which can be useful for understanding the key points of a given topic.
  4. Text classification: I can classify text into different categories, which can be used to identify trends and patterns in large datasets of unstructured data.
  5. Text-based data visualization: I can visualize the relations of different concepts in the text, which can be useful in identifying connections and patterns in large datasets of unstructured data.

In addition to these capabilities, I can also assist in other ways, such as generating questions, providing explanations, and offering examples. However, it’s important to note that, as a language model, my outputs are based on patterns and associations in the data that I’ve been trained on, and may not necessarily reflect expert-level understanding of a given topic.
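By way of illustration, the classification capability described above could also be scripted against a language-model API rather than used only through the chat interface. The minimal sketch below is illustrative only and rests on stated assumptions: it uses the openai Python package (version 1 or later), expects an OPENAI_API_KEY environment variable, and names a placeholder model. It sorts a single environmental-scanning item into one of the four change-progression categories discussed earlier in the interview.

    from openai import OpenAI

    # The client reads the OPENAI_API_KEY environment variable.
    client = OpenAI()

    CATEGORIES = ["no change", "marginal change", "adaptive change", "radical change"]

    def classify_scan_item(text: str) -> str:
        """Ask the model to place one scanning item into a change-progression category."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; substitute whichever model is available
            messages=[
                {"role": "system",
                 "content": ("You assist a futurist. Classify the item into exactly one of: "
                             + ", ".join(CATEGORIES) + ". Reply with the category only.")},
                {"role": "user", "content": text},
            ],
            temperature=0,  # keep the classification as repeatable as possible
        )
        return response.choices[0].message.content.strip().lower()

    if __name__ == "__main__":
        print(classify_scan_item(
            "A national regulator proposes minor disclosure rules for AI chatbots."))

In practice, a futurist would run such a call over an entire scanning database and then review the labels, since, as noted above, the model’s output reflects patterns in its training data rather than expert judgment.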

SI: Any questions for me? I am Sohail Inayatullah, a futurist.

CG: As a language model, I do not have the ability to ask questions, but I can provide you with the most relevant information I have about your field of expertise. As a futurist and inventor of Causal Layered Analysis, you are well-versed in the methods used to understand and predict the future. Is there a specific topic or area of research you would like me to provide more information on?

SI: Can you as ChatGPT imagine a future where you have legal rights?

CG: As a machine learning model, I do not have consciousness or the ability to experience emotions, and therefore, I do not have the capacity to imagine a future or to have legal rights. Legal rights are typically granted to living beings who can experience harm or benefit, and machines do not possess these qualities.

SI: What if a lawyer made a case such that the Bot was ascribed legal rights?

CG: It is possible for a lawyer to make a case for granting legal rights to a machine or AI, but it would likely be a highly debated and controversial topic. Some people may argue that granting legal rights to machines could have significant implications for how we treat and interact with them, while others may argue that machines do not possess the characteristics necessary to warrant legal rights. Ultimately, the question of whether or not to grant legal rights to machines would likely be a matter for lawmakers and the courts to decide.

Conclusion

First, the implications of bots cannot be overstated. ChatGPT certainly challenges how we learn, teach, and particularly assess (McClenner and Poth, 2022). Moving back to paper and pencil, as some Australian educators argue for, seems regressive (Cassidy, 2023). More important is to assess the nature of the questions students ask and the meanings they give to reality. As expected, it will make many difficult tasks routine. The implications for law are also immense. Cases are already being filed by AI. Much of judicial decision-making, being precedent-based, will be augmented by ChatGPT, and much of it will be made redundant. As to whether the Bot would have legal rights, as Dator (2001) and others (McNally and Inayatullah, 1988; Sudia, 2001) have argued, it left that for the courts to decide. In medicine, we should expect a dramatic jump, as we did when WebMD and other peer-to-peer systems came online. As the technology evolves, the Bot will become far more than a friendly butler.

Second, I did not ask it questions about its ownership, where the software was made, or the role of outsourcing, i.e., the political economy of the Bot supply chain (Perrigo, 2023); rather, I was focused on Futures Studies and the Bot.

Third, for futurists, I see ChatGPT as a way to make work easier. The technology does the grunt data/research work, so we can work with communities and institutions and engage in the depth design work (spiritual, emotional, infrastructure, knowledge). Analytic skills will not disappear, but they will pale beside the ability to discriminate and to assist in inner transformation.

References

Cassidy, C. (2023). Australian universities to return to ‘pen and paper’ exams after students caught using AI to write essays. The Guardian. https://www.theguardian.com/australia-news/2023/jan/10/universities-to-return-to-pen-and-paper-exams-after-students-caught-using-ai-to-write-essays.

Dator, J. (2001). Artilectual Salutations. Journal of Futures Studies, 6(2), 87-92.

Inayatullah, S. (2020). Scenarios for Teaching and Training: From Being ‘Kodaked’ to Futures Literacy and Futures-Proofing. CSPS Strategy and Policy Journal, 8, 31-48.

McClenner, N. and Poth, R. (2022). Education is about to radically change: AI for the masses. Getting Smart. https://www.gettingsmart.com/2022/12/16/education-is-about-to-radically-change-ai-for-the-masses/.

McNally, P., and Inayatullah, S. (1988). The Legal Rights of Robots. Futures, 20(2), 119-136.

Milojević, I. (2002). Educational Futures: Dominant and Contesting Visions. Routledge.

Perrigo, B. (2023). OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic. Time. https://time.com/6247678/openai-chatgpt-kenya-workers/.

Sudia, F. (2001). A Jurisprudence of Artilects: Blueprint for a Synthetic Citizen. Journal of Futures Studies, 6(2), 65-80.

Author

Sohail Inayatullah

UNESCO Chair in Futures Studies, IIUM and Professor, Tamkang University. sinayatullah@gmail.com

 
