Journal of Futures Studies, June 2020, 24(4): 25–34
From Data Serfdom to Data Ownership: An Alternative Futures View of Personal Data as Property Rights
Michael Nycyk, Department of Internet Studies, Curtin University of Technology, Brisbane and Perth, Kent Street, Bentley, Western Australia, Australia. michael.nycyk@gmail.com
Abstract
Internet users produce vast amounts of personal data that companies and governments use, not always for the benefit of humankind. Gaspard Koenig, founder of the Génération libre think-tank, calls this mass harvesting and mining data serfdom: it repositions internet users as workers who labour in exchange for the perceived benefits of using various internet applications. Koenig provocatively advocates enacting cultural change in which liberalist ideas of data ownership are woven into legislation and can be drawn upon by internet users to control the use of their data. This article outlines his views, supported by Inayatullah’s (2015, 2009) Causal Layered Analysis futures method, to illustrate the current data serfdom problem and suggest an alternative future solution. It concludes with a discussion of possible solutions that may assist in preventing the darker new realities that data serfdom, and the loss of data as a property right, may cause.
Keywords: Causal Layered Analysis (CLA), Data Ownership and Control, Data Mining, Liberalism, Metaphor, Profiling, Property Rights, Serfdom, Social Algorithms
Introduction
Internet users¹ produce vast volumes of data daily, often without considering the consequences of doing so. Data, in this context, is anything posted online: not just one’s personal details, but also what is posted in comment sections or as photos on social media. Data misuse has contributed to the crisis of the internet in the form of tribalism, echo chambers, privacy breaches and the rise of the discriminatory computer algorithm. The worldwide backlash, aimed especially at corporate Silicon Valley companies, is gaining momentum as public trust in the protection of internet user data erodes (Flew, 2018). Data misuse has also become part of the postnormal times perspective advocated by Sardar (2010), where it creates new realities of control, loss of privacy, and the determination of human life courses and opportunities. At the core of this problem is the concept that internet users work to give others their data and lose all rights over it in a master and serf type relationship.
The liberalist idea of data being owned and used by its producer as a right, while not new, has gained increased support as people feel their data is unprotected and open to misuse. For example, the Cambridge Analytica scandal, in which 87 million Facebook users had their personal and posted content data accessed without consent for specific political use, fuelled concerns about data ownership. It was seen as a dystopian reality of psychological manipulation in which people felt taken advantage of (Berghel, 2018; Isaak & Hanna, 2018). Purtova (2015, 2010) argues that debate over data as property has been occurring since the 1970s, as data has become a system resource forming ecosystems of profiles stored in data warehouses, harvested and used without the internet user’s consent. This means that now, and in the future, internet users remain serfs with little to no control over their data and reap undesirable consequences from this practice.
The stake companies and governments hold in collecting and mining internet user data, often without adequately protecting the user, is fuelled by profit and potentially sinister control techniques such as human profiling. Internet companies and online retailers have economic and cultural stakes in obtaining personal data. Koenig (2018, pp. 13-14), drawing on Benabou and Rochfeld’s (2015) views, gives an overview of this problem using Europe as an example:
They are scattered around the wilderness of the web, owned by nobody, res nullius, but which someone can appropriate and then convert into databases which, by contrast, are then protected by intellectual property rights. This appropriation has a phenomenal value: according to the Boston Consulting Group, the value of personal data in Europe could reach 1,000 billion euros by 2020, or 8% of European GDP.
The European Union General Data Protection Regulation 2018, which incorporates some aspects of data as internet user property, has notably recognized this issue. One principle it espouses is the requirement of data minimization: data collectors must collect only what is adequate, relevant and limited to what is necessary for the purposes for which the data are processed (Asia Pacific Privacy Authorities, 2018). This is important because data often contains information that makes people clearly identifiable and subject to singling out, meaning one person may be treated differently from another (European Union Agency for Fundamental Rights, 2018).
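To make the principle concrete, a minimal sketch of data minimization follows; the purposes, field names and registry below are hypothetical examples for illustration, not any regulator’s specification.

```python
# A minimal sketch of the data-minimization principle: retain only the
# fields that are adequate, relevant, and necessary for a declared purpose.
# The purposes and field names here are invented examples.

NECESSARY_FIELDS = {
    "order_fulfilment": {"name", "shipping_address"},
    "age_verification": {"date_of_birth"},
}

def minimize(raw_record: dict, purpose: str) -> dict:
    """Keep only the fields necessary for the declared purpose,
    discarding everything else before storage."""
    allowed = NECESSARY_FIELDS.get(purpose, set())
    return {k: v for k, v in raw_record.items() if k in allowed}

user_input = {
    "name": "A. User",
    "shipping_address": "1 Example St",
    "date_of_birth": "1990-01-01",
    "browsing_history": ["..."],  # irrelevant to fulfilment; never retained
}
print(minimize(user_input, "order_fulfilment"))
# {'name': 'A. User', 'shipping_address': '1 Example St'}
```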
Measures such as these to control government and company data collection and use are necessary if internet users continue to produce data for them. History offers examples of the negative impacts of data collection. An extreme misuse of data for the purpose of identification occurred in World War Two. During the German occupation of France, a census built on punched cards, on which stating one’s religion was compulsory, was used to find out who was Jewish. This was thwarted by René Carmille, who sabotaged the data on the cards so they gave a different result, saving people from disclosing their religion and being captured by the German army (Bruno, Jany-Catrice, & Touchelay, 2016). The type of misuse illustrated by this historic example continues today in areas such as the microtargeting of voters’ social media through advertising that caters to political filter bubbles (Ward, 2018).
Of particular concern in data collecting and harvesting has been the use of computer algorithms: defined sequential procedures dictating what a computer will do to solve a problem. They draw on many types of user data. For example, commercial platforms like Amazon use collaborative filtering, which collects historical user data, analyzes it and recommends what a customer should purchase, often drawing on exploitable data such as age or gender (Dongsheng et al., 2016). Lazer (2015) argues that algorithms curate data with ideological bias, especially on social media. Bakshy, Messing and Adamic (2015) argued that filter bubbles emerge from clusters of content internet users publish online being used to align them with like-minded individuals. Pariser (2011) likewise argued that user data harvested by many transnational high technology companies has been mined by algorithms to create new realities of tribalism and echo chambers.
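The mechanism can be sketched with a toy item-based collaborative filter of the kind described above; the ratings, user names and item names below are invented, and the code is illustrative only, not any platform’s actual recommender.

```python
# A toy item-based collaborative filtering sketch: score unrated items by
# cosine similarity between item rating vectors. All data here is invented.
import math

# rows: users, columns: items (0 = not rated)
ratings = {
    "alice": {"book": 5, "lamp": 3, "phone": 0},
    "bob":   {"book": 4, "lamp": 0, "phone": 2},
    "carol": {"book": 0, "lamp": 4, "phone": 5},
}
items = ["book", "lamp", "phone"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def item_vector(item):
    return [ratings[u][item] for u in ratings]

def recommend(user):
    """Score each unrated item by its similarity to items the user rated."""
    scores = {}
    for cand in items:
        if ratings[user][cand]:
            continue  # already rated
        scores[cand] = sum(
            cosine(item_vector(cand), item_vector(other)) * ratings[user][other]
            for other in items if ratings[user][other]
        )
    return max(scores, key=scores.get) if scores else None

print(recommend("alice"))  # suggests "phone", inferred from similar items
```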
The concern with data used by algorithmic programs lies in their power to determine a person’s life outcomes by setting decision rules around gender, race or age, preventing certain groups from obtaining loans or being selected for employment (O’Neil, 2016). China has been reported as deploying repressive algorithms that enable mass surveillance through mobile/cell phone applications (apps), categorizing people according to a criterion of trustworthiness and determining their future opportunities, or even incarceration and separation from mainstream society (Human Rights Watch, 2019).
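A deliberately naive sketch can illustrate how such a decision rule operates; the fields and thresholds below are invented purely to show the mechanism O’Neil (2016) warns about, and do not represent any real lender’s model.

```python
# A deliberately naive sketch of how a decision rule can encode an
# exclusionary criterion. Fields and thresholds are invented examples.
def loan_decision(applicant: dict) -> bool:
    # A rule written around (or learned from) historical data can embed
    # age, gender, or postcode as a proxy for risk ...
    if applicant["age"] > 60:          # exclusionary criterion
        return False
    return applicant["income"] > 40_000

# ... so two applicants identical in every relevant respect diverge:
print(loan_decision({"age": 35, "income": 50_000}))  # True
print(loan_decision({"age": 65, "income": 50_000}))  # False: excluded by age
```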
Having established that data serfdom is a crisis, this article moves to a discussion of Koenig’s ideas of data as property. This is followed by a CLA analysis of the current situation and the presentation of an alternative data ownership future. These are represented by the current metaphor of data serfdom and the future metaphor of the internet user as controller and owner of their produced data.
Koenig’s View of Data Serfdom
Koenig’s main argument is that internet users supply data to technology and other companies, and to governments, in serfdom roles, often unaware of doing so. To address this, he argues that data should follow the liberalist view of property rights. Liberalism is defined by Gray (1986) as conferring on all humans the same moral status, denying the relevance to legal or political order of differences in moral worth among human beings. Serfdom is defined as a contractual relationship in which serfs exchanged their labour for land and personal safety from a landowner or lord (O’Rourke, 2017). It was not an exact form of slavery, yet despite the personal rights serfs had, the institutional nature of the arrangement meant economic exploitation of serfs was common (Schöffer, 2008).
It is possible to see how online data generation and gathering mirrors a serf arrangement between the internet user and the company or other party collecting it. Koenig (2018) argues that this is digital feudalism. He compares current internet data arrangements to the High Medieval period in 13th-century France, where serfs had no property rights at all. The new digital lords are the social media, gaming, electronic commerce and other internet platforms, which conduct a lucrative practice, worth an estimated 1 trillion euros in Europe alone, while digital serfs produce the data for them. The technology company may give some rights to the internet user, much as lords gave serfs protection, such as protection from hackers. But this often comes at the expense of having internet user data used without permission for undesirable and sinister purposes.
Koenig (2018) states data ownership rests on three elements of liberalist property rights. These are:
- Usus: I use my data as I wish
- Abusus: I destroy my data as I wish (which is not necessarily the same as the right to be forgotten)
- Fructus: I sell my data for profit
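As a schematic illustration only, these three rights might be expressed as owner-controlled operations over a personal data store; the class and method names below are hypothetical, not a proposed legal or technical standard.

```python
# A schematic sketch of Koenig's three liberalist property rights over
# data, expressed as owner-controlled operations. The class and method
# names are hypothetical illustrations, not a real system's API.
from dataclasses import dataclass, field

@dataclass
class PersonalData:
    owner: str
    records: dict = field(default_factory=dict)

    def use(self, key):            # Usus: I use my data as I wish
        return self.records.get(key)

    def destroy(self, key):        # Abusus: I destroy my data as I wish
        self.records.pop(key, None)

    def sell(self, key, buyer, price):  # Fructus: I sell my data for profit
        # Transfer happens only on the owner's explicit decision.
        return {"seller": self.owner, "buyer": buyer,
                "data": self.records.get(key), "price": price}

mine = PersonalData(owner="internet_user", records={"location": "Perth"})
mine.use("location")                        # read my own data
mine.sell("location", "advertiser", 0.50)   # I decide the terms
mine.destroy("location")                    # and I can erase it
```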
These three principles are provocative because they shift the power base from the company or government to the individual. How this can be enforced effectively in law is unclear. Koenig, though, insists that liberal personal ownership rights are not only possible, but necessary (Koenig, 2018).
Koenig, in arguing this position, states that historian Yuval Harari (2015) is correct in naming this problem ‘dataism’, akin to a new type of religion. Harari (2015) asserts that an online phenomenon or entity is valued by its contribution to data and data processing. The problem, Harari states, is that dataism causes humans to sacrifice free will at the altar of the algorithm (Koenig, 2018). As an example, although not always by choice, Chinese internet users will produce data despite their knowledge of its potential misuse. As the data is no longer the right of the user, the segregation and ranking of people through algorithms, as reported by Human Rights Watch (2019), illustrates Harari’s (2015) cautions about dataism.
Internet users are becoming increasingly aware of the extent of data misuse and can feel helpless as laws across the world are slow to address the issue. Koenig is optimistic that this awareness is growing, as his think-tank’s report states (Landreau, Peliks, Binctin, Pez-Pérard, & Léger, 2018, p. 33):
Consumers have never been so seasoned and aware of business practices. They are familiar with the tools, techniques and business models. At the same time, they are showing themselves to be increasingly suspicious vis-a-vis the consumer system and commercial practices. Resistance is gaining ground, taking the form of avoiding certain companies, giving up certain products as well as more active forms of rebellion such as complaining and boycotts. These movements have always existed, but with the advent of digital technology, they can be easily organized, be quickly visible on a large scale or go viral in just a few clicks.
Internet users are therefore becoming aware that their data is owned and controlled by others. In terms of alternative futures this is a positive development, as demand to limit and eliminate the data mining practices of companies and governments continues to grow.
Koenig’s views also indirectly link to philosopher Jürgen Habermas’s idea of public space. The internet has increased the opportunity for voices to be heard in debates, but those voices are drowned out by popular opinion and a mass of views consisting largely of argument rather than debate. Habermas (1991) defines public space as a virtual or imaginary community which does not necessarily exist in any identifiable form. The current state of the internet, particularly social media, erodes the public sphere as data is employed to shape opinion rather than encourage debate.
The erosion of public space, through the hijacking of debates and the influencing of public opinion, greatly concerns Koenig. Koenig (2018) also argues that a potential benefit of giving people more control over data is that it may stop the filter bubbles that form what Bozdag and van den Hoven (2015) term cyberbalkanization: the online segregation of people into small political groups with similar perspectives and values, displaying a narrow-minded, combative and argumentative approach to those holding contradictory views. This can have a negative effect on online debates, as information is constrained to political parties that espouse one worldview and negate their opponents’ views on issues.
Koenig is concerned about the future potential for, and consequences of, data misuse. How this misuse will unfold can only be speculated upon, given that many societies already place decision-making powers in the hands of algorithms or machine-learning artificial intelligence. Continued forms of human profiling are of particular concern. How data may be used if harvesting and mining are allowed to continue is unclear, and there is no sign that the massive amounts of data produced daily will stop. Koenig argues that paring back where the data goes and making data a liberal property right may be a strategy to solve the serfdom problem. An alternative future for data ownership is now explored in the CLA analyses of the current situation and the future.
Using Causal Layered Analysis to Understand Data Serfdom
This first analysis uncovers the deeper meaning of data serfdom as it currently exists. Before presenting the current and alternative CLA analyses, a brief overview of the method and how it will be used is given.
CLA is a theoretical lens and a methodological tool for deconstructing many views to present a meta-level discussion of a problem (Holdaway, 2016). CLA’s benefits include: it does not claim or argue for a ‘truth’ but uncovers who wins and who loses in a dominant discourse (Hoffman, 2012); it drills down to explore unconscious and unarticulated perspectives influencing the issue (Conway, 2012); it displays at its deepest level a metaphor drawn from the nested layers of worldviews and causes (MacGill, 2015); and it reveals structural and systemic causes from deep layers of analysis (Ariell, 2010; Haigh, 2016; Slaughter, 2004).
The main advantage of this method for the data serfdom problem is that it analyses the problem effectively and, through its layers, offers possible future alternatives. This is important with regard to data as property, as it frames how a different metaphor can shape the future Koenig has envisaged. To illustrate the method’s use as followed in this analysis, Ramos (2015, p. 25) states:
This methodology is post-structural in so far as it seeks to problematise existing future oriented thinking, exploring the assumptions, ideologies, worldviews, epistemes, myths and metaphors that already are embedded in images, statements or policy oriented research about the future. It has developed, however, as a way of opening up spaces for alternative futures.
It is this ability to present alternative futures visually that is a benefit of using CLA. The first analysis, called Data as Commodity, adheres to Inayatullah’s (1998) layers, where the first, Litany, comprising the perceived facts of the issue, is the tip of an iceberg. These are also considered to be symptoms of a problem. Below this surface analysis lie these levels:
- Systematic Cause: Social and other causes that contribute to the metaphor, such as economic, cultural, political and historic factors.
- Worldview: The structures and discourses that support and legitimate the problem, allowing it to exist and grow, and the reasons for this.
- The Metaphor, sometimes called the Myth: The deep, often unconscious dimension of the problem or solution.
This analysis frames and deconstructs the problem, leading to an articulation of alternatives for future practice and policy (Ramos, 2015).
The second analysis, called Data as Owned by the Creator, sets out the alternative future that Koenig and others call for. This is done through analysis and critical examination of the issues involved, rather than by layering another’s thinking over the alternative. It uses the same levels, but to arrive at a new metaphor the headings include extra terms to describe and clearly articulate the alternatives: Litany includes measurements and progress indicators in its title, Systematic shows the structures and policies needed for change, and Worldview includes the cultural aspects of the metaphor. In this context, culture means the beliefs and behaviours that influence individual and societal practice (Kroeber & Kluckhohn, 1952), such as the belief that handing over data is convenient, encouraging internet users to willingly give their data, as serfs, to companies and governments.
Data as Commodity: What Contributes to Data Serfdom?
The internet has become commoditized and monetized as organisations struggle to regain lost revenues and make profits. Data serfdom has assisted this recovery, as data is given freely for use in many areas such as sales campaigns and geographic and transport planning, as well as in the more concerning areas previously discussed by Koenig and others. In this section, the analysis asks “What contributes to data serfdom?” Data is a valuable commodity, but the tension for the internet user is: how much of it belongs to me, and what can I do to decide how it is used?
Table 1 lays out this problem and its causes, demonstrating how data serfdom is arrived at as a metaphor:
Table 1: Data as commodity causal layered analysis 1: Current reality

| Analysis | Factors |
| --- | --- |
| Litany/The Symptoms | Agents of the internet harvest user data for profiling; relentless electronic advertising and sales targeting; lack of transparency over data use; loss of control over personal data with no clearly defined property rights; lack of trust, inequalities and discrimination through data profiling |
| Systematic Causes | Organisations disclose legal rights to data at the point of consent; a data economy driven by human biases, agendas and prejudices; platforms designed to collect personal data; internet users’ willingness or disinterest; affordable, widespread internet access |
| Worldview | Internet users expect tradeoffs with their data, echoing the lord and peasant relationship; companies take willing serfdom for granted; desire for privacy coexists with a nonchalant attitude that privacy no longer exists; fatigue over laws and changing terms of data use |
| Metaphor | Data Serfdom |
This table suggests Data Serfdom as the overarching metaphor that describes the litany, the causes and the worldview. These were obtained from a survey of academic literature and books, with expert interviews and other websites used sparingly.
The litany symptoms fell into two specific areas. First, the agents of the internet who collect data, not always social media companies but many others, will harvest it for profiling. Although profiling existed before the internet, as companies have long analyzed products and consumers, advertising and sales targeting is more relentless electronically than the traditional print and ‘junk’ postal mail of the past. As Isaak and Hanna (2018) argue, and as the Cambridge Analytica data scandal made clear, websites and social media must be transparent about what they will use data for. While it seems intuitive that this would happen, the targeting of people’s political interests using the data they supplied clearly lacked that transparency.
Secondly, people feel a loss of control over their data and, as Purtova (2015) argues, data is a resource of great economic value, yet it is difficult to find clearly defined property rights for the internet user in its ownership. A lack of trust, possible inequalities and discrimination through data profiling, and the creation and maintenance of privileges by certain groups in society all display the symptoms of data serfdom.
However, the analysis reveals some anomalies, such as the internet user bearing some responsibility for this problem. There is evidence in the system that organizations disclose the legal rights to data that the internet user consents to when using a web platform or any aspect of the internet. The problem arises when, as O’Neil (2016) and Berghel (2018) argue, such data is used to create a data economy driven by human biases, agendas and prejudices. Social media platforms are designed for the collection of personal data, so factors such as users’ willingness or disinterest in this issue play a role in the overall problem of data serfdom. Internet users may embrace having their personal details shown to a global audience without concern. This overlaps with the next layer as a worldview.
These system issues suggest that although internet users should have the type of rights Koenig proposes to avoid serfdom, the growth of internet use is a factor in changing cultural views on this issue as access to the internet becomes more affordable and widespread worldwide. As beliefs influence behaviours that create new cultures (or even new realities), internet users have come to expect tradeoffs with their data much like the lord and peasant relationship of medieval times. Technology companies may take it for granted that internet users are willing serfs and that companies and governments can mine their data and profile their customers for any number of reasons. Some may be innocuous; recommendations from eBay or Amazon can be ignored. Yet the internet user may lack alternatives if a product is sold on only one platform, leaving little choice but to give the company their personal demographic data.
This worldview coexists with internet users’ desire for privacy, and the outrage when it is breached, often expressed in data breaches such as the hacking of Ashley Madison’s dating names and email addresses or the Cambridge Analytica scandal. As Beer (2009) suggests, the speed of such incidents creates new realities of power, expressed through algorithms that potentially erode democratization and empowerment. Yet internet users have a role in this worldview because, as the CLA suggests, they may be too fatigued to find out the laws governing their data use and may ignore messages saying that terms of data use have changed.
Social media platforms are seductive, and people may experience pressure to join them, feeling they are missing out by not using them, or may be required by an employer to join. The main cultural worldview in this layer is the nonchalant attitude of internet users that privacy no longer exists, online or offline. Although blame is not being placed on internet users’ new cultural worldviews, data serfdom is clearly the metaphor that the CLA analysis lays out through symptoms, system causes and worldviews. The key now is to use CLA to map out an alternative future that minimizes and prevents data serfdom.
Internet User Control and Ownership of Data: The Alternative Metaphor
In claiming that an alternative to data serfdom is possible, the CLA analysis suggests both what is occurring and what can occur to reach it. Why mapping out such a future is provocative needs a brief explanation. Attempts to implement liberalist policies that move society away from capitalism draw criticism, especially amid the constantly contested ideologies of right and left. The argument here is that it may be provocative to suggest these measures to technology companies or governments who need data. However, some countries have implemented new laws that do restrict the use of internet user data in a serfdom manner.
This section asks “What new metaphor emerges that points to an alternative future for the internet user’s data?” Where this analysis differs slightly from the first is that some of the characteristics are already in motion in some countries, but are of concern to all. Hence the analysis measures progress on the litany symptoms and on the structural and cultural changes that have taken place, and may take place, in the future.
Table 2 lays out this problem and its causes, demonstrating how data ownership is arrived at as a metaphor:
Table 2: Data as owned by creator causal layered analysis 2: Alternative future

| Analysis | Factors |
| --- | --- |
| Litany – As Measurements and Progress | Ongoing public debates and calls to action for data as property; movements and resistance from people, elected representatives and interest groups; practical solutions such as safety commissioners and changes to privacy laws; slow but ongoing progress in limiting the algorithm as a deterministic mechanism |
| Systematic – Structures and Policies | Internet users insisting on change; legal recognition of the internet user as owner of their data; flexible, interoperable legal systems across borders to address breaches, harvesting and mining |
| Culture – Worldview | Proactive internet users who know the laws and mechanisms of redress; associational freedoms and a liquidity of value worldview across geographies; new political contracts holding the state accountable |
| Metaphor | The internet user as controller and owner of their produced data |
The metaphor, and the causes needed to reach this alternative, align with Koenig’s (2018) idea of eliminating data serfdom: Usus (use data as wished), Abusus (destroy data) and Fructus (sell data) are all exercised by the internet user. They become instruments of control for the internet user, not the company or government, and free will is no longer sacrificed at the altar of the algorithm because what data can be used, and for what purpose, is tightly controlled.
The litany measurements and progress are divided into public-pressure-driven action and attempts to change legal systems to further protect internet user data. Looking at the three levels that create the metaphor, the overall assessment is that although, as scholars note, property rights in the liberalist sense are still not successfully enshrined in law, the ethos is. First, public debates and calls to action to have data treated as property are ongoing. Movements and resistance come from the people, their elected representatives, and interest groups such as Electronic Frontiers. Second, despite the disappointment of laws being slow to arrive, practical solutions have been set up to assist internet users, such as the eSafety Commissioner in Australia and changes to privacy laws worldwide. Their effectiveness is debatable. Yet EU countries, when drafting protection laws, insist that any limitation on internet users’ right to data protection must itself be provided for in law and government policy. There is an ongoing battle to address this litany, not just in the EU but worldwide (European Union Agency for Fundamental Rights, 2018).
Therefore, movement towards internet user data ownership is both possible and occurring, and its progress can be measured. One important litany item that is complex to solve is how data as property will influence the reduction of the algorithm as a deterministic mechanism. Progress, as informed by ongoing debates in government and academia, is slow, as companies and governments assert their rights to use algorithms in a deterministic manner. Yet these are crucial to address, because algorithms can self-optimize and be generated through machine learning technologies (Uricchio, 2017). Shin (2019) in particular calls for urgent consideration of further legal control over them as vital to the long-term success of reducing their power and misuse by governments and companies. Destroying one’s data so it is not included in an algorithm is a vexing question, but one crucial to creating an alternative future in which algorithms play no part in determining people’s lives and privileges.
The system structures again align with the litany symptoms: the power of internet users insisting on change, and legal systems recognising the internet user as the owner of their data. People are aware through media reporting that the data harvested from them for others has been misused. Technology companies such as Facebook, Twitter and Google have been slow to act and have often done so only under pressure from the media and governments. The impediment is that data serfdom is borderless, while the policies to stop it are not; legal systems across the world must have the flexibility to interface with each other to resolve data breaches and unwanted data harvesting and mining.
The alternative future metaphor realising Koenig’s vision of no data serfdom lies in changing cultural attitudes. People pressure, supported by strong, enforceable data protection laws, is the systematic change. Internet users will need to be proactive in knowing the laws and the mechanisms that can redress the misuse of their data. What flows from these into cultural change is that the new data-as-property ownership mechanism must encourage associational freedoms, where the internet user can access redress wherever they live. Exchanges across geography create a liquidity of value worldview in which users are confident they can interact with other cultures and countries to redress data issues. New political contracts enshrined in law hold the state accountable for acting on data serfdom and its consequences.
Adopting the alternative futures metaphor in which the internet user is the controller of their produced data is challenging. However, the alternative CLA posits not only that this is possible in some form, but that the move towards eliminating data serfdom has begun. In working towards such a future reality, cultural worldviews must continue to change, and technology companies must continue to be held to account for their mining, profiling, algorithmic and other undesirable practices.
Conclusion
Data serfdom is part of the epistemological crises engulfing the internet, arising when data is used against the intent for which it was created. Koenig’s arguments against data serfdom served as a basis for finding its causes and making metaphoric sense of the current situation. The alternative futures CLA table suggested that people’s desire for greater data control, combined with strong worldwide legal mechanisms to address issues, is moving towards the alternative future of internet users owning their data.
The results of this analysis strongly suggest that Koenig’s liberalist data ownership metaphor is one that may be realised in the future. However, much work will be needed to maintain pressure on governments and companies who use data as a key means of survival and control. Gaining consensus among governments to enshrine internet users’ rights in law, and getting technology and other companies to adhere to it, remains a difficult route to making this shift into policy.
Inayatullah’s CLA futures method is challenging to use, but it successfully creates metaphors that may shift futures towards equality and fairness, avoiding undesirable new realities. The internet will grow further, and big data is now a taken-for-granted by-product of taking part in any form of online communication. This article has shown the factors behind the current and alternative realities of data, and the metaphor of data as internet user property towards which society can move. If society is to move away from data serfdom, the alternative future needs constant people pressure, and supporting legal and other structures, to make sure internet users’ digital property is not exploited or misused.
Notes
1. For the purposes of this article, the term internet user is used throughout to distinguish the user, the person using the internet, from the company, government or other party mining or using the data in any way. The term internet refers to the interconnected network, but it should be recognised that, with ubiquitous computing and its multiple forms, such as social media, email and mobile/cell phone texting, data can be collected in many ways from many online sources.
References
Ariell, A. (2010). Forest futures: A causal layered analysis. Journal of Futures Studies, 14(4), 49-64.
Asia Pacific Privacy Authorities. (2018). EU General Data Protection Regulation – General information document. British Columbia: Office of the Information and Privacy Commissioner of British Columbia. Retrieved from https://www.appaforum.org/wp-content/uploads/2019/10/appa-gdpr-general-information-document.pdf
Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130-1132.
Beer, D. (2009). Power through the algorithm? Participatory web cultures and the technological unconscious. New Media & Society, 11(6), 985-1002. DOI: 10.1177/1461444809336551
Benabou, V., & Rochfeld, J. (2015). Who benefits from the click?: Sharing value in the digital age. Paris: Odile Jacob.
Berghel, H. (2018). Malice domestic: The Cambridge Analytica dystopia. Computer, 51(5), 84-89. http://dx.doi.org/10.1109/MC.2018.2381135
Bozdag, E., & van den Hoven, J. (2015). Breaking the filter bubble: Democracy and design. Ethics and Information Technology, 17(4), 249-265. https://doi.org/10.1007/s10676-015-9380-y
Bruno, I., Jany-Catrice, F., & Touchelay, B. (2016). Introduction. The social sciences of quantification in France: An overview. In I. Bruno, F. Jany-Catrice & B. Touchelay (Eds). The social sciences of quantification: From politics of large numbers to target-driven policies (pp. 1-14). Switzerland: Springer.
Conway, M. (2012). Using causal layered analysis to explore the relationship between academics and administrators in universities. Journal of Futures Studies, 17(2), 37-58.
Dongsheng, L., Chen, C., Lv, Q., Shang, L., Yingying, Z., Lu, T., & Gu, N. (2016). An algorithm for efficient privacy-preserving item-based collaborative filtering. Future Generation Computer Systems, 55, 311-320. https://doi.org/10.1016/j.future.2014.11.003
European Union Agency for Fundamental Rights. (2018). Handbook on European data protection law – 2018 edition. Vienna. Retrieved from https://fra.europa.eu/sites/default/files/fra_uploads/fra-coe-edps-2018-handbook-data-protection_en.pdf
Flew, T. (2018). Platforms on trial. InterMEDIA, 46(2), 24-29.
Gray, J. (1986). Liberalism. Milton Keynes: Open University Press.
Habermas, J. (1991). The public sphere. In C. Mukerji & M. Schudson (Eds). Rethinking popular culture: Contemporary perspectives in cultural studies (pp. 398-404). Los Angeles: University of California Press.
Haigh, M. (2016). Fostering deeper critical inquiry with causal layered analysis. Journal of Geography in Higher Education, 40(2), 164-181. DOI: 10.1080/03098265.2016.1141185
Harari, Y. (2015). Homo deus: A brief history of tomorrow. London: Harvill Secker.
Hoffman, J. (2012). Unpacking images of China using causal layered analysis. Journal of Futures Studies, 16(3), 1-24.
Holdaway, M. (2016). Using CLA to deconstruct current scholarly views on corporate accountability to community. Journal of Futures Studies, 21(1), 19-34. DOI:10.6531/JFS.2016.21(1).A19
Human Rights Watch. (2019). China’s algorithms of repression: Reverse engineering a Xinjiang police mass surveillance app. Retrieved from https://www.hrw.org/sites/default/files/report_pdf/china0519_web5.pdf
Inayatullah, S. (1998). Causal layered analysis: Poststructuralism as method. Futures, 30(8), 815-829. https://doi.org/10.1016/S0016-3287(98)00086-X
Inayatullah, S. (2009). Causal layered analysis: An integrative and transformative theory and method. In J. Glenn & T. Gordon (Eds), Futures research methodology, version 3.0. (pp. 1-51). Washington: The Millennium Project 2009. Retrieved from http://www.metafuture.org/wp-content/uploads/2016/01/Causal-Layered-Analysis-FRM-version-3-2009.pdf
Inayatullah, S. (2015). What works: Case studies in foresight. Taipei: Tamkang University Press.
Isaak, J., & Hanna, M. J. (2018). User data privacy: Facebook, Cambridge Analytica, and privacy protection. Computer, 51(8), 56-59. DOI: 10.1109/MC.2018.3191268
Koenig, G. (2018). Introduction: The softian bargain. In I. Landreau, G. Peliks, N. Binctin, V. Pez-Pérard & L. Léger (Eds). My data are mine: Why we should have ownership rights on our personal data (pp. 10-17). Paris: GenerationLibre. Retrieved from https://www.generationlibre.eu/wp-content/uploads/2018/01/Rapport-Data-2018-EN-v2.pdf
Kroeber, A., & Kluckhohn, C. (1952). Culture: A critical review of concepts and definitions. Cambridge, MA: Peabody Museum of American Archaeology and Ethnology.
Landreau, I., Peliks, G., Binctin, N., Pez-Pérard, V., & Léger, L. (2018). My data are mine: Why we should have ownership rights on our personal data. Paris: GenerationLibre Think Tank. Retrieved from https://www.generationlibre.eu/wp-content/uploads/2018/01/Rapport-Data-2018-EN-v2.pdf
Lazer, D. (2015). The rise of the social algorithm: Does content curation by Facebook introduce ideological bias? Science, 348(6239), 1090-1091. DOI: 10.1126/science.aab1422
MacGill, V. (2015). Unravelling the myth/metaphor layer in causal layered analysis. Journal of Futures Studies, 20(1), 55-68. DOI:10.6531/JFS.2015.20(1).A55
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Great Britain: Allen Lane.
O’Rourke, S. (2017). The emancipation of the serfs in Europe. In D. Eltis (Ed.), The Cambridge world history of slavery: Vol. 4, AD 1804-AD 2016 (pp. 422-440). Cambridge: Cambridge University Press.
Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. New York: Penguin Press.
Purtova, N. (2010). Property in personal data: A European perspective on the instrumentalist theory of propertisation. European Journal of Legal Studies, 2(3), 193-208. Retrieved from https://ejls.eui.eu/wp-content/uploads/sites/32/pdfs/The_Future_of_Autumn_Winter_2010/A_EUROPEAN_PERSPECTIVE_ON_THE_INSTRUMENTALIST_%20PROPERTY_IN_PERSONAL_DATA_.pdf
Purtova, N. (2015). The illusion of personal data as no one’s property. Law, Innovation and Technology, 7(1), 83-111. https://doi.org/10.1080/17579961.2015.1052646
Ramos, J. (2015). Transcendence of a method: The story of causal layered analysis. In S. Inayatullah & I. Milojević (Eds). CLA 2.0 Transformative research in theory and practice (pp. 25-44). Taipei: Tamkang University Press.
Sardar, Z. (2010). Welcome to postnormal times. Futures, 42(5), 435-444. doi:10.1016/j.futures.2009.11.028
Schöffer, I. (2008). The second serfdom in Eastern Europe as a problem of historical explanation. Historical Studies: Australia and New Zealand, 33, 46-61. https://doi.org/10.1080/10314615908595151
Shin, D. (2019). Socio-technical design of algorithms: Fairness, accountability, and transparency. 30th European Conference of the International Telecommunications Society (ITS): “Towards a Connected and Automated Society”. International Telecommunications Society (ITS), Helsinki, Finland, 16-19 June 2019. Retrieved from https://www.econstor.eu/bitstream/10419/205212/1/Shin.pdf
Slaughter, R. (2004). Futures beyond dystopia: Creating social foresight. Abingdon: Routledge.
Uricchio, W. (2017). Data, culture and the ambivalence of algorithms. In M. T. Schäfer & K. van Es (Eds.), The datafied society: Studying culture through data (pp. 125-137). Amsterdam: Amsterdam University Press.
Ward, K. (2018). Social networks, the 2016 US presidential election, and Kantian ethics: Applying the categorical imperative to Cambridge Analytica’s behavioral microtargeting. Journal of Media Ethics, 33(3), 133-148. https://doi.org/10.1080/23736992.2018.1477047