Internet Ciudadana interviewed Anita Gurumurthy and Nandini Chami of IT for Change, India, on the challenges that the artificial intelligence revolution poses for the world, and in particular for the Global South. IT for Change has produced several studies on digital technologies, in which, among other things, it uses the term “digital intelligence” as a broader conception of this developing phenomenon. The organisation also serves as the secretariat of the global Just Net Coalition.

By Sally Burch (ALAI)

1) What is the difference between “artificial intelligence” and “digital intelligence”, and how does the term “digital intelligence” help us to better understand the new era of technology?

The term artificial intelligence ignores the social origins of the intelligence that a given technology produces. It mystifies the machine. The term digital intelligence, by contrast, is more systems-oriented: it emphasises the interaction between human and digital systems in problem-solving and decision-making, which is increasingly common in the world of the Fourth Industrial Revolution (4IR). The term digital intelligence also seems to have more historical grounding: it does not imply a fetish for machines, and it recognises the AI revolution as part of a longer evolution of computing, Internet and Big Data technologies. This systemic logic – in which intelligence is embedded in the techno-social relations that make up the system – helps us never to lose sight of the fact that social knowledge and human work are the raw material of the intelligence revolution made possible by the new powers of digital technology, in particular AI technologies.

2) There is an ongoing international debate on the implications of AI, especially since the launch of ChatGPT. In your opinion, what are the main threats (and/or advantages) of this kind of technology, and what can we do about it, from a digital justice and community perspective?

The miracles of AI – including the ChatGPT phenomenon – are indeed momentous. This is a historical juncture much like the Gutenberg moment, when the mass production of books through the printing press contributed to changes in the institutions of civilisation. AI can enhance human creativity and change the social division of work in ways that empower and transform. It can serve individual emancipation, or make the Keynesian dream of a better life for all a reality. However, the status quo is not at all oriented towards this potential. Today, AI is firmly rooted in the logic of financialisation on steroids, based on a blatant disregard for human dignity and societal wellbeing.

The greatest threat posed by current AI development trajectories is the exacerbation of the environmental crisis. New evidence suggests that AI may be more of a problem than a solution in our struggle with climate change, water scarcity and high energy consumption. Some estimates suggest that the water consumed to train OpenAI’s large GPT-3 language model was equivalent to the amount needed to fill the cooling tower of a nuclear reactor. Even start-ups and technology developers working towards a more ethical and transparent AI industry struggle to address the sustainability challenge. The start-up Hugging Face managed to train its own large language model, BLOOM, on a French nuclear-powered supercomputer, producing a lower emissions footprint than most models of similar size; but once training was complete, even before deployment, BLOOM had a carbon footprint equivalent to that of 60 flights between London and Paris.

The technology loop of generative AI (1) has also opened a Pandora’s box of labour exploitation. As the Sama controversy in Kenya demonstrated, language models and content moderation tools can only be perfected by the work of countless content workers who wade through the toxic rubbish of hateful and violent content, at the cost of psychological trauma. The wellbeing and mental health of these workers fall victim to the woeful absence of protections for such high-risk work in the artificial intelligence value chain.

A third concern that has come to the fore in the months since ChatGPT took the world by storm is the long-term impact of the AI revolution on the future of work. Studies in recent months by the OECD and the ILO suggest that the workforce in developed countries is at greater immediate risk of losing jobs to automation enabled by generative AI, but that in the longer term this leap is expected to lead to higher productivity and increased GDP. The global South’s labour force will not be affected immediately, yet this is not good news for its long-term livelihood and wellbeing prospects. If these countries are left out of generative AI and other AI technology leaps, and remain trapped in the low-value segments of the economy – becoming temporary workers or foot soldiers in the new 4IR, like the indigo farmers of the British industrial revolution – what awaits us is a neo-colonial economic future that limits the options of most of the world.

It is the extractivism of data from this global majority that powers the AI revolution. And just as the public commons of Web 2.0 was cannibalised for corporate profit in the platformisation of the internet, thwarting the production of shared knowledge and the possibilities of peer-to-peer sharing, we find ourselves at another similar moment in the digital revolution. Generative AI, in particular, threatens to co-opt the public knowledge commons without any licensing obligation to share with, or return anything to, society. Imagine a situation where government health records – open government data – are used by pharmaceutical companies for proprietary research on epidemiological predictions that the government is then forced to buy or rent back in a health crisis!

The Big Pharma patent monopolies that hampered the fight against Covid should show us that this is a very real possibility.

We should also refocus on foundational rather than generative AI. Can the majority of the world’s population – engaged in agriculture, livestock and related livelihoods that depend on forests and common natural resources – be helped to thrive in the age of AI, especially in their climate change adaptation and mitigation needs? How can we enable localised diagnostic and predictive models that trigger early warnings and inform long-term strategies? Why are we merely pushing more and more data sharing in directions that only seem to help big agribusiness and technology companies integrate people living in extremely adverse conditions into the hyper-capitalist AI market? Developing countries have to find ways to harness their data resources for their autonomous development in the intelligence revolution, much as Thailand recovered from the Asian crisis of the 1990s and rebuilt its economy.

Anita Gurumurthy
Executive Director, IT for Change

3) There is great concern about the theft of intellectual property by AI, which trawls and reuses data, such as artists’ work, without acknowledging the source. How do you frame this debate?

Certainly, generative AI – capable of generating text and visual images and cloning voices – has brought the issue of intellectual property theft to the forefront. Policymakers approach this in different ways: China wants to control information flows to generative AI; Japan initially wanted to waive copyright claims on datasets used for generative AI and later reversed its position; EU and US policy is ambivalent about when fair use covers generative AI training. The balance between creators’ rights and the use of public resources from the knowledge commons for technological development continues to evolve.

Let us now turn to the creator’s perspective. Authors find themselves living the fictional nightmare of Roald Dahl’s “The Great Automatic Grammatizator”, in which the imitation machine mimics their styles and voices better than they do and creation becomes an assembly line of production. The moral rights of the author or creative performer are at risk when their works are cannibalised to train generative AI. There are also issues of cultural appropriation: Indian Warli art, for instance, has been auctioned at Sotheby’s without acknowledgement of the cultural context of its production by forest tribes, and the Maori community in New Zealand has raised and attempted to address similar concerns about the use of their language and linguistic resources for the training of AI models.

Collective licensing – the recognition of the cultural commons of literature, art and human cultural heritage – seems important. A fiduciary mechanism could be created to prevent cannibalisation or re-use in violation of the cultural commons. For literature and art, the balance between the intellectual commons as the public and common heritage of all humanity, and the moral rights of the author, must also be maintained. The collective licensing proposal of the Authors Guild seems useful in this respect. This proposal states: “The Authors Guild proposes to create a collective licence whereby a collective management organisation (CMO) would license rights on behalf of authors, negotiate fees with AI companies, and then distribute payment to authors who register with the CMO. These licences could cover past uses of books, articles and other works in AI systems, as well as future uses. The latter would not be licensed without specific authorisation from the author or other rights holders.”

Nandini Chami
Deputy Director, IT for Change

4) What do you see as the main AI-related issues and proposals that should be addressed in multilateral spaces such as the United Nations, in order to promote digital justice and counter the excessive power of large digital corporations?

There is an ongoing debate, including in India, about whether AI governance can be adequately addressed on the global stage or whether we need answers at the national level. Western democracies and the majority world have different ways of calibrating the balance between individual rights and the social good; this is recognised even in the human rights debate, as the contextual interpretation of rights is extremely important. As a recent UNCTAD study on G20 countries shows, what counts as sensitive personal data is defined differently in different societies. On questions such as human-centred innovation, market transparency and accountability, and the desired trajectories of AI development, we need a multi-scalar governance model: one in which the rights of people at the margins are safeguarded by basic rights protections and, at the same time, each national community can determine, through a deliberative process backed by justiciable, human rights-based AI development legislation, how it should harness the AI revolution and integrate into the global economy. Hyper-liberalisation of data services markets may not work for all countries, as some may even benefit from limiting their integration into the global digital economy.

This article is included in the digital magazine Internet Ciudadana N° 10 – October 2023 – Another digital world is possible!


(1) Generative AI is a branch of artificial intelligence that focuses on the generation of original content from existing data in response to prompts. (Wikipedia)

* Anita Gurumurthy is a founding member and executive director of IT for Change, where she leads projects and research collaborations on the network society, focusing on governance, democracy and gender justice.

* Nandini Chami is Deputy Director of IT for Change. Her work focuses primarily on research and policy advocacy in the areas of digital rights and development, and the political economy of women’s rights in the information society. She is part of the organisation’s advocacy efforts around the 2030 Development Agenda, on issues of ‘data for development’, digital technologies and gender justice.

* Sally Burch is an Anglo-Ecuadorian journalist and Executive Director of ALAI. She holds a BA in Literature from the University of Warwick (UK) and a BA in Journalism from Concordia University (Montreal, Canada). She regularly publishes articles on women and communication, the right to information and social movements.

 
