June 16, 2024

Remembering Joseph Thomas, The Founding President of FSMI

Prabir Purkayastha

COMRADE Joseph Thomas, who passed away last year, was the first president of the Free Software Movement of India (FSMI). Joseph Thomas and I go back a long way: we were almost the same age, and relatively old among the FSMI activists.

Joseph Thomas was clear that in the free software movement we cannot focus only on software; we also need to address the hardware on which free software runs. It was a welcome corrective. What he was telling us then is what many of us know today: free software cannot work unless we look at the hardware as well. The question of software and hardware, in terms of free and open systems, is therefore a question of how we look at the larger intellectual property regime itself. This shows that Joseph Thomas was well ahead of his time, even ahead of activists who were among the initiators of the free software movement.

Let me also add that Joseph Thomas, like many of us, was a difficult person. If you conform to what the world wants you to think and say, you are a nice, easy-going person. But such persons rarely change society; they tend to accept the status quo. It is the difficult people who, by rejecting the prevailing consensus, chart out new paths that change society.

Of course, being difficult alone is not enough. We must also have a critical vision of what constitutes the problems of today, and of how we can create solutions. That is why I recall Comrade Joseph Thomas's first intervention with us: that we cannot talk about free software without also looking at the hardware. And that is why he spent a significant amount of his time on hardware and related networking issues.

Both Comrade Joseph Thomas and I come from an era when there was virtually no formal computer science taught in Indian universities and colleges. At best, people talked about software programs. That was all we knew about computers: how to program large mainframe machines, the only computers available in those days.

The IBM 360 was supposed to be among the most advanced machines of the 1970s. It was huge and filled a large room. Today, the mobile phone in your pocket is at least 100 times more powerful and has 1,000 times more memory than the IBM 360 had, and the IBM 360 cost about 300 times more than today's iPhone. In JNU, where I was a student, we used a Russian computer that was a copy of the IBM 360. I still remember that when we got an error, the computer gave us only the memory location at which the error had occurred and the kind of error, generally a division by zero; we then had to work out what we had done wrong, find the offending line of the program, and fix it. So the huge change that has taken place in both hardware and software is something that Joseph Thomas and I both witnessed. We shared the experience of knowing not only the paths we as a society have taken but also the paths not taken. Why were these paths taken, and for whom?

These are questions that will confront every new generation. New advances are taking place across both hardware and software. High-performance chips power Artificial Intelligence (AI) products like ChatGPT, which are today at the cutting edge of technology. They have also brought about a qualitative change in how we look at artificial intelligence itself. How far are we from the holy grail of AI: Artificial General Intelligence?

Perhaps artificial intelligence itself is the wrong way of framing this question. Nevertheless, that is the accepted terminology. Since we did not create the terminology, we learn to live with it. The problem being discussed is whether, with such tools, we can reach the goal of what is called general-purpose intelligence. This means not solving specific problems through brute computational force, but being able to solve the different classes of problems that human beings routinely do.

This, of course, is the holy grail of artificial intelligence, and a lot of people would argue that with the new technologies and approaches, ChatGPT-like tools being the major ones, we are very close to producing human-like capacities in machines.

Such machines today can pass the Turing test, formulated by Alan Turing, one of the founders of computer science. The test was essentially whether a human being, conversing with an unseen interlocutor, could tell a computer from another human being. Today, that test no longer suffices. Computers can mimic human beings so well that when we talk to one, we find it difficult to tell whether there is a real human being or a machine at the other end. With the computational power of today's machines, we can train very large language models on huge amounts of data. This is essentially what tools like ChatGPT do, and they can leave us unsure whether it is a human being or a machine at the other end. But ChatGPT-like tools have no concept of what is true; they simply predict, based on the text that has come before, the likelihood of a particular word appearing next. This likelihood is derived from huge amounts of text, ingested mostly from the internet, and bears no relation to truth in any physical sense; the output only seems to be true. As some authors have put it, models like ChatGPT do not occasionally hallucinate; they are essentially generating "bullshit". They are designed to produce truth-seeming text, not truth.

The Turing test was originally designed in an era in which large language models, trained and running on machines with enormous computational power, could not have been conceived. Therefore, the problem today is to define new tests of human and machine intelligence. The fact that a human being cannot easily distinguish between a machine and a human being from their responses to queries is no longer a viable test of intelligence. We have to look at something more fundamental in human intelligence, and that is a journey that is still going to be a long one.

We have to ask today more fundamental questions about what these technologies are and to what end they should be used. What is it that we want to develop, and what do we not want? What are our ethical boundaries for such development? And we also have to address the most fundamental problem that confronted our generation: in whose interests are these technologies being developed? This was the basic issue facing the free software community as well. Are these new technologies for the benefit of a few companies such as Microsoft, OpenAI, Google, Apple and Amazon? Or are they for the benefit of humankind?

And that brings us to the larger conflict today. Are all developments in science and technology in the interest of capital? Or are they in the interests of the people? These questions lie at the centre of the development of technology. Unlike science, which seeks to understand nature, technology explicitly exists to produce artefacts, machines or software, that fulfil human needs. The question, therefore, is whether such developments of technology are to meet the greed of capital or to meet our needs as people. The fight of the people against capital is also a fight over who controls technology. Who controls knowledge and science? Are such advances for the people or for capital? We raise these questions not because we are opposed to advances in knowledge or technology, but because we want these advances to meet the needs of the people, not to make the few rich even richer.

This was the question that thinkers such as Einstein, Bernal, Haldane, Saha and Kosambi posed in the 20th century. These questions are as valid today as they were then.

The people who are at the core of its development are speaking up about the dangers of such AI models and products. The battles within OpenAI, with some key people speaking up about the dangers of such models, are not simply being fought by a few individuals in such corporations; they are also about who controls these new technologies. In whose interests should such technologies be used? Who decides whether they are safe or not?

A Sam Altman of OpenAI today, like a Bill Gates earlier, believes that technology development should benefit only the corporation (read: capital). If such technologies were to be used for the people, it would harm the profits, and therefore the interests, of capital. This is the battle that the Free Software community fought against copyrighted and patented software.

We were able to defeat software patents in India as part of our struggle against expanding property rights in the 2005 amendments to the Patents Act. But every generation faces new challenges. Today, the challenge is over the direction that AI tools and companies are taking. How do they threaten our public discourse, in which propaganda can be multiplied exponentially by machines? Whoever owns the machines then owns the public sphere; and it is a danger to all societies to have a handful of capitalists control what we read and think.

If we accept that new technologies need to be audited for the harm they may do to society, then they need to be audited by people who understand them: by practitioners joining hands with larger people's movements. Such an audit will not come from within the corporations, as the interest of a corporation is the interest of the handful of people who own its shares. They are interested in the value of those shares in the stock market. If they have invested $1, how quickly can they get $20, or better still, $200? That is all that matters to them.

This is the gap between the interests of the capitalists and those of the people, and that is the contradiction between capital and the people today. It is interesting to see that even within the heart of OpenAI, this battle is being fought by a host of developers who have been willing to walk out, speak publicly about the risks that ChatGPT-like models pose to the larger society, and lose millions in bonuses and stock incentives.

They are speaking out about the new dangers these technologies pose to society. This is the larger battle going on, both inside the companies and among people outside, and it is a battle that Free Software movements across the world need to join. It is not simply a battle between capital and the people, but also one between a handful of rich countries, which built colonial empires earlier and today want to use new technologies to continue their global hegemony, and the rest of the world. It is these countries, led by the US, that speak in the name of an "international rules-based order" in which they get to make the rules. AI tools are very much a part of this larger battle, entering even the battlefield, as we see in Israel's attack on Gaza.

To fight the alliance of global big capital, to fight the larger battle of humanity against capital, the technological community is as much a part of this battle as others are. And here I must return to Comrade Joseph Thomas and others like us, who built the free software movement two decades ago. This battle is not confined to the technological community or to the employees of these organisations; it is part of a much larger battle against capital. This is the message that Joseph Thomas has left for us in the way he lived and fought for this goal. His life encompassed not only the narrow issue of telecom, where he was an employee and a trade unionist, but also the larger issues of hardware, software and the communication network that a country needs. How do we bring different sections of people together as allies? How can the free software movement meet these new challenges? And how do we treat hardware and software not as two separate compartments, but as parts of a whole?

While the battles today may appear different, the underlying issues are identical. Who controls the technology? Who controls software? Who controls the development of the chips that drive the AI tools?

We can win this battle only if we bring along much larger sections of the people, and help them understand that it is in their interest, as well as in the interest of humanity, that we take a critical look at how technology is developed and at the direction such development should take. This is where I would like to end: not only by paying my homage to Comrade Joseph Thomas, but also by laying before us what our objectives should be. That would be the true homage to Comrade Joseph Thomas.

(This is an abridged version of the address given at the First Remembrance Day for Comrade Joseph, held on June 9 in Ernakulam)
