What the AI revolution means for the social sciences

Artificial intelligence promises to fundamentally change the way social scientists teach and produce research. LSE President and Vice Chancellor Larry Kramer argues the social sciences must embrace the opportunities presented by AI while responding to the many challenges it poses for society.


New technologies are disrupting a huge number of traditions and practices across multiple domains: economic, social, political and cultural. Among these, artificial intelligence (AI) has the potential to be the most fertile and disruptive.

Of course, generative AI is so new, and is evolving and improving so quickly, that no one can really claim to understand it. There are, however, a few things I think we can say with reasonable confidence, and these frame the challenge I see AI presenting for students and scholars in the social sciences.

Making choices

First, the AI revolution is going to upend practices across society, but especially in knowledge industries – which means in universities. It will change not just our administrative operations, but how and what we teach, and how and what we research. It’s just difficult at this point to say precisely how.

Second, as with all new technologies, whether the social and economic changes AI produces are for good or ill is not inherent in the technology itself. New technologies may be good or bad or both, to different degrees and in different ways. It is a matter of how we decide to use and deploy them – which means it is a matter on which social scientists should have a great deal to offer.

Take the internet as an example. It has had profoundly beneficial consequences (making knowledge accessible to vastly larger numbers of people, for instance) and profoundly damaging ones (such as the ease of spreading disinformation). All result from choices made, and not made, when it was still new. If we could go back, we would construct internet policy differently, but the path has been created and we now live with the consequences.

Which leads to a third point: We cannot avoid making choices now. Leaving AI to develop however markets shape it is a choice; regulating it is a choice; not choosing is a choice. Whatever we do or don’t do now will shape how AI wends its way into our lives. So, we need to make these choices, as best we can, in ways that preserve flexibility to change course if our initial choices turn out badly. That kind of regulatory design problem is a classic social science challenge.

What AI means for LSE

In terms of our internal operations at LSE, the emergence of AI poses some clear, if difficult, challenges, such as how best to incorporate it into the School’s administrative processes and how to ensure that students and staff have both access to and training for what they need.

Figuring out the implications of AI for teaching is going to be tricky. An initial effort to restrict or ban its use was abandoned once people recognised it would just encourage cheating. Since then, schools everywhere have been trying to figure out how best to regulate its use.

Speaking for myself, I think we should go further. We should affirmatively want students to use AI, and then ask ourselves how our classes and teaching should change accordingly. Rather than asking how to fit AI into current models of what and how we teach, we need to rethink what we have to offer students through our teaching, given what this technology can do and how it will transform the jobs students will move into after they leave here. There is already a strong appetite among employers for graduates who, as part of their education, have learned how to integrate AI into their work – work that AI is itself already beginning to transform. We need to prepare our students accordingly.

AI and the social sciences

The most exciting aspect of the AI revolution is on the research side, where the challenge is how to take advantage of the incredible range of opportunities AI offers or will soon offer. The current focus of most AI research is on potential threats and problems and how to avoid them, whether algorithmic bias in the data, deepfakes, or fear of the Singularity.

These are important questions, but the possibilities for beneficial uses are of at least equal importance. We have, in fact, hardly begun to explore what AI can do for social science research: for example, taking full advantage of its ability to find patterns in massive data sets and tease out possible causation in ways we cannot presently discern or even imagine.

These and other possibilities will continue to unfold as AIs become better and more powerful, and as innovative researchers think of new ways to use them. As part of that, there is a need for research on how to design AIs for social science work: helping those who build and train them to do so with social science rather than natural science models and constructs.

Social progress

Historically, new technologies displaced existing forms of labour while creating new ones, an outcome today's techno-optimists assure us will hold for this new technology as well. They could be right, but I'm not so sure.

Unlike earlier technological breakthroughs, AIs will presumably be capable of learning to do any new jobs they create. And insofar as a machine will invariably be able to do these jobs faster and cheaper than humans, the risk of labour displacement is very real. Certainly, the risk seems high if we just leave the development of AI to our current market-based approach, which incentivises a short-term focus on increasing productivity and decreasing costs.

We need new research to determine how we can implement and integrate rapidly developing forms of artificial intelligence in ways that enhance human capacities – that is, in ways that maximise the benefits of human/AI partnership. We then need to figure out how to create incentives to adopt these approaches, rather than focusing simply on what is cheap or fast.

In any event, these are just a few of the many challenges and opportunities being created by generative AI, which makes this critical terra nova for social science research of all kinds.

This is an extract from the inaugural lecture of LSE President and Vice Chancellor Larry Kramer, "What is needed is hard thinking": five challenges for the social sciences, held at the London School of Economics on 14 October 2024.


Note: This article gives the views of the author, not the position of EUROPP – European Politics and Policy or the London School of Economics. Featured image credit: Shutterstock.com
