Modern statistics was invented in Britain towards the end of the nineteenth century and into the first decades of the twentieth by Francis Galton, Karl Pearson, Ronald Fisher and others. But while the new science was rapidly taken up, applied and developed in the social sciences in the United States, and by the natural and medical sciences everywhere, its relationship to British social science has always been a much more ambivalent one.
In 1937 William Beveridge’s farewell address as outgoing director of the LSE argued that one reason for the ‘unsatisfactory status of the social sciences’ was ‘the frequent failure of social scientists to be scientific in method: in particular their neglect of observation of facts, as the basis of theories and as the control of theories.’ His comments could well have been made today, and have, in effect, been repeated by umpteen reviews of British social science ever since. The Clapham Report of 1946 noted a drastic shortage of statisticians in UK social science. The Rosenbaum report of 1971 lamented the lack of attention to statistics in teaching and the poor quality of quantitative methodology in research. Every recent International Benchmarking Review of UK social science subjects has sounded similar alarms. This is despite progress in computing power, the internet and survey technology that gives the contemporary social scientist an infinitely richer store of ‘facts’ to observe and analyse, and infinitely more powerful means to do so, all within a mouse click or two.
In contrast to their peers elsewhere, university staff in UK Higher Education Social Science are much less likely to have either basic quantitative skills or other methodological expertise, unless they work in Economics or experimental Psychology. Statistical work is done by a small minority of staff who are often marginalised. The skills and interests of current staff drive the curriculum content of degrees, new staff recruitment and the peer review system, so that this perverse skill mix is reproduced and perpetuated. Thus despite numerous initiatives for change or reform, the last half century has almost certainly seen a decline both in the level of staff quantitative skills and in the effort made to teach these skills to undergraduate and postgraduate students.
This has left social science in UK Higher Education with some bizarre characteristics. The country with perhaps the best social science data infrastructure in the world – including cohort studies dating back 70 years, survey and administrative data overseen by the ONS and curated and managed by the UK Data Archive, and the Understanding Society programme – has relatively few social scientists who can make any use of it. It is not only research that suffers, but teaching too. Thus the Higher Education Careers Service Unit Futuretrack survey found that over 80% of the graduates in Politics or Sociology that it asked thought that their degree had developed their ability to use numerical data ‘a little’, ‘very little’ or ‘not at all’: a frankly staggering result. In an era facing the challenge of ‘Big Data’ we are not even preparing our students to deal with small data.
If UK university social science is to change then three interventions will be key: improving and widening staff quantitative skills, devoting much more curriculum space to giving students these skills, and finally teaching them more effectively. The report launched today by the British Academy focuses on the last of these, looking at how leading universities across the world meet this challenge. Part of the value of the report lies in its simple accounts of the wide variety of ways different departments do this. There is a wealth of ideas here for anyone looking for inspiration. However, within this diversity there are a number of common lessons that I would draw and that ought to focus our thinking.
Quantitative skills are best learnt by extensive and repeated practice. Such practice should have a readily visible purpose: students are most likely to sharpen their skills when they need them to answer substantive questions that interest them. Abstract conceptual ideas are best taught using visualisation, taking advantage of the spectacular advances in software for this. Students learn in different ways and at varying speeds, so that peer learning and small group work are especially effective. When students are confident with basic maths, their attention is not diverted into coping with simple calculations. Dealing extensively with quantitative evidence, and reasoning with it, in other parts of their degree programme not only reinforces these skills but shows how they are a core part of what it is to do social science. Instead of essays, teach students to write research papers, where they must confront alternative theories or hypotheses with evidence, appropriately marshalled and presented.
There is also another, rather more negative lesson: the way quantitative skills are too often taught at present in the UK could hardly be worse. Teach technical quantitative skills in isolation from their application, with little attention to why or how they might be useful. Devote little time to them. Ignore them for the first year of the degree. Pay little attention to quantitative empirical evidence elsewhere in the degree. Assess students almost exclusively by essays.
Given these findings, the forthcoming TEF offers both a danger and an opportunity. The danger is that universities or departments anticipate that students will rate their experience of quantitative skills teaching poorly (often with excellent reasons for doing so) and quietly withdraw from teaching those skills altogether. The opportunity is that a TEF rewarding the kind of arrangements that suit good quants teaching – more contact hours, small group teaching, innovative assessment – might help some departments shift their approach.
A problem that has beset UK social science for a century will not be resolved overnight. Initiatives such as the Q-Step programme are a fundamental first step. In the longer term, however, what will matter is convincing UK social scientists that ‘observation of facts’ is something they have neglected for far too long.
John MacInnes is Professor of Sociology at the University of Edinburgh.