
Assessment of research: Notice

In November 1997 the University received a consultation paper, circulated by the HEFCE and the other funding bodies, seeking the views of higher education institutions (HEIs) on the future arrangements for research assessment exercises (RAEs) (see Reporter, p. 224). The University's response, as approved by the Council and the General Board, is reproduced below; it takes the form of a letter from the Vice-Chancellor, accompanied by a series of questions posed in the consultation paper and the University's answers to them.


1. I am writing in response to the funding bodies' consultation paper RAE 2/97 concerning the future of assessment of the quality of research in higher education institutions (HEIs). I enclose with this letter an annex which shows in detail our views on the questions asked. We recognize that further assessment on similar lines to that in 1996 will take place. This letter therefore focuses on points which we consider should be given most attention at this stage of the consultation process about the next exercise.

Frequency and form of the exercises

2. The present relatively short span between exercises deters institutions from developing major new areas of activity or making radical changes in existing ones, with the risk that innovative work is not promoted. We welcome the indications that the next exercise will not occur until the year 2001, but in our view the exercises in their present form will have run their course by then and will be ripe for more fundamental reconsideration. Now is the time to start thinking about a simpler successor method for selective funding, which might be introduced for 2006.

Funding consequences of the exercises

3. While this question is not strictly part of the current consultation, we must also comment on the level of funding for research which arises both from the ratings awarded and from the methodologies adopted by the funding bodies. It is of course the primary object of the exercises to facilitate the selective allocation of research funds. We support the exercises on the assumption that the financial outcome of them will be sufficient at least to underpin the level of research activity assessed and to provide for some limited development. It is therefore of much concern that in some individual instances the award of the highest ratings has not only failed to produce additional funds but has produced less than necessary to maintain the quality of research as assessed. It is misleading to imply (as was done in the Council's request for our 1997 Strategic Plan) that an 'additional premium' has been provided for all Departments rated 5*. It would assist us if the potential funding consequences were made clear in advance of the next exercise.

Consequences for staff recruitment and retention

4. There is also concern that the adoption of a single census date for attribution of past research output to institutions has produced undesirable 'market movements'. We take the view that there should be other relatively simple and acceptable ways of achieving appropriate attribution, and that further consultation should be undertaken to this end. A suggestion is that research output may count for quality rating in more than one institution but that the author may count for 'volume' and funding in only one.

Scholarship, and the nature of research

5. In my letter of 3 December 1996 to Brian Fender, I expressed serious concern about the distorting effects of successive exercises on scholarship in particular. In our view, it should be explicitly stated that for the purpose of assessment 'research' embraces particular forms of scholarship. We would expect the assessment panels to prescribe those forms afresh, but some possible examples appear in my 1996 letter and in our answer to question 11 in the annex attached.

6. Much substantial scholarly activity in the humanities may take ten or more years to see the light of day, and the exercises are causing researchers to divert their energies to short-term work. The period from which publications may be cited ought to be lengthened considerably for the relevant subject areas, and the subject communities should be consulted about the appropriate span.

Robustness of panel judgements

7. The academic and wider communities will only have confidence in the outcome of the exercise if the panels' judgements are transparently robust. That has implications for several distinct aspects of the process:

The process in terms of panel practice, and of any deviation from the initially published criteria, needs to be well documented and disseminated to the academic community.

Interdisciplinary research

8. There is concern that interdisciplinary research has not been assessed properly in the past, but doubt about the workability of the suggestions for the future in the consultation paper. A further suggestion is the creation of multidisciplinary sub-panels, drawn from the main panels. Certainly greater stress must be placed on soundly constituted panels, capable of appreciating interdisciplinary research and ensuring appropriate consultation with other panels or outside experts as necessary. In addition, in cases of doubt the submitting Departments should be given a formal opportunity, while the panels are working, to say how their submissions might be handled. It is worthy of note that the HEFCE have not yet been able to tell the University to which secondary panels one of our interdisciplinary submissions was referred in 1996.

Impact of the outcome

9. There is a strong view that the possibility of obtaining a 5* rating in 1996 for a very small number of staff (as opposed to a coherent unit) tested the credibility of the results. Ways should be devised of ensuring some minimum proportion (less than 100 per cent) of active researchers for the award of the highest grades, and clearly demonstrating this in the published outcome.


1. Should the funding bodies use a form of research assessment exercise similar to previous exercises to make judgements of research quality for allocating research funding?

Yes. Concerns have, however, been expressed that (i) every effort should be made to reduce the RAE burden by simplification of the process, and (ii) more attention should be given to the adverse impact of the RAE on staff appointments, choice of research projects, long-term research, and staff development. There is also a need for clear early guidance and instructions on a future exercise, with no subsequent changes once submissions have been made. This principle should also be extended to the funding method, recognizing that the funding bodies may have to make arithmetical adjustments in the light of the outcome, at least to maintain the level of selectivity.

What other approach might we take which would be as robust, effective, and acceptable to HEIs as a form of RAE?

No comment.

2. Should research assessment be concerned only with the question of research quality as opposed to, for example, its value for money or its relevance to wealth creation and the quality of life?


3. Should research assessment cover all academic research, and adopt a broad and inclusive definition of research?


4. Should there be a single UK-wide exercise rather than separate exercises conducted by each of the funding bodies?


5. Should the method of assessment continue to be based primarily on peer review?

Yes. Panel members must have the calibre, proven research record, and breadth of knowledge to ensure that their judgements command support.

6. Should the exercise continue to be based only on single-discipline units of assessment (UOAs)?

Yes. It should be recognized, however, that some present Units of Assessment are not strictly single-discipline and may require special attention, for example the Clinical and Biological Units, and UOA 33 Built Environment (see the answer to question 22).

7. Should all submissions for assessment conform to a common framework and common data definitions?

Yes, on the understanding that detailed interpretation of the common elements for particular subject areas is left to appropriately constituted panels.

8. Should the exercise continue to be a retrospective one, based on past performance?

Yes. The attribution of past output to a single institution at a single date has produced undesirable 'market movement', and suitable counter-measures must be given careful thought.

9. Should ratings of research quality continue to be the only output of a future assessment exercise, or should we consider adding a greater formative element? If so, how much developmental feedback should the exercise provide at the subject level and on submissions?

We are not in favour of a greater formative element. It would be helpful if all panel Chairmen were able to provide informal feedback on those submissions given low ratings; the provision of panels' reflections on common problems at subject level might be useful.

10. What additional financial cost would be acceptable to include a greater developmental aspect in the RAE?

No additional cost would be acceptable.

11. Are there activities essential to research which the present definition excludes but where there is a strong case for taking these into account in assessing the full range of a department's or unit's research?

Yes. It is for the panels to develop criteria appropriate to the assessment of particular subject areas, in consultation with the subject communities, but it has been suggested that the following activities are insufficiently well covered at present:

12. What should be the interval between the 1996 RAE and the next and subsequent exercises?

Five or six years.

13. Would it be practicable in administrative and funding terms to assess only some subjects in any one year?


Would this approach reduce the overall burden of research assessment on higher education institutions and researchers and be more attractive to them?


14. Should we consider further an interim, opt-in exercise?


If so, what kind of exercise would be robust and equitable in principle, workable in practice, and attractive to HEIs?

No comment.

15. Should we implement the Dearing Committee's proposal to provide an incentive not to participate in the RAE?

No. While this might reduce the burden of the exercise, especially that on panels, research funds should not go to non-participants in RAEs.

How else could we meet the need to strike a balance between teaching and research activity?

No comment.

16. If peer review is retained as the primary method of assessment, should this be supplemented by quantitative methods, and if so, how?

No. Robust common methods are essential, and there can be no universal use of quantitative methods, given the weakness of bibliometrics for some subject areas. Any such use should otherwise be a matter for individual panels, and should of course be made known through their published criteria.

17. Should an element of self-assessment be introduced into the assessment process, and if so what should this comprise?


18. How could we discourage HEIs from exaggerating their achievements? Would publishing the whole or parts of submissions bring value to the RAE?


Should we consider a deposit which would be forfeited if the actual grade fell significantly below the grade claimed?


19. Would an element of visiting improve the RAE?


20. Could visiting be achieved in a cost-effective way?


21. Should we explore the possibility of conducting visits for submissions in certain categories, and if so which ones?

No. There might be an argument for selective visits for 'borderline' cases, but this is likely to add considerably to the RAE burden on panels and departments, and to increase the costs.

22. Should we consider reconfiguring large units of assessment, and in particular medical subjects, into better defined units?

Yes. UOAs 1-3 should be retained, but with clearer guidance on the subject distribution between them. The dispersed biological and biomedical UOAs warrant reconsideration as a whole. UOA 33 Built Environment is too broad a collection of single disciplines.

23. Apart from medical subjects, was the number, division, and coverage of the 1996 subject-based UOAs and assessment panels appropriate?

Yes in general, but see the answer to question 22.

Are there instances where a strong case can be made for combining or subdividing UOAs or panel remits, on which we might further consult subject communities?

Yes, the biological/biomedical sciences, and Built Environment. Science Studies should be treated separately from the rest of the Philosophy UOA.

24. In addition to suggestions in questions 22 and 23 above, could the use of sub-panels ensure coverage of broadly defined subjects without requiring additional panels?

Possibly so for Architecture, Clinical Medicine, and Economic History, and in principle for other areas, so as to avoid major restructuring of the UOAs, provided that comparability was not threatened.

25. What is the desirable balance between maintaining some common criteria and working methods for different panels, and allowing panels to set criteria appropriate to the subject area?

This is largely a matter for the panels themselves, subject to the need for common criteria for cognate disciplines and for clear publication - well in advance - of the individual panel criteria and working methods.

26. Should formal mechanisms be introduced to ensure greater interaction and comparability between panels?


If so, how far should we strive to set limits to moderation and conformity?

Suggestions include a panel of panel Chairmen and overlapping panel membership. Arguments in favour are the 'normalization of standards', not always evident in 1996; better consideration of interdisciplinary work; and clarity and uniformity in (i) the application of criteria and (ii) the aggregation of marks.

27. Could the measures outlined in paragraphs 48-52 of the consultation paper* be combined with a broadly discipline-based framework to improve the assessment of interdisciplinary research?

No. Although the problem is acknowledged, there is limited support for, and doubt about the practicality of, generic interdisciplinary criteria or a single interdisciplinary monitoring group.

What other measures are both feasible and effective?

If the interdisciplinary activity is a single subject such as Architecture, a separate panel/sub-panel. Broader areas might be assessed by ad hoc multidisciplinary panels drawn from the single-discipline panels. Overall, reliance must be placed on creating a sound membership of panels and ensuring greater on-going interaction between them.

28. For a future exercise to assess research quality, should we continue to appoint panel members as outlined above, or should we cast the net more widely?

There is clearly some discontent with the present method. There appears to be a need for wider consultation with the subject communities and other relevant bodies; and a need to ensure from the outset a high calibre, fully capable, well balanced, and impartial membership.

In particular:

Should panel chairs be appointed by the funding bodies on the basis of nominations from outgoing chairs alone?


What alternatives are possible (for example, the new chair to be elected by the outgoing panel)?

Election is suggested as a possibility.

Should we continue to seek a degree of continuity in membership of panels, and to what extent?

Yes; continuity of no more than half of the membership.

29. How can we attract and induct appropriately qualified users and in particular people from industry into the process of academic peer review?

The best industrial laboratories, and the Civil Service (as users), might be consulted on this question. UOA 33 Built Environment had an Industry sub-panel which apparently worked well, perhaps because it was treated as a distinct authority rather than integrated within the main panel.

30. If the problem is the inability of those concerned to devote the necessary time to the process, is there a way to involve them which would not require so much time?

Members of separate sub-panels need not attend all panel meetings.

31. To what extent should we seek to appoint overseas academics to assessment panels? Would it be possible to involve overseas academics short of full membership of panels?

The cost of wide subject coverage and the inability of scholars of distinction to make time are obvious barriers, but other countries do involve non-nationals in such processes. They might be available as advisers or assessors 'on call'.

32. Should we seek other ways of adding an international dimension to the process, for example, by requiring international moderation for top grades?

Yes, as a reflection of perceived desirability. There is support for this, given that the present rating scale makes reference to 'international' excellence.

33. In a future exercise to assess research quality, should we adopt a rating scale that makes no reference to sub-areas of research activity?

Yes. But if reference to sub-areas continues, it is crucial that panels say how they will aggregate the sub-components.

34. Should HEIs still be able to identify within submissions the sub-areas of research conducted in the unit?

Yes. This practice might be expanded to incorporate interdisciplinary or 'thematic' sub-areas that cross several disciplines.

35. Should decimal scores be allowed, based on averaging scores awarded by individual panel members?

No. The necessity for panels to discuss each submission to a definite conclusion (a single rating) adds an element of robustness and credibility.

36. Should the present seven-point scale be retained?


If so, should it be numbered from 1-7?

The balance of opinion is Yes, but there may be something to be said for retaining known ratings in the next round.

37. Should the rating be expressed as the percentage of work of national excellence and the percentage of work of international excellence?


38. Is there a case for panels identifying a minimum number of staff required for the attainment of the highest grade?

Yes. The RAE should be designed so as strongly to encourage inclusion of a very high percentage of staff (but not all).

39. Should such decisions be made through consultation with subject communities?

No comment.

40. Should the rating scale be modified to reflect more directly the proportion of staff submitted in highly rated departments, bearing in mind the potential consequences for submission patterns and for the calculation of funding?

Yes. The credibility of the RAE is undermined if it is perceived that high ratings can be achieved by including only a fraction of university staff, and the rating scale should be redesigned to avoid misleading presentation of the outcome. Whatever is concluded, it is important that such details should be transparent and should be known in advance of the exercise.

41. Are there additional issues concerning the assessment of research which we should consider over the longer term?

Consideration of the future of the exercises after the next should be given high priority now rather than much later.

42. Are there other issues or concerns relevant to the assessment of research which this paper does not cover?

No comment.

* These are proposals to improve the handling of interdisciplinary research, either by the use of 'a discrete set of generic criteria for all interdisciplinary research submissions', or by the establishment of 'a single, interdisciplinary monitoring group' to monitor the treatment of such work by subject panels.


Cambridge University Reporter, 22 April 1998
Copyright © 1998 The Chancellor, Masters and Scholars of the University of Cambridge.