6. Much substantial scholarly activity in the humanities may take ten or more years to see the light of day, and the exercises are causing researchers to divert their energies to short-term work. The period from which publications may be cited ought to be lengthened considerably for the relevant subject areas, and the subject communities should be consulted about the appropriate span.
The process in terms of panel practice, and of any deviation from the initially published criteria, needs to be well documented and disseminated to the academic community.
Yes. Concerns have, however, been expressed that (i) every effort should be made to reduce the RAE burden by simplifying the process, and (ii) more attention should be given to the adverse impact of the RAE on staff appointments, the choice of research projects, long-term research, and staff development. There is also a need for clear early guidance and instructions on a future exercise, with no subsequent changes, especially after submissions have been made. This principle should also be extended to the funding method, recognizing that the funding bodies may have to make arithmetical adjustments in the light of the outcome, at least to maintain the level of selectivity.
What other approach might we take which would be as robust, effective, and acceptable to HEIs as a form of RAE?
No comment.
2. Should research assessment be concerned only with the question of research quality as opposed to, for example, its value for money or its relevance to wealth creation and the quality of life?
Yes.
3. Should research assessment cover all academic research, and adopt a broad and inclusive definition of research?
Yes.
4. Should there be a single UK-wide exercise rather than separate exercises conducted by each of the funding bodies?
Yes.
5. Should the method of assessment continue to be based primarily on peer review?
Yes. Panel members must have the calibre, proven research record, and breadth of knowledge to ensure that their judgements command support.
6. Should the exercise continue to be based only on single-discipline units of assessment (UOAs)?
Yes. It should be recognized, however, that some present Units of Assessment are not strictly single-discipline and may require special attention, for example the Clinical and Biological Units, and UOA 33 Built Environment (see the answer to question 22).
7. Should all submissions for assessment conform to a common framework and common data definitions?
Yes, on the understanding that detailed interpretation of the common elements for particular subject areas is left to appropriately constituted panels.
8. Should the exercise continue to be a retrospective one, based on past performance?
Yes. The attribution of past output to a single institution at a single date has produced undesirable 'market movement', and suitable counter-measures must be given careful thought.
9. Should ratings of research quality continue to be the only output of a future assessment exercise, or should we consider adding a greater formative element? If so, how much developmental feedback should the exercise provide at the subject level and on submissions?
We are not in favour of a greater formative element. It would be helpful if all panel Chairmen were able to provide informal feedback on those submissions given low ratings; the provision of panels' reflections on common problems at subject level might be useful.
10. What additional financial cost would be acceptable to include a greater developmental aspect in the RAE?
No additional cost would be acceptable.
11. Are there activities essential to research which the present definition excludes but where there is a strong case for taking these into account in assessing the full range of a department's or unit's research?
Yes. It is for the panels to develop criteria appropriate to the assessment of particular subject areas, in consultation with the subject communities, but it has been suggested that the following activities are insufficiently well covered at present:
12. What should be the interval between the 1996 RAE and the next and subsequent exercises?
Five or six years.
13. Would it be practicable in administrative and funding terms to assess only some subjects in any one year?
No.
Would this approach reduce the overall burden of research assessment on higher education institutions and researchers and be more attractive to them?
No.
14. Should we consider further an interim, opt-in exercise?
No.
If so, what kind of exercise would be robust and equitable in principle, workable in practice, and attractive to HEIs?
No comment.
15. Should we implement the Dearing Committee's proposal to provide an incentive not to participate in the RAE?
No. While this might reduce the burden of the exercise, especially on panels, research funds should not go to non-participants in the RAE.
How else could we meet the need to strike a balance between teaching and research activity?
No comment.
16. If peer review is retained as the primary method of assessment, should this be supplemented by quantitative methods, and if so, how?
No. Robust common methods would be essential, yet quantitative methods cannot be applied universally, given the weakness of bibliometrics in some subject areas. Any such use should therefore be a matter for individual panels, and should of course be made known through their published criteria.
17. Should an element of self-assessment be introduced into the assessment process, and if so what should this comprise?
No.
18. How could we discourage HEIs from exaggerating their achievements? Would publishing the whole or parts of submissions bring value to the RAE?
No.
Should we consider a deposit which would be forfeited if the actual grade fell significantly below the grade claimed?
No.
19. Would an element of visiting improve the RAE?
No.
20. Could visiting be achieved in a cost-effective way?
No.
21. Should we explore the possibility of conducting visits for submissions in certain categories, and if so which ones?
No. There might be an argument for selective visits for 'borderline' cases, but this is likely to add considerably to the RAE burden on panels and departments, and to increase the costs.
22. Should we consider reconfiguring large units of assessment, and in particular medical subjects, into better defined units?
Yes. UOAs 1-3 should be retained, but with clearer guidance on the subject distribution between them. The dispersed biological and biomedical UOAs warrant reconsideration as a whole. UOA 33 Built Environment is too broad a collection of single disciplines.
23. Apart from medical subjects, was the number, division, and coverage of the 1996 subject-based UOAs and assessment panels appropriate?
Yes in general, but see the answer to question 22.
Are there instances where a strong case can be made for combining or subdividing UOAs or panel remits, on which we might further consult subject communities?
Yes, the biological/biomedical sciences, and Built Environment. Science Studies should be treated separately from the rest of the Philosophy UOA.
24. In addition to suggestions in questions 22 and 23 above, could the use of sub-panels ensure coverage of broadly defined subjects without requiring additional panels?
Possibly so for Architecture, Clinical Medicine, and Economic History; and in principle for other areas, to avoid major restructuring of the UOAs, provided that comparability was not threatened.
25. What is the desirable balance between maintaining some common criteria and working methods for different panels, and allowing panels to set criteria appropriate to the subject area?
This is largely a matter for the panels themselves, subject to the need for common criteria for cognate disciplines and for clear publication - well in advance - of the individual panel criteria and working methods.
26. Should formal mechanisms be introduced to ensure greater interaction and comparability between panels?
Yes.
If so, how far should we strive to set limits to moderation and conformity?
Suggested mechanisms are a panel of panel Chairmen and overlapping panel membership. Arguments in favour include the 'normalization of standards', not always evident in 1996; better consideration of interdisciplinary work; and clarity and uniformity in (i) the application of criteria and (ii) the aggregation of marks.
27. Could the measures outlined in paragraphs 48-52 of the consultation paper* be combined with a broadly discipline-based framework to improve the assessment of interdisciplinary research?
No. There is limited support for, and doubts about the practicality of, generic interdisciplinary criteria or a single interdisciplinary monitoring group, although the problem is acknowledged.
What other measures are both feasible and effective?
Where the interdisciplinary activity is effectively a single subject, such as Architecture, a separate panel or sub-panel would be appropriate. Broader areas might be assessed by ad hoc multidisciplinary panels drawn from the single-discipline panels. Overall, reliance must be placed on creating a sound panel membership and on ensuring greater ongoing interaction between panels.
28. For a future exercise to assess research quality, should we continue to appoint panel members as outlined above, or should we cast the net more widely?
There is clearly some discontent with the present method. There appears to be a need for wider consultation with the subject communities and other relevant bodies, and a need to ensure from the outset a high-calibre, fully capable, well-balanced, and impartial membership.
In particular:
Should panel chairs be appointed by the funding bodies on the basis of nominations from outgoing chairs alone?
No.
What alternatives are possible (for example, the new chair to be elected by the outgoing panel)?
Election is suggested as a possibility.
Should we continue to seek a degree of continuity in membership of panels, and to what extent?
Yes. No more than half.
29. How can we attract and induct appropriately qualified users and in particular people from industry into the process of academic peer review?
The best industrial laboratories, and the Civil Service (as users), might be consulted on this question. UOA 33 Built Environment had an Industry sub-panel which apparently worked well, perhaps because it was treated as a distinct authority rather than integrated within the main panel.
30. If the problem is the inability of those concerned to devote the necessary time to the process, is there a way to involve them which would not require so much time?
Members of separate sub-panels need not attend all panel meetings.
31. To what extent should we seek to appoint overseas academics to assessment panels? Would it be possible to involve overseas academics short of full membership of panels?
The cost of wide subject coverage and the inability of scholars of distinction to make time are obvious barriers, but other countries do involve non-nationals in such processes. They might be available as advisers or assessors 'on call'.
32. Should we seek other ways of adding an international dimension to the process, for example, by requiring international moderation for top grades?
Yes, as a reflection of perceived desirability. There is support for this, given that the present rating scale refers to 'international' excellence.
33. In a future exercise to assess research quality, should we adopt a rating scale that makes no reference to sub-areas of research activity?
Yes. But if reference to sub-areas continues, it is crucial that panels say how they will aggregate the sub-components.
34. Should HEIs still be able to identify within submissions the sub-areas of research conducted in the unit?
Yes. This practice might be expanded to incorporate interdisciplinary or 'thematic' sub-areas that cross several disciplines.
35. Should decimal scores be allowed, based on averaging scores awarded by individual panel members?
No. The necessity for panels to discuss each submission to a definite conclusion (a single rating) adds an element of robustness and credibility.
36. Should the present seven-point scale be retained?
Yes.
If so, should it be numbered from 1-7?
The balance of opinion is Yes, but there may be something to be said for retaining known ratings in the next round.
37. Should the rating be expressed as the percentage of work of national excellence and the percentage of work of international excellence?
No.
38. Is there a case for panels identifying a minimum number of staff required for the attainment of the highest grade?
Yes. The RAE should be designed so as to encourage strongly the inclusion of a very high percentage of staff (though not all).
39. Should such decisions be made through consultation with subject communities?
No comment.
40. Should the rating scale be modified to reflect more directly the proportion of staff submitted in highly rated departments, bearing in mind the potential consequences for submission patterns and for the calculation of funding?
Yes. The credibility of the RAE is undermined if it is perceived that high ratings can be achieved by including only a fraction of university staff, and the rating scale should be redesigned to avoid misleading presentation of the outcome. Whatever is concluded, it is important that such details should be transparent and should be known in advance of the exercise.
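Purely by way of illustration, with hypothetical figures and a hypothetical weighting that are not drawn from the consultation paper, a presentation of ratings weighted by the proportion of eligible staff submitted would distinguish a highly selective submission from a comprehensive one:

\[
5 \times \tfrac{20}{40} = 2.5 \qquad \text{against} \qquad 4 \times \tfrac{38}{40} = 3.8
\]

so that a unit rated 5 on half of its eligible staff would no longer appear unambiguously stronger than a unit rated 4 on nearly all of them.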
41. Are there additional issues concerning the assessment of research which we should consider over the longer term?
Consideration of the future of the exercise beyond the next round should be given high priority now rather than much later.
42. Are there other issues or concerns relevant to the assessment of research which this paper does not cover?
No comment.
* These are proposals to improve the handling of interdisciplinary research, either by the use of 'a discrete set of generic criteria for all interdisciplinary research submissions', or by the establishment of 'a single, interdisciplinary monitoring group' to monitor the treatment of such work by subject panels.