
news20100304nn4

2010-03-04 11:22:57 | Weblog
[naturenews] from [nature.com]
Published online 3 March 2010 | Nature 464, 16-17 (2010) | doi:10.1038/464016a
News
University rankings smarten up

Systems for ranking the world's higher-education and research institutions are about to become more sophisticated, says Declan Butler.

By Declan Butler

[Image: Improved university rankings may help students, researchers and policy-makers to make better choices. S. JARRATT/CORBIS]

Every autumn, politicians, university administrators, funding offices and countless students wait impatiently for the World University Rankings produced by Britain's Times Higher Education (THE) magazine. A position in the upper echelons of the THE ranking can influence policy-makers' higher-education investments, determine which institutions attract the best researchers or students, and prompt universities to try to boost their ratings.

But academics and universities have long criticized what they describe as the outsized influence of the THE and other university rankings, saying that their methodology and data are problematic (see Nature 447, 514–515; 2007). Many universities see wild swings in their rankings from year to year, for example, which cannot reflect real changes in quality; and many French universities' ratings suffer because their researchers' publications often list affiliations with national research agencies as well as the university itself, diluting the benefit for the university. Now, universities and other stakeholders are developing their own rankings to tackle these shortcomings.

"Rankings have outgrown the expectations of those who started them," says Kazimierz Bilanow, managing director of the IREG Observatory on Academic Rankings and Excellence, a Warsaw-based ranking quality-assurance body created in October 2009. "What were often exercises intended to boost newspaper circulation have come to have enormous influence on policy-making and funding of institutions and governments."

Several approaches to university rankings now being developed are switching the emphasis away from crude league tables and towards more nuanced assessments that could provide better guidance for policy-makers, funding bodies, researchers and students alike. They promise to rank universities on a much wider range of criteria, and assess more intangible qualities, such as educational excellence. And the THE ranking list is trying to remake itself in the face of the criticism.

One complaint is that the THE's rankings rely heavily on reputational surveys, which involve polling academics about which universities they think are the best in a given field. Some argue that these assessments often use too few academics, who may not be well informed about all the universities they are being asked to judge, and that there is a bias towards English-speaking countries.

In November 2009, the THE announced that the data for its rankings would no longer be supplied by QS, a London-based higher-education media company. "We are very much aware that national policy and multimillion-pound decisions are influenced by these rankings," said THE editor Ann Mroz at the time. "We are also acutely aware of the criticisms made of the methodology. Therefore, we feel we have a duty to improve how we compile them."

League-table turnabout

The THE will in future draw its ranking data from the Global Institutional Profiles Project, which was launched by data provider Thomson Reuters in January. The project aims to create a comprehensive database on thousands of the world's universities, including details of research funding, numbers of researchers and PhDs awarded, and measures of educational performance. The company will also use its internal citation and publication data to generate multiple indicators of institutions' research performance, and will build in auditing procedures to guard against misinformation provided by universities.

Thomson Reuters plans to continue reputational surveys, but aims to have at least 25,000 reviewers, compared with the 4,000 used by QS for the THE 2009 rankings. It has partnered with UK pollster Ipsos MORI to try to ensure the survey is representative. "We are not doing this randomly, but putting a lot of thought behind it," says Simon Pratt, project manager for institutional research at Thomson Reuters. "We want a more balanced view across all subject areas." The THE will continue to rank all universities in the form of a league table, which critics say offers a false precision that exaggerates differences between institutions. But the new rankings will be more nuanced and detailed, according to Pratt, including data that enable institutions to compare themselves on various indicators with peers having similar institutional profiles.
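
The article does not describe Ipsos MORI's actual survey design, but stratified sampling is one standard way a pollster can balance a reviewer pool across subject areas rather than drawing respondents at random. The sketch below is purely illustrative; the field names and pool sizes are invented.

```python
import random

# Illustrative only: stratified sampling is one standard way to balance
# a reviewer pool across subject areas. The article does not specify
# Ipsos MORI's actual method; all names and numbers here are invented.
reviewer_pool = {
    "life sciences":     [f"ls_{i}" for i in range(12000)],
    "physical sciences": [f"ps_{i}" for i in range(9000)],
    "social sciences":   [f"ss_{i}" for i in range(5000)],
    "arts & humanities": [f"ah_{i}" for i in range(3000)],
}

def stratified_sample(pool, per_stratum):
    """Draw the same number of reviewers from each subject area, so that
    small fields are not swamped by large ones, as a simple random sample
    of the whole pool would allow."""
    return {area: random.sample(people, per_stratum)
            for area, people in pool.items()}

sample = stratified_sample(reviewer_pool, per_stratum=500)
print({area: len(people) for area, people in sample.items()})
```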

Comparing like with like is the cornerstone of a European Commission effort to create a global database of universities — the Multi-dimensional Global ranking of Universities (U-Multirank). A pilot project involving 150 universities will be launched in the coming months by a group of German, Dutch, Belgian and French research centres that specialize in research and education metrics, known as the Consortium for Higher Education and Research Performance Assessment.

U-Multirank hopes to focus its comparisons on institutions that have similar activities and missions. Existing league tables lump together all types of universities, but comparing a large multidisciplinary university with a regional university focused on teaching, for example, makes little sense, says Frans van Vught, one of U-Multirank's project leaders and former president and rector of the University of Twente in Enschede, the Netherlands.

To identify universities with similar profiles, the project will draw on a sister European Union project, U-Map, in which van Vught is also involved. U-Map is building a classification of universities based on their level of research activity, the types of degrees and student programmes offered, as well as the extent of other important roles such as their regional and industrial engagement and international orientation. U-Multirank will develop indicators of performance on each of these aspects. After completion of the pilots, the two projects will seek philanthropic funding to become operational services, says van Vught.

U-Multirank also hopes to overcome one of the major criticisms of many existing ranking systems: that they focus excessively on research output, neglecting the many other crucial roles that universities have, not least teaching. Indeed, the Academic Ranking of World Universities, compiled by Shanghai Jiao Tong University in China and generally known as the Shanghai index, focuses exclusively on research output and citation impact, including variables such as numbers of Nobel prizewinners and publications in Nature and Science (see 'Top marks').

Rankings that use citation counts do not usually take into account the widely different citation rates among disciplines. This biases rankings in favour of biomedical research institutions, penalizing those that publish mainly in the social sciences or in other fields with lower citation rates. By contrast, both the Thomson Reuters and U-Multirank initiatives will use a variety of normalized bibliometric indicators that take this, and other pitfalls, into account.
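
As a concrete illustration of what normalization means here, a common bibliometric approach divides each paper's citation count by the world-average citation rate for its field, then averages those ratios. The figures and field baselines below are invented, and neither initiative has published its exact formula; this is only a sketch of the general technique.

```python
# Invented data illustrating field-normalized citation impact; neither
# Thomson Reuters nor U-Multirank has published this exact formula.

# One institution's papers, as (field, citations) pairs
papers = [
    ("biomedicine", 42),
    ("biomedicine", 10),
    ("sociology", 6),
    ("mathematics", 3),
]

# Assumed world-average citations per paper in each field
field_baseline = {"biomedicine": 20.0, "sociology": 4.0, "mathematics": 2.5}

def normalized_impact(papers, baseline):
    """Average, over all papers, of citations divided by the field mean.

    A score of 1.0 means the institution is cited at exactly the world
    average for its mix of fields; a raw citation count would instead
    reward whoever publishes the most biomedical papers."""
    ratios = [cites / baseline[field] for field, cites in papers]
    return sum(ratios) / len(ratios)

print(f"Field-normalized impact: {normalized_impact(papers, field_baseline):.2f}")
```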

In place of league tables, U-Multirank will give an overall grade of institutional performance on each of the various indicators it considers, allowing students, scientists and policy-makers to access and combine the indicators most relevant to them, so making their own à la carte rankings. "They will be able to look at the data through their own spectacles," says van Vught.
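
The mechanics of such an à la carte ranking are simple to sketch. Assuming, purely for illustration, per-indicator scores on a common 0-1 scale, each user supplies weights for the indicators they care about and institutions are sorted by the weighted sum. The institutions, indicators, scores and weights below are invented, not taken from U-Multirank.

```python
# Hypothetical sketch of an "a la carte" ranking: every name and number
# here is invented for illustration, not taken from U-Multirank.
universities = {
    "University A": {"teaching": 0.9, "research": 0.6, "regional engagement": 0.8},
    "University B": {"teaching": 0.5, "research": 0.9, "regional engagement": 0.4},
    "University C": {"teaching": 0.7, "research": 0.7, "regional engagement": 0.9},
}

def custom_ranking(universities, weights):
    """Sort institutions by a user-chosen weighted sum of indicator scores;
    indicators the user gives no weight to simply drop out."""
    def score(indicators):
        return sum(weights.get(name, 0.0) * value
                   for name, value in indicators.items())
    return sorted(universities, key=lambda u: score(universities[u]), reverse=True)

# A prospective student might weight teaching over research:
print(custom_ranking(universities, {"teaching": 0.7, "research": 0.3}))
```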

But as everyone in the field acknowledges, educational aspects of universities are particularly difficult to compare. Research is an international activity, and reasonable indicators exist for comparing institutions. Education, by contrast, is largely organized nationally and reflects different cultures and traditions. "It's a much tougher problem," says Pratt.

University dropout rates in France, for example, cannot be compared directly with those in other countries because all students who pass the baccalauréat automatically acquire a place at a French university. Selection takes place at the end of the first undergraduate year, and not immediately after leaving high school, pushing up the dropout rate. Similarly, the length and content of degrees often vary greatly between countries.

Measuring ideas

That's a gap in assessment that the Organisation for Economic Co-operation and Development (OECD) is trying to fill. Last month, it launched a US$12.5-million pilot project, the Assessment of Higher Education Learning Outcomes (AHELO), to develop new metrics for assessing teaching and learning outcomes. The project, which does not intend to produce rankings, will try to measure complex aspects of university life — such as the ability of students to think critically and come up with original ideas — across different cultures and languages. Although few details are yet available, it says it intends to launch a pilot involving 200 students in a dozen or so universities in six countries, including the United States and Japan. "We will be watching the development of the AHELO exercise very closely," says Ben Sowter, QS's head of research.

CONTINUED ON newsnn5
