Why is accreditation in trouble for India and the US?


A university’s reputation is built over several decades. Can a ranking do justice to the effort required to reach its current position? Or are rankings an ugly form of elitism?

In 1995, Reed College, one of the top 10 liberal arts colleges in the US, refused to participate in the U.S. News & World Report annual survey, questioning the methodology and usefulness of college rankings.

The animosity had been triggered a year earlier by a 1994 report in The Wall Street Journal about institutions flagrantly manipulating data to move up in the rankings in U.S. News and other popular college guides.

Last November, several law schools, including Yale Law School, Harvard Law School, and the UC Berkeley School of Law, joined in rejecting the agency’s rankings, saying they did a disservice to prospective students.

Later, 13 medical schools, including the top-ranked Harvard Medical School, joined the boycott, adding a new concern to the data-manipulation charge of the 1994 Wall Street Journal report: it is impossible to come up with a single number that characterises a university’s performance.

The ranking agency’s executive chairman, however, said that the elite schools simply did not want to be held accountable by an independent third party.

Though accreditation assigns absolute grades while ranking is relative to similar institutions, the two use parallel parameters in their processes.

One can also debate whether the measurement must be qualitative or quantitative. Whereas the National Institutional Ranking Framework (NIRF) is 100% quantitative, the National Assessment and Accreditation Council (NAAC) is almost 70% quantitative, with the remainder qualitative.

Both approaches have their pluses and minuses. A qualitative approach is time-consuming and labour-intensive, its results cannot be verified, and, most importantly, it is not statistically representative.

Quantitative methodologies, on the other hand, encourage a false focus on numbers and are prone to being gamed by institutions after a while.


There is also a debate about whether a university should be programme-accredited or institution-accredited.

In an education system as large as ours, programme accreditation can be enormously time-consuming. On the other hand, institutional accreditation tends to camouflage the inconsistencies within departments and is, hence, error-prone.

Several writers and articles have criticised the NAAC’s processes in recent years and red-flagged the agency’s credibility. The concerns are very similar to those aired by US universities in the past.

Are the red flags justified?

Admittedly, the red flags must be addressed adequately. However, the limitations of a measurement system or the vastness or disparities within the system cannot be unfairly cited to bring down an agency’s credibility. The credibility of ranking or accreditation methodologies is debated and researched worldwide, and any ranking, assessment, or accreditation process is fraught with dangers.

At least 20 global ranking agencies measure quality on various parameters.

Australia has a Research Performance Index that measures the effectiveness of university research.

The Centre for Science and Technology Studies at Leiden University maintains annual European and worldwide rankings of the top 500 universities, based on the number and impact of their Web of Science-indexed publications.

The Quacquarelli Symonds (QS) World University Rankings rate the world’s top universities and have been published annually since 2004.

To further advance Asian interests, QS established the QS Asian University Rankings in 2009 in collaboration with the Korean daily “Chosun Ilbo.” They rank the top 350 Asian universities.

Round University Ranking (RUR) is another world university ranking that assesses the effectiveness of 750 leading universities worldwide based on 20 indicators distributed among four key dimension areas: teaching, research, international diversity and financial sustainability.

Times Higher Education (THE), a British publication, and Thomson Reuters have published a new set of world university rankings, the ‘THE World University Rankings’ (THE-WUR), since 2011.

There is also a ‘ranking of rankings’, UniRanks, launched in 2017, which aggregates the results of five global rankings (THE, QS, US News, ARWU, and Reuters) into a single rank.
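
UniRanks’ exact aggregation formula is not spelled out here, but the idea is easy to illustrate. Below is a minimal sketch assuming a simple mean-of-ranks aggregation; the institutions and positions are invented for illustration only:

```python
# Hypothetical sketch: combine several global rankings into a single
# "ranking of rankings" by averaging an institution's position in each.
# All institutions and ranks below are made up for illustration.

ranks = {
    # institution: positions in [THE, QS, US News, ARWU, Reuters]
    "University A": [3, 5, 2, 4, 6],
    "University B": [7, 2, 9, 3, 5],
    "University C": [1, 8, 4, 10, 2],
}

# Sort by mean rank: the lower the average position, the better.
aggregate = sorted(ranks, key=lambda u: sum(ranks[u]) / len(ranks[u]))

for place, uni in enumerate(aggregate, start=1):
    mean_rank = sum(ranks[uni]) / len(ranks[uni])
    print(f"{place}. {uni} (mean rank {mean_rank:.1f})")
```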

Whereas THE-WUR is based on 13 carefully calibrated performance indicators that measure an institution’s performance across teaching (30%), research (30%), research citations (30%), international outlook (7.5%) and knowledge transfer/industry income (2.5%), QS, a British company, bases its ranking on parameters such as academic reputation (40%), employer reputation (10%), faculty-student ratio (20%), citations per faculty (20%) and international faculty and student ratios (5% each).
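
To see what such weightings mean in practice, here is a minimal sketch of how a weighted composite score is computed, using the THE-WUR percentages quoted above; the indicator scores for the hypothetical institution are invented:

```python
# Illustrative only: combine per-indicator scores (0-100) into a single
# composite using the THE-WUR weights quoted in the text. The indicator
# scores below are invented for a hypothetical institution.

THE_WUR_WEIGHTS = {
    "teaching": 0.30,
    "research": 0.30,
    "citations": 0.30,
    "international_outlook": 0.075,
    "industry_income": 0.025,
}

def composite_score(scores: dict) -> float:
    """Weighted sum of indicator scores; the weights total 1.0."""
    return sum(weight * scores[name] for name, weight in THE_WUR_WEIGHTS.items())

example = {
    "teaching": 72.0,
    "research": 65.0,
    "citations": 88.0,
    "international_outlook": 90.0,
    "industry_income": 50.0,
}

print(f"Composite score: {composite_score(example):.1f}")  # prints 75.5
```

Every agency in this list follows the same basic recipe; what differs, and what drives the disagreements between rankings, is the choice of indicators and their weights.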

The Academic Ranking of World Universities (ARWU), also known as the Shanghai Ranking, bases its findings on alumni winning Nobel Prizes and Fields Medals (10%), faculty winning the same (20%), highly cited researchers (20%), articles published in Nature or Science (20%), indexed articles (20%) and the per capita academic performance of the institution (10%).

NAAC assigns a weightage of 15% to curricular aspects; 20% to teaching-learning and evaluation; 25% to research, innovation and extension; and 10% each to infrastructure and learning resources; student support and progression; governance, leadership and management; and institutional values and best practices.

The NIRF ranking is based on five parameters: Teaching, Learning and Resources; Research and Professional Practice; Graduation Outcomes; Outreach and Inclusivity; and Perception.

A quick glance shows the complexity of the processes and subjectivity involved, even as they appear similar.


Why are there so many accreditation and ranking methodologies and agencies?

The answer is simple.

Education is multi-faceted. It cannot be bound to simple metrics that are the same across disciplines. Simply put, ranking and accreditation are not discipline-agnostic.

The Indian education system is diverse. There are 2-, 3-, 4- and 5-year institutions that offer degrees, diplomas and certifications.

Institutions also differ in kind: technology versus social sciences and life sciences, multidisciplinary versus single-discipline, private versus public, as well as research-based, innovation-based, language-based and even special-purpose institutions/universities.

The boundary conditions in which they operate are very different. They cannot be grouped under the same parameters for a quality check.

That being the case, agencies like NAAC can only do so much and nothing beyond that.

It is probably time to junk all the ranking and accreditation processes and adopt Quality Assurance as the default.

Why can an institution or a university not be sued for reneging on its promises?

Is it not time to check the return on investment of our institutions/universities rather than pull down our agencies, especially when several of our students from elite institutions, educated on public money, do not even serve within the country?

(The author, Ashok Thakur, is a former Chairman of AICTE and a former Secretary, Education, Government of India. Views expressed here are personal.)
