Two Surveys, One Ranking: How We Did It

Print Edition: Oct 28, 2012

For the past several years, BT conducted a purely perceptual survey to rank B-schools. As we have said before, that approach has its merits. But this year, we decided to combine a factual and a perceptual survey, and to crack the problem of ranking a large number of schools for good measure.

The factual survey was done first, wherein we sent out a detailed questionnaire across multiple parameters to 1,832 B-schools - both accredited and non-accredited but well-known - across the country. In all, 206 institutes responded with data, of which 205 could be ranked. Vinod Mehta School of Management, IIT Kharagpur, could not be ranked because of inadequate data and lack of supporting documents.

After the factual data had come in from these schools, we ran the perceptual survey. A separate questionnaire was administered to a group of more than 1,200 stakeholders of the B-school ecosystem, including teachers, students, young executives and HR recruiters. Their opinions were sought on various aspects relating to the five key parameters - Learning Experience, Living Experience, Return on Investment, Future Orientation and Brand Value - and scored. Their responses, however, were added up according to assigned weights, not in equal proportion.

The questionnaires of both surveys were based on identical parameters. At the top level, we had Learning Experience (with a weightage of 30 per cent), Living Experience (15 per cent), Return on Investment (25 per cent), Brand Value (20 per cent), and Future Orientation (10 per cent). Each of these primary parameters, or what we call "domains", has multiple sub-parameters, or sub-domains, under it, each with its own weightage. Under the sub-domains are sub-sub-domains, on the basis of which questions were framed. The questions, of course, were different for the two surveys.
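As an illustration, the top-level weighting described above can be sketched in Python. The domain weights are taken from the article; the sample scores and the school itself are hypothetical:

```python
# Top-level domain weights as given in the article (they sum to 1.0).
DOMAIN_WEIGHTS = {
    "Learning Experience": 0.30,
    "Living Experience": 0.15,
    "Return on Investment": 0.25,
    "Brand Value": 0.20,
    "Future Orientation": 0.10,
}

def overall_score(domain_scores):
    """Combine per-domain scores into one weighted overall score."""
    return sum(DOMAIN_WEIGHTS[d] * s for d, s in domain_scores.items())

# A hypothetical school scoring 80 in every domain gets an overall 80.
print(overall_score({d: 80 for d in DOMAIN_WEIGHTS}))  # 80.0
```

The same weighted-sum pattern repeats at the sub-domain and sub-sub-domain levels, each with its own weight table.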

Learning Experience encompasses things like the quality of internships; teaching methods used by schools, such as role plays, case studies and problem-solving; the quality of teaching infrastructure, such as libraries and Wi-Fi on campus; the qualifications and experience of faculty members, including research articles and papers published by them in reputed journals; cross-functional exposure; and so on.

Living Experience took into account hostels for students and the facilities available there, as well as recreation facilities such as music rooms, amphitheatres and sports infrastructure.

Return on Investment was one of the most important parameters. Here we analysed how B-schools fared on the fees they charged vis-a-vis the average salaries their graduates got in placements. It also took into account things like the number of companies that visited the campus for placements and, more importantly, the number of companies that returned from the previous year for placements. This is a significant parameter: companies that return for placements the following year are ostensibly happy with the people they hired from the college the previous year.

Brand Value took into account names of entrance examinations considered for admission, minimum cut-off score for admission, number of students who applied for admission (which is a fairly accurate indicator of how attractive a B-school is to an aspiring student), etc.

Future Orientation encompassed factors such as student exchange programmes, the number of seminars or conferences attended by students and faculty members, tie-ups with foreign universities, courses or programmes on business ethics and responsible business conduct, credits given to students for association with voluntary programmes or charities, etc.

After the scoring was done for the two surveys, the factual and perceptual scores for each sub-domain were combined for each school in a 50:50 weightage. From the bottom up, these combined scores were then added right up to the topmost domain level to get the overall scores for ranking purposes.
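The 50:50 combination and bottom-up roll-up described above can be sketched as follows. The sub-domain names, scores and weights here are invented for illustration; only the 50:50 split is from the article:

```python
def combine(factual, perceptual):
    """Blend factual and perceptual scores for each sub-domain 50:50."""
    return {k: 0.5 * factual[k] + 0.5 * perceptual[k] for k in factual}

def roll_up(sub_scores, sub_weights):
    """Aggregate combined sub-domain scores into a domain score."""
    return sum(sub_weights[k] * v for k, v in sub_scores.items())

# Hypothetical sub-domain scores for one school (not from the article).
factual = {"internship quality": 70, "faculty": 60}
perceptual = {"internship quality": 80, "faculty": 90}
weights = {"internship quality": 0.5, "faculty": 0.5}  # assumed weights

print(roll_up(combine(factual, perceptual), weights))  # 75.0
```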

However, before the factual and perceptual scores were added up, we had to address the original problem of how to factor in scores for colleges that are less known.

The "Base" and "Weights"

A good B-school needs to be "known" as one, not only in the locality or even state that it operates in, but beyond. Only then does it become a national brand. You could argue that if few people know a B-school, perception scores should be low for it, and there should be no issue. However, a problem occurs when very few people know and rate a B-school, but give it very high scores. This skews the results of the survey and puts better known institutes at a disadvantage. That's because when many more respondents rate well-known B-schools, their scores tend to even out. But the same doesn't happen for less reputed B-schools that are rated highly by a handful of people.

"Awareness", then, becomes an invisible and difficult factor in a survey like this, where we do a perceptual survey of more than 200 B-schools.

To address this issue and bring fairness to the rankings, we introduced an element called the "awareness deflator" in the perception survey. The 205 B-schools were divided into ten "panels" for the perception survey, with an average 20 institutes in each. Each panel was rated by 120 respondents, which we call "Base". For each panel, B-schools that were rated by 40 or more respondents (one-third the sample) were assigned a weight of 1, which means their perceptual scores were multiplied by 1. B-schools that were rated by 30-39 respondents were given a weight of 0.9, so their perceptual scores were multiplied by 0.9, deflating their total score by a small margin. For 20-29 respondents, the weight was 0.8; and for B-schools that were rated by fewer than 20 respondents, the weight was 0.7.
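The deflator logic above maps respondent counts to weights in four bands. A minimal sketch, using the Base of 120 and the thresholds stated in the article (the sample score is hypothetical):

```python
def awareness_weight(num_raters, base=120):
    """Map the number of respondents who rated a school to a deflator."""
    if num_raters >= base // 3:   # 40 or more: one-third of the Base
        return 1.0
    if num_raters >= 30:          # 30-39 respondents
        return 0.9
    if num_raters >= 20:          # 20-29 respondents
        return 0.8
    return 0.7                    # fewer than 20 respondents

def adjusted_score(raw_perceptual_score, num_raters):
    """Deflate a raw perceptual score by the awareness weight."""
    return raw_perceptual_score * awareness_weight(num_raters)

# A school rated by only 35 of 120 respondents has its score deflated.
print(adjusted_score(85.0, 35))  # 76.5
```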

It was the "awareness-adjusted" perceptual score that was added, right up from the sub-domain level. This innovative approach helped us address the awareness problem, and rank more than 200 B-schools in a combined factual and perceptual survey method.
