
A fiery debate is unfolding online after venture capitalist and former Infosys CFO Mohandas Pai posed a blunt question on social media: “What is your suggestion on how to choose people in leadership positions? This is a real issue.”
Pai’s query came in response to a series of thought-provoking posts by startup founder Lal Chand Bisu, who questioned India’s long-standing reliance on written examinations as the gateway to leadership roles.
“Written exam doesn’t prove any real world skill apart from your expertise in logic and theory. We choose a lot of so called ‘smart’ people through written exam and we put them at a leadership position of real world problems,” Bisu wrote on X.
He argued that this system consistently produces “mediocre outcomes,” particularly visible in government structures and the broader education system. “The mediocrity of this structure was hidden because almost everyone believed it, and it also looked fair. Our complete education system is designed around it.”
Bisu believes the rise of artificial intelligence is exposing cracks in the model. “Now with AI in picture, weakness of this structure is quite visible. Today AI can crack almost any exam. But cracking exams is not about AI smartness, but more about exam strategy. It was never a right way to choose the smartness. Real smartness is tasted in the real world,” he said, predicting “one of the big disruption in the future” where “leaders will come from real world experience, not through written exam.”
Pai’s post drew an outpouring of responses, with many echoing Bisu’s concerns. “Leadership isn’t proven in exams. It shows in crisis. In how one listens, learns, decides, and lifts others. Don’t just test intellect, test integrity, resilience, and the ability to handle power with humility,” wrote one user.
Another commentator added nuance to the conversation, saying: “Sir, I understand where you're coming from — and I say this with genuine respect. You’re asking a valid question: ‘If not written exam scores, then what else do we rely on to assess merit for government jobs and higher education?’ It’s a fair question, and in theory, written exams seem like the most objective way to evaluate individuals from diverse backgrounds. But here lies the core assumption — the bias that needs to be challenged: That performance in a time-bound, memory-heavy, English-oriented written test is the sole, pure, and complete definition of ‘merit.’ And that’s precisely where the problem begins.”