Unchecked AI can amplify bias, threaten privacy and erode public trust
In the past couple of years, AI has evolved from novelty to necessity, transforming industries, jobs and business models. Today, it is an unstoppable force: writing code, drafting legal papers, analysing customer sentiment and powering innovation pipelines. Like their global peers, Indian companies are racing to deploy AI. However, many boardrooms are still unprepared to fully embrace the technology.
Deloitte’s global AI governance survey echoes this gap. Over 50 percent of organisations have deployed AI, yet only a third of boards feel they have meaningful oversight, which suggests that many companies are adopting AI without fully understanding its implications. Compounding the problem, two-thirds of directors globally admit to limited or no practical AI experience.
The lack of preparedness is evident: nearly a third of boards are yet to put AI on their formal agenda. This is a serious lapse, because companies that fail to address AI-related risks may face ethical, reputational and regulatory consequences. As AI reshapes their businesses, boards must take a proactive approach to its governance.
These numbers point to a widening gap between AI’s promise and boardroom readiness. While deploying AI, companies must understand its technical intricacies and ask the right questions to ensure responsible use. When algorithms steer human decisions, the consequences can include unforeseen outcomes, bias and a lack of accountability. Boards should be vigilant about such red flags and maintain a balance between using AI and preserving human judgement.
Many Indian boards view AI as a “tech topic” for IT leaders, rather than a powerful force that can make or break reputation, trust and strategy. This mindset must change.
The hidden risks
Unchecked AI can amplify bias, threaten privacy and erode public trust. One flawed hiring algorithm or a “hallucinating” GenAI model can cause immense reputational, financial and legal damage overnight. To mitigate these risks, boards must invest in continuous learning and scenario planning, encouraging directors to develop a working understanding of AI systems and their limits.
As part of this effort, boards must ask the right questions. For instance, they should inquire about the origin of the training data, how algorithmic bias is tested and addressed, and whether decision-making processes are transparent and explainable. Boards must also consider what happens when AI systems fail and who is ultimately accountable. By asking these questions and insisting on answers, boards can ensure that their organisations use AI responsibly.
Boards must expect to be challenged by AI-literate stakeholders: regulators, civil society and customers. As the EU AI Act rolls out, companies will face new obligations that will ripple through their global supply chains. With India also exploring AI regulatory frameworks, boards must oversee compliance to keep their companies competitive.
Boards must move from “passive awareness” to “active stewardship.” This means gaining a deeper understanding of where and how AI is used, asking the right questions and ensuring robust accountability frameworks. To achieve this, boards should focus on three key areas: developing curious and confident directors who go beyond buzzwords; establishing clear principles, governance structures and crisis response plans; and promoting a culture that encourages open dissent and discourages groupthink.
AI governance roadmap
Strong AI governance builds public trust in innovation, and boards that get this right will help their organisations stand out in a crowded, credibility-driven marketplace. Several concrete steps can get them there. First, make AI strategy a regular agenda item, alongside AI ethics and technology risk. Second, appoint a director with strong digital and AI expertise, or establish an external advisory council. Investing in upskilling and scenario planning is also vital, as is ensuring clear reporting on AI deployment and auditing processes. Finally, boards must define accountability for AI risk management and ethical oversight.
By taking intentional and informed actions, India’s boards can navigate the complexities of AI with curiosity and courage, shaping a better future for their organisations.
(Views are personal; Shefali Goradia is Chairperson, Deloitte South Asia, and Deepti Berera is Partner, Deloitte South Asia)