
By Khushbu Jain, Advocate, Supreme Court of India
India is witnessing a rapid transformation in how decisions are made, driven by artificial intelligence. From banks approving loans and companies screening job candidates to law enforcement agencies allocating resources, algorithms are becoming central to consequential choices. The appeal lies in AI’s ability to process large datasets swiftly, identify patterns that humans may overlook, and offer scalable solutions in a vast, diverse nation.
However, this growing reliance on automated systems brings with it critical legal and ethical challenges. As machines begin to make—or at least heavily influence—decisions that shape people’s lives, fundamental questions arise: Who is accountable for errors? Can decisions be trusted if they are not understandable? Most importantly, how do we protect constitutional rights in an algorithm-driven world?
One of the most urgent issues with AI is the “black box” phenomenon. Many advanced models, especially those based on deep learning, are so complex that even their developers cannot fully explain how they arrive at specific decisions. This lack of transparency becomes a serious concern when these models are used in high-stakes scenarios such as credit approval, insurance underwriting, or criminal risk assessment.
Consider a scenario where a life insurance application is denied based on data collected from a fitness tracker or health app. The applicant, unaware that their device shared step counts, heart rate, or sleep data with third parties, is left shocked and confused. The algorithm deemed them a high-risk candidate based on patterns it identified—perhaps from a week of low activity due to illness or work stress. Yet, the applicant receives no explanation and finds no one accountable.
The result is not just frustration but a loss of agency. Individuals become powerless against opaque systems that make life-altering decisions based on data trails they didn’t knowingly consent to share.
This scenario paints a troubling picture of automated systems used without sufficient oversight:
1. Decisions are made based on personal data users didn’t realize was shared.
2. Individuals can’t understand or challenge the logic behind these decisions.
3. There’s often no recourse or appeal mechanism.
4. A person’s digital footprint—likes, steps, posts—can silently shape their destiny.
Without safeguards, AI risks becoming an invisible arbiter of opportunity, denying access to services based on incomplete or context-blind data. This can deepen inequalities and foster discrimination—especially when the data itself reflects historical biases.
India must adopt a rights-based framework for AI, one that aligns with the Supreme Court’s recognition of dignity and privacy as constitutional guarantees, as affirmed in the landmark Puttaswamy judgment. To ensure AI serves, rather than undermines, these values, the following principles must guide regulation:
- Transparency: Individuals should be notified when AI is involved in decisions about them and have clarity on how those conclusions were reached.
- Accountability: Organizations using AI must be answerable for outcomes, ensuring that human oversight exists in every automated decision pipeline.
- Right to Explanation: Every citizen should have the ability to request an explanation and review of significant decisions made by AI, especially in critical areas like finance, healthcare, and employment.
- Bias Audits: Routine checks must be mandated to uncover and correct discriminatory patterns embedded within datasets or decision-making processes (a minimal illustrative sketch follows this list).
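For technically inclined readers, the core of a bias audit can be as simple as comparing outcome rates across demographic groups. The Python sketch below is purely illustrative: the audit log, group labels, and the 0.8 “four-fifths” threshold are assumptions chosen for demonstration, not a standard prescribed by any Indian law or regulator.

# Minimal, illustrative bias audit. All data here is synthetic, and the
# 0.8 "four-fifths" cutoff is a common fairness heuristic, not a legal rule.
from collections import defaultdict

# Hypothetical audit log of (group, approved) outcomes from an automated system.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

approved = defaultdict(int)
total = defaultdict(int)
for group, ok in decisions:
    total[group] += 1
    approved[group] += int(ok)

# Approval rate per group, then the ratio of the worst-off to best-off group.
rates = {g: approved[g] / total[g] for g in total}
ratio = min(rates.values()) / max(rates.values())

print("Approval rates:", rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths heuristic
    print("Potential disparate impact: escalate for human review.")

A real audit would go further, testing intersectional subgroups, accounting for small sample sizes, and documenting remediation, but even this simple ratio shows how hidden disparities can be surfaced and escalated for human review.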
India’s Digital Personal Data Protection Act, 2023 is a step in the right direction. It strengthens citizens’ control over personal data, but AI-specific challenges demand further legislation.
India can draw inspiration from the European Union’s Artificial Intelligence Act, a comprehensive governance framework that emphasizes not only transparency and accountability but also requires high-risk AI systems to undergo risk assessments, conformity checks, and human oversight before deployment in sensitive sectors.
By adopting similar safeguards, India can ensure that no AI system affecting health, financial access, or civil liberties operates unchecked. Such measures reinforce public trust and ensure that technological advancements do not come at the cost of justice or inclusion.
AI is here to stay. It will increasingly shape digital governance, public service delivery, and industrial innovation. But its success should not be measured merely by efficiency or profitability. The true test lies in whether it upholds the values of fairness, dignity, and justice for all citizens.
As India forges ahead with its AI journey, it must prioritize building public confidence in algorithmic systems. This requires not just legal frameworks but a commitment to human-centric design—where the right to know, to appeal, and to be treated fairly remains paramount.
The future of India’s digital transformation depends on building systems that empower people rather than control them. We must never allow the convenience of automation to erode our rights. With thoughtful regulation and a citizen-first approach, India can lead the way in developing AI that is not only smart—but just, transparent, and humane.