The boss no longer wears a suit, holds meetings, or walks past your desk. Today’s boss is an algorithm: unseen, unfeeling, and untiring. It monitors keystrokes, tracks delivery times, rates performance, and even recommends who should stay or go. What was once the domain of human judgment has quietly been transferred to artificial intelligence, coded by distant engineers and shaped by hidden biases.
At first glance, algorithmic management appears to be the logical evolution of a digital economy. It promises efficiency, objectivity, and speed — no moods, no favoritism, no fatigue. But beneath this promise lies a troubling question: can justice exist when the decision-maker has no conscience?
Across India and the world, AI-driven systems are increasingly managing workers in sectors as varied as food delivery, logistics, retail, customer service, and even law. Algorithms allocate shifts, determine pay, and flag “low-performing” workers. They reward compliance, penalize deviation, and make thousands of micro-decisions every second — each of which can affect a human livelihood.
Yet these systems operate in a legal vacuum. Workers rarely know how decisions are made or what data is collected about them. When an app suddenly “deactivates” a gig worker, there is no clear process of appeal, no human contact, no reasoning. A driver or delivery partner who is logged out of the system might as well have been dismissed, but with none of the rights that an employee enjoys.
This is not a futuristic scenario; it is today’s reality. Algorithms, originally meant to optimize work, now define it. And when bias creeps into code, discrimination is automated at scale.
Recent studies in Europe and the United States have revealed that AI-based hiring and evaluation tools frequently replicate existing prejudices. Algorithms trained on historical data learn to favor male candidates, penalize career breaks (often taken by women), or devalue applicants from certain regions or colleges. Bias, once human and visible, has become digital and invisible.
India is not immune. From automated resume screenings to performance tracking dashboards, digital evaluation systems are now common in corporate and service sectors. But the law has not yet caught up. The Information Technology Act, 2000, and its amendments deal with cybercrimes and data misuse, not algorithmic accountability. The Digital Personal Data Protection Act, 2023, focuses on consent and privacy, but remains silent on discrimination arising from automated decisions.
The absence of specific regulation around AI transparency and fairness leaves both employers and employees vulnerable. For workers, there is no legal right to understand or challenge algorithmic decisions. For companies, the lack of clear standards creates reputational and ethical risks that could quickly escalate into litigation.
Globally, legal systems are beginning to act. The European Union’s AI Act, adopted in 2024, classifies workplace AI tools as “high risk,” requiring human oversight, explainability, and independent audits. The UK Information Commissioner’s Office has also issued guidance emphasizing the need for fairness in algorithmic decision-making. The message is clear: technology cannot be exempt from accountability simply because it is complex.
In India, the next step must be the creation of a framework for algorithmic governance. Such a law should ensure three fundamental rights for every digital worker:
- The Right to Explanation – Workers should have the right to know how an AI system makes employment-related decisions.
- The Right to Appeal – Any algorithmic judgment affecting pay, suspension, or termination should be subject to human review.
- The Right to Fair Data Use – Worker data must be collected and used transparently, with explicit consent and strict boundaries on surveillance.
Without these safeguards, we risk creating a silent hierarchy — not of humans, but of machines that make rules without empathy.
Algorithmic management also raises deeper ethical questions. Should an app be allowed to track how fast a delivery partner walks or how long a customer call lasts? Is “productivity scoring” just a modern form of digital control? The law must distinguish between efficiency tools and exploitation tools.
For policymakers, this is not just a matter of privacy but of labour justice in the age of AI. Automation will continue to grow, but it must grow responsibly. Regulators should promote algorithmic audits, transparency reports, and independent grievance mechanisms that bridge the gap between technology and accountability.
Employers, too, have a moral duty. AI should assist, not replace, human fairness. An algorithm may compute performance, but only a human can understand potential.
The coming decade will determine whether AI becomes a partner in progress or a weapon of inequality. And the difference will depend on one thing — whether we insist that even digital decisions must follow the law.
Because justice, no matter how automated the world becomes, must remain profoundly human.