6 Automated Administration: Administrative Law and Algorithmic Decision-Making in India

Divij Joshi

Introduction[1]

With the ubiquity of digital information and computational tools, there has been a concomitant proliferation in the use of computers to analyse information and produce specific outputs on the basis of encoded rules and logics. Such computational tools, which we will refer to as ‘algorithmic systems’, have implications not only for their use in particular domains (like healthcare or policing), but also in their systemic effects on the manner in which knowledge about individuals and societies is parsed and acted upon.[2] In this paper, I focus on automated decision-making in the public sector, a subset of algorithmic systems used within decision-making processes in public administration, either producing knowledge as outputs to be acted upon by human agents, or directly triggering particular actions as an outcome of an algorithmic process.

 

Algorithmic systems are assuming an increasingly prominent role in public administration in India. Decisions ranging from policy formulation and rule-making, to quasi-judicial functions of evaluating specific claims are now delegated, in varying degrees, to computer algorithms which function with some degree of autonomy and without requiring direct human involvement. Algorithmic systems have been used in bureaucratic processes in India since at least the 1980s, when ‘rule-based’ systems were piloted within tax and healthcare administration.[3] Contemporary administrative use of algorithmic systems includes the proliferation of ‘machine learning’ systems, which seek to create their own logics and patterns of understanding based on analysis of vast underlying datasets, in order to optimise for specific outcomes.[4]

 

As the use of algorithmic systems in society has proliferated, a substantial body of literature has engaged with questions about information processing within algorithmic systems and its legal consequences, particularly under public law. Scholars have examined how the move towards data-driven decision-making systems fundamentally impacts concepts of the rule of law and justice, which are at the root of constitutional democracies.[5][6] Scholarship has also dwelled on the impact of algorithmic systems on privacy and data protection law, particularly on the aspect of privacy which preserves individual self-determination and selfhood.[7] A related branch of studies has contended with algorithmic fairness, transparency and accountability, and their implications for legal systems concerned with, for example, the right to information, rights against discrimination and liability for wrongful conduct.[8]

 

Within this broader field of algorithmic studies, there is a specific body of literature which has engaged with the effects of algorithmic systems in government administration. Early engagement with this subject examined the impact of rule-based expert systems within government and the rise of the ‘data processing model of bureaucracy’ on concepts of administrative law, including reasonableness and fairness in administrative decision-making and public participation in policy processes.[9] More recent engagement incorporates concerns relating to developments in big data analysis and machine learning systems, as well as the increasing autonomy attributed to algorithmic decision-making systems, including their impact on administrative discretion and processes of adjudication.[10]

 

Legal scholarship engaging with administrative law and algorithmic systems has mostly been within the United States (“U.S.”) and European contexts. In India, while there has been renewed and multi-disciplinary scholarly attention paid to information systems utilised within government administration, largely as a result of large-scale projects like Aadhaar,[11] legal scholarship as well as judicial and policy attention has approached administrative information processing activities primarily from the lens of informational privacy and data protection. While the lens of privacy and data protection law can and should inform the regulation of algorithmic systems, it is not sufficient to respond to the specific questions that these systems pose within the context of administration and bureaucracy.

 

The use of algorithmic systems for administrative decision-making is a subject which should concern legal and regulatory scholarship for two interrelated reasons. First, the use of algorithmic systems requires deliberating trade-offs between their presumed benefits (for example, reducing costs and increasing efficiency, or curtailing arbitrariness) and perceived harms (for example, increasing opacity and reducing accountability). These trade-offs must be deliberated within the context of specific legal frameworks, including constitutional rights, which place constraints on state action and, consequently, on the deployment of algorithmic systems. Second, algorithmic systems pose questions of normative and institutional change for administrative agencies which must be contended with. Algorithmic systems substantially impact norms of administrative decision-making, ranging from the role of bureaucratic discretion in the application of statutory rules and standards, to the norms governing procedural fairness in formulating administrative policies and decisions – questions which are fundamental to administrative law.

 

There is a long history of administrative law jurisprudence in India, the goal of which is to ensure that administrative action conforms to constitutional principles – including rights against arbitrary state action, administrative and procedural fairness, and equality before the law. This jurisprudence addresses aspects of administrative action ranging from the delegation of legislative powers and administrative rule-making, to public involvement in policy processes, to administrative procurement processes and individual decision-making. Even as algorithmic systems fundamentally alter the characteristics of each of these forms of administrative action, little consideration has been given to legal or regulatory responses to ensure adherence to recognised principles of administrative law.

 

This article seeks to explore how algorithmic systems are impacting the function and role of government administration in India, and what this implies for the areas of law which are concerned with the regulation and governance of administrative decision-making within government agencies – broadly categorised as administrative law. This will also illuminate broader questions about the philosophy of information regulation in India, including how information collection and processing activities within algorithmic systems are mediating the citizen-state relationship.

 

This paper will locate the debates about normative and institutional change brought about by the use of algorithmic systems in the Indian administrative law context. This provides a valuable contribution to the existing literature for two reasons. First, it provides a framework to engage with administrative algorithmic decision-making within the contours of Indian law and jurisprudence. Second, understanding the effects of algorithmic systems on administrative systems in the particular context of India can inform literature on questions of algorithmic fairness, transparency, accountability and ethics more broadly.

 

Part I will review the literature around the operations of algorithmic systems and their implications for important public values. Part II will examine how public agencies in India are utilising automated or algorithmic systems for decision-making. Part III will briefly outline the history and the political economy of the contemporary era of ‘government-by-algorithm’, and review jurisprudence and literature on its implications for the law of public administration. Part IV will examine the implications of automated decision-making on administrative legal principles under Indian law.

 

Fairness, Accountability and Transparency in Algorithmic Decision-Making 

Public administration today is increasingly characterised by the use of computational and digital systems to integrate and analyse information or data through algorithmic logics. In particular, there is a rise in the use of so-called ‘Artificial Intelligence’ (“AI”) and ‘Big Data’ technologies, propelled by the use of Machine Learning (“ML”) systems, which utilise statistical methods to draw inferences from large sets of data, or to optimise mathematical functions in order to make ‘predictions’ for future instances of data. This section briefly examines how algorithmic systems impact upon values of fairness, transparency and accountability, which are normative values upheld by administrative law and regulation, as well as values that (nominally) motivate government administration at large.

 

The term algorithm describes a series of steps through which particular inputs can be turned into outputs.[12] An algorithmic system is a system which uses one or more algorithms, usually as part of a software system, to produce outputs which may be used for making decisions. Algorithmic systems are characterised not only by the underlying technologies used to compute information, but equally by the social, cultural, legal and institutional contexts in which algorithms are embedded, which are crucial determinants of how these systems are used and governed.[13] These algorithmic systems, and their implications for public administration and legal and constitutional rights, are the socio-technical systems that this paper focusses on.
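
To ground this definition, the following is a minimal sketch, in Python, of an algorithm in this sense: a fixed series of steps turning inputs (here, an applicant’s age and income) into an output (a decision). The rules and thresholds are entirely invented for illustration, but rule-based administrative systems of the kind piloted in India since the 1980s encode logics of broadly this shape.

```python
# Hypothetical rule-based algorithm: a fixed series of steps turning
# inputs (an applicant's record) into an output (a decision).
# Rules and thresholds are invented; not drawn from any actual system.

def assess_benefit_eligibility(age: int, annual_income: float) -> str:
    """Encodes a simple administrative rule as explicit, inspectable steps."""
    if age >= 60 and annual_income < 100_000:
        return "eligible"
    if age >= 60:
        return "ineligible: income above threshold"
    return "ineligible: below qualifying age"

print(assess_benefit_eligibility(age=65, annual_income=80_000))  # eligible
```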

 

The proliferation of these systems in a number of socially consequential areas, such as policing, education, finance and healthcare, both within and outside government, has spurred substantial debate on their implications for important public values, centred largely around the transparency, fairness and accountability of these systems. This framing, while not exhaustive of the range of implications posed by the widespread use of automated decision-making systems and algorithmic technologies, emphasises how algorithmic decision-making systems challenge important assumptions and expectations about consequential decisions concerning people: how transparent the process by which a decision is made is, whether such a decision is ‘fair’, and who should be accountable for it.[14] Each of these concepts is highly contested, highly context-specific, and escapes universal definition, yet together they broadly describe the anxieties that algorithmic decision-making has given rise to in various contexts, which are relevant for our study.

 

Transparency, in the context of algorithmic decision-making, may broadly be described as “a system of observing and knowing that promises a form of control”.[15] Transparency is instrumental in understanding a decision and demanding accountability for it. Algorithmic decision-making gives rise to challenges of transparency owing partly to the intrinsic technological inscrutability of some novel forms of algorithmic systems – such as complex machine learning systems utilising data with a high number of characteristics,[16] or systems which compute data in a manner unintelligible to the audience demanding transparency.[17] However, transparency is also a function of how these systems are integrated into and engage with existing social, institutional or organisational contexts.[18] For example, a major factor inhibiting the transparency of algorithmic systems used in the public sector is the reluctance of governments or private contractors to reveal trade-sensitive information.[19]

 

Fairness, in the context of algorithmic decision-making, is implicated both in the manner in which decisions are made and in their effects on particular individuals or groups, concerning both the intrinsic quality of a decision-making process and the broader distributive implications of the decisions made.[20] Several studies of algorithmic systems used in different social contexts have shown how the impacts of these systems are distributed in ways that are considered ‘unfair’, indicating statistical bias based on particular characteristics like class, race or caste (which are often characteristics legally protected against discrimination).[21] Bias or discrimination can arise from a number of elements in the decision-making process, including (1) the kinds of historical data that a Machine Learning algorithm might take into account, which may include protected characteristics; (2) how the data is processed and whether the processing itself produces (statistically) biased or arbitrary results; or (3) whether the context in which a decision-making system is used is consistently biased towards a particular group.[22] Owing to the scale at which algorithmic systems are often used, implicit or explicit biases in algorithmic decision-making can lead to systematic discrimination at socially consequential scales.[23]
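
A minimal sketch can illustrate source (1) above. The data and the deliberately naive ‘model’ below are invented for illustration; the point is only that decision rules learned from historical records correlated with a protected (or proxy) characteristic will reproduce that correlation for future applicants.

```python
# Hypothetical illustration of bias source (1): historical decisions
# correlated with a protected attribute are learned and reproduced.
# All data and logic are invented for illustration.

historical_cases = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": True},  {"group": "A", "approved": False},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
    {"group": "B", "approved": True},  {"group": "B", "approved": False},
]

def learn_rates(cases):
    """A naive 'model' that learns each group's historical approval rate."""
    rates = {}
    for g in {c["group"] for c in cases}:
        members = [c for c in cases if c["group"] == g]
        rates[g] = sum(c["approved"] for c in members) / len(members)
    return rates

# New applicants from group B inherit the historically lower approval
# rate, even if group membership is irrelevant to their individual merits.
print(learn_rates(historical_cases))  # e.g. {'A': 0.75, 'B': 0.25}
```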

 

Accountability in the context of algorithmic decision-making refers to the ability of the various actors involved in producing a decision through an algorithmic system to be held to account for such decisions, including “the obligation to explain and justify their use, design, and/or decisions of/concerning the system and the subsequent effects of that conduct.”[24] Algorithmic systems within governments are often complex assemblages of data, computational techniques and varying institutional or organisational contexts, involving different actors responsible for different elements of the system (for example, the developer of the software, the agency responsible for procuring the system, the agency responsible for using it, etc.).[25] This complexity makes it difficult to attribute responsibility for the ultimate decision taken through the use or aid of an algorithmic system to a single actor or organisation, in many cases undermining effective accountability.[26]

 

The admittedly broad values of fairness, accountability and transparency offer but one frame of analysis for the consequences of algorithmic systems on public values. Algorithmic systems also portend structural effects on, for example, democratic participation and human agency, and their impacts may usefully be analysed through a number of normative lenses or frameworks. However, this framing is particularly useful in the context of the aims of this chapter – to highlight the impact of algorithmic systems on public administration and the values, norms and laws that guide or govern it.

 

Algorithmic Administrative Decision-Making in India 

The use of algorithmic systems and logics for decision-making is hardly a novel phenomenon. Information systems have long played a part in public administration, even within jurisdictions like India which have seen relatively delayed adoption of computers and digital technologies. Historically, digital systems were implemented in order to automate routine and clerical tasks of administration.[27] Although there is some evidence of the use of more complex systems, such as knowledge-based expert systems (an early form of ‘artificial intelligence’ which relied on programming syntactic rules to aid in tasks like legal interpretation and analysis), it is only in the past two decades that the implementation of digital systems within public administration has emerged as a transformative phenomenon in India. Despite the highly fragmented nature of digital technology use in India, governments at both the Central and the State level have been eagerly adopting these technologies in order to augment and supplant their decision-making capabilities.

 

In this part, we use three case studies to examine how algorithmic technologies intersect with administrative decision-making processes at different stages, and explore their implications for the principles of administrative law discussed in the parts that follow.

 

1. Tax Assessment and Case Allocation under the Income Tax Act

In 2019, the Government of India introduced a scheme to replace the manual assessment of income tax returns selected for additional scrutiny with an automated system known as the Faceless Assessment Scheme (“FAS”). In 2020, the Indian Parliament amended certain provisions of the Income Tax Act (“Tax Amendment Act”)[28] to incorporate the FAS, which, inter alia, includes provisions for an ‘automated allocation tool’ and an ‘automated examination tool’, defined as algorithmic systems for the randomised allocation of cases and the standardised assessment of draft orders, respectively.

 

As per the Tax Amendment Act,

 

“‘Automated allocation tool’ means an algorithm for randomised allocation of cases, by using suitable technological tools, including artificial intelligence and machine learning, with a view to optimise the use of resources”;[29] and

 

“‘Automated examination tool’ means an algorithm for standardised examination of draft orders, by using suitable technological tools, including artificial intelligence and machine learning, with a view to reduce the scope of discretion”.[30]

 

Under these provisions of the Tax Amendment Act, decisions about the ‘randomised allocation’ of tax assessments, as well as the examination of draft assessment orders, are to be automated through suitable technological tools, including “artificial intelligence and machine learning”, in order to optimise resources and reduce discretion, respectively (echoing the standard justifications for automating administrative decisions noted earlier).

 

Automation enters the tax assessment system at two points. The Automated Allocation Tool is used by the tax authorities to identify specific cases for tax assessment, and to allocate the scrutiny of tax returns to a specific regional assessment centre, ostensibly to reduce bias and increase transparency in the selection and allotment of cases for further scrutiny. After the initial assessment, a draft assessment order is prepared by the authority, which is then analysed by the Automated Examination Tool, an algorithmic system, and the taxpayer is intimated of the final assessment.
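
Since the FAS tools themselves have not been made public (as noted below), no authoritative reconstruction is possible. Purely as a hypothetical sketch, a ‘randomised allocation’ of cases to regional assessment units might look something like the following, with all names, case identifiers and logic assumed for illustration:

```python
# Hypothetical sketch of 'randomised allocation' of cases to assessment
# units. The actual FAS tool is not public; unit names, case IDs and
# logic here are assumptions for illustration only.
import random

assessment_units = ["Unit-North", "Unit-South", "Unit-East", "Unit-West"]
cases = ["CASE-001", "CASE-002", "CASE-003", "CASE-004", "CASE-005"]

def allocate(cases, units, seed=None):
    """Randomly assign each case to an assessment unit."""
    rng = random.Random(seed)  # a fixed seed makes the draw reproducible
    return {case: rng.choice(units) for case in cases}

print(allocate(cases, assessment_units, seed=42))
```

Even in this trivial form, the legally salient choices – which cases enter the pool, and whether the randomisation is seeded and auditable – sit inside the code rather than in any published rule.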

 

The details of the algorithms used, the statistical techniques applied, and the data on which the Machine Learning system is supposed to work have not been made publicly available, and the considerations that an algorithmic system for allocation or examination must take into account are not specified in the primary legislation (the Income Tax Act) or in the rules made by the tax administrative authority (the Central Board of Direct Taxes).

 

In addition to automating and augmenting the manual allocation and assessment of draft orders, the FAS also resulted in assessments being conducted without providing a hearing to affected persons. Consequently, a number of challenges were raised before various High Courts arguing that assessments were finalised without granting the affected taxpayer a personal hearing before the adjudicating officers.[31]

 

2. Voter Roll ‘Deduplication’ by the Election Commission of India

Recent exercises undertaken by the Election Commission of India (“ECI”) to ‘clean’ voter rolls through digital deduplication algorithms are another important example of algorithmic decision-making disturbing individual rights in novel ways.

 

In 2015, the ECI launched the National Electoral Roll Purification and Authentication Programme (“NERPAP”) with the objective of “bringing a totally error-free and authenticated electoral roll”, by linking electoral databases with India’s national biometric resident database – UID or Aadhaar. The ‘linking’ of databases was implemented through a computer software programme which was used to algorithmically ‘deduplicate’ – i.e., remove multiple copies of the same data from a database – voter lists, ostensibly in order to ensure that there is no voter fraud arising from the possession of multiple voter ID cards. This was achieved by comparing Aadhaar data – deemed to be a unique reference – with the demographic details of individuals enrolled on voter lists. Ostensibly, if the Aadhaar data mapped to more than one voter record, the record would be deemed a ‘duplicate’ and removed from the voter rolls.[32]
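
The ECI’s actual software and matching rules are not public; the following is a simplified, hypothetical sketch of the deduplication logic described above, flagging every voter record beyond the first that maps to the same Aadhaar number:

```python
# Simplified, hypothetical sketch of the deduplication logic described
# above: if one Aadhaar number maps to more than one voter record, the
# extra records are flagged as 'duplicates'. The ECI's actual software
# and matching rules are not public; all data here is invented.
from collections import defaultdict

voter_rolls = [
    {"voter_id": "V001", "aadhaar": "1111-2222-3333"},
    {"voter_id": "V002", "aadhaar": "4444-5555-6666"},
    {"voter_id": "V003", "aadhaar": "1111-2222-3333"},  # same Aadhaar as V001
]

def flag_duplicates(records):
    by_aadhaar = defaultdict(list)
    for rec in records:
        by_aadhaar[rec["aadhaar"]].append(rec["voter_id"])
    # Every record beyond the first for an Aadhaar number is flagged.
    return [ids[1:] for ids in by_aadhaar.values() if len(ids) > 1]

print(flag_duplicates(voter_rolls))  # [['V003']]
```

On this logic, any false match upstream – for instance, an error in linking Aadhaar data to a voter’s demographic details – translates directly into a deletion from the rolls, which is precisely where the procedural concerns discussed below arise.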

 

The NERPAP process was trialled across a number of jurisdictions, most prominently perhaps in Telangana, where 30,00,000 (three million) people were reportedly removed from the voter rolls without following the established procedure, thereby preventing them from participating in the state elections.[33]

 

A challenge to the NERPAP Scheme and the use of software to automate voter deduplication was filed before the Telangana High Court, claiming, among other things, that the ECI deployed an “algorithm … which is neither transparent nor public, to carry out its statutory and constitutional duty of preparing and maintaining the voter rolls in India generally and Andhra Pradesh and Telangana in particular, which led to the deletion of almost 27 lakh voters in Telangana and 19 lakh voters in Andhra Pradesh in violation of the procedure established by law and declared by the Supreme Court of India”.[34]

 

As with the case of the tax administration, the claims made before the High Court in the case of the NERPAP automation of voter deduplication relate to the opacity of the software and logic employed, as well as the lack of due process followed when making a decision that disturbed the rights of affected persons.

 

3. Fraud Analytics in Healthcare Administration

In 2018, the Government of India launched a national public health insurance scheme termed the Pradhan Mantri Jan Arogya Yojna (“PMJAY”), which, among other things, aims to provide health insurance coverage to poor households. Over the course of implementing the scheme, the Government of India has entered into various partnerships with private firms for fraud detection and the analysis of transactions and claims made through the scheme.[35]

 

According to public documentation about the scheme released by the National Health Authority, a ‘Fraud Analytics Control and Tracking System’ (“FACTS”) has been implemented, which will ostensibly use Artificial Intelligence and Machine Learning to “identify suspect transactions & entities. Using advanced tools such as Natural Language Processing and Optical Character Recognition and Image Analytics, unstructured data such as images, documents and clinical notes submitted are analysed to detect cases of potential fraud and abuse”.[36] As per guidelines for the scheme, a finding of prima facie fraud by the algorithm can trigger an investigation which can result in the rejection of an insurance claim as well as further disciplinary action against the identified entity.

 

As with the above cases of using automation in administrative decisions, the algorithmic system utilised for identifying and making the initial decision about ‘fraudulent claims’ is not made public, nor is there information about the basis on which it operates, apart from the fact that it is based on Machine Learning techniques.

 

The algorithmic technique that the FACTS system reportedly uses, known as Machine Learning, or ML, is based on analysing large datasets to find patterns in the data, and imposing that logic or pattern on future instances of data. As we will discuss in the next section, apart from the general concerns posed by automated decision-making, ML introduces distinct challenges for reviewing the propriety of administrative action through the lens of administrative law.
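
As a concrete illustration of this pattern-finding approach, the sketch below trains a simple classifier on hypothetical historical claims labelled fraudulent or genuine, and then imposes the learned pattern on a new claim. It assumes the scikit-learn library; the features, data and model choice are invented and are not drawn from the actual FACTS system:

```python
# Hypothetical sketch of the ML pattern described above: learn patterns
# from historical claims labelled fraudulent/genuine, then impose that
# pattern on new claims. FACTS' actual features, models and data are not
# public; everything here is an invented illustration.
from sklearn.linear_model import LogisticRegression

# Features per claim: [claim_amount_in_thousands, claims_filed_this_month]
X_train = [[20, 1], [15, 1], [90, 6], [85, 7], [30, 2], [95, 5]]
y_train = [0, 0, 1, 1, 0, 1]  # 1 = historically labelled 'fraudulent'

model = LogisticRegression().fit(X_train, y_train)

# The learned pattern is now imposed on a future instance of data:
new_claim = [[88, 6]]
print(model.predict(new_claim))        # e.g. array([1]) -> flagged as suspect
print(model.predict_proba(new_claim))  # class probabilities for the claim
```

Notably, the ‘flag’ such a model produces is a statistical association with past labels, not a finding of fact about the new claim – a gap that bears directly on the administrative law analysis that follows.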

 

In the subsequent section, we explore how these legal-ethical considerations around fairness, accountability and transparency have emerged specifically in the context of public administration, and briefly review the jurisprudence and literature pertaining to algorithmic decision-making and public administrative law.

 

Public Administration in the Age of Automation 

The emerging centrality of information technologies and automated decision-making systems within public administration is as much a phenomenon of organisational change in government, and of wider political and economic trends, as of technological change.[37] Scholars of public administration have theorised how these technological transformations fundamentally alter the context within which policy choices are made and public administration takes place. In particular, scholars have noted how contemporary public administration around the world, including in India, has been characterised by ‘New Public Management’, or NPM, a ‘market-based’ model of governance emphasising efficiency, innovation and service-delivery, in turn encouraging deregulation, public-private partnerships, and the technification of government administration.[38] As Margetts and Dunleavy note, principles of NPM laid the foundation for the contemporary technification and digitisation of public administration, leading to what they identify as ‘Digital Era Governance’, which places the use and integration of previously siloed government information systems at the very heart of public administration, driving transformations in the organisation and culture of public administration at large by influencing public sector values and changing the role of judgement and discretion which are at the heart of administrative decisions.[39]

 

Cuéllar, similarly, argues that algorithmic systems are bringing about complex and subtle organisational changes within the administrative state, with the increasing adoption of opaque data-modelling and data science techniques in administrative decision-making requiring specific trade-offs between optimising social welfare and ‘political pragmatism and procedural constraints’, restructuring administrative functions and organisation in the process.[40]

 

The emergence of these technologies as crucial elements in the administrative establishment of the state has attracted some degree of interest from courts, regulators as well as within legal scholarship attempting to explain and account for the implications of algorithmic technologies for public administration and the citizen-state relationship. Before turning to the analysis of algorithmic decision-making in the context of Indian administrative law, it is useful to examine how this interaction has been analysed in some common law jurisdictions.

 

Scholars of public law in the U.S. have written about the potential implications of computerisation and digitisation on administrative procedure since the early 1990s. Schwartz’s germinal paper on data processing and government administration notes how bureaucracy in the U.S. was transforming into an ‘information processing’ system, and examines its implications for ‘bureaucratic justice’ – the accuracy, efficiency and dignity of the participant in an administrative process – particularly owing to the non-transparent nature of reliance upon computer operations. Schwartz argues for building in procedural safeguards through data protection regulation, as well as an independent oversight mechanism for such decision-making within public administration.[41]

 

Danielle Citron has also argued for revamping procedural rights in the U.S. context. Citron argues that computerised decision-making nullifies distinctions between administrative rule-making and adjudication functions, without providing the adequate safeguards offered by administrative law for either function. Administrative decision-making usually assumes procedural safeguards such as notice and hearing mechanisms in the case of individualised adjudications, or notice and comment, and more generally public transparency and participation mechanisms, for rule-making and delegated legislative functions. Citron argues, however, that contemporary algorithmic and data-driven systems combine rule-making and adjudication functions in ways that obscure the specific procedural protections of each, resulting in a procedural void as far as administrative law and regulation is concerned. This is particularly true in cases where ‘data-based decisions’ lead both to the creation by computer systems of new rules by which to process individual cases, and to the application of those rules to particular cases.[42]

 

Deirdre Mulligan and Kenneth Bamberger have also argued for the application of administrative law protections to administrative decision-making which involves algorithmic systems. Their analysis is particularly important for taking into account the organisational and institutional context of the modern administrative state, where software for administrative functions is often outsourced to private actors, thereby also outsourcing the policymaking functions that algorithmic systems displace. They argue that such ‘policy-by-procurement’ should be restructured to incorporate specific rules of administrative accountability including public input and expert deliberation into algorithmic processes.

 

The legal implications of automated decision-making systems were also considered by the Australian Administrative Review Council (“ARC”) as far back as 2004, when it provided specific guidance for administrative agencies to consider the legality of the use of automated decision systems, in line with the administrative law values of ‘lawfulness, fairness, rationality, openness (or transparency) and efficiency’.[43] The ARC guidance notes that the administrative law principles governing the legality of administrative decisions, the use of discretion and natural justice are at stake when deciding whether to use or rely upon automated ‘expert systems’ which make decisions or aid human decision-making.[44]

 

In the United Kingdom, legal scholars have scrutinised specific executive actions in the administrative law context, arguing for greater scrutiny through judicial review as well as for re-framing administrative law principles in light of automated decision-making. Marion Oswald examines, in particular, the impact of machine learning and so-called ‘predictive’ tools on administrative decisions. She argues that automated decision-making changes the nature and meaning of administrative agencies’ duties to ‘give reasons’ for decisions, as well as the standard of ‘relevance’ of facts and the reasonableness of executive decision-making.[45] Similarly, Jennifer Cobbe analyses how the (largely uncodified) principles of English administrative law might apply to a range of automated decision-making systems in particular contexts. Cobbe draws on data protection regulation and standards to argue that machine learning tools, in particular, might fall foul of certain principles, including the duty to provide adequate legal justifications for certain decisions, the duty of a delegate not to fetter the discretionary power granted to them, and the requirement to consider only relevant facts in administrative adjudications.[46]

 

Courts in most common law jurisdictions have not had much opportunity to consider the specific legal implications of administrative use of algorithmic systems.[47] A notable exception is the algorithmic system at issue in State v Loomis, before the Wisconsin Supreme Court. There, the use of an algorithmic risk assessment system known as COMPAS to inform sentencing decisions was challenged as being contrary to due process requirements. However, the court noted that the algorithmic system’s outputs were not making individualised adjudications in a manner sufficient to attract the due process requirement under U.S. law, and proceeded to allow its use on the grounds, among others, that the sentencing court was not relying upon the COMPAS system, but was merely considering it. The distinction between the two was not clearly articulated – an issue we will discuss later – and the decision has subsequently been criticised for failing to take into account due process requirements in algorithmic decision-making.[48]

 

Administrative Law in India and Automated Decision-Making 

Administrative law in India constitutes a largely uncodified field based to a large extent on principles of constitutional law and the bill of fundamental rights in Part III of the Constitution of India.[49] Legal review of administrative action is based on a mix of reinterpreted English common law principles and analysis of constitutional principles under Article 14 of the Constitution, which establishes the right to equality, encompassing, among other things, the concept of reasonableness of administrative action.

 

Broadly, the grounds for legal review (and the permissible limits of administrative action) were laid down in the Supreme Court’s judgement in Tata Cellular v Union of India,[50] where the court noted that there are three broad grounds for challenging administrative action, namely: illegality of the action – exceeding or not giving effect to the statutory or legal provision from which a decision-maker derives power; irrationality or unreasonableness, which governs the exercise of discretionary power; and procedural impropriety, more broadly framed as the rules of natural justice.

 

In this section, we will examine how these rules of judicial review, or regulations and limitations on administrative action might apply in the context of administrative use of automated decision-making systems outlined in the case studies above.

 

Rules of Discretion  

A central concern of administrative law and regulation is the control over discretionary executive action. Effective administration is only possible by providing a large degree of discretionary power to execute legislative policy, and in particular, as Upendra Baxi argues, ‘discretion is a tool for the individualisation of justice’ allowing for the operation of a socio-economic welfare state like India.[51] Administrative law is therefore concerned with balancing the imperative of delegating discretionary power to the executive with concerns around its appropriate and rights-conforming use.

 

Improper Delegation of Discretionary Power  

When examining delegated discretionary power, courts must first assess whether the delegation itself is legal. Judicial review of legislative action here examines whether the power conferred on the executive has been ‘properly’ delegated – namely, whether it falls within the constitutional bounds of legislative delegation, assessed primarily against the trinity of rights under Articles 14, 19 and 21. In a number of cases, the Supreme Court of India has held that statutes that are so vague as to provide no guidance to those enforcing the law, and no safeguard against its arbitrary exercise, must be struck down as discriminatory.[52]

 

If we apply this standard of scrutiny to the language of the Tax Amendment Act, it could be argued that the statute confers broad discretionary power to utilise automated tools for the allotment and examination of tax assessments. In defining the mechanism to be utilised for such assessments, the statute provides no guidance on how, or on what principles, such assessment should take place, apart from requiring ‘suitable technological tools including Artificial Intelligence and Machine Learning’. As explained previously, the scope of these words, and of the technologies they incorporate, is incredibly broad, constituting a wide range of activities, processes and technologies. Machine Learning, for example, may be utilised to incorporate any number of factors, relevant or not, on the basis of which a tax return could be allotted or examined. Delegating administrative power on the basis of the use of these particular technologies for automated assessment may, therefore, fall foul of the standard against vagueness of a statute, since the statute provides no guidance for the executive on how such technologies may be utilised and what factors they might consider in coming to decisions that affect the rights of legal persons, nor does it provide procedural safeguards to ensure against the arbitrary exercise of such power.[53] This example may usefully be extended to other areas where delegated power is sought to be conferred through the use of technological tools for automated decision-making. A requirement, for example, that executive authorities examine illegal speech on online platforms through the use of ‘machine learning tools’ or ‘automated tools’, without laying down the criteria on which such analysis must be based, would also likely fall foul of the rule against vagueness.[54]

 

Improper Exercise of Delegated Power 

The exercise of delegated power must also conform to certain legal principles. When the law confers discretionary power on an administrative authority, the authority must ensure (1) that the discretion is not abandoned or fettered; and (2) that the discretion is exercised ‘properly’.[55]

 

The rule against fettering discretion implies that when discretion is conferred on an authority, the authority must itself exercise that discretion, and must not sub-delegate its powers (without legal authority), place the power to take a decision in another body, blindly follow the dictation of a third party, or follow a procedure in exercising discretion whereby it is unable to take into account the merits and circumstances of a particular case.[56] The rule is particularly relevant when considering how human agents and automated decision-making systems interact, and the contexts in which administrative decisions are formally ‘assisted’ by automated systems. As indicated above, even the most complex algorithmic system is incapable of exercising its own discretion. Algorithmic systems are by definition bound by specific rules (although the rule-base of certain contemporary systems may constantly evolve or be incredibly vast).[57] As such, the wholesale exercise of an administrative power by an algorithmic system – that is, where an algorithmic system directly makes and effects an administrative decision – would be a clear violation of the rule against fettering discretion.

 

However, in most cases there is (at least formally) a human agent making a ‘final decision’, usually ‘assisted’ by an automated system. Consider, for example, the case of the NERPAP algorithm. A simple calculation of the numbers involved in the voter removal exercise and its timeline makes it apparent that human decision-makers could not have applied their discretion in any meaningful manner. It is more likely that they merely proceeded on the basis of the ‘decision’ provided to them by the software used ostensibly for deduplication, without applying their own discretion. This is commonly referred to as ‘automation bias’ in the literature studying the interaction between human agents and computer systems – namely, where, for multiple reasons, a decision-maker chooses to rely on an automated system instead of considering countervailing evidence or exercising their own discretion.[58] Automation bias is merely one example of the ways in which complex algorithmic systems interact with human agents and oversight. However, it indicates that the exercise of discretion by administrative authorities is substantially challenged by the use of automated systems, and that the mere fact that the final decision is made by a human being should not shield it from scrutiny as to whether it was made without application of mind or in violation of the rule against fettering discretion.

 

The proper exercise of discretionary power concerns the manner in which discretion is exercised, and the factors that any administrative decision must take into account. Needless to say, administrative action can always be reviewed on grounds of unconstitutionality or violation of fundamental rights. However, for the purpose of this paper, we will examine the tenets of administrative law relating to the procedure, and not the effect, of administrative decision-making. The popular formulation of administrative propriety in decision-making under English common law is the Wednesbury test for reasonableness or ‘irrationality’ of decision-making, which has been imported into the jurisprudence of the Indian High Courts and the Supreme Court.[59] The standard of rationality applied in judicial review is that the decision must not be ‘in outrageous defiance of logic or moral standards’, must not take into account irrelevant or extraneous factors, and must not fail to take into account relevant facts.[60]

 

The standards of relevance and rationality of a decision are clearly implicated in the process of automated decision-making. The relevance of the material facts taken into consideration is implicated particularly in automated systems that incorporate large datasets in order to find patterns and establish links between underlying data and a specified outcome. Consider the example of the FACTS fraud analytics system. Hypothetically, the algorithm by which the data analytics system decides whether a hospital or a beneficiary is ‘fraudulent’ may take into account a number of factors, including transactional information about health purchases, but also factors such as social media behaviour, consumer consumption data, etc.[61] The former is arguably relevant to a determination of fraud, while the latter is likely to have little to no bearing on whether a person commits fraud in this particular scheme. As such, if courts were to examine the facts on which such a system made decisions, which were subsequently relied upon by administrative agencies, they may find that the decisions do not satisfy the doctrine of reasonableness. Similarly, algorithmic systems may incorporate logical rules which do not satisfy the reasonableness or rationality criterion. In particular, algorithmic systems which are based on drawing inferences between categories of information are intended to optimise particular functions without consideration of any underlying logic. In doing so, they can not only reproduce historically prejudiced action, but also confuse correlation with causation and establish rules of decision-making which are wholly illogical or arbitrary. For example, to revisit the FACTS system once again, the algorithm may establish a rule (based on available statistical information) that persons who suffer from particular disabilities are more likely to commit fraud. Where the law requires that particular facts be taken into account, that irrelevant factors not be taken into account, or that the logic of decision-making adhere to certain normative standards, various kinds of ‘data-based’ analytics which process large volumes of diverse information may be implicated.

 

Rules of Adjudication and Principles of Natural Justice 

A third, and particularly important, consideration in administrative decision-making is procedural propriety. While adherence to specific procedural norms runs across administrative decision-making, it is particularly important when an authority acts in a ‘quasi-judicial’ role, namely, where it must make a determination on facts and the application of standards or rules which can prejudice the rights of an individual or a group.[62]

 

Where an administrative action prejudicially affects the rights of a person, the principles of natural justice are applicable to such a decision. Broadly, these principles may be classified as – (1) the rule against bias, and (2) the right to a fair hearing.[63]

 

The rule against bias requires that, where a fair adjudication of facts is required, the issue should not be prejudged or pre-determined by biases that might arise in various contexts. Bias generally depends upon the individual circumstances of a case, concerning the decision-making body or institutional context and their pre-conceived notions. The standard for determining bias is whether a “reasonable man, in possession of relevant information, would have thought that bias was likely and whether the authority concerned was likely to be disposed to decide the matter in a particular way.” Bias therefore need not be proven as fact; a reasonable likelihood of bias is sufficient ground to challenge a decision.[64] The rule against bias has generally operated where the decision-maker has a personal or pecuniary interest, but its broader formulation cautions against situations in which decisions cannot be taken objectively. As noted above, algorithmic systems exhibit bias and discrimination in many ways, which could systematically preclude an objective assessment in certain contexts. For example, a system that takes into account historical information may inherit historical biases on the basis of caste, class, gender or sexuality, or their proxies, which are then used as part of the decision-making matrix. In such cases, decisions relying upon such systems may be substantively discriminatory and violative of Articles 14, 15 or 16, but could also give rise to a reasonable likelihood of bias that violates the procedural norms of natural justice.

 

It is unclear what a judicial analysis of the rule against bias might look like in the context of algorithmic decision-making. While algorithmic systems have been shown to exhibit discriminatory ‘biases’ – bias in the training data, statistical biases of the model used, or bias in the choice of application – it might prove difficult for an affected party to challenge a decision on the basis that it violates the rule against bias without sufficient material on which to make such a claim.[65] The burden of proof to show that there is a ‘real likelihood of bias’ normally falls on the affected person or the person making the claim. Under the present conditions of non-transparency about how decisions utilising algorithmic systems are made, however, it might prove challenging to sustain such a claim.

 

The right to a fair hearing encompasses a number of principles that ensure that a person suffering the consequences of an administrative adjudication has the ability to present their case and change the outcome of a decision.[66] This rule, often captured in the phrase audi alteram partem, or ‘hear the other side’, requires an administrative authority to satisfy a number of procedural conditions in coming to a decision. Broadly, these include the requirement to provide notice that a hearing will take place, the right of the affected person to know the evidence used against them, including a right to inspect the evidence available before the authority, and the right to present evidence and cross-examine the evidence presented against them. In some cases, there is also a duty to provide reasons for coming to a particular decision (although there is no general duty to provide reasons), linking the materially relevant facts with the final decision.[67] As per the Supreme Court, the rationale for providing reasons is linked to the transparency of the decision-making process for the affected persons, as well as to the purpose of judicial or appellate review.

 

It is apparent from the case studies discussed previously that the use of automated decision-making systems challenges many aspects of natural justice as laid down by the Supreme Court. In particular, challenges arise when decision-making processes are unable to provide sufficient justification or rationale for a decision, and are unable to consider any additional or extenuating evidence presented by parties to the decision. As we noted previously, the outputs of an algorithmic system are often inscrutable or opaque for a number of reasons, including the nature of the mathematical operations involved or the confidentiality of the algorithmic system or the data. This implies that the duty to provide reasons cannot always be suitably satisfied in cases where automated systems make or assist in making decisions. In each of the examples above, the algorithmic systems used have not been made transparent to affected persons in any meaningful way. It is unclear what data is used in each system, or what logical process is followed by the algorithm in order to arrive at a conclusion. Similarly, the system itself is unable to consider additional evidence in its decision-making process. In the cases of the FAS and the NERPAP, it has also been alleged in court proceedings that personal hearings were dispensed with, owing to reliance on the automated system for expediency, further implying that many of these decisions may fall foul of important conditions that the principles of natural justice require to be satisfied.

 

Conclusion

This paper has argued that algorithmic systems – assemblages of computational and data-based tools – are being used in the context of public sector administrative decision-making in India in a manner that implicates important norms that regulate administrative conduct. These include norms that place limits on the delegation of power to the executive branch, as well as norms about how administrative power should be exercised in order to protect certain important constitutionally guaranteed rights, including non-discrimination and equality, as well as the concept of ‘natural justice’, also read into constitutional guarantees.

 

New digital technologies, particularly computational and data-based systems, are likely to remain mainstays of government administration, offering improvements in administrative efficiency and certainty. In the process, digital technologies are also systematically changing norms and values of the public sector. This raises an important question about the evolution of legal systems in conjunction with these changes in administrative procedure. In particular, administrative law faces distinct challenges – how should the law balance the values which potentially conflict with the use of automated decision-making systems? Should bureaucratic efficiency be provided greater leeway as against individualised adjudication and procedural justice? Should the scope of administrative discretion be expanded, as large-scale information systems allow for a greater role of the administrative state?

 

I have argued that the manner in which the Indian state is uncritically deploying and relying upon algorithmic systems in administration today requires us to urgently address these questions, particularly in asking whether this use comports with the established legal norms and principles that guide and regulate administrative conduct. A bare assessment of a sample of deployed algorithmic systems indicates that they do not fulfil important criteria on the basis of which we judge the legality and constitutionality of administrative decision-making – they ignore established limits on the delegation of power, occlude protections of transparency and accountability in the manner in which administrative discretion is exercised, and override procedural protections which form the basis for the delivery of individualised justice in administrative proceedings. There is an urgent need for a legal response that understands the implications of these technologies. Considering the largely uncodified basis of Indian administrative law and its roots in the Indian Constitution, it is likely that such a response would need to come from the higher courts in India, which must re-assert the application of administrative law principles in scrutinising administrative conduct guided by automated decision-making systems.

 


  1. PhD Candidate, Faculty of Laws, University College London. The author would like to thank Kruthika R. for her inputs and discussions which are invaluable to this paper.
  2. Tarleton Gillespie, ‘The Relevance of Algorithms’ in Tarleton Gillespie, Pablo J Boczkowski and Kirsten A Foot (eds), Media Technologies (The MIT Press 2014) <http://mitpress.universitypressscholarship.com/view/10.7551/mitpress/9780262525374.001.0001/upso-9780262525374-chapter-9> accessed 29 July 2020.
  3. Patrick Saint-Dizier, ‘The Knowledge-Based Computer System Development Program of India: A Review’ (1991) 12 AI Magazine 33.
  4. Michael Veale and Irina Brass, ‘Administration by Algorithm?: Public Management Meets Public Sector Machine Learning’, Algorithmic Regulation (Oxford University Press 2019) <https://oxford.universitypressscholarship.com/10.1093/oso/9780198838494.001.0001/oso-9780198838494-chapter-6>
  5. Mireille Hildebrandt, Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology (Paperback edition, EE Edward Elgar Publishing 2016).
  6. Danielle Keats Citron, ‘Technological Due Process’ (2007–2008) 85 Washington University Law Review 1249.
  7. Helen Nissenbaum, Privacy in Context: Technology, Policy, and the Integrity of Social Life (Stanford University Press 2009); Mireille Hildebrandt, ‘Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning’ (2019) 20 Theoretical Inquiries in Law 83.
  8. Solon Barocas and Andrew D Selbst, ‘Big Data’s Disparate Impact’ (2016) 104 California Law Review 671.
  9. Paul Schwartz, ‘Data Processing and Government Administration: The Failure of the American Legal Response to the Computer’ (1991) 43 Hastings LJ 1321; Citron (n 6).
  10. Michael Veale, Max Van Kleek and Reuben Binns, ‘Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making’ [2018] Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems 1; Deirdre K Mulligan and Kenneth A Bamberger, ‘Procurement as Policy: Administrative Process for Machine Learning’ (2019) 34 Berkeley Technology Law Journal 773.
  11. Reetika Khera, ‘Impact of Aadhaar in Welfare Programmes’ (2017) SSRN Scholarly Paper ID 3045235 <https://papers.ssrn.com/abstract=3045235>
  12. Thomas H Cormen and others, Introduction to Algorithms (MIT press 2009).
  13. Tarleton Gillespie, ‘2. Algorithm’, 2. Algorithm (Princeton University Press 2016) <https://www.degruyter.com/document/doi/10.1515/9781400880553-004/html> accessed 26 November 2021.
  14. Rob Kitchin, ‘Thinking Critically about and Researching Algorithms’ (2017) 20 Information, Communication & Society 14.
  15. Mike Ananny and Kate Crawford, ‘Seeing without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability’ (2018) 20 New Media & Society 973.
  16. Jenna Burrell, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’ (2016) 3 Big Data & Society 205395171562251.
  17. Jakko Kemper and Daan Kolkman, ‘Transparent to Whom? No Algorithmic Accountability without a Critical Audience’ (2019) 22 Information, Communication & Society 2081.
  18. Franck Pasquale, The Black Box Society (Harvard University Press 2015).
  19. Id.
  20. Solon Barocas, Moritz Hardt and Arvind Narayanan, ‘Fairness and Machine Learning’ 253, (fairmlbook.org).
  21. Barocas and Selbst (n 8).
  22. Barocas, Hardt and Narayanan (n 20).
  23. ibid.
  24. Maranke Wieringa, ‘What to Account for When Accounting for Algorithms: A Systematic Literature Review on Algorithmic Accountability’, Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (ACM 2020) <http://dl.acm.org/doi/10.1145/3351095.3372833> accessed 29 July 2020.
  25. European Parliament. Directorate General for Parliamentary Research Services., A Governance Framework for Algorithmic Accountability and Transparency. (Publications Office 2019) <https://data.europa.eu/doi/10.2861/59990>
  26. Madeleine Clare Elish, ‘Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction’ (2019) 5 Engaging Science, Technology, and Society 40.
  27. Saint-Dizier (n 3).
  28. The Taxation And Other Laws (Relaxation And Amendment Of Certain Provisions) Act, 2020.
  29. S.4 (XXIV), The Taxation And Other Laws (Relaxation And Amendment Of Certain Provisions) Act, 2020.
  30. S.4 (XXIV), The Taxation And Other Laws (Relaxation And Amendment Of Certain Provisions) Act, 2020.
  31. Chander Arjandas Manwani, Bombay High Court, (Writ Petition no. 3195 of 2021) order dated 21st September 2021; RMSI Private Ltd. v. National E-Assessment Centre., Delhi High Court, W.P.(C) 6482/2021 (Delhi HC), order dated 14/07/2021.
  32. ‘Linking of Electoral Data with Aadhaar: All You Need to Know’ The Times of India (21 December 2021) <https://timesofindia.indiatimes.com/business/india-business/linking-of-electoral-data-with-aadhaar-all-you-need-to-know/articleshow/88408171.cms>.
  33. ‘Democracy at Stake: Why Many Eligible Voters Might Not Vote in Telangana on Dec 7 | The News Minute’ <https://www.thenewsminute.com/article/democracy-stake-why-many-eligible-voters-might-not-vote-telangana-dec-7-92706>
  34. Srinivas Kodali v. Election Commission Of India, Through Secretary And Others, Telangana High Court, (PIL No. 374 / 2018)
  35. ‘5 Analytical Firms Look for Fraud in Ayushman Bharat PMJAY - Health News, Medibulletin’ <https://medibulletin.com/5-analytical-firms-look-for-fraud-in-ayushman-bharat-pmjay/>.
  36. Ayushman Bharat PM-JAY Annual Report, 2020-2021, National Health Authority, <https://nha.gov.in/img/resources/Annual-Report-2020-21.pdf>.
  37. Helen Margetts and Patrick Dunleavy, ‘The Second Wave of Digital-Era Governance: A Quasi-Paradigm for Government on the Web’ (2013) 371 Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 20120382.
  38. Baru RV and Nundy M, ‘Blurring of Boundaries: Public-Private Partnerships in Health Services in India’ (2008) 43 Economic and Political Weekly 62.
  39. Margetts and Dunleavy (n 37).
  40. Mariano-Florentino Cuéllar, ‘Cyberdelegation and the Administrative State’ in Nicholas R Parrillo (ed), Administrative Law from the Inside Out: Essays on Themes in the Work of Jerry L. Mashaw (Cambridge University Press 2017).
  41. Schwartz (n 9).
  42. Citron (n 6).
  43. Administrative Review Council (Australia), Automated Assistance in Administrative Decision Making: Report to the Attorney-General (AGPS 2005).
  44. Id.
  45. Marion Oswald, ‘Algorithm-Assisted Decision-Making in the Public Sector: Framing the Issues Using Administrative Law Rules Governing Discretionary Power’ (2018) 376 Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 20170359.
  46. Jennifer Cobbe, ‘Administrative Law and the Machines of Government: Judicial Review of Automated Public-Sector Decision-Making’ (2019) 39 Legal Studies 636.
  47. Nb. Courts have had the opportunity to consider algorithmic systems implicated in challenges to administrative action, but few have specifically commented on the specific implications of the use of automated systems and similar technology. Cf. Peter Whiteford, ‘Debt by Design: The Anatomy of a Social Policy Fiasco – Or Was It Something Worse?’ (2021) 80 Australian Journal of Public Administration 340.
  48. Katherine Freeman, ‘Algorithmic Injustice: How the Wisconsin Supreme Court Failed to Protect Due Process Rights in State v. Loomis’ 18 33.
  49. Sujit Choudhry, Madhav Khosla and Pratap Bhanu Mehta (eds), The Oxford Handbook of the Indian Constitution (Oxford University Press 2016); Raeesa Vakil, ‘Constitutionalizing Administrative Law in the Indian Supreme Court: Natural Justice and Fundamental Rights’ (2018) 16 International Journal of Constitutional Law 475.
  50. 1994 SCC (6) 651.
  51. Upendra Baxi, "Development in Indian Administrative Law" in A.G. Noorani (ed.), Public Law India (1982).
  52. Shreya Singhal v. UOI, (2015) 5 SCC 1.
  53. Indeed, the lack of a requirement to provide a personal hearing in the FAS has been challenged before multiple High Courts, as of the time of writing.
  54. This example is consciously borrowed from a similar rule incorporated in the IT Act (Intermediary Guidelines) Rules, 2021.
  55. I.P. Massey, Administrative Law, (10th Edition, Eastern Book Company, 2017)
  56. Indian Rly. Construction Co. Ltd. v. Ajay Kumar, (2003) 4 SCC 579.
  57. Cormen and others (n 12).
  58. Ben Green and Yiling Chen, ‘Disparate Interactions: An Algorithm-in-the-Loop Analysis of Fairness in Risk Assessments’, Proceedings of the Conference on Fairness, Accountability, and Transparency (ACM 2019) <https://dl.acm.org/doi/10.1145/3287560.3287563>.
  59. G.B. Mahajan v. Jalgaon Municipal Council, [1991] 3 SCC 91
  60. Indian Railway Construction Co. Ltd. v. Ajay Kumar (2003 (4) SCC 579)
  61. The hypothetical is not too far from reality. Data from social media is widely used in algorithmic determinations of credit scores in India and elsewhere. See ‘Not CIBIL, This Lender Uses Your Social Media Behaviour for Loan up to Rs 2 Lakh!’ (Financialexpress) <https://www.financialexpress.com/money/not-cibil-this-lender-uses-your-social-media-behaviour-for-loan-up-to-rs-2-lakh/1761934/>.
  62. Although the distinction between a ‘quasi-judicial’ and administrative action is increasingly waning inasmuch as procedural propriety is concerned. See A.K Kraipak v. Union of India 1969 2 SCC 262
  63. D.K. Yadav vs J.M.A. Industries Ltd, 1993 SCC (3) 259.
  64. Jiwan K. Lohia v. Durga Dutt Lohia, (1992) 1 SCC 56.
  65. Cobbe (n 46).
  66. Keshav Mills Co. Ltd. v. Union of India, (1973) 1 SCC 380.
  67. Gurdial Singh Fiji v. State of Punjab, (1979) 2 SCC 368; Kranti Associates (P) Ltd. v. Masood Ahmed Khan, (2010) 9 SCC 496.


License

The Philosophy and Law of Information Regulation in India Copyright © 2022 by Divij Joshi. All Rights Reserved.
