On demand 1h 35s Advanced

Artificial Intelligence and Civil Rights Online: Combatting Digital Redlining and Algorithmic Discrimination

This course will address the impact of artificial intelligence (AI) on civil rights and civil liberties in the United States. AI issues constitute a fast-changing legal landscape in which courts and regulators are attempting to keep pace with rapid technological changes. Automated decision systems now affect the housing, employment, education, and credit opportunities people are given. They also affect liberty and due process rights when used by government agencies. This course will provide an overview of key legal issues that are emerging in civil rights and liberties contexts because of the use of artificial intelligence.

Transcript

Esha Bhandari - Hello everyone and welcome to this CLE presentation on Artificial Intelligence and Civil Rights Online: Combating Digital Redlining and Algorithmic Discrimination. I'm presenting this CLE in March 2022. My name is Esha Bhandari, and I'm a deputy director at the Speech, Privacy, and Technology Project at the American Civil Liberties Union in New York. The law around artificial intelligence is rapidly changing, which is no surprise when you consider the pace of technological change. I aim in this CLE to focus on the effect of artificial intelligence technologies on civil rights and civil liberties. My focus will be talking about the ways in which the law has been evolving in the realm of civil rights and civil liberties to adapt to artificial intelligence. I'll describe several of the contexts in which the law is addressing AI and automated decision systems that affect rights with a focus on transparency efforts and civil rights litigation. And I'll give an overview of key court decisions and enforcement actions relating to AI and civil rights.

To kick things off, let's talk about what we mean by digital redlining or algorithmic discrimination. These terms aren't obvious, but they are becoming increasingly common. They're used in court decisions, in the media. So what does this mean? First of all, I wanna talk about what I mean when I say artificial intelligence. It's a very broad term. You could spend hours debating what constitutes artificial intelligence with people in different fields, but I'm using it here to mean essentially any automated decision system that involves human design, training, and data input choices but which yields a result produced by an automated process. So digital redlining or algorithmic discrimination is what happens when you've got an automated decision system, or an AI system, that is producing outputs. These outputs could be risk assessments, they could be eligibility scores, or they could be binary yes/no determinations. What I mean by this is you might set up an automated decision system, or an ADS, to tell you whether someone should get a certain benefit. You might set it up to tell you yes or no, this person should get the benefit or they should not get the benefit. You might set it up as an eligibility system.

So it simply tells you is this person eligible or not, and then another decision process, maybe a human decision-maker, decides if the person in fact gets that benefit. Or you could set the system up to be a risk assessment, which maybe is less common in the benefits context, but, for example, you might see a system that says some person is at risk of flight. You see this often in the bail context, in the criminal legal system. So that might be a percentage score. It might be a high, medium, low probability. The main takeaway from this is that automated decision systems can be coded, designed, and set up in a variety of ways to produce different types of answers. The human design, the thinking, and the goals that go into an automated system should always be front of mind when we confront and evaluate what the systems are doing.
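To make this concrete, here is a minimal sketch, in Python, of the point above: the same underlying model score can be surfaced as a risk tier, as an eligibility flag feeding a human decision-maker, or as a final yes/no decision, depending entirely on how the people deploying the system choose to configure it. The function, thresholds, and labels are invented for illustration and are not drawn from any real system.

```python
# Invented sketch: one score, three different ways a deployer might present it.
def present_output(score, mode="risk", threshold=0.5):
    if mode == "risk":                       # e.g. a low / medium / high tier
        if score < 0.33:
            return "low"
        if score < 0.66:
            return "medium"
        return "high"
    if mode == "eligibility":                # feeds a separate human decision
        return {"eligible": score >= threshold, "score": score}
    if mode == "decision":                   # binary yes/no with no human override
        return "grant benefit" if score >= threshold else "deny benefit"
    raise ValueError(f"unknown mode: {mode}")
```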

One of the issues that comes up in this field is often an automated decision system might be seen to be giving a scientific answer to a human or complex social problem. But we often have to ask, what was the system intended to do or what was the system asked to do? For example, if a system is asked to identify 50% of a population who should not be released from detention, and the system says 50% of the people that it considers are not eligible for release, we have to be very careful to understand it's not that there's some objective assessment that 50% of people were not eligible for release. It's that the problem the AI was asked to solve was to identify 50% of people not to release. So those are really important front-end questions to always keep in mind: these systems are varied and complex, but they almost always start with a particular problem that a human is attempting to address and with specific parameters. An AI or an automated decision system is often called a black box system.

What this means is that the outputs or the results are not easily explainable. Instead of a system where I could say 3 plus 5 equals 8, and I can explain how I got 8 from the two numbers that I added together, a black box system often can't be explained that way. You might get a result that says this person's risk score to be foreclosed on their mortgage is medium, or maybe it's even portrayed as a percentage, a 50% risk score of facing foreclosure. What does that really mean and how can we explain that? There might be factors that went into the ADS to produce that 50% score, but it's not a clear-cut, explainable calculation like 3 plus 5. So that's where we have to keep in mind that a lot of these very complex algorithms might produce results that nobody going back to look at the data input or the source code could explain in a narrative sense. It is often impossible to say these were the factors that the algorithm considered and this is how they clearly interact to produce the result.

The reason that many of these automated decision systems are black box is because they employ what's known as machine learning. They take an initial set of data, let's say it's training data, and they may be coded to produce certain results, but over time, the system learns from the data and it learns from its own results. So it's an iterative process such that over time, this system may not look like the one that was initially coded by a human designer that had an initial set of training data. It may result in more complex calculations or systems that no human can explain. The context in which an AI or automated decision system is used can also vary. In some areas, the automated decision system is used as the final arbiter of a decision, which means there's no human override. But in other contexts, a system might be used to inform a human's decision-making. So you might get a score or a yes or no decision, but ultimately a human has to consider that and perhaps other factors and ratify the decision of the automated system or make a different decision.
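For illustration only, here is a simplified sketch of the kind of feedback loop described above, using scikit-learn's logistic regression as a stand-in for whatever model a real system might use. The function, data shapes, and retraining schedule are assumptions, not any vendor's actual pipeline; the point it shows is that each round folds the system's own outputs back into the training data, so later versions of the model can drift away from what the original designer coded.

```python
# Illustrative sketch only: a model fit on initial training data is
# periodically refit on data that includes its own past predictions,
# so the deployed system drifts away from the originally coded version.
from sklearn.linear_model import LogisticRegression

def retraining_loop(X_initial, y_initial, new_batches):
    # X_initial / y_initial: the human-curated training data (at least two classes).
    model = LogisticRegression().fit(X_initial, y_initial)
    X, y = list(X_initial), list(y_initial)
    for X_new in new_batches:
        predictions = model.predict(X_new)   # the system's own results
        X.extend(X_new)
        y.extend(predictions)                # treated as if they were ground truth
        model = LogisticRegression().fit(X, y)
    return model
```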

There's some interesting literature out there about how often humans are likely to deviate from the recommendation of an automated decision system. There's often a powerful force involved with this idea of objectivity or scientific truth that can come from these systems that makes it difficult, in some contexts, for humans to deviate from the system or what it's recommending. But this is an important thing to keep in mind and it has relevance to the later issues we'll discuss, particularly around due process, when you've got decisions that are being made that no human is ultimately accountable for.

So what are a few examples of AI or automated decision systems? A credit score is a classic algorithm and one that people are probably very familiar with. It's not the most complex of the systems that are out there now. Social media content algorithms are another example, one we're probably increasingly familiar with. The content that we each see on social media or as we're browsing the internet is usually served up to us on the basis of an algorithm that uses a variety of factors to determine what we want to see or what is best for the business of the social media platform. There are eligibility algorithms used in the public benefits context. There are risk assessment algorithms and automated decision systems used for bail determination. Or in the family regulation or child welfare system, there are a variety of automated decision systems used, including to calculate various kinds of risk.

There are so-called predictive policing tools, which produce results that some police departments use to determine where to police more heavily. There are hiring assessment technologies where there are automated systems that assess candidates' qualifications, potentially assess their interviews, and use a variety of factors to give a score to employers to inform their decision-making in who to hire, or perhaps even to determine eligibility for the job at all, so to screen out candidates completely. Going back to my original question: what is digital redlining or algorithmic discrimination? This occurs when the use of an automated decision system has a disparate impact based on protected class status under civil rights laws, or is somehow biased in ways that result in worse outcomes on the basis of protected class status, such as race, gender, age, or disability, among others.

A few examples of this might be an online ad delivery system that shows ads for certain jobs to men over women or ads for housing in certain neighborhoods to people of particular races. You'll recall that, as I said, some of these algorithmic systems are black boxes, meaning it's difficult for a human to pinpoint which input or which factor led to an outcome. This is one of the most pernicious problems in this area because it's difficult to identify bias or whether race, gender, age, or disability is playing a determinative role in a way that it shouldn't in whatever an AI system is doing. Oftentimes the only clue we have about bias is looking at outcomes and assessing whether there are disparate outcomes on the basis of these protected class statuses, and examining why this disparate outcome is resulting and what the system is looking at in terms of its training data, in terms of the source code, perhaps in terms of the question or problem it's being asked to solve.
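As a rough illustration of the outcome-based assessment just described, the sketch below compares favorable-outcome rates across groups and flags large gaps. The records, field names, and the 0.8 cutoff (a rule of thumb borrowed from the "four-fifths" guideline for selection rates) are illustrative assumptions, not a legal test; a flag means further investigation is warranted, not that bias is proven.

```python
# Hypothetical outcome audit: compare favorable-outcome rates by group.
from collections import defaultdict

def selection_rates(records, group_key="race", outcome_key="approved"):
    totals, favorable = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record[group_key]] += 1
        favorable[record[group_key]] += int(bool(record[outcome_key]))
    return {group: favorable[group] / totals[group] for group in totals}

def disparate_impact_flags(rates, threshold=0.8):
    # Ratio of each group's rate to the highest group's rate; ratios below
    # the threshold are flagged for closer review, not treated as proof of bias.
    top_rate = max(rates.values())
    return {group: rate / top_rate < threshold for group, rate in rates.items()}
```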

It isn't often that an AI system is explicitly coded to take into account race, gender, or age, for example, if the system is not intended to be addressing a problem that's specific to a race, gender, or age. So intentional discrimination, in the sense of explicitly telling a system, I want you to give fewer opportunities to women than to men, or to people of certain races, is less common than a system that, without explicitly taking these factors into account, nonetheless produces disparate or biased outcomes. That is why it is so important to do after-the-fact assessment of the results of a system, compare them to the known demographics of the population, and see if that kind of disparate impact is resulting. Policy makers and the public are increasingly becoming aware of the discriminatory potential of AI technologies.

A 2016 White House report was an early example of government recognition of the potential for discrimination that could result from widespread and increasing adoption of AI technologies in critical areas that affect society, particularly in the realms of civil rights. As a result of this increasing awareness, courts are also becoming more and more educated about the issue and we are seeing a whole new area of law develop and arise, which is looking at the application of civil rights laws to these automated tools. It's also important to distinguish between the use of automated decision systems by the private sector and by the government, because the legal regimes that apply in each context are different and the types of legal claims that are being raised also vary depending on whether it's private sector use versus government use. But the use of AI and automated decision systems is on the rise in both contexts.

I'll just specify a few government uses where we're seeing increasing use of AI. One is in the criminal system. I mentioned bail risk assessments. There's also something known as probabilistic DNA genotyping that's used as evidence in criminal cases. There's predictive policing, which I also mentioned. In the family regulation or child welfare system there are risk scores that are being determined by automated systems. There are family placement matching tools that are automated. And then lastly, benefits or funding determinations and fraud detection are also being increasingly done by automated systems. These are the government examples. There are also private sector examples. As I go through the rest of the presentation, I will at times be talking about legal challenges to private sector use and at times to government sector use.

Litigation around these issues is increasing in both contexts. Turning to the next big question. How do we know when an AI system might be discriminatory? There are a lot of hurdles to transparency on this issue. I mentioned previously that one way to know if a system is discriminatory is just to look at the outcomes and see if there's a disparate impact on the basis of protected class status. Another piece of information we might want about an AI system is what training data was used? So for example, let's say you have a hiring technology and this system was trained on a dataset of qualified workers, all of whom were, let's say, men. And so the way that this system was designed and trained was to have a pool of qualified workers that were only men, but then the system will be used on a broader population. That is information you might want to know. It might be critical to deciding whether this system in fact has a biased view of who is a qualified worker and what the system has been trained to accept as a qualified worker.

So these are just a few examples of the types of transparency we might want when evaluating a system. One is outcomes: what are the outcomes, and what are the demographic characteristics of the population reflected in those outcomes? And two, what was the training data, what is the source code, who were the people who designed it, and what was the system designed to do or what question was it asking? There's a whole host of issues around transparency in particular contexts that are important. But how do we get answers to these questions? One avenue is independent auditing, research, or data journalism. This is when you have outside independent individuals or groups that test an automated decision system or an AI to see what outcomes result. There are a host of different techniques that can be used and this is a growing field of work.

There are increasingly academics, computer scientists, and media organizations and newsrooms that specialize in this type of auditing of AI systems or platforms using a variety of technological means and other techniques, but essentially the goal of this kind of auditing is analogous to what happened in the offline context with civil rights enforcement. So for example, you may have had in the offline context people who run audit tests of landlords or employers, and they send qualified applicants for a home or a job to that landlord or employer. They'll often employ something known as paired testing. Maybe you send people of different races to a landlord, they present equal credentials for a home, and then you see what opportunities they're offered, what homes they're offered, what neighborhoods they're offered. Similarly with audit testing of employers, a common technique has been to send resumes that are equal in terms of qualifications, but differ only in the gender or some other characteristic that you're testing, and see what responses you get from the employer.

This kind of testing has always been adversarial, meaning that it's not done with the knowledge or consent of the party being tested. That's a pretty critical aspect of this because again, in order to be truly independent and to potentially yield results that would be embarrassing for a company or possibly even subject them to legal liability for discrimination, it's hard to do that kind of work hand in hand with a company. So that is one model: the independent researcher or journalist testing an automated platform or system. Now, I described offline examples of this type of audit testing with the testing of the landlords and employers.

This offline testing has been encouraged by the federal government, in fact, to enforce laws such as the Fair Housing Act and Title VII of the Civil Rights Act of 1964. It's long been accepted in the offline context that this kind of adversarial independent testing is not only encouraged but in fact necessary to ferret out violations of civil rights laws. It's going to be fairly rare for a company to audit itself and then offer up examples of its discriminatory treatment of people seeking homes or people seeking jobs. That's why this independent audit testing model sprang up. The Supreme Court, in fact, recognized the standing of civil rights testers to bring enforcement claims under the Fair Housing Act in a 1982 decision, Havens Realty Corp. v. Coleman.

So again, in the offline context, it has been well established that even if companies don't like it, there is the ability for testers to show up and run these kinds of tests or to send in applications that aren't real applications but they're tester applications, and despite the loss of company time, the annoyance to a company of having to filter through people who are not actually prospective renters or prospective applicants for a job, this is the cost of doing business and necessary to enforce civil rights laws. The problem now is that many of these critical life opportunity transactions, including seeking housing, seeking employment, credit, or education are increasingly taking place online. And just as these transactions for housing, employment, and credit have moved online, they're also increasingly mediated by automated systems that decide which opportunities are shown to people and/or which opportunities people are screened out of. I gave the example earlier of ad targeting systems, where if I go online, I might see a completely different set of housing opportunities or job ads than any one of you listening to this CLE. It'll depend on our browsing history, on the information that the platforms have about us, and then ads will be targeted to us. Similarly, if I go to a major platform for, let's say, employment and I submit a resume for a job, it's quite possible, maybe even likely, that that platform works with employers to screen the hundreds or maybe thousands of resumes that are submitted to it. So even if I see the job opportunity that I'm interested in, my resume might be screened out by a platform before an employer ever sees it. And if that screening system is an automated system, that's another critical area that an independent researcher might want to test to see if there's bias in that system. In the online context, however, common audit testing methods are often prohibited. I mentioned that offline, you might have people do a resume test where they send in identically qualified resumes that differ on basic demographics. You might have in-person paired testing where people show up to try to get an apartment.

Online, you have to use slightly different methods. For example, some online audit testing requires creating tester accounts. Maybe you sign up for a social media platform account that's a research account. It doesn't reflect your information. So when you fill out the demographic characteristics or the questions that the platform asks you, who are you, what's your name, what's your gender, what's your age, where do you live, you're gonna be creating a tester account so you're gonna be controlling those variables and they won't reflect who you are in fact. And then you might use multiple tester accounts that differ on the basis of one variable to see what opportunities that platform or website shows you. So, for example, you might create one tester account that's coded as someone in their 20s and another tester account that's coded as someone in their 60s, and you have these tester accounts visit the same website, see are they shown the same job opportunities. Another common technique used by independent researchers online is scraping. Scraping is an automated method of collecting data that's otherwise available to the public, but scraping allows you to automatically collect and record that information rather than manually writing down the data that is provided.

Scraping is a very critical technique because so many of the online audit testing methods require visiting websites multiple times repeatedly to see what different opportunities are shown at various times, how the system behaves at various times. If researchers had to manually collect that information, much of this research would be completely impractical or impossible. So scraping is another online technique used for audit testing that doesn't really have an offline analog. The overall problem with these online testing techniques is that website terms of service often prohibit them. It's not unusual for website terms of service to say things like you cannot provide false information to the website or the platform, which would of course prohibit creating tester accounts that don't reflect your real information. They often also prohibit scraping or other automated means of collecting information. Terms of service can even be as explicit as to say you agree not to use this service for any research or other purposes because these terms of service are written by the companies themselves. They are a one-way street. They're not meaningfully agreed to by users. And as a result, the terms of service reflect the company's interests. And if the company feels that they don't want to be subject to this kind of testing, they may often put that in their terms of service.
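To ground the techniques just described, here is a hypothetical sketch of an online paired test: two tester sessions that differ on a single variable request the same page, the ads shown to each are scraped, and the two sets are compared. The URL, cookie values, and CSS selector below are placeholders invented for illustration, and any real study would of course have to weigh the terms-of-service and legal issues discussed in this presentation.

```python
# Hypothetical paired test combining tester accounts and scraping.
# The URL, cookie names, and selector are placeholders, not a real target.
import requests
from bs4 import BeautifulSoup

def ads_shown(session_cookies, url="https://example.com/jobs"):
    response = requests.get(url, cookies=session_cookies, timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")
    # Record the ad titles served to this tester session.
    return {tag.get_text(strip=True) for tag in soup.select(".ad-title")}

# Two tester sessions identical except for the age the account reported.
ads_for_younger_tester = ads_shown({"session": "tester-age-25"})
ads_for_older_tester = ads_shown({"session": "tester-age-65"})
print("Shown only to the younger account:", ads_for_younger_tester - ads_for_older_tester)
print("Shown only to the older account:", ads_for_older_tester - ads_for_younger_tester)
```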

They may even have laudable reasons for requiring, for example, that people generally provide true information about themselves. They may want to prevent bots on their service. They may want to protect other users from feeling that they don't know who's using the system. So these terms of service aren't always nefarious, but they do have the effect of prohibiting a lot of independent testing and research. Now, you might ask, what's the big deal with violating terms of service? Most people don't agree to them. They're clickwrap in many instances. You might even have an argument that they are contracts of adhesion and they cannot be enforced.

All of this is potentially on the table for people to argue, but the Computer Fraud and Abuse Act, which is a federal law, was a long-time barrier to robust online audit testing because the federal government and courts interpreted the CFAA to render website terms of service violations into criminal violations. This understandably had a large chilling effect on researchers and journalists who would otherwise violate website terms of service in the course of public interest research, work done in service of the public good, in service of enforcing civil rights laws, or to inform the public and policy makers. They might have felt confident that they could violate terms of service for those reasons, but risking criminal liability under the CFAA would be a bridge too far for many people. So what to do about this problem?

In 2016, a group of academic researchers, computer scientists, and a media organization challenged the CFAA as a violation of the First Amendment to the extent it prohibited them from conducting research to uncover whether online housing and employment platforms discriminate on the basis of race, gender, age, or other characteristics. The case is Sandvig v. Barr in the DDC and I was counsel in that case. The researchers and academics in that case really wanted to conduct the kind of online audit testing that I'm describing and were facing barriers because the websites they wanted to test or the platforms they wanted to test had terms of service prohibiting their activities and the CFAA was standing in their way as this specter of federal criminal liability. The district court in Sandvig v. Barr held that the CFAA should not be read to criminalize violations of website terms of service. Ultimately what the district court held was that it didn't have to decide the question of whether the researchers had a First Amendment right to engage in their activities because the CFAA should be read narrowly.

In 2021, the Supreme Court ruled on the scope of the CFAA and narrowed it in a way that should protect researchers going forward. In Van Buren v. United States, the Supreme Court held that the particular provision of the CFAA which had been read to encompass liability for violations of website terms of service alone should be read in a more narrow way. The case had nothing to do with terms of service violations. It involved a law enforcement officer improperly accessing a work database and being charged under the CFAA. But nonetheless, the issues that that case raised overlapped with the problem of this broad reading of the CFAA to cover written terms of service violations. So the narrowing that the Supreme Court did in Van Buren should be protection for researchers going forward when the only thing that they're doing is violating website terms of service. They're not engaged in hacking, breaking into websites in a way that would violate other parts of the Computer Fraud and Abuse Act.

Even with the threat of CFAA liability being reduced for most researchers and data journalists, there are still barriers to common online investigative techniques. Those include the continued existence of terms of service that prohibit a lot of the common methods, and people may face various risks in violating terms of service. There may be other ways that those terms could be enforced against them. They may be testing or studying a particularly litigious platform that they think might try to enforce those terms or raise other legal barriers against them. So the path is not fully clear for people who want to conduct independent or adversarial testing of platforms and websites, but the threat of CFAA liability, at least, is no longer a major factor. Potential liability for researchers or audit testers is not the only hurdle to transparency efforts.

Next, I'm gonna turn to talking about trade secrets and the role that invocations of trade secrets have played in transparency around algorithmic or automated decision systems and AI. In contexts in which people might otherwise be entitled to information about an automated decision system, we have seen an increasing number of trade secrets claims that are preventing that information from being disclosed. For example, in the criminal context, where defendants are entitled to the evidence that's going to be used against them so that they can defend against it, courts have prevented some defendants from gaining information about an automated decision system, such as its source code or data inputs, that was used to generate or provide evidence against them.

Again, the normal context as we would understand it would be that this defendant would be entitled to this type of information because it's evidence being used against them, but where it's an automated decision system being used, that information that is necessary to challenge the system and how it produced its results or its evidence is being withheld. And often it's being withheld on the grounds that producing sufficient information to evaluate the system would violate the trade secrets of the private company that developed it. Keep in mind that these systems, of course, are being used to provide evidence by government entities, which are prosecuting people. But nonetheless, even though this is a government use, because there's a private company that's been involved in either selling the system to the prosecuting entity or leasing it to them, or in other ways providing evidence to them using that private system, the defendants don't get the benefit of being able to see inside the hood of that system. I'll give one specific example which is probabilistic DNA genotyping software that is often used in criminal cases and has been the subject of much litigation. I won't get into the specifics of what probabilistic DNA genotyping is, but it's essentially, as the name implies, it provides probabilities that DNA is a match to a person. And there has been a lot written about how these software systems can be filled with errors, there are a host of problems that can arise, and some of the litigation has been over whether defendants can get sufficient information about how these systems are designed to challenge errors, to identify if there are errors in the first place, or if there is an unacceptably high rate of error, or if there are other flaws with that system.

For an example of the kinds of legal arguments against the invocation of trade secrets in the criminal context, you can see the host of amicus briefs that were filed in a case called People v. Johnson in the California state courts. In that case, the court sidestepped the issue of whether a company and the government could invoke trade secrets to avoid providing information about a system that was used against a defendant, but there were a whole host of groups interested in the issue that filed amicus briefs and laid out the legal arguments for why trade secrets is not an appropriate argument to be made when you have the government using these systems in a context in which individual liberty is at stake. While there have been courts that have upheld the invocation of trade secrets in the criminal context, meaning they have not given defendants access to source code or training data, there have been some recent cases where the defendants were successful in gaining access to more information about an automated system.

United States v. Ellis, a 2019 case in the Western District of Pennsylvania, is one where the defendant was granted access to the source code for a probabilistic DNA genotyping tool, and in State v. Pickett, a New Jersey case, the court granted access to source code for probabilistic DNA genotyping as well. The trend does seem to be that more courts are providing access to some information. There are still often disputes over how much information and how much data should be provided with source code, because source code divorced of data may not be that useful depending on the system. So this is an area of the law to watch because the issues are not settled at all and courts are regularly ruling in different ways.

One other avenue for transparency around automated decision systems or AI are open records laws, whether we're talking about the Federal Freedom of Information Act or state level open records laws. There has been some litigation around the scope of these laws and whether they would require turning over information about automated decision systems. And again, keep in mind that this is a context where government records might be presumed to be subject to open records laws as a baseline matter. So let's say you have an agency, a housing agency, where their records are open to the public with exceptions, of course, but that's the general baseline. Where things can get complicated is if you have the records of a private entity that has sold or leased a system to that agency, are that private entity's records now subject to open records laws? Does it matter what the arrangement is, does it matter whether the records are housed with the agency? These questions arise because you have this entanglement between a public function or a government function that is being partly outsourced or performed by a private system that companies often claim is proprietary, they invoke trade secrets, as I mentioned. So it's rare that a government agency would itself have an automated decision system that's fully developed in house though it's certainly possible. And in that case, the arguments that the system is proprietary would be, I would assume, much weaker under open records laws, but it's certainly an issue when you've got a private company that's claiming their proprietary information should not be open to the public just because the agency, the public agency that they work with, is subject to open records laws. There might be signs that courts are paying attention to this problem now.

In January 2022, a New Jersey judge required the State Department of Education to reveal to six school districts the source code and funding data related to an algorithm that New Jersey uses to allocate funding for all school districts in the state. The school districts that were challenging this funding distribution raised claims under the New Jersey Open Public Records Act and the common law right of access. And the court said that these districts were entitled to more information about this algorithm to assess whether they had any other claims against the state for how it had allocated funding. An earlier decision had in fact required source code to be turned over, but the six challenging school districts went back to court when they said that that information alone wasn't enough to assess how the algorithm worked.

So the January 2022 decision involved the challengers getting more information. Again, this goes to show that source code alone is often not sufficient to assess a system, and courts will have to look very carefully at what combination of data and information is the bare minimum for someone to be able to look at a system, identify biases and flaws, and then possibly bring a challenge to that system. Once again, there are novel issues of law to be addressed under different states' open records regimes and I expect there to be a lot of litigation in the coming years over these issues.

That's an overview of some of the transparency issues surrounding automated decision systems. Even figuring out where a system is being used, and then, once someone is aware that a system is being used against them, getting enough information about that system, is the first hurdle. Now I wanna turn to a different issue: what happens when you know that a system involves some bias or some discrimination, whether it's intentional or a disparate impact? How is the law handling those issues when we know that there's an automated decision system or an AI that is resulting in discrimination? I'm gonna take one case study, that of online ad targeting and delivery, to show that we've entered a new era of civil rights enforcement.

Online ad targeting and delivery ecosystems have been among the most litigated areas of civil rights enforcement in the era of automated decision systems. What do I mean when I talk about online ad targeting and delivery? I mean the system, employed mostly by private companies (I'm talking here about the private sector), where you visit a platform or a website and you see ads that are tailored to you. It's axiomatic now that most of us are not seeing the same internet that our neighbor might see, that even someone in our own household might see. The way that we browse the internet, the information about us that has been collected and aggregated by data brokers, that social media platforms all have on us, the profiles that have been created of millions of internet users, all go towards feeding an online ad targeting infrastructure that maximizes revenue for advertisers and the platforms that offer these services to advertisers. For example, if an advertiser wants to show its ads to a particular demographic, many social media platforms and many other platforms can offer that option. They can say you can show your ad to people of a certain age, race, gender, or income level, or with certain geographic targeting. There are numerous targeting options available out there because of the way that the ad targeting ecosystem has developed. Separate and apart from explicit ad targeting, which an advertiser can select (as I mentioned, they can select who to show their ad to very specifically), there's also the ad delivery ecosystem that has sprung up, which is the system whereby an advertising platform decides who to show an ad to absent any specific direction from the advertiser.

For example, if I go online and post an ad for a product that I'm selling and I don't choose any targeting options at the platform that I've selected, I just say, show this ad, the platform is very likely going to be using an automated system to decide which of its potentially millions of users should see the ad. Most of these delivery algorithms will be optimized to generate the most clicks because platforms want to offer their advertisers the greatest number of click-throughs, the most people who are actually interested in their ad seeing the ad. So many online platforms that offer advertising services will use their own algorithmic delivery systems to identify those users that they think will be most likely to click on an ad by any particular advertiser. And of course the more clicks they can show the advertiser, the more interested users that have seen the ad, the more revenue for the platform. So these ad delivery automated systems are optimized usually for maximum profit with no consideration for other issues. Several years ago, researchers and civil rights groups began raising the alarm about the potential for online ad targeting and delivery systems to discriminate against users on the basis of protected class status and violate existing civil rights laws. In particular, the concern was over ads for housing, employment, credit, and education, which are heavily regulated areas under the civil rights laws because these are core economic opportunities where the law wants to ensure that people are not excluded from those opportunities on the basis of, say, race, gender, or age. But you can see how in an ad targeting ecosystem where an advertiser can select who to show an employment ad to based on race, gender, or age, you're going to have a problem.
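As a simplified illustration of the click-optimized delivery logic described above, the sketch below ranks users by a predicted click probability and serves the ad only to the top of that ranking. The prediction function is a stand-in for a platform's learned model, and real systems are far more complex; the only point is that a purely click-optimized ranking can skew who sees an ad even though no protected characteristic is ever passed in explicitly.

```python
# Invented sketch of click-optimized ad delivery, not any platform's code.
def deliver_ad(ad, users, predict_click_probability, audience_size=1000):
    """predict_click_probability(user, ad) stands in for the platform's learned
    model; its predictions may correlate with age, gender, or race even though
    none of those fields appears anywhere in this function."""
    ranked = sorted(users, key=lambda user: predict_click_probability(user, ad),
                    reverse=True)
    return ranked[:audience_size]  # only these users are ever shown the ad
```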

Similarly, an ad delivery system might optimize certain housing ads for people living in certain neighborhoods, which may reflect existing segregation, and the automated system might therefore reinforce that segregation by only showing those housing opportunities to people of a certain race, whom the system knows to already live in that area. You're going to reinforce that real-world discrimination and segregation, and you're not going to give people of different races, or people who were not targeted by the advertiser, the opportunity to ever know about that housing ad or that job ad.

One of the first major cases about online ad targeting and how it intersects with the civil rights laws was brought against Facebook. And it was brought by the ACLU, the Communications Workers of America, and other fair housing and civil rights organizations. I was counsel in this case. The case began with charges filed at the Equal Employment Opportunity Commission stating that Facebook targeted certain ads for jobs to younger male Facebook users. There was other simultaneous litigation that claimed race discrimination in job, housing, and credit ads and age discrimination in job ads on Facebook. Facebook settled the case before it made it to federal court and in doing so, Facebook made sweeping changes to its online ad ecosystem. Among those changes, Facebook created a separate portal for ads for housing, credit, and employment.

So if an advertiser went on Facebook after the settlement and wanted to place a housing, credit, or employment ad, they would be redirected to a separate area where they wouldn't be given targeting options that correspond to protected characteristics under the civil rights laws. So for example, you could not target a housing, credit, or employment ad on the basis of age or gender, or proxies for age and gender. Removing proxies as a targeting option is very important, of course, because that could just be another way for advertisers to get around the restrictions if they could simply find those proxies and target on the basis of them. There were other aspects of the settlement, including how Facebook would deal with malicious advertisers, those who would want to place a housing or employment or credit ad using the fuller set of targeting options and so might not be honest about the fact that they're placing an HEC ad, a housing, employment, or credit ad, and try to get into the broader portal where they could target based on protected characteristics like gender or age.

So there are details about how Facebook has to work to ensure that it's got a system that can catch all of those ads. But this was a major settlement and the first major platform changes to reflect the fact that civil rights laws and civil rights enforcement would apply to internet ad targeting. The settlement with Facebook addressed the targeting portion of its ad system, the part where advertisers could explicitly choose who to show their ads to. But there are remaining issues not resolved by that settlement, including Facebook's ad delivery algorithm, which remember is Facebook's automated system that decides which users see an ad regardless of whether the advertiser has selected any targeting options. Facebook's ad delivery algorithm was the subject of subsequent charges filed by the Department of Housing and Urban Development in 2019. There's also ongoing litigation against Facebook for its ad delivery algorithm by private entities and there's also ongoing litigation against the advertisers themselves who used discriminatory ad targeting tools in areas covered by civil rights laws.

You can see these pending cases, Opiotennione v. Bozzuto in the Ninth Circuit and Vargas v. Facebook in the Ninth Circuit. And the briefs in these cases will give you a good overview of the legal issues raised here. There are slightly different legal issues raised against advertisers who choose to intentionally target and discriminate in the delivery of housing, employment, and credit ads, and slightly different issues with respect to a platform like Facebook that has an ad delivery algorithm that might result in discrimination among users because it's biased in some way, so it might be showing job ads more to men than to women, or showing housing ads more to people of one race than another. As a result of all of this litigation around Facebook, other ad platforms are on notice of similar potential claims.

And I also wanna flag that one of the arguments that's often raised by platforms like Facebook in defense of its practices is the applicability of Section 230 of the Communications Decency Act. And this is an area where the courts will have to weigh in on how Section 230 of the CDA intersects with civil rights claims. It's beyond the scope of this presentation to get into the specifics of the legal arguments around CDA 230, but in very broad strokes, Section 230 does protect certain online platforms from liability for the content of users' posts in certain circumstances, and it has allowed internet freedom of expression to flourish as a result because platforms can moderate and make decisions about what content to leave up or take down without being afraid of being held liable for the content of those posts, for defamation, for example, and other claims.

Now, social media platforms are often claiming that CDA 230 would protect them against liability for civil rights claims. The argument of civil rights organizations, including the ACLU in the Facebook case that I was a part of, is that CDA 230 doesn't cover these types of claims when the claims are focused on the platform's own actions in ad targeting and ad delivery infrastructure, when the platforms themselves make those choices and effectuate the ad targeting and the ad delivery that is held to be, or is alleged to be, discriminatory. This is again another area of the law to look out for because there will be upcoming court cases that address this intersection. I want to turn now to civil litigation against the government, due process, equal protection, and statutory claims that have been raised in the context of government use of AI or automated systems. To date, there has been litigation over the state use of algorithms to calculate public benefits eligibility and to identify fraud in programs.

These claims have involved constitutional due process and equal protection claims, and also statutory claims under Medicaid, the Americans with Disabilities Act, and the Rehabilitation Act. One example is the case of K.W. ex rel. D.W. v. Armstrong, which went up to the Ninth Circuit in 2015. In that case, plaintiffs were adults with developmental disabilities whose Medicaid assistance was drastically cut when Idaho adopted a new algorithmic tool. This tool was being used to determine budgets for adults who were receiving assistance under this Medicaid program. And the plaintiffs who had received the drastic reductions filed suit, and one of their claims was a due process claim, namely that they needed to be able to assess the tool, look at the factors the tool was using to determine what level of assistance they should be getting, and possibly be able to rebut those factors and rebut the assessments of the tool. In this case, there was no human decision-maker who could explain the choices that had been made by the tool, something that we might normally expect from government agency decisions around things like benefits. So in this case, the plaintiffs made the claim that they were entitled to adequate notice and an explanation of the reason for the reduction in their benefits or the denial of their benefits. The district court granted a preliminary injunction, finding that their claim had merit. And the Ninth Circuit upheld that preliminary injunction on the grounds that the plaintiffs were entitled to adequate notice and an explanation of the reasons behind the choices that were made for their assistance. This is an example of an automated decision system being subject to due process analysis and ultimately the state having to find a way to provide an explanation for its use of these tools, or not being able to use these tools if such explanations are just impossible to provide.

In another case study of a due process claim against an automated decision tool, Michigan used a system known as MiDAS. The MiDAS tool was meant to identify fraud in unemployment benefits, but it turned out that the MiDAS tool was very bad at determining fraud.

In fact, it had an error rate of up to 93% when people studied it. So this tool was challenged by people who were deemed to have committed unemployment benefit fraud and it was held to violate due process by the Michigan Court of Appeals. Again, this is an example of a state using an automated decision tool that it can't really explain the function of or the reasoning behind its determinations. Individuals weren't being told why they were deemed to have committed fraud. It was just that this tool had made this prediction or had made this assessment and there could obviously be serious consequences for people if there is a state determination that they've committed unemployment fraud, triggering the due process analysis. That due process decision by the Michigan Court of Appeals is currently pending review in the Michigan Supreme Court. But separately, a federal court has allowed a lawsuit to proceed against private companies that were involved in the design and deployment of MiDAS. This is a recognition that it's not only the state that faces potential liability for its use of a flawed tool, but the private companies that were involved in the design or the implementation process of MiDAS now are also facing potential liability. And the court in that case, analogized the issue in some ways to products liability, where if a private company is just putting a faulty product out on the market that has such serious consequences for people's lives, there has to be responsibility and accountability for that. Lastly, moving away from the benefits context, the use of algorithms by government employers has also been challenged.

In Houston Federation of Teachers v. Houston Independent School District, a case in the Southern District of Texas, the district court dismissed all claims except a procedural due process claim brought by teachers who could have been terminated because of a privately developed and secret algorithm that conducted teacher evaluations. One of the factors that this algorithm took into account was student performance. But the teachers did not know what the universe of factors was, what the full set of data or the source code of this algorithm was, or how it was coming to its assessments of teachers, which for them could result in negative performance evaluations and potential termination. The district court, in allowing the procedural due process claim to advance, held that the teachers had no way to ensure that the algorithm was making its calculations correctly because the school district didn't audit the algorithm and its scores. So the school district didn't do what it should have done, which was to assess how well the algorithm was performing and how its scores were determined, and to audit whether there were any issues, whether it's error rates, whether it's systemic bias, or any other issue with the tool. And the teachers had no way to challenge those assessments because there wasn't a human who could ultimately explain or tell them why the assessments they got were the way they were. That's another example of a state government entity being held to a due process standard for the use of an algorithmic tool in a context in which you've got a liberty or property interest. Here, it was public employment. This type of litigation is nascent and I expect that we'll see many more of these types of claims in the future, particularly in a context where more and more states and localities are moving to require some sort of transparency about state and local agency use of algorithms.

As I mentioned earlier in this presentation, transparency is a key hurdle to even knowing when an algorithm is being used against you. And for a while, the teachers in the Houston School District case didn't even know that there was an algorithm being used to assess their performance. As more and more people start to realize that particular agencies are using algorithmic tools in contexts that implicate their fundamental rights, I expect that we'll see more of these types of challenges. There may initially be a lot of due process challenges where people are seeking an explanation, some sort of insight into the tool and the factors it uses, but down the line I expect we'll also see more equal protection claims, and potentially statutory claims of discrimination, if what becomes known about a tool reveals that it is in fact discriminatory because it has biased outcomes or in other ways takes into account factors that lead to discrimination. That's an overview of some of the issues that are coming up in this developing context of artificial intelligence, digital redlining, algorithmic discrimination, and the law. I expect a lot of changes in the coming years because this is a very fast moving area of law. And the technologies are changing rapidly as well, which means that the courts are often on the back foot, assessing technologies and their uses several years after a technology has perhaps moved on to the next generation version.

I wanna turn to a few key takeaways. One, private actors and government entities do face litigation over the use of automated decision systems or tools. The existing cases show that private companies and government are not immune from claims being brought, even in a context where some of the issues may be novel legal issues. Second, transparency is a key hurdle to bringing claims by those affected by an automated decision system.

One, the existence of a tool is often secret. People sometimes don't even know that a decision that affected their life was made through the use of an automated decision tool. They may simply get a yes or no answer from a company or from a government agency and never be told that the yes or no answer was informed by a tool or that a risk assessment tool was one of the factors used in the decision affecting them. Even where the use of a tool is known, though, getting sufficient information to evaluate it and any potential legal claims is also difficult. And even with information being provided, the outcome of a tool may still be unexplainable, even by those who design it and deploy it. And that presents a challenge because receiving the source code and the data used by a tool will not always easily explain what it is doing. The next key takeaway is that the standards by which an automated decision system is evaluated will depend on the context. It will matter whether you're challenging a government agency decision and what standards that agency is held to, including due process and equal protection. It will matter if it's a private sector use and whether there are civil rights laws that govern the area of operation of the industry that's using the tool. One metric by which an automated decision system can be evaluated is its error rate.

Another metric is disparate impact, and also potential targeting, explicit targeting, based on race, gender, age, or their proxies. A tool may not explicitly code race, gender, or age as a factor to consider in a context where doing so would be inappropriate, but it might be using factors that are proxies. And if the people who've designed that tool haven't carefully considered that, you could have a situation where protected characteristics are in effect a factor in an automated decision system where they shouldn't be. Lastly, the inability to understand and explain the decision of an automated decision system raises due process problems in the context of government use, but it also can and should trouble us in private sector or other contexts where constitutional due process claims may not be at stake.
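One rough way to look for the proxy problem just described, offered here only as an illustrative sketch with invented field names, is to measure how strongly each input feature correlates with a protected attribute even when that attribute is never passed to the model. A high correlation is a signal for closer human review, not a conclusion.

```python
# Hypothetical proxy screen: correlate each feature with a protected attribute.
import numpy as np

def proxy_correlations(feature_matrix, feature_names, protected_attribute):
    """feature_matrix: 2-D numpy array (rows are people, columns are features);
    protected_attribute: a numeric encoding of the characteristic being checked.
    Returns |Pearson correlation| per feature, sorted from highest to lowest;
    high values suggest a feature may be acting as a proxy."""
    scores = {}
    for index, name in enumerate(feature_names):
        correlation = np.corrcoef(feature_matrix[:, index], protected_attribute)[0, 1]
        scores[name] = abs(correlation)
    return dict(sorted(scores.items(), key=lambda item: item[1], reverse=True))
```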

We do face the prospect that more and more decisions that affect people's core economic opportunities and core life chances are made by systems without any human accountability: the decision of whether to be hired for a job, or to be eligible for credit or a mortgage. These decisions have a major impact on our lives, and the more they are made by decision systems without human accountability involved, the greater the impact it's going to have on our society, both for civil rights and civil liberties, but also just generally for democratic principles of fairness and our understanding of what we owe to each other. I expect that there will be more and more of these types of due process claims even outside the government context, where an automated decision system is being used in a setting where really a human decision-maker should be involved, or there should be some human who can stand by a choice because it is so consequential for someone's life. To the extent that existing laws are not adequate to cover those types of claims, I expect we will see more regulation and legislation in the future. Whether it's at the state and local level or the federal level, there are likely to be changes in the laws to reflect the growing power of these automated decision systems. So while the cases that I've mentioned now are addressing these issues under current laws, this is not likely to be the only legal regime that courts will be assessing going forward.

That's the end of my presentation. Thank you very much for listening.

Presenter(s)

Esha Bhandari, JD
Deputy Project Director
American Civil Liberties Union

Credit information

Jurisdiction | Credits | Available until | Status
Alabama | | | Not Offered
Alaska | 1.0 voluntary | | Pending
Arizona | 1.0 general | | Pending
Arkansas | 1.0 general | | Pending
California | 1.0 general | | Pending
Colorado | | | Not Offered
Connecticut | 1.0 general | | Pending
Delaware | | | Not Offered
Florida | 1.0 general | | Pending
Georgia | | | Unavailable
Guam | 1.0 general | | Pending
Hawaii | 1.0 general | | Pending
Idaho | | | Not Offered
Illinois | 1.0 general | | Pending
Indiana | | | Not Offered
Iowa | | | Not Offered
Kansas | | | Not Offered
Kentucky | | | Not Offered
Louisiana | | | Not Offered
Maine | 1.0 general | December 31, 2026 at 11:59PM HST | Pending
Minnesota | 1.0 general | | Pending
Mississippi | | | Not Offered
Missouri | 1.0 general | | Pending
Montana | | | Not Offered
Nebraska | | | Not Offered
Nevada | | | Unavailable
New Hampshire | 1.0 general | | Pending
New Jersey | 1.2 general | | Pending
New Mexico | | | Not Offered
New York | 1.0 areas of professional practice | | Pending
North Carolina | 1.0 general | | Unavailable
North Dakota | 1.0 general | | Pending
Ohio | 1.0 general | | Unavailable
Oklahoma | | | Not Offered
Oregon | | | Not Offered
Pennsylvania | | | Not Offered
Puerto Rico | | | Not Offered
Rhode Island | | | Not Offered
South Carolina | | | Not Offered
Tennessee | 1.0 general | | Unavailable
Texas | 1.0 general | | Unavailable
Utah | | | Not Offered
Vermont | 1.0 general | | Pending
Virginia | | | Not Offered
Virgin Islands | 1.0 general | | Pending
Washington | | | Not Offered
West Virginia | | | Not Offered
Wisconsin | | | Not Offered
Wyoming | | | Not Offered

