How Can We Legislate Algorithms? Lessons Learned at the State and Local Level

Written by iLegis. Summary of a panel presented by Kelsey Kober, Senior Manager of Policy on the Information Technology Industry Council's Public Sector team, and moderated by Cathy Pagano, at the iLegis Conference, Nov. 3, 2022.

In August 2020, at the height of the COVID-19 pandemic, angry crowds gathered outside the United Kingdom’s (UK) Department for Education headquarters. But these protestors weren’t concerned with school closures or mask mandates. The target of this mass outrage was the UK government’s decision to generate exam scores through an algorithm. When the pandemic prevented students from sitting for their “A-level” exams, the UK government deployed an algorithm that combined teacher assessments with schools’ historical exam results to calculate grades. Almost 40 percent of students received lower grades than anticipated, and the UK quickly retracted the algorithm-generated grades in the face of protests.
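The mechanics behind the backlash are easy to illustrate. Below is a minimal sketch, in Python, of how standardizing an individual's grade against their school's historical results can cap high achievers at historically lower-performing schools. This is a deliberately simplified, hypothetical model, not a reconstruction of the UK government's actual algorithm; all names and numbers are invented.

    # Hypothetical illustration of rank-based grade standardization;
    # the UK's actual model was far more elaborate.

    GRADE_ORDER = ["U", "E", "D", "C", "B", "A", "A*"]  # worst -> best

    def standardized_grade(student_rank, class_size, historical_grades):
        """Map a student's rank in their class onto the grade
        distribution their school achieved in previous years."""
        past = sorted(historical_grades, key=GRADE_ORDER.index)
        # Student's standing within the class (rank 1 = top of class).
        standing = (class_size - student_rank) / class_size
        index = min(int(standing * len(past)), len(past) - 1)
        return past[index]

    # A top-ranked student at a school whose past cohorts never earned
    # an A cannot receive one, however strong their individual record:
    past_results = ["C", "C", "B", "B", "B", "C", "D", "B", "C", "E"]
    print(standardized_grade(student_rank=1, class_size=30,
                             historical_grades=past_results))  # -> "B"

Because the sketch draws only from the school's past distribution, a student's own ability never enters the calculation, which is one plausible way an approach like this lowers grades for strong students at historically weaker schools.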

As technology develops at a rapid pace, governments around the world, from the UK to New York City, have faced the challenge of adopting innovative tools like algorithms and automated decision-making while limiting harm to constituents. Cathy Pagano of the Women’s Bar Association of DC and Kelsey Kober of the Information Technology Industry Council explored this challenge at our November 2022 iLegis conference in a panel titled How Can We Legislate Algorithms? Lessons Learned from New York City.

Algorithms, which can be broadly defined as rules or sets of instructions for a computer to follow, can make many functions of running a government easier. Algorithms can help determine where in a city to build fire stations, quickly analyze large sets of healthcare data, and reduce employee workloads by performing routine administrative tasks. However, automated decision-making comes with considerable pitfalls, as the UK government quickly learned. Beyond the practical limitations of using a computer to simulate human judgment, drawbacks include the possibility of bias and discrimination as well as the privacy and security concerns raised by collecting large amounts of constituent data to feed into algorithms.
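As a concrete illustration of how simple an "algorithm" in this sense can be, here is a minimal, entirely hypothetical Python sketch of an automated rule a benefits office might use to triage applications; all field names and thresholds are invented:

    # A hypothetical automated decision rule for triaging benefit
    # applications; field names and thresholds are invented.

    def triage_application(application):
        """Route an application to fast-track, manual review, or denial."""
        if (application["household_income"] <= 25_000
                and application["documents_complete"]):
            return "fast-track"     # routine case, no human review needed
        if application["household_income"] <= 40_000:
            return "manual review"  # borderline case, a human decides
        return "deny"               # over the program's income ceiling

    print(triage_application({"household_income": 22_000,
                              "documents_complete": True}))  # fast-track

Rules like this can process thousands of applications in seconds, but the same rigidity that makes them fast means any flawed assumption baked into a threshold is applied uniformly to every constituent, which is exactly the pitfall the UK grading episode exposed.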

However, policymakers at all levels of government are taking action to ensure algorithms are handled responsibly. In the 117th United States Congress (2021-2022), a variety of legislation was introduced to address the appropriate use of algorithms. This included the “Algorithmic Accountability Act of 2022” (S. 3572, H.R. 6580), which would have required new transparency and accountability for automated decision systems, and the “DATA Act” (S. 1477), which would have required large websites and social networks that use algorithms to suggest content to obtain express consent from their users before collecting or sharing their personal data. Neither bill was enacted, and both died with the end of the 117th Congress. The 118th Congress convened on Jan. 3, 2023, and it will be interesting to watch what new legislation is introduced during this two-year session.

In October 2022, the Biden Administration released the Blueprint for an AI Bill of Rights, which outlines principles intended to protect the rights of Americans in an algorithm-driven world. Multiple U.S. federal agencies have addressed this issue as well, including the Federal Trade Commission, the Department of Commerce’s National Institute of Standards and Technology (NIST), and the Department of Defense. Internationally, the European Union’s General Data Protection Regulation (GDPR) gives individuals the right not to be subject to a decision based solely on automated processing if that decision has legal or similarly significant effects on them. In April 2021, the European Commission published a draft law to regulate AI, which would impose documentation, training, and monitoring requirements on AI tools. The EU Member States, acting through the Council of the EU, reportedly approved a compromise version of this proposed Artificial Intelligence Regulation (AI Act) on December 6, 2022. The European Parliament is scheduled to vote on the draft AI Act by the end of March 2023; afterwards, further negotiations among the Member States, the Parliament, and the Commission are expected, with hopes of adopting the final AI Act by the end of 2023. Australia, Singapore, and Brazil have all proposed AI regulatory frameworks, and the UK, India, and Canada have developed strategies to bolster their economies’ competitiveness in AI.

Moreover, state and local governments within the U.S. are looking to regulate this complicated policy area. Alabama, Colorado, Vermont, and Illinois have all passed laws establishing an advisory board or oversight mechanism to evaluate the use of AI and automated decision-making. California and Washington have both introduced legislation that would regulate private industry’s deployment of automated decision systems, though those bills did not move forward. Most notably, New York City convened a Task Force to examine government use of automated decision systems and make policy recommendations to the Mayor and City Council.

During the iLegis conference, the panelists gave an overview of the New York City Automated Decision Systems Task Force’s activities. Throughout 2018 and 2019, the Task Force held seven public meetings across New York City and more than a dozen member meetings. Composed of a diverse group of academics, city officials, technology researchers, and representatives from social justice organizations, the Task Force took its public engagement role seriously, collecting over 400 public recommendations from New Yorkers. In November 2019, the Task Force released a report detailing its policy recommendations, which clustered around two main suggestions: 1) create resources and infrastructure within the New York City government to assist agencies in using these systems, and 2) broaden public education and discussion on algorithmic decision-making.

One major challenge the Task Force faced was the New York City government’s inability to share information about which automated decision systems the city used and for what purposes, which significantly hindered the Task Force’s ability to conduct full case studies. Nonetheless, the panelists agreed that the city’s undertaking of this work was admirable and should be replicated across the country, and that New York City’s subsequent creation of a new Algorithms Management and Policy Officer position is a positive outcome.

Kober then outlined specific steps legislative drafters should consider when crafting policy related to algorithms and automated decision-making:

  • To protect constituents’ civil rights while reducing the administrative burden on regulators and other city officials, legislative drafters should consider a risk-based approach that divides specific algorithmic use cases into different regulatory categories. “Low risk” use cases, such as the deployment of algorithms for back-office operations, need not be subject to regulation, while higher-stakes areas like criminal justice, healthcare, and financial transactions should receive some level of oversight to guard against bias or arbitrary decision-making (see the sketch after this list).

  • Policymakers should consider following the New York City Automated Decision Systems Task Force’s lead in deciding whom to bring to the table. Academics, technologists, and social justice nonprofits all have valuable expertise and perspectives that can help ensure the responsible deployment of algorithms and automated decision-making. Additionally, policymakers from the legislative and executive branches have a strong incentive to involve themselves in these discussions and to share information about algorithms currently used by the government when able.

  • To ensure that the regulatory environment keeps pace with innovation, policymakers should track recommendations from standards bodies closely. For instance, the International Organization for Standardization and International Electrotechnical Commission’s Joint Technical Committee (ISO/IEC JTC 1) has already developed several standards around AI and is developing a suite of others. Legislative drafters could also create a body within the administration or executive branch that tracks technology developments and recommends policy changes that reflect new realities.

  • One of the best ways to curtail bias in AI and automated decision-making over the long term is to invest in STEM education for underprivileged and underrepresented communities. Policymakers should also engage regularly with higher education institutions that serve minority and disadvantaged communities, and invest in scholarship and mentorship programs to ensure that all communities have the opportunity to enter the technology field.
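To make the risk-based approach from the first recommendation concrete, the sketch below shows one way implementing guidance might encode regulatory tiers in code. The categories and the mapping are assumptions for illustration, not drawn from any enacted law:

    from enum import Enum

    class RiskTier(Enum):
        LOW = "exempt from algorithmic oversight"
        HIGH = "subject to audit and bias testing"

    # Hypothetical mapping of government use-case domains to tiers,
    # mirroring the low-risk / higher-stakes split described above.
    USE_CASE_TIERS = {
        "back_office_operations": RiskTier.LOW,
        "records_management": RiskTier.LOW,
        "criminal_justice": RiskTier.HIGH,
        "healthcare": RiskTier.HIGH,
        "financial_transactions": RiskTier.HIGH,
    }

    def oversight_for(use_case):
        # Unknown domains default to the stricter tier rather than exemption.
        return USE_CASE_TIERS.get(use_case, RiskTier.HIGH).value

    print(oversight_for("criminal_justice"))        # subject to audit and bias testing
    print(oversight_for("back_office_operations"))  # exempt from algorithmic oversight

Defaulting unlisted use cases to the stricter tier reflects a common drafting choice: it keeps novel applications under oversight until lawmakers affirmatively exempt them.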

In summary, as technology continues to develop at an astounding rate, governments around the world are working to address the challenges of adopting innovative tools like algorithms and automated decision-making while limiting harm to constituents. Policymakers are moving forward to address these concerns, and involving a wide variety of stakeholders and reviewing what policymakers worldwide are doing will greatly assist this process.
