Principles for Automation in Government

Bill Hunt
Dec 21, 2020 · 5 min read

This article is part three in a series on IT policy recommendations. A PDF of the full recommendations may be downloaded here.

Artificial Intelligence (AI), Machine Learning (ML), Robotic Process Automation (RPA)¹, and other related predictive algorithm technologies continue to gain attention. However, at the moment their promises are far greater than the reality, and instead of successes we continue to see the worst of ourselves reflected back. Vendors also continue to oversell the functionality of these tools, while glossing over major expenses and difficulties, such as acquiring and tagging training data.

The Trump Administration, rather than increasing scrutiny and oversight of these technologies, sought only to reduce barriers to their use. The Biden Administration will need to create stronger protections for the American people through better governance of the use of these solutions in government.

The problem is that humans have written our biases into our processes, and automation only expedites and amplifies these biases. (The book Automating Inequality explains this better than I ever could.) As a technologist, I become concerned when I hear of government agencies implementing these technologies for decision-making, as our unequal systems will only lead to greater inequity. It’s all too easy to “blame the algorithm” to avoid liability, but it is humans who create the algorithms.

Simply put, the Federal government cannot have racist chatbots. The government must not exacerbate the existing problem of minorities not receiving the benefits they deserve. And the government should not be using tools that can reinforce existing racism and sexism while remaining willfully ignorant of those dynamics. Yet despite all of these failures, we still see organizations running gleefully towards toxic ideas such as predictive policing and facial-recognition technology.

Fundamentally, this is a question of ethics. Although in government we have extensive ethics laws and regulations regarding finances and influence, there is almost no actual guidance on ethical practices in the use of technology. And in the U.S. there exists no standard code of ethics for software engineering, no Hippocratic Oath for practicing technology. However, we do have a series of regulatory proxies for ethics, in the form of security and privacy requirements aimed at protecting the data of the American people.

A diagram reflecting the balance between human versus computer decision-making and impact to human life and livelihood.

By requiring a series of controls — not unlike those that we use for IT security — we can make the use of these tools safer. Similar to the current National Institute of Standards and Technology (NIST) classifications for Low, Moderate, and High impact security systems, artificial intelligence systems should be classified by their impact on people, and the level of automation that is allowed must be guided by that impact. And like the NIST security controls, these must be auditable and testable, to make sure systems are functioning within the expected policy parameters.

For instance, a robot vacuum cleaner presents very little risk to life, though it can cause some inconvenience if it misbehaves, so very few controls and little human oversight would be required. But automation in the processing of loans or other benefits can disastrously impact people’s finances, so stronger controls must be implemented and more human engagement should be required.
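To make this concrete, here is a minimal sketch of how such an impact-based tiering might be expressed in code. The tier names, control fields, and audit intervals below are purely illustrative assumptions on my part, not an existing NIST standard or agency policy.

```python
# Illustrative sketch only: impact tiers mapped to baseline controls.
# Tier names, fields, and values are hypothetical, not an existing standard.
from dataclasses import dataclass
from enum import Enum


class ImpactLevel(Enum):
    LOW = "low"            # e.g., a robot vacuum: inconvenience at worst
    MODERATE = "moderate"
    HIGH = "high"          # e.g., loan or benefit adjudication


@dataclass(frozen=True)
class AutomationControls:
    human_review_required: bool   # must a person approve each decision?
    explanation_required: bool    # must every decision be explainable?
    public_notice_required: bool  # must the public be told the system is in use?
    audit_interval_days: int      # how often the system is independently audited


# The higher the impact on people, the more controls apply.
CONTROL_BASELINES = {
    ImpactLevel.LOW: AutomationControls(False, False, False, 365),
    ImpactLevel.MODERATE: AutomationControls(False, True, True, 180),
    ImpactLevel.HIGH: AutomationControls(True, True, True, 90),
}


def required_controls(level: ImpactLevel) -> AutomationControls:
    """Look up the baseline controls an automated system must implement."""
    return CONTROL_BASELINES[level]
```

The important property is that the controls are written down in a machine-checkable form, so compliance can be audited and tested rather than merely asserted.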

Most notable among these controls must be explainability in decision-making by computers. When a decision is made by a machine — for instance, the denial of a benefit to a person — we must be able to see exactly how and why the decision was made, so that the system can be improved in the future. This is a requirement that megacorporations have long railed against, due to the potential legal liabilities they may face in having to provide such documentation, but the Administration must not yield to these private interests at the expense of The People.
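As an illustration only, a decision record that supports this kind of explainability might look something like the following sketch; the field names and example values are my own assumptions, not any agency’s actual schema.

```python
# Illustrative sketch of an auditable, explainable decision record.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    applicant_id: str
    outcome: str                    # e.g., "approved" or "denied"
    reasons: list[str]              # plain-language factors behind the outcome
    inputs_used: dict[str, object]  # the exact data the system considered
    model_version: str              # which version of the system decided
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


# Example: a denial that a caseworker, or the applicant, can later inspect.
record = DecisionRecord(
    applicant_id="A-12345",
    outcome="denied",
    reasons=["reported income above the program threshold"],
    inputs_used={"reported_income": 54000, "household_size": 3},
    model_version="2020.12-rules",
)
```

The point is that every automated decision carries its inputs, its reasons, and the exact version of the system that produced it, so a human can reconstruct and challenge it later.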

Another key control will be transparency in the usage of these systems: all Federal agencies must be required to notify the people when such a system is in use. This should be done both through a Federal Records Notice, similar to the ones required for new information systems, and on the form, tool, or decision letter itself, so that the public is aware of how these tools are used. Standard, plain language descriptions should be created and used government-wide.

Related to that control, any system that makes a determination, such as whether to grant a benefit, must have a process for the recipient to appeal the decision to an actual human in a timely fashion. This requirement is deliberately burdensome, as it will actively curtail many inappropriate uses in government, since overtaxed government processes won’t be able to keep up with too many denied benefits. For instance, the Veterans benefits appeals system is currently entirely manual and has a backlog of a year or more, and some Veterans have been waiting years for their appeals to be adjudicated; if a system is seeing an unreasonably large number of appeals of benefit denials, that is a good indicator of a broken system.

Moreover, the result of that appeal must become part of the determining framework after re-adjudication, and any previous adjudications or pending appeals should be automatically reconsidered retroactively.
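A rough sketch of that feedback loop, again with hypothetical names throughout and building on the decision record sketched above, might look like this:

```python
# Illustrative sketch: when a human overturns an automated decision on appeal,
# the correction is recorded and prior decisions made by the same system
# version are queued for automatic reconsideration.

def process_appeal(record, human_outcome, decision_log, review_queue):
    """Apply a human appeal decision and trigger retroactive review."""
    if human_outcome != record.outcome:
        # The automated decision was overturned; record the correction so it
        # can inform future adjudications (e.g., rule updates or retraining).
        record.outcome = human_outcome
        record.reasons.append("overturned on appeal by a human reviewer")

        # Queue every other decision made by the same system version for
        # re-adjudication under the corrected framework.
        for other in decision_log:
            if other is not record and other.model_version == record.model_version:
                review_queue.append(other)
    return record
```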

There also exists a category of uses of Artificial Intelligence that the government should entirely prohibit. The most extreme and obvious example is the creation of lethal robots for law enforcement or military usage — regardless of what benefits the Department of Defense and military vendors try to sell us. Although there’s little fear of a science-fiction dystopia of self-aware murderbots, major ethical considerations must still be taken into account. If we cannot trust even human officers to act ethically under political duress, we certainly cannot expect robots devoid of any empathy to protect our citizens from tyranny when they can be turned against people with the push of a button.

Similarly, the government must be able to hold private companies liable for their use of these technologies, both in government and in the private sector. If something fails, the government legally owns the risk, but that does not mean that private companies should escape blame or penalties. The growing number of companies creating self-driving cars will inevitably lead to more deaths, yet these companies continue to avoid any responsibility. The National Highway Traffic Safety Administration’s recommendations on autonomous vehicles do not go nearly far enough, merely making the “request that manufacturers and other entities voluntarily provide reports.”

In short, the government must make a stand to protect its people, instead of merely serving the interests of private companies — it cannot do both.

For further reading, the governments of Canada and Colombia have released guidance on this topic, providing an excellent starting point for other governments.

[1] Some of us technologists have referred to RPA as “Steampunkification” instead of IT modernization, as the older systems are still left in place while newer tech is just stuck on top, increasing rather than decreasing the technical debt of an organization — much as Steampunks glue shiny gears onto old hats as fashion.
