The Challenge of Regulating AI

Marianne Bellotti
Rebellion Defense
Apr 6, 2023 · 4 min read

The goal of NIST’s AI Framework is to provide a structured approach to assessing and managing the risks involved in developing and deploying AI solutions. Overall, NIST has the right idea. Rather than starting from scratch, we should be leveraging the lessons learned from security and safety engineering. These approaches are grounded in user research, risk assessments, and failure testing.

However, no one has been able to fill in the specifics of what those things should look like for AI. NIST’s big problem is that early conjecture from researchers is not holding up. Algorithmic fairness can do more harm than good to marginalized communities. Transparency and explainability do not stop people from completely misusing technology. And while the scope of responsibility in security and safety is well understood, AI practitioners must also contend with society’s general anxiety around the long-term impacts of this technology, which pushes the boundaries of the challenge into greater ambiguity.

Scope and scale challenges in the NIST AI playbook

The draft playbook NIST has released illustrates many of the problems regulators face. Because there’s little consensus and even less data on what works, the playbook attempts to account for every theory, every risk, every possibility.

Large organizations will have no problem producing the in-depth analysis of complex local and global risks suggested by the NIST playbook. However, the tech industry has always been driven by hobbyists and tinkerers. Unlike other fields of engineering, we encourage those without formal training or qualifications to scrub in. We build tools specifically to help them do that. Small teams with few resources can and often do compete on equal footing with much more established organizations.

That’s because those established organizations are usually making too much money on the way things are to take any serious interest in change. The big mainframe producers had the technology to enter the world of desktop computing as a dominant force and missed out because the new technology threatened the revenues of the old. The companies that replaced them then missed out on mobile and cloud computing for exactly the same reason. As AI continues to mature, the companies that will make the biggest impact in this space are not the large transnational corporations that dominate tech today but smaller organizations that will find large, complex regulatory frameworks impossible to manage. Case in point: OpenAI has fewer than 400 employees in total.

For every exceptional small or medium-sized team like OpenAI, there will be hundreds that pick up the tools, use them inappropriately, and fail. This is the audience NIST’s AI Framework, or any similar regulation, needs to serve. Odds are the AI most people will interact with in the future will look like whatever the least informed and most inexperienced practitioners can produce. Regulation needs to address what strategies are viable under those conditions.

How can NIST improve its AI Framework?

Rebellion Defense was thrilled to have the opportunity to contribute to the conversation via public comment (which you can read here). How can NIST improve the usefulness of its AI Framework? We believe the best way to productively narrow the focus of risk projections is to focus on safety.

While there’s lots of thoughtful commentary on how AI impacts climate change, social justice, and other complex, long-term issues, AI is often just one factor in a dynamic system shaping those outcomes. The relationship between technology and impact in these spaces is not deterministic. Every organization should be sensitive to the long-term impacts of its products, but regulation must stay within the bounds of what can reasonably be foreseen.

By focusing the Framework on a concept that is well scoped and can be properly defined, like safety, NIST can hold organizations of different sizes accountable to the same standards. Broadly defined criteria give organizations the ability to cherry-pick which aspects of the regulations apply to them. The more concerns NIST factors into the Framework, the less powerful the Framework becomes at ensuring positive outcomes.

We also echo feedback from others that the primary benefit of AI lies in the human-computer partnership, not in replacement or general intelligence. NIST’s Playbook frequently refers to user research and design, but could go into more detail. The techniques and approaches for designing these partnerships exist, and the Playbook should highlight them. In particular, we’ve been experimenting with the design patterns created by Spoto and Oleynik for generative adversarial networks, but Shneiderman’s 2D matrix of human control vs. AI control is also gaining momentum among practitioners.
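To make the second idea concrete, here is a minimal sketch, written by us for illustration and not taken from the Playbook, of how a design review might record where a feature sits on Shneiderman’s two axes (human control and computer automation). The class name, 0–10 scale, thresholds, and quadrant labels are all assumptions for the example; Shneiderman’s key insight is simply that the two axes are independent, and that the most reliable, safe, and trustworthy designs score high on both.

```python
from dataclasses import dataclass

# Shneiderman's framework plots a system on two independent axes:
# how much control the human retains, and how much automation the
# computer provides. The names, scale, and labels below are our own
# illustrative encoding, not part of the NIST Playbook.

@dataclass
class HumanAIAssessment:
    feature: str
    human_control: int        # 0 (no human control) .. 10 (full human control)
    computer_automation: int  # 0 (fully manual) .. 10 (fully automated)

    def quadrant(self) -> str:
        """Classify the design into a loose paraphrase of Shneiderman's quadrants."""
        hi_human = self.human_control >= 5
        hi_auto = self.computer_automation >= 5
        if hi_human and hi_auto:
            return "high control + high automation: the reliable, safe, trustworthy goal"
        if hi_human:
            return "high control, low automation: mostly manual work"
        if hi_auto:
            return "low control, high automation: review for excessive autonomy"
        return "low control, low automation: little value from either side"

# Hypothetical design-review record for a decision-support feature:
assessment = HumanAIAssessment(
    feature="candidate ranking with analyst override",
    human_control=8,
    computer_automation=7,
)
print(assessment.feature, "->", assessment.quadrant())
```

A lightweight artifact like this makes the human-computer partnership an explicit, reviewable property of a design rather than an afterthought, which is the level of specificity we’d like to see the Playbook reach.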

Overall, the NIST Framework sets a solid foundation for what regulation in this space should look like. We’re looking forward to watching the thinking here mature and to future iterations of the Playbook adding more detail to the conversation.

Interested in joining Rebellion’s engineering team? Check out our current openings here.


Author of Kill It with Fire: Manage Aging Computer Systems (and Future Proof Modern Ones)