When YouTube demonetizes a creator without explanation, or an algorithm denies your insurance claim, where's the paper trail? Where's the accountability?
Working as a chemical engineer at a renewable plasma pyrolysis company, I was introduced to Good Manufacturing Practice (GMP). During plant walkarounds (not just the plant floor, but also the mechanical, electrical, and quality control trailers), it was mind-boggling how thoroughly everything was documented and how clearly every decision was traceable.
At Monolith, the most oft-repeated mantra was "Safety Matters Most," and it generalizes to every aspect of safety: the safety of the operators, the end user, the equipment, and the product.
Critical process changes were documented several times over, both informally and formally, but at the end of the day the communication was clear and there was always a set point where the buck stopped. It should be noted that, because Monolith was using novel plasma process technology, there was significantly more documentation of minutiae than I hear about from process engineer friends in other manufacturing fields. Those industries have their own relevant standards: biomedical tech, cosmetics, food processing, and pharmaceuticals all operate under strictly enforced safeguard regulations.
These regulations aren't pedantry; they're laws written in blood. In 1984, an American-owned Union Carbide plant in Bhopal, India leaked about 40 tons of methyl isocyanate, a poison gas that rolled through sleeping neighborhoods and killed thousands, because safety systems were offline, maintenance had been deferred, and no systemic accountability framework existed to catch the cascading failures. That disaster motivated the creation of modern Process Safety Management standards. When the alternative is harm to life, the stakes justify the bureaucracy. I remember working in a microfluidics lab during undergrad and always keeping a spare set of long pants on campus in case an EHS (Environment, Health & Safety) auditor came walking through the labs.
But today we exist in digital systems just as much as physical ones. I spend most of my day interfacing with algorithms and encounter zero reactor vessels in my day-to-day life. The rules and structures of those systems shape our lives as much as any law, yet they have no frameworks for iterative governance or feedback. And the harm to life can be very real when critical systems like insurance approval, rent agreements, and employment are determined by algorithms. This asymmetry really bugs me: you can track down the MOC (Management of Change) record for who bumped an additive's flow rate by 1 kg/hr, and why, but you can't trace an algorithm's decision. Where's the digital equivalent of an MOC? Who's the EHS auditor for code?
Creators on YouTube and other platforms often lambast the algorithm's finicky approach to demonetization and the lack of any recourse or real support from the platform to restore what is, for many, the core of their livelihood. Algorithms determine your eligibility for life-altering things like credit, insurance, and leases. Surely the stakes here are just as high, and they necessitate the same kind of accountability.
It's not that accountability is architecturally impossible in digital spaces; we have plenty of working examples. But to understand why it matters where we build accountability, we need to understand how digital systems actually work.
The Lessig Framework: Code is Law
Lawrence Lessig's book Code and Other Laws of Cyberspace (1999) holds more relevance as time goes on and our world becomes increasingly dependent on digital infrastructure. He reveals the dichotomy of regulation between the physical and cyber worlds. In physical space, laws regulate behavior, but those laws are only as strong as their enforcement: you're free to break them and potentially face consequences. In digital spaces, code infrastructurally regulates your behavior; it literally defines what you can and cannot do.
Code is written by engineers, not elected officials... so who holds them accountable? Do we even have systems for such accountability, and are they being used? How are infrastructural design choices managed and encoded? It's not inherently bad that this is how digital systems work and are built, but it does mean that we should be paying much closer attention to HOW these systems are built.
And code isn't just explicit rules written by engineers anymore; it increasingly incorporates statistical systems trained on data. These systems inherit the biases baked into their training data: if historical loan data reflects redlining, the algorithm learns to redline. If hiring patterns favored certain demographics, the model perpetuates that preference. The opacity deepens because even the engineers who built the system often can't explain why it made a specific decision. But the lack of explainability doesn't reduce the stakes; it increases the need for structural accountability.
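To make that mechanism concrete, here's a minimal sketch with entirely made-up numbers (the zip codes and decisions below are hypothetical, not real lending data): a "model" that only learns historical approval rates per zip code will faithfully replay whatever bias those records already contain.

```python
# Minimal illustration with synthetic data: a "model" that learns only the
# historical approval rate per zip code reproduces the bias baked into
# its training records.
from collections import defaultdict

# Hypothetical historical lending decisions: (zip_code, was_approved)
history = [
    ("20001", True), ("20001", True), ("20001", True), ("20001", False),
    ("20019", False), ("20019", False), ("20019", True), ("20019", False),
]

# "Training": tally approvals and totals seen for each zip code.
counts = defaultdict(lambda: [0, 0])  # zip -> [approvals, total]
for zip_code, approved in history:
    counts[zip_code][0] += int(approved)
    counts[zip_code][1] += 1

def predict(zip_code: str) -> bool:
    """Approve if the historical approval rate for this zip code was >= 50%."""
    approvals, total = counts[zip_code]
    return (approvals / total) >= 0.5

# Two otherwise identical applicants, different zip codes:
# the model simply replays history.
print(predict("20001"))  # True  -- neighborhood historically approved
print(predict("20019"))  # False -- neighborhood historically denied
```

Real systems are far more complicated, but the failure mode is the same: the model optimizes for matching past decisions, and past decisions carry past bias.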
So if code is law, and code is increasingly based on statistical systems which have their own biases, what do accountable and democratic systems look like? We already have some pretty solid examples.
Case Study: Git
Git is a version control system used by nearly every programmer. Without version control tools like Git, collaborating on code can look like emailing zipped folders back and forth, manually tracking who changed what, and praying no one overwrites someone else's work: total chaos. Git solves this not by asking programmers to be more careful, but by making every change automatically tracked and visible.
There are a few key features baked into how Git works that inherently add accountability. Git keeps a commit history, which records every change with a timestamp and an author (and a good descriptive message too, if the author's diligent). git blame lets you see exactly who changed which line and when, git diff shows exactly what changed between versions of a file, and git revert lets you undo any recorded change, so you can always get back to a previous checkpoint.
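As a concrete sketch, here's what those features look like at the command line (the file name and commit hash below are placeholders, not from a real project):

```shell
# Every change, with author, timestamp, and message
git log

# Who last touched each line of this file, and in which commit?
git blame reactor_controls.py

# What exactly changed between the previous commit and the current one?
git diff HEAD~1 HEAD

# Undo a specific change, while keeping a record that you did so
git revert <commit-hash>
```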
Git doesn't need you to trust your collaborators, or to keep a super diligent file management and metadata tracking system: trust is unnecessary because the architecture makes everything visible. Writing code collaboratively is so common and necessary at scale that it couldn't be left to trust and goodwill alone. That's why you have separate dev and prod environments, require checks in GitHub Actions, and protect main against direct pushes. And baking visibility into the system doesn't add friction to the software's purpose in any way; if anything, it makes Git so much stronger as a version tracker.
Open source software, tracked using Git and hosted on GitHub, adds another layer: not only is every change tracked, the entire history is public. Anyone can audit the code, pitch in to improve it, or use it as inspiration for their own projects.
Git shows us that version control and visibility work at the level of individual files and repositories. But what about knowledge at scale: can these principles work for collaborative truth-building?
Case Study: Wikipedia
This pattern of visible digital systems scales beyond code, and Wikipedia is a perfect example: collaborative knowledge building in a universally accessible, free, online encyclopedia. Maintained entirely by volunteers, Wikipedia is the most-accessed reference work ever, and the very way it functions bakes in transparency.
Anyone can edit a Wikipedia article, but every edit is logged. You can see every change, who made it, and when. These are live documents, with disputes hashed out on talk pages in public displays of reasoning and perspective. Vandalism is easily undone because the structure makes reversion so easy. Wikipedia editors are so hyper-vigilant that there's a whole category of memes dedicated to the phenomenon of a person's article being updated with their date and cause of death before mainstream news even breaks the story.
It's not perfect; plenty of politics comes in through the different layers of governance, like administrators and arbitration committees. Edit wars can erupt and bias issues are inevitable, but it's still incredible that such a brilliant thing exists and works at all. Not just that it works, but that it works well! I remember teachers in middle school warning us against Wikipedia since anyone could edit it, but that openness is precisely what keeps it accurate (especially on technical articles). Through the diligence of Wikipedia's community, technically inaccurate information is quickly obliterated, and claims made without citations get annotated with "Citation needed". Over seven million articles and thousands of active editors, all building a system where accountability is structural.
This pattern is even inherited by fan wikis, and similar structures exist in other communities that need to manage collaborative editing, like forums: track changes, make history visible, and make reversion and iteration easy.
So we have accountability for code (Git) and knowledge (Wikipedia). But what about the most consequential writing of all, the laws that govern us?
Case Study: DC's Laws on GitHub
At the end of the day, software, encyclopedia articles, and laws are all forms of writing. Legal documents like statutes are similarly iterative, but the workflows, transparency, and pace are wildly different. That still didn't stop DC from applying software engineering's transparency structures to governance.
The Government of the District of Columbia publishes the laws of the DC Code on GitHub. To note, though: this is an authoritative copy (coming from the recognized official body), not a definitive (defining) one. If there were ever a divergence between the code on GitHub and the version the City Council passed, the Council's version would have authority. Still, this hugely increases accessibility and transparency. In 2018, Joshua Tauberer even opened a PR to fix a typo.
Though some concern is warranted about relying on a third-party company like Microsoft for public governance (see the German government trying to migrate away from MS 365 with its own openDesk software), this is a great idea and a step in a very positive direction for the openness of the legal system. With proper support and infrastructure, a git-like open and public system for legal documents could greatly increase democratic access to the legal frameworks that govern us. Imagine being able to see exactly which lobbyists and interests motivated each part of a bill! Or a single universal document framework for digital legal documents.
Git, Wikipedia, and DC's open code are just three examples, but they prove transparency is technically possible. They're not hypothetical ideals; they're working systems used by millions. So why don't our most consequential algorithmic systems look anything like this?
Opacity by Design
If Git, Wikipedia, and open governance structures show us that transparency is architecturally possible, why are so many of our most consequential digital systems opaque by design?
Stafford Beer's principle of POSIWID (the purpose of a system is what it does) cuts through the noise: if a system consistently fails its stated purpose, that failure is the purpose. If algorithms consistently produce biased outcomes while evading transparency, then producing biased outcomes without accountability is, functionally, what they're designed for.
Cathy O'Neil's Weapons of Math Destruction names the pattern: a WMD is (1) opaque, (2) unregulated, (3) scalable, and (4) nearly impossible to contest. These systems encode human bias while presenting themselves as mathematically objective. When YouTube demonetizes a creator's livelihood with no explanation, when teacher evaluation algorithms determine employment without showing their work, when credit scoring perpetuates redlining through zip code proxies, the opacity isn't a bug. It's a shield that protects the system from accountability. You can't challenge what you can't see, can't appeal what you can't understand, can't prove discrimination when the algorithm is proprietary.
As AI and statistical algorithms are further integrated into the world around us, there's no telling how far this could scale, or what choices human decision makers will make. You can already see it playing out with Denver's Flock camera system. Mayor Johnston extended the Flock license plate surveillance contract despite City Council opposition calling it "undemocratic." The system flagged a woman's license plate as being involved in a package theft simply because she had driven through the town of Bow Mar. Police showed up at her door acting as if she were already guilty, saying, "You know why I'm here [...] you know we have cameras in that town, you can't get a breath of fresh air in or out of that place without us knowing". There was no clear process for accountability; the algorithm's error became her burden to prove. The viral national backlash revealed exactly this: opacity protects the system, not the people it affects.
It's a crying shame: the same technologies that enable Git's transparency could be applied here. Version control, change logs, audit trails: none of this is technically impossible. The opacity is a choice. When bias is encoded but presented as mathematical objectivity, when decisions are irreversible but unexplainable, when systems scale without oversight, that's not a bug. That's the point. The question isn't whether we can build accountable algorithmic systems. We already know we can. The question is how, and whether, we will build them.
The answer lies not in asking nicely, but in mandating structure.
Architecture, Not Goodwill
Chemical engineers didn't just say "Safety First!"; they made it architecturally impossible to bypass safety checks, skip documentation, or make untracked changes to critical processes. Monolith, like most engineering firms, had so much documentation that it needed to hire multiple Document Control Specialists. Far from being excessive, that's the base expectation under Good Manufacturing Practice and safety standards. It's not about questioning people's intentions, but when lives are at stake (when one mishap could turn your reactor into a massive hydrogen explosion), good intentions aren't enough.
You can't just say "Safety Matters" in a rousing speech; you need to see it through and build it into the structure. You make it impossible to skip the safety check, to bypass the documentation, to make an untracked change to a critical process.
The same logic applies to digital systems, and even more urgently with statistical code being pushed into everything. We cannot rely on tech companies to be benevolent, and can't trust that algorithms will be fair. We can't hope that platforms will prioritize user welfare over engagement metrics and growth. Accountability has to be architectural.
We already know how to build these systems; Git, Wikipedia, and open governance prove it. The question is how we force them to exist. Process Safety Management standards didn't emerge from engineers' goodwill; they were mandated after Bhopal killed thousands. What's the digital equivalent? We shouldn't sit around and wait for another Bhopal.
We need to do the work and define what digital EHS auditors could look like. We need algorithmic MOCs. We need the architecture to match the stakes. Not because the people building these systems are necessarily malicious, but because, as any engineer knows, hoping for perfect execution isn't a safety plan. You build systems that catch failures, that make accountability structurally unavoidable.
The technology exists. The question is whether we'll demand it before or after our own digital Bhopal.
References and Further Reading
- Wikipedia: Bhopal disaster
- Wikipedia: Process Safety Management (OSHA regulation)
- Harvard Magazine: Code is Law
- The Economist: Wikipedia is 20, and its reputation has never been higher
- Know Your Meme: Wikipedia Editors When Someone Dies
- Wikipedia: Coverage of death
- Wikipedia: Administration
- Wikipedia: Citation needed
- GitHub: DC Council Law XML
- Ars Technica: How I changed the law with a GitHub pull request
- Wikipedia: 2024 CrowdStrike-related IT outages
- Heise: Microsoft 365 alternative openDesk version 1.0 announced
- Wikipedia: POSIWID (The Purpose of a System Is What It Does)
- Denverite: Denver extends Flock surveillance cameras despite pushback from city council
- Denverite: Majority of City Council blasts Denver's contract with Flock, saying it's 'undemocratic'
- Colorado Sun: Flock camera flags innocent woman in package theft investigation
- Denverite: Denver is at the center of a viral national fight over surveillance
- Code and Other Laws of Cyberspace (1999) — Lawrence Lessig
- Weapons of Math Destruction (2016) — Cathy O'Neil