Paper and Video Essay

Design and Development of Systems

Creation vs. maintenance views of developing AI:

  1. Creation: “focus on finding new places and ways to use technologies and new insights that AI might yield when ML is applied to massive datasets to find relationships in the data”
  2. Maintenance: “surfaces problems with existing systems and attempts to mitigate those harms (for instance, by making them more fair, accountable, and transparent)”

“When designers of these algorithmic systems train computational models that ignore transgender identity, these systems demand that trans people somehow shed an identity they can’t; identities that cisgender people hardly ever bother to regard.”

“Designers of sociotechnical systems have repeatedly built computational systems and models rendering decisions that exacerbate and reinforce historical prejudices, oppression, and marginalization”

For those of us who can simply set race, gender, or sexuality aside, these systems let us pass through relatively unscathed. But for those of us who can’t ignore those dimensions of who we are, those very aspects make us stick out. More examples in Design Justice.

Utopia

A utopia implies perfection, and perfection implies there is nothing left to correct: no feedback. ML models behave as if they live in a perfect world unless told otherwise.

Related: “The Ones Who Walk Away from Omelas” (Ursula K. Le Guin)

‘Truth’ and Feedback Loops

“Absurdity follows when algorithmic systems deny the people they mistreat the status to lodge complaints, let alone the power to repair, resist, or escape the world that these systems create.” How do feedback loops play into these systems? Is it possible to create good human-in-the-loop ML?
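One hedged answer in miniature, as a sketch (all names here — in the sense of `Decision`, `decide`, `appeal`, `CONFIDENCE_FLOOR` — are hypothetical, not from the essay): automate only the decisions the model is confident about, route the rest to people, and treat appeals as a feedback channel rather than noise.

```python
# A human-in-the-loop sketch: automate only confident decisions, defer the
# rest to human review, and keep appeals as feedback instead of discarding
# them. Everything here is hypothetical, not from the essay.
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.9  # below this, a human decides, not the model

@dataclass
class Decision:
    subject_id: str
    label: str
    confidence: float
    decided_by: str  # "model" or "human"

review_queue: list[dict] = []   # low-confidence cases awaiting a person
appeals: list[Decision] = []    # contested decisions, kept as training signal

def decide(subject_id: str, label: str, confidence: float) -> Decision:
    """Automate only when the model is confident; otherwise defer to a human."""
    if confidence < CONFIDENCE_FLOOR:
        review_queue.append({"subject_id": subject_id, "model_label": label})
        return Decision(subject_id, "pending-review", confidence, "human")
    return Decision(subject_id, label, confidence, "model")

def appeal(decision: Decision) -> None:
    """The part most deployed systems omit: people the system mistreats can
    lodge a complaint, and that complaint feeds the next training round."""
    appeals.append(decision)
```

Whether this counts as “good” human-in-the-loop still depends on the power dynamics the essay describes: the appeal channel only matters if appeals can actually override the model.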

“Absurdity and tragedy tend to manifest when bureaucratic imaginations diverge from reality and when people can’t override the delusions baked into those imaginations” It’s dangerous when a single source dictates the truth.

But when an institution does wield power and people can’t simply leave, it can (and does) grow more and more detached from the lives and needs of the people it governs. Such bureaucracies construct their own worlds, where everything gets “rationalized” in simplified, reductive language.

“People talk about ‘debiasing’ data and reviewing code before a model is trained and deployed. What I’m saying is that even if you’ve done everything right, if you don’t pay attention to the power dynamics as they unfold and play out, the system out in the world is going to drift further and further away from reality.”
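A toy simulation of that drift (my construction, assuming a uniform ‘true world’ and naive retraining; none of the numbers come from the essay): a system that only observes outcomes for the cases it accepted learns from an ever-narrower slice of reality, even if the initial cutoff was chosen carefully.

```python
# Toy simulation of drift from one-sided feedback: the system only ever
# observes the cases it accepted, retrains on them, and its threshold
# ratchets away from where it started. All numbers here are made up.
import random

random.seed(0)
threshold = 0.5  # start from a cutoff everyone agreed was "debiased"

for generation in range(8):
    applicants = [random.random() for _ in range(1000)]   # true world: uniform
    accepted = [x for x in applicants if x >= threshold]  # rejections vanish
    if not accepted:
        break  # the system has narrowed its world down to nothing
    # Naive retraining on observed (accepted) data only: the new cutoff is
    # the mean of what the system let itself see, so it only moves upward.
    threshold = sum(accepted) / len(accepted)
    print(f"generation {generation}: threshold drifts to {threshold:.3f}")
```

The rejected cases never produce feedback, which is exactly the missing signal the quote points at.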

Systematized classification and quantification of the world act as an interpretive and transformational force. In other words, quantification changes the world.
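A tiny illustration of that point (hypothetical schema and data, mine, not the essay’s): once the categories are fixed, records that don’t fit are not merely described poorly; they are dropped, and everything downstream inherits the transformed world.

```python
# Classification as a transformational force, in miniature: a fixed schema
# does not just describe people badly, it rewrites the world that downstream
# systems get to see. Hypothetical data and schema.
ALLOWED_GENDERS = {"M", "F"}  # the bureaucracy's "rationalized" categories

people = [
    {"name": "A", "gender": "F"},
    {"name": "B", "gender": "nonbinary"},
    {"name": "C", "gender": "M"},
]

# Ingestion silently drops anyone outside the schema; every model trained on
# `ingested` then learns a world in which such people never existed.
ingested = [p for p in people if p["gender"] in ALLOWED_GENDERS]
print(f"kept {len(ingested)} of {len(people)} records")  # kept 2 of 3
```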

Why monopolies (over data and power) are bad: a bureaucracy without monopoly power must self-correct (or be corrected), because an unresponsive bureaucracy has no place in a world where people can freely walk away from it or reject its nonsense (i.e., give feedback).

Abridged Maps

Abridged maps as Potemkin villages, producing a simplified and therefore inaccurate view of the world. It’s not necessarily wrong to create ‘abridged maps’; the problem comes when the map is projected back onto the world to try to create change.

“When modelers and designers of influential systems use these maps as guides to substantially transform the world, the abridgements and the omissions they make become targets of erasure.”

“In the process of training a model, the algorithm creates its own world — it generates its own sort of utopia where things are clear and calculable. That system imposes its model upon the world, judging and punishing people who don’t fit the model that the algorithm produced in the interest of some ostensibly objective goal that designers insist is better than decisions humans make in some or many ways.”

These systems become more actively dangerous when they go from “making sense of the world” to “making the world make sense”

There’s no dataset in the world that adequately conveys white supremacy, or slavery, or colonialism. (see: data distributions)

So at best these systems generate a facsimile of a world, with the shadows of history cast on the ground: skewed, flattened, and always lacking the depth that only living those experiences can bring. Once again, this creates a Potemkin village of the true problem: an incredibly reductionist view of complex problems.

See also: map as territory

Metis

James C. Scott, in Seeing Like a State, describes metis, which he translates substantively as the intelligence required to adapt to new and changing circumstances.

Metis is more than constructing any number of “rules of thumb”. Rather, knowing how and when to apply those rules in a concrete situation is the essence of metis. Isn’t metis, then, just the frame problem?

“A person without the lived experience of disabilities can never truly understand what it means to be ‘like’ someone who experiences it.” Disability simulation doesn’t work; why, then, do we let ML systems attempt it, let alone systems with no metis at all?

Important in the context of traditional knowledge (TK)