A pedagogy for robots?

Algorithms are not merely finite series of procedures of computation but are also generative agents conditioned by their exposure to the features of data inputs
Louise Amoore

There is a need for an AI ethics – but how can it be implemented broadly? Through a pedagogy for training AI at large?

Make the algorithm our pet

Hypothesis: For a capable infrastructure of algorithms, a dictum of transparency – i.e. opening the algorithmic black box – is not a feasible way to handle harms.
By comparison, no one really understands the inner life of a pet dog, yet we can anticipate its behaviour, because we know its nature and how we trained it. Everything deemed right or wrong (in training a dog) lies within it.

This, however, does not guarantee to a third party that we did not train our dog to be aggressive and harmful. Nor would it change matters if that third party could see and understand the inner life of the dog: they could anticipate the danger, but the harm would remain. This holds whether or not the dog's trainer is held accountable, and whether or not the dog's training is made transparent.
How this hypothetical dog reacts depends on the interplay between the constitution of its inner life (its code) and the training it received (the data).

Now, with algorithms, we make this dog (so to speak) and thus determine its nature.

A modern Hippocratic oath

Hippocratic oath for data science


If one element of my past presence on a London street for a "Stop the War" campaign march enters a training dataset for a multinational software company's neural net, which, one day in the future, intervenes to detain some other person in a distant city, how is some part of my action lodged within this vast and distributed authorship?
Louise Amoore, Cloud Ethics, 2020, p. 20