Content Policy: Something Weird, Automatically and at Scale

In a recent piece published on Medium [1], and in a related TED talk [2], the writer and artist James Bridle maps out a particularly weird strand of contemporary online culture. Bridle looks into the world of YouTube content made for children, such as the various “finger family”, toy unboxing, “surprise egg” unwrapping and nursery rhyme videos, and the abyss stares back. As it turns out, YouTube content production for young children is a highly specialized and potentially lucrative endeavor that gets made through a strange combination of human production and machine learning.

These videos have the capacity to mesmerize children for hours through endless repetition and the interchangeability of their internal parts; the main tactic is to generate as many different outcomes as possible from a single premise. Once a successful trope is established, an enormous amount of similar content gets made, designed to replicate and expand on the producer’s success. This yields thousands of variations, and as the automation does its work, the content grows progressively stranger, to the point of becoming obscure and eventually nonsensical. Even the titles of these videos are nothing like unified semantic wholes meant to be read by a human; they operate more like strings of machine-readable tags, an attempt to capture as much bot traffic as possible. Examples include: “Surprise Play Doh Eggs Peppa Pig Stamper Cars Pocoyo Minecraft Smurfs Kinder Play Doh Sparkle Brilho,” “Cars Screamin’ Banshee Eats Lightning McQueen Disney Pixar,” and “Disney Baby Pop Up Pals Easter Eggs SURPRISE.”
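The mechanics described above can be caricatured in a few lines of code. The sketch below is purely illustrative (the keyword pools and function name are invented, not taken from any actual content farm): it exhaustively recombines a handful of established tropes into tag-like titles, producing dozens of “different” videos from a single premise.

```python
import itertools

# Hypothetical keyword pools, modeled on the tropes the videos recycle.
CHARACTERS = ["Peppa Pig", "Pocoyo", "Lightning McQueen", "Smurfs"]
FORMATS = ["Surprise Eggs", "Play Doh", "Finger Family", "Unboxing"]
HOOKS = ["Learn Colors", "Nursery Rhymes", "Kinder", "SURPRISE"]

def generate_titles(characters, formats, hooks):
    """Recombine keyword pools into machine-readable, tag-like titles."""
    return [
        f"{fmt} {char} {hook}"
        for char, fmt, hook in itertools.product(characters, formats, hooks)
    ]

titles = generate_titles(CHARACTERS, FORMATS, HOOKS)
print(len(titles))  # prints 64: three pools of four yield 4 * 4 * 4 variants
```

Add a fourth pool of four keywords and the output quadruples to 256; the combinatorics, not any compositional idea, drives the volume.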

Image1

YouTube search page screenshot; search term: “Finger Family”. Taken by the author. Accessed July 16, 2018.

Image2

YouTube search page screenshot; search term: “Finger Family”. Taken by the author. Accessed July 16, 2018.

It is unclear which portion of this content is produced by automation (bots) and which by actual humans, but it is clear that some form of automation is always at play. The videos vary a great deal in quality and production value: some are clearly amateurish, while others employ professional actors, but all of them follow a particular strategy for generating clicks and attracting viewership. YouTube content producers rely on advertising for revenue, and advertising relies on visibility; this pushes producers to put out more videos with more variety - it is a numbers game. This enormous quantity of content requires fresh production techniques, a new kind of labor; this, in turn, produces a new and strange kind of practice. Bridle writes: “This is content production in the age of algorithmic discovery — even if you’re a human, you have to end up impersonating the machine.” [3] A strange twist, fit for the age of machine learning: in order to produce more content for children, adults become childlike, compelled to act out and imitate a completely alien set of rules that they do not understand. Habituation’s new groove.

All of this provides a glimpse into a possible future of education. A strange world emerges in which centuries-old maturation and habituation procedures are intercepted and contaminated by content platforms through machine-learning strategies, without any supervision and outside of any public debate. For us, a crucial question is whether these new practices are reprogramming young minds to read and interpret the world differently than traditional, non-AI-assisted education would. What new forms of intelligence, new methods of interpretation and new models of engagement will emerge?

Image3

Amazon website screenshot. Retrieved from omr.com/en/my-handy-design-nightmarish-phone-cases-amazon/. Accessed July 25, 2018.

It is possible that these occurrences offer a first glimpse of a profound epistemic shift for design practices, as they point to the radical gap between production and interpretation within the coming regime of cognitive technologies. These machine-learning-enabled practices treat all images and objects as free-floating, polysemic elements, producing work that is first and foremost machine-readable. Ultimately, they work toward decoupling and obfuscating the classic relationships that define all design practices: that between process and outcome, and that between producer and content. The decoupling happens because machine-learning procedures give no access to the underlying operations; in fact, they go one step beyond the traditional idea of a “black box” as described by Bruno Latour [4]. In this case, the process is opaque even to the engineers, and no direct connection can be made between the underlying mathematics and the surface “meaning” layer.


This phenomenon points to a possible new chapter of the digital in architectural design. The early digital regime culminated with parametricism, probably the last genre to maintain the explicit, direct causal relation between the machinic process and the cultural outcome.

Parametricism stakes its authority on the capacity to see, understand and deploy the ‘big picture’: a total process in which mapping, translation and optimization strategies cohere, point to and ultimately (re)produce the ‘authentic real’. The parametric model is held together by a belief in the collapse (or sameness) of what the philosopher Wilfrid Sellars called the scientific image and the manifest image [5]. In the age of machine learning and automated content generation, which brings radical opacity, we can see this parametric approach clearly as a form of ‘naive realism’: one that adheres to a deep belief in causality and in the possibility of a clear interpretive path between the algorithmic substrate and its effects.


Those days might be over: an abyss has opened, and it is staring back at the design community. Crucially, this abyss presents us not with a problem of knowledge, but with a problem of design. If the gap is the one between production and interpretation, then it cannot simply be closed through expert knowledge, because even the experts do not, and cannot, fully know. Our formal systems have finally taken flight, revealing their deeply alien nature, their non-binding relation with the real and their capacity to usher in a new real. If a central theme of any design method lies in constructing the metaphor between model and narrative, machine-learning techniques present us with metaphors authored by non-human agents. We should be very interested in the new possibilities for constructing metaphors that structure and describe the relations between formal systems and their effects, in order to understand what kind of attitude is appropriate once the classical one has been played out. The fear is that the attitude of “not knowing” will become the standard trope of the coming machine-learning culture, returning us to the domain of pure exegesis and interpretation and ushering in a new Dark Ages in which the divine is replaced by AI.

Image4

Amazon website screenshot. Retrieved from omr.com/en/my-handy-design-nightmarish-phone-cases-amazon/. Accessed July 25, 2018.

For now, we can at least map out the formal properties and conceptual implications of these phenomena; it is the beginning of an attempt to confront the situation through design. Some properties to note: an enormously large search space of non-abstract (rather, characteristic, recognizable, qualitative) elements; an inherent replaceability and interchangeability of features; naming practices that depend on the strange art of keyword/hashtag association; and a tendency toward recognizable tropes. Interestingly, these practices mirror some well-known methods of contemporary design (collecting, sampling and mashing), but with a crucial difference: they are, strictly speaking, non-compositional and non-visual, dependent on tags and keywords rather than formal ideas.


What could all of this mean for architectural design? It is a question of education, first and foremost. There is a need for new strategies of working, new practices of engagement, new interpretative procedures and new deployment tactics. In a world where “being right” is no longer enough and “meaning well” amounts to almost nothing, traditional education cannot help us much if we want to lead. One possible strategy for starting new modes of engagement is as old as humanity itself: play.


In the words of the French mathematician René Thom:

“Challenging the moralist’s fatalism, the player, confronting any situation, thinks that there is always something to be done […] In the final analysis, what justifies the player’s stance is the fact that the only conceivable way to expose a black box is to play with it. Every great technical and scientific success consists of a black box rendered explicit. […]” [6]

  1. James Bridle, “Something is Wrong on the Internet,” Medium, November 6, 2017, medium.com/@jamesbridle/something-is-wrong-on-the-internet-c39c471271d2.

  2. James Bridle, “The Nightmare Videos of Children’s YouTube — and What’s Wrong with the Internet Today,” TED talk, July 2018, youtube.com/watch?reload=9&v=v9EKV2nSU8w&feature=youtu.be.

  3. James Bridle, “Something is Wrong on the Internet,” Medium, November 6, 2017.

  4. Bruno Latour, Pandora’s Hope: Essays on the Reality of Science Studies (Cambridge, Massachusetts: Harvard University Press, 1999), 304.

  5. Wilfrid Sellars, “Philosophy and the Scientific Image of Man,” pp. 35–78. For a longer discussion of this in the realm of simulations, see Luciana Parisi, “Simulations,” in Ian Cheng: Live Simulation Exhibition.

  6. René Thom, “At the Boundaries of Man’s Power: Play,” pp. 12–13.