Speculating with predictive memory-making: The (Black) Box of Memories

Ramona-Riin Dremljuga, Kseniia Kalugina, and Gabriel Pereira (alphabetical)

This blog post is a result of the Museum of Random Memory's workshop at the European Science and Technology Studies Conference (EASST 2018) in Lancaster, UK. It is part of an ongoing series experimenting with speculative fiction as a critical method for thinking otherwise about possible futures, led by Kseniia Kalugina and Annette Markham (see where we started, here).

I enter a room. It's dim. At the end of the room, there is a table. On top of the table sits a black box. Behind the box, mini-projectors play futuristic videos on a loop. The projectors point to posters on the wall with the text "THE BOX OF MEMORIES: 100% OF VISITORS RETURN. EACH VISIT GUARANTEES A NEW MEMORY!"

On each side of the table stands a MoRM un-curator. Both un-curators wear a badge with their last name written on it. They remind me of agents. The agents are serious, but their tags are not. One is named Curatee, the other Museom – what does that even mean?

Curatee approaches me, asking me to try out a new technology that uses algorithms to de-personalize my memories for public use. I decide to participate, even though I feel deep discomfort about sharing anything personal with these strangers. I don't even know why.

I'm given a link to a web application where I can donate a picture from my personal data production device. As soon as I upload the file, magic happens. It says so on the screen: "Algorithmic magic in progress." The box on the table starts switching colors. The intense, minimalist black box turns into a futuristic disco ball, performing high-end LED-engineered choreography.

The un-curator holds up an iPad, placing it very close to my face. The loading bar flashes vigorously while Siri announces its status update: "Calculating the number of molecules in the image. Abstracting data. Assigning tags." The behind-the-scenes process is described in extreme detail. I feel a bit nauseous. I'm not interested in all this information thrown at me at full speed.

Finally, the iPad opens a video file named "A friend to remember". It starts with a still image of my hairdresser's dog, which she sent to me on Messenger. Museom steps closer and starts the video. It is a slideshow of pictures I've never seen before: a compilation of animals similar to my hairdresser's dog, mostly in size and color, but also of people dressed up as animals, as well as examples of taxidermy. It ends with a quote about friendship and a prompt asking whether I want to share the memory. I don't, obviously, so I don't do anything with it.

Suddenly, the loading screen appears, showing me the hypnotizing 'dancing dots' and an announcement that by sharing this I agree to the Terms of Service and cede the rights to the image to any interested party. What is this? What am I supposed to do? I don't agree to anything; I haven't even seen the terms. Not to mention the video, which had nothing to do with me. Also, the screen doesn't allow me to go back or undo. Ugh, I feel frustrated. And it's not even real.

But it does make me think of those automatic updates my phone performs that I only notice once they're already done. I've chosen not to pay much attention to how that happens, or even what exactly changes, because the process seems inevitable – you can't use the services unless you go along with the updates and agree to the terms. Right? It's not like we have a choice, do we? To me, visiting this new-age museum wannabe pointed to the randomness of the data preserved and repurposed today. It's always curated as well as un-curated by someone or something – if not by ourselves, then by other counterparts that have access to us. Context is then key. As is our ability to produce, assess, and 'read' information, really.

An analog black box of memories.

A speculative look at how a black-box algorithm might appear.

My hairdresser's dog, which she sent to me on Messenger. Or a long-lost memory?

The Museum of Random Memory (MoRM) is a series of performative interventions in which we play with the concept of a museum. Bringing in people 'from the street', representing part of the public, we propose situations that invite reflection on the complexities of our digitally saturated lives. Each time, we create a new type of museum: what is customarily a repository of carefully selected, preserved, and curated artifacts becomes a somewhat random and temporary collection of memories that are un-curated throughout the MoRM performance.

The vignette presented above is the result of our collective speculation about what a science-fiction-inspired MoRM performance could look like, taking into account our imagination of current and future algorithms for memory-making. We draw our inspiration for using speculation as a performative intervention from the literary genre of science fiction, which allows us to critique prevailing assumptions, question biases, and explore possibilities within themes of interest to social researchers (Bould, Butler, Roberts, & Vint, 2009). Building on the understanding of speculation as a critical approach in the social sciences (see Haraway, 2013), we give ourselves the ability to "function rhetorically to prompt change in socio-technical contexts" (Markham, 2015).

The three core elements that animate our speculation are the “black-box” metaphor, the business/entrepreneurial aspect of algorithms and the conception of predictive memory-making.  

Black boxes are sites where decision-making happens in an inscrutable way, hiding the embedded design decisions, values, and assumptions. This has been a major concern for current scholars focused on algorithms and artificial intelligence, especially when these are used in "high-stakes domains" such as criminal justice, healthcare, welfare, and education (AI Now, 2017, p. 1). Here, we speculate on how these black boxes could (or already do) work for our memory-making. When the person in the story gives access to their data, an image is randomly paired with others, generating a comical but somewhat realistic account of what could happen when AIs sort and curate memories. Meaningfully, the "algorithmic magic" element mentioned in the story, in which the black box spits out all sorts of technical details, serves to further strip away the human and relational element of memory-making and turn it into a technical, quantifiable, datafied problem.
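To make the speculation a little more concrete, here is a minimal sketch of how such a black-boxed pairing could work in principle. This is not the (fictional) algorithm staged in the performance; the file names, tags, quotes, and the tag-overlap "similarity" are all hypothetical stand-ins for whatever a real recognition model would compute behind the scenes.

```python
# A purely illustrative sketch of a "memory-making" black box: it takes one donated
# image (already reduced to tags), pairs it with the most "similar" archive images,
# and wraps the result in a ready-made sentiment. All data below is hypothetical.
import random

donated = {"file": "hairdressers_dog.jpg", "tags": {"dog", "small", "brown"}}

# A hypothetical archive the box draws on to assemble the slideshow.
archive = [
    {"file": "strangers_terrier.jpg", "tags": {"dog", "small", "brown"}},
    {"file": "mascot_costume.jpg", "tags": {"costume", "dog", "brown"}},
    {"file": "taxidermy_fox.jpg", "tags": {"taxidermy", "small", "brown"}},
    {"file": "holiday_beach.jpg", "tags": {"sea", "sand"}},
]

QUOTES = [
    "A friend is one soul in two bodies.",
    "Friends are the family we choose.",
]


def make_memory(image, pool, k=3):
    """Pair the donated image with the k archive images that share the most tags."""
    ranked = sorted(pool, key=lambda other: len(image["tags"] & other["tags"]), reverse=True)
    return {
        "title": "A friend to remember",
        "slides": [image["file"]] + [other["file"] for other in ranked[:k]],
        "quote": random.choice(QUOTES),
    }


print(make_memory(donated, archive))
```

Even in this toy form, the design choice is visible: the "memory" is assembled entirely from the box's own archive and its own notion of similarity, with the donor's context reduced to a handful of tags.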

Secondly, the museum-related performance is mixed with the entrepreneurial. Our speculative MoRM enterprise exhibits, promotes, and tests the prototype of an algorithm, hidden from public view behind the black box and absurd descriptions of workflow processes. How does this shadow real-life technology prototypes? User tests? Technology expos? These questions remain to be addressed, especially considering the prominent role of tech conglomerates, platforms, and corporations in defining what gets to count. As our reliance on four or five technology and media firms skyrockets, the code they create increasingly becomes a "law" (Lessig, 1999) that we abide by in our everyday lives, and in our memory-making.

The third is the speculative idea of "predictive memory-making". In Philip K. Dick's sci-fi short story "The Minority Report", mutants called "precogs" are able to predict when crimes are about to happen, turning reactive policing into proactive policing. It was interesting for us to flesh out what could happen to memory at a moment when prediction through machine learning is becoming a more and more popular concept and reality (see Brayne, 2017). The black box embeds this arrogant conception: it takes in an image from your phone and declares its output to be your memory. By linking that image to others it chooses single-handedly, it predicts (proacts) rather than responds to, augments, or reacts to your memory-making.

All of these considerations, we believe, are essential at a moment when data about us constructs us, regardless of whether we have given it or not (Cheney-Lippold, 2017). This is one of the core urgencies behind raising consciousness of the process whereby we create memories and data in a digitally saturated world.

Data does not speak by itself.

References

Bould, M., Butler, A. M., Roberts, A. & Vint, S. (Eds.). (2009). The Routledge companion to science fiction. New York, NY: Routledge.

Brayne, S. (2017). Big Data Surveillance: The Case of Policing. American Sociological Review, 82(5), 977-1008. doi:10.1177/0003122417725865

Campolo, A., Sanfilippo, M., Whittaker, M., & Crawford, K. (2017). AI Now 2017 Report. AI Now Institute at New York University. Retrieved from https://ainowinstitute.org/AI_Now_2017_Report.pdf

Cheney-Lippold, J. (2017). We Are Data: Algorithms and The Making of Our Digital Selves. New York: NYU Press.

Haraway, D. (2013). SF: Science fiction, speculative fabulation, string figures, so far. Ada: A Journal of Gender, New Media, and Technology, 3. Retrieved from http://adanewmedia.org/2013/11/issue3-haraway/

Lessig, L. (1999). Code: And Other Laws Of Cyberspace. Basic Books.

Markham, A. (2015). Discourse matters: Designing better digital futures [Blog]. Retrieved from http://culturedigitally.org/2015/06/discourse-matters/

Creating Future Memories

This page is part of Creating Future Memories, a research project funded by Aarhus University that explores speculative, future-oriented, and participatory methods for citizens to understand and better control the data being produced through and around the everyday use of digital media.