By: Peter Conlin
Improbable: Hacking the Predictive System
We live in a time obsessed with forms of prediction, wherein prediction, despite its integration into computational technologies and its presumed rationality, is often indistinguishable from something like fate, especially if one is on the wrong side of the uncertainty circuits. It is fundamental to financial capitalism with its ‘modes of prediction’1 wherein investment risk, directed by predictive abstractions, replaces labour as ‘the fount of value’,2 and is central to the strategic decision-making of corporations and state agencies through a full suite of predictive technologies; however, the focus of this commentary is on the predictive within the everyday and intimate levels of the self. In any of these dimensions we should not take prediction on its own terms (i.e. the domain of mathematics installed into digital systems, a fine-tuned instrument of data science, etc.). It is larger than this, and in a way, less than this, as it is bound to the mundane, as well as to mythical currents and powers of authority. The crux of the predictive lies in the zones where data science meets the drawing of lots, and then disappears into the weird psyches of 21st century lives. If there are points to hack, it is where and when a probabilistic system transmutes into an ambient condition. What if a trace of indeterminacy is injected into these points? And if no such transformation or sabotage is possible now, might these points at least be a site on which to hone aleatory guile? These are the areas and ideas I want to explore in this speculative and generative text, which will delve into the web project Petit Tube (petittube.com) as a specific case.
For those in societies with an intensive integration into digital technologies, probabilistic calculations seep into how we think, act, and interact. In yet another layer of machinic living, underlying both production and social reproduction, prediction is produced through digital stochastic reasoning that combines huge volumes of data with mathematical processes (e.g. Markov chains) to convert randomness into likely outcomes, and quantifies the degree and type of uncertainty. The data-quantification of our lives puts us into the probability circuit, with the naive ideal of better living: ‘the more of social life that can be translated into metrics and measured over time, the better the decisions, products, or services provided by companies, political actors, and governments will be.’3 Work, health, security and cultural systems are increasingly driven by processes that govern through stochastic variables. The fact that many of these predictions fail or are unreliable doesn’t lessen the obsession or the investment of our energies in quantitative anticipation in any way—it’s not that kind of prediction. We aren’t disappointed or surprised whether they ‘come true’ or not, but we might change platforms. These are predictions without a future as such, and they shift experience from specific dread or promise into a ground force that manifests as a type of pressure and reconfigures motivation and expectation.
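To ground the mechanism named above, here is a minimal sketch in Python of one elementary form of such stochastic reasoning: a first-order Markov chain fitted to a logged sequence of states and asked for the likeliest next state. The behavioural ‘states’ and the toy log are illustrative assumptions of mine, not data from any actual platform; real predictive systems operate at incomparably larger scale.

```python
# A toy first-order Markov chain: count observed transitions, normalise them
# into probabilities, and return the most likely next state. Illustrative only.
from collections import Counter, defaultdict

def fit_markov(sequence):
    # Count transitions from each state to the state that follows it.
    transitions = defaultdict(Counter)
    for current, nxt in zip(sequence, sequence[1:]):
        transitions[current][nxt] += 1
    return transitions

def predict_next(transitions, state):
    # Convert raw counts into probabilities and pick the likeliest successor.
    counts = transitions[state]
    total = sum(counts.values())
    probabilities = {s: c / total for s, c in counts.items()}
    return max(probabilities, key=probabilities.get), probabilities

# Hypothetical week of coarse behavioural states logged by some platform.
log = ["work", "scroll", "scroll", "buy", "work", "scroll", "buy", "scroll"]
model = fit_markov(log)
print(predict_next(model, "scroll"))  # ('buy', {'scroll': 0.33..., 'buy': 0.66...})
```

However crude, this is the basic gesture: yesterday’s transitions become tomorrow’s likelihoods.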
As I will elaborate, prediction is perhaps one figure among many others leading into a social-temporal assembly or a source of social gravity. The crux of what I am calling prediction lies in our relation to contingency, which is especially heightened in turbulent times. As such, probabilistic rationality is one figuration (dominant, digital capitalist, supposedly technocratic) among a related set identified by Matthew Fuller and Olga Goriunova including: ‘luck, fate, fortune, providence, destiny, necessity, risk.’4 In this light, prediction is—despite the hyper-rational association—a technologised incorporation of the contingency-fate complex, connecting the corporeal to abstract realms and altering the reality process. Perhaps a new being has emerged—‘homo probabilism’—a term coined by Ivan Ascher5 to describe the culmination of financial capital but which I see as extending more generally across dimensions and experiences of life.
A longer history of probability and a tentative call for reclaiming indeterminacy
Most histories of probability6 begin with games of chance—the literal rolling of bones, coin tosses and the attempts to observe patterns and manage chance, whether to improve odds and/or to develop the analysis of random probability distribution itself. If probabilistic calculation is central to social life now, as I believe it is, then we cannot downplay the significance of games of chance in the cultural and social logic of this conjuncture. Predictive living is indicative of a deep gamification whether we see ourselves as players or not. Maybe stochasticity itself was a hack long ago—a playful subversion of scholasticism and final causes, perhaps a form of early-modern pranking. Wasn’t it strange to posit this speculative condition of foreknowledge with inescapable practical applications that could out-manoeuvre deterministic chains? But it quickly lost its smile and through the centuries became its own dire system. This appears to be a recurring pattern. We could see the hijinks of programmers and hardware engineers of the 60s and 70s as giving a crucial energy to the development of the surveillance society, preemptive policing, conspirituality, new realms of commodification and other Fred Turner7 nightmares. A cautionary note: hacking has a tendency to contribute to the very entity it seeks to foil, unwittingly enhancing new, inaccessible systems. And to momentarily get way ahead of myself, what about the hacks of hacking, what of those projects? The self-negating way forward or a deathly serious play with fire leading into impossibly complex systems with sinister logics? Soon we will reach the end of hacking in the finite limits of life. But these are only figures of capture and betrayal of the true spirit of hacking? All of the above. Hack that.
In many ways this is the story of the ascent of the statistical into higher (or is it lower?) levels of reality—the full diffusion of stats into our lives in synergies between psychometrics, capitalism and digital technologies with the self as ground zero. ‘The system’ here is much larger than digital data and networks of the past few decades, and extends back to the 17th century if not further. It has subsequently gone through several different phases of which ‘the algorithm’ and AI are but instances or manifestations. However emergent or ‘digital’ all this might seem, this condition nevertheless arises from well known areas of contention within the social sciences around the representational nature of statistics and their constructive and normative functions. Following Alain Desrosières’8 terms, the predictive operates within the fraught relationship between the thing being measured and the measuring process, and how social entities may themselves be statistical constructs for the purposes of measure. Values, meaning and experience are reshaped in this quantification process. ‘Numbers create and can be compared with norms, which are among the gentlest and yet most pervasive forms of power in modern democracies.’9 At almost every moment when using digital devices we are within the deep web of this ‘gentle’ power. The complexities of these relations are so ubiquitous and far-reaching, it is easy to forget that data is an element of measure, and that a datafied society is a fully and obsessively measured one. Who or what is doing all of this? Why have we entered this vast realm of measured living? Can there or should there be a way out?
Prediction developed by probabilistic statistics seems to imply carefully produced knowledge, within a rigorous scientific method for gauging uncertainty and producing reliable information about future events. However, this is not our probability. There are post-probability pronouncements of moving from prediction to pre-emption, from classical frequentist statistics (probability in terms of the frequency of objective properties sampled) to Bayesian approaches (probability as a measure of the believability that a statistician has about the occurrence of an event), and from a kind of knowledge to the production of correlations outside of human comprehension but certainly within the competitive advantage of businesses.10 In these shifts prediction might appear to be recast as self-interest with only an operational level of truth, as if we were no longer in the science of prediction but in the drama of pre-emptive action. But the goals and function of quantification have never been merely descriptive and are ‘part of a strategy of intervention.’11 Classical statistical norms have always created the very qualities they are meant to dispassionately measure. As well, to think of probability as a movement from reason to irrationality, from centralised public planning to the ad hoc opportunism of private interests, misses the fundamental double nature of probability as Ian Hacking saw it.
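To pin down the parenthetical definitions above, here is a toy numerical sketch under assumptions of my own (ten trials, seven ‘successes’, a uniform prior): the frequentist stance reads probability off the observed frequency, while the Bayesian stance treats it as a degree of belief revised by evidence.

```python
# Toy contrast between frequentist and Bayesian readings of the same data:
# 7 "successes" observed in 10 trials (an illustrative assumption).
successes, trials = 7, 10

# Frequentist: probability as the long-run frequency of the sampled property.
freq_estimate = successes / trials  # 0.7

# Bayesian: probability as a degree of belief, a prior revised by evidence.
# A uniform Beta(1, 1) prior updated on this data gives a Beta(8, 4) posterior.
alpha_post = 1 + successes                        # 8
beta_post = 1 + (trials - successes)              # 4
posterior_mean = alpha_post / (alpha_post + beta_post)  # ~0.667

print(freq_estimate, posterior_mean)
```

The two numbers barely differ here, which is the point: what diverges is what the figure is taken to be, a property of the sampled world or a state of belief.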
The condition I am exploring is lodged within two theses by Ian Hacking, philosopher of probability and science, which are pivotal in coming to terms with probabilistic reason. These ideas long predate digital technologies, but I argue that they have been augmented in the politics of automated probability. The first is the dual nature of probability—it combines the search for stable frequencies to gauge uncertainty with assertions of belief about the occurrence of future events. ‘Probability…is Janus faced. On the one side it is statistical, concerning itself with the stochastic laws of chance processes. On the other side it is epistemological, dedicated to assessing the reasonable degrees of belief in propositions quite devoid of statistical background.’12 The second thesis is that the rise of probability from the 19th century onward (outlined in Hacking’s The Taming of Chance) saw a departure from deep deterministic understandings, and an entry into an indeterminate view of the world. However, this did not result in radical openness and social entropy but, on the contrary, in ceaseless surveying and calculating. In the development of statistical measures of control (in fact, measure as control), indeterminacy becomes increasingly synonymous with managerial quantification. Chance from this time on is taken seriously in social life as it is converted into statistical laws based on data and distribution curves, hence the titular ‘taming of chance’. From this, I am proposing—in a tentative and exploratory manner—a project of reclaiming indeterminacy in the 21st century and a re-envisioning of predictive data as belief. I am not casting all probabilistic functions as pure belief or arbitrary plays within an anything-goes social entropy. But we cannot lose a feel for the indeterminate, nor can we be blind to all the epistemological investment in what is presented as data-driven likely occurrences.
Imagine we find ourselves wandering around the edges of a vast predictive system. Maybe we abhor it but we are nevertheless of it, either in spite of ourselves or through an all-out embrace. But in any case, from the right vantage point there is a fascination with the transactions and communion. After all these years there is still the imaginary of the cybernetic dream of an X (singularity entity, artificial superintelligence) that knows us beyond how we know ourselves, and not only knows the future but can pass into impending moments, distorting temporal boundaries as the poets once sought to. But instead of enriching vitality, we can feel it draining away into temporal homogeneity. So it would appear to be no easy hack—how to hack a dream of techno-omniscience and data-driven time machines? They’ve ruined randomness, so it must be reinvented. How to get lucky when contingency has become the exclusive domain of ‘data science’? How does one hack statistical fate in the 21st century?
Artificial Futures (AI as AF)
I am working with the supposition that whenever we are in a situation modulated by digital technologies—specifically the array of machine learning techniques, algorithmic functions, extensive data processing and automated functions frequently labelled as ‘AI’—we are necessarily within a predictive realm, even if we don’t want to be. Almost all the machine learning techniques and functions associated with Artificial Intelligence are driven one way or another by a predictive logic, underscored by the dubious assumption that prediction and intelligence are somehow synonymous. As such, they are not only data processing and information technologies, but time technologies orientated to modelling future events (in the statistical sense of an ‘event’); and because prediction is a temporal function, they alter the experience and conceptualization of time. This is complicated by the way such an engagement with the future (as a mathematical abstraction, within a system that uses prediction as a means of control and a way to shape outcomes) usually has a de-futuring effect, rendering what is to come as merely a projection of existing patterns and data sets, without the alterity and sense of open possibility of The Future; and thus, as stated, making predictions without a future.
Another way to say this is that the predictive condition I am exploring is a temporal expression of quantification. By framing data collection and analysis within the larger enterprise of prediction, I am foregrounding the temporality embedded in measure. Prediction is the time of measure, and its expansion into the time of everyday life. To be imbricated into digital networks is to enter this time, with our actions and expressions becoming variables of capacity and performance. This ubiquitous and quotidian aspect of digital media should be seen as a time technology altering our relation to the future, whether that future is cancelled or reassigned.
The opposite of prediction
So what would it mean to ‘hack’ this temporality of predictive living? My view is not to leave it at subsuming probability into the service of the collective (Red AI), or at democratising the modes of prediction, merging stochastic reason with social justice. These are fascinating projects, hopefully already well underway; however, for me a hack means opening these systems (and ourselves as we are enfolded in them) to the improbable. What does unpredictability or the improbable really mean at this point in time? This is what we must open ourselves to; this is part of the hack. We can ask, to begin with, why the obsession with prediction in such unpredictable times? The ostensible answer is that probabilistic relations to the future are an attempt to mitigate uncertainty and achieve a degree of control and governability. But the predictive modes I am addressing here never seem to stabilise and usually add to uncertainty. It’s time to investigate the opposite of predictability, whatever it might be.
If prediction is oriented to likely outcomes of what will happen, then the hack could mean that the great wheel of ‘what’s next’ breaks down—slows or jams shut. In life without systemic anticipation, what are we going to do indeed? How will anything function anymore? Will the structures of temporality unravel? In this light, maybe the opposite of prediction lies in moments when time stands still. A bit of spiritual hacking would do us all some good. For such temporal hacking, the experience of randomness is one way to start.
Aleatory visions (Petit Tube)
If the prediction system began in games of chance, then we should return to its origins in play and indeterminacy. In a data environment where everything is incessantly trying to predict what we want, randomness can be political. So we need to relearn how to inhabit chance and develop aleatory sensibilities. From deep within the prediction economy (political and libidinal), ‘chance must be prepared.’13 If John Cage prepared pianos, then we need to prepare platforms. ‘The suggestion here is that these discussions offer the development of a sensual and political understanding of chance that establishes it as the grounding condition for modes of being and one that is perversely synthetic in its ethico-aesthetics.’14 In a time attempting to eradicate improbable actions through measure and probabilistic calculations, we can no longer take chance for granted. We must cultivate an openness to chance because no one is born lucky in a universe proceeding under the sway of probability.
Hacking algorithmic recommendation systems can be one way of doing this, and the web project Petit Tube (petittube.com) is an example. It randomly surfaces YouTube videos that have very few views and presents itself as offering ‘the least interesting videos on YouTube.’ Click play and the aleatory and enigmatic viewing of content begins. In an algorithm-versus-algorithm scenario, it selects videos which have been ignored, forgotten or otherwise left outside YouTube’s predictions. The experience for me is surprisingly freeing, generating both a receptive energy toward the multiplicity of images and a space from which to reflect and rewire our experience of predictive systems. Launched in 2011, the project was developed by Yann van der Cruyssen, a polymath based in France, who works on the borders of art, video game creation, music and wallpaper invention.
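Purely to illustrate the kind of anti-recommendation at work here, and emphatically not as van der Cruyssen’s actual implementation, the following Python sketch pulls candidate videos from the YouTube Data API and keeps only those almost nobody has watched. The API key placeholder, the random default-filename query trick and the view-count threshold are all assumptions of mine.

```python
# Hypothetical sketch of an anti-recommendation filter: search YouTube for
# arbitrary footage and keep only the nearly unwatched. Not Petit Tube's code.
import random
import requests

API_KEY = "YOUR_API_KEY"  # assumption: a YouTube Data API v3 key
SEARCH_URL = "https://www.googleapis.com/youtube/v3/search"
VIDEOS_URL = "https://www.googleapis.com/youtube/v3/videos"

def random_query():
    # One common trick for surfacing overlooked uploads (an assumption, not
    # necessarily Petit Tube's method): default camera filenames plus a number.
    return f"{random.choice(['IMG', 'MOV', 'DSC', 'MVI'])} {random.randint(0, 9999):04d}"

def least_watched(max_views=10):
    # Search for candidate videos matching the random query.
    search = requests.get(SEARCH_URL, params={
        "part": "snippet", "q": random_query(), "type": "video",
        "maxResults": 25, "key": API_KEY,
    }).json()
    ids = [item["id"]["videoId"] for item in search.get("items", [])]
    if not ids:
        return []
    # Look up view counts and keep only the nearly unseen.
    stats = requests.get(VIDEOS_URL, params={
        "part": "statistics", "id": ",".join(ids), "key": API_KEY,
    }).json()
    return [item["id"] for item in stats.get("items", [])
            if int(item["statistics"].get("viewCount", 0)) <= max_views]

if __name__ == "__main__":
    print(least_watched())
```

Serving one of the surviving IDs at random in an embedded player would loosely approximate the Petit Tube experience: the recommendation logic inverted so that obscurity, not engagement, is the selection criterion.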
What is it like? It’s both ordinary and subtly weird, maybe an infraordinary of media. More of a simple side-step than anything psychedelic or mind-bendingly entropic. On a typical viewing one might see a used car advertisement set in a North American dealership with dirty late-winter snow, an office party in a generic business environment, fingers slowly rotating an opal stone, a presentation from the Antelope High School career counselling team, a clip from a video game reposted from Twitch, a birthday party in a South-East Asian home. So this is the great unwatched. In fact, only a small minority of videos on YouTube are actually watched in large numbers, and most of these are music videos (especially songs for children), followed by how-to videos and product demos, and then more distantly by videos from popular content creators. What about the rest? I have been exploring Petit Tube for several years, and there appears to be what I can only characterise as an ‘outlook’. After months of revisiting the site, it can feel like beachcombing—walking the same stretch of coast but each time finding something perplexing has washed in. It’s like some kind of media therapy or deprogramming process to work free from ultra-curated content, personalised algorithms, known pleasures, and the worn grooves of one’s own tastes reinforced by a predictive media system. It sometimes feels calming, even though the content is a haphazard assortment typical of so much of contemporary media culture, not particularly meditative in itself. So we have not quite reached the end of the media universe—rather a cul-de-sac that at times seems like a wormhole.
The overall feel is game-like, in a simple and slightly clunky interface that is both underwhelming and, in a way, mysterious—something of a Bermuda Triangle of online content. The videos have disappeared in the sense that they have almost never been viewed, but through this random selection, they can still be seen on the threshold of a media void. Perhaps even the person who uploaded the content is no longer interested in, or even aware of, their own media. It is a site which, momentarily, makes lost content reappear. We watch them, and then we watch them disappear all over again. We watch the blinding insignificance.

The larger context of this is, on the one hand, a media ecology seeking to eliminate chance through predictive systems that already know who we are and what we want to experience; and on the other hand, a larger psychosocial environment that is increasingly animated by uncertainty. This hack offers a media universe wherein video waste functions like a filtering process rather than degradation. Disinterest and abandonment function as a prism through which something presumed trivial might reach a kind of plenitude in spite of its staggering banality. In these refractions, Petit Tube works as a distrusting agent of YouTube. It is only by way of an improbable filtering of content no one is interested in that we might finally drop out of the supposed affirmative preemption of contemporary image culture. What we lose in expectation of the visual we might gain in hope. If this kind of randomness forms a mysterious triangle, then it is formed in the interdependence between what exceeds our grasp, the mundane and lucidity.
BIO
Peter Conlin is a writer and researcher based in Birmingham (UK) and works as a Teaching Associate at the Department of Cultural, Media and Visual Studies, University of Nottingham. He is the author of Temporal Politics and Banal Culture: Before the Future (Routledge).
REFERENCES
1. Ivan Ascher, Portfolio Society: On the Capitalist Mode of Prediction (New York: Zone/Near Futures, 2016).
2. Ascher, Portfolio Society, 26.
3. Andreas Jungherr, Gonzalo Rivero and Daniel Gayo-Avello, Retooling Politics: How Digital Media Are Shaping Democracy (Cambridge: Cambridge University Press, 2020), 165.
4. Matthew Fuller and Olga Goriunova, Bleak Joys: Aesthetics of Ecology and Impossibility (Minneapolis: University of Minnesota Press, 2019), 78.
5. Ascher, Portfolio Society, 85.
6. F. N. David, Games, Gods and Gambling (London: Charles Griffin, 1962); Rüdiger Campe, The Game of Probability: Literature and Calculation from Pascal to Kleist (Redwood City: Stanford University Press, 2012); Brian Everitt, Chance Rules (New York: Springer Science & Business Media, 2008); John Haigh, Probability: A Very Short Introduction (Oxford: Oxford University Press, 2012).
7. Fred Turner, From Counterculture to Cyberculture (London: University of Chicago Press, 2006).
8. Alain Desrosières, The Politics of Large Numbers (Cambridge, Mass.: Harvard University Press, 1998).
9. Theodore Porter, Trust in Numbers (Princeton, N.J.: Princeton University Press, 1995), 45.
10. Mark Andrejevic, Infoglut (London: Taylor & Francis Group, 2013), 96-110.
11. Porter, Trust in Numbers, 42.
12. Ian Hacking, The Emergence of Probability (Cambridge: Cambridge University Press, 2006), 12.
13. Fuller and Goriunova, Bleak Joys, 83.
14. Fuller and Goriunova, Bleak Joys, 83.