“All fixed, fast frozen relations, with their train of ancient and venerable prejudices and opinions, are swept away, all new-formed ones become antiquated before they can ossify. All that is solid melts into air…” -Karl Marx (1848) [1]
When Marx wrote this, he was talking about the sweeping social, economic and demographic changes that followed the industrial revolution, but he may as well have been talking about the folly of buying a MiniDisc player in 1998. Change is not only inevitable, it is also unpredictable and frequently inconvenient, and nowhere is this more frequently and rapidly proven than in the technology and information sectors.
In an effort to keep track of all the changes in his life, some bright caveperson started writing them down, the end result of which is the estimated 2,500,000,000,000,000,000 bytes of data produced every day by his descendants. You’d think that with all of this data to hand, predicting the next big change would be easier, but then why does everyone on TV look so surprised all the time? Flabbergastedness is endemic in our broadcasters: election upsets, economic crashes, the winner of RuPaul’s Drag Race, everything seems less predictable than it ever was.

Are we, as a society, gathering the wrong data? Are we analysing it wrong? Or, like an experiment in quantum uncertainty, does the very act of gathering data on these things affect them, and make them less predictable? For example: if we know our preferred candidate/stock/drag queen is the favourite to come out on top, will it change our behaviour in a way that actually makes that outcome less likely?

So, do we go back to counting our sabre-toothed chickens on our fingers? I’m hesitant to blame the data itself, and not just because I prefer sitting at a desk to sitting in a cave, but because the data available doesn’t indicate that the data is to blame. You might say ‘well, the data would say that while it’s looking guilty, wouldn’t it?’, and here we reach the crux of the problem. Data isn’t people, people are people.
Now, I’ve nothing against people, I was one myself for a while, but the hardware is outdated, the drivers are mostly work-arounds these days, and the ‘Culture’ and ‘Society’ OS updates introduced so many black-box algorithms that it’s near impossible to understand the processes they perform to create the output that they do. And don’t even get me started on the incomplete documentation.
Given all the above, it’s hard not to come to the conclusion that the next great leap forward in the predictive use of data will be in replacing people with something that can, without unintended bias, read, process and, crucially, extract meaningful information from a larger proportion of those 2.5 quintillion bytes. I am of course describing artificial intelligence.
A.I. is a hot topic at the moment. Too hot to handle? Not for some, eager to burn the roofs of their mouths with the still-bubbling pizza grease of progress; the computer scientists working for many governments and in Silicon Valley strive ever towards the creation of a true thinking machine. However, as highlighted recently by such luminaries as Stephen Hawking and Tim Berners-Lee, and continually by just about any long-running sci-fi franchise you can name, there is great irony in the fact that in order to create the most effective predictive technology, we must achieve something the results of which are wildly, and dangerously, unpredictable.
1. Marx, K. and Engels, F., 1848. The Communist Manifesto. London: Communist League.
2. Roddenberry, G., 1966. Data [Online]. [Accessed 10 October 2018]. Available from: http://www.startrek.com/database_article/data