We are entering an age where content is commissioned, created, read and acted upon by machines. This has been happening with numeric data for some time; language-based content, however, is a different matter. But how exactly does this work? And what are the implications? Is it really possible, for example, for a computer to edit and write a newspaper?
Demand Media is perhaps the best known proponent of machine-commissioned content.
Demand Media has created a virtual factory that pumps out 4,000 videoclips and articles a day. It starts with an algorithm. The algorithm is fed inputs from three sources: search terms (popular terms from more than 100 sources comprising 2 billion searches a day), the ad market (a snapshot of which keywords are sought after and how much they are fetching), and the competition (what’s online already and where a term ranks in search results).
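The article only tells us the three signal sources, not how they are combined, but it is easy to imagine the shape of such a commissioning formula. Here is a minimal sketch, with invented weights, field names and numbers: a topic scores highly when search demand and ad value are high and the existing competition is weak.

```python
# Hypothetical sketch of a Demand Media-style commissioning score.
# The combining formula and all figures are invented for illustration;
# only the three input signals come from the description above.

def commission_score(search_volume, avg_cpc, competing_results, search_rank):
    """Rank a candidate topic: high demand, high ad value, weak competition."""
    demand = search_volume * avg_cpc               # revenue potential
    competition = competing_results / max(search_rank, 1)
    return demand / (1 + competition)

topics = {
    "how to fix a leaky tap": commission_score(40_000, 0.80, 1_200, 5),
    "history of the tap":     commission_score(2_000, 0.05, 9_000, 40),
}
best = max(topics, key=topics.get)  # the topic the "algorithm" would commission
```

A system like this never needs an editor's gut instinct: it simply ranks every candidate topic and commissions the ones at the top of the list.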
When they first started, Demand Media used editors to commission their content:
The process worked fine. But once it was automated, every algorithm-generated piece of content produced 4.9 times the revenue of the human-created ideas. So Rosenblatt got rid of the editors. Suddenly, profit on each piece was 20 to 25 times what it had been. It turned out that gut instinct and experience were less effective at predicting what readers and viewers wanted — and worse for the company — than a formula.
Which is great: it makes sense to supply people with the content they actually want by looking at what they search for on Google, which is what the Demand Media algorithm does. Unfortunately, Demand Media is an SEO mill on a huge scale. The content they produce is poor and its value questionable – they make money from bucket-shop ads at tiny CPMs on enormous traffic figures. (Actually, I'm not sure why Google doesn't do something about them, as they are clearly degrading the value of Google's search results, but that's another story.)
However, leaving the quality of the content to one side (and remember Demand Media’s content is created by humans, it is only commissioned by machines, and they could increase the quality if they saw fit), I believe their commissioning model is a blueprint for a good deal of information-based content commissioning in the years to come. I doubt we’ll see a machine commissioning content for the New Yorker, but it wouldn’t surprise me to find out that their editors are already using some kind of trend and tracking tools to help inform decisions about what content they commission.
OK, so machine-commissioned content makes sense in certain contexts at least, but what about creating content? Can computers really write coherent journalism? Another piece in Wired looks at machine-written journalism, and in particular at a system from Thomson Reuters called NewsScope.
The latest iteration of NewsScope “scans and automatically extracts critical pieces of information” from US corporate press releases, eliminating the “manual processes” that have traditionally kept so many financial journalists in gainful employment.
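To make "automatically extracts critical pieces of information" concrete, here is a toy sketch of the idea: pulling a headline earnings figure out of press-release text with a regular expression. NewsScope's real pipeline is undoubtedly far more sophisticated; the pattern, the sample text and the field extracted are all invented for illustration.

```python
import re

# Invented example press release; not real NewsScope input or output.
release = ("ACME Corp today reported fourth-quarter earnings per share "
           "of $1.42, up from $1.10 a year earlier.")

# Extract the headline earnings-per-share figure.
match = re.search(r"earnings per share of \$([\d.]+)", release)
eps = float(match.group(1)) if match else None

# A downstream system could compare eps against analyst consensus and
# publish a machine-written headline within milliseconds of the release.
```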
At the moment machine-written content is still fairly limited in technical scope and application – it’s mainly used in financial circles, in part because that’s where information is most valuable (something that stimulates innovation).
However, if you can get machines to create content from financial information, then why can't you apply the same model to freesheets like the Metro? Much of their content is rewritten press releases and agency copy, with a minimum of analysis or comment. Why not get machines to write their celebrity and sports news?
And if you can create a machine-based commissioning and content creation system for Metro, why not the Sun, the Mirror or even the Times?
Data.gov.uk opened its doors to the public last week. The US version has been with us for a while. As I discussed in an earlier post, this is all about opening up machine-readable data.
But that’s data, what about written information? Are there machines reading that? And what are they doing with it?
Social media sentiment trackers like Visible Media are a first step. By monitoring keywords and phrases, they allow individuals and organisations to track what is being said about them and to respond appropriately.
I’m not sure whether or not this has happened, but it seems entirely conceivable that such a system might commission an article or some other content in response to the sentiment it detected in an article that was itself commissioned and written by a machine.
But sentiment tracking goes further than that. Visible Media's system links with CRM systems; CRM systems link with procurement, ordering, vendor management and so on. It is wholly possible to build a system for Walmart that monitors Facebook and Twitter for reactions to the new Lady Gaga album, quantifies that sentiment, and alters purchase and manufacturing levels for the CD accordingly.
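The Walmart scenario can be sketched end to end in a few lines. Everything here is invented for illustration: the lexicon-based scoring is the crudest possible form of sentiment analysis, and the adjustment rule is a made-up business policy, not anything Visible Media or Walmart actually does.

```python
# Hypothetical sketch: quantify social sentiment, then feed it into a
# purchasing decision. Scoring method and adjustment rule are invented.

def sentiment_score(posts):
    """Crude lexicon score: +1 per positive word, -1 per negative, per post."""
    positive = {"love", "great", "amazing"}
    negative = {"hate", "awful", "boring"}
    total = 0
    for post in posts:
        words = set(post.lower().split())
        total += len(words & positive) - len(words & negative)
    return total / max(len(posts), 1)   # average sentiment per post

def adjust_order(baseline_units, score):
    """Scale the CD order up or down by at most 50% based on sentiment."""
    factor = 1 + max(-0.5, min(0.5, score * 0.25))
    return int(baseline_units * factor)

posts = ["Love the new album, so amazing", "great record", "bit boring tbh"]
order = adjust_order(10_000, sentiment_score(posts))  # more than baseline here
```

The point is not the (terrible) sentiment model but the plumbing: once sentiment is a number, nothing stops it flowing straight into ordering and manufacturing systems with no human in the loop.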
Of course, machines don’t have opinions (unless you programme them to do so), and certain types of content will still be commissioned, written and read exclusively by humans, but it doesn’t take a great leap of the imagination to see that large amounts of the information/content we produce (along with the data) will be commissioned, written and read by machines. And we know where that leads, right?