ChatGPT’s recommendations for writing an exhibition review are nothing if not uncritical, and yet they are unmistakably of the moment
Another day, another newspaper story about cultural producers losing ground, if not their entire livelihoods, to AI. One particularly vocal group of late has been illustrators, whose position currently looks about as secure as that of a handloom weaver in the late eighteenth century. For months, illustrations ‘authored’ by programs such as Midjourney have been popping up in journals including The Atlantic and The Economist. Last December a big sci-fi and fantasy publisher, Tor, advance-released the cover of a forthcoming book, Christopher Paolini’s Fractal Noise, which turned out to be AI-generated and stirred a social-media backlash. More recently, a twenty-something fintech dude in the US used AI to make an entire children’s book in about five minutes, provoking further ire within the industry itself. At the base of this inhuman ‘creativity’, of course, is human labour. Artwork-generating AIs – somewhat like artists of yore, before originality became paramount – are ‘trained’ by looking at existing creators’ work. Moreover, ‘in the style of…’ rip-off prompts are increasingly built into the standard AI brief.
All of this is understandably galling for human practitioners, but let’s rewind a couple of years – and try to zoom out a little – to recall the arrival of a certain type of book cover that became known as ‘the book blob’: amorphous, multicoloured globular shapes overlaid with a dominant, usually sanserif title. This trend was much mocked at the time and was also contextualised as the latest identikit format (see, earlier, knockoff-Twilight covers from the 2000s). You see where I’m going here. Before the rise of the robots, publishers running scared of difference, unpredictability and creativity had already strategised to take the human spark out of book design. (That Fractal Noise cover is equally generic, for its genre.) Bestsellers have looked like an AI designed them for years; now they probably will be, and it’ll be even cheaper for the magnates. (I once wrote a book whose designer crowdsourced the cover by offering the title and concept up to a workshop; I was a little ticked off then, but the result was odd and I liked it, and I think I got off lightly.)
This ‘trend’ feels applicable across the creative board, especially where culture already registers as algorithmic. Why shouldn’t AIs be able to mimic pop music if pop music is already a predictable amalgam of retro styles (meaning anything pre-millennium) and designed to be in the background, only half listened to? At the end of last year, Chinese music streamer Tencent announced that it had released 1,000-plus AI-generated tracks mimicking the human voice; one of these, they said, already had over 100 million streams (admittedly probably by bots). Mainstream cinema, too, has spent decades weaning itself off creativity and onto a diet of remakes, comic-book adaptations and something like guaranteed audiences. The quotient of risk and intuitive leaping that makes for something of lasting cultural value is being sucked out of the equation from above.
What about art? While concerns are understandably rising about AI’s ability to imitate historical styles – The Guardian recently asked several experts to separate real paintings from computerised fakes, albeit using jpegs (ArtReview’s editor JJ Charlesworth did pretty well, others less so) – the contemporary art scene seems, almost nostalgically, tied to people: artists, as fronting personalities and aspiration models, seem as important as the art. At the same time, though, there’s a lot of recombinantly nostalgic work around that an AI probably could do, and few people seem too worried about that, at least not yet. One can already joke about the endgame, at least within the gallery: AI-generated art, explained by an AI-generated press release. Still, the latter might not be so bad. At the start of January, writing about ChatGPT for the London Review of Books, academic Paul Taylor asked the much-remarked-upon AI chatbot a healthcare exam question and got the kind of admirably lucid answer that currently has teachers very worried. Around the same time, I had one of my frequent encounters with a gallery handout that made absolutely no sense, and I started to daydream…
But, like most people, I don’t want AI taking my job, and other cultural critics are already weaseling up reasons why it won’t happen. A month ago, the music writer Simon Reynolds, writing about ChatGPT for the website of streaming site Tidal, reckoned critics are relatively safe (pause, snigger, gulp) because AIs can’t have emotional responses to art; they can’t feel it. But Reynolds is the same writer who, in Retromania (2011), noted the descent of music criticism into a safe box-ticking of historical influences. True, this chatbot is nothing if not uncritical. Generally it finds some way of saying it’s not designed to give opinions; when I asked it to review a show recently – I wanted to compare a human-authored review I’d read with what the gizmo could do – it gave me a bullet-point list of how to write a review, ending with ‘A critical review should be written objectively and respectfully, without being overly negative or positive’. Whatever you think of that – me personally, not much; a lot of the writing I love would fail that test – it has the anodyne tang of the moment. Especially since the shift from art criticism to ‘art writing’ – and all the structural factors that seemingly occasioned it – many art writers have already trained themselves not to upset the applecart. A simple way to avoid being outmoded by technology would seem to be ‘do what the technology can’t do’. But in a world where those things aren’t valued – aren’t lucrative for someone else, provoke thought rather than consumption, don’t promote consensus – well, good luck with that.