
Art forgery, LLMs and why it feels a bit off

For quite a while, I wrestled with an experience that I couldn’t quite put my finger on. If I knew a picture or a piece of text had been generated by a large language model (LLM) like ChatGPT, or by an image generator like Midjourney, even if it looked good or read smoothly, I couldn’t shake the feeling of not caring about it at all.

I’ve always considered myself someone who’s less interested in how something is made: if it’s good, it’s good, right? (There’s a whole other discussion about the damage these systems are doing to the livelihoods of artists, but I won’t go there in this post.)

That was the case until I read Austin Kleon’s Show Your Work, in which he writes about the phenomenon behind this feeling.

He quotes Paul Bloom’s How Pleasure Works: The New Science of Why We Like What We Like, in which Bloom, a psychology professor, explains how the pleasure we get from art comes not just from its colors and shapes and patterns but also from what we are told about it.

A group of scientists at the Oxford Centre for Functional Magnetic Resonance Imaging of the Brain (FMRIB) did a study on this and discovered that

When the participants were informed the following image was a fake, their brain responded negatively before even being shown the piece itself. The person had made a preconceived judgement of the image. A comment made by Professor Martin Kemp of Oxford University says it shows "the way we view art is not rational".

I haven’t fully read the original article (and, to be honest, I would probably struggle to understand its nuances) but Emilia Sharples referenced it in her article Essentialism and Art Forgery, which is where I learned about it in the first place.

Once I read these, I started to realise that it was likely this exact same phenomenon at play when I encountered LLM-generated art, images or text. My brain knew it wasn’t “real” and that the effort put into it wasn’t human effort, so it made me less interested in it altogether – often even before seeing or reading it in the first place.

If you didn’t spend time writing it, why should I spend time reading it?

I was reminded of this again when Neven Mrgan published How it feels to get an AI email from a friend. In it, Neven shares a story of how they one day received an email from a friend. The friend who sent the email (or the LLM that generated it, who knows) disclosed at the end that it was “written by AI” (which sounds much fancier than “generated by an LLM”), and that sparked a negative feeling in Neven:

My reaction to this surprised me: I was repelled, as if digital anthrax had poured out of the app. I’m trying to figure out why.

Neven then goes on to discuss a variety of reasons why they think it made them feel the way it did. I really like one of the conclusions they come to:

It had simply not occurred to me—and now that it has occurred to me, I definitely do not want small talk and relationships outsourced to server farms. This stuff shouldn’t feel hard or taxing; it’s what our presence here on Earth is mostly made up of. The effort, the clumsiness, and the time invested are where humanity is stored.

In an interview with Swiss newspaper Tages-Anzeiger, Oliver Reichenstein shared similar thoughts when discussing the aesthetics of AI:

It’s disappointing: You know better, yet you still use this AI kitsch. I’m sure you also disappoint many readers: They think that you resort to such tricks because you do not put in enough effort. This might get you a few more clicks. But bad images devalue the work you’ve put into your writing.

and

When I scroll through LinkedIn and an AI image pops up, I just scroll right past it. Kitsch pretends to be what it is not. I don’t want to waste my time trying to give sense to carelessly generated nonsense that never had any.

The future of the corporate world

A short while later I saw a post on LinkedIn by someone (I didn’t bother to store a reference to it) who was inviting other people to send their “AI avatars” to take part in a podcast hosted by this person’s own “AI avatar”. I was stunned and kept thinking what a silly concept it was. Maybe I should send my own “AI avatar” to listen to that podcast so that both the creation and the consumption would be fully automated and all that was left was a pile of wasted resources.

But it’s not just podcasts and emails.

Zoom CEO Eric Yuan wants people to send their “AI assistants” to sit in virtual meetings instead of being there themselves. I get the “meetings suck” mentality that’s prevalent in modern work life, but the solution is to stop having the meetings and find other ways – like written asynchronous communication – to deal with them, rather than sending bots to have those meetings.

A meeting is not valuable on its own. It’s the human connection and the positive outcomes from those connections that meetings should be for.

Yuan says:

Sometimes I want to join, so I join. If I do not want to join, I can send a digital twin to join. That’s the future.

That’s not a future I want. I want a future – and, to be fair, a present – where if I don’t want to join a meeting, I don’t join the meeting.

Maybe the real reason for Yuan’s desire to have people run bot-infested Zoom calls is that “usage has since [after the height of the COVID-19 pandemic] come down, and Zoom faces a number of business challenges he and I talked about.”

If the goal is really to extend the capabilities of our intelligence, experiences and style, surely the video call would be the first thing to go. Once these machine learning systems get better, they could have those discussions far more efficiently in whatever binary format they choose to speak to each other, rather than rendering and recording video.

Oh, there are also real LLM art forgeries

As I was researching this and collecting stories and notes, I ran into a great piece by Maggie Appleton about how people sell a new kind of art forgery. Instead of copying an existing painting and claiming it’s real, they generate new artwork using neural networks that have been trained on the art of real artists and asked to generate new pieces in their style.

But I'd never seen this collection of his work. I was shocked and delighted to discover [William] Morris was somehow also a fine-art illustrator?? No one had ever told me. And a way better illustrator than I would have imagined. These prints echoed his other work, but with far more detail, rich colours, and compositional beauty. Perhaps he did these later in life? Perhaps the other work I'd seen was by a younger, less skilled Morris?

Instead of disclosing that this is what’s happening, these pieces are sold as original art by the famous artists.

I rechecked the Etsy print listings for disclaimers or details I must have missed the first time around, but none appeared. Nothing said these prints were “inspired by” or “influenced by” or “made to mimic” or any other derivatives of Morris, rather than originals.

A wonderful but scary conclusion to Appleton’s post is the realisation that while it’s currently rather trivial to check whether these are actual artworks by these artists by doing reverse image searches and looking for their actual art online, the next models will be trained with these forgeries in their training data.