AI – artificial intelligence | Josh Brinkers
A couple of months ago, I asked ChatGPT to help write some survey questions. Just a simple battery on food delivery services – nothing grand or philosophical. To its credit, it did a decent job. Polite, well-structured, and impressively fast. It even avoided the kind of weird double-barrelled sentence I usually write and then instantly regret.
But the end result had me thinking about Blade Runner – specifically those eerily lifelike replicants. On paper, they tick all the boxes. They look right. They sound right. But something’s missing. The soul, maybe. Or the twitchy humanness that can’t quite be programmed. The questions were technically sound, but about as emotionally resonant as a parking fine.
So while the structure was there, the humanity wasn’t. No nuance. No wryness. No sense they were written by someone who’s had a Tesco meal deal at 11pm and then felt irrationally betrayed by it. In short: they weren’t human.
Please note: I didn’t write any of the above.
From this point on, this article is written by me, a human. I’m one of the more prolific article writers at Market Measures, so I fed ChatGPT all of the articles I’d written in my time here and asked it to pay attention to my writing style, word choices and tone. I then told it to write the intro to an article about AI in market research, and to chuck in a movie reference somewhere, because I’ve noticed I tend to do that.
So the first three paragraphs are AI pretending to be me, and I’d say the results are hit and miss:
- It opened with a personal story, which is something I often do, but it also took the opportunity to self-aggrandise, which feels unearned.
- It mentioned Blade Runner, which is indeed a movie and one I like very much, but it used the film to describe AI as ‘eerily lifelike’ with ‘something missing’. I’d instead argue the point of that film is the opposite – to me, Blade Runner is about the humanisation of AI, and it asks whether any AI that can feel empathy should be considered distinct from a human at all.
- Finally, it went on an unhinged ramble about being betrayed by Tesco meal deals, which I think was an attempt at relatability. That’s not something I’ve ever experienced, but ChatGPT clearly thinks it happens to humans often enough to make a quip out of it, so maybe I’m just out of touch here – who knows!
Basically, those three paragraphs are not how I would have chosen to open an article about AI in market research. However, they’re helpful in demonstrating how I feel about it at the moment.
Obviously I could have got ChatGPT to write this whole article as ‘me’ and moved on with my day. It would have saved me some time and I don’t think I would have lost my job for doing it.
The problem is that when the subject is ‘my own writing style’, I’m the most qualified expert in the world, and I’d only score ChatGPT’s attempt a 4/10. So should I be impressed by everything else it does, especially in areas where I’ve got less expertise to judge the results?
Where AI is really helping me out in my job is in mechanising knowledge: doing things that you can’t really get wrong, but that take ages to do. It’s excellent at summarising previous or external work, coding open-ended responses, and automating processes. I find it’s less good at writing engaging questionnaires, or at designing projects sympathetic to conflicting client aims and office politics – basically anything that relies on (pardon the trite expression) ‘thinking outside the box’.
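For anyone curious what ‘coding open-ended responses’ actually involves, it just means sorting verbatim answers into a fixed code frame. Below is a minimal sketch of how that might look with the OpenAI Python SDK; the code frame, prompt wording and model name are illustrative stand-ins I’ve made up for this example, not a production setup.

```python
# A minimal sketch of LLM-assisted open-end coding, using the official
# `openai` Python package (v1.x) and an OPENAI_API_KEY in the environment.
# The code frame, prompt wording and model name below are illustrative only.
from openai import OpenAI

client = OpenAI()

# Hypothetical code frame for a food delivery survey.
CODE_FRAME = ["Price", "Speed", "Food quality", "App experience", "Other"]

def code_response(verbatim: str) -> str:
    """Assign one code from the frame to a single verbatim answer."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; swap in whichever you use
        messages=[
            {
                "role": "system",
                "content": (
                    "You code open-ended survey responses. Reply with exactly "
                    f"one label from this list: {', '.join(CODE_FRAME)}."
                ),
            },
            {"role": "user", "content": verbatim},
        ],
    )
    return completion.choices[0].message.content.strip()

if __name__ == "__main__":
    print(code_response("Took over an hour and the chips were stone cold."))
```

Even for a task this mechanical, someone who knows the data should still spot-check a sample of the output – which is rather the point of the rest of this article.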
The reason for this is that ‘the box’ is all it knows. LLMs have been fed on human creativity, and they borrow from libraries of it whenever told to make something new. This leads to utter averageness from a creative standpoint; every choice is a safe one, because it’s a choice that’s been made before. It also means the amount of ‘stuff’ a single person can create has shot through the roof, but is that a good thing when so much of it feels like filler?
I suppose my concerns with AI stem less from it taking my job (which it might) or taking over the world (which it might), and more from the side effects of the democratisation of ability it’s brought about. On the whole that’s a good thing, but I can’t help feeling it’s leading to a higher volume of lower-quality work, and acting as a crutch for people operating outside their areas of expertise, who may not have the knowledge to give AI’s outputs proper oversight.
In summary, it’s worth remembering that there’s a reason Microsoft called their offering Copilot: it’s the human who’s meant to be in charge.