Once in a while I get asked a question that makes me stop and think, really think about something. When this happens, the answer tends to come gushing out of me like water from a collapsed dam. I experienced this a few days ago when I woke up to a message from my brother which read:
“How are you thinking about AI vis a vis your career and your art? Worried or excited?”
Rather than respond straight away, I let the question simmer at the back of my mind while on my morning walk, and when I sat at my desk to start my day, I had so much to say that I rambled on for far longer than anyone should in a text message.
If you’ve been following this blog and my writing for a while, you might know I’ve written a few essays about AI, the value of creativity, and the intersection of art and humanity. But what I realised as my thumbs frantically danced all over my phone screen, typing out my response, was that (1) the world has changed quite a bit since I wrote that essay about AI and art a few years ago, and (2) I have a lot more to say given how things have evolved since I last penned my thoughts on the subject.
My brother suggested turning my response into a blog post, which is exactly what I’ve tried to do here. I’ve chopped and changed things here and there so that it flows like an essay rather than a collection of typo-laden, disjointed 8am ramblings. I’ve also expanded on certain points and made edits through the lens of creativity as a practice, and the philosophy of art for the sake of art, which, as it happens, are central to this blog.
So, with all that said, I give you some thoughts on the AI revolution and what it means for our art, our careers, and our humanity.
AI and society
I think what gets lost in the AI conversation is that there are different types of AI. We tend to focus on Generative AI and Large Language Models (LLMs) like ChatGPT which are used primarily for content generation and information synthesis, but I do think there are more interesting and frankly more useful applications of machine learning.
It is genuinely interesting to see how we can use machine learning to search for a cure for cancer, to explore our DNA and genetics, and to do all sorts of things that can improve quality of life. Instead, we dwell on fancy little chatbots that help us crank out marketing emails to send to people who’ll use AI in turn to summarise those same emails into bullet points and generate their own responses.
We're sleepwalking into a future where it'll be one AI agent talking to another in the workplace. What'll be the point of any of this?
I'd love to see a situation where we use this as a signal to figure out what is worth doing. For instance, if we can't be bothered to write certain emails or reports ourselves, and if we find ourselves delegating these tasks to AI, might this be because those tasks don't contribute any value to our affairs? Could it be because we know deep down that no one reads those emails or reports and we'd be better off not cranking them out in the first place?
There's also the question of what this means for education. We're seeing lots of news reports about students using AI to write their essays and do their homework. Recently I learnt that some academics and researchers are using AI to write their papers, and some use AI for the peer review process central to judging which papers are accepted for publication.
It's easy to decry the dishonesty of these students and academics, but I wonder if we should take a step back and review the system as a whole. If students and researchers are driven to the point where they feel they have to use AI for things that should be inherently valuable to their development, it suggests to me that (1) there are perverse incentives at play, (2) the system is badly broken, and (3) a significant overhaul is needed.
I think teachers and institutions should foster an environment where students are genuinely curious to learn and develop themselves. In this sort of environment, it'll be obvious to students that they're only cheating themselves when they use AI to write their essays, because the value lies not in the grades or rewards bestowed, but in the process, research and critical thinking required to write an essay. You can't truly know what you think about something until you've put in the time and effort to read and write about it. If anything, the process of doing the research and writing the essay should be the true reward, but anyone who's been through the current educational system will know that we're a long way from this.
As for researchers using AI to write and review papers, can we really blame them? Well, maybe a little bit. But I'd argue that the bulk of the blame lies with a system that drives them to publish or perish, and a system that compels them to spend time they don't have on the arduous, painstaking peer review process, with no commensurate reward. Again, these are symptoms of a bigger problem. The system is broken and is in need of reform.
AI and careers
As far as my career is concerned, I think I do the sort of work that can't be properly automated away, not yet anyway. The keyword is “properly”, because anything can be automated. For some things, however, automation produces watered-down simulacra at best, or at worst, outputs that bear no resemblance to, and often contradict, the desired outcomes.
What this means for me is it makes sense to learn the strengths, boundaries and shortcomings of AI as it stands, and use my human expertise to augment it where it falls short.
I think everyone, and most of all young people, should figure out what makes them unique as humans, what they can bring to the table that AI can't replicate. Not that long ago, the golden career advice was “learn to code” and/or “get a Computer Science degree”. Now, software development has turned out to be one of the easiest things to automate with AI because (1) coding is largely based on logic, (2) there's comparatively little nuance involved, and (3) it's the sort of activity AI keeps getting better at automating.
I admit there's more to being a software developer than coding. Software engineers spend a lot of their time eliciting requirements, working with teams, navigating relationships, and engaging in all sorts of activities that require soft skills. This applies to most career paths, but what sets coding apart and makes it ripe for an AI takeover is that its unique selling point (i.e. a skill that very few people had) is now demonstrably replicable with AI. The tables have turned in that sense. Computer science graduates will likely be among those who struggle most to land entry-level jobs, and they're among the easiest targets for redundancy. Granted, these are broad generalisations, but you only need to look at how many tech professionals are being made redundant at the largest tech companies like Microsoft, Meta and Amazon to see that market trends are starting to bear this out.
AI and art
As for AI in art, it's a joke. Sure, you can crank out a whole novel or song or painting in minutes with Gen AI tools, and lots of people are doing it. But these are the equivalent of Ultra-Processed Foods (UPFs): they might seem like a good idea at first, but after the first few bites, they leave your body craving real, nutrient-rich food.
That won't stop large companies from trying to convince you that their UPF is what your body needs, or that their AI-generated content can replace human art. We’re seeing a trend where companies use AI to make art so they don't have to pay artists. Spotify is now full of AI music, so they don't have to pay human artists for the streams, which, as a musician, I can confirm were always a pittance anyway.
Amazon is full of AI-generated books, no doubt made by people looking to make a quick buck. You might wonder why Amazon is willing to let these books stay up on the platform, until you realise they couldn't care less about the quality of their offerings as long as they get their cut of the sales. It benefits Amazon for the platform to be flooded with “books” because it makes it more likely for unwitting customers to buy them, thereby generating more revenue for the company.
So many LinkedIn articles, social media posts, texts, images and videos are generated and/or altered by AI. There's so much of this online and we contribute to it on a daily basis. A few weeks ago, there was that trend of creating action figures in our likeness with AI. It was disheartening to see so many people jump on the bandwagon. Many would've considered it a bit of harmless fun, but there are far-reaching implications. What happens when there are more AI-generated images than real images online, one might ask? Time will tell. We're already seeing situations where search engines serve up factually incorrect images as the most prominent results. For instance, a few months ago, someone did a Google search for Ludwig van Beethoven and the first image that came up was AI-generated. This is not cool.
A time will come when there's so much AI slop around that it'll cause people to crave human art. We'll long for live music where we can see and engage with the musicians and meet them on a human level. It might just be the thing that saves human art. Through this lens, I predict that art is here to stay, and artists will always have a place, though I fear that the situation will get worse before it gets better.
The same will happen with social media too. We'll get to a place where we can't tell what's real or what's been fabricated with AI, and we will struggle to believe the evidence of our eyes and ears.
The only solution will be to get offline and engage with other humans in the real world. We’ll have little choice but to go out and touch grass, if we are to hang on to our humanity. Or maybe none of this will happen. Maybe AI will catalyse the onset of the apocalypse, in which case, we'll say of our humanity, it was fun while it lasted.
AI and the global south
There’s also the question of inequality. There’s no disputing that the developed world (or the global north, or the west, or whatever term we consider most appropriate for the world's largest economies) is leading the AI revolution, while the rest of the world, especially Africa, lags behind. We need to figure out what needs to happen to level the playing field. How do we invest in the much-needed infrastructure so that developing countries can leverage the benefits of AI?
There’s also a silver lining to consider: while it’s true that Africa lags behind, this means we can watch what the rest of the world is doing and learn from its mistakes before the AI revolution comes to Africa in earnest.
This second-mover advantage, as it were, might just pave the way for good things to come.
AI and everyday use
We need to remember AI is a tool, like any other tool. I don't want to blame AI for the terrible things people do with it. Humans have always used tools for good and for evil. Nuclear power could have solved the world's energy needs many times over, but today we associate it with weapons of mass destruction. The Internet can be and has been used for many things, including porn and all sorts of fraudulent activities, but it has also been used for so much good and contributed so many positive things to our civilisation.
Just like all the tools that have come before it, AI has potential for good, and it'll surely be (and is already being) used for evil too. That said, I don't think we should write off AI just because it has created new opportunities for perverts (for example). I also think we need to remember that tools aren’t created equal. Some tools can do more damage than others, just as some can produce more positive impact than others. For instance, an automatic rifle can do more damage than a knife in the hands of a mentally unstable individual. You only need to look at mass shootings in America for evidence of this. A potential solution to this problem is legislation.
Then there’s the problem of energy. I worry that we don’t talk enough about the impact AI engines have on the climate. These things require a lot of energy to run, and like any resource, we need to account for the opportunity cost of this energy. If, for instance, the amount of energy required to generate a digital action figure could be used to power an entire city, at what point do we need to have an honest, grown-up conversation about whether it’s the best use of the limited energy currently at our disposal? At what point do we need to accept that its use continues to catalyse climate change? At what point do we need to take a long, hard look in the mirror and consider whether our priorities are misplaced?
AI and the future
In terms of what the future holds as far as the proliferation of AI is concerned, I will say this: it doesn’t augur well when adding AI to a product makes people less likely to use it, and when tech companies have to force adoption by bundling AI into existing products.
The jury is still out on what this revolution means for our species and the planet we call home. This could be the best thing to ever have happened to us. It could also be the worst. I have faith in humanity, and I continue to hold out hope that we’ll be better off for having invented AI. As with all things, we’ll find out in due course. Until then, I’ll continue expressing my humanity in ways that bring me joy, including (but not limited to) reading, writing, making music, walking, drinking coffee, and eating tofu sandwiches, not necessarily in that order. Most of all, I’ll continue spending time with the people I hold dear, and doing what I can to leave this place better than I found it.
Thanks for making it this far. If you’d like to read more of my thoughts on AI, art, and humanity, feel free to check out the following…
My new album, Hope on the Horizon, is out everywhere now. Not a fan of streaming and want to support my music? You can download a digital version or buy a CD now here. Thank you for listening, spreading the word, and reaching out to share your thoughts. I appreciate it. Have a great week.