
Nothing Beats Humanity


Am I the only one who plays their favourite song on repeat dozens of times? I thought this was common practice, but I’ve since deduced from subtle comments by friends and family that perhaps this isn’t the case. 

Anyway, earlier this week I found my new favourite song – Cuántas Veces! by Katie James – and in predictable fashion, I went to town on it. I streamed it over and over on my daily walks, I played it on a loop while working at my desk, and as I write this, I have a browser tab open with a live version of the song on repeat. You might have noticed (from the title or the lyrics if you’ve listened to the song) that it’s a Spanish song, and although I don’t speak Spanish, I’ve long admired Katie James’ guitar melodies and smooth compositions. In the last few days I’ve listened to this one song at least 100 times (I did the maths), and this number will have grown significantly by the time I’m done writing this essay.

A few days ago, after what would have been the 50th (or so) replay, I figured I should learn more about the lyrics to improve my appreciation of the song, as if that was even possible. I looked up the lyrics and to my utter surprise, I couldn’t understand them…because I don’t speak Spanish. Naturally, I pulled up Google Translate in a new browser tab, copied the Spanish lyrics from the already open tab, and pasted them to reveal the English translation. 

Google Translate gave me a rough idea of what the song was about, and I mean that literally; the translation was just good enough to interpret the title, yet bad enough that most of the lyrics were unintelligible. I wasn’t willing to settle for this poor translation job, and yet I didn’t have access to any Spanish-speaking folk who could help. But no bother, because just then I remembered this amazing tool that’s been taking the world by storm since 2022, and I figured surely it would provide me with better lyric translations and interpretations. If you’re yet to cotton on to what I’m referring to, it’s ChatGPT, an Artificial Intelligence (AI) engine, or system, or model, or whatever you wish to call it. 

I’ve written a fair amount about AI and ChatGPT on this blog, and my understanding of it, and experience with it, continue to evolve. Sometimes, when I have a bit too much time on my hands, I test the limits of what ChatGPT can do by asking questions I think it can’t answer and posing tasks or problems I think it would fail at. Occasionally I’m right and it completely falls flat. But once in a while, it surprises me; you might even say it blows my mind. For instance, a few weeks ago I tasked ChatGPT with translating a basic English sentence to Yoruba. Yoruba is a language spoken by one of the largest ethnic groups in Southwestern Nigeria, as well as in a few other countries in West Africa (like Togo and Sierra Leone) and the Americas (like Brazil and El Salvador). It also happens to be my mother tongue, though I don’t have a lot of mileage on that track. The following is a brief conversation I had with ChatGPT to test its limits with the Yoruba language.

Me to ChatGPT: Do you speak Yoruba?

ChatGPT: Yes, I can understand and communicate in Yoruba to some extent. How can I assist you in Yoruba?

Me to ChatGPT: Tell me what you just told me in Yoruba

ChatGPT: Sure! Here it is in Yoruba: "Bẹẹni, mo le mọ ati mu Yoruba lẹẹkan. Bí nisisiyi, bawo ni mo le ṣe iranlọwọ fun yin ni Yoruba?"

I chose Yoruba for my little ChatGPT exercise because I assumed that Yoruba-language content couldn’t possibly have been a part of the training data set used to develop the Large Language Model that ChatGPT operates on, so ChatGPT wouldn’t know what to do with my request. I expected the AI to tell me as much, but to my surprise, it spat out a sentence which I recognised as Yoruba but could barely understand. My jaw dropped to the floor the moment I realised my new AI friend could speak my mother tongue better than I could. But I digress; let’s get back to my new favourite Spanish song.

It took all of 3 seconds of Googling to learn that Cuántas Veces! translates to How Many Times! or How Often! It also took me less than a minute of searching for the song on YouTube to learn that a common expression like this makes for a popular song title. Unsurprisingly, my search returned many different songs, a few of them classics in the Spanish-speaking world. This is important, and you’ll see why in a moment. For now, what matters is that whenever I go to find Cuántas Veces! on my phone while on a walk, or on my computer while at my desk, I have to include the artist’s name – Katie James – to ensure I get the song I’m looking for, which, to be fair, is simple enough to do. Still, as I said earlier, after playing Cuántas Veces! on a loop more times than is typical for most people, I decided to see what my AI friend had to say about the song. So I fed ChatGPT the prompt below.

Me to ChatGPT: What's the meaning of the song Cuántas Veces by Katie James?

Its response was impressive, or so it seemed at first. ChatGPT gave me a blurb about how the song “delves into themes of longing, regret and the emotional complexity of relationships”. ChatGPT told me the lyrics “reflect on the number of times the speaker has experienced heartbreak, love, and introspection” and the song “poignantly captures the cycles of love and loss, and the emotional toll they take on a person”. Yet again, I was in awe of this non-human thing capable of dissecting a song and responding with what sounded like an informed opinion. But this feeling didn’t last long, because something caught my eye. Embedded in the polished prose that ChatGPT had used to describe the song was one throwaway word in the third and last paragraph: melancholic. I’ll say it again: ChatGPT described Cuántas Veces! as melancholic. Melancholic. If you’ve made it this far and still haven’t listened to the song, here it is for reference. Feel free to take 4 minutes away from this article and bless your ears. You’ll thank me. But please come back.

¡Cuántas Veces! - Katie James

Now that you’ve listened to it (I assume), would you consider this song melancholic? I certainly wouldn’t. And if you haven’t listened to it, no worries, just take my word for it: I don’t think any human being, speaking any language or from any culture on this earth, would listen to this song and use the word “melancholic” to describe it. It is so upbeat, so full of life and energy, so happy-sounding and uplifting, it couldn’t be any less melancholic if it tried. As you can imagine, I challenged ChatGPT on this with the following prompt.

Me to ChatGPT: The song doesn't have a melancholic tone though, does it?

ChatGPT surprised me once again. It immediately rolled over and conceded that I was right. Its exact words were “You are right; ‘Cuántas Veces’ by Katie James does not have a melancholic tone. The song has an upbeat and lively rhythm, blending folk and Latin musical elements, which gives it a more uplifting and positive feel.” This was a pleasant surprise. I wish we humans were more willing to admit when we’re wrong; it’s something I’m trying to get better at every day. Staring at my AI friend’s response on the screen, my first thought was that perhaps this is something AI does better than humans. But after my fleeting recognition of this positively rare and understated character trait, I found myself aggrieved. It took a moment to figure out I was annoyed, chagrined even, because I’d almost been deceived by ChatGPT. I almost fell for its beautiful and articulate spiel, and if it hadn’t been for that one word – melancholic – which made me think twice, I would have been none the wiser. It had me until it slipped up with that one word. I hadn’t bothered to double-check what it had previously told me until it told me something so jarring that I just knew it had to be wrong.

By the way, there’s a name for this phenomenon where ChatGPT (or some other AI system) generates a response to a prompt which isn’t true to reality. This phenomenon, where AI blatantly makes stuff up and spits out unverified information with the confidence of a populist leader, is called hallucination, and it happens more often than you might think. There’s plenty of documented evidence of it. Last year, a lawyer used ChatGPT to write his case filing, and it was only after presenting it to the court that it emerged that some of the cases cited in the filing had been made up by the AI. As you can see, unlike my case with the song, this is a very non-trivial scenario with real-world implications.

But trivial or not, this got me thinking: what else has my AI friend told me that I swallowed and believed without verifying? How many times (see what I did there) has my AI friend hallucinated, lied to me, misled me, and fed me false information? I don’t know, and I probably never will. Still, if ChatGPT could feed me false information about the tone of my new favourite song, isn’t it possible that everything it told me about the song was false? I found myself questioning the entire synopsis and interpretation it gave me before its use of the word “melancholic”. What if it was all just nonsense?

There was a way to find out. I figured I’d see what information my AI friend based its analysis on, so I asked it to give me the original (Spanish) lyrics of Cuántas Veces! by Katie James, and a translated English version. Remember that I had the original Spanish lyrics pulled up in a separate browser tab. Would you like to guess what ChatGPT gave me in response to my prompt? Surprise surprise, the lyrics didn’t match, so either the lyrics I’d pulled up in the separate tab were wrong, or ChatGPT’s were. Did I mention I’d listened to the song dozens of times by this point? When you listen to a song over and over, you don’t have to speak the language to know whether the lyrics you’re looking at match those of the song. Needless to say, ChatGPT was wrong. I told it as much and asked it to try again. It gave me new lyrics. They were different lyrics, but they were still the wrong lyrics. Again, I told it the lyrics were wrong and asked it to try again. It gave me new, different lyrics, and again, they were wrong. And again, and again, and again. I repeated this 5 times, and 5 times, ChatGPT gave me the wrong lyrics.

Remember when I said there were quite a few different songs titled Cuántas Veces? I imagine ChatGPT kept feeding me versions of those songs and passing them off as the lyrics to the Katie James song I’d asked for. It clearly had enough data to pull from that it didn’t run out of lyrics to present to me, even after I repeatedly told it the lyrics were wrong. By this point, I was thoroughly perplexed, and a tiny bit impressed if I’m being honest, because each time I told it the lyrics were wrong, it just apologised for the error and tried again. There was a glimpse of that admirable quality again, the willingness to admit it was wrong, which was nice to see. But it was equally frustrating that it was designed (or felt the need) to present a wrong answer instead of stating that it didn’t have the lyrics I’d asked for. In other words, it couldn’t just admit it didn’t know the answer to my question.

I repeated this exercise a few more times, telling ChatGPT it had given me the wrong lyrics. Then, one time, perhaps out of impatience or exasperation, I hit the terminate button while it was generating its latest response, because I could see from the first few lines that it was once again serving up wrong lyrics, and I couldn’t be bothered to let it finish before correcting it yet again. Maybe it was just the passage of time, maybe it ran out of other Cuántas Veces lyrics to feed me, maybe it was because I terminated it mid-flow; whatever it was, something changed. When I told it its last response was wrong and asked it to try again, this time, instead of giving me the full lyrics to a purported Cuántas Veces song, it simply gave me a short blurb about Katie James’ Cuántas Veces, a correct translation of an excerpt, and links to listen to the song on YouTube Music and Deezer. Curiously, though, it did not share the full song lyrics in Spanish or the translation in English, even after I’d prompted it to share and translate the full lyrics.

In essence, ChatGPT didn’t spell out the full lyrics to a Cuántas Veces song like it had done many times before. Without being privy to the intricate workings of ChatGPT’s engine, I can only assume this latest response was a tacit acknowledgement that it didn’t have access to the lyrics I’d asked for (or didn’t have the necessary permission to use the data in generating responses), so it could only offer me the next best thing, which was a summary of the song and links to listen to it on a few streaming platforms. If this is the case, why didn’t ChatGPT save us both some time and tell me it couldn’t provide the lyrics and translation I needed? Why did it repeatedly pass off the wrong lyrics as the right ones? Why did it feel the need to lie to me? For the briefest of moments I thought, maybe it is human after all. Then I jettisoned this thought as quickly as I had conceived it.

I’ve been doing some serious, brow-furrowing thinking on this. The point of contention I keep coming back to is whether ChatGPT was designed to prioritise providing false information over admitting a gap in its knowledge base. In other words, has this AI been built to make stuff up instead of saying something to the tune of “I don’t know”? This has grave consequences for us all as AI continues to permeate our everyday lives. You might have noticed I used the word “designed” more than once in this essay. ChatGPT and other AIs like it have been (and will continue to be) designed by select humans as a means of achieving their specific ends. I have no doubt that AI will continue to impress me in certain respects, and I look forward to continually testing its limits and making the most of it where applicable to improve everyday life. That said, interactions like the one I’ve chronicled here give me pause, and remind me that there are certain things – human qualities, behaviours and preferences – that are near impossible to replicate or replace, for better or worse, because, at the end of the day, nothing beats humanity, not even AI. At least not yet.

P.S.: My debut non-fiction book, Art Is The Way, and my middle-grade novella, A Hollade Christmas, are out everywhere now. You can get them in all good bookstores and from all major online vendors.