It’s completely natural for humans to attribute human characteristics to generative AI systems that seem to behave like us. And since language is one of the most profoundly human traits, it’s no surprise that we tend to compare LLMs like GPT or PaLM (Bard) to humans, and even attribute emotions to them.

But in the same way that we don’t expect an airplane to flap its wings like a bird, or a submarine to swim with wave-like movements like a fish, it’s important to remember that LLMs do not use language the way humans do. They are an immensely useful invention for generating language, but they are blind to experiences that aren’t encoded in language and have no true emotional understanding.

Listen to this fascinating discussion between Vasagi Kothandapani, Bart Maczynski, and Marina Pantcheva in our latest Globally Speaking Podcast episode to learn all about the fundamental differences between how LLMs and humans learn language, and how we can encode ethics in AI.