AI

There’s a lot of hullabaloo lately about ChatGPT and AI. It’s not surprising. The feats these text systems are pulling off are astounding, and creepily human-like. It raises the question: are we at the cusp of Artificial General Intelligence?

I’m not an AI expert. But I do know that most people don’t understand what it’s all about, and a lot of media coverage gets it horribly wrong. For many of us, if something looks and feels human-made, passing a Turing test, we implicitly intuit that the system is in fact “intelligent”, in a similar sense to how we intuit our own cerebral capabilities. We get an emotional response because the system is sufficiently human-like to convince us in our guts, and that makes the leap from gut feeling to logical conclusion easy, if not inevitable.

It turns out that everything we’ve built so far that purports to be some level of AI is in fact quite dissimilar to how our brains actually work. Well, sort of. To be clear, we just don’t know how our brains work. And systems like ChatGPT are in vital respects similarly black-boxed and inexplicable. “They just work.” Enormous neural nets with hundreds of billions of learned parameters (the human brain has on the order of a hundred billion neurons; sound familiar?) end up producing language patterns that replicate our own abilities so convincingly that it confounds and amazes us. Is it as smart as we are?

ChatGPT certainly emulates our language construction in a completely different way, on a completely different substrate. But the results are almost the same. So at minimum, it’s a fantastic tool. But it’s not a human. And it’s miles away from “general” intelligence. It’s a huge matrix of weighted coefficients, machine-learned ad nauseam, whose job boils down to predicting a plausible next word over and over until the output makes sense to us, and it’s fairly narrowly applicable to text generation.
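
To make that “predict the next word” idea concrete, here’s a deliberately tiny sketch in Python. Everything in it is invented for illustration: a five-word vocabulary with hand-picked probabilities stands in for a trained neural net, and it only looks at the previous word, whereas a real model conditions on the entire preceding text. But the generation loop is conceptually the same: look up a probability distribution over the vocabulary, sample one word, append it, repeat.

import random

# Toy stand-in for a language model: hand-invented next-word probabilities.
# A real LLM derives these from hundreds of billions of learned weights and
# from the whole context so far, not just the last word.
next_word_probs = {
    "the": {"cat": 0.85, "mat": 0.10, "the": 0.05},
    "cat": {"sat": 0.80, "the": 0.15, "cat": 0.05},
    "sat": {"on": 0.90, "the": 0.05, "sat": 0.05},
    "on":  {"the": 0.90, "mat": 0.05, "on": 0.05},
    "mat": {"the": 0.70, "cat": 0.20, "mat": 0.10},
}

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        probs = next_word_probs[words[-1]]
        # Sample the next word in proportion to its probability.
        next_word = random.choices(list(probs), weights=list(probs.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat the cat sat"

The magic in the real thing isn’t the loop; it’s that the probabilities come out of a network trained on a staggering amount of text.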

Part of the wow factor should also be tempered by remembering that a working English vocabulary is really “only” about 50,000 words, and there are only “so many” ways to put those words together that produce meaning. Don’t get me wrong: ChatGPT is four-letter-word-ing amazing, and certainly cool. But maybe constructing intelligible text is not as “complicated” as we think it is. Human language is astounding, and something that interests me to no end. However, perhaps it’s not as complex as it feels.
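
A quick back-of-the-envelope, taking that rough 50,000-word figure at face value: the raw space of word strings is astronomically large, yet virtually all of it is gibberish, and at generation time the model’s job collapses to repeatedly choosing one word out of that finite vocabulary.

# Back-of-the-envelope arithmetic using the ~50,000-word figure from above.
vocab_size = 50_000
sentence_length = 10           # an arbitrary example length

raw_combinations = vocab_size ** sentence_length
print(f"{raw_combinations:.3e}")  # ~9.766e+46 possible 10-word strings

# Almost all of those strings are meaningless. What the model actually has
# to do, over and over, is pick 1 word out of ~50,000 given what came before.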

If you want to seriously nerd out on this topic, read this Stephen Wolfram article. It’s really, really long, there’s a lot of math, and there are a lot of terms in quotes (spoiler: for good reason), but it seems incredibly sensible to me. I admit most of the math was outside my wheelhouse. But there’s still some great insight there about neural networks and machine learning and what’s really going on under the hood. There’s another related article below as well, more easily digestible. Enjoy!

https://www.quantamagazine.org/the-unpredictable-abilities-emerging-from-large-ai-models-20230316/
