How to sound smart when talking about AI

AI Speaker

You’re nervous. You’ve just tied your first ever half-Windsor knot in the bathroom, based on a YouTube short that stopped loading halfway through. You’re sure that everyone can see that your suit pants aren’t shortened, but just folded in. And even though you got to the meeting room 15 minutes early to set everything up, Windows says it wants to update and you’re not sure whether it might just go ahead.

It doesn’t matter.

This is the moment you’ve been preparing for. You’ve invited all the bigwigs from your group. The leads, the local VP - even a managing director is at the table in front of you. You take a deep breath. Here it goes:

“Our company”, you begin in a solemn voice, “needs an AI strategy”. You pause to check your audience’s reaction. You’re in luck! Everyone seems to nod in agreement. Yes - an AI strategy! We need one of those! Good! You relax. This was it. You’ve just taken the next step on the corporate ladder by gaining recognition as the fancy innovation leader you want others to see you as.

There’s just one tiny problem: You don’t have the first clue about AI.

The good news is that no one else at this table does either. So all you need to do now is sound confident and knowledgeable - and that cozy corner office with the two visitor chairs in front of the desk can soon be yours.

I want to help you with this.

By writing this glossary of AI terms - and by adding just enough extra detail, context and reusable quotes for you to sound like a deep thought pioneer in the exciting field of artificial intelligence - whatever the hell that is.

So here it goes:

Computer Brain

Artificial Intelligence (AI)

AI is a collective term that encapsulates any interaction with a machine that gives the impression of intelligence. Here’s one:

If age is less than thirty, say “wow, that’s young!” else say “wow, that’s old!”

That’s AI. And so is ChatGPT, DALL-E and AlphaGo.

I’m not trying to be obnoxious (ok, maybe a little). My point is just that there is no clear cut-off point between this simple bit of logic and the systems that we’ve come to think of as AI.
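
For the record, here’s what that deeply intelligent system looks like as actual code - a minimal Python sketch (the function name is my own invention):

    # The world's most underwhelming "AI": a single hard-coded rule.
    def judge_age(age):
        if age < 30:
            return "wow, that's young!"
        return "wow, that's old!"

    print(judge_age(25))  # wow, that's young!
    print(judge_age(47))  # wow, that's old!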

Instead, there’s a wide spectrum of techniques, architectures and implementations that all create an impression that the computer you’re interacting with has a genuine understanding of your input and provides thoughtful output - and all of that is called “AI”.

Artificial General Intelligence (AGI)

There might, however, be a whole different quality of thinking system that we haven’t seen yet.

The theory goes as follows: AI takes input and creates output. This output can serve as training material for the AI again. Right now, training a large language model (LLM) on its own output makes it deteriorate over time. But it doesn’t have to be that way. At some point, learning from its own output might increase an AI’s capabilities, leading to a positive feedback loop.

This is especially true if the output of an AI is another, better AI. This might sound crazy at first, but AI is already capable of writing programs of moderate complexity - and we’re only just getting started.

In either scenario, the AI improves by some percentage with every iteration. And an improved AI is not only more capable, but can also iterate faster.
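
To see what that compounding actually does, here’s a toy Python sketch - the 10% gains and the iteration times are made-up numbers, purely for illustration:

    # Toy model of recursive self-improvement: every iteration makes the AI
    # a bit more capable AND a bit faster at producing the next iteration.
    capability = 1.0        # arbitrary starting capability
    iteration_time = 10.0   # days per iteration (made up)
    elapsed = 0.0

    for i in range(1, 21):
        elapsed += iteration_time
        capability *= 1.10       # 10% more capable each time (assumption)
        iteration_time *= 0.90   # ...and 10% faster to produce the next version
        print(f"iteration {i:2d}: capability {capability:5.2f} after {elapsed:5.1f} days")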

If you plot this sort of accelerating exponential growth onto a chart, you end up with a hockey-stick-style curve that stays flat for a while and then shoots up almost vertically in a very short time. This event is referred to as the “singularity” and it would bring forth an AI that genuinely understands all the things and can think in ways we are struggling to anticipate. This might also be an AI that has no use for humans, but one step at a time…

Machine Learning

Similar to “AI” above, machine learning is a word that describes many things. But usually, what people mean is the act of a computer looking at large amounts of data, deriving patterns and applying these patterns to understand other data.

In some cases, humans highlight patterns within the data and tell the computer what that pattern means - e.g. by drawing a box around a cat in a picture and tagging it with the word “cat”. If that happens, we speak of “supervised learning”.

At other times, the computer just directly processes data, e.g. language from the internet, without human intervention. It then organizes this data by common characteristics and tries to make sense of it by itself. This is called “unsupervised learning”.

A third approach is learning towards a certain goal - e.g. “get good at playing Go” or “try to solve this maze in the shortest possible time”. The computer will test all sorts of strategies and will be rewarded for those that work or punished if it was being naughty. This is called “reinforcement learning”.
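
If you’d like to see the supervised flavour in action, here’s a minimal sketch using scikit-learn (my choice of library - the post doesn’t prescribe one), with completely made-up cat data:

    # Supervised learning in miniature: we hand the computer examples
    # plus the "right answers" and let it derive the pattern.
    from sklearn.tree import DecisionTreeClassifier

    # Made-up features: [weight in kg, has whiskers], with labels 1 = cat, 0 = not a cat
    examples = [[4.2, 1], [5.0, 1], [30.0, 0], [0.3, 0]]
    labels = [1, 1, 0, 0]

    model = DecisionTreeClassifier()
    model.fit(examples, labels)       # the "learning" part
    print(model.predict([[4.5, 1]]))  # hopefully [1] - looks like a cat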

Neural Networks

The technology that combs through the data, recognizes patterns and creates associations is called an “(Artificial) Neural Network” - and it’s one of the most revolutionary pieces of software ever created.

Just like any mammal brain, a neural network consists of neurons (the nodes) and synapses (the connections). The connections between the nodes aren’t just on or off - they can be weighted, indicating the strength of the connection.

Input - e.g. the pixel data from an image - is shoved into the input layer of neurons. Then a hidden layer in the middle processes this input, strengthening or weakening the connections between the nodes that store existing patterns, and ultimately activates or deactivates a set of nodes on the other end, which can then be read as output.

Usually, there isn’t just one of these middle layers, but many of them. This is called “deep learning”.
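
Here’s roughly what one pass through such a network boils down to - a tiny NumPy sketch with random, made-up weights (a real network would have learned them):

    import numpy as np

    # Three input neurons, four hidden neurons, two output neurons.
    # The weight matrices ARE the network - they encode the learned patterns.
    x = np.array([0.2, 0.9, 0.5])   # input layer, e.g. pixel values
    W1 = np.random.randn(3, 4)      # input -> hidden connections
    W2 = np.random.randn(4, 2)      # hidden -> output connections

    hidden = np.maximum(0, x @ W1)              # weighted sums plus a ReLU activation
    output = 1 / (1 + np.exp(-(hidden @ W2)))   # sigmoid squashes values between 0 and 1
    print(output)                               # read the output neurons

Stack a few more of those hidden layers between W1 and W2 and you’ve got yourself deep learning.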

Sometimes, the output from a neural network is also fed back to it as input - usually with some variation. Networks that are designed for this are called “Recurrent Neural Networks”.
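
A bare-bones sketch of that feedback idea, again in NumPy (shapes and numbers are arbitrary):

    import numpy as np

    # A single recurrent step: the network's previous state is fed back in
    # alongside each new input, so earlier inputs influence later outputs.
    rng = np.random.default_rng(0)
    W_in = rng.standard_normal((3, 4))     # input -> hidden
    W_back = rng.standard_normal((4, 4))   # previous hidden -> hidden (the feedback loop)

    hidden = np.zeros(4)
    sequence = [rng.standard_normal(3) for _ in range(5)]   # e.g. five time steps

    for x in sequence:
        hidden = np.tanh(x @ W_in + hidden @ W_back)   # new state depends on the old state
        print(hidden)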

There’s also an incredibly clever architecture called Generative Adversarial Networks (GAN) where you take two networks and make them distrust each other. Basically, the first network creates some output, e.g. an image, and the second network tries to find flaws in it. This creates a powerful feedback loop that can lead to incredible results. (The images in this post, by the way, were created by Midjourney.)
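
Since PyTorch comes up later in this post anyway, here’s what that distrust looks like as a toy training loop - the task (faking numbers drawn from a bell curve around 4.0) is invented purely to keep the sketch small:

    import torch
    import torch.nn as nn

    real_data = lambda n: torch.randn(n, 1) + 4.0   # the "real" thing to imitate
    noise = lambda n: torch.randn(n, 2)             # the generator's raw material

    G = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))                 # the forger
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # the sceptic

    loss = nn.BCELoss()
    opt_G = torch.optim.Adam(G.parameters(), lr=0.001)
    opt_D = torch.optim.Adam(D.parameters(), lr=0.001)

    for step in range(2000):
        # 1) Train the sceptic: real samples should score 1, fakes should score 0.
        opt_D.zero_grad()
        d_loss = loss(D(real_data(32)), torch.ones(32, 1)) + \
                 loss(D(G(noise(32)).detach()), torch.zeros(32, 1))
        d_loss.backward()
        opt_D.step()

        # 2) Train the forger: try to make the sceptic rate the fakes as real.
        opt_G.zero_grad()
        g_loss = loss(D(G(noise(32))), torch.ones(32, 1))
        g_loss.backward()
        opt_G.step()

    print(G(noise(5)))   # should hover somewhere around 4.0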

So - where do you get such a Neural Network?

Neural Network Store

Neural networks are just software that you run on a computer. You can get them as part of Google’s TensorFlow platform, from Microsoft’s Cognitive Toolkit or Facebook’s PyTorch, or use something like Theano or Caffe. Or you can use a cloud-hosted version, such as AWS SageMaker, Google Vertex AI or Azure ML.

When you first try to use any of these, you might be a bit overwhelmed by their complexity. But fear not - there are popular APIs such as Keras that run on top of these frameworks and make them significantly easier to use.
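
To give you an idea of how much Keras hides, here’s a minimal sketch of defining and training a tiny network on made-up data (assuming TensorFlow 2.x, which bundles Keras):

    import numpy as np
    import tensorflow as tf

    # 100 made-up examples with 8 features each, plus a yes/no label.
    X = np.random.rand(100, 8)
    y = np.random.randint(0, 2, size=(100,))

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(8,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=5, verbose=0)   # the actual learning
    print(model.predict(X[:3]))            # probabilities between 0 and 1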

Having said that - if you want to do anything with AI, you’d better polish up your Python. There are deep learning frameworks and neural networks in other languages, e.g. DL4J(ava), Shark (C++) or brain.js - but any serious player in this space is using Python.

Oh - and you need one more thing: A good GPU!

What on earth does my graphics card have to do with Neural Nets?

At the heart of your computer lives a processor, the CPU. This CPU has immense power, but it can only do somewhere between 4 and 16 things at the same time (maybe 32 if you’re fancy).

That’s great if you want to answer a single complicated question, such as “what is the next state of the economy in my city-building game?” - but it is less useful when you have to answer a much simpler question like “what color should this pixel be?” two million times - once for each pixel on your screen.

For these questions, you need something that can do a lot of small computations simultaneously, rather than a few big ones. And that something is your graphics card or “GPU”.

Now - “What state should my neuron be in?” is much more like the pixel question than the economy question. And the answer is much quicker to find on a GPU than on a CPU.
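
In practice, this just means telling your framework to put the numbers on the graphics card. A sketch using PyTorch - it falls back to the CPU if no CUDA-capable GPU is around:

    import torch

    # Use the GPU if there is one; otherwise the exact same code runs on the CPU.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Two big matrices full of "what state should my neuron be in?" questions.
    a = torch.randn(4096, 4096, device=device)
    b = torch.randn(4096, 4096, device=device)

    c = a @ b   # millions of small multiplications, all happening in parallel
    print(c.shape, "computed on", device)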

So - next time you’re wondering why you can’t afford Nvidia’s stock anymore - this would be the reason.

Natural Language Processing/Understanding/Generation (NLP/NLU/NLG)

If I were a computer, I couldn’t be more annoyed by the way humans talk to each other. I mean - it’s all over the place. Every rule has a million exceptions, different words mean different things in different places, contexts and cultures, and quite often what is said isn’t even what is meant - there’s all this subtext, sarcasm, hyperbole and hidden meaning… it’s a mess.

The techniques that deal with this mess are called “NLP” (Natural Language Processing), “NLU” (Natural Language Understanding, i.e. working out what you actually meant) and “NLG” (Natural Language Generation, i.e. producing output that is largely indistinguishable from human language).
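
If you want to poke at this yourself, libraries like Hugging Face’s transformers (my pick - there are plenty of others) wrap a lot of it into a single call. A sketch, which will download a smallish model the first time you run it:

    # pip install transformers torch
    from transformers import pipeline

    # Sentiment analysis - a small slice of "natural language understanding".
    classifier = pipeline("sentiment-analysis")
    print(classifier("I just tied my first half-Windsor and it looks fantastic."))
    # -> something like [{'label': 'POSITIVE', 'score': 0.99...}]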

GPT

Have you tried ChatGPT? I hope so! It’s pretty incredible. The chat part is self-explanatory - after all, you chat with it and it chats back. But what does GPT mean? It stands for “Generative Pre-trained Transformer”.

Let’s go through each part of that individually: “Generative” means it creates new output rather than just classifying existing input. “Pre-trained” means it has already been trained on an enormous amount of text before you ever talk to it. And “Transformer” is the name of the underlying neural network architecture - one that is particularly good at paying attention to the context in which each word appears.

Hallucinations

GPT architectures create perfectly reasonable-sounding output - but they don’t really understand your question. (Or maybe they do - building AI is actually teaching us a lot about how we think ourselves, and we might need to reconsider some stuff.)

But sometimes, when choosing the statistically most likely next word to add to the output, the GPT strings together something that doesn’t quite exist. This might be a legal court case that never happened. Or a code sample that exists in many other APIs, but not in the one you’ve been asking about.

If a generative AI invents something that doesn’t quite exist, we call it a hallucination.
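
Here’s a toy sketch of that “most likely next word” step - the vocabulary and the probabilities are, fittingly, entirely invented:

    import random

    # A made-up probability distribution over possible next words,
    # given the prompt "The capital of Australia is ...".
    next_word_probs = {
        "Canberra": 0.55,    # correct
        "Sydney": 0.30,      # plausible-sounding, wrong
        "Canberry": 0.10,    # doesn't exist at all
        "banana": 0.05,
    }

    words = list(next_word_probs)
    weights = list(next_word_probs.values())
    print(random.choices(words, weights=weights, k=1)[0])

The model has no idea which of these candidates is real or correct - it only knows which one is statistically likely to come next. Most of the time that happens to be the right one; sometimes it’s a Canberry.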

Explainable AI

Because of these hallucinations - and a general concern that we might be creating something that is not quite aligned with our ethics and values - we feel that we need to better understand how the AI arrived at its answer.

But there’s a problem: the AI doesn’t know either. Asking it to source all the inputs that went into an output is the equivalent of looking at the billions of little currents flowing through a processor and trying to figure out whether it is running Age of Empires or Excel.

Nonetheless, a lot of people are pushing for this - both to lessen concerns about adopting AI and to comply with regulations (in finance, for example) that demand to know the reasoning behind a decision.

This isn’t impossible per se - after all, AI is largely matrix math, and that’s something that can be done forward as well as backward - but knowing all the numbers that went into producing an output and articulating them in a way that resembles human logic are two very different things.

Fancy Terms

Alright, I think this covers most of the basics. But to truly shine in your boardroom meeting, it may be good to sprinkle some of these terms over your presentation as well:

Phew, that was a lot to take in. I hope that, armed with this glossary and general understanding, you’ll make the best of the short period between us introducing AI and AI abolishing us. Personally, I hope I’ll be in a WALL-E-style hover deck chair with a milkshake - and I hope I’ll see you there!

If you’d like to learn more about how we’re using AI at Hivekit to coordinate people and machines in the physical world, have a look at our vision page.