AI 101

Well, it was about time I wrote this post. I was originally planning to open with it, but it has just been sitting in my backlog ever since. I keep telling people that this is a blog about AI, but I have yet to talk about a single AI-related topic. Today I'm going to rectify that by giving a bird's-eye view of AI. My name is Prof. Vente, and welcome to AI 101.

Groundwork

For this story to have any kind of meaning I’m going to have to start by laying down a bit of groundwork. That means starting with a couple of definitions.

Definition (agent) 

In AI, the agent is the entity (usually a computer, a network, or a piece of software) whose behaviour we are concerned with. Usually, this is the entity that is going to perform the hopefully intelligent behaviour.

Definition (sensor/actuator)

Sensors are what an agent uses to gather information from its environment. These can be conventional, like cameras, but can also include memory or UI elements. Actuators are what an agent uses to act upon its environment. These can again be conventional, like an engine, or something like a monitor. Traditionally, the agent itself is also considered part of the environment; that way, a decision like moving still counts as acting upon the environment. One should also note that sensors and actuators are not mutually exclusive.

These two definitions are really more about creating a language to talk about things than they are hard, concrete concepts. For example, if we talk about an agent and you ask what its actuators are, the answer is usually "it depends". It's like the word "colourful": a useful concept in conversation, but exactly what it means is usually left open to interpretation.
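To make this vocabulary a little more concrete, here's a minimal sketch of an agent with one sensor and one actuator. This is purely my own illustration (the names Thermostat, sense and act aren't from any library):

```python
# A trivial agent: its sensor reads a temperature from the environment,
# its actuator switches a heater on or off.

class Thermostat:
    def __init__(self, target):
        self.target = target

    def sense(self, environment):
        # Sensor: gather information from the environment.
        return environment["temperature"]

    def act(self, reading):
        # Actuator: act upon the environment based on what was sensed.
        return "heater_on" if reading < self.target else "heater_off"

environment = {"temperature": 17.5}
agent = Thermostat(target=20.0)
print(agent.act(agent.sense(environment)))  # -> heater_on
```

Notice that the same vocabulary fits whether the agent is a robot or a piece of software: "sensor" and "actuator" just name the two directions of traffic between the agent and its environment.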

Models for intelligence

Before I get into the meat of the matter, I want to preface it with a disclaimer. Talking about intelligence, and by extension about AI, is slippery territory in and of itself. Despite what standardised testing might have tried to sell you, intelligence isn't something we can objectively measure or define; we can only measure aspects of it. Things like IQ are just a tiny fraction of the whole spectrum that you could call intelligence, so some of the concepts and terminology in this story are going to have to be a bit vague. That's just inherent to this field.

The definition of AI is something along the lines of "the practice of getting systems to exhibit intelligent behaviour". A statement like this is almost meaningless by itself. For example, what do I mean by "system"? What kinds of intelligence am I talking about? Why do I say "exhibit intelligent behaviour" instead of "becoming intelligent"? (That last question opens a whole other can of philosophical worms that is beyond the scope of this post.) So, to make all this a bit less vague, I'll try to describe AI through a few examples instead of trying to define it directly.

What intelligence?

One aspect worth mentioning is what kind of intelligence AI is concerned with. Some AI researchers try to model human intelligence, flaws, biases and everything else included. This is usually psychological research: people in this category try to more or less reverse engineer our brain in order to learn more about it.

Others just try to get the best sort of intelligence they can. One thing worth noting is that just because we don't look for the biases in these applications doesn't mean they aren't there; we just haven't actively constructed or countered them. Human bias in AIs is a really interesting topic that is beyond the scope of this post, but I'll try to come back to it at some point (you're going to read me say that a lot in this post).

Different kinds of AI

As I said, I put this post off for a very long time; every time, I just wasn't happy with how I explained things. After a nearly infinite amount of um-ing and ah-ing, I think I've come up with a taxonomy that works well enough. Below I've listed the three categories that I think AI fits into best, with a few examples of each. Obviously, this list is far from exhaustive.

Didactical

[Image: a game of chess]

This is perhaps the most conventional form of AI. In didactical AI, it's the humans who make deductions ahead of time about the nature of the desired behaviour and then (hard-)code that behaviour into the agent. This type of AI is almost always a form of weak AI (which I'll talk about in a minute) and is thus best suited to very specific circumstances.

  • Game AI Every time you play a game against a computer, someone programmed it to do that. Usually, they'll have done it by making deductions themselves about what smart behaviour looks like (see the sketch after this list). The result is usually not incredibly smart, just good enough. If you want really good play, you'll usually have to move over to educational AI (which I'll also talk about in a minute) because of the increasing complexity involved.
  • Natural language processing Ever had a chat with customer service over the internet? Chances are you were talking to a bot. With modern processing power, it is possible to build a programme that behaves well enough within certain constraints. The interactivity and accuracy (and thus believability) of these programmes varies greatly, but progress is still being made in this field.
  • Logistics Traffic control, job scheduling, and warehouse stock. With the scale of all of these and related matters, the complexity of managing them has skyrocketed. There is a lot, and I mean a lot, of mathematics that goes into this. It is pretty effective, but as far as I know, progress isn't that stellar because, like the rest of this category, it relies on human deduction.
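As a taste of what "deducing the behaviour in advance" looks like, here's a toy noughts-and-crosses player. Every rule below is something a human worked out and hard-coded; the agent deduces nothing itself. This is just an illustrative sketch of mine, not code from any actual game:

```python
# Didactical game AI: the "intelligence" is a list of rules a human
# deduced ahead of time. The board is a list of 9 cells: "X", "O" or " ".

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def choose_move(board, me, opponent):
    empties = [i for i, cell in enumerate(board) if cell == " "]
    # Rule 1: complete our own winning line if we can.
    # Rule 2: otherwise, block the opponent's winning line.
    for player in (me, opponent):
        for line in WIN_LINES:
            cells = [board[i] for i in line]
            if cells.count(player) == 2 and cells.count(" ") == 1:
                return line[cells.index(" ")]
    # Rule 3: otherwise prefer the centre, then any free cell.
    return 4 if 4 in empties else empties[0]

board = ["X", "X", " ",
         " ", "O", " ",
         " ", " ", "O"]
print(choose_move(board, me="O", opponent="X"))  # -> 2 (blocks X's top row)
```

The agent looks smart, but every bit of that smartness was put there by a human in advance; confront it with a game whose rules changed slightly and it falls apart.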

Educational

[Image: basic diagram of a neural network]

This form of AI has recently become incredibly popular. It simulates the (human) ability to learn and adapt. It is the most robust and most effective form when it works, but it is also the most complex. These forms of AI are best suited to adapting to changing environments, and you might not realise it, but adaptability is a big deal. The applications are mostly academic and very specific at this point, but huge strides are being made in this field every day. An enormous advantage of this form of AI is that you can teach agents things that you don't know yourself. All you need to do is give them clear information about what is desirable and what isn't, and then kind of let them figure it out.

  • Machine learning (ML) It's hard to describe machine learning with more rigour than "you teach agents how to learn" without getting into the details, which I'll surely do at some point. Suffice it to say that in ML we aim to teach a computer to do a thing without explicitly telling it how (see the sketch after this list). A popular example of this is neural networks.
  • Data analysis This usually involves more statistics than it does computer science, but it can still be very effective. The kinds of analysis are numerous and vary depending on your situation. A few examples of different kinds of analyses are quantitative vs. qualitative or discovery vs. diagnostic, all of which will undoubtedly get their own post at some point.
  • Pattern recognition This one is extremely closely linked with data analysis; of course, there must be data for there to be patterns we can recognise. The applications are nigh on infinite: think medical diagnoses, financial predictions or criminal identification.
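To show what "teaching a computer without telling it how" can look like, here's a minimal perceptron, the simplest relative of a neural network. It is never told the rule for logical OR; it is only shown labelled examples of what is desirable and nudged towards fewer mistakes. The function names are mine, and a real ML setup would be far more involved:

```python
# Educational AI in miniature: learn logical OR from examples alone.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with labels 0 or 1."""
    w1 = w2 = bias = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            prediction = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
            error = label - prediction   # desirable vs. actual behaviour
            w1 += lr * error * x1        # nudge the weights towards
            w2 += lr * error * x2        # making fewer mistakes
            bias += lr * error
    return w1, w2, bias

# We never encode the OR rule anywhere; we only show examples of it.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)
for (x1, x2), label in data:
    print((x1, x2), "->", 1 if w1 * x1 + w2 * x2 + b > 0 else 0)
```

The contrast with the noughts-and-crosses player above is the whole point: there, a human wrote the rules; here, the agent extracts the rule from the data itself.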

Sensory

[Image: Mars rover]

You could classify this kind of AI as the most difficult, for reasons I'll go into in a moment. Sensory AI is mostly about gathering information and then making decisions based on it. That can mean finding new ways to gather more data, but it can also mean finding new ways to interpret it.

  • Computer vision This form of AI is exactly what it sounds like: it attempts to give computers vision. "Why don't they just stick a camera on it?" I hear you think. Sure, that kind of works for getting input, but that doesn't mean the AI can interpret it. Things like facial recognition and object detection also fall into this category. This part of AI usually focuses as much on finding the right hardware for the job as on the right software.
  • The Internet of Things (IoT) You might have heard this term before, as it has gained stellar popularity over the past few years. IoT refers to the practice of hooking everyday stuff up to the internet so that it can interact (hopefully intelligently) with other things. An example might be hooking my coffee machine up to my Google calendar so it can detect when I should get up and make some coffee right before that, so I'll actually be lured out of bed by the smell of fresh coffee. My Raspberry Pi or your favourite fitness tracker are also great examples of IoT devices. When done incorrectly, these can form huge security risks, which is a topic I'm sure I'll talk about at some point.
  • Simultaneous localisation and mapping (SLAM) Imagine you go into a big building for an interview with someone. When you're done, chances are you can find your way out of the building on your own, because while you were walking to the interview you kept track of the turns you took, where the doors were and roughly how many steps you took. This is incredibly hard for AIs (see the sketch after this list).
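Here's the interview example as a toy "dead reckoning" sketch: keeping track of where you are purely from the turns and steps you took. Note the big assumption, which is my own simplification: the measurements are perfect. Real SLAM has to build the map and locate itself within it from noisy sensors at the same time, and that is the hard part:

```python
# Toy dead reckoning: track position from a list of turns and steps,
# assuming (unrealistically) that every measurement is exact.

import math

def dead_reckon(moves):
    """moves: list of ('turn', degrees) or ('walk', steps) commands.
    Returns the (x, y) position relative to the entrance."""
    x = y = 0.0
    heading = 0.0  # radians; 0 means facing along the x-axis
    for kind, amount in moves:
        if kind == "turn":
            heading += math.radians(amount)
        else:  # 'walk'
            x += amount * math.cos(heading)
            y += amount * math.sin(heading)
    return x, y

# Down a corridor, left at the junction, right again into the office.
route = [("walk", 20), ("turn", 90), ("walk", 10), ("turn", -90), ("walk", 5)]
print(dead_reckon(route))  # -> roughly (25.0, 10.0)
```

With real sensors, every turn and step carries a small error, and those errors compound until the estimated position is useless, which is exactly why a robot needs to keep correcting its estimate against a map it is building at the same time.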

Obviously, I did little justice to most, if any, of these subjects, as they are subjects I'm literally going to study for the next few years. This is just a short overview to give you the lay of the land. I'll undoubtedly come back to these and more topics to dive deeper into them, rectify mistakes I made and share new insights. These are also just general categories and by no means mutually exclusive; you could even argue that every kind of AI falls a little bit within each of these categories.

Moravec’s paradox

I hinted at this subject when I talked about sensory AI (where the phenomenon is most prevalent), but I want to dive a little deeper into it. To illustrate, I want you to do a little experiment. Without looking up, raise your hands above your head. Yeah? Now, without lowering your hands, stick out your tongue, and then touch the tips of your fingers together. Chances are you either touched your fingers on the first try, or you came pretty close (in case you were wondering, the tongue was just for comedic effect).

This is called proprioception: the sense of where your body parts are in relation to each other. It's about monitoring your internal state and being able to act on and correct it. Humans do this pretty well, even instinctively. Robots are notoriously bad at this stuff, which brings me to the actual paradox:

It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.

Hans Moravec

Things like SLAM, which humans do instinctively, are huge problems for AIs. That is why a lot of manual labourers have less to fear from AIs than certain highly educated people. We already have robots that can hoover your room, but we're a long way from robots that can straighten your magazines, put your dishes in the cupboards, and fluff your pillows, so you can tell your maid she's fine for now.

Strong vs. Weak AI

If you compare the AIs that we have with the AIs in sci-fi, we are sorely lagging behind. Our AIs are nowhere near as charming as the likes of C-3PO, but luckily also not as dangerous as the likes of Skynet or HAL 9000. This is the heart of the distinction between weak and strong AI, but let me put that in concrete terms.

Definition (weak AI) 

Weak AI, sometimes also called narrow AI, is a type of AI that does one very specific task, hopefully well. Up until now, everything we have made falls into this category. Weak AIs can be extremely effective at what they do, but they start failing miserably as soon as their environment changes beyond the parameters they learned about.

Definition (strong AI) 

Strong AI, sometimes also called sentient AI, is the counterpart to weak AI. This is an AI that can adapt to new situations as well as any human can. Some people also demand that, to be called strong AI, it would have to have some form of sentience.

Whether it is even possible for an AI to attain sentience is still very much being debated, and this is as much a philosophical discussion as a scientific one. If it turns out to be possible, that opens up a whole other can of philosophical worms in areas like ethics, but that is far beyond the scope of this post.

Our own event horizon

When talking about the difference between weak and strong AI, you'll sometimes hear someone mention "the singularity". This is a sort of event horizon in AI, and it refers to the following scenario. If we keep improving AIs, some people predict there will come a time when AIs become smart enough to start modifying and improving themselves. At that point, all bets are off. If this happens, they will become smart beyond any human recognition. They will also be able to circumvent any restrictions we put on them, as they can just modify them out, no matter how deeply we embed them. At that point, there really will be no telling what would happen, partly because the scenario is too vague, but also because it is beyond our comprehension by definition.

Now, before everybody starts panicking, let me tell you: this is a long time away. People don't even agree on whether it is possible at all. Such a self-improving AI need not be a strong AI initially, but I think it's quite unlikely that it would be. I think it is logically possible, but whether we'll ever get there is an entirely different matter. If we do, I don't think it will be within our lifetimes. It never hurts to think about the things to come, though. Ethics in AI is an interesting subject to which I'll undoubtedly return some day.

Why generality matters

In a lot of things, but especially in AI, generality and adaptability matter, and I'd like to talk a bit more about why. I think they matter mainly for two reasons: because they are hard and because they are useful. They are hard because of the nature of computers, upon which AIs are built. Computers can only carry out incredibly detailed and specific instructions, so generality is incredibly labour-intensive. With computers, there is no figuring-it-out-on-the-fly; everything has to be thought of in advance, and that is incredibly hard.

The usefulness of generality is two-fold. Obviously, it is very useful to have an AI that can adapt every time the environment changes. Think about production robots: every time a screw hole is moved a couple of centimetres, the whole production line has to be stopped and the robots have to be reprogrammed. This costs a lot of time and money, so adaptability is very important from an economic perspective.

On the other hand, there is also a meta-usefulness to the generality of AI. What I mean is the following. Consider the PC and how it became the integral part of our lives it is today. PCs started out as humongous, sluggish, incredibly expensive and hyper-specific machines that were only used by academics, governments and mega-corporations. The revolution came when they became general-purpose: once computers were good enough for general-purpose applications, they were brought to the public, and that caused an explosion in improvement. Since the computer became a consumer staple, the amount of research and development that has gone into it has skyrocketed.

I think the same thing will happen with AIs. If we can make them general-purpose enough to be interesting to the public, interest will explode. That means more resources for research, and the realm of the possible will grow with each passing day. Of course, this is already happening, but I think even more is possible. With the kind of technology going around today, your imagination is the limit.
