About the significance of machine cognition


Artificial General Intelligence - A closer look

While AlphaGo's success at beating the world's best human Go player has recently been surpassed by a new implementation of the AI that beat the "old" one 100 games to 0, that machine is still only able to play Go. And while the resulting gameplay is an amazing achievement, even drawing comments from seasoned Go players about clever and original moves, we shouldn't forget that to reach its original capability (beating Lee Sedol) the AI played literally millions and millions of rounds of Go against itself to learn the game.

In comparison, a human can learn the basics of the game very quickly and sit down to play his or her first full games after just a few learning trials. Why? Because the human brain is "generally intelligent", meaning that it can learn new things quickly and apply them almost instantly. This is not to say that a human who has played 10 games of Go would be considered anything other than wet behind the ears.

On the other hand, as good as AlphaGo is at playing Go, it has absolutely zero knowledge of anything else, e.g. playing chess or driving a car.
A goal that many AI scientists are aiming for is an artificial general intelligence (AGI) - an AI that can be applied to many different tasks. Even creating an AI that is able to learn similar things, such as "playing a game", is currently pure science fiction.

As I've indicated before (link), the major advantage of AI is the way learning sets are currently set up: the learning set of any AI can be saved to a file and installed on another machine / computer / smartphone that runs the same AI framework, instantly making that device "smarter".
This is why the inclusion of AI hardware in the iPhone X is such a fascinating development: all of a sudden, it is possible to expand the App Store with learning sets. Want a camera that can identify cars by model? Just load up the appropriate learning set that someone (or some company) has trained for a compatible AI, and *voilà*: your iPhone can instantly start identifying car models.
That still doesn't bring us to an AGI, of course, but it does for portable (or industrial) AI what the App Store did for … well, apps. It gives you the ability to expand the toolset you carry around in your pocket as you need it.
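The "portable learning set" idea can be sketched in a few lines of code. This is a toy illustration, not any real framework's API: the `TinyClassifier` class and its file format are invented here, but the principle - all of a model's "knowledge" lives in its parameters, which can be serialized and restored in a compatible runtime elsewhere - is the same one that real neural-network weight files rely on.

```python
import json
import os
import tempfile

class TinyClassifier:
    """A toy linear classifier; its entire 'knowledge' is weights + bias."""

    def __init__(self, weights=None, bias=0.0):
        self.weights = weights or []
        self.bias = bias

    def predict(self, features):
        score = sum(w * x for w, x in zip(self.weights, features)) + self.bias
        return 1 if score > 0 else 0

    def save(self, path):
        # The whole "learning set" is just numbers in a file.
        with open(path, "w") as f:
            json.dump({"weights": self.weights, "bias": self.bias}, f)

    @classmethod
    def load(cls, path):
        # Any device running the same "framework" can restore the model.
        with open(path) as f:
            data = json.load(f)
        return cls(data["weights"], data["bias"])

# "Trained" model on device A (weights hand-picked for this sketch)...
trained = TinyClassifier(weights=[0.8, -0.5], bias=0.1)

# ...saved to a file, shipped, and restored on device B.
path = os.path.join(tempfile.gettempdir(), "tiny_classifier.json")
trained.save(path)
restored = TinyClassifier.load(path)
print(restored.predict([1.0, 0.2]))  # -> 1, same behavior as the original
```

The restored instance behaves identically to the original without ever "learning" anything itself - which is exactly what makes a store for learning sets plausible.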

And while the concept of an AGI is enticing, for most things it really isn't necessary. Identifying tumor cells in an MRI likely won't be any more accurate with an AGI than with a narrow AI.
Once you get to very complex activities, however, AI starts to fail. While you can use an AI to spider travel portals on the internet to find the shortest and least expensive flight to a vacation destination, it will fail miserably if you try to train it to plan out the whole vacation.

Why? Because the network (or population, if the AI is an evolutionary one) would explode out of proportion due to the complexity of the task.
Vacation planning needs to take a large number of variables into consideration, some of which change over time and many of which change in response to changes in other variables. The cheap flight your AI picked out might turn out to be a disaster if it didn't consider the fact that the airline it chose had ongoing Chapter 11 litigation.
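A quick back-of-the-envelope count shows how fast this blows up. All the numbers below are invented for illustration, and the count still ignores the interdependencies between variables (which rule out pruning each one in isolation):

```python
# Hypothetical option counts per vacation variable (made up for this sketch).
options = {
    "flight": 200,         # candidate itineraries
    "hotel": 150,          # candidate hotels
    "rental_car": 30,      # candidate rental offers
    "daily_activity": 20,  # choices available on each day
}
days = 7

# Independent choices multiply; per-day choices compound over the trip.
combinations = options["flight"] * options["hotel"] * options["rental_car"]
combinations *= options["daily_activity"] ** days

print(f"{combinations:.2e} raw combinations")  # -> 1.15e+15 raw combinations
```

Over a quadrillion raw combinations for one modest week-long trip - and that is before any variable is allowed to change mid-plan.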

So the best example of an AGI application might be an automated travel planner that you can take with you (probably not on your iPhone, but at least as an instance living in a cloud service). Such an AGI would plan the initial stages of your vacation (flights there and back as well as your hotel) but would be able to react automatically to unforeseen situations, such as flight cancellations, as they happen.
It might be possible to build such a "tool" using a number of different AIs, each trained to optimize a particular aspect of planning, with a simple workflow backbone coordinating the "if the flight is cancelled, then use the flight-finder AI to find a new flight" situations.
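Such a workflow backbone can be sketched as a plain rule table that routes events to specialist tools. Every name and rule here is invented for illustration, and the specialist "AIs" are stubbed out as functions:

```python
def flight_finder(event):
    # Stand-in for a trained flight-search AI.
    return f"rebooked flight to {event['destination']}"

def hotel_finder(event):
    # Stand-in for a trained hotel-search AI.
    return f"new hotel in {event['destination']}"

# The backbone: a fixed mapping of anticipated situations to specialist AIs.
RULES = {
    "flight_cancelled": flight_finder,
    "hotel_overbooked": hotel_finder,
}

def handle(event):
    handler = RULES.get(event["type"])
    if handler is None:
        # Anything the rule table didn't anticipate falls through.
        return "escalate to human planner"
    return handler(event)

print(handle({"type": "flight_cancelled", "destination": "Lisbon"}))
print(handle({"type": "volcano_eruption", "destination": "Lisbon"}))
```

The second call illustrates the brittleness discussed next: the backbone only handles situations someone thought to write into the table.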

In practice, this type of setup would likely still be too static to do the job properly, as every possible issue with your vacation would need to be anticipated (and put into the workflow backbone) - a nearly impossible feat. Consequently, it will be necessary to find the path towards a functioning AGI in order to solve these complex problems. But how? One method currently being tried by various teams is to model the human brain. Since the brain has roughly 86 billion neurons, however, that isn't an easy problem to solve.

Cognitive augmentation and AI Design

Early studies in augmented reality (AR) were driven by Google Glass and similar devices that people would wear on their heads. A good application of AR in logistics is the visual augmentation of a "pick list", where the person picking items from shelving is shown the next item, its location, and arrows pointing to it in the projection seen through the AR glasses.

Augmenting reality is not limited to our visual system, of course - just think of your smartwatch that vibrates when you've been sitting down too long. One step (or many?) up from augmenting our senses is to augment our cognition. This is a two-way street - permitting our cognitive processes to provide input to AI systems is an excellent way to further human-machine interaction.

In this excellent TED Talk, Maurice Conti sheds light on the current and future path to AI-augmented material design. Here, again, it becomes quite clear that humanity is about to be disrupted as a whole - in a positive way.

Keeping AI "safe"

Isaac Asimov, prolific writer of science fiction and non-fiction books (more than 500!) and father of the term "robotics", realized very early on that "intelligent" robots could cause as much harm as good if "programmed" the wrong way.

For this reason, he penned the famous "three laws" in 1942:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Later, a fourth law (the "zeroth" law) was added: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

The terms "Artificial Intelligence" and "Artificial Cognition" didn't exist in 1942 - Asimov had mechanical robots in mind, of the kind he described in "I, Robot", the book that made these laws famous. And while the "zeroth" law in particular is a pretty good fit for what we picture as evil AIs (The Matrix, Terminator), a new think-through is certainly necessary - and urgently required - to define the ethical use of AI.

Satya Nadella, CEO of Microsoft, has put together his own set of rules to keep future AI and - more importantly - Cognitive Systems - in check.

The big question is: will this stop a really evil, mad scientist from developing an AI that disobeys any of these rules and laws? Hardly. With AI available online in a pay-as-you-drink model, a future Armageddon along the lines of Terminator certainly doesn't seem so "science-fictiony" anymore…