Cognificance

About the significance of machine cognition

AI for Work

While many professionals still struggle to wrap their heads around what AI is and what it can (and can't) do, there are folks out there going through "the great awakening". One of them is Salesforce CEO Marc Benioff, who uses a Salesforce bot called Einstein to assist in making decisions on corporate strategy.

As detailed in this article from the Kurzweil AI site, Salesforce's Einstein is just one example of how anyone making business-level decisions, from executives to knowledge workers, can benefit from AI trained to interpret specific data sets. From analyzing consumer response to an advertising campaign to interpreting the social media reaction to a corporate event, humans benefit greatly from AI assistance.

Modern systems such as ERP and CRM gather so much data (or can, if used properly) that the sheer amount can be overwhelming even to trained statisticians. Using specialized "AI bots" to make sense of this data can add immense value to decision making.

All this added value that AI can bring to the process of running a business will deepen the "digital divide". Companies whose management understands neither AI nor the value of this type of analysis will continue to be run as companies have been since the beginning of time (or at least since the Italians invented double-entry accounting): by the "seat of the pants". While personal experience is a great asset for managers, not seeing the forest for the trees is something anyone with a lot of experience has to contend with.

A disproportionate number of private plane crashes are caused by very experienced pilots. The phenomenon is well studied and easily explained: pilots with thousands of hours racked up will often not bother taking the preflight checklist in hand; after all, they've done it so often they know what they need to go over. Similarly, the effectiveness and, more importantly, the future of companies managed by "gut feeling" or "seat of the pants" decision-making often rest on a small group of people or even a single person.

Companies embracing AI technology to build a sounder base for making far-reaching decisions (or even small, day-to-day ones) will benefit greatly, while companies whose executive management shies away from employing this technology will likely fail.

Google's TPU 2.0

Google made their AI framework TensorFlow open-source in late 2015. Most AI frameworks use relatively inexpensive and widely available GPUs (Graphics Processing Units) to accelerate AI-related number crunching, as doing this with regular CPUs isn't very cost-effective (watts per unit of output). But even GPUs, while much more efficient than CPUs for this type of work, can be beaten by specialized hardware.

Research into making AI-related computing more efficient has shown that "bittedness" isn't what drives performance. While modern operating systems require 64-bit processors for efficient work, deep learning machines are quite happy working with 8-bit arithmetic, as long as there are lots and lots of processing nodes and enough memory to avoid swapping.
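
To make the 8-bit point concrete, here is a minimal Python sketch (my own illustration, not Google's actual TPU logic) of linear quantization: 32-bit float weights are mapped to 8-bit integers plus a per-tensor scale factor, which is roughly the trick that lets inference hardware trade precision for speed and power:

    import numpy as np

    def quantize_int8(weights):
        """Symmetric linear quantization: float32 -> int8 plus a scale factor."""
        scale = np.abs(weights).max() / 127.0  # largest magnitude maps to +/-127
        q = np.round(weights / scale).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        """Recover approximate float32 values from the int8 representation."""
        return q.astype(np.float32) * scale

    w = np.random.randn(4, 4).astype(np.float32)
    q, s = quantize_int8(w)
    print("max reconstruction error:", np.abs(w - dequantize(q, s)).max())

The reconstruction error stays small relative to the weight range, which is why inference tolerates such low precision in the first place.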

Google Research developed a Tensor Processing Unit (TPU) to work alongside their TensorFlow framework. The device was designed to plug into a regular hard-drive slot, making roll-out of large clusters of TPUs quite simple: set up server racks with slot-in hard-drive capacity and plug them (mostly) full of TPUs.

The advantage of the TPU over a GPU-based solution is a much higher number of operations per second per watt; in other words, faster number crunching with lower power requirements. Google's TPU was never made available to the market, though I wouldn't be surprised to find something similar, albeit much scaled down, in a Google Android phone in a few years.

Things never stand still in IT, especially not at Google, so it comes as no surprise that the successor of the TPU has been announced. This "TPU 2.0" (also dubbed "Cloud TPU") doesn't share the same form factor as version 1. Just looking at the towering heat sink gives you a feeling that there is quite a bit of neural capability waiting to be unleashed.

And indeed: while the original TPU could only be used to run pre-trained neural networks, this new version is designed to facilitate an efficient learning cycle as well. According to Google, the TPU 2.0 can train a neural net several times faster than comparable GPU farms.

The TPU 2.0 was designed specifically for a cloud-based offering. In other words: anyone can put together an AI solution using Google's open-source TensorFlow framework and run it on the Google Cloud with access to TPU 2.0 farms. All at a price, of course. Will this be a success for Google? In my opinion, selling TPU time via cloud-based AIaaS (AI as a Service) isn't the prime objective of all the R&D that has gone into this new device. Google itself has transformed into an AI company, with most of the services it offers, from Maps to Search to Photos, using AI in some form. Not to forget Google Home, whose backing service requires intense AI processing for natural language processing (NLP) of voice input.

As the world moves to AI - and who wouldn't like to have an intelligence built into their "smart" phone - you can bet your booties that companies like AMD, Intel and Nvidia are hard at work designing industrial or even consumer-grade AI hardware. The next two years will likely show a plethora of TPU-like processing devices coming to a computer store near you!

Dreaming of "zero Inbox"?

"I have a dream"… about an AI-powered email filter that has a 100% hit rate on spam, prioritizes and classifies the remaining email and possibly answers simple queries automatically.

Current spam filters generally work heuristically or with statistics-driven Bayesian filters, which means they try to detect spam by looking at keywords and the semantics of the message. This is why spam messages still get through even high-priced professional filter appliances: the sender has found some creative way to spell a word that a human will immediately be able to interpret but a machine has no chance of understanding.
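
For illustration, here is a toy Python sketch of that Bayesian approach (my own example, not any vendor's implementation): word counts from labeled messages drive a naive Bayes score, and a creatively misspelled word simply falls outside the learned vocabulary, which is exactly the weakness described above.

    import math
    from collections import Counter

    # Toy training data: (message, is_spam) pairs; real filters see millions of messages.
    train = [
        ("cheap pills buy now", True),
        ("limited offer buy cheap", True),
        ("meeting notes attached", False),
        ("lunch tomorrow at noon", False),
    ]

    spam_words, ham_words = Counter(), Counter()
    for text, is_spam in train:
        (spam_words if is_spam else ham_words).update(text.split())

    vocab = set(spam_words) | set(ham_words)

    def spam_score(text):
        """Log-odds that a message is spam, with add-one (Laplace) smoothing."""
        score = 0.0
        for word in text.split():
            p_spam = (spam_words[word] + 1) / (sum(spam_words.values()) + len(vocab))
            p_ham = (ham_words[word] + 1) / (sum(ham_words.values()) + len(vocab))
            score += math.log(p_spam / p_ham)
        return score

    print(spam_score("buy cheap pills"))   # clearly positive: looks like spam
    print(spam_score("buy ch3ap p1lls"))   # misspelled words carry almost no signal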

Not unless AI comes into the mix. The problem here is, of course - as always - that AI requires large sample sets to make proper decisions. Even power managers with 400 emails per day won't fit this bill, and while the central email servers of large corporations would see enough daily samples to properly train a deep learning system, apparently no vendor has jumped on this bandwagon (please prove me wrong here!).

A good first step has just been announced by a Google team led by Ray Kurzweil. A service that has already been available in the browser client version of Gmail is now also available on mobile devices (Android and iOS): if activated, Gmail will suggest three likely responses to any email received. The responses are the output of Google's AI, which attaches to Gmail directly. Sample set size? Not an issue here!

I would think a sensible next step is for Google to use the learning set from this endeavor to add additional email services, such as spam filtering and automatic classification. Go, Google, Go!

Now, I'm waiting for YouTube videos of two Gmail accounts talking to one another…

Synthetic Sensors use AI - Smart Home

Just came across this fascinating Carnegie Mellon University project:

https://www.digitaltrends.com/home/synthetic-sensors/?utm_content=buffer88939

These sensor boards (CMU calls them "Supersensors") use a plethora of different environmental sensors, such as:

  • Radio interference
  • Electromagnetic Noise (probably the same sensor as above)
  • Magnetism
  • Motion X/Y/Z
  • Light color
  • Illumination
  • Air pressure
  • Humidity
  • Non-contact temperature
  • Ambient temperature
  • Acoustic
  • Vibration

The sensor data is fed into an AI that is trained to recognize events by their sensor signature, such as turning on a faucet, operating a microwave oven or even counting the number of paper towels used from a dispenser (Facility Managers, listen up!).
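
As a rough illustration of how such event recognition could work (my own Python sketch, not the CMU team's actual pipeline), a window of raw readings from the sensor channels listed above can be summarized into features and fed to an off-the-shelf classifier:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    N_CHANNELS = 12  # one per sensor channel in the list above
    EVENTS = ["faucet_on", "microwave_running", "towel_dispensed"]

    def featurize(window):
        """Collapse a (samples x channels) window into per-channel mean and variance."""
        return np.concatenate([window.mean(axis=0), window.var(axis=0)])

    # Fake labeled windows standing in for real recordings of each event type.
    X = np.array([featurize(rng.normal(loc=i, size=(50, N_CHANNELS)))
                  for i, _ in enumerate(EVENTS) for _ in range(30)])
    y = np.array([e for e in EVENTS for _ in range(30)])

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    sample = featurize(rng.normal(loc=1, size=(50, N_CHANNELS)))
    print(clf.predict([sample]))  # -> ['microwave_running']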

While the project claims that the AI runs locally, my prediction is that - with the exception of large FM companies - most of these supersensors will end up feeding event data into a cloud-based AI that is pre-trained on thousands of event types and continually learns from the new signatures it receives.

While smart home automation is a great field for sensors like these, I see big advantages for healthcare as well. Attach one of these above each intensive care bed and doctors as well as nurses - and most of all the patient - will benefit from the registration of key events such as shivering, shifting in bed, etc. Care-at-home patients will benefit just as much.

As with any data going into the cloud, I hope the Carnegie Mellon team is taking care to make sure no data is sent out that can be directly attributed to a household or an individual. Hack into one of these sensors and you'll figure out very quickly whether someone is at home or not!

Where can you get your hands on one?

Well, the concept was just presented as a paper at CHI 2017, so we're not talking ready-for-market devices. Have a look at the project homepage; there are more details on the work done there, and you can download the paper from that website.

The Status Quo of understanding AI

The one-day DocVille conference that takes place every year - usually in Brussels - is a meeting of ECM and Capture technology suppliers as well as (very few) end customers. The format is simple: a keynote at the start, 4-5 discussion table sessions and networking all through the day.
 
In a change from previous years, AI was a topic at not one but two of the table sessions (up from zero the year before). The headline of the session I attended read "Integrating AI as a service (AIaaS) into ECM – From IDR to NLP & Conversational Chatbots" - wow, now that's a pretty demanding topic.
 
Interestingly enough, a good portion of the discussion time was needed to define what AI is. Even the moderator seemed to diverge from the definition you'll find on Wikipedia by suggesting that his company has produced an AI that requires as little as 250 sample documents to do a classification… if you understand the technical details of what the scientific community calls AI, you'll realize right away that a neural net with 250 samples to work with isn't going to do a very good job of pattern recognition. More likely than not, we're talking about statistical analysis of either semantic or graphical content here, but it is an indicator of the status quo of AI awareness.
 
The term "AI" has a lot of marketing pull right now - I clearly remember being at the VOI community stand at CeBIT in the mid-90's where every second piece of signage screamed "XML" at you. If you then talked to the people in the booth what XML meant to them, for their products and - more importantly, for their customers, most couldn't formulate a convincing answer.
 
I'm quite afraid that a very similar thing is happening right now with AI. Any tech product that doesn't have AI in it may be seen as less worthy in the eyes of consumers or professional buyers… so you'd better put AI in the spec sheet somewhere, even if the AI piece is really nothing more than a statistical analysis engine!
 
It also means that organizations geared to educate the public on AI aren't doing enough to get the word out. The speculation that flew around the table in the first five minutes clearly showed that people are piecing together fragments of supposed knowledge into a picture of AI that is - for the most part - wildly at odds with the facts.

Drone reconnaissance - not without AI!

With French drone manufacturer Parrot offering consumer-grade drones with commercial markets in mind, it becomes quite obvious that the hurdle for this market is not building a smarter drone, but dealing with the flood of data it will generate.

The idea of Parrot (and other manufacturers) is to provide really inexpensive data collection devices - in the case of Parrot, a 4-prop and a glider drone - and position them towards activities that were previously the sole realm of experts. These drones can be equipped with heat-sensing cameras, for example. The 4-prop can be used to inspect building roofs for heat leakage; this previously required either the prohibitive expense of a pro camera attached to a (real) helicopter (at about $1,000 per hour) or clambering onto the roof with the kind of standard infrared camera system that has been available for checking walls and windows for many years. Glider-style drones, on the other hand, can - continuously, if needed - check crops on the infrared as well as the green side of the spectrum to measure crop health, growth rate or weed infestation.
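
To give a feel for the number crunching involved, here is a minimal Python sketch (my own illustration, assuming the drone delivers co-registered near-infrared and green image bands) of a green-based vegetation index, one standard way to turn those two spectral bands into a per-pixel crop-health map:

    import numpy as np

    def gndvi(nir, green):
        """Green Normalized Difference Vegetation Index: (NIR - G) / (NIR + G).

        Healthy vegetation reflects strongly in near-infrared, so values near +1
        suggest vigorous growth while values near 0 or below suggest stress or bare soil.
        """
        nir = nir.astype(np.float32)
        green = green.astype(np.float32)
        denom = np.maximum(nir + green, 1e-6)  # avoid division by zero on dark pixels
        return (nir - green) / denom

    # Tiny fake image bands standing in for one frame of drone footage.
    nir = np.array([[200, 180, 60, 50]] * 4, dtype=np.uint8)
    green = np.array([[40, 50, 70, 80]] * 4, dtype=np.uint8)
    print(gndvi(nir, green))  # high on the left (healthy), negative on the right

An AI-based interpretation layer would then work on maps like this, flagging anomalous patches instead of handing the grower raw gigabytes.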

In both cases, tools are handed to people who likely do not have a degree in physics. They need assistance to churn through the massive amounts of data and interpret the findings. It makes little sense, however, to pick up a glider drone to check your vineyards for frost damage for the - in comparison - paltry sum of around $5,000, only to have to hire an expert who can interpret the results at $120 per hour every time you let the drone fly.

And what technology is best suited to recognize patterns in the gigabytes of incoming video data if not AI?

With companies like Lockheed Martin and even Airbus entering the commercial drone market, the data and information flood from these devices is sure to reach astronomical levels very quickly - all the more reason for established AI firms and startups to jump on the bandwagon and provide the AI-based interpretation solutions the market will soon need.