Nest's newest home surveillance camera, the Nest Cam IQ, not only comes with a whopping 4K sensor, but also with new "brains" to analyze the video and audio stream. These "brains" are not built into the camera, of course. We are still light-years away from having that sort of capability in a package this small. The AI doing the video analysis runs on the Nest Aware service, available as a subscription. You can bet your booty that Nest benefits from Google's new TPU 2.0 hardware, as analyzing video and audio content simultaneously and live takes a lot of processing power.
Anyone (like me) who runs a home surveillance system based on simple pixel-change algorithms knows the benefit that intelligent analysis of video streams would bring. To give you an idea of the difference, allow me to explain how current systems (with some exceptions) work. The goal of surveillance recording is not to net the most hours of video material. The goal is to single out security issues, either so that it is easy to find the relevant footage after the fact or - ideally - to get a system alert while an issue occurs. Traditional surveillance systems use a method where video pixel changes are analyzed, and situations are flagged where a certain number of pixels (corresponding to an object of a certain size) changes within a certain timeframe. Think of a dog jumping over a fence or a door being opened.
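The pixel-change approach described above can be sketched in a few lines. This is a minimal illustration of frame differencing, not the algorithm any particular product uses; the threshold values are made up for the example:

```python
import numpy as np

def motion_detected(prev_frame, curr_frame, pixel_threshold=30, min_changed=500):
    """Flag motion when enough pixels change between consecutive grayscale frames.

    pixel_threshold: per-pixel intensity change (0-255) that counts as "changed".
    min_changed: how many changed pixels correspond to an object worth flagging.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = np.count_nonzero(diff > pixel_threshold)
    return changed >= min_changed

# A static scene: nothing changes, so nothing is flagged.
frame_a = np.zeros((240, 320), dtype=np.uint8)
frame_b = frame_a.copy()
print(motion_detected(frame_a, frame_b))   # False

# A door-sized region brightens: enough pixels change to trigger an event.
frame_c = frame_a.copy()
frame_c[100:140, 100:140] = 200
print(motion_detected(frame_a, frame_c))   # True
```

Note that this logic has no idea *what* changed - a swaying branch that moves enough pixels trips it just as reliably as an intruder, which is exactly the weakness described next.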
While it is pretty simple to trigger an event if a door is opened that pretty much fills the camera frame, the situation changes drastically if the camera is recording a yard. Wind will move bushes and trees, and this will trigger the recording mechanism, resulting in a lot of recorded material. If you want to find out if someone is trespassing in your yard during the day, you'll have to look at hours and hours of video of moving trees.
There have been systems available that use algorithms to try to discern, for example, the shape of a person. One that I have used personally is from Sighthound, Inc., which continued development of a product I used several years ago ("Vitamin D"). Sighthound claims to have an SDK available that permits the training of a neural network used for facial recognition that runs locally on an iPad, using only that iPad's hardware.
While the training and recognition of individual faces may even work in a simple AI on an iPad, the video analysis that Nest offers goes several levels higher on the complexity scale. Think of it this way: instead of recognizing someone's face when they stand in front of a camera, a complete AI solution should be able to detect a lot more information, such as
"John Doe, wearing blue shorts and a Metallica T-Shirt. John just entered the dining room carrying a tablet computer and put that tablet computer on the table, leaving the room in the direction of the kitchen. John looks tired".
If that doesn't sound like big brother watching, I don't know what does!
Since the new Nest camera not only records video in ultra-high resolution but also sound in excellent quality, the data is sufficient for an AI to discern a lot of activities. Want to know how your aging mother is doing in her apartment three blocks down? In a scenario where the Nest service is connected to your Alexa account (which is planned), you could just say: "Alexa, what is mom doing right now?" and Alexa would answer "Your mother is sitting at the table doing a crossword puzzle." And while your mom might not appreciate being watched 24/7 by an online camera system, both of you will likely appreciate an urgent message being automatically sent out when your mom has been lying on the floor without motion for two minutes.
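The "no motion for two minutes" alert boils down to simple timer logic layered on top of whatever the camera's analysis reports. Here is a minimal sketch of that idea; the `InactivityMonitor` class and its parameters are my own illustration, not anything Nest actually exposes:

```python
import time

NO_MOTION_ALERT_SECONDS = 120  # "two minutes without motion"

class InactivityMonitor:
    """Raise an alert when no motion has been reported for a fixed interval.

    `motion` would come from the camera's scene analysis; the timestamp is
    injectable so the logic can be exercised without waiting two real minutes.
    """
    def __init__(self, alert_after=NO_MOTION_ALERT_SECONDS):
        self.alert_after = alert_after
        self.last_motion = None

    def update(self, motion, now=None):
        now = time.monotonic() if now is None else now
        if motion or self.last_motion is None:
            self.last_motion = now
            return False
        return (now - self.last_motion) >= self.alert_after

monitor = InactivityMonitor()
print(monitor.update(motion=True, now=0.0))     # False: motion just seen
print(monitor.update(motion=False, now=60.0))   # False: only one minute quiet
print(monitor.update(motion=False, now=120.0))  # True: two minutes, send alert
```

The hard part, of course, is not the timer - it is the AI's judgment that the motionless shape on the floor is a person rather than a dropped coat.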
I would have an immediate use for a system like that: one that tells me when our cat is sitting in front of the terrace sliding door, waiting to be let in. I don't think I'll be putting these devices into my kids' bedrooms anytime soon.