Even though IoT brings a good deal of comfort into our houses, it harbors one invisible threat. Invisible, but still serious.
Anybody home?
Life was simpler when gadgets were “dumb”. Once they became “smart”, a whole kaleidoscope of attack scenarios became possible. This applies to the Internet of Things (IoT) too — a constellation of clever little gizmos that entangle your kitchen, parlor and garage.
These gizmos wish you no harm on their own. It’s the Presentation Attacks (PAs) that can fool them through a series of artful manipulations and ferret out tons of sensitive info related to you or your family. (Including your beloved pets.)
Let’s see how it can be done, what can protect you, and what the liveness vs. readiness debate is all about.
Evil spiders lurking around
There are roughly 40 billion IoT gadgets in the world, and in a few years their number is expected to double, the current chip shortage notwithstanding. On top of that, there are almost 50 million smart homes in the US alone, according to Statista.
This armada includes all types of thingies: from a smart assistant like Amazon Echo to light bulbs, doorbells, coffee makers, fridges… And even smart yoga mats — their concept is based on pressure sensing, which helps you correctly adjust your poses and also save yoga session stats on your phone.
In turn, this invisible IoT web is perfect for some ill-disposed spiders to lurk in. At first, you may wonder: who on Earth cares about my smart light bulbs or thermostats? Well, someone may care.
For example, knowing at what time you turn your lights on and off allows a stalker to figure out which hours of the day you are at home. The same info can be retrieved from your smart coffee machine or tea kettle: odds are you will command it to make a cup of refreshing brew by the time you return from work.
The problem is that IoT devices know everything about you. They assemble a huge corpus of representative data, which allows reconstructing your schedule, behavioral patterns, gastronomic and online shopping preferences, the layout of your house, and so forth.
Through a smart doorbell someone can peep at the guests you’ve invited to a backgammon evening. Or detect the exact leave and return hours. Or obtain still images of your entire family. Purposes may differ. But this is quite possible.
There’s something wrong with IoT
All this espionage is possible due to one factor: IoT gadgets have a major gap in their security. The problem is that they can’t afford enhanced anti-hacking protection: a robot vacuum or a smart speaker fitted with extra security components would simply be too expensive.
One of the typical attack scenarios is intercepting a default link key during network joining, which then allows controlling the device remotely. Since a caboodle of IoT gadgets have almost rudimentary authorization algorithms, hijacking one of them is a relatively simple task.
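To see why a factory-default key is so dangerous, here is a toy model in Python (not a real Zigbee or IoT stack; the key value, command names, and HMAC scheme are all illustrative). If authorization rests on a key that every unit ships with, anyone who reads the public documentation can forge perfectly valid commands:

```python
import hmac
import hashlib

# Hypothetical shared default key: in this toy model, every device of the
# same product line ships with it, and it is publicly documented.
DEFAULT_LINK_KEY = b"factory-default-key"

def sign_command(key: bytes, command: bytes) -> bytes:
    """Commands are tagged with an HMAC under the shared key."""
    return hmac.new(key, command, hashlib.sha256).digest()

def device_accepts(command: bytes, tag: bytes) -> bool:
    """The device only checks the tag; it cannot tell owner from attacker."""
    expected = sign_command(DEFAULT_LINK_KEY, command)
    return hmac.compare_digest(expected, tag)

# An attacker who knows the default key forges a valid command:
forged_tag = sign_command(DEFAULT_LINK_KEY, b"unlock_door")
print(device_accepts(b"unlock_door", forged_tag))  # True
```

The fix is equally simple in principle: provision each device with a unique key during commissioning, so that no single published value unlocks the whole fleet.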
A similar attack pattern could likely have been applied to the Owlet Wi-Fi baby heart monitors in 2016: these medical devices had a gaping security hole in their connectivity algorithms. Potentially, hackers could access the pulse data and tamper with it, so parents wouldn’t even know if a problem such as infant tachycardia occurred. In the worst case, that could cost a child’s life.
A similar problem was exposed in St. Jude Medical’s cardiac devices in 2017: it turned out they could be hacked as well. Now imagine the mayhem that would ensue if a sinister party actually hijacked such a pacemaker and shut it down, killing the patient along the way.
The Story of Spooky Spoofers
Hacking has a cousin named spoofing: an attack in which a culprit pretends to be a legitimate person who can be identified and authorized by a smart biometric system.
There are many ways to do this masquerade: masks, photo cutouts, fake fingerprints made from gummy bears, etc. In IoT’s case, Voice User Interface (VUI) is the most vulnerable component.
These days it’s pretty simple to basically steal your voice, much like the Navajo yee naaldlooshii could do. Only instead of witchery, fraudsters use a Feedforward Neural Network as a parametric text-to-speech synthesizer that learns from a selection of voice samples. Once training is over, it can produce a scarily realistic facsimile of the target’s voice.
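The core building block of such a synthesizer, the feedforward pass itself, fits in a few lines. The sketch below is a bare-bones illustration (layer sizes, weights, and the feature/parameter interpretation are all made up; real voice-cloning models are vastly larger and trained on hours of audio):

```python
import math
import random

# Minimal feedforward network: one tanh hidden layer, linear output.
# Think of the input as phonetic features and the output as acoustic
# parameters -- purely illustrative sizes and random, untrained weights.
random.seed(0)
IN, HIDDEN, OUT = 4, 8, 3

W1 = [[random.uniform(-1, 1) for _ in range(IN)] for _ in range(HIDDEN)]
W2 = [[random.uniform(-1, 1) for _ in range(HIDDEN)] for _ in range(OUT)]

def forward(x):
    """Forward pass: hidden activations, then linear output layer."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return [sum(w * hi for w, hi in zip(row, h)) for row in W2]

# A made-up phonetic feature vector in, "acoustic parameters" out.
print(forward([1.0, 0.0, 0.5, -0.2]))
```

Training then consists of nudging `W1` and `W2` so that the outputs match acoustic features extracted from the target’s recordings, which is exactly why a handful of leaked voice samples can be enough raw material.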
Antispoofing to the Rescue
At first, it seems hopeless: if the culprits can clone something so unique as your speech then who will stop them? Luckily, antispoofing can — a set of measures developed to prevent such tricky attacks.
And the best way to do so is to check a person’s liveness: a group of natural signals, like breathing, that testify that it’s a living person interacting with a sensor (camera, fingerprint scanner, etc.). So the readiness of a system to prevent an attack is determined by how well it detects liveness.
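The intuition behind liveness detection can be sketched with a toy heuristic (illustrative only; the threshold and sensor values are invented, and real systems use far richer features): a live subject produces small natural fluctuations, such as breathing and micro-movements, while a replayed or static spoof is suspiciously flat.

```python
from statistics import pstdev

# Assumed plausibility floor for this sketch -- not a real-world value.
LIVENESS_THRESHOLD = 0.05

def looks_live(signal: list[float]) -> bool:
    """Flag signals whose natural variation falls below a floor."""
    return pstdev(signal) > LIVENESS_THRESHOLD

breathing_sensor = [0.51, 0.62, 0.48, 0.71, 0.55]  # fluctuating: live
replayed_photo   = [0.50, 0.50, 0.50, 0.50, 0.50]  # flat: spoof

print(looks_live(breathing_sensor))  # True
print(looks_live(replayed_photo))    # False
```

Production liveness checks layer many such cues (texture, depth, pulse, motion) rather than thresholding a single statistic, but the principle of scoring "naturalness" is the same.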
For example, with a device-free sensing method, the potentially targeted IoT system can be shielded with Channel State Information (CSI) analysis. In simple terms, this technique allows a sophisticated algorithm to read your lip movements to make sure that it’s really you who’s giving voice commands.
But there are more mind-bendingly clever methods to confirm that it’s your voice: feature extraction, audio segmentation, analysis of plosive consonants, and others. Visit antispoofing.org to explore biometric security!
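To give a flavor of one of the methods listed above, here is a toy energy-based audio segmentation sketch (pure Python, illustrative only; frame size and threshold are invented, and real antispoofing pipelines work on proper spectral features):

```python
# Split a sampled signal into "speech" and "silence" frames by
# thresholding per-frame energy -- the simplest form of segmentation.
FRAME = 4            # samples per frame (tiny, for the sketch)
ENERGY_FLOOR = 0.1   # assumed silence threshold

def segment(samples: list[float]) -> list[str]:
    labels = []
    for i in range(0, len(samples) - FRAME + 1, FRAME):
        frame = samples[i:i + FRAME]
        energy = sum(s * s for s in frame) / FRAME
        labels.append("speech" if energy > ENERGY_FLOOR else "silence")
    return labels

signal = [0.0, 0.01, 0.0, -0.01,   # near-silence
          0.6, -0.5, 0.7, -0.6]    # voiced burst
print(segment(signal))  # ['silence', 'speech']
```

Once speech frames are isolated this way, downstream checks (such as inspecting plosive consonants) can focus on the segments that actually carry voice.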