Are Vulnerability, Security, and System Personalization Inextricably Linked?
For many years our household has possessed a mix of device types ranging from standard desktops to heavy laptops to ultralight netbooks. Over that same span of time, the mix of "non-computer" electronic devices has also evolved, as have the media we've used for recording and music playback.
What has been most significant over that time is the increasing power and sophistication of the smaller devices. Most important is the smartphone, with its constantly evolving mix of communication, display, and interactive features. While it's not yet totally liberated from the tyranny of the keyboard, in many ways the smartphone eclipses the personal computer in personal and social utility.
Also coming on strong is the voice operated home assistant such as Amazon Echo and its pairing with Alexa’s audio interface. Their reliance on the Amazon cloud bypasses (or at least attempts to bypass) local computers.
Google, Apple, and Amazon all realize the importance of building and operating cloud-reliant infrastructures across which multiple device types, tied to personal identities, can operate. We're seeing the benefits of such competition each day as new and different features and benefits are offered.
All these features and benefits, of course, are tied to wireless and cloud-based technologies and thus expose us to privacy and security vulnerabilities. So far these appear to have been risks that many are willing to tolerate. Whether such tolerance will continue as threats to life and limb proliferate, as is being demonstrated by the rise of ransomware, is the question. What happens, for example, when a fully loaded jumbo jet crashes into a crowded suburban shopping mall because of some hacker’s demonic act?
On a more personal level, I'll be pleased if someone comes up with a good approach to managing my interaction with a list of choices in a purely verbal way. Even long menu lists can be easily navigated via keyboard, mouse, and display. What happens when that list must be navigated by voice only?
That is one place where artificial intelligence and semantics can support verbally interactive agents. I want, for example, to say out loud something like, "Play some classical symphony slow movements but not anything I've listened to over the past few weeks. Also, exclude anything composed by Mozart or Haydn since I'm probably already familiar with them. If possible, only play performances using original instruments from when the works were first performed. And keep in mind I'm especially fond of Clementi."
While the technologies for parsing such a statement into an executable series of search queries already exist, I wonder how realistic it will be to embed such intelligence into an affordable commercial system that's keyed to my own personal behavior and tastes, regardless of which system I'm using. Right now, for example, I'm using an Amazon Echo as a Bluetooth speaker paired with my iPhone 6 to play an iTunes playlist. If Alexa wants to keep track of what I'm listening to so she can keep from playing something from Amazon's library that I've already listened to from my iTunes library, shouldn't she be listening to and keeping track of what I'm playing through the Echo when it's used as a Bluetooth speaker with my iPhone?
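To make the parsing idea concrete, here is a minimal sketch, in Python, of what a verbally interactive agent might produce after interpreting the spoken request above. The schema, field names, and the `parse_request` and `matches` functions are all hypothetical illustrations, not any vendor's actual API; a real agent would fill these slots from speech recognition and intent parsing rather than hand-coding them.

```python
from dataclasses import dataclass, field
from datetime import timedelta
from typing import Optional

@dataclass
class PlaybackQuery:
    """Hypothetical structured form of a spoken music request."""
    genre: str
    movement_tempo: Optional[str] = None
    exclude_composers: list = field(default_factory=list)
    exclude_recently_played: Optional[timedelta] = None
    prefer_original_instruments: bool = False
    boost_composers: list = field(default_factory=list)

def parse_request() -> PlaybackQuery:
    # Hand-built parse of the example request from the text.
    return PlaybackQuery(
        genre="classical symphony",
        movement_tempo="slow",
        exclude_composers=["Mozart", "Haydn"],
        exclude_recently_played=timedelta(weeks=3),
        prefer_original_instruments=True,
        boost_composers=["Clementi"],
    )

def matches(query: PlaybackQuery, track: dict, recent_ids: set) -> bool:
    """Return True if a candidate track satisfies the parsed constraints."""
    if track["composer"] in query.exclude_composers:
        return False
    if track["id"] in recent_ids:
        return False
    return True
```

The hard part, as the paragraph above suggests, is not this filtering step but keeping `recent_ids` accurate across services: the set would have to merge listening history from every device and library the user actually plays through.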
It may sound like a silly idea right now, given the limits of Apple's and Amazon's data sharing, but could such features be made commercially and securely available by someone else in the future?
Things were a lot simpler when we could store all our media on a desktop computer hard drive, weren't they?
Copyright © 2017 by Dennis D McDonald