Today I was shopping for trash bags in an Italian supermarket. The Italians seem to make a large number of different-sized trash bags, all measured in centimeters, and for some reason I can never remember the exact sizes we use. So a few months ago I photographed the labels of the correct sizes and uploaded them to the extremely useful Evernote app; now I can just take out my phone, search “trashbags,” and there’s the picture.
It made me think about how wearable computers will change that simple action. In another few years I’ll have a wrist computer (not a watch–see my thoughts on that here). It will have voice recognition, so I’ll just murmur, “Hey wrist, trashbags?”, glance down and the correct labels will be displayed.
And then a few years after that I’ll have smart glasses. I mean really smart glasses. They will know, through video, that I’m in the trashbag aisle. When I hesitate more than a few seconds in front of the trashbag choices, the image of the correct labels will float up in my vision. It will be, in fact, not much different than my own process of remembering–except the brain that’s doing the remembering will be somewhere in the cloud.
Although, come to think of it, some people say that's where my brain is most of the time anyway....
There’s a major public issue brewing that sooner or later will explode into common debate. You could probably trace its beginning back a few years, to when entrepreneur and technologist Martin Ford wrote a book called “The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future”. His new book, “The Rise of the Robots”, extends that thinking; it was reviewed here in the New York Times by Barbara Ehrenreich, another sharp thinker about the nature of work.
With his first book, Ford raised the issue that we may be facing a new kind of automation. Previous bouts of automation have eliminated jobs, but always created new jobs, and for years most economists assumed that would be true with computers and robots. But the conversation has started to shift toward the notion that “this time is different.” Different in two ways: 1) this will impact white collar workers as well as those who work with their hands and 2) it is not at all clear where the new middle-class jobs will come from.
I see the trend everywhere among my clients–from so-called e-discovery software that is eliminating that common task for young lawyers, to programmatic buying tools in advertising agencies that replace traditional media buyers. And in her review, Ehrenreich points out that an increasing number of financial and sports stories are written by robots–and then does a pretty good job of suggesting how someday smart software could be used to replace book reviewers.
And recently I was introduced to the concept of Robotic Process Automation (RPA), which effectively automates many routine clerical tasks without requiring fundamental changes to a company’s underlying software. A lot of clerical tasks involve, say, checking one number against another to make sure it was properly recorded, or moving data from one program to another program that isn’t fully compatible.
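RPA vendors each have their own tooling, but the underlying idea can be sketched in a few lines of ordinary Python. This is a toy illustration with made-up invoice data, not any vendor’s actual product: the script does the cross-checking that a clerk would otherwise do by hand, and flags only the exceptions for human review.

```python
# Toy reconciliation: the same invoice totals as recorded in two systems
# that don't talk to each other (hypothetical data for illustration).
ledger = {"INV-001": 1250.00, "INV-002": 89.99, "INV-003": 410.50}
billing = {"INV-001": 1250.00, "INV-002": 98.99, "INV-003": 410.50}

def find_mismatches(a, b):
    """Return the invoice IDs whose recorded amounts differ between systems."""
    return sorted(k for k in a if k in b and a[k] != b[k])

# Only the discrepancies go to a human; everything that matches is ignored.
print(find_mismatches(ledger, billing))  # → ['INV-002']
```

The work that used to be a full-time job (compare every row, copy every field) collapses into reviewing the handful of items the script flags.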
That’s the kind of work that is often outsourced to India. Now, RPA advocates suggest that with these new efficiencies it may be possible to bring those jobs home to the United States. But that would still mean that, say, a thousand jobs lost in the US a decade ago might return to our shores–this time employing fewer than one hundred people.
Ford points out that a country that consists of a wealthy elite and everyone else performing minimum wage jobs is not a healthy economy. Indeed, that’s generally agreed upon by both liberal and conservative economists. We need the kind of strong middle class that existed in the United States post-WWII, created in part by the unionization of factory workers. That’s the rationale behind the movement across the country right now to raise the wages of service workers to create a new middle class. One wage goal that is often suggested is $15 an hour.
Ironically, I recently saw a business plan for a sophisticated fast food robot that would easily replace several workers. What struck me most was a graph in the plan that showed how the machine became a profitable investment when the wages of workers approached...$15 an hour.
In “The Rise of the Robots” Ford makes an obvious but controversial suggestion: a guaranteed annual income. If robots and smart software are creating additional wealth, but that wealth is not being distributed beyond the owners of the machines, then the notion of income redistribution raises its head. And in a world in which jobs are created and destroyed quickly, and workforce flexibility is important, a guaranteed annual income would give people the freedom to take some risks, as well as participate in growing but insecure opportunities like Uber or TaskRabbit.
Like climate change, job loss through automation is one of those issues that creeps up very slowly, and is also highly susceptible to political manipulation. We will hear much, much more on this topic long before any solutions come into sight.
“Home Sweet Home” is going to need some new adjectives.
As I mentioned earlier, last month I spoke at CEDIA Future Home Experience--a conference for companies that design and install whole-house audio-video systems, as well as home security and home automation.
I made some predictions, and I also did a little experiment with my audience.
First, a few predictions:
--The home of the future will have facial recognition--it will know who is in the house, and recognize people as they approach the front door. When you walk into your living room, the lights, climate control and music will adjust to your preferences.
--Video screens will be so inexpensive they can be built into any object or appliance. The refrigerator door, for example, may become a true “home page”--a big video screen that shows everything from the household calendar and messages--Don’t eat the cake, it’s for company!-- to the kids’ artwork and even real-time fitness updates for dieters.
--All of this will be managed with voice commands--”House, turn off the outdoor lights at 11 tonight.” “House, start the air conditioner tonight when I'm five miles from home.” “House, activate the security system.” “House, have the children come home yet?”
Thus the house of the future, controlled through voice commands, is inevitably going to have a personality. Look at something as simple as the voice of Siri on today's iPhone; with the right questions, she’ll tell sly jokes or kid around a bit.
Hence my experiment. I asked the audience--over a thousand of the people who will create these houses--what kind of emotional experience the house of the future will create. They texted in their ideas throughout the speech, building the “word cloud” shown above.
“Sweet” doesn't appear once. But warmth, calm, relaxation, and delight all figure prominently. My favorite contribution, however, was that the home of the future will need a heart.
So the Apple Watch has received its first reviews, and they are tentatively positive. Tentative, because most of the reviews caution that unlike most Apple products, it’s a device with a steep learning curve. And it has fairly limited capabilities at present. However, like technology reviewers throughout history, they can’t help but factor in just how cool the Watch will be when it, uh, works better.
But that optimism is simply because technology reviewers also know that if the first generation of a product is promising, relentless engineering plus the acceleration of technology means that the subsequent generations will inevitably be much better.
In my brief experience with the Apple Watch, however, I had a different response. I found myself staring at the white fluoroelastomer “Sport Band” that held the Watch on my wrist, and wondered, why aren’t we using that space as well?
I suspect that the “wrist watch” form itself is a problem. When you look at a smartphone or tablet, the entire device is the screen. Smart watches inevitably give up more than 50% of their real estate to the strap, which just sits there.
Bendable LCDs, batteries and circuits are already well along in the laboratories and showing up in prototypes. By the end of this decade, the “smart bracelet” may become the preferred wrist display, in which the entire object is a curved touch screen that can display anything from a video screen to a numeric keypad to a list of emails. The bracelet form would also allow a larger battery, a key problem in today’s smart watches.
And, when you weren’t using it as a display, your wrist bracelet would be a new fashion opportunity. The entire band could display any kind of color, shape or pattern: designer screen savers for smart bracelets.
After all, even the shape of the traditional watch evolved as the technology improved. The first personal timepiece was the size of a large egg, worn around the neck. Then came the pocket watch. Finally, watches became even smaller, and the rest was, well, wristory.
We may see the same transition for the smart watch in the years to come. And someday the Apple Watch will seem as quaint as those timekeeping eggs once worn around the neck.
I’m spending a few weeks in Sicily to do some writing...and enjoy springtime in the Mediterranean. It’s a welcome relief after the East Coast winter.
Yesterday I made an appointment for a mid-afternoon conference call on the Monday after Easter, and then realized I’d made a mistake. Easter Monday in Sicily is a major holiday, a day when lots of long lunches and spontaneous invitations and impromptu visits take place.
My Sicilian friends would certainly understand if I said, no, we can’t visit because I have a business call...I am, after all, an Americano and we have strange habits. But it would be quite bad form. And I’d also probably miss some great food and companionship.
In New York, we joke that you know someone is a real friend when you can cancel lunch with them at the last minute if a business meeting comes up. It’s pretty much the opposite here.
Which leads me to ponder how much culture shapes our work habits. My audiences are often very concerned about the way that technology is blurring the lines between work and home. But much of that blurring is due to choices we ourselves make--as people, as organizations, as a society. The technology just makes it easier.
The FCC has decided to regulate the Internet as something closer to a public utility, or the federal highway system, than a cable television service. Critics of the FCC decision--which include, of course, all of the major Internet service providers and their lobbyists--say that this will spell doom for the Internet. There will be less innovation, higher prices, reduced competition, general bad news for the consumer.
When I hear these dire warnings, I think about my Internet and wireless experience in Italy, where telecommunications is distinctly more regulated than in the US.
At my isolated stone farmhouse in the midst of Sicilian cow country, I receive signals from three different cellular companies (all of whom offer high-speed Internet service), plus a choice of two high-speed DSL providers (if I ever get around to putting in a phone line), plus two operators of a sophisticated wireless technology called WiMAX (never adopted by the big ISPs in the United States).
That’s a lot of choices, in the middle of the countryside, on a Mediterranean island that is hardly the technology center of Europe. And all those services are cheaper than what one would pay in the United States. So if that’s what a bit more government regulation produces, then I say: bring it on.
What enormous year-end event could possibly cause media ranging from CNN, the BBC, Newsweek, and NPR to The Globe and Mail and Mental Floss to call the Practical Futurist for an interview?
Try the 1989 movie “Back to the Future 2”--which happens to be set in 2015 and is thus full of predictions for our upcoming year.
The reporters were particularly interested in what the film got wrong, which includes both Doc’s flying car and Marty McFly’s hoverboard. Of course futurists have been getting the flying car wrong since at least 1957, when Popular Mechanics featured a flying car on the cover. They cautioned in the article, however, that we wouldn’t actually have them until 1967.
And the hoverboard? Entrepreneurs have lately come up with a version, but it functions magnetically and thus only floats above metal surfaces. Marty’s hoverboard, on the other hand, floats over anything and the only way I can imagine it might work is anti-gravity. Alas, in 2015, it’s unlikely we’ll even have a complete theory of gravity.
On the other hand, BTTF2 got some things right: Marty uses a thumbprint to pay for a taxi ride (shades of the iPhone 6); TV screens are flat and wall-sized; video telephone calls are increasingly common.
Of course, BTTF2 wasn’t meant to be a futurist manifesto but rather an entertaining movie. And it certainly succeeded at being memorable, considering the number of journalists who are writing about it 26 years later. (We’ll see how many articles appear at the end of 2018 about “Blade Runner”, which was set in 2019.)
But it’s also a good reminder of the difference between futurism and science fiction. New technologies can run into all sorts of financial, governmental and social problems that the fiction writer can happily ignore. For example: even if you could build a reasonably-priced flying car, you’d need new infrastructure for landing, a whole new range of driver skills and the approval of government agencies from the Department of Transportation to the FAA.
And thus a good futurist needs to understand not just technology, but the worlds of business, government, and human nature.
Human nature was one thing that BTTF2 got right. My favorite prediction was that objects from the 70s and 80s would become sought-after antiques in 2015. Sure enough, a couple of weeks ago, an Apple I from the mid-’70s sold at auction for $360,000. Don’t ditch that 1984 Mac quite yet!
I was speaking in Iowa City earlier this week and was reminded again of how vital many Midwestern cities have become. At the same time, a new research group, City Observatory, released a report about where young college graduates are moving. As we already know, they like to move to cities. But, as an excellent New York Times summary points out, what’s interesting is that cities like Nashville, Austin, Portland, Buffalo, Pittsburgh and St. Louis have had the highest percentage increase of young graduates since 2000, all significantly higher than New York City.
I’ve long thought that this is a trend that will continue. As work becomes more virtualized, and cities like New York and Los Angeles become increasingly expensive, it simply makes sense that both employers and employees will look to cities that offer more affordable lifestyles. That’s going to be especially true when the bulk of the Millennial generation begins to think about having kids.
The Internet has not only made it more possible to work at a distance, but it also enhances the smaller city lifestyle. You don’t have to drive fifty miles to see a foreign film--they’re available, streaming. The Internet takes care of just about any exotic shopping needs. There’s the Metropolitan Opera in live HD in your local theater. And given the speed at which trends now spread across the country, the latest artisanal kale shop will probably show up in your neighborhood only a few months after it debuts in Brooklyn.
Yet real estate developers in the major cities continue to build new apartments at a record pace. In New York City alone, developers like to say there are another million people on the way. But I’m not so sure. People like cities, and I don’t expect any reversal of our species’ five-thousand-year march into urbanization. But when you add in the new factor of virtual work and life, I don’t think bigger (and more crowded and more expensive) will continue to be better.
Yesterday the Sony Computer Science Laboratories--Sony’s elite corporate think tank--gave its first symposium in New York City, at the Museum of Modern Art. As is appropriate for an independent think tank, some of the ideas were visionary to the point of dream-like, such as 3D-printable gardens. Others make perfect sense but will be tough to implement in the real world, such as a microgrid power system that uses DC rather than AC, plus wind and solar power, to create energy-independent neighborhoods. Probably not practical for the developed world, but at the right price, ground-breaking in developing countries where large percentages of the population don’t have electricity to start with.
But the most remarkable demonstration for me was very close to Sony’s own home turf: an artificial intelligence system that is able to listen to a musical performer and extract their “style”, rather than recording the actual notes. The system can then create new pieces of music in the style of the performer, or accompany a real musician in the style of a particular accompanist. Researcher François Pachet showed examples of a John Coltrane song done in the style of Wagner, a Brazilian ballad performed in the style of the a cappella group Take 6, and an original composition in the style of jazz legend Bill Evans. A good piece in The Atlantic took a more in-depth look at this last month.
Interesting detail: while the software will take bits and pieces of a composer’s work, it is constrained from copying so much as to constitute plagiarism. It’s a fine line, of course, that hip-hop artists have struggled with in the process of sampling over the years. But Pachet took the intellectual property question an additional step. Recorded music, he pointed out, thanks to everything from illegal downloading to low-cost streaming services, is getting to be pretty low-value these days. “The real new asset of value,” said Pachet, “is style.”
I have a feeling that’s a concept that the lawyers over at Sony Music are thinking about right now. Sony co-owns the largest music library in the world, including, oh, the Beatles and Michael Jackson. If a computer is smart enough to listen to the entire Michael Jackson oeuvre, and then write “new” Michael Jackson songs, just where do those royalties go?
I saw a great presentation last week at a wearable computing conference, by the wearables group at Motorola--a team that’s really focused on building Google Glass-like equipment for industry, rather than consumers.
It was interesting that even at this small industry event, no one in the audience quite agreed on what to call these embryonic devices. Of the two most popular phrases--”head-mounted displays” or “smart glasses”--I think I’ll take the latter. Although now it looks like Google is making progress in making “glass” legally its own. (Hopefully if Apple introduces a version they can call them i-glasses.)
It made me realize that adoption of smart glasses will probably be a throwback to the patterns of the last century, when commercial applications came first and then the technology migrated to consumers. (Of course, that pattern has been turned on its head this century--employees tend to have better computers and phones in their homes than they do at work.)
It’s pretty clear that the first compelling applications of smart glasses will be in areas like public safety (firefighters, for example), equipment maintenance, and perhaps warehousing and logistics--areas where people need detailed and up-to-date information while keeping their hands free. Because it’s such advanced technology, the first really usable smart glasses are going to be expensive, as well.
It’s probably going to be a bit like the adoption curve of tablet computers. Twenty years ago, Fujitsu was already making a good business out of tablet computers for specialized purposes like healthcare, inventory and sales.
Then in 2001 Microsoft tried to introduce the Tablet PC more broadly, and it was pretty much only early adopters who bought it. I was one of them. Frankly, it was a bit of a pain--you had to use a special pen, for starters--but it certainly got lots of attention from curious passengers on airplanes. All in all, not unlike today’s Google Glass.
Finally, in 2010, touch screens plus better interfaces came along and the tablet was launched--twenty years after Fujitsu started selling them.
I suspect it will be the same with smart glasses--although they will go mainstream far more quickly than the tablet did, thanks to Moore’s Law and our increasingly rapid acceptance of new technology.