
Tuesday, 1 October 2013

AT&T’s gigabit network will go live in December … with 300 Mbps service

AT&T said on Tuesday that its planned gigabit network for Austin is real and that the first stage of it will start operating on December 1. Ma Bell today launched a portal where residents can express interest in the service and said it would start by offering what it calls AT&T U-verse with GigaPower with a symmetrical 300 Mbps option.
By the middle of 2014, AT&T says, residents with the GigaPower service will have a symmetrical gigabit connection. There’s no word yet on the price or whether it will be capped.
This puts Austin in a unique spot among cities, with two companies preparing to lay gigabit, fiber-to-the-home networks in the self-proclaimed Live Music Capital of the World. There’s this effort from AT&T, and the planned Google Fiber deployment, which many view as the impetus for AT&T’s original announcement.

How will AT&T get to a gig?

So long, copper!
Google said at the time it plans to connect its first customers by mid-2014, which perhaps not coincidentally is when AT&T will have its first customers upgraded to a gigabit at no additional cost beyond the yet-to-be-determined price, according to Lori Lee, executive vice president of AT&T Home Solutions. Lee didn’t explain how AT&T would get from today’s U-verse speeds, which top out at 24 Mbps downstream, to 300 Mbps on the way to true gigabit connectivity.
AT&T’s current network in Austin is a very-high-bit-rate digital subscriber line (VDSL) network, which means it has fiber deployed to nodes in each neighborhood but uses copper phone lines for the final hop to homes. I asked Dr. George Ginis, SVP of DSL marketing at ASSIA and the inventor of VDSL, whether AT&T’s current lines could be pushed to 300 Mbps or even a gig. He explained that there are technologies that can get a VDSL network to about 200 Mbps using the typical equipment U.S. telcos have deployed.
One technology, called vectored VDSL, allows speeds of up to 100 Mbps, while another uses bonding to deliver speeds of up to 200 Mbps on the double-twisted-pair (it has four wires) copper networks in use in many parts of the U.S. There is a technology called G.fast that could deliver gigabit speeds, but it isn’t standardized yet. So it sounds like, in Austin at least, AT&T is really going to deploy fiber to the home for gigabit customers.
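To make the gap concrete, here’s a minimal sketch, using only the rough per-technology ceilings quoted above (not measured figures), of which copper upgrades could hit a given speed target:
```python
# A minimal sketch, not AT&T's engineering math: the rough copper speed
# ceilings quoted above, and which targets force a move to fiber.
COPPER_TECH_MAX_MBPS = {
    "VDSL today (single pair, as deployed for U-verse)": 24,
    "Vectored VDSL (crosstalk cancellation)": 100,
    "Bonded + vectored VDSL (two twisted pairs)": 200,
}

def copper_options(target_mbps):
    """Return the copper technologies whose rough ceiling meets the target."""
    return [name for name, mbps in COPPER_TECH_MAX_MBPS.items() if mbps >= target_mbps]

for target in (300, 1000):
    options = copper_options(target)
    print(f"{target} Mbps: {options if options else 'fiber to the home (or G.fast, not yet standardized)'}")
```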

A whole lotta unknowns

AT&T’s GigaPower customers will also have the option of TV, voice and “the possibility of integrated mobile service.” Unfortunately, Lee wasn’t as forthcoming with other details, such as pricing.
That’s fine, because it’s not as if we know how Google plans to price its broadband or its broadband-and-TV offering (in its other cities Google does not offer a voice service, perhaps because anyone with broadband already has access to VoIP without paying Google a fee). It’s also not clear whether the voice service Lee described will be VoIP-based or will still use the copper wiring AT&T already has in the home (for now, anyway).
Austin Google Fiber Launch
Also unclear is how AT&T plans to roll out the service and where. The AT&T announcement is timed to the launch of a portal where users can sign up. Lee says that customers won’t have to pay a fee to commit to getting the service, as Google Fiber hopefuls in Kansas City did. In what appeared to be a dig at Google, Lee said “You can sign up and show your interest at any time. It’s not a one-time sign up.”
Of course, that sign-up model and the format of the competition helped Google create economies of scale that lowered its costs for digging fiber, so it seems AT&T is willing to forgo those advantages. And of course, we still don’t know how or where Google plans its own rollout.
Plus, AT&T CEO Randall Stephenson said last week at an investor conference that, because the cost of deploying fiber has dropped so much, he thinks AT&T will deploy faster networks in cities other than Austin. What those costs are is hard to say. Lee didn’t comment, and Google has never commented. I’ve heard estimates ranging from $6,424 from the man who invented DSL down to as low as $450 from industry observers.
So basically, Austinites now have two web sites where they can enter their information to sign up for proposed gigabit networks. AT&T’s apparently will go live at less than a third of gigabit speed, perhaps as a way of gauging demand and helping set price expectations in the market. Since Google’s spokespeople have told me that part of the pricing for Google Fiber factors in the price it pays for broadcast rights as part of the TV package, and AT&T already has those deals, perhaps it’s willing to undercut Google to win in Austin.
Also, if AT&T ties subscribers to a contract, those who sign up for the December service might not be in a position to sign up for Google Fiber, which could disrupt Google’s costs.
Meanwhile, we have no technical details, no pricing, no indication of where the network will launch and no idea what a gig could even do for us. Yet, I can still find people who are over the moon with excitement.

FreedomPop launches its smartphone VoIP service, giving away 200 minutes each month

Mobile broadband provider FreedomPop has officially become a full-fledged mobile operator. On Tuesday it started selling its first smartphone, the HTC Evo Design, and began offering its first voice and messaging services following its usual freemium model: the first 200 minutes, 500 texts and 500 MB of data each month are free; anything more, you pay for.
Though it is a mobile virtual network operator, FreedomPop isn’t following the usual MVNO model of reselling another network’s traditional voice and messaging service. It’s buying bulk 3G and 4G data from Sprint and offering its own VoIP-based communications services over the top. TextNow beat FreedomPop to market with a smartphone and VoIP service in August, but the two MVNOs are in rare company: they’re the first all-IP mobile carriers in the U.S.
FreedomPop plans to offer multiple Android smartphones, but started out with the $99 WiMAX-powered refurbished EVO. Though FreedomPop uses Sprint’s LTE networks for its mobile broadband service, Sprint’s WiMAX footprint is still bigger and finding cheap WiMAX handsets is easier, CEO and co-founder Stephen Stokols said.
“In a lot of places Sprint has both networks, WiMAX is stronger, like LA and NYC,” Stokols said. “We also wanted to launch a $99 phone. That’s hard to do on LTE.”
The phones fall back on Sprint’s 3G CDMA networks, so they’ll work across the country, though the performance of VoIP on 3G might be a bit iffy. Stokols said that FreedomPop spent a lot of time optimizing its VoIP codecs to work on narrowband connections. Though the WiMAX network will produce better call quality, the service is designed to work across the 3G network, Stokols said.
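For a rough sense of what “optimizing for narrowband” is up against, here’s a back-of-the-envelope sketch using textbook codec bitrates and standard IP/UDP/RTP header sizes; it isn’t FreedomPop’s actual codec work, just the arithmetic behind why codec choice matters on 3G:
```python
# Back-of-the-envelope arithmetic: each VoIP packet carries fixed IP/UDP/RTP
# headers, so total bandwidth is the codec bitrate plus a per-packet overhead.
# Codec rates are textbook values, not FreedomPop's measurements.
HEADER_BYTES = 20 + 8 + 12          # IPv4 + UDP + RTP headers (link layer ignored)

def voip_kbps(codec_kbps, packet_ms=20):
    """Approximate one-way bandwidth for a VoIP stream using the given codec."""
    packets_per_second = 1000 / packet_ms
    overhead_kbps = packets_per_second * HEADER_BYTES * 8 / 1000
    return codec_kbps + overhead_kbps

for name, rate in [("G.711 (uncompressed PCM)", 64), ("8 kbps narrowband codec", 8)]:
    print(f"{name}: ~{voip_kbps(rate):.0f} kbps each way")
```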
Otherwise it will look just like a regular mobile voice and messaging service. Each customer will get a phone number, which can be used to call or text any other number. The Android phone’s regular dialer has merely been remapped onto FreedomPop’s communications app, Stokols said.
Keeping with its freemium business model, FreedomPop will give its baseline service away for free. Anyone who wants to go over 200 minutes or 500 texts can pay for the extra units each month or sign up for its premium $11 unlimited talk and text plan. On the data side, FreedomPop’s 100,000 customers are roughly evenly split between paying customers and those who only consume its 500 MB of free data each month.
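Here’s a minimal sketch of how that freemium math plays out for a subscriber. The free allowances and the roughly $11 unlimited plan come from FreedomPop’s announcement; the per-unit overage rates are hypothetical placeholders, since they weren’t disclosed:
```python
# A minimal sketch of the freemium plan described above. Free allowances and
# the $11 unlimited talk-and-text option come from FreedomPop's announcement;
# the per-unit overage rates are hypothetical placeholders.
FREE_MINUTES, FREE_TEXTS, FREE_MB = 200, 500, 500
UNLIMITED_TALK_TEXT = 11.00                                  # premium plan price
OVERAGE = {"minute": 0.02, "text": 0.01, "mb": 0.02}         # assumed rates, illustration only

def monthly_bill(minutes, texts, mb, unlimited_talk_text=False):
    """Estimate one month's bill under the freemium model (illustrative only)."""
    if unlimited_talk_text:
        bill = UNLIMITED_TALK_TEXT
    else:
        bill = (max(0, minutes - FREE_MINUTES) * OVERAGE["minute"]
                + max(0, texts - FREE_TEXTS) * OVERAGE["text"])
    bill += max(0, mb - FREE_MB) * OVERAGE["mb"]
    return round(bill, 2)

print(monthly_bill(150, 300, 400))         # inside the free tier -> 0.0
print(monthly_bill(600, 900, 400))         # pay per extra minute and text -> 12.0
print(monthly_bill(600, 900, 400, True))   # or just take the unlimited plan -> 11.0
```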
As with most of its new services, FreedomPop is launching the smartphone program in beta, allowing it to gradually roll it out to its customers. FreedomPop has stockpiled 30,000 Android WiMAX phones and has its first LTE phones on the way, so if the beta proves successful, Stokols said, it can ramp it up quickly.

BlackBerry: The one-time smartphone leader, its fall, and the comeback that never happened

The year was 1999. Just when it seemed everyone who needed one was wearing an alphanumeric pager on their belt, BlackBerry changed the game with its first portable email device. Three short years later, BlackBerry added voice capabilities, turning its data-centric devices into early smartphones that helped show the world the power of mobile computing while spawning a highly profitable enterprise services group.
How could a company so ahead of the smartphone and mobile broadband curve fail to maintain its place in the market in less than a decade and be prepared to sell itself for just $4.7 billion in 2013, after being valued at over $40 billion in 2007?
There are a number of reasons, but to understand them, you have to step back in time to get the whole story.

Helping to create an industry

Getting email or surfing the web on a device that fits in your pocket is pretty commonplace now. In the mid-1990s though, cell phones were clunky devices that couldn’t hold a charge for more than a few hours and were one-trick ponies: All they did was make or receive voice calls. Small pagers became handy, particularly for the growing IT industry.
Numeric pagers were just the start, though. BlackBerry, then called Research In Motion, saw the potential of sending text as well as receiving it. The value was significant: instead of being notified of a phone number to call back, you could receive actionable information. By 1996, BlackBerry had a two-way pager on the market, sending data over the internet and offering both delivery and read receipts, now a standard feature of most messaging services.
BlackBerry 850
These new devices also included hardware features that appeared in many later products from other companies, like thumb-based keyboards and trackwheels for scrolling through menus and messages. The company also created its BlackBerry Enterprise Server (BES) to help securely deliver mail and messages.
Data-centric messaging machines were all the rage for a few years and during this time cell phones went from a luxury item to a device that could be obtained by a broad audience. Handsets also got smaller while battery life improved. All of this happened at just the right time for BlackBerry, which began to merge its two-way messenger devices with cell phones in 2002: BlackBerry’s smartphone was born.

The good times keep getting better

The next several years saw BlackBerry grow its lucrative BES subscriber base from 534,000 in 2003 to nearly 5 million in 2006. The company made smart decisions to improve its handsets with color displays, and some early models even included a Wi-Fi or Bluetooth radio.
Helping to make handsets even smaller was the introduction of SureType: A predictive text method that also allowed keyboards to have half as many keys. Each individual hardware key represented two different characters and SureType helped the handset figure out the typed words.
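A toy sketch of the idea, assuming a made-up key layout and a tiny word list rather than RIM’s real engine: the handset records the ambiguous key presses and a dictionary recovers the word the user probably meant.
```python
# A toy sketch of the SureType idea, not RIM's actual engine: each hardware key
# stands for two letters, and a dictionary resolves the ambiguous key presses.
# The key layout and the tiny word list below are illustrative placeholders.
KEY_PAIRS = ["qw", "er", "ty", "ui", "op", "as", "df", "gh", "jk", "l",
             "zx", "cv", "bn", "m"]
LETTER_TO_KEY = {ch: key for key, pair in enumerate(KEY_PAIRS) for ch in pair}
DICTIONARY = ["fat", "day", "hello", "word"]   # tiny sample lexicon

def key_sequence(word):
    """The sequence of two-letter keys a user presses to type the word."""
    return tuple(LETTER_TO_KEY[ch] for ch in word)

def candidates(keys_pressed):
    """Dictionary words that match the ambiguous key presses."""
    return [w for w in DICTIONARY if key_sequence(w) == tuple(keys_pressed)]

print(candidates(key_sequence("day")))   # -> ['fat', 'day']: same keys, the lexicon disambiguates
print(candidates(key_sequence("word")))  # -> ['word']: unambiguous
```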
At this time, BlackBerry saw promise outside of enterprises and began to offer smartphones attractive to consumers. These phones included a built-in chat client, media player software, expanded memory and, with the Pearl handset, a change from a trackwheel to a trackball. The biggest draw over competing handsets, though, was the famous BlackBerry keyboard, which made messaging, email and chat easy to use. First the Pearl, and later the Curve, became hot handsets that consumers wanted.
Of course, BlackBerry wasn’t the only choice for a smartphone: Nokia and Palm — remember them? — were in the mix as well. Like BlackBerry, Palm changed with the industry, first offering personal digital assistants and later bringing its PalmOS software to handsets. And Nokia was in its prime, leading the smartphone market worldwide, a spot it would hold until the market was disrupted.

The game changed in 2007

By 2007 more than 1 of every 3 new smartphone purchases in the U.S. was a BlackBerry. Worldwide the company was second only to Nokia in the smartphone business, which was just gathering steam. BlackBerry’s market share would continue to grow but peaked in 2009 only to spiral downward over the next four years. What happened? Apple.
Apple’s iPhone changed the game for BlackBerry and everyone else trying to make smartphones. Yes, consumers still wanted their connected features, but interacting with virtual objects through a touchscreen was something BlackBerry didn’t offer, and the iPhone’s Safari web browser was like nothing the early smartphone market had seen. And in 2008, when Apple announced support for third-party apps, its iPhone sales growth only accelerated. Simply put, BlackBerry had nothing comparable to offer and wouldn’t for years.
But it wasn’t just the changing market that began BlackBerry’s downfall; it was the dismissal of the idea that the market was changing. BlackBerry’s leadership never took Apple’s iPhone seriously as a threat to its business. At least not until it was too late.
Here’s one of many examples of that dismissal from Jim Balsillie, co-CEO of BlackBerry, speaking about the then-new iPhone:
“The recent launch of Apple’s iPhone does not pose a threat to Research In Motion Ltd.’s consumer-geared BlackBerry Pearl and simply marks the entry of yet another competitor into the smartphone market.”

BlackBerry’s responses to the iPhone: A bad Storm and a dim Torch

By November of 2008, BlackBerry began selling its first touchscreen device: the BlackBerry Storm. But there were two huge problems that led the New York Times to declare it a “dud.”
First: that famous hardware keyboard that BlackBerrys were known for was gone. In its place was a horribly implemented touch-based keyboard. The ease of text entry, long a staple of BlackBerry devices, was missing. Second: The operating system was improved for touch input but still not optimized for it. BlackBerry overlooked Apple’s approach of creating a symbiotic hardware and software relationship, instead thinking it could just slap a touchscreen on a BlackBerry and continue to enjoy strong sales.
Image: NYT’s Pogue really, really disses the BlackBerry Storm (November 26, 2008)
Another example of BlackBerry playing follow the leader — first to Apple’s iOS and later to Google’s Android — was BlackBerry App World. This smartphone app store opened in April 2009, nearly a year after Apple’s own store. Besides being late to the app game, the store opened with just a few hundred apps. It was deemed “good enough” in our review.
But time would prove that it wasn’t quite good enough at all. Nor was the Storm. So in 2010, BlackBerry launched the Torch, a hybrid device with both a touchscreen and a hardware keyboard. In my review, I called it an evolution, not a revolution. While it offered a better experience than prior models, thanks largely to better software, it wouldn’t compel current iPhone or Android device owners to make a switch. New software would be needed to rival or exceed the experiences offered on competing devices.

QNX will save us all

If the Storm and Torch were strikes one and two, BlackBerry’s entire QNX strategy would be strike three and seal the company’s fate. Knowing that its aging BlackBerry OS was beyond saving, BlackBerry bought QNX Software Systems from Harman International in April of 2010. The idea was to integrate QNX software into BlackBerry hardware to create a more compelling software experience.
It was a sound idea but the implementation proved a challenge. While BlackBerry was working on QNX for its devices, it rolled out BlackBerry 7 software and new phones to run it on. This turned out to be a visible stop-gap measure because the new handsets would be incompatible with the new QNX platform. That hurt not just consumer adoption but developer interest as well during the transition.
The transition took far too long as well. In July of 2011, 15 months after the QNX purchase, BlackBerry laid off 2,000 employees and still had no smartphones running the new software. Instead, it had made a questionable decision by debuting that software on a new tablet, the PlayBook, in April.
BlackBerry PlayBook
The device actually worked well for what it could do, but it was what the tablet couldn’t do that was the key issue: it came without an email client. Due to technical challenges with security, BlackBerry required the PlayBook to use a BlackBerry phone for email, another “strategy” that wouldn’t help expand BlackBerry’s customer base. The device launched with few apps, and many that were available were simply Android apps running in an emulator.
PlayBook sales never took off, and by the end of 2011 BlackBerry still didn’t have QNX-powered handsets on the market. At that time its U.S. market share had dropped to 16 percent of all smartphones sold, nearly half that of both iOS and Android devices. Tablet sales were on the rise for those two competing platforms as well, yet not even $300 price cuts could move BlackBerry PlayBooks.
Surely 2012 would be better?

The beginning of the end

A new year brought a new CEO, as then-COO Thorsten Heins took over from long-time co-CEOs Jim Balsillie and Mike Lazaridis. Heins seemed positive from the get-go, suggesting that the company’s investment in QNX would pay off and that BlackBerry was on the right track for success. But in March, Heins came clean on an investor call and publicly admitted the situation was dire.
A lack of new products in 2012 essentially wasted another year for BlackBerry’s comeback, which by this point was admittedly a long shot. No major products were launched, and the new BlackBerry 10 software, based on QNX, didn’t arrive in the second half of 2012 as originally planned. Instead, the company announced in June 2012 a BlackBerry 10 delay until 2013 and another 5,000 job cuts. The PlayBook tablet did see a new version of its software, but aside from that, BlackBerry delivered no innovation during the year.
The innovation arrived in late January 2013 in the form of the BlackBerry Z10. Was it better than the company’s prior all-touchscreen efforts? Yes, by far. But it still had some quirks and a number of gestures that could take customers time to get used to. And the mobile app gap had only widened. After just two days with the device I wrote the following:
“The lack of apps [on BB 10] that I use on other devices is concerning. BlackBerry has commitments for Skype, Amazon Kindle and others, but they’re not there. Nor is Netflix, any recognizable top-tier games, or my offline reading platforms. Google Talk is there, but no Google Voice; a must for me on any phone. YouTube is the HTML 5 mobile site wrapped up.”
The hardware was solid, the software was improved, but the overall experience was lacking. I felt deja vu from the Storm launch: existing BlackBerry users might be happy but the new phone wouldn’t grow BlackBerry’s user base. The company followed up with the Q10, Q5 and Z30 throughout the year, but its overall market share kept shrinking. There would be no comeback for BlackBerry’s hardware business.

A $4.7 billion bet

Although hardware sales fell, BlackBerry services were still generating profits. Perhaps that’s why Fairfax Financial decided to offer $9 a share for the company in September, a $4.7 billion deal in total. BlackBerry signed a letter of intent to close the deal, pending a six-week due diligence effort by Fairfax.
What happens to the company after the deal is up to Fairfax, but it’s not likely going to entail the selling of consumer handsets. Instead, the new owner would likely sell the hardware business off if possible and focus more on the lucrative BlackBerry services business. Licensing of BlackBerry’s many patents is another likely possibility.
In the end, the smartphone market was BlackBerry’s to lose. And it did. By dismissing new competition, the company became complacent and sluggish. Instead of reacting faster and more intensely, BlackBerry faltered with a few quick-fixes that failed. Jumping into the tablet market with an unfinished product further hurt the company’s outlook, and taking too long to deliver the real challenger to iOS and Android — namely, BlackBerry 10 — made customers lose faith that BlackBerry could deliver a compelling product.
The market couldn’t wait, so it moved on. And so BlackBerry may not be a part of the industry it helped create because the comeback it promised never arrived.
Could BlackBerry have avoided its fate? Hindsight is always 20-20, so knowing how the last few years played out provides us with information that BlackBerry didn’t have. Perhaps it should have seen the disruption of iOS and Android sooner, however.
Instead, the company took an almost arrogant approach towards its competition and clearly didn’t have a long-term plan to combat it. Each time BlackBerry adjusted to the market, it would still be two steps behind iOS and Android, causing more churn with stopgaps such as BlackBerry 7 and handsets that wouldn’t ever run the company’s future software. All in all, it was a costly lesson for BlackBerry, the once and not future king of smartphones.

Beware of Amazon’s “legacy” cloud, says HP’s cloud guru

In an interesting, if self-serving, twist, Hewlett-Packard’s top cloud guy says enterprises should consider alternatives to Amazon’s old-school cloud.
Some might say that’s the pot calling the kettle black; HP is a 74-year-old tech company and Amazon Web Services has been around since — what — 2007? On the other hand, HP is banking big on the new but fast-maturing OpenStack technology. It uses both SUSE and Ubuntu Linux in parts of its cloud infrastructure but is also working on its own distribution, which is already running an HP Cloud sandbox, Saar Gillai, SVP of converged cloud at HP, said in an interview Tuesday morning.
The sandbox, available since June, has drawn a lot of interest from Fortune 50 and Fortune 100 customers who want to play with it, he said. He’s banking that this interest will morph into cloud business down the road.
Update: OpenStack cloud technology is gaining ground, with new orchestration and management capabilities coming online. On the other hand, Gillai said, “yes, there are certain players in this market for 8 or 9 years with legacy stuff that’s not that new.” He did not mention AWS by name, but the context was clear: the message was that Amazon is the incumbent player here and HP is the new cloud on the block, albeit a new cloud with a lot of existing enterprise customer relationships.
Saar Gillai, Hewlett-Packard senior VP of converged cloud speaking at OpenStack Summit 2013

What’s old is new and new is old

HP is hoping that the open-source fervor that propelled Linux to the top of the heap in enterprise and mobile operating systems will similarly motivate enterprise customers to take the OpenStack cloud plunge — with full HP services and support attached, of course.
Given the sheer number of contributors to OpenStack — HP claims the fourth-most contributions to the latest Havana release — it’s clear that the technology has piqued interest among developers and their employers. But OpenStack launched only four years ago, so it’s still new to the game, and businesses weighing a move to the cloud need to know that moving legacy stuff to any cloud isn’t a day at the beach. HP, Gillai said, will help assess those difficulties up front and, if needed, help with the move.
AWS is clearly the top public cloud contender by far and Gillai conceded that it offers huge benefits, especially to startups that want to minimize IT expenses and add and delete resources easily.
“Our challenge is, if you look at [Amazon's] attractiveness besides the capex vs. opex stuff which we can solve, it is the frictionlessness (service) AWS provides. How do we do that in the private cloud?” Gillai said. It’s clear he thinks that once that puzzle is solved, all bets are off.
OpenStack projects like TripleO are going a long way to ease the setup and implementation of OpenStack clouds, he said.
HP, like all the non-AWS cloud players, is laying on the hybrid cloud message thick. They say that model will let business customers keep sensitive data and apps under their control in private clouds but use public cloud as needed for workload spikes or non-sensitive data and applications. Update: As if on cue, Gartner came out Tuesday with a projection that by the end of 2017 nearly half of all enterprise companies will have hybrid cloud deployed, compared to just 1 to 2 percent now, according to Gartner analyst Thomas Bittman.
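To make the hybrid pitch concrete, here’s a toy placement sketch, with invented workload names and an arbitrary capacity threshold rather than any vendor’s actual tooling: sensitive workloads stay private, and everything else stays private until capacity runs out, then bursts to public cloud.
```python
# A toy sketch of the hybrid-cloud pitch described above, with invented
# workload names and an arbitrary capacity number: sensitive workloads stay on
# the private cloud, and everything else stays private until capacity runs out,
# then bursts to a public cloud.
PRIVATE_CAPACITY = 100   # arbitrary units of on-premises capacity

def place(name, units, sensitive, private_load):
    """Pick a cloud for one workload and return (target, new_private_load)."""
    if private_load + units <= PRIVATE_CAPACITY:
        return "private", private_load + units
    if sensitive:
        raise RuntimeError(f"{name}: sensitive workload, but the private cloud is full")
    return "public", private_load          # non-sensitive spillover bursts to public

load = 0
for name, units, sensitive in [("payroll-db", 40, True),
                               ("web-frontend", 50, False),
                               ("batch-analytics", 30, False)]:
    target, load = place(name, units, sensitive, load)
    print(f"{name} -> {target} cloud")
```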
Now, to think that AWS is unaware of businesses’ desire for private and hybrid options would be naive. The fact that it’s fighting so hard to keep the CIA private cloud contract is one indication of how important that is to the company. If AWS can satisfy CIA requirements, AWS might get beyond its public-cloud-only roots. Amazon also has its cordoned-off GovCloud option and we hear it’s duplicating that model around the world in an attempt to ease concerns over data privacy and sovereignty.

On the other hand …

And while AWS may be 8 years old, it keeps churning out more and higher-level services at a rate that most of its own biggest users find hard to track.
Still, HP has a point that for some workloads and for some companies, AWS will not be the best and only cloud option going forward, especially if companies like HP, Red Hat, and IBM can offer OpenStack options that are at least somewhat price-competitive and come with enterprise-class service level agreements and support.

How to build for a world where you’re connected to hundreds of devices

Over the next decade or two, everything that can have connected digital technology injected into it, will. Today’s smart watches and smart shirts, such as adidas’ miCoach Elite, will become ubiquitous, as will adaptive technology such as connected cutlery like Lift Labs’ spoon for measuring and correcting Parkinson’s tremors. The trajectory is clear: each person will have hundreds of connected devices in their life.
That’s the good news. But we can also see enough of this future to know that managing a world of many devices, much software, and new behaviors will require rethinking how we design, perhaps from scratch.

Multiple device system design

Some of the design challenges of experiencing and designing for this world are starting to become apparent, even painful.
Our ability to manage multiple devices, each with its own software ecosystem, interface, and quirks, is reaching its limit. There’s only so much attention we can pay to our devices, and as consumers we’ve started to leave a wake of unused digital things that are a little awkward to use, recharge, wear, or carry around.
Moreover, many new devices involve user experiences that move across devices: a sensor synchronizes to a server that an app contacts to visualize analytics. There’s real value in creating new device categories, just as Nike did with the Nike Plus, but instead of making consumers learn new device ecosystems, we need to think about offering services. To do this, designers have to learn how to create experiences that smoothly span multiple devices.
Synchronized cloud-based services like Netflix, Dropbox, and Angry Birds provide a glimpse of how device-spanning design could feel. I can grab a file on whatever device is synched to my cloud service. I can pause a movie on one connected video display, and unpause it on another. I can continue the game I started on my phone on my TV. In all these situations I don’t really care about the specific device because it only serves as a frame for the thing I really care about, the service.
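That pause-here-resume-there pattern is simple to sketch: keep the position in a shared store keyed by user and title, and treat every device as a frame around that state. The in-memory dictionary below stands in for a real cloud sync service; the names are illustrative.
```python
# A minimal sketch of the pause-here-resume-there pattern: the playback
# position lives in a shared store keyed by user and title, and each device is
# just a frame around that state. The in-memory dict stands in for a real
# cloud sync service; names and titles are illustrative.
cloud_store = {}   # (user, title) -> seconds watched

def pause(user, title, position_s, device):
    """A device reports where the user stopped; the service remembers it."""
    cloud_store[(user, title)] = position_s
    print(f"{device}: paused '{title}' at {position_s}s")

def resume(user, title, device):
    """Any other device asks the service where to pick up."""
    position_s = cloud_store.get((user, title), 0)
    print(f"{device}: resuming '{title}' from {position_s}s")
    return position_s

pause("kim", "some-movie", 1320, device="living-room TV")
resume("kim", "some-movie", device="phone")   # a different screen picks up at 1320s
```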

Designing the multiple device system experience

How do you design a single service that can appear as an app, as a data visualization, as a specialized device, or as one of a dozen different hardware platforms we haven’t thought of yet? Current models typically fall into two broad categories: standards and vendor lock-in. Complex “do everything” standards make implementation difficult, require heavy configuration (I suggest looking at media sharing, which is way more complex than it should be, despite dozens of standards), and lead to a frustratingly inconsistent user experience that doesn’t scale well.
Multiple links, connections, networks and devices
In vendor lock-in, everything works as long as buyers stay in a single company’s proprietary system. It only scales when the vendor scales it, which may be desirable from the company’s perspective but leaves consumers waiting on the vendor.
As both a consumer and a designer, I’ve found that neither approach has enabled sophisticated multi-device experiences. Companies still chase two-screen experiences… but what about the twelve-screen experience? The “twelve-screen, forty embedded sensors, ambient display, nearby camera drone and car” experience? The traditional models—let’s call them “linear”—may have been appropriate when we had a couple of multipurpose devices, but they feel too limited in a world of many connected devices.
Designing large-scale, multi-device service interactions is more like planning and running a farm rather than setting up and operating an assembly line. Let’s call this non-linear approach “emergent.” Essentially we need to let go of the desire to tightly control functionality at the micro level, ditch the tools that stem from those assumptions (our popular programming languages, interaction design methods and software development environments), and focus on creating tools that induce large-scale behaviors at the macro level.

Emergent behavior

Emergent behavior in multi-device systems is not a new idea. Cellular automata and intelligent agents have been tried as inspirations in the space of multi-device experience design, and multi-device experience research goes back twenty years or more. General Magic’s visionary Telescript language tackled related questions in the early 90s. However, most of these projects focused on low-level functionality, which is a distraction.
What we’re missing is a high-level approach for creating emergent multi-device user experience. How can we easily tell an ensemble of (perhaps arbitrary) devices that we’re interested in achieving a certain result, and have them trend toward a positive outcome close to what we’re hoping to achieve? That’s exactly what we do when we ask a friend to make dinner, or we plant a garden, or we start a project where we don’t know all the steps. That is, in fact, largely how we manage our everyday lives. We expect that the world will be imprecise, but won’t fail catastrophically and will allow us to engage in a dialogue that iteratively guides the result in roughly the right direction.
SimCity traffic simulation
What does such a system look like? There’s an inspirational class of software that demonstrates this kind of interaction well: “god games” like SimCity. In these games the complexity of the simulated environment is so high that it’s impractical, or impossible, to control all of the components. Instead, players cultivate an emergent system using a limited set of tools and hope it moves roughly in the desired direction. Of course, much of the entertainment in these games comes from the fact that the system that emerges does not behave as desired, and requires management.
The outcomes might not always be entirely automated; for example, a cleaning service can ask “Do you mean here?” when it’s not sure which of several objects you’re referring to, and you can answer by pointing and saying “no, on the bookcase.” (That would also be a more graceful way for devices to negotiate; Sims just burn down the town.) The role of user experience designers in this situation may be to use these same emergent behavior tools to create systems that are highly failure-resistant, so most outcomes will be positive most of the time. Of course some systems, say an IV drip controller, will need to be precisely predictable, but perhaps a hospital bed management system can be built as an emergent system that responds flexibly and occasionally requires a nurse to say “not right now, come back later” to a sheet changer that showed up at the wrong time.
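Here’s one way such a system could be sketched, with invented device names and a deliberately coarse goal format: state what you want, let whatever devices are present volunteer for the parts they can handle, and ask the human only when more than one could act (or none can).
```python
# A toy sketch of the emergent approach argued for above, with invented device
# names and a deliberately coarse goal format: state what you want, let the
# devices that happen to be present volunteer for the parts they can handle,
# and fall back to asking the user when more than one could act.
DEVICES = {
    "hall-lamp":     {"light"},
    "window-blinds": {"light", "privacy"},
    "thermostat":    {"temperature"},
}

def pursue_goal(needs, confirm=lambda question: True):
    """Assign each need to a volunteering device, or admit nothing can help."""
    plan = {}
    for need in sorted(needs):
        volunteers = [name for name, caps in DEVICES.items() if need in caps]
        if not volunteers:
            print(f"nothing here can help with '{need}'; leaving it to the human")
        elif len(volunteers) == 1 or confirm(f"use {volunteers[0]} for '{need}'?"):
            plan[need] = volunteers[0]
        else:
            plan[need] = volunteers[1]     # user vetoed the first volunteer
    return plan

# Goal: "make this room good for an afternoon nap", expressed as coarse needs.
print(pursue_goal({"light", "temperature", "quiet"}))
```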

Giving up control

Sure, the emergent model means that sometimes things won’t work out. But it’s not like the linear model doesn’t produce substandard results too. The assumption that we need to precisely identify and control all of our devices at all times leads, counter-intuitively, to less control over our daily lives. That’s the predictable result when designers assume users have the inclination or time to do all the work of creating and managing complex device ensembles.
Some experts may enjoy that kind of micromanaging attention, but we shouldn’t assume that everyone does. Instead of starting from the worm’s-eye view of a single device that needs constant help to find and work with other devices, we can start from the god-game view of a field of devices that needs to be cultivated. If we do that, we can start to see connected information-processing devices not as computers that need to communicate, but as capabilities that will be used as needed. And we can start to see problems not as failures that collapse in a pile of incomprehensible error messages, but as the start of a conversation.
Special thanks to Elizabeth Goodman for her comments on an early draft, inspiration from Steven Johnson’s Emergence, Ben Cerveny’s thoughts on games and cultivating technology, Scott Jenson’s thoughts on discovery, and Luke Plurkowski’s on conversations.
