This article first appeared in Issue #2 of Connected, a quarterly publication about connected devices, alongside many smarter observations about wearable technology. You should totally buy the issue here.
da•ta rash \ˈdātə ˈrash\, n. an irritating or unsightly eruption of information on the wrist or other site of wearable technology.
Every technology has its toxic byproducts and associated maladies. The pollutants of the industrial era cursed us with black lung, lead poisoning, radiation sickness and more. Now the information age threatens to ding us with the damaging, if less deadly, ailments of data pollution.
For all the remarkable opportunities that information technology has unlocked, it’s hard to dispute the downside of our diminished ability to focus, to find calm, to connect with the people we care about. As social networks spew a dazzling blizzard of text, images and alerts, we’re buried under the impossibility of consuming those messages as quickly as they are produced. Online, we enjoy the illusion of companionship without the demands or benefits of friendship. Offline, we test our genuine friendships by gazing into glass slabs instead of enjoying one another’s company.
And now here come the wearables. I’m a technologist, an enthusiast, an optimist. I’m beside myself about the possibilities of ubiquitous computing. But I’m also concerned that the first generation of wearable gadgets is buffeting the body in unintended ways, like so many other technologies that came before. Will our skin burn with bubbling boils of data? Will our ears buzz with non-stop notifications? Will our eyes flicker with the alerts streaming across the clothing and accessories of others?
There are so many opportunities in the fact that we can now wear data. But the risk is that it will wear us.
It doesn’t have to be that way, of course. Careful, humane design can give us the benefits of wearing data without allowing information poisoning to seep beyond the screen and into our physical selves.
Some suggestions: design for pre-attention; design for fashion; design for identity; design for the individual; and design to amplify our humanity.
Designers of smartphone interfaces did their jobs a little too well. They created experiences so engaging that they soak up all of our attention. They did it by combining very personal data, social interaction, and a hefty dose of FOMO into a visual interface that demands focus and concentration to make sense of. Our screens become the foreground to everything else, and in the moments when we allow them to fade to the background, alerts prompt us to pick them up again.
This model is not a promising future for wearables. Smart-watch designers, however, seem to be smitten with the idea of strapping a smartphone equivalent on your wrist. These watches strive to replace or supplement smartphones with screens that update you constantly with the latest info. This “convenience” imposes more information onto your body than it can or should bear.
As the internet of things turns everything—every object, every place, every person—into a potential interface, those interfaces have to be more discerning. They should demand our attention only at truly demanding moments, not at the receipt of every new email. The real luxury of wearing information is not in exposing ourselves to every passing data point but in filtering that data in ways that alert us gently, even subconsciously, to changes in our environment.
Cognitive science has a name for this. Pre-attentive processing is the way our brains gather information from the environment when we don’t even realize it. In the flash of an eye and without even a moment of concentration, we detect changes in temperature, in color, in motion, in facial expression. We process these environmental cues subconsciously, without effort, so they don’t compete with or intrude upon the subject of our conscious focus. Contrast that with the concentration it takes to read even a short text message, an activity that requires you to tune out everything else for a few seconds.
Designing for pre-attention makes the information display so subtle that it becomes practically instinctual—a spidey-sense awareness of your personal data. The original wearable technology—the watch—is a model of this kind of ambient, low-impact display. Unlike phones, watches don’t push or interrupt but quietly make their information available whenever you choose to seek it out. That information also happens to be highly glanceable and consistently formatted, and it requires virtually zero cognitive overhead. We glance absentmindedly at clocks or watches without losing the attention we’ve invested elsewhere. As an information interface, the watch is neither greedy nor preening.
Turning a watch into a smartphone undoes all of that elegance. Ideally, the smart things we wear on our bodies shouldn’t ever buzz, beep, or tantrum. They should quietly respect our attention, standing ready to deliver information with as little distraction as possible.
The Withings Activité is one of the few smart watches to embrace the original, inspired interface that served analog watches so well. It’s a pedometer that does the usual fitness-tracker stuff, syncing via Bluetooth with your devices to track your steps and sleep patterns. But its display is decidedly low-res: a simple dial shows your progress toward your daily goal, from zero to 100 percent. It’s elegant, nonintrusive, and, by the way, the battery lasts a whole year, not just a few hours.
All pre-attentive interfaces are similarly simple, though not necessarily so analog. If you must design a watch that alerts you to the state of your inbox, for example, there are better, less intrusive alerts than numbers, beeps, or buzzes. Just ask Bilbo Baggins. In The Hobbit, Bilbo’s sword Sting glows when orcs are nearby, its glow growing stronger as danger increases.
Sting is a pre-attentive approach to personal safety. A pre-attentive approach to smart watches could do the same, changing the color or intensity of a glow as messages from certain people start to pile up (family or coworkers in this case, not orcs). This shift in color or intensity is, for all practical purposes, a single-pixel display. It’s a low-resolution signal that cuts out unnecessary detail to give just enough info to make a decision or dedicate attention. When this glow reaches a threshold of importance, you can turn your focus to a different, more attentive display, like your computer or phone.
A single-pixel display doesn’t have to be single-function. This glow-meter approach could deploy multiple colors to track multiple types of data. If your watch glows red when your inbox needs attention, it might glow blue when rain is on the way, or green when a deadline approaches. The result would be a kind of mood ring to show the state of your personal data cloud. (In fact, why not add mood as a data dimension, too? Your wristwatch could glow when your partner is in distress or is simply thinking about you.)
Happily, this glowing “Hobbit effect” could also be beautiful, which brings us to the next remedy for unsightly data rash: design for fashion.
Clunky objects lend themselves to unsightly data rash. (Literally. The plastic Fitbit Force bracelet was recalled after causing allergic rashes and blistering.) As we’ve begun to dress bodies with wearable technology, the focus has been more on “technology” than on “wearable.” The industry has tended to focus on the engineering question (how can we bolt this technology onto a body?) instead of a more challenging and subtle fashion question (what if this beautiful wearable object happened to be magic?).
We should strive to create objects that people want to wear even without their built-in technology. We might love our gadgets for their special powers, but we should equally love them for their essential wearability as aesthetic objects and personal fashion statements.
Just look to our earliest wearable technologies—eyeglasses and wristwatches—for instructive inspiration. Both gained real commercial traction only when they also became fashion statements—ornamental as well as functional. Fitness fashion (rubber bracelets) and tech fashion (screens and polished titanium) are fine as far as they go, but it’s time to explore a fuller range of fashion and personality in the smart objects we intend to wear. It’s time to look smart, not just act smart.
Our clothing and accessories are personal expressions to the world. Too many of this first generation of smart objects ignore this fundamental external role. They are instead designed for a relentless inward focus, tracking data for private consumption, or displaying info intended solely for its wearer. These gadgets let us wear data, but they rarely share that data with the outside world in the traditional way that we share what we wear. Meantime, the selfie stands in for our current state of digital dress-up.
This is an observation that raises more questions than answers, but those questions all present fascinating and useful starting points for designers. What does it mean to wear data? How can I project data in a way that expresses my passions, my sense of humor, my well-being, my state of mind, or whether I’m open to interaction or feeling private?
In other words, how might we turn data from a rash-inducing assault of information to a personal expression of self?
Clothing and accessories are essential to our public identity, and so their augmented versions should likewise augment and extend the broadcast of that identity. This sharing won’t always be visible, and the target of the sharing won’t always be people. Wearing data means that we expand our social circle beyond people to the smart things and places that surround us.
Location data gave smartphones the magic necessary to create a mainstream wave of mobile computing. For wearable computing, identity data seems likely to be a similarly critical ingredient. Gadgets like the Nymi bracelet or Disney’s MagicBand turn our bodies into secure broadcasters of unique identity. The Nymi uses a biometric—the unique electrical signature of your heartbeat—to confirm that you’re the one actually wearing it, while the MagicBand ties a wireless identifier on your wrist to your Disney account. By broadcasting unique identity, they open the opportunity to let us invisibly negotiate with trusted sources to wrangle anything from government services, to door locks, to payments, to restaurant reservations.
But this approach has an important secondary effect, too. When wearables focus on identity first, they can relegate a whole range of features, sensors, and data-gathering functions to other gizmos that happen to be nearby. Your bed is perhaps better suited to tracking your sleep patterns than a bracelet, for example; the bed just needs to know that it’s you who’s sleeping in it. Foursquare co-founder Naveen Selvadurai calls these embedded technologies “there-ables,” devices and sensors that are already there in the room; you just show up.
A sensible way to avoid data rash is simply to limit exposure to information allergens. Let’s find ways to reduce the number of sensors we have to wear and push them instead into the smart environment around us. Certain activities will always require wearable sensors in order to work. We have to wear pedometers or heart-rate monitors to make them go. But a whole host of location, security, and home automation features can be pushed off of our bodies and into the semi-smart environments around us.
When sensors can live near us instead of actually wrapped around us, we don’t have to wear so many of the things in the first place. How many bracelets are we expected to strap on, after all? My friend Rachel Kalmar, a data scientist, often wears over 20 smart bracelets at a time in order to make the point that most of them are at once redundant and incompatible. The emerging wearables industry can surely do better.
Alas, there’s a potentially troublesome outcome if we make wearables focus on identity in order to outsource data-gathering: it could make our identities far more public than most of us are comfortable with. When our gadgets start announcing our presence to any device in broadcast range, it’s easy to imagine those devices getting a little pushy. A subset of marketers persistently and excitedly promise a future of location-based advertising where we’re pummeled by ad messages and discount offers as we pass by storefronts or walk through shop aisles. What could be more horrific than a “service” that bombards you with ads you can never escape? This is perhaps the worst kind of information poisoning: wandering endlessly through a thick and toxic cloud of targeted commercial messaging.
And so we must take care. Managing identity, that most personal piece of data, requires respect, transparency, and the confidence to cede control to the individual.
When I talk to people about a sensor-laden future full of smart objects, the topic is greeted with equal parts excitement and dread. The dread comes from several creepy prospects: sensors everywhere (including our bodies) might create a culture of constant surveillance; our data might be used and seen in ways we don’t understand or control; and we might forfeit agency over our environments to “smart” devices that aren’t quite smart enough, buffeting us with dubious decisions.
These fears—the staples of every dystopian sci-fi movie—are about loss of control. They rail against a data environment so polluted that we no longer know how our personal information will be used or how machines might impose themselves on us.
As we design personal services and devices that respond to the individual, it’s important that we design first and foremost for that individual. Your data—and especially your identity—should belong to you before anyone else. You should have confidence that your personal gadgets and services operate in your interest, and not solely in the ambiguous interest of the megacorp that created them.
Services should be designed as opt-in, not opt-out. You decide which service gets access to your identity, and the off switch should be obvious and available. Wearables maven Liza Kindred suggests that services should have “nutrition labels” that clearly identify the information they’ll use and how they’ll use it. Who gets your data? How much of it, and for how long? Are they allowed to compare it with other people’s info? Can you make the data self-destruct? All of those decisions should be easy to understand and squarely within your control as the person making data or identity available.
The wearables world often uses the familiar but unfortunate language of science fiction to describe its smart devices: they “augment” or “enhance”, they are prosthetics, bionics, even cyborg extensions. This language emphasizes the technology, not the human being wearing it.
Technology should bend to our lives instead of vice versa. Instead of using the cold and creepy terms of enhancement or augmentation, I suggest wearables should aim to amplify our humanity. They should let us be who we already are, only more so. They should give us greater control, mastery, and understanding over our environment and ourselves. They should reinforce connections with the people we love and the places we visit, instead of isolating us under a torrent of data. They should draw us into the world instead of drawing our eyes to a screen.
We are suddenly awash in data, with the fresh possibility of wearing devices that are able to capture, process, and report that data. Like previous eras of technology, we’ll make missteps as we learn to use this newly abundant raw resource. We’ll sometimes create interfaces that overwhelm or irritate with the effects of data pollution and information poisoning. But if we can focus first on human needs and natural interaction, we can soothe the occasional data rash and promote healthy insight instead.
Entertainment Weekly has a new responsive mobile website, and the design is a Global Moxie joint. Check it out at m.ew.com.
I had the good fortune to lead the overall design effort for the project, and I invited some of the planet’s finest web heroes to join in.1 They’re like The Avengers, only without all the bickering and Hulk-like rage:
Robert Gorell wrangled IA and UX on the project, serving up an entire smorgasbord of content to small screens, one bite-sized morsel at a time.
Dan Mall and his SuperFriendly colleagues Scott Cook and Matt Cook managed the art direction and visual design of the site. They created a fresh look on top of the rich visual brand that Entertainment Weekly had already established in its print and digital projects.
Brad Frost made the magic happen with HTML and CSS. We put his new Pattern Lab environment through its paces as an ideal way to manage both process and deliverable for a responsive project.
Jonathan Stark made the site dance with JavaScript that is at once compact, flexible, and even understated. Just like the man himself. Jonathan also wrangled a lot of the Github and collaborative workflow for the project (a process he details here).
Kristina Frantz kept the trains running on time (and on the right track) as producer of this fandango.
10up was our technology partner, handling the back-end integration of the design into Entertainment Weekly’s content management system. They’re just great.
Everything happened under the careful and pragmatic attention of Entertainment Weekly Product Director Chad Schlegel. We also worked closely with EW.com Editor Bill Gannon, Art Director Martin Schwartz, and Senior Project Manager Serena Tan. Everyone on their crew was a pleasure to work with: enthusiastic, flexible, and willing to hand themselves over to the responsive design process. That process continues to evolve and its departures from traditional design milestones and deliverables can sometimes give all of us vertigo. With every project, though, the process gets more grounded, more settled.
I’d like to share some of that process, as well as some novel design solutions that we developed.
Our mission: design a responsive site for phones and small tablets—everything south of the iPad Mini.2 This new edition replaces a simple site that has served phones for several years. While that site focused on efficient delivery of news and headlines, the new site was to branch out and include the full EW.com experience, including photo galleries, videos, and community experiences.
Beyond a full slate of features and content, though, a big piece of this was about brand. The simple design of the original mobile site, while efficient, did not capture the energy and ebullience of Entertainment Weekly’s desktop and print experience. “The mobile site right now doesn’t convey our personality. It’s like a government form. It’s not fun,” Chad told us. Assistant Managing Editor Mike Bruno brought the message home: “We want to take what we’re good at and just wallop you in the head with that stuff.”
Part of connecting the mobile design with the larger Entertainment Weekly brand also meant literal connection across platforms. The design not only had to feel consistent with those platforms (print, apps, and web across devices) but also had to make them known. One of the site’s jobs, in both function and aesthetic, was to plug readers into the breadth of the Entertainment Weekly universe in all of its platform contexts.
So: featureful, pretty, on-brand, and platform-minded... all with the goals of increasing reader engagement as measured by page views, user sessions, and community involvement.
We spent a lot of time with Entertainment Weekly readers, finding out their sense of the brand, and the jobs that the website, apps, and magazine do for them. Robert and I did hours of interviews with regular readers. And from there, we were off. (Dan gives a great overview of our kickoff process, including a slew of photos.)
EW.com cranks out a huge amount of news, typically between 50 and 75 new stories per day. Yet for all of this emphasis on breaking entertainment news, the brand’s roots are in its magazine origin. As part of the mission to connect the brand across platforms, we wanted to create a magazine feel even on the small screen. That meant leading with the site’s wealth of glossy, high-quality photos. When you hit the site, no matter what page, the art leads as key content, taking up much of the first-screen experience.
On larger screens, like small tablets, this photo-led design is even stronger, with the photo running huge, almost like a magazine cover. For a site that has so many exclusive photos and videos—where multimedia is a peer to prose—this approach had three advantages: it’s visually arresting; it connects to the print brand; and it promotes photos as one of the biggest draws to the site (a fact which traffic patterns had long indicated).
As a result, a strong lead image commands a third of the screen on nearly every page of the site. But which stories should get the lead-image treatment on the homepage and channel pages? For years, Entertainment Weekly has promoted several top stories in its “dynamic lead” (DL), a bit of industry jargon for a carousel.
There was vigorous discussion about the pros and cons of keeping the carousel, and we finally ditched it.
Carousels are slideshow-style widgets that chop content into individual panels, and you swipe or tap to spin through the offerings one by one. Media sites like this one often use them to create a slideshow of featured headlines. Carousels seem to solve lots of problems, delivering high-impact visuals without sacrificing screen real estate. Because carousels let you stack lots of content into the same compact space, it’s especially tempting to impose them on tiny mobile screens. Pour featured stories into carousels, and presto, the headlines all magically share the same premium top-of-page position.
Alas, research reveals miserable click-through rates on carousels.3 Poking at them only a little bit reveals why: carousels are slow because you have to work to use them. They rely on physical repetition, cognitive effort, and little more than vague trust that there’s going to be something useful on that next slide. Carousels ask for those rarest of commodities: patience and attention.
Consider the seven stories featured at the top of Entertainment Weekly’s new mobile site. If we piled the seven headlines into a slideshow carousel, then getting to that seventh story would be a six-swipe slog through the first six. That’s six interactions before you even see that last headline, and most will never get there. Instead of highlighting content, a carousel defeats the purpose by hiding that content behind a pile of swipes and taps.
Our goals for this featured area were twofold: strong visual impact and easy access to all top articles. We wanted a solution that solved for both, instead of a carousel which neglected one of those goals (easy access to all headlines). So we turned to an old friend.
Progressive disclosure is a mobile designer’s best secret weapon. It’s a high-falutin’ term for giving people full content a little bit at a time. You give people a taste or a synopsis and then let them ask for more if they want it, a great strategy for managing content on small screens. A slideshow carousel is a kind of progressive disclosure: you give someone a piece of content and then invite them to browse more of the same. The problem with carousels for headlines is that they have no scent of information and very low content density. You don’t know whether the next slide will be interesting, and you just have to work through the thing to find out, one headline at a time.
Progressive disclosure works best when you give people more to go on. We wanted to show several featured articles at once and then let people decide whether they wanted to see lots more of the same. Otherwise they could move on to other content.
Our solution was a simple More button. The top area of the homepage shows a gigantic splash image for the lead story, followed by two secondary headlines with thumbnail images. Hit the More button to reveal four more articles. By showing three stories at the top, we give people enough information to know if they’d like to see more like that. If so, we give them everything we’ve got. That’s one interaction instead of six for a carousel.
We extended this pattern to all the content sections on the homepage and channel pages. Each of these sections shows a small handful of headlines. Tap a button to see lots more of them, or just keep scrolling down to the next content section to explore a different vein of entertainment news.
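Here’s a minimal sketch of how that pattern might be expressed in CSS. The class names and the script hook are hypothetical stand-ins, not the production code; a few lines of JavaScript would toggle the expanded class when the More button is tapped.

/* Hypothetical sketch: each content section lists a few headlines
   and hides the rest until the More button is tapped. */
.headline-list .headline-secondary {
  display: none;
}
/* A small script adds this class when the More button is tapped. */
.headline-list.is-expanded .headline-secondary {
  display: block;
}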
For larger screens with more real estate, we did away with several of these More buttons and just poured all the headlines in at once. Larger screens offer easier scanning of lots of headlines, so they can support that additional content density. That underscores an important job that progressive disclosure does for small screens...
One of the unfortunate side effects of responsive designs is all too often a crazy-long page on small-screen gadgets. That’s especially true when you design for the desktop first and then crush that design into a single column for phones. “Hm, I’ve got a three column design... I guess I’ll just stack all of those columns on top of each other. Mission accomplished!”
That gives you content parity with the desktop site, which is great, but it doesn’t provide the same parity of usability. When you do that, third-column content that was easily accessible on large screens now gets sunk to the depths of a seemingly endless single column on phones. This buries content... and risks the occasional case of swipe-swipe-swipe thumb sprain for those determined enough to go all the way to the bottom of the page.4
The More button (and progressive disclosure generally) enables a compact view of each content section. You get a taste of each section by scanning the top stories, without committing the design to displaying everything at once. It makes the content quickly scannable while giving easy one-tap access to more if you want it. This approach allows the site to feature lots of content on the homepage and channel pages without risking thumb sprain. Boom: content parity and easy accessibility.
Implicit in progressive disclosure is the notion of primary and secondary content: show the most important stuff always, but push secondary or optional content into another view or state. Optional, nice-to-have content provides some interesting room to play, because you have the possibility of treating content itself as an enhancement. Just like progressive enhancement tests the browser to see if it can handle fancy functionality, progressive disclosure lets you do the same with content, too. “Does this device have the ability (usually: the space) to display this extra content?”
That’s the approach we took with the navigation. Typically, of course, navigation offers you links to the top sections of the site, and that’s what you get on Entertainment Weekly’s mobile site at narrow views, too. You get the primary sections as well as some currently hot topics, and the core job is done.
For larger screens, though, we wanted to take the opportunity to share the featured stories of the hour. We went beyond the staple section navigation to promote individual pieces. The goal of navigation is to help people find great content, so we decided to use this area to make explicit suggestions. Nice-to-have content enhanced the navigation to make it into a kind of dashboard panel, a mini homepage with splashy promotion of top stories.
We had some fun with other aspects of navigation, too. On the page for “Tonight's Best TV,” we created an app-like interaction for navigating anchor links. The page shows editor’s picks of the best TV shows airing in the next seven days. A fixed navigation bar at the top of the screen lets you hop directly to a specific day; tap a date, and the page scrolls smoothly to that day. So far, so good.
But the navigation is also an indicator of your progress through the week. As you scroll down the screen, the navigation bar moves and highlights to show you the date you’re currently browsing. It’s a small thing, but it feels smart and helpful when you use it. The navigation bar is also swipeable; for small screens that can’t fit all seven days across, you can swipe the navigation bar to browse the dates. It’s a fun little interaction for an important corner of the site.
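As a rough sketch (with hypothetical class names), the bones of that date bar are plain CSS; only the scroll-position highlighting needs JavaScript.

/* Hypothetical sketch of the date bar. */
.day-nav {
  position: fixed;        /* pinned to the top of the screen */
  top: 0;
  left: 0;
  right: 0;
  white-space: nowrap;    /* keep the seven days on one line */
  overflow-x: auto;       /* narrow screens swipe the bar sideways */
}
.day-nav a {
  display: inline-block;
  min-height: 44px;       /* comfortable touch target */
}
.day-nav a.is-current {
  font-weight: bold;      /* a script flags the day currently in view */
}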
As with most entertainment sites, photo galleries are a huge draw at Entertainment Weekly, driving redonkulous levels of traffic. Entertainment Weekly adds a twist to its galleries, though: nearly all of them are also lists—best-of rankings, worst-of countdowns, and the like.
You get the idea. It’s a perfect viral lovechild of lists and entertainment photos. But crucially, they’re also combined with Entertainment Weekly’s wry and witty commentary. That means the photos don’t just have captions; they sometimes have awesome essay-length missives. We had to make it easy to browse the photos, the potentially very long text, and the comments for each photo—all on small screens. There’s a lot going on here, but we managed to wrangle it into a tidy package.
The caption/essay toggles into view over a portion of the photo, so you can still see the photo as you read the text, or dismiss it if you like. Tap the comment button to “flip” the gallery over to reveal comments on the back. If you’re just in it for the photos, tap the center of the photo to toggle all of the gallery controls on and off.
This was a challenging bit of UX to sort out, which likewise meant for challenging code. Jonathan Stark brought it smoothly home.
Media companies across the board are seeing more and more so-called “earned traffic.” That is, readers are sharing the site’s content with other readers, so that a growing number of visitors now come to the site via Twitter, Facebook, blogs, and so on. Homepage and search-engine traffic has lost ground to this newer social traffic for years. That means that media sites, Entertainment Weekly included, are more invested than ever in encouraging readers to share. No great surprise there.
We originally planned to put static share buttons at the top and bottom of article pages, but the crew at Entertainment Weekly wanted us to push harder. Social sharing would be a key metric for the success of the redesign. We toyed with gluing a social toolbar to screen bottom, but that absorbed too much space on smaller screens. We wanted this sharing button to be convenient without being obnoxious.
Instead, we wound up with a little share “bug” that floats in the right corner of the screen, staying fixed as you scroll. We made the button translucent so that it was a bit less in your face, competing less with the content it’s intended to support. Tapping this Share button triggers an animation that rolls out the share buttons; tap it again to roll it back up. The whole thing is done with CSS animations. It’s cool. You should try it out.
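The mechanics might look roughly like this in CSS; the selectors and values here are illustrative rather than the production code, and a tap handler toggles the open class.

/* Illustrative sketch of the floating share bug. */
.share-bug {
  position: fixed;        /* stays put in the corner as the page scrolls */
  right: 10px;
  bottom: 10px;
  opacity: 0.7;           /* translucent, so it competes less with the content */
}
.share-bug .share-list {
  max-height: 0;          /* rolled up by default */
  overflow: hidden;
  transition: max-height 0.3s ease;
}
.share-bug.is-open .share-list {
  max-height: 300px;      /* rolls out when the bug is tapped */
}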
Sharing is one of those functions that almost always serves both reader and publisher. Everyone wins there, and we felt okay about making share functions so omnipresent. Alas, readers typically don’t feel the same way about advertising.
Handling ads is always one of the toughest elements of a design project like this one. Ads make the whole enterprise possible, but readers often perceive them as getting in the way of the action. Our responsibility as designers is to give advertisers a good position without trampling the reading experience for users.
When I led the design of People magazine’s mobile website last year, I invented a new ad format called the snap banner. It was a so-called adhesion banner that stuck to the bottom of the screen and then, after a certain amount of scrolling, snapped into place in the page and scrolled away with it. The result was an ad that stuck around long enough to make itself known, but which knew enough to leave when it wasn’t wanted.
Since that site launched, Time Inc (parent company of both People and Entertainment Weekly) has adopted the snap banner as a standard format. They changed it up a bit, though, moving the ad to the top of the screen, where it disappears after a certain amount of scroll. We used that format on the homepage for small screens, but combined its interaction with a second ad on the page. When that second ad scrolls into view we disappear the top-of-screen snap banner, so that only a single ad is ever onscreen at any one time.
On larger screens, we show a different ad format entirely, and these scroll normally with the page. With a bigger screen, the ads linger longer, and there’s no need to stick them to the screen for attention. The ad bumps around to different positions on the page according to available width.
From a branding perspective, one of the most important “ads” on the page is the “Subscribe to Entertainment Weekly” section at the bottom of every single page of the site leading into the page footer. This section details the premium content available in this week’s paid edition of Entertainment Weekly, which you can get in both print and app versions.
This is the bit of business where we explicitly detail where else you can find Entertainment Weekly content and how it morphs across platforms. From there, the page links to all the company’s accounts on social media. The intended message: Entertainment Weekly is not a magazine, it’s not a website, it’s not an app, and it’s not a Facebook account. It’s all of them at once, an editorial platform that pours content into many different containers.
“We can’t really consider ourselves a magazine company anymore,” incoming Time Inc CEO Joe Ripp recently said. “We’re a media company.” It’s true for everyone now. As platforms proliferate, you have to meet your customers on whatever device or medium they choose. This footer business we cooked into every page of the site is intended to reinforce that message.
That’s also the core principle that animates responsive design: there is no one “true” presentation for your content, no single container. Like many, we’ve seen our design process transform to meet that reality. The process continues to evolve, but we learned some valuable lessons here.
We stayed sketchy and informal for as long as possible, emphasizing coarse blocks of content at very high levels of abstraction. This let us move quickly as we planned the broad strokes of the site. Sketches also keep conversation high level, avoiding the all-too-common headaches that emerge when you get into the weeds too early with fine details. Robert did a lot of these big-picture sketches and then gradually gave finer definition to the repeating patterns that we found across pages. As soon as we had the system worked out, we moved right into HTML, skipping the formal wireframe altogether.
Responsive design is about creating systems of content rather than a rigid page design. This system is built of modules that shift and reorder according to what the device can display. Rather than designing one monolithic page, it’s helpful to drive the main design effort down to its component parts. Brad and others have lately been calling this atomic design, where you design from the building blocks and gradually up to the entire page, rather than the reverse. It’s very powerful.
Brad’s Pattern Lab tool proved to be the ideal way to collect and define these atomic components. Once marked up, the components could be assembled inside Pattern Lab into templates and, with actual content, final pages. The approach encouraged code reuse, which was not only efficient but promoted uber-consistent UX across the site. And because we anticipated this approach, Brad was able to get started on initial markup well before the IA and visual design were complete. Brad detailed his process for the site here.
Even while Robert was working on the early sketches for the site’s architecture, Dan and Scott were already at work on the visual design. They did this with element collages, a kind of style tile on steroids. Element collages are collections of lots of branded design elements that give a sense not only of color and type, but of a variety of constructed UI widgets.
What this meant was that IA sketches and visual design happened in parallel and eventually merged. Dan and Scott started with impressionistic notions to establish visual direction. As Robert’s sketches simultaneously evolved, the element collages adopted their information and became examples of the finished patterns. Again, these were like exploded web pages, emphasizing page components more than complete design. Just to be sure that they hung together as a page, we assembled a couple of full-page comps, but we didn’t do this for every page. Full-page Photoshop comps still have a useful role, in other words, but not anything close to the key deliverable they once were.
We didn’t create visual comps of every single element, either. Once we had enough designed elements in hand, we could make good guesses in the HTML itself, where we were able to see how things actually played out at different widths. Dan detailed the visual design process for the site here.
Responsive design is a squishy process. When IA, design, and markup happen in overlapping stages, there are fewer crisp milestones and deliverables. One stage melts into the next, and it takes more discipline than it has in the past to keep the train moving forward. Transitions to new methods and techniques are always a bit bumpy, but the good news is that we’re definitely getting there. Things are settling.
A proliferation of platforms and gadgets has reminded us that the packaging of digital content is naturally ambiguous. Responsive design tries to capture that truth and turn it into an advantage. Slowly but surely, our collective design process is evolving to do the same.
The rest of the design crew have also shared their perspectives on the Entertainment Weekly project and its process. Check ’em out.
People rarely swipe beyond the first couple of slides, so the effect is that you wind up hiding the very items you mean to feature. The data shows dramatically lower clicks for the “slides” that follow the lead image.
The gist of the research: carousels do deliver strong visual impact (big image!), but only for the first article, with the unfortunate effect that the follow-on articles go unexplored.
Smashing Magazine published The Mobile Book this week, and wow… it’s a humdinger. It’s full of smart advice from all the people I simultaneously love, fear, and admire in the universe of the mobile web: Jeremy Keith, Peter-Paul Koch (PPK), Stephanie Rieger, Trent Walton, Brad Frost, Dave Olsen, and Dennis Kardys. With such great company, I was especially honored to contribute the book’s final chapter about designing for touch.
The book is not only smart but beautiful. The dead-tree version is hardcover with stitched binding and, get this, an old-school ribbon bookmark. The thing is just gorgeous. Even if you prefer your books in pixels instead of paper, you still get an elegant interior design featuring the illustrations of Mike Kus.
Much of the book looks at mobile through the lens of the web, but it’s also a useful resource for developers and designers on other platforms. The book is neatly organized into three sections: the mobile landscape, responsive web design, and UX design for mobile. The first and last of these are applicable to any platform, and frankly, the web-specific responsive-design techniques will quickly become matters of basic digital literacy.
This matter of evolving literacy is very much the point of Jeremy’s foreword to the book:
This book is an artefact of its time. There will come a time when this book will no longer be necessary, when designing and developing for mobile will simply be part and parcel of every Web worker’s lot. But that time isn’t here just yet. So in the meantime you’ve got the current state of all things mobile packed together into this single volume.
I’m flattered to report that the first round of reviewers agree with Jeremy about the book’s stature as a well-rounded and authoritative review of mobile design technique.
Design Shack: “It’s a handbook for web design today. Earlier I mentioned that you should add this book to your shelf, in reality, you’ll probably want to keep it on your desk.”
UX Magazine: “I highly recommend this book to both the blossoming and the experienced UX designer. The various voices of different authors breathe fresh narrative air that carries diverse-and-deep domain knowledge along in a cohesive story about how to harness the chaos of our ever-evolving world into a mobile-UX delight. Consider the lessons in this book a whopping set of New Year resolutions.”
Open Designs: “As somebody who spends a lot of time tinkering and tweaking websites to make them work better, I thought this book was bloody brilliant. There is so much depth and information packed into its 336 pages that I think it will become the book for the mobile Web.”
I’m admittedly biased, but my advice is to run out and buy The Mobile Book immediately. It belongs on every webslinger’s bookshelf, in paper or in pixels. Buy the book, or download a free chapter, “Responsive Design Strategy” by Trent Walton (PDF, 8MB). Enjoy.
Touch has landed on the desktop. A whole new category of touch devices is flooding the consumer market in coordination with the release of Windows 8: touchscreen laptops and tablet/keyboard combos. These new hybrid combinations of touch and keyboard create a new ergonomic environment... and fresh demands on designers.
Like tablets before them, the ergonomics of these hybrid gizmos demand UI conventions that depart from desktop layouts of similar screen size. The hybrids not only need big touch targets to accommodate clumsy fingers, but they also need controls and navigation conveniently placed where hands naturally come to rest. Designing for touch introduces elements of industrial design: physical comfort and ease are critical considerations.
Unfortunately, the top-of-screen navigation and menus of traditional desktop layouts are outright hostile to hybrid ergonomics. Tried-and-true desktop conventions have to change to make room for fingers and thumbs. For now at least, the solution is not just a matter of designing separate interfaces for touch and non-touch gadgets. That won’t fly, because as designers (and especially web designers) we often don’t have enough information about the device.
After poking at this problem for a few weeks, my conclusion is: every desktop UI should be designed for touch now. When any desktop machine could have a touch interface, we have to proceed as if they all do.
Walk with me.
Hybrids require us to move our hands back and forth between the keyboard and the touchscreen just behind it. Before this new onslaught of hybrids arrived, many (including a dismissive Steve Jobs) criticized the concept as untenable: people wouldn’t want to shuttle their hands back and forth to point at the screen. The effort would be too much, too inefficient, and the result would be the fatigue of “gorilla arms.” It’s a criticism leveled at Minority Report-style interfaces of science fiction, too: who wants to work with their arms constantly in the air?
Early returns suggest those initial worries were unfounded. People do embrace touch with these hybrids, but they do it by barely lifting their arms. In usability studies by John Whalen of Brilliant Experience and by Intel,1 newcomers shifted naturally to interacting directly with the touchscreen, ignoring any mouse or trackpad. Despite the availability (and greater precision) of these time-tested pointers, people said the touchscreen felt more intimate and direct. The hand became their preferred pointer for buttons, scrolling, you name it. Even expert users accustomed to tabbing between fields switched to independently selecting form fields by touch.
There seems to be something irresistible about the touchscreen, even when more precise or efficient options are available. Jeff Atwood put it nicely in his review of Microsoft’s Surface tablet:
I’ve stopped thinking of touch as some exotic, add-in technology contained in specialized devices. I belatedly realized that I love to touch computers. And why not? We constantly point and gesture at everything in our lives, including our screens. It’s completely natural to want to interact with computers by touching them. That’s why the more unfortunate among us have displays covered in filthy fingerprints. …
After living with the Surface RT for a few days now, I’m convinced that this form factor is the replacement and way forward for the stagnant laptop. I can’t even remember the last time I was this excited about a computer. The more I use it, the more I think that touch plus keyboard is the future of all laptops.
But what about those gorilla arms? John Whalen’s research found that people avoid raising their arms with hybrids by instead resting them alongside the keyboard, keeping a loose grip at the bottom corners of the screen. (Among other things, this grip helps to steady a sometimes floppy laptop screen when you tap at it.)
As with any handheld touchscreen device, the way you hold the thing informs where primary controls should go. So this bottom-corner grip has important implications for the visual layout of websites and apps on hybrid devices. But first, to basics...
Designing for touch means designing for fingers, yes, but to be more specific, you’re really designing for thumbs. On every handheld touchscreen, from phones to tablets to hybrids, the thumbs call the shots. Here’s why.
On phones, the best interfaces optimize for a one-handed grip, because it’s at once the most freeing and the most limiting. It’s freeing because it lets you do things with the other hand—write, sip coffee, hold a baby—a fact that makes it the most common grip. But it’s limiting because working a phone one-handed means working it with your thumb. Thumbs separate us from the beasts, but alas, when it comes to driving software, thumbs lack both reach and dexterity.
This peculiar combination of freedom and constraint requires specific design concessions, most of them imposed by thumbspan. While a thumb can manage to sweep most of the screen on all but the most oversized phones, only about a third of the screen is in truly effortless territory—at the bottom of the screen on the side opposite the thumb. When holding a phone in the right hand, for example, the thumb falls naturally in an arc at the bottom left corner of the screen.
This is a big reason why apps and mobile operating systems pin primary controls to the bottom edge of the screen—precisely the opposite of typical desktop layouts. (It’s not only simple comfort and convenience that drive screen-bottom conventions, though. It’s also the awkward fact that fingers obscure the screen. Pushing controls below the content keeps hovering hands out of the way.)
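As a sketch of that convention, a phone toolbar pinned to the bottom edge might look something like this; the class names are hypothetical, and 44px is a commonly cited comfortable minimum for touch targets.

/* Illustrative sketch: primary controls pinned inside easy thumb reach. */
.phone-toolbar {
  position: fixed;
  bottom: 0;              /* the bottom edge, the opposite of desktop convention */
  left: 0;
  right: 0;
}
.phone-toolbar a {
  display: inline-block;
  min-height: 44px;       /* a commonly cited comfortable minimum for thumbs */
  padding: 0 16px;
}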
Tablets are trickier because we hold them so many different ways. We grab, tilt, lean, cradle, and clench in a whole variety of embraces, many of which depend upon stance. The rule of thumb still applies to these guys, except that the thumb zone changes. The special headache here is that the thumb zone isn’t consistent even for individual devices; it changes depending on posture.
Standing up, you use two hands to tap away on a large tablet like the iPad. You likely hold it halfway up the sides for leverage (hold it too close to the bottom, and the thing goes floppy). Or perhaps you have one arm wrapped around it like a clipboard while you tap with the other hand. Sitting at a table, you’re likely to prop a tablet with one hand at the lower third and tap with the other. Reclining in an armchair, you tend to prop it in your lap with one hand and tap with the other. Lying down, you rest the thing on your belly or nestle it in a blanket, propping it with one hand and tapping with the other. In all of these grips, fingers fall in different places on the device.
When it comes to tablets, in other words, we’re all hands. We roam all over the things—all over, that is, except the top and bottom edges. As varied as tablet grips can be, two things are true for all of them. First, we tend to hold tablets at the sides; though the specific location wanders up and down, thumbs tend to settle at the middle- to top-third of the screen.
Second, the larger the screen, the harder it is to take in the whole thing at a glance as you can on a phone. On larger tablets, as with print design, our visual attention naturally focuses on the top of the tablet, and the design’s information hierarchy should reflect that.
These factors mean eyes and thumbs naturally occupy the top half of tablets, with thumbs straddling the edges. Spreading navigation and primary controls across the bottom—the standard pattern for phones—turns out to be ergonomically hostile on tablets. Sometimes the bottom isn’t even visible at all. In the laziest and perhaps most common of positions—lying down or reclining—the bottom bezel tends to disappear into blankets, sweaters, and soft bellies.
Tablet navigation and other frequent controls should hug the sides or top corners for easy thumb access. Avoid forcing people to lift and haul their entire arms over to the top or bottom edges for frequent touch targets. Some arm lifting is of course inevitable. Tablets are thumb and index-finger devices, with the index finger driving interaction inside the tablet’s canvas. You have to move your arm for that, no way around it, but focusing navigation around the thumb as the anchor at least means that you can spare your arm the most frequent taps. The top corners are within thumb striking distance while also remaining in the tablet’s primary visual area.
But what happens when we strap a keyboard onto the thing?
Here again, the rule of thumb calls the shots. You’ll recall that hybrid users frequently adopt a bottom-corner grip, resting their arms alongside the keyboard. Placing primary controls and navigation in easy reach of bottom-corner thumbs means you avoid gorilla arms. The result is a vertically flipped version of the thumb zone we saw for standalone tablets.
Not everyone adopts the bottom grip, though. Others (especially newcomers) go freeform, jabbing their index finger at the screen. This approach unhinges the hands from the screen edges, giving freedom to roam the interface. Still, the center of the screen tends to be an easier touch than the corners with this technique. Trouble is, this finger hot zone is exactly the reverse of the thumb zone.
The upshot: optimizing for thumbs means a subpar experience for the index finger—and vice versa. One layout has to win, though, and as with every other touch device, the winner is the thumb. John Whalen’s study suggests that hybrid users begin to prefer thumb use over time, with expert users going nearly all thumbs, reaching them in and out of the screen from the edges to drive interaction. Once again, thumbs are the primary utility pointer.
Cluster primary controls and gestures for hybrid screens around the bottom corners and sides. That’s one reason Windows 8 uses edge gestures to summon system and app controls. A swipe from the right edge conjures the system charms, and a swipe from the bottom edge brings up a shelf of app tools.
What all of this adds up to: input type and grip should drive the placement of controls, not screen size. For web designers in particular, this is a big headache.
For most of its short history, web-design practice has focused on the visual—on screen size. It’s not yet in our industry’s DNA to consider physicality and environment in our layouts. That’s why many are still surprised at the idea that they can’t just use their legacy desktop layout on iPad, even though the screen size is the same. The layout looks good, sure, but that rarely means it’s also finger-friendly.
The rise of the hybrids means touch is no longer the sole province of phones and tablets. It’s arrived on desktops and laptops, too. Most desktop website layouts, however, are not optimized for touch. They challenge our clumsy fingers and thumbs with small touch targets for links and menus, or they lean on hover interactions that can’t be triggered by touch at all. Few sites place primary navigation in easy reach of the thumb zone for either tablets or hybrids; they favor cursor-friendly screen-top navigation instead.
Ideally, we would all tweak our CSS to accommodate a range of input types in the same way responsive design has encouraged us to accommodate a range of screen sizes. Responsive web designers have so far used screen size as a proxy to assume support for touch. “If it’s a small screen, it’s touch. If it’s a big screen, it’s mouse-driven.” That distinction was already in trouble with large tablets like the iPad, and hybrids break that approach even more.
Unfortunately, we don’t yet have media queries to specifically target touch devices, but that may change soon. Recent draft proposals for CSS4 include a pointer media query to target gadgets with “fine” or “coarse” pointing tools. A mouse, trackpad, stylus or any other precision accessory would be a fine pointer, while fingers would be coarse. This would allow you to create specific rules to pamper fat fingers:
/* Make input fields taller for touch */
@media (pointer: coarse) {
  input[type="text"] {
    min-height: 44px;
  }
}
This will get us part of the way, although it’s not clear whether a browser with a keyboard/mouse and a touchscreen should identify itself as coarse or fine. Even better would be targeting the combination specifically. As we’ve already seen, the layout for a touch-keyboard hybrid should be different from that of a touch-only tablet, because the ergonomics are different. That makes it important to identify not only the availability of touch but whether it’s combined with other input types. It would be helpful if media queries could target additional input types (I sketch one possibility after the examples below). While we’re at it, it would be great to have HTTP headers that announce to the back-end server what type of device it’s dealing with:
“Hi, I’m a touchscreen!”
“Howdy, I’m a touch-keyboard hybrid.”
“Greetings, I have no screen at all...”
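On the media-query front, one could imagine an any-pointer feature that reports every attached input, not just the primary one—a feature that has in fact appeared in Media Queries Level 4 drafts, though browser support isn’t guaranteed. Here’s a speculative sketch; the selector is hypothetical.

/* Speculative: a fine primary pointer plus a coarse secondary pointer
   suggests a touch-keyboard hybrid rather than a touch-only tablet. */
@media (pointer: fine) and (any-pointer: coarse) {
  .primary-nav a {
    min-height: 44px;     /* fatten targets for occasional finger taps */
    padding: 12px 16px;
  }
}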
Until we get these “Hello, my name is” name tags in CSS or HTTP, we have to make do. There’s only one sensible way to do that:
If a device can be used for touch, its interface should be finger-friendly. This isn’t a problem that’s specific to touch, either; it’s just that touch got here first. A new desktop design language is needed, one that replaces cursor-only interactions with conventions flexible enough to handle any of several potential input styles. For the moment, that means covering touch-only, keyboard and mouse, or these new touch-keyboard hybrids. It won’t stop there; even more input methods are on their way.
Windows 8 is one of the first ambitious—and imperfect—efforts to try to address this thorny issue. It’s the first attempt at an operating system whose interface can handle any input (from handwriting to speech to touch) and any output (screens of any size or no-screen spoken experiences). That’s a hard problem, and Microsoft is wrestling with it earlier than most of us, but it’s a problem all of us will have to address in the very near future.
Despite their valiant effort, however, Microsoft’s designers still run headlong into a collision of input styles, which is probably unavoidable. You see this, for example, in the difference between the desktop-style Internet Explorer and the Metro-style Internet Explorer. Both are present in Windows 8, and the one you get depends on what mode you’re using. They have very different interfaces, with the desktop layout tuned for mouse and Metro tuned for fingers. The address bar, for example, slips to the bottom for the Metro version, as Matthew Honan describes in his Surface review:
Web browsing works well. I liked having the ability to swap between multiple browser windows by right clicking, but the address bar on the bottom side is something I still haven’t gotten used to. It makes sense when you are using the device in touch mode, because that’s where your thumbs naturally land, but it’s just plain odd with a keyboard.
So how to build this new touch-and-every-other-input desktop experience? This one is going to take some time. Luke Wroblewski and Jason Weaver shared some useful suggestions this week for responsive navigation across touchscreen devices, and it’s exactly the kind of exploration we need.2
I’d add to Luke and Jason’s work a few guidelines of my own to inform how we might evolve our desktop designs.
As we’ve seen over and over again in the last few years, the growing range of devices and platforms continues to make our work both more exciting and more challenging. Our job is getting harder, but it’s also our job, period. The ideal of the web, after all, is a platform that can be accessed from any device, no matter what its input or output method. For now, that means opening up all desktop layouts for easy finger-tapping.
People Magazine launched a new mobile site last week, the first responsive website from Time Inc.’s 95 magazine titles. Check it out at m.people.com.
The People website is a Global Moxie project. I had the good fortune to lead the design effort and pull together some of the finest web talent on the planet (and also some of my favorite people):
The whole thing happened under the pragmatic (and genuinely gymnastic) direction of Time Inc.’s Tony Brancato, a great partner for us in this. And man, did we ever need the enormous brains of all these talented folks. Bringing a site as vast as People’s to the small screen conjures a slew of challenges and opportunities.
We developed some novel approaches that I want to share here. I’ll cover the site’s approaches to advertising, progressive enhancement, navigation, web interactions, full content across devices, and cross-screen community. First, though, here’s what we set out to do.
Our brief was to design a responsive site for phones and 7” tablets (Kindle Fire, Nexus 7, etc.). People has two other sites: one for desktop and one for iPad. The new edition stakes out the smaller end of the spectrum, replacing a very simple site that has served phones for several years. The new site’s responsive web design adapts to three primary breakpoints: the phone, 7” portrait, and 7” landscape.
The irony for this “small-screen” website is that its 7” landscape layout is nearly as wide as People’s desktop design. In creating this small-screen design, in other words, we also created a desktop-sized design. This is the essential nature of responsive design, of course: a layout that adapts gracefully to a wide range of screen sizes.
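To give a feel for the structure, the stylesheet boils down to something like the sketch below. The pixel values and class name are illustrative stand-ins, not the production breakpoints:
/* Phone-first base styles: a single column of headlines */
.section-links { margin: 0; padding: 0; }

/* 7" portrait: two columns of headline links */
@media (min-width: 600px) {
  .section-links { -webkit-column-count: 2; column-count: 2; }
}

/* 7" landscape: three columns, nearly as wide as the desktop layout */
@media (min-width: 900px) {
  .section-links { -webkit-column-count: 3; column-count: 3; }
}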
So why maintain three separate websites? Why not have a single responsive site and be done with it? I’m on record with a strong point of view that everyone should strive to have a single website that feeds the same content to all devices. That’s the ideal, at least, and I believe it should be the default starting point for any web project. Always ask, “Can we do this with a single site for all devices?” And if you can, you probably should.
But pragmatism is required here, and business realities leave little room for dogma. There are a whole slew of potential reasons why it can be tough to blow up your existing sites and replace them with a single responsive site—business reasons, technical reasons, organizational/political reasons, or simple risk management. Sometimes change can’t happen all at once.
But you can still get there one step at a time. People’s approach is a sensible one: build a mobile site using responsive techniques as a first step. Over time, you can overcome business/tech/org challenges to let your responsive mobile site grow up and eat the other sites. I don’t speak for People, and I don’t know if that’s their plan. But I hope so. This new site positions People to move eventually to a single responsive site, which would simplify their tech maintenance and editorial workflow. Word is, this new site will eat the iPad site next.
For now, though, this is a sturdy first step in the march to the promised land. You can’t always make the whole journey in a single leap, but you can still make steady progress toward the ideal. Eyes on the prize, friends.
(And hey, if all of this pans out as it should, perhaps it will boost People’s digital IQ, already in the top ten among magazines.)
Unlike the previous mobile site, this new edition serves (nearly) all of the content of the desktop version, a frank acknowledgment that the mobile experience has to be more than a lite version of the “real” desktop website. We do everything on our phones now, and with more than just a quick glance. People’s stats bear that out, as People Digital’s general manager Liz White told paidContent, explaining why the old mobile site wasn’t cutting it:
“The initial version was us operating on the assumption that people were coming to the mobile phone to snack.” But when 25 percent of mobile users spend 5 minutes or more on the site, they’re coming for more than a quick snack.
Our job was to figure out how to wedge People’s vast store of content into the small screen without overwhelming readers. Here’s how we did it.
People.com offers a fast-flowing stream of daily content. The homepage’s job is to surface a ton of that content for quick, frequent scans of headlines and photos. On larger screens, we do that by displaying lots of links for the key sections, in typical news-website style. We show two columns for portrait and three for landscape.
Shrink the screen, though, and that giant collection of links suddenly becomes unwieldy. If you squeeze those links into a single column, you get an endless list of links, which is swipe-swipe-swipe frankly awful swipe-swipe if you just want a swipe-swipe-swipe summary of the swipe-swipe top news.
This is a common problem for mobile sites, and that’s where progressive disclosure is such a successful technique for small-screen interfaces. Progressive disclosure is a high-falutin’ term for showing only essential or summary information but making it dead easy to drill down into secondary screens or content panels when you want more.
On the homepage, we deployed carousels to do that work for phones. At the smallest breakpoint, the two or three columns of links collapse into a three-panel carousel. Only one panel of the carousel is visible at any moment, of course, so the initial display shows only the first one-third of links for each section. This approach lets you scan the latest headlines in every section with a quick vertical scroll, but you have the option to drill into a section’s secondary headlines by swiping horizontally through the carousel, or by using the arrow navigation.
Carousels require JavaScript to fire up their engines, of course, which means that phones without JavaScript (or operating systems like BlackBerry 4 whose JS is too awful to deal with) don’t get the carousels at all. That’s okay, because those browsers still get a link to the section homepages for access to that content. For less capable devices, in other words, progressive disclosure is managed by plain old web links.
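Sketched out, that markup-first approach looks something like this. The class names, URLs, and the initCarousel() helper are hypothetical stand-ins, not People’s actual code:
<!-- Base markup: a handful of headlines plus a plain link to the section page. -->
<!-- Browsers without JavaScript stop here; the link is their progressive disclosure. -->
<section class="home-section">
  <h2>Celebrity News</h2>
  <ul class="headlines">
    <li><a href="/celebrity-news/story-one/">First headline</a></li>
    <li><a href="/celebrity-news/story-two/">Second headline</a></li>
  </ul>
  <a class="more" href="/celebrity-news/">More headlines in this section</a>
</section>

<script>
  // Capable browsers enhance each section into a swipeable, three-panel carousel.
  // initCarousel() stands in for whatever carousel plugin you prefer.
  if ('querySelectorAll' in document && typeof initCarousel === 'function') {
    initCarousel(document.querySelectorAll('.home-section'));
  }
</script>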
We’ve been trained to believe that extra taps, clicks, or swipes are evil, and that’s not true. As long as every tap provides satisfaction (a completed task, more information, a smile), it’s a quality tap. If the information scent is strong and if every tap is a quality tap, then it’s appropriate to require extra taps in the service of clarity on individual screens. Quality taps are more important than their quantity. This is true in all interfaces, but especially for mobile: Clarity trumps density.
Since some devices can’t display carousels, we didn’t want to burden them with content they don’t need. We deployed a strategy that Filament Group’s Scott Jehl rather awesomely calls aggressive enhancement. (Scott was a huge friend to this project, as were Mat Marquis and the rest of the gang at Filament Group.1)
If you’re a web developer, you’re already familiar with progressive enhancement, where you gradually layer new functionality into a site according to the capabilities of the browser or device. Aggressive enhancement goes further, treating content itself as an enhancement.
Know how Readability and Instapaper whittle a page down to its basic content? That’s what we did by design. Aggressive enhancement delivers a page containing only the most fundamental content, then fills in secondary content via Ajax. This approach works well for sidebars, “about us content,” some forms of secondary navigation—and in the case of the People website, the carousel content.
If a browser doesn’t have JavaScript, it doesn’t even download the secondary carousel content. The result is a light page that lets the browser start rendering that basic content right away. It’s a technique that’s respectful of visitors’ bandwidth, computing power, and time. It’s not only responsive, it’s responsible, one of Scott Jehl’s favorite phrases.
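In code, the technique can be as plain as the sketch below: ship the page with an empty placeholder, then let capable browsers fetch the extra panels after load. The element id and fragment URL here are hypothetical:
// Aggressive enhancement, sketched: the page arrives with only primary content,
// and browsers that can handle it pull in the secondary carousel panels afterward.
function loadSecondaryContent() {
  var target = document.getElementById('carousel-extras'); // empty placeholder in the page
  if (!target || !window.XMLHttpRequest) { return; }       // no JS or XHR? the plain links already work

  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/fragments/home-carousels.html', true); // hypothetical HTML fragment
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      target.innerHTML = xhr.responseText;                  // drop the extra panels into place
    }
  };
  xhr.send();
}

if (window.addEventListener) {
  window.addEventListener('load', loadSecondaryContent, false);
}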
We also took extreme care (and a lot of lumps) to make the photo galleries accessible to all devices. Visit the gallery on a capable touch-enabled device, and you get a solid, fast, silky experience as you swipe through the entire photo gallery within a single page. The gallery is a full-screen experience, and you can tap to toggle the navigation controls, view photo captions, and reveal sharing options.
No JavaScript? No problem: less fancy browsers can still tap through all the photos, with each image loading in a separate page. In that case, the controls and captions get an appropriate inline display, without all the fancy toggling.
Why go through all this trouble to support underpowered devices? Many are older devices due to go out of rotation, right? First, supporting the largest possible audience is just plain good business sense. Doing otherwise leaves money on the table. More important, though, it’s a future-friendly strategy.
When it comes to the web, the more backward-compatible you are, the more forward-compatible you’re likely to be. It’s all too common to assume that the web’s future consists exclusively of ever more capable browsers on ever smarter devices. That’s part of the story, but the future will include dumber devices, too. Speech is coming on strong, for example, and the voice-driven web browsers in future automobiles probably won’t be JavaScript champs. Building sites that are gentle to less capable older browsers also paves the way for less capable future browsers, which may be more common than you think.
Aggressive enhancement emphasizes essential content in the way it delivers pages over the wire. The same content-first values should apply to the design, too.
Mobile web experiences should lead with content, not a big stack of navigation controls. With time and screen real estate at a premium, mobile designs should fill the first screen of every page with the good stuff, with content.
At People.com, screen navigation is tucked behind a Sections button in the top toolbar. Tap the button and the entire screen fills instantly with navigation options. The menu appears so quickly that it feels like an overlay, but it’s actually just an anchor link to navigation at the bottom of the page.
This is my favorite navigation pattern for mobile websites, and it’s one championed by my pal Luke Wroblewski in his excellent book Mobile First:
This design uses a minimum amount of navigation elements (just a single link at the top), gives people an opportunity to pivot and explore when they get to the end of content, doesn’t duplicate the content of another menu, and (best of all) only requires a simple anchor link to work. That’s right: no fancy JavaScript, overlays, or separate navigation pages to maintain – just an anchor that links to the bottom of the page. That’s like HTML 0.
In larger layouts, we fall back to traditional desktop conventions and shift the navigation to a horizontal strip at the top of the page.
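In markup, the pattern is about as bare-bones as it gets. A quick sketch, with made-up section names and classes:
<!-- Top of the page: one link, no JavaScript required -->
<header class="toolbar">
  <a href="#sections">Sections</a>
</header>

<!-- ...content comes first... -->

<!-- Bottom of the page: the full navigation, the anchor's destination. -->
<!-- (Larger layouts swap in the horizontal strip at the top instead.) -->
<nav id="sections">
  <a href="/news/">News</a>
  <a href="/style/">Style</a>
  <a href="/photos/">Photos</a>
</nav>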
Delivering effective ads on mobile is just plain hard, and I’m not convinced that display ads will pan out as the future of mobile sponsorship.2 For the moment, though, banners remain at the center of things. Display ads are the revenue model, and publishers and advertisers are trying hard to find a way to make them work. It’s an uphill climb.
Traditional expectations continue to apply: advertisers want “above the fold” banner ads, but those usually choke out content or flick by so fast you don’t see them. Both the advertiser and the reader are poorly served. I came up with a new ad format to try to address this.
Snap banners hug the bottom of the screen in a fixed position but, as you scroll, they find a home and snap into a scrolling position on the page, eventually scrolling away like any other content.3 This new format stays on screen longer than a traditional inline ad, and the banner’s sudden leap into the scrolling page catches the eye, too. Those elements work to advertisers’ advantage. But the snap banner also stays out of readers’ way at screen bottom and then eventually gets clear of the screen entirely, both to readers’ advantage.
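Mechanically, the behavior boils down to something like this sketch. The class names are hypothetical, and the real implementation also has to dodge the position:fixed support problems described in the footnote below:
// The banner starts out pinned to the bottom of the viewport.
var banner = document.querySelector('.snap-banner');      // the ad container
var slot = document.querySelector('.snap-banner-slot');   // empty placeholder in the content flow

banner.style.position = 'fixed';
banner.style.bottom = '0';
banner.style.left = '0';
banner.style.width = '100%';

// When the reserved slot scrolls up to meet the fixed banner, "snap" the ad
// into the slot so it scrolls away with the rest of the page.
window.addEventListener('scroll', function () {
  var threshold = window.innerHeight - banner.offsetHeight;
  if (slot.getBoundingClientRect().top <= threshold) {
    banner.style.position = 'static';
    if (banner.parentNode !== slot) { slot.appendChild(banner); }
  } else {
    banner.style.position = 'fixed';
    if (banner.parentNode !== document.body) { document.body.appendChild(banner); }
  }
}, false);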
Even with the snap banner, though, tiny 320x50 mobile ads don’t carry much visual oomph. So we experimented with ads that expand to full screen. In addition to the snap banner delivery slot, we delivered responsive snap-banner templates that expand to full-screen on any device when you tap them, without taking you off the site. Tap again to dismiss.
We also serve occasional full-screen ads as interstitials in the photo galleries. In past user testing, I’ve seen high acceptance of ads in that context. Since you’re in gallery-flipping mode, it’s no problem to keep on trucking past the ad to the next slide. It’s like flipping through a paper magazine, but the format is bold enough that a reader will pause if interested.
Aside from interaction challenges, ads are also a real problem for responsive layouts. Most ads are still delivered as blocks of immovable pixels packed into files with names ending in gif or png. Typically, you get one size of creative, and that size is almost certainly defined by IAB standards.
We need more flexible ad creative: messages that are delivered in fluid html rather than static images. Ad agencies and networks need to step up here. It will open bigger opportunities for them, and unlock design freedom for publishers along the way. A well-crafted snippet of ad HTML can flow into any space it’s placed, adapt to any screen resolution, and target any device. Instead of juggling a ton of assets for a single campaign, you’ve got one tidy package. It’s better for everyone.4
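To make that concrete, here’s the kind of thing I mean: a toy example of ad creative delivered as markup, not a real campaign or spec.
<!-- An ad delivered as HTML and CSS instead of a fixed bitmap -->
<style>
  /* The same creative flexes to whatever space the publisher gives it */
  .ad { display: block; max-width: 100%; text-decoration: none; }
  .ad img { width: 100%; height: auto; }
  @media (min-width: 600px) {
    /* wider slots get a side-by-side layout */
    .ad img { float: left; width: 40%; margin-right: 1em; }
  }
</style>
<a class="ad" href="http://example.com/campaign">
  <img src="product.jpg" alt="Product photo">
  <p class="ad-copy">Headline and offer copy that reflows with the layout</p>
</a>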
We delivered some responsive, cross-platform ad templates as part of the design, and we’re hopeful that they yield useful discussions between People and its advertisers. This is a tough chicken-and-egg problem. Even with 1 billion page views a month (a billion!), People.com doesn’t have the individual weight to swing industry-wide change on ad formats. Unfortunately, agencies and networks show little interest in doing this on their own, despite the advantages. Someone has to budge. The best we can do is be noisy in our advocacy and generous with our examples. Meanwhile, we have to work around these inflexible ad units.
Typical banner ads jam the machinery of a responsive design. Responsive design relies on flexible elements: images, text, and other design elements that shrink and expand with the layout. Ads are rarely flexible. Shrink them down, and the text becomes unreadable. They are nearly always intended to be displayed at one size and one size only.
For now, that means ad-driven responsive designs have to build themselves around these immovable building blocks. When you build to IAB standard sizes, for example, the design favors the 300x250 ad block, since that’s the largest standard size that will squeeze inside most phone screens. In larger layouts, column widths are set to accommodate that same ad block. You see this in The Boston Globe’s design, and we did the same thing.
It’s not just a technical issue. It’s also a sales issue. “Separate creative for separate devices” is a reflection of the way these ads are sold. Mobile, tablet, and desktop versions of websites are presented as completely separate properties instead of simply “the website.” Trouble is, it’s people who form market segments, not devices. Segmenting by device—whether that’s for content or for advertising—just doesn’t reflect the way we consume information today.
We need to start selling sponsorship across platforms instead of in device silos. Responsive ads require responsive sales packages, too. My friend Mark Boulton has done some clever writing on this, and he boils it down to this:
Providing space for ads needs to be broadened into multiple spaces for one ad concept. This requires closer collaboration between advertisers and web sites, designers and marketeers and sales teams.
We’re still learning how to do all this stuff, and ad experimentation is needed on technical, business, and cultural fronts.
Photos are core content for People, and it was a real challenge to improve the presentation of photos—really feature them—while also remaining lean for mobile performance and compact for mobile display.
Tears, sweat, and blood have been spilled these past several months over how to handle responsive images, serving varying image sizes or crops depending on screen size. (Chris Coyier, as usual, has a great roundup of the options.) For better or worse, we punted on this, and we serve the same image files to all devices.5
That decision was due to the nature of the source images available to us. People’s digital team does a ton of work with the photos, manually cropping and sizing each and every image that comes through. They typically generate over ten different versions of every photo. It’s a tremendous workflow.
People selected their cut sizes based on the needs of the desktop web layout, which is sensible since that’s almost exclusively where they’ve been displayed so far. But here’s the thing: those desktop-sized photos are too small for mobile—or at least, for retina-display screens. You read that right: too small for mobile.
Most photos on the desktop site max out at 435x580, smaller than the iPhone’s 640x960 screen, for example. (A jumbo photo size for iPad is also available for some photos, but not all.) Since we rarely had a high-resolution image available in the first place, we didn’t have to cope with the responsive-image issue, a mixed blessing. Unfortunately, that meant we weren’t able to serve People’s remarkable archive of photos in the best light for the growing population of high-res screens.
Responsive design is more than just front-end tech magic. It also requires hard changes to editorial workflow and content strategy. This stuff takes time. As high-density displays make the leap from handheld phones to desktops and laptops, revising our photo workflows will become an especially high priority. Lots of work ahead for all of us. (I’m grateful that Karen McGrane’s book is arriving this fall to help: Content Strategy for Mobile.)
Another goal of the site was to knit together community on all platforms. People uses Disqus as its comment platform, so we wrestled their API into the new site’s design. But we also introduced a new element to the People ecosystem. I call them emoticomments.
Emoticomments are one-tap microcomments: they’re multiple-choice “Like” buttons, with emoticons as your options. They’re available on any photo or article, on all platforms. They’re a playful, effortless way for readers to share their reactions. Jenny Ng designed simple emoticomment icons to capture five essential emotional responses to People’s editorial.
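The mechanics are as modest as the idea. Here’s a sketch of what an emoticomment widget could look like; the reaction names, endpoint, and markup are hypothetical, not the production implementation:
<!-- One-tap microcomments: a multiple-choice "Like," with emoticons as the options -->
<form class="emoticomments" method="post" action="/reactions">
  <input type="hidden" name="story" value="12345">
  <button type="submit" name="reaction" value="love">Love it</button>
  <button type="submit" name="reaction" value="lol">LOL</button>
  <button type="submit" name="reaction" value="wow">Wow</button>
  <button type="submit" name="reaction" value="aww">Aww</button>
  <button type="submit" name="reaction" value="eyeroll">Eye roll</button>
</form>
With JavaScript on board, that submit can go out quietly in the background so the reader never leaves the page; without it, the plain form still records the tap.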
People’s design crew replaced these with their own to make them consistent with the icons used on the desktop site.
Responsive web design is only a little over two years old. We’ve come a long way in that time, and every launch of a responsive design for a giant content site like this one is a marker of just how far. Responsive design is elegant and even simple in its theory, but sometimes devilishly complex in its details. Ethan has been barnstorming the country this year, sharing his techniques for overcoming some of the bumps and headaches that are inevitable with any new technique. We’re making headway. I’ve shared a few of our strategies for managing those bumps here, and all of it continues to evolve.
In any fresh technology or technique, the initial challenge is simply, “can we make this thing work?” Once you do, it’s time to add the polish. That’s the stage we’re at with responsive web design. As an industry, we’re moving at remarkable speed to improve the experience now that we’ve built the machinery. That's happening not only in geeky areas of performance and optimization, but also in content strategy, workflow, and business strategy. I’m crazy proud of our team for some of the new techniques that bring that polish to People’s new site. Lots more great stuff ahead.
Snap banners rely on position:fixed to do their magic, but fixed positioning is poorly and inconsistently supported across mobile browsers. We had to resort to a browser whitelist provided by the jQuery Mobile team to detect whether the browser supports the feature. If not, the snap banner is delivered like a regular static banner at the very top of the page. ↩