It Took a Natural Disaster for Me to Understand Snap Map

A year after Instagram killed its maps feature, Snap nailed it. Houston proved it.
Photograph: Joe Raedle/Getty Images

Of all the ways to watch the flood waters rise in Houston last week, the most compelling was through Snap’s new crowd-sourced Map feature. Unlike, say, the New York Times’ curated two-minute videos, which relied on images to relay facts, Snapchat’s real-time, ephemeral videos dropped me as directly as I’ve yet been dropped into the experience of disaster itself. I was canoeing down a street in which the waters licked the bottom of second-story windows. I hung out in the entrance of a school-turned-shelter, watching volunteers dressed as cartoon characters entertain kids. I observed as a man caught a fish in his living room.

The video Snaps, taken together, conveyed more than a string of facts or a glimpse of the surroundings. They conveyed a range of emotions—horror, shock, anger, loss, sadness, and, in small, sweet moments, joy. They told a collective human story about the experience.

It’s been just over two months since Snap rolled out this feature, which lets Snapchatters pan across a map to view images and video clips in real time from anywhere they’re being posted. The early reviews weren’t good. They focused on privacy issues, and suggested that Snap Map would be a drain on the resources of this newly public company. All that may be true, but Snap Map reveals something else, too: Regardless of how successful cofounder and CEO Evan Spiegel turns out to be at building a company, he’s proven himself adept at understanding and building products for the most fundamental communication shift of our time.

In short, the era of the word as the exclusive medium for real-time communication is coming to an end. Images have evolved into a new language. We are using them to communicate how we are thinking, feeling, and interacting with the world around us. They are, in effect, replacing written language for many of the things we once relied on words to express.

Though this shift has been under way for well over a decade, mind-blowingly fast advances in everything from computing power to computer vision are accelerating our ability to capture and manipulate images. There will be 45 billion cameras in the world within the next five years, according to market analysis by the investment firm LDV Capital. The standard smartphone will have four cameras, equipping it for the rise of augmented reality. Emerging high-powered graphics chips and deep-learning processors can churn through massive amounts of data in fractions of a second. But the technology alone is not enough to create a new language, any more than the printing press alone could have produced Tolstoy’s Anna Karenina or the musings distributed through the earliest tabloids. The tools were the start, but the formats that best conveyed our thoughts and emotions emerged over time, through trial and error, as people pushed the scope of the written word.

In the age of the image, that’s the work of the entrepreneurs building out the features that will inform a new image literacy. What do people want to use photos and videos to tell each other? What’s the best way to do that?

We expected Facebook to figure this out, and in large part it has. Facebook was the first mainstream platform to host our pictures. But it has broken new ground in this area mostly through its size, its smarts, and its ability to write very big checks. Founder Mark Zuckerberg parses user data and scours the startup ecosystem for the services that are likely to break out, and he has made prescient acquisitions at exactly the right time. Witness the rise of Instagram, for which Facebook famously paid $1 billion in the summer of 2012, shortly after Facebook itself went public. (Back then, the jury was still out on Facebook as it floundered through its first few quarters as a public company.)

Facebook is also very good at applying what has worked for other services. After Snap’s users embraced its Stories feature, which lets them string together images and videos over the course of a 24-hour period, Instagram introduced its own version. No, Instagram didn’t dream it up. But the service has so many dedicated users that it was able to harness the feature to keep them from running over to Snap.

But for all that Facebook has done to evolve the ways we use and share photos, it hasn’t proven adept at launching original, photo-driven features. In fact, this week marks a year since Instagram shut down its own maps feature, which it said people hadn’t embraced. Instagram’s photo maps were buried: to find them, you had to tap on a person’s profile and then find the small location icon beneath their bio. It would take you to a map with stacks of photos arranged by where they had been taken. It treated photos as documents, static trinkets tied to a place.

By contrast, Spiegel has always understood photos as stories we’re telling each other. That’s why the app he launched opens to the camera, and why he calls Snap a “camera” company. Like phone calls, Snaps aren’t intended to be stored so much as absorbed, decoded, and released. They are temporary, intended to convey an immediate message. The augmented reality dog ears and flower halos, the captions, and the short video clip formats were all intended to give us more tools to infuse emotion into our conversations and express ourselves more deeply. If static images were the nouns of this new language, the AR overlays were the adjectives, rich with meaning. (Video became the verbs.)

Snap Map introduces context to those stories, and thus becomes our real-time visual newspaper—a framework for understanding the time and place of our chatter. You find it easily and immediately from the first screen you open, by pinching your fingers together as if you are making yourself smaller in relation to other people’s stories, which you are. Zoom all the way out and you can swipe across the heatmap, where clusters of blues, yellows, and reds indicate Snapchat stories emerging in real time. Many are ringed and headlined: the US Open in New York, the Titans vs. Chiefs game in Kansas City, Houston Strong. But zoom in on any one of these geographies and you can see the Snaps at a more granular level. The hyperlocal news doesn’t have headlines, but it’s there if you know where to look.

As I watched Houston, I realized that the storm was heading for Tupelo, Mississippi, where I have a lot of family. So I scrolled across the map and zoomed in on Tupelo, where a string of Snaps populated the map. I watched a short clip of rain falling outside a window, and a guy walking through the parking lot over by the mall, the water rising to just beneath his ankles. Just a lot of rain in Tupelo; not much to worry about.

I zoomed out and scrolled across the ocean to Paris, where it was also raining as people strolled down the Champs-Élysées. I scrolled to Saudi Arabia, where a string of Bitmoji-peppered videos took me into the Eid al-Adha celebration. I circled the globe until I got to Utah, where I saw my stepsister’s Bitmoji. By tapping on it, I could watch her Snaps—a geolocated, personalized newsfeed—or send her a greeting.

With Snap Map, the company has created a news program, curated not just by my personal contacts but also by geographic location. It is expansive, yet contained: The Snaps remain ephemeral, disappearing after 24 hours. It’s an evolving, image-driven version of a good old-fashioned newspaper, reporting all the news that’s fit to Snap.