
At the end of 2013, Gartner predicted that by 2017, one in five market leaders would cede dominance to a company founded after 2000 for failing to take full advantage of the data around them. Gartner also predicted that by 2016, one in four market leaders would lose massive market share due to digital business incompetence.

Has this come to pass? Consider:

Gartner may not be prophetic, but it looks like they at least identified a major trend. So what will you need to take full advantage of big data in 2016 and remain, or become, a leader?

How much data are we talking about?

Big Data is a well-worn term by now, but how much data counts as big? Jeremy Stanley, the CTO of Collective, cites this IDC graph in his presentation “The Rise of the Data Scientist.”

[Figure: IDC forecast of the size of the digital universe by year, from “The Rise of the Data Scientist”]

By comparing points on the curve corresponding to 2016 and 2017, we can see that about 2,000 exabytes of data will be added to the digital universe in 2016.

2,000 exabytes equals two trillion gigabytes. This is roughly equivalent to the entire digital universe in 2012. Or for a silly spatial comparison: If 2,000 exabytes were divided among one-inch long, one gigabyte USB sticks, those sticks would stretch 31.6 million miles, reaching nearly from Earth to Venus.
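
The arithmetic is easy to check. Here is a quick back-of-envelope version in Python (treat the Earth-to-Venus comparison as loose, since the distance between the planets varies with their orbits):

```python
# Back-of-envelope check of the 2,000-exabyte comparison.
EXABYTES = 2_000
GB_PER_EXABYTE = 10**9          # 1 exabyte = one billion gigabytes
INCHES_PER_MILE = 63_360        # 5,280 feet x 12 inches

gigabytes = EXABYTES * GB_PER_EXABYTE
print(f"{gigabytes:,} gigabytes")          # 2,000,000,000,000 -> two trillion

# One-inch-long, one-gigabyte USB sticks laid end to end:
miles = gigabytes / INCHES_PER_MILE
print(f"{miles / 1e6:.1f} million miles")  # ~31.6 million miles
```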

Where will all this data come from?

In a blog post from microcontroller manufacturer Atmel, we can see that approximately five billion connected devices will join the Internet of Things in 2016.

[Figure: Consensus estimates of connected devices added to the Internet of Things by year, via Atmel]

This is based on a consensus estimate drawn from firms such as Cisco, Ericsson, Gartner, IDC and Harbor Research.

Like the projected growth in data, the 2016 growth in connected devices will roughly equal the entire installed base of connected devices from just a few years ago.

What kinds of data will we see?

It’s worth reflecting on what type of data IoT devices generate, because the types of data influence the types of analytics. Those additional five billion devices will provide data that:

    • allow manufacturers to follow their products through the supply network to the end consumer
    • communicate when and where they are being used, and how often
    • communicate when they need to be refilled, replenished, repaired, or replaced
    • send alerts when they are operating under distress and may fail
    • provide transportation and logistics operators with more granularity in managing their cargo and fleets
    • provide convenience to the people who deployed them (such as automatically adjusting the thermostat to a comfortable temperature when a person is within 15 minutes of home)

Because devices are connected and communicating, they deliver a stream of data. That stream becomes a time series, because when data is recorded, sent, and received yields useful insight into the data itself and into the people and activities that generate it.
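
As a minimal sketch of what that looks like in practice (in Python with pandas, using made-up readings from a hypothetical connected thermostat):

```python
import pandas as pd

# Hypothetical readings from a single connected thermostat; every record
# carries its own timestamp, and the timestamps are themselves data.
readings = pd.DataFrame(
    {
        "timestamp": pd.to_datetime(
            ["2016-01-04 08:00", "2016-01-04 08:15",
             "2016-01-04 09:05", "2016-01-04 18:30"]
        ),
        "temperature_f": [68.0, 68.5, 70.1, 66.2],
    }
).set_index("timestamp")

# Roll the raw stream up into an hourly time series. The empty hours
# (NaN rows) are informative too: nobody touched the device.
hourly = readings.resample("1h").mean()
print(hourly)
```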

Because devices are out in the world and not trapped on a desktop or in a data center, their location matters. Their output becomes GIS data, because where a device is, what it’s near, and what it’s connected to on a network yield useful insight into the data itself and into the people and activities that generate it.
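
To make the location angle concrete, here is a minimal sketch, assuming nothing more than raw latitude/longitude pairs rather than any particular GIS platform, of the proximity question behind the thermostat example above:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3956 * asin(sqrt(a))  # Earth radius ~3,956 miles

# Hypothetical coordinates: is this phone close enough to home
# to trigger the "warm up the house" rule?
home = (40.7484, -73.9857)
phone = (40.6892, -74.0445)
print(f"{haversine_miles(*phone, *home):.1f} miles from home")
```

A production system would use a spatial index or a geofencing service rather than pairwise distance checks, but the underlying question is the same.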

Time series and GIS data require new repositories and analytics that many organizations don’t yet have, and this will become a challenge for companies in 2016. (The implications of new data types are a big topic that we’ll be exploring further in 2016.)

How will we handle and analyze all that data?

In his book The Singularity is Near, Ray Kurzweil argues that we’re drawing close to the point when a $1,000 computer will have the same cognitive power as a human brain.

[Figure: Kurzweil’s projection of computing power per $1,000 over time, compared with mouse and human brains]
In 2016, a $1,000 computer will surpass a mouse brain. (You didn’t realize that a mouse brain does so much, did you?) The $1,000 human brain is just a couple of years away at current rates.

We’re already at the point where, for many analytical tasks, we require computerized brains to do our heavy data integration and computational work. Think of weather modeling or financial markets or piloting aircraft and spacecraft.

What software will run on these more powerful computers? A Forbes article by Louis Columbus summarizes trends in big data analytics through 2020, including this graph:

[Figure: Forecast growth of the big data analytics market through 2020, via Forbes]
In 2016, the big data analytics market will grow by approximately $1 billion across five main categories: real-time analysis and decision-making, precise marketing, operational efficiency, business model innovation, and enhanced customer experience. The analysis of transactional, time-series and GIS data applies across these five domains.

Are you ready for 2016?

Like Gartner’s predictions, these other studies are not necessarily prophetic, but they do point to the overall trend. The opportunity in 2016 is to apply increasingly affordable computing and analytics power to correlating, analyzing and visualizing new types of data, generating new insights, new opportunities and new revenues, and thereby avoiding the fate of the eclipsed market leaders Gartner warned about.

How do you take advantage of this opportunity in 2016?

Start small and move fast: test use cases for data-driven changes that improve your operational efficiency and your relationships with prospects and customers. That covers three of the five categories of analytics listed in the Forbes article. Increased operational efficiency generates savings that you can apply to further data-driven initiatives. Improved relationships with prospects and customers increase top-line revenues and bring you market visibility. With increased top-line revenues and bottom-line savings, you’re on your way to data-driven business improvement.

Why do you need to do this? Your customers expect it and your shareholders require it, mainly because your competitors are already doing it.

(Ron Stein contributed to this post)
