A recent customer testimonial video from CEVA Logistics discusses the benefits of having all your big data in one place for analytics.
As Deepak Dodani, Vice President of Global Supply Chain and Transport Solutions for CEVA Logistics, says in the video, “to have [all our data] pulled together into one place and to provide intelligence tool sets on top of it provides tremendous flexibility and insight…This data starts to drive key decision making for the future supply chain processes.”
It’s true that analytics spanning all of an organization’s data, rather than individual data silos, deliver tremendous insight. That’s a prime value provided by situational intelligence applications.
The question is, how do you get all your big data into one place to perform analytics? It’s big, after all.
One approach is what Deepak describes: gather all the data in the enterprise and move it to a single repository in the cloud. This approach requires a lot of upfront work to create the repository, and ongoing work whenever you want to add a new data source to your analytics program. A cloud repository is nearly mandatory if you are implementing streaming analytics for your Internet of Things devices. On the positive side, a common cloud repository might give you easier backup and disaster recovery, plus offload some IT management tasks to your cloud provider.
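As a rough sketch of this first approach, centralizing means extracting records from each silo, transforming them to a common schema, and loading them into one repository before any analytics run. The source names and fields below are hypothetical, and an in-memory SQLite database stands in for the cloud repository:

```python
import sqlite3

# Hypothetical source silos, each holding shipment records in its own format.
warehouse_source = [{"shipment_id": "W-1", "weight_kg": 120.0},
                    {"shipment_id": "W-2", "weight_kg": 85.5}]
transport_source = [("T-9", 300.0), ("T-10", 42.0)]

# The central repository: SQLite stands in for a cloud data warehouse.
repo = sqlite3.connect(":memory:")
repo.execute("CREATE TABLE shipments (shipment_id TEXT, weight_kg REAL)")

# Extract from each silo, transform to the common schema, load into one table.
for rec in warehouse_source:
    repo.execute("INSERT INTO shipments VALUES (?, ?)",
                 (rec["shipment_id"], rec["weight_kg"]))
for shipment_id, weight in transport_source:
    repo.execute("INSERT INTO shipments VALUES (?, ?)", (shipment_id, weight))

# Analytics now run against one place, across all sources at once.
total = repo.execute("SELECT SUM(weight_kg) FROM shipments").fetchone()[0]
print(total)  # 547.5
```

The upfront cost Deepak’s approach implies lives in that transform step: every new source needs its own mapping into the common schema before its data can join the repository.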
A second approach is to leave all your data sources in place and connect them to a common platform for analytics and visualization. This helps you get your analytics program up and running faster, plus it makes it easier to integrate future data sources into your analytics program. This could be very important as data-creating devices proliferate from the growing Internet of Things. However, leaving your data in place doesn’t necessarily give you easy backup and disaster recovery.
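This second approach can be sketched as a thin federation layer: each source stays where it is and exposes a common query interface, and the analytics platform fans out across all of them at query time. The adapter and field names here are hypothetical, purely for illustration:

```python
# Each silo stays in place; a small adapter exposes a common query interface.

def warehouse_adapter():
    # Imagine this calling the warehouse system's own API, in place.
    return [{"shipment_id": "W-1", "weight_kg": 120.0}]

def transport_adapter():
    # Imagine this querying the transport system's database, in place.
    return [{"shipment_id": "T-9", "weight_kg": 300.0}]

SOURCES = [warehouse_adapter, transport_adapter]

def federated_query():
    """Fan out to every registered source and merge the results."""
    records = []
    for adapter in SOURCES:
        records.extend(adapter())
    return records

# Integrating a future IoT feed is just one more adapter in SOURCES;
# no data is copied into a central repository.
combined = federated_query()
print(len(combined), sum(r["weight_kg"] for r in combined))  # 2 420.0
```

The trade-off shows up in what the sketch leaves out: because nothing is copied, there is no single repository to back up, which is exactly the disaster-recovery gap noted above.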
Which approach would you choose and why?