Providing news, views and unrivaled content for the global sourcing community


More, more, more: 2017 will be marked by easier access to data, increased analytics and better cloud storage

Posted: 12/02/2016 - 20:50

There is no doubt that cloud, big data and artificial intelligence will be trending in 2017, just as they were in 2016, and will likely still be in 2018. These are multi-year endeavours: the true implementation of the technologies under these umbrellas has only just begun, and challenges remain, such as finding quality resources and understanding the technologies well enough to fit them to business use cases.

Picking the right technology is always difficult, but fortunately, by the end of 2016 there will be hundreds of successful implementations of various technologies under the cloud, big data and AI umbrella, providing a strong baseline of proven technologies from which companies can choose. This reduces the probability of making a wrong decision.

Trends

Data Access for Business Users
More and more business users will become adept at accessing data that was previously the domain of the IT department. The key to making this happen is “self-service” technology that allows the corporate end user – the business unit executive – to access data and generate his or her own insights from it. This growth of data access and self-service tools was an important development in 2016, and it will remain important because more corporations have started adopting big data technologies and will eventually go through the same requirement cycles as those that have already made the big data leap. IT outsourcers and solutions providers can expect to be called upon to create customised self-service tools, user interfaces and portals, as well as to help deploy off-the-shelf software.
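
To make the idea concrete, here is a minimal sketch of the kind of "self-service" aggregation such tools expose: a business user summarises raw records without writing SQL or involving IT. The dataset and field names are invented for illustration.

```python
from collections import defaultdict

def pivot_sum(rows, by, value):
    """Group rows by the `by` field and sum the `value` field."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[by]] += row[value]
    return dict(totals)

# Hypothetical sales records a business user might explore on their own
sales = [
    {"region": "EMEA", "amount": 120.0},
    {"region": "APAC", "amount": 80.0},
    {"region": "EMEA", "amount": 50.0},
]

print(pivot_sum(sales, "region", "amount"))
```

Real self-service platforms wrap this pattern in drag-and-drop interfaces, but the underlying operation – group, aggregate, present – is the same.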

Perishable Insights
Expect forward-thinking corporations to invest in smarter and faster analytics that allow them to obtain “perishable insights” – near real-time information that can be used to engage stakeholders. Pulling better value out of data will keep technical experts on their toes in 2017, as there is high demand to extract value from data as soon as it is born. Most businesses have some sort of business intelligence and/or analytics capability, but the winners will be the companies that achieve those insights fastest and then act upon them. Businesses want to respond almost in lockstep with customer actions, such as a visit to a website or activity inside a store, and to catch fraud and security risks as soon as they occur.
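
The fraud example above can be sketched as a sliding-window detector that acts on events while the insight is still fresh. Class name, window size and threshold are assumptions chosen for the example, not a production design.

```python
from collections import deque

class BurstDetector:
    """Flag an account that produces too many events inside a time window."""

    def __init__(self, window_seconds=60, threshold=3):
        self.window = window_seconds
        self.threshold = threshold
        self.events = {}  # account id -> deque of event timestamps

    def observe(self, account, ts):
        """Record an event; return True once the account exceeds the threshold."""
        q = self.events.setdefault(account, deque())
        q.append(ts)
        # Drop timestamps that have fallen out of the sliding window
        while q and ts - q[0] > self.window:
            q.popleft()
        return len(q) > self.threshold

d = BurstDetector()
flags = [d.observe("acct-1", t) for t in (0, 10, 20, 30)]
print(flags)
```

The fourth event within the 60-second window trips the detector, illustrating why acting "as soon as the data is born" matters: an hour-old batch job would flag the same burst long after the damage was done.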

Security
Security for big data systems played a very important role in 2016, and we will see high demand for better tools and solutions to configure and maintain security for these systems. Driving the road map for various security companies will be the ability to identify personally identifiable information within large volumes of data and to provide secure, compliant access to this and other mission-critical data across continents. The marriage of cloud and big data systems adds complexity to security road maps, but the good news is that various security product vendors are aware of this, and many organisations have already deployed security to protect big data systems in the cloud. Expect many others to follow suit, or at least to try, in 2017.
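
A toy version of the PII-identification task looks like the scan below, using a couple of illustrative regexes (an e-mail pattern and a US-style SSN pattern). Real compliance tooling is far more sophisticated; this only sketches the idea.

```python
import re

# Illustrative patterns only; production scanners use many more, plus validation
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def find_pii(text):
    """Return a dict of PII kind -> matches found in `text`."""
    return {kind: pat.findall(text)
            for kind, pat in PII_PATTERNS.items()
            if pat.findall(text)}

record = "Contact jane.doe@example.com, SSN 123-45-6789."
print(find_pii(record))
```

At big data scale the same per-record function would run inside a distributed job over the full dataset, which is exactly where the cloud-plus-big-data complexity the paragraph mentions comes in.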

Cloud Storage
Cloud storage – that is, storing all of a company’s data in the cloud – emerged in 2016, allowing companies to scale down the on-premise compute stack. This is different from migrating or moving data across systems: it entails storing the source of business “truth” directly on a cloud storage system and using cloud servers to ingest, process and analyse the data. Expect cloud storage use cases to be in high demand in 2017 as corporate IT departments figure out the most efficient and effective ways to store and access information in the cloud. This will be especially true where the use case demands technologies like Hadoop.

Better ETL
IT solutions firms will work with corporate IT departments to reduce the data footprint in expensive data warehouse systems. The most basic use case of big data, typically via Hadoop, has been to shrink that footprint so that warehouse systems can serve reporting tools effectively by limiting the time spent on ETL. Horizontal versus vertical scaling played an important role in comparing traditional and new data technologies, and it will continue to do so for those about to begin their big data journey in the coming years.
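
The footprint-reduction idea can be sketched as pre-aggregating raw events outside the warehouse so that only compact summaries are loaded into it. Field names and the daily grain are assumptions for the example.

```python
from collections import defaultdict

def aggregate_daily(events):
    """Collapse raw event rows into one total per (day, metric) pair."""
    daily = defaultdict(int)
    for e in events:
        daily[(e["day"], e["metric"])] += e["value"]
    return [{"day": d, "metric": m, "total": v}
            for (d, m), v in sorted(daily.items())]

# Three raw events become two warehouse rows
raw = [
    {"day": "2016-12-01", "metric": "clicks", "value": 5},
    {"day": "2016-12-01", "metric": "clicks", "value": 7},
    {"day": "2016-12-02", "metric": "clicks", "value": 3},
]
print(aggregate_daily(raw))
```

In practice this transformation runs on a horizontally scaled Hadoop cluster over billions of events, which is precisely why it is cheaper there than inside a vertically scaled warehouse.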

It’s important to note that big data (even more so than cloud) is not a destination but a journey, and that no corporate IT department is ever really done working on big data systems. Even when everything works well, the challenge is to lessen the footprint or achieve the same results with fewer tools.

OLAP and Hadoop
Within Xavient’s client base we’ve recognised that OLAP based on Hadoop systems played an important role in 2016, with several products and various open source projects emerging, each with their own pros and cons. As this trend picks up speed in 2017, there is still a need for a stable and effective solution – especially an open source one – to migrate existing models to new systems based on Hadoop or the wider big data ecosystem. In 2017, we may see more work in this area and better stability in open source projects.
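
For readers new to the term, the core OLAP idea is pre-computing aggregates for every combination of dimensions so that group-by queries become lookups. The toy cube below shows this in miniature; dimension and measure names are invented, and real Hadoop-based OLAP engines build these cuboids at cluster scale.

```python
from collections import defaultdict
from itertools import combinations

def build_cube(rows, dims, measure):
    """Pre-compute the measure's sum for every subset of the dimensions."""
    cube = defaultdict(float)
    for row in rows:
        for r in range(len(dims) + 1):
            for subset in combinations(dims, r):
                key = tuple((d, row[d]) for d in subset)
                cube[key] += row[measure]
    return cube

rows = [
    {"region": "EMEA", "product": "A", "sales": 10.0},
    {"region": "EMEA", "product": "B", "sales": 4.0},
    {"region": "APAC", "product": "A", "sales": 6.0},
]
cube = build_cube(rows, ["region", "product"], "sales")
print(cube[(("region", "EMEA"),)])  # total EMEA sales, answered by lookup
```

The trade-off – storage for query speed – is the same one the Hadoop-based OLAP projects mentioned above wrestle with at scale.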

Data Science
Data analysis is not data science. Corporations that understand this, and have been able to hire data scientists, are by and large the ones that have created a competitive advantage. The challenge in 2017 will be finding and hiring the best data science talent to leverage new data types and use cases based on the variety of data. In fact, a stable approach to migrating existing data science algorithms and workloads to new open source big data technologies is not a “nice to have” but a requirement. A great deal of work needs to be done in understanding existing open source technologies in order to leverage better options for processing and analysing large volumes of data with scalable solutions.
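
As a minimal illustration of the kind of algorithm being migrated, here is a plain-Python linear regression fitted by gradient descent. The point is that the core logic is a handful of arithmetic steps that can be re-expressed on a distributed engine; the data and learning rate are invented for the example.

```python
def fit_line(xs, ys, lr=0.01, steps=5000):
    """Fit y = a*x + b by gradient descent on mean squared error."""
    a, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        grad_a = sum(2 * (a * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (a * x + b - y) for x, y in zip(xs, ys)) / n
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

xs = [0, 1, 2, 3]
ys = [1, 3, 5, 7]  # generated from y = 2x + 1
a, b = fit_line(xs, ys)
print(round(a, 2), round(b, 2))
```

Migrating such a workload to a big data stack mostly means distributing the two gradient sums across the cluster, which is why a stable migration path matters more than rewriting the mathematics.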

Containerisation
"Container" and "containerisation" were two of the big buzzwords in 2016. In 2017, we will see more work on the same topic, but expect more maturity and clarity based on the hundreds of proofs of concept created by the end of 2016. Based on Xavient’s work in this area, we believe that by the middle to end of 2017, most departments within an organisation will start getting their hands dirty here; there won’t be much choice, given business demands for fast development and release cycles. There will be more work on automating application deployments using containers rather than virtual machines.

AI
Artificial intelligence was also one of the top buzzwords in 2016, even if not much headway was made. Expect to hear more about AI and machine learning in 2017, and I expect to see more practical uses of AI – or rather, ways to simplify the understanding and use of AI for business and technical users. We will definitely see more startups, solutions and tools focused on AI. We wouldn’t be surprised if someone somewhere is building a machine learning tool or platform right now to conduct trades based on market momentum rather than human emotion.
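
A toy version of the momentum idea the closing sentence imagines might compare a short moving average against a longer one and emit a signal when the short-term trend pulls ahead. Prices, window sizes and the signal rule are all invented for the sketch; this is not trading advice or a description of any real platform.

```python
def moving_average(prices, window):
    """Average of the most recent `window` prices."""
    return sum(prices[-window:]) / window

def momentum_signal(prices, short=3, long=5):
    """Return 'buy' when short-term momentum exceeds long-term, else 'hold'."""
    if len(prices) < long:
        return "hold"
    if moving_average(prices, short) > moving_average(prices, long):
        return "buy"
    return "hold"

uptrend = [10, 10, 11, 12, 14]
print(momentum_signal(uptrend))
```

The appeal for the businesses described above is that such a rule executes identically every time, with no human emotion in the loop.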


About the Author: Neeraj Sabharwal, Head of Cloud, Data & Analytics at Xavient Information Systems (www.xavient.com), has more than 15 years of experience in data, cloud and big data technologies. Neeraj has worked on hundreds of cloud and big data implementations, helping customers get more value out of their data. He can be reached at nsabharwal@xavient.com.
