Thursday, June 6, 2013

Clearing the Big Data Hurdle: The Open Source Advantage

By Christopher M. Carter, Hiln
www.Hiln.com
In today’s world there is a new understanding, the emergence of a new “reality” much different from the one we knew even a decade ago. The importance of the big data that now exists within enterprises cannot be overstated. Big data is becoming more important in every industry, but nowhere more so than in finance, both in enterprises and at the big Wall Street firms. Yet most businesses aren’t ready to manage this flood of data, much less do anything interesting with it.
Big data will impact every industry, from finance to education and government.  In fact, the Federal government just announced a new big data research initiative, with a budget of $200 million.
Data as a whole is a catalyst for business. According to IDC, 2.7 zettabytes of data will be created this year alone. Look inside the enterprise and the picture becomes clear: to begin analysing and deriving value from these increasingly large data sets, organisations need to embrace the right tools for these new capabilities. As businesses come to understand their existing data better, they can gain competitive advantage in the process; that advantage, however, can only be realised if the data is processed intelligently and efficiently, and the results are delivered in a timely manner.
How does the enterprise begin to mine its data? Good question. With so much data that firms can become overwhelmed, how can the good data be identified? Which data is “needed”, and which information is less valuable? The old mantra of “good data in, good data out; bad data in, bad data out” is a useful starting point for answering these questions. All firms need to be cognisant, first and foremost, of the quality of the data being entered into their systems and used in daily operations. This is especially important in industries like finance, where data is the lifeblood of the business.
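To make that mantra concrete, here is a minimal sketch of a quality gate, written in Python, that rejects malformed records before they reach downstream systems. The field names and validation rules are hypothetical, purely for illustration:

# Minimal sketch of a data-quality gate for incoming records.
# Field names and rules here are hypothetical, for illustration only.
from datetime import datetime

REQUIRED_FIELDS = {"trade_id", "symbol", "amount", "timestamp"}

def is_clean(record: dict) -> bool:
    """Return True only if the record passes basic quality checks."""
    if not REQUIRED_FIELDS.issubset(record):
        return False                      # missing fields: reject
    if not isinstance(record["symbol"], str) or not record["symbol"].strip():
        return False                      # empty or non-text ticker symbol
    try:
        float(record["amount"])           # amount must be numeric
        datetime.fromisoformat(record["timestamp"])
    except (ValueError, TypeError):
        return False
    return True

def ingest(records):
    """Split a feed into clean rows (to load) and rejects (to review)."""
    clean = [r for r in records if is_clean(r)]
    rejects = [r for r in records if not is_clean(r)]
    return clean, rejects

A gate like this does nothing sophisticated, and that is the point: keeping bad data out at the door is far cheaper than untangling it later in the analysis.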
Opportunities abound in big data, and an organisation will get as much knowledge out of its stored data as it puts energy into analysing it. With applications spanning from SAP BusinessObjects and the in-memory processing of SAP HANA to newer tools, firms in the finance sector are adding new roles such as Chief Data Officer specifically to make today’s key decisions around information. Big data is indeed big, but it is not for all purposes. For example, it is not suited to transactional or real-time in-memory processing of small, endless streams of structured data. Think of a big truck versus a small sedan: each has its purpose. Both big data frameworks and fast in-memory traditional databases have a place in driving the business.
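To make the truck-versus-sedan contrast concrete, here is a rough sketch of the two workload shapes, with invented data and function names: a batch aggregation over a large feed (the truck) versus a keyed in-memory lookup (the sedan):

# Sketch contrasting two workload shapes (illustrative only).
from collections import defaultdict

# "Big truck": batch aggregation over a large stream, processed
# line by line -- the shape of a big data framework job.
def batch_totals(lines):
    totals = defaultdict(float)
    for line in lines:                   # scale: millions of lines
        region, amount = line.split(",")
        totals[region] += float(amount)
    return totals

# "Small sedan": transactional lookup against an in-memory store,
# answering one small structured query with minimal latency.
accounts = {"ACCT-001": 1_250.00, "ACCT-002": 98.40}   # hypothetical

def balance(account_id):
    return accounts.get(account_id)      # constant-time, real-time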
Opportunities in harnessing and utilising big data become far more feasible when open source frameworks come into play. The open source world has essentially created the new age of big data analytics, from Hadoop, the most widely used and best-known framework among developers, to products such as EMC’s Greenplum. These tools have created a rush to market among organisations trying to process as much data as possible, as fast as possible, so that decisions can be made in something close to real time. For example, a major retailer with outlets around the globe can use an open source framework to harness the data coming in from its social media channels, run it through an enterprise analytics pipeline spanning literally thousands of nodes, and make real-time decisions in its stores about products and pricing.
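One slice of such a pipeline might look like the following sketch: a Hadoop Streaming-style mapper and reducer, written here in Python, that count product mentions in incoming social posts. The product list and input format are hypothetical:

#!/usr/bin/env python
# Hadoop Streaming-style mapper/reducer counting product mentions
# in social-media posts. Product names and input format are
# hypothetical; a real job would fan these out across the cluster.
import sys

PRODUCTS = {"widget", "gadget", "gizmo"}   # hypothetical catalog terms

def mapper(stdin=sys.stdin, stdout=sys.stdout):
    """Emit 'product<TAB>1' for each catalog term seen in a post."""
    for line in stdin:
        for word in line.lower().split():
            if word in PRODUCTS:
                stdout.write(f"{word}\t1\n")

def reducer(stdin=sys.stdin, stdout=sys.stdout):
    """Sum counts per product; Hadoop sorts mapper output by key."""
    current, total = None, 0
    for line in stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != current:
            if current is not None:
                stdout.write(f"{current}\t{total}\n")
            current, total = key, 0
        total += int(value)
    if current is not None:
        stdout.write(f"{current}\t{total}\n")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()

In a real deployment, these two functions would be wired into Hadoop Streaming via the -mapper and -reducer options and run in parallel across thousands of nodes, with the aggregated counts feeding the pricing and stocking decisions described above.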
Three to five years ago this was not possible. But with the large, active open source community working on the framework, this computational ability now exists, and new companies utilise and extend it daily.
Corporations are treating data as an asset no matter where it physically resides, yet there is still much to learn and to dig through. The new Chief Data Officer and their team must stay vigilant about the many factors that directly impact the business, including how and what data is provided to regulators. Enterprises need to set standards for their information, and this matters more than ever in an increasingly regulation-focused landscape. Firms need to ensure that their internal processes satisfy current government regulatory requirements, while also accounting for the regulations in the many new laws being created, seemingly on the fly.
There is no doubt that harnessing the power of big data is important, and that it will become more strategic as organisations use data to interact with their clients, competitors and the market through faster decision-making. Some companies will shrink under the pressure of this new data analysis, and some may fail completely. But regardless of which companies falter and which gain market share, one thing is certain: database companies should see tremendous gains as the need for database applications keeps increasing.
Organisations are looking to the future and deciding how large a role big data will play in the coming years. The truth is that how firms utilise big data as a source of knowledge and power will be the biggest differentiator. The enterprises that succeed in adopting open source tools to analyse their information will improve profitability, provide stronger service across the organisation and to their customers, and rise above in the land of giants.
