Turning Data Sludge Into a Data Swamp, Then Into a Data Lake, and Then Into Data: The Magic Revealed?
Since our “birth” as Computer and Engineering Consultants Ltd (CEC) in the 1970s, we have devoted considerable time and resources to separating hype from reality across numerous data processing (now called Information Technology) approaches, trends, methods, and solutions. In 1991, CEC penned an article titled “Reengineering Realities”, designed to give our clients and customers our thoughts on the new technology of the day – Computer-Aided Software Engineering (CASE) tools.
The notion of using re-engineering methodologies, tools, and techniques to recover and enhance system design intent was one of the latest entries in our industry's list of “solutions” at the time. When CASE technology was first popularized, it was intriguing to see how many existing tools were rebranded as CASE tools overnight.
As with the CASE phenomenon, many of today's tools, techniques, processes, and methodologies claim to support “big data”: to take “legacy data” and systems and extract credible new insights that were never contemplated in the original design intent. Many of these claims rest on inconsistent terminology.
The terms of this topic (reverse-engineering, forward-engineering, re-engineering, restructuring) were, and remain, ill-defined and subject to optimistic myths. Big data efforts assume that systems and their associated databases were forward-engineered, and can now be reverse-engineered.
Once we have defined our terms, we need to determine when and how re-engineering concepts can be applied. The contemporary term with the strongest similarities to forward-engineering is Enterprise Architecture. Next, we need to formulate detailed processes, tools, control mechanisms, and so on to put the concepts into practice and to provide quality “engineered” data, which is the precursor to big data.
You simply cannot reverse engineer something that was not engineered in the first place.
This webinar has two objectives. The first is to provide clear definitions of terms and concepts. The second is to show why Enterprise Architecture is imperative to gaining credible insights from big data efforts, and how it can be practically applied, both today and in the near future, to deliver big data realities.
We hope the logic presented in this webinar provides new insight into what is being attempted with big data. As in 1991, our advice to today's big data consumers remains: caveat emptor – let the buyer beware.