Tag Archives: Strategy

Requirements Half-Life

I had pondered writing a post called “Requirements Decay” about how requirements don’t last forever. In my research I found that such a post, complete with “my” words “requirements decay” and “requirements half-life”, had already been done comprehensively here. In a compact argument underpinned by half-life mathematics, the anonymous author proposes that a requirement isn’t likely to stand unchanged forever and explores the implications.
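To make the half-life idea concrete, here is a minimal sketch of the math applied to a requirements baseline. It is my own illustration, not the linked author's model, and the 24-month half-life is a made-up figure:

    # A sketch only: standard exponential decay applied to a requirements baseline,
    # assuming a hypothetical 24-month half-life (not a measured value).
    def surviving_fraction(months_elapsed, half_life_months=24.0):
        """Fraction of the original requirements expected to stand unchanged."""
        return 0.5 ** (months_elapsed / half_life_months)

    for months in (6, 12, 24, 48):
        print(f"After {months} months: {surviving_fraction(months):.0%} unchanged")

By this reckoning, half of a frozen baseline no longer holds by the two-year mark.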

For me, requirements decay is an idea that helps us think realistically about project planning and improves our chances of meeting business needs. Continue reading

To SQL or to NoSQL?

Recently there was a great post at DZone recounting how one “tech savvy startup” moved away from its NoSQL database management system to a relational one. The writer, Matt Butcher, plays out the reasons under these main points:

  1. Our data is relational
  2. We need better querying
  3. We have access to better resources

Summing up: “The bottom line: choose the right tool.” Continue reading
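To illustrate the first two points, here is a hypothetical example (mine, not from Butcher's post) of the kind of question that becomes a single declarative query once the data lives in a relational database; the tables and figures are invented:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY,
                             customer_id INTEGER REFERENCES customers(id),
                             total REAL);
        INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
        INSERT INTO orders VALUES (10, 1, 250.0), (11, 1, 75.0), (12, 2, 40.0);
    """)

    # "We need better querying": per-customer totals in one statement, instead of
    # application code that walks documents and aggregates by hand.
    query = """
        SELECT c.name, SUM(o.total) AS lifetime_total
        FROM customers c JOIN orders o ON o.customer_id = c.id
        GROUP BY c.name
    """
    for name, lifetime_total in conn.execute(query):
        print(name, lifetime_total)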

Guiding Principles for Data Enrichment

The data integration process is traditionally thought of in three steps: extract, transform, and load (ETL). Putting aside the often-discussed order of their execution, “extract” is pulling data out of a source system, “transform” means validating the source data and converting it to the desired standard (e.g., yards to meters), and “load” means storing the data at the destination.

An additional step, data “enrichment”, has recently emerged, offering a significant improvement in the business value of integrated data. Applying it effectively requires a foundation of sound data management practices. Continue reading
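As a rough sketch of how enrichment fits alongside the traditional three steps, consider the toy pipeline below. The data, the yards-to-meters conversion, and the region lookup standing in for an external enrichment source are all invented for illustration:

    # Toy pipeline: names and data are invented; the region lookup stands in
    # for an external reference source consulted during enrichment.
    RAW_SOURCE = [{"site": "A", "distance_yd": "120"},
                  {"site": "B", "distance_yd": "300"}]
    REGION_LOOKUP = {"A": "Northeast", "B": "Southwest"}

    def extract():
        return list(RAW_SOURCE)                     # pull rows from the source system

    def transform(rows):
        result = []
        for row in rows:
            yards = float(row["distance_yd"])       # validate the source value
            result.append({"site": row["site"],
                           "distance_m": round(yards * 0.9144, 1)})  # yards to meters
        return result

    def enrich(rows):
        for row in rows:
            row["region"] = REGION_LOOKUP.get(row["site"], "Unknown")  # add reference data
        return rows

    def load(rows, destination):
        destination.extend(rows)                    # store at the destination

    warehouse = []
    load(enrich(transform(extract())), warehouse)
    print(warehouse)

The foundation-first point is that enrichment is only as good as the extract, transform, and load steps beneath it.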

Get the Big Picture: Effective High-Level Diagrams

I believe that early, effective big picture diagrams are key to application development project success. According to the old saw, no project succeeds without a catchy acronym. Maybe so, but I’d say no project succeeds without a good big picture diagram. The question: what constitutes a good one? To me good high-level diagrams have four key characteristics: they are simple, precise, expressive, and correct.

Continue reading

Data Management: So Easy a Caveman Can Do It?

I recently stumbled upon one of The Martin Agency’s hilarious Geico caveman ads and wondered, rather geekily, why they didn’t do one about data analysis. I think if a caveman suddenly arrived in the 2010s, he or she would see parallels between his or her life and the activities of today’s knowledge worker. When I thought it through, it seemed obvious that knowledge workers need to be more like farmers and less like hunter/gatherers if they want to achieve the full potential of business intelligence.

Continue reading

A New Framework for Data Management?

I hold a strong prejudice that IT paradigms are useful for about 30 years. The PC was dominant from 1980 to 2010, “online” mainframe systems from 1970 to 2000, and so on. If that’s the case, then time’s up for Bill Inmon’s data warehousing framework. So far no widely held pattern has emerged to help us envision data management in today’s world of big data, mobile BI, end-user visualization, and predictive analytics, but at their recent Business Technology conference, Forrester Research took a swing at it by presenting their 2009 “hub and spoke” organizational strategy as a data management vision. Continue reading

What Driving Dogs Tell Us About Learning

Recently the BBC posted this video. On first view it is just funny, but watching those dogs learn to drive really reminded me of personal experiences with IT teams making big learning transitions. To represent those real situations, let’s consider a fictional team of SQL developers facing the daunting task of deploying a functional Hadoop-based analytics prototype in two months. The video parallels their critical learning success factors: (1) set audacious goals, (2) learn bit by bit, and (3) know your limits.
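For the fictional SQL team, “learn bit by bit” might look like starting from the SQL they already know. The snippet below is only an illustration of that idea, using Spark SQL running in local mode; it is not meant as the team's actual stack, and the data is invented:

    # Requires pyspark; runs locally, no cluster needed.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local[*]").appName("prototype").getOrCreate()

    orders = spark.createDataFrame(
        [("2013-01-01", 120.0), ("2013-01-01", 75.5), ("2013-01-02", 40.0)],
        ["order_date", "total"])
    orders.createOrReplaceTempView("orders")

    # Familiar SQL, new engine: a first small step in the larger transition.
    spark.sql("""
        SELECT order_date, SUM(total) AS daily_total
        FROM orders
        GROUP BY order_date
        ORDER BY order_date
    """).show()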

Continue reading

Skills of the Data Architect

One common theme in recent tectonic shifts in information technology is data management. Analyzing customer responses may require combing through unstructured emails and tweets. Timely analysis of web interactions may demand a big data solution. Deployment of data visualization tools to users may dictate redesign of warehouses and marts. The data architect is a key player in harnessing and capitalizing on new data technologies. Continue reading

The gnarly, subtle-seeming data quality question

I’ve posted a couple of articles at my company’s blog site that reflect my view on data quality efforts:

  • Yes, there is a business case for improving data quality, and I’ve got real business value examples. If you look for real money where you anecdotally know there are data quality problems, you’ll likely find it in the high costs of data correction and rework, and in the savings related to business process improvements that reliable data enables.
  • There are distinct things an organization can do to reap the benefits of improved data management and data quality: (1) get started in the first place, (2) find the tangible benefits, (3) cross the departmental silos that exist in every large organization, and (4) promote sound data management practices.

Continue reading

Abstracting and recombining all the way to the bank

I’ve never really understood what people mean when they say “think outside the box”, but Jim Harris, in a recent OCDQ blog post, helped me figure it out.

Mr. Harris ends with this provocative line: “the bottom line is Google and Facebook have socialized data in order to capitalize data as a true corporate asset.” The post starts with a Cold War analogy and proceeds to describe how Facebook and Google have made big money as “internet advertising agencies”: offering free services in return for which users (like us) serve up personal data, then selling advertising space based on our data (hopefully anonymized).

Continue reading