For complex work, a very simple app requires a very smart user. That point was driven home to me in a Tableau Fundamentals class this week. I don’t see that as bad news at all.
Not so long ago I wrote a piece that attempted to inject a bit of reality into the claims then made by some data visualization tool vendors. I cited challenges that can surprise those who adopt such tools for their obvious and compelling data presentation abilities: unexpectedly complex data integration, the need to establish solid reporting standards and practices, the work of scaling report distribution as demand for the visualizations expands, and the conversion effort that can follow version upgrades.
Although it was a Fundamentals class, the experienced, enthusiastic instructor and the small, intelligent student group combined to make the two days immensely valuable, going far beyond the basics on the program (more on specific lessons learned in an upcoming post). The instructor’s focus on principles rather than recipes drove home this point: to use Tableau effectively you have to understand not only how to operate the tool itself but also the underlying principles of data management, usability, and statistics.
Could it be that adopting easy-to-use Tableau in place of, say, SSRS, Cognos, or SAS requires an upgrade in staff knowledge and expertise?
The term “trust” implies absolutes, and that’s a good thing for relationships and art. However, in the business of data management, framing trust in data in true or false terms puts data governance at odds with good practice. A more nuanced view that recognizes the usefulness of not-fully-trusted data can bring vitality and relevance to data governance, and help it drive rather than restrict business results.
The Wikipedia entry — for many a first introduction to data governance — cites Bob Seiner’s definition: “Data governance is the formal execution and enforcement of authority over the management of data and data related assets.” The entry is accurate and useful, but words like “trust”, “financial misstatement”, and “adverse event” lead the reader to focus on the risk management role of governance.
However, the other role of data governance is to help make data available, useful, and understood. That means sometimes making data that’s not fully trusted available and easy to use.
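To make that concrete, here’s a minimal sketch in Python of a data catalog that publishes not-fully-trusted data with an explicit trust tier instead of withholding it. The tier names, datasets, and caveats are hypothetical illustrations of the idea, not anything from the original post:

```python
from dataclasses import dataclass
from enum import Enum


class TrustTier(Enum):
    """How strongly governance vouches for a dataset (hypothetical tiers)."""
    CERTIFIED = "certified"      # fully governed and reconciled
    PROVISIONAL = "provisional"  # useful for analysis; known gaps documented
    RAW = "raw"                  # as landed from the source; use with care


@dataclass
class CatalogEntry:
    """A catalog record pairing a dataset with its trust tier and caveats."""
    name: str
    tier: TrustTier
    caveats: list


catalog = [
    CatalogEntry("monthly_revenue", TrustTier.CERTIFIED, []),
    CatalogEntry("web_clickstream", TrustTier.PROVISIONAL,
                 ["bot traffic not fully filtered"]),
]

# Publish everything, clearly labeled, rather than only the certified data.
for entry in catalog:
    print(f"{entry.name}: {entry.tier.value}; caveats: {entry.caveats or 'none'}")
```

The design point is that the trust label travels with the data, so consumers can decide for themselves whether a provisional dataset is good enough for the question at hand.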
Recently I’ve had the opportunity to dig deeply into the CMMI Data Management Maturity model. Since its release, the DMM model has emerged as the de facto standard data management maturity framework (I’ve listed other frameworks at the end of this post).
I’m deeply impressed by the completeness and polish of the DMM model as a comprehensive catalog of the processes required for effective data management. Even after decades in the business, the model’s broad scope and business focus changed the way I think about data management.
Over the past year I’ve reviewed what seem like countless plans for enterprise data warehouses. The plans address real problems: the organizations involved need better data to recognize trends and react faster to opportunities and challenges; business measures and analyses are unavailable because data in source systems is inconsistent, incomplete, or erroneous, or contains current values but no history; and so on.
The plans detail source system data and its integration into a central data hub. But the ones I’m referring to don’t explain how the data will be delivered, or portray a specific vision of how the data is to drive business value. Instead, their business case rests on what I’ll call the “railroad hypothesis”: just as no one could have predicted how the railroads would enable development of the West, improved data infrastructure will create order-of-magnitude improvements in the ability to access, share, and utilize data, from which order-of-magnitude business benefits will follow.* All too often these plans just build bridges to nowhere.
A quick Google search seems to reveal that if you manage People, Process, and Technology you’ve got everything covered. That’s simply not the case. Data is separate and distinct from the things it describes (namely people, processes, and technologies), and organizations must manage it separately and intentionally.
The data management message seems a tough one to deliver effectively. Data management interest groups have hammered at it for years, but a sometimes preachy, jargon-laden approach relying on data quality train-wreck stories hasn’t generally loosened corporate purse strings. Yes, financial companies’ data-first successes in the 1990s paved the way for the ’00s dot-com juggernauts, whose market capitalization stems largely from innovative data management. Yet we still have huge personal data breaches at some of our most trusted companies, and data scientists spend the bulk of their valuable time acquiring, cleaning, and integrating poorly organized data.
The first steps are often the hardest, so here’s a short, no-jargon, big-picture guide to getting started with effective data management in three steps:
His point: since big data applications are often off the beaten IT path, big data professionals must solve “problems that companies don’t even know they have – as their insights highlight bottlenecks or inefficiencies in the production, marketing or delivery processes,” often working with “data which does not fit comfortably into tables and charts, such as human speech and writing.”
I had pondered writing a post called “Requirements Decay” about how requirements don’t last forever. In my research I found that such a post, complete with “my” words “requirements decay” and “requirements half-life”, had already been done comprehensively here. In a compact argument underpinned by half-life mathematics, the anonymous author proposes that a requirement isn’t likely to stand unchanged forever and explores the implications.
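The underlying mathematics is the standard exponential decay formula; here’s a sketch, with symbols of my own choosing rather than the linked post’s:

\[
R(t) = R_0 \left(\frac{1}{2}\right)^{t/t_{1/2}}
\]

where \(R_0\) is the number of requirements initially captured and \(t_{1/2}\) is the requirements half-life, the time in which half of the original requirements can be expected to change. With a two-year half-life, for example, only about 71% of requirements (\(2^{-1/2} \approx 0.707\)) would survive the first year unchanged.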
For me, requirements decay is an idea that helps us think realistically about project planning and improves our chances of meeting business needs.
Recently there was a great post at DZone recounting how one “tech savvy startup” moved away from its NoSQL database management system to a relational one. The writer, Matt Butcher, plays out the reasons under these main points:
The data integration process is traditionally thought of in three steps: extract, transform, and load (ETL). Putting aside the often-discussed order of their execution, “extract” means pulling data out of a source system, “transform” means validating the source data and converting it to the desired standard (e.g., yards to meters), and “load” means storing the data at the destination.
An additional step, data “enrichment”, has recently emerged, offering a significant improvement in the business value of integrated data. Applying it effectively requires a foundation of sound data management practices.
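To make the steps concrete, here’s a minimal sketch in Python. The function names, the yards-to-meters conversion from the example above, and the region lookup used for enrichment are my own illustrations, not a reference implementation:

```python
# Minimal ETL-plus-enrichment sketch; names and data are illustrative only.

YARDS_TO_METERS = 0.9144  # exact conversion factor


def extract(source_rows):
    """Extract: pull raw records out of the source system."""
    return list(source_rows)


def transform(row):
    """Transform: validate, then convert to the standard (yards to meters)."""
    if row["distance_yd"] < 0:
        raise ValueError(f"invalid distance in row {row['id']}")
    return {"id": row["id"], "distance_m": row["distance_yd"] * YARDS_TO_METERS}


def enrich(row, reference):
    """Enrich: add business value by joining in external reference data."""
    return {**row, "region": reference.get(row["id"], "unknown")}


def load(rows, destination):
    """Load: store the finished records at the destination."""
    destination.extend(rows)


source = [{"id": 1, "distance_yd": 100}, {"id": 2, "distance_yd": 220}]
region_lookup = {1: "north", 2: "south"}  # hypothetical reference data

warehouse = []
load([enrich(transform(r), region_lookup) for r in extract(source)], warehouse)
print(warehouse)  # rows converted to meters, each tagged with a region
```

Note that enrichment slots in as just one more step in the chain, which is why it pays off only when the extract, transform, and load steps beneath it are already well managed.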
I believe that early, effective big-picture diagrams are key to application development project success. According to the old saw, no project succeeds without a catchy acronym. Maybe so, but I’d say no project succeeds without a good big-picture diagram. The question: what constitutes a good one? To me, good high-level diagrams have four key characteristics: they are simple, precise, expressive, and correct.