Category Archives: Analysis

One More Species of Overloaded Data

A while back I wrote the post A Field Guide to Overloaded Data, which publicized the work of Duane Hufford, who examined different types of overloaded data during the 1990s. Over the years, his classifications of overloaded data have effectively categorized the data anomalies I’ve encountered in the wild.

That is, until recently, when a colleague encountered an array of file names in a single SQL Server column. This instance didn’t fit any of the three categories detailed in the earlier post, so I’m presenting it here. I’ve also added it to the original post.

Bundling

Definition: Bundled data is an array of values stored in a single column of a table.
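To make the pattern concrete, here’s a minimal sketch in Python of un-bundling such a column with pandas. The doc_id and attachments names and the semicolon delimiter are invented for illustration; they aren’t taken from the case above.

    import pandas as pd

    # One row per document, with its file names bundled into a single
    # column: the overload pattern described above.
    docs = pd.DataFrame({
        "doc_id": [101, 102],
        "attachments": ["a.pdf;b.pdf;c.pdf", "d.pdf"],
    })

    # Split the delimited string into a list, then explode to one row
    # per (doc_id, file_name) pair, the properly normalized shape.
    normalized = (
        docs.assign(file_name=docs["attachments"].str.split(";"))
            .explode("file_name")
            .drop(columns="attachments")
    )
    print(normalized)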

Guidelines for Successful Tableau Analytics Rollout

I’ve written previously about growing a Tableau analytics capability from a single user to multiple teams across an organization. This article is intended for those who have recently installed Tableau Server to enable folks outside their own sphere to interact with their Tableau creations. For the way ahead, it presents a few guidelines for successful development and deployment that data analysts should internalize as their analytics product grows.

The theme is, from the very start, to develop dashboards as if they serve hundreds of users and access millions of data records. If you do that, then as your analytical tools grow in usefulness and popularity, you’ll avoid difficult conversions and retooling later.

Reengineered Processes Need Business-Defined Data

“Business process reengineering is the act of recreating a core business process with the goal of improving product output, quality, or reducing costs.”* Recently I’ve perused articles on business process reengineering and have been surprised to find that they uniformly lack emphasis on data definition.

By establishing a shared business vocabulary, identifying and describing business-critical entities and events, and applying the defined entities and events in process and system design, BPR teams can ensure an efficient redesigned process that works smoothly from end to end.

In spite of data concerns making up two of the seven key BPR principles (“Merging data collection and processing units” and “Shared databases to interconnect dispersed departments”), articles on the topic tend to lump these concerns into general Information Technology topics, without acknowledging the need for business-driven data definition and management. For example, this post stresses the need for “more sources of data and enhanced connectedness to information”. This one recounts a famous Ford BPR example where a new database was central to the solution. Many, like this one, cite “shared databases” as a core principle. However, none details the business leadership and participation necessary to define a common data foundation across a reengineered business process.

Two Design Principles for Tableau Data Sources

It’s not unusual for talented teams of business analysts to find themselves maintaining significant inventories of Tableau dashboards. In addition to sound development practices, following two key principles of data source design helps these teams spend less time on maintenance and more time building new visualizations: publish Tableau data sources separately from workbooks, and wait until the last opportunity to join dimension and fact data.


Imagine a business team — let’s call it Marketing Analytics — with read-only access to a Hadoop store or an enterprise data warehouse. They gain approval for Tableau licenses and Tableau Server publication rights for five tech-savvy data analysts. After a few initial successes with some impactful visualizations, the team gathers steam. After a while the team finds itself supporting scores of published workbooks serving a few hundred managers and executives. In spite of generally sound practices, Marketing Analytics struggles to maintain consistency from one Tableau workbook to another.
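For the first principle, the team publishes each data source once and maintains it independently, and every workbook connects to that published source rather than embedding its own copy. As a rough sketch of what this looks like when automated, here’s how a team might publish a shared data source with Tableau’s Python library, tableauserverclient. The server URL, credentials, project name, and extract file are hypothetical placeholders.

    import tableauserverclient as TSC

    auth = TSC.TableauAuth("analyst", "secret", site_id="marketing")
    server = TSC.Server("https://tableau.example.com", use_server_version=True)

    with server.auth.sign_in(auth):
        # Locate the team's project by name.
        projects, _ = server.projects.get()
        project = next(p for p in projects if p.name == "Marketing Analytics")

        # Publish (or refresh) the shared data source. Workbooks then
        # connect to this published source instead of embedding their own.
        datasource = TSC.DatasourceItem(project.id)
        datasource = server.datasources.publish(
            datasource, "sales_extract.tdsx", TSC.Server.PublishMode.Overwrite
        )
        print("Published data source id:", datasource.id)

With the data source published once, every workbook that connects to it picks up the same field definitions and calculations, which is exactly the consistency the team has been missing.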


Leadership Must Prioritize Data Quality

Data quality improvements follow specific, clear leadership from the top. Project leaders count data quality among project goals when senior management encourages them to do so with unequivocal incentives, a common business vocabulary, shared understanding of data quality principles, and general agreement on the objects of interest to the business and their key characteristics.

Poor data quality costs businesses about “$15 million per year in losses,” according to Gartner. As Tendü Yoğurtçu puts it, “artificial intelligence (AI) and machine learning algorithms are only as effective as the data they use.” Data scientists understand the difficulties well: they spend over 70% of their time on data preparation.

Recent studies report that data entry typos are the largest source of poor data quality (here and here). My experience says otherwise. From what I’ve seen, operational data is generally good, and errors appear only when data changes context. In this post I’ll detail why data quality is management’s responsibility, and why it will remain poor until leadership makes it a priority.

Anonymize Data for Better Executive Analytics

Reading articles about data anonymization makes it clear that it is not an entirely effective security measure (here and here), but it is still part of a robust security capability, and it is required if your organization is affected by GDPR. (I use “anonymization” as a general term encompassing techniques that de-identify personal data within a given data set.)
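As a minimal sketch of one such technique, the snippet below replaces direct identifiers with salted hashes (pseudonymization) in a hypothetical record. The field names and salt are invented, and on its own this does not guarantee anonymity, since quasi-identifiers can still re-identify individuals.

    import hashlib

    SALT = b"store-and-rotate-this-secret-separately"  # hypothetical salt

    def pseudonymize(value: str) -> str:
        """Replace an identifier with a stable, salted hash token."""
        return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

    record = {"name": "Jane Doe", "email": "jane@example.com",
              "region": "West", "annual_spend": 1200}

    # Direct identifiers are tokenized; analytic fields pass through,
    # so the record remains useful for planning and analysis.
    anonymized = {**record,
                  "name": pseudonymize(record["name"]),
                  "email": pseudonymize(record["email"])}
    print(anonymized)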

But there’s a positive side to anonymized data that hasn’t received much press. Providing anonymized data to senior managers who don’t need access to personal data can encourage them to take a broader perspective, and thereby bring new energy to fact-based senior planning and analysis.

Toward an Analytics Code of Ethics

In data management and analytics, we often focus on correcting an apparent inability or unwillingness on the part of business leaders to effectively gather and capitalize on data resources. From that perspective, ethics can seem a side issue, difficult to prioritize given the scale and persistence of our other challenges.

At least that was my perspective, and my initial response when confronted recently by a family member on this topic. Her view from outside the field was that ethics should be a primary concern. As I’ve reflected on this conversation, I’ve come around to her point.

In recent years we’ve seen many examples of data misuse due to ethical lapses. Here’s a post that gives five examples, including police officers looking up data on individuals unrelated to any police business, an employee passing personal data (including SSNs) to a text-sharing site, and Uber’s “god view,” available at the corporate level, which an employee used in 2014 to track a journalist’s location.

Meaningful Requirements Start Successful Data Projects

To me, development projects fail or succeed in the first few weeks. Once a project starts off in the wrong direction, momentum and expectations tend to prevent a return to the proper path. With today’s wealth of database options each addressing exciting new possibilities, the right choice for the application’s data foundation plays a large part in steering a project to success.

At this year’s Enterprise Data World conference, William Brooks showed the relationships among different data modeling approaches, in effect detailing how to derive nine different model types from a detailed conceptual entity-relationship model. Mr. Brooks’ presentation hinted at a way to correctly frame your data direction early in a project, setting the stage for success.

According to his presentation, called “Symmetry in Modeling Approaches”, the different model types — relational, graph, dimensional, JSON, XML, and so on — all represent different perspectives on the same data relationships. Each suits a different application: dimensional models for reporting, data vault for data warehouses, graph databases for multi-layered search, and so on. However, if properly constructed, they all map back in predictable and specific ways to a normalized entity-relationship model.
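As a toy illustration of that mapping, the sketch below expresses the same entities twice: first as a normalized ER-style schema, then as a dimensional star-schema view derived from it. The table and column names are invented for the example; the point is only that the reporting structure falls out of the normalized model in a predictable way.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        -- Normalized ER model: one table per entity, foreign keys for
        -- the relationships between them.
        CREATE TABLE customer (customer_id INTEGER PRIMARY KEY,
                               name TEXT, region TEXT);
        CREATE TABLE product  (product_id INTEGER PRIMARY KEY,
                               name TEXT, category TEXT);
        CREATE TABLE sale     (sale_id INTEGER PRIMARY KEY,
                               customer_id INTEGER REFERENCES customer,
                               product_id INTEGER REFERENCES product,
                               sale_date TEXT, amount REAL);

        -- Dimensional perspective on the same relationships: the entity
        -- tables act as dimensions, and the event table becomes the fact.
        CREATE VIEW fact_sales AS
        SELECT s.sale_date, c.region, p.category, s.amount
        FROM sale s
        JOIN customer c ON c.customer_id = s.customer_id
        JOIN product  p ON p.product_id  = s.product_id;
    """)

    conn.execute("INSERT INTO customer VALUES (1, 'Acme Corp', 'West')")
    conn.execute("INSERT INTO product  VALUES (1, 'Widget', 'Hardware')")
    conn.execute("INSERT INTO sale     VALUES (1, 1, 1, '2018-06-01', 99.50)")
    print(conn.execute("SELECT * FROM fact_sales").fetchall())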

I and others have written that ER modeling should be integral to requirements definition, but Mr. Brooks’ presentation implies that ER modeling can also serve as the basis for application architecture.

Start Data Quality Improvements with a New Definition

What is Data Quality anyway? If you are a data professional, I’m sure someone from outside our field has asked you that question, and if you’re like me you’ve fallen into the trap of answering in data-speak.

For my listener, I’d guess the experience was like having a customer service rep justify turning down a simple request by describing byzantine company policies.

There’s a ton of great writing available on data quality, and I in no way mean to disparage it or its value in the field. But in that writing I’ve yet to find a concise and compelling definition that’s useful to non-data professionals. I’ll review one or two prevailing definitions and then offer one that could help us unlock real data quality improvements.

The Practical Metadata Business Case

Even now, the business case for a metadata tool seems unclear and difficult to quantify, but it isn’t impossible to make.

We in the data management business tend to devalue solutions that don’t clearly derive from a coherent top-level view. We seek applications defined from an enterprise architecture, database designs from an enterprise data model, and data elements consistent with the enterprise business glossary.

However, sometimes tactical gains make sense even when the big picture is missing, and tactical metadata successes on analytics teams can raise awareness that sets the stage for broader data management improvements.