Meaningful Requirements Start Successful Data Projects

To me, development projects fail or succeed in the first few weeks. Once a project starts off in the wrong direction, momentum and expectations tend to prevent a return to the proper path. With today’s wealth of database options, each addressing exciting new possibilities, the right choice for the application’s data foundation plays a large part in steering a project to success.

At this year’s Enterprise Data World conference, William Brooks showed the relationships among different data modeling approaches, in effect detailing how to derive nine different model types from a detailed conceptual entity-relationship model. Mr. Brooks’ presentation hinted at a way to frame your data direction correctly early in a project, setting the stage for success.

According to his presentation, called “Symmetry in Modeling Approaches”, the different model types — relational, graph, dimensional, JSON, XML, and so on — all represent different perspectives on the same data relationships. Each suits a different application: dimensional for reporting, data vault for data warehouses, graph databases for multi-layered search, and so on. However, if properly constructed, they all map back in predictable and specific ways to a normalized entity-relationship model.
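To make the mapping concrete, here is a small invented example (mine, not from the presentation): the customer-places-purchase relationship as a normalized ER baseline, with comments noting how the other model types re-arrange the same relationship.

    -- Normalized ER baseline: two entities and one relationship.
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        customer_nm VARCHAR(100)
    );
    CREATE TABLE purchase (
        purchase_id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customer (customer_id),  -- the relationship
        purchase_dt DATE
    );
    -- A document (JSON/XML) model nests purchases inside each customer;
    -- a graph model promotes the foreign key to an explicit edge;
    -- a dimensional model recasts purchase as a fact and customer as a
    -- dimension. Each re-arranges the same underlying ER relationships.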

I and others have written that ER modeling should be integral to requirements definition, but Mr. Brooks’ presentation implies that ER modeling can serve as the basis for application architecture as well. Continue reading

Start Data Quality Improvements with a New Definition

What is Data Quality, anyway? If you are a data professional, I’m sure someone from outside our field has asked you that question, and if you’re like me, you’ve fallen into the trap of answering in data-speak.

For my listener, I’d guess the experience was like hearing a customer service rep justify denying a simple request by reciting byzantine company policies.

There’s a ton of great writing available on data quality, and I in no way mean to disparage it or its value in the field. But in that writing I’ve yet to find a concise and compelling definition that’s useful to non-data professionals. I’ll review one or two prevailing definitions and then offer one that could help us unlock real data quality improvements. Continue reading

Sound Data Culture Enables Modern Data Architectures

Modern data architectures, by enabling data analytics insights, promise to drive order-of-magnitude value gains across many business sectors. Not so long ago, big data presented a daunting challenge. Although tools were plentiful, we struggled to conceptualize the architecture and organization within which to capitalize on those tools. Now solid frameworks have emerged. This post reviews two promising models for modern data architecture, and discusses two key cultural values critical to their successful adoption: the drive to solve business challenges and the drive for universal data correctness. Continue reading

Fixing Tableau Desktop Blue Screen or Unresponsive

Tableau Desktop (10.2.2 on Windows 7 at work) was consistently locking up my computer or causing a BSOD when I tried to start it. After struggling for a while, I found the cause: Tableau consumed all available resources opening its log file, which had grown over time to 24 GB. Apparently my version of Tableau Desktop doesn’t periodically clean up the log files.

However, if the …/Logs folder isn’t there at Tableau startup, Tableau simply builds a new one and starts fresh, so you can safely delete the folder whenever Tableau isn’t running. To make that happen automatically, I’ve added a batch file with these commands to my startup folder: Continue reading
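The post’s actual commands sit behind the link; as a rough sketch (assuming the default “My Tableau Repository” location on Windows — adjust if yours differs), such a batch file might look like this:

    @echo off
    rem Hypothetical sketch: remove Tableau Desktop's Logs folder at login
    rem so it can't grow unbounded. Path is the default repository location.
    rem Only safe while Tableau is not running; it rebuilds the folder at startup.
    if exist "%USERPROFILE%\Documents\My Tableau Repository\Logs" (
        rmdir /s /q "%USERPROFILE%\Documents\My Tableau Repository\Logs"
    )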

Leader’s Data Manifesto Annual Review: “It’s About the Lopez Women”

A year ago I recounted proceedings from the 2017 EDW World conference, which included release of the Leader’s Data Manifesto (LDM). Last week’s EDW World 2018 served as a one-year status report on the Manifesto. The verdict: there’s still a long way to go, but speakers and attendees report dramatic progress and the emergence of shared values supporting data management’s role in enabling success and reducing risk.

To me, the most compelling example of progress was the story of the Lopez women, told by Tommie Lawrence, who leads patient data quality efforts at Sharp Healthcare, a major San Diego, CA, healthcare network. Ms. Lawrence’s team is responsible for data quality in about six million patient records across the 40 highest-priority of Sharp’s ~400 systems containing Patient Health Information (PHI).

A few years ago, Sharp Healthcare had two patients named Maria Lopez*, with birthdays one day apart. One suffered from kidney disease; the other had cancer. After a long wait, a kidney was found, and the hospital called the Maria with kidney disease, asking her to come in immediately for a transplant. During operation prep, an assistant noticed that Maria had cancer and halted the proceedings – it didn’t make sense to give the kidney to someone with cancer. Continue reading

The PDDQ Framework: Lean Data Quality for Patient Records

For most of us it may have slipped under the radar, but in December a groundbreaking Patient Demographic Data Quality (PDDQ) framework was jointly released by a US government agency and the CMMI Institute.

In response to findings that many “safety-related events were caused by or related to incorrect patient identification”, the Office of the National Coordinator for Health Information Technology (ONC) worked with CMMI to develop the PDDQ Framework to help organizations implement effective, sustainable data management practices around patient data.

Groundbreaking? Yes. As a lean framework appropriate for small businesses, the PDDQ shows how you can rightsize the Data Management Maturity Model to match your situation. That it is freely available demonstrates CMMI’s commitment to improving data quality in healthcare. Continue reading

Values and Behaviors of the Successful Agilist

Of course, any discussion of Agile values starts with the Agile Manifesto. The first sentence declares that Agile development is about seeking better ways and helping others. Then, as if espousing self-evident truths, the founders present four relative value statements. Finally, they emphasize appropriate balance, saying that the relatively less valued items aren’t worthless, implying that they are to be maintained insofar as they support the relatively more valued items.

While there is value in the four relative value statements, I believe most successful Agilists value the first and last statements more. So to me, the core Agile values are continuous improvement, helping others, and balance.

There’s a lot written about Agile behaviors, but as I read it, most is geared toward scrum masters or managers, and most concerns transitioning from the waterfall world. Starting from the premise that Agile methods are established, focusing on participants rather than managers, and assuming that behaviors are grounded in values, this post details the values and behaviors I’ve observed in those who succeed as Agile team members.

Continue reading

Escaping Teradata Purgatory (Select Failed. [2646] No more spool space)

Also see the related post More on “Select Failed. [2646] No more spool space”

If you are a SQL developer or data analyst working with Teradata, it is likely you’ve gotten this error message: “Select Failed. [2646] No more spool space”. Roughly speaking, Teradata “spool” is the space DBAs assign to each user account as work space for queries. So, for example, if your query needs to build an intermediate table behind the scenes to sort or otherwise process before it hands over your result set, that happens in spool space. It is limited, in part, to keep your potentially runaway query from using up too much space and clogging up the system.
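As a flavor of the kind of tactic the post covers, here is a hedged sketch (table and column names invented) of staging a filtered subset in a volatile table, so the big join works from a small intermediate table rather than spooling the whole source:

    -- Filter early into a volatile table so each step's spool stays small.
    CREATE VOLATILE TABLE recent_orders AS (
        SELECT order_id, customer_id, order_amt
        FROM sales.orders
        WHERE order_dt >= DATE '2018-01-01'   -- cut the data down before joining
    ) WITH DATA
    PRIMARY INDEX (customer_id)               -- match the join key to limit skew
    ON COMMIT PRESERVE ROWS;

    SELECT c.customer_nm, SUM(o.order_amt) AS total_amt
    FROM recent_orders o
    JOIN sales.customers c
      ON c.customer_id = o.customer_id
    GROUP BY 1;

One note on the PRIMARY INDEX choice: spool limits are enforced per AMP, so a badly skewed intermediate result can trigger the error even when total spool remains available.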

After briefly setting the stage, this post presents the top three tactics I use to avoid or overcome spool space errors. For the latter two tactics I’ll show working code, and at the end of the post you’ll find volatile-table DDL you can use to run the queries. Continue reading

The Practical Metadata Business Case

Even now, the business case for a metadata tool can seem unclear and difficult to quantify, but building one isn’t impossible.

We in the data management business tend to devalue solutions that don’t clearly derive from a coherent top-level view. We seek applications defined from an enterprise architecture, database designs from an enterprise data model, and data elements consistent with the enterprise business glossary.

However, tactical gains sometimes make sense even when the big picture is missing, and tactical metadata successes for analytics teams can raise awareness that sets the stage for evolving data management improvements. Continue reading

Reporting Database Design Guidelines: Dimensional Values and Strategies

I recently found myself in a series of conversations in which I needed to make a case for dimensional data modeling. The discussions involved a group of highly skilled data architects who were surely familiar with dimensional techniques but didn’t see them as the best solution in the case at hand.

I thought it would be easy to find a quick, jargon-free summary of reporting database design principles aimed at a technical audience. There were a number of good summaries (cited at the end of this post), but none was pitched just right for this highly-technical-but-outside-the-data-warehouse-world crowd.

I wanted to raise the dimensional model because, for most business reporting scenarios, it not only delivers on reporting needs but also, as a side effect of the design, helps report developers handle changes to those needs.
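To make “dimensional” concrete before the notes, a minimal invented sketch of the shape I was advocating: a single fact table at a declared grain, surrounded by wide, denormalized dimensions.

    -- Invented example: one row per product per day, the declared grain.
    CREATE TABLE sales_fact (
        date_key    INTEGER NOT NULL,   -- joins to date_dim
        product_key INTEGER NOT NULL,   -- joins to product_dim
        units_sold  INTEGER,
        sales_amt   DECIMAL(12,2)
    );

    -- Dimensions are wide and denormalized, built for filtering and grouping;
    -- new reporting needs usually mean new attributes here, not a redesign.
    CREATE TABLE product_dim (
        product_key INTEGER NOT NULL,
        product_nm  VARCHAR(100),
        category_nm VARCHAR(50),
        brand_nm    VARCHAR(50)
    );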

So these are the notes I prepared for the conversation. They helped us all get on the same page; hopefully they will be useful to others: Continue reading