Tag Archives: Business Intelligence

Guidelines for Successful Tableau Analytics Rollout

I’ve written previously about developing Tableau analytics capability, from a single user to multiple teams across an organization. This article is intended for those who have just installed Tableau Server to enable folks outside their own sphere to interact with their Tableau creations. For the way ahead, it presents a few guidelines for successful development and deployment that data analysts should internalize as their analytics product grows.

The theme is, from the very start, to develop dashboards as if they serve hundreds of users and access millions of data records. If you do that, then as your analytical tools grow in usefulness and popularity, you’ll avoid difficult conversions and retooling later. Continue reading

Two key interview questions for SQL developer candidates

Frequently in my career I’ve selected or helped select ETL and reporting professionals who need SQL skills. For some of those opportunities, placement firms returned resumes with interminable, and nearly identical, lists of technical achievements in excruciating, unnecessary detail (paraphrasing: “Wrote SELECT statements using GROUP BY”, “Applied both inner and outer joins”). Before interviewing, we typically ranked candidates in order of preference based on their resumes. Candidates’ interview success bore little relation to those resume-based rankings.

With some candidates I’ve encountered consistent pauses after fact questions, sometimes accompanied by keyboard clicks. They obviously used “think time” to look up answers on the internet. As a result, “fact” questions didn’t distinguish one candidate from another.

On the other hand, open-ended questions worked well. I’ve written before that interviews should ask opinion rather than fact questions. Open-ended questions and thought exercises, as opposed to fact questions, assess SQL skill level, are hard to look up quickly on the web, and have the added benefit of demonstrating a candidate’s reasoning and communication skills. Here are two of my favorite examples: Continue reading

Data Architecture for Improved Dashboard Performance

Sometimes success seems like a data analytics team’s worst enemy. A few successful visualizations packaged up into a dashboard by a small skunkworks team can generate such interest that a year later the team has published scores of mission-critical dashboards. As their use spreads throughout the organization, and as features expand to meet the needs of a growing user base, the dashboards can slow down and data refreshes can fail as they exceed database and analytics tool time and resource limits.

There are steps teams can take to deal with such slowdowns. Analytics tool vendors typically offer efficiency guides, like this one, that help resolve dashboard response time issues. A frequent recommendation is for the dashboard to use summary tables rather than full detail, reducing the amount of data that the dashboard has to parse as the user waits for a viz to render.*
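As a minimal sketch of the idea, assume a hypothetical sales_detail table with one row per transaction. A daily summary at the grain the dashboard actually displays shrinks the rows scanned from millions to thousands (exact CTAS syntax varies by database; Teradata, for instance, wants CREATE TABLE … AS (SELECT …) WITH DATA):

```sql
-- Hypothetical example: sales_detail holds one row per transaction,
-- potentially millions of rows. The summary keeps one row per
-- day/region/product, which is all a typical trend viz needs.
CREATE TABLE sales_daily_summary AS
SELECT
    sale_date,
    region_id,
    product_id,
    COUNT(*)         AS transaction_count,
    SUM(sale_amount) AS total_sales
FROM sales_detail
GROUP BY sale_date, region_id, product_id;
```

Pointed at the summary instead of the detail, the dashboard parses orders of magnitude less data while the user waits for the viz to render.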

Summary tables also help resolve data refresh timeouts, but their long-term success depends on the foundation on which they are built and how they are organized. The most obvious approach is to build custom summaries serving each dashboard. While report-specific tables stand out as a quick win, they tend to be a suboptimal solution because they (1) reduce the team’s ability to respond as requirements evolve, and (2) make metrics in different dashboards less consistent; the sketch below makes the contrast concrete. Continue reading
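To illustrate the consistency point, suppose the two dashboards below both read the shared summary sketched above. A metric like total sales is then computed from one table and cannot quietly drift between reports the way it can when each dashboard maintains its own custom table:

```sql
-- Regional trend dashboard: rolls the shared summary up by day and region.
SELECT sale_date, region_id, SUM(total_sales) AS total_sales
FROM   sales_daily_summary
GROUP  BY sale_date, region_id;

-- Product mix dashboard: rolls the same summary up by product.
SELECT product_id, SUM(total_sales) AS total_sales
FROM   sales_daily_summary
GROUP  BY product_id;
```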

Two Design Principles for Tableau Data Sources

It’s not unusual for talented teams of business analysts to find themselves maintaining significant inventories of Tableau dashboards. In addition to sound development practices, following two key principles of data source design helps these teams spend less time on maintenance and more on building new visualizations: publish Tableau data sources separately from workbooks, and wait until the last opportunity to join dimension and fact data.
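A minimal sketch of the late-join principle, using hypothetical table and view names: keep the fact and the dimension as separate thin views, publish each as its own data source, and let individual worksheets relate or join them only where a viz needs both. Pre-joining them into one wide extract bakes the join into every workbook that copies it.

```sql
-- Each view below would be published as its own Tableau data source.
-- The fact view carries only keys and measures; the dimension view
-- carries only attributes. Workbooks join ("relate") them at
-- analysis time instead of inheriting one wide, pre-joined extract.
CREATE VIEW v_fact_orders AS
SELECT order_id, customer_id, order_date, amount
FROM   fact_orders;

CREATE VIEW v_dim_customer AS
SELECT customer_id, customer_name, segment, region
FROM   dim_customer;
```

When dim_customer gains an attribute, only v_dim_customer changes, and every workbook built on the published source picks the change up automatically.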

Imagine a business team — let’s call it Marketing Analytics — with read-only access to a Hadoop store or an enterprise data warehouse. They gain approval for Tableau licenses and Tableau Server publication rights for five tech-savvy data analysts. After a few initial successes with some impactful visualizations, the team gathers steam. After a while the team finds itself supporting scores of published workbooks serving a few hundred managers and executives. In spite of generally sound practices, Marketing Analytics struggles to maintain consistency from one Tableau workbook to another.

Continue reading

More on “Select Failed. [2646] No more spool space”

Also see the previous related post Escaping Teradata Purgatory (Select Failed. [2646] No more spool space)

Not too long ago I posted on how to avoid the dreaded “No more spool space” error in Teradata SQL. That post recounted approaches to restructuring SQL queries so that they avoid being cancelled for using inordinate amounts of Teradata resources. Teradata is an immensely powerful, if aging, database engine, but it does little to help anyone not steeped in its internal structure use its resources efficiently.
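As one hedged illustration of the kind of restructuring that post describes (table and column names here are hypothetical): rather than joining a huge detail table in place and spooling most of it, stage just the rows and columns you need in a volatile table whose PRIMARY INDEX matches the join key, so the subsequent join can proceed without redistributing the staged rows.

```sql
-- Stage only the needed rows and columns; the PRIMARY INDEX on the
-- join key keeps the staged rows co-located with the dimension rows
-- they will join to, cutting spool usage for the final query.
CREATE VOLATILE TABLE recent_orders AS (
    SELECT order_id, customer_id, amount
    FROM   fact_orders
    WHERE  order_date >= DATE '2019-01-01'
) WITH DATA
PRIMARY INDEX (customer_id)
ON COMMIT PRESERVE ROWS;

SELECT   c.region,
         SUM(r.amount) AS total_amount
FROM     recent_orders r
JOIN     dim_customer  c
         ON c.customer_id = r.customer_id
GROUP BY c.region;
```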

But what if, as sometimes happens, your DB admin team further tightens the screws by reducing spool space or imposing new execution-time or CPU usage limits? Then you’ll have to go further to make queries efficient, as happened on one team I was part of. Beyond the steps previously recommended, here’s what we did: Continue reading

How to be a good client

I recently listened to Brian O’Neill’s excellent interview with Tom Davenport, headlined “Why on a scale of 1-10, the field of analytics has only gone from a one to about a two in ten years time.”

The conversation covered a lot of ground as Mr. O’Neill and Mr. Davenport explored the reasons why. Highlights included a general lack of technical literacy and the absence of an organizational data-driven culture. To their credit, though, they took responsibility on behalf of analytics professionals, emphasizing how we in the field could change to make more analytics efforts successful. Rather than focusing on technology-centered solutions, they recommended that data and AI professionals first seek to understand and empathize with their clients or internal customers, and then develop analytics capabilities in light of that understanding.

I agree that analytics professionals can improve their game. However, as a former consultant who’s switched over to the client side, I think there’s room for improvement all around. To me, clients who work proactively to prepare for an analytics project position themselves for better outcomes. Continue reading

Leader’s Data Manifesto at #EDW19: Building a Foundation for Data Science

It’s a truism that data is a resource; to prove it, you just have to follow the money. As the illustration shows, the vast majority of corporate market value derives from intangible assets. Just as money is an abstraction that represents wealth, data is an abstraction that represents these intangible assets.

It’s year three after initial rollout of the Leader’s Data Manifesto (LDM). Since then, many widely publicized events have highlighted the value of data and metadata, and the importance of sound data management (here, here, and here). Recently at Enterprise Data World, John Ladley, Danette McGilvray, James Price, and Tom Redman presented this year’s LDM update. They reintroduced the Manifesto, recounted events of the past year, discussed strategy for the coming year, and issued a call to action for data professionals. Continue reading

Enterprise Data Prep for Analytics: Two Principles

Data scientists spend most of their time doing data integration rather than gathering insights. In my interview with data scientist Yan Li, she said that data collection and prep take at least 70% of her time. Obviously, there’s a lot of integration work to do on data that’s new to analytics efforts, but not every analysis uses brand-new data. Organizations can improve analytics efficiency by staging commonly used data pre-integrated for data science.
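A minimal sketch of what “pre-integrated” staging might look like, with hypothetical table names: the joins and cleanup that every analysis would otherwise repeat are done once, centrally, and exposed as a ready-to-query view.

```sql
-- Integrate transactional data with master data once, centrally,
-- so each analysis starts from a clean, conformed starting point
-- instead of repeating the same joins and cleanup.
CREATE VIEW stg_sales_integrated AS
SELECT s.sale_id,
       s.sale_date,
       s.quantity * s.unit_price AS sale_amount,
       p.product_name,
       p.product_category,
       c.customer_segment,
       c.customer_region
FROM   sales           s
JOIN   master_product  p ON p.product_id  = s.product_id
JOIN   master_customer c ON c.customer_id = s.customer_id;
```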

For years, large organizations have supported data warehouses, but prevailing data warehousing practices often fail when faced with “big data” volume and velocity. Still, warehousing teams in large organizations can pre-prep frequently used internal data. Examples include reference and master data, production and sales records, and so on. Continue reading

Toward an Analytics Code of Ethics

In data management and analytics, we often focus on correcting business leaders’ apparent inability or unwillingness to effectively gather and capitalize on data resources. From that perspective, ethics looks like a side issue that’s difficult to prioritize given the scale and persistence of our other challenges.

At least that was my perspective, and my initial response when confronted recently by a family member on this topic. Her view from outside the field was that ethics should be a primary concern. As I’ve reflected on this conversation, I’ve come around to her point.

In recent years we’ve seen many examples of data misuse due to ethical lapses. Here’s a post that gives five examples, including police officers looking up data on individuals unconnected to any police business, an employee passing personal data including SSNs to a text-sharing site, and Uber’s corporate-level “god view”, which an employee used in 2014 to track a journalist’s location. Continue reading

Start Data Quality Improvements with a New Definition

What is Data Quality, anyway? If you are a data professional, I’m sure someone from outside our field has asked you that question, and if you’re like me, you’ve fallen into the trap of answering in data-speak.

For my listener, I’d guess the experience was like having a customer service rep who has just turned down a simple request justify it by reciting byzantine company policies.

There’s a ton of great writing available on data quality, and I in no way mean to disparage it or its value in the field. But in that writing I’ve yet to find a concise and compelling definition that’s useful to non-data professionals. I’ll review one or two prevailing definitions and then offer one that could help us unlock real data quality improvements. Continue reading