I’ve written previously about the development of a Tableau analytics capability from a single user to multiple teams across an organization. This article is intended for those who may have first installed Tableau Server to enable folks outside their own sphere to interact with their Tableau creations. For the way ahead, it presents a few guidelines for successful development and deployment that data analysts should internalize as their analytics product grows.
The theme is, from the very start, to develop dashboards as if they serve hundreds of users and access millions of data records. If you do that, then as your analytical tools grow in usefulness and popularity, you’ll avoid difficult conversions and retooling later.
Sometimes success seems like a data analytics team’s worst enemy. A few successful visualizations packaged up into a dashboard by a small skunkworks team can generate such interest that a year later the team has published scores of mission-critical dashboards. As their use spreads throughout the organization, and as features expand to meet the needs of a growing user base, the dashboards can slow down and data refreshes can fail as they exceed database and analytics-tool time and resource limits.
There are steps teams can take to deal with such slowdowns. Analytics tool vendors typically offer efficiency guides, like this one, that help resolve dashboard response time issues. A frequent recommendation is for the dashboard to use summary tables rather than full detail, reducing the amount of data that the dashboard has to parse as the user waits for a viz to render.*
Summary tables also help resolve data refresh timeouts, but their long-term success for the team depends on the foundation on which they are built and how they are organized. The most obvious approach is to build custom summaries serving each dashboard. While report-specific tables stand out as a quick win, analysis shows they are a suboptimal solution because they tend to (1) reduce the team’s ability to respond to evolving requirements, and (2) make metrics less consistent from one dashboard to another.
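To make the contrast concrete, here is a minimal sketch in generic SQL (hypothetical table and column names; exact DDL syntax varies by platform). A summary table built at a shared grain serves many dashboards, each aggregating further from it, so a metric like revenue keeps a single definition:

    -- Pre-aggregate detail to a shared grain: dashboards scan thousands
    -- of summary rows instead of millions of detail rows.
    CREATE TABLE sales_summary AS
    SELECT
        sale_date,
        region,
        status,
        COUNT(*)    AS sale_count,
        SUM(amount) AS revenue
    FROM sales_detail
    GROUP BY sale_date, region, status;

The report-specific alternative bakes each dashboard’s filters into its own table (say, one summary counting only closed sales and another counting all of them), which is exactly where definitions of the “same” metric start to drift.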
It’s not unusual for talented teams of business analysts to find themselves maintaining significant inventories of Tableau dashboards. In addition to sound development practices, following two key principles of data source design helps these teams spend less time on maintenance and more time building new visualizations: publishing Tableau data sources separately from workbooks, and waiting until the last opportunity to join dimension and fact data.
Imagine a business team — let’s call it Marketing Analytics — with read-only access to a Hadoop store or an enterprise data warehouse. They gain approval for Tableau licenses and Tableau Server publication rights for five tech-savvy data analysts. After a few initial successes with some impactful visualizations, the team gathers steam, and before long it finds itself supporting scores of published workbooks serving a few hundred managers and executives. In spite of generally sound practices, Marketing Analytics struggles to maintain consistency from one Tableau workbook to another.
Tableau Desktop (10.2.2 on Windows 7 at work) was consistently locking up my computer or causing a BSOD when I tried to start it. After struggling for a while, I found the cause: Tableau was consuming all available resources opening its log file, which had grown over time to 24 GB. Apparently my version of Tableau Desktop doesn’t periodically clean up its log files.
However, if the …/Logs folder isn’t there at Tableau startup, Tableau just builds a new one and starts fresh, so whenever Tableau isn’t running you can simply delete the folder. To make that happen automatically, I’ve added a batch file with these commands to my startup folder:
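(A minimal sketch of such a batch file; the repository path and the existence check are my assumptions, not necessarily the original commands.)

    @echo off
    rem Hypothetical sketch: delete the Tableau Desktop log folder at
    rem logon so Tableau rebuilds a fresh one on its next start. The
    rem default Windows repository path below is an assumption.
    if exist "%USERPROFILE%\Documents\My Tableau Repository\Logs" (
        rd /s /q "%USERPROFILE%\Documents\My Tableau Repository\Logs"
    )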
If you are a SQL developer or data analyst working with Teradata, it is likely you’ve gotten this error message: “Select Failed.  No more spool space”. Roughly speaking, Teradata “spool” is the space DBAs assign to each user account as work space for queries. So, for example, if your query needs to build an intermediate table behind the scenes to sort or otherwise process before it hands over your result set, that happens in spool space. It is limited, in part, to keep your potentially runaway query from using up too much space and clogging up the system.
After briefly setting the stage, this post presents the top three tactics I use to avoid or overcome spool space errors. For the latter two tactics I’ll show working code, and at the end of the post you’ll find volatile table DDL that you can use to get the queries to run.
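To give a flavor of the approach, here is a hedged sketch (hypothetical tables, not necessarily one of the post’s three tactics) of a common spool-saving pattern: breaking one large query into steps by staging a reduced intermediate result in a volatile table, so no single step needs enough spool for the whole plan.

    -- Stage only the rows and columns the final join actually needs;
    -- volatile tables are session-scoped and vanish at logoff.
    CREATE VOLATILE TABLE recent_orders AS (
        SELECT order_id, customer_id, order_amount
        FROM orders
        WHERE order_date >= DATE '2017-01-01'
    ) WITH DATA
    ON COMMIT PRESERVE ROWS;

    -- The join now reads the small staged table rather than re-deriving
    -- the filtered rows inside a larger, spool-hungry plan.
    SELECT c.customer_name,
           SUM(r.order_amount) AS total_amount
    FROM recent_orders r
    JOIN customers c
        ON c.customer_id = r.customer_id
    GROUP BY c.customer_name;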
Even now, the business case for a metadata tool seems unclear and difficult to quantify, but it isn’t impossible to make.
We in the data management business tend to devalue solutions that don’t clearly derive from a coherent top-level view. We seek applications defined from an enterprise architecture, database designs from an enterprise data model, and data elements consistent with the enterprise business glossary.
However, sometimes tactical gains make sense even when the big picture is missing, and tactical metadata successes on analytics teams can raise awareness that helps set the stage for broader data management improvements.
I recently found myself in a series of conversations in which I needed to make a case for dimensional data modeling. The discussions involved a group of highly skilled data architects who were surely familiar with dimensional techniques but didn’t see them as the best solution in the case at hand.
I thought it would be easy to find a quick, jargon-free summary of sound reporting database design principles aimed at a technical audience. There were a number of good summaries (cited at the end of this post), but none pitched just right for this highly-technical-but-outside-the-data-warehouse-world crowd.
I wanted to raise the dimensional model because, for most business reporting scenarios, it not only delivers on reporting needs, but also helps report developers handle changes to those needs as a side effect of the design.
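For readers outside the data warehouse world, here is the shape of the idea as a minimal sketch in generic SQL (hypothetical names): a central fact table of measurements surrounded by descriptive dimension tables it references by key.

    -- Dimensions carry descriptive context and absorb most change.
    CREATE TABLE dim_date (
        date_key      INTEGER PRIMARY KEY,
        calendar_date DATE,
        fiscal_year   INTEGER
    );

    CREATE TABLE dim_customer (
        customer_key  INTEGER PRIMARY KEY,
        customer_name VARCHAR(100),
        region        VARCHAR(50)
    );

    -- The fact table holds measurements at a fixed grain; many new
    -- reporting questions are answered by new dimension attributes
    -- rather than by restructuring the facts.
    CREATE TABLE fact_sales (
        date_key     INTEGER REFERENCES dim_date (date_key),
        customer_key INTEGER REFERENCES dim_customer (customer_key),
        units_sold   INTEGER,
        sale_amount  DECIMAL(12,2)
    );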
So these are the notes I prepared for the conversation. They helped us all get on the same page; hopefully they will be useful to others:
Standing up any new analytics tool in an organization is complex, and early on, new adopters of Tableau often struggle to account for all the complexities in their plans. This post proposes a mental model, in the form of a story of how Tableau might roll out at one hypothetical installation, to surface common challenges for new adopters.
Tableau’s marketing leads one to imagine that introducing Tableau is easy: “Fast Analytics”, “Ease of Use”, “Big Data, Any Data” and so on (here, 3/31/2017). Tableau’s position in Gartner’s Magic Quadrant (referenced on the same page) attests to the huge upside for organizations that successfully deploy Tableau, which I’ve been lucky enough to witness firsthand.
Even though it happens annually, teams building new visualizations often forget to think about the effects of the turnover from one year to the next.
In today’s fast-paced, Agile world, requirements for even the most critical dashboards and visualizations tend to evolve, and development often proceeds iteratively from a scratchpad sketch through successively more detailed versions to the release of a “1.0” production version. Organized analytics teams evolve dashboards within a process framework that includes checkpoints ensuring standards are met for security, reliability, usability, and so on.
A reporting team can build a revolutionary analytics capability enabling unprecedented visibility into operations and then, if year turnover isn’t included in the requirements, suffer embarrassing errors and usability problems in the January after initial deployment. In effect, the system experiences its own Y2.xK crisis, not too different from the Y2K crisis everyone expected 16 years ago.
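As a small, hypothetical illustration of the failure mode, consider a filter whose year was frozen at development time:

    -- Brittle: the year is hard-coded, so when January arrives the
    -- "current year" view silently shows last year (or nothing at all).
    SELECT region, SUM(amount) AS ytd_amount
    FROM sales_detail
    WHERE sale_year = 2016
    GROUP BY region;

    -- More robust: derive the year from the clock at query time.
    SELECT region, SUM(amount) AS ytd_amount
    FROM sales_detail
    WHERE sale_year = EXTRACT(YEAR FROM CURRENT_DATE)
    GROUP BY region;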
As I mentioned in the February post, I’m new to Tableau and, as the tone of that post implied, enjoying it very much. Tableau is a robust and flexible solution for data delivery. Like QlikView, which I worked with a while ago, it is supported by outstanding, and free, introductory training and a very active user community.
As I’ve taken my first steps in Tableau, I’ve been a frequent visitor to the user community and have generally gotten the answers I was looking for. Still, as with any tool, there have been a few surprises. I’ll run down the top few in this post:
Measures can have complex logic
Big extracts are tricky
Changing data sources is really tricky
Sorry, there are some things you just can’t do
Hopefully this post helps other novices negotiate those first few steps a bit more easily.