Making the case for analytics management systems.
In my last few posts, I discussed how most digital analytics implementations are custom built and that there haven’t been a lot of major changes in analytics implementation in the last decade. This has resulted in many organizations spending a lot of money on analytics products, but often not getting the value they expected.
What’s next for analytics implementations?
In the next decade, it is my hope that our industry takes another leap forward as it did a decade ago with tag management systems. If we only continue incrementally improving custom analytics implementations, we will be no better off a decade from now.
So what might analytics implementation look like a decade from now? I believe that the next paradigm shift is the deployment of analytics management tools that will bring standardization and consistency to analytics implementations.
I believe this so strongly that I left my old consulting job to work full-time on the Apollo analytics management system currently being built by Search Discovery. This system is leading the paradigm shift that is long overdue in analytics implementations.
So what is an analytics management system?
Simply put, it does for analytics implementations what tag management systems did for tags.
There are many important aspects of digital analytics implementations that go beyond JavaScript tags. They include business requirements, solution designs, data layers, data quality, documentation, etc. Most organizations do a poor job of defining business requirements, architecting the correct solution design for requirements, building data layers, QA’ing data, and keeping documentation updated.
This is because there is no standard approach or tool to help manage these disparate implementation components beyond spreadsheets and text documents. Yet all of these implementation artifacts are critical to success and should be centrally managed and built so they are interconnected.
Some weaknesses of analytics-implementations-as-usual
Let’s look at an example. Say you work for a B2B company that is trying to drive as many web leads as possible. Today, the company may be tracking lead form submissions with a metric, but executives really want to see which marketing campaigns drive closed deals. The current implementation may fall short: it tracks leads, while the stakeholders really want to track closed deals.
- Once the analytics team understands that the goal is to connect marketing campaigns to closed deals, they have to determine if they know how to do that in their analytics tool. Has the team ever used their analytics tool to do this before? Do they know which features are needed? Depending upon their sophistication level, they may or may not. Or they may be forced to pay high-priced consultants to show them how to do it.
- Next, the team needs to determine what data should be collected for this business requirement and build a data layer and tagging specs to capture the appropriate data. Then the tag management system needs to be configured and lined up with the data layer.
- Next, data needs to go through quality assurance to make sure only good data is being sent into the analytics tool. Once good data is flowing, the analytics team has to build the dashboards or reports needed to convey the results to stakeholders.
- Lastly, the analytics team may need to conduct training for stakeholders who want to “self-service” this data.
As you can see, there is a lot of work required just to answer one business requirement! And all of this work is done in a custom, ad-hoc manner, from the identification of the business question, to the data layer and tagging specs, to the QA and the reporting/training.
Each of these steps represents a risk point where things can (and often do) break. In the real world, I see many cases where something breaks along the way and the business requirement is ultimately not answered to the satisfaction of the business stakeholder.
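To make the data layer step in the workflow above concrete, here is a minimal sketch of what tracking a lead form submission might look like. This is purely illustrative: the event and property names (`lead_form_submit`, `formId`, `campaignId`) are hypothetical, not a real spec, and in a browser the array would be `window.dataLayer` consumed by the tag management system.

```typescript
// Hypothetical data layer event for the lead-form scenario above.
// All names here are illustrative assumptions, not a standard.
interface LeadFormEvent {
  event: string;
  formId: string;
  campaignId: string; // links the lead back to the marketing campaign
}

// In the browser this would be window.dataLayer; a plain array stands in here.
const dataLayer: LeadFormEvent[] = [];

function trackLeadSubmit(formId: string, campaignId: string): void {
  // The tag management system watches for pushes like this and fires tags.
  dataLayer.push({ event: "lead_form_submit", formId, campaignId });
}

trackLeadSubmit("contact-us", "spring-webinar");
```

Even in this tiny sketch, someone had to decide the event name, the fields, and how campaigns are identified, which is exactly the kind of custom, ad-hoc decision-making that multiplies across every requirement.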
So what should an analytics management system do?
1. Design using a business requirements best practice library
To start, the organization should be able to review a list of business requirements that have been implemented by many organizations over the past twenty years. The analytics team could review this list of best practice business requirements with their stakeholders to make sure that the ones they focus on are the most important.
The analytics team shouldn’t have to guess at what business questions can be answered. Instead, they should have a menu of requirements to choose from that incorporates the best thinking about analytics. Otherwise, they may miss out on some great information that stakeholders didn’t even know they could get from their analytics tool. Business requirements should be grouped by site function and industry vertical so it is easy to zero in on the most relevant ones for each business.
Once the ideal business requirements have been selected, the analytics team shouldn’t be required to know how to design a solution for each requirement. Instead, the best solution for their analytics tool should be available automatically. The solution shouldn’t be dependent upon how much the team knows about their analytics tool, but instead should be based upon what has been proven to work in the past by experts who have tried various approaches and purposely picked the best approach.
2. Enable instant access to implementations
At the same time that business requirements are selected, the organization should have instantaneous access to the ideal data layer and tagging specifications for each business requirement. The act of simply choosing requirements should pre-populate the data layer and tagging specifications. This means that developers are only responsible for populating the data layer they are provided.
Of course, the tag management system used by the organization should also be programmatically configured so that it supports the business requirements and tagging specifications. Since there is a direct line from business requirement to data layer to code, all of these implementation elements should be interconnected such that a change to one is a change to all.
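The "choosing a requirement pre-populates the specs" idea can be sketched as a lookup against a requirements library. Everything here is a hypothetical illustration of the concept, not Apollo's actual data model: the requirement IDs, spec shape, and field names are all invented for this example.

```typescript
// Hypothetical sketch: selecting a business requirement from a library
// yields the data layer specification developers implement against.
interface DataLayerSpec {
  event: string;            // data layer event name the tags listen for
  requiredFields: string[]; // fields developers must populate
}

// A tiny stand-in "requirements library"; real libraries would be far larger
// and grouped by site function and industry vertical.
const requirementLibrary: Record<string, DataLayerSpec> = {
  "b2b-lead-to-campaign": {
    event: "lead_form_submit",
    requiredFields: ["formId", "campaignId"],
  },
  "content-engagement": {
    event: "content_view",
    requiredFields: ["contentId", "author"],
  },
};

// Choosing a requirement instantly produces its spec -- no guesswork.
function selectRequirement(id: string): DataLayerSpec {
  const spec = requirementLibrary[id];
  if (!spec) throw new Error(`unknown requirement: ${id}`);
  return spec;
}
```

The point of the sketch is the direction of flow: the requirement drives the spec, rather than each team inventing a spec from scratch.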
3. Automatically validate data
Also, since the analytics management system created the data layer and tagging specifications, it knows what data is expected, so quality assurance should be interconnected as well. This helps ensure that only the right data makes it to the production data set. All of this means that organizations can spend less time implementing and more time actually analyzing data.
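Because the system generated the spec, it knows exactly what to check. A minimal sketch of that kind of spec-driven validation might look like the following; again, the spec shape and field names are assumptions made for illustration, not the product's real QA engine.

```typescript
// Sketch of spec-driven data quality assurance: the expected event and
// required fields come from the spec, so validation needs no hand-written rules.
interface EventSpec {
  event: string;
  requiredFields: string[];
}

// Hypothetical spec a requirements library might generate for lead tracking.
const leadSpec: EventSpec = {
  event: "lead_form_submit",
  requiredFields: ["formId", "campaignId"],
};

// Returns a list of problems; an empty list means the payload passes QA.
function validate(spec: EventSpec, payload: Record<string, unknown>): string[] {
  const errors: string[] = [];
  if (payload["event"] !== spec.event) {
    errors.push(`expected event "${spec.event}", got "${String(payload["event"])}"`);
  }
  for (const field of spec.requiredFields) {
    if (payload[field] == null) {
      errors.push(`missing required field "${field}"`);
    }
  }
  return errors;
}
```

A check like this could run automatically on every collected event, gating what reaches the production data set.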
4. Programmatically create dashboards
Once data is being collected, the analytics team shouldn’t have to start from scratch to build reporting and training. Since the business requirements that were selected have been used in the past, why not auto-create the appropriate dashboards and reports for end-users? In addition, if there are segments or conversion metrics that have been proven to be useful for a business requirement, those should be available as well.
5. Built-in capacity-building
Finally, there should be some sort of requirement-level training provided so end-users don’t just learn how to use the analytics tool, but are also trained on how to take action from each business requirement.
The paradigm shift is happening now
All of these elements—from business requirements through training—are critical components of Apollo. They are interconnected and build upon each other. Before Apollo, most of this didn’t exist, and what did exist was piecemeal or outdated.
As I described in the custom car blog post, today we are all artisans building custom implementations, but I believe the next paradigm shift will be a move toward automation. There is no need for every organization to manually complete all of the steps outlined above when they have all been done before, by people who have been through many more analytics implementations than your organization has.
It’s time for analytics implementations to leverage the efficiencies that carmakers have built into assembly lines. It’s time for organizations to interconnect all aspects of their implementation and achieve cost and time savings through the use of an analytics management system. This is the thinking that informs the design of Apollo, the first tool on the leading edge of this paradigm shift.
In my next post, I will dive into more detail on what an analytics management system looks like and share some examples…