
The Data Pipeline – What is it?

Following on from the last blog post about DataOps, I think it’s a great time to speak about the data pipeline, another hot topic that’s come to the fore in the last few years.

So, what is a data pipeline? We’ve (probably) all heard the term, but what does it mean?

Everyone has a data pipeline, of sorts. Whether you simply pull your data into spreadsheets to work with it, or run the complex data pipelines of a large organisation, it all counts as a data pipeline.

At the base level, the data pipeline is the process your data goes through to take it from its raw state, into storage (data warehouse, data lake, etc.), and then into your analytics solution.

The modern data pipeline automates this process and can provide near real time streaming of the data.

Isn’t that ETL?

No, ETL is a step in your data pipeline. It became popular in the 1970s and is often used in data warehousing. It’s one of the data pipeline’s first steps: you extract the data from its source (E), transform it into the form required by the destination, which can include blending in other data sources (T), and finally load the data into its destination (L).
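To make those three letters concrete, here’s a minimal sketch in Python using only the standard library. The file name, column names, currency rates and SQLite destination are illustrative assumptions, not a reference implementation.

```python
import csv
import sqlite3

def extract(path):
    """E: read raw rows from a source file (here, a CSV export)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows, fx_rates):
    """T: clean and enrich rows, e.g. converting amounts to a single currency."""
    out = []
    for row in rows:
        out.append({
            "order_id": row["order_id"].strip(),
            "amount_gbp": float(row["amount"]) * fx_rates.get(row.get("currency", "GBP"), 1.0),
        })
    return out

def load(rows, db_path="warehouse.db"):
    """L: write the transformed rows to the destination (a SQLite 'warehouse' here)."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS orders (order_id TEXT, amount_gbp REAL)")
    con.executemany(
        "INSERT INTO orders (order_id, amount_gbp) VALUES (:order_id, :amount_gbp)", rows
    )
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(extract("orders.csv"), fx_rates={"USD": 0.78, "EUR": 0.86}))
```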

So what’s ELT? Has someone misspelled ETL?

Recently ELT (Extract, Load, Transform) has been used when dealing with data lakes. In the ELT process the data is not transformed when it is extracted to the data lake, but stored in its original form. Because the data isn’t transformed on the way in, the query and schema don’t need to be defined up front. Often, though, some sources are databases or other structured systems, and that particular data will arrive with an associated schema.
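Reworked as ELT, the same idea might look something like the sketch below: land the raw records untouched, and only impose a schema when a question is finally asked. Again, the directory layout, event types and field names are assumptions for illustration.

```python
import json
import pathlib
from datetime import date

LAKE = pathlib.Path("data_lake/events")  # illustrative landing zone

def extract_and_load(raw_records):
    """E + L: land the records exactly as received -- no schema imposed yet."""
    LAKE.mkdir(parents=True, exist_ok=True)
    target = LAKE / f"{date.today().isoformat()}.jsonl"
    with target.open("a") as f:
        for record in raw_records:
            f.write(json.dumps(record) + "\n")

def transform_on_read(wanted_event="purchase"):
    """T (later, on demand): impose a schema only when a question is asked."""
    for path in LAKE.glob("*.jsonl"):
        with path.open() as f:
            for line in f:
                event = json.loads(line)
                if event.get("type") == wanted_event:
                    yield {"user": event.get("user_id"), "value": float(event.get("value", 0))}

extract_and_load([{"type": "purchase", "user_id": "u1", "value": "9.99"},
                  {"type": "page_view", "user_id": "u2"}])
print(sum(row["value"] for row in transform_on_read()))
```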

Why have data lakes become popular?

As has been previously discussed, the amount of data being created is increasing day by day, hour by hour, so a new way of storing and accessing data was needed.

Data lakes allow you to store high velocity, high volumes of data in a constant stream. This can include relational data (operational databases and data from line of business applications) and non-relational data (mobile apps, IoT devices and social media etc).

Since databases marry storage with compute, storing large volumes of data in this way becomes very expensive, which leads to heavy data retention management (either stripping certain fields from the data or limiting how long the data is held) to reduce costs.

Data lakes, by contrast, are relatively inexpensive. This is mainly down to the fact that storage for data in this unstructured format is relatively cheap, and you don’t incur the costs associated with a data warehouse in preparing the data for storage, which is rather time-consuming and costly.

Do I need a data lake?

Well, the answer is: it depends. There is no one-size-fits-all answer here. To understand if a data lake is right for you, ask yourself these four questions.

  • How structured is your data?

If most of your data sits in structured tables (CRM records/financial balance sheets etc), then it’s probably best/easier to stick with a data warehouse.

If you’re working with large volumes of event-based data (server logs or a click stream) then it might be easier to store that data in its raw form in a data lake.

  • Is Data retention an issue?

If you’re constantly trying to balance keeping hold of data (for analysis) against getting rid of it to manage costs, then it would make sense to investigate a data lake.

  • Do you have a predictable or experimental use case?

If you’re looking to build a dashboard or report from your data, built from a fairly fixed set of queries against regularly updated tables, then a data warehouse would probably be your best option.

If, however, you have more experimental use cases (machine learning/predictive analytics), then it’s more difficult to know what data will be needed and how you’d like to query it; in that case, a data lake might be a better option.

  • Do you work with (or are you looking to work with) streaming data?

Streaming data is data that’s generated continuously by a large number of data sources, which typically send the data records simultaneously. This can be log files generated by the use of mobile phones, web applications, geospatial services, ecommerce purchases, game player activity, social networks, financial trading floors. The list goes on.
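As a minimal illustration of what working with a stream (rather than a batch) looks like, here’s a hedged Python sketch; the clickstream generator simply stands in for whatever real source you’d consume (a Kafka topic, a log tail, and so on), and the page names are made up.

```python
import random
import time
from collections import Counter

def click_stream(n_events=20):
    """Stand-in for a real source (Kafka topic, log tail, etc.): yields events as they 'arrive'."""
    pages = ["/home", "/pricing", "/checkout"]
    for _ in range(n_events):
        yield {"page": random.choice(pages), "ts": time.time()}

def process(stream):
    """Consume the stream incrementally -- no waiting for a complete batch."""
    counts = Counter()
    for event in stream:
        counts[event["page"]] += 1
        # In a real pipeline this running aggregate would be pushed to storage or a dashboard.
    return counts

print(process(click_stream()))
```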

If you are looking to do the above, then it would make sense to investigate a data lake.

So, I’ve got all my data stored. What happens next?

The next step is to get your data into an analytics solution.

If you’re using a data warehouse, then you would probably look to connect your analytics solution to it, and then build your analytics dashboards.

You can’t really do that from a data lake. The next logical step on from a data lake is to build a data catalogue, which essentially gives you a unified view of all your data and the associated metadata.

What vendors should I be looking at to help with my data pipeline?

Well that depends on if you’re using a data warehouse, or data lake.

If it’s a data warehouse, then it’s really business as usual. Power BI and Tableau need clean data, so they do require a data warehouse. Qlik can easily connect to your data warehouse, though I do know some organisations that use Qlik as their data warehouse. Some aren’t small either, having up to 2,000 users.

If you’re looking at a data lake, there are numerous vendors that can assist with the different stages, though the vast majority only provide one or two pieces of the puzzle.

In my mind, Qlik, after its acquisition and integration of Attunity and Podium Data (now under the banner of Qlik Data Integration (QDI)), offers the most comprehensive solution.

QDI is quite brilliant: it will take your data (from an industry-leading number of sources), automate the creation of your data warehouse/data lake, stream the data into platforms like Kafka, and also output to your analytics solution of choice. Of course, I think this should be Qlik, but if you just want to do reporting, then you can connect any other solution to its outputs (Power BI, Tableau etc.).

The automation is incredibly powerful too, allowing you to automate the mapping, target table creation and data instantiation; quickly create and deploy analytics-ready structures (not just for Qlik, remember); and, at scale, catalogue, inventory, search and retrieve data.

In addition to all this, QDI also allows you to automate the movement of your data between on-premises data sources and cloud storage.

Courtesy of Qlik®

If you’re looking to investigate any of this, I can only strongly recommend looking at Qlik’s solution as part of your process.

Thanks for making it this far. Do you have a data pipeline? What does it look like? I’d love to hear in the comments.


DataOps or Black Ops? – How do you manage your data?

Courtesy of Wikipedia

 

We’ve all heard that the amount of data we produce is exploding. It’s reported that: 

  • 1.7MB of data is created every second by every person during 2020. 
  • In the last two years alone, an astonishing 90% of the world’s data has been created. 
  • 2.5 quintillion bytes of data are produced by humans every day. 
  • 463 exabytes of data will be generated each day by humans as of 2025. 
  • 95 million photos and videos are shared every day on Instagram. 
  • By the end of 2020, 44 zettabytes will make up the entire digital universe. 
  • Every day, 306.4 billion emails are sent, and 5 million Tweets are made. 

The most mind-boggling number for me is that 2.5 quintillion bytes of data are produced by humans every day.  

What’s a quintillion, I hear you ask? I’ve had to look it up!  

A quintillion is 1 followed by 18 zeros. I’ve tried to comprehend what this actually looks like, and the best I’ve found is this analogy.

Let’s take a penny as the unit of reference/measurement (they’re generally the same size everywhere), and 1 penny = 1 byte. In that case, it would look like this:

The small black line in the bottom right corner is the Sears Tower. The block itself measures 832km x 832km x 832km (yes, kilometres). Mind-boggling, right? We humans are creating 2.5 times that, every day! Then you need to add machine data to that to get the real total. The reported split is 60% human/40% machine data. 

As an aside, data has a weight too. We go from mind-bogglingly (if it’s not a word, I think it should be) large numbers to the opposite end of the scale. I won’t bore you with the numbers; feel free to Google it (other search engines are available). Suffice it to say that the data created by the Internet weighs around 50g, about the same as a large strawberry. 

With organisations looking to understand and analyse this gargantuan amount of data, a new way of dealing with the deluge was needed, so people started turning to the techniques used in DevOps as a basis.

Enter DataOps. 

So, what is DataOps? 

In their Information Technology glossary, Gartner’s definition is that it’s “a collaborative data management practice focused on improving the communication, integration and automation of data flows between data managers and data consumers across an organization.” 

For me, it’s the practice of taking your data streams and sources and putting them through a staged process, using a set of tools, technologies and techniques, that prepares the data to deliver insights. Taking your data (as Qlik say) from raw to ready is your data analytics pipeline. 

How do we get there? 

It’s widely understood that this is a 7-step process, and DataKitchen have an excellent cookbook on the subject. 

1 – Add automated data and logic tests. 

You need to test things to make sure that they’re working properly. This step adds testing of inputs, outputs and business logic at each stage of the data analytics pipeline. Doing this manually is time-consuming, so a robust automated process is key. 
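As a hedged sketch of what such checks might look like in plain Python (the column names, the non-negative amount rule and the row-count comparison are all illustrative assumptions):

```python
def check_inputs(rows):
    """Input tests: fail fast if the source extract looks wrong."""
    assert len(rows) > 0, "source extract is empty"
    assert all("order_id" in r for r in rows), "missing order_id column"

def check_business_logic(rows):
    """Business-logic tests: encode the rules the data must obey."""
    assert all(float(r["amount"]) >= 0 for r in rows), "negative order amount found"

def check_outputs(row_count_in, row_count_out):
    """Output tests: nothing silently dropped between stages."""
    assert row_count_out == row_count_in, "row counts changed during transformation"

sample = [{"order_id": "1001", "amount": "250.00"}]
check_inputs(sample)
check_business_logic(sample)
check_outputs(len(sample), len(sample))
```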

2 – Use a version control system. 

Data never comes from a single source, and often lacks a governing control, though this is improving. A revision control tool, like Git, helps to store and manage changes. This then helps the data teams to parallelise their efforts by allowing them to get to the next step, branch and merge. 

3 – Branch and merge. 

Branch and merge allows a member of the analytics team to check out a copy of all the relevant data from the version control system. They can then make changes locally (branch). This boosts productivity by allowing many developers to work on branches concurrently. When changes to the branch are complete, tested and working, the code/data can then be checked back into the version control system (merge). If a branch proves to be unfruitful, it can be discarded, and the analytics team member can start again, with no impact on the underlying data. 

4 – Use Multiple environments. 

In many organisations, team members work on the production database, which often leads to conflicts and inefficiencies. A “private” copy of the relevant data is needed. With storage on-demand from cloud suppliers, data sets can be quickly (and inexpensively) copied to reduce conflicts. 

5 – Reuse and containerise. 

Data analytics team members have had a hard time reusing each other’s work. Break functionality into components, which can be containerised using technologies like Docker. Containers are ideal for highly customised functions that require a skill set that isn’t widely shared amongst the data analytics team. 
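To illustrate the shape of a reusable component (before any Docker packaging), here’s a hedged Python sketch of a single-purpose deduplication step with a clean command-line interface; the column-key option and CSV-over-stdin interface are assumptions for the example.

```python
"""A reusable 'deduplicate' component: one job, a clean interface,
the sort of unit you would then package into a container image."""
import argparse
import csv
import sys

def deduplicate(rows, key):
    """Keep the first occurrence of each key -- shareable logic, no pipeline-specific code."""
    seen, out = set(), []
    for row in rows:
        if row[key] not in seen:
            seen.add(row[key])
            out.append(row)
    return out

def main():
    parser = argparse.ArgumentParser(description="Drop duplicate rows from a CSV on stdin.")
    parser.add_argument("--key", required=True, help="column to deduplicate on")
    args = parser.parse_args()
    rows = list(csv.DictReader(sys.stdin))
    writer = csv.DictWriter(sys.stdout, fieldnames=rows[0].keys() if rows else [])
    writer.writeheader()
    writer.writerows(deduplicate(rows, args.key))

if __name__ == "__main__":
    main()
```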

6 – Parameterise your processing. 

The data analytics pipeline needs to be flexible. In software development, a parameter is information that is passed to a program that changes the way it is executed. What does this look like in the data analytics pipeline? If a third-party data source is used, it might be incomplete. This data set is then run through an algorithm to fill in the gaps. By parameterising this process, the data analytics team member can easily build a new, parallel data mart with the new algorithm and have both old and new versions available through a parameter change. 
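A minimal sketch of that idea in Python, assuming a toy “fill the gaps” step; the algorithm names, default values and data mart naming are purely illustrative.

```python
def fill_gaps_v1(record):
    """Original algorithm: missing values default to zero."""
    record["value"] = record.get("value") or 0.0
    return record

def fill_gaps_v2(record, default=1.0):
    """New algorithm under evaluation: missing values get an assumed default."""
    record["value"] = record.get("value") or default
    return record

ALGORITHMS = {"v1": fill_gaps_v1, "v2": fill_gaps_v2}

def build_datamart(records, algorithm="v1", target="sales_mart"):
    """One pipeline, parameterised: the same code can populate parallel data marts."""
    fill = ALGORITHMS[algorithm]
    return {f"{target}_{algorithm}": [fill(dict(r)) for r in records]}

incomplete = [{"id": 1, "value": None}, {"id": 2, "value": 4.2}]
print(build_datamart(incomplete, algorithm="v1"))
print(build_datamart(incomplete, algorithm="v2"))  # both versions available via a parameter change
```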

7 – Be happy. 

DataKitchen call it Work Without Fear™, which addresses the disquiet data analytics teams feel in anticipation of deploying changes that could break the production system or allow poor-quality data into the end result. 

To address this, it’s recommended that 2 key workflows are developed and optimised. 

1 – The Value Pipeline – where data flows into production and creates value for the organisation. 

2 – The Innovation Pipeline – where new ideas are developed and then added to production. 

Both of these feed into the production pipeline. There’s nothing new about this; I’ve spoken at length with many, many clients through the years about also having a test/development environment, and you’d be surprised how many have never actually implemented either. 

So, DataOps is a methodology that enables a data analytics team to thrive. It allows your analytics solution of choice to be nimble while still maintaining a high level of quality. Organisations that have embraced this methodology have seen fantastic improvements in user satisfaction and in their development of analytics as a key competitive advantage.  

Are you looking to implement DataOps, or indeed have you already? If you haven’t, is the way you’re developing your analytics serving you? Let me know, I’d love to hear your thoughts.  


Data Literacy – There’s gold in that thar data!

In the last two years, one of the emerging topics of conversation hasn’t been a new technology like AI, machine learning or predictive analytics, but data literacy. 

As I mentioned in last week’s post on data storytelling, the stories we were told and read when we were young not only helped us learn to read, they taught us life lessons. So it should be with data storytelling, which helps us learn to read data (data literacy) and also teaches us lessons about our business or organisation. 

What is data literacy? 

For me, it’s simply the ability to make decisions by interpreting the data.  

Gartner go a bit further and also add that it includes “an understanding of data sources and constructs, analytical methods and techniques applied — and the ability to describe the use case, application and resulting value.” That’s probably too much for most people, in my eyes. 

Sooooo, what’s up with our data literacy? It’s good right? 

We all think we’ve got good data literacy skills (a bit like a lot of people think they’re Excel experts!). However, research by the Data Literacy Project revealed that while organisations are shifting day-to-day analysis away from data experts, they’re hitting a wall: only 21% of employees are data literate, despite 65% claiming they need to read and interpret data on a regular basis. And when entering their current role, only 25% of those employees felt fully prepared to use data effectively. 

So, what’s stopping organisations from unlocking the insights held within their data?  

I believe this starts at the top. Overwhelmingly, when I speak with people about how their executives use analytics, a majority reply that their execs only want a PDF report so “they can see what’s red, and then get others to investigate.” I can understand why, they’re busy people, but research has shown that 60% of people at a senior level rely on pure instinct, and that figure goes up to 63% when you look at the rest of the organisation. Some of this will come down to a resistance to change, a human characteristic we’ve all come across, and some will be down to a lack of confidence in being able to interpret data. 

How does this affect me? 

We all want to be happy in our work lives. Don’t we? The effects of lacking these skills go deeper than just making gut-feel decisions, which at the end of the day are educated guesses.  

The Data Literacy Project’s research has found that 74% of people feel overwhelmed or unhappy when working with data, 36% of these try to find a way to get the job done without using data, and 14% avoid the task altogether. 

Most telling (and to me, shocking) is that over half of those (61%) feel that being overwhelmed with data has contributed to workplace stress. 

So what do we do? 

As an individual, even if you think you have good data literacy skills, I’d highly recommend visiting the Data Literacy Project’s website. There’s a huge amount of information there: you can assess your level, do courses to improve your literacy, and there’s even a superb community. Do I use it? Yes. Did I think I was data literate when I first went there? Yes. Was I? Erm, not really. Am I data literate now? Pretty much, though it’s a journey.

As an organisation, it starts at the top. “Do as I do” should be your mantra.  

Make sure that those who need to use your analytics tool of choice get educated in how to use it. They don’t need to be able to develop from scratch, but they should be introduced to the concept of self-service. 

Secondly, set up a program for data skills and capabilities, where people can learn skills that are related to their job. Everyone should participate, right up to the CEO (do as I do), and the learning should be fun. Give certificates for achievements and learning. 

Use examples of data in action, share stories with everyone of how the organisation used data to make a change and the benefits that brought. 

These examples can be anything. Off the top of my head: inefficiencies discovered in an inventory analysis, a sales report that shifted focus onto more profitable accounts and brought a sizable revenue uptick, or an employee engagement survey that brought some well-loved perks to the company. 

Lastly, ingrain data into all important decision making. Always raise the question: “What data do we have, or can we get, that supports or contradicts this business case?” That’s right, also look for contradictions. Data isn’t only for proving an idea right, it’s also about proving it wrong, so you don’t take a misstep. 

Data used to support decisions should always be questioned. How reliable are the sources? Is the analysis correct? Have any checks been made? Are there other sources of evidence that are consistent with this? How important is the decision? Is there any more evidence required for us to act? 

A way of doing this is to put data into the hands of first-line managers. For example, a large retail cosmetics chain in the UK put analytics in the hands of all their store managers, and saw the company’s profitability soar to record levels. It also had the unforeseen side effect of creating competition between the store managers, which drove the figures even higher. 

Data literacy has come to the fore over the last couple of years, for nearly everyone. Organisations need more people who can interpret data, draw insights and ask the right questions. Anyone can develop these skills, either on their own or with the support of their organisation. 

An excellent starting point for anyone is the Data Literacy Project’s website. I’d highly recommend paying it a visit.  

There’s gold in that thar data! Time to learn how to pan for the nuggets in it. 


Data Story Telling. Are you sitting comfortably? – Then I’ll begin

We’ve all grown up listening to the bedtime stories told to us by our parents. These remain some of our fondest memories from childhood. Through the ages, stories have taught us life lessons, and they are our introduction to learning to read.  

“Stories are memory aids, instruction manuals and moral compasses.” – Aleks Krotoski 

In recent years, the desire to make sense of the raft of data that is created daily has led to the emergence of data storytelling as a way to get value from the data, by turning the visualised data we’re all now used to seeing into a story that readily communicates the message from that data.  

You’ve probably heard of data storytelling, but what is it? 

TDWI (Transforming Data with Intelligence) defines data storytelling as: “The practice of building a narrative around a set of data and its accompanying visualisations to help convey the meaning of that data in a powerful and compelling fashion.” 

The important word here is narrative; the story must also be a trigger for action. It’s no longer enough to simply show some captured visualisations and chat around them.

That’s great, but what does that mean? 

Quite simply it’s taking your data, and putting a narrative around it in plain English (or whatever your first language is) that is easy for others to understand. 

“We are, as a species, addicted to story. Even when the body goes to sleep, the mind stays up all night, telling itself stories.” – Jonathan Gottschall 

How do you put a story together? 

For me, it comes down to 4 things. 

1 – An issue that needs to be resolved 

2 – What have I learned from the data to address the issue. 

3 – What narrative does my audience need to understand to take action? 

4 – What visualisations are needed to present the relevant data, and give confidence in the insight and action plan? 

“The purpose of a storyteller is not to tell you how to think, but to give you questions to think upon.” – Brandon Sanderson

Stories are what catches people’s attention.

When people hear a story, it stimulates more areas of the brain. People only hear statistics, but they “feel” a story.

The psychologist Gary Klein performed an exercise with university students, dividing them into six groups and giving them some statistics on crime patterns in America.

Half the students in each group had to make a one-minute presentation supporting the proposition that non-violent crime is a serious problem, with the other half arguing the opposite. They then voted on who they thought was the best in their group.

After that, as a distraction, he showed them a video, and when that was over he asked them to write down what they remembered from the one-minute presentations.

The average speech used 2.5 statistics, and only one in ten people told a story. When asked to recall what they heard, only 5% remembered any statistic, but 63% remembered the stories.

Pretty telling.

So, how do I present a data story? 

Well, that depends on what you’re using for your BI solution. Yes, you can use PowerPoint, but then you’ll end up spending large amounts of time snipping the relevant visualisations and then adding a narrative around them. Who has time for that? 

Looking at the four vendors in Gartner’s BI Magic Quadrant, they enable storytelling in different ways, if at all. I may stand corrected; please feel free to let me know if my impression isn’t (or is?) correct. 

Tableau 

Data storytelling in Tableau is an extension of their dashboarding, and available within their product. The resultant story can be tailored to your look and feel. Most examples I see look very slick and have an infographic feel about them, though I would challenge the claim that anyone can easily create a story. 

PowerBI  

Power BI (as with a lot of things Microsoft) seems to take an overly complicated approach to putting the story together, though again, it does look very slick. The main drawback with Power BI storytelling is complexity. One example I saw was a three-page online guide, which for me would ultimately cause people to spend too much time preparing a storyline, again leading me to believe that it’s not easy for anyone to create a story.

“Stories constitute the single most powerful weapon in a leader’s arsenal.” – Dr. Howard Gardner, Professor Harvard University

ThoughtSpot 

Incredibly, ThoughtSpot doesn’t appear to have any storytelling functionality at all. They do speak a lot about data storytelling, but in terms of their search capabilities, rather than putting together a storyboard. 

Qlik Sense 

Probably the easiest to use of the four, and the most intuitive. Storyboarding is built into Qlik Sense. The user can make their selections, take snapshots of any visualisation, and then include those in the story with a freehand narrative. 

Not only is it easy to use, but when the age-old objection of “I don’t agree with your data” comes up in a meeting, you can go back into the Qlik Sense application the snapshot was taken from, with the selections that were active when it was taken. The question can be discussed and settled, and then the meeting can continue. 

Storyboarding in Qlik Sense can look as polished as the others, though often there isn’t the need. 

“Those who tell stories rule society“ – Plato 

Plato’s words are truer now than when he uttered them. When you combine the right visuals, the right narrative and the right data, you have a memorable story that can influence and drive change.

With all our lives now filled to bursting with things that demand our attention, making your data story memorable is the most important thing of all.


The Balanced Scorecard. Do you track your strategy performance?

After eight years working in data, I’ve not heard much recently of companies using a balanced scorecard, which is a little mystifying to me, as it can have huge benefits. 

Robert Kaplan and David Norton are credited with introducing the Balanced Scorecard in their 1992 Harvard Business Review article, though a little research reveals that the Balanced Scorecard first came into being a few years earlier. 

So, what is a Balanced Scorecard? 

Well, it’s a tool to manage and track your strategy performance, on one page. The phrase Balanced Scorecard mainly refers to a performance management “report” that is used by management teams. Commonly, this team is concerned with executing and tracking the implementation & performance of an organisation’s strategy or operational activities. 

The point of a balanced scorecard (I’ll call it BSC from now on, to save my fingers) is that an organisation isn’t managed looking at the past, but according to strategies that are aligned with the future growth of that organisation. 

The BSC lays out the strategic direction of an organisation, makes sure that direction is measurable, and helps ensure actions stay in line with the strategy, since goals influence behaviour. 

For a BSC to be successful, the strategic goals, and what they mean, should be the focus. To enable this, each action must be allocated a deadline, a budget and a dedicated person responsible for it. 

I’ve never heard of a BSC. Is it widely used? 

The figures are unclear on how many organisations use a BSC, and as I commented earlier, I’m hearing less of it.

2GC perform an annual survey on the usage of BSCs; the most recent is 2GC’s 2019 Balanced Scorecard (alright, I had to type it) Usage Survey.

Their 11th annual survey was of organisations (public and private) in 17 countries around the world, but mainly the EU (65%), and 76% had fewer than 500 employees.

The survey found that 71% used the BSC for strategic management, and that it influenced business actions (68%). 

It’s an incredibly interesting survey (honestly!); I’d recommend following the hyperlink above to take a closer look. 

Why would you use a BSC? 

Well, there are a few reasons. Using a BSC gives you improved management information, better strategy communication and execution, aligned projects and initiatives, improved strategic planning, and better process and organisational alignment. 

Where do I start? 

A BSC covers four perspectives: financial, customer, internal business, and innovation & learning. 

Translated into English, this means: how do we look to our shareholders (the financial perspective)? How do our customers see us (oddly enough, the customer perspective)? What do we have to excel at (the internal business perspective)? And are we able to keep improving and creating value (the innovation & learning perspective)? 

At a base level, this is laid out somewhat like this 
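If a picture isn’t to hand, a rough sketch of the same four-quadrant layout as a simple Python structure might look like the following; every objective, measure, owner, deadline and budget below is a made-up placeholder, not a recommendation.

```python
balanced_scorecard = {
    "Financial": {              # How do we look to our shareholders?
        "objective": "Grow profitable revenue",
        "measures": ["Revenue growth %", "Operating margin"],
        "owner": "CFO", "deadline": "Q4", "budget": "n/a",
    },
    "Customer": {               # How do our customers see us?
        "objective": "Improve customer retention",
        "measures": ["Net Promoter Score", "Churn rate"],
        "owner": "Head of Customer Success", "deadline": "Q3", "budget": "£50k",
    },
    "Internal Business": {      # What do we have to excel at?
        "objective": "Shorten order-to-delivery time",
        "measures": ["Average fulfilment days", "Defect rate"],
        "owner": "COO", "deadline": "Q2", "budget": "£120k",
    },
    "Innovation & Learning": {  # Can we keep improving and creating value?
        "objective": "Raise data literacy",
        "measures": ["% of staff trained", "New products launched"],
        "owner": "CDO", "deadline": "Ongoing", "budget": "£30k",
    },
}
```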

If you’re struggling to decide what is relevant, then Stacey Barr has an excellent free template that walks you through the steps, based on her PuMP process. I’d highly recommend visiting this page and downloading the PuMP Diagnostic Discussion Tool.

So what do I “measure”? 

Well, that’s entirely down to you; each BSC is different. That’s because each organisation has different priorities in each of the quadrants. Bernard Marr has an excellent KPI Library on his site. Once you’ve decided what your strategy is, I’d highly recommend visiting this page to understand what you need in order to track what you’ve decided is your priority. 

Some don’ts 

  • Don’t use older BSC models.
  • Don’t use data that isn’t live.
  • Don’t forget to show the link between your business goals.
  • Don’t use irrelevant KPIs.
  • Most of all, don’t take a “paint by numbers” approach. A BSC should constantly evolve.

When you’re going through this process you should keep a few things at the front of your mind. 

  • Reflect your strategy in the cause and effect map. 
  • Align your processes to the strategy map. 
  • Use meaningful key performance indicators. 
  • Improve your decision making by learning from the process. 
  • Keep the BSC linked to live data
  • Your BSC should constantly evolve.
  • A BSC is only successful because it is linked to YOUR strategy. 

How do I present this?

Any number of ways. You can use <cough> Excel, which would be a good place to develop the BSC, but once that is done, I recommend building something bespoke. When I say bespoke, I mean something like Qlik, Tableau or Power BI, as these can be linked directly to the data sources and satisfy the requirement for live data.

As Professor John Crowcroft said recently (after it was revealed that an error with Excel caused Covid-19 results to be lost):

“Excel was always meant for people mucking around with a bunch of data, for their small company, to see what it looked like, and then when you need to do something more serious, you build something bespoke that works.”

Ultimately, what you need is entirely tailored to your strategy. A BSC isn’t just for large organisations; a majority of those surveyed by 2GC had fewer than 500 employees, so a BSC is relevant to almost everyone.

Do your research, develop your BSC thoughtfully, and it will prove to be of huge benefit.

What’s your experience of BSCs? Have you developed one? Do you use it? Are BSCs disappearing? Or do organisations pretty much keep quiet about having one? That’s an option, as a BSC is entirely centred around their strategy.

Let me know, I’m very keen to hear of your experiences.


All your eggs in one basket? Vendor lock-in

In economics, the definition of vendor lock-in is that it makes “a customer dependent on a vendor for products and services, or unable to use another vendor without substantial switching costs.” 

We’ve all experienced vendor lock-in, from (amongst many examples) Apple’s iTunes (initially) and printer manufacturers threatening to invalidate warranties if their cartridges aren’t used, to cordless tool manufacturers whose products only fit their own batteries, and laptops that “throttle” or limit the processing speed available if “genuine” power supplies aren’t used.  

In the comments on my first post, Apples for Apples?, someone commented on LinkedIn that their experience was that CDOs and Heads of Data are hesitant to implement solutions like Qlik as they are “wary of vendor lock-in.” 

I found this observation to be true (I’ve had that raised as a risk numerous times through the years), and yet incredibly amusing.

Amusing? Yes.

Even after making that statement to me, I’ve seen the same organisations quite happily deciding to go wall to wall Microsoft (including PowerBI). So, is that really an issue for them?  

Vendor lock-in isn’t only about corporate software, it’s now extending to the cloud too. 

Today, many organisations have moved to the cloud, and have reduced their physical infrastructure considerably. Now instead of having dedicated servers or capacity, they are priced according to compute capacity consumed. In theory, workloads can be moved from one provider to another, but it’s now becoming a complicated task. The reality now is that embedding an organisation into a single (public) cloud infrastructure severely limits the ability to change. 

What if the quality of service declines, or isn’t even met to begin with? The provider could change technologies they use, making your data vulnerable. There’s also a possibility that the vendor will then raise prices, knowing that their clients are locked in. 

“Don’t put all your eggs in one basket”.  

Warren Buffett

There’s been a move towards vendors looking to lock organisations into their products and services in the analytics space too. The acquisitions of Looker by Google and Tableau by Salesforce are the major examples, and we’ve also seen the merger of Sisense and Periscope Data. The Google and Salesforce deals were done to give them a leading analytics solution, while Sisense and Periscope merged to gain a broader customer base and create a larger company. 

Outside of these 3, the major acquisitions have been by vendors to expand their “reach” into organisations. DataRobot acquired the data prep company Paxata, Alteryx acquired ClearStory Data to extend its presence into the data science space, and Qlik acquired Podium Data & Attunity. 

The most interesting (to me) of all these was the business done by Qlik, while all the others either slightly extended their offering, or took them into a space they’d been trying to get into for years (think SFDC’s wave product that never really gained wide acceptance). Unlike other vendors, the acquisition of Podium Data isn’t a play to lock people into the full range of Qlik products.  

Really? How’s that? I hear you ask. 

Usually I see vendors acquire companies and then integrate them into their product offering, and make their use proprietary. Podium and Attunity have definitely been integrated in a new product segment for Qlik, called Qlik Data Integration (QDI), but they haven’t been made proprietary.  

Not only can you connect to a huge number of data sources (purported to be the largest number in the industry); once you’ve created your data warehouse/data lake (with QDI), and maybe created a data catalogue, you can move your data between cloud platforms and then output it into most major analytics products, not just Qlik’s. 

How do I avoid vendor lock-in? 

If you don’t go down the QDI route, keeping internal backups of your data helps an organisation stay ready to host the data elsewhere if it proves too difficult to move it from an existing service. It also provides some protection against ransomware. 

Opt for an AI solution that gives smooth cloud integration with all of your preferred hosting partners. DataRobot is one such company that supports all of the cloud hosting providers. 

Using containers makes applications more portable and readily deployable on any platform. It’s also cheaper! Kubernetes puts more control in developers’ hands, allowing them to build with ease. 

Select the right migration tool, one that gives you “built-in” flexibility. 

Formulate an exit plan from the beginning. 

Build diversity into your cloud strategy, by taking a multi-cloud direction. 

Perform due diligence on the various “switching costs” 

I’d love to hear your thoughts on this. Have you had experience of vendor lock-in? Did you manage to resolve it, or did you end up “stuck” in that situation? 


BlackBerry – Mobile phones, right?

Nope!

I was speaking with someone this week and the subject turned to Mobile Device Management, at which point I brought up BlackBerry. Their response was, “no we use iPhones.”

Which got me to thinking that BlackBerry suffer from one of the biggest misconceptions today.

“But, BlackBerry Mobile phones are still available” I hear you say.

Yes, though BlackBerry decided to stop producing mobile phones in 2016, and outsourced their production to others, the biggest deal being with TCL in the same year.

Let’s go back in time a little.

A fair few of us remember BlackBerry’s mobile phones (some with fondness, and others with a slight rage at not being able to get our large fingers to operate the keyboard easily. Yes, I’m the latter!). They were the de facto mobile device for enterprises everywhere, with full QWERTY hardware keyboards, and later a soft keyboard (on the poorly received BlackBerry Storm) to keep pace with the iPhone and Android mobile phones. BlackBerry mobile phones were most favoured by enterprises for their security and reliability, while users loved BlackBerry Messenger (BBM).

Did you know that the network BBM ran on has been the only reliable communication medium available after numerous disasters?

In 2013 it all changed, when John Chen took the reins as CEO. BlackBerry, at that time, were losing $1BN a quarter, and the board had been forced to put the company up for sale.

He immediately set about looking at what assets the company had, which turned out not to be the mobiles but the technology behind them. A sea change (only to those outside BlackBerry) started, and the focus became what had always been their genetic footprint: software and services for enterprises with high security requirements.

“We have begun moving the company to embrace a multi-platform, BYOD world by adopting a new mobility management platform and a new device strategy,”
– John Chen

Chen explained in an open letter published shortly after his appointment. “I believe in the value of this brand. With the right team and the right strategy in place, I am confident that we will rebuild BlackBerry for the benefit of all our constituencies.” 

Since that time 7 years ago, through the development of their own solutions and a series of acquisitions, BlackBerry has reinvented itself as the software vendor of choice for enterprises and, maybe surprisingly, car manufacturers. Yes, you read that right: car manufacturers. Brands including Audi, BMW, Jaguar Land Rover and Toyota are using BlackBerry’s QNX software for infotainment systems, acoustics and dashboard functions. The latest figures state that 120 million cars have BlackBerry software running in them! So it’s likely you are driving a car that has BlackBerry’s software in it.

Then there are autonomous cars, and BlackBerry are at the forefront there too:

https://www.vice.com/en_us/article/ywxxk7/blackberry-qnx-autonomous-cars-security

Back onto BlackBerry in the enterprise. Over the past 7 years, they have developed and acquired a range of solutions that cover the enterprise’s needs to secure mobile devices and its data. These range from securing any mobile device or wearable (watch/glasses), to real-time operating systems in vehicles, document management & storage (akin to Box), and crisis alerting & collaboration (used by government agencies and large corporations worldwide).

It’s no coincidence that:

• All the G7 governments leverage BlackBerry solutions.

• 15 of G20 governments leverage BlackBerry solutions.

• BlackBerry provides security and services to:

a. 9 out of 10 of the largest global banks and law firms

b. 4 of the 5 largest global managed healthcare providers

c. 3 of the top 5 global investment services companies.

Some of the reasons are:

• The only provider with Common Criteria EAL4+ certification for secure email, messaging and browsing.

• 80+ certifications and approvals – the most of any mobility solutions provider.

• BlackBerry were named a Leader Again in the 2020 Gartner Magic Quadrant for Enterprise Mobility Management Suites.

• BlackBerry were picked out as one of the 5 mobility Management Vendors to watch in 2020.

• BlackBerry is the only EMM vendor to receive Full Operational Capability (FOC) for the US DoD (Department of Defense).

BlackBerry have come an incredible distance (in so short a space of time), from being an innovator in mobile phones to reinventing themselves as an innovator in the enterprise mobile security space.

Perhaps companies like Kodak could have learnt a lesson from BlackBerry?


Spreadsheets – Fit for purpose? Or fit for the bin?

After 30 years, Excel is still incredibly popular, with over 1 billion people using it globally. It’s almost as omnipresent as the personal computer itself, which perfectly lines up with Bill Gates’ original vision of “a PC for every desktop.”

Most organisations are not looking to move away from a heavy reliance on Excel, as they see it as a very powerful tool for calculating, managing and modelling data, as well as cheap, easy to use, and completely flexible for anything from a basic CRM to analysing budgets, amongst the many uses that can be found.

The user-friendliness of spreadsheet software tempts its over-use, though. The fact is that most of these programs are designed with a broad user base and generic use cases in mind.

All the reasons organisations find it so good also feed the many reasons it can be so dangerous:

> Training.

Spreadsheets are easy to use. Are they really? If you search Google for Excel tutorials, you get 513,000,000 results. If it were so easy to use, then why is there such a proliferation? In fact, most organisations have no formal spreadsheet training in place.

This reflects my own experience. Much earlier in my career, I was preparing for a new job that would require intermediate-level use of Excel. Firstly, I was overwhelmed by the number of “courses” available, and had to decide what would be useful to me without really knowing whether I was going down the right path, or indeed missing something important. I eventually decided on a YouTube channel with over 1,400 videos, covering not only the latest version of Excel but all the previous versions too. Who knew you needed so many different videos on how to perform a VLOOKUP!

A survey of 250 professionals sponsored by the ICAEW (Institute of Chartered Accountants in England and Wales) found that some people spend up to 2½ hours a day in spreadsheets: in other words, a third of their day, just over 1½ days each week, 7 days a month, or 85 days a year. That’s nearly 3 months of time spent in a spreadsheet. It’s not clear from this study how much of that time was spent compiling the spreadsheet, but it’s a good guess that it would be most of it.

A short test of 45,000 Excel users by the learning platform company Filtered, designed to understand how well they knew the functions of Excel, revealed some surprising results.

Only 28% of people surveyed answered all the questions correctly, leaving a huge 72% that had gaps in their knowledge.

Furthermore, when you look more closely at the results, only 19% answered the questions about data handling (the use of formulas) correctly.

> No best practices.

With such varied use cases, there are no fundamental best practices in place. If you search for Excel best practices, the first four results reference best practices (one is actually tips), and the rest are about principles and good practices. This is heavily tied into the training point too, where you would normally expect the “standardised” best practices to be laid out for you.

> Sharing.

Spreadsheets are often emailed back and forth, which creates versioning nightmares and discrepancies. Beyond these issues, there’s also the danger that the email goes to the wrong person. We’ve all done it. With GDPR, who can afford a mistake that sends data containing personal details outside of the organisation?

> Human Error.

Spreadsheets are versatile and everywhere – and most have errors. A web search on “How many spreadsheets have errors?” gives alarming results. The commonly quoted figure is 88%, though figures of 90 to 94% can also be found. Whatever the actual number, these percentages are incredibly high, and the spreadsheets with errors are not just little home spreadsheets for cataloguing your Lego/games collection or planning your next vacation. Some of them involve millions of pounds.

Reinhart-Rogoff Controversy – Reinhart and Rogoff are the authors of the widely acclaimed book on the history of financial crises, This Time is Different. They have also written several papers derived from this research, with governments using their results to justify austerity measures that were implemented after the crash 10 years ago. That slowed growth and raised unemployment. When others were unable to recreate their findings with publicly available data, a closer look at the figures in their original spreadsheets revealed a formula error which effectively turned the findings on their head.

If you have a look around, there are some major companies and organisations that have suffered at the hands of this.

> Real time?

I’ve already mentioned that the decision window has shrunk considerably. The use of spreadsheets doesn’t play at all in this scenario: the data is static, and without version control it quickly becomes a nightmare to make anything but gut-feel decisions from data in a spreadsheet.

> Manual.

Spreadsheet based analysis is a highly manual, time consuming process, forcing data consumers to extract, consolidate and manipulate data even before carrying out any analysis. Typically, people talk in terms of hours or days to compile reports in spreadsheets, and that’s before they are able to run their analysis. This process can also increase the risk of errors creating a false picture of the organisation’s performance.

A lot of organisations that I speak with find the manual process very challenging, and when we discuss the amount of time they spend just compiling the data in spreadsheets, it comes out that two-thirds to three-quarters of their time goes into this process, which means analysts aren’t doing the job they were employed for. A move away from this not only brings value to the business in that the analysts can now actually analyse the data, it also brings a jump in job satisfaction for them too.

> Pre-determined.

Once the spreadsheet build process is completed, data is pre-packaged and only gives users what’s been pre-determined by the developer, meaning users tunnel through the data and, if the insight they need is not found, make links based on guesswork, or go through the prolonged process of bringing more data into the spreadsheet to manipulate.

Spreadsheets are also a snapshot of information taken at a specific point in time, and as such can become out of date very quickly – a spreadsheet built today could be meaningless within hours of its creation. Furthermore, as spreadsheets proliferate through a business, the ability for anyone to manipulate that data negates any sense that you have a “single version of the truth”. This can create debate around who has the correct numbers, rather than allowing users to focus on what to do about the insight they should be gaining from the data.

> Multiple data sources?

This is particularly problematic for people I’ve spoken with. They have to export the data from each source into separate spreadsheets, and then run a series of formulas to consolidate their data. Often they are able to automate some of this for repeatable reports, but if there’s a requirement for something different, then the process becomes entirely manual again.

> Made for mobile?

Have you ever tried looking at a spreadsheet on a mobile device? If you have, I don’t need to say any more. If you haven’t, try it.

Spreadsheets are also cumbersome, difficult to search (often just one parameter at a time), have fixed navigation, and require you to know the answers you’re looking for. I once spoke with a Finance Director about what she went through to work in Excel, and her first comment was “Well, what do you need to know?”, which is a complete departure from what I see as true analysis.

> Smooth meetings.

Lots of people we speak with raise this issue. Very often, there are different spreadsheets in the same meeting, taken from the same data source, but because the information is pulled at different times, the results can vary tremendously. Meetings become about where a figure came from, rather than being focused on their true purpose.

> Open to fraud?

While I was preparing this post, I was speaking with a colleague about this very subject. His comment was that anyone using a spreadsheet can alter a cell/cells to make the data agree with what they want.

We’ve all seen this happen on numerous occasions: unscrupulous employees manipulating the figures to give themselves better dividends or to make themselves look better, or a lone disgruntled employee changing figures or calculations “to get their own back”, which in my mind is the original form of data security breach. Why would an organisation deliberately leave itself open to the unsafe practice of relying on spreadsheets? I say deliberately because they’ve all heard of and seen this happen on numerous occasions in the past, and still keep using spreadsheets.

When you bring all these facts together, I believe that the use of spreadsheets has a very limited place in our work lives; the majority of things that spreadsheets are used for can now be rolled into a modern BI solution <cough – Qlik>. Actually, not only Qlik: while Qlik is really a true modern BI solution, in fairness you can also address the issues above by using other companies’ software like Power BI, Tableau etc.

One of the biggest challenges to changing this is human nature. We don’t like change and generally resist it, which surfaces as the “if it’s not broken, then why fix it” attitude. Clearly it is broken, and it’s time to fix it.

What do you believe? Are spreadsheets fit for purpose or fit for the bin? I’d love to hear your comments and experiences. If you’d prefer, feel free to do the straw poll below.


Self-service or Disservice? What is self-service BI?

I’ll start by saying that I’m quite adamant about what self-service business intelligence is, and isn’t.

There was a post on LinkedIn a week ago, that turned my thoughts to this, and made this the topic for today.

I find it baffling that the conventional idea of self-service analytics is for the “business user” to have the ability to create their own dashboards/views.

Why baffling? Well, 2 reasons.

  1. It didn’t really work with Excel.
  2. Data Literacy – It’s now becoming a well-recognised fact that data literacy is low amongst people who aren’t analysts. When it comes down to it (in the nicest possible way), most people wouldn’t know a dimension or measure if it introduced itself nicely in the street. To expect these people to easily create something from nothing would be pure fantasy, would probably lead to an increase in failed BI projects, and would be a disservice not only to the IT/BI teams that work so hard to implement a project, but also to the users, who would be left with something they largely wouldn’t use.

Excel Hell

Excel is (for me) the original self-service software, and a client once described the effect of Excel on their business as being “in Excel hell”.

Everyone can freely create with Excel, and it quickly becomes unmanageable. It’s not only the number of spreadsheets that are created; it also causes chaos because everyone creates their own view, from the same data, at different times, often with wildly different calculations, and the business then becomes mired in a swamp of differing results and opinions.

I’ve also had exactly the same conversation with a Tableau user, who had to create a visualisation of all their Tableau dashboards because they gave everyone the ability to create, and found that it very quickly became untenable, with around 1,500 Tableau dashboards.

Don’t even get me started on errors in spreadsheets! That’s a post for another day.

If the thought is that self-service is actually just for analysts, then this should be something that people are upfront about when discussing this, not saying that self-service is for everyone, when in fact it’s not.

This approach gives the IT and BI teams serious palpitations, not to mention the humble user, who even if they believe they have the right skills, generally don’t.

Where’s the governance? How do you ensure that everyone is using the same measures? My post last week mentioned a conversation I had with an organisation that had 15 different definitions of FTE. The BI team seek to make sure that issues like this don’t arise, and are understandably reluctant to let everyone create their own views, but the widely accepted view of self-service would only exacerbate these problems.

What’s my opinion on this?

Is self-service for everyone?

Absolutely!

I firmly side with the BI/IT teams when it comes to giving the general user the ability to create from scratch; sure, there will be people who have that ability, but they’re in the minority.

As I said previously, the low standard of data literacy would mean this would create more problems than it would solve, and would move an organisation away from being more agile. Isn’t the desire for agility the point of self-service? So why would you do something that makes you less so?

I think Qlik says it best:

The premise of self-service business intelligence is to give all employees access to insights that will help them make better decisions, regardless of analytics skills.

Source: Self-Service BI: What it means, why it matters, and best practices https://bit.ly/32jhbE8

I’ve come to see self-service analytics as split into 2 distinct groups:

Guided and Self-Service Analytics

Guided analytics (think of this as “traditional” analytics) is where the BI team build an application (Qlik call them applications, not dashboards) with a certain need in mind, be that sales, marketing, operations or project management, to name a small number of uses.

The users then interact with the application, and if there are any other questions, then they have to raise a change request, and wait until the BI team can implement that for them.

Great for day-to-day, standard requirements, but organisations now need to be more agile.

It’s a fact: the decision window has shrunk massively, and people are now looking to make decisions on the same day.

Any longer can seriously impact the business.  

All BI software does this very well; most “new age” BI software is just “Excel on steroids.”

Self-service Analytics 

I believe that self-service covers 2 areas:

  1. Knowledge workers that need to be more flexible and build their own analyses.
  2. Business users that need to make decisions quickly.

Yes, the knowledge workers would be given the ability to create from the ground up; they have the skills to be able to do this.

Business users, though, would have an application built for them by the BI team, as they would for guided analytics, but then the user is empowered (empowered is an overused word, but incredibly relevant here) to make changes to charts using an easy-to-understand, drag-and-drop governed library, called Master Items in Qlik Sense.

These users can change the chart type or replace/add dimensions and measures.

Qlik Sense will also suggest the most appropriate visualisation, and the user will then be prompted for a dimension/measure contained in the governed library, under three headings (dimensions, measures and visualisations).

Governance

This is all well and good, but how do you then gain the governance to ensure that these people are using the same dimensions and measures?

This comes down to the governed library, the users see this as the Master Items library, which is the cornerstone of self-service. The BI Team creates and curates this governed library, so the dimensions/measures are all guaranteed to be the same, and infinitely reusable. Not only for the business user, but for the knowledge workers/analysts.

This was something that had limited availability in QlikView, but has come to the fore in Qlik Sense.

Implementing self-service in this way, largely removes the change requests, allows the BI team to focus on what they do best, and empowers (again appropriate) the analysts and general users to do what they do best.

AI (Artificial Intelligence, or Augmented?)

AI has become a buzz word recently, and is often a way for vendors to make orders larger by adding more software to the requirement. There are lots of companies that have sprung up in this space.

What if you’re a small to medium business, without the budget to invest in this?

That’s where Augmented Intelligence (Qlik’s AI) comes in. Qlik’s AI is baked into Qlik Sense.

Simply stated – instead of just solely relying on pure machine automation, as you may find with typical Artificial Intelligence applications, Augmented Intelligence works with human interaction and perspective to solve complex business problems.

Mike Tarallo – Qlik

Combining the Associative Experience with Qlik's cognitive engine creates what is, in effect, an intelligent assistant right at your fingertips.

Why is this good?

Qlik's AI allows you to search the data conversationally, giving a faster, easier way to ask questions and get insights.

You can auto-generate visual analyses to help see the data in new ways, and uncover hidden insights.

For developers, it speeds up the process of building an application, with visualisation suggestions and association recommendations, to name a couple of the areas where it helps.

Supplemental to Qlik’s AI is the Insight Advisor.

The Insight Advisor learns not only from the data but from the user's behaviour as well, bringing important insights to the fore. This lowers the skills required to analyse the data and to keep asking and answering the stream of questions that always follow.

Qlik’s Mike Tarallo made an excellent video on this: https://bit.ly/3imJRBB

I think it's time the answer to "what is self-service BI?" was standardised, so that everyone starts talking about the same thing. Much like the company with all those versions of FTE, how can we truly know what something is if the words used aren't consistent?

Thanks again for reading this, I’d love to hear your thoughts on this topic.

Until next time, stay safe!


Qlik Sense, PowerBi, Tableau – Apples for Apples?

 

I read a lot of posts where people compare these three pieces of analytics software, and overwhelmingly I see people try to position one as better than the others, mostly through the age-old sales tactic of FUD (fear, uncertainty and doubt).

Having been in the field of analytics sales for coming up to 8 years (admittedly all Qlik), I do get this, though there isn't an objective view of which one to choose. What I'd like to do here is talk about the things that don't come to the fore when these comparisons are made by people who don't know what makes Qlik different.

For me it comes down to this:

Do you want to see what has happened with your business or organisation? In other words, are you looking for a quick and easy way to visualise data that would otherwise be presented in Excel? Or are you looking to find out why things have happened?

In my opinion, Qlik Sense, PowerBi and Tableau do the first thing very well, though it’s only Qlik that allows you to take that next step and answer the why questions.

How does Qlik enable you to do this?

Qlik’s Associative experience

The unique thing about Qlik Sense is the "Associative Experience": the green (your selection), white (everything associated with your selection) and grey (everything not associated with your selection) highlighting that appears when you make selections. This not only allows you to see what is related to your selections, it also allows you to see what isn't.

Why is this important? Well, there are incredible examples of data that should have been associated but wasn't. For example, during a proof of concept, a utilities company was looking to visualise their managed debt. The associative experience showed that a huge amount of the debt was unmanaged (it was grey). The total came to tens of millions. There are mountains of examples like this.

The way the associative experience works means that all of your data, in that particular application, is linked.

Nothing is left behind when you make selections.

This also means that if you have other questions, you can carry on asking and answering them.
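To make this a little more concrete, here's a minimal, illustrative load script (the table and field names are invented). Qlik associates tables automatically on identically named fields, so a selection in one table instantly shows what is, and isn't, related in the others.

// Customers and Orders associate automatically on the shared field CustomerID.
Customers:
LOAD * INLINE [
CustomerID, Customer, Region
C1, Alpha Ltd, North
C2, Beta plc, South
C3, Gamma GmbH, North
];

Orders:
LOAD * INLINE [
OrderID, CustomerID, Amount
O1, C1, 1200
O2, C1, 800
O3, C2, 450
];

Select Gamma GmbH and the associative experience immediately shows there are no associated orders (everything in the Orders table goes grey), without anyone having had to anticipate that question when the app was built.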

PowerBI and Tableau are very like Excel, in that you have to understand, pretty much in advance, what questions the user will have when you create a dashboard. If Excel has been frowned upon for this very limitation, why would you want to do the same thing, just with some very nice graphics?

Qlik has come up with an excellent demo of the Associative Experience in action. I did try to embed it right here, but the restrictions of WordPress mean that it doesn't work properly, so I'll leave the link at the bottom and you can jump to the page and try it out.

Colour blindness (Yeah, this one surprised me too)

Worldwide, there are approximately 300 million people with colour blindness, almost the same number of people as the entire population of the USA!

In the UK there are approximately 3 million colour blind people (about 4.5% of the entire population).

The Green, White and Grey Associative Experience in Qlik has been consciously developed with colour blindness in mind. 

Green, white and grey are among the few colours that people with colour blindness can differentiate between when they appear on the same page.

BBC Sport, in fact, posted an excellent article about the impact of colour blindness on people watching sport: Colour blindness in football: Kit clashes and fan struggles – what is being done?

Build once, deploy anywhere

Qlik Sense is mobile enabled "right out of the box" and was developed with the ethos of "build once, deploy anywhere", which means you don't have to consider different screen sizes when you are developing. This is responsive design.

Additionally, Qlik Sense is delivered entirely in an HTML5 browser, so you don't need separate clients for the different operating systems you have. Qlik Sense will work in Chrome, Safari, Firefox and so on, whether that's on your desktop, laptop, tablet or mobile phone.

You can also install the dedicated Qlik Sense app on your Apple or Android phone, which allows you to connect to the server, download the data/app and then work offline.

How much does the app cost?

Nothing.

Storytelling

There's been a lot said about the value of storytelling with your data. Within Qlik Sense, you can create stories from your data by taking snapshots, adding commentary and highlighting salient data points. This is unique, though it goes even further.

I use Qlik Sense (a lot) in sales meetings, and when data is presented in any meeting there's always the chance that someone will question it. That then usually goes one of two ways: the meeting gets sidetracked into a discussion about whether the data is right, or it gets moved to the well-used (but rarely followed up) "we'll take that offline" list of things that may or may not be addressed after the meeting finishes.

When this happens, I simply right-Qlik (apologies, it's ingrained in me) on the snapshot and go straight back into the Qlik Sense application, with all the selections that were made when the snapshot was taken. We can then come to an agreement quickly and move on. It's saved me an enormous amount of time, and kept meetings on track.

Moving away from the business benefits:

Governance

There are a couple of layers of governance in Qlik Sense:

i) Security down to the row/cell level, to make sure that only the people you intend can see the data. This also applies geographically, i.e. if you create a Qlik Sense application for a global sales team, you can ensure that each salesperson only sees their own data, or their team's. Sales managers can see their team and the figures for other teams, sales directors get a higher level still, and so it goes on (there's a minimal sketch of how this looks in the load script a little further down).

ii) Governed libraries. For me, the central tenet of self-service. I'll go into what I believe self-service is in another post. Governed libraries enable the BI team to develop and curate a library of reusable dimensions and measures. Not only does this enable self-service at all levels, it also ensures that everyone is using the same calculations for their measures.

I once spoke with a company that had 15 different definitions of FTE (Full Time Employee/Equivalent), and they could not report accurately on this vital measure. A governed library gave them a single definition, and trust in what was being reported. Ultimately, a governed library creates agility and trust in your data: the much sought after Single Version of the Truth.
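Coming back to point (i), row-level security in Qlik Sense is typically implemented with Section Access in the load script. Here's a minimal, illustrative sketch; the user directory, user names, regions and figures are all hypothetical:

// Each salesperson only sees the rows for their own REGION.
// Values in the Section Access table are treated as upper case.
Section Access;
LOAD * INLINE [
ACCESS, USERID, REGION
ADMIN, INTERNAL\SA_SCHEDULER, *
USER, AD\ALICE, EMEA
USER, AD\BOB, APAC
];

Section Application;
Sales:
LOAD * INLINE [
REGION, SalesAmount
EMEA, 1000
APAC, 2000
];

When Alice opens the app she only sees the EMEA rows and Bob only sees APAC, while the INTERNAL\SA_SCHEDULER row keeps access in place for scheduled reloads.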

In memory (Qlik) vs query-based (Tableau and PowerBI)

Another difference is that Qlik also utilises an in-memory architecture.

Why does this matter?  That comes down to something that’s missed from most project scopes, and that’s acceptable performance.

In the words of Michael Distler in his blog post last year:

Even when moving BI processing to the data lake, a SQL-based query tool must cache/aggregate SQL-based views to have any chance of achieving acceptable performance. And since they are only caching some of the data, one needs to guess what questions a user may ask. Go beyond these pre-defined boundaries and the user is faced with waiting while the BI tool undertakes the slow process of a new query against a massive repository. And even more time-consuming would be using SQL-based queries to uncover the unexpected non-related or missing data. With the democratization of data and more business users being called on to use analytics in their daily tasks – making people wait even longer is plainly unacceptable.

The Whole Big Story – Michael Distler 

In-memory processing delivers the performance a user is looking for when they want answers from the data. It brings the speed associated with agility, rather than waiting while a query executes. This is one of the areas where others try to apply the FUD (fear, uncertainty and doubt) I mentioned earlier, but the fact remains that if this were a real issue, some very large companies wouldn't have rolled Qlik out to thousands of users.

You can also run the Qlik Associative Engine at source by using the Qlik Associative Big Data Index which gives the same performance against data sources of immense size.

Lastly (though I could go on, and on, about what makes Qlik Sense unique),

Speed and scale

I've spoken about the speed for the user; Qlik is also incredibly quick to implement. The majority of the time in a Qlik Sense project is (and should be) devoted to building the data model. Once this is done, you are set.

Though it’s impossible to predict exactly how long any implementation will take (be it Qlik Sense, Tableau or PowerBi), most organisations start with their developer team of 1 to 3 people, and then roll out when they’ve developed the applications or dashboards. Typically the first Qlik Sense app is developed in the first couple of days.

I worked on a tender for a public sector organisation last year, for thousands of users, and we estimated that bringing all the data together, creating the data models and building the first Qlik Sense applications would take three months. Sadly we didn't win the bid, due to the buying authority having some bizarre objections about what was in our cloud, which is by the by. The project went to another organisation using PowerBI.

Last I heard, four months into the project, they were having massive issues just scaling PowerBI to that number of users. Qlik Sense scales linearly, so each server you add will support the same number of additional users (say server 1 = 350 users, server 2 = another 350 users), whereas others don't scale in this linear way.

A customer once made a comparison between Qlik and Tableau, which I think is pretty telling:

What I can do in a day in Tableau, I can’t do in Qlik.

What I can do in 2 days in Qlik, I can’t do in Tableau.

Ultimately it's down to what you want: something that puts pictures on what would normally be a spreadsheet, or a solution that enables you to ask (and answer) a stream of questions, and also lets you get to the why answers.

If you’d like to have a look at the Associative Experience demo I mentioned, here’s the link: Qlik Associative difference demo page

Thanks for sticking with this to the end, and I do hope this has been informative. I’d love to hear your comments below.