Animate flight data in Power BI

“Have you seen the deck.gl framework from Uber? It’s designed for large-scale data visualization,” said one of the software engineering team. I had a look at it, and it does look amazing. The image below is from the deck.gl framework and shows car accidents across the UK, or at least, in the screen grab, the south east and London.

Uber deck.gl – Visual amazeballs

We went through some of the examples, and while looking at the flight data one they said, ‘You can’t do that in Power BI’.

Heathrow flights – deck.gl example

I retorted, ‘You can do something a bit similar’, and then had to go and prove I could do it. To be fair, they had said that a few times while going through the examples, so I had to pick one I knew I could do. There was no way we could let the front end peeps lord it over us data geeks; what sort of a world would that be!

So first of all, I had to get some data. Thankfully the lovely people at Uber data visualisation had provided the source of the data. One quick hyperlink later and I was downloading flight path data from The OpenSky Network. It’s a service that collects data from a network of stations that read the ADS-B and MODE-S signals transmitted by aircraft for air traffic control.

It’s a big data set: it is split by day and then by hour, and it can be downloaded in CSV format, so I loaded the one file for the 07:00 to 07:59 data, which was about 200 MB.

Nice and easy so far; it’s a simply formatted data set (check the readme.txt on the site), apart from the time columns. They are in Unix format. For those not familiar with it, it is not a date time; it is the number of seconds that have passed since 01/01/1970. So 1574349827 is 11/21/2019 @ 3:23pm (UTC). That isn’t an issue: with a bit of M we can convert it on load in Power BI to a nicely formatted date time.

#datetime(1970, 1, 1, 0, 0, 0) + #duration(0, 0, 0, [time])
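If you want to see that conversion in context, here is a minimal sketch of a full query that adds the converted column. The file path, step names and the [time] column name are assumptions; adjust them to match the OpenSky file you downloaded.

let
    // load one hour's CSV file (path is just an example)
    Source = Csv.Document(File.Contents("C:\OpenSky\states_2019-11-21-07.csv"), [Delimiter = ","]),
    Promoted = Table.PromoteHeaders(Source, [PromoteAllScalars = true]),
    // make sure the Unix time column is numeric before doing the date arithmetic
    Typed = Table.TransformColumnTypes(Promoted, {{"time", Int64.Type}}),
    // seconds since 01/01/1970 added to that base date gives a proper datetime
    AddedDateTime = Table.AddColumn(Typed, "EventDateTime",
        each #datetime(1970, 1, 1, 0, 0, 0) + #duration(0, 0, 0, [time]), type datetime)
in
    AddedDateTime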

Once that was done, I mapped the points on the map visual, which gave me the following output:

Aircraft Points

One problem: there were too many points, so I filtered the data on the Latitude and Longitude columns to show just a range covering the UK. Looks good, but we need to animate it. Does Power BI do animation? No… well, not by default, but there is a custom visual called Play Axis that enables you to do it.
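If you wanted to do that filter in Power Query rather than with report filters, a rough sketch building on the query above would be a step like the one below; the column names and the UK bounding-box values are my own approximations, not taken from the original data.

    // rough UK bounding box; column names and coordinates are illustrative only
    FilteredUK = Table.SelectRows(AddedDateTime, each
        [lat] >= 49.8 and [lat] <= 61.0 and
        [lon] >= -8.7 and [lon] <= 2.0)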

Play Axis!

This allows you to add a column to it and play through it in order, in this case my date time. In fact I didn’t need to convert the Unix time, as it would have played back in numerical order anyway! Once that was added to the report, we have animated visual awesomeness!

It’s alive!

I’ve loaded the sample file to my GitHub here. Enjoy!

Your Brain & Report Design Part 3 – 10% or 1 in 10?

This is Part 3 in my series of blog posts (fairly infrequent) about report design options, based around the cognitive process. In this post I’ll be looking at framing the answer. What is framing the answer? It is how best to show the result in the context of the question, and in some cases influence the user into taking action on it.

Framing the answer

For example, have a look at the following.

Framing_1

Both are showing the same thing, just from a different perspective. One great example of how to frame the answer is how a doctor would talk to you about an operation:

Framing_2

Framing the result is leading the user through a positive or negative emotional context. ‘We missed the target by 3%’ and ‘We achieved 97% of the target’ carry different contexts. I would suggest that the second statement is perceived as more positive than the first one. So the first step is to define that context in the data you want to display.

Is it abstract?

The next thing to consider: is the value abstract? Can the user identify or associate with the value? For example:

Framing_3

Again, both show the same thing, but at a cognitive level a failure rate of 10% is an abstract number, hard to visualize, while 1 in 10 isn’t. Some parts manufacturers moved from showing percentage values to ‘in something’ values as it focused the report consumer more directly. As a result, there was a lot more attention on the failure rates, which then led to action and a reduction in the part failure rate.
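The conversion itself is trivial. Purely as an illustrative sketch (the names and the 0.10 rate are made up, and in a real report you would more likely do this in a measure), in Power Query it might look like:

let
    FailureRate = 0.10,
    // turn a rate into a "1 in N" label
    Framed = "1 in " & Text.From(Number.Round(1 / FailureRate, 0))
in
    Framed  // "1 in 10"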

In the book Thinking, Fast and Slow, Daniel Kahneman (a psychologist who won a Nobel Prize in economics) talks about this effect. He also notes that prosecution lawyers will use one or the other to try to convince a jury or parole board: ‘This person has a one in ten chance of re-offending’ or ‘This person has a ten percent chance of re-offending’. Subtle differences, but priming the jury towards the outcome that they want.

So when next creating a report, think about how to frame the result and how best to display it. I’ve used this recently with sickness rates, moving from a percentage to a proportion of the workforce, with positive results.

The Prime Number

One more thing about numbers, and this is something that you can try yourself: you can prime people with a value to affect their next response.

Ask one set of people the following:

Have you seen the film Around the World in 80 Days?

The next set:

Have you seen the film, Snow White and the Seven Dwarfs?

The next question you ask both groups:

How many countries are in Africa?

You should, hopefully, get higher estimated values from the first set and lower estimates from the second. Why? You have already primed them with a value: 80 for the first set and 7 for the second. Basically, you have passed an unconscious stimulus to them that provokes a response to a later, unrelated question.

I’ve done a number of these tests. For example, I asked people to give me an estimate of the price of a laptop. One group had ‘do not give a value below £800’ in the question, another had ‘do not give a value above £1500’, and a final group had no mention of an estimate limit. For the groups that had a ceiling or floor on the estimate, the responses sat in a small range: the ‘do not give a value below £800’ group mostly clustered around £1100, and the ‘do not give a value above £1500’ group clustered around £1300. For the group without a ‘prime’, the range was quite wide, from £600 to £2000. It is an interesting example of how you can influence people without them knowing!

I highly recommend the book Thinking, Fast and Slow; it shows how you understand and perceive numbers. It is focused on estimating and risk, but it goes through a wide range of topics that are eye-opening in terms of how your brain works.

Azure Databricks and CosmosDB: Tips on playing nice together

DBrick love Cosmos

I’ve been working on a project using CosmosDB as a data source, extracting the data from it into Azure Databricks. Once there, I apply some data and analytics magic to it. So this post is about some of the assumptions that I had about it, and some of the issues that I came across.

For setting up Databricks to get data from CosmosDB, the place to go is the Azure CosmosDB Spark connector site. I’m not going to go through installing it, as the readme and guidance on the GitHub do a good job and it is straightforward to do. There is also a tutorial here that deals with ‘On-time flight performance’; it covers how to connect and then process data, but doesn’t cover any of the issues you may have, as it uses a nice, conformed data structure. In the real world, we have all sorts of weird design options, taken over the lifetime of a product and/or service.

Theme ggplot2 to Power BI’s visual style

For a recent project, I’ve had to hit R scripting and use R visuals to plug a gap in Power BI (PBI). Even though PBI is very capable, it does not have the full range of statistical formulas that Excel has, so I’ve had to build linear regression formulas in DAX and calculate some coefficients in Power Query using R. For the visuals, I again hit some limits with Power BI: I needed to use a measure as an axis, and I also needed to show a polynomial trend line, so I had to use the R visual and the ggplot2 library to display the data.

I’ve not used R much. I’ve been on a SQL Bits training day about it, and one of my colleagues was quite good at it (however, they have since moved on), so it was a nice move out of my comfort zone, and I got to learn something too!

Note: If you want to follow this blog post you’ll need Microsoft R Open and the ggplot2 library installed. Also useful is the ggplot2 reference website.

In this example I’ve started with a blank PBI file and used the ‘Enter Data’ function to create a column with a single value in it. You don’t need to do this if you are using your own data; I just needed something to drag into the R visual to use as a data set. I’ll actually be using one of the built-in R data sets for the visual. You can download the example PBI file from my GitHub.

So let’s start with the basic setup in the R visual.

library(ggplot2)

#Base Chart uses the iris dataset
chart <-    ggplot(iris, aes(x = Petal.Width, y = Sepal.Length)) + 
            geom_point() + 
            stat_smooth()      

#display
chart

Which renders the following chart. (Note the rounded corners in the visual, thanks to the Feb PBI Desktop update.)

Let’s break this down. First it calls ggplot, using the ‘iris’ data set, and assigns Petal.Width and Sepal.Length to the relevant axes:

ggplot(iris, aes(x = Petal.Width, y = Sepal.Length))

This plots the points on the chart. Don’t miss this bit out; I did, and could not understand why I wasn’t seeing any data:

geom_point()

This adds the trend line and the shading:

stat_smooth()

So far so good. But it does not fit the Power BI style, and will look a bit out of place alongside the base PBI visuals. However, we can fix that. First we are going to add two variables called BaseColour and LightGrey and assign them hex colour values, so they can be used without having to recall the hex values. So the base code will look like:

library(ggplot2)

#BaseGrey
BaseColour = "#777777"
LightGrey = "#01B8AA"

#Base Chart
chart <-    ggplot(iris, aes(x = Petal.Width, y = Sepal.Length)) + 
            geom_point() + 
            stat_smooth()      

#display
chart

Nothing should change in the visual. Next I’m going to update the stat_smooth function to remove the shaded area and change the colour of the line.

stat_smooth(col = LightGrey, se=FALSE)

‘col’ assigns the line colour; ‘se=FALSE’ removes the shaded standard-error area.

For the next set of updates to the visual, we are going to update the theme setting, specifically the following:

  • Axis Text
  • Axis Ticks
  • Panel Grid
  • Panel Background

So let’s start with the text. If you poke around the PBI formatting settings you’ll come across the default font family, size and colour used in the standard PBI visuals. So next we are going to add the following.

First, set the ‘axis.text’ elements to the right size and font, and use the variable ‘BaseColour’:

axis.text = element_text(size=11, family="Segoe UI", colour=BaseColour)

Remove the axis ticks with:

axis.ticks = element_blank()

and set the axis titles, again setting the colour using the BaseColour variable:

axis.title = element_text(size=11, family="Segoe UI", colour=BaseColour)

These all need to be wrapped up in a theme() call, so the code now looks like:

library(ggplot2)

#BaseGrey
BaseColour = "#777777"
LightGrey = "#01B8AA"

chart <-    ggplot(iris, aes(x = Petal.Width, y = Sepal.Length)) + 
            geom_point() + 
            stat_smooth(col = LightGrey, se=FALSE) 

#Build theme
chart <- chart + theme(      axis.text = element_text(size=11, family="Segoe UI", colour=BaseColour)
                         ,   axis.ticks = element_blank()
                         ,   axis.title = element_text(size=11, family="Segoe UI", colour=BaseColour)     
                        )

#display
chart

Which hopefully should be getting close to the PBI look:

So just the grid lines and backdrop to sort out, which will be added to the theme setup as follows.

Set the grid for the ‘y’ axis.

panel.grid.major.y = element_line( size=.1, color=BaseColour )

Set the grid for the ‘x’ axis; basically, get rid of it using element_blank():

panel.grid.major.x = element_blank()

And set the backdrop to blank as well:

panel.background = element_blank()

which means the code should now look like:

library(ggplot2)

#BaseGrey
BaseColour = "#777777"
LightGrey = "#01B8AA"

chart <-    ggplot(iris, aes(x = Petal.Width, y = Sepal.Length)) + 
            geom_point() + 
            stat_smooth(col = LightGrey, se=FALSE) 

#Build theme
chart <- chart + theme(      axis.text = element_text(size=11, family="Segoe UI", colour=BaseColour)
                         ,   axis.ticks = element_blank()
                         ,   axis.title = element_text(size=11, family="Segoe UI", colour=BaseColour)     
                         ,   panel.grid.major.y = element_line( size=.1, color=BaseColour )  
                         ,   panel.grid.major.x = element_blank()   
                         ,   panel.background = element_blank()  
                        )

#display
chart

And it should match the look of the default Power BI style.

So with a few bits of code added to the theme, you can change the look of an R visual to match the default PBI theme, or whatever you want it to look like. I’m not an R expert by any measure, and there may be a better way of doing this, but it’s got me started, and hopefully it will get you started too.

Update: The BBC released a theme for R, along with a cookbook on how to do visuals; I may use this as a base for my next project.

Power Query – Combining Columns in M

In my Power BI courses I always recommend some books and sites that will help attendees learn DAX or M. Most of the time I’m presenting to users who are new to Power BI, having to get them to move away from the Excel muscle memory that they have, and also to show them that the code-less approach is not always the best way.

One example presented itself the other day. A user was combining some Excel sheets and wanted to create a rough dimension based on the values in one of the columns, then use it as a filter or axis. They hit the internet and came across a few posts on how to do this. A few recommended this approach:

1 – Append the two tables together as a new query
2 – Remove the columns you don’t need
3 – Remove duplicates

Well, that is sort of fine, as you can’t specify a single column in the table append, and it can all be done in the interface. However, it has a bit of overhead in stages 1 and 2, so can you do it so that you only reference the relevant columns and remove some of that overhead? The answer is ‘Yes’, but you need to code it in M. Using the Blank Query function I created this M code:

let
   Source = Table.Combine(
                            {
                                Table.SelectColumns(#"table name 1",{"column name"})
                            ,   Table.SelectColumns(#"table name 2",{"column name"})
                            ,   Table.SelectColumns(#"table name 3",{"column name"})
                            }
                        ),
    #"Removed Duplicates" = Table.Distinct(Source)
in
    #"Removed Duplicates"

So let’s look at the M code behind it. Rather than use a query as a reference, you can declare the table and column that you want with the following:

 Table.SelectColumns(#"table name 1",{"column name"})

You wrap these up in a Table.Combine, which appends the columns together (the equivalent of a SQL UNION ALL):

Table.Combine

then run a remove duplicates (Table.Distinct) to get the equivalent of a SQL UNION. You don’t have to duplicate whole tables/queries just to select the data and then trim them down to the columns that you want. As far as I can see you can’t do this in the interface; it’s only possible via M.
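For comparison, the interface-driven approach described above would generate something roughly like the following (the query and column names are the same placeholders as before); it appends the full tables first and only trims the columns afterwards, which is the overhead the hand-written version avoids.

let
    Appended = Table.Combine({#"table name 1", #"table name 2", #"table name 3"}),
    // only now do we drop everything except the column we actually want
    KeptColumn = Table.SelectColumns(Appended, {"column name"}),
    #"Removed Duplicates" = Table.Distinct(KeptColumn)
in
    #"Removed Duplicates"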

Power BI – Parameters & Data Sources

This blog post comes from a question raised at a Power BI training session, and deals with updating data source information, for example when moving from development to production, or when folder or file locations move. The question was asked as I was showing the attendees the ‘Load from Folder’ function in Power BI, which lets you dump monthly files into a folder and load them together. The question was ‘What if the location changes? Do you have to rebuild the report?’. I mentioned that you can edit the M code to change the location, but the user was a bit worried that doing that was a bit too ‘techy’, so I suggested a parameter instead. They were more than happy with that approach, it turns out.

So how to use parameters? Well, let’s fire up Power BI and enter Power Query by clicking on ‘Edit Queries’.

And then select the New Parameter option, which should bring up the following…

Now, there are a few options here; you can have a list of values, or items based on a query, but for this example we just need the basic version.

Type = Text
Current Value = The folder location that we are going to use to pull the files in

Click OK. So how do we use it when loading a folder?

Select the folder import, then select the little drop-down for the folder path.

Select the parameter (Or start the process from there, by creating a new one).

Click OK, and away it goes as if you typed in the folder location normally.

So if you need to change the location, you don’t have to edit the M code; just update the parameter via the Manage Parameters option, where we created the new parameter. Nice and easy, well, I think so.
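Behind the scenes the parameter is just a named value that the query’s M code references. As a minimal sketch (the parameter name FolderPath and the filter step are my own assumptions), a folder load that uses it looks something like this:

let
    // FolderPath is the text parameter created in the dialog above
    Source = Folder.Files(FolderPath),
    // keep just the CSV files before the usual combine steps that Power BI generates
    CsvFiles = Table.SelectRows(Source, each [Extension] = ".csv")
in
    CsvFiles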

Parameters can be used in a number of data sources, for example SQL Server connections.

So switching around DEV, UAT and Live servers should be a slightly less painful process than editing the M code.
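Again as a hedged sketch, with ServerName and DatabaseName being text parameters I have invented for the example, the source step for a SQL Server connection would reference them like this:

let
    // ServerName and DatabaseName are text parameters, e.g. "dev-sql01" and "Sales"
    Source = Sql.Database(ServerName, DatabaseName)
in
    Source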

Power BI – How to clear all slicers

ClearAllBookMark

I was delivering my standard Power BI training course, and there was a question from one of the users: ‘Is there a way of clearing all the slicers you have selected, so if you are doing something you can go back to the start?’ My answer was ‘No… wait, yes… you could use a bookmark to do it’.

So how to clear all slicers? First of all, in Power BI Desktop you need to go to the View ribbon and add the Bookmarks pane. Then make sure that all the slicers are clear, or in the default state that you want them; in this case I made sure that everything was deselected. Once that is done, click ‘Add’ on the Bookmarks pane. That will create the bookmark; I’ve renamed mine to ‘Default’.

BookmarksPane

Next, add a button to your report and allocate the ‘Default’ bookmark to the Action setting. I’ve selected the ‘Reset’ button and added the text ‘Click to reset’ on mine. Now select some items; once you are done and want to go back to the start, click on the button (Ctrl+Click in Power BI Desktop). That will apply the ‘Default’ bookmark and, in my case, clear all the slicers! A nice, quick and easy solution.