Write to Database using Tableau Prep and R

Hello Friends!

I know this functionality was just announced at the keynote for Tableau Prep at #Data19, but I’m impatient. So I developed a fairly simple workaround using R.

This is broken into three parts: the Tableau Prep setup, the database setup, and the R script.

My Setup

I'm using Tableau Prep 2019.3 and R 3.5.3, and inserting my data into a MariaDB database. For those who don't know, MariaDB is an open-source fork of MySQL, created when MySQL was bought by Oracle.

The Tableau Prep Section

For Tableau Prep, we're going to be loading the Excel file from #SportsVizSunday LIVE. We are loading the "Data" tab, and only a couple of columns. You can add others, but I chose three.

1.) R doesn't like spaces in column names, so once you load the file into Tableau Prep, remove the white space from the column names. I replaced the spaces with underscores, but you can simply delete them as well.

write_to_db_01

2.) I chose the school, mapping and total_expenses fields for a quick example.

3.) Add a Script step and an Output step like so:

write_to_db_02

I just set up a quick output to write to a CSV file (you won't actually need it).

Setting up the Table in the Database

Next, we need to create the table in the database. If you need MariaDB, download it from the website and follow the instructions in the installer.

Once it's up and running, create a database (mine is a simple one called tableau_test) and run a simple CREATE TABLE statement to build the table structure:

CREATE TABLE `ncaa_spend` (
`school2` VARCHAR(250) NULL DEFAULT NULL,
`mapping2` VARCHAR(250) NULL DEFAULT NULL,
`total_exp` INT(11) NULL DEFAULT NULL
)
COLLATE='latin1_swedish_ci'
ENGINE=InnoDB
;

Setting up the R Script

For this task, a lot of the heavy lifting is done in R, and it's really not that heavy.

You need the helper library, DBI, and after that the script is fairly simple.
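If you don't already have DBI and the RMariaDB driver it uses, a one-time install from CRAN covers both:

## One-time install of the packages used below
install.packages(c("DBI", "RMariaDB"))

With those installed, the script itself is just: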

library(DBI)

df2 <- data.frame(school2 = as.character(df$school)
,mapping2 = as.character(df$mapping)
,total_exp = as.integer(df$total_expenses));

con <- dbConnect(RMariaDB::MariaDB(), host = "localhost", user = "root", password = "root", dbname="tableau_test")
dbAppendTable(con, "ncaa_spend", df2)
dbDisconnect(con)

We then wrap it in a helper function, so we can use it in Prep:

write_table_prep2 <- function(df) {

library(DBI)

df2 <- data.frame(school2 = as.character(df$school)
,mapping2 = as.character(df$mapping)
,total_exp = as.integer(df$total_expenses));

con <- dbConnect(RMariaDB::MariaDB(), host = "localhost", user = "root", password = "root", dbname="tableau_test")
dbAppendTable(con, "ncaa_spend", df2)
dbDisconnect(con)
return(df2)
}
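To sanity-check the function outside of Prep, you can call it with a small hand-built data frame (a hypothetical one-row example; the column names match what Prep will pass in):

## Hypothetical quick test outside Tableau Prep
test_df <- data.frame(school = "Test University",
                      mapping = "Test Conference",
                      total_expenses = 1000000)
write_table_prep2(test_df)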

We also need our Tableau Prep helper function so we can define the new variables we created.

getOutputSchema <- function() {

return (data.frame (
school2 = prep_string (),
mapping2 = prep_string (),
total_exp = prep_int()

));
};

I then plopped it all into an R file called write_to_db_prep.R.

Putting it all together

In the script section of Tableau Prep, specify your R file and function. Here’s what mine looks like:

write_to_db_03

After clicking Run, you should have data in your table and data in a CSV file, which you can discard.

write_to_db_04
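If you want to double-check from R that the rows actually landed, a quick query does the trick (just a sketch, using the same connection settings as the script above):

library(DBI)

## Count the rows that were appended to ncaa_spend
con <- dbConnect(RMariaDB::MariaDB(), host = "localhost", user = "root", password = "root", dbname="tableau_test")
dbGetQuery(con, "SELECT COUNT(*) AS row_count FROM ncaa_spend")
dbDisconnect(con)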

That’s it! Now you can insert data into a database using Tableau Prep!

Any questions, reach out!

-Paul

How to Use Tableau Prep and R to scrape Starbucks locations

Hello Friends!

To be honest, I held out on using Tableau Prep for a long time. There are a few reasons for this, but with the release of Tableau Prep 2019.3 and its R and Python integration, I'm starting to come around.

In this blog post, I will show you how to scrape Starbucks locations off their store locator to visualize and analyze in Tableau.

Getting Started

To get started, we need to look at the Starbucks store locator on their website and use the developer tools in Chrome or Firefox.

Open up one of these browsers, and navigate to their store locator: Starbucks Store Locator. You should see something like this:

starbucks_scrape_01

If you click on the info for each location, you will see the Store Name, address, hours, amenities, etc.

starbucks_scrape_02

So how do we get this info without a manual copy/paste into a spreadsheet? Source code to the rescue!

1.) In Chrome, click the three vertical dots at the top right, then click More Tools and Developer Tools.

starbucks_scrape_03

2.) Now, this next part takes a little trial and error, and isn't really straightforward. Some store locators are driven off web services that you can tap into; others just embed the data in the page source. Starbucks does the latter. In the Elements panel of DevTools, I do a search (Ctrl + F) for the store I'm looking for. In this case, I'm looking for the "Pittsburgh Marriott City Ctr Lobby" to see if I can find something I can work with.

On the 5th hit, I find what I’m looking for: STRUCTURED JSON!!

starbucks_scrape_04

All we need to do is scrape this, parse the JSON, and we have our data.

Bringing in R/Building the R function

If you don't have R, there are numerous blogs on how to get it installed and running. For Tableau Prep, you need to download/install a library called Rserve, which allows Tableau to talk to R. That is all spelled out here.
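If Rserve isn't installed yet, it's a normal CRAN install (we'll actually start it later, right before connecting from Prep):

## One-time install; Rserve is what lets Tableau Prep talk to R
install.packages("Rserve")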

In R, we’re going to rely on a couple libraries:

library(rvest) ## Easy web scraper
library(jsonlite) ## parse the json into a dataframe
library(stringr) ## some string functions
library(tidyverse)
library(dplyr)
library(sqldf) ## writing sql instead of source R to query data frames
options(sqldf.driver = "SQLite")

Once we have these loaded, let's write some code and get some data!

## Pull the URL and html into R
starbucks_baseurl <- "https://www.starbucks.com/store-locator?place=15219"

page_html <- read_html(starbucks_baseurl)

json <- page_html %>%
    html_nodes("body > script:nth-child(5)") %>%
    html_text %>%
    str_split("window.__INTL_MESSAGES")

What we are doing here:

1.) Defining a new variable called json.

2.) Finding the html node for our json.

3.) Converting it to text.

4.) Stripping out the data we don’t need (str_split). If you look, you will see that there are multiple JSON definitions under that script tag, and we only need the first one. The second one starts with window.__INTL_MESSAGES, so we are splitting the string on that.

We need to clean up our JSON a little more before we can parse it.

json_clean <- substr(json[[1]][1],1,nchar(json[[1]][1])-1); ## removes a semi-colon at the end of the JSON
json_clean <- json_clean %>%
str_remove("window.__BOOTSTRAP = ") %>% ## removes the window.__BOOTSTRAP = string so we have clean JSON
trimws() ## helper function to remove white space at the beginning and end of our JSON.

Cool, now we are ready to parse some JSON!

df_json <- fromJSON(json_clean, simplifyDataFrame = TRUE)

#Extract out the store locations
df_json2 <- df_json$storeLocator$locationState$locations;

#build a data frame with some necessary data
df_json3 <- data.frame(name = as.character(df_json2$name)
,brand = as.character(df_json2$brandName)
,latitude = df_json2$coordinates$latitude
,longitude = df_json2$coordinates$longitude
,storeNumber = df_json2$storeNumber
,address = df_json2$address$streetAddressLine1
,city = df_json2$address$city
,state = df_json2$address$countrySubdivisionCode
,zip = df_json2$address$postalCode
,open_status = df_json2$open
);

But what if we want multiple zip codes?

At this point, we are ready to wrap this into a function and go into Tableau Prep. But I want to loop through a list of zip codes and create a comprehensive list. Easy! Let's just wrap a for loop around the whole thing and append the results to each other. In the end, we get something like this:

starbucks_scrape_tst <- function(df) {

  library(rvest)
  library(dplyr)
  library(sqldf) ## writing sql instead of source R to query data frames
  options(sqldf.driver = "SQLite")
  library(tidyverse)
  library(stringr)
  library(jsonlite)  

  df_json4 <- data.frame();

for(i in 1:nrow(df)){
  starbucks_baseurl <- paste0("https://www.starbucks.com/store-locator?place=", df$zip_code[i]);

  page_html <- read_html(starbucks_baseurl)

  json <- page_html %>%
    html_nodes("body > script:nth-child(5)") %>%
    html_text %>%
    str_split("window.__INTL_MESSAGES")

  json_clean <- substr(json[[1]][1],1,nchar(json[[1]][1])-1);
  json_clean <- json_clean %>%
    str_remove("window.__BOOTSTRAP = ") %>%
    trimws()

  df_json <- fromJSON(json_clean, simplifyDataFrame = TRUE)

  #Extract out the store locations
  df_json2 <- df_json$storeLocator$locationState$locations;

  df_json3 <- data.frame(name = as.character(df_json2$name)
                         ,brand = as.character(df_json2$brandName)
                         ,latitude = df_json2$coordinates$latitude
                         ,longitude = df_json2$coordinates$longitude
                         ,storeNumber = df_json2$storeNumber
                         ,address = df_json2$address$streetAddressLine1
                         ,city = df_json2$address$city
                         ,state = df_json2$address$countrySubdivisionCode
                         ,zip = df_json2$address$postalCode
                         ,open_status = df_json2$open
  );

  df_json4 <- rbind(df_json4,df_json3);
}

return(df_json4);
}
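Before heading into Prep, you can sanity-check the function with a tiny data frame of zip codes (a hypothetical test; 15219 is the zip used above and 15222 is just another Pittsburgh-area zip):

## Hypothetical quick test outside Tableau Prep
test_zips <- data.frame(zip_code = c("15219", "15222"), stringsAsFactors = FALSE)
result <- starbucks_scrape_tst(test_zips)
head(result)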

For Tableau Prep, we also need a little helper function. We need to tell Tableau what our data frame is made of and the data type of each column.

getOutputSchema <- function() {

return (data.frame (
name = prep_string (),
brand = prep_string(),
latitude = prep_decimal (),
longitude = prep_decimal (),
storeNumber = prep_string(),
address = prep_string(),
city = prep_string(),
state = prep_string(),
zip = prep_string(),
open_status = prep_bool()
));
}

Into Tableau Prep, finally!

Let’s open up Tableau Prep and pull our data.

First, we need to find a list of zip codes. There are a few out there, but I found this one.

Next, to keep things simple, I filtered to a list of only Pittsburgh zip codes, and renamed the field zip to zip_code.

I then added an aggregate step to get a unique list of zip codes, since zip codes cross county lines in some cases.

starbucks_scrape_05

I then added a script step off the aggregate step. Up comes this dialog:

starbucks_scrape_06

We need to connect to Rserve first. So go back into R and run

library(Rserve)
Rserve()

This will launch our R server.

Click on “Connect to Rserve Server” and this dialog should appear. If your port is empty, the default port for Rserve is 6311.

starbucks_scrape_07

Load your R file (I called mine starbucks_scrape.R) and tell Tableau the name of your scrape function (mine is called starbucks_scrape_tst).

I then set up an output step to a hyper file, and opened it up in Tableau Desktop:

starbucks_scrape_08

And finally, we have all the Starbucks locations for the Pittsburgh area.

Reach out with any questions!

Happy scraping!

-Paul


Full R-code

getOutputSchema <- function() { 

  return (data.frame (
    name = prep_string (),
    brand = prep_string(),
    latitude = prep_decimal (),
    longitude = prep_decimal (),
    storeNumber = prep_string(),
    address = prep_string(),
    city = prep_string(),
    state = prep_string(),
    zip = prep_string(),
    open_status = prep_bool()
  ));
}

starbucks_scrape_tst <- function(df) {

  library(rvest)
  library(dplyr)
  library(sqldf) ## writing sql instead of source R to query data frames
  options(sqldf.driver = "SQLite")
  library(tidyverse)
  library(stringr)
  library(jsonlite)  

  df_json4 <- data.frame();

for(i in 1:nrow(df)){
  starbucks_baseurl <- paste0("https://www.starbucks.com/store-locator?place=", df$zip_code[i]);

  page_html <- read_html(starbucks_baseurl)

  json <- page_html %>%
    html_nodes("body > script:nth-child(5)") %>%
    html_text %>%
    str_split("window.__INTL_MESSAGES")

  json_clean <- substr(json[[1]][1],1,nchar(json[[1]][1])-1);
  json_clean <- json_clean %>%
    str_remove("window.__BOOTSTRAP = ") %>%
    trimws()

  df_json <- fromJSON(json_clean, simplifyDataFrame = TRUE)

  #Extract out the store locations
  df_json2 <- df_json$storeLocator$locationState$locations;

  df_json3 <- data.frame(name = as.character(df_json2$name)
                         ,brand = as.character(df_json2$brandName)
                         ,latitude = df_json2$coordinates$latitude
                         ,longitude = df_json2$coordinates$longitude
                         ,storeNumber = df_json2$storeNumber
                         ,address = df_json2$address$streetAddressLine1
                         ,city = df_json2$address$city
                         ,state = df_json2$address$countrySubdivisionCode
                         ,zip = df_json2$address$postalCode
                         ,open_status = df_json2$open
  );

  df_json4 <- rbind(df_json4,df_json3);
}

return(df_json4);
}


Creating Custom Regions using PostGIS and Tableau

Welcome back, friends!

With the release of Tableau 2019.2 and the native support of PostGIS, I’ve been working on how to incorporate this new functionality into my day-to-day work.

This is similar to a problem that came up at work: How do you create custom outlines of regions based on store locations? I know this might seem trivial, but you could have situations where stores even in the same zip code are in different regions.

So how do you solve this? PostGIS to the rescue!

Getting started

If you aren’t familiar with PostGIS or how to get started with it, go back and read my blog post on it here.

I pulled down a list of Starbucks locations from POI Factory, and ran a simple clustering algorithm to simulate regions. You can download the Excel file here if you want to follow along.

First I’m going to upload these into PostGIS. I created a table with the following structure:

postgis_regions_01

Then I clicked on the table, chose Import/Export, and imported my data (after saving the file as a CSV from Excel).

Next, let's create our geometry column. I have a tendency to import my latitude and longitude as separate columns and then run a couple of SQL statements:

alter table starbucks add column geom geometry(Point, 4326);
update starbucks set geom=st_SetSrid(st_MakePoint(longitude, latitude), 4326);
create index starbucks_geom_idx on starbucks using gist (geom);

What this does in a few words:

1.) Adds a geometry column.

2.) Populates the new geometry column with the lat long fields.

3.) Creates a spatial index on the new geometry column.

Visualizing this in Tableau, we see something like this:

postgis_regions_02

Creating the custom regions

To create the custom regions, we're going to use Voronoi polygons. From Wikipedia, a Voronoi diagram is a partitioning of a plane into regions based on distance to points in a specific subset of the plane.

In plain English, we will be creating a region around each point that extends until it reaches the region of a neighboring point. Luckily, there's a function for this in PostGIS.

First we’re going to find the centroids for each region:

SELECT region, (ST_Dump(ST_Centroid(ST_collect(geom)))).geom::Geometry(point, 4326) AS geom
FROM starbucks
GROUP BY 1;

postgis_regions_03

Now, use a Voronoi diagram to get the actual dividing edges between the region centroids:

SELECT (ST_Dump(ST_VoronoiPolygons(ST_collect(geom)))).geom::Geometry(polygon, 4326) AS geom
FROM (SELECT region, (ST_Dump(ST_Centroid(ST_collect(geom)))).geom::Geometry(point, 4326) AS geom
FROM starbucks
GROUP BY 1) region_centers;

postgis_regions_04

As you can see, you get some weird results that cover a lot of ocean. Let's intersect it with the outline of the United States. I'm also going to materialize the Voronoi polygons into a table with a spatial index to speed up performance:

create table starbucks_voronoi AS
SELECT (ST_Dump(ST_VoronoiPolygons(ST_collect(centroid_geom)))).geom::Geometry(Polygon, 4326) AS geom
FROM (SELECT region, (ST_Dump(ST_Centroid(ST_collect(geom)))).geom::Geometry(point, 4326) AS centroid_geom
FROM starbucks
GROUP BY 1) region_centers;

CREATE INDEX starbucks_voronoi_gix
ON starbucks_voronoi USING GIST (geom);

SELECT (ST_Dump(ST_Intersection(a.geom, b.geom))).geom::Geometry(Polygon, 4326) AS geom
FROM tl_2018_us_state a
inner JOIN starbucks_voronoi b
on 1=1

You should get something that looks like this:

postgis_regions_05

Bringing it all together

Let’s tie our region numbers back to the new custom regions we created:

postgis_regions_06

The results aren't the prettiest, since not every "sub" region is populated with a Starbucks location.

Now adding our Starbucks locations back in:

postgis_regions_08

postgis_regions_07


That’s it. I’ll work on cleaning this up (including getting rid of the state outlines), but that gives you an idea of how to do it.

Happy mapping!

-Paul

Common Geospatial Tasks using PostGIS

Back in January, I gave a quick introduction on how to get started with PostGIS.

In this post, I'm going to talk about how to do simple geospatial or geoprocessing tasks in PostGIS. With native support coming in Tableau, I thought this would be a great opportunity to show the power and possibilities of PostGIS.

A little background

When I first got into geospatial 10+ years ago, practically the only software on the market was ESRI's ArcMap. In the last few years, with the influx of open-source software, I have transitioned about 90% of my processes away from ESRI to either QGIS or a database system.

When you are dealing with large datasets, you start to realize that ArcMap has performance issues when it comes to spatial joins and other geoprocessing tasks.

Getting Started

To get started, let’s look at some national park location data. I found a dataset on Kaggle that has the necessary location data.

First, you need to set up the table structure in PostGIS. I named the table national_parks and here’s how I set up the table:

postgis_2_02

Next, we have to load the data. Under Tables, find your national_parks table, right click on it, and follow the import prompts. After you import the data, the geom field will be empty.

To fill that geometry column in, we just have to run a simple update statement.

update national_parks set geom=st_SetSrid(st_MakePoint("Longitude", "Latitude"), 4326);

Now we have data in our database and we can start doing some basic PostGIS calculations.

Find places within x Distance

This is one of the most common calculations when it comes to analyzing data. We always want to know which places are within x distance of a location. With PostGIS this is pretty easy; it's just a simple query. Let's find the national parks within 1,000 miles of Pittsburgh.

SELECT a."Park_Name",
a."Latitude",
a."Longitude",
ST_DistanceSphere('SRID=4326;POINT(-80.0505401 40.431478)'::geometry,geom)/1609.344 as st_distanceSphere_in_miles
from public.national_parks a
where ST_DistanceSphere('SRID=4326;POINT(-80.0505401 40.431478)'::geometry,geom)/1609.344 <= 1000
order by st_distanceSphere_in_miles

The key part of this query is the ST_DistanceSphere function. This function calculates the distance in meters between two points. The 1609.344 is to convert meters to miles.

Point in Polygon

Let's say you want to know which state each national park falls in. In the previous post, I showed how to import a shapefile; now we are going to do a spatial join to find which state each park's point falls in.

SELECT a."Park_Name", a."State", b.stusps, b.name
from public.national_parks a
inner join public.tl_2018_us_state b
on ST_WITHIN(a.geom, b.geom)

The ST_WITHIN function is the important part of this query. ST_WITHIN returns true when the point lies inside the state polygon, so each park is joined only to the state it falls in.

Generate Random Points

Another cool thing you can do with PostGIS is to create random points within a state, county, block group, etc. The function is fairly simple:

SELECT stusps, name, geom, (ST_Dump(ST_GeneratePoints(geom, 1000))).geom::Geometry(point, 4326) AS geom_gen_points
FROM tl_2018_us_state
where stusps = 'PA'

What does this look like? (A little sneak peek at the Tableau 2019.2 Beta)

postgis_2_03
postgis_2_04

Or throwing them together on a dual-axis:

postgis_2_05

Why would you want to do this? A few reasons. Let's say you wanted to create a dot density map, like Sarah Battersby did back in May 2018.

Or let's say you have a 3-mile ring around a point, and you want to find the demographic attributes of that area. You could easily generate points within the ring, dump them, join them to their respective block groups, and then find the weighted demographics of that 3-mile ring.

In conclusion

This just scratches the surface of what PostGIS can do. There are countless more use cases, such as convex and concave hulls, intersections, and lengths. Maybe I'll get into those next time.

-Paul

Getting Started with analyzing NFL play-by-play data using nflscrapR

Hello again!

We're going to talk about analyzing NFL play-by-play data using the R package nflscrapR.

It was not long ago that NFL play-by-play data was hard to find, and if you did find it, it took a lot of cleanup before you could even analyze it. Now, thanks to Maksim Horowitz, Ron Yurko, and Sam Ventura, analyzing NFL data is easier than ever.

Getting started

If you don’t have R, you need to first download and install it. I prefer the Microsoft Open version of R mainly because of the multi-threaded math libraries.

Once you download that you need an IDE (Integrated development environment). I prefer RStudio.

Once you have those downloaded and installed, open up RStudio.

To install nflscrapR, you first need to install devtools, since we'll be installing nflscrapR from GitHub.

install.packages("devtools")
devtools::install_github(repo = "maksimhorowitz/nflscrapR")

Once that's installed, you can start pulling down some data. Let's look at Super Bowl 53. The scrape_season_play_by_play function will take a while, so you might want to walk away (or do something else) while it loads.

library(nflscrapR)

library(sqldf)

pbp_2018 <- scrape_season_play_by_play(2018, type = "post")

sb_53 <- sqldf("select * from pbp_2018 where game_id = '2019020300'")

#Get rid of null or missing win probability rows

sb_53 <- sqldf("select * from sb_53 where home_wp is not null and away_wp is not null")

At this point, you can either visualize it in R using base graphics (ugh) or using ggplot2. But I’m going to export it and throw it into Tableau.
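If you'd rather stay in R, a minimal ggplot2 sketch of the win probability chart would look something like this (assuming the sb_53 data frame from above with nflscrapR's game_seconds_remaining, home_wp, and away_wp columns):

library(ggplot2)

## Rough win probability chart; the x-axis is reversed so the game reads left to right
ggplot(sb_53, aes(x = game_seconds_remaining)) +
  geom_line(aes(y = home_wp), colour = "blue") +
  geom_line(aes(y = away_wp), colour = "red") +
  scale_x_reverse() +
  labs(x = "Game seconds remaining", y = "Win probability")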

To export the sb_53 data frame, you can either use the write.csv command or, since Tableau supports statistical file formats, use R's RData format.

save(sb_53, file = "sb_53.RData")

This will save it to your default directory (mine is Documents), but you can specify a directory by adding the path in front of your file name.
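For example (a hypothetical path; adjust it to your machine), you could write:

## Save to an explicit location instead of the default directory
save(sb_53, file = "C:/Users/paul/Documents/sb_53.RData")

## Or export a CSV instead
write.csv(sb_53, file = "sb_53.csv", row.names = FALSE)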

Visualizing in Tableau

Open up Tableau and click on Statistical file and choose your sb_53.RData file. Click on Sheet 1 to get started.

Let’s look at some win probability data. We need to clean it up a little.

First, make Game Seconds Remaining a Dimension.

1.) Drag Game Seconds Remaining to Columns and make it Continuous.

2.) Drag Home Wp to Rows. Right click -> Measure -> Maximum.

3.) If it’s not already a line under marks, make it a line.

4.) Right click on the Game Seconds Remaining axis and click Reversed.

You should see something like this:

nflscrapR_01

5.) Drag Away Wp to Rows. Right click -> Measure -> Maximum.

6.) Right click on Away Wp in your Rows shelf and click Dual Axis. Then right click on the Away Wp axis and synchronize the axes.

nflscrapR_02

The rest is just formatting.

nflscrapR_03

Here’s a link to the workbook on Tableau Public if you want to see how I formatted it.

This just scratches the surface of what's possible with nflscrapR, but it will get you started.

Reach out with any questions!

-Paul

 

What is Expected Value?

Welcome back friends!

Today we are going to talk about another probability and statistics concept called Expected Value.

Expected value is what you think it is: the return you can expect given the data or knowledge you already have.

Investors use expected value all the time. They make decisions to buy/sell a stock based on what the expected value is for that stock.

Without thinking about it, we intuitively use expected value when we make everyday decisions. We factor in the benefits and risks of a decision, and if it has a positive expected value, we are usually in favor of it; otherwise, we are against it.

For example, when we take on a new project (or a blog post, in this case), we view the expected value in terms of personal development and other career benefits as higher than the cost in terms of time and/or sanity.

Likewise, anyone who reads a lot knows that most books they choose will have minimal impact on them, while a few books will change their lives and be of tremendous value.

Looking at the required time and money as a cost, reading books has a positive expected value.

Back to math

Expected value tells us what we think the long-term average will be after adding many more trials or observations.

For example, if we flip a quarter 10 times, it’s probably not going to be 50/50. If you flip it 100 times, it’s still probably not going to be 50/50, but closer. But the more and more you flip the coin, the closer you will get to the 50/50 expected value of coin flipping.

Another great example of this is rolling a die. As we increase the sample size, we see that the probability of rolling any value approaches 1/6.
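A quick simulation in R (just a sketch) shows this long-run behavior:

set.seed(1)

## 100,000 die rolls: the share of sixes approaches 1/6 (about 0.167)
rolls <- sample(1:6, size = 100000, replace = TRUE)
mean(rolls == 6)

## The average of the rolls approaches the expected value of a die roll, 3.5
mean(rolls)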

Notation

I had to throw in some notation here for completeness. For discrete data (non-continuous), it’s pretty easy. It’s just the weighted average of the possible values and their respective probabilities.

μ = E(X) = ∑[x·P(x)]

where

μ = mean

E(X) = expected value

x = an outcome

P(x) = probability of that outcome

Let's work through an example. A company makes electronic gadgets. One out of every 50 gadgets is faulty, but the company doesn't know which ones are faulty until a buyer complains. Suppose the company makes a $3 profit on the sale of any working gadget, but suffers a loss of $80 for every faulty gadget because they have to repair the unit.

E(X) = 49/50 • 3 + 1/50 • (-80)

= 147/50 – 80/50

= 67/50

= 1.34

The expected value is $1.34 on every gadget made, and since it's positive, we can expect the company to make a profit.

Finding the expected value of a continuous variable – like one from a normal distribution – is a little more complex and involves calculus. We'll get into that later.
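As a quick preview, the continuous version simply swaps the sum for an integral over the probability density function: E(X) = ∫ x·f(x) dx.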

That’s all for now!

-Paul

10X Your Life

Never reduce a target. Instead, increase actions. When you start rethinking your targets, making up excuses, and letting yourself off the hook, you are giving up on your dreams! — Grant Cardone

A book I just finished reading is the 10X Rule: The Only Difference Between Success and Failure by Grant Cardone.


The premise is simple: Success is your duty and obligation and the only way to get there is to take massive amounts of action. By nature, he contends, we don’t set our goals high enough, because we are taught early in life to set obtainable goals. Where does that get us? We typically set goals where we are comfortable, and that doesn’t help us grow.

I found Cardone’s advice to be both simple and life-changing with actionable measures that could be applied to all facets of life.

I became so fired up and motivated as I listened to The 10X Rule, that I started setting goals, multiplying them, better utilizing my time, dreaming bigger and achieving more.

Key Concepts of the 10X Rule

Because I highly recommend this book to anyone looking to change their mindset and improve their success and position in life, I will only provide a summary.

Why?

In order to feel the full impact, you should read The 10X Rule for yourself. Here is a sneak peek:

Set bigger goals

The majority of us get stuck at normal levels of output because we don’t set high enough goals. We set easily attainable goals, reach them, yet we still feel unsatisfied. What if we multiplied our goals by 10X? For example, if you planned to produce one new original viz or learn one new technique in Tableau, you instead set a goal of 10 new vizzes or new techniques. Set bigger goals — your marriage goals, your workout goals, your Tableau goals and more. Setting higher targets for yourself will yield greater results.

Take massive action

Goals require follow through. Create 10X the vizzes you planned to produce, write 10X the blog posts you planned to write. Set high enough goals then take massive action to fulfill your true potential. Don’t let setbacks stop you, view them only as obstacles that you will overcome.

Don’t be average and change your thinking

In order to achieve greatness — it could be a better marriage, running a marathon, becoming a Zen Master — you need to believe that 10X is possible. Your thoughts and actions are the real reason you are where you are right now. Success is not something that happens to you; it's something that happens because of you.

Don’t keep limiting yourself, know that 10Xing your life is possible and go after your goals!!

Application in my life

Like I said earlier, I started applying this at work and in my life in general. One of the great chapters in his book is one on the 32 differences or traits between successful and unsuccessful people.

Many of those I took to heart, such as “having a can do attitude”, “Love Challenges” and “Be Uncomfortable.” But the one I liked the most was “Commit first and figure out the details later.”

Recently, there was a problem I was approached with at work. I had no clue how I was going to solve it or how I was going to get the data to solve it. But I committed. I didn't care. I eventually figured out the solution, and hopefully it leads to a great business opportunity.

In conclusion, I loved Cardone’s direct style and this really hit home. I highly recommend this book.

Cardone believes that we are all capable of bigger thinking, setting higher targets, taking massive action and realizing our full potential. No matter our background, where we started or where we came from, we all have the ability and choice to believe that 10Xing our life is possible. Although you may face adversity, Cardone encourages you to keep going. Success is not only possible, it's also your duty, obligation and responsibility.

-Paul