
Project:- Who are the Goodest Doggos? Wrangling & Analysing WeRateDogs Tweets to Find the Goodest Floofs

The Project

This project focused on wrangling data from the WeRateDogs Twitter account using Python, documented in a Jupyter Notebook (wrangle_act.ipynb), with the subsequent analysis in act_analysis_notebook.ipynb.

This Twitter account rates dogs with humorous commentary. The rating denominator is usually 10; the numerators, however, are usually greater than 10. WeRateDogs has over 4 million followers and has received international media coverage. Each day you can see good doggos, lots of floofers and many pupper images.

WeRateDogs downloaded their Twitter archive and sent it to Udacity via email exclusively for us to use in this project. This archive contains basic tweet data (tweet ID, timestamp, text, etc.) for all 5000+ of their tweets as they stood on August 1, 2017. The data is enhanced by a second dataset containing dog breed predictions for each Tweet. Finally, we used the Twitter API to gather further basic information about each Tweet, such as favourite and retweet counts.

Using this freshly cleaned WeRateDogs Twitter data, we can create interesting and trustworthy analyses and visualisations to communicate our findings.

The Python Notebooks and PDF reports written to communicate my project and findings can also be found here.

What Questions Are We Trying To Answer?

  • Q1. What correlations can we find in the data that make a good doggo?
  • Q2. Which are more popular: doggos, puppers, floofers or puppos?
  • Q3. Which are the most popular doggo breeds, and why is it Spaniels?

WeRateDogs @dog_rates


What Correlations can we find in the data that make a Good Doggo?

First, we wanted to determine whether there were any interesting relationships in the data. To do this we performed some correlation analysis and produced visuals to support it. Prior to the analysis, we assumed that Favourites and Retweets would be correlated, since both are ways to show your appreciation for a tweet on Twitter.

The output of our analysis is as follows:

This scatter plot matrix shows the relationships between each pair of variables. While there looks to be a strong linear relationship between Favourites and Retweets, no other relationships were highlighted.

With that in mind, we wanted to quantify these relationships to solidify our understanding.

Again, the heatmap above shows our correlation relationships, highlighting a strong relationship between Favourites and Retweets with a correlation coefficient of approximately r = 0.8.
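To illustrate, here is a minimal sketch of how a scatter plot matrix and correlation heatmap like the ones above might be produced with pandas and seaborn. The filename and column names (twitter_archive_master.csv, favorite_count, retweet_count, rating_numerator) are assumptions and may differ from the actual cleaned dataset.

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Assumed filename and column names for the cleaned master dataset
df = pd.read_csv("twitter_archive_master.csv")
numeric_cols = ["favorite_count", "retweet_count", "rating_numerator"]

# Scatter plot matrix of the numeric variables
pd.plotting.scatter_matrix(df[numeric_cols], figsize=(10, 10))
plt.show()

# Correlation matrix rendered as a heatmap
corr = df[numeric_cols].corr()
sns.heatmap(corr, annot=True, cmap="coolwarm", vmin=-1, vmax=1)
plt.title("Correlation between tweet metrics")
plt.show()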

Let’s narrow in on just that relationship.

With this chart we can see Favourites versus Retweets and its strong, positive linear relationship.

Observations

  • As we assumed, there is a strong linear relationship between Favourites and Retweets.
  • The correlation coefficient for this relationship is r = 0.797.
  • From the points we plotted, we cannot find any other correlations.
  • In future, we could categorise the source and dog_stage variables to investigate how they correlate with the popularity of the Tweet.

Which are more popular: doggos, puppers, floofers or puppos?

We performed some data wrangling on the tweet_archive dataset to consolidate the 4 different “classes” of doggo into one column, which would be easier to analyse.

These classes are fun terms used by WeRateDogs, so it would be really cool to see the popularity of these different types (dog_class = [doggo, pupper, floofer, puppo]).
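As a rough sketch of that consolidation step, assuming the archive has one column per dog stage containing either the stage name or the string 'None' (the real column names and cleaning steps may differ):

import pandas as pd
import numpy as np

archive = pd.read_csv("twitter_archive_enhanced.csv")  # assumed filename
stage_cols = ["doggo", "floofer", "pupper", "puppo"]   # assumed one column per stage

# Replace the placeholder 'None' strings with real missing values,
# then collapse the four stage columns into a single dog_class column
archive[stage_cols] = archive[stage_cols].replace("None", np.nan)
archive["dog_class"] = (
    archive[stage_cols]
    .apply(lambda row: ",".join(row.dropna()), axis=1)
    .replace("", np.nan)
)

print(archive["dog_class"].value_counts())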

Can we ascertain which category of dog is more popular?

Observations

  • Interestingly, as we look at Retweets and Favourites, Puppos are by far the most popular on average, with the highest numbers of favourites and retweets.
  • From the points we plotted, we can see that Puppers have the lowest numbers on average, although there are a lot of outliers.

Which are the most popular doggo breeds, and why is it Spaniels?

Everyone loves doggos, but we all have a different favourite kind. With so many to choose from, which breed really is the goodest doggo and why is it Spaniels?

By integrating the image_prediction data into our dataset, we have three columns denoting the probability of the image being a particular breed. This is some really interesting data to use; let’s use it to see if we can determine the popularity of certain breeds of doggo.
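As a hedged sketch of how the breed counts could be derived from the first prediction column, assuming the predictions file has p1, p1_conf and p1_dog columns (the filename and names may differ):

import pandas as pd

predictions = pd.read_csv("image_predictions.tsv", sep="\t")  # assumed filename

# Keep only the first prediction (p1), and only where the neural
# network thinks the image is a dog with reasonable confidence
likely_dogs = predictions[predictions["p1_dog"] & (predictions["p1_conf"] >= 0.5)]

# Tidy up the breed names and count the ten most common
top_breeds = (
    likely_dogs["p1"]
    .str.replace("_", " ")
    .str.title()
    .value_counts()
    .head(10)
)
print(top_breeds)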

Observations

  • We can see the most common types of dog here are Golden Retrievers and Labrador Retrievers, which seems sensible since these breeds are very common. Other dog breeds rounding out the top 5 are Chihuahuas, Pugs and Pembrokes.
  • We could also filter on the prediction probability to ensure it meets a minimum confidence level.
  • Some incorrect values, like seat belt, hamster and bath towel, still exist in the data, which we could clean given more time in future.
  • We only used the 1st prediction column; we may have been able to use all 3 to determine the overall probability or popularity of dog breeds.
  • There must be some mistake; Spaniels were not even in the top 10!?

Observations and Conclusion

  • During our analysis, we found that there is a strong linear relationship between the number of Favourites and the number of Retweets of a given Tweet. The correlation coefficient for this relationship is r = 0.797.
  • We did anticipate this relationship, since if a user enjoys a tweet they have the option to Favourite or Retweet it – both are a measure of the user’s enjoyment of the tweet.
  • We have also found through visualisation and data wrangling that the puppo is the most popular doggo, with, on average, more Retweets and more Favourites per tweet than the other 3 categories: Doggo, Floofer and Pupper.
  • Golden Retrievers are the goodest doggos; Labrador Retrievers, Pembrokes, Chihuahuas and Pugs complete the top 5 most common dog breeds in the data.

The Goodest Doggos

What We Learned

  • How to programmatically download files using the Python requests library
  • How to sign up for and use an API
  • How to use the tweepy library to connect Python to the Twitter API (see the sketch after this list)
  • How to handle JSON files in Python
  • How to manually and programmatically assess datasets and define Quality and Tidiness Issues
  • How to structure a report to document, define, and test Data Cleansing Steps
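By way of illustration, here is a minimal, hedged sketch of the download and API steps. The URL, credentials and tweet ID are placeholders, and it assumes tweepy's standard v1.1 API interface.

import json
import requests
import tweepy

# Programmatically download a file with requests (placeholder URL)
url = "https://example.com/image-predictions.tsv"
response = requests.get(url)
with open("image_predictions.tsv", "wb") as f:
    f.write(response.content)

# Connect to the Twitter API with tweepy (placeholder credentials)
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

# Fetch one tweet as JSON and pull out the favourite and retweet counts
tweet_id = 123456789  # placeholder tweet ID from the archive
status = api.get_status(tweet_id, tweet_mode="extended")
tweet_json = status._json
print(json.dumps(
    {
        "id": tweet_json["id"],
        "favorite_count": tweet_json["favorite_count"],
        "retweet_count": tweet_json["retweet_count"],
    },
    indent=2,
))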



Project:- Analyse A/B Test Results to Determine the Conversion Rate of a New Web Page

The Project

The project is part of the Udacity Data Analysis Nanodegree. The section of the course is a Project where we perform our own data analysis to determine whether a website should change its page design from an old page to a new page, based on the results of an A/B test on a subset of users.

The Project aims to bring together several concepts taught over the duration of the course. Applying them to the data set allows us to analyse the data and, using various statistical methods, determine the probability of a user converting based on whether they saw the old page or the new page.

The PDF report written to communicate my project and findings can also be found here.

What We Learned

  • Using proportions to find probability.
  • How to write hypothesis statements and use them to test against.
  • Writing out hypotheses and observations in accurate terminology
  • Using statsmodels to simulate 10,000 examples from a sample dataset, and finding differences from the mean (see the sketch after this list)
  • Plotting differences from the mean in a plt.hist histogram, and adding a reference line for the actual observed difference
  • Using Logistic Regression to determine probabilities for one of two possible outcomes
  • Creating dummy variables to make categorical variables usable in regression
  • Creating interaction variables to better represent attributes in combination for use in regression
  • Interpreting regression summary() results and drawing accurate conclusions and observations from them
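As a rough illustration of the simulation and regression steps, here is a sketch assuming a dataset with group and converted columns like the one used in the project (the filename and column names are assumptions):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm

df = pd.read_csv("ab_data.csv")  # assumed columns: group, converted (0/1)

# Observed difference in conversion rate (new page minus old page)
p_new = df.loc[df["group"] == "treatment", "converted"].mean()
p_old = df.loc[df["group"] == "control", "converted"].mean()
obs_diff = p_new - p_old

# Simulate 10,000 differences under the null hypothesis
# (both pages share the overall conversion rate)
p_null = df["converted"].mean()
n_new = (df["group"] == "treatment").sum()
n_old = (df["group"] == "control").sum()
null_diffs = (
    np.random.binomial(n_new, p_null, 10000) / n_new
    - np.random.binomial(n_old, p_null, 10000) / n_old
)

# Histogram of the null distribution with a line for the observed difference
plt.hist(null_diffs)
plt.axvline(obs_diff, color="red")
plt.show()

# p-value: proportion of simulated differences at least as large as observed
print((null_diffs > obs_diff).mean())

# Logistic regression with a dummy variable for the page the user saw
df["intercept"] = 1
df["ab_page"] = (df["group"] == "treatment").astype(int)
logit = sm.Logit(df["converted"], df[["intercept", "ab_page"]])
print(logit.fit().summary())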

Interesting Snippets

The Code and the Report

References


Project:- Data Analysis of Movie Releases, in Python

The Project

The project is part of the Udacity Data Analysis Nanodegree. The section of the course is a Project where we perform our own analysis on a data-set of our choosing from a prescribed list. I chose Movie Releases, a processed version of this dataset on Kaggle: https://www.kaggle.com/tmdb/tmdb-movie-metadata/data

The Project aims to bring together several concepts taught over the duration of the course. Applying them to the data set allows us to analyse several attributes and answer several questions that we ask of the data ourselves.

I downloaded the data from the above link. I then imported the data into Python so we could use a Jupyter Notebook to create the required report, which allows us to document and code in the same place – great for presenting findings and visualisations from the data.

I structured the project similarly to the CRISP-DM method – that is, I (i) stated the objectives, (ii) decided what questions to ask of the data, (iii) carried out tasks to understand the data, (iv) performed data wrangling and exploratory data analysis, and then drew conclusions and answered the questions posed.

The PDF report written to communicate my project and findings can also be found here.

What We Learned

  • Using hist() and plot() to build histogram visualisations
  • Using plotting.scatter_matrix and plot() to build scatter plot visualisations
  • Changing the figsize of a chart to a more readable format, and adding a ‘;’ to the end of the line to remove unwanted text
  • Renaming data frame Columns in Pandas
  • Using GroupBy and Query in Pandas to aggregate and group selections of data
  • Creating line charts, bar charts and heatmaps in matplotlib, and utilising Seaborn for better visuals and formatting, like adding appropriate labels, titles and colour (see the sketch after this list)
  • Using lambda functions to wrangle data formats
  • Structuring a report in a way that is readable and informative, taking the reader through conclusions drawn
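To give a flavour of the groupby, lambda and plotting steps, here is a hedged sketch against the TMDb data; the filename and column names (tmdb-movies.csv, genres, budget_adj, revenue_adj) are assumptions.

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

movies = pd.read_csv("tmdb-movies.csv")  # assumed filename and columns

# Split the pipe-separated genres with a lambda so each row has a single genre
movies["genres"] = movies["genres"].fillna("").apply(lambda g: g.split("|"))
exploded = movies.explode("genres").query("genres != ''")

# Average budget and revenue per genre
genre_stats = (
    exploded.groupby("genres")[["budget_adj", "revenue_adj"]]
    .mean()
    .sort_values("revenue_adj", ascending=False)
)

# Bar chart of average revenue by genre, with Seaborn styling and a readable figsize
sns.set_style("whitegrid")
genre_stats["revenue_adj"].plot(kind="bar", figsize=(12, 6), color="steelblue")
plt.title("Average Revenue by Genre")
plt.ylabel("Average Revenue (adjusted $)")
plt.xlabel("Genre");  # trailing ';' suppresses the extra text output in Jupyter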

Interesting Snippets

Average budget versus Average Revenue of Genres
Average RoI (Revenue by Budget)
Two dimensional analysis of Genres over Time, judging the average Budget, Revenue and Ratings
Top 10 Directors by their Total Revenue

The Code and the Report

  • GitHub repository for the data, SQL, PDF report and Jupyter Notebook
  • the PDF report can also be found here



Project:- Data Analysis of Wine Quality, in Python

The Project

The project is part of the Udacity Data Analysis Nanodegree. The section of the course is a Case Study on wine quality, using the UCI Wine Quality Data Set: https://archive.ics.uci.edu/ml/datasets/Wine+Quality

The Case Study introduces us to several new concepts which we can apply to the data set, allowing us to analyse several attributes and ascertain what properties of wine correspond to highly rated wines.

I downloaded the data from the above link. I then imported the data into Python so we could use a Jupyter Notebook to create the required report, which allows us to document and code in the same place – great for presenting findings and visualisations from the data.

I structured the project similarly to the CRISP-DM method – that is, I (i) stated the objectives, (ii) decided what questions to ask of the data, (iii) carried out tasks to understand the data, (iv) performed data wrangling and exploratory data analysis, and then drew conclusions and answered the questions posed.

The PDF report written to communicate my project and findings can also be found here.

What We Learned

  • Using hist() and plot() to build histogram visualisations
  • Using plotting.scatter_matrix and plot() to build scatter plot visualisations
  • Changing the figsize of a chart to a more readable format, and adding a ‘;’ to the end of the line to remove unwanted text
  • Appending data frames together in Pandas
  • Renaming data frame Columns in Pandas
  • Using GroupBy and Query in Pandas to aggregate and group selections of data
  • Creating bar charts in matplotlib and using Seaborn to add better formatting
  • Adding appropriate labels, titles and colour
  • Engineering proportions in the data that allow data sets to be compared more easily (see the sketch after this list)
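As a small sketch of the appending and proportion-engineering steps, assuming the two UCI files winequality-red.csv and winequality-white.csv (semicolon-separated, with a quality column):

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Load and append the red and white wine data sets
red = pd.read_csv("winequality-red.csv", sep=";")
white = pd.read_csv("winequality-white.csv", sep=";")
red["color"] = "red"
white["color"] = "white"
wine = pd.concat([red, white], ignore_index=True)

# Engineer a proportion so the differently sized data sets can be compared:
# the share of each colour's wines rated 7 or above
high_quality = (
    wine.assign(high=wine["quality"] >= 7)
    .groupby("color")["high"]
    .mean()
)

# Bar chart with Seaborn formatting, labels, title and colour
sns.set_style("whitegrid")
high_quality.plot(kind="bar", color=["darkred", "gold"])
plt.title("Proportion of Wines Rated 7+ by Colour")
plt.ylabel("Proportion")
plt.show()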

The Code and the Report

  • GitHub repository for the data, SQL, PDF report and Jupyter Notebook
  • the PDF report can also be found here





Applying CRISP-DM to Data Science and a Re-Usable Template in Jupyter

What is CRISP-DM?

CRISP-DM is a process methodology that provides a certain amount of structure for data-mining and analysis projects. It stands for Cross-Industry Standard Process for Data Mining. According to polls on the popular Data Science website KDnuggets, it is the most widely used process for data mining.

 

 

The process revolves around six major steps:

1. Business Understanding

Start by focusing on understanding the objectives and requirements from a business perspective, then use this knowledge to define the data problem and project plan.

2. Data Understanding

As with every data project, there is an initial hurdle to collect data and to familiarise yourself with it: identify data quality issues, discover initial insights, or detect interesting nuggets of information that might form a hypothesis for analysis.

3. Data Preparation

The nitty-gritty work of preparing the data by cleaning, merging and moulding it to form a final dataset that can be used in modeling.

4. Modeling

At this point we decide which modeling techniques to use and build the models.

5. Evaluation

Once we appear to have good enough model results, they need to be evaluated to ensure they perform well against unseen data and that all key business issues have been answered.

6. Deployment

At this stage we are ready to deploy our code representation of the model into a production environment and solve our original business problem.

Why Use It?

Puts the Business Problem First

One of the greatest advantages is that it puts business understanding at the centre of the project. This means we are concentrating on solving the business’s needs first and foremost and trying to deliver value to our stakeholders.

Commonality of Structured Solutions

As a manager of Data Scientists and Analysts, it also ensures that we stick to a common methodology that the team can follow to maintain optimal results, ensuring we have followed best practice and tackled common issues.

Flexible

It’s also not a rigid structure – it’s malleable, steps are repeatable, and often you will naturally go back through the steps to optimise your final data set for modeling.

It does not necessarily need to be mining or model related. Since so many business problems today require extensive data analysis and preparation, the methodology can flex to suit other categories of solution, like recommender systems, sentiment analysis and NLP, amongst others.

A Template in Jupyter

Since we have a structured process to follow, it is likely we have re-usable steps and components – ideal for re-usable code. What’s more, a Jupyter notebook can contain the documentation necessary for our business understanding and a description of each step in the process.

To that end, I have a re-usable notebook on my DataSci_Resources GitHub Repository here.
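This is not the template from the repository itself, but as a hedged sketch, a skeleton notebook with one section per CRISP-DM phase could be generated programmatically with the nbformat library:

import nbformat as nbf

sections = [
    "1. Business Understanding",
    "2. Data Understanding",
    "3. Data Preparation",
    "4. Modeling",
    "5. Evaluation",
    "6. Deployment",
]

# Build a notebook with a markdown heading and an empty code cell per phase
nb = nbf.v4.new_notebook()
nb.cells.append(nbf.v4.new_markdown_cell("# CRISP-DM Project Template"))
for title in sections:
    nb.cells.append(nbf.v4.new_markdown_cell(f"## {title}"))
    nb.cells.append(nbf.v4.new_code_cell("# TODO: add code for this phase"))

nbf.write(nb, "crisp_dm_template.ipynb")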



Using the Pareto Principle to Drive Value in Project Management

What is the Pareto Principle?

The Pareto Principle (also commonly known as the 80/20 principle) is an observation which states that 80 percent of outputs come from 20 percent of the inputs. It was first observed by the Italian economist Vilfredo Pareto, who noted that 80% of Italy’s wealth belonged to 20% of its population. He found that this principle held roughly true in other countries and situations as well.

The Pareto Principle is a neat way of describing distributions in real-life scenarios, and it holds roughly true in a vast array of situations. That is, the inputs in a scenario contribute unequally to its outputs.

For example:

  • A common adage in Computer Science is that 20% of features contribute 80% of usage
  • Microsoft noted that 20% of bugs contribute 80% of crashes, while also finding that 20% of effort contributed 80% of features
  • 20% of customers contribute to 80% of income
  • 20% of workers contribute 80% of the work

It’s not simply a case of investing the same amount of input and getting an equal value out.

pareto principle graph

(Better Explained)

Why Use the Pareto Principle?

I want to propose how valuable this observation is in project management, and to suggest using it to gain a massive return on investment by adhering to it as a principle – beyond simply understanding the underlying statistic – whether in your own life or in your work.

If we accept that 20% of the effort produces 80% of the results in a project or product; it conversely holds true that 80% of the effort produces only 20% of the results. In investment terms, that is a massive investment of a resource for an increasingly diminishing return on investment (law of diminishing returns) – you wouldn’t want your investment banker running those odds, so why adhere to it in life or in project management?

Instead of investing so much more in terms of effort and resource to ‘complete’ a project or product, we could focus primarily on the efforts that produce the majority of the results and forget the rest, or at least use this to make an informed decision to prioritise investments on other projects before coming back to ‘complete’ the project.

Considering this, with the 80% of resources saved, we can invest in further projects and products and get 80% return on each of them – huge returns for the same inputs! For example, the same 100 units of effort, spent as 20 units on each of five projects, could capture roughly 80% of the value of each, rather than 100% of the value of just one.


Conclusion

As project managers, it’s our responsibility to find the most efficient way to get projects completed. There is a set of tasks that generate a disproportionate amount of work.

With this in mind, I want you to consciously make a decision on how we allocate resources, rather than always aiming for the perfect final product. You may very well want the perfect product, but the key is that we have a choice.

For example:

  • Create 5 wire-frame prototypes instead of 1 detailed one
  • Build 5 features with 80% of the functionality rather than 1 perfect one
  • Find solutions to 5 bugs that solve the issue for 80% of users, rather than 1 solution that resolves it for everyone

That said, if we still need the final product 100% completed, it is about making an informed decision now that will optimise our investments – focus on the 20%’ers first that produce the best bang for our buck, re-prioritising as we see fit, before returning to attain 100%.


What is Amazon Web Services and Cloud Technology?

  1. What is a Cloud Platform?
  2. What is Amazon Web Services?
  3. Why Use a Cloud Platform?
  4. The Benefits of Cloud Computing
  5. How is Cloud Computing Changing Data?
  6. Some Technological Products of AWS
  7. Some Case Studies Using AWS
  8. Useful Links & Resources

I’ve recently been reading up on Amazon Web Services which is a cloud computing platform hosted by technology giants Amazon. I thought I’d write up and share what I’ve found to both cement my understanding and hopefully teach others at the same time.

What is a Cloud Platform?

First of all, what exactly is a Cloud Platform? A Cloud Platform, or Cloud Computing, essentially offers everything a normal server or computing architecture would, but securely via the internet. This means raw computing power, database storage, applications, content delivery and other functionality through the internet. Think of it more like a utility that you are renting – in the same way as your electricity or gas – only you are renting computing power, whether that is for storage, streaming or another service.

What is Amazon Web Services?

Amazon Web Services (or AWS for short) is a secure cloud platform offered by technology giants Amazon (you may have heard of them!). AWS offers huge computing power, massive database storage, content delivery and a wide suite of technologies supporting a wide range of other functionality, all of which is very easy to scale, grow and keep up to date.

According to Amazon:

“Amazon Web Services (AWS) provides on-demand computing resources and services in the cloud, with pay-as-you-go pricing. For example, you can run a server on AWS that you can log on to, configure, secure, and run just as you would a server that’s sitting in front of you.”

Why Use a Cloud Platform?

Traditionally, computing platforms for businesses would be locally hosted at the business, or off-site at another business-owned location. The business physically owns the entire infrastructure and architecture, and bears a large recurring cost to run, maintain, service, expand, upgrade and even power that hardware. The difference with a Cloud Platform is that the Cloud Host owns the computing platform and effectively rents it out to anyone who needs it, when they need it, meaning that businesses can save the cost of running their own platforms.

The Benefits of Cloud Computing

  1. Cost Savings: By hosting data centres and computing on the cloud, businesses can make significant cost savings compared with hosting these systems locally – savings on physical space, disaster recovery and utility power. What’s more, once on the cloud, cloud computing services are pay-as-you-go, meaning you only pay for the features and storage capacity that are used.
  2. Security: There is a misconception that cloud computing is less secure because your files and data are not stored locally on site and are instead accessed over the internet. This is counter to the truth; a cloud host’s primary concern is to carefully monitor security and keep it tight, employing the best tools and intellect. This is significantly more efficient than bespoke in-house security, since a business must divide its resources between many aspects of its technology concerns, security being only one. Additionally, a high percentage of data thefts are actually perpetrated by a company’s own employees, so it can actually be much safer to keep sensitive information off-site, where access is logged and locked behind security.
  3. Agility & Flexibility: Cloud computing is made remarkably easy for organisations. After all, making it easy is in the interest of the Cloud Host. Whenever the business needs to change anything to do with its architecture, a cloud-based service can be changed instantly – much quicker than undergoing an expensive and often complex change to your existing infrastructure. What’s more, Cloud Hosts are able to offer a massive breadth of different systems and tools, and can support many more through open source and third parties. All your needs are met simply through the click of a button – as and when you need it – or scaled up and down automatically based entirely on your usage.

How is Cloud Computing Changing Data?

Data is valuable. When you think about it, information, or intelligence, has always held value throughout history: census information has been collected for centuries for more efficient taxation, farm yields for feeding the population through winter, army troop counts, movements and equipment for waging war. Now we call it data – and every single piece of information, intelligence or data holds value. Among the millions of bits of information that surround every single action you, your business or your customer takes are nuggets of invaluable, actionable information just waiting to be identified and acted upon.

What has changed through time is the volume of data we can gather and store. With cloud computing, we can truly have Big Data, and have the storage capacity to collect every nugget of data we can and make it easy to analyse it for insight using analysis tools provided by the Cloud Host. Through these insights, a business can increase efficiencies and better understand their user or customer.

Some Technological Products of AWS

A handful of technologies that might interest a Data Engineer or Data Scientist:

  • Amazon RDS – Managed Relational Database Service for MySQL, PostgreSQL, Oracle, SQL Server and MariaDB
  • Amazon Redshift – Fast, Simple, Cost-effective Data Warehousing
  • Amazon ElastiCache – In-memory Caching System
  • Amazon EMR – Hosted Hadoop Framework
  • Amazon Kinesis – Work with Real-time Streaming Data
  • AWS Glue – Prepare and load data
  • Amazon Quicksight – Fast Business Analytics Service
  • Amazon SageMaker – Build, train, and deploy Machine Learning at Scale
  • Amazon Comprehend – discover insights and relationships in text
  • Amazon Lex – Build voice and Text chatbots

And many, many more

Some Case Studies Using AWS

  • Airbnb – “Airbnb believes that AWS saved it the expense of at least one operations position. Additionally, the company states that the flexibility and responsiveness of AWS is helping it to prepare for more growth”
  • Epic Games – “Creator of Fortnite, the multiplayer battle royale game that has become a global phenomenon, relies on AWS for its expansive infrastructure, unmatched reliability, and global scale”
  • Netflix – “AWS enables Netflix to quickly deploy thousands of servers and terabytes of storage within minutes. Users can stream Netflix shows and movies from anywhere in the world, including on the web, on tablets, or on mobile devices such as iPhones.”
  • Pinterest – “By using AWS, the company can maintain developer velocity and site scalability, manage multiple petabytes of data each day, and perform daily refreshes of its massive search index.”
  • Expedia – “By using AWS, Expedia has become more resilient. Expedia’s developers have been able to innovate faster while saving the company millions of dollars. Expedia provides travel-booking services across its flagship site Expedia.com and about 200 other travel-booking sites around the world.”

Useful Links & Resources

  1. https://aws.amazon.com/what-is-cloud-computing/
  2. https://aws.amazon.com/what-is-aws/
  3. https://aws.amazon.com/getting-started/
  4. https://www.salesforce.com/hub/technology/benefits-of-cloud/
  5. https://www.datameer.com/blog/cloud-changes-big-data-analytics-big-data-analytics-needs-change/

Book Review:- Data Science for Business


Data Science for Business: What you need to know about data mining and data-analytic thinking

by Foster Provost and Tom Fawcett

I started reading Data Science for Business back in May 2018, and it took me a few months to get through this one. It’s advertised as an introduction to Data Science concepts and techniques. I would say that those who are looking to broaden their knowledge into Data Science would benefit from having some familiarity with data and analysis, as it dives straight into techniques and concepts without much foundation. A background in data, or studying data simultaneously, would be of great benefit. However, for the keen developer or business associate looking to understand what Data Science can offer, this will be a good stop-gap. The engineer will likely be left understanding a bit more of the ‘Why’, but less of the ‘How’.

The book does a really good job of framing its knowledge in a business sense, which is absolutely vital for understanding real-world applications of Data Science. After all, the purpose of Data Science is to add value to the business. Each concept generally has a real-world, everyday business example to keep things relevant, while also bridging the gap between business and technology terminology – crucial for those learning.

I felt it was a very valuable deeper dive into the world of Data Science, particularly the early and latter chapters on the business benefits and the analytical-thinking mindset.

Other topics covered include:

  • Predictive modeling
  • Fitting a model to data
  • Avoiding Overfitting
  • Similarity, Neighbours and Clusters
  • Visualising performance
  • Evidence & Probabilities
  • Text mining