• The "groupby" method groups data by different categories. The data is grouped based on one or several variables and analysis is performed on the individual groups.


  • For example, let's group by the variable "drive-wheels". We see that there are 3 different categories of drive wheels.


  •  
    df['drive-wheels'].unique()
      
    
  • If we want to know, on average, which type of drive wheel is most valuable, we can group by "drive-wheels" and then average the price within each group.


  • We can select the columns 'drive-wheels', 'body-style' and 'price', then assign the result to the variable "df_group_one".


  •  
    df_group_one = df[['drive-wheels','body-style','price']]
      
    
  • We can then calculate the average price for each of the different categories of data.


  •  
    # grouping results; numeric_only restricts the mean to the numeric column ('price')
    df_group_one = df_group_one.groupby(['drive-wheels'], as_index=False).mean(numeric_only=True)
    df_group_one
      
    
  • From our data, it seems rear-wheel drive vehicles are, on average, the most expensive, while 4-wheel and front-wheel are approximately the same in price.
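

  • To confirm this ranking at a glance, we could sort the grouped result by average price; a minimal sketch, assuming "df_group_one" was computed as above:


  •  
    # sort the grouped averages so the most expensive drive type comes first
    df_group_one.sort_values('price', ascending=False)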


  • You can also group with multiple variables. For example, let's group by both 'drive-wheels' and 'body-style'.


  • This groups the dataframe by the unique combinations of 'drive-wheels' and 'body-style'. We can store the results in the variable 'grouped_test1'.


  •  
    # grouping results
    df_gptest = df[['drive-wheels','body-style','price']]
    grouped_test1 = df_gptest.groupby(['drive-wheels','body-style'], as_index=False).mean(numeric_only=True)
    grouped_test1
      
    
  • This grouped data is much easier to visualize when it is made into a pivot table. A pivot table is like an Excel spreadsheet, with one variable along the columns and another along the rows. We can convert the dataframe to a pivot table using the method "pivot".


  • In this case, we will leave the 'drive-wheels' variable as the rows of the table, and pivot 'body-style' to become the columns of the table:


  •  
    grouped_pivot = grouped_test1.pivot(index='drive-wheels',columns='body-style')
    grouped_pivot
      
    
  • Often, we won't have data for some of the pivot cells. We can fill these missing cells with the value 0, but any other value could potentially be used as well. It should be mentioned that missing data is quite a complex subject and is an entire course on its own.


  •  
    grouped_pivot = grouped_pivot.fillna(0) #fill missing values with 0
    grouped_pivot
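

  • As an aside, pandas can build the same kind of table in one step with "pivot_table", which accepts a "fill_value" argument; a minimal sketch, assuming the same dataframe "df" (the resulting column labels differ slightly from the groupby route):


  •  
    import pandas as pd
    
    # one-step alternative: aggregate by mean and fill empty cells in a single call
    grouped_pivot_alt = pd.pivot_table(df, values='price', index='drive-wheels',
                                       columns='body-style', aggfunc='mean', fill_value=0)
    grouped_pivot_alt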
      
    
  • If you have not imported "pyplot" yet, let's import it now (together with "numpy", which we will need for the tick positions below).


  •  
    import matplotlib.pyplot as plt
    import numpy as np   # used below for tick positions
    %matplotlib inline 
      
    
  • Variables: Drive Wheels and Body Style vs Price. Let's use a heat map to visualize how 'drive-wheels' and 'body-style' relate to price.


  •  
    #use the grouped results
    plt.pcolor(grouped_pivot, cmap='RdBu')
    plt.colorbar()
    plt.show()
      
    
  • The heatmap plots the target variable (price) as colour, with the variables 'drive-wheels' and 'body-style' on the vertical and horizontal axes respectively. This allows us to visualize how the price is related to 'drive-wheels' and 'body-style'.


  • The default labels convey no useful information to us. Let's change that:


  •  
    fig, ax = plt.subplots()
    im = ax.pcolor(grouped_pivot, cmap='RdBu')
    
    # label names: body styles along x (pivot columns), drive wheels along y (pivot index)
    x_labels = grouped_pivot.columns.levels[1]
    y_labels = grouped_pivot.index
    
    # move ticks and labels to the center of each cell
    ax.set_xticks(np.arange(grouped_pivot.shape[1]) + 0.5, minor=False)
    ax.set_yticks(np.arange(grouped_pivot.shape[0]) + 0.5, minor=False)
    
    # insert labels
    ax.set_xticklabels(x_labels, minor=False)
    ax.set_yticklabels(y_labels, minor=False)
    
    # rotate labels if too long
    plt.xticks(rotation=90)
    
    fig.colorbar(im)
    plt.show()
      
    
  • Visualization is very important in data science, and Python visualization packages provide great freedom. We will go more in-depth in a separate Python Visualizations course.


  • The main question we want to answer in this module is: "What are the main characteristics which have the most impact on the car price?"


  • To get a better measure of the important characteristics, we look at the correlation of these variables with the car price; in other words: how dependent is the car price on each of these variables?


  • Correlation: a measure of the extent of interdependence between variables.


  • Causation: the cause-and-effect relationship between two variables.


  • It is important to know the difference between these two, and that correlation does not imply causation. Determining correlation is much simpler than determining causation, as causation may require independent experimentation.


  • Pearson Correlation: The Pearson Correlation measures the linear dependence between two variables, X and Y.


  • The resulting coefficient is a value between -1 and 1 inclusive, where (a short sketch on toy data follows this list):


    1. 1: Total positive linear correlation.


    2. 0: No linear correlation; the two variables are most likely not linearly related.


    3. -1: Total negative linear correlation.
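

  • A minimal sketch of the two extremes on toy data (any perfectly linear series behaves this way):


  •  
    import numpy as np
    from scipy import stats
    
    x = np.arange(10.0)
    r_pos, _ = stats.pearsonr(x, 2 * x + 1)   # perfectly increasing line -> 1.0
    r_neg, _ = stats.pearsonr(x, -3 * x + 5)  # perfectly decreasing line -> -1.0
    print(r_pos, r_neg)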


  • Pearson Correlation is the default method of the function "corr". As before, we can calculate the Pearson Correlation of the 'int64' or 'float64' variables.


  •  
    # Pearson correlation of the numeric ('int64'/'float64') columns
    df.corr(numeric_only=True)
      
    
  • Sometimes we would like to know the significance of the correlation estimate.


  • P-value: What is this P-value? The P-value is the probability of observing a correlation at least as strong as the one we measured, assuming the two variables are actually uncorrelated. Normally, we choose a significance level of 0.05: when the p-value falls below it, we are 95% confident that the correlation between the variables is significant.


  • By convention (see the helper sketched after this list), when the


    1. p-value is < 0.001: we say there is strong evidence that the correlation is significant.


    2. p-value is < 0.05: there is moderate evidence that the correlation is significant.


    3. p-value is < 0.1: there is weak evidence that the correlation is significant.


    4. p-value is > 0.1: there is no evidence that the correlation is significant.
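

  • These thresholds are easy to encode; a tiny hypothetical helper (the function name is ours, not part of any library):


  •  
    def evidence_of_significance(p_value):
        """Map a p-value to the conventional evidence wording above."""
        if p_value < 0.001:
            return "strong evidence"
        elif p_value < 0.05:
            return "moderate evidence"
        elif p_value < 0.1:
            return "weak evidence"
        return "no evidence"
    
    evidence_of_significance(0.03)   # -> 'moderate evidence'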


  • We can obtain this information using the "stats" module in the "scipy" library.


  •  
    from scipy import stats
      
    
  • Wheel-base vs Price: Let's calculate the Pearson Correlation Coefficient and P-value of 'wheel-base' and 'price'.


  •  
    pearson_coef, p_value = stats.pearsonr(df['wheel-base'], df['price'])
    print("The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P =", p_value)  
    The Pearson Correlation Coefficient is 0.5846418222655081  with a P-value of P = 8.076488270732955e-20
      
    
  • Conclusion: Since the p-value is < 0.001, the correlation between wheel-base and price is statistically significant, although the linear relationship isn't extremely strong (~0.585).


  • Horsepower vs Price: Let's calculate the Pearson Correlation Coefficient and P-value of 'horsepower' and 'price'.


  •  
    pearson_coef, p_value = stats.pearsonr(df['horsepower'], df['price'])
    print("The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P = ", p_value)  
    The Pearson Correlation Coefficient is 0.8095745670036559  with a P-value of P =  6.36905742825998e-48
      
    
  • Conclusion: Since the p-value is < 0.001, the correlation between horsepower and price is statistically significant, and the linear relationship is quite strong (~0.809, close to 1).


  • Length vs Price: Let's calculate the Pearson Correlation Coefficient and P-value of 'length' and 'price'.


  •  
    pearson_coef, p_value = stats.pearsonr(df['length'], df['price'])
    print("The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P = ", p_value)  
    The Pearson Correlation Coefficient is 0.690628380448364  with a P-value of P =  8.016477466159053e-30
      
    
  • Conclusion: Since the p-value is < 0.001, the correlation between length and price is statistically significant, and the linear relationship is moderately strong (~0.691).


  • Width vs Price: Let's calculate the Pearson Correlation Coefficient and P-value of 'width' and 'price':


  •  
    pearson_coef, p_value = stats.pearsonr(df['width'], df['price'])
    print("The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P =", p_value ) 
    The Pearson Correlation Coefficient is 0.7512653440522674  with a P-value of P = 9.200335510481426e-38
      
    
  • Conclusion: Since the p-value is < 0.001, the correlation between width and price is statistically significant, and the linear relationship is quite strong (~0.751).


  • Curb-weight vs Price: Let's calculate the Pearson Correlation Coefficient and P-value of 'curb-weight' and 'price':


  •  
    pearson_coef, p_value = stats.pearsonr(df['curb-weight'], df['price'])
    print( "The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P = ", p_value)  
    The Pearson Correlation Coefficient is 0.8344145257702846  with a P-value of P =  2.1895772388936997e-53
      
    
  • Conclusion: Since the p-value is < 0.001, the correlation between curb-weight and price is statistically significant, and the linear relationship is quite strong (~0.834).


  • Engine-size vs Price: Let's calculate the Pearson Correlation Coefficient and P-value of 'engine-size' and 'price':


  •  
    pearson_coef, p_value = stats.pearsonr(df['engine-size'], df['price'])
    print("The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P =", p_value) 
    The Pearson Correlation Coefficient is 0.8723351674455185  with a P-value of P = 9.265491622197996e-64
      
    
  • Conclusion: Since the p-value is < 0.001, the correlation between engine-size and price is statistically significant, and the linear relationship is very strong (~0.872).


  • Bore vs Price: Let's calculate the Pearson Correlation Coefficient and P-value of 'bore' and 'price':


  •  
    pearson_coef, p_value = stats.pearsonr(df['bore'], df['price'])
    print("The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P =  ", p_value ) 
    The Pearson Correlation Coefficient is 0.5431553832626602  with a P-value of P =   8.049189483935364e-17
      
    
  • Conclusion: Since the p-value is < 0.001, the correlation between bore and price is statistically significant, but the linear relationship is only moderate (~0.543).


  • We can repeat the process for 'city-mpg' and 'highway-mpg'. City-mpg vs Price:


  •  
    pearson_coef, p_value = stats.pearsonr(df['city-mpg'], df['price'])
    print("The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P = ", p_value)  
    The Pearson Correlation Coefficient is -0.6865710067844677  with a P-value of P =  2.3211320655676368e-29
      
    
  • Conclusion: Since the p-value is < 0.001, the correlation between city-mpg and price is statistically significant, and the coefficient of ~ -0.687 shows that the relationship is negative and moderately strong.


  • Highway-mpg vs Price


  •  
    pearson_coef, p_value = stats.pearsonr(df['highway-mpg'], df['price'])
    print( "The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P = ", p_value ) 
    The Pearson Correlation Coefficient is -0.7046922650589529  with a P-value of P =  1.7495471144476807e-31
      
    
  • Conclusion: Since the p-value is < 0.001, the correlation between highway-mpg and price is statistically significant, and the coefficient of ~ -0.705 shows that the relationship is negative and moderately strong.
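

  • Rather than repeating the same cell for every variable, the whole battery of tests can be run in one loop; a minimal sketch, assuming "df" has the columns used above with no missing values:


  •  
    from scipy import stats
    
    # compute Pearson r and p-value for each candidate predictor against price
    candidates = ['wheel-base', 'horsepower', 'length', 'width', 'curb-weight',
                  'engine-size', 'bore', 'city-mpg', 'highway-mpg']
    for col in candidates:
        pearson_coef, p_value = stats.pearsonr(df[col], df['price'])
        print(f"{col}: r = {pearson_coef:.3f}, p = {p_value:.3g}")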


  • ANOVA: Analysis of Variance. The Analysis of Variance (ANOVA) is a statistical method used to test whether there are significant differences between the means of two or more groups. ANOVA returns two parameters:


  • F-test score: ANOVA assumes the means of all groups are the same, calculates how much the actual means deviate from the assumption, and reports it as the F-test score. A larger score means there is a larger difference between the means.


  • P-value: The P-value tells us how statistically significant our calculated score value is.


  • If our price variable is strongly correlated with the variable we are analyzing, expect ANOVA to return a sizeable F-test score and a small p-value.


  • Drive Wheels: Since ANOVA analyzes the difference between different groups of the same variable, the groupby function will come in handy. Because the ANOVA algorithm averages the data automatically, we do not need to take the average beforehand.


  • To see whether different types of 'drive-wheels' impact 'price', we group the data.


  •  
    grouped_test2 = df_gptest[['drive-wheels', 'price']].groupby(['drive-wheels'])
    grouped_test2.head(2)
    
    # the original (ungrouped) dataframe, for comparison
    df_gptest
      
    
  • We can obtain the values of one of the groups using the method "get_group".


  •  
    grouped_test2.get_group('4wd')['price']
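

  • To sanity-check the groups before running ANOVA, we can peek at each group's summary statistics with pandas' standard "describe":


  •  
    # per-group summary of 'price' (count, mean, std, quartiles) for fwd, rwd and 4wd
    grouped_test2['price'].describe()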
      
    
  • We can use the function 'f_oneway' in the module 'stats' to obtain the F-test score and P-value.


  •  
    # ANOVA
    f_val, p_val = stats.f_oneway(grouped_test2.get_group('fwd')['price'], grouped_test2.get_group('rwd')['price'], grouped_test2.get_group('4wd')['price'])  
     
    print( "ANOVA results: F=", f_val, ", P =", p_val)  
    
    
    ANOVA results: F= 67.95406500780399 , P = 3.3945443577151245e-23
      
    
  • This is a great result: the large F-test score shows a strong correlation, and a P-value of almost 0 implies almost certain statistical significance. But does this mean all three tested groups are this highly correlated?


  •  
    # Separately: fwd and rwd
    f_val, p_val = stats.f_oneway(grouped_test2.get_group('fwd')['price'], grouped_test2.get_group('rwd')['price'])  
     
    print( "ANOVA results: F=", f_val, ", P =", p_val )
    ANOVA results: F= 130.5533160959111 , P = 2.2355306355677845e-23
      
    
  • Let's examine the other groups:


  •  
    # 4wd and rwd
    f_val, p_val = stats.f_oneway(grouped_test2.get_group('4wd')['price'], grouped_test2.get_group('rwd')['price'])  
       
    print( "ANOVA results: F=", f_val, ", P =", p_val)   
    ANOVA results: F= 8.580681368924756 , P = 0.004411492211225333
      
    
     
    # 4wd and fwd
    f_val, p_val = stats.f_oneway(grouped_test2.get_group('4wd')['price'], grouped_test2.get_group('fwd')['price'])  
     
    print("ANOVA results: F=", f_val, ", P =", p_val)   
    ANOVA results: F= 0.665465750252303 , P = 0.41620116697845666
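

  • The three pairwise comparisons can also be generated in one loop; a short sketch using the standard-library "itertools" ("stats" is the scipy module imported earlier):


  •  
    from itertools import combinations
    
    # run f_oneway for every pair of drive-wheel groups
    for a, b in combinations(['fwd', 'rwd', '4wd'], 2):
        f_val, p_val = stats.f_oneway(grouped_test2.get_group(a)['price'],
                                      grouped_test2.get_group(b)['price'])
        print(f"{a} vs {b}: F = {f_val:.2f}, P = {p_val:.3g}")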
      
    
  • Conclusion: Important Variables. We now have a better idea of what our data looks like and which variables are important to take into account when predicting the car price. We have narrowed it down to the following variables:


  • Continuous numerical variables:


    1. Length


    2. Width


    3. Curb-weight


    4. Engine-size


    5. Horsepower


    6. City-mpg


    7. Highway-mpg


    8. Wheel-base


    9. Bore


  • Categorical variables: Drive-wheels


  • As we now move into building machine learning models to automate our analysis, feeding the model with variables that meaningfully affect our target variable will improve our model's prediction performance.
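

  • As a first step toward modeling, we could assemble the shortlisted variables into a feature matrix; a minimal sketch, assuming "df" still holds the cleaned automobile data (the one-hot encoding of 'drive-wheels' is our choice of representation, not prescribed above):


  •  
    import pandas as pd
    
    continuous_features = ['length', 'width', 'curb-weight', 'engine-size', 'horsepower',
                           'city-mpg', 'highway-mpg', 'wheel-base', 'bore']
    
    # one-hot encode the categorical variable so a numeric model can consume it
    features = pd.concat([df[continuous_features],
                          pd.get_dummies(df['drive-wheels'], prefix='drive-wheels')],
                         axis=1)
    target = df['price']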