Techniques to increase the R-squared value in polynomial regression

To increase the R-squared value in polynomial regression, there are several strategies to improve the model’s fit to the data and better capture the underlying relationships between the independent and dependent variables. Some approaches to consider are:

  1. Higher Polynomial Degrees: Increasing the polynomial degree allows the model to capture more complex relationships in the data. However, we have to be cautious not to overfit the data by selecting a degree that is too high.
  2. Feature Engineering: Add additional relevant features to the model. New features may help explain more variance in the dependent variable, and domain knowledge can guide the identification of meaningful additional features.
  3. Interaction Terms: Interaction terms capture the combined effect of two or more independent variables. Including interaction terms can help capture more nuanced relationships in the data.
  4. Outlier Handling: Identify and address outliers in the dataset. Outliers can disproportionately influence the regression model and reduce R2. We can either remove outliers or use robust regression techniques that are less sensitive to outliers.
  5. Feature Scaling: Ensure that the features are appropriately scaled. Some algorithms, like polynomial regression, can be sensitive to the scale of the input features. Standardize or normalize the features to have similar scales.
  6. Data Quality: Ensure that our dataset is of high quality, free from missing values and data errors. Poor data quality can lead to misleading results and a lower R-squared.
  7. Residual Analysis: Examine the residuals – the differences between actual and predicted values. We should look for patterns or systematic errors in the residuals. If we find patterns, it may indicate that the model is not capturing some important relationships.
  8. Model Selection: Consider exploring other regression algorithms or machine learning models that may better suit our data. Different algorithms have different strengths, and one model may perform better than polynomial regression for our specific problem.
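To make techniques 1, 3, and 5 concrete, here is a minimal scikit-learn sketch on synthetic data. The dataset, variable names, and degree are illustrative assumptions, not the course data:

```python
# Sketch of higher polynomial degree + interaction terms + feature scaling.
# The data here is synthetic; in practice X and y come from our dataset.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 2))  # two independent variables
# True relationship includes a squared term and an interaction term
y = 1 + 2 * X[:, 0] - X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 1] + rng.normal(0, 0.3, 200)

# degree=2 adds squared terms AND the interaction term x1*x2;
# StandardScaler then puts all generated features on a similar scale.
model = make_pipeline(PolynomialFeatures(degree=2), StandardScaler(), LinearRegression())
model.fit(X, y)
print(f"Training R^2: {model.score(X, y):.3f}")
```

Because the degree-2 features match the true data-generating process here, the training R-squared comes out high; on real data the gain from each technique has to be checked with cross-validation.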

Polynomial Regression and Cross Validation

Polynomial regression is a type of regression analysis used in statistics and machine learning to model the relationship between a dependent variable (target) and one or more independent variables (predictors) as an nth-degree polynomial function. In simple terms, it extends linear regression by allowing the relationship between the variables to be more complex, capturing non-linear patterns in the data. It allows for more flexibility by introducing higher-order terms of the independent variable(s). The equation for a polynomial regression model of degree n can be represented as

y = b₀ + b₁x + b₂x² + … + bₙxⁿ, where

  • y is still the dependent variable.
  • x is the independent variable.
  • b₀ is the intercept.
  • b₁, …, bₙ are the coefficients of the polynomial terms.

  • Evaluated a range of 5 polynomial degrees. For each degree, we created polynomial features, fit a polynomial regression model, and performed cross-validation to obtain R-squared scores.
  • Plotted the learning curve to visualize how the cross-validation score changes with the polynomial degree.
  • Identified the best degree with the highest cross-validation R-squared score.
  • From the graph below, we can conclude that the best-fitting degree for the present data is 2.
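The degree search described above can be sketched in Python with scikit-learn. The dataset here is a synthetic stand-in for the actual data (built with a true degree of 2), so the numbers are illustrative only:

```python
# Sketch: compare polynomial degrees 1..5 by mean cross-validated R^2
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.uniform(-2, 2, size=(150, 1))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 0] ** 2 + rng.normal(0, 0.4, 150)  # true degree is 2

scores = {}
for degree in range(1, 6):  # degrees 1 through 5
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    scores[degree] = cross_val_score(model, X, y, cv=5, scoring="r2").mean()

best_degree = max(scores, key=scores.get)
print("Mean CV R^2 per degree:", {d: round(s, 3) for d, s in scores.items()})
print("Best degree:", best_degree)
```

Plotting `scores` against the degree gives the learning curve mentioned above; the degree with the highest mean cross-validated R-squared is the one to keep.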

K-fold validation and Estimating Prediction Error

K-Fold Cross-Validation: K-fold cross-validation, like other cross-validation techniques, is used in machine learning and statistics to assess the performance of a predictive model and to reduce the risk of overfitting. These techniques all involve splitting a dataset into multiple subsets, training and evaluating the model on different subsets, and then aggregating the results; they differ in how the splits are made.

  • K-fold cross-validation is a technique where the dataset is divided into K equally sized folds or subsets.
  • The model is trained and evaluated K times, with each fold serving as the test set once while the remaining K-1 folds are used for training.
  • The results from the K iterations are typically averaged to obtain a single performance metric, such as accuracy or mean squared error.
  • This technique helps in assessing how well a model generalizes to different subsets of data and reduces the risk of overfitting, since the model is evaluated on different data partitions. Example: in 5-fold cross-validation, the dataset is split into 5 subsets, and the model is trained and tested on each subset separately.
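The steps in the bullets above can be written out explicitly with scikit-learn's KFold; the data and model here are illustrative assumptions:

```python
# Sketch: 5-fold cross-validation done "by hand" to show each step
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.2, 100)

kf = KFold(n_splits=5, shuffle=True, random_state=1)
fold_scores = []
for train_idx, test_idx in kf.split(X):
    # each fold serves as the test set once; the other 4 folds train the model
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    fold_scores.append(r2_score(y[test_idx], model.predict(X[test_idx])))

print("Per-fold R^2:", np.round(fold_scores, 3))
print("Mean R^2:", round(float(np.mean(fold_scores)), 3))  # the aggregated metric
```

In practice `cross_val_score` wraps this loop in one call; the explicit version just makes the train/test rotation visible.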

Estimating Prediction Error: Estimating prediction error and the validation set approach are important concepts in the context of model evaluation and selection in machine learning. They are used to assess how well a predictive model is likely to perform on unseen data. Let’s explore these concepts:

  • The prediction error of a machine learning model refers to how well the model’s predictions match the true values in the dataset.
  • The primary goal of estimating prediction error is to understand how well the model generalizes to new, unseen data. A model that performs well on the training data but poorly on new data is said to have high prediction error, indicating overfitting.
  • There are various techniques to estimate prediction error, including cross-validation, which we discussed earlier, as well as techniques like bootstrapping.
  • Common metrics used to measure prediction error include mean squared error (MSE) for regression problems and accuracy, precision, recall, F1-score, etc., for classification problems.
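As a small illustration of the validation set approach for estimating prediction error (synthetic data; MSE is the metric since this is a regression problem):

```python
# Sketch: estimate prediction error on a held-out test set
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 2))
y = 3 * X[:, 0] + X[:, 1] + rng.normal(0, 0.5, 200)

# hold out 20% of the data that the model never sees during training
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=7)
model = LinearRegression().fit(X_train, y_train)

train_mse = mean_squared_error(y_train, model.predict(X_train))
test_mse = mean_squared_error(y_test, model.predict(X_test))  # estimate of prediction error
print(f"Train MSE: {train_mse:.3f}, Test MSE: {test_mse:.3f}")
```

A test MSE much larger than the train MSE would be the overfitting signature described above; here the model is well specified, so the two stay close.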

Exploring the Relationship Between Obesity, Physical Inactivity, and Diabetes Rates Using Decision Tree

In this data-driven analysis, we explore the relationship between obesity, physical inactivity, and diabetes rates across various counties in the United States. Our primary goal is to fit a Decision Tree regression model. We split the data into training and testing sets, with 80% used for training and 20% for testing, and chose to focus our analysis on predicting %diabetic based on %obese and %inactive. A Decision Tree regression model is a powerful tool for understanding how different variables influence a target variable. The Decision Tree was trained on the training data, and its performance was evaluated using Mean Squared Error (MSE) and R-squared (R2) metrics.

Mean Squared Error (MSE):

The MSE is a measure of the average squared difference between the actual values and the predicted values. In our case, an MSE of 0.71 suggests that, on average, the model’s predictions have a squared error of 0.71. This means that the model’s predictions deviate from the actual values by a relatively small amount, which is generally a positive sign. However, the interpretation of MSE values depends on the specific scale and context of the target variable.

R-squared (R2):

An R2 score of -0.08 indicates that the model does not explain much of the variance in %diabetic. In fact, it has a negative R2 score, which suggests that the model performs worse than a horizontal line (a constant prediction). A negative R2 score could indicate that the model doesn’t fit the data well and may not be a good choice for predicting %diabetic based solely on %obese and %inactive.

From the results, we see that the Decision Tree model did not perform well in explaining the variance in %diabetic using %obese and %inactive as predictors. The negative R2 score indicates that the model’s predictions are worse than simply predicting the mean, suggesting it does not capture the underlying patterns.
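The workflow above can be sketched as follows. The real analysis used county-level %obese, %inactive, and %diabetic columns; synthetic stand-ins are substituted here, so these scores will not match the reported MSE of 0.71 or R2 of -0.08:

```python
# Sketch of the Decision Tree regression workflow (synthetic stand-in data)
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(3)
X = rng.uniform(10, 40, size=(300, 2))                        # stand-ins for %obese, %inactive
y = 0.1 * X[:, 0] + 0.15 * X[:, 1] + rng.normal(0, 1.0, 300)  # stand-in for %diabetic

# 80/20 train/test split, as in the analysis above
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=3)
tree = DecisionTreeRegressor(random_state=3).fit(X_train, y_train)
pred = tree.predict(X_test)
print(f"MSE: {mean_squared_error(y_test, pred):.2f}")
print(f"R^2: {r2_score(y_test, pred):.2f}")
```

An unpruned tree like this one tends to overfit noisy data; limiting `max_depth` or `min_samples_leaf` is a common next step when the test R2 comes out low or negative.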

T-test and P-Value

A t-test is a statistical hypothesis test used to determine if there is a significant difference between the means of two groups. It is particularly useful when comparing the means of two groups to assess if the observed differences are statistically significant. The t-test calculates a test statistic, often denoted as “t,” which is then used to calculate a p-value.

Null hypothesis (H0) and alternative hypothesis (H1):

    • Null Hypothesis (H0): There is no significant difference between the means of pre-molt and post-molt data.
    • Alternative Hypothesis (H1): There is a significant difference between the means of pre-molt and post-molt data.
    • Calculate the t-statistic: it is calculated using the formula t = (mean difference) / (standard error of the difference).
    • Calculate the degrees of freedom (df): the degrees of freedom for an independent two-sample t-test is given by df = n₁ + n₂ − 2.
    • We can use a t-distribution table or a statistical software package to find the p-value associated with the calculated t-statistic and degrees of freedom. Alternatively, most statistical software packages provide built-in functions to directly calculate the p-value.
    • We should check the assumptions of normality and equal variance for the two groups. If the variances are not approximately equal, we may need to use a modified t-test such as Welch’s t-test. Compare the p-value to the significance level:
      • If p ≤ α, reject the null hypothesis (H0), indicating that there is a significant difference between the means.
      • If p > α, fail to reject the null hypothesis, suggesting that there is no significant difference between the means.

For our Crab data,
Step 1: Define Null and Alternative Hypotheses

  • Null Hypothesis (H0): This is the default assumption that there is no significant difference between the groups we are comparing. Here, it means that there is no significant difference between pre-molt and post-molt crab data.
  • Alternative Hypothesis (Ha): This is what we want to test. It suggests that there is a significant difference between the groups.

Step 2: Collect Data

We collect data for pre-molt and post-molt crab sizes; these are the two groups for comparison.

Step 3: Perform the t-test

The t-test is a statistical test that calculates the t-statistic, which is a measure of how much the means of two groups differ relative to the variation in the data.

Step 4: Calculate the p-value

The p-value is a crucial result of the t-test. It represents the probability of observing the data that we have (or more extreme data) under the assumption that the null hypothesis is true (i.e., there is no significant difference between the groups). A small p-value indicates that the observed data is unlikely to have occurred by random chance alone.

Step 5: Interpret the p-value

To make a decision, we compare the p-value to a significance level (alpha), typically set at 0.05. There are two possible outcomes:

  • If p-value < alpha: reject the null hypothesis (H0). This means that the data provides strong evidence that there is a significant difference between pre-molt and post-molt crab sizes.
  • If p-value ≥ alpha: fail to reject the null hypothesis (H0). This means that the data does not provide enough evidence to conclude that there is a significant difference between the groups.

Step 6: Make a Conclusion

Based on the comparison of the p-value and alpha, we can conclude that there is a significant difference between pre-molt and post-molt crab sizes.
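Steps 3 through 6 can be carried out in a few lines with SciPy. The pre-molt and post-molt sizes below are synthetic stand-ins for the actual crab measurements, chosen only to illustrate the procedure:

```python
# Sketch: two-sample t-test on stand-in pre-molt vs post-molt sizes
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
pre_molt = rng.normal(loc=130, scale=10, size=100)   # hypothetical pre-molt sizes
post_molt = rng.normal(loc=145, scale=10, size=100)  # hypothetical post-molt sizes

# Welch's t-test (equal_var=False) is the safer choice when the
# equal-variance assumption is in doubt
t_stat, p_value = stats.ttest_ind(pre_molt, post_molt, equal_var=False)
alpha = 0.05
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
print("Reject H0" if p_value < alpha else "Fail to reject H0")
```

With these stand-in groups the means differ by construction, so the p-value comes out far below alpha and we reject H0, mirroring the conclusion for the crab data.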

Concepts of Regression, R-squared Value, and Overfitting

Concepts of Regression

Regression is a statistical method used for modeling the relationship between a dependent variable (target) and one or more independent variables (predictors or features). The primary goal of regression analysis is to understand how changes in the independent variables affect the dependent variable.

Regression Equation: The foundation of regression analysis is the regression equation, which represents the relationship between the dependent variable (Y) and one or more independent variables (X₁, X₂, … Xₖ).

In simple linear regression, the equation is: Y = β₀ + β₁X + ε, where:

  • Y is the dependent variable.
  • X is the independent variable.
  • β₀ and β₁ are the coefficients to be estimated (intercept and slope).
  • ε represents the error term, which accounts for the unexplained variability in Y.
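A short sketch of estimating β₀ and β₁ from data with scikit-learn; the data is synthetic with known true coefficients, so we can check the estimates against them:

```python
# Sketch: fit Y = b0 + b1*X and recover the intercept and slope
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X = rng.uniform(0, 10, size=(100, 1))
y = 4.0 + 1.5 * X[:, 0] + rng.normal(0, 0.5, 100)  # true b0 = 4.0, b1 = 1.5

model = LinearRegression().fit(X, y)
print(f"Estimated intercept b0: {model.intercept_:.2f}")  # close to 4.0
print(f"Estimated slope b1: {model.coef_[0]:.2f}")        # close to 1.5
```

The residual noise (the ε term) is what keeps the estimates from matching the true coefficients exactly; with more data the estimates tighten.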


Coefficients (β₀ and β₁): Coefficients are values that the regression model estimates to quantify the relationship between the independent and dependent variables.

  • β₀ (intercept): Represents the value of Y when X is 0.
  • β₁ (slope): Represents the change in Y for a one-unit change in X.

Residuals: Residuals (or errors) are the differences between the observed values of the dependent variable (Y) and the predicted values (Ŷ) from the regression model.

  • Residuals are calculated as: Residual = Y – Ŷ.
  • Analyzing residuals helps assess the model’s fit and assumptions.

Goodness of Fit: Goodness of fit measures how well the regression model fits the data.

  • One common measure is R-squared (R²), which quantifies the proportion of variance in Y that is explained by the independent variables. R² ranges from 0 to 1, with higher values indicating a better fit.

Cross-Validation: Cross-validation is a technique used to evaluate a model’s performance on unseen data.

  • Common methods include k-fold cross-validation, where the dataset is divided into k subsets, and the model is trained and tested on different combinations of these folds to estimate its generalization performance.

R-Squared Value:

R-squared (R²) is a statistical measure that is often used to evaluate the goodness of fit of a regression model. It quantifies the proportion of the variance in the dependent variable that is explained by the independent variables in the model. R² values range from 0 to 1, with higher values indicating a better fit.

Overfitting in Regression: Overfitting in regression occurs when the model is excessively complex and fits the training data too closely. It tries to capture not only the true underlying relationship between the predictors and the target variable but also noise, random fluctuations, and outliers present in the training data.


    • On the training data, an overfit model will exhibit a very high R-squared because it essentially “memorizes” the training data.
    • On new, unseen data, the model’s performance deteriorates significantly because it cannot generalize well beyond the training data. This results in a much lower R-squared, indicating that the model is not reliable for making predictions.
    • For instance, suppose a machine learning algorithm predicts university students’ academic performance and graduation outcomes based on factors such as family income, past academic performance, and academic qualifications of parents, but the training data only includes candidates from a specific gender or ethnic group.
    • In this case, the model may overfit to the specific gender or ethnic group present in the training data.
    • It might learn patterns or biases that are not applicable to candidates from different gender or ethnic backgrounds.
    • As a result, it struggles to make accurate predictions for candidates outside the narrow demographic represented in the training dataset. The solution is to make the training dataset more representative of the diversity of the university student population. Including data from a broader range of gender and ethnic backgrounds will help the model generalize and make fairer predictions for all students.
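The train-versus-test gap described in the first two bullets can be demonstrated in a few lines; the degree and sample sizes below are arbitrary choices made purely to force overfitting:

```python
# Demonstration: a degree-15 polynomial on few noisy points "memorizes"
# the training data but fails to generalize to unseen data
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
X_train = rng.uniform(-1, 1, size=(15, 1))
y_train = X_train[:, 0] + rng.normal(0, 0.3, 15)  # true relationship is linear
X_test = rng.uniform(-1, 1, size=(50, 1))
y_test = X_test[:, 0] + rng.normal(0, 0.3, 50)

overfit = make_pipeline(PolynomialFeatures(degree=15), LinearRegression()).fit(X_train, y_train)
train_r2 = overfit.score(X_train, y_train)
test_r2 = overfit.score(X_test, y_test)
print(f"Train R^2: {train_r2:.3f}")  # near 1: the model memorized the points
print(f"Test  R^2: {test_r2:.3f}")   # much lower on unseen data
```

With 16 polynomial coefficients and only 15 training points, the model can pass through every point exactly, which is precisely the memorization described above.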

Pearson correlation coefficient (R)

From the following, we know the correlation between %diabetes and %inactivity:

Correlation[DiabetesShort〚All, 2〛, Inactivity〚All, 2〛]

which returns 0.441706, i.e. R ≈ 0.442.

The Pearson correlation coefficient, often denoted as “R,” is a statistical measure that quantifies the strength and direction of a linear relationship between two continuous variables. It ranges from -1 to 1, where:

    • -1: Perfect negative linear correlation (as one variable increases, the other decreases).
    • 0: No linear correlation (variables are not linearly related).
    • 1: Perfect positive linear correlation (as one variable increases, the other increases).

Interpretation of R = 0.442

  • In our analysis, we calculated an R value of approximately 0.442 when assessing the correlation between %diabetes and %inactivity.
  • A positive R value indicates a positive linear relationship, which means that as %inactivity increases, %diabetes tends to increase as well. However, the strength of this relationship is moderate, as the R value is not close to 1.
  • The value of 0.442 suggests that there is a statistically significant, but not exceptionally strong, positive correlation between %diabetes and %inactivity.
  • When |R| is closer to 1 (either positive or negative), it indicates a stronger linear relationship. In our case, the correlation is moderate, meaning that while there is a connection between %inactivity and %diabetes, other factors may also influence %diabetes rates, and the relationship is not entirely deterministic.
  • However, it’s important to note that correlation does not imply causation. In other words, while there is a statistical relationship, it does not mean that inactivity directly causes diabetes. There could be confounding variables or other factors at play.

Further analysis, including regression modeling and potentially considering additional variables, can help explore the causal relationships and make predictions based on this data. The Pearson correlation coefficient of 0.442 indicates a moderate positive linear relationship between %diabetes and %inactivity, but it’s important to conduct more in-depth analysis to understand the underlying factors and potential causal relationships between these variables.
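For reference, the same correlation can be computed in Python with SciPy. The two columns below are synthetic stand-ins for the real %diabetes and %inactivity data, constructed only to illustrate the call:

```python
# Sketch: Pearson correlation, the Python counterpart of the
# Mathematica Correlation[...] call above (stand-in data)
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
inactivity = rng.normal(18, 4, size=300)                    # hypothetical %inactivity
diabetes = 0.2 * inactivity + rng.normal(5, 1.6, size=300)  # hypothetical %diabetes

r, p = stats.pearsonr(diabetes, inactivity)
print(f"Pearson R = {r:.3f} (p = {p:.2e})")
```

`pearsonr` also returns the p-value for the null hypothesis of zero correlation, which the Mathematica call does not report directly.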

BP Test and Hypothesis Testing

Today’s lecture was focused on some essential statistical concepts that are significant for understanding research. The BP test, null hypothesis, alternative hypothesis, and p-value were covered.

First, we covered the Breusch-Pagan test, a statistical test employed to examine heteroscedasticity in regression analysis. This test assesses whether the variance of the errors is consistent across various levels of the independent variables, which is crucial for evaluating whether the assumptions of a regression model are met.

Hypothesis testing involves collecting data, calculating a test statistic, and using the p-value to determine whether to reject the null hypothesis. A small p-value indicates strong evidence against H0, which leads to rejection. The null hypothesis, commonly represented as H0, is a statement asserting the absence of a significant effect or relationship within the data. The alternative hypothesis, frequently denoted as Ha or H1, indicates the presence of a significant effect or relationship. Decisions concerning these hypotheses are made using the p-value, which is a measure of the strength of evidence against the null hypothesis.

If we consider a scenario related to customer satisfaction, the null hypothesis suggests that modifying the website’s layout does not result in any significant changes in customer satisfaction, while the alternative hypothesis indicates that the change does make a significant difference. Hypothesis testing involves conducting a study where some customers see the old website layout, and others see the new website, and then comparing their satisfaction scores to determine whether there’s enough evidence to reject the null hypothesis in favor of the alternative hypothesis.

In summary, the lecture provided insight into the utilization of the BP test for the assessment of regression model assumptions and the formulation of hypotheses, as well as their evaluation using p-values.

Week-1 Monday

I have conducted a data analysis focusing on the relationship between diabetes and inactivity. Initially, I analysed the data points using Microsoft Excel for a basic understanding and found that there are common data points among diabetes, inactivity, and obesity. I found that the FIPS data points of inactivity are a subset of the FIPS data points of diabetes.
  • After going through the pdf “CDC Diabetes 2018”, I observed basic evaluation metrics such as mean, median, skewness, and standard deviation for the data points. I understood that there is a slight skewness, with a kurtosis of about 4 for %diabetes. We can also observe the deviation from normality in the quantile plot.
  • Similarly, for %inactivity, the skewness is in the other direction, with a kurtosis less than that of the normal distribution, which is 3.
  • From the scatter plot of the common diabetes and inactivity data pairs and the linear model fit to them, about 20% of the variation in diabetes can be explained by variation in inactivity.
  • I understood that there is a deviation from normality in the residuals of the linear model, which pointed to heteroscedasticity.
  • The plot of the residuals against the predicted values of the linear model shows a fanning out of the residuals, which suggests that the linear model is not a suitable model.
However, I’m still very enthusiastic to learn all the above stats using Python, and I’m trying to do the same.