SARIMA for Time Series

SARIMA (Seasonal Autoregressive Integrated Moving Average) extends ARIMA by incorporating a seasonal pattern that repeats at regular intervals. When the data contains recurring patterns, SARIMA considers how past seasonal patterns contribute to forecasting future values.

For example, if sales increase every holiday season, SARIMA captures this seasonality.

  1. Autoregressive (AR): Looks at how a value relates to its past values. If today's temperature is high, it's likely tomorrow's will be high too.
  2. Integrated (I): Deals with trends by differencing, that is, subtracting the previous value from each value. If temperatures are generally rising over time, SARIMA looks at how much they're going up.
  3. Moving Average (MA): Considers the errors from previous predictions to improve future predictions. If yesterday's prediction was a bit off, adjust today's prediction to be closer to the actual value.
  4. Seasonal (S): Addresses repeating patterns, like daily or yearly seasons. It acknowledges that it's colder in winter and hotter in summer and adjusts predictions accordingly.

Understanding SARIMA in Simple Steps

  1. Exploratory Data Analysis (EDA): Understand the characteristics of the time series data. Plot the data to identify trends, seasonality, and any other patterns.
  2. Stationarity Check: Ensure the data has constant statistical properties over time. If it does not, apply differencing, which subtracts the previous value from each value, to make it stationary.
  3. Autocorrelation Analysis: Understand the correlation between the current value and past values. Plot the Autocorrelation Function (ACF) and Partial Autocorrelation Function (PACF) to identify suitable values for the AR and MA components.
  4. Model Fitting: Use the identified parameters to fit the ARIMA or SARIMA model and train it on historical data, incorporating the autoregressive, integration, and moving average components.
  5. Prediction and Evaluation: Make predictions using the fitted model, then compare the predicted values with actual values to evaluate how well the model captures the underlying patterns in the data (a minimal fitting sketch follows below).
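As a concrete illustration of steps 4 and 5, here is a minimal sketch using statsmodels' SARIMAX on a synthetic monthly series; the (p, d, q) and seasonal orders are placeholder values, not recommendations.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic monthly series with a trend and yearly seasonality, for illustration only.
idx = pd.date_range("2015-01-01", periods=96, freq="MS")
rng = np.random.default_rng(0)
sales = pd.Series(
    100 + 0.5 * np.arange(96) + 10 * np.sin(2 * np.pi * np.arange(96) / 12) + rng.normal(0, 2, 96),
    index=idx,
)

# (p, d, q) and seasonal (P, D, Q, s) orders are placeholders;
# in practice they come from ACF/PACF plots and stationarity checks.
model = SARIMAX(sales, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12))
result = model.fit(disp=False)

print(result.forecast(steps=12))  # forecast the next 12 months
```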

ARIMA for Time series data

ARIMA (Autoregressive Integrated Moving Average):

In an autoregressive model, the forecast of a variable is based on a linear combination of past values of that same variable.

  1. Autoregressive (AR): ARIMA looks at the relationship between the current value of a time series and its past values. It considers how the past values of a variable contribute to predicting its future values.
    • If yesterday’s stock prices influenced today’s, ARIMA captures this influence.
  2. Integrated (I): It aims to make the time series data stationary, meaning its statistical properties like mean and variance remain constant over time. It checks if the data has a trend or changing statistical properties.
    • If the data is not stationary, ARIMA applies differencing to make it more predictable.
  3. Moving Average (MA): It involves considering the past forecast errors to predict future values. Instead of looking at past values of the variable, ARIMA looks at past errors in predicting the variable.
    • It considers how the errors from previous predictions influence the current prediction.

Understanding ARIMA in Simple Steps:

  1. Start with the Data:
    • The data should be a series of numbers, like daily stock prices or monthly website visits, that changes over time.
  2. Understand Trends:
    • ARIMA looks at the overall trend in the data. It analyzes whether the trend is generally going up or down, which helps to understand the baseline behavior.
  3. Deal with Trends:
    • If there’s a trend, ARIMA helps to remove it by differencing.
  4. Autoregressive Part:
    • It checks how today’s data relates to the data from the past. If there’s a clear connection, it uses this to make predictions.
  5. Moving Average Part:
    • ARIMA considers how accurate past predictions were and adjusts for any mistakes made. It’s a way of learning from past experiences.
  6. Combine Everything for Predictions:
    • The algorithm combines the insights from trends, historical data, and past predictions to make an informed guess about what might happen next.
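A minimal ARIMA sketch with statsmodels on a synthetic daily series; the (p, d, q) order is a placeholder chosen for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic daily series with a mild upward trend, for illustration only.
rng = np.random.default_rng(1)
prices = pd.Series(100 + np.cumsum(rng.normal(0.1, 1, 250)),
                   index=pd.date_range("2023-01-01", periods=250, freq="D"))

# order=(p, d, q): AR lags, differencing order, MA lags.
# These values are illustrative; choose them from ACF/PACF plots and stationarity tests.
result = ARIMA(prices, order=(2, 1, 1)).fit()

print(result.summary())          # estimated coefficients and diagnostics
print(result.forecast(steps=5))  # predict the next 5 days
```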

PCA algorithm

Principal Component Analysis is a dimensionality reduction technique commonly used in machine learning and data analysis. Its primary goal is to transform high-dimensional data into a lower-dimensional representation, capturing the most important information. This reduction in dimensionality can lead to improved computational efficiency, visualization, and often better model performance.

Steps to perform PCA:

  1. Standardize the Data: If the features in the dataset have different scales, it is important to standardize them by subtracting the mean and dividing by the standard deviation to give each feature equal importance.
  2. Compute the Covariance Matrix: Calculate the covariance matrix of the standardized data. The covariance matrix provides information about how variables change together.
  3. Compute Eigenvectors and Eigenvalues: Calculate the eigenvectors and eigenvalues of the covariance matrix. The eigenvectors represent the principal components, and the eigenvalues indicate the amount of variance captured by each principal component.
  4. Sort Eigenvectors by Eigenvalues: Sort the eigenvectors in descending order based on their corresponding eigenvalues. The higher the eigenvalue, the more variance is captured by the corresponding eigenvector.
  5. Select Principal Components: Choose the top k eigenvectors to form the new feature space. Typically, one selects the number of principal components that capture a sufficiently high percentage of the total variance.

Example:

Suppose we have data on people's heights and weights. Most of the variation lies along a diagonal line representing a combination of height and weight. PCA helps us focus on this main trend and ignore less important details.
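A minimal scikit-learn sketch of the five steps above, using hypothetical correlated height/weight data; the library handles the covariance matrix, eigendecomposition, sorting, and projection internally.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Hypothetical correlated height/weight data (taller people tend to weigh more).
rng = np.random.default_rng(0)
height = rng.normal(170, 10, 200)
weight = 0.9 * height - 80 + rng.normal(0, 5, 200)
X = np.column_stack([height, weight])

# Step 1: standardize; steps 2-5 are handled internally by PCA.
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=1)
X_reduced = pca.fit_transform(X_std)

print("Variance explained by the first component:", pca.explained_variance_ratio_[0])
```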

Boosting

Definition: Boosting also combines predictions from multiple models, but in a sequential manner. Unlike bagging, boosting focuses on giving more weight to instances that the current set of models misclassifies, aiming to correct errors iteratively.

Key Steps:

  1. Model Training:
    • Train a base model on the original dataset, and assign weights to each data point. Initially, all weights are equal.
  2. Weight Adjustment:
    • Increase the weights of misclassified instances, making them more influential in the next round of training. This allows subsequent models to focus on the previously misclassified data.
  3. Sequential Model Building:
    • Train additional models sequentially, with each model giving more attention to the instances that were misclassified by the previous models.
  4. Combining Predictions:
    • Combine the predictions of all models, giving more weight to models that performed well on their respective training subsets.

Example: AdaBoost. AdaBoost is a popular boosting algorithm that combines weak learners to create a strong learner. It assigns higher weights to misclassified instances, leading subsequent models to focus on these instances.
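A minimal AdaBoost sketch with scikit-learn on synthetic data; the number of estimators is arbitrary.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic binary classification data for illustration.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# AdaBoost fits shallow trees sequentially, reweighting misclassified points each round.
clf = AdaBoostClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

print("Accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```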

Advantages of Bagging:

  • Reduces overfitting by introducing diversity through multiple models.
  • Enhances model robustness by averaging out errors or biases present in individual models.
  • Particularly effective when the base model is sensitive to the specific data it is trained on.

Tools and Libraries for Sentiment Analysis

  1. NLTK (Natural Language Toolkit):
    • NLTK is a popular Python library for natural language processing that provides tools for sentiment analysis, tokenization, and other NLP tasks.
  2. TextBlob:
    • TextBlob is a simple and easy-to-use Python library for processing textual data. It provides a sentiment analysis API and tools for working with textual data.
  3. VADER (Valence Aware Dictionary and sEntiment Reasoner):
    • VADER is a sentiment analysis tool specifically designed for social media text. It uses a pre-built lexicon and rule-based approach to analyze sentiment.
  4. Scikit-learn:
    • Scikit-learn, a machine learning library in Python, provides tools for building and evaluating machine learning models, making it useful for sentiment analysis tasks.
  5. Transformers Library (Hugging Face):
    • The Transformers library by Hugging Face provides pre-trained models, including BERT and GPT, which can be fine-tuned for specific sentiment analysis tasks.
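As a quick illustration of these tools, here is a sketch using NLTK's VADER analyzer; it assumes NLTK is installed and downloads the vader_lexicon resource on first run.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

# One-time download of the VADER lexicon.
nltk.download("vader_lexicon")

analyzer = SentimentIntensityAnalyzer()
scores = analyzer.polarity_scores("The movie was surprisingly good, but the ending felt rushed.")

# 'compound' ranges from -1 (most negative) to +1 (most positive).
print(scores)
```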

Challenges in Sentiment Analysis:

  1. Context and Ambiguity:
    • Sentiment analysis can be challenging due to the ambiguity of language. The sentiment expressed in a sentence may depend heavily on the context, sarcasm, or the overall tone of the document.
  2. Domain Specificity:
    • Sentiment analysis models trained on general datasets may not perform well in domain-specific contexts. Domain-specific sentiment lexicons and fine-tuning on domain-specific data are often needed.
  3. Handling Negation and Modifiers:
    • Negations and modifiers can significantly alter the sentiment of a sentence. Effective sentiment analysis models need to account for the impact of words like “not” or modifiers like “very.”

Sentiment Analysis

Sentiment analysis, also known as opinion mining, is a natural language processing task that involves determining the sentiment expressed in a piece of text. The sentiment can be positive, negative, neutral, or even a combination of these. Sentiment analysis is widely used in various applications to understand public opinion, customer feedback, and social media sentiments. The key components of a typical sentiment analysis pipeline are:

  1. Text Preprocessing:
    • Before performing sentiment analysis, text data often undergoes preprocessing steps such as tokenization, stemming, and removing stop words to standardize and clean the text.
  2. Feature Extraction:
    • Features are extracted from the preprocessed text to represent the information that will be used for sentiment analysis. Common features include word frequencies, n-grams and word embeddings.
  3. Sentiment Lexicons:
    • Sentiment lexicons are lists of words associated with their sentiment polarity like positive, negative, or neutral. These lexicons are often used to match words in the text and assign sentiment scores.
  4. Machine Learning Approaches:
    • Supervised Learning: In supervised learning, sentiment analysis is treated as a classification problem. A model is trained on a labeled dataset where each text is associated with its sentiment label like positive, negative, or neutral.
    • Unsupervised Learning: Unsupervised approaches involve clustering or topic modeling to group similar sentiments together without using labeled training data.
  5. Deep Learning Approaches:
    • Recurrent Neural Networks (RNNs): RNNs can capture sequential dependencies in text, but they may struggle with long-term dependencies.
    • Convolutional Neural Networks (CNNs): CNNs can capture local patterns in the text and are effective for sentiment analysis tasks.
    • Transformers: Transformer-based models, such as BERT and GPT, have achieved state-of-the-art results in sentiment analysis by capturing contextual information and relationships between words.
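One supervised approach from the list above can be sketched with scikit-learn, using TF-IDF features and logistic regression; the tiny labeled dataset here is purely illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical labeled dataset (1 = positive, 0 = negative).
texts = ["I love this phone", "Terrible battery life", "Great camera", "Worst purchase ever"]
labels = [1, 0, 1, 0]

# TF-IDF turns text into features; logistic regression classifies the sentiment.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["The camera is great but the battery is terrible"]))
```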

Forecasting Data

Forecasting data is a critical aspect of decision-making across various domains such as finance, economics, supply chain management, and more. Forecasting involves predicting future trends based on historical and current data. This process aids organizations in making informed decisions, allocating resources effectively, and responding to changing circumstances.

Understanding Time Series Data and Exploratory Data Analysis (EDA):

Time series data serves as the bedrock of forecasting, embodying a chronological sequence of observations. These observations span diverse realms, encompassing daily stock prices, monthly sales figures, and hourly temperature readings. The temporal dimension inherent in this data is indispensable for discerning patterns, trends, and seasonality. Before embarking on the application of forecasting methodologies, a crucial precursor is Exploratory Data Analysis (EDA). This involves the visual and analytical examination of the data to unveil underlying structures. For instance, visualizing monthly sales through a line chart enables the identification of trends, aiding in the determination of whether sales exhibit a steady increase, seasonal fluctuations, or other discernible patterns.

Choosing a Forecasting Method and Data Preprocessing:

The selection of an appropriate forecasting method hinges on the unique characteristics of the data at hand. Linear regression proves valuable for data featuring a consistent trend, while Autoregressive Integrated Moving Average (ARIMA) excels in handling time-dependent data. In cases where intricate dependencies are paramount, machine learning models, such as Long Short-Term Memory (LSTM) networks, come into play. Data preprocessing constitutes a critical step in the forecasting pipeline. It involves addressing missing values, outliers, and transforming the data as necessary. For instance, if daily sales data contains gaps, strategies like imputation or interpolation are applied to ensure the dataset’s completeness and accuracy.

Train-Test Split and Model Evaluation:

A fundamental practice in forecasting is the division of data into training and testing sets. The training set facilitates the model’s learning process, while the testing set evaluates its performance on unseen data. This bifurcation ensures that the forecasting model generalizes effectively to new observations. Beyond training, robust evaluation becomes imperative. Metrics such as Mean Absolute Error (MAE), Mean Squared Error (MSE), or Root Mean Squared Error (RMSE) are employed to gauge the accuracy of forecasting models. The interplay between the training-test split and model evaluation forms a crucial feedback loop, guiding the iterative refinement of forecasting approaches and enhancing their predictive capabilities.
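A minimal sketch of the chronological train-test split and the error metrics, using a hypothetical series and a deliberately naive forecast for illustration.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Hypothetical monthly series: hold out the last 6 observations as the test set.
series = np.array([120, 125, 130, 128, 135, 140, 138, 145, 150, 148, 155, 160,
                   158, 165, 170, 168, 175, 180])
train, test = series[:-6], series[-6:]

# Naive forecast used purely for illustration: repeat the last training value.
predictions = np.full_like(test, train[-1])

mae = mean_absolute_error(test, predictions)
mse = mean_squared_error(test, predictions)
rmse = np.sqrt(mse)
print(f"MAE: {mae:.2f}, MSE: {mse:.2f}, RMSE: {rmse:.2f}")
```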

In conclusion, forecasting data is a multifaceted process that requires a systematic approach, from understanding the nature of the data to selecting appropriate methods, training models, evaluating performance, and refining as needed. The flexibility to adapt to changing data patterns is paramount for meaningful and accurate forecasts that contribute to informed decision-making.

 

Random Forest

Random Forest is an ensemble learning technique, which means it combines multiple individual models to make more robust and accurate predictions. This ensemble approach leverages the wisdom of the crowd by aggregating the predictions of multiple models, reducing the risk of overfitting, and improving overall performance.

Bagging: It employs a technique called bagging to create diverse training sets for each decision tree. Bagging involves random sampling with replacement from the original training dataset to create multiple subsets, often referred to as “bootstrap samples.” Each decision tree is trained on one of these bootstrap samples. This diversity helps prevent individual decision trees from overfitting to the training data.

Random Feature Selection: Another key feature of Random Forest is the random selection of features at each split node when constructing decision trees. Instead of considering all available features for the best split at each node, Random Forest randomly selects a subset of features to consider. This random feature selection reduces the correlation between trees and improves the model’s generalization ability.

Decision Tree Construction: Each decision tree in a Random Forest is constructed using the process described in the Decision Tree section. However, during tree construction, a random subset of features is considered at each node, which makes the trees decorrelated and reduces the risk of overfitting.

Classification and Regression: After building all the individual decision trees, Random Forest combines their predictions to make a final prediction. The method of combining predictions depends on the type of problem. For classification tasks, Random Forest uses a majority vote among the individual trees; the class that receives the most votes is the final prediction. For regression tasks, the final prediction is the average of the predictions from all the trees.
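A brief Random Forest sketch with scikit-learn; the synthetic data and hyperparameters are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# 100 trees, each trained on a bootstrap sample with a random subset of features per split.
forest = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0)
forest.fit(X_train, y_train)

print("Accuracy:", accuracy_score(y_test, forest.predict(X_test)))
print("First few feature importances:", forest.feature_importances_[:5])
```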

Advantages of Random Forest:

  • Improved predictive accuracy compared to individual decision trees.
  • Robust to noisy data and overfitting.
  • Can handle both classification and regression tasks.
  • Provides feature importance information.
  • Works well “out of the box” with minimal hyperparameter tuning.

Disadvantages:

  • Can be computationally expensive, especially for a large number of trees.
  • Interpretability can be challenging when dealing with a large number of trees.
  • May not perform well on highly imbalanced datasets.
  • Requires more memory and storage compared to a single decision tree.

 

Decision Tree in detail

Today there was a discussion in class about decision tree and Random Forest models. That's what got me thinking about this topic and posting some insights on it.

A decision tree is a supervised machine learning algorithm used for both classification and regression tasks. It is a visual representation of a decision-making process, where each internal node of the tree represents a decision or a test on a particular feature, each branch represents an outcome of that decision, and each leaf node represents the final prediction or classification.

Decision tree works like following:

  1. Splitting Data: The tree starts with a single node that contains all the data. At each internal node, the dataset is split into two or more child nodes based on a feature and a threshold. The goal is to create splits that result in the purest possible child nodes, meaning that they contain similar target values.
  2. Node Selection: The algorithm selects the feature and threshold that result in the best split, typically using criteria like Gini impurity or information gain for classification tasks and mean squared error for regression tasks.
  3. Recursion: The splitting process is applied recursively to each child node until a stopping condition is met. This condition could be a predefined depth limit, a minimum number of data points in a node, or the purity of the nodes.
  4. Leaf Nodes: Once the tree reaches a stopping condition, the leaf nodes make predictions. In classification, the majority class in the leaf node is used as the prediction, while in regression, it’s typically the mean of the target values in the leaf node.
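A minimal decision tree sketch with scikit-learn on the Iris dataset; the depth limit is an arbitrary choice used to illustrate a stopping condition.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gini impurity guides the splits; max_depth acts as a stopping condition.
tree = DecisionTreeClassifier(criterion="gini", max_depth=3, random_state=0)
tree.fit(X_train, y_train)

print("Test accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=load_iris().feature_names))  # readable split rules
```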

Here are some complexities and challenges that we might encounter:

1. Overfitting:

Complex Decision Trees: Decision trees can become overly complex and overfit the training data. This means they capture noise in the data rather than the underlying patterns.

To reduce overfitting: prune the tree by limiting the maximum depth or setting a minimum number of samples per leaf node. Pruning involves removing branches that do not significantly contribute to the tree's predictive power. It can also be controlled by setting a minimum node size, which defines the minimum number of data points required in a node to allow further splitting.

2. Feature Importance:

Bias Towards High-Cardinality Features: Decision trees may bias importance toward features with many categories. To address this, we can consider normalizing feature importance values by the number of categories in the feature.

3. Handling Imbalanced Data:

Biased Predictions: Decision trees can produce biased predictions on imbalanced datasets, favoring the majority class. To handle imbalanced data, we can adjust class weights or use balanced class sampling. Other options include resampling techniques and ensemble algorithms like Random Forest, which often handle imbalanced data better.

Scikit-learn

  • Scikit-learn, often abbreviated as sklearn, is an open-source machine learning library in Python. It provides a wide range of machine learning algorithms for various tasks, including classification, regression, clustering, dimensionality reduction, and more.
  • Scikit-learn is built on top of other popular scientific libraries in Python, such as NumPy, SciPy, and Matplotlib, so it integrates seamlessly with the Python ecosystem. It is known for its ease of use, with a simple and consistent API that is accessible to both beginners and experienced machine learning practitioners.
  • The library includes tools for data preprocessing, including feature scaling, missing data handling, and categorical variable encoding. It supports various model evaluation techniques, including cross-validation, and provides metrics for assessing model performance, such as accuracy, precision, recall, and F1-score.
  • Scikit-learn includes utilities for hyperparameter tuning, allowing you to optimize the parameters of machine learning models for better performance.
  • Model pipelines in Scikit-learn enable the creation of structured workflows that combine data preprocessing, feature selection, and model training in a single, manageable pipeline (see the sketch below).
  • Hyperparameter Tuning: Grid search and randomized search are methods provided by Scikit-learn for optimizing the hyperparameters of machine learning models. This helps in finding the best set of hyperparameters to improve model performance.
  • Dimensionality Reduction: Scikit-learn provides dimensionality reduction techniques like Principal Component Analysis (PCA) and t-distributed Stochastic Neighbor Embedding (t-SNE). These methods help reduce the number of features in a dataset while retaining essential information, which can be valuable for visualization and for speeding up machine learning models.
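A sketch showing how several of these pieces fit together: preprocessing and a model combined in a pipeline, evaluated with cross-validation. The dataset and model choice are arbitrary.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Preprocessing and modeling combined into one manageable workflow.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

scores = cross_val_score(pipe, X, y, cv=5, scoring="accuracy")
print("Cross-validated accuracy:", scores.mean())
```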

Convolutional Neural Networks

CNNs are a class of deep neural networks designed for processing and analyzing visual data, particularly images and videos. CNNs have had a profound impact on computer vision and image analysis, leading to significant advancements in tasks such as image classification, object detection, facial recognition, and image segmentation.

  1. Convolutional Layers: CNNs use convolutional layers to extract features from input images. These layers apply a set of learnable filters to the input image. Each filter detects specific patterns or features in the image, such as edges, corners, or textures.
  2. Pooling Layers: Pooling layers downsample the feature maps produced by convolutional layers. Max pooling and average pooling are common techniques used to reduce the spatial dimensions of the feature maps while retaining the most important information.
  3. Fully Connected Layers: After the convolutional and pooling layers, CNNs typically have one or more fully connected layers for classification or regression tasks. These layers learn to combine the extracted features for final predictions.
  4. Activation Functions: CNNs use activation functions like ReLU (Rectified Linear Unit) to introduce non-linearity into the model, allowing them to capture complex patterns in the data.
  5. Convolutional Filters: The convolutional filters are trained to recognize various low- to high-level features in images. In deeper layers, they can identify more complex patterns and objects.
  6. Feature Hierarchies: CNNs learn hierarchies of features, starting with simple features at lower layers and progressing to complex object representations at higher layers. This hierarchical feature learning is a key to their success in image analysis.
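A compact sketch of these layers using Keras (one possible framework, assuming TensorFlow is installed); the architecture and input shape (28x28 grayscale images, 10 classes) are illustrative.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Convolution + pooling blocks extract features; dense layers do the classification.
model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),                       # 28x28 grayscale images
    layers.Conv2D(32, kernel_size=3, activation="relu"),  # learnable filters
    layers.MaxPooling2D(pool_size=2),                     # downsample feature maps
    layers.Conv2D(64, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),                  # fully connected layer
    layers.Dense(10, activation="softmax"),               # probabilities for 10 classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```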

Geopy library in python

  • GeoPy is a popular library for geocoding and geospatial data in Python, but there are several other libraries and tools that can be used for similar purposes, each with its own unique features and strengths.
  • It can be easily installed using the pip command “pip install geopy.” The library provides features such as geocoding, reverse geocoding, and geodesic distance calculations.
  • Geocoding:
    • GeoPy allows us to convert human-readable addresses, place names, or locations into geographic coordinates, typically latitude and longitude. This process is known as geocoding.
    • Geocoding is essential for applications that involve mapping, navigation, and location-based services. It helps pinpoint exact locations on the Earth’s surface.
  • Reverse Geocoding:
    • Reverse geocoding, the inverse of geocoding, is the process of converting geographic coordinates (latitude and longitude) into human-readable addresses or place names.
    • It is used to display location information to users in a format they can easily understand.
  • Distance Calculations:
    • GeoPy provides utilities for calculating distances between two sets of geographic coordinates. It supports various distance units, such as miles, kilometers, and nautical miles.
    • Distance calculations are useful for tasks like finding the nearest location, measuring distances between locations, and determining proximity between geographic points.
  • Great Circle Distance:
    • The great circle distance is the shortest distance between two points on the Earth’s surface, following the curvature of the Earth. GeoPy can calculate this distance, which is essential for precise distance measurements.
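A short sketch of geocoding, reverse geocoding, and distance calculation with GeoPy; it uses the free Nominatim geocoder, which requires a custom user_agent string and a network connection.

```python
from geopy.geocoders import Nominatim
from geopy.distance import geodesic

geolocator = Nominatim(user_agent="my_geo_app")  # a user_agent is required by Nominatim

# Geocoding: address -> coordinates
boston = geolocator.geocode("Boston, MA")
print(boston.latitude, boston.longitude)

# Reverse geocoding: coordinates -> address
print(geolocator.reverse((42.3601, -71.0589)).address)

# Distance between two coordinate pairs, in kilometers
nyc = (40.7128, -74.0060)
print(geodesic((boston.latitude, boston.longitude), nyc).kilometers)
```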

Similar libraries:

  1. geopandas: GeoPandas is an open-source Python library that extends the capabilities of Pandas for working with geospatial data. It allows us to work with geospatial datasets, perform geospatial operations, and create maps and plots.
  2. folium: Folium is a Python library that makes it easy to create interactive Leaflet maps. It’s particularly useful for creating web maps with custom markers, popups, and layers, and for visualizing geospatial data.
  3. Shapely: Shapely is a library for performing geometric operations on geometric objects. It’s often used in combination with other geospatial libraries to create, manipulate, and analyze geometric shapes.
  4. Cartopy: Cartopy is a library built on top of Matplotlib that simplifies geographic projections and map plotting. It’s commonly used for creating maps and plots that involve geospatial data.
  5. GeoDjango: GeoDjango is an extension of Django, a popular web framework for Python. It adds geospatial database support and tools for building geospatial web applications.

DBSCAN in detail

DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a density-based clustering algorithm used to discover clusters of data points in a dataset. It's particularly effective for finding clusters of arbitrary shapes and handling outliers or noise.

  • The choice of parameters ε (the distance threshold) and minPts (the minimum number of points required to form a core point) is crucial and should be determined based on the dataset and problem domain.
  • DBSCAN is efficient for large datasets but may struggle when clusters have significantly different densities.
  • It doesn’t require us to specify the number of clusters beforehand, making it suitable for scenarios where the cluster count is unknown.
  • DBSCAN can identify clusters of different shapes and sizes, and it naturally handles noise points.
  • The algorithm recursively expands the cluster by examining the ε-neighborhood of the core point's neighbors. If any of these neighbors are also core points, they are added to the same cluster, and their ε-neighborhoods are explored in turn. This process continues recursively until there are no more core points in the ε-neighborhood.

DBSCAN is widely applied in various fields, including image analysis, spatial data, and anomaly detection, where clusters may not be well-defined or uniformly distributed.
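A minimal DBSCAN sketch with scikit-learn; eps and min_samples correspond to ε and minPts, and the values shown are placeholders.

```python
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

# Non-spherical clusters that centroid-based methods would struggle with.
X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

# eps is the neighborhood radius (ε); min_samples is minPts.
db = DBSCAN(eps=0.2, min_samples=5).fit(X)

# Label -1 marks noise points; other labels are cluster ids.
print(set(db.labels_))
```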

K means clustering

  • K-means clustering is a popular unsupervised machine learning algorithm used for partitioning a dataset into a set of distinct, non-overlapping subgroups called clusters.
  • These clusters are defined in such a way that data points within the same cluster are more similar to each other than they are to data points in other clusters.
  • K-means is a centroid-based clustering algorithm, and it aims to minimize the variance within each cluster.
  • The algorithm starts by selecting K initial cluster centroids.
  • Data points are assigned to the nearest centroid, and centroids are updated by computing the mean of assigned data points.
  • This process iterates until convergence. K-means is widely used for tasks like customer segmentation and image compression. It’s efficient for large datasets but sensitive to initial centroid selection.
  • It may not work well with non-spherical or irregularly shaped clusters, and choosing the right K value can be challenging, often requiring domain expertise or techniques like the elbow method.
  • The final output of the K-means algorithm is a set of cluster assignments for each data point and the centroids of the clusters.
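A minimal K-means sketch with scikit-learn; K = 3 is an arbitrary choice that would normally be guided by the elbow method or domain knowledge.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# n_init controls how many times the algorithm is run with different initial centroids.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)

print("Cluster centroids:\n", kmeans.cluster_centers_)
print("Inertia (within-cluster variance):", kmeans.inertia_)
```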

Heat Maps

  1. Heat map is a graphical representation of data that is used to visualize the relationships and patterns within a dataset, typically involving two dimensions. Heat maps are particularly useful for understanding the distribution of data points in a matrix format. They are widely used in various applications, such as data analysis, data visualization, and feature selection.
  2. Data Representation: Heat maps are often used to represent a matrix of data, where each cell in the matrix is color-coded to represent the value of a specific data point. The color intensity in each cell corresponds to the value of that data point, with lighter colors indicating lower values and darker colors indicating higher values.
  3. Two-Dimensional Data: Heat maps are typically applied to two-dimensional data, which can include correlations between features, distances between data points, or any other kind of relationship that can be quantified.
  4. Color Encoding: Color is a key aspect of heat maps. A color gradient is used to map values to colors, with a color scale ranging from, for example, cool colors like blue for low values to warm colors like red for high values. The specific color scheme can be adjusted to match the preferences and needs of the data analyst.

In data analysis, heat maps are often used to visualize the correlation between features in a dataset. Each cell in the matrix represents the correlation coefficient between two features, which helps identify highly correlated features and can guide feature selection in machine learning models. Heat maps can also be applied to image data to visualize the intensity of certain features or objects within an image; for example, the second project, on the Washington Post shooting data, is a dataset where heat maps can be used. Various data visualization libraries, such as Matplotlib, Seaborn, and Plotly in Python, provide functions for creating heat maps, making it relatively easy to generate them from data.
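A minimal correlation heat map sketch with Seaborn on a hypothetical numeric DataFrame.

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical numeric dataset.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(100, 4)), columns=["a", "b", "c", "d"])

# Each cell shows the correlation coefficient between two features.
sns.heatmap(df.corr(), annot=True, cmap="coolwarm", vmin=-1, vmax=1)
plt.title("Feature correlation heat map")
plt.show()
```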

Hierarchical Clustering

Hierarchical clustering is a popular method in unsupervised machine learning and data analysis that groups similar data points into clusters. It builds a hierarchy of clusters, which can be represented as a tree-like structure called a dendrogram. Hierarchical clustering can be used for a variety of applications, such as image segmentation, document classification, and biology.

There are two main approaches to hierarchical clustering: agglomerative and divisive clustering.

1. Agglomerative Hierarchical Clustering: Agglomerative clustering starts with each data point as its own cluster and iteratively merges the most similar clusters until only one cluster remains. The process is as follows:

  • Initialization: Start with each data point as a single cluster, so we have as many clusters as data points.
  • Merge Closest Clusters: This step is done using linkage criteria like single linkage, complete linkage, average linkage, or Ward’s method.
  • Repeat: Continue merging clusters until all data points belong to a single cluster or until a predefined number of clusters is reached.

2. Divisive Hierarchical Clustering: Divisive hierarchical clustering takes the opposite approach, starting with all data points in a single cluster and recursively dividing them into smaller clusters. This approach is less common than agglomerative clustering. In both agglomerative and divisive clustering, the choice of linkage criteria and the distance metric used to measure similarity or dissimilarity between data points plays a crucial role in determining the final clusters.

Distance Metrics: The choice of distance metric depends on the nature of our data and problem. Common distance metrics include Euclidean distance and Manhattan distance.

Dendrogram: A dendrogram is a tree-like diagram that illustrates the hierarchical structure of clusters. It shows the order in which clusters were merged and can help us choose the appropriate number of clusters for our specific application.
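A sketch of agglomerative clustering and its dendrogram with SciPy; Ward linkage and Euclidean distance are the illustrative choices here.

```python
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=50, centers=3, random_state=0)

# Ward linkage merges the pair of clusters that minimizes the increase in variance.
Z = linkage(X, method="ward")

dendrogram(Z)
plt.xlabel("Data points")
plt.ylabel("Merge distance")
plt.show()
```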

Hyperparameter Tuning

Hyperparameter tuning, also known as hyperparameter optimization, is a crucial step in machine learning (ML) model development. Hyperparameters are parameters that are not learned from the data during the training process but are set prior to training. They control aspects of the model’s training process and, ultimately, its performance.

Examples of hyperparameters include learning rate, the number of hidden layers in a neural network, the number of decision trees in a random forest, and the regularization strength in a linear regression model.The goal of hyperparameter tuning is to find the best combination of hyperparameters that yields the optimal performance of a machine learning model on a specific task or dataset. This involves systematically searching through different hyperparameter settings to find the configuration that results in the highest accuracy, lowest error, or best performance metric for the problem at hand. Hyperparameter tuning helps improve a model’s generalization ability and ensures that it can make accurate predictions on new, unseen data.

There are several methods for hyperparameter tuning, including:

  1. Grid Search: In grid search, we specify a set of hyperparameters and their possible values, and the algorithm exhaustively tests all combinations. This can be time-consuming for large search spaces but ensures that we explore all possible options.
  2. Random Search: Random search involves randomly sampling hyperparameters from predefined distributions. It’s often more efficient than grid search because it doesn’t require testing all possible combinations.
  3. Bayesian Optimization: Bayesian optimization is a probabilistic model-based approach that leverages the information from previous evaluations to make informed choices about the next set of hyperparameters to test. This can be more efficient for complex and expensive-to-evaluate models.

Automated Hyperparameter Tuning Libraries: There are libraries and tools like scikit-learn's GridSearchCV and RandomizedSearchCV, as well as external tools like Optuna, Hyperopt, and others, designed to facilitate hyperparameter tuning.
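A short grid search sketch with scikit-learn's GridSearchCV; the model and parameter grid are arbitrary examples.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Candidate hyperparameter values; the grid is searched exhaustively with 5-fold CV.
param_grid = {"n_estimators": [50, 100, 200], "max_depth": [3, 5, None]}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)

print("Best parameters:", search.best_params_)
print("Best CV score:", search.best_score_)
```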

Washington Post Police Shootings Dataset: Columns and Insights

The Washington Post has been compiling a database of every fatal shooting in the United States by a police officer in the line of duty since 2015. The database contains records of every person shot by an on-duty police officer since January 1, 2015, as well as the agencies involved in each event. It is regularly updated as fatal shootings are reported and as facts emerge about individual cases.

The Post provides a comprehensive and accurate record of police shootings in the United States. It aims to fill the gap in data reported to the FBI on fatal police shootings, which has been found to be undercounted by more than half.

The dataset in the repository contains several columns that provide valuable information about each fatal police shooting incident. Here are some of the key columns and what can be inferred from them:

  1. “id”: This column represents a unique identifier for each shooting incident. It allows for easy referencing and tracking of individual cases.
  2. “name”: This column contains the name of the person who was fatally shot by a police officer. It provides insight into the identities of the victims involved in these incidents.
  3. “date”: The “date” column indicates the date on which the shooting incident occurred. By analyzing this column, patterns and trends in police shootings over time can be identified.
  4. “manner_of_death”: This column provides information on the manner in which the person died, whether it was due to a gunshot wound or other causes. It helps in understanding the circumstances surrounding each shooting incident.
  5. “armed”: The “armed” column describes the weapons or objects the person had at the time of the shooting. It provides insights into whether the person was armed, unarmed, or had a potentially dangerous object.
  6. “age”: This column represents the age of the person who was shot. Analyzing this column can reveal patterns related to age groups affected by police shootings.
  7. “gender”: The “gender” column indicates the gender of the person involved in the shooting incident. It helps in understanding whether there are any gender-based disparities in police shootings.
  8. “race”: This column provides information about the race or ethnicity of the person who was shot. It allows for the examination of racial disparities in police shootings.
  9. “city”: The “city” column specifies the city or location where the shooting incident took place. It helps in identifying geographical patterns in police shootings.
  10. “state”: This column represents the state in which the shooting incident occurred. Analyzing this column allows for comparisons between different states and their respective rates of police shootings.

Insights from analyzing the data

By analyzing the data in these columns, researchers can gain insights into:

  • Demographics: Who is most at risk of being shot by police based on age, gender, race
  • Circumstances: Whether the person was armed and what threats they posed
  • Geographic patterns: locations with higher or lower rates of police shootings
  • Trends over time: Changes in police shootings year over year

This data can help identify potential biases, disparities, and problem areas to improve law enforcement policies and training.

 

Clustering In Detail

Clustering is a fundamental technique in unsupervised machine learning and data analysis. It involves grouping similar data points together based on certain features or characteristics. Clustering is used for various purposes, such as discovering patterns in data, segmenting customers, image analysis.

Key Concepts in Clustering:

Unsupervised Learning: Clustering is an unsupervised learning technique, which means it doesn’t rely on labeled data. Instead, it identifies patterns and structures within data based on inherent similarities.The main objective of clustering is to find groups or clusters of data points where points within the same cluster are more similar to each other than to those in other clusters. Clusters are defined based on some similarity or distance metric.

Distance Metrics: Clustering algorithms typically use distance metrics to measure the similarity or dissimilarity between data points. Common distance metrics include Euclidean distance, Manhattan distance, cosine similarity, and more.

Centroids: Many clustering algorithms are centroid-based. They identify cluster centers (centroids) and assign data points to the nearest centroid. The choice of centroids and the distance metric can vary depending on the algorithm.

Hierarchical vs. Partitional Clustering: Clustering methods can be broadly categorized into hierarchical and partitional clustering

  • Hierarchical Clustering: It creates a tree-like structure of clusters, where clusters at one level are merged to form larger clusters at the next level. It can result in a hierarchical structure of clusters.
  • Partitional Clustering: It divides data into non-overlapping clusters, with each data point belonging to exactly one cluster. Common partitional methods include K-Means and DBSCAN.

Some Clustering Algorithms:

K-Means: This is one of the most popular partitional clustering algorithms. It aims to partition data into K clusters, where K is a user-defined parameter. It uses centroids to represent each cluster.

Hierarchical Clustering: Algorithms like Agglomerative and Divisive clustering create a hierarchy of clusters.

DBSCAN (Density-Based Spatial Clustering of Applications with Noise): DBSCAN groups together data points that are close to each other and separates areas with low data density.

Project Methodology in Brief

Feature Selection: A careful selection of features from the data was undertaken. Essential columns, including ‘% DIABETIC’, ‘% OBESE’, and ‘% INACTIVE’, were identified for their relevance to health indicators. This strategic feature selection ensured a focused and meaningful exploration of the datasets.

Statistical Analysis – ANOVA: To unravel the statistical relationships with categorical variables, the Analysis of Variance (ANOVA) function was applied. Categorical predictors such as FIPS, COUNTY, and STATE were scrutinized for their correlation with the target variable ‘% DIABETIC’. The ANOVA results, gauged through p-values, informed the selection of influential variables impacting diabetes rates.

Linear Regression Models: The research delved into linear regression models to discern nuanced relationships. Individual analyses were executed to gauge the influence of specific health indicators. Linear regression models were employed to understand the impact of ‘% INACTIVE’ and ‘% OBESE’ on ‘% DIABETIC’.

Bootstrap as a Statistical Tool

Bootstrap is a powerful statistical tool and resampling technique that is used for estimating the sampling distribution of a statistic by repeatedly resampling from the observed data. It is particularly valuable when traditional parametric methods are not applicable or when we want to make inferences about a population parameter without making strong distributional assumptions.

  • Resampling: Bootstrap involves drawing random samples, with replacement, from the observed data. These resampled datasets are called “bootstrap samples.” Each bootstrap sample typically has the same size as the original dataset.
  • Estimation: A statistic of interest, such as the mean, median, variance, or a parameter estimate, is calculated for each bootstrap sample. This provides a collection of values for the statistic of interest, which forms the basis for inference.
  • Sampling Distribution: By repeating the resampling process a large number of times, we can create a “bootstrap distribution” for the statistic. This distribution approximates the sampling distribution of the statistic under the assumptions of the original data.
  • Inference: With the bootstrap distribution in hand, we can perform various types of statistical inference. For example, we can calculate confidence intervals, estimate standard errors, perform hypothesis tests, and more, without relying on traditional parametric assumptions like normality.
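A minimal bootstrap sketch in NumPy that estimates a 95% confidence interval for the mean of a small hypothetical sample.

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.array([4.2, 5.1, 6.3, 5.8, 4.9, 6.0, 5.5, 4.7])  # hypothetical sample

# Resample with replacement many times and record the statistic of interest.
boot_means = np.array([
    rng.choice(data, size=len(data), replace=True).mean()
    for _ in range(10_000)
])

# Percentile-based 95% confidence interval for the mean.
lower, upper = np.percentile(boot_means, [2.5, 97.5])
print(f"Bootstrap 95% CI for the mean: ({lower:.2f}, {upper:.2f})")
```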

INTERACTION TERM

  • An interaction term in model building is a statistical construct that represents the combined effect of two or more independent variables (also known as predictors) on the dependent variable (the outcome or response variable).
  • Interaction terms are used in regression analysis and other statistical modeling techniques to account for situations where the relationship between the dependent variable and one independent variable depends on the level or values of another independent variable.
  • In simpler terms, it allows us to assess whether the effect of one variable on the outcome changes based on the value of another variable.
  • In a linear regression model, an interaction term is typically denoted by multiplying the two or more predictor variables involved.
  • For example, if we have two predictors, X1 and X2, and suspect an interaction between them, we would include an interaction term like X1 * X2 in the regression equation.
  • In a polynomial regression model, we can see how the effect of one predictor variable changes as a function of another predictor variable. Mathematically, an interaction term between two predictors, X₁ and X₂, in a quadratic polynomial regression might look like this: Y = β₀ + β₁X₁ + β₂X₁² + β₃X₂ + β₄X₁X₂ + ε
  • In this equation, the interaction term is represented by β₄X₁X₂. β₄ quantifies how the effect of X₁ on Y changes depending on the value of X₂.

INTERPRETATION :

For Linear equation:

  • If the coefficient for the interaction term (e.g., X1 * X2) is statistically significant and positive, it suggests that the effect of X1 on the outcome is amplified when X2 increases.
  • If the coefficient for the interaction term is statistically significant and negative, it suggests that the effect of X1 on the outcome is diminished when X2 increases.

For Polynomial equation :

  • If β₄ is positive, it suggests that as X₁ increases, the effect of X₂ on Y becomes stronger.
  • If β₄ is negative, it suggests that as X₁ increases, the effect of X₂ on Y becomes weaker.
  • If β₄ is close to zero, it indicates little or no interaction between X₁ and X₂.
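A sketch of fitting and interpreting an interaction term with statsmodels' formula interface; the DataFrame columns X1, X2, and Y are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data in which the effect of X1 on Y depends on X2.
rng = np.random.default_rng(0)
df = pd.DataFrame({"X1": rng.normal(size=200), "X2": rng.normal(size=200)})
df["Y"] = 1 + 2 * df["X1"] + 0.5 * df["X2"] + 1.5 * df["X1"] * df["X2"] + rng.normal(size=200)

# "X1 * X2" expands to X1 + X2 + X1:X2, where X1:X2 is the interaction term.
model = smf.ols("Y ~ X1 * X2", data=df).fit()
print(model.params)  # the sign and size of the X1:X2 coefficient quantify the interaction
```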

Techniques to increase the R-squared value in polynomial regression

To increase the R-squared value in polynomial regression, there are several strategies to improve the model's fit to the data and better capture the underlying relationships between the independent and dependent variables. Some approaches to consider are:

  1. Higher Polynomial Degrees: Increasing the polynomial degree allows the model to capture more complex relationships in the data. However, we have to be cautious not to overfit the data by selecting a degree that is too high.
  2. Feature Engineering: Adding additional relevant features to the model. New features may help explain more variance in the dependent variable. Domain knowledge can guide in identifying meaningful additional features.
  3. Interaction Terms: Interaction terms capture the combined effect of two or more independent variables. Including interaction terms can help capture more nuanced relationships in the data.
  4. Outlier Handling: Identify and address outliers in the dataset. Outliers can disproportionately influence the regression model and reduce R2. We can either remove outliers or use robust regression techniques that are less sensitive to outliers.
  5. Feature Scaling: Ensure that the features are appropriately scaled. Some algorithms, like polynomial regression, can be sensitive to the scale of the input features. Standardize or normalize the features to have similar scales.
  6. Data Quality: Ensure that our dataset is of high quality, free from missing values and data errors. Poor data quality can lead to misleading results and a lower R-squared.
  7. Residual Analysis: Examine the residuals – the differences between actual and predicted values. We should look for patterns or systematic errors in the residuals. If we find patterns, it may indicate that the model is not capturing some important relationships.
  8. Model Selection: Consider exploring other regression algorithms or machine learning models that may better suit our data. Different algorithms have different strengths, and one model may perform better than polynomial regression for our specific problem.

Polynomial Regression and Cross Validation

Polynomial regression is a type of regression analysis used in statistics and machine learning to model the relationship between a dependent variable (target) and one or more independent variables (predictors) as an nth-degree polynomial function. In simple terms, it extends linear regression by allowing the relationship between the variables to be more complex, capturing non-linear patterns in the data. It achieves this flexibility by introducing higher-order terms of the independent variable(s). The equation for a polynomial regression model of degree n can be represented as:

Y = b₀ + b₁X + b₂X² + … + bₙXⁿ

Where:

  • Y is still the dependent variable.
  • X is the independent variable.
  • b₀ is the intercept.
  • b₁, …, bₙ are the coefficients of the polynomial terms.

  • We evaluated a range of 5 polynomial degrees. For each degree, we created polynomial features, fit a polynomial regression model, and performed cross-validation to obtain R-squared scores.
  • Plotted the learning curve to visualize how the cross-validation score changes with the polynomial degree.
  • Identified the best degree with the highest cross-validation R-squared score.
  • From the resulting plot, we can conclude that the best-fitting degree for the present data is 2.
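A sketch of the degree-selection procedure described above, using scikit-learn on synthetic data; the degree range (1 to 5) mirrors the description.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Synthetic data with an underlying quadratic relationship.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = 1 + 2 * X[:, 0] - 0.5 * X[:, 0] ** 2 + rng.normal(scale=0.5, size=200)

# Cross-validated R-squared for each candidate degree.
scores = {}
for degree in range(1, 6):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    scores[degree] = cross_val_score(model, X, y, cv=5, scoring="r2").mean()

print(scores)
print("Best degree by cross-validated R-squared:", max(scores, key=scores.get))
```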

K-fold validation and Estimating Prediction Error

K-Fold Cross-Validation: K-fold cross-validation and other cross-validation schemes are techniques used in machine learning and statistics to assess the performance of a predictive model and to reduce the risk of overfitting. They all involve splitting a dataset into multiple subsets, training and evaluating the model on different subsets, and then aggregating the results, though they differ in how the splits are made.

  • K-fold cross-validation is a technique where the dataset is divided into K equally sized folds or subsets.
  • The model is trained and evaluated K times, with each fold serving as the test set once while the remaining K-1 folds are used for training.
  • The results from the K iterations are typically averaged to obtain a single performance metric, such as accuracy or mean squared error.
  • This technique helps in assessing how well a model generalizes to different subsets of data and reduces the risk of overfitting, since the model is evaluated on different data partitions. Example: in 5-fold cross-validation, the dataset is split into 5 subsets, and the model is trained and tested on each subset separately.
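A short sketch of 5-fold cross-validation with scikit-learn, showing the per-fold train/test split explicitly; the dataset and model are placeholders.

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

X, y = load_diabetes(return_X_y=True)
kf = KFold(n_splits=5, shuffle=True, random_state=0)

fold_mse = []
for train_idx, test_idx in kf.split(X):
    # Each fold serves as the test set exactly once.
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    preds = model.predict(X[test_idx])
    fold_mse.append(mean_squared_error(y[test_idx], preds))

# Average the per-fold errors into a single performance estimate.
print("Per-fold MSE:", [round(m, 1) for m in fold_mse])
print("Mean MSE across folds:", round(float(np.mean(fold_mse)), 1))
```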

Estimating Prediction Error: Estimating prediction error and the validation set approach are important concepts in the context of model evaluation and selection in machine learning. They are used to assess how well a predictive model is likely to perform on unseen data. Let's explore these concepts:

  • The prediction error of a machine learning model refers to how well the model’s predictions match the true values in the dataset.
  • The primary goal of estimating prediction error is to understand how well the model generalizes to new, unseen data. A model that performs well on the training data but poorly on new data is said to have high prediction error, indicating overfitting.
  • There are various techniques to estimate prediction error, including cross-validation, which we discussed earlier, as well as techniques like bootstrapping.
  • Common metrics used to measure prediction error include mean squared error (MSE) for regression problems and accuracy, precision, recall, F1-score, etc., for classification problems.

Exploring the Relationship Between Obesity, Physical Inactivity, and Diabetes Rates Using Decision Tree

In this data-driven analysis, we explore the relationship between obesity, physical inactivity, and diabetes rates across various counties in the United States. Our primary goal is to build a Decision Tree regression model. We split the data into training and testing sets, with 80% used for training and 20% for testing, and chose to focus our analysis on predicting %diabetic based on %obese and %inactive. Our analysis employed a Decision Tree regression model, a powerful tool for understanding how different variables influence a target variable. The Decision Tree was trained on the training data, and its performance was evaluated using Mean Squared Error (MSE) and R-squared (R2) metrics.

Mean Squared Error (MSE):

The MSE is a measure of the average squared difference between the actual values and the predicted values. In our case, an MSE of 0.71 suggests that, on average, the model's predictions have a squared error of 0.71. This means that the model's predictions deviate from the actual values by a relatively small amount, which is generally a positive sign. However, the interpretation of MSE values depends on the specific scale and context of the target variable.

R-squared (R2):

An R2 score of -0.08 indicates that the model does not explain much of the variance in %diabetic. In fact, it has a negative R2 score, which suggests that the model performs worse than a horizontal line (a constant prediction). A negative R2 score could indicate that the model doesn't fit the data well and may not be a good choice for predicting %diabetic based solely on %obese and %inactive.

From the results, we see that the Decision Tree model did not perform well in explaining the variance in %diabetic using %obese and %inactive as predictors. The negative R2 score indicates that the model's predictions are poor and likely fail to capture the underlying patterns.

T-test and P-Value

A t-test is a statistical hypothesis test used to determine if there is a significant difference between the means of two groups. It is particularly useful when comparing the means of two groups to assess if the observed differences are statistically significant. The t-test calculates a test statistic, often denoted as “t,” which is then used to calculate a p-value.

Null hypothesis (H0) and alternative hypothesis (H1):

    • Null Hypothesis (H0): There is no significant difference between the means of pre-molt and post-molt data.
    • Alternative Hypothesis (H1): There is a significant difference between the means of pre-molt and post-molt data.
    • Calculate the t-statistic: It is calculated using the formula t = (mean difference) / (standard error of the difference).
    • Calculate the degrees of freedom (df): For an independent two-sample t-test, df = n₁ + n₂ − 2.
    • We can use a t-distribution table or a statistical software package to find the p-value associated with the calculated t-statistic and degrees of freedom. Alternatively, most statistical software packages provide built-in functions to directly calculate the p-value.
    • We should check the assumptions of normality and equal variance for the two groups. If the variances are not approximately equal, we may need to use a modified t-test. Compare the p-value to the significance level:
      • If p ≤ α, reject the null hypothesis (H0), indicating that there is a significant difference between the means.
      • If p > α, fail to reject the null hypothesis, suggesting that there is no significant difference between the means.

For our Crab data,
Step 1: Define Null and Alternative Hypotheses

  • Null Hypothesis (H0): This is the default assumption that there is no significant difference between the groups we are comparing. Here, it means that there is no significant difference between pre-molt and post-molt crab data.
  • Alternative Hypothesis (Ha): This is what we want to test. It suggests that there is a significant difference between the groups.

Step 2: Collect Data

We collect data for pre-molt and post-molt crab sizes. These are the two groups for comparison.

Step 3: Perform the t-test

The t-test is a statistical test that calculates the t-statistic, which is a measure of how much the means of two groups differ relative to the variation in the data.

Step 4: Calculate the p-value

The p-value is a crucial result of the t-test. It represents the probability of observing the data that we have (or more extreme data) under the assumption that the null hypothesis is true (i.e., there is no significant difference between the groups). A small p-value indicates that the observed data is unlikely to have occurred by random chance alone.

Step 5: Interpret the p-value

To make a decision, we need to compare the p-value to a significance level (alpha), typically set at 0.05. There are two possible outcomes:

  • If p-value < alpha: reject the null hypothesis (H0). This means that the data provides strong evidence that there is a significant difference between pre-molt and post-molt crab sizes.
  • If p-value ≥ alpha: fail to reject the null hypothesis (H0). This means that the data does not provide enough evidence to conclude that there is a significant difference between the groups.

Step 6: Make a Conclusion

Based on the comparison of the p-value and alpha, we can conclude that there is a significant difference between pre-molt and post-molt crab sizes.
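The whole procedure can be sketched with SciPy; the pre-molt and post-molt arrays below are hypothetical stand-ins for the actual crab measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical pre-molt and post-molt crab sizes (not the actual dataset).
pre_molt = np.array([113.6, 118.1, 142.3, 125.4, 135.0, 121.7])
post_molt = np.array([127.7, 133.2, 154.8, 138.5, 146.1, 134.0])

# Welch's t-test (equal_var=False) avoids assuming equal variances.
t_stat, p_value = stats.ttest_ind(pre_molt, post_molt, equal_var=False)

alpha = 0.05
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print("Reject H0" if p_value < alpha else "Fail to reject H0")
```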

Concepts of Regression, R-Squared Value, and Overfitting

Concepts of Regression

Regression is a statistical method used for modeling the relationship between a dependent variable (target) and one or more independent variables (predictors or features). The primary goal of regression analysis is to understand how changes in the independent variables affect the dependent variable.

Regression Equation: The foundation of regression analysis is the regression equation, which represents the relationship between the dependent variable (Y) and one or more independent variables (X₁, X₂, … Xₖ).

In simple linear regression, the equation is: Y = β₀ + β₁X + ε, where:

  • Y is the dependent variable.
  • X is the independent variable.
  • β₀ and β₁ are the coefficients to be estimated (intercept and slope).
  • ε represents the error term, which accounts for the unexplained variability in Y.

 

Coefficients (β₀ and β₁): Coefficients are values that the regression model estimates to quantify the relationship between the independent and dependent variables.

  • β₀ (intercept): Represents the value of Y when X is 0.
  • β₁ (slope): Represents the change in Y for a one-unit change in X.

Residuals: Residuals (or errors) are the differences between the observed values of the dependent variable (Y) and the predicted values (Ŷ) from the regression model.

  • Residuals are calculated as: Residual = Y – Ŷ.
  • Analyzing residuals helps assess the model’s fit and assumptions.

Goodness of Fit: Goodness of fit measures how well the regression model fits the data.

  • One common measure is R-squared (R²), which quantifies the proportion of variance in Y that is explained by the independent variables. R² ranges from 0 to 1, with higher values indicating a better fit.

Cross-Validation: Cross-validation is a technique used to evaluate a model’s performance on unseen data.

  • Common methods include k-fold cross-validation, where the dataset is divided into k subsets (folds), and the model is trained and tested on different combinations of these folds to estimate its generalization performance (a short sketch tying these regression pieces together follows).
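
A minimal sketch of these pieces using scikit-learn on synthetic data (the variable names and numbers are illustrative assumptions, not our actual dataset):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic data: one predictor X and a target Y = β₀ + β₁·X + noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
Y = 2.0 + 0.5 * X[:, 0] + rng.normal(0, 1, size=100)

model = LinearRegression().fit(X, Y)
print("Intercept (β₀ estimate):", model.intercept_)
print("Slope (β₁ estimate):", model.coef_[0])

Y_hat = model.predict(X)
residuals = Y - Y_hat                      # Residual = Y − Ŷ
print("R-squared on the training data:", model.score(X, Y))

# 5-fold cross-validation to estimate performance on unseen data
cv_scores = cross_val_score(LinearRegression(), X, Y, cv=5, scoring="r2")
print("Mean cross-validated R-squared:", cv_scores.mean())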

R-Squared Value:

R-squared (R²) is a statistical measure often used to evaluate the goodness of fit of a regression model. It quantifies the proportion of the variance in the dependent variable that is explained by the independent variables in the model. R² values range from 0 to 1, with higher values indicating a better fit. In formula terms, R² = 1 − SS_res / SS_tot, where SS_res is the sum of squared residuals and SS_tot is the total sum of squares around the mean of the dependent variable.
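
A small sketch of that calculation (Y and Y_hat below are hypothetical observed and predicted arrays):

import numpy as np

# Hypothetical observed and predicted values
Y = np.array([3.1, 4.0, 5.2, 6.1, 7.3])
Y_hat = np.array([3.0, 4.2, 5.0, 6.3, 7.1])

ss_res = np.sum((Y - Y_hat) ** 2)      # unexplained (residual) variation
ss_tot = np.sum((Y - Y.mean()) ** 2)   # total variation around the mean
r_squared = 1 - ss_res / ss_tot
print(r_squared)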

Overfitting in Regression: Overfitting in regression occurs when the model is excessively complex and fits the training data too closely. It tries to capture not only the true underlying relationship between the predictors and the target variable but also noise, random fluctuations, and outliers present in the training data.

Consequences:

    • On the training data, an overfit model will exhibit a very high R-squared because it essentially “memorizes” the training data.
    • On new, unseen data, the model’s performance deteriorates significantly because it cannot generalize well beyond the training data. This results in a much lower R-squared, indicating that the model is not reliable for making predictions (a small sketch of this train/test gap follows this list).
    • For instance, consider a machine learning model that predicts university students’ academic performance and graduation outcomes based on factors such as family income, past academic performance, and the academic qualifications of parents. However, the training data only includes candidates from a specific gender or ethnic group.
    • In this case, the model may overfit to the specific gender or ethnic group present in the training data.
    • It might learn patterns or biases that are not applicable to candidates from different gender or ethnic backgrounds.
    • As a result, it struggles to make accurate predictions for candidates outside the narrow demographic represented in the training dataset. The solution is to make the training dataset more representative of the diversity of the university student population; including data from a broader range of gender and ethnic backgrounds will help the model generalize and make fairer predictions for all students.
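
A minimal sketch of that train/test R-squared gap, deliberately overfitting a high-degree polynomial to synthetic data (all names and numbers here are illustrative assumptions):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

# Synthetic data with a simple linear trend plus noise
rng = np.random.default_rng(1)
X = rng.uniform(0, 5, size=(40, 1))
y = 1.0 + 2.0 * X[:, 0] + rng.normal(0, 1, size=40)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# Overly complex model: a degree-15 polynomial fit on only 20 training points
overfit = make_pipeline(PolynomialFeatures(degree=15), LinearRegression())
overfit.fit(X_train, y_train)

print("Train R-squared:", overfit.score(X_train, y_train))  # very high ("memorized")
print("Test R-squared:", overfit.score(X_test, y_test))     # typically much lower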

Pearson correlation coefficient (R)

From the following calculation, we know the correlation between %diabetes and %inactivity:

Correlation[DiabetesShort〚All, 2〛, Inactivity〚All, 2〛]

The output is 0.441706, which gives R ≈ 0.442.
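
An equivalent calculation in Python, assuming the %diabetes and %inactivity values for the common data points are stored in two aligned columns (the file and column names are assumptions):

import pandas as pd
from scipy import stats

# Hypothetical merged dataset of the common FIPS data points
df = pd.read_csv("diabetes_inactivity.csv")
r, p_value = stats.pearsonr(df["pct_diabetes"], df["pct_inactivity"])
print(f"Pearson R = {r:.3f}, p-value = {p_value:.4g}")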

The Pearson correlation coefficient, often denoted as “R,” is a statistical measure that quantifies the strength and direction of a linear relationship between two continuous variables. It ranges from -1 to 1, where:

    • -1: Perfect negative linear correlation (as one variable increases, the other decreases).
    • 0: No linear correlation (variables are not linearly related).
    • 1: Perfect positive linear correlation (as one variable increases, the other increases).

Interpretation of R = 0.442

  • In our analysis, we calculated an R value of approximately 0.442 when assessing the correlation between %diabetes and %inactivity.
  • A positive R value indicates a positive linear relationship, which means that as %inactivity increases, %diabetes tends to increase as well. However, the strength of this relationship is moderate, as the R value is not close to 1.
  • The value of 0.442 suggests a statistically significant, but not exceptionally strong, positive correlation between %diabetes and %inactivity.
  • When |R| is closer to 1 (either positive or negative), it indicates a stronger linear relationship. In our case, the correlation is moderate, meaning that while there is a connection between %inactivity and %diabetes, other factors may also influence %diabetes rates, and the relationship is not entirely deterministic.
  • However, it’s important to note that correlation does not imply causation. In other words, while there is a statistical relationship, it does not mean that inactivity directly causes diabetes. There could be confounding variables or other factors at play.

Further analysis, including regression modeling and potentially considering additional variables, can help explore the causal relationships and make predictions based on this data. The Pearson correlation coefficient of 0.442 indicates a moderate positive linear relationship between %diabetes and %inactivity, but it is important to conduct more in-depth analysis to understand the underlying factors and potential causal relationships between these variables.

BP Test and Hypothesis Testing

Today’s lecture focused on some essential statistical concepts that are important for understanding research. The BP test, null hypothesis, alternative hypothesis, and p-value were covered.

First, the Breusch-Pagan (BP) test is a statistical test employed to examine heteroscedasticity in regression analysis. It assesses whether the variance of the errors is constant across different levels of the independent variables, which is crucial for evaluating whether the assumptions of a regression model are met.
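
A minimal sketch of running the BP test in Python with statsmodels, on synthetic data where the error variance grows with the predictor (the variable names are placeholders):

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

# Hypothetical predictor and response; the noise grows with x, so the errors are heteroscedastic
rng = np.random.default_rng(2)
x = rng.uniform(0, 10, size=100)
y = 1.0 + 0.8 * x + rng.normal(0, 1 + 0.3 * x)

X = sm.add_constant(x)
model = sm.OLS(y, X).fit()

# H0 for the BP test: the residual variance is constant (homoscedastic)
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(model.resid, model.model.exog)
print("LM statistic:", lm_stat, "p-value:", lm_pvalue)

A small p-value here would be evidence against the assumption of constant error variance, mirroring the decision rule described below.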

Hypothesis testing involves collecting data, calculating a test statistic, and using the p-value to determine whether to reject the null hypothesis. The null hypothesis, commonly represented as H0, is a statement asserting the absence of a significant effect or relationship within the data. The alternative hypothesis, frequently denoted as Ha or H1, indicates the presence of a significant effect or relationship. Decisions concerning these hypotheses are made using the p-value, which is a measure of the strength of evidence against the null hypothesis; a small p-value indicates strong evidence against H0, which leads to its rejection.

If we consider a scenario related to customer satisfaction, the null hypothesis suggests that modifying the website’s layout does not result in any significant changes in customer satisfaction, while the alternative hypothesis indicates that the change does make a significant difference. Hypothesis testing involves conducting a study where some customers see the old website layout, and others see the new website, and then comparing their satisfaction scores to determine whether there’s enough evidence to reject the null hypothesis in favor of the alternative hypothesis.

In summary, the lecture provided insight into the utilization of the BP test for the assessment of regression model assumptions and the formulation of hypotheses, as well as their evaluation using p-values.

Week-1 Monday

I have conducted a data analysis focusing on the relationship between diabetes and inactivity. Initially, I analysed the data points using Microsoft Excel to get a basic understanding and found that there are common data points among diabetes, inactivity, and obesity; the FIPS data points for inactivity are a subset of the FIPS data points for diabetes.
• After going through the PDF “CDC Diabetes 2018”, I observed the basic evaluation metrics such as mean, median, skewness, and standard deviation for the data points. I found that %diabetes has a slight skewness, with a kurtosis of about 4, and the quantile plot shows a deviation from normality.
• Similarly, for %inactivity, the skewness is in the other direction, with a kurtosis lower than that of the normal distribution, which is 3.
• From the scatter plot of the common diabetes–inactivity data pairs and the linear model fit to them, roughly 20% of the variation in %diabetes can be explained by the variation in %inactivity.
• I found that the residuals of the linear model deviate from normality and also show heteroscedasticity.
• The plot of the residuals against the predicted values from the linear model shows the residuals fanning out, which indicates that a simple linear model is not a suitable fit.
However, I’m still very enthusiastic to learn all of the above statistics using Python, and I’m trying to do the same; a first sketch of that attempt is below.
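
A first sketch of the same analysis in Python (the file and column names here are assumptions, not the actual CDC file layout):

import pandas as pd
from scipy import stats
import statsmodels.api as sm
import matplotlib.pyplot as plt

# Hypothetical merged data on the common FIPS codes
data = pd.read_csv("cdc_diabetes_2018.csv")

# Basic metrics: skewness and (Pearson) kurtosis, where a normal distribution has kurtosis 3
for col in ["pct_diabetes", "pct_inactivity"]:
    print(col, "skew:", stats.skew(data[col]), "kurtosis:", stats.kurtosis(data[col], fisher=False))

# Simple linear model: %diabetes ~ %inactivity
X = sm.add_constant(data["pct_inactivity"])
model = sm.OLS(data["pct_diabetes"], X).fit()
print("R-squared:", model.rsquared)

# Residuals vs fitted values: fanning out suggests heteroscedasticity
plt.scatter(model.fittedvalues, model.resid, s=10)
plt.xlabel("Fitted values")
plt.ylabel("Residuals")
plt.show()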