EDA and Machine Learning Models in R and Python (Regression, Classification, Clustering, SVM, Decision Tree, Random Forest, Time-Series Analysis, Recommender System, XGBoost)
Recursive feature elimination (RFE) is based on the idea of repeatedly constructing a model (for example, an SVM or a regression model), selecting the best- or worst-performing feature (for example, based on its coefficient), setting that feature aside, and then repeating the process with the remaining features. This continues until all features in the dataset are exhausted, and features are then ranked according to when they were eliminated. As such, RFE is a greedy optimization for finding the best-performing subset of features.
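A minimal sketch of this procedure, using scikit-learn's RFE with a linear-kernel SVM (the synthetic dataset is an assumption for illustration only):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Synthetic dataset: 10 features, of which 4 are informative
# (an illustrative assumption, not data from this repo).
X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=4, random_state=42)

# A linear-kernel SVM exposes coef_, which RFE uses to score features
# before eliminating the weakest one each round.
selector = RFE(SVC(kernel="linear"), n_features_to_select=4, step=1)
selector.fit(X, y)

print("Selected features:", selector.support_)   # True for the kept features
print("Feature ranking:  ", selector.ranking_)   # 1 = kept; larger = eliminated earlier
```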
Parametric vs. non-parametric models (in short and in detail)
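In short: a parametric model summarises the training data with a fixed, finite set of parameters (e.g., the coefficients of a linear regression), whereas a non-parametric model makes no such fixed-size assumption and its effective complexity can grow with the data (e.g., k-nearest neighbours, decision trees). A hedged sketch of the contrast, assuming scikit-learn and a toy one-dimensional dataset:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

# Toy data (an illustrative assumption): noisy sine curve on [0, 10].
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=100)

# Parametric: everything learned is captured by a fixed set of numbers.
lin = LinearRegression().fit(X, y)
print("intercept:", lin.intercept_, "slope:", lin.coef_)

# Non-parametric: predictions are computed from the stored training points,
# so model complexity grows with the dataset.
knn = KNeighborsRegressor(n_neighbors=5).fit(X, y)
print("KNN prediction at x=5:", knn.predict([[5.0]]))
```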
Regression guarantees interpolation of the data, not extrapolation
Interpolation means using the model to predict the dependent variable at independent-variable values that lie within the range of the data you already have. Extrapolation, on the other hand, means predicting the dependent variable at independent-variable values that lie outside the range of the data the model was built on.
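For instance, the sketch below (assuming scikit-learn; the [0, 10] training range and the query points are illustrative assumptions) shows the same fitted line making a defensible prediction inside its training range and an unsupported one outside it:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(100, 1))          # independent values in [0, 10]
y = 3 * X.ravel() + 2 + rng.normal(size=100)   # roughly linear relationship

model = LinearRegression().fit(X, y)

print(model.predict([[5.0]]))    # interpolation: 5 lies inside [0, 10]
print(model.predict([[50.0]]))   # extrapolation: 50 lies far outside [0, 10];
                                 # nothing in the fit validates this prediction
```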
Explanation of linear or linearity in Linear Regression
The term 'linear' in linear regression refers to linearity in the coefficients, i.e. the target variable y is linearly related to the model coefficients. It does not require that y be linearly related to the raw attributes or features; the feature functions themselves can be linear or non-linear.
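As a concrete sketch (assuming scikit-learn; the quadratic toy data is an illustrative assumption), the model below uses the non-linear feature functions x and x² yet is still a linear regression, because the prediction is a linear combination of the fitted coefficients:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(2)
x = rng.uniform(-3, 3, size=(100, 1))
y = 1.0 + 2.0 * x.ravel() - 0.5 * x.ravel() ** 2 + rng.normal(scale=0.2, size=100)

# Non-linear feature functions [x, x^2]; the model remains linear
# in its coefficients, so ordinary linear regression still applies.
X_poly = PolynomialFeatures(degree=2, include_bias=False).fit_transform(x)
model = LinearRegression().fit(X_poly, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
```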
Techniques for handling Class Imbalance in a Dataset
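Common options include resampling the training data (random over- or under-sampling, or synthetic over-sampling such as SMOTE), weighting classes in the loss function, adjusting the decision threshold, and evaluating with metrics that respect the minority class (precision, recall, F1, PR-AUC) rather than plain accuracy. A minimal sketch of one of these, class weighting, assuming scikit-learn (the 90/10 synthetic dataset is an assumption for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Imbalanced synthetic data: roughly 90% majority, 10% minority class.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" reweights errors inversely to class frequency,
# so the minority class is not ignored during fitting.
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```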