Data-Science-Regular-Bootcamp
Regular practice on Data Science, Machine Learning, Deep Learning, solving ML project problems, and analytical issues, to regularly boost my knowledge. The goal is to help learners with learning resources on Da...
It is simple to implement. It is robust to noisy training data. It can be more effective when the training data is large.
The working of K-NN can be explained with the following algorithm: Step-1: Select the number K of neighbors. Step-2: Calculate the Euclidean distance of K number of...
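The steps above (measure Euclidean distances, take the K nearest neighbours, assign the majority category) can be sketched in a few lines of NumPy. The toy data set and the value of k below are illustrative assumptions, not part of the original text:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=3):
    # Step-2: Euclidean distance from the new point to every training point
    dists = np.linalg.norm(X_train - x_new, axis=1)
    # Step-3: indices of the K nearest neighbours
    nearest = np.argsort(dists)[:k]
    # Count the neighbours in each category and assign the majority category
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Illustrative toy data: two categories, A and B
X_train = np.array([[1.0, 1.0], [1.5, 2.0], [5.0, 5.0], [6.0, 5.5]])
y_train = np.array(["A", "A", "B", "B"])
print(knn_predict(X_train, y_train, np.array([1.2, 1.5])))  # → A
```

With k=3 the new point's three nearest neighbours are two A points and one B point, so the majority vote assigns it to Category A.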
Suppose there are two categories, i.e., Category A and Category B, and we have a new data point x1. In which of these categories will this data point lie?...
K-Nearest Neighbour is one of the simplest Machine Learning algorithms, based on the Supervised Learning technique. The K-NN algorithm assumes similarity between the new case/data and the available cases and puts the...
In the previous steps, apart from standardization, you do not make any changes to the data; you just select the principal components and form the feature vector, but the input...
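This final step recasts the standardized data along the axes of the selected principal components, by multiplying the data by the feature vector. A minimal NumPy sketch, assuming a made-up data set of 10 samples and 3 variables and keeping 2 components:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))  # toy data: 10 samples, 3 variables (illustrative)

# Standardize, then compute the covariance matrix and its eigendecomposition
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(X_std, rowvar=False))

# Feature vector: the 2 eigenvectors with the largest eigenvalues, as columns
feature_vector = eigvecs[:, np.argsort(eigvals)[::-1][:2]]

# Recast the data along the principal component axes
X_projected = X_std @ feature_vector
print(X_projected.shape)  # 10 samples, now described by 2 components
```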
As we saw in the previous step, computing the eigenvectors and ordering them by their eigenvalues in descending order allows us to find the principal components in order of significance....
As there are as many principal components as there are variables in the data, principal components are constructed in such a manner that the first principal component accounts for the...
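The share of the total variance that each principal component accounts for is its eigenvalue divided by the sum of all eigenvalues. A short illustration with made-up eigenvalues (already sorted in descending order):

```python
import numpy as np

# Eigenvalues of a hypothetical covariance matrix, largest first (illustrative)
eigenvalues = np.array([2.5, 0.9, 0.4, 0.2])

# Each component's share of the total variance
explained_ratio = eigenvalues / eigenvalues.sum()
print(explained_ratio)           # the first component carries the largest share
print(explained_ratio.cumsum())  # cumulative variance explained by the first k components
```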
Eigenvectors and eigenvalues are linear algebra concepts that we need to compute from the covariance matrix in order to determine the principal components of the data. Before getting to...
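Computing the eigenvectors and eigenvalues of a covariance matrix can be done with `np.linalg.eigh`, which is intended for symmetric matrices and returns eigenvalues in ascending order. The covariance matrix below is a made-up example:

```python
import numpy as np

# Covariance matrix of a hypothetical 2-variable data set (illustrative values)
cov = np.array([[0.616, 0.615],
                [0.615, 0.717]])

# eigh handles symmetric matrices; eigenvalues come back in ascending order
eigenvalues, eigenvectors = np.linalg.eigh(cov)
print(eigenvalues)           # the variance carried along each eigenvector
print(eigenvectors[:, -1])   # eigenvector with the largest eigenvalue: the first principal component
```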
The aim of this step is to understand how the variables of the input data set vary from the mean with respect to each other, or, in other words,...
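In practice this step amounts to building the covariance matrix of the (mean-centred) data. A small NumPy sketch with a made-up data set of 5 observations and 2 variables; `rowvar=False` tells `np.cov` that columns are variables:

```python
import numpy as np

# Toy data set: 5 observations of 2 variables (illustrative values)
X = np.array([[2.5, 2.4],
              [0.5, 0.7],
              [2.2, 2.9],
              [1.9, 2.2],
              [3.1, 3.0]])

# Centre each variable on its mean, then compute the covariance matrix
X_centred = X - X.mean(axis=0)
cov = np.cov(X_centred, rowvar=False)
print(cov)  # symmetric 2x2 matrix; off-diagonal entries show how the variables co-vary
```

A positive off-diagonal entry means the two variables increase together; a negative one means they move in opposite directions.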
import matplotlib.pyplot as plt
import seaborn as sns

plt.figure(figsize=(10, 10))
sns.heatmap(cm, annot=True)  # cm: the confusion matrix computed earlier
plt.xlabel('Predicted')
plt.ylabel('Truth Values')