python-for-data-and-media-communication-gitbook
clustering - Kmeans, Agglomerative, and DBSCAN
Troubleshooting
Describe basics
- Which chapter of book?: Ch 17
Describe your question
For a data set with 100 records, for every K value in 1 <= K <= 100, the K-means clustering algorithm returns only one non-empty cluster. The incremental version of K-means returns exactly the same result. How is this possible?
And would single-link clustering and DBSCAN handle such data?
Describe the efforts you have spent on this issue
The case where each data point has the same distance to every other point clearly doesn’t work here. Besides all data points having zero distance between them (all 100 sitting at exactly one position), are there any other possibilities?
Have you Googled / searched Stack Overflow for anything?
Nothing found.
Sounds like a case where:
- The data points are too close to each other, relatively speaking, compared with the distance between the centroids.
- One centroid captures all the data points in the first assignment step and updates itself to the centre of those points, while all the other centroids have no points assigned and so never update themselves.
Without looking at the exact data, you can try the following methods:
- Normalise all the records, in case the distances are too small in the original space.
- Manually set the initial values of the K-means centroids. If you intend to have K clusters, randomly pick K data points to be the initial centroids. Both steps are sketched below.
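Here is a minimal sketch of both steps, assuming the 100 records are already loaded into a NumPy array `X`; the variable name and the choice of `k = 3` are placeholders, not part of the original question:

```python
# A minimal sketch, assuming the 100 records are already loaded into a
# NumPy array X of shape (100, n_features); k = 3 is just a placeholder.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

k = 3

# Step 1: normalise the records so that small absolute distances are rescaled.
X_scaled = StandardScaler().fit_transform(X)

# Step 2: manually set the initial centroids by randomly picking k data points.
rng = np.random.default_rng(0)
init_centroids = X_scaled[rng.choice(len(X_scaled), size=k, replace=False)]

km = KMeans(n_clusters=k, init=init_centroids, n_init=1)
labels = km.fit_predict(X_scaled)

# How many points ended up in each cluster?
print(np.bincount(labels, minlength=k))
```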
Which algorithm is better is hard to tell without looking at the data. We can focus on making K-means work first.
In sklearn.cluster.KMeans, if we set init='random', it seems the centroids will be picked at random. And if the centroids are chosen randomly and many runs still give exactly the same result, I guess that rules out the possibility that the distance between data points is smaller than the distance between centroids, because the initialisation is already random.
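One way to check this, again assuming the records sit in a NumPy array `X`, is to repeat the fit with several random seeds and count the non-empty clusters each time:

```python
# Again assuming the records are in a NumPy array X.
import numpy as np
from sklearn.cluster import KMeans

for seed in range(5):
    km = KMeans(n_clusters=3, init='random', n_init=10, random_state=seed)
    labels = km.fit_predict(X)
    print(f"seed={seed}: {len(np.unique(labels))} non-empty cluster(s)")
```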
Looks like only one situation is left: there is no distance at all between the data points.
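That hypothesis is easy to test directly by counting the distinct rows in the data (again assuming a NumPy array `X`):

```python
import numpy as np

# If this prints 1, all 100 records sit at exactly the same position.
print(len(np.unique(X, axis=0)))
```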
If the dataset had infinitely many data points, it seems that under some distance constraints a data shape like a paraboloid might result in just one non-empty cluster. But for a dataset with a limited number of data points, that outcome is unlikely.