Minimal-Bag-of-Visual-Words-Image-Classifier
memory error
I am using this classifier for 4-class classification over a set of 23,000 images. After computing the SIFT features I get the following error:
```
Traceback (most recent call last):
  File "learn.py", line 129, in
```
Hi Jivnes, this 'MemoryError' indicates that the script ran out of memory. Try with far fewer images, say 1,000, and see if it works.
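For a quick check you could cap the number of image files before the SIFT step, for example with a small helper like this (a rough sketch; `limit_dataset` and `all_files` are hypothetical names, learn.py may organize its file list differently):

```python
def limit_dataset(all_files, n_images=1000):
    # Hypothetical helper: keep only the first n_images paths so the rest
    # of the pipeline can be tested without exhausting memory.
    return all_files[:n_images]
```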
Hey, I just tried it on only 1,765 images and got the same error. What's happening?
And if you use even fewer images? At what point does it start working?
You need a lot of images to get good image classification results. Neural nets use hundreds of images, and even for something as simple as Haar cascades you need many, many images:
> It is unclear exactly how many of each kind of image are needed. For Urban Challenge 2008 we used 1000 positive and 1000 negative images, whereas the previous project, Grippered Bandit, used 5000. The result for the Grippered Bandit project was that their classifier was much more accurate than ours.
This is an issue that needs to be fixed in the code.
Dear Rich, thanks for reminding me to update the readme.md. It seems that

> As the name suggests, this is only a minimal example to illustrate the general workings of such a system.

is not clear enough.
Oh.
But still, I would like to see if there's a way to fix this. What does this function:
```python
from numpy import zeros, zeros_like, vstack, resize  # needed for the calls below

def dict2numpy(dict):
    # Stack the per-image SIFT descriptor arrays into one (N, 128) array.
    nkeys = len(dict)
    # Pre-allocate PRE_ALLOCATION_BUFFER rows per image
    # (PRE_ALLOCATION_BUFFER is a constant defined elsewhere in learn.py).
    array = zeros((nkeys * PRE_ALLOCATION_BUFFER, 128))
    pivot = 0
    for key in dict.keys():
        value = dict[key]
        nelements = value.shape[0]
        # If the next chunk does not fit, double the array by padding with zeros.
        while pivot + nelements > array.shape[0]:
            padding = zeros_like(array)
            array = vstack((array, padding))
        array[pivot:pivot + nelements] = value
        pivot += nelements
    # Trim the unused pre-allocated rows.
    array = resize(array, (pivot, 128))
    return array
```
do? Why does it need these memory operations (the pre-allocation, padding, and resizing) in the first place?
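From the code, `dict2numpy` takes a dict mapping each image to its (n, 128) array of SIFT descriptors and stacks them all into one big array, presumably so the descriptors can be clustered into a codebook. The pre-allocation and the vstack doubling avoid growing the array row by row, but at their peak they hold several zero-padded copies of all descriptors in memory at once, which is likely where the MemoryError comes from. Here is a minimal sketch of a lower-memory alternative, assuming each value really is a 2-D NumPy array with 128 columns (the names `dict2numpy_concat` and `features_by_image` are made up for this example, they are not from the repository):

```python
import numpy as np

def dict2numpy_concat(features_by_image):
    """Stack per-image SIFT descriptor arrays into one (N, 128) array.

    Sketch of an alternative to dict2numpy: numpy.concatenate allocates
    the output once instead of repeatedly doubling a zero-padded buffer,
    so there is no over-allocation and no repeated full-array copying.
    """
    arrays = [d for d in features_by_image.values()
              if d is not None and d.size > 0]
    if not arrays:
        return np.zeros((0, 128))
    return np.concatenate(arrays, axis=0)
```

Even with a change like this, 23,000 images can easily produce millions of 128-dimensional descriptors, so the full descriptor matrix itself may not fit in RAM; sampling a subset of descriptors (or images) before building the codebook may still be necessary.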