
Too many collisions?

Open doughboyks opened this issue 6 years ago • 28 comments

Maybe I am doing it wrong, but I ran through about 1,250 images (out of more than 350,000) and hashed them (perceptual hash). I then stored the hash in the database, and I am using the MySQL BIT_COUNT function to get the Hamming distance.

I took a random hash and ran the query, and ended up with 2 at a distance of 0 (different hashes) and maybe 50 or more at a distance of 1. The farthest away is at a distance of 27.

These are the two images that had different hashes but yet were still 0 away.

Hash: 12d2552c66ddc94b (image: https://10deb7fbfece20ff53da-95da5b03499e7e5b086c55c243f676a1.ssl.cf1.rackcdn.com/a1afc58c6ca9540d057299ec3016d726_l.jpg)

Hash: 12e627593dbc2307 (image: https://10deb7fbfece20ff53da-95da5b03499e7e5b086c55c243f676a1.ssl.cf1.rackcdn.com/3b8a614226a953a8cd9526fca6fe9ba5_l.jpg)

As you can see these are not anywhere close to the same.

SELECT i.*, BIT_COUNT('12e627593dbc2307' ^ i.hash) AS hamming_distance FROM images i WHERE hash IS NOT NULL ORDER BY hamming_distance ASC

Will this not work on "created" images? Maybe the sample size is too small for an accurate comparison. I think I read that the image is converted to an 8x8 thumbnail; maybe in my case it should be much larger, but I am not sure where to start.

doughboyks avatar Jan 17 '18 22:01 doughboyks

That's weird. Can you make sure you are comparing integers, and not strings?

jenssegers avatar Jan 17 '18 22:01 jenssegers

I am not following. I have both of those hashes stored in the database just like that. I could not find out how to get and store the 01010101011 representation.

So the query actually would look like

SELECT i.*, BIT_COUNT('12e627593dbc2307' ^ '12d2552c66ddc94b') AS hamming_distance FROM images i WHERE hash IS NOT NULL ORDER BY hamming_distance ASC
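For reference, the distance these two hashes should produce can be checked outside MySQL by converting the hex strings to integers first (shown here in Python for illustration; the PHP equivalent is hexdec()). XORing quoted strings, as in the query above, makes MySQL coerce them to numbers first, and that coercion does not interpret them as hex, so the result is meaningless:

```python
a = int("12e627593dbc2307", 16)
b = int("12d2552c66ddc94b", 16)

# Hamming distance = number of set bits in the XOR of the two integers,
# which is what BIT_COUNT(a ^ b) computes when a and b are integers.
print(bin(a ^ b).count("1"))  # 28
```

So the two "identical" hashes above are actually 28 bits apart; the distance of 0 is an artifact of comparing strings.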

doughboyks avatar Jan 17 '18 23:01 doughboyks

Following up on this. What am I doing wrong or does it not work for what I want it to?

doughboyks avatar Jan 19 '18 13:01 doughboyks

Any help on this would be greatly appreciated. Either I am not following how to get the binary value of the hash or what I am doing is correct and it has limitations.

doughboyks avatar Jan 22 '18 22:01 doughboyks

Sorry for my English. First, the MySQL BIT_COUNT() syntax is BIT_COUNT(N), where N is an integer, not a string. I am not sure, but it seems to me that the 01010101011 value is also not meant to be a string.

Why can't you load all the hashes from the DB into a script and use
$yourVar->distance($hash1, $hash2)? My test function (returning the 10 closest hashes) over 50,000 hashes takes about 0.8 sec (load from DB -> foreach -> asort -> array_splice).

with your test pics: $yourVar->distance('12d2552c66ddc94b','12e627593dbc2307') //return 28
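The load-compare-sort-slice approach described above can be sketched like this; this is an illustrative Python version of the same steps, not the library's API:

```python
def top_k_similar(target_hex, hashes_hex, k=10):
    """Return the k stored hashes closest to target_hex by Hamming distance."""
    target = int(target_hex, 16)
    distances = [(h, bin(int(h, 16) ^ target).count("1")) for h in hashes_hex]
    distances.sort(key=lambda pair: pair[1])  # the asort step
    return distances[:k]                      # the array_splice step

print(top_k_similar("12e627593dbc2307",
                    ["12d2552c66ddc94b", "12e627593dbc2307"], k=2))
```

With the two test hashes from this thread, the exact match comes back at distance 0 and the other at distance 28.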

Opekunov avatar Jan 22 '18 23:01 Opekunov

Because by the time I am done hashing all of the images there will be 7 times as many as yours. All things being equal, that would be more than 5 seconds.

So I need the MySQL query to work; I am just seeing no documentation on how to store and ultimately compare the binary version of a hash. For instance, is there a "$hash->get_binary"? I could then store that and pass it into the query.

doughboyks avatar Jan 22 '18 23:01 doughboyks

Let me ask this: is the 0101011100 just an example and not the actual format? Following through the code for distance, when I call it passing the hash I get:

hash: 7e426e403d1e71f | hexdec: 568622213714011935

Should I be storing and using this hexdec value (even though it isn't 0's and 1's)?

doughboyks avatar Jan 22 '18 23:01 doughboyks

Ok, try with hexdec:

$hash1 = hexdec("12e627593dbc2307");
$hash2 = hexdec("12d2552c66ddc94b");

SELECT *, BIT_COUNT(1361819201567466247 ^ hashBig) AS hd FROM photoHash ORDER BY hd ASC;

The result is 28, same as $yourVar->distance($hash1, $hash2);
[screenshot: query result]

But maybe this needs testing with more images. One more, faster solution: https://stackoverflow.com/questions/4777070/hamming-distance-on-binary-strings-in-sql/4783415

Opekunov avatar Jan 22 '18 23:01 Opekunov

ALTER TABLE photoHash ADD hashBit BIT(YOURCOUNT)

That is for storing the 01010101011 representation, but it is not an INT, so it will not work with BIT_COUNT.

Opekunov avatar Jan 23 '18 00:01 Opekunov

From the readme it has:

Image 1 hash: 3c3e0e1a3a1e1e1e (0011110000111110000011100001101000111010000111100001111000011110)
Image 2 hash: 3c3e0e3e3e1e1e1e (0011110000111110000011100011111000111110000111100001111000011110)

How do I generate the information in the parenthesis? I have the hash, just need to know how to get it as all 0's and 1's.

doughboyks avatar Jan 23 '18 00:01 doughboyks

In PHP you could do decbin(hexdec($hash)) or base_convert($hash, 16, 2) to convert the hex value into a binary value.
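One caveat: decbin(hexdec($hash)) drops leading zero bits, so hashes starting with low nibbles produce strings shorter than 64 characters, and bitwise comparisons on those strings will be misaligned. The README-style 64-character string needs explicit zero padding (in PHP, str_pad(..., 64, '0', STR_PAD_LEFT)); the same idea in Python for illustration:

```python
h = "3c3e0e1a3a1e1e1e"  # Image 1 hash from the README

bits = format(int(h, 16), "064b")  # binary string zero-padded to 64 bits
print(bits)
# 0011110000111110000011100001101000111010000111100001111000011110
```

This reproduces exactly the parenthesized string shown in the README for that hash.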

jdreesen avatar Jan 23 '18 08:01 jdreesen

Thanks jdreesen. So now I am attempting to store that data, but in what format? The only type that appears to keep it like this is VARCHAR. But when I run a query like this:

SELECT i.*, BIT_COUNT(101001010110000000110011011011010010101111101100011010100001101 ^ i.hash) AS hamming_distance FROM images i WHERE hash IS NOT NULL ORDER BY hamming_distance ASC

After processing only 10 images and grabbing one of the hashes, all 10 are returned with a distance of 1. Obviously I am not storing it correctly, or I am not comparing correctly, or something. I apologize that I am having this much difficulty with this, but other than getting a hash, none of this is really documented.

doughboyks avatar Jan 23 '18 17:01 doughboyks

I think I have this figured out. I made the MySQL field BIGINT UNSIGNED, I installed bcmath (so I don't get E+19 values), and I am storing and comparing against the hexdec value of the hash. I did a limited test and it got an exact match against one image (the others were in the 20s). I did another test with two similar images (one color and one black and white): it matched perfectly on the color one and was around 10 on the B&W.

Looks like it will take 80+ hours before it has hashed everything. I will do some more testing to see if it is all working, and if so we can consider this "issue" closed (we just might want to take the learnings and apply them to the readme).

doughboyks avatar Jan 23 '18 22:01 doughboyks

Hi @doughboyks, could you elaborate on your use of bcmath to solve this? My hashes' int representations get truncated to 9223372036854775807 (aka PHP_INT_MAX) when I try to store them.

I'm considering splitting the hash in two parts for now, like

8ececed9d8d8d0f4 => 8ececed9, d8d8d0f4
to decimal => 2395918041, 3638087924

And then querying potential duplicates this way, assuming hash_1 and hash_2 are BIGINT columns, and the supplied parameters are [int image.hash_1, int image.hash_2, int threshold]. (Note: MySQL dialect.)

SELECT ID, BIT_COUNT(hash_1 ^ ?) + BIT_COUNT(hash_2 ^ ?) AS distance FROM pictures HAVING distance < ?

But this really feels dirty; I'd like to avoid this approach. Still, it works for now as a POC for my project, and I'm getting the same distances as with @jenssegers' compare() function:

  • phash is the whole hex string,
  • hash_1 an int representation of the first 8 digits of this hex string,
  • hash_2 an int representation of the last 8 digits of the hex string.
    /* Picture.php, Laravel 5.5 Eloquent Model */

    /**
     * Returns potential duplicates of this picture.
     *
     * @param int $level
     * @return \Illuminate\Support\Collection
     */
    public function findPotentialDuplicates(int $level = 5) {
        $hash_1 = $this->hash_1;
        $hash_2 = $this->hash_2;
        return DB::table('pictures')
            ->selectRaw('id, BIT_COUNT(hash_1 ^ ?) + BIT_COUNT(hash_2 ^ ?) AS distance', [$hash_1, $hash_2])
            ->havingRaw('distance < ?', [$level])
            ->get();
    }

    /**
     * @param Picture $a
     * @param Picture $b
     * @return int
     */
    public static function compare(Picture $a, Picture $b) {
        $hasher = self::getHasher();
        return $hasher->distance($a->phash, $b->phash);
    }

But hey, it works for now: I'm getting similar and "opposite" images with similarity cursors; couldn't get fancier! Thanks for documenting your issue, it allowed me to resolve this.
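The split-column trick is lossless for the distance itself: XOR acts independently on each bit, so the popcount of a 64-bit XOR equals the sum of the popcounts of the two 32-bit halves. A quick Python check, using the example hash and decimal values from the post above:

```python
def popcount(x):
    return bin(x).count("1")

h_a, h_b = "8ececed9d8d8d0f4", "12d2552c66ddc94b"

a, b = int(h_a, 16), int(h_b, 16)
a1, a2 = int(h_a[:8], 16), int(h_a[8:], 16)  # hash_1, hash_2 halves of a
b1, b2 = int(h_b[:8], 16), int(h_b[8:], 16)

assert a1 == 2395918041 and a2 == 3638087924  # decimals quoted in the post
assert popcount(a ^ b) == popcount(a1 ^ b1) + popcount(a2 ^ b2)
```

So BIT_COUNT(hash_1 ^ ?) + BIT_COUNT(hash_2 ^ ?) gives exactly the same distance as a single 64-bit BIT_COUNT would.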

Lucassifoni avatar Feb 08 '18 10:02 Lucassifoni

@Lucassifoni, any progress on this issue?

scratcher28 avatar Mar 27 '19 10:03 scratcher28

@scratcher28, splitting the hash into two columns worked fine with our approx. 120,000 images, with relatively fast queries on a modest server (meaning: it wasn't annoying to use), but we moved on and this system isn't in use anymore.

Re-reading this after a year, my approach was naïve and I should have designed something better.

Lucassifoni avatar Mar 27 '19 10:03 Lucassifoni

@Lucassifoni, do you know/have any better alternatives to find duplicates?

scratcher28 avatar Mar 27 '19 11:03 scratcher28

With the conversation on this having hit on performance, I wanted to add that I find performance for hash comparisons quite bearable if you have GMP installed. Personally, I'm going to avoid doing any fancy database shenanigans as it'd lock me into a particular storage mechanism.

For 50,000 hash comparisons:

  • With GMP enabled: ~0.23 seconds
  • Without GMP enabled: ~1.54 seconds

While the exact times will vary depending on server performance, hashes, the position of the moon, and any number of other things, I strongly suspect the 80% reduction in execution time will be pretty consistent.

DynamoGeek avatar May 11 '19 20:05 DynamoGeek

@scratcher28 the finding-duplicates part was good enough; we supplemented it with some heuristics as a duplicate-flagging pipeline:

  • Does it have the same sha256 hash as another one? Exit; use a reference to the previous one without telling the user.
  • Does this image have the same filename as another one? Flag (weight by filename length).
  • Does this image have a hash (as given by this library) close to another one? Flag.
  • Is the user who uploads it the same as the user who uploaded some of those flagged images? Flag.

Depending on the number of flags, you could show a popup to the user to ask for confirmation and word it more or less strongly. If the user says "yes, they're the same", we keep the one that had a better resolution.
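The pipeline above could be sketched roughly as follows. This is a hypothetical Python illustration; the field names, weights, and the distance threshold are assumptions for the example, not values from the original system:

```python
# Hypothetical sketch of the flagging pipeline described above.
# `new` and `existing` are dicts with assumed keys; `phash_distance`
# is the Hamming distance between their perceptual hashes.
def duplicate_flags(new, existing, phash_distance):
    if new["sha256"] == existing["sha256"]:
        return None  # exact duplicate: silently reuse the existing file
    flags = 0.0
    if new["filename"] == existing["filename"]:
        flags += len(new["filename"]) / 16.0  # weight by filename length
    if phash_distance <= 8:                   # "close" per this thread
        flags += 1.0
    if new["uploader"] == existing["uploader"]:
        flags += 0.5
    return flags
```

The caller can then map the accumulated score onto more or less strongly worded confirmation prompts, as described above.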

Lucassifoni avatar Jun 19 '19 08:06 Lucassifoni

For optimizing the MySQL query, check out this Stack Overflow answer. It suggests narrowing down the search by adding all the bits together and caching that value in a separate column; that allows you to exclude images that are obviously not alike.
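This pruning is sound because popcount obeys a triangle-inequality-style bound: the Hamming distance between a and b is at least |popcount(a) - popcount(b)|, so rows whose cached bit count differs from the query's by more than the threshold can never match. A small Python illustration:

```python
import random

def popcount(x):
    return bin(x).count("1")

def hamming(a, b):
    return popcount(a ^ b)

random.seed(42)
for _ in range(1000):
    a = random.getrandbits(64)
    b = random.getrandbits(64)
    # The cheap cached-column check never prunes a true match:
    assert abs(popcount(a) - popcount(b)) <= hamming(a, b)
```

In SQL terms, a prefilter like `WHERE ABS(bits_cached - ?) <= threshold` (with `bits_cached` holding the stored hash's bit count) discards most rows before the expensive XOR runs.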

garygreen avatar Jan 13 '20 18:01 garygreen

I agree it would be helpful for some of this to exist in the install instructions.

To solve PHP displaying numbers in E notation, I ended up making this work by installing bcmath. On Ubuntu:

$ sudo apt update
$ sudo apt install php7.4-bcmath

To solve for numbers being rounded to 9223372036854775807 in the database, make sure your column is bigint unsigned.
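The truncation point is exactly the signed/unsigned boundary: a 64-bit hash can be as large as 2^64 - 1, while a signed BIGINT tops out at 2^63 - 1 (the 9223372036854775807 value above), so roughly half of all possible hashes overflow a signed column. Checking the bounds in Python:

```python
max_hash = int("f" * 16, 16)   # largest possible 16-hex-digit (64-bit) hash
signed_bigint_max = 2**63 - 1  # MySQL signed BIGINT max / 64-bit PHP_INT_MAX

print(max_hash)           # 18446744073709551615
print(signed_bigint_max)  # 9223372036854775807
assert max_hash > signed_bigint_max
```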

bcmath causes issues with number display (carrying what should be integers out to a crazy number of decimals). You might also find these functions helpful, in particular bcround():

function bcnegative($n)
{
    return strpos($n, '-') === 0; // Is the number less than 0?
}

function bcceil($n)
{
    return bcnegative($n) ? (($v = bcfloor(substr($n, 1))) ? "-$v" : $v)
                          : bcadd(strtok($n, '.'), strtok('.') != 0);
}

function bcfloor($n)
{
    return bcnegative($n) ? '-' . bcceil(substr($n, 1)) : strtok($n, '.');
}

function bcround($n, $p = 0)
{
    $e = bcpow(10, $p + 1);
    return bcdiv(bcadd(bcmul($n, $e, 0), bcnegative($n) ? -5 : 5), $e, $p);
}

aacook avatar Jul 13 '20 17:07 aacook

I have tested the perceptual hash with more than 5,000 images; it occasionally generates weird results, but only at very high distances. In my experience, anything above 8 can collide, but I still find actually similar images at around 15. Below 8 it's generally very accurate, though.

The first collision is at 2, but for pictures of papers (invoices, tickets), so it's pretty normal considering these kinds of documents have very few high-frequency variations and are, after all, very similar. The first real collision is at 5, for pictures actually looking similar: [screenshot]

A very WTF collision at 5: [screenshot]

There are some weird collisions around 6-7, but most of the time it's really accurate.

What I don't understand, though, is how it can find really similar pictures at very high distances: [screenshot] The most WTF true positives are at distance 15: [screenshot]

I have absolutely no explanation for this behaviour, but I will try to merge these PR and test it again:

  • Not always returning a 16 char hash jenssegers/imagehash#28
  • Add PerceptualHash2 jenssegers/imagehash#58

VincentChalnot avatar Nov 22 '20 10:11 VincentChalnot

@VincentChalnot thank you so much for detailed information. We are also experiencing the same issue, although we use DifferenceHash. 8 seems to be the sweet spot for us too. How strange, maybe that can be my lottery numbers this weekend 🤣

garygreen avatar Nov 22 '20 16:11 garygreen

There's something wrong with the included perceptual hash algorithm. I could not figure out what exactly. The other included algorithms will usually provide better results.

jenssegers avatar Nov 22 '20 17:11 jenssegers

@jenssegers I get similar results even with DifferenceHash algorithm though. So maybe there's something else happening?

garygreen avatar Nov 22 '20 18:11 garygreen

Check the discussion at #52 and the PR #58 that are resolving a lot of issues. https://github.com/jenssegers/imagehash/issues/52#issuecomment-732118361

Things I will try to look into to detect more duplicates (not really related to pHash):

  • Super easy: try to use EXIF data like the creation date; photos with identical creation dates are likely to be exact duplicates, and pictures with close creation dates can probably be analysed to keep only one.
  • Relatively easy: try the PR concerning auto-rotation #17 (needs to move to a separate lib IMO).
  • Super hard: try to center the picture on the main "subject", allowing a percentage of the image around it to be discarded; I know there are libraries to do that. Compute only the hash of the cropped image.

VincentChalnot avatar Nov 23 '20 12:11 VincentChalnot

For those interested, here's my final query to select duplicates:

  SELECT m.ulid                                            AS media_ulid,
         sm.ulid                                           AS similar_media_ulid,
         BIT_COUNT(m.perceptual_hash ^ sm.perceptual_hash) AS distance,
         BIT_COUNT(m.perceptual_hash ^ sm.perceptual_hash) + (
             322970 - 322980 / (1 + POWER(ABS(TIMESTAMPDIFF(MINUTE, m.creation_date, sm.creation_date)) / 3695643000, 0.55))
         ) AS adjusted_distance
  FROM media AS m
           INNER JOIN media AS sm ON m.ulid != sm.ulid
           LEFT JOIN similarity s1 ON m.ulid = s1.media_ulid
  WHERE (m.last_similarity_query_date IS NULL OR sm.update_date > m.last_similarity_query_date)
    AND s1.media_ulid IS NULL
  HAVING distance < 17;

UPDATE media SET last_similarity_query_date = NOW();

It stores all duplicates into a table in a single query, adding an adjusted distance that applies a modifier to the pHash distance, with a curve looking like this: [screenshot] Images taken at up to a 20-minute interval get a stronger similarity, and images taken more than 20 minutes apart get less similarity (the rest of the curve is almost linear).
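The time-based modifier can be sanity-checked in isolation. Here is a Python transcription of just that subexpression (same constants as the SQL above): it yields a bonus of exactly -10 for identical timestamps, which decays to near zero around the 20-minute mark and then turns into a penalty:

```python
def adjustment(minutes):
    """Transcription of the adjusted-distance modifier from the SQL above."""
    return 322970 - 322980 / (1 + (abs(minutes) / 3695643000) ** 0.55)

print(adjustment(0))     # -10.0: maximum similarity bonus
print(adjustment(20))    # small negative: bonus almost gone
print(adjustment(1440))  # positive: photos a day apart are penalized
```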

Also, it ignores existing duplicates, and it stores a date on the media that were processed so they are skipped during the next query (not sure this is useful, as the query takes less than 10s for 5,000+ pictures).

VincentChalnot avatar Nov 23 '20 13:11 VincentChalnot

I tried to save the hash (toInt) into MySQL, and I found the values were cast down to a maximum of 8 bytes (64 bits), so the distance calculated by MySQL BIT_COUNT was not actually correct. We split the hash (toHex) into 2 parts, saved them into two columns, and added the two columns' distances together to get the real distance. Sounds weird, but it works! If MySQL could XOR more than 64 bits, we could save the image hash exactly as we calculate it in PHP.

dddkkk01 avatar Sep 17 '22 07:09 dddkkk01