
Performance issues with large datasets

Open somegooser opened this issue 1 year ago • 6 comments

Hi,

I have performance issues when indexing a large dataset with 50,000 records. It takes 30+ minutes.

The indexed content is not even long. It is approximately 50 characters per row.

This also happens with other datasets of only 500 rows with LONG text.

Any information on how to boost performance?

somegooser avatar Jun 29 '23 08:06 somegooser

50k is not a large dataset; it easily indexes millions of rows... can you tell us a bit more about the structure and how/where you index the data?
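For comparison, the standard indexing flow from the TNTSearch README looks like this (the connection settings below are placeholders):

```php
use TeamTNT\TNTSearch\TNTSearch;

$tnt = new TNTSearch;

// Placeholder credentials; adjust to the actual database.
$tnt->loadConfig([
    'driver'   => 'mysql',
    'host'     => 'localhost',
    'database' => 'mydb',
    'username' => 'user',
    'password' => 'pass',
    'storage'  => '/path/to/index/storage/', // where the SQLite index file is written
]);

$indexer = $tnt->createIndex('companies.index');
$indexer->query('SELECT id, name FROM companies;');
$indexer->run();
```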


stokic avatar Jun 29 '23 08:06 stokic

Thanks for the reply.

I am using a simple dataset with 50000+ company names. I am only using a custom tokenizer.

somegooser avatar Jun 29 '23 08:06 somegooser

OK, can you show us the code for the tokenizer and the table structure?


stokic avatar Jun 29 '23 08:06 stokic

Hi,

This is my tokenizer:

```php
<?php

namespace Search;

use TeamTNT\TNTSearch\Support\AbstractTokenizer;
use TeamTNT\TNTSearch\Support\TokenizerInterface;

class Tokenizer extends AbstractTokenizer implements TokenizerInterface
{
    static protected $pattern = '/[^\p{L}-\p{N}]+/u';

    public function tokenize($text, $stopwords = [])
    {
        if ($text === null) {
            return [];
        }

        $text  = mb_strtolower($text, 'UTF-8');
        $text  = str_replace(['-', '_', '~'], [' ', ' ', '-'], $text);
        $text  = strip_tags($text);
        $split = preg_split($this->getPattern(), $text, -1, PREG_SPLIT_NO_EMPTY);

        return array_diff($split, $stopwords);
    }
}
```

My query is super simple:

`$indexer->query('SELECT id, name FROM companies');`

somegooser avatar Jul 04 '23 08:07 somegooser
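For reference, a custom tokenizer like the one above is typically wired in through the config array (per the custom-tokenizer section of the TNTSearch README); the connection settings here are placeholders:

```php
// Sketch: registering the custom tokenizer via the config array.
$config = [
    'driver'    => 'mysql',
    'host'      => 'localhost',
    'database'  => 'mydb',
    'username'  => 'user',
    'password'  => 'pass',
    'storage'   => '/path/to/index/storage/',
    'tokenizer' => \Search\Tokenizer::class, // the class shown above
];

$tnt = new \TeamTNT\TNTSearch\TNTSearch;
$tnt->loadConfig($config);
```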

I'm using this package with a result set of 1.5 million records. Indexing from scratch takes ~5 minutes.

ultrono avatar Jul 07 '23 12:07 ultrono

That's crazy...

There is something weird going on, anyway.

Indexing 10,000 rows takes about 20 seconds on my server, but indexing 100,000 rows of similar data (no difference in form or length) takes about 30 minutes. Even updating the existing index, rather than doing a complete reindex, is very slow.

Could it be something to do with the size of the index file?

somegooser avatar Jul 07 '23 13:07 somegooser
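
One way to test the index-size theory: rebuild in fixed-size batches through the documented `insert()` API and time each batch. If later batches get progressively slower, the cost grows with the size of the index file rather than with the data itself. A rough sketch, assuming the `id`/`name` schema from above and placeholder connection details:

```php
use TeamTNT\TNTSearch\TNTSearch;

$tnt = new TNTSearch;
$tnt->loadConfig($config);            // same config used to build the index
$tnt->selectIndex('companies.index');
$index = $tnt->getIndex();

// Placeholder connection; point it at the real companies table.
$pdo  = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$rows = $pdo->query('SELECT id, name FROM companies ORDER BY id');

$count = 0;
$start = microtime(true);
foreach ($rows as $row) {
    $index->insert(['id' => $row['id'], 'name' => $row['name']]);

    // Print the elapsed time for every 10,000-row batch.
    if (++$count % 10000 === 0) {
        printf("rows %d-%d: %.1fs\n", $count - 9999, $count, microtime(true) - $start);
        $start = microtime(true);
    }
}
```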