Damian Nardelli

Results: 3 issues by Damian Nardelli

Using the GPT-2 345M model to run inference in batches of 10 to 100 documents of roughly 60 tokens each takes ~15 ms on a Tesla T4 GPU machine. Why? That looks...
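A common pitfall behind surprising per-batch numbers is how the timing itself is measured. The sketch below is a minimal, hedged timing harness (the `infer_fn`, document shapes, and batch sizes are placeholders, not the questioner's actual code); with a real PyTorch model on a GPU you would also need `torch.cuda.synchronize()` before each clock read, since CUDA kernels are launched asynchronously and the wall clock can otherwise under-report the true cost.

```python
import time

def time_batched_inference(infer_fn, documents, batch_size):
    """Run infer_fn over `documents` in batches and return mean ms per batch."""
    timings = []
    for start in range(0, len(documents), batch_size):
        batch = documents[start:start + batch_size]
        t0 = time.perf_counter()
        infer_fn(batch)  # stand-in for the GPT-2 forward pass
        # NOTE: with a CUDA model, call torch.cuda.synchronize() here
        # before reading the clock, or queued kernels skew the numbers.
        timings.append((time.perf_counter() - t0) * 1000.0)
    return sum(timings) / len(timings)

# Hypothetical stand-in model: replace with the real batched forward pass.
fake_infer = lambda batch: [len(doc) for doc in batch]
docs = ["token " * 60] * 100          # 100 documents of ~60 tokens each
mean_ms = time_batched_inference(fake_infer, docs, batch_size=10)
```

Comparing `mean_ms` across batch sizes (10 vs 100) shows whether the ~15 ms is dominated by fixed per-launch overhead or scales with batch size.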

So I ran into an issue with liblinear where a feature present in class A is not biasing toward that class. Isn't that weird? If I increment the weight...
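One plausible explanation (not a diagnosis of the questioner's setup) is L2 regularization: liblinear's default objective penalizes large weights, so even a feature that appears only in class A can end up with a small coefficient. The toy sketch below does not reproduce liblinear's solver; it is plain gradient-descent logistic regression in NumPy, used only to illustrate how the regularization strength shrinks the weight of a class-A-only indicator feature.

```python
import numpy as np

def fit_logreg(X, y, lam, steps=2000, lr=0.1):
    """L2-regularized logistic regression via gradient descent
    (an illustrative stand-in for liblinear's solver, not the same code)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted P(y=1)
        grad = X.T @ (p - y) / len(y) + lam * w   # data gradient + L2 term
        w -= lr * grad
    return w

# Feature 1 fires only in class A (y=1); feature 0 is shared noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (rng.random(200) < 0.5).astype(float)
X[:, 1] = np.where(y == 1, 1.0, 0.0)   # class-A-only indicator feature

w_strong = fit_logreg(X, y, lam=0.001)  # weak regularization
w_weak = fit_logreg(X, y, lam=10.0)     # strong regularization
# Under heavy regularization the class-A-only feature's weight stays
# close to zero even though the feature perfectly separates class A.
```

In scikit-learn's liblinear wrapper the analogous knob is `C` (the inverse of the regularization strength), so a feature that "should" bias toward its class may simply need a larger `C`.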

Ticket https://hibernate.atlassian.net/browse/HHH-13530: added unit tests to reproduce the HHH-13530 issue.