Thomas Germer

Results: 140 comments by Thomas Germer

@krish2366 @ShauryaDusht @BadakalaYashwanth You should read the [Contributing guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md#issues): > If you are interested in resolving an [open issue](https://github.com/TheAlgorithms/Python/issues), simply make a pull request with your proposed fix. We do...

I had the same error and @Azreal42 gave the right hint. I could resolve it with `unset GITHUB_TOKEN`. EDIT: I created a PR to fix this issue in...

Your code example is for openai/CLIP, but `max_position_embeddings` is for HuggingFace CLIP, which are different implementations. Either way, there simply are no positional embeddings for tokens in longer texts. If...
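To illustrate the point about missing positional embeddings, here is a minimal sketch of truncating token ids to CLIP's fixed context length. The function name is hypothetical; the context length of 77 and the end-of-text id 49407 are assumed from the released openai/CLIP models.

```python
def truncate_for_clip(token_ids, context_length=77, eot_id=49407):
    # CLIP learns one positional embedding per position up to a fixed
    # context length (77 in the released models). There is simply no
    # embedding for position 78 and beyond, so longer token sequences
    # must be truncated (keeping the end-of-text token at the end).
    if len(token_ids) > context_length:
        token_ids = token_ids[:context_length - 1] + [eot_id]
    return token_ids
```

This mirrors what the official tokenizer does when called with `truncate=True`; without truncation, longer texts raise an error instead.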

@Karliz24 appears to be a bot. See also the following issues, which are just copy & paste of some other issue: * https://github.com/googleapis/python-aiplatform/issues/4844 * https://github.com/Karliz24/Marley-/issues/23 * https://github.com/Karliz24/Marley-/issues/22 * https://github.com/Karliz24/Marley-/issues/21 *...

If you repeatedly add arrays to arrays, their magnitude increases exponentially and will eventually overflow. Therefore, it is a good idea to normalize values. A common normalization strategy is to...
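As a sketch of the idea, here is one common normalization strategy (dividing by the L2 norm after each addition); the function name is hypothetical, and the comment is truncated before the author's specific strategy, so this is only one plausible example.

```python
import numpy as np

def accumulate_normalized(arrays):
    # Running sum of arrays. Without normalization the magnitude of the
    # total keeps growing and eventually overflows; renormalizing after
    # each addition keeps it bounded.
    total = np.zeros_like(arrays[0], dtype=np.float64)
    for a in arrays:
        total += a
        norm = np.linalg.norm(total)
        if norm > 0:
            total /= norm  # divide by the L2 norm so ||total|| == 1
    return total
```

After each step the accumulator has unit norm, so only the direction of the sum is preserved, which is usually what matters in these iterative schemes.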

In the past, I implemented a custom `AdaptiveAvgPool2d` module to get around this issue for a different model. Try replacing the three occurrences of `nn.AdaptiveAvgPool2d` in [models.py](https://github.com/CSAILVision/semantic-segmentation-pytorch/blob/8f27c9b97d2ca7c6e05333d5766d144bf7d8c31b/mit_semseg/models/models.py#L398) with `MyAdaptiveAvgPool2d`. Maybe...
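A minimal sketch of what such a `MyAdaptiveAvgPool2d` could look like, built from a fixed-size `avg_pool2d` with kernel and stride computed from the input shape. This is an assumed reconstruction, not the author's exact module, and it matches `nn.AdaptiveAvgPool2d` exactly only when the input size is divisible by the output size.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MyAdaptiveAvgPool2d(nn.Module):
    """Drop-in replacement for nn.AdaptiveAvgPool2d that only uses a
    fixed kernel/stride avg_pool2d, which some export backends support
    better than the adaptive variant."""

    def __init__(self, output_size):
        super().__init__()
        self.output_size = output_size

    def forward(self, x):
        h, w = x.shape[-2:]
        out_h, out_w = self.output_size
        # Choose stride and kernel so the pooling windows tile the input
        # and produce exactly (out_h, out_w) outputs.
        stride = (h // out_h, w // out_w)
        kernel = (h - (out_h - 1) * stride[0], w - (out_w - 1) * stride[1])
        return F.avg_pool2d(x, kernel_size=kernel, stride=stride)
```

For non-divisible input sizes the result only approximates the adaptive version, which uses variable-width windows; check whether that is acceptable for your model.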

For reference, the line in question: https://github.com/cloudflare/workers-oauth-provider/blob/a6e3e06c2642e0fd4c185374753201cffc21ce8a/src/oauth-provider.ts#L1953 Arguably, this is the consequence of the interface of [`Crypto`](https://developer.mozilla.org/en-US/docs/Web/API/Crypto) being too minimal, which makes it too easy for developers (or AIs, in...

The abstract of [the CLIP paper](https://arxiv.org/abs/2103.00020) says: > a dataset of 400 million (image, text) pairs collected from the internet [The COCO paper](https://arxiv.org/pdf/1405.0312) says: > we collected images from Flickr...

CLIP has been trained on images of size 224x224 and therefore works best at this size. For the best results, your images have to be converted to this size...
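A small sketch of such a conversion using Pillow: resize the shorter side to 224 and center-crop, which mirrors the resize-then-crop preprocessing shipped with CLIP. The function name is hypothetical.

```python
from PIL import Image

def to_clip_size(image, size=224):
    # Scale so the shorter side becomes `size`, preserving aspect ratio...
    w, h = image.size
    scale = size / min(w, h)
    image = image.resize((round(w * scale), round(h * scale)), Image.BICUBIC)
    # ...then center-crop to a square of size x size.
    w, h = image.size
    left, top = (w - size) // 2, (h - size) // 2
    return image.crop((left, top, left + size, top + size))
```

In practice you would simply use the `preprocess` transform returned by `clip.load(...)`, which also normalizes the pixel values; the sketch above only covers the geometric part.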

PyTesseract is very old and much worse at OCR than GPT (try with handwritten notes for example), so this PR would be a massive downgrade. I am not sure if...