Forrest Bao
Hi @zjq0455, thanks for your interest in HHEM. Yes, of course. Just note that HHEM's context window is 512 tokens. We are working on a more powerful open-source version...
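In the meantime, one common workaround for inputs longer than 512 tokens is to split them into windows of at most 512 tokens and score each window separately. A minimal sketch, assuming you already have a token list (`chunk_tokens` is a hypothetical helper for illustration, not part of HHEM):

```python
def chunk_tokens(tokens, max_len=512):
    """Split a token sequence into consecutive windows of at most
    max_len tokens, so each window fits HHEM's 512-token context."""
    return [tokens[i:i + max_len] for i in range(0, len(tokens), max_len)]

# toy example: 1000 "tokens" become one 512-token window and one 488-token window
windows = chunk_tokens(list(range(1000)))
print([len(w) for w in windows])  # [512, 488]
```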
Update: [HHEM-2.1-Open](https://huggingface.co/vectara/hallucination_evaluation_model), released in August 2024, extends the context window to whatever your hardware can support, primarily VRAM on your GPU. The ticket is now considered closed.
Hi @ArthurZucker, there are two differences here: 1. We hacked `T5ForTokenClassification` for sequence classification. We padded a `` token at the beginning of the sequence in lieu of the...
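To illustrate the idea behind the hack (a minimal sketch in plain Python, not the actual HHEM code): a token-classification head emits one logit vector per token, so by prepending a placeholder token and reading only the logits at position 0, you obtain a single sequence-level prediction.

```python
def sequence_logits(per_token_logits):
    """per_token_logits: one logit vector per input token, where index 0
    corresponds to the placeholder token prepended to the sequence.
    Reading that position turns token classification into sequence
    classification."""
    return per_token_logits[0]

# toy example: placeholder token + 2 real tokens, 2 classes
logits = [[0.9, 0.1], [0.3, 0.7], [0.5, 0.5]]
print(sequence_logits(logits))  # [0.9, 0.1]
```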
Thanks @ahmed-d8k. Some quick feedback: 1. In `requirements.txt`, do not pin the versions unless necessary. 2. For LLM providers and LLMs, please follow this convention in folder and LLM names:...
Encountered the same issue here. Can someone fix it, please?
Thanks @ahuang11 for this suggestion. We heard the same feature request from many others at PyCon US 2024. What's your suggestion on how a user can trigger the code-to-app conversion?...
Thanks. I am sitting on the fence between the two options. I feel the magic command is more the Jupyter way of doing things. The downside is that if you want...
Thanks. We will support at least one way to do it first. Then we can support more.
I am not sure about this idea below because it conflicts with expectations per Jupyter convention. A user's expectation for the last line, `hello`, is to print the variable `hello`, which...
The expected behavior is as follows:

```
In [1]: def hello(a: str) -> str:
   ...:     return "Hello, " + a + "!"
   ...:

In [2]: hello
Out[2]: <function __main__.hello(a: str) -> str>

In [3]: ...
```