Soohwan Kim
# Add an attention map image drawing function to the wandb log
Example:
## Motivation
To observe the model training process.
## Your contribution
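A minimal sketch of what such a drawing helper could look like, assuming a per-layer attention matrix is available as a 2-D NumPy array; the `attention`, `step`, and `layer_idx` names are illustrative, not part of the existing codebase:

```python
import matplotlib.pyplot as plt
import numpy as np
import wandb


def log_attention_map(attention: np.ndarray, step: int, layer_idx: int = 0) -> None:
    """Draw one attention map as a heatmap and send it to wandb.

    `attention` is assumed to have shape (target_len, source_len).
    """
    fig, ax = plt.subplots(figsize=(6, 6))
    im = ax.imshow(attention, aspect="auto", origin="lower")
    ax.set_xlabel("source position")
    ax.set_ylabel("target position")
    fig.colorbar(im, ax=ax)

    # wandb.Image accepts a matplotlib figure directly.
    wandb.log({f"attention/layer_{layer_idx}": wandb.Image(fig)}, step=step)
    plt.close(fig)
```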
# Add inference script
Add an inference script for a single audio file (see the sketch below).
## Motivation
- #66
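A minimal sketch of what such a single-audio inference script could look like, assuming a saved TorchScript model that maps log-Mel features to per-frame log-probabilities; the argument names, feature settings, and decoding step are placeholders rather than the project's actual pipeline:

```python
import argparse

import torch
import torchaudio


def main() -> None:
    parser = argparse.ArgumentParser(description="Run inference on a single audio file.")
    parser.add_argument("--audio_path", required=True)
    parser.add_argument("--model_path", required=True)
    args = parser.parse_args()

    # Load the waveform and convert it to log-Mel features.
    waveform, sample_rate = torchaudio.load(args.audio_path)
    mel = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate, n_mels=80)(waveform)
    features = mel.clamp(min=1e-10).log().transpose(1, 2)  # (channels, time, n_mels)

    # `model` is assumed to be a scripted PyTorch module returning
    # per-frame log-probabilities over the vocabulary; this is a placeholder.
    model = torch.jit.load(args.model_path).eval()
    with torch.no_grad():
        log_probs = model(features)

    # Naive greedy decoding over the frame-wise argmax (CTC collapse omitted).
    prediction = log_probs.argmax(dim=-1).squeeze(0).tolist()
    print(prediction)


if __name__ == "__main__":
    main()
```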
# Add `ray` to pre-processing
- ray: https://github.com/ray-project/ray
## Motivation
- Data preprocessing currently takes a lot of time.
- No multiprocessing is currently applied (see the sketch after this list).
- Ray is one...
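A minimal sketch of how Ray could parallelize the per-file preprocessing step, assuming a `preprocess_one` function that takes an audio path and returns its features; the function body and path list are placeholders for the existing preprocessing code:

```python
import ray


@ray.remote
def preprocess_one(audio_path: str):
    # Placeholder for the existing per-file preprocessing
    # (feature extraction, normalization, etc.).
    ...


def preprocess_all(audio_paths):
    ray.init(ignore_reinit_error=True)
    # Submit one task per file; Ray schedules them across all available cores.
    futures = [preprocess_one.remote(path) for path in audio_paths]
    results = ray.get(futures)
    ray.shutdown()
    return results
```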
# Add Luna (Linear Unified Nested Attention) Transformer
## Model description
- [paper](https://arxiv.org/abs/2106.01540)
- [implementation](https://github.com/sooftware/luna-transformer)
## Open source status
* [x] the model implementation is available: (give details)
* ...
- Error output when I run `gatsby new [MY-DIR] https://github.com/greglobinski/gatsby-starter-personal-blog.git`:
```
info sharp Detected globally-installed libvips v8.11.3
info sharp Building from source via node-gyp
  CC(target) Release/obj.target/nothing/../../../node-addon-api/nothing.o
  LIBTOOL-STATIC Release/nothing.a
warning: /Library/Developer/CommandLineTools/usr/bin/libtool:...
```
I tried to add the `Resume` page to my blog and encountered the error message below.
- Error message
```
Field "category" is not defined by type "MarkdownRemarkFrontmatterFilterInput".
File: src/experience_components/Skills/index.tsx:31:50
GraphQLError:...
```
### System Info
- `transformers` version: 4.21.1
- Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.31
- Python version: 3.9.12
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): not installed...
**Describe the bug**
I used deepspeed inference like below:
```python
import deepspeed
import torch
from transformers import T5ForConditionalGeneration
from transformers.models.t5.modeling_t5 import T5Block

# Load the model in fp16 and move it to the current GPU.
model = (
    T5ForConditionalGeneration.from_pretrained(
        "paust/pko-t5-large",
    ).half().eval().to(torch.cuda.current_device())
)

# Wrap the model with DeepSpeed inference, sharding it across 8 GPUs and
# injecting parallelism at the listed T5Block output projections.
model = deepspeed.init_inference(
    model,
    mp_size=8,
    dtype=torch.float,
    injection_policy={T5Block: ('SelfAttention.o', 'EncDecAttention.o', 'DenseReluDense.wo')},
)
```
It...
I tried converting the unigram.json file to a transformers tokenizer. I also converted the tokenizer to BERT format (`[CLS] SENTENCE [SEP]`). I'm sharing it because I think it will be helpful...
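A minimal sketch of one way such a conversion could look, assuming `unigram.json` is a serialized `tokenizers` model and that `[CLS]`/`[SEP]` exist in its vocabulary; the file paths and special-token names are illustrative:

```python
from tokenizers import Tokenizer
from tokenizers.processors import TemplateProcessing
from transformers import PreTrainedTokenizerFast

# Load the raw unigram tokenizer from its JSON serialization.
tokenizer = Tokenizer.from_file("unigram.json")

# Wrap every encoded sentence as `[CLS] SENTENCE [SEP]`, BERT-style.
tokenizer.post_processor = TemplateProcessing(
    single="[CLS] $A [SEP]",
    pair="[CLS] $A [SEP] $B [SEP]",
    special_tokens=[
        ("[CLS]", tokenizer.token_to_id("[CLS]")),
        ("[SEP]", tokenizer.token_to_id("[SEP]")),
    ],
)

# Expose it through the transformers fast-tokenizer interface and save it.
hf_tokenizer = PreTrainedTokenizerFast(
    tokenizer_object=tokenizer,
    cls_token="[CLS]",
    sep_token="[SEP]",
    unk_token="<unk>",
    pad_token="<pad>",
)
hf_tokenizer.save_pretrained("converted-tokenizer")
```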
Hello. Thank you for releasing this great open-source project. I'm opening an issue because I have a question about using the model. The description says it is trained on conversational data; if it is dialogue data, you presumably defined a few special tokens, including [TURN], for the dialogue data format...