adogwangwang

Results: 5 comments of adogwangwang

If B is very similar, isn't that just a hard negative? I see this project has a method for mining hard negatives; it's just that the dataset is hard to put together.
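Not part of the original comment, but a minimal sketch of what embedding-based hard-negative mining typically looks like in this setting (the function, the toy data, and all names here are illustrative, not the project's actual method): embed the corpus, then for each anchor take the most similar items that are *not* labelled positives as hard negatives.

```python
import numpy as np

def mine_hard_negatives(anchor_emb, exclude_idx, corpus_emb, k=1):
    """Return indices of the k corpus items most similar to the anchor,
    skipping exclude_idx (the anchor itself and its labelled positives).
    High similarity + wrong label = hard negative."""
    a = anchor_emb / np.linalg.norm(anchor_emb)
    c = corpus_emb / np.linalg.norm(corpus_emb, axis=1, keepdims=True)
    sims = c @ a                           # cosine similarity to every item
    sims[np.asarray(exclude_idx)] = -np.inf  # never pick anchor/positives
    return np.argsort(-sims)[:k]           # most similar non-positives first

# Toy usage: random 4-d "embeddings" standing in for a real encoder.
rng = np.random.default_rng(0)
corpus_emb = rng.normal(size=(5, 4))
print(mine_hard_negatives(corpus_emb[0], exclude_idx=[0, 1], corpus_emb=corpus_emb))
```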

> GPU KV cache

Hello, can you show me how to increase the GPU KV cache? Here is my log: when I run gemma7b, each run takes 30s, soooo slow,...
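The log itself isn't reproduced above, but "GPU KV cache" is the wording vLLM uses in its startup logs, so here is a minimal sketch assuming that stack: vLLM carves the KV cache out of whatever GPU memory is left inside the `gpu_memory_utilization` budget after the weights load, so raising that fraction (or lowering `max_model_len`) enlarges the cache. The model name comes from the comment; the exact values are illustrative.

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="google/gemma-7b",       # gemma7b, as mentioned in the comment
    gpu_memory_utilization=0.95,   # default is 0.90; more memory -> bigger KV cache
    max_model_len=4096,            # shorter max context also frees blocks for the cache
)
out = llm.generate(["Hello"], SamplingParams(max_tokens=32))
print(out[0].outputs[0].text)
```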

> Entered the container and checked the path "/var/lib/lgraph/data"; there is no upload_files directory. ![image](https://private-user-images.githubusercontent.com/21599896/321558387-9a2daa3f-d921-4cd9-ac00-7c37b2baf1f3.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MTY4NzgzNzQsIm5iZiI6MTcxNjg3ODA3NCwicGF0aCI6Ii8yMTU5OTg5Ni8zMjE1NTgzODctOWEyZGFhM2YtZDkyMS00Y2Q5LWFjMDAtN2MzN2IyYmFmMWYzLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA1MjglMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNTI4VDA2MzQzNFomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWQyOGNmNDdmMzQyMDg3ZTA1ZjA3YzVkYjZmYzk2NzVkNjA4YTMwYjFlODdhZGMzMGNmOWQ3NjNiMjI4ZGQ1YmQmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.RGu7-BX9rR_a-0VRqFO5wNub7qvGmea4eQKn6NHSaz4)

Did you solve this? I'm hitting the same problem~

> > Entered the container and checked the path "/var/lib/lgraph/data"; there is no upload_files directory. ![image](https://private-user-images.githubusercontent.com/21599896/321558387-9a2daa3f-d921-4cd9-ac00-7c37b2baf1f3.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MTY4NzgzNzQsIm5iZiI6MTcxNjg3ODA3NCwicGF0aCI6Ii8yMTU5OTg5Ni8zMjE1NTgzODctOWEyZGFhM2YtZDkyMS00Y2Q5LWFjMDAtN2MzN2IyYmFmMWYzLnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDA1MjglMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwNTI4VDA2MzQzNFomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWQyOGNmNDdmMzQyMDg3ZTA1ZjA3YzVkYjZmYzk2NzVkNjA4YTMwYjFlODdhZGMzMGNmOWQ3NjNiMjI4ZGQ1YmQmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.RGu7-BX9rR_a-0VRqFO5wNub7qvGmea4eQKn6NHSaz4)
> >
> > Did you solve this? I'm hitting the same problem~

I put the data in there myself; that solved it.
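For anyone hitting the same thing: the fix described above is simply to create the missing directory and copy your files into the container by hand. A minimal sketch of that step (the container name `tugraph` and the local `./mydata` directory are placeholders for your own setup):

```python
import subprocess

CONTAINER = "tugraph"  # placeholder: use your actual container name
TARGET = "/var/lib/lgraph/data/upload_files"

# Create the missing upload_files directory inside the TuGraph container,
# then copy the contents of the local data directory into it.
subprocess.run(["docker", "exec", CONTAINER, "mkdir", "-p", TARGET], check=True)
subprocess.run(["docker", "cp", "./mydata/.", f"{CONTAINER}:{TARGET}"], check=True)
```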

> @adogwangwang could you provide more info on which backend (I'm assuming CUDA not Metal) and which version you're running.

Hello, I am using llama-cpp-python 0.2.64; when I run llava...
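The rest of the question is truncated, but with llama-cpp-python on the CUDA backend the usual cause of very slow LLaVA runs is that no layers were offloaded to the GPU. A minimal sketch of the documented multimodal setup with full offload (the file paths are placeholders, and it assumes a CUDA-enabled build of the wheel):

```python
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# The CLIP projector that ships alongside LLaVA GGUF models (placeholder path).
chat_handler = Llava15ChatHandler(clip_model_path="./mmproj-model-f16.gguf")

llm = Llama(
    model_path="./llava-v1.5-7b.Q4_K_M.gguf",  # placeholder path
    chat_handler=chat_handler,
    n_gpu_layers=-1,    # offload every layer to the GPU; 0 runs fully on CPU
    n_ctx=2048,         # larger context to accommodate the image embedding
    logits_all=True,    # required for llava in some llama-cpp-python versions
)

resp = llm.create_chat_completion(messages=[
    {"role": "user", "content": [
        {"type": "image_url", "image_url": {"url": "file:///path/to/img.png"}},
        {"type": "text", "text": "Describe this image."},
    ]},
])
print(resp["choices"][0]["message"]["content"])
```

If runs stay slow after this, the load-time log should show how many layers were actually offloaded, which is a quick way to confirm the wheel was built with CUDA support.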