ZanePoe
```java
activity.bigImage.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        activity.bigImage.animaTo(info, new Runnable() { // info is still non-null at this point
            @Override
            public void run() {
                if (null == info) { // info is not null here either; this log never prints
                    Log.i("vicky", "How can info be null? bigImageClick");
                }
                activity.bigImage.setVisibility(View.GONE);
            }
        });
    }
});
```
But at this point a NullPointerException is thrown...
Error message: `Error:Execution failed for task ':......:transformClassesWithDesugarForDebug'. > com.android.build.api.transform.TransformException: java.io.IOException: Could not delete path 'D:\......\build\intermediates\transforms\desugar\debug\10.jar'.` The first time Android Studio builds a project that uses BlockCanaryEx, it compiles normally. But after any code change, the project can no longer be compiled, and `clean project` reports: `Unable to delete file: D:\......\build\intermediates\processing-tools\runtime-deps\debug\desugar_try_with_resources.jar`. Only after Invalidate Caches / Restart can the project run again, and then the problem repeats. My guess is that the BlockCanaryExPlugin plugin does not release its file handles, leaving these jars locked.
Current versions:
```python
dependencies = [
    "uvicorn>=0.18.3",
    "pydantic[dotenv]>=2.0.0",
    "pydantic-settings>=2.1.0",
    "aiosqlite>=0.15.0",
    "fastapi-user-auth==0.7.0",
    "fastapi-amis-admin-nav==0.1.3",
    "fastapi-scheduler==0.0.15",
    "sqlmodelx>=0.0.9",
    "alembic>=1.7.6",
]
```
After repeated testing, SQLite starts without errors, but starting with PostgreSQL fails, and the cause is a type mismatch. The detailed error is as follows:
```text
Traceback (most recent call last):
  File "d:\code\github\fastapi-user-auth-demo\.venv\lib\site-packages\starlette\routing.py", line 677, in lifespan
    async with self.lifespan_context(app)...
```
Extensive real-world testing: on dual 2080 Ti 22G GPUs, exl2 is more than twice as fast as awq. Moreover, with very long contexts, exl2's VRAM usage grows very gently. On the same dual 2080 Ti 22G setup, the awq-quantized SUS-Chat-34B-AWQ runs out of VRAM at around 6000 tokens, while SUS-Chat-34B-6.0bpw-h6-exl2 still has ample VRAM headroom at 7000 tokens.
### Checklist - [X] 1. I have searched related issues but cannot get the expected help. - [X] 2. The bug has not been fixed in the latest version. -...
## 📚 Documentation ### Suggestion FlashInfer now supports sm75 CUDA devices, but your doc still says "If you are using CUDA and your compute capability is above 80, then it is...
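The capability check the doc describes can be sketched as a small helper. The function name and the sm75 threshold are assumptions for illustration, not part of any official FlashInfer API:

```python
def supports_flashinfer(major: int, minor: int) -> bool:
    """Hypothetical helper: return True if a CUDA device's compute
    capability (e.g. 7.5 for sm75, 8.0 for sm80) meets the assumed
    minimum supported by FlashInfer (sm75)."""
    return (major, minor) >= (7, 5)

# With PyTorch available, the current device's capability can be read via
# torch.cuda.get_device_capability(), which returns a (major, minor) tuple:
#   major, minor = torch.cuda.get_device_capability()
#   if supports_flashinfer(major, minor): ...

print(supports_flashinfer(7, 5))  # sm75 -> True under the assumed threshold
print(supports_flashinfer(7, 0))  # sm70 -> False
```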
### Feature request / 功能建议 It is suggested that the `client.launch_model` API add the ability to launch with the most recently used configuration, similar to the launch feature in the web UI. ### Motivation / 动机 This makes it convenient to switch flexibly between large and small models for different tasks. ### Your contribution / 您的贡献 https://inference.readthedocs.io/zh-cn/latest/reference/generated/xinference.client.Client.launch_model.html
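A minimal sketch of what a "relaunch with last config" helper could look like on top of the existing client. The file path, helper names, and the idea of persisting kwargs to JSON are all assumptions for illustration, not part of the xinference API:

```python
import json
from pathlib import Path

# Hypothetical location for the saved launch configuration.
CONFIG_PATH = Path("last_launch.json")

def save_launch_config(**kwargs) -> dict:
    """Persist the kwargs of the most recent launch_model call as JSON."""
    CONFIG_PATH.write_text(json.dumps(kwargs))
    return kwargs

def load_launch_config() -> dict:
    """Load the most recently saved launch configuration."""
    return json.loads(CONFIG_PATH.read_text())

# Usage sketch (client is an xinference Client; not executed here):
#   cfg = save_launch_config(model_name="qwen2.5-instruct", model_size_in_billions=7)
#   client.launch_model(**cfg)                   # first launch
#   client.launch_model(**load_launch_config())  # later: relaunch with last config

cfg = save_launch_config(model_name="qwen2.5-instruct", model_size_in_billions=7)
print(load_launch_config() == cfg)  # True
```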
### Your current environment The output of `python env.py`
```text
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build...
```
### Motivation The architecture of Skywork/Skywork-R1V2-38B is essentially the same as OpenGVLab/InternVL3-38B; only the LLM is swapped from [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) to QwQ-32B. Skywork has provided lmdeploy inference code and adapted the relevant configuration, but testing shows an anomaly:
```
(RayWorkerWrapper pid=1141151) You are using a model of type internvl_chat to instantiate a model of type skywork_chat. This is not supported for all configurations of models and...
```
As the title says.