Jared Van Bortel


> Hope this is what you need:

Yes, that is very helpful, thanks.

**edit:** Could you please try to get info for the exception by running the `.exr -1` command...
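For anyone following along, a minimal WinDbg sequence for inspecting the last exception might look like this (the exact output depends on the session):

```
.exr -1    $$ display the most recent exception record (code, address, parameters)
.ecxr      $$ switch to the context that was active when the exception was raised
k          $$ dump the call stack from that context
```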

Unfortunately, I'm not sure how to get the exception message with WinDbg. Here's another option: I uploaded a console-enabled build (`gpt4all-installer-win64-v2.5.0-pre2-debug-console.exe`) to the [pre-release](https://github.com/nomic-ai/gpt4all/releases/tag/v2.5.0-pre1). It would be helpful if...

Unless you can debug it with Visual Studio (which I know will provide the exception information), I'm not sure what else to do.

Now we're getting somewhere:

```
KERNELBASE!RaiseException+6c
VCRUNTIME140!_CxxThrowException+90 [D:\a\_work\1\s\src\vctools\crt\vcruntime\src\eh\throw.cpp @ 75]
llmodel+ba4dc
0x0000002f`b14fd2b8
```

Unfortunately, I no longer have a copy of the debug info for that build...

Here is the call stack when the exception is thrown:

```
KERNELBASE!RaiseException+0x6c
VCRUNTIME140D!_CxxThrowException+0x120
llmodel!vk::detail::throwResultException+0x29c
llmodel!vk::resultCheck+0x23
llmodel!vk::Instance::enumeratePhysicalDevices+0xf7
llmodel!kp::Manager::listDevices+0x38
llmodel!ggml_vk_available_devices+0xf6
llmodel!LLModel::availableGPUDevices+0x4f
chat!MySettings::MySettings+0x74
chat!MyPrivateSettings::MyPrivateSettings+0x14
chat!`anonymous namespace'::Q_QGS_settingsInstance::innerFunction+0x36
chat!QtGlobalStatic::Holder+0x1c
chat!QGlobalStatic >::operator()+0x24
chat!MySettings::globalInstance+0x12
chat!main+0x12f
chat!invoke_main+0x39
chat!__scrt_common_main_seh+0x12e
...
```
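The throwing frame is `vk::Instance::enumeratePhysicalDevices`, which raises `vk::SystemError` when the Vulkan loader reports a failure. A minimal sketch of how enumeration could be guarded, assuming the vulkan.hpp bindings with exceptions enabled (`safeListDevices` is a hypothetical helper, not the actual gpt4all code):

```cpp
#include <iostream>
#include <vector>
#include <vulkan/vulkan.hpp>

// Hypothetical helper: turn a Vulkan enumeration failure into an empty device
// list instead of an unhandled exception that aborts the app at startup.
std::vector<vk::PhysicalDevice> safeListDevices(const vk::Instance &instance) {
    try {
        return instance.enumeratePhysicalDevices();
    } catch (const vk::SystemError &err) {
        // Typically VK_ERROR_INITIALIZATION_FAILED when no usable ICD/driver exists.
        std::cerr << "Vulkan device enumeration failed: " << err.what() << '\n';
        return {};
    }
}
```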

There is definitely room to improve indexing speed: if it could run without a model loaded, we would be able to use the GPU and greatly increase performance...

> Nothing makes anything show up in the Local Documents database.

Try asking on our [Discord](https://discord.gg/mGZE39AS3e). You may be missing a step.

I believe what manyoso is saying is that our Vulkan backend currently requires a contiguous chunk of memory to be available, as it allocates one big chunk instead of smaller...
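To illustrate (a minimal sketch assuming the vulkan.hpp bindings, not the actual backend code): requesting the full weight size as one allocation means the driver must find a single contiguous region of that size, which can fail even when enough total VRAM is free but fragmented. `totalWeightBytes` and `memoryTypeIndex` stand in for values the backend would compute.

```cpp
#include <vulkan/vulkan.hpp>

// Sketch only: allocate all model weights as one vk::DeviceMemory block.
// The call throws (e.g. vk::OutOfDeviceMemoryError) if no single contiguous
// block of that size is available, regardless of total free VRAM.
vk::DeviceMemory allocateWeights(vk::Device device,
                                 vk::DeviceSize totalWeightBytes,
                                 uint32_t memoryTypeIndex) {
    vk::MemoryAllocateInfo info(totalWeightBytes, memoryTypeIndex);
    return device.allocateMemory(info);
}
```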

> often only the Q4 models are working

We only support GPU acceleration of Q4_0 and Q4_1 quantizations at the moment.
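As a rough illustration (a hypothetical check mirroring the stated support matrix, not the actual dispatch code), offload decisions can be gated on the tensor type, with everything else staying on the CPU:

```cpp
#include "ggml.h"

// Only Q4_0 and Q4_1 tensors have Vulkan kernels at the moment, so other
// quantizations fall back to CPU inference.
static bool vulkanSupportsType(enum ggml_type type) {
    return type == GGML_TYPE_Q4_0 || type == GGML_TYPE_Q4_1;
}
```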

> I can't load a Q4_0 into VRAM on either of my 4090s, each with 24gb.

Just so you're aware, GPT4All uses a completely different GPU backend than the other...