Error: 'abs_out_mps()' operation does not support input type 'int64' in MPS backend on Apple M1 Mac
Issue Description
When running the ingest.py script on an Apple M1 MacBook Pro with 32 GB of memory, the script fails with a TypeError because the 'abs_out_mps()' operation does not support 'int64' inputs in the MPS backend. The error is raised while the instructor embeddings are being computed, so ingestion never completes.
Reproduction Steps
- Clone the repository to a local directory.
- Install the required dependencies with pip install -r requirements.txt.
- Open the ingest.py file and navigate to line 37.
- Change the value of device from 'cuda' to 'mps' (see the snippet after these steps).
- Run the command python ingest.py in the terminal.
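For reference, the device edit mentioned above only changes one string. The exact line number may differ between versions of ingest.py, but it amounts to something like this:

```python
# ingest.py, around line 37 in the version I am using
# device = 'cuda'   # original value
device = 'mps'      # run the embedding model on Apple Silicon via MPS
```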
Error Message
python ingest.py
Loading documents from /Users/lrodrrol/Documents/Projects/LocalGpt/localGPT/SOURCE_DOCUMENTS
Loaded 1 documents from /Users/lrodrrol/Documents/Projects/LocalGpt/localGPT/SOURCE_DOCUMENTS
Split into 72 chunks of text
load INSTRUCTOR_Transformer
max_seq_length 512
Using embedded DuckDB with persistence: data will be stored in: /Users/lrodrrol/Documents/Projects/LocalGpt/localGPT/DB
Traceback (most recent call last):
File "/Users/lrodrrol/Documents/Projects/LocalGpt/localGPT/ingest.py", line 57, in <module>
main()
File "/Users/lrodrrol/.pyenv/versions/3.10.4/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "/Users/lrodrrol/.pyenv/versions/3.10.4/lib/python3.10/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/Users/lrodrrol/.pyenv/versions/3.10.4/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/lrodrrol/.pyenv/versions/3.10.4/lib/python3.10/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/Users/lrodrrol/Documents/Projects/LocalGpt/localGPT/ingest.py", line 51, in main
db = Chroma.from_documents(texts, embeddings, persist_directory=PERSIST_DIRECTORY, client_settings=CHROMA_SETTINGS)
File "/Users/lrodrrol/.pyenv/versions/3.10.4/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 413, in from_documents
return cls.from_texts(
File "/Users/lrodrrol/.pyenv/versions/3.10.4/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 381, in from_texts
chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
File "/Users/lrodrrol/.pyenv/versions/3.10.4/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 158, in add_texts
embeddings = self._embedding_function.embed_documents(list(texts))
File "/Users/lrodrrol/.pyenv/versions/3.10.4/lib/python3.10/site-packages/langchain/embeddings/huggingface.py", line 148, in embed_documents
embeddings = self.client.encode(instruction_pairs)
File "/Users/lrodrrol/.pyenv/versions/3.10.4/lib/python3.10/site-packages/InstructorEmbedding/instructor.py", line 539, in encode
out_features = self.forward(features)
File "/Users/lrodrrol/.pyenv/versions/3.10.4/lib/python3.10/site-packages/torch/nn/modules/container.py", line 204, in forward
input = module(input)
File "/Users/lrodrrol/.pyenv/versions/3.10.4/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/lrodrrol/.pyenv/versions/3.10.4/lib/python3.10/site-packages/InstructorEmbedding/instructor.py", line 269, in forward
output_states = self.auto_model(**trans_features, return_dict=False)
File "/Users/lrodrrol/.pyenv/versions/3.10.4/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/lrodrrol/.pyenv/versions/3.10.4/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 1846, in forward
encoder_outputs = self.encoder(
File "/Users/lrodrrol/.pyenv/versions/3.10.4/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/lrodrrol/.pyenv/versions/3.10.4/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 1040, in forward
layer_outputs = layer_module(
File "/Users/lrodrrol/.pyenv/versions/3.10.4/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/lrodrrol/.pyenv/versions/3.10.4/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 673, in forward
self_attention_outputs = self.layer[0](
File "/Users/lrodrrol/.pyenv/versions/3.10.4/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/lrodrrol/.pyenv/versions/3.10.4/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 579, in forward
attention_output = self.SelfAttention(
File "/Users/lrodrrol/.pyenv/versions/3.10.4/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/lrodrrol/.pyenv/versions/3.10.4/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 521, in forward
position_bias = self.compute_bias(real_seq_length, key_length, device=scores.device)
File "/Users/lrodrrol/.pyenv/versions/3.10.4/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 428, in compute_bias
relative_position_bucket = self._relative_position_bucket(
File "/Users/lrodrrol/.pyenv/versions/3.10.4/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 399, in _relative_position_bucket
relative_position = torch.abs(relative_position)
TypeError: Operation 'abs_out_mps()' does not support input type 'int64' in MPS backend.
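The failure ultimately comes from transformers' T5 relative-position code, which calls torch.abs() on an int64 tensor. A minimal sketch of that call in isolation (values chosen purely for illustration) should trigger the same error on this setup:

```python
import torch

# Minimal sketch of the failing call from transformers' modeling_t5.py:
# torch.abs() on an int64 tensor placed on the MPS device.
# On the PyTorch build installed here this raises the same TypeError;
# the same call works on CPU or after casting to int32/float.
relative_position = torch.arange(-4, 4, dtype=torch.int64).to("mps")
print(torch.abs(relative_position))
```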
Offer to Help
I am willing to help resolve this issue by opening a pull request against the repository's README if someone can suggest a fix. Please let me know if there are any specific changes or steps needed to address this problem.
Can you take a look at this (update to the readme) and see whether it helps with the issue?
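For reference, one interim workaround I could document in the README (untested on my side; the constructor arguments and model name below are assumptions, not the exact code from ingest.py) is to keep the instructor embeddings on the CPU until the MPS path works:

```python
from langchain.embeddings import HuggingFaceInstructEmbeddings

# Hypothetical workaround: run the embedding model on CPU instead of MPS.
# Slower, but it avoids the unsupported int64 abs_out_mps() operation.
# The model name is an example and may differ from what ingest.py uses.
embeddings = HuggingFaceInstructEmbeddings(
    model_name="hkunlp/instructor-large",
    model_kwargs={"device": "cpu"},
)
```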