Missing `torch._C._distributed_c10d`: how to solve it, e.g. on Termux
Bug
Running docling -v (and other docling commands) fails with: No module named 'torch._C._distributed_c10d'; 'torch._C' is not a package. Full traceback below.
Solution and cause:
Solved via https://github.com/pytorch/pytorch/issues/152285#issuecomment-3289016639, that is: pip install transformers -U. This matters e.g. on Termux, where the torch build from apt install python-torch is compiled without the DISTRIBUTED flag.
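A quick way to confirm this diagnosis is to ask torch itself whether the distributed (c10d) extension was compiled in. A minimal sketch, assuming torch may be absent entirely; torch.distributed.is_available() is the documented check for this:

```python
# Diagnostic: report whether the installed torch build includes distributed (c10d) support.
# If torch is missing, or was built without distributed support (as on Termux's
# apt python-torch), this returns False.
import importlib.util


def torch_has_distributed() -> bool:
    if importlib.util.find_spec("torch") is None:
        return False  # torch is not installed at all
    import torch
    return torch.distributed.is_available()


if __name__ == "__main__":
    print(f"torch distributed available: {torch_has_distributed()}")
```

If this prints False, any import path that touches torch.distributed (as newer transformers releases do at import time) will fail exactly as in the traceback below.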
Reference: the full error before the fix (i.e. before upgrading transformers):
~ $ docling --version
WARNING: linker: Warning: "/data/data/com.termux/files/home/.local/lib/python3.12/site-packages/pypdfium2_raw/libpdfium.so" unused DT entry: unknown processor-specific (type 0x70000001 arg 0x0) (ignoring)
Traceback (most recent call last):
File "/data/data/com.termux/files/usr/bin/docling", line 3, in <module>
from docling.cli.main import app
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/docling/cli/main.py", line 72, in <module>
from docling.document_converter import (
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/docling/document_converter.py", line 54, in <module>
from docling.pipeline.standard_pdf_pipeline import StandardPdfPipeline
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/docling/pipeline/standard_pdf_pipeline.py", line 17, in <module>
from docling.models.code_formula_model import CodeFormulaModel, CodeFormulaModelOptions
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/docling/models/code_formula_model.py", line 17, in <module>
from transformers import AutoModelForImageTextToText, AutoProcessor
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/transformers/__init__.py", line 27, in <module>
from . import dependency_versions_check
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/transformers/dependency_versions_check.py", line 16, in <module>
from .utils.versions import require_version, require_version_core
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/transformers/utils/__init__.py", line 24, in <module>
from .auto_docstring import (
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/transformers/utils/auto_docstring.py", line 30, in <module>
from .generic import ModelOutput
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/transformers/utils/generic.py", line 55, in <module>
from ..model_debugging_utils import model_addition_debugger_context
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/transformers/model_debugging_utils.py", line 29, in <module>
import torch.distributed.tensor
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/torch/distributed/tensor/__init__.py", line 4, in <module>
import torch.distributed.tensor._ops # force import all built-in dtensor ops
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/torch/distributed/tensor/_ops/__init__.py", line 2, in <module>
from ._conv_ops import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/torch/distributed/tensor/_ops/_conv_ops.py", line 7, in <module>
from torch.distributed.tensor._dtensor_spec import DTensorSpec, TensorMeta
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/torch/distributed/tensor/_dtensor_spec.py", line 6, in <module>
from torch.distributed.tensor.placement_types import (
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/torch/distributed/tensor/placement_types.py", line 8, in <module>
import torch.distributed._functional_collectives as funcol
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/torch/distributed/_functional_collectives.py", line 9, in <module>
import torch.distributed.distributed_c10d as c10d
File "/data/data/com.termux/files/usr/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py", line 23, in <module>
from torch._C._distributed_c10d import (
ModuleNotFoundError: No module named 'torch._C._distributed_c10d'; 'torch._C' is not a package
For reference, the output after the fix:
~ $ docling --version
WARNING: linker: Warning: "/data/data/com.termux/files/home/.local/lib/python3.12/site-packages/pypdfium2_raw/libpdfium.so" unused DT entry: unknown processor-specific (type 0x70000001 arg 0x0) (ignoring)
Docling version: 2.52.0
Docling Core version: 2.48.1
Docling IBM Models version: 3.9.1
Docling Parse version: 4.1.0
Python: cpython-312 (3.12.11)
Platform: Linux-4.14.186+-aarch64-with-libc
~ $
Hi @Manamama! I'm Dosu and I’m helping the docling team.
Glad to see the fix worked! The original error was due to missing or improperly built PyTorch distributed components, which is common on ARM64/Termux when the distributed extension isn't compiled in. Upgrading the transformers library (pip install transformers -U) is a known solution, as newer versions bundle compatible prebuilt components and avoid triggering the missing module error in PyTorch environments like this (source).
The linker warning about pypdfium2_raw/libpdfium.so is harmless and doesn't affect Docling's execution. For anyone running Docling on ARM64 or in containers, it's best to always verify that all dependencies (especially torch and transformers) are installed as ARM64 wheels, and to use a fresh environment if you hit similar issues (source).
If you see related errors in the future, double-check your binary architectures and try reinstalling dependencies. Running Docling in a native ARM64 environment is currently the most reliable way to avoid these ecosystem quirks.
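The "double-check your binary architectures" step can be scripted; a small sketch using only the standard library (platform.machine() and sysconfig.get_platform() are real stdlib calls; the values in the comments are illustrative for Termux on ARM64):

```python
# Print the interpreter's machine architecture and wheel platform tag,
# so you can compare them against the wheels pip actually installed
# (e.g. the cp312-cp312-linux_aarch64 tag in a wheel filename).
import platform
import sysconfig


def arch_summary() -> dict:
    return {
        "machine": platform.machine(),             # e.g. "aarch64" on Termux
        "platform_tag": sysconfig.get_platform(),  # e.g. "linux-aarch64"
    }


if __name__ == "__main__":
    for key, value in arch_summary().items():
        print(f"{key}: {value}")
```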
To reply, just mention @dosu.
Well, @dosu, I know that too by now, but it has been hard-won knowledge. It would be better to pin transformers to a known-good version in docling's requirements instead.
You're absolutely right—pinning transformers to a stable version helps avoid hard-to-debug issues, especially on ARM64/Termux. Currently, docling specifies transformers as an optional dependency with a version range (>=4.46.0,<5.0.0) in pyproject.toml, so it's not strictly pinned to a single version but constrained to avoid breaking changes from major upgrades (source).
Recent support threads recommend using transformers >=4.55.1 to avoid device mismatch errors and other compatibility problems (source). Known working combos include transformers >=4.45.0 with tokenizers >=0.20.0, while versions like 4.42.4 can cause issues (source).
Pinning to a specific, well-tested version (like 4.55.1) or a narrow range is a solid move for reproducibility and stability, especially for environments with limited support. This practice can save a lot of troubleshooting time!
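A pin like that can also be sanity-checked at runtime. A sketch of checking an installed version against a simple range, using only stdlib importlib.metadata (the range >=4.46.0,<5.0.0 is the one quoted above; note the naive parsing does not handle pre-release suffixes):

```python
# Check an installed package's version against a simple [lo, hi) range.
# Naive: compares only the first three dot-separated numeric components.
from importlib.metadata import PackageNotFoundError, version


def version_in_range(pkg: str, lo: tuple, hi: tuple) -> bool:
    try:
        parts = version(pkg).split(".")[:3]
        v = tuple(int(p) for p in parts)
    except (PackageNotFoundError, ValueError):
        return False  # not installed, or a version string we cannot parse
    return lo <= v < hi


if __name__ == "__main__":
    # The transformers range docling is said to constrain to:
    print(version_in_range("transformers", (4, 46, 0), (5, 0, 0)))
```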
Oh, @dosu & Co., this is actually also relevant to plain Ubuntu (and other Linuxes, methinks). See below; déjà vu:
~$ pip show transformers
Name: transformers
Version: 4.56.1
Summary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow
Home-page: https://github.com/huggingface/transformers
Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors)
Author-email: [email protected]
License: Apache 2.0 License
Location: /home/user/.local/lib/python3.10/site-packages
Requires: filelock, huggingface-hub, numpy, packaging, pyyaml, regex, requests, safetensors, tokenizers, tqdm
Required-by: audiolm-pytorch, bark, compressed-tensors, deepmultilingualpunctuation, docling-ibm-models, garak, hqq, kimi-audio, laion_clap, llama-index-embeddings-huggingface, llama-index-llms-huggingface, lm-eval, nougat-ocr, onnx_embedding_models, optimum, peft, sentence-transformers, spacy-transformers, suno-bark, TTS, whisperx, xgrammar
user@above-hp2-silver:~$ uname -a
Linux above-hp2-silver 6.8.0-60-generic #63~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Apr 22 19:00:15 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
~$ docling --version
2025-09-14 02:58:39.241610: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
AttributeError: 'MessageFactory' object has no attribute 'GetPrototype'
Traceback (most recent call last):
File "/home/user/.local/bin/docling", line 3, in <module>
from docling.cli.main import app
File "/home/user/.local/lib/python3.10/site-packages/docling/cli/main.py", line 72, in <module>
from docling.document_converter import (
File "/home/user/.local/lib/python3.10/site-packages/docling/document_converter.py", line 54, in <module>
from docling.pipeline.standard_pdf_pipeline import StandardPdfPipeline
File "/home/user/.local/lib/python3.10/site-packages/docling/pipeline/standard_pdf_pipeline.py", line 17, in <module>
from docling.models.code_formula_model import CodeFormulaModel, CodeFormulaModelOptions
File "/home/user/.local/lib/python3.10/site-packages/docling/models/code_formula_model.py", line 17, in <module>
from transformers import AutoModelForImageTextToText, AutoProcessor
ImportError: cannot import name 'AutoProcessor' from 'transformers' (/home/user/.local/lib/python3.10/site-packages/transformers/__init__.py)
~$
(Yes, I know how to fix it; see my own tip above.)
Update: it turned out I did NOT know how to fix it (on Ubuntu, that is). After 8 hours I still do not know how to fix it elegantly on Python 3.10. Neither does dosubot in https://github.com/docling-project/docling/issues/2261#issuecomment-3289039681 (juggling version numbers by pinning does not work; I have tried some 10 combinations).
I decided to go the Python 3.12 way (clean-installing all related modules), and only now does it work.
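For reference, the "clean environment" step can be scripted with the stdlib venv module (a sketch; the path is a placeholder, and on Termux or Ubuntu you would run it under the Python 3.12 interpreter you want the environment to use):

```python
# Create a fresh virtual environment, wiping any previous one at the same path.
import venv


def make_clean_env(path: str, with_pip: bool = True) -> str:
    # clear=True removes a stale env at `path`; with_pip=True bootstraps pip
    # so `pip install -U docling transformers` can then run inside it.
    venv.create(path, with_pip=with_pip, clear=True)
    return path
```

The environment is created with whichever interpreter runs this script, so launching it via python3.12 yields the Python 3.12 environment that resolved the issue here.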
You're right—this import error with AutoProcessor from transformers can hit any Linux distro, not just Termux or ARM64. Docling expects a recent transformers version (>=4.46.0,<5.0.0) because its VLM components rely on features like AutoProcessor and AutoModelForImageTextToText without fallback logic (source). If your environment has a mismatched, partially upgraded, or corrupted transformers install—even if the version number looks right—you can get this error (source).
Pinning transformers to a stable, well-tested version (like 4.55.1+) and using a clean environment is the best way to avoid these headaches (source). If you hit this again, a full reinstall of transformers (and its dependencies) usually does the trick.
Note, mostly to self: the newest docling-parse finally installs on Termux without git cloning and code changes (compiling this bit: Building CXX object libqpdf/CMakeFiles/libqpdf.dir, during Performing build step for 'extlib_qpdf', used to fail until very recently), provided one first runs:
apt install qt6-qtbase-gtk-platformtheme
and, of course, waits an hour or so, as usual.
In short, a plain pip install -U docling (with all its dependencies) now works without too many tricks, even in pure Termux:
Created wheel for docling-parse: filename=docling_parse-4.7.1-cp312-cp312-linux_aarch64.whl size=21420832 sha256=e0fb9190dd2c4b1d223829eb0f7d86bece745de9aa1a61ad4f3288dca7a136d5
Stored in directory: /data/data/com.termux/files/home/.cache/pip/wheels/0a/32/b8/6bda56ef1680e329c981b5e9a486b7fd03abdca587e00f631f
Successfully built docling-parse
Installing collected packages: docling-parse
Attempting uninstall: docling-parse
Found existing installation: docling-parse 4.1.0
Uninstalling docling-parse-4.1.0:
Removing file or directory /data/data/com.termux/files/usr/lib/python3.12/site-packages/docling_parse-4.1.0.dist-info/
Removing file or directory /data/data/com.termux/files/usr/lib/python3.12/site-packages/docling_parse/
Successfully uninstalled docling-parse-4.1.0
on:
uname -a
Linux localhost 4.14.186+ #1 SMP PREEMPT Thu Mar 17 16:28:22 CST 2022 aarch64 Android