Kokoro-FastAPI

Multiple simultaneous requests or cancelling requests mid-stream causes server to crash

Open Mobious opened this issue 6 months ago • 8 comments

Describe the bug The Docker containers (both ghcr.io/remsky/kokoro-fastapi-gpu:latest and ghcr.io/remsky/kokoro-fastapi-cpu:latest) crash without any error message when repeated requests are canceled mid-stream.

Branch / Deployment used Using the Docker images specified in the documentation.

  • GPU: ghcr.io/remsky/kokoro-fastapi-gpu:latest
  • CPU: ghcr.io/remsky/kokoro-fastapi-cpu:latest

Operating System The crash occurs in Docker when running on both Windows 10 and Ubuntu 20.04.6 LTS.

Additional context Here's a minimal Python script that fairly consistently crashes the Docker container (it may require several runs). You may need to adjust the timeout value depending on whether you are running the GPU or CPU container and how fast your machine is.

import requests

# May need to run this script multiple times before the Docker container crashes.

# The crash happens more consistently on /dev/captioned_speech, but I've made
# it happen on /v1/audio/speech as well with more requests. Pick one endpoint:
# url = 'http://localhost:8880/dev/captioned_speech'
url = 'http://localhost:8880/v1/audio/speech'

data = {
    'input': 'Hello, this is a longer message to give the client time to cancel before the server finishes streaming the response. More text, more text, 12345456698732565213214, 12341458731278643782'
}

# Pick one timeout; if using GPU, it should be lower.
# timeout = 0.3  # GPU
timeout = 5.0  # CPU

for i in range(20):
    try:
        resp = requests.post(url, json=data, timeout=timeout)
        print(resp)
    except (requests.exceptions.ReadTimeout, requests.exceptions.ConnectionError):
        print(f'timed out request {i+1}')
print('Done')
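For reference, here is a variant of the same loop that cancels deterministically instead of relying on a timeout: it opens a streaming response, reads a single body chunk, and then drops the connection mid-stream. This is a sketch assuming the same hypothetical local deployment on port 8880; the chunk size and request count are arbitrary.

```python
import requests

# Assumed local deployment, same endpoint as the script above.
URL = 'http://localhost:8880/v1/audio/speech'
DATA = {'input': 'Hello, this is a longer message to give the client time '
                 'to cancel before the server finishes streaming the response.'}


def cancel_mid_stream():
    """POST, read one chunk of the body, then drop the connection."""
    try:
        # stream=True makes requests return as soon as headers arrive,
        # before the body has been generated.
        resp = requests.post(URL, json=DATA, stream=True, timeout=5.0)
        next(resp.iter_content(chunk_size=1024), None)  # read one chunk
        resp.close()  # close the socket before the server finishes streaming
        return resp.status_code
    except requests.exceptions.RequestException:
        return None  # server unreachable or the connection dropped


if __name__ == '__main__':
    for i in range(20):
        print(f'request {i + 1}:', cancel_mid_stream())
```

Because the cancellation happens as soon as the first chunk arrives, this removes the need to tune the timeout for GPU vs. CPU containers.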

Mobious avatar Jun 17 '25 19:06 Mobious

Experiencing the same behavior.

MorganMarshall avatar Jun 18 '25 02:06 MorganMarshall

I found that a similar crash can also occur any time you are running multiple simultaneous requests. In fact, this may be the actual source of the problem: I realize that request processing may have been overlapping in my original example, if the server does not immediately stop processing when the client closes the connection.

I also tested running the server with uv outside of a Docker container and found the crash happens there as well, so the issue does not appear to be related to Docker.
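To exercise the overlapping-request path directly, rather than depending on cancelled requests happening to overlap, something like the following sketch keeps several requests in flight at once. The URL, payload, and worker count are assumptions, not part of the original report.

```python
from concurrent.futures import ThreadPoolExecutor

import requests

URL = 'http://localhost:8880/v1/audio/speech'  # assumed local deployment
DATA = {'input': 'Hello, this is a longer message, more text, more text.'}


def fire(i):
    """Issue one request; return the status code, or None on any failure."""
    try:
        return requests.post(URL, json=DATA, timeout=30.0).status_code
    except requests.exceptions.RequestException:
        return None


if __name__ == '__main__':
    # Eight workers keep multiple generations overlapping on the server
    # for the whole run, with no client-side cancellation involved.
    with ThreadPoolExecutor(max_workers=8) as pool:
        print(list(pool.map(fire, range(20))))
```

If this crashes the server while the sequential, fully-completed version does not, that would support overlap (rather than cancellation itself) being the trigger.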

Mobious avatar Jun 18 '25 21:06 Mobious

@Mobious can you test it on the latest version and tell me if that still happens for you?

fireblade2534 avatar Jun 20 '25 22:06 fireblade2534

@fireblade2534 I just pulled the most recent GPU image ghcr.io/remsky/kokoro-fastapi-gpu:latest, sha256:ce12c3d6f0718d09188f3355b71973cf14c12791b7e17a358fdb9dd3a6faca33

The server still crashed right after running my script, but now it prints "free(): invalid pointer" at the end. Here's the output:

>docker run --gpus all --rm -p 8880:8880 sha256:ce12c3d6f0718d09188f3355b71973cf14c12791b7e17a358fdb9dd3a6faca33

==========
== CUDA ==
==========

CUDA Version 12.8.0

Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.

2025-06-21 03:01:45.549 | INFO     | __main__:download_model:60 - Model files already exist and are valid
INFO:     Started server process [31]
INFO:     Waiting for application startup.
03:01:58 AM | INFO     | main:57 | Loading TTS model and voice packs...
03:01:58 AM | INFO     | model_manager:38 | Initializing Kokoro V1 on cuda
03:01:58 AM | DEBUG    | paths:101 | Searching for model in path: /app/api/src/models
03:01:58 AM | INFO     | kokoro_v1:46 | Loading Kokoro model on cuda
03:01:58 AM | INFO     | kokoro_v1:47 | Config path: /app/api/src/models/v1_0/config.json
03:01:58 AM | INFO     | kokoro_v1:48 | Model path: /app/api/src/models/v1_0/kokoro-v1_0.pth
WARNING: Defaulting repo_id to hexgrad/Kokoro-82M. Pass repo_id='hexgrad/Kokoro-82M' to suppress this warning.
/app/.venv/lib/python3.10/site-packages/torch/nn/modules/rnn.py:123: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.2 and num_layers=1
  warnings.warn(
/app/.venv/lib/python3.10/site-packages/torch/nn/utils/weight_norm.py:143: FutureWarning: `torch.nn.utils.weight_norm` is deprecated in favor of `torch.nn.utils.parametrizations.weight_norm`.
  WeightNorm.apply(module, name, dim)
03:02:00 AM | DEBUG    | paths:153 | Scanning for voices in path: /app/api/src/voices/v1_0
03:02:00 AM | DEBUG    | paths:131 | Searching for voice in path: /app/api/src/voices/v1_0
03:02:00 AM | DEBUG    | model_manager:77 | Using default voice 'af_heart' for warmup
03:02:00 AM | INFO     | kokoro_v1:81 | Creating new pipeline for language code: a
WARNING: Defaulting repo_id to hexgrad/Kokoro-82M. Pass repo_id='hexgrad/Kokoro-82M' to suppress this warning.
03:02:01 AM | DEBUG    | kokoro_v1:261 | Generating audio for text with lang_code 'a': 'Warmup text for initialization.'
03:02:03 AM | DEBUG    | kokoro_v1:268 | Got audio chunk with shape: torch.Size([57600])
03:02:03 AM | INFO     | model_manager:84 | Warmup completed in 4316ms
03:02:03 AM | INFO     | main:106 |

░░░░░░░░░░░░░░░░░░░░░░░░

    ╔═╗┌─┐┌─┐┌┬┐
    ╠╣ ├─┤└─┐ │
    ╚  ┴ ┴└─┘ ┴
    ╦╔═┌─┐┬┌─┌─┐
    ╠╩╗│ │├┴┐│ │
    ╩ ╩└─┘┴ ┴└─┘

░░░░░░░░░░░░░░░░░░░░░░░░

Model warmed up on cuda: kokoro_v1
CUDA: True
67 voice packs loaded

Beta Web Player: http://0.0.0.0:8880/web/
or http://localhost:8880/web/
░░░░░░░░░░░░░░░░░░░░░░░░

INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8880 (Press CTRL+C to quit)
03:02:31 AM | DEBUG    | paths:153 | Scanning for voices in path: /app/api/src/voices/v1_0
03:02:31 AM | DEBUG    | streaming_audio_writer:40 | Disabling Xing VBR header for MP3 encoding.
INFO:     127.0.0.1:44526 - "POST /dev/captioned_speech HTTP/1.1" 200 OK
03:02:31 AM | DEBUG    | paths:153 | Scanning for voices in path: /app/api/src/voices/v1_0
03:02:31 AM | DEBUG    | paths:131 | Searching for voice in path: /app/api/src/voices/v1_0
03:02:31 AM | DEBUG    | tts_service:204 | Using single voice path: /app/api/src/voices/v1_0/af_heart.pt
03:02:31 AM | DEBUG    | tts_service:280 | Using voice path: /app/api/src/voices/v1_0/af_heart.pt
03:02:31 AM | INFO     | tts_service:284 | Using lang_code 'a' for voice 'af_heart' in audio stream
03:02:31 AM | INFO     | text_processor:159 | Starting smart split for 184 chars
03:02:31 AM | DEBUG    | text_processor:164 | Split raw text into 1 parts by pause tags.
03:02:31 AM | DEBUG    | text_processor:65 | Total processing took 16.22ms for chunk: 'Hello, this is a longer message to give the client...'
03:02:31 AM | DEBUG    | text_processor:65 | Total processing took 1.84ms for chunk: 'More text, more text, twelve sextillion, three hun...'
03:02:31 AM | DEBUG    | text_processor:206 | Yielding chunk 1: 'Hello, this is a longer message to give the client...' (119 tokens)
03:02:31 AM | DEBUG    | kokoro_v1:261 | Generating audio for text with lang_code 'a': 'Hello, this is a longer message to give the client time to cancel before the server finishes streami...'
03:02:31 AM | DEBUG    | kokoro_v1:268 | Got audio chunk with shape: torch.Size([175800])
03:02:31 AM | DEBUG    | kokoro_v1:277 | Processing chunk timestamps with 22 tokens
03:02:31 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'Hello': 0.275s - 0.775s
03:02:31 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word ',': 0.775s - 0.850s
03:02:31 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'this': 0.850s - 1.050s
03:02:31 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'is': 1.050s - 1.175s
03:02:31 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'a': 1.175s - 1.288s
03:02:31 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'longer': 1.288s - 1.700s
03:02:31 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'message': 1.700s - 2.263s
03:02:31 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'to': 2.263s - 2.375s
03:02:31 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'give': 2.375s - 2.538s
03:02:31 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'the': 2.538s - 2.638s
03:02:31 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'client': 2.638s - 3.025s
03:02:31 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'time': 3.025s - 3.325s
03:02:31 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'to': 3.325s - 3.450s
03:02:31 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'cancel': 3.450s - 4.300s
03:02:31 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'before': 4.300s - 4.612s
03:02:31 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'the': 4.612s - 4.737s
03:02:31 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'server': 4.737s - 5.075s
03:02:31 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'finishes': 5.075s - 5.675s
03:02:31 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'streaming': 5.675s - 6.112s
03:02:31 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'the': 6.112s - 6.200s
03:02:31 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'response': 6.200s - 7.075s
03:02:31 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word '.': 7.075s - 7.225s
03:02:31 AM | DEBUG    | text_processor:65 | Total processing took 0.21ms for chunk: 'More text,'
03:02:31 AM | DEBUG    | text_processor:65 | Total processing took 0.15ms for chunk: 'more text,'
03:02:31 AM | DEBUG    | text_processor:65 | Total processing took 0.15ms for chunk: 'twelve sextillion,'
03:02:31 AM | DEBUG    | text_processor:65 | Total processing took 0.24ms for chunk: 'three hundred and forty-five quintillion,'
03:02:31 AM | DEBUG    | text_processor:65 | Total processing took 0.28ms for chunk: 'four hundred and fifty-six quadrillion,'
03:02:31 AM | DEBUG    | text_processor:65 | Total processing took 0.25ms for chunk: 'six hundred and ninety-eight trillion,'
03:02:31 AM | DEBUG    | text_processor:65 | Total processing took 0.24ms for chunk: 'seven hundred and thirty-two billion,'
03:02:31 AM | DEBUG    | text_processor:65 | Total processing took 0.25ms for chunk: 'five hundred and sixty-four million,'
03:02:31 AM | DEBUG    | text_processor:245 | Yielding clause chunk 2: 'More text, more text, twelve sextillion, three hun...' (212 tokens)
03:02:31 AM | DEBUG    | paths:153 | Scanning for voices in path: /app/api/src/voices/v1_0
03:02:31 AM | DEBUG    | streaming_audio_writer:40 | Disabling Xing VBR header for MP3 encoding.
INFO:     127.0.0.1:44542 - "POST /dev/captioned_speech HTTP/1.1" 200 OK
03:02:31 AM | DEBUG    | paths:153 | Scanning for voices in path: /app/api/src/voices/v1_0
03:02:31 AM | DEBUG    | paths:131 | Searching for voice in path: /app/api/src/voices/v1_0
03:02:31 AM | DEBUG    | tts_service:204 | Using single voice path: /app/api/src/voices/v1_0/af_heart.pt
03:02:31 AM | DEBUG    | tts_service:280 | Using voice path: /app/api/src/voices/v1_0/af_heart.pt
03:02:31 AM | INFO     | tts_service:284 | Using lang_code 'a' for voice 'af_heart' in audio stream
03:02:31 AM | INFO     | text_processor:159 | Starting smart split for 184 chars
03:02:31 AM | DEBUG    | text_processor:164 | Split raw text into 1 parts by pause tags.
03:02:31 AM | DEBUG    | text_processor:65 | Total processing took 0.53ms for chunk: 'Hello, this is a longer message to give the client...'
03:02:31 AM | DEBUG    | text_processor:65 | Total processing took 2.10ms for chunk: 'More text, more text, twelve sextillion, three hun...'
03:02:31 AM | DEBUG    | text_processor:206 | Yielding chunk 1: 'Hello, this is a longer message to give the client...' (119 tokens)
03:02:31 AM | DEBUG    | kokoro_v1:261 | Generating audio for text with lang_code 'a': 'Hello, this is a longer message to give the client time to cancel before the server finishes streami...'
03:02:32 AM | DEBUG    | kokoro_v1:268 | Got audio chunk with shape: torch.Size([175800])
03:02:32 AM | DEBUG    | kokoro_v1:277 | Processing chunk timestamps with 22 tokens
03:02:32 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'Hello': 0.275s - 0.775s
03:02:32 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word ',': 0.775s - 0.850s
03:02:32 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'this': 0.850s - 1.050s
03:02:32 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'is': 1.050s - 1.175s
03:02:32 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'a': 1.175s - 1.288s
03:02:32 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'longer': 1.288s - 1.700s
03:02:32 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'message': 1.700s - 2.263s
03:02:32 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'to': 2.263s - 2.375s
03:02:32 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'give': 2.375s - 2.538s
03:02:32 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'the': 2.538s - 2.638s
03:02:32 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'client': 2.638s - 3.025s
03:02:32 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'time': 3.025s - 3.325s
03:02:32 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'to': 3.325s - 3.450s
03:02:32 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'cancel': 3.450s - 4.300s
03:02:32 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'before': 4.300s - 4.612s
03:02:32 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'the': 4.612s - 4.737s
03:02:32 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'server': 4.737s - 5.075s
03:02:32 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'finishes': 5.075s - 5.675s
03:02:32 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'streaming': 5.675s - 6.112s
03:02:32 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'the': 6.112s - 6.200s
03:02:32 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word 'response': 6.200s - 7.075s
03:02:32 AM | DEBUG    | kokoro_v1:305 | Added timestamp for word '.': 7.075s - 7.225s
free(): invalid pointer

EDIT: Ran the test a few more times. Sometimes it still crashes with no error message, and sometimes it crashes with "double free or corruption (!prev)" instead of "free(): invalid pointer".

Mobious avatar Jun 21 '25 03:06 Mobious

Do you get a different error when you run it on Windows vs. Linux? Also, which OS was the test above run on?

fireblade2534 avatar Jun 21 '25 15:06 fireblade2534

That test was on Windows. I tested the new image on Ubuntu as well; the crash still occurs, but I was never able to get any error messages from the server.

EDIT: Tested the newest CPU container as well (sha256:916e50b8ef753a547dd2e4d52dac8c971b84f7bdefa6ecf6cdb1abf13f4ba1b7). The crash still occurs on both Windows and Ubuntu. I was able to get the "double free or corruption (!prev)" error on Ubuntu.
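When the container dies silently, one way to coax a traceback out of the process is Python's built-in faulthandler (the "Fatal Python error: Segmentation fault" thread dump later in this thread matches its output format). Enabling it is a one-liner near process startup; this is a general debugging sketch, not something the project currently does.

```python
import faulthandler
import sys

# Dump the Python traceback of every thread to stderr if the process
# receives SIGSEGV, SIGFPE, SIGABRT, SIGBUS, or SIGILL, so even a
# "silent" native crash leaves a trace in the container logs.
faulthandler.enable(file=sys.stderr, all_threads=True)
```

The same effect can be had without code changes by setting the `PYTHONFAULTHANDLER=1` environment variable (e.g. `docker run -e PYTHONFAULTHANDLER=1 ...`). Note it only reports the Python-level frames; heap corruption like "free(): invalid pointer" originates in native code, so a core dump or `gdb` may still be needed to pinpoint the faulty extension.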

Mobious avatar Jun 23 '25 19:06 Mobious

Same issue

/usr/local/lib/python3.12/dist-packages/torch/nn/modules/rnn.py:123: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.2 and num_layers=1
  warnings.warn(
/usr/local/lib/python3.12/dist-packages/torch/nn/utils/weight_norm.py:143: FutureWarning: `torch.nn.utils.weight_norm` is deprecated in favor of `torch.nn.utils.parametrizations.weight_norm`.
  WeightNorm.apply(module, name, dim)
/usr/local/lib/python3.12/dist-packages/jieba/_compat.py:18: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
  import pkg_resources
Building prefix dict from the default dictionary ...
Loading model from cache /tmp/jieba.cache
Loading model cost 0.970 seconds.
Prefix dict has been built successfully.
06:05:00 AM | WARNING  | main:107 | 
░░░░░░░░░░░░░░░░░░░░░░░░
    ╔═╗┌─┐┌─┐┌┬┐
    ╠╣ ├─┤└─┐ │ 
    ╚  ┴ ┴└─┘ ┴
    ╦╔═┌─┐┬┌─┌─┐
    ╠╩╗│ │├┴┐│ │
    ╩ ╩└─┘┴ ┴└─┘
░░░░░░░░░░░░░░░░░░░░░░░░
                
Model warmed up on cuda: kokoro_v1
CUDA: True
103 voice packs loaded
Beta Web Player: http://0.0.0.0:8880/web/
or http://localhost:8880/web/
░░░░░░░░░░░░░░░░░░░░░░░░
Fatal Python error: Segmentation fault
Thread 0x0000fffe0e01f180 (most recent call first):
  File "/usr/lib/python3.12/concurrent/futures/thread.py", line 89 in _worker
  File "/usr/lib/python3.12/threading.py", line 1010 in run
  File "/usr/lib/python3.12/threading.py", line 1073 in _bootstrap_inner
  File "/usr/lib/python3.12/threading.py", line 1030 in _bootstrap
Thread 0x0000fffec898f180 (most recent call first):
  File "/usr/lib/python3.12/concurrent/futures/thread.py", line 89 in _worker
  File "/usr/lib/python3.12/threading.py", line 1010 in run
  File "/usr/lib/python3.12/threading.py", line 1073 in _bootstrap_inner
  File "/usr/lib/python3.12/threading.py", line 1030 in _bootstrap
Current thread 0x0000ffff91ff8720 (most recent call first):
  Garbage-collecting
  File "/usr/lib/python3.12/fractions.py", line 217 in __new__
  File "/app/api/src/services/streaming_audio_writer.py", line 109 in write_chunk
  File "/app/api/src/services/audio.py", line 183 in convert_audio
  File "/app/api/src/services/tts_service.py", line 108 in _process_chunk
  File "/app/api/src/services/tts_service.py", line 330 in generate_audio_stream
  File "/app/api/src/routers/openai_compatible.py", line 148 in stream_audio_chunks
  File "/app/api/src/routers/openai_compatible.py", line 245 in dual_output
  File "/usr/local/lib/python3.12/dist-packages/starlette/responses.py", line 246 in stream_response
  File "/usr/local/lib/python3.12/dist-packages/starlette/responses.py", line 266 in wrap
  File "/usr/lib/python3.12/asyncio/events.py", line 88 in _run
  File "/usr/lib/python3.12/asyncio/base_events.py", line 1987 in _run_once
  File "/usr/lib/python3.12/asyncio/base_events.py", line 641 in run_forever
  File "/usr/lib/python3.12/asyncio/base_events.py", line 674 in run_until_complete
  File "/usr/lib/python3.12/asyncio/runners.py", line 118 in run
  File "/usr/lib/python3.12/asyncio/runners.py", line 194 in run
  File "/usr/local/lib/python3.12/dist-packages/uvicorn/server.py", line 67 in run
  File "/usr/local/lib/python3.12/dist-packages/uvicorn/main.py", line 580 in run
  File "/usr/local/lib/python3.12/dist-packages/uvicorn/main.py", line 413 in main
  File "/usr/local/lib/python3.12/dist-packages/click/core.py", line 794 in invoke
  File "/usr/local/lib/python3.12/dist-packages/click/core.py", line 1226 in invoke
  File "/usr/local/lib/python3.12/dist-packages/click/core.py", line 1363 in main
  File "/usr/local/lib/python3.12/dist-packages/click/core.py", line 1442 in __call__
  File "/usr/local/bin/uvicorn", line 10 in <module>
Extension modules: numpy._core._multiarray_umath, numpy.linalg._umath_linalg, torch._C, torch._C._dynamo.autograd_compiler, torch._C._dynamo.eval_frame, torch._C._dynamo.guards, torch._C._dynamo.utils, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special, psutil._psutil_linux, psutil._psutil_posix, charset_normalizer.md, requests.packages.charset_normalizer.md, requests.packages.chardet.md, yaml._yaml, regex._regex, markupsafe._speedups, PIL._imaging, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._pcg64, numpy.random._mt19937, numpy.random._generator, numpy.random._philox, numpy.random._sfc64, numpy.random.mtrand, scipy._lib._ccallback_c, scipy.linalg._fblas, scipy.linalg._flapack, _cyutility, scipy._cyutility, scipy.linalg.cython_lapack, scipy.linalg._cythonized_array_utils, scipy.linalg._solve_toeplitz, scipy.linalg._decomp_lu_cython, scipy.linalg._matfuncs_schur_sqrtm, scipy.linalg._matfuncs_expm, scipy.linalg._linalg_pythran, scipy.linalg.cython_blas, scipy.linalg._decomp_update, scipy.sparse._sparsetools, _csparsetools, scipy.sparse._csparsetools, scipy.sparse.linalg._dsolve._superlu, scipy.sparse.linalg._eigen.arpack._arpack, scipy.sparse.linalg._propack._spropack, scipy.sparse.linalg._propack._dpropack, scipy.sparse.linalg._propack._cpropack, scipy.sparse.linalg._propack._zpropack, scipy.optimize._group_columns, scipy._lib.messagestream, scipy.optimize._trlib._trlib, scipy.optimize._lbfgsb, _moduleTNC, scipy.optimize._moduleTNC, scipy.optimize._slsqplib, scipy.optimize._minpack, scipy.optimize._lsq.givens_elimination, scipy.optimize._zeros, scipy._lib._uarray._uarray, scipy.special._ufuncs_cxx, scipy.special._ellip_harm_2, scipy.special._special_ufuncs, scipy.special._gufuncs, scipy.special._ufuncs, scipy.special._specfun, scipy.special._comb, scipy.linalg._decomp_interpolative, scipy.optimize._bglu_dense, scipy.optimize._lsap, scipy.spatial._ckdtree, 
scipy.spatial._qhull, scipy.spatial._voronoi, scipy.spatial._hausdorff, scipy.spatial._distance_wrap, scipy.spatial.transform._rotation, scipy.spatial.transform._rigid_transform, scipy.optimize._direct, srsly.ujson.ujson, srsly.msgpack._epoch, srsly.msgpack._packer, srsly.msgpack._unpacker, blis.cy, thinc.backends.cblas, cymem.cymem, preshed.maps, blis.py, thinc.backends.linalg, murmurhash.mrmr, thinc.backends.numpy_ops, thinc.layers.premap_ids, thinc.layers.sparselinear, spacy.symbols, preshed.bloom, spacy.strings, spacy.attrs, spacy.parts_of_speech, spacy.morphology, spacy.lexeme, spacy.tokens.morphanalysis, spacy.tokens.token, spacy.tokens.span, spacy.tokens.span_group, spacy.tokens._retokenize, spacy.tokens.doc, spacy.vectors, spacy.vocab, spacy.training.align, spacy.training.alignment_array, spacy.pipeline._parser_internals.nonproj, spacy.training.example, spacy.training.gold_io, spacy.matcher.levenshtein, spacy.matcher.matcher, spacy.matcher.dependencymatcher, spacy.matcher.phrasematcher, spacy.tokenizer, spacy.pipeline.pipe, spacy.pipeline.trainable_pipe, spacy.pipeline._parser_internals.stateclass, spacy.pipeline._parser_internals.transition_system, spacy.kb.kb, spacy.kb.candidate, spacy.kb.kb_in_memory, spacy.ml.parser_model, thinc.extra.search, spacy.pipeline._parser_internals._beam_utils, spacy.pipeline.transition_parser, spacy.pipeline._parser_internals.arc_eager, spacy.pipeline.dep_parser, spacy.pipeline._edit_tree_internals.edit_trees, spacy.pipeline.tagger, spacy.pipeline.morphologizer, spacy.pipeline._parser_internals.ner, spacy.pipeline.ner, spacy.pipeline.senter, spacy.pipeline.sentencizer, scipy.io.matlab._mio_utils, scipy.io.matlab._streams, scipy.io.matlab._mio5_utils, _cffi_backend, av._core, av.logging, av.bytesource, av.buffer, av.audio.format, av.error, av.dictionary, av.container.pyio, av.utils, av.option, av.descriptor, av.format, av.stream, av.container.streams, av.sidedata.motionvectors, av.sidedata.sidedata, av.opaque, av.packet, 
av.container.input, av.container.output, av.container.core, av.codec.context, av.video.format, av.video.reformatter, av.plane, av.video.plane, av.video.frame, av.video.stream, av.codec.hwaccel, av.codec.codec, av.frame, av.audio.layout, av.audio.plane, av.audio.frame, av.audio.stream, av.filter.pad, av.filter.link, av.filter.context, av.filter.graph, av.filter.filter, av.filter.loudnorm, av.audio.resampler, av.audio.codeccontext, av.audio.fifo, av.bitstream, av.video.codeccontext, spacy.pipeline.multitask, curated_tokenizers._bbpe, curated_tokenizers._spp, curated_tokenizers._wordpiece, fugashi.fugashi (total: 197)

xxnuo avatar Jul 25 '25 06:07 xxnuo

Same issue here!

FreedomLiX avatar Jul 25 '25 06:07 FreedomLiX