Validation error from Label Studio after docker-compose up -d of the ML backend
git clone https://github.com/heartexlabs/label-studio-ml-backend
cd label-studio-ml-backend/label_studio_ml/examples/simple_text_classifier
docker-compose up -d
Then I added it in Label Studio, but a validation error occurred, as below:

When I accessed it directly in the browser, it said:

Have you set up your labeling config in Label Studio?
Also it would be great if you could share ML logs.
@makseq Thank you, Sir!
Have you set up your labeling config in Label Studio?
Yes. I set up the integration in the dockerized Label Studio web admin UI.
Also it would be great if you could share ML logs.
How can I show the ML logs, please?
@fishfree Just copy the logs from your ML backend terminal.
@makseq I ran docker-compose logs -f, as below:
$ docker-compose logs -f
Attaching to server, redis
server | 2021-12-30 05:53:25,676 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
server | 2021-12-30 05:53:25,685 INFO RPC interface 'supervisor' initialized
server | 2021-12-30 05:53:25,685 CRIT Server 'inet_http_server' running without any HTTP authentication checking
server | 2021-12-30 05:53:25,686 INFO supervisord started with pid 1
server | 2021-12-30 05:53:26,689 INFO spawned: 'rq_00' with pid 11
server | 2021-12-30 05:53:26,691 INFO spawned: 'wsgi' with pid 12
server | 2021-12-30 05:53:27,829 INFO success: rq_00 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
server | 2021-12-30 05:53:27,829 INFO success: wsgi entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
redis | 1:C 30 Dec 2021 05:53:24.956 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis | 1:C 30 Dec 2021 05:53:24.956 # Redis version=6.2.6, bits=64, commit=00000000, modified=0, pid=1, just started
redis | 1:C 30 Dec 2021 05:53:24.956 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis | 1:M 30 Dec 2021 05:53:24.958 * monotonic clock: POSIX clock_gettime
redis | 1:M 30 Dec 2021 05:53:24.959 * Running mode=standalone, port=6379.
redis | 1:M 30 Dec 2021 05:53:24.959 # Server initialized
redis | 1:M 30 Dec 2021 05:53:24.959 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis | 1:M 30 Dec 2021 05:53:24.960 * Loading RDB produced by version 6.2.6
redis | 1:M 30 Dec 2021 05:53:24.960 * RDB age 92 seconds
redis | 1:M 30 Dec 2021 05:53:24.960 * RDB memory usage when created 0.77 Mb
redis | 1:M 30 Dec 2021 05:53:24.960 # Done loading RDB, keys loaded: 4, keys expired: 0.
redis | 1:M 30 Dec 2021 05:53:24.960 * DB loaded from disk: 0.000 seconds
redis | 1:M 30 Dec 2021 05:53:24.960 * Ready to accept connections
Then, when I click the button, the logs show no change.
Hi @fishfree Could you please check the logs folder for more log files (uwsgi.log and others)?
Also, can you check this URL: http://10.2.14.77:9090/ ?
@KonstantinKorotaev The uwsgi.log is as below:
[2022-02-09 05:02:14,383] [ERROR] [label_studio_ml.exceptions::exception_f::53] Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/label_studio_ml/exceptions.py", line 39, in exception_f
return f(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/label_studio_ml/api.py", line 49, in _setup
model = _manager.fetch(project, schema, force_reload, hostname=hostname, access_token=access_token)
File "/usr/local/lib/python3.7/site-packages/label_studio_ml/model.py", line 481, in fetch
model = cls.model_class(label_config=label_config, **kwargs)
File "./simple_text_classifier.py", line 30, in __init__
assert self.info['type'] == 'Choices'
AssertionError
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/label_studio_ml/exceptions.py", line 39, in exception_f
return f(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/label_studio_ml/api.py", line 49, in _setup
model = _manager.fetch(project, schema, force_reload, hostname=hostname, access_token=access_token)
File "/usr/local/lib/python3.7/site-packages/label_studio_ml/model.py", line 481, in fetch
model = cls.model_class(label_config=label_config, **kwargs)
File "./simple_text_classifier.py", line 30, in __init__
assert self.info['type'] == 'Choices'
AssertionError
[pid: 19|app: 0|req: 5/5] 192.168.48.1 () {34 vars in 458 bytes} [Wed Feb 9 05:02:14 2022] POST /setup => generated 737 bytes in 23 msecs (HTTP/1.1 500) 2 headers in 91 bytes (1 switches on core 0)
[pid: 19|app: 0|req: 6/6] 10.2.8.19 () {34 vars in 1377 bytes} [Wed Feb 9 05:02:26 2022] GET /setup => generated 178 bytes in 1 msecs (HTTP/1.1 405) 3 headers in 118 bytes (1 switches on core 0)
The output of the URL http://10.2.14.77:9090/ is as below:
{"model_dir":"/data/models","status":"UP"}
Thank you very much!
@fishfree What is your labeling config? simple_text_classifier can be used only with Choices unless it is modified.
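For reference, a minimal labeling config that uses the Choices control tag (the tag names and choice values below are just illustrative) would look something like this:

```xml
<View>
  <Text name="text" value="$text"/>
  <Choices name="sentiment" toName="text" choice="single">
    <Choice value="Positive"/>
    <Choice value="Negative"/>
  </Choices>
</View>
```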
Hello, I have the same problem when running simple_text_classifier. Does anyone know what the problem is?
(label_studio) tcexeexe@nvidia-server:~/dataDisk2/SHTEC/label-studio-ml-backend$ label-studio-ml init my_ml_backend --script label_studio_ml/examples/simple_text_classifier/simple_text_classifier.py --force
=> LABEL STUDIO HOSTNAME = http://localhost:8080
=> WARNING! API_KEY is not set
Congratulations! ML Backend has been successfully initialized in ./my_ml_backend
Now start it by using:
label-studio-ml start ./my_ml_backend
(label_studio) tcexeexe@nvidia-server:~/dataDisk2/SHTEC/label-studio-ml-backend$ label-studio-ml start my_ml_backend
=> LABEL STUDIO HOSTNAME = http://localhost:8080
=> WARNING! API_KEY is not set
* Serving Flask app "label_studio_ml.api" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
[2022-04-14 18:41:15,390] [WARNING] [werkzeug::_log::225] * Running on all addresses.
WARNING: This is a development server. Do not use it in a production deployment.
[2022-04-14 18:41:15,390] [INFO] [werkzeug::_log::225] * Running on http://10.168.1.217:9090/ (Press CTRL+C to quit)
[2022-04-14 18:41:33,721] [INFO] [werkzeug::_log::225] 10.168.1.217 - - [14/Apr/2022 18:41:33] "GET /health HTTP/1.1" 200 -
[2022-04-14 18:41:33,725] [ERROR] [label_studio_ml.exceptions::exception_f::53] Traceback (most recent call last):
File "/home/tcexeexe/dataDisk1/SHTEC/label-studio-ml-backend/label_studio_ml/exceptions.py", line 39, in exception_f
return f(*args, **kwargs)
File "/home/tcexeexe/dataDisk1/SHTEC/label-studio-ml-backend/label_studio_ml/api.py", line 50, in _setup
model = _manager.fetch(project, schema, force_reload, hostname=hostname, access_token=access_token)
File "/home/tcexeexe/dataDisk1/SHTEC/label-studio-ml-backend/label_studio_ml/model.py", line 492, in fetch
model = cls.model_class(label_config=label_config, **kwargs)
File "/home/tcexeexe/dataDisk2/SHTEC/label-studio-ml-backend/my_ml_backend/simple_text_classifier.py", line 34, in __init__
assert self.info['type'] == 'Choices'
AssertionError
Traceback (most recent call last):
File "/home/tcexeexe/dataDisk1/SHTEC/label-studio-ml-backend/label_studio_ml/exceptions.py", line 39, in exception_f
return f(*args, **kwargs)
File "/home/tcexeexe/dataDisk1/SHTEC/label-studio-ml-backend/label_studio_ml/api.py", line 50, in _setup
model = _manager.fetch(project, schema, force_reload, hostname=hostname, access_token=access_token)
File "/home/tcexeexe/dataDisk1/SHTEC/label-studio-ml-backend/label_studio_ml/model.py", line 492, in fetch
model = cls.model_class(label_config=label_config, **kwargs)
File "/home/tcexeexe/dataDisk2/SHTEC/label-studio-ml-backend/my_ml_backend/simple_text_classifier.py", line 34, in __init__
assert self.info['type'] == 'Choices'
AssertionError
Hi @tcexeexe, simple_text_classifier expects a labeling config with Choices; it seems you are using some other tag. If you want to use simple_text_classifier, change your labeling config to Choices, or modify the classifier to handle your config.
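To make the failure mode concrete, here is a standalone sketch (a hypothetical helper, not part of label-studio-ml) of the kind of check that fails at `assert self.info['type'] == 'Choices'`: the example backend inspects the project's labeling config and requires the control tag to be Choices, so any other control tag (e.g. Labels) trips the assertion.

```python
# Sketch: detect which control tag a Label Studio labeling config uses.
# CONTROL_TAGS and control_tag_type() are illustrative names, not library API.
import xml.etree.ElementTree as ET

CONTROL_TAGS = {"Choices", "Labels", "RectangleLabels", "TextArea"}

def control_tag_type(label_config: str) -> str:
    """Return the first known control tag found in a labeling config."""
    root = ET.fromstring(label_config)
    for el in root.iter():
        if el.tag in CONTROL_TAGS:
            return el.tag
    raise ValueError("no known control tag found in labeling config")

choices_config = """
<View>
  <Text name="text" value="$text"/>
  <Choices name="sentiment" toName="text" choice="single">
    <Choice value="Positive"/>
    <Choice value="Negative"/>
  </Choices>
</View>
"""

labels_config = """
<View>
  <Text name="text" value="$text"/>
  <Labels name="ner" toName="text">
    <Label value="PER"/>
  </Labels>
</View>
"""

# A Choices config passes the backend's type check; a Labels config would
# raise AssertionError, which is exactly the traceback shown above.
assert control_tag_type(choices_config) == "Choices"
assert control_tag_type(labels_config) == "Labels"
```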
Hi everyone, I have found the solution. Actually the problem is a mismatch between the labeling type in your config and the type expected by the ML example you are loading. The labeling type must match what the ML backend expects for the job, or it will give this error.
Closed as inactive.