clip-as-service
not able to start server
Prerequisites
Please fill in by replacing `[ ]` with `[x]`.
- [ ] Are you running the latest `bert-as-service`?
- [ ] Did you follow the installation and the usage instructions in `README.md`?
- [ ] Did you check the FAQ list in `README.md`?
- [ ] Did you perform a cursory search on existing issues?
System information
Some of this information can be collected via this script.
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- TensorFlow installed from (source or binary):
- TensorFlow version:
- Python version:
- `bert-as-service` version:
- GPU model and memory:
- CPU model and memory:
Description
Please replace `YOUR_SERVER_ARGS` and `YOUR_CLIENT_ARGS` accordingly. You can also write your own description for reproducing the issue.

I'm using this command to start the server:

```
bert-serving-start YOUR_SERVER_ARGS
```
```
/opt/anaconda3/lib/python3.8/site-packages/bert_serving/server/helper.py:175: UserWarning: Tensorflow 2.3.0 is not tested! It may or may not work. Feel free to submit an issue at https://github.com/hanxiao/bert-as-service/issues/
  warnings.warn('Tensorflow %s is not tested! It may or may not work. '
usage: /opt/anaconda3/bin/bert-serving-start -model_dir /uncased_L-12_H-768_A-12/ -num_worker=2 -max_seq_len 50
                   ARG   VALUE
             ckpt_name = bert_model.ckpt
           config_name = bert_config.json
                  cors = *
                   cpu = False
            device_map = []
         do_lower_case = True
    fixed_embed_length = False
                  fp16 = False
   gpu_memory_fraction = 0.5
         graph_tmp_dir = None
      http_max_connect = 10
             http_port = None
          mask_cls_sep = False
        max_batch_size = 256
           max_seq_len = 50
             model_dir = /uncased_L-12_H-768_A-12/
no_position_embeddings = False
      no_special_token = False
            num_worker = 2
         pooling_layer = [-2]
      pooling_strategy = REDUCE_MEAN
                  port = 5555
              port_out = 5556
         prefetch_size = 10
   priority_batch_size = 16
 show_tokens_to_client = False
       tuned_model_dir = None
               verbose = False
                   xla = False

I:VENTILATOR:[__i:__i: 67]:freeze, optimize and export graph, could take a while...
/opt/anaconda3/lib/python3.8/site-packages/bert_serving/server/helper.py:175: UserWarning: Tensorflow 2.3.0 is not tested! It may or may not work. Feel free to submit an issue at https://github.com/hanxiao/bert-as-service/issues/
  warnings.warn('Tensorflow %s is not tested! It may or may not work. '
E:GRAPHOPT:[gra:opt:154]:fail to optimize the graph!
Traceback (most recent call last):
  File "/opt/anaconda3/lib/python3.8/site-packages/bert_serving/server/graph.py", line 42, in optimize_graph
    tf = import_tf(verbose=args.verbose)
  File "/opt/anaconda3/lib/python3.8/site-packages/bert_serving/server/helper.py", line 186, in import_tf
    tf.logging.set_verbosity(tf.logging.DEBUG if verbose else tf.logging.ERROR)
AttributeError: module 'tensorflow' has no attribute 'logging'
Traceback (most recent call last):
  File "/opt/anaconda3/bin/bert-serving-start", line 8, in
```
Then this issue shows up:
...
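The `AttributeError` above is the classic TF 1.x vs. 2.x API break: `tf.logging` was removed in TensorFlow 2.0, and the 1.x API lives on under `tf.compat.v1`. A minimal sketch of a version-agnostic lookup, illustrated with stand-in objects instead of a real TensorFlow import so the pattern is runnable without installing either version (the stand-ins are not TensorFlow objects; only the attribute paths `tf.logging` and `tf.compat.v1.logging` are TensorFlow's real ones):

```python
from types import SimpleNamespace

# Stand-ins mimicking the two module layouts (NOT real TensorFlow):
tf1_style = SimpleNamespace(logging=SimpleNamespace(ERROR="tf1-error"))
tf2_style = SimpleNamespace(
    compat=SimpleNamespace(
        v1=SimpleNamespace(logging=SimpleNamespace(ERROR="tf2-error"))
    )
)

def get_logging(tf):
    """Return the 1.x-style logging API on either TF major version."""
    # On TF 1.x, tf.logging exists; on TF 2.x, fall back to tf.compat.v1.
    return getattr(tf, "logging", None) or tf.compat.v1.logging

assert get_logging(tf1_style).ERROR == "tf1-error"
assert get_logging(tf2_style).ERROR == "tf2-error"
```

`bert_serving/server/helper.py` calls `tf.logging.set_verbosity(...)` directly, which is why it crashes on TF 2.3 no matter what flags you pass.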
Hi there, I have a problem with starting the server. It actually gives two errors: one with `tf.logging`, and the other is `TypeError: cannot unpack non-iterable NoneType object`. Thanks.
Similar issue here; the dependency management seems to be very tricky.

The suggested fix is generally to downgrade to `tensorflow>=1.10`. However, older `tensorflow<=2.0` does not seem to be supported with Python 3.8:

> Python 3.8 support requires TensorFlow 2.2

Is `tensorflow>=2.2` support planned for `bert-serving-server`?
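The conflict described above can be stated as two version constraints with an empty intersection. A minimal pure-Python sketch (the function names here are illustrative, not part of any library; the version ranges are the ones quoted in this thread):

```python
def tf_major_minor(version: str) -> tuple:
    """Parse 'MAJOR.MINOR[.PATCH]' into a comparable (major, minor) tuple."""
    major, minor = version.split(".")[:2]
    return int(major), int(minor)

def works_with_bert_serving(tf_version: str) -> bool:
    # bert-serving-server targets the TF 1.x API, roughly 1.10 through 1.15.
    return (1, 10) <= tf_major_minor(tf_version) < (2, 0)

def works_with_python38(tf_version: str) -> bool:
    # Per the TensorFlow note quoted above: Python 3.8 needs TF >= 2.2.
    return tf_major_minor(tf_version) >= (2, 2)

assert works_with_bert_serving("1.15.0") and not works_with_bert_serving("2.3.0")
assert works_with_python38("2.3.0") and not works_with_python38("1.15.0")
# No single version satisfies both predicates, which is exactly the deadlock:
# on Python 3.8 there is no TensorFlow that bert-serving-server supports.
```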
I downgraded TensorFlow from 1.15 to 1.10, which solved the problem.