
I am getting the following NotImplementedError with spikingjelly

arghasen111 opened this issue 1 year ago · 5 comments

I am using spikingjelly 0.0.0.0.14. I am not sure why I am getting this error:

Comet is not installed, Comet logger will not be available.
2023-09-28 20:09:07.520963: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2023-09-28 20:09:07.546317: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-09-28 20:09:08.020003: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Namespace(dataset='gen1', path='../atis_data/atis/train_a/', num_classes=2, b=48, sample_size=100000, T=5, tbin=2, image_shape=(240, 304), epochs=50, lr=0.001, wd=0.0001, num_workers=4, train=True, test=False, device=0, precision=16, save_ckpt=True, comet_api=None, model='vgg-11', bn=True, pretrained_backbone=None, pretrained=None, extras=[640, 320, 320], min_ratio=0.05, max_ratio=0.8, aspect_ratios=[[2], [2, 3], [2, 3], [2, 3], [2], [2]], box_coder_weights=[10.0, 10.0, 5.0, 5.0], iou_threshold=0.5, score_thresh=0.01, nms_thresh=0.45, topk_candidates=200, detections_per_img=100)
[256, 512, 512, 640, 320, 320]
/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/lightning_fabric/connector.py:554: UserWarning: 16 is supported for historical reasons but its usage is discouraged. Please set your precision to 16-mixed instead!
  rank_zero_warn(
/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:508: UserWarning: You passed `Trainer(accelerator='cpu', precision='16-mixed')` but AMP with fp16 is not supported on CPU. Using `precision='bf16-mixed'` instead.
  rank_zero_warn(
Using bfloat16 Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/trainer/setup.py:176: PossibleUserWarning: GPU available but not used. Set `accelerator` and `devices` using `Trainer(accelerator='gpu', devices=1)`.
  rank_zero_warn(
`Trainer(limit_train_batches=1.0)` was configured so 100% of the batches per epoch will be used.
File loaded.
File loaded.
Number of parameters: 12652695

  | Name             | Type                        | Params
-----------------------------------------------------------------
0 | backbone         | DetectionBackbone           | 11.9 M
1 | anchor_generator | GridSizeDefaultBoxGenerator | 0
2 | head             | SSDHead                     | 742 K
-----------------------------------------------------------------
12.7 M    Trainable params
0         Non-trainable params
12.7 M    Total params
50.611    Total estimated model params size (MB)

Sanity Checking DataLoader 0:   0%|          | 0/2 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/ubinet_admin/Documents/argha_work/object-detection-with-spiking-neural-networks/object_detection.py", line 140, in <module>
    main()
  File "/home/ubinet_admin/Documents/argha_work/object-detection-with-spiking-neural-networks/object_detection.py", line 132, in main
    trainer.fit(module, train_dataloader, val_dataloader)
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 532, in fit
    call._call_and_handle_interrupt(
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/trainer/call.py", line 43, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 571, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 980, in _run
    results = self._run_stage()
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 1021, in _run_stage
    self._run_sanity_check()
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 1050, in _run_sanity_check
    val_loop.run()
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/loops/utilities.py", line 181, in _decorator
    return loop_run(self, *args, **kwargs)
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/loops/evaluation_loop.py", line 115, in run
    self._evaluation_step(batch, batch_idx, dataloader_idx)
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/loops/evaluation_loop.py", line 376, in _evaluation_step
    output = call._call_strategy_hook(trainer, hook_name, *step_kwargs.values())
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/trainer/call.py", line 294, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/strategies/strategy.py", line 393, in validation_step
    return self.model.validation_step(*args, **kwargs)
  File "/home/ubinet_admin/Documents/argha_work/object-detection-with-spiking-neural-networks/object_detection_module.py", line 104, in validation_step
    return self.step(batch, batch_idx, mode="val")
  File "/home/ubinet_admin/Documents/argha_work/object-detection-with-spiking-neural-networks/object_detection_module.py", line 55, in step
    features, head_outputs = self(events)
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ubinet_admin/Documents/argha_work/object-detection-with-spiking-neural-networks/object_detection_module.py", line 39, in forward
    features = self.backbone(events)
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ubinet_admin/Documents/argha_work/object-detection-with-spiking-neural-networks/models/detection_backbone.py", line 42, in forward
    feature_maps = self.model(x, classify=False)
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ubinet_admin/Documents/argha_work/object-detection-with-spiking-neural-networks/models/spiking_vgg.py", line 132, in forward
    x_seq = functional.seq_to_ann_forward(x.float(), self.features[0])
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/spikingjelly/clock_driven/functional.py", line 568, in seq_to_ann_forward
    y = m(y)
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/spikingjelly/clock_driven/neuron.py", line 1114, in forward
    spike_seq, self.v_seq = neuron_kernel.MultiStepParametricLIFNodePTT.apply(
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/torch/autograd/function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/spikingjelly/clock_driven/neuron_kernel.py", line 1107, in forward
    raise NotImplementedError
NotImplementedError

arghasen111 · Sep 28 '23 14:09

Do you use a TPU? CuPy cannot work on TPUs.
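The same applies to any non-CUDA device: the traceback ends in `neuron_kernel.MultiStepParametricLIFNodePTT`, which is the CuPy kernel path, and CuPy kernels need a CUDA device. Switching the multi-step neurons to the pure-PyTorch backend avoids that path; a minimal sketch, assuming the `clock_driven` API shown in your traceback:

```python
# A minimal sketch, assuming spikingjelly's clock_driven API: multi-step
# neurons take a `backend` argument; 'cupy' compiles CUDA kernels and needs
# a CUDA device, while 'torch' is the pure-PyTorch fallback that also runs
# on CPU.
import torch
from spikingjelly.clock_driven import neuron

node = neuron.MultiStepParametricLIFNode(init_tau=2.0, backend='torch')

x_seq = torch.rand(5, 2, 8)   # [T, batch, features]
spike_seq = node(x_seq)       # works on CPU with the 'torch' backend
print(spike_seq.shape)        # torch.Size([5, 2, 8])
```

The detection repo presumably constructs its neurons with `backend='cupy'` somewhere; changing that to `'torch'`, or making it conditional on CUDA availability, should avoid this error on non-CUDA hardware.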

fangwei123456 · Oct 08 '23 08:10

No. I am using CPU.

arghasen10 · Oct 11 '23 04:10

As the warning in your log says, AMP with fp16 is not supported on CPU.
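If you want fp16 AMP, it has to run on the GPU. A minimal sketch of the Trainer configuration, assuming the pytorch_lightning 2.x API that appears in your log:

```python
# A minimal sketch: fp16 AMP is CUDA-only in Lightning; on the CPU it falls
# back to bf16-mixed, as the warning in the log above shows.
from pytorch_lightning import Trainer

trainer = Trainer(
    accelerator='gpu',     # use the available CUDA device instead of the CPU
    devices=1,
    precision='16-mixed',  # fp16 automatic mixed precision; requires CUDA
)
```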

fangwei123456 · Oct 11 '23 04:10

Okay, I tried the GPU, but now I am getting the following `OSError: [Errno 24] Too many open files`.

Here it is:

Comet is not installed, Comet logger will not be available.
2023-10-11 10:29:25.291527: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2023-10-11 10:29:25.312584: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-10-11 10:29:25.698530: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Namespace(dataset='gen1', path='../atis_data/atis/train_a', num_classes=2, b=8, sample_size=100000, T=5, tbin=2, image_shape=(240, 304), epochs=50, lr=0.001, wd=0.0001, num_workers=4, train=True, test=False, device=0, precision=16, save_ckpt=True, comet_api=None, model='vgg-11', bn=True, pretrained_backbone=None, pretrained=None, extras=[640, 320, 320], min_ratio=0.05, max_ratio=0.8, aspect_ratios=[[2], [2, 3], [2, 3], [2, 3], [2], [2]], box_coder_weights=[10.0, 10.0, 5.0, 5.0], iou_threshold=0.5, score_thresh=0.01, nms_thresh=0.45, topk_candidates=200, detections_per_img=100)
[256, 512, 512, 640, 320, 320]
/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/lightning_fabric/connector.py:554: UserWarning: 16 is supported for historical reasons but its usage is discouraged. Please set your precision to 16-mixed instead!
  rank_zero_warn(
Using 16bit Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
`Trainer(limit_train_batches=1.0)` was configured so 100% of the batches per epoch will be used.
File loaded.
File loaded.
You are using a CUDA device ('NVIDIA RTX A2000 12GB') that has Tensor Cores. To properly utilize them, you should set `torch.set_float32_matmul_precision('medium' | 'high')` which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Number of parameters: 12652695

  | Name             | Type                        | Params
-----------------------------------------------------------------
0 | backbone         | DetectionBackbone           | 11.9 M
1 | anchor_generator | GridSizeDefaultBoxGenerator | 0
2 | head             | SSDHead                     | 742 K
-----------------------------------------------------------------
12.7 M    Trainable params
0         Non-trainable params
12.7 M    Total params
50.611    Total estimated model params size (MB)

Sanity Checking DataLoader 0:   0%|          | 0/2 [00:00<?, ?it/s]
/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/utilities/data.py:76: UserWarning: Trying to infer the `batch_size` from an ambiguous collection. The batch size we found is 8. To avoid any miscalculations, use `self.log(..., batch_size=batch_size)`.
  warning_cache.warn(
Sanity Checking DataLoader 0: 100%|████████████████████████████████████████████████████| 2/2 [00:00<00:00, 3.36it/s]
[0] val results:
creating index...
index created!
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=0.05s).
Accumulating evaluation results...
DONE (t=0.00s).
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.000
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.000
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Epoch 0:  13%|▏| 225/1682 [01:09<07:32, 3.22it/s, v_num=76, train_loss_bbox_step=2.900, train_loss_classif_step=0.83
Traceback (most recent call last):
  File "/home/ubinet_admin/anaconda3/lib/python3.11/multiprocessing/queues.py", line 244, in _feed
  File "/home/ubinet_admin/anaconda3/lib/python3.11/multiprocessing/reduction.py", line 51, in dumps
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/torch/multiprocessing/reductions.py", line 370, in reduce_storage
  File "/home/ubinet_admin/anaconda3/lib/python3.11/multiprocessing/reduction.py", line 198, in DupFd
  File "/home/ubinet_admin/anaconda3/lib/python3.11/multiprocessing/resource_sharer.py", line 48, in __init__
OSError: [Errno 24] Too many open files
Epoch 0:  13%|▏| 226/1682 [01:10<07:31, 3.22it/s, v_num=76, train_loss_bbox_step=4.100, train_loss_classif_step=0.77
Traceback (most recent call last):
  File "/home/ubinet_admin/anaconda3/lib/python3.11/multiprocessing/queues.py", line 244, in _feed
  File "/home/ubinet_admin/anaconda3/lib/python3.11/multiprocessing/reduction.py", line 51, in dumps
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/torch/multiprocessing/reductions.py", line 370, in reduce_storage
  File "/home/ubinet_admin/anaconda3/lib/python3.11/multiprocessing/reduction.py", line 198, in DupFd
  File "/home/ubinet_admin/anaconda3/lib/python3.11/multiprocessing/resource_sharer.py", line 48, in __init__
OSError: [Errno 24] Too many open files
Epoch 0:  13%|▏| 227/1682 [01:10<07:31, 3.22it/s, v_num=76, train_loss_bbox_step=2.960, train_loss_classif_step=0.70
Traceback (most recent call last):
  File "/home/ubinet_admin/anaconda3/lib/python3.11/multiprocessing/queues.py", line 244, in _feed
  File "/home/ubinet_admin/anaconda3/lib/python3.11/multiprocessing/reduction.py", line 51, in dumps
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/torch/multiprocessing/reductions.py", line 370, in reduce_storage
  File "/home/ubinet_admin/anaconda3/lib/python3.11/multiprocessing/reduction.py", line 198, in DupFd
  File "/home/ubinet_admin/anaconda3/lib/python3.11/multiprocessing/resource_sharer.py", line 48, in __init__
OSError: [Errno 24] Too many open files
Epoch 0:  14%|▏| 228/1682 [01:10<07:31, 3.22it/s, v_num=76, train_loss_bbox_step=3.440, train_loss_classif_step=0.70
Traceback (most recent call last):
  File "/home/ubinet_admin/anaconda3/lib/python3.11/multiprocessing/queues.py", line 244, in _feed
  File "/home/ubinet_admin/anaconda3/lib/python3.11/multiprocessing/reduction.py", line 51, in dumps
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/torch/multiprocessing/reductions.py", line 370, in reduce_storage
  File "/home/ubinet_admin/anaconda3/lib/python3.11/multiprocessing/reduction.py", line 198, in DupFd
  File "/home/ubinet_admin/anaconda3/lib/python3.11/multiprocessing/resource_sharer.py", line 48, in __init__
OSError: [Errno 24] Too many open files
Epoch 0:  14%|▏| 229/1682 [01:11<07:30, 3.22it/s, v_num=76, train_loss_bbox_step=3.610, train_loss_classif_step=0.85
Traceback (most recent call last):
  File "/home/ubinet_admin/anaconda3/lib/python3.11/multiprocessing/resource_sharer.py", line 145, in _serve
    send(conn, destination_pid)
  File "/home/ubinet_admin/anaconda3/lib/python3.11/multiprocessing/resource_sharer.py", line 50, in send
    reduction.send_handle(conn, new_fd, pid)
  File "/home/ubinet_admin/anaconda3/lib/python3.11/multiprocessing/reduction.py", line 183, in send_handle
    with socket.fromfd(conn.fileno(), socket.AF_UNIX, socket.SOCK_STREAM) as s:
  File "/home/ubinet_admin/anaconda3/lib/python3.11/socket.py", line 546, in fromfd
    nfd = dup(fd)
OSError: [Errno 24] Too many open files
Traceback (most recent call last):
  File "/home/ubinet_admin/Documents/argha_work/object-detection-with-spiking-neural-networks/object_detection.py", line 140, in <module>
    main()
  File "/home/ubinet_admin/Documents/argha_work/object-detection-with-spiking-neural-networks/object_detection.py", line 132, in main
    trainer.fit(module, train_dataloader, val_dataloader)
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 532, in fit
    call._call_and_handle_interrupt(
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/trainer/call.py", line 43, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 571, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 980, in _run
    results = self._run_stage()
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 1023, in _run_stage
    self.fit_loop.run()
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/loops/fit_loop.py", line 202, in run
    self.advance()
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/loops/fit_loop.py", line 355, in advance
    self.epoch_loop.run(self._data_fetcher)
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 133, in run
    self.advance(data_fetcher)
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 190, in advance
    batch = next(data_fetcher)
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/loops/fetchers.py", line 126, in __next__
    batch = super().__next__()
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/loops/fetchers.py", line 58, in __next__
    batch = next(iterator)
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/utilities/combined_loader.py", line 285, in __next__
    out = next(self._iterator)
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/pytorch_lightning/utilities/combined_loader.py", line 65, in __next__
    out[i] = next(self.iterators[i])
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
    data = self._next_data()
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1328, in _next_data
    idx, data = self._get_data()
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1294, in _get_data
    success, data = self._try_get_data()
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1132, in _try_get_data
    data = self._data_queue.get(timeout=timeout)
  File "/home/ubinet_admin/anaconda3/lib/python3.11/multiprocessing/queues.py", line 122, in get
    return _ForkingPickler.loads(res)
  File "/home/ubinet_admin/anaconda3/lib/python3.11/site-packages/torch/multiprocessing/reductions.py", line 307, in rebuild_storage_fd
    fd = df.detach()
  File "/home/ubinet_admin/anaconda3/lib/python3.11/multiprocessing/resource_sharer.py", line 58, in detach
    return reduction.recv_handle(conn)
  File "/home/ubinet_admin/anaconda3/lib/python3.11/multiprocessing/reduction.py", line 189, in recv_handle
    return recvfds(s, 1)[0]
  File "/home/ubinet_admin/anaconda3/lib/python3.11/multiprocessing/reduction.py", line 159, in recvfds
    raise EOFError
EOFError
Epoch 0:  14%|█▎        | 229/1682 [01:11<07:35, 3.19it/s, v_num=76, train_loss_bbox_step=3.610, train_loss_classif_step=0.857, train_loss_step=4.470]

arghasen10 · Oct 11 '23 05:10

Hi, can you provide minimal code to reproduce the errors?
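In the meantime: `[Errno 24]` during DataLoader iteration usually means the worker processes exhausted the file-descriptor limit, because PyTorch's default `file_descriptor` sharing strategy holds one open descriptor per tensor shared between processes. A common workaround, sketched below and not specific to your project, is to switch to the `file_system` strategy; lowering `num_workers` also reduces descriptor usage.

```python
# A sketch of a common workaround for fd exhaustion in DataLoader workers:
# share tensors through the file system instead of holding one open file
# descriptor per shared tensor. Call this once, before creating DataLoaders.
import torch.multiprocessing

torch.multiprocessing.set_sharing_strategy('file_system')
```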

fangwei123456 · Oct 11 '23 08:10