
Steps

Liu-JJ97 opened this issue 3 years ago • 21 comments

I don't know which one to choose:

  1. Install Sedna on the cloud and the edge respectively, prepare a big model on the cloud, prepare a little model on the edge, and create the services separately.
  2. Install Sedna in the cloud only, prepare both images (big and little) in the cloud, and create the services separately.

Liu-JJ97 avatar Sep 17 '21 05:09 Liu-JJ97

With JointInferenceService, the localcontroller should be ready on all nodes after a single Sedna installation. Then follow the example to run your service.

Recommended Practices:

  1. Install Sedna;
  2. Prepare the image with the big model on the CloudNode, and the image with the little model on the other nodes;
  3. Start the jobs.

By the way, you can try out the post-installation actions here.

JoeyHwong-gk avatar Sep 17 '21 06:09 JoeyHwong-gk

After the "Mock Video Stream for Inference in Edge Side" step, there are four pictures in /joint_inference/output/output/. What should I do to check the inference result?

Liu-JJ97 avatar Sep 23 '21 08:09 Liu-JJ97

(screenshot of the output directory)

Liu-JJ97 avatar Sep 23 '21 08:09 Liu-JJ97

In the JointInferenceService example, the inference behavior is defined here: inference runs on every 10th frame, and the inference results are then saved to the output directory.

In this function, you can find out which folder each result (little_model_inference_result, big_model_inference_result) is saved to.

In this example, output stores the final result for every sampled frame (rule: result = big_model_result if the big model was invoked, otherwise little_model_result), hard_example_cloud_inference_output stores the big-model results of hard examples, and hard_example_edge_inference_output stores the little-model results of hard examples.

We simulate a scenario where edge device resources are limited, so a little model with lower precision is deployed there for inference. When a sample is difficult to predict (we call it a hard example), the big model on another node is invoked for inference.
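To make the data flow above concrete, here is a minimal, hypothetical Python sketch of the edge-side loop (frame sampling every 10 frames, hard-example dispatch, and where each result lands). The function names and the threshold are placeholders for illustration, not the actual Sedna example code:

```python
import os
import cv2  # assumes opencv-python is installed

# Output folders taken from the example's environment variables (inside the container).
ALL_OUTPUT = "/data/output"
CLOUD_OUTPUT = "/data/hard_example_cloud_inference_output"
EDGE_OUTPUT = "/data/hard_example_edge_inference_output"

def little_model_predict(frame):
    """Placeholder for the little (edge) model; returns per-box confidence scores."""
    return []

def big_model_predict(frame):
    """Placeholder for the big (cloud) model invoked over the network."""
    return []

def is_hard_example(scores, threshold=0.9):
    """Placeholder HEM check: treat low-confidence frames as hard examples."""
    return not scores or min(scores) < threshold

def save_frame(frame, folder, index):
    # In the real example the detection boxes are drawn on the frame before saving.
    os.makedirs(folder, exist_ok=True)
    cv2.imwrite(os.path.join(folder, f"{index:06d}.jpeg"), frame)

def run(video_url="rtsp://localhost/video", infer_every=10):
    cap = cv2.VideoCapture(video_url)
    index = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        index += 1
        if index % infer_every:              # inference only on every 10th frame
            continue
        scores = little_model_predict(frame)  # edge inference first
        if is_hard_example(scores):           # hard example -> also ask the cloud
            save_frame(frame, EDGE_OUTPUT, index)
            scores = big_model_predict(frame)
            save_frame(frame, CLOUD_OUTPUT, index)
        save_frame(frame, ALL_OUTPUT, index)   # final result: big if hard, else little
```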

JoeyHwong-gk avatar Sep 23 '21 12:09 JoeyHwong-gk

Yes, I understand the idea of this project. What I want to ask is: when I execute "ffmpeg -re -i /data/video/video.mp4 -vcodec libx264 -f rtsp rtsp://localhost/video" (that is, the last step in the document), the result is the picture above. What should I do to get the verification result? What command should be executed to get the final picture result shown in the document?

Liu-JJ97 avatar Sep 23 '21 12:09 Liu-JJ97

My folders "hard_example_cloud_inference_output" and "hard_example_edge_inference_output" are empty. Is there something wrong?

Liu-JJ97 avatar Sep 23 '21 12:09 Liu-JJ97

Yes, I understand the idea of this project. What I want to ask is: when I execute "ffmpeg -re -i /data/video/video.mp4 -vcodec libx264 -f rtsp rtsp://localhost/video" (that is, the last step in the document), the result is the picture above. What should I do to get the verification result? What command should be executed to get the final picture result shown in the document?

ffmpeg processes the video content and transmits each frame to the program for inference; this raw data is not saved. What you see in your output directory are the final results. Just download them to view the helmet detection results.

"hard_example_cloud_inference_output" and "hard_example_edge_inference_output" only save the inference results of hard examples.

JoeyHwong-gk avatar Sep 24 '21 00:09 JoeyHwong-gk

There is no output in the /joint_inference/output directory.

Status:
  Active:  1
  Conditions:
    Last Heartbeat Time:   2021-11-09T09:25:07Z
    Last Transition Time:  2021-11-09T09:25:07Z
    Message:               services "helmet-detection-inference-example-cloud" already exists
    Status:                True
    Type:                  Failed
  Failed:      1
  Start Time:  2021-11-09T09:25:07Z
Events:

root@edgenode1:~# kubectl get pod
NAME                                             READY   STATUS    RESTARTS   AGE
helmet-detection-inference-example-cloud-8v26p   1/1     Running   0          7m5s

And there is no helmet-detection-inference-example-edge pod. Do you know how to solve this problem?

15926273249 avatar Nov 09 '21 09:11 15926273249

PTAL @llhuii @JimmyYang20

JoeyHwong-gk avatar Nov 17 '21 00:11 JoeyHwong-gk

There is no output in the /joint_inference/output directory.

Status:
  Active:  1
  Conditions:
    Last Heartbeat Time:   2021-11-09T09:25:07Z
    Last Transition Time:  2021-11-09T09:25:07Z
    Message:               services "helmet-detection-inference-example-cloud" already exists
    Status:                True
    Type:                  Failed
  Failed:      1
  Start Time:  2021-11-09T09:25:07Z
Events:

root@edgenode1:~# kubectl get pod
NAME                                             READY   STATUS    RESTARTS   AGE
helmet-detection-inference-example-cloud-8v26p   1/1     Running   0          7m5s

And there is no helmet-detection-inference-example-edge pod. Do you know how to solve this problem?

I also get this problem. Did you solve it?

PHLens avatar Dec 02 '21 15:12 PHLens

@JimmyYang20

llhuii avatar Dec 03 '21 02:12 llhuii

@PHLens can you give more info? e.g. kubectl get nodes -o wide; kubectl get ji -o yaml

JimmyYang20 avatar Dec 03 '21 02:12 JimmyYang20

@JimmyYang20 Sorry about the late reply. Here is the info:

kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
master-node1   Ready    control-plane,master   52d   v1.21.5
nano           Ready    agent,edge             98m   v1.19.3-kubeedge-v1.8.1

kubectl get ji
NAME                                 AGE
helmet-detection-inference-example   2m31s

kubectl get ji -o yaml

apiVersion: v1
items:
- apiVersion: sedna.io/v1alpha1
  kind: JointInferenceService
  metadata:
    creationTimestamp: "2021-12-13T07:14:11Z"
    generation: 1
    name: helmet-detection-inference-example
    namespace: default
    resourceVersion: "5140963"
    uid: 847c5d65-03a3-4305-a274-366e793e6409
  spec:
    cloudWorker:
      model:
        name: helmet-detection-inference-big-model
      template:
        spec:
          containers:
          - env:
            - name: input_shape
              value: 544,544
            image: kubeedge/sedna-example-joint-inference-helmet-detection-big:v0.3.0
            imagePullPolicy: IfNotPresent
            name: big-model
            resources:
              requests:
                memory: 2Gi
          nodeName: master-node1
    edgeWorker:
      hardExampleMining:
        name: IBT
        parameters:
        - key: threshold_img
          value: "0.9"
        - key: threshold_box
          value: "0.9"
      model:
        name: helmet-detection-inference-little-model
      template:
        spec:
          containers:
          - env:
            - name: input_shape
              value: 416,736
            - name: video_url
              value: rtsp://localhost/video
            - name: all_examples_inference_output
              value: /data/output
            - name: hard_example_cloud_inference_output
              value: /data/hard_example_cloud_inference_output
            - name: hard_example_edge_inference_output
              value: /data/hard_example_edge_inference_output
            image: kubeedge/sedna-example-joint-inference-helmet-detection-little:v0.3.0
            imagePullPolicy: IfNotPresent
            name: little-model
            resources:
              limits:
                memory: 2Gi
              requests:
                cpu: 100m
                memory: 64M
            volumeMounts:
            - mountPath: /data/
              name: outputdir
          nodeName: nano
          volumes:
          - hostPath:
              path: /joint_inference/output
              type: Directory
            name: outputdir
  status:
    active: 1
    conditions:
    - lastHeartbeatTime: "2021-12-13T07:14:11Z"
      lastTransitionTime: "2021-12-13T07:14:11Z"
      status: "True"
      type: Running
    - lastHeartbeatTime: "2021-12-13T07:14:11Z"
      lastTransitionTime: "2021-12-13T07:14:11Z"
      message: the worker of service failed
      reason: workerFailed
      status: "True"
      type: Failed
    failed: 1
    startTime: "2021-12-13T07:14:11Z"
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

PHLens avatar Dec 13 '21 07:12 PHLens

There is no output in the /joint_inference/output directory.

Status:
  Active:  1
  Conditions:
    Last Heartbeat Time:   2021-11-09T09:25:07Z
    Last Transition Time:  2021-11-09T09:25:07Z
    Message:               services "helmet-detection-inference-example-cloud" already exists
    Status:                True
    Type:                  Failed
  Failed:      1
  Start Time:  2021-11-09T09:25:07Z
Events:

root@edgenode1:~# kubectl get pod
NAME                                             READY   STATUS    RESTARTS   AGE
helmet-detection-inference-example-cloud-8v26p   1/1     Running   0          7m5s

And there is no helmet-detection-inference-example-edge pod. Do you know how to solve this problem?

I also get this problem. Did you solve it?

Yes, the service deployment file has not been updated. Try to modify the version number from v0.3.0 to v0.4.0, e.g. image: kubeedge/sedna-example-joint-inference-helmet-detection-little:v0.3.0 -> image: kubeedge/sedna-example-joint-inference-helmet-detection-little:v0.4.0. There are two places that need to be modified (the big-model image and the little-model image).

Liu-JJ97 avatar Dec 14 '21 09:12 Liu-JJ97

Yes, I understand the idea of this project. What I want to ask is: when I execute "ffmpeg -re -i /data/video/video.mp4 -vcodec libx264 -f rtsp rtsp://localhost/video" (that is, the last step in the document), the result is the picture above. What should I do to get the verification result? What command should be executed to get the final picture result shown in the document?

ffmpeg processes the video content and transmits each frame to the program for inference; this raw data is not saved. What you see in your output directory are the final results. Just download them to view the helmet detection results.

"hard_example_cloud_inference_output" and "hard_example_edge_inference_output" only save the inference results of hard examples.

The version number in the service deployment file has not been updated (image: kubeedge/sedna-example-joint-inference-helmet-detection-big:v0.3.0). Some people will miss this problem; you could add a hint about it in the installation document.

Liu-JJ97 avatar Dec 14 '21 09:12 Liu-JJ97

There is no output in the /joint_inference/output directory.

Status:
  Active:  1
  Conditions:
    Last Heartbeat Time:   2021-11-09T09:25:07Z
    Last Transition Time:  2021-11-09T09:25:07Z
    Message:               services "helmet-detection-inference-example-cloud" already exists
    Status:                True
    Type:                  Failed
  Failed:      1
  Start Time:  2021-11-09T09:25:07Z
Events:

root@edgenode1:~# kubectl get pod
NAME                                             READY   STATUS    RESTARTS   AGE
helmet-detection-inference-example-cloud-8v26p   1/1     Running   0          7m5s

And there is no helmet-detection-inference-example-edge pod. Do you know how to solve this problem?

I also get this problem. Did you solve it?

Yes, the service deployment file has not been updated. Try to modify the version number from v0.3.0 to v0.4.0, e.g. image: kubeedge/sedna-example-joint-inference-helmet-detection-little:v0.3.0 -> image: kubeedge/sedna-example-joint-inference-helmet-detection-little:v0.4.0. There are two places that need to be modified (the big-model image and the little-model image).

OK, thanks, I'll try it.

PHLens avatar Dec 14 '21 09:12 PHLens

I would like to ask a question. In this project, does cloud-edge joint inference mean that, for the same request, the cloud and the edge run inference at the same time and there is no cooperation between them? Is that right?

Liu-JJ97 avatar Dec 14 '21 10:12 Liu-JJ97

No, there is an HEM (hard example mining) algorithm; the edge uses it to determine whether or not to send the image data to the cloud. If the image is a hard example, the edge sends it to the cloud and the cloud does the inference; if not, the edge does the inference alone.
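To make the decision rule concrete, here is a minimal, hypothetical sketch of how an IBT-style (image-box-threshold) check, using the threshold_img / threshold_box parameters shown in the service spec earlier, could decide this. It is illustrative only, not Sedna's actual implementation:

```python
def ibt_is_hard_example(scores, threshold_img=0.9, threshold_box=0.9):
    """Illustrative IBT-style hard-example check (not Sedna's actual code).

    scores: per-box confidence scores from the little (edge) model for one frame.
    The frame counts as a hard example when the fraction of confident boxes is
    too low, so the edge forwards it to the big (cloud) model.
    """
    if not scores:                       # nothing detected -> let the cloud double-check
        return True
    confident = sum(1 for s in scores if s >= threshold_box)
    return confident / len(scores) < threshold_img

# Only 1 of 3 boxes is confident: 0.33 < 0.9, so this frame goes to the cloud.
print(ibt_is_hard_example([0.95, 0.40, 0.60]))   # True
```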

PHLens avatar Dec 14 '21 10:12 PHLens

After the "Mock Video Stream for Inference in Edge Side" step, there are four pictures in /joint_inference/output/output/. How should I check the inference result?

When I execute "ffmpeg -re -i /data/video/video.mp4 -vcodec libx264 -f rtsp rtsp://localhost/video" (that is, the last step in the document), the output directory /joint_inference/output is empty. Why?

916264367 avatar Mar 10 '22 12:03 916264367

When I execute "ffmpeg -re -i /data/video/video.mp4 -vcodec libx264 -f rtsp rtsp://localhost/video" (that is, the last step in the document), the output directory /joint_inference/output is empty. Why?

916264367 avatar Mar 10 '22 12:03 916264367

There is no output in the /joint_inference/output directory.

Status:
  Active:  1
  Conditions:
    Last Heartbeat Time:   2021-11-09T09:25:07Z
    Last Transition Time:  2021-11-09T09:25:07Z
    Message:               services "helmet-detection-inference-example-cloud" already exists
    Status:                True
    Type:                  Failed
  Failed:      1
  Start Time:  2021-11-09T09:25:07Z
Events:

root@edgenode1:~# kubectl get pod
NAME                                             READY   STATUS    RESTARTS   AGE
helmet-detection-inference-example-cloud-8v26p   1/1     Running   0          7m5s

And there is no helmet-detection-inference-example-edge pod. Do you know how to solve this problem?

Me too. There is no output in the directory /joint_inference/output. Have you solved it?

916264367 avatar Mar 11 '22 00:03 916264367