
No USB device

Open yoich opened this issue 4 years ago • 41 comments

Hi, I bought an OAK-1. depthai_demo.py worked fine, then I tried the depthai-experiments. I ran the people-tracker main.py, but I got the message `No USB device [03e7:2485], still looking... 10.045s NOT FOUND, err code 5`. The full log is below. I'm working on Windows 10, and I read this issue: https://github.com/luxonis/depthai-experiments/issues/36. So I used virtualenv and created a separate environment for each of depthai_demo.py and people-tracker/main.py. What do I need to do to resolve this?


python .\main.py
XLink initialized.
Sending internal device firmware
Successfully connected to device.
Loading config file
Attempting to open stream config_d2h
watchdog started
Successfully opened stream config_d2h with ID #0!

Closing stream config_d2h: ... Closing stream config_d2h: DONE. EEPROM data: invalid / unprogrammed D:\home\iprediction\depthAI\depthai-experiments\people-tracker\model\config.json depthai: Calibration file is not specified, will use default setting; config_h2d json: {"_board":{"calib_data":[0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0],"mesh_left":[0.0],"mesh_right":[0.0]},"_load_inBlob":true,"_pipeline":{"_streams":[{"name":"metaout"},{"name":"previewout"}]},"ai":{"NCEs":1,"NN_config":{"NN_family":"mobilenet","confidence_threshold":0.5,"output_format":"detection"},"blob0_size":2290560,"blob1_size":0,"calc_dist_to_bb":false,"camera_input":"rgb","cmx_slices":7,"keep_aspect_ratio":true,"num_stages":1,"shaves":7},"app":{"sync_sequence_numbers":false,"sync_video_meta_streams":false,"usb_chunk_KiB":64},"board":{"clear-eeprom":false,"left_fov_deg":69.0,"left_to_rgb_distance_m":0.0,"left_to_right_distance_m":0.03500000014901161,"name":"","override-eeprom":false,"revision":"","rgb_fov_deg":69.0,"stereo_center_crop":false,"store-to-eeprom":false,"swap-left-and-right-cameras":false},"camera":{"mono":{"fps":30.0,"resolution_h":720,"resolution_w":1280},"rgb":{"fps":30.0,"resolution_h":1080,"resolution_w":1920}},"depth":{"depth_limit_mm":10000,"lr_check":false,"median_kernel_size":7,"padding_factor":0.30000001192092896,"warp_rectify":{"edge_fill_color":-1,"mirror_frame":true,"use_mesh":false}},"ot":{"confidence_threshold":0.5,"max_tracklets":20}} size of input string json_config_obj to config_h2d is ->1589 size of json_config_obj that is expected to be sent to config_h2d is ->1048576 Attempting to open stream config_h2d Successfully opened stream config_h2d with ID #1! Writing 1048576 bytes to config_h2d !!! XLink write successful: config_h2d (1048576) Closing stream config_h2d: ... Closing stream config_h2d: DONE. Creating observer stream host_capture: ... Attempting to open stream host_capture Successfully opened stream host_capture with ID #0! Creating observer stream host_capture: DONE. Read: 2290560 Attempting to open stream inBlob Successfully opened stream inBlob with ID #1! Writing 2290560 bytes to inBlob !!! XLink write successful: inBlob (2290560) Closing stream inBlob: ... Closing stream inBlob: DONE. depthai: done sending Blob file D:\home\iprediction\depthAI\depthai-experiments\people-tracker\model\model.blob Attempting to open stream outBlob Successfully opened stream outBlob with ID #2! Closing stream outBlob: ... Closing stream outBlob: DONE. Input layer : Name: data Index: 0 Element type: uint8 Element size: 1byte Offset: 0 byte Dimensions: [Batch : 1, Channel : 3, Height : 320, Width : 544]

Output layer : Name: detection_out Index: 0 Element type: float16 Element size: 2 bytes Offset: 0 byte Dimensions: [Batch : 1, Channel : 1, Height : 200, Width : 7]

CNN to depth bounding-box mapping: start(0, 0), max_size(0, 0)
Host stream start:metaout
Opening stream for read: metaout
Attempting to open stream metaout
Successfully opened stream metaout with ID #3!
Starting thread for stream: metaout
Host stream start:previewout
Opening stream for read: previewout
Attempting to open stream previewout
Started thread for stream: metaout
Successfully opened stream previewout with ID #4!
Starting thread for stream: previewout
depthai: INIT OK!
Started thread for stream: previewout
XLink initialized.
No USB device [03e7:2485], still looking... 10.045s NOT FOUND, err code 5
depthai: Error initializing xlink
device is not initialized
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Python38\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\Python38\lib\multiprocessing\spawn.py", line 125, in _main
    prepare(preparation_data)
  File "C:\Python38\lib\multiprocessing\spawn.py", line 236, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "C:\Python38\lib\multiprocessing\spawn.py", line 287, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "C:\Python38\lib\runpy.py", line 265, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "C:\Python38\lib\runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "C:\Python38\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "D:\home\iprediction\depthAI\depthai-experiments\people-tracker\main.py", line 13, in <module>
    d = DepthAI()
  File "D:\home\iprediction\depthAI\depthai-experiments\people-tracker\depthai_utils.py", line 21, in __init__
    raise RuntimeError("Error creating a pipeline!")
RuntimeError: Error creating a pipeline!

yoich avatar Jan 27 '21 16:01 yoich

Maybe it's a faulty USB port; try other ones. If that doesn't work, try restarting your PC. If the restart doesn't work, try running pip install -r requirements.txt in people-tracker.

User8395 avatar Feb 05 '21 19:02 User8395

Thanks for the assist, @qaqak. And sorry about the delay, @yoich.

So when you ran the other example, did you run python3 -m pip install -r requirements.txt?

Right now we kind of containerize the demos so that as we do API changes, the demo still works with the API for which it was written.

So I think this may be the issue, but not sure (way behind because of shipping issues).

Luxonis-Brandon avatar Feb 06 '21 04:02 Luxonis-Brandon

> Right now we kind of containerize the demos so that as we do API changes, the demo still works with the API for which it was written.

What do you mean by containerize?

User8395 avatar Feb 06 '21 13:02 User8395

Hi, qaqak and Luxonis-Brandon. Thank you for your advice. Some demo programs (face detection) ran and I could see images, so I think the USB port is fine. https://ibb.co/cgn38gq

I did pip install -r requirements.txt after activating the venv. Is there any difference between pip install -r requirements.txt and python3 -m pip install -r requirements.txt?

I just started. Sorry.

yoich avatar Feb 08 '21 02:02 yoich

> Right now we kind of containerize the demos so that as we do API changes, the demo still works with the API for which it was written.
>
> What do you mean by containerize?

Sorry, by that I mean that each demo has the API version it was written for pinned in its own requirements, so those requirements should be installed when running each demo.

Luxonis-Brandon avatar Feb 08 '21 04:02 Luxonis-Brandon

> Hi, qaqak and Luxonis-Brandon. Thank you for your advice. Some demo programs (face detection) ran and I could see images, so I think the USB port is fine. https://ibb.co/cgn38gq
>
> I did pip install -r requirements.txt after activating the venv. Is there any difference between pip install -r requirements.txt and python3 -m pip install -r requirements.txt?
>
> I just started. Sorry.

So @realWadim could you please help to explain the difference (if any) here? I don't know.

Luxonis-Brandon avatar Feb 08 '21 04:02 Luxonis-Brandon

It's safer to use python3 -m pip install -r requirements.txt, since it explicitly uses python3 as the interpreter. If we're in a virtual environment that uses python3 by default, we can simply use pip install -r requirements.txt.
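As a quick sanity check, here is a small standard-library sketch that prints which interpreter is running and where it installs packages; run it with plain python/python3 inside and outside the venv to see whether pip and python3 -m pip would target the same environment:

```python
import sys
import sysconfig

# The interpreter actually running this script; inside an activated venv this
# should point into the venv directory, not the system Python install.
print("interpreter:", sys.executable)

# The site-packages directory this interpreter (and its pip) installs into.
print("site-packages:", sysconfig.get_paths()["purelib"])
```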

realWadim avatar Feb 09 '21 12:02 realWadim

Hi, I did python -m pip install -r requirements.txt again, but nothing changed. This is the result of pip freeze:

> pip freeze
depthai==0.3.0.0+aeda4a9fdef6edc9f826b7dc354a123d1611a7c6
numpy==1.19.5
opencv-python==4.5.1.48
scipy==1.4.1

yoich avatar Feb 10 '21 06:02 yoich

Try python -m pip uninstall -r requirements.txt, then python -m pip install -r requirements.txt.

Thanks, Qasim

User8395 avatar Feb 10 '21 16:02 User8395

Hi, qaqak. Thank you for your advice, but it did not work. Now I tried to run another example, "people-counter", and I got an error and could not run it. At the moment I can run only depthai_demo.py. What am I doing wrong?

yoich avatar Feb 13 '21 13:02 yoich

Hi, I could run 'gen2-human-pose' with python main.py -cam.

yoich avatar Feb 13 '21 13:02 yoich

I'm actually facing the same issue, even though most other scripts seem to work without any problems.

TannerGilbert avatar Feb 13 '21 13:02 TannerGilbert

Hi,

I still could not run 'people-tracker/main.py'.

But I could run 'people-counter/main.py' by adding `if __name__ == '__main__':`.
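For reference, a minimal sketch of that guard, assuming main.py creates DepthAI() at module level as in the traceback above (the main() wrapper name here is just illustrative):

```python
from depthai_utils import DepthAI  # helper class from the example, per the traceback

def main():
    d = DepthAI()  # the call that currently runs at import time (main.py line 13)
    # ... the rest of the example's capture/display loop would go here; the exact
    # calls depend on depthai_utils.py, so they are omitted in this sketch.

# On Windows, multiprocessing uses the "spawn" start method, which re-imports
# main.py in each worker process. Without this guard, the re-import creates a
# second DepthAI() and tries to claim the device again, which surfaces as
# "No USB device [03e7:2485] ... NOT FOUND" and "Error creating a pipeline!".
if __name__ == '__main__':
    main()
```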

Just for your information.

yoich avatar Feb 13 '21 15:02 yoich

Thanks. Not sure on this one; bringing up this fix internally. And thanks @yoich!

Luxonis-Brandon avatar Feb 13 '21 18:02 Luxonis-Brandon

I am having the same issue when trying the coronamask sample. I was able to run the demo hello world example, which uses the latest version of the API, but somehow when I try the coronamask sample (after creating a virtual environment for its requirements), I get the device-not-found error.
Is there any plan for these examples to be upgraded to the latest depthai API?

Thanks

magallardo avatar Mar 16 '21 22:03 magallardo

Hi @magallardo, sorry about the trouble. So ArduCam actually beat us to the punch on making an updated version using the Gen2 API: https://github.com/OAKChina/depthai-examples/tree/master/face_mask

That said, I haven't checked if these have been upgraded to the latest stable release of Gen2, as they are a month or two old now, so I'm thinking not.

So actually I'll ask the team if we can go through and do pull-requests on all of them to get them to the latest DepthAI API.

And we are also in the process of migrating all of those over to this GitHub repo as well, which will update the face mask example. Sorry again about the trouble.

-Brandon

Luxonis-Brandon avatar Mar 16 '21 22:03 Luxonis-Brandon

@Luxonis-Brandon Thanks for the prompt response.

I have a question regarding the Gen2 API. I was looking at the requirements file on the ArduCam link you provided for the face_mask example, and it is using depthai==0.0.2.1+ab14564b91fd7cdd98a70ccda438cf1482839cdd

However, in the requirements file for the coronamask in the luxonis/depthai-experiments repository, the depthai used is: depthai==0.3.0.0+aeda4a9fdef6edc9f826b7dc354a123d1611a7c6

So, I am a little bit confused with the versioning scheme. Which version is supposed to be newer or Gen2?

Thanks again. Marcelo

magallardo avatar Mar 16 '21 23:03 magallardo

Hi Marcelo,

Sorry about the confusion. And agreed the numbering was quite confusing WRT Gen1 and Gen2 prior to the formal Gen2 release.

So ArduCam wrote those a while back (thanks to them for being really early adopters of Gen2), and they're actually using a pre-release version of the Gen2 API, which used that confusing 0.0.2.x format.

So now that Gen2 is formally released, the version numbering is WAY clearer.

Gen1 is now 1.x (e.g. here). Gen2 is now 2.x (e.g. here for Python and here for C++).

Those are formal releases, but you can always just pull the latest from those respective repositories.

Anyway, going forward there won't be any of those odd 0.0.2.x releases; that was just a temporary stop-gap while we were developing the Gen2 API and it was not yet stable. And I just struck up a conversation internally at Luxonis and with ArduCam about Luxonis helping to update all those examples to the formal Gen2 2.x release.

And aside from that exception, anything with version <2 is Gen1, and anything 2 and above is Gen2. (Ideally any code with 0.0.2.x references will be replaced over time as we catch stuff like that. The outdated examples there were my fault, as I had forgotten about them until a Luxonis engineer reminded me earlier this week.)
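If it helps, a quick way to check which generation a given environment has installed is something like the sketch below (standard library only; the version-to-generation mapping is just the one described above):

```python
from importlib.metadata import version  # Python 3.8+

v = version("depthai")
print("installed depthai:", v)

# Mapping per the note above: 2.x and above is Gen2; anything below 2 is Gen1,
# with the old 0.0.2.x builds being the one pre-release-Gen2 exception.
if v.startswith("2.") or v.startswith("0.0.2."):
    print("Gen2 API (formal release or 0.0.2.x pre-release)")
else:
    print("Gen1 API")
```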

Thoughts?

Thanks, Brandon

Luxonis-Brandon avatar Mar 16 '21 23:03 Luxonis-Brandon

@Luxonis-Brandon Thanks again for the pointers.

I have another question. While running these samples I have found that when I first run a sample with version 1 of the API and then run a sample with version 2, the applications start giving the no-device-found error. I was able to recover from that by restarting the device (in my case an OAK-D). Is that expected, and how can I get around this issue without having to restart the device?

Thanks again, Marcelo

magallardo avatar Mar 17 '21 15:03 magallardo

Does it give the no device error with version 2, version 1, or both versions?

Thanks, Qasim

User8395 avatar Mar 17 '21 16:03 User8395

@qaqak The error occurs if I first run a sample with the Gen1 API and then try to run a sample with the Gen2 API.

I also tried to run the Gen1 sample again after that and got the same error. At that point nothing was working, so I decided to restart the device (OAK-D), and then I was able to run the Gen2 sample.

Thanks

magallardo avatar Mar 17 '21 17:03 magallardo

@magallardo, what is your computer model? And the error doesn't occur with the Gen 1 API?

Thanks, Qasim

User8395 avatar Mar 17 '21 22:03 User8395

@qaqak I am running the samples on an RPi3 and the OAK-D is connected over USB.

Both versions work OK after restarting the OAK-D device. However, if I first run a Gen1 sample and then a Gen2 sample, I get the device-can't-be-found error. Similarly, if I first run a Gen2 sample followed by a Gen1 sample, I get the error while running the second sample.

I am getting around this issue by restarting the OAK-D device when switching between versions of the API.

Thanks, Marcelo

magallardo avatar Mar 17 '21 22:03 magallardo

So it's like this?

Run Gen 1 Sample: OK
Run Gen 2 Sample: Error

Restart device

Run Gen 2 Sample: OK
Run Gen 1 Sample: Error

Restart device...

Also, does the error occur when you run a Gen 1 sample followed by another Gen 1 Sample, and Gen 2 followed by Gen 2?

Thanks, Qasim

User8395 avatar Mar 18 '21 01:03 User8395

@qaqak That is correct. The error happens when changing the API.

Also, the error does not occur when running multiple samples using the same version of the API.

Thanks, Marcelo

magallardo avatar Mar 18 '21 01:03 magallardo

Very interesting. We'll try to reproduce. Thanks for iterating to make this clear. @cafemoloko would you mind seeing about reproducing this?
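In the meantime, one way to check from the host whether the OAK is still visible after switching APIs (without re-plugging) is a quick probe like the sketch below, assuming the Gen2 package (depthai 2.x) is installed in the environment it runs from:

```python
import depthai as dai  # Gen2 API (depthai 2.x assumed)

# Ask XLink for every device the host can currently see.
devices = dai.Device.getAllAvailableDevices()

if not devices:
    print("No OAK devices visible - the device is likely still held by the "
          "previous process, so a power-cycle/re-plug is needed.")
else:
    for info in devices:
        # MX ID plus the current XLink state (e.g. X_LINK_UNBOOTED / X_LINK_BOOTED).
        print(info.getMxId(), info.state)
```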

Thanks again, Brandon

Luxonis-Brandon avatar Mar 18 '21 03:03 Luxonis-Brandon

How about a replacement OAK-D? Also @magallardo, just choose one version of the API and stick with that?

User8395 avatar Mar 18 '21 16:03 User8395

@qaqak I am able to get around it now by restarting the OAK-D, which is not terrible but is annoying.

Also, are you suggesting the OAK-D device is defective?

Unfortunately, I have been trying both Gen1 and Gen2 samples, as not all the samples I am interested in (coronamask) have been upgraded to the Gen2 API.

Thanks, Marcelo

magallardo avatar Mar 18 '21 16:03 magallardo

It might be defective. By restarting the device, you mean re-plugging it?

Thanks, Qasim

User8395 avatar Mar 18 '21 16:03 User8395

@qaqak Yes. I unplug the power, wait a few seconds, and plug it in again.

After that, the Gen1 and Gen2 samples both work, unless you mix the APIs from one sample to the next.

Thanks, Marcelo

magallardo avatar Mar 18 '21 16:03 magallardo