
Bump chai from 4.3.10 to 5.1.0

dependabot[bot] opened this issue 1 year ago • 2 comments

Bumps chai from 4.3.10 to 5.1.0.

Release notes

Sourced from chai's releases.

v5.1.0

What's Changed

New Contributors

Full Changelog: https://github.com/chaijs/chai/compare/v5.0.3...v5.1.0

v5.0.3

Fix bad v5.0.2 publish.

Full Changelog: https://github.com/chaijs/chai/compare/v5.0.2...v5.0.3

v5.0.2

What's Changed

Full Changelog: https://github.com/chaijs/chai/compare/v5.0.1...v5.0.2

v5.0.0

BREAKING CHANGES

  • Chai now only supports EcmaScript Modules (ESM). This means your tests will need to either have import {...} from 'chai' or import('chai'). require('chai') will cause failures in nodejs. If you're using ESM and seeing failures, it may be due to a bundler or transpiler which is incorrectly converting import statements into require calls.
  • Dropped support for Internet Explorer.
  • Dropped support for NodeJS < 18.
  • Minimum supported browsers are now Firefox 100, Safari 14.1, Chrome 100, Edge 100. Support for browsers prior to these versions is "best effort" (bug reports on older browsers will be assessed individually and may be marked as wontfix).

What's Changed

... (truncated)

Commits

Dependabot compatibility score

You can trigger a rebase of this PR by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Note: Automatic rebases have been disabled on this pull request because it has been open for over 30 days.

dependabot[bot] avatar Mar 01 '24 22:03 dependabot[bot]

The way to get started is with one Raspberry Pi (RPi) with a Pi Camera sending images to one hub computer (Mac or Linux PC). It is easiest to start with the two computers on the same network; you can easily move them to different networks later. You do not need any devices other than an RPi with a camera (running imagenode.py) and a Linux PC (running imagehub.py). You will need networking hardware to connect the two computers, but it can be Ethernet or WiFi.

So, start with 2 computers:

  1. RPi computer running imagenode.py. Note that imageZMQ is NOT yet tested with Raspberry Pi OS Bullseye; I am waiting for a production replacement for the Python PiCamera module. imageZMQ runs with Raspberry Pi OS Buster and earlier.
  2. Linux PC running imagehub.py.

imageZMQ is a Python module that is imported by imagenode.py on the RPi and imported by imagehub.py on the Linux PC. So, imageZMQ must be running and tested on both the RPi and the Linux PC. imageZMQ is pip installable.
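Here is a minimal sketch of that send/receive pattern (the hub IP address is a placeholder; the fully documented test programs in the imageZMQ repository are the ones to actually run):

# On the hub computer: receive images and send a reply for each one
import imagezmq

image_hub = imagezmq.ImageHub()              # binds to tcp://*:5555 by default
while True:
    sender_name, image = image_hub.recv_image()
    image_hub.send_reply(b'OK')              # REQ/REP pattern: every image gets a reply

# On the RPi: capture frames with the PiCamera and send them to the hub
import socket
import imagezmq
from imutils.video import VideoStream

sender = imagezmq.ImageSender(connect_to='tcp://192.168.1.100:5555')  # placeholder hub IP
rpi_name = socket.gethostname()              # identifies this RPi to the hub
picam = VideoStream(usePiCamera=True).start()
while True:
    reply = sender.send_image(rpi_name, picam.read())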

All the test programs and instructions are in the imagezmq, imagenode and imagehub GitHub repositories. Here is the best way to get started:

  1. On your Linux PC, get the imageZMQ test programs running. See the imageZMQ documentation in the imageZMQ GitHub repository.
  2. Get imageZMQ test program 2 running with the RPi sending and the Linux PC receiving.
  3. Only AFTER you have the imageZMQ test programs running OK should you attempt to use the imagenode and imagehub programs.
  4. Run both of the imagenode test programs for imagenode Test 1 on the same computer (it will need a screen, so use a Linux PC if you don’t have a screen on your RPi). Follow the instructions in the imagenode GitHub README file.
  5. Run imagenode Test 2 and Test 3 using an RPi as the imagenode and a Linux PC as the receiver.
  6. Spend time experimenting with the settings in your imagenode.yaml file per the imagenode documentation. There are several example imagenode.yaml files in the imagenode repository.
  7. Finally, run the test programs in the imagehub GitHub repository. You will then have a working setup that you can tune and adjust by changing imagenode.yaml settings.

I always use a virtual environment for running all tests and production programs; that is discussed in the imageZMQ, imagenode and imagehub documentation. I do all my testing by running the programs with python at the command line, but in my production setup I use systemctl / systemd. I always start the imagehub program BEFORE starting the imagenode program. My imagehub program typically runs for many months (on a laptop running Linux) without restarting; the imagenodes restart more often than that.
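For reference, a minimal systemd unit for the imagehub side might look like the sketch below; the user name, virtualenv path and clone location are assumptions you would adapt to your own setup:

[Unit]
Description=imagehub image receiving service
After=network.target

[Service]
Type=simple
# assumed user, virtualenv and clone locations; adapt to your own setup
User=pi
WorkingDirectory=/home/pi/imagehub/imagehub
ExecStart=/home/pi/.virtualenvs/py3cv4/bin/python imagehub.py
Restart=on-failure

[Install]
WantedBy=multi-user.target

With a unit like this you can systemctl enable --now imagehub.service, and arrange for the imagehub unit to start before any imagenode units.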

I also run my prototype librarian.py on the same Linux laptop that is running imagehub.py. My prototype librarian answers queries by reading the imagehub log file. My librarian prototype program is in my yin-yang-ranch GitHub repository. I am continuing to develop my librarian prototype, but it is not yet ready for pushing to GitHub.

jeffbass avatar Jun 07 '23 06:06 jeffbass

For a small PoC, can we skip the librarian and start with a single node and a single hub, focusing on transmitting the images? First I want to set up a motion detector with one Raspberry Pi and send the detected images to the hub. Can I start with this PoC?

mohan51 avatar Jun 07 '23 09:06 mohan51

Setting up a PoC does not require a librarian program. Motion detection and temperature sensors on RPi computers are what I use in my production system.

Using a single RPi running imagenode sending images to a single imagehub computer works well. The imagehub computer should be a Mac or Linux PC with an SSD. Using an RPi for an imagehub will not work because writing image files to a micro SD card is slow and can cause SD card failure. I have found that sending jpg files from the imagenode is faster and does not slow down the network as much. There is an imagenode.yaml option that specifies the sending of jpg files. Raw OpenCV image files are quite large; jpg files are much smaller.

When you run this "imagenode to imagehub" arrangement, you will use the imagenode.yaml file on the RPi to specify the camera and motion detector settings as well as the IP address of the imagehub. You can specify the jpg option in the imagenode.yaml file. The description of the imagenode.yaml file is in this imagenode Github repository.

There is an imagehub.yaml file on the imagehub computer which specifies the location of the event log and the image file directories. The documentation about those directories and files is in the imagehub GitHub repository. If you specify the jpg option in the imagenode.yaml file, you must also specify the jpg option in the imagehub.yaml file. Raw OpenCV image files are quite large and take a lot of disk space; jpg files are much smaller. I use jpg files in my own production system.
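As a rough sketch of how the two yaml files relate (the key names and values here are illustrative; the imagenode and imagehub repositories document the authoritative settings):

# imagenode.yaml on the RPi (illustrative values)
node:
  name: rpi1
  send_type: jpg                  # send compressed jpgs instead of raw OpenCV arrays
hub_address:
  H1: tcp://192.168.1.100:5555    # placeholder imagehub IP address
cameras:
  P1:
    viewname: driveway
    resolution: (640, 480)
    framerate: 32

# imagehub.yaml on the hub computer (illustrative values)
hub:
  data_directory: imagehub_data   # where the event log and image directories are written
  # plus the matching jpg option described in the imagehub documentation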

You can use any program to read the log and the image files that accumulate on the imagehub computer. My own prototype librarian program is located in the yin-yang-ranch GitHub repository. It reads the imagehub event log while that log is being actively written by the imagehub program. This has worked well for me and allows me to read the imagehub event log in real time. I use SMS texting to query the log for recent motion detection events. My prototype librarian is a PoC for reading the imagehub event log and does not have any image reading or image analysis code in it yet. There are many image analysis programs available in tutorials and GitHub repositories, and I am experimenting with some of them. But reading the imagehub event logs is my day-to-day use of my imagenode --> imagehub system.

jeffbass avatar Jun 07 '23 16:06 jeffbass

Hi Jeff,

Do we need to run imagezmq on both the node and the hub? Whenever I try to run imagezmq on the hub side, it just runs and exits without calling any methods in imagezmq.

mohan51 avatar Jun 08 '23 07:06 mohan51

I am not sure what you mean by "run imageZMQ on hub side". imageZMQ is a Python module that is imported by a hub program on the hub computer. It is not a program that is run on the hub computer. imageZMQ is also imported by a sending program on the image sending computer. It is not a program on the sending computer. Before you can attempt running something such as an imagenode, it is important that you have imageZMQ test programs running correctly on both the image sending computer and the image receiving computer. The imageZMQ test programs are simple versions of the imagenode and imagehub programs. Did you run the tests in the imageZMQ repository? I recommend that you run the first 3 tests described in the README of that repository in order. Which imageZMQ test programs did you run? Which test program failed? What was the error message, if any? If you have not been able to get the imageZMQ test programs running correctly, please open an issue in the imageZMQ repository and I will try to help you there.

jeffbass avatar Jun 08 '23 15:06 jeffbass

Hi Jeff, the imagezmq test cases ran perfectly, but when I run imagehub.py on my Linux PC and imagenode.py on the RPi, they both just wait and don't execute anything. imagenode should capture and send to imagehub, right? But that is not happening.

mohan51 avatar Jun 12 '23 09:06 mohan51

while running imagenode I am facing the below error

2023-06-12 12:15:51,966 ~ Starting imagenode.py
2023-06-12 12:15:52,027 ~ Unanticipated error with no Exception handler.
Traceback (most recent call last):
  File "/home/vfvi/img/imagenode/imagenode/imagenode.py", line 30, in main
    node = ImageNode(settings)  # start ZMQ, cameras and other sensors
  File "/home/vfvi/img/imagenode/imagenode/tools/imaging.py", line 109, in __init__
    self.setup_cameras(settings)
  File "/home/vfvi/img/imagenode/imagenode/tools/imaging.py", line 268, in setup_cameras
    cam = Camera(camera, settings.cameras, settings)  # create a Camera instance
  File "/home/vfvi/img/imagenode/imagenode/tools/imaging.py", line 863, in __init__
    self.cam = VideoStream(usePiCamera=True,
  File "/home/vfvi/img/env/lib/python3.9/site-packages/imutils/video/videostream.py", line 13, in __init__
    from .pivideostream import PiVideoStream
  File "/home/vfvi/img/env/lib/python3.9/site-packages/imutils/video/pivideostream.py", line 2, in <module>
    from picamera.array import PiRGBArray
ModuleNotFoundError: No module named 'picamera'
2023-06-12 12:15:52,031 ~ Exiting imagenode.py
2023-06-12 12:15:52,032 ~ SIGTERM detected, shutting down

mohan51 avatar Jun 12 '23 11:06 mohan51

I'm glad you got the imageZMQ test cases to work. Thanks for sending the imagenode.py error messages.

Did you send images for imageZMQ tests using a PiCamera? That's the default in the imageZMQ tests 2 and 3. I presume you were able to send camera images and that you saw them on your hub PC OK? That would test that your camera was working OK on your imagenode computer. You need to run the imageZMQ tests on the same imagenode RPi computer and imagehub PC computer that will be running imagenode.py.

Your error message indicates that imagenode.py is not successfully importing the picamera module. Are you using a virtual environment? Did you do a pip install of picamera into that virtual environment? If you ran the imageZMQ tests using the picamera module OK, then you may be running imagenode.py in a different virtual environment or with a different linux path. Make sure you can import picamera correctly before you start imagenode.py. I find it helps to run Python from the command line in the same directory and virtual environment to verify that the import of picamera works OK. If you are using a USB camera instead of a picamera, you can specify that as an option for imagenode.py.
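A quick way to verify this, run from the same directory and virtual environment you use to start imagenode.py (the virtualenv name is a placeholder):

workon py3cv4        # or: source /path/to/your/venv/bin/activate
python -c "import picamera; print(picamera.__version__)"

If that import fails, pip install picamera into the same virtual environment before starting imagenode.py again.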

jeffbass avatar Jun 12 '23 19:06 jeffbass

Hi Jeff, I resolved the PiCamera issue, but could you please suggest the changes I need to make on the imaging.py side? Do I need to comment out the sensors? What changes do you suggest for making one hub and one node?

  1. Also, can I print the frame rate? What is the usual frame rate for this? I am using a Raspberry Pi 3 with a 5 MP camera.

mohan51 avatar Jun 13 '23 06:06 mohan51

I am getting an error from imagenode like "Exception at sender.send_jpg in REP_watcher function".

I am running the programs in a virtual environment only.

mohan51 avatar Jun 13 '23 06:06 mohan51

I don't have any sensors. Do I need any sensors for running the PoC?

mohan51 avatar Jun 13 '23 11:06 mohan51

Exception at sender.send_jpg in REP_watcher function.
--- Logging error ---
Traceback (most recent call last):
  File "/home/vfvi/img/imagenode/imagenode/tools/imaging.py", line 344, in send_jpg_frame_REP_watcher
    hub_reply = self.sender.send_jpg(text, jpg_buffer)
  File "/home/vfvi/img/env/lib/python3.9/site-packages/imagezmq/imagezmq.py", line 162, in send_jpg_reqrep
    hub_reply = self.zmq_socket.recv()  # receive the reply message
  File "zmq/backend/cython/socket.pyx", line 805, in zmq.backend.cython.socket.Socket.recv
  File "zmq/backend/cython/socket.pyx", line 841, in zmq.backend.cython.socket.Socket.recv
  File "zmq/backend/cython/socket.pyx", line 194, in zmq.backend.cython.socket._recv_copy
  File "zmq/backend/cython/checkrc.pxd", line 13, in zmq.backend.cython.checkrc._check_rc
  File "/home/vfvi/img/imagenode/imagenode/tools/utils.py", line 55, in clean_shutdown_when_killed
    sys.exit()
SystemExit

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/vfvi/img/imagenode/imagenode/imagenode.py", line 38, in main
    hub_reply = node.send_frame(text, image)
  File "/home/vfvi/img/imagenode/imagenode/tools/imaging.py", line 347, in send_jpg_frame_REP_watcher
    self.fix_comm_link()
  File "/home/vfvi/img/imagenode/imagenode/tools/imaging.py", line 430, in fix_comm_link
    self.shutdown_imagenode()
  File "/home/vfvi/img/imagenode/imagenode/tools/imaging.py", line 449, in shutdown_imagenode
    sys.exit()
SystemExit

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.9/logging/handlers.py", line 73, in emit
    if self.shouldRollover(record):
  File "/usr/lib/python3.9/logging/handlers.py", line 192, in shouldRollover
    self.stream.seek(0, 2)  #due to non-posix-compliant Windows feature
RuntimeError: reentrant call inside <_io.BufferedWriter name='/home/vfvi/img/imagenode/imagenode/imagenode.log'>
Call stack:
  File "/home/vfvi/img/imagenode/imagenode/imagenode.py", line 62, in <module>
    main()
  File "/home/vfvi/img/imagenode/imagenode/imagenode.py", line 43, in main
    log.warning('SIGTERM was received.')
  File "/usr/lib/python3.9/logging/__init__.py", line 1454, in warning
    self._log(WARNING, msg, args, **kwargs)
  File "/usr/lib/python3.9/logging/__init__.py", line 1585, in _log
    self.handle(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 1595, in handle
    self.callHandlers(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 1657, in callHandlers
    hdlr.handle(record)
  File "/usr/lib/python3.9/logging/__init__.py", line 948, in handle
    self.emit(record)
  File "/usr/lib/python3.9/logging/handlers.py", line 75, in emit
    logging.FileHandler.emit(self, record)
  File "/usr/lib/python3.9/logging/__init__.py", line 1183, in emit
    StreamHandler.emit(self, record)
  File "/usr/lib/python3.9/logging/__init__.py", line 1083, in emit
    self.flush()
  File "/usr/lib/python3.9/logging/__init__.py", line 1063, in flush
    self.stream.flush()
  File "/home/vfvi/img/imagenode/imagenode/tools/utils.py", line 54, in clean_shutdown_when_killed
    logging.warning('SIGTERM detected, shutting down')
Message: 'SIGTERM detected, shutting down'
Arguments: ()

mohan51 avatar Jun 13 '23 11:06 mohan51

I copied picam-motion-test.yaml to imagenode.yaml. Do I need to make any changes on the imaging.py side for this?

mohan51 avatar Jun 13 '23 11:06 mohan51

Also, you give a default frame rate of 32; how do we calculate the FPS from the RPi to the hub (Linux PC)?

mohan51 avatar Jun 13 '23 11:06 mohan51

Hi Jeff, I successfully ran the motion detector, but how do I calculate FPS? And how is motion detected in the images? When I look at the imagehub logs I see messages about still and motion, but the hub is receiving all the images. How do I filter just the motion images on the hub? When I studied your repo, I understood that the images used for motion detection are sent to the hub, right?

mohan51 avatar Jun 13 '23 12:06 mohan51

How do I calculate the frame rate on the receiving side? I think the FPS on the sending side is 30, right? And how do I calculate the bandwidth?

mohan51 avatar Jun 13 '23 14:06 mohan51

I am able to achieve only 1.69 FPS on the receiving side. Is there any way to increase the FPS?

mohan51 avatar Jun 14 '23 05:06 mohan51

I see from your messages above that you have resolved your errors by using YAML settings in the imagenode.yaml file. The yaml settings are the way to manage imagenode behavior. You can delete sections or settings you don't need, or you can comment them out.

The frame rate setting in imagenode is used to set the frame rate on the camera. For example, when using the picamera, the frame rate sets how often the frame is captured. Setting the frame rate in imagenode.yaml only affects the camera capture rate, not the FPS throughput.

There is no FPS measurement in the imagenode or imagehub code. When I am testing a new setup, I have the imagenode send to this imageZMQ receive program that measures FPS. Then I try various image sizes, color-to-grayscale changes, etc., to see what affects FPS. You may want to try that.
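The core of such an FPS-measuring receiver is only a few lines; this is a simplified sketch of the idea, not the exact program from the imageZMQ repository:

import time
import imagezmq

image_hub = imagezmq.ImageHub()
count = 0
start = time.time()
while True:
    name, image = image_hub.recv_image()
    image_hub.send_reply(b'OK')
    count += 1
    if count % 100 == 0:    # report throughput every 100 frames
        print('{:.2f} FPS over {} frames'.format(count / (time.time() - start), count))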

FPS is affected by many factors. The main one is image size. The next most important factor is jpg vs. raw OpenCV images.

I use 640x480 images in my own applications. I also use jpg rather than raw OpenCV images. Using 640x480 images sent as jpgs, I get image throughputs of 10-15 FPS. Jpg compression takes some time on the RPi side, but overall it is faster for me, so I never use raw OpenCV images.
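The jpg path through imageZMQ looks roughly like this on the sending side (the hub IP and jpg quality are placeholders; the hub side uses the matching recv_jpg() and, if it needs the pixels, cv2.imdecode()):

import socket
import cv2
import numpy as np
import imagezmq

sender = imagezmq.ImageSender(connect_to='tcp://192.168.1.100:5555')  # placeholder hub IP
rpi_name = socket.gethostname()
image = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a captured 640x480 frame
ret_code, jpg_buffer = cv2.imencode(
    '.jpg', image, [int(cv2.IMWRITE_JPEG_QUALITY), 95])  # lower quality = smaller & faster
reply = sender.send_jpg(rpi_name, jpg_buffer)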

I have never used images larger than 640x480. I suspect your 1.69 fps may be related to a larger image size?

There is a faster alternative to OpenCV's jpg compression. One of the imageZMQ contributors provided this faster jpg sending program example. There is a matching faster jpg hub program. You may want to try that as well.

jeffbass avatar Jun 14 '23 07:06 jeffbass

What is the framerate parameter? You have given it as 32. I am using a 5 MP Raspberry Pi camera module but getting only around 7-8 FPS.

mohan51 avatar Jun 14 '23 07:06 mohan51

I commented out the Sensor and Light classes in imaging.py, and then no images are transferred from node to hub. Is it mandatory to include classes like Sensor and Light even though I don't have sensors or lights in my project?

mohan51 avatar Jun 14 '23 12:06 mohan51

The framerate parameter sets the camera frame rate only. If the throughput FPS is only 7-8, that is a limit of jpg conversion, network speed and imagehub saving speed. The actual throughput frame rate is often lower than the camera frame rate in computer vision pipelines. You can set the camera frame rate to a lower value and it slows the pipeline down. For example, my water meter frame rate is set to 2.

If you comment out or remove the sensor and light sections of the yaml, the Sensor and Light classes are loaded but unused. These are very small methods (only 1 KB or less), so commenting them out in the source code doesn't change the size of the loaded modules much. But you can definitely comment them out in the source code if you want.

jeffbass avatar Jun 14 '23 16:06 jeffbass

Instead of using the detectors, can I use AI models for motion or still recognition?

mohan51 avatar Jun 15 '23 06:06 mohan51

Hi Jeff, is framerate a time constant? If yes, is framerate: 32 in seconds or milliseconds? And does the camera capture an image every 32 ms?

mohan51 avatar Jun 15 '23 11:06 mohan51

Yes, you can use any AI model of your choice for motion or still recognition. AI models are going to run pretty slowly on a Raspberry Pi computer, which is why my motion detector method uses frame differencing.

Frame rate or FPS (Frames Per Second) is a count of how many frames are captured by the camera in a second. So (1 / FPS) * 1000 will be the time in milliseconds from the start of the capture of one frame to the start of capture of the next frame. For a frame rate or FPS of 32, the time in milliseconds is (1/32)*1000 = 31.25 milliseconds. For an FPS of 2, the time is 500 milliseconds. For an FPS of 10, the time is 100 milliseconds.
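In code form the same arithmetic is simply:

def frame_interval_ms(fps):
    # time from the start of one frame capture to the start of the next
    return 1000.0 / fps

print(frame_interval_ms(32))   # 31.25 ms
print(frame_interval_ms(10))   # 100.0 ms
print(frame_interval_ms(2))    # 500.0 ms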

For my own projects, I want to capture and process a series of individual image frames. Even when you JPG compress a series of individual image frames, you are transmitting each frame individually without any frame-to-frame video codec compression. If you want a compressed video stream, imageZMQ and imagenode are not appropriate tools. There are many video streaming programs that use a variety of video codecs. These video codecs do video compression that does not send a series of individual image frames, but instead sends reference frames and frame differences. It may be that using a video streaming codec rather than sending a series of individual image frames is better for your project. In that case, you should use video streaming software rather than imageZMQ.

jeffbass avatar Jun 17 '23 05:06 jeffbass

Hi Jeff, when I connected my two Raspberry Pis to my PC, one of the RPis fails at if image.flags['C_CONTIGUOUS']: with AttributeError: 'NoneType' object has no attribute 'flags'

mohan51 avatar Jun 19 '23 11:06 mohan51

Can I connect and get images from both Raspberry Pis at the same time? I am using threading here.

mohan51 avatar Jun 19 '23 11:06 mohan51

When I replace one PiCamera module with another PiCamera module (v2), I get an error at if image.flags['C_CONTIGUOUS']: with AttributeError: 'NoneType' object has no attribute 'flags'

mohan51 avatar Jun 20 '23 07:06 mohan51

Yes, you can connect multiple Raspberry Pi computers, and each one can have multiple cameras. The images on the imagehub side are labelled by the 'name' and 'viewname' values in imagenode.yaml. Look at the README of this repository; it shows how the images and image messages are labelled. It is the labels that allow you to separate and sort images from different Raspberry Pis. There is also documentation in the imagehub repository that describes the labelling and directories of images and image messages.
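For example (the values are illustrative), giving each RPi a distinct name and viewname in its own imagenode.yaml is what keeps the two image streams separate on the hub:

# imagenode.yaml on the first RPi
node:
  name: rpi1
cameras:
  P1:
    viewname: driveway

# imagenode.yaml on the second RPi
node:
  name: rpi2
cameras:
  P1:
    viewname: backyard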

I have not seen the error image.flags['C_CONTIGUOUS']: AttributeError: 'NoneType' object has no attribute 'flags' before. I will try to duplicate it on my Raspberry Pi setup. Can you please tell me:

  • Type of Raspberry Pi (RPi 3, RPi 4, etc.)
  • Raspberry Pi OS version
  • Python version
  • OpenCV version
  • pyZMQ version

If you can give me that information for both Raspberry Pi computers (both the one that is working OK and the second one you added that is getting an error), I will try to identify the error.

To get the versions of pyZMQ, OpenCV, etc., use commands like these:

pi@rpi31:~ $     # run the commands at a CLI prompt in your test directory; this is at home directory
(py3cv4) pi@rpi31:~ $ workon py3cv4   # this should be changed to the name of YOUR virtualenv
(py3cv4) pi@rpi31:~ $ python --version
Python 3.7.3
(py3cv4) pi@rpi31:~ $ pip freeze
imagezmq==1.1.1
imutils==0.5.4
numpy==1.20.2
opencv-contrib-python==4.1.0.25
picamera==1.13
psutil==5.8.0
PyYAML==5.4.1
pyzmq==22.0.3
RPi.GPIO==0.7.0
(py3cv4) pi@rpi31:~ $ 

Thanks for your help in tracking down this error. Jeff

jeffbass avatar Jun 20 '23 20:06 jeffbass

  • Type of Raspberry Pi: RPi 4
  • OS version: Ubuntu
  • Python version: 3.8.10
  • OpenCV version: 4.2.0
  • pyZMQ version: 25.1.0

mohan51 avatar Jun 21 '23 04:06 mohan51

The images coming from the two Pis are being stored under a single label name. I am unable to find why they are stored under a single label, although I am receiving pictures from both nodes.

mohan51 avatar Jun 21 '23 04:06 mohan51
