
OV7670 support

Open nkolban opened this issue 8 years ago • 65 comments

I have a WROVER and an OV7670 camera, and I am using the ESP-IDF as of this post date. Everything compiles cleanly, and when I flash and run, the application runs. However, I do not see any output (please see the log at the end). I have tried enabling the test pattern and there is no change. Here is the log of a run with zero changes made to the source:

D (633) camera: Enabling XCLK output
D (633) ledc: LEDC_PWM CHANNEL 0|GPIO 21|Duty 0004|Time 0
D (633) camera: Initializing SSCB
I (633) gpio: GPIO[26]| InputEn: 0| OutputEn: 1| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
I (633) gpio: GPIO[27]| InputEn: 0| OutputEn: 1| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
I (643) gpio: GPIO[26]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 1| Pulldown: 0| Intr:0 
I (653) gpio: GPIO[27]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 1| Pulldown: 0| Intr:0 
D (663) camera: Resetting camera
D (673) camera: Searching for camera address
D (673) camera: Detected camera at address=0x21
D (673) camera: Camera PID=0x76 VER=0x73 MIDL=0x7f MIDH=0xa2
D (683) camera: Doing SW reset of sensor
D (733) camera: Test pattern enabled
D (733) camera: Setting frame size at 320x240
D (763) camera: Allocating frame buffer (320x240, 76800 bytes)
D (763) camera: Initializing I2S and DMA
I (763) gpio: GPIO[35]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
I (763) gpio: GPIO[34]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
I (773) gpio: GPIO[39]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
I (783) gpio: GPIO[36]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
I (793) gpio: GPIO[19]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
I (793) gpio: GPIO[18]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
I (803) gpio: GPIO[5]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
I (813) gpio: GPIO[4]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
I (823) gpio: GPIO[25]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
I (833) gpio: GPIO[23]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
I (843) gpio: GPIO[22]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
D (843) camera: Allocating DMA buffer #0, size=1280
D (853) camera: Allocating DMA buffer #1, size=1280

nkolban avatar May 29 '17 21:05 nkolban

I received two new OV7670 devices and tried them instead of the one I was originally using. Unfortunately no change. I still have exactly the same symptoms.

nkolban avatar Jun 06 '17 00:06 nkolban

Sorry, I hadn't noticed this issue when you opened it. Although the readme mentions that other OV7xxx series should work as well, with some changes to the camera configuration code, I would not expect them to work out of the box. Most likely you will need to change the code which configures the camera's registers, because the register maps are different between the OV7725 and OV7670. I don't have an OV7670 at my disposal, but I'm going to order one and see if it can be made to run with some code changes.

igrr avatar Jun 06 '17 02:06 igrr

Thank you, sir ... I'll leave this ticket open till we get to the bottom of the OV7670. I was about to hit the send button when a new thought struck me. In your last post in this thread you mentioned the OV7725; I understand that was the camera device you were testing with. I have been waiting the last couple of weeks for more OV7670s so that I could eliminate my one instance of the OV7670 as a fault ... but I also ordered an OV7725. When they arrived today, I was confused. The OV7725 appears to be a 10-bit device as opposed to an 8-bit device like the OV7670. As I examined the OV7725, I found that it actually has more pins (two more: 10-bit vs. 8-bit) than the OV7670 and the female socket found on the WROVER. I put it mentally aside to check that what the vendor had sent was indeed an OV7725.

It was here that it dawned on me: the OV7725 might work in the WROVER if I just let the extra pins be unconnected. I hadn't read anywhere that the OV7725 and the OV7670 were not pin-identical. However, when I plugged in the OV7725 with two pins "off the edge of the connector" and ran the application ... it appears to work!!

Hoorah ... now I'm going to dig into your code deeply AND study the datasheet of the OV7670, now that I have a sandbox environment that is showing signs of life.

D (633) camera: Enabling XCLK output
D (633) ledc: LEDC_PWM CHANNEL 0|GPIO 21|Duty 0004|Time 0
D (633) camera: Initializing SSCB
I (633) gpio: GPIO[26]| InputEn: 0| OutputEn: 1| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
I (633) gpio: GPIO[27]| InputEn: 0| OutputEn: 1| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
I (643) gpio: GPIO[26]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 1| Pulldown: 0| Intr:0 
I (653) gpio: GPIO[27]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 1| Pulldown: 0| Intr:0 
D (663) camera: Resetting camera
D (673) camera: Searching for camera address
D (673) camera: Detected camera at address=0x21
D (673) camera: Camera PID=0x77 VER=0x21 MIDL=0x7f MIDH=0xa2
D (683) camera: Doing SW reset of sensor
D (733) camera: Test pattern enabled
D (733) camera: Setting frame size at 320x240
D (763) camera: Allocating frame buffer (320x240, 76800 bytes)
D (763) camera: Initializing I2S and DMA
I (763) gpio: GPIO[35]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
I (763) gpio: GPIO[34]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
I (773) gpio: GPIO[39]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
I (783) gpio: GPIO[36]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
I (793) gpio: GPIO[19]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
I (793) gpio: GPIO[18]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
I (803) gpio: GPIO[5]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
I (813) gpio: GPIO[4]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
I (823) gpio: GPIO[25]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
I (833) gpio: GPIO[23]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
I (843) gpio: GPIO[22]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 
D (843) camera: Allocating DMA buffer #0, size=1280
D (853) camera: Allocating DMA buffer #1, size=1280
D (853) camera: Init done
D (863) camera: Waiting for positive edge on VSYNC
D (893) camera: Got VSYNC
D (893) camera: Waiting for frame
D (923) camera: Frame done
D (923) camera_demo: Done
@@@@@@@@@@@@@@@@@@@@@%%%%%%%%%########## +++++++++==========-:::::::::          
@@@@@@@@@@@@@@@@@@@@@%%%%%%%%%########## +++++++++==========-:::::::::          
@@@@@@@@@@@@@@@@@@@@@%%%%%%%%%########## +++++++++==========-:::::::::          
@@@@@@@@@@@@@@@@@@@@@%%%%%%%%%########## +++++++++==========-:::::::::   

nkolban avatar Jun 06 '17 03:06 nkolban

Glad that you have got the OV7725 working! Yes, placing two pins outside of the connector is a bit awkward, but in RGB565 and YUV modes the OV7725 uses only 8 bits (D9-D2) for data transfer, and D0/D1 are unused.

igrr avatar Jun 06 '17 03:06 igrr

Expecting to see support for the OV7670 without FIFO.

ferry21 avatar Jun 09 '17 14:06 ferry21

Support for the OV7670 would be great, as it's cheaper (at least on AliExpress) and more popular than the OV7725.

@igrr Thanks for looking into it.

kenjiru avatar Jun 12 '17 10:06 kenjiru

For the record: due to other duties I won't have time to work on this for the next two weeks, so if anyone is interested, please go ahead and try adding support for the OV7670. Last time @Oitzu was successful at adding OV2640 support; I think the OV7670 would be even less effort.

igrr avatar Jun 12 '17 10:06 igrr

I have started studying OV7670 and am making progress. I break the puzzle into two parts:

  1. Driving the Camera through its exposed registers.
  2. Retrieving the data through the parallel "I2S" bus in Camera mode.

What is causing me to stumble right now is the latter. I have pored over @igrr's code on I2S retrieval but am super lost in it. What would be ideal would be to have the algorithm and principles of how the code works with I2S written down. If someone already knows this story and would be willing to have a 30-minute Skype session with me while we walk through the relevant code, I for one will make time to write it up in detail ... not just for this project but for I2S access in general. I'm afraid there are too many I2S mysteries in the current implementation.

nkolban avatar Jun 12 '17 14:06 nkolban

Next time I have time to work on the I2S/camera code, I promise to make it easier to understand and add some comments. In the meantime, here's a high-level overview of I2S usage:

Once the camera is configured over SCCB, it starts sending image frames over its parallel output interface (known as DCMI). This interface contains a clock signal, a parallel data bus (usually 8-bit, but sometimes 10- or 12-bit), and at least two signals indicating frame and line boundaries, HREF and VREF (some cameras use HSYNC, VSYNC). Image frames consist of lines, and each line consists of pixels. Each pixel is represented using one or more bytes, depending on the pixel format (color representation). HREF indicates when the pixels produced by the camera are valid, and VREF indicates when lines of pixels are valid. The difference between HREF and HSYNC is that HREF is high while pixels are valid, whereas HSYNC is normally high and goes low between subsequent lines. The same applies to VREF/VSYNC. Many cameras have registers which allow the function of the output pin to be chosen between VREF/VSYNC and HREF/HSYNC. Some cameras also have registers to invert these signals.

The I2S peripheral of the ESP32 can work in a parallel mode, where it samples and stores parallel input data. It may be helpful to understand that the I2S peripheral consists of two parts: the peripheral itself, and the DMA engine. These two parts are connected by a FIFO. The logic of operation of the I2S peripheral in camera mode is as follows:

  • On each PCLK cycle,
  • If HREF and VREF input signals are high,
  • Get data from parallel input bus (we use 8 bits, but the peripheral can take up to 16 bits),
  • Arrange input data into 32-bit samples using the selected format (more on this later),
  • Store 32-bit samples into the FIFO,
  • Until RX_EOF_NUM samples are stored (this value is configured using a register)

There is one caveat about RX_EOF_NUM when the pixel clock is above 10 MHz; more on this later.

As mentioned above, input data (8-bit) can be arranged into 32-bit samples in different ways. In the code, these different ways are given in the declaration of the i2s_sampling_mode_t enum:

    /* camera sends byte sequence: s1, s2, s3, s4, ...
     * fifo receives: 00 s1 00 s2, 00 s2 00 s3, 00 s3 00 s4, ...
     */
    SM_0A0B_0B0C = 0,//!< SM_0A0B_0B0C
    /* camera sends byte sequence: s1, s2, s3, s4, ...
     * fifo receives: 00 s1 00 s2, 00 s3 00 s4, ...
     */
    SM_0A0B_0C0D = 1,//!< SM_0A0B_0C0D
    /* camera sends byte sequence: s1, s2, s3, s4, ...
     * fifo receives: 00 s1 00 00, 00 s2 00 00, 00 s3 00 00, ...
     */
    SM_0A00_0B00 = 3,//!< SM_0A00_0B00

We see that the SM_0A0B_0C0D mode provides the highest storage density, placing two 8-bit values into a single 32-bit word. The other two modes place one 8-bit value into each 32-bit word, with the slight difference that SM_0A0B_0B0C also repeats every value by placing it into the low half-word of the adjacent 32-bit word.
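To make the packing concrete, here is a small software model of SM_0A0B_0C0D (purely illustrative; the hardware does this internally, and the in-memory byte order of the FIFO words is a separate question):

#include <stdint.h>
#include <stddef.h>

// Software model of SM_0A0B_0C0D packing: two camera bytes per 32-bit word,
// each preceded by a zero byte ("00 A 00 B", reading from the most significant
// byte down). Illustrative only; not driver code.
static size_t pack_0a0b_0c0d(const uint8_t *in, size_t in_len, uint32_t *out)
{
    size_t n = 0;
    for (size_t i = 0; i + 1 < in_len; i += 2) {
        out[n++] = ((uint32_t) in[i] << 16) | (uint32_t) in[i + 1];
    }
    return n;   // number of 32-bit words produced
}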

Why do we need this odd SM_0A0B_0B0C mode, though? Well, even though it wasn't originally intended to be used in camera mode, SM_0A0B_0B0C turns out to be useful in cases when the camera pixel clock is higher than 10 MHz. In this case, the I2S peripheral cannot receive the final sample of the line before HREF goes low. This is a slight quirk of the I2S hardware of the ESP32. To work around this quirk, we set RX_EOF_NUM to be one less than the real number of bytes per image line, and enable SM_0A0B_0B0C mode. Because of the way in which data is duplicated in this mode, the last 32-bit word written into the FIFO will contain both byte number RX_EOF_NUM - 2 and the final byte, RX_EOF_NUM - 1 (if we count them starting from 0).

Assume an imaginary case where the image is 16 bytes wide. At or below 10 MHz, we can use the SM_0A0B_0C0D or SM_0A00_0B00 sampling mode, and set RX_EOF_NUM = 16. In the case of SM_0A00_0B00, the FIFO will receive 16 32-bit samples:

sample 0: 0 u0 0 0
sample 1: 0 u1 0 0
sample 2: 0 u2 0 0
...
sample 13: 0 u13 0 0
sample 14: 0 u14 0 0
sample 15: 0 u15 0 0

Next, here's the case when PCLK is > 10 MHz, and SM_0A0B_0B0C mode must be used. RX_EOF_NUM must be set to 15, and the FIFO will receive 15 32-bit samples:

sample 0: 0 u0 0 u1
sample 1: 0 u1 0 u2
sample 2: 0 u2 0 u3
...
sample 13: 0 u13 0 u14
sample 14: 0 u14 0 u15

As you can see, the last byte of the image line (u15) was stuffed into the 14th sample. This is not very convenient, but we can still live with it, and this method allows I2S parallel mode to work up to a 20 MHz PCLK. In the code this arrangement is called "high speed mode", or "hs_mode". Note/TODO: the code may actually be simplified so that the knowledge of this oddball mode doesn't trickle into the DMA line filters (described later)... this would make the workaround much more self-contained and less confusing to the reader.

Next up: the DMA engine. As mentioned above, the DMA engine is connected to I2S using a FIFO. I2S puts samples into the FIFO; the DMA engine reads samples from the FIFO and writes them into the ESP32's DRAM. Where in DRAM does it write them, though? This is controlled using DMA descriptors. Each DMA descriptor is a small structure (in the C language sense) which contains information like:

  • pointer to the buffer where the DMA engine should store the data into,
  • size of the buffer (i.e. the length of the data which can be stored into it)
  • pointer to the next DMA descriptor (qe.next)
  • some other fields which are not important to the discussion.

The structure is called lldesc_t (for "linked list descriptor", because DMA descriptors form a linked list) and is defined in the "rom/lldesc.h" header file. Because the DMA descriptor's length field is only 12 bits long, we can store an integer of up to 4095 in there. Because DMA operates on 32-bit words, that limits the length of a DMA buffer to 4092 bytes. Normally we would configure DMA to fetch data from the I2S peripheral in chunks, each chunk containing a single image line. However, if the line needs more than 4092 bytes to be stored (taking the overhead of the chosen sampling mode into account), it won't fit into a single DMA buffer. In this case the code in camera.c uses multiple DMA buffers per line, so that each buffer is shorter than 4092 bytes.

When the application starts receiving a frame, it prepares a linked list of several DMA descriptors, with each DMA descriptor pointing to its own DMA buffer. Each DMA descriptor also points to the next DMA descriptor. Once the DMA engine is done with the DMA buffer represented by a single DMA descriptor, it loads the next DMA descriptor and starts filling the next buffer. The final DMA descriptor points back to the first DMA descriptor, so that all the DMA descriptors form a ring. The DMA engine will keep running as long as I2S keeps sending it bytes through the FIFO. The total number of bytes which I2S will acquire is written into the RX_EOF_NUM register at the beginning of the frame.
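To make the descriptor ring concrete, here is a rough sketch of building such a ring (simplified; this is not the exact code from camera.c — in particular, buffer sizes, flag settings and allocation details differ):

#include <stdlib.h>
#include "rom/lldesc.h"

// Rough sketch: build a ring of DMA descriptors, each owning its own buffer
// and pointing at the next descriptor. Values are illustrative.
static lldesc_t* build_dma_ring(size_t desc_count, size_t buf_size /* <= 4092 */)
{
    lldesc_t* desc = (lldesc_t*) calloc(desc_count, sizeof(lldesc_t));
    if (desc == NULL) {
        return NULL;
    }
    for (size_t i = 0; i < desc_count; ++i) {
        desc[i].size = buf_size;        // capacity of the attached buffer
        desc[i].length = 0;             // filled in by hardware as data arrives
        desc[i].owner = 1;              // hand the buffer to the DMA hardware
        desc[i].eof = 0;                // never the "last" descriptor: it's a ring
        desc[i].buf = (uint8_t*) malloc(buf_size);  // must be DMA-capable (internal) RAM
        desc[i].qe.stqe_next = &desc[(i + 1) % desc_count];  // close the ring
    }
    return desc;
}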

Each time the DMA engine fills a DMA buffer, an I2S interrupt (IN_DONE) is triggered. The interrupt handler (i2s_isr) does some arithmetic to figure out which DMA buffer has been filled, and puts a pointer to the corresponding DMA descriptor into a queue (data_ready). On the other side of the queue, a task (dma_filter_task) is waiting. The objective of this task is to transform the contents of a single DMA buffer (which has RX_EOF_NUM 32-bit samples) into the correct image pixel format. Images can be stored in different formats, such as RGB565 (16 bits per pixel), RGB888 (24 bits per pixel), grayscale (8 bits per pixel), YUV 4:2:2 (16 bits per pixel) and others. dma_filter_task calls a format-specific function (such as dma_filter_grayscale) to extract the needed bytes from the I2S samples and store them into the output image buffer (framebuffer).

For example, consider the case when the camera is sending data in a YUV format. Y stands for luminance (brightness); U and V stand for two color components. Here is the data layout which the camera sends out: y0 u0 y1 v1 y2 u2 y3 v3... Here one pixel of the image is represented using two bytes: one brightness component and one color component. Note that the two color components are shared between adjacent pixels. This is called color component subsampling; it is done in order to reduce the amount of data to be transferred, taking advantage of the fact that humans are more sensitive to spatial changes in brightness than to spatial changes in color. Effectively, color subsampling reduces the resolution of the color components of the image.

After the I2S peripheral performs sampling using SM_0A0B_0C0D mode (assuming that PCLK is <= 10 MHz), the following samples will be written into the FIFO (and from there, into the DMA buffer): 0 y0 0 u0, 0 y1 0 v1, 0 y2 0 u2, 0 y3 0 v3 ...

These bytes will be stored into a DMA buffer by the DMA engine. To produce a grayscale image from this buffer, the DMA filter function needs to extract the 3rd byte from each sample and store these bytes into the output frame buffer:

y0 y1 y2 y3...
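A sketch of such a filter could look roughly like this (simplified; the byte offsets within each 32-bit word are an assumption about how the FIFO words land in memory, and the real dma_filter_grayscale in camera.c differs in detail):

#include <stdint.h>
#include <stddef.h>

// Sketch of a grayscale line filter for SM_0A0B_0C0D samples. Each 32-bit
// word holds two camera bytes (one luminance byte and one chroma byte); we
// keep the luminance and drop the chroma.
typedef union {
    struct {
        uint8_t chroma;   // u or v, discarded for grayscale
        uint8_t pad0;
        uint8_t luma;     // y, kept (the "3rd byte" of the sample)
        uint8_t pad1;
    } parts;
    uint32_t val;
} sample_t;

static void filter_grayscale_line(const sample_t* src, size_t sample_count, uint8_t* dst)
{
    for (size_t i = 0; i < sample_count; ++i) {
        dst[i] = src[i].parts.luma;
    }
}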

Once the DMA filter task has finished processing all the input data, the frame_ready semaphore is given to indicate that the whole frame buffer has been filled. This concludes processing of the frame.
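Putting the ISR, queue and filter task together, the overall flow looks roughly like this (a sketch only; the names follow the description above, and the real code differs in detail, e.g. in how the filled descriptor is located):

#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"
#include "freertos/task.h"
#include "esp_attr.h"
#include "rom/lldesc.h"

static QueueHandle_t data_ready;     // pointers to filled DMA descriptors
static lldesc_t* s_dma_desc;         // the descriptor ring (set up elsewhere)
static int s_dma_desc_count;

static void IRAM_ATTR i2s_isr(void* arg)
{
    static int cur = 0;
    // Assume buffers complete in ring order; the real handler does some
    // arithmetic on the I2S/DMA state to find the completed descriptor.
    lldesc_t* filled = &s_dma_desc[cur];
    cur = (cur + 1) % s_dma_desc_count;

    BaseType_t woken = pdFALSE;
    xQueueSendFromISR(data_ready, &filled, &woken);
    if (woken == pdTRUE) {
        portYIELD_FROM_ISR();
    }
}

static void dma_filter_task(void* pvParameters)
{
    lldesc_t* desc;
    while (xQueueReceive(data_ready, &desc, portMAX_DELAY) == pdTRUE) {
        // Convert the 32-bit samples in desc->buf into pixels and append them
        // to the frame buffer; give the frame_ready semaphore once the whole
        // frame has been assembled.
    }
}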

Comments on the above and corrections are welcome. This description possibly has to be moved into the repository itself, eventually. For now I have left out the description of JPEG mode, as in JPEG mode the concepts of "lines" and "framebuffer" are less obvious.

Edit: updated section on DMA descriptors, explained that DMA descriptors form a ring, each descriptor pointing to the next one.

igrr avatar Jun 12 '17 18:06 igrr

@nkolban Hello sir, assuming the code was not changed between your connecting the OV7670 to the WROVER kit and getting a blank output, how did the code not return with an error flag similar to this? error_code

Serpent999 avatar Jun 14 '17 03:06 Serpent999

The error code changes for me, corresponding to how the camera is connected: 0x20001 if it is not connected, and 0x20003 if it is connected but is an unsupported version, the OV7670 in this case.

Serpent999 avatar Jun 14 '17 03:06 Serpent999

@Serpent999 Howdy my friend. I have not yet achieved OV7670 support with the code samples here. When I first obtained the WROVER board and plugged in the OV7670, I believe I got exactly the same errors as you did. Since I only had one OV7670, I couldn't swear that the camera was good. So I ordered a handful of cameras, including some more OV7670s and an instance of an OV7725. If I remember correctly, the OV7725 is what @igrr used while developing/testing. The bottom line is that the OV7725 worked the first time. My plan now is to take my OV7670, @igrr's code and the datasheets and try to get the OV7670 working. I'm not sure if others are also working on this task ... if so, I'd like to hear from them/you so that we might collaborate and pool resources on the project.

nkolban avatar Jun 14 '17 04:06 nkolban

Hi, check my repo - I have had the OV7670 working with an earlier copy of igrr's repository. I have found the YUV422 format works without errors in terms of getting a picture, yet I'm still struggling with the correct registers. ov7670_yuv422_qqvga ov7670_yuv422_colorbar_qqvga

tekker avatar Jun 14 '17 08:06 tekker

@Serpent999 Hi, I didn't get to the bottom of this exactly, but after adding the correct ID for the OV7670 (0x76), the camera_probe function detected it OK with the following code (in camera.c):

ESP_LOGD(TAG, "Resetting camera");
gpio_config_t conf = { 0 };
conf.pin_bit_mask = 1LL << config->pin_reset;
conf.mode = GPIO_MODE_OUTPUT;
gpio_config(&conf);

gpio_pulldown_en(config->pin_reset); // ov7670 reqd?
gpio_set_level(config->pin_reset, 0);
delay(10);
gpio_pulldown_dis(config->pin_reset); // ov7670 reqd?
gpio_set_level(config->pin_reset, 1);
delay(10);

ESP_LOGD(TAG, "Searching for camera address");

tekker avatar Jun 14 '17 10:06 tekker

@nkolban Ok, I was just making sure. Yes, let's work on that and get the code stable and working. @tekker, thank you for your help, sir. What are the current issues with the code? I will have to take a look at it later today, due to classes and research work. I will keep you guys updated on any useful findings I make.

Serpent999 avatar Jun 14 '17 17:06 Serpent999

For me, I'm more interested in understanding the structure of I2S and DMA than the actual camera registers. I'd like to understand (so that I can help others understand) the subtleties of I2S and DMA in the ESP32. With that in mind, I plan to try to reverse-engineer the I2S/DMA code with the assistance of the excellent write-up provided by @igrr.

nkolban avatar Jun 14 '17 19:06 nkolban

@nkolban I agree, sir; learning the underlying structure of the I2S and DMA of the ESP32 is an important step in the implementation of the camera code as well as other possible uses. With that knowledge, it should be easier to work on the ESP32.

Serpent999 avatar Jun 14 '17 19:06 Serpent999

@Serpent999 Do you have any thoughts on a way we can collaborate together to understand it in depth and write it up for the benefit of all? I have some notions but am open to all / any plans. If you'd like to correspond with me directly, I can be found at email [email protected]

nkolban avatar Jun 14 '17 19:06 nkolban

Sure sir, I will contact you via email to discuss further by tomorrow.

Serpent999 avatar Jun 14 '17 21:06 Serpent999

Looking at lldesc_t, we see that (as of this date) it is described as:

/* this bitfield is start from the LSB!!! */
typedef struct lldesc_s {
    volatile uint32_t size  :12,
                        length:12,
                        offset: 5, /* h/w reserved 5bit, s/w use it as offset in buffer */
                        sosf  : 1, /* start of sub-frame */
                        eof   : 1, /* end of frame */
                        owner : 1; /* hw or sw */
    volatile uint8_t *buf;       /* point to buffer data */
    union{
        volatile uint32_t empty;
        STAILQ_ENTRY(lldesc_s) qe;  /* pointing to the next desc */
    };
} lldesc_t;

I think I understand qe and buf but the others puzzle me.

  • What are size and length? I can imagine telling the DMA descriptor the size of the buffer, but what then is length (or vice versa)?
  • sosf is defined as "start of sub-frame". What is a "sub-frame"?
  • eof is defined as "end of frame" ... what does this mean? Are we saying that these DMA functions might somehow be knowledgeable about "camera frames"?
  • owner is defined as either hardware or software. What does this mean in this context?

nkolban avatar Jun 15 '17 04:06 nkolban

We will have a description of the DMA hardware, which will include the structure of the DMA descriptor, in the TRM. Until that is released, here are my explanations:

  • size: size of the buffer pointed to by buf member
  • length: number of valid data bytes stored into the buffer. For example, if you create a DMA descriptor with a 1024-byte size, and start receiving data using I2S with RX_EOF_NUM set to 128 (samples), then I2S will store only 128 (samples) * 4 (bytes per sample) = 512 bytes into the DMA buffer, and set the length field to 512.
  • sosf: this field is not used by I2S DMA hardware, it is reserved for other peripherals (such as WiFi MAC).
  • eof: means "end of file", indicates that this DMA descriptor is the last one in the chain. DMA hardware will not advance to the next descriptor pointed to by qe if eof bit is set.
  • owner: indicates who can write to the DMA buffer associated with the descriptor. When set to 0, indicates that DMA buffer is 'owned' by the software, 1 means that DMA buffer is 'owned' by the DMA hardware. DMA hardware will change this bit from 1 to 0 when it is finished writing to the DMA buffer.
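For illustration only, a small helper using these fields might look like this (a sketch based on the semantics described above, not code from the demo):

#include <stddef.h>
#include "rom/lldesc.h"

// Returns how many valid bytes the DMA hardware wrote into this descriptor's
// buffer, or 0 if the hardware still owns it.
static size_t bytes_received(const lldesc_t* desc)
{
    if (desc->owner == 0) {      // hardware has handed the buffer back to software
        return desc->length;     // number of valid data bytes stored by DMA
    }
    return 0;                    // DMA hardware is still filling this buffer
}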

igrr avatar Jun 15 '17 05:06 igrr

@Serpent999 I'm having some trouble getting the RGB565 mode working correctly; it seems really strange - at the moment I can get an image, however the colours are all screwed up. Today I'm attempting to use an ILI9341 to display the image directly - it might make testing a little more efficient as well. (I have had success with connecting OV7670->Arduino DUE->ILI9341, but lost all that code when my laptop was stolen.) Maybe using a TFT might help as a sort of poor man's video RAM - at least until I can get an ESP-PSRAM chip :)

@nkolban I agree, @igrr's project here is an invaluable reference for me as well to learn I2S & DMA, so thank you @igrr for all your help documenting this area of the ESP32, and thank you too @nkolban. It's hard to find good study materials on the internals of DMA in general, and this level of detail on the workings of the ESP32 is fantastic!

Also @nkolban I would like to help if I can, in any way 👍 One idea I might try is using your Telnet example to change OV7670 settings on the fly for testing...

tekker avatar Jun 15 '17 05:06 tekker

@tekker maybe you can find something useful in the feature/rgb_bitmap branch — it adds RGB565 support for the OV7725; I think at least the DMA setup and line filter should be the same for the OV7670.

igrr avatar Jun 15 '17 05:06 igrr

@tekker @Serpent999 @igrr All this material in this issue is GOLD. Do we want to keep posting comments here or do we want to turn this into a forum thread? I'm fine either way and will happily follow the group decision. @tekker Re dynamic tweaks to the OV7670: nice idea. One of the areas I've been tinkering with is embedded web servers with embedded HTML/JavaScript. As we make progress, I'd be happy to provide a "tweaker" for the many registers of the camera in the form of an attractive Web page. We could have the ESP32 serve the page and point the browser at the ESP32. The page would show sections for each of the registers broken out ... where we could tweak to our hearts' content and have REST calls dynamically made to the ESP32 to change the settings.

nkolban avatar Jun 15 '17 13:06 nkolban

The attached is my first draft at taking @igrr's excellent words and ... well ... simply re-writing them. The way I work is to read, practice, and then re-write from memory and quick reference. From there I go back, practice more and attempt to re-use ... as I iterate over this, more material gets added and the existing material gets polished. Since there may be a number of us working in this space, I figured it best to get whatever I have out, in whatever state it is in, so that others may use or comment upon it.

I2S Camera Draft 2017-06-15.pdf

nkolban avatar Jun 16 '17 03:06 nkolban

With regard to run-time configuration of camera registers and an embedded web server, there is a corresponding ticket/RFC in #35.

@nkolban I went through your writeup and found it very well written; the explanation is solid and well structured. Would you be okay with eventually submitting it as an .md file into this repository?

@krzychb please have a look as well, maybe an updated diagram of software/hardware structure (#32) could be a good addition to this text.

igrr avatar Jun 16 '17 04:06 igrr

With regard to posting comments here or in a forum thread — I would prefer keeping the discussion here, potentially opening a new issue which would track improvements to the documentation / description of the I2S DMA part of this demo. There are three interleaving discussions going on here:

  • how does I2S and DMA work? — somewhat related to #32
  • how to get OV7670 to work in this demo?
  • implementing live tweaking of camera registers (web server/telnet) — related to #35

igrr avatar Jun 16 '17 04:06 igrr

Hi @igrr - Thanks ! I have been able to get your excellent feature/rgb_bitmap branch as a working base for OV7670 support, and also have had success adding ILI9341 support, which is very very cool.

The OV7670 is an annoying sensor, but at $3 AUD on AliExpress it's good value - in fact, with an ESP32, it's got to be about one of the cheapest internet webcams on the market! But the register settings are hard to figure out.

I'm having some issues with decoding the RGB565 format - I'm fairly sure it's most likely a matter of figuring out the right register settings.


image1_565

The above image is how an RGB565 picture appears - not so great...


With Test pattern enabled, these are the results:

image3_565

image2_565

At the moment, it is missing the start of the frame - this is most likely due to the framerate?

tekker avatar Jun 17 '17 08:06 tekker

It might have something to do with the way you configure the HREF/VREF signals. Some cameras allow one to select between HSYNC/HREF and VSYNC/VREF. The way the code is set up, we need active-high VREF and HREF signals (although signal polarity can be inverted in the GPIO matrix, if needed). If the camera produces VSYNC instead of VREF, this may be an issue for figuring out the start of frame. The colors also look funny... I can't immediately tell what's wrong, though. If you push your work-in-progress code into some branch, I'll try to have a look next week.
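For reference, inverting a signal on its way into the I2S peripheral is a one-liner in the GPIO matrix setup. A sketch (pin_vsync is a placeholder for whatever GPIO your VSYNC line is wired to):

#include "rom/gpio.h"
#include "soc/gpio_sig_map.h"

// Route the camera's VSYNC GPIO into I2S0 with inversion enabled, so an
// active-low VSYNC from the sensor looks active-high to the peripheral.
static void route_vsync_inverted(int pin_vsync)
{
    gpio_matrix_in(pin_vsync, I2S0I_V_SYNC_IDX, true);
}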

igrr avatar Jun 17 '17 14:06 igrr

I have changed set_pixformat() to experiment with various output modes. The image is stable in YUV format but is distorted in RGB. The following image is the output after setting COM7 to COM7_FMT_GBR422, which is the value 0x00. Is the output from the camera supposed to be in RGB color, or is dma_filter_rgb565 supposed to handle that? The raw default output in YUV is not in full color. Also, what should the timing be, in a general sense, for the RGB format on the OV cameras? I think the datasheet uses a 24 MHz reference and we are at 8. @tekker Could you please tell us what register values you are using? We need to figure out the right combination of clock and output mode registers. rgb565_gbr422_gbra

Serpent999 avatar Jun 18 '17 22:06 Serpent999

The filter function expects the data to be in RGB565 (or maybe BGR565) format. If the camera produces RGB422, you need to tweak some config register to switch to 565. For the OV7725 that was a matter of setting one bit in a mode register.

igrr avatar Jun 19 '17 00:06 igrr

@igrr ok, thank you, I will look for the proper register config. Also, I am trying to zoom out from the default scale. However, changing the scaling register values, SCALING_XSC and SCALING_YSC, does not have any effect.

Serpent999 avatar Jun 19 '17 00:06 Serpent999

Ok, I got the scaling to respond; it turns out the enable register was different from the one in the register header file. The file had the value to modify but not the register to send the value to.

Serpent999 avatar Jun 19 '17 01:06 Serpent999

Hi @igrr, @Serpent999, @nkolban - I have extended the feature/rgb_bitmap branch in a new repo - https://github.com/tekker/esp32-ov7670-hacking - to support displaying the OV7670 on an ILI9341, and streaming BMP using RGB565 / YUV422 as well as JPEG. It's very rough, but it may help someone with the OV7670 - I am using a 32-bit aligned framebuffer to try to conserve memory. Being able to Telnet in and change camera settings while displaying a stream in the browser and on the ILI9341 is really cool. It will only run correctly with "ESP-IDF Pre-release v2.1-rc1" at the moment - it is very tight on memory / stack usage.

tekker avatar Jul 22 '17 12:07 tekker

@tekker Do you mind if I pull your OV7670 changes into this project? Will keep attribution / commit authorship to you.

igrr avatar Nov 16 '17 06:11 igrr

@igrr or anyone that understands igrr's comment (https://github.com/igrr/esp32-cam-demo/issues/29#issuecomment-307871635): Why does each 32-bit I2S sample need to be "padded" with two zero bytes? Why can't we put 4 pixel bytes in each sample instead of only 2 (SM_0A0B_0C0D)?

larsenglund avatar Dec 06 '17 08:12 larsenglund

@larsenglund that's just how the I2S peripheral works. It cannot put 4 input bytes into a single word, unfortunately.

As to the "why", the reason is basically that I2S parallel mode supports up to 16 bits wide input. If the input was 16 bits, then there would not be zeros (i.e. SM_AABB_CCDD instead of SM_0A0B_0C0D).

igrr avatar Dec 06 '17 08:12 igrr

@igrr Aha, I see, thanks! The 16-bit-wide hardware limitation makes perfect sense.

larsenglund avatar Dec 06 '17 08:12 larsenglund

@igrr When XCLK > 10 MHz, is_hs_mode() returns true, and the sampling mode is SM_0A0B_0B0C instead of SM_0A00_0B00, right? However, I receive corrupted JPEG data when XCLK > 10 MHz with an OV2640 camera; I believe it is due to the sampling mode. Do you have any idea how to fix this? Thanks

hopkinskong avatar Dec 19 '17 16:12 hopkinskong

For frequencies above 10 MHz, the way the camera module is connected to the ESP32 matters. For example, I also get occasional errors in JPEG frames at 20 MHz on the WROVER-KIT, which does have a dedicated header for the camera module. But recently I made a small PCB which connects the camera module to a DevKit-C with much shorter signal wires, and that works perfectly even at 20 MHz in JPEG mode.

igrr avatar Dec 19 '17 23:12 igrr

image

I also made a PCB for this. I am using the WROVER modules with PSRAM to store the images. When XCLK > 10 MHz, the JPEG blocks are messed up...

UPDATE: I think it's an issue with SM_0A0B_0B0C. For XCLK = 10.000 MHz, using SM_0A00_0B00 produces images without issues; however, using SM_0A0B_0B0C produces corrupted JPEGs. Notice that XCLK is still 10.000 MHz.

image

hopkinskong avatar Dec 20 '17 12:12 hopkinskong

@hopkinskong It would be better if you could open a separate issue, because this one is related to ov7670 support.

That being said, I have just built the latest master from scratch and ran the code on my camera breakout board. PCLK is 20 MHz. Here's the screengrab, which shows no issues: http://download.igrr.me/ov2640_screencap.mov

igrr avatar Dec 21 '17 09:12 igrr

Please forgive my intrusion - I'm just curious as to the state of OV7670 support in this project. This was a great discussion, and it's now not clear to me whether @igrr did end up pulling @tekker's changes in, as @tekker did not respond. It would be great to consolidate all of this, as the OV7670 is an appealing camera for ESP32 projects.

DavidAntliff avatar Feb 01 '18 03:02 DavidAntliff

Hi all, I'm using an OV7692 camera with the ESP32 WROVER kit. I'm facing a problem configuring HSTART, HSIZE, VSTART, VSIZE and the x & y scaling required for QVGA. Below is the register configuration I tried, but I am getting only a garbage image. Please help me.

i2c.writeRegister(ADDR, REG12, 0b10000000); //all registers default
i2c.writeRegister(ADDR, CLKRC, 0b10000000); //double clock
i2c.writeRegister(ADDR, REG14, 0x40);       //enable auto 50/60Hz detect + exposure timing can be less... + Automatic Gain Ceiling 32x
i2c.writeRegister(ADDR, AECH, 0x01);        //AEC[15:0] = 0x010

i2c.writeRegister(ADDR, REG0E, 0x00);       // Output drive capability 2x
i2c.writeRegister(ADDR, REG12, 0x06);       //RGB RGB565
i2c.writeRegister(ADDR, REG61, 0x20);       //8-bit output pattern

//Scaling parameter can be adjusted manually, H & V downsample by 4
i2c.writeRegister(ADDR, REGC3, (0x40 | 0x0A));

//set scale_v_en & scale_h_en
i2c.writeRegister(ADDR, REG81, (0x41 | 0x0C));

//divide PCLK by 2, PCLK delay option
i2c.writeRegister(ADDR, REG5E, 0x12); //REG3E = 0x20, REG5E = 0x22

//X & Y scaling: xsc_man = 0x3a, ysc_man = 0x35
i2c.writeRegister(ADDR, REGC4, 0x00);
i2c.writeRegister(ADDR, REGC5, 0x3A);
i2c.writeRegister(ADDR, REGC6, 0x00);
i2c.writeRegister(ADDR, REGC7, 0x35);

i2c.writeRegister(ADDR, HSTART, 0xFC); //HSTART = (0x3F << 2)
i2c.writeRegister(ADDR, HSIZE, 0xA0);  //2 * HSIZE = (0x50 << 2)
i2c.writeRegister(ADDR, VSTART, 0x06); //VSTART = (0x03 << 1)
i2c.writeRegister(ADDR, VSIZE, 0x3C);  //2 * VSIZE = (0x78 << 1)

//color matrix values
i2c.writeRegister(ADDR, REGBB, 0x80 + 0x20 * 0);
i2c.writeRegister(ADDR, REGBC, 0x80 + 0x20 * 0);
i2c.writeRegister(ADDR, REGBD, 0x00);
i2c.writeRegister(ADDR, REGBE, 0x22 + (0x11 * 0) / 2);
i2c.writeRegister(ADDR, REGBF, 0x5e + (0x2f * 0) / 2);
i2c.writeRegister(ADDR, REGC0, 0x80 + 0x20 * 0);

i2c.writeRegister(ADDR, REGC1, 0x9e); //matrix signs
i2c.writeRegister(ADDR, REG13, 0xE7); //AWB on
i2c.writeRegister(ADDR, REG8E, 0x80 | 0x12); // Simple AWB

kshitijgupta1991 avatar Mar 27 '18 04:03 kshitijgupta1991

@igrr, sorry for my intrusion. I would like to confirm 3 points here:

1 - Regarding your comment about the DMA descriptor:

length: number of valid data bytes stored into the buffer. For example, if you create a DMA descriptor with a 1024-byte size, and start receiving data using I2S with RX_EOF_NUM set to 128 (samples), then I2S will store only 128 (samples) * 4 (bytes per sample) = 512 bytes into the DMA buffer, and set the length field to 512.

I want to confirm the "*4" here. Do you assume that the sampling mode is SM_0A00_0B00, where 1 byte from I2S is expanded into 4 bytes in the FIFO? If I use another sampling mode like SM_0A0B_0C0D, then 2 bytes from I2S are expanded into 4 bytes in the FIFO. As a result, the length field of the DMA descriptor would be set to 128 * 2 = 256 bytes in the DMA buffer. Is that correct?

2 - About the "owner" field in the DMA descriptor:

owner: indicates who can write to the DMA buffer associated with the descriptor. When set to 0, indicates that DMA buffer is 'owned' by the software, 1 means that DMA buffer is 'owned' by the DMA hardware. DMA hardware will change this bit from 1 to 0 when it is finished writing to the DMA buffer.

I understand that your code allocates enough DMA buffers to receive one line of a camera frame. After the DMA hardware is done with all the DMA buffers, all the "owner" fields of the DMA descriptors will have been set to 0 by the DMA hardware.

Since the DMA buffers form a ring buffer, there is another use case where I want to use a small number of DMA buffers to receive a large amount of data (maybe I do not want to allocate too many DMA buffers to receive all the data). E.g., it would take 8 DMA buffers of 4092 bytes to store the data, and I only allocate 4 DMA buffers of 4092 bytes. Should I then set the "owner" field of a filled DMA descriptor back to 1 after copying the data out of it, so that the DMA hardware can use that descriptor again when it wraps around from the 4th DMA descriptor? If that is possible, is there any side effect of doing so?

BTW, I see nowhere in the code where the "owner" field of the DMA descriptors is set back to 1 to receive the next line of a frame. I am confused about the usage of the "owner" field from your comment, since the DMA hardware will auto-clear it to 0 after writing, but the software does not set it back to 1 for the next RX cycle. Can you point me to the relevant code?

3 - In your code, in the dma_desc_init() function, the total dma_sample_count is calculated as below:

dma_sample_count += pd->length/4;

Does it assume the sampling mode is SM_0A00_0B00, where 4 bytes in the FIFO record 1 I2S RX byte?

Can you confirm my unclear points?

Thanks, HuyK

HuyKhac avatar Jul 04 '18 03:07 HuyKhac

@igrr Sorry for asking so many questions :) One more question, about the DMA descriptor allocation in the cam_dma_init() function:

size_t dma_desc_count = dma_per_line * 4;

It seems that the code allocates 4 times more than the required number of DMA buffers to store a line of a frame, because dma_desc_count will be used as the number of DMA buffers and DMA descriptors to be created elsewhere in the cam_dma_init() function.

CameraInfo.dma_buf = (dma_elem_t**) malloc(sizeof(dma_elem_t*) * dma_desc_count);
CameraInfo.dma_desc = (lldesc_t*) malloc(sizeof(lldesc_t) * dma_desc_count);

And when it allocates the DMA buffers according to dma_desc_count, it calculates the total DMA sample count value in order to set EOF_RX_NUM for the RX cycles.

dma_sample_count += pd->length / 4;

Therefore, I see it does not calculate exactly the number of samples per line for EOF_RX_NUM, but 4 times more. If so, this code is not meaningful, as its intention is to calculate (EOF_RX_NUM - 1) in SM_0A0B_0B0C sampling mode:

if (s_state->sampling_mode == SM_0A0B_0B0C &&
    (i + 1) % dma_per_line == 0) {
    pd->length -= 4;
}

Is my thinking correct? If so, why do you need to create extra DMA buffers in the ring? Can you explain the logic here?

Thanks, HuyK

HuyKhac avatar Jul 04 '18 06:07 HuyKhac

I've been working for a while with the code from the ESP32_I2S project on GitHub. My cheap OV7670 camera works okay. I think I now understand most of the code pretty decently. Here are some observations:

I2S was designed for audio device interconnection: 16-bit audio samples, two channels. The hardware that reads and writes the camera output into memory is therefore based on 16 bits at a time. But the camera only has 8 data lines. So every time the camera delivers a byte of data it strobes PCLK, and the DMA receives 16 bits, 8 of them padding. So every second byte in the DMA buffer is zero padding. The GPIOs are soft-connected to the I2S hardware, so here is the code that sets up the GPIO matrix to route 16 (not 8) pins to the I2S hardware:

// Route input GPIOs to I2S peripheral using GPIO matrix, last parameter is invert
gpio_matrix_in(D0, I2S0I_DATA_IN0_IDX, false);
gpio_matrix_in(D1, I2S0I_DATA_IN1_IDX, false);
gpio_matrix_in(D2, I2S0I_DATA_IN2_IDX, false);
gpio_matrix_in(D3, I2S0I_DATA_IN3_IDX, false);
gpio_matrix_in(D4, I2S0I_DATA_IN4_IDX, false);
gpio_matrix_in(D5, I2S0I_DATA_IN5_IDX, false);
gpio_matrix_in(D6, I2S0I_DATA_IN6_IDX, false);
gpio_matrix_in(D7, I2S0I_DATA_IN7_IDX, false);
gpio_matrix_in(0x30, I2S0I_DATA_IN8_IDX, false);
gpio_matrix_in(0x30, I2S0I_DATA_IN9_IDX, false);
gpio_matrix_in(0x30, I2S0I_DATA_IN10_IDX, false);
gpio_matrix_in(0x30, I2S0I_DATA_IN11_IDX, false);
gpio_matrix_in(0x30, I2S0I_DATA_IN12_IDX, false);
gpio_matrix_in(0x30, I2S0I_DATA_IN13_IDX, false);
gpio_matrix_in(0x30, I2S0I_DATA_IN14_IDX, false);
gpio_matrix_in(0x30, I2S0I_DATA_IN15_IDX, false);

gpio_matrix_in(VSYNC, I2S0I_V_SYNC_IDX, true);
gpio_matrix_in(0x38, I2S0I_H_SYNC_IDX, false); //0x30 sends 0, 0x38 sends 1
gpio_matrix_in(HREF, I2S0I_H_ENABLE_IDX, false);
gpio_matrix_in(PCLK, I2S0I_WS_IN_IDX, false);

The first 8 values are real GPIO pin numbers. Then there are 8 "fake" values of 0x30 to route a dummy 0 line to the I2S hardware. The I2S hardware gets another "fake" pin, 0x38 (constant 1), routed to a signal we want to strap high.

If you fiddle and change some of the 0x30 routings to 0x38, you can put other bit patterns into the padding bytes instead of zeros. If you poke around in the data when the interrupt occurs, you can see the data exactly. (I just used Serial.print to look at one padding byte per frame, but if you print too much, the interrupt service routine gets too slow and your image gets screwed up badly.)

The second observation that may be helpful is that if you look at the WROVER kit schematic, the camera port and the LCD screen use many of the same GPIO pins. So you can run the LCD demo examples just fine, until you plug in the camera and you have contention for those lines. The camera kills the LCD.

I've got a partial workaround that I'm still trying to refine. The key thing is that the OV7670 camera has a "sleep" mode that puts its output pins into tri-state mode, so the LCD can spring to life again. So in principle, it should be possible to interleave camera and LCD usage on the WROVER. But with the pin allocation on the WROVER kit you can't have both running simultaneously.

cspwcspw avatar Aug 11 '18 16:08 cspwcspw

VSYNC and scanline interrupts.

In 160 x 120 mode, say, the camera produces 120 scanlines and then a VSYNC. For each scanline, the I2S engine sees two bytes per pixel (RGB565). It counts the expected number of samples (bytes) before the DMA buffer gets full. When it has read the required number of samples, it generates the interrupt.

Unlike igrr, I'm running the I2S engine continuously, rather than restarting the engine afresh for each new frame. A fresh restart guarantees fresh synchronization.

So I'm having difficulty understanding what occurs (or what I should do) if some samples occasionally get "lost". If I push up my XCLK frequency and have too-long wires to the camera, the signal visibly degrades on an oscilloscope. At some point the I2S engine misses a sample, so now the order of the hi/lo bytes of the pixel is reversed, and the image naturally goes badly wrong. But in my case, it stays wrong because I'm not resetting the engine at the start of each frame.

Let's assume I've lost one or two samples, so the DMA buffer count has fallen behind where the camera thinks it is. Now the end-of-line HREF occurs. Can I still get an interrupt for the line, even though the DMA buffer is not full? Will the I2S engine then start a new DMA buffer on the next scanline?

A similar problem occurs with VSYNC. My camera produces 120 scanlines, then a VSYNC. Suppose for whatever reason the I2S hardware doesn't think its DMA buffer is full, so it has not yet delivered the last scanline.

Is there a best practice, or a way to say "the camera signals take priority"? If I2S sees an HREF or a VSYNC when it still thinks it needs more samples, can we force it to deliver the "partial" block anyway?

Thanks Peter

cspwcspw avatar Sep 03 '18 10:09 cspwcspw

Hello sir, I want to use my OV7670 with a WEMOS LOLIN ESP32 board as shown in bitluni's lab https://github.com/bitluni/ESP32CameraI2S but it does not seem to build on my PC:

In file included from sketch\I2SCamera.h:17:0,

             from sketch\I2SCamera.cpp:6:

sketch\DMABuffer.h: In constructor 'DMABuffer::DMABuffer(int)':

DMABuffer.h:10: error: 'malloc' was not declared in this scope

 buffer = (unsigned char *)malloc(bytes);

                                       ^

exit status 1: 'malloc' was not declared in this scope

Can anyone help me out?

shikharuniyal avatar Jan 12 '19 16:01 shikharuniyal


Hi,

I have the same issue and no solution.

BeatArnet avatar Jan 21 '19 19:01 BeatArnet

malloc() is not "built-in" to C++, but is defined in stdlib.h, so you need to have this line somewhere, probably at the top of the file giving trouble.

#include <stdlib.h>

cspwcspw avatar Jan 22 '19 13:01 cspwcspw

Yeah, problem solved. Thanks for help.

BeatArnet avatar Jan 22 '19 17:01 BeatArnet

I am having a problem with video streaming: it is showing a purple color instead of natural colors. 51467507_381909335930451_9202293316618551296_n 51645962_616072702197477_1934994079515410432_n How do I fix that?

espiot328266 avatar Feb 07 '19 16:02 espiot328266

Greetings to all.

I have been struggling for a couple of days trying to retrieve an image from the OV7670, but with no result. I connect to the sensor and configure it, and the rest of the initialization is pretty much the same as in the original project or similar ones.

My code has no compilation errors; however, when it comes to getting an image using a web browser, the code stalls waiting for a frame. I have studied the code and the I2S interface functionality thoroughly, and it seems like the I2S interrupt never occurs. This is why the DMA buffers are never filled with the image data and, thus, the semaphore is never released.

I cannot figure out why this happens. I would appreciate any ideas or suggestions.

Screenshot from 2019-06-27 11-27-09

log.txt

rykovv avatar Jun 27 '19 15:06 rykovv

One possible thing to try: the camera module needs an external clock which the ESP32 must supply; without that, you will not get frames delivered. So this is one avenue to investigate that could explain your problem.

If you have an oscilloscope, you need to check that the clock output signal is being generated and is connected to the correct pin on the camera module. (Beware, ESP32 pins 34 and above are input-only, so you cannot use those to output the clock.) Once the camera gets the clock, you should be able to see the camera returning pixel clock and VSYNC signals to the ESP32. Without those, you'll never get the interrupts you need.
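For reference, generating XCLK from the ESP32 with the LEDC peripheral looks roughly like this (a sketch; the pin and frequency are placeholders, and the struct field names follow recent ESP-IDF releases — older releases used bit_num instead of duty_resolution):

#include "driver/ledc.h"

// Sketch: output a square wave on the camera's XCLK pin using LEDC.
static esp_err_t camera_enable_xclk(int pin_xclk, uint32_t freq_hz)
{
    ledc_timer_config_t timer_conf = {
        .speed_mode = LEDC_HIGH_SPEED_MODE,
        .duty_resolution = LEDC_TIMER_1_BIT,   // 1 bit of resolution -> 50% duty
        .timer_num = LEDC_TIMER_0,
        .freq_hz = freq_hz,
    };
    esp_err_t err = ledc_timer_config(&timer_conf);
    if (err != ESP_OK) {
        return err;
    }

    ledc_channel_config_t ch_conf = {
        .gpio_num = pin_xclk,
        .speed_mode = LEDC_HIGH_SPEED_MODE,
        .channel = LEDC_CHANNEL_0,
        .timer_sel = LEDC_TIMER_0,
        .duty = 1,                             // half of the 2-count period
        .hpoint = 0,
    };
    return ledc_channel_config(&ch_conf);
}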

cspwcspw avatar Jun 27 '19 18:06 cspwcspw

Thank you @cspwcspw, I really appreciate your help.

The camera module needs an external clock which the ESP32 must supply: without that, you will not get frames delivered. So this is one avenue to investigate that could explain your problem.

I use a 20 MHz external clock, generated through LEDC (channel 0) on the camera's XCLK pin (currently D21). I configure the camera's CLKRC register as you did in your project, and I also tried the "use external clock directly (no clock pre-scale available)" option by activating bit[6] of the same register, but that gave no result.

Once the camera gets the clock, you should be able to see the camera returning pixel clock and VSYNC signals to the ESP32.

The interesting point is that when I enable the VSYNC interrupt, it occurs constantly and I can see that. Therefore, the camera is sending image frames, as @igrr described. However, the MCU does not capture the data.

I would be happy to have an oscilloscope on hand to check the pins, but I don't. The only clue I have now is that something is wrong with I2S0, because, as I understand it, when a data sample is transmitted, the I2S interrupt (i2s_isr) occurs, the sample is stored, and the process continues until the whole image is received. But i2s_isr never occurs.

rykovv avatar Jun 27 '19 22:06 rykovv

Mmm. Don't know what to suggest next. And I've torn down my hardware and given the oscilloscope I used back to its owner.

But my understanding of the i2s interrupt is a bit different: you should only get one interrupt per scanline, not one per sample (the ESP32 would never be able to handle interrupts that fast, one per pixel).

It has been a while now so my recall might be fuzzy, but essentially the I2S hardware with DMA manages the transfer of pixel data from the camera into DMA memory, latching each 8-bit half-pixel and writing it to the next 16 bits of DMA memory (yes, half the bytes are zero because it is an 8-bit camera bus feeding into a 16-bit I2S hardware latch). That all happens without any interrupts. The latching is triggered by PCLK coming from the camera (so check that wiring, and that the GPIO pin is correctly configured for INPUT, etc.).

The I2S hardware runs in the background, and waits until it has received the correct (pre-programmed) number of samples before it interrupts the ESP32. (i.e. this interrupt is count-based, from the i2S hardware, and means "one scan line is ready in memory"). The ESP32 interrupt service routine (ISR) counts the scan lines, and copies data out of the DMA buffer into successive locations in its frame buffer (skipping every second zero byte in the DMA buffer, of course). From counting scan-line interrupts it knows when the whole frame has been received.

The VSYNC and its ISR play a fairly minor role in the logic: the XCLK you generate runs continuously, so from the camera's point of view it is always running, busy delivering pixel data and control signals like VSYNC, PCLK, HREF. When ESP32 frame capture is initiated, the camera could be half-way through its current frame, so the ESP32 waits for VSYNC before enabling the i2S hardware capture. This ensures the next pixel captured by i2S hardware is indeed the very first pixel of the first scan line of the new frame.

Another thing I would check is the PWDN line on the camera. It can be used to tri-state or suppress some camera control lines and pixel data - I'm not sure if VSYNC still runs when the camera is "tri-stated", but if your I2S hardware is never getting PCLK it will never latch any data, and you will never get the interrupt firing.

cspwcspw avatar Jun 28 '19 04:06 cspwcspw

@cspwcspw thank you again for your explanation. It really cleared things up.

The next thing I am going to do is to find the oscilloscope and analyze the issue, particularly the PCLK pin activity.

The VSYNC and its ISR play a fairly minor role in the logic:

That's also true. After digging a little bit more into the datasheet, I found that the diagram below confirms your words.

Screenshot from 2019-06-28 17-36-50

Another thing I would also check is the PWDN line on the camera.

I had no information about it. I tried to pull this pin down with a 10K resistor and the camera did not respond.

I will research more about these points and keep you posted on further steps.

Thank you @cspwcspw

rykovv avatar Jun 28 '19 15:06 rykovv

I found it easier to change the line of code that sets up the clock speed while debugging. It slows everything down; 20 MHz was a bit fast for the oscilloscope I used. More importantly, fast signals don't work well over long wires. The datasheet says 10 MHz is the lowest XCLK speed, but I got down to about 8 MHz successfully.

You can try this too: if you #include "Arduino.h" at the top of I2SCamera.cpp, you will be able to use the standard Serial.print("xxx") inside the i2sInterrupt routine. See if any interrupts are occurring. (Serial printing is a slow process and you shouldn't be printing in the middle of a time-critical interrupt service routine, but you can use this anyway to try to learn something useful.)

You could also sprinkle some print statements elsewhere to attempt to debug things, i.e. in the VSYNC ISR, or in the bit of logic that tries to determine whether enough scan lines have been received to stop the frame capture and return to the main program.

cspwcspw avatar Jun 30 '19 06:06 cspwcspw

@cspwcspw,

Thank you for your suggestions. Finally, I solved the issue. It was incorrect VSYNC synchronization: I was expecting VSYNC to be negative, but it was configured positive. Properly changing the COM10 register or the gpio_matrix_in call allowed image retrieval.

Now I am struggling with a color adjustment issue. The image I get seems to have a sepia effect. I suppose the MTXN and GAMN registers must be adjusted properly.

cap

rykovv avatar Jul 05 '19 14:07 rykovv

Good news.

There are some other camera registers that allow one to determine the format of the colour stream pixels and the order in which the RGB components arrive at the ESP32. See e.g. the COM30 register. I always used RGB565, and there was also an OV7670 register option, I think, to send the channels in BGR order rather than RGB order. The OV7670 seems quite flexible, because it was, I think, a camera for integrating into early cell phones. I found it useful to get a red, a green and a blue card to put in front of the camera, then look at which bits get set in the 16-bit frame stream to make sure that what you think is the red channel is the one that gets higher values when you're looking at a red object, etc. If a blue card makes your image increase its redness and vice versa, ....

Wrong displays might also be because the rendering / display side of things gets it wrong. Packing those bytes into a bitmap in that code uses a fixed bitmap header that defines the bitmap format, including the sizes and the order of the RGB channels. So perhaps you can completely ignore the camera and just manually pack a stream of bytes for rendering a test pattern (all red, all green, or all blue), etc. That can help you debug whether your bitmap header and the rendering in the browser are all as you expect them to be.

cspwcspw avatar Jul 06 '19 13:07 cspwcspw

@cspwcspw

Those are really useful tips for debugging color correctness. Thank you for your valuable suggestions! I will keep you posted.

rykovv avatar Jul 07 '19 18:07 rykovv

Hi, I did a port of the driver for the OV7670. The base code was the IDF camera support; if you wish, I can send you the source code - please contact me.

jjsch-dev avatar Sep 25 '19 13:09 jjsch-dev

I finally got it working, at least in the logs. The camera runs with QQVGA size only (.frame_size = FRAMESIZE_QQVGA). I used your fork with OV7670 support - https://github.com/jjsch-dev/esp32-cam-demo

D (960) camera: Setting frame size to 160x120
D (970) ov7760: reset reg 0C, W(04) R(04)
D (970) ov7760: reset reg 3E, W(1A) R(1A)
D (970) ov7760: reset reg 70, W(3A) R(3A)
D (980) ov7760: reset reg 71, W(35) R(35)
D (980) ov7760: reset reg 72, W(22) R(22)
D (990) ov7760: reset reg 73, W(F2) R(F2)
D (990) ov7760: reset reg A2, W(02) R(02)
D (1000) ov7760: reset reg 17, W(13) R(13)
D (1000) ov7760: reset reg 18, W(01) R(01)
D (1000) ov7760: reset reg 32, W(36) R(36)
D (1010) ov7760: reset reg 19, W(02) R(02)
D (1010) ov7760: reset reg 1A, W(7A) R(7A)
D (1050) ov7760: reset reg 12, W(04) R(04)
D (1050) ov7760: reset reg 8C, W(00) R(00)
D (1050) ov7760: reset reg 04, W(00) R(00)
D (1050) ov7760: reset reg 40, W(D0) R(D0)
D (1050) ov7760: reset reg 1E, W(22) R(22)
D (1060) ov7760: reset reg 14, W(6A) R(6A)
D (1060) ov7760: reset reg 4F, W(B3) R(B3)
D (1060) ov7760: reset reg 50, W(B3) R(B3)
D (1070) ov7760: reset reg 51, W(00) R(00)
D (1070) ov7760: reset reg 52, W(3D) R(3D)
D (1080) ov7760: reset reg 53, W(A7) R(A7)
D (1080) ov7760: reset reg 54, W(E4) R(E4)
D (1080) ov7760: reset reg 3D, W(40) R(40)
V (1130) camera: DMA desc  0: 640 640 0 1 1 1 0x3ffb3cf8 0x3ffb3cd0
V (1130) camera: DMA desc  1: 640 640 0 1 1 1 0x3ffb6df0 0x3ffb3cdc
V (1130) camera: DMA desc  2: 640 640 0 1 1 1 0x3ffb7074 0x3ffb3ce8
V (1140) camera: DMA desc  3: 640 640 0 1 1 1 0x3ffb72f8 0x3ffb3cc4
V (1140) camera: Waiting for negative edge on VSYNC
V (1180) camera: Got VSYNC
I (1230) Camera-sample: fb->width = 160 , fb->height = 120 , fb->format = 0, fb->len = 57600 

app-z avatar Nov 12 '19 20:11 app-z

@app-z good job; if you have problems, please let me know. I also ported the QR recognition demo of @donny681 to esp_idf/esp32_camera with support for the OV7670 sensor: https://github.com/jjsch-dev/ESP32_CAMERA_QR

jjsch-dev avatar Nov 13 '19 12:11 jjsch-dev