
Lens shading control

Open rwb27 opened this issue 6 years ago • 57 comments

This PR adds:

  • Updates to picamera.PiCamera that:
    • Make PiCamera.analog_gain writeable
    • Make PiCamera.digital_gain writeable
    • Add a new property PiCamera.lens_shading_table that allows setting of the camera's lens shading compensation table.
  • Requirements to enable the above features:
    • A Python header conversion for user_vcsm.h and an object-oriented wrapper in the style of mmalobj that makes it possible to work with VideoCore shared memory from Python
    • Updates to the mmal library with the new constants, added to the userland code in late 2017, that enable setting the gains directly and manipulating lens shading correction

The module will run fine with older versions of the userland code, but will throw an exception if you try to set analog or digital gain, or use the lens shading table. I guess that makes it a "soft" dependency? The features were introduced late 2017 in a commit.

I thought passing in the lens shading table as a numpy array made good sense, but I have been fairly careful to avoid introducing any hard dependencies on numpy, having read the docs on picamera.array and assumed that this would be desirable.

I have tried to keep things like docstrings and code style consistent, but please do say if I can tidy up my proposed changes.

rwb27 avatar Feb 09 '18 17:02 rwb27

PS this includes the changes in my other PR #463 so I will close it now.

rwb27 avatar Feb 09 '18 17:02 rwb27

@rwb27 thank you so much for putting this up! I was just looking for something exactly like this. Is there any place you could show an example of loading in a lens shading table and initializing the camera with it? It would be helpful to see the format in which the lens shading table needs to be loaded and passed in.

dhruvp avatar Feb 17 '18 00:02 dhruvp

No problem. I have some code that does exactly that as part of my microscope control scripts but I will try to chop it out into a stand-alone script.

The basic principle is quite simple though: the array should be a 3-dimensional numpy array with shape (4, (h+1)//64, (w+1)//64), where w, h are the width and height of the camera's full resolution. I've not extensively tested how this varies with video mode; I have always just used the maximum resolution, i.e. 3280x2464 for the camera module v2. The 4 channels correspond to the Red, Green1, Green2, and Blue gains; green appears twice because there are two green pixels per unit cell in the Bayer pattern. The other two dimensions correspond to position on the image - NB it's height then width, not the other way around.
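As a quick sketch of the shape calculation above (the canonical value comes from the private `_lens_shading_table_shape()` helper, so treat this as an illustration only):

```python
# Sketch: expected lens shading table shape, using the formula above.
# w, h are the full-resolution width/height of the sensor (v2 module here).
w, h = 3280, 2464

# (channels, grid height, grid width) - note height comes before width
shape = (4, (h + 1) // 64, (w + 1) // 64)
print(shape)  # → (4, 38, 51)
```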

You can either pass your numpy array to the camera's constructor (cam = picamera.PiCamera(lens_shading_table=myarray)) or simply set cam.lens_shading_table to the array. Note that doing the latter reinitialises the camera (like changing sensor_mode or resolution) so the constructor method is more efficient.

A complete example is below. This will set the camera's lens shading table to be flat (i.e. unity gain everywhere).

from picamera import PiCamera
import numpy as np
import time

with PiCamera() as cam:
    lst_shape = cam._lens_shading_table_shape()

lst = np.zeros(lst_shape, dtype=np.uint8)
lst[...] = 32 # NB 32 corresponds to unity gain

with PiCamera(lens_shading_table=lst) as cam:
    cam.start_preview()
    time.sleep(5)
    cam.stop_preview()

I should probably put this in the docs somewhere...

rwb27 avatar Feb 19 '18 10:02 rwb27

This is amazing - thank you so much for putting all this together. As a last clarification, are you sure the channel order should be [R, G1, G2, B]? I was looking through userland's lens_analyze script and it seems that script outputs in the order of [B, Gb2, Gb1, R]. At least that's what it looks like in my ls_table.h file after running their script.

Thanks!

dhruvp avatar Feb 19 '18 21:02 dhruvp

Hmm, you may be correct there - that would explain a few things. I think the middle ones are probably both green, but I may have R and B swapped; it's possible that my code that generates the correction from a raw image has the channels swapped somewhere else. If you're able to test it before I am, do let me know. Bear in mind that white balance is applied after the shading table, so it's not quite as simple as just changing the average values for different channels.

rwb27 avatar Feb 20 '18 00:02 rwb27

Hi Richard,

I ended up trying it as your original post suggested [R, G1, G2, B] and it worked beautifully! Thanks for putting this together and let me know if there's any way I can help.

Dhruv


dhruvp avatar Feb 20 '18 18:02 dhruvp

Hello Richard,

I find your Lens shading control extremely useful, problem is that I'm not an expert in programming and I'm not able to follow your requirements to enable it.

Would it be possible to get a tutorial on how to install it? Is there a package I can download and install?

Thanks, Marc

quetzacoal avatar Jun 21 '18 12:06 quetzacoal

Hi Marc, that's a good point - I've tried to keep the fork "clean" to make it easy to pull back into the main PiCamera release. I do, however, have better instructions for how to install the software for the OpenFlexure Microscope which includes installing this fork. In short, you can install it with:

sudo pip install https://github.com/rwb27/picamera/archive/lens-shading.zip

The only requirement you should need to upgrade is the “userland” libraries on your Raspberry Pi, which you can do using the rpi-update command. However, the version that ships with the latest Raspbian image is already new enough, so if burning a new SD card is simpler, you can just do that. If you are getting an error when you run the code above relating to _lens_shading_table_shape() it is unlikely to be due to missing requirements – that suggests to me that the module hasn’t been installed properly. Perhaps you could try the command above and check it completes successfully - with any luck that should solve the problem...

rwb27 avatar Jun 21 '18 12:06 rwb27

Oh, and while I'm here, for those of you interested in calibrating a camera: I've now written a closed-loop calibration script that works much better than my first attempt (which ported 6by9's C code more or less directly). I guess there must be something nonlinear happening in the shading compensation - I haven't figured out what it is, but 3-4 cycles of trying a correction function and tweaking it seems to fix things. It's currently on a branch, but I'll most likely merge it into master soon; here's a link to the recalibration script.

rwb27 avatar Jun 21 '18 12:06 rwb27

Incredible! I managed to install your OpenFlexure microscope control with your installation guide. I also ran one of your examples and worked perfectly. Now I was trying to use your recalibration script but it's telling me I need the microscope library... can I find it in one of your repositories or should I look somewhere else? Thanks

quetzacoal avatar Jun 22 '18 11:06 quetzacoal

Excellent, glad that worked! If you've installed the openflexure_microscope library, it's best to run it from the command line. It will try to talk to a motor controller on the serial port by default, but there's a command line flag to turn that off. You can use:

  • openflexure_microscope --no_stage to run the camera with manual control of gain, exposure speed,...
  • openflexure_microscope --recalibrate to recalibrate the lens shading table so that the image is uniform and white. If that doesn't work (probably because the command line entry points weren't installed), try replacing openflexure_microscope with python -m openflexure_microscope.

If you are running the Python script directly, it might get confused about relative imports (because it's designed to be part of the module) - that is probably where the error about the microscope library comes from (it is in microscope.py in the openflexure_microscope module that you have already installed).

I should probably figure out a way to crop out the camera-related parts of this, but if you look in the relevant Python files you can probably figure out what's going on - or just use it through the openflexure_microscope module if that's easier. The important point to understand is that the recalibration routine saves a file (microscope_settings.npz) in the current directory, and that is loaded by default to set up the microscope. You can open that file with numpy to inspect its contents; the lens shading table will be in there.
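To illustrate the save/inspect round trip (the key name `lens_shading_table` is an assumption for illustration here; check `settings.files` for the actual keys saved by your version):

```python
import numpy as np

# Save a flat table the way the recalibration routine saves its settings
# (the key name is assumed for illustration), then load and inspect it.
lst = np.full((4, 38, 51), 32, dtype=np.uint8)  # unity gain, v2 grid size
np.savez("microscope_settings.npz", lens_shading_table=lst)

settings = np.load("microscope_settings.npz")
print(settings.files)            # names of the arrays stored in the file
table = settings["lens_shading_table"]
print(table.shape, table.dtype)  # → (4, 38, 51) uint8
```

The loaded table can then be passed straight to `PiCamera(lens_shading_table=table)`.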

Hope that helps...

rwb27 avatar Jun 22 '18 14:06 rwb27

Ok, I understood everything now. The program works even better than I expected!

I don't know how can I repay you, thanks!

quetzacoal avatar Jun 22 '18 15:06 quetzacoal

Hi!

First of all, thanks for your work - I successfully tried it on our Raspberry Pi, but I have some issues.

We would like to use the Raspberry Pi 3 with PiCamera v2.1 on a microscope with a simple C-mount adapter lens.

I used your calibrator, which created the lens shading table, and it works well with white and black colored items. If I put red, yellow, or other colored items under the microscope lens, the vignetting issue comes back, so only a circle in the middle of the picture shows the correct color. I checked your recalibrate.py code to find the problem, but it looks good.

Here you can see sample pictures: https://drive.google.com/open?id=16pK5cAoHu9MCvlQo3EDlKEkGH-ooF0WG

I hope that maybe you can help me, or maybe you had the same problem.

Best regards, Zoltán

zbarna avatar Sep 04 '18 19:09 zbarna

Hi Zoltan, It’s great to hear this has been useful to you! The short answer is that a more sophisticated correction is needed because of the Bayer filter. The sensor has been designed for a short focal length lens, which means that while light is incident perpendicular to the sensor in the middle, it hits the edges of the sensor at an angle (imagine all the rays of light coming from a point a few mm in front of the sensor). That means that towards the edge of the sensor, the Bayer filter must be shifted relative to the pixels, in order that the light passes through the right filter before it’s detected. Using the camera in e.g. a microscope usually means the light is at normal incidence across the whole sensor, so the Bayer filter doesn’t match up properly with the pixels any more.

The most obvious effect is that the image gets dim at the edges, which can be corrected by multiplying by a table of gains for each pixel. That is what the lens shading correction does. The next level is to consider that some blue light leaks through onto green pixels, and so on for the other colour channels, and this effect gets worse towards the edges of the sensor. Unfortunately, that can only be corrected using a 3x3 matrix (the lens shading table corresponds to the 3 diagonal elements of that matrix).

I mostly see this effect as a decrease in saturation towards the edge of the image, though if you saturate the sensor it leads to weirder effects! I don’t think there is an easy way to fix this without postprocessing the images, unless you can extend the GPU processing somehow. I may at some point start to work on a postprocessing script and calibration routine; if you are interested in helping and/or testing it, let me know!
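A toy illustration of why a per-channel gain table cannot fix this (this is a sketch of the idea, not code from the PR - the crosstalk matrix would really vary with position across the sensor):

```python
import numpy as np

# Toy model: at the edge of the sensor, 20% of red leaks into green and
# vice versa. The numbers are made up for illustration.
M_edge = np.array([[0.8, 0.2, 0.0],
                   [0.2, 0.8, 0.0],
                   [0.0, 0.0, 1.0]])

pure_red = np.array([255.0, 0.0, 0.0])
observed = M_edge @ pure_red          # desaturated: [204, 51, 0]

# A per-channel gain (a diagonal matrix, i.e. the lens shading table)
# can only rescale each channel; it can never remove the green leakage.
# Inverting the full 3x3 matrix can:
restored = np.linalg.inv(M_edge) @ observed
print(np.round(restored))             # → [255, 0, 0]
```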

rwb27 avatar Sep 04 '18 22:09 rwb27

Hi Richard!

Thanks for your deep explanation!

I also had a conversation about this issue here, and I linked your answer there - I hope you don't mind: https://www.raspberrypi.org/forums/viewtopic.php?f=43&t=190586&p=1361623#p1361623

It sounds reasonable, and unfortunately means that it's very difficult to calibrate the lens shading matrix :( .

Unfortunately the postprocessing solution is not enough for me, because I need the correct colors on the preview picture as well, not only after capturing the image. I think extending the GPU processing is not so easy, and it might slow down the camera :S .

I'm really disappointed about this now; I thought that with a simple calibration I could use different lenses with my picamera.

What's the plan? When do you start to work on your script to solve this?

Best regards, Zoltán

zbarna avatar Sep 05 '18 07:09 zbarna

Hello Zoltán,

Something I can also recommend is to go for the PiCamera v1.3; it has way less crosstalk, and the vignetting effect is consequently smaller.

I guess Richard's software also works with this model.

Good luck,

Marc

quetzacoal avatar Sep 05 '18 09:09 quetzacoal

Interesting, perhaps the older module doesn’t have the same optimisations (lenslets and offset Bayer filter) that cause the problem. In principle my fork and calibration scripts should work, but I have only tried it once and it failed... I am reasonably sure that, with some debugging, it would work - but it might not be completely straightforward.

rwb27 avatar Sep 05 '18 10:09 rwb27

@zbarna I’m afraid I will be starting work on a script to fix this “when I have time” which might be a while! I like the idea of trying the older camera module, but if that isn’t possible, maybe you could think about a way to use the camera with the lens still on, as that would sort out the issue of things being optimised for the stock lens. For example, could you make or buy an adapter to put the pi camera in front of the microscope eyepiece, if it has one?

rwb27 avatar Sep 05 '18 10:09 rwb27

Thanks for the replies and suggestions for both of you!

I was also thinking about trying the picamera v1.3. Unfortunately it has only 5 Mpx, compared to the picamera v2's 8 Mpx.

Could you explain why the v1.3 camera has smaller vignetting effect? I'm interested :) .

@rwb27 Maybe we have to use the stock lens and do some hack with microscope lens adapter to make it work. I think it's also a difficult topic, but maybe we don't have other choice.

zbarna avatar Sep 05 '18 11:09 zbarna

@zbarna I'm sure in this thread you will have a better explanation than any I can give you.

https://www.raspberrypi.org/forums/viewtopic.php?t=196297

I also had this v1-v2 dilemma; my solution was either to use the whole v1 sensor or to use the v2 and crop the region of interest.

quetzacoal avatar Sep 05 '18 12:09 quetzacoal

Hi, first of all, thanks for implementing the picamera version of the lens shading algorithm - it really helped me a lot in my project (Super 8 telecine)!

The lens shading correction will work fine with any v1 cam, but it will fail with the newer v2 cams, as there is too much crosstalk between the raw color channels.

If you can live with only a part of the full frame, you can probably get away with a v2 cam, but I recommend considering the older v1. If you need to use the v2, just use one-third or at most half of the center portion of the frame.
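The cropping step can be sketched in numpy like this (the array shape assumes a demosaicked v2 full-resolution frame; on the camera itself the same effect can be had with PiCamera's `zoom` region-of-interest property):

```python
import numpy as np

# Keep only the central half of the frame, where the v2 crosstalk is
# still manageable. A dummy v2 full-resolution RGB frame for illustration.
frame = np.zeros((2464, 3280, 3), dtype=np.uint8)
h, w = frame.shape[:2]
centre = frame[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]
print(centre.shape)  # → (1232, 1640, 3)
```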

BTW, I do not think that a more elaborate approach could recover a good color signal from the v2 cams, there is just too much crosstalk towards the edges of the frame. At least that was the result for the lens I am using, a Schneider Componon-S.

cpixip avatar Sep 25 '18 21:09 cpixip

Good grief - well, this looks like it'll save a huge amount of work! Here was I planning lens shading for 1.14 - hadn't even realised someone (6by9 presumably?) had added writable gains to the firmware! I'll have to spend a little time going through it, but a quick skim looks great so far. Many many thanks - I'll definitely mark this for 1.14!

waveform80 avatar Sep 29 '18 21:09 waveform80

No problem, do please let me know if there's anything I can do to help things along - I'm in the fortunate position of being able to spend some work hours on this. If it would be helpful to add some example code (for example, my little closed loop calibration utility) I'll happily extract it from its current location and tidy up the documentation a bit.

rwb27 avatar Oct 01 '18 15:10 rwb27

I've been using this for the past few months and it's been fantastic. Thanks @rwb27 for all the work. I also used the calibration script and found that if you need a little speed up, just setting the number of images to average over to 1 does really well and reduces the script time to about 10-15 seconds.

dhruvp avatar Oct 05 '18 17:10 dhruvp

Hi!

I tried the calibration script with the v1 camera, but got an error. I expected this, because the v1 camera's maximum resolution is lower than the v2's.

Dear @quetzacoal and @cpixip! You mentioned that you used the calibration with the v1 camera. Could you share the modified source code of the calibration script?

Thanks in advance for your help! Best Regards, Zoltán

zbarna avatar Oct 25 '18 15:10 zbarna

Hi Zoltán,

my approach is a little bit different from what is coded here, as it is specific to my needs. Once I figure out how GitHub works, I will post some generic code. In the meantime, I will try to explain the code section I use. It might help anyway.

I assume in my code that the raw image is stored in a numpy array with the channels

        img[:, :, 0]  # Red
        img[:, :, 1]  # Green1
        img[:, :, 2]  # Green2
        img[:, :, 3]  # Blue

I then use the following function to create a lens-compensating table:

import cv2
import numpy as np

def calc_table(img):

    # padding the image to the right size - it took me quite a while to understand
    # the mapping between raw image and lens compensation table;
    # basically, it's padded to a size that 32x32 tiles can map onto directly
    dx = (img.shape[0] // 32 + 1) * 32
    dy = (img.shape[1] // 32 + 1) * 32

    # now enlarging to the "correct" size....
    pad_x = dx - img.shape[0]
    pad_y = dy - img.shape[1]
    tmpI = cv2.copyMakeBorder(img, 0, pad_x, 0, pad_y, cv2.BORDER_REPLICATE)

    # ... downsizing with averaging. It is important to do this iteratively in order
    # to avoid artefacts. Also, the iterative down-sizing gets rid of all of the noise
    # in the raw image - important if you want a reliable lens compensation.
    # Well, not the best code (magic numbers!), but it works... ;)
    while tmpI.shape[1] > img.shape[1] // 16:
        dx = tmpI.shape[1] // 2
        dy = tmpI.shape[0] // 2
        tmpI = cv2.resize(tmpI, (dx, dy), interpolation=cv2.INTER_AREA)
    raw = tmpI

    # get the maximum value in each channel in order
    # to make sure that the gains requested by the table
    # are always larger than one. This is important
    # as otherwise weird things happen (the
    # lens-shading correction assumes that all values in the table are
    # larger than 32)
    rawMax = np.amax(np.amax(raw, axis=0), axis=0)

    # in order to calculate the mask, use float here to keep precision and range
    raw = raw.astype(float)

    # fast way to compute the lens compensation.
    # Note: if you use a larger scaler, say 64 for example,
    # you will get a sensitivity boost. Of course, the noise floor
    # is multiplied as well, so it's a mixed blessing...
    scaler = 32
    # array divide, leaving zero entries at unity gain....
    table = scaler * np.divide(rawMax, raw, out=np.ones_like(raw), where=raw != 0)

    # convert back to int, project values into a safe range
    return table.astype(int).clip(0x00, 0xFF)

For completeness I also include a function which will write this table in a format that can be included in the C++ programs floating around for lens compensation (again, ugly code because all of this was just a quick hack):

def save_table(fileName, table):
    # note that the table is transposed...

    # ls_table.h has the following sequence of channels
    cComments = ["R",
                 "Gr",
                 "Gb",
                 "B"]

    # now write the table...
    with open(fileName, 'w') as file:

        # initial part of the table
        file.write("uint8_t ls_grid[] = {\n")

        for c in range(0, table.shape[2]):
            # insert channel comment (for readability)
            file.write("//%s - Ch %d\n" % (cComments[c], 3 - c))
            # scan the table
            for y in range(0, table.shape[1]):
                for x in range(0, table.shape[0] - 1):
                    file.write("%d, " % table[x][y][c])

                # finish a single line
                file.write("%d,\n" % table[table.shape[0] - 1][y][c])

        # finish the ls_grid array
        file.write("};\n")

        # write some additional vars which are expected in ls_table.h
        # (width is the number of entries per line, height the number
        # of lines per channel)
        file.write("uint32_t ref_transform = 3;\n")
        file.write("uint32_t grid_width = %u;\n" % table.shape[0])
        file.write("uint32_t grid_height = %u;\n" % table.shape[1])

I have not tested this code extensively with the v2 cam, but in the few tests I ran it did seem to work. It does give me a workable solution for v1 cameras and a Schneider Componon-S lens with 50 mm focal length.

Note that due to the aggressive microlens-array design of the v2 camera, this camera is practically useless for lenses with a longer focal length. Way too much mixing of the different color channels occurs, which cannot be compensated for by the simple lens compensation we have available. In contrast, the microlens array of the v1 cam is much softer, and the results I get are OK.

Again, my plan is to set up a repository on GitHub in the next few weeks with some better code, but maybe this gets you going already. The important steps in lens compensation are:

  • make sure you are imaging an absolutely evenly illuminated surface with the camera.

  • check the raw images to see whether the intensities of the 4 color channels are about equal and well-exposed.

  • grab the raw image and calculate the lens compensation table.

  • upload the correction table to the camera and check that the compensated image shows even intensity values across the whole field. Feel fine ;)

Things to keep in mind while doing this:

  1. The unusual mapping between lens compensation table and the raw image which requires a padding operation.
  2. Also, it is important that you sample the intensities for a single entry in the lens compensation table over the whole raw image area handled by that entry (the iterative resize operation in the above code does this). This dramatically reduces the noise otherwise present in the compensation table and gets rid of artifacts that direct scaling to the final resolution would introduce.
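The padding in point 1 can be sketched on its own like this (a minimal illustration using `numpy.pad` in place of `cv2.copyMakeBorder`; the resolution is the v1 module's 2592x1944, one raw channel):

```python
import numpy as np

# Pad one raw channel up to the next multiple of 32 in each direction,
# replicating the edge pixels, so that 32x32 tiles map exactly onto the
# compensation grid (mirrors the padding in calc_table above).
img = np.zeros((1944, 2592), dtype=np.uint8)   # v1 full resolution
dx = (img.shape[0] // 32 + 1) * 32             # padded height
dy = (img.shape[1] // 32 + 1) * 32             # padded width
padded = np.pad(img, ((0, dx - img.shape[0]), (0, dy - img.shape[1])),
                mode="edge")
print(padded.shape)  # → (1952, 2624)
```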

Hope this helps. - cpixip

cpixip avatar Oct 25 '18 17:10 cpixip

Hi @cpixip!

Thanks a lot for your comment! I will make a try with your code. I'm also waiting for your github repository. :)

I will write my experiences after the test with the v1 cam.

Best Regards, zbarna

zbarna avatar Oct 25 '18 17:10 zbarna

Hi!

I tried your code @cpixip, but unfortunately it's not working, and I cannot figure out what the problem is. :(

How I tried:

I created a raw image with the raspistill -r -o raw.jpg command, and use your code this way:

        img = cv2.imread("raw.jpg")
        table = calc_table(img)
        save_table("output.txt", table)

The first problem is that the script wrote tables to output.txt only for //R - Ch 3, //Gr - Ch 2, and //Gb - Ch 1.

The B - Ch 0 is missing.

The other issue: I use the generated output.txt this way to load into the picamera lensshading table:

# read lens shading tables from txt, separated with ', '
npzloaded1 = np.loadtxt('rwblensshading0.txt', dtype=np.uint8, delimiter=', ')
npzloaded2 = np.loadtxt('rwblensshading1.txt', dtype=np.uint8, delimiter=', ')
npzloaded3 = np.loadtxt('rwblensshading2.txt', dtype=np.uint8, delimiter=', ')
npzloaded4 = np.loadtxt('rwblensshading3.txt', dtype=np.uint8, delimiter=', ')

npzloadedfull = np.array([npzloaded1, npzloaded2, npzloaded3, npzloaded4])

with PiCamera(resolution=PiCamera.MAX_RESOLUTION,
              lens_shading_table=npzloadedfull) as cam:
    cam.start_preview()
    time.sleep(600)

I got an error, because in output.txt the matrix is transposed, but I think it shouldn't be. So I modified your code to make it non-transposed, though I don't understand why that's needed.

After some hacking, I got this result :( test_newcalibrate

Could you give some help? How do you create raw image for your script? How do you load it into the PiCamera lens_shading_table?

Thanks for your help in advance! Best Regards, zbarna

zbarna avatar Oct 27 '18 18:10 zbarna

Hi @zbarna,

I am awfully sorry that I did that quick shot above. I copied and pasted together code out of a quite complicated, still evolving project (client-server architecture, lens shading, HDR generation with the v1 camera) and I missed some code segments. The last time I worked on this code section was last March, and in my setup the lens-compensation calculation is actually done on a PC and sent back to the Raspberry Pi camera.

The most important point I overlooked was that my algorithm only works if the raw image is taken with camera.hflip = True, camera.vflip = True. You probably didn't use this mapping.

Also, if one of the four channels is missing in the written-out ls_table.h, there is a good chance that you actually supplied only a standard RGB image to the calculation routine, not a 4-channel R-G1-G2-B image. If the last three lines

uint32_t ref_transform = 3;
uint32_t grid_width = 31;
uint32_t grid_height = 41;

are missing from the .h file, that is an indication you supplied only a 3-channel RGB image.

Anyway, I did write some clean code which should get you started. Of course, you will need rwb27's modified library with all the great extensions.

You can find the script, as well as the results of an example-run in the following repository:

v1-lens shading

Currently, it needs camera.hflip = True as well as camera.vflip = True. Will push a script shortly which will handle all possible mappings for v1-cams.

Best, cpixip

cpixip avatar Oct 29 '18 19:10 cpixip

Hi @cpixip!

First of all thanks you for creating the github with the code. It's very well commented, and clear.

At first sight it's working well! I have a few remarks and issues, but I'll write those in your GitHub repository - I think we should continue there, now that you've made it :) .

Best Regards, zbarna

zbarna avatar Nov 02 '18 18:11 zbarna