
How to have more than one conv layer

Open vcozzolino opened this issue 3 years ago • 8 comments

I just started playing around with TenSEAL by following the encrypted convolutions example. I would like to add a second conv layer to the example network, but I'm not sure how to proceed, mostly because I don't know how to feed the second conv a matrix rather than the packed vector produced by im2col. How would I change the code in the example to achieve this?

I'm not sure that this is the right way to do it so any help is appreciated!

vcozzolino avatar Oct 14 '21 16:10 vcozzolino

Hello @vcozzolino

CKKSVector and im2col support a single conv layer.
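For intuition, here is a plaintext (unencrypted) sketch of the im2col idea — every window is flattened into a row so the whole convolution becomes one batch of dot products. The function names are illustrative, not TenSEAL API; note the output is a flat list of per-window results, which is why a second conv layer cannot directly consume it:

```python
def im2col(image, kh, kw, stride):
    """Collect every kh x kw window of a 2D list as a flat row."""
    rows = []
    for h in range(0, len(image) - kh + 1, stride):
        for w in range(0, len(image[0]) - kw + 1, stride):
            rows.append([image[h + i][w + j]
                         for i in range(kh) for j in range(kw)])
    return rows

def conv_via_im2col(image, kernel, stride=1):
    kh, kw = len(kernel), len(kernel[0])
    flat_kernel = [v for row in kernel for v in row]
    # one dot product per window -> a flat vector of conv outputs
    return [sum(a * b for a, b in zip(row, flat_kernel))
            for row in im2col(image, kh, kw, stride)]

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
kernel = [[1, 0],
          [0, 1]]  # sums top-left + bottom-right of each 2x2 window
print(conv_via_im2col(image, kernel))  # [6, 8, 12, 14]
```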

The CKKSTensor is a more flexible version, and it should support more conv layers. However, we didn't get the chance to create a tutorial around it. If you have the time to investigate the CKKSTensor support, any contribution is more than welcome.

Thank you!

bcebere avatar Oct 20 '21 08:10 bcebere

Thank you for the answer!

I will look into it and see if I can make some progress :)

vcozzolino avatar Oct 29 '21 11:10 vcozzolino

Have you been able to find a way to do this?

tolusophy avatar Oct 31 '21 10:10 tolusophy

@vcozzolino have you succeeded in implementing more than one conv layer? I'm also interested in building more comprehensive models.

CharlesKung avatar Dec 23 '21 16:12 CharlesKung

import tenseal as ts


def conv2d(tensor: ts.CKKSTensor, kernel: ts.PlainTensor, stride: int):
    """Slide the kernel over the encrypted tensor and return one
    encrypted dot-product result per window."""
    tensor_dim, kernel_dim = tensor.shape, kernel.shape
    result = []

    # 2D image convolution
    if len(tensor_dim) == 2 and len(kernel_dim) == 2:
        tensor_h, tensor_w = tensor_dim[0], tensor_dim[1]
        kernel_h, kernel_w = kernel_dim[0], kernel_dim[1]
        kernel_size = kernel_h * kernel_w

        # flatten the kernel
        kernel_ = kernel.reshape([kernel_size])

        # loop over each convolution window
        for h in range(0, tensor_h - kernel_h + 1, stride):
            for w in range(0, tensor_w - kernel_w + 1, stride):
                # the window
                window = tensor[h:h + kernel_h, w:w + kernel_w]

                # flatten the window
                window.reshape_([kernel_size])

                # do convolution
                window.dot_(kernel_)

                # collect the convolution result
                result.append(window)

    # 3D image convolution (channel counts must match)
    elif len(tensor_dim) == 3 and len(kernel_dim) == 3 and tensor_dim[0] == kernel_dim[0]:
        tensor_h, tensor_w = tensor_dim[1], tensor_dim[2]
        kernel_h, kernel_w = kernel_dim[1], kernel_dim[2]
        kernel_size = kernel_dim[0] * kernel_dim[1] * kernel_dim[2]

        # flatten the kernel
        kernel_ = kernel.reshape([kernel_size])

        # loop over each convolution window
        for h in range(0, tensor_h - kernel_h + 1, stride):
            for w in range(0, tensor_w - kernel_w + 1, stride):
                # the window (all channels)
                window = tensor[0:tensor_dim[0], h:h + kernel_h, w:w + kernel_w]

                # flatten the window
                window.reshape_([kernel_size])

                # do convolution
                window.dot_(kernel_)

                # collect the convolution result
                result.append(window)

    # illegal input
    else:
        raise ValueError("conv2d(): unsupported tensor/kernel shapes")

    return result
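The loop above yields one scalar result per window in row-major order, so the missing step for chaining a second conv layer is reshaping the flat result list back into an out_h x out_w grid. A minimal plaintext sketch of that bookkeeping (illustrative names, not TenSEAL API):

```python
def output_shape(tensor_h, tensor_w, kernel_h, kernel_w, stride):
    """Number of window positions per axis for a valid convolution."""
    out_h = (tensor_h - kernel_h) // stride + 1
    out_w = (tensor_w - kernel_w) // stride + 1
    return out_h, out_w

def to_grid(flat_results, out_h, out_w):
    """Row-major reshape of the per-window results into a 2D grid."""
    assert len(flat_results) == out_h * out_w
    return [flat_results[r * out_w:(r + 1) * out_w] for r in range(out_h)]

# a 4x4 input with a 2x2 kernel and stride 2 produces a 2x2 output
out_h, out_w = output_shape(4, 4, 2, 2, stride=2)
print(to_grid([10, 20, 30, 40], out_h, out_w))  # [[10, 20], [30, 40]]
```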

I tried to implement the conv layer above. However, I can only collect the convolution result of each window in a list (the result list). I noticed that there is pack_vectors() for BFVVector and CKKSVector, but CKKSTensor doesn't have such a function. I have an idea for implementing pack_tensors(). Suppose t0, t1, and t2 are CKKSTensors; t0.pack_tensors([t1, t2]) would work as follows:

  1. Check the dimension of t0.

  2. If t0 is 1-dimensional, t1 and t2 must also be 1-dimensional. E.g., t0 = [1], t1 = [5,4,8], t2 = [3,0] (shapes [1], [3], [2]). After packing, t0 = [1,5,4,8,3,0].

  3. If t0 is 2-dimensional, t1 and t2 must also be 2-dimensional with the same width. E.g., t0 = [[1,2]], t1 = [[3,4], [5,6], [7,8]], t2 = [[9,10], [11,12]] (shapes [1,2], [3,2], [2,2]). After packing, t0 = [[1,2], [3,4], [5,6], [7,8], [9,10], [11,12]].

  4. If t0 is 3-dimensional, t1 and t2 must also be 3-dimensional with the same width and height. E.g., t0 = [[[0,1,2], [3,4,5]]], t1 = [[[6,7,8], [9,10,11]], [[12,13,14], [15,16,17]]], t2 = [[[18,19,20], [21,22,23]], [[24,25,26], [27,28,29]], [[30,31,32], [33,34,35]]] (shapes [1,2,3], [2,2,3], [3,2,3]). After packing, t0 = [[[0,1,2], [3,4,5]], [[6,7,8], [9,10,11]], [[12,13,14], [15,16,17]], [[18,19,20], [21,22,23]], [[24,25,26], [27,28,29]], [[30,31,32], [33,34,35]]].
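The cases above all amount to concatenation along the first axis, with every trailing dimension required to match. A hypothetical plaintext model of the proposed pack_tensors() on nested lists (the encrypted version would still need support in TenSEAL's tensor storage layer):

```python
def pack_tensors(t0, others):
    """Concatenate nested-list tensors along axis 0; all trailing
    dimensions must agree, mirroring the proposed semantics."""
    def shape(t):
        return [len(t)] + (shape(t[0]) if isinstance(t[0], list) else [])

    base = shape(t0)
    for t in others:
        # every tensor must agree on all dimensions except the first
        if shape(t)[1:] != base[1:]:
            raise ValueError("trailing dimensions must match")
        t0 = t0 + t  # concatenate along axis 0
    return t0

print(pack_tensors([1], [[5, 4, 8], [3, 0]]))      # [1, 5, 4, 8, 3, 0]
print(pack_tensors([[1, 2]], [[[3, 4], [5, 6]]]))  # [[1, 2], [3, 4], [5, 6]]
```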

I'm new to homomorphic encryption, so I don't know whether manipulating only the xt::xarray<dtype_t> _data in tensor_storage.h would be enough. Any advice or help is greatly appreciated.

jasonchien1996 avatar Feb 07 '22 10:02 jasonchien1996

Hello, has your problem been solved? Have you been able to find a way to add more conv layers?

JauneHH avatar Dec 30 '22 14:12 JauneHH

Hello, did you solve it?

maxwellgodv avatar Apr 13 '23 02:04 maxwellgodv

Hey everybody, check out #403 where I showcase a solution to have multiple conv layers. Best, Martin

MartinNoc avatar Apr 23 '24 07:04 MartinNoc