
A high-level toolbox for using complex-valued neural networks in PyTorch

19 complexPyTorch issues

This PR fixes the memory leak in ComplexBatchNorm1d (I added the missing `with torch.no_grad()`) and reformats the code to be more readable and less error-prone.
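A minimal sketch of the pattern this fix describes, not the PR's actual diff (the helper and variable names are hypothetical): running-statistics updates are wrapped in `torch.no_grad()` so the moving averages do not accumulate autograd history, which is what leaks memory.

```python
import torch

# Hypothetical sketch: update BatchNorm-style running statistics without
# recording the operations in the autograd graph.
def update_running_stats(running_mean, batch_mean, momentum=0.1):
    with torch.no_grad():
        # exponential moving average, done in place on the buffer
        running_mean.mul_(1 - momentum).add_(momentum * batch_mean)
```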

Hi there, thank you very much for this library; it has been very helpful in my research. I wanted to share with you some modifications I made to the code...

Need to replace `dilation` and `return_indices` args with `count_include_pad` and `divisor_override` args to be compatible with `torch.nn.functional.avg_pool2d`
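For context, `torch.nn.functional.avg_pool2d` accepts `count_include_pad` and `divisor_override` but not `dilation` or `return_indices`. A sketch of a compatible complex wrapper (the body below is an assumption for illustration, not the repository's code), pooling the real and imaginary parts independently:

```python
import torch
from torch.nn import functional as F

# Sketch: expose the same keyword args as torch.nn.functional.avg_pool2d
# and apply the pooling to each part of the complex input separately.
def complex_avg_pool2d(inp, kernel_size, stride=None, padding=0,
                       ceil_mode=False, count_include_pad=True,
                       divisor_override=None):
    real = F.avg_pool2d(inp.real, kernel_size, stride, padding,
                        ceil_mode, count_include_pad, divisor_override)
    imag = F.avg_pool2d(inp.imag, kernel_size, stride, padding,
                        ceil_mode, count_include_pad, divisor_override)
    return torch.complex(real, imag)
```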

Hi, I noticed that you have a custom matmul (https://github.com/wavefrontshaping/complexPyTorch/blob/a4e752caf827f3b642960366a7e9420f308076cc/complexPyTorch/complexFunctions.py#L11-L19) and tanh and neg functions (https://github.com/wavefrontshaping/complexPyTorch/blob/a4e752caf827f3b642960366a7e9420f308076cc/complexPyTorch/complexFunctions.py#L52-L56) defined, which are actually unnecessary, since these functions are supported for complex numbers in the last...
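Indeed, recent PyTorch releases (roughly 1.8 and later; the exact version cutoff is an assumption here) handle these operations natively on complex tensors:

```python
import torch

a = torch.randn(2, 3, dtype=torch.complex64)
b = torch.randn(3, 4, dtype=torch.complex64)

c = torch.matmul(a, b)  # native complex matrix multiplication
t = torch.tanh(a)       # native complex tanh
n = -a                  # native complex negation
```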

File "D:/Pycharm/coplexcnn/train.py", line 124, in y_hat = net(X) File "C:\Users\MyPC\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "D:/Pycharm/complexcnn/train.py", line 79, in forward x = self.bn1(x) File "C:\Users\MyPC\AppData\Local\Programs\Python\Python38\lib\site-packages\torch\nn\modules\module.py", line 1051,...

Hey, and thanks for the great work. Just a question on the grad computations (though this could be my lack of understanding of complex autograd analysis). For the forward pass...
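For reference, a minimal check of PyTorch's complex autograd convention (this example is an illustration, not from the thread): for a real-valued loss L, `z.grad` stores dL/dx + i·dL/dy, i.e. twice the conjugate Wirtinger derivative 2·∂L/∂z̄, which is the usual gradient-descent direction.

```python
import torch

z = torch.tensor(1.0 + 2.0j, requires_grad=True)
loss = (z * z.conj()).real  # L = |z|^2, a real-valued loss
loss.backward()
print(z.grad)               # tensor(2.+4.j), i.e. 2*z = 2*dL/d(conj(z))
```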

Dear author, the current version cannot compute in parallel; could you please revise it?

Hi, I am a beginner with complex-valued networks. My program uses real-valued loss functions and the network does not converge, and I have a question: what kind of loss...
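One common choice, offered here only as an illustration (the helper name is hypothetical, and it is not necessarily the fix for the convergence issue): a real-valued MSE on the complex error, which equals the sum of the MSEs of the real and imaginary parts.

```python
import torch

def complex_mse(pred, target):
    # |pred - target|^2 is real, so its mean is a valid scalar loss
    diff = pred - target
    return torch.mean(diff.abs() ** 2)
```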

Hi, about the naive batch norm: I believe it's more effective to normalize the tensor by its absolute value rather than normalizing each part individually: https://github.com/wavefrontshaping/complexPyTorch/blob/2044cb077b3f139d59dff56abc378b1457de40d6/complexPyTorch/complexLayers.py#L213 What do you think?
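A sketch contrasting the two approaches under discussion (both function bodies are assumptions for illustration, not the library's code): scaling by a single magnitude-derived factor preserves the relative phase of the activations, whereas standardizing the parts separately distorts it.

```python
import torch

def norm_per_part(z, eps=1e-5):
    # standardize real and imaginary parts independently
    real = (z.real - z.real.mean()) / (z.real.std() + eps)
    imag = (z.imag - z.imag.mean()) / (z.imag.std() + eps)
    return torch.complex(real, imag)

def norm_by_abs(z, eps=1e-5):
    # center, then divide by one real scale derived from the magnitude
    z = z - z.mean()
    return z / (z.abs().pow(2).mean().sqrt() + eps)
```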