TypeError: can't convert cuda:0 device type tensor to numpy.

Open vnikoofard opened this issue 3 years ago • 7 comments

Hi, when I ran the example code mentioned in the repository, namely

import aifeynman

aifeynman.get_demos("example_data") # Download examples from server
aifeynman.run_aifeynman("./example_data/", "example1.txt", 60, "14ops.txt", polyfit_deg=3, NN_epochs=500)

I got the following error. What could be the reason?

Training a NN on the data...

NN loss: (tensor(0.0011, device='cuda:0', grad_fn=<DivBackward0>), SimpleNet( (linear1): Linear(in_features=3, out_features=128, bias=True) (linear2): Linear(in_features=128, out_features=128, bias=True) (linear3): Linear(in_features=128, out_features=64, bias=True) (linear4): Linear(in_features=64, out_features=64, bias=True) (linear5): Linear(in_features=64, out_features=1, bias=True) ))

Checking for symmetries...

Checking for separabilities...

TypeError                                 Traceback (most recent call last)
/tmp/ipykernel_71/483010986.py in <module>
      2
      3 aifeynman.get_demos("example_data") # Download examples from server
----> 4 aifeynman.run_aifeynman("./example_data/", "example1.txt", 60, "14ops.txt", polyfit_deg=3, NN_epochs=500)

/opt/conda/lib/python3.7/site-packages/aifeynman/S_run_aifeynman.py in run_aifeynman(pathdir, filename, BF_try_time, BF_ops_file_type, polyfit_deg, NN_epochs, vars_name, test_percentage)
    272     PA = ParetoSet()
    273     # Run the code on the train data
--> 274     PA = run_AI_all(pathdir,filename+"_train",BF_try_time,BF_ops_file_type, polyfit_deg, NN_epochs, PA=PA)
    275     PA_list = PA.get_pareto_points()
    276

/opt/conda/lib/python3.7/site-packages/aifeynman/S_run_aifeynman.py in run_AI_all(pathdir, filename, BF_try_time, BF_ops_file_type, polyfit_deg, NN_epochs, PA)
     94         idx_min = -1
     95     else:
---> 96         idx_min = np.argmin(np.array([symmetry_plus_result[0], symmetry_minus_result[0], symmetry_multiply_result[0], symmetry_divide_result[0], separability_plus_result[0], separability_multiply_result[0]]))
     97
     98     print("")

/opt/conda/lib/python3.7/site-packages/torch/_tensor.py in __array__(self, dtype)
    676             return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
    677         if dtype is None:
--> 678             return self.numpy()
    679         else:
    680             return self.numpy().astype(dtype, copy=False)

TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
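
For anyone skimming the traceback, here is a minimal sketch of what the last frame is complaining about. It is not part of the original report and assumes a CUDA-enabled PyTorch install:

import numpy as np
import torch

# A 0-d tensor on the GPU, like the errors returned by the symmetry/separability checks
err = torch.tensor(0.0011, device="cuda")

# np.array(...) calls Tensor.__array__ -> Tensor.numpy(), which refuses CUDA tensors:
# np.array([err])  # TypeError: can't convert cuda:0 device type tensor to numpy.

# Copying to host memory first, or extracting a Python number, both avoid the error
print(np.array([err.cpu()]))   # works
print(np.array([err.item()]))  # works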

vnikoofard commented on Nov 22, 2021

I have the same problem. Did you find a solution?

ParticleTruthSeeker commented on Jan 8, 2022

I have the same issue. Is this a bug in the code?

shastro commented on Apr 5, 2022

I wrote a solution (that is working for me) for this problem. Essentially, "symmetry_plus_result" and the similar calls can return either a plain float or a value wrapped in a torch.Tensor, which may live on the GPU. "np.argmin()" cannot convert a CUDA tensor to a NumPy array, so the code below first converts each error value from a torch.Tensor to a plain Python number.

Replace the whole if/else statement in S_run_aifeynman.py with the code below (and add "import torch" at the top of the file if it is not already there).

if symmetry_plus_result[0] == -1:
    idx_min = -1
else:
    min_error_array = [symmetry_plus_result[0], symmetry_minus_result[0], symmetry_multiply_result[0], symmetry_divide_result[0], separability_plus_result[0], separability_multiply_result[0]]

    # Convert every resultant error to a plain Python number
    for i in range(6):
        # If the element is a torch.Tensor (possibly on the GPU)
        if type(min_error_array[i]) == torch.Tensor:
            # Extract the value held by the tensor as a Python number
            min_error_array[i] = min_error_array[i].item()

    # Find the minimum error
    idx_min = np.argmin(np.array(min_error_array))
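
As a quick sanity check of the conversion step, here is a small standalone sketch (independent of AI-Feynman, with illustrative values only) showing that .item() gives np.argmin plain numbers it can handle on either device:

import numpy as np
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Mimic the mixed list the symmetry/separability checks can produce
min_error_array = [torch.tensor(0.0123, device=device), 0.0456, torch.tensor(0.0007, device=device)]

for i in range(len(min_error_array)):
    if type(min_error_array[i]) == torch.Tensor:
        # .item() works for CPU and CUDA tensors alike
        min_error_array[i] = min_error_array[i].item()

print(np.argmin(np.array(min_error_array)))  # prints 2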

For context, I'm running AIF by modifying the Colab notebook from https://towardsdatascience.com/ai-feynman-2-0-learning-regression-equations-from-data-3232151bd929.

Kolby-Bum commented on Apr 7, 2022

Could you open a pull request with the fix?

AndreScaffidi commented on May 12, 2022

I got this error here too. I just used .cpu() on every element and it was resolved.
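
For reference, applied to the same spot in S_run_aifeynman.py, that idea would presumably look something like the sketch below (min_error_array is the list from the fix above; this is just an illustration, not the exact patch):

# Copy any CUDA tensors back to host memory so NumPy can convert them
min_error_array = [e.cpu() if isinstance(e, torch.Tensor) else e for e in min_error_array]
idx_min = np.argmin(np.array(min_error_array))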

kvyaswanth commented on Sep 2, 2022

Just ran into this issue here. @Kolby-Bum's solution solves part of the problem, but there is another line where the issue must be tackled as well. I think I'll submit a pull request to fix this, as nothing seems to have happened since April.

JaoCR commented on Oct 26, 2022

I checked the other branches and the current pull requests and didn't see anything related. Someone let me know if this is already being worked on and I missed it.

JaoCR commented on Oct 26, 2022