torchquad

Numerical integration in arbitrary dimensions on the GPU using PyTorch / TF / JAX

Results: 23 torchquad issues

Added `torchquad/plots/plot_adaptive_grid.py`, which visualizes the adaptive grid and the corresponding function values in a 3D scatter plot.

# Description

Summary of changes:

* Added `adaptive_trapezoid`
* Added `adaptive_grid`

## Outstanding work

- [x] Create some test functions to see if this is working (or use existing ones)...

# Feature

## Desired Behavior / Functionality

Current test functions are based only on polynomials and on sinusoidal and exponential functions.

## What Needs to Be Done

- [ ] Find...

enhancement
good first issue
help wanted

# Feature

## Desired Behavior / Functionality

The state of the art in deterministic integration methods is arguably [QUADPACK](https://en.wikipedia.org/wiki/QUADPACK). However, there are no GPU implementations of it.

## What Needs...

documentation
enhancement
help wanted

# Feature

## Desired Behavior / Functionality

torchquad allows fully differentiable numerical integration, which can enable neural network training through integrals. This capability deserves a dedicated example. There is a...

documentation
good first issue
help wanted

# Support for unequal numbers of points per dimension

## Desired Behavior / Functionality

torchquad currently only supports equal numbers of points per dimension for the deterministic methods (that use...

enhancement
help wanted
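A hedged sketch of what unequal per-dimension point counts could look like, using a tensor-product trapezoid grid in plain PyTorch (hypothetical illustration; not torchquad's API, and the point counts are arbitrary choices):

```python
import torch

# Hypothetical: 51 points in x, 11 points in y, over [0, 1] x [0, 2].
nx, ny = 51, 11
x = torch.linspace(0.0, 1.0, nx)
y = torch.linspace(0.0, 2.0, ny)
gx, gy = torch.meshgrid(x, y, indexing="ij")
f = gx * gy  # integrand f(x, y) = x * y, exact value of the integral is 1.0

# Nested trapezoid rule: integrate over y first, then over x.
inner = torch.trapezoid(f, y, dim=1)   # shape (nx,)
result = torch.trapezoid(inner, x, dim=0)
```

The useful property here is that the resolution can be spent where the integrand varies most, instead of being forced to be uniform across dimensions.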

# Issue

## Problem Description

torch.set_default_tensor_type() is deprecated as of PyTorch 2.1.

## Expected Behavior

## What Needs to be Done

Use torch.set_default_dtype() and torch.set_default_device() as alternatives.

## How Can It Be...

bug
help wanted
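A minimal migration sketch, assuming the common case of switching the default floating-point type and device (the specific dtype and device choices here are illustrative):

```python
import torch

# Old (deprecated since PyTorch 2.1):
#   torch.set_default_tensor_type(torch.cuda.FloatTensor)
# Suggested replacement: set dtype and device separately.
torch.set_default_dtype(torch.float32)
torch.set_default_device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.empty(3)  # allocated with the configured default dtype and device
```

Splitting the call in two also makes the intent clearer, since dtype and device are independent settings.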

Hi, I found issue https://github.com/esa/torchquad/issues/170 asking about the case where both the integrand and the domain of integration are functions of parameters, and it looks like @ilan-gold implemented this feature...

# Feature

## Desired Behavior / Functionality

Setting the default torch device to TPU should work, but it hangs.

## How Can It Be Tested

I have a not totally...

This avoids having to invoke a lambda wrapper function. The commit merely exposes existing functionality. Also related to #187