Update and convert to torchsharp code
@toolgood I have tried this PR. I ran some tests and, with additional modifications, I could get the converted code to work.
Ideally I would like to paste a Python module into ONE polyglot cell, use PyToCs to convert the PyTorch code and output it into a new cell, and then modify the converted code until it works.
This way, it is possible to document what further modifications and improvements are needed for PyToCs to reliably generate TorchSharp code.
Good Job!
@toolgood
Please share how you would use your code; I could not see the results of your new changes.
Is this still valid?
TorchUtil.ReplaceFolder(folder);
TorchUtil.CreateNetstandardCode(folder);
yes
I am getting netstandard.cs as output, which makes no sense to me.
The generated PyToCs py.cs is a public static class, NOT a public class,
so the TorchCs parser that extracts class names will fail:
private static void getClassName(string text, HashSet<string> classNames)
{
const string classRegex = @"public class ([a-zA-Z_][a-zA-Z0-9_]*)";
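To make the failure concrete: the classRegex above matches "public class" but not "public static class". A minimal sketch of the problem and a possible broadened pattern (shown in Python for brevity; the broadened pattern is my assumption, not the actual TorchCs fix):

```python
import re

# The pattern TorchCs currently uses: it misses "public static class".
current = r"public class ([a-zA-Z_][a-zA-Z0-9_]*)"
# A broadened pattern that also accepts the "static" modifier (an assumption):
broadened = r"public (?:static )?class ([a-zA-Z_][a-zA-Z0-9_]*)"

code = 'public static class py { }'
print(re.findall(current, code))    # finds nothing
print(re.findall(broadened, code))  # finds the class name "py"
```

This matches the observed behavior: static classes like the generated py.cs are silently skipped by the current pattern.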
One file is a static class containing only static methods, so I will handle the static methods in that file.
//// Forward pass: compute predicted y
// original PyTorch: y_pred = a + b * x + c * x ** 2 + d * x ** 3
// Converted by PyToCs: var y_pred = a + b * x + c * Math.Pow(x, 2) + d * Math.Pow(x, 3);
// Working in TorchSharp:
var y_pred = a + b * x + c * x.pow(2) + d * x.pow(3);
@uxmal
This means that when dealing with a Tensor in TorchSharp, x**2 cannot be converted to Math.Pow(x, 2); where x is a Tensor, that will not work.
The PyTorch source used:
import torch
import math
dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0") # Uncomment this to run on GPU
# Create random input and output data
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)
# Randomly initialize weights
a = torch.randn((), device=device, dtype=dtype)
b = torch.randn((), device=device, dtype=dtype)
c = torch.randn((), device=device, dtype=dtype)
d = torch.randn((), device=device, dtype=dtype)
learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y
    y_pred = a + b * x + c * x ** 2 + d * x ** 3
    # Compute and print loss
    loss = (y_pred - y).pow(2).sum().item()
    if t % 100 == 99:
        print(t, loss)
    # Backprop to compute gradients of a, b, c, d with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_a = grad_y_pred.sum()
    grad_b = (grad_y_pred * x).sum()
    grad_c = (grad_y_pred * x ** 2).sum()
    grad_d = (grad_y_pred * x ** 3).sum()
    # Update weights using gradient descent
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b
    c -= learning_rate * grad_c
    d -= learning_rate * grad_d
print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
// pytorch: a = torch.randn((), device=device, dtype=dtype)
//PyToCs: a = torch.randn(ValueTuple.Create("<Empty>"), device: device, dtype: dtype);
//Works in TorchSharp
a = torch.randn(new long[]{ }, device: device, dtype: dtype);
//# Compute and print loss
//PyTorch: loss = (y_pred - y).pow(2).sum().item()
//PyToCs: loss = (y_pred - y).pow(2).sum().item()
//Works in TorchSharp
var loss = (y_pred - y).pow(2).sum().item<float>();
@uxmal
This means that when dealing with a Tensor in TorchSharp,
x**2 cannot be converted to Math.Pow(x, 2); where x is a Tensor, it will not work.
This needs to be converted manually: Math.Pow to Tensor.pow.
Math.Pow(x, 2) ===> x.pow(2) WHEN x is a tensor.
torch.randn(ValueTuple.Create("<Empty>")) ===> torch.randn(new long[] { })
// the original PyTorch call is torch.randn(())
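The manual fixes described above could be scripted as textual rewrites on the PyToCs output. A rough sketch in Python (fix_torchsharp is a hypothetical helper of mine, not part of TorchCs, and the rewrites are naive: for example, it assumes the first argument of Math.Pow is always a tensor):

```python
import re

def fix_torchsharp(code: str) -> str:
    """Naive textual rewrites turning PyToCs output into TorchSharp-compatible C#."""
    # Math.Pow(x, n) -> x.pow(n), assuming x is a tensor
    code = re.sub(r'Math\.Pow\((\w+),\s*([^)]+)\)', r'\1.pow(\2)', code)
    # empty-shape randn: ValueTuple.Create("<Empty>") -> new long[] { }
    code = code.replace('ValueTuple.Create("<Empty>")', 'new long[] { }')
    # untyped .item() -> .item<float>()
    code = code.replace('.item()', '.item<float>()')
    return code

print(fix_torchsharp("var y_pred = a + b * x + c * Math.Pow(x, 2);"))
# → var y_pred = a + b * x + c * x.pow(2);
```

A real fix inside TorchCs would need type information (is x a Tensor or a double?), which is exactly the inference problem discussed below.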
@uxmal John, you can see how similar TorchSharp is to PyTorch.
@toolgood
Jupyter Notebook parser.
Would you be interested in turning your console program into one that generates a Jupyter Notebook with PyTorch and TorchSharp code side by side?
e.g. a cell with PyTorch code followed by a cell with TorchSharp code?
import json
import re

notebook = json.load(open(filename))
plugin.parse_notebook(filename, notebook)

def parse_notebook(self, filename, notebook):
    if 'cells' not in notebook:
        # we don't handle v3 for now
        return
    # this pattern has gotten weaker from original
    pattern = r'^(?:from|import)\s+([\w.]*)\s+'
    cells = notebook['cells']
    execution_cells = [cell for cell in cells if cell['cell_type'] == 'code']
    modules = []
    for cell in execution_cells:
        # determine if any libraries have been used w/ regular expressions
        source = cell['source']
        source = ''.join(source)
        modules += re.findall(pattern, source)
    self.libraries_in_notebook[filename] = modules
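For experimentation, here is a self-contained sketch of the same module-extraction logic as a standalone function rather than a plugin method. The re.MULTILINE flag is my addition so the pattern matches imports on any line of a cell, not just the first:

```python
import re

def modules_in_notebook(notebook: dict) -> list:
    """Collect imported module names from the code cells of a v4 notebook dict."""
    pattern = r'^(?:from|import)\s+([\w.]*)\s+'
    modules = []
    for cell in notebook.get('cells', []):
        if cell['cell_type'] != 'code':
            continue
        # cell['source'] is a list of line strings in the v4 format
        source = ''.join(cell['source'])
        modules += re.findall(pattern, source, flags=re.MULTILINE)
    return modules

nb = {'cells': [
    {'cell_type': 'code', 'source': ['import torch\n', 'import math\n']},
    {'cell_type': 'markdown', 'source': ['# notes\n']},
]}
print(modules_in_notebook(nb))  # → ['torch', 'math']
```

This is the information a side-by-side notebook generator would need to decide which cells contain PyTorch code worth converting.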
This is very difficult. The main reason is that Python is a dynamically typed language while C# is statically typed. When writing Python you can omit type and class names, but in C# you must write them, so the conversion code has to infer the class name from context. TorchSharp also currently has poor compatibility, as well as various inexplicable bugs.
@uxmal @toolgood
Integrating TorchCs into PyToCs.Gui

Good job!