ICSharpCore

Update .NET to .NET Core 3.1

Open · kaiidams opened this issue on Jan 16, 2022 · 0 comments

Can you update .NET to .NET Core 3.1 or .NET 6.0?

.NET Core 2.2 has reached end of support. https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core

The Docker image for .NET Core 2.2, mcr.microsoft.com/dotnet/core/sdk:2.2, is based on Debian stretch. Updating would also make it easier for SciSharp Cube to update its Docker image (SciSharp/SciSharpCube#4).
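
For what it's worth, the upgrade itself should mostly amount to retargeting the project files and swapping the Docker base image. A rough sketch is below; the project file name, the assumption that the project currently targets netcoreapp2.2, and the choice of .NET 6.0 are mine, not taken from this repo:

<!-- ICSharpCore.csproj (hypothetical file name): retarget the framework -->
<PropertyGroup>
  <!-- presumably was: <TargetFramework>netcoreapp2.2</TargetFramework> -->
  <TargetFramework>net6.0</TargetFramework>
</PropertyGroup>

# Dockerfile: .NET 5+ SDK images drop the "core/" segment from the repository path
# was: FROM mcr.microsoft.com/dotnet/core/sdk:2.2
FROM mcr.microsoft.com/dotnet/sdk:6.0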

I tested the following cell on WSL (Ubuntu 20.04) on Windows 11, with .NET Core SDK 3.1, TensorFlow.NET 0.70.0, TensorFlow.Keras 0.7, and NumSharp 0.30.0:

using Tensorflow.NumPy;
using static Tensorflow.Binding;
using static Tensorflow.KerasApi;
using Tensorflow;

// Parameters
var training_steps = 1000;
var learning_rate = 0.01f;
var display_step = 100;

// Sample data
var X = np.array(3.3f, 4.4f, 5.5f, 6.71f, 6.93f, 4.168f, 9.779f, 6.182f, 7.59f, 2.167f,
             7.042f, 10.791f, 5.313f, 7.997f, 5.654f, 9.27f, 3.1f);
var Y = np.array(1.7f, 2.76f, 2.09f, 3.19f, 1.694f, 1.573f, 3.366f, 2.596f, 2.53f, 1.221f,
             2.827f, 3.465f, 1.65f, 2.904f, 2.42f, 2.94f, 1.3f);
var n_samples = X.shape[0];

// We can set a fixed init value in order to demo
var W = tf.Variable(-0.06f, name: "weight");
var b = tf.Variable(-0.73f, name: "bias");
var optimizer = keras.optimizers.SGD(learning_rate);

// Run training for the given number of steps.
foreach (var step in range(1, training_steps + 1))
{
    // Run the optimization to update W and b values.
    // Wrap computation inside a GradientTape for automatic differentiation.
    using (var g = tf.GradientTape())
    {
        // Linear regression (Wx + b).
        var pred = W * X + b;
        // Mean square error.
        var loss = tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * n_samples);
        // should stop recording
        // Compute gradients.
        var gradients = g.gradient(loss, (W, b));

        // Update W and b following gradients.
        optimizer.apply_gradients(zip(gradients, (W, b)));

        if (step % display_step == 0)
        {
            pred = W * X + b;
            loss = tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * n_samples);
            print($"step: {step}, loss: {loss.numpy()}, W: {W.numpy()}, b: {b.numpy()}");
        }
    }
}

Output:

step: 100, loss: 0.17983408, W: 0.43350387, b: -0.49056756 
step: 200, loss: 0.15764152, W: 0.41270342, b: -0.34310183 
step: 300, loss: 0.14023502, W: 0.39428195, b: -0.21250194 
step: 400, loss: 0.12658237, W: 0.37796733, b: -0.09683866 
step: 500, loss: 0.11587408, W: 0.36351863, b: 0.00559611 
step: 600, loss: 0.107475124, W: 0.35072243, b: 0.0963154 
step: 700, loss: 0.1008875, W: 0.33938974, b: 0.1766591 
step: 800, loss: 0.09572056, W: 0.32935318, b: 0.2478138 
step: 900, loss: 0.091667935, W: 0.32046452, b: 0.31083038 
step: 1000, loss: 0.0884893, W: 0.31259245, b: 0.3666397 
