HCareLou

31 comments by HCareLou

```
protected override void RegisterComponents()
{
    if (_registered) return;

    for (int i = 0; i < _list.Count; i++)
    {
        register_module($"{i}", _list[i]);
    }

    _registered = true;
}
```

This is the...

We can call RegisterComponents once in the top-level model, so that its nested sub-modules are registered automatically, which is the most convenient approach.

![image](https://github.com/dotnet/TorchSharp/assets/55724885/6a098664-5128-4ff1-91c5-5d2359086c23) I wonder if this could serve as a relatively good solution.

To facilitate the use of pre-trained weights in TorchSharp, it is advisable to maintain consistency with PyTorch as much as possible.

I'm not sure if this is feasible or not, but the idea is to call RegisterComponents within the parameterless constructor of nn.Module. This way, when you create a custom model...
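For reference, this is roughly what PyTorch itself does: assigning a sub-module as an attribute triggers registration inside `__setattr__`, so no explicit registration call is needed. The sketch below is not actual PyTorch or TorchSharp code — class and method names are illustrative — it only demonstrates the mechanism being discussed.

```python
# Minimal illustrative sketch (NOT real PyTorch/TorchSharp code) of the
# __setattr__-based auto-registration mechanism: assigning a sub-module
# as an attribute registers it under that attribute's name.
class MiniModule:
    def __init__(self):
        # Bypass our own __setattr__ while creating the registry itself.
        object.__setattr__(self, "_modules", {})

    def __setattr__(self, name, value):
        # Auto-register any sub-module by its attribute name.
        if isinstance(value, MiniModule):
            self._modules[name] = value
        object.__setattr__(self, name, value)

    def named_modules(self, prefix=""):
        # Walk the module tree, yielding dotted registration names.
        yield prefix, self
        for name, mod in self._modules.items():
            child = f"{prefix}.{name}" if prefix else name
            yield from mod.named_modules(child)


class Block(MiniModule):
    pass


class Net(MiniModule):
    def __init__(self):
        super().__init__()
        self.encoder = Block()  # registered automatically as "encoder"
        self.decoder = Block()  # registered automatically as "decoder"


net = Net()
print([name for name, _ in net.named_modules()])
# -> ['', 'encoder', 'decoder']
```

This is the behavior the parameterless-constructor idea tries to approximate: the user writes plain attribute assignments and registration happens as a side effect.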

Switching gears: we could declare properties and mark them with a custom attribute whose "name" value is used as the registration name. Subsequently, we could employ...

So, to address the issue of Sequential not registering its sub-modules, for the time being, should we also rewrite Sequential’s RegisterComponents, just like we did with ModuleList?

Storing using binary

```
def encode(writer, value: int) -> None:
    if value < 0:
        raise NotImplementedError("LEB128 encoding of negative numbers is not implemented")
    while value > 0:
        num = value &...
```
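The snippet is cut off, but it appears to implement unsigned LEB128. Below is a complete, runnable sketch of that scheme, paired with the matching decoder; `writer`/`reader` are assumed to be binary streams. Note one deliberate difference: the truncated original loops `while value > 0`, which would emit nothing for zero, so this sketch uses a do-while-style loop that encodes 0 as a single `0x00` byte.

```python
import io


def encode(writer, value: int) -> None:
    """Write `value` to a binary stream as unsigned LEB128."""
    if value < 0:
        raise NotImplementedError("LEB128 encoding of negative numbers is not implemented")
    while True:
        byte = value & 0x7F  # take the low 7 bits
        value >>= 7
        if value:
            writer.write(bytes([byte | 0x80]))  # high bit set: more bytes follow
        else:
            writer.write(bytes([byte]))  # high bit clear: last byte
            break


def decode(reader) -> int:
    """Read one unsigned LEB128 value from a binary stream."""
    result = 0
    shift = 0
    while True:
        (byte,) = reader.read(1)
        result |= (byte & 0x7F) << shift
        if not byte & 0x80:
            return result
        shift += 7


buf = io.BytesIO()
encode(buf, 300)
buf.seek(0)
print(decode(buf))  # -> 300
```

For example, 300 encodes to the two bytes `0xAC 0x02`, since 300 = 44 + (2 << 7).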

```
/// <summary>
/// Load model parameters
/// </summary>
/// <param name="dict">parameter dictionary</param>
/// <param name="location">location of the parameter file</param>
public static void LoadStateDict(this Dictionary dict, string location)
{
    using FileStream stream = File.OpenRead(location);
    using BinaryReader reader = new BinaryReader(stream);
    var...
```

Yes, indeed, you could change it to save_tensor_to_binary(tensor, binary_file). It's worth noting that the conversion to double was initially intended for enhanced compatibility. As an alternative, you could experiment with...
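To make the save/load pairing concrete, here is a hedged sketch of what a `save_tensor_to_binary`-style helper could look like. The function names come from the discussion, but the format here (an element count followed by little-endian 64-bit doubles, with the double conversion for cross-runtime compatibility as noted above) is an assumption for illustration, and the "tensor" is modeled as a flat list of numbers.

```python
import io
import struct


def save_tensor_to_binary(tensor, binary_file) -> None:
    # Illustrative format assumption: element count as int64, then each
    # value converted to a little-endian 64-bit double for compatibility.
    values = list(tensor)
    binary_file.write(struct.pack("<q", len(values)))
    for v in values:
        binary_file.write(struct.pack("<d", float(v)))


def load_tensor_from_binary(binary_file):
    # Read back the same length-prefixed layout written above.
    (count,) = struct.unpack("<q", binary_file.read(8))
    return [struct.unpack("<d", binary_file.read(8))[0] for _ in range(count)]


buf = io.BytesIO()
save_tensor_to_binary([1.5, -2.0, 3.25], buf)
buf.seek(0)
print(load_tensor_from_binary(buf))  # -> [1.5, -2.0, 3.25]
```

A length prefix like this is what lets a reader such as the `LoadStateDict` extension above know how many values to consume per entry without a separate schema.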