Emiliano Martinez
The problem is related to how a KerasNet model is saved at every checkpoint; as far as I can see in the code it still uses Java serialization, and the function from the Module...
There is a problem with Java serialization related to the model's size: it ends up two times bigger. Besides, if you want to resume a KerasNet model from a checkpoint you need...
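For reference, a minimal sketch of the plain Java serialization round trip being discussed; the `ToyModel` class is a made-up stand-in, not the actual KerasNet module, and the numbers are illustrative only:
```
import java.io._

// Hypothetical stand-in for a model; the real KerasNet case is not reproduced here.
case class ToyModel(weights: Array[Float]) extends Serializable

object SerializationSketch {
  def main(args: Array[String]): Unit = {
    val model = ToyModel(Array.fill(1000)(0.5f))

    // Save: the same mechanism a Java-serialization-based checkpoint relies on.
    val bytes = new ByteArrayOutputStream()
    val out = new ObjectOutputStream(bytes)
    out.writeObject(model)
    out.close()
    println(s"Serialized size in bytes: ${bytes.size()}")

    // Resume: deserializing gives back a plain object graph.
    val in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray))
    val restored = in.readObject().asInstanceOf[ToyModel]
    in.close()
    println(s"Restored ${restored.weights.length} weights")
  }
}
```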
I could add more examples using the functional API to the docs. I usually use Stack Overflow, because a large part of the Spark community is there and it would help BigDL...
An example would be:
```
val input = Variable[Float](Shape(1, 10))
val dense1 = Dense[Float](2).from(input)
// existing model
val model = Model(input, dense1)
val newLayer = Dense[Float](2)
// Add a new...
```
I see, these kinds of dependencies are a problem (bearing in mind that Spark 3.2 was released a year and a half ago). For now, the only solution is to create...
Let me check it out.
Well, there is a problem related to the implicit evidence parameters that the compiler creates. **This is with Scala 2.13**
```
private[tensor] class DenseTensor[@specialized T: ClassTag](
  ...
```
It includes an implicit...
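For context, a minimal sketch of the desugaring being referred to: a `ClassTag` context bound on the type parameter is sugar for an extra implicit evidence parameter in the constructor. The class names here are made up for illustration and are not the BigDL code:
```
import scala.reflect.ClassTag

// Written with a context bound, in the same style as DenseTensor:
class WithContextBound[@specialized(Float, Double) T: ClassTag](val storage: Array[T])

// What the compiler roughly desugars it to: an extra implicit evidence parameter list.
class Desugared[@specialized(Float, Double) T](val storage: Array[T])(implicit evidence: ClassTag[T])

object EvidenceSketch {
  def main(args: Array[String]): Unit = {
    val a = new WithContextBound[Float](Array(1f, 2f)) // ClassTag[Float] supplied implicitly
    val b = new Desugared[Float](Array(1f, 2f))        // same, via the explicit implicit parameter
    println(a.storage.mkString(",") + " / " + b.storage.mkString(","))
  }
}
```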
Wonderful! I'll try it as soon as possible. Thank you very much!
It seems to work well. There are some issues related to implicit conversions, which is normal bearing in mind that the Scala 3 compiler is a new compiler. But it looks great,...
It is normal; the problem happens if you start more than one thread in the training phase. Every thread creates a copy of the output of each module in the forward...
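A minimal sketch of the effect being described, with a made-up `ToyModule` standing in for a real layer (nothing here is the actual BigDL code): when each training thread gets its own copy of the model, each copy retains its own forward-pass output buffer, so memory grows roughly linearly with the thread count.
```
// Hypothetical illustration: per-thread model copies each keep their own forward output.
class ToyModule {
  var output: Array[Float] = Array.empty          // buffer retained after forward
  def forward(input: Array[Float]): Array[Float] = {
    output = input.map(_ * 2.0f)                  // each copy allocates and keeps its own output
    output
  }
}

object ThreadCopySketch {
  def main(args: Array[String]): Unit = {
    val nThreads = 4
    val input = Array.fill(1 << 20)(1.0f)         // ~4 MB of input per forward pass

    // One model copy per training thread: 4 threads => 4 retained output buffers (~16 MB here).
    val copies = Array.fill(nThreads)(new ToyModule)
    val threads = copies.map { m =>
      new Thread(() => { m.forward(input); () })
    }
    threads.foreach(_.start())
    threads.foreach(_.join())

    val retainedFloats = copies.map(_.output.length.toLong).sum
    println(s"Retained output floats across $nThreads copies: $retainedFloats")
  }
}
```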