[Question] Proper way to run nn.Modules for testing
❓ General Questions
I'm trying to add a model I'm interested in running with MLC-LLM. Is there a method for testing the intermediate nn.Modules that make up the model? For example, if I have an attention class with a defined forward function, is there a way to validate the outputs of the class as I'm building the model?
I was hoping I could run something like the snippet below, but from reading the code it seems that forward is only used for tracing, so it can't produce actual output values.
```python
import numpy as np

my_module = Module(config)      # some intermediate nn.Module from the model
test_data = np.zeros(shape)     # dummy input of the expected shape
outputs = my_module(test_data)  # hoping this returns concrete values
```
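For context, the closest thing I've found while digging through the TVM relax frontend (which these nn.Modules come from) is that nn.Module appears to expose a jit() helper that compiles the module and wraps it so you can call it on torch tensors. The sketch below is my guess at how a quick numerical check would look; the jit()/nn.spec.Tensor usage and the TinyBlock class are assumptions on my part from skimming the source, not verified API:

```python
import torch
from tvm.relax.frontend import nn

# Hypothetical stand-in for an intermediate block under test.
class TinyBlock(nn.Module):
    def __init__(self, hidden_size: int):
        self.proj = nn.Linear(hidden_size, hidden_size)

    def forward(self, x: nn.Tensor):
        return self.proj(x)

model = TinyBlock(64)

# If I'm reading the source right, jit() traces forward against the given
# spec, compiles it, and returns a wrapper callable on torch tensors.
compiled = model.jit(
    spec={"forward": {"x": nn.spec.Tensor([1, 8, 64], "float32")}},
    device="cpu",
)
out = compiled["forward"](torch.zeros(1, 8, 64))
print(out.shape)  # expect (1, 8, 64) if the round trip works
```

Is something like this the intended workflow for validating a module's outputs, or is there a supported testing path I'm missing?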