ltu
Question: Did you encounter poor-quality output from LTU in the early stages of training?
I am trying to use a similar framework to align LLaMA with other time-series data, but my fine-tuned model rarely outputs a correctly formatted answer for multiple-choice questions, which makes it very difficult to evaluate. Did you run into the same issue in the early stages of LTU? I would greatly appreciate it if you could share your detailed method or any relevant materials.
Prompt: ... Your answer should follow the format: 'Answer: <a label in above options> Reason: <optional reason, less than 30 words>'
Response 1: Answer: Multi-choice question about XXX: Option A - XXX. Reason: XXX. Option B - XXX. Reason: XXX

Response 2: [0.65, 0.25, 0.15, 0.65, 0.45, 0.85, 0.75]\n\n### Confusion: [0.65, 0.25, 0.15, 0.65, 0.45, 0.85, 0.75]\n\n\nThe confusion values indicate that the model has difficulty in distinguishing between activities 3, 4, 5, and 6. The model's predictions are close to the real values for activities 1, 2, and 6, but not as accurate for activities 3, 4, and 5.
By the way, my toy dataset is very small, roughly 9k samples, and my encoder has only about 10K parameters. My training hyperparameters are 5 epochs with a learning rate of 1e-4.
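For concreteness, my setup corresponds roughly to the following, expressed here with Hugging Face `TrainingArguments` purely for illustration (my actual framework differs; the batch size, logging interval, and output path below are placeholders, not values from my run):

```python
from transformers import TrainingArguments

# Only the epoch count and learning rate reflect my actual run;
# everything else is a placeholder for illustration.
training_args = TrainingArguments(
    output_dir="./timeseries_llama_ft",  # placeholder path
    num_train_epochs=5,                  # 5 epochs, as stated above
    learning_rate=1e-4,                  # 1e-4, as stated above
    per_device_train_batch_size=8,       # placeholder: not specified
    logging_steps=50,                    # placeholder: not specified
)
```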
Thank you for your time and attention.