e2e-model-learning
Low GPU utilization for sequential quadratic programming solver
When I train the task_net from the power scheduling problem (modified to work with my data), the SQP solving step takes forever. While it runs, my GPU (Tesla K80) utilization hovers at only ~3%. I'm not sure whether that is normal or what the bottleneck might be, but this step dominates training time. Training the RMSE nets is extremely quick, of course.
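To confirm that the SQP solve (rather than the forward/backward passes) accounts for the wall-clock time, one option is to accumulate per-phase timings in the training loop. Below is a minimal stdlib-only sketch; the phase labels and the commented loop body are hypothetical placeholders, not code from this repo:

```python
import time
from collections import defaultdict
from contextlib import contextmanager

timings = defaultdict(float)

@contextmanager
def timed(label):
    """Accumulate wall-clock seconds spent under each label."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[label] += time.perf_counter() - start

# Hypothetical usage inside the training loop:
# for batch in loader:
#     with timed("forward"):    out = task_net(batch)
#     with timed("sqp_solve"):  z = solve_sqp(out)   # suspected bottleneck
#     with timed("backward"):   loss.backward(); opt.step()

# Stand-in work so the sketch runs on its own:
with timed("sqp_solve"):
    time.sleep(0.05)
with timed("forward"):
    time.sleep(0.01)

total = sum(timings.values())
for label, t in sorted(timings.items(), key=lambda kv: -kv[1]):
    print(f"{label}: {t:.3f}s ({100 * t / total:.0f}%)")
```

If the solve phase dominates while GPU utilization stays near idle, that would suggest the QP/SQP iterations are running on (or repeatedly synchronizing with) the CPU rather than being GPU-bound.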