nataliebarcickikas
Directly calling `batch_predict` causes no issues:

```
batch_predict(encoding['input_ids'], encoding['image'], encoding['bbox'], encoding['attention_mask'], encoding['token_type_ids'])
```

Output: `tensor([[0.4553, 0.5447]], grad_fn=)`
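For reference, here is a minimal, self-contained sketch of what a `batch_predict` wrapper with this call signature might look like. The `TinyDocModel` stand-in, its sizes, and the randomly filled `encoding` dict are assumptions made purely so the example runs without the real LayoutLM checkpoint; the actual `batch_predict` would forward through the real model instead.

```python
# Hypothetical sketch, NOT the actual model: a tiny stand-in with the same
# call signature, so the batch_predict wrapper can be exercised end to end.
import torch
import torch.nn as nn

class TinyDocModel(nn.Module):
    """Stand-in for a LayoutLM-style 2-class document classifier (assumed)."""
    def __init__(self, vocab_size=100, hidden=16):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, hidden)
        self.head = nn.Linear(hidden, 2)

    def forward(self, input_ids, image=None, bbox=None,
                attention_mask=None, token_type_ids=None):
        pooled = self.emb(input_ids).mean(dim=1)  # [batch, hidden]
        return self.head(pooled)                  # [batch, 2] logits

model = TinyDocModel()

def batch_predict(input_ids, image, bbox, attention_mask, token_type_ids):
    # Softmax over the class dimension, so each row sums to 1,
    # matching the tensor([[0.4553, 0.5447]]) shape of the output above.
    logits = model(input_ids, image, bbox, attention_mask, token_type_ids)
    return torch.softmax(logits, dim=-1)

# Dummy encoding with plausible LayoutLM-style shapes (sequence length 44).
encoding = {
    'input_ids': torch.randint(0, 100, (1, 44)),
    'image': torch.rand(1, 3, 224, 224),
    'bbox': torch.randint(0, 1000, (1, 44, 4)),
    'attention_mask': torch.ones(1, 44, dtype=torch.long),
    'token_type_ids': torch.zeros(1, 44, dtype=torch.long),
}
probs = batch_predict(encoding['input_ids'], encoding['image'],
                      encoding['bbox'], encoding['attention_mask'],
                      encoding['token_type_ids'])
```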
I print the dimensions of the embeddings along with their four individual components. In the forward call:

```
batch_predict(encoding['input_ids'], encoding['image'], encoding['bbox'], encoding['attention_mask'], encoding['token_type_ids'])
Input embeddings: torch.Size([1, 44, 768])
Position embeddings: torch.Size([1,...
```
Where is the number of categorical variables accounted for? In the case of the pest control problem, would that correspond to `PESTCONTROL_N_STAGES = 5` in the following line? It is...
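For illustration only (this is not COMBO's actual code, and the variable names and counts below are assumptions): the number of categorical variables and the number of categories per variable are distinct quantities, and a constant like the one asked about could correspond to either one.

```python
# Hedged sketch: distinguishing "how many categorical variables" from
# "how many categories each variable can take". Values are assumed.
n_variables = 25    # e.g. one decision per stage of a schedule (assumption)
n_categories = 5    # choices available for each variable (assumption)

# The categorical search space is the Cartesian product of the per-variable
# category sets, so its size grows exponentially in the variable count.
search_space_size = n_categories ** n_variables

# One candidate configuration: a category index chosen for each variable.
candidate = [0] * n_variables
```

Under these assumptions the space already has `5 ** 25` configurations, which is why exhaustive search is infeasible and a surrogate-based method is used instead.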
Hi Changyong,

One bottleneck in my simulations seems to be posterior sampling, specifically this line: https://github.com/QUVA-Lab/COMBO/blob/e16358d706525874fb3b1a79c36b6200d4689ed9/COMBO/main.py#L100 The sampling takes a long time, as shown in the...
Hi ChangYong,

Thanks a lot for the clarification. Unfortunately, I am indeed using a couple of hundred categorical variables, so this step is a real bottleneck... Can the elementwise slice sampling...
Just following up on the above: I am not sure whether parallelizing the elementwise slice sampling loop is mathematically sound within the algorithm.

Best,
Natalie
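For context on the question: elementwise (coordinate-wise) slice sampling is a Gibbs-style scheme in which each coordinate update conditions on the latest values of the others, so naively updating coordinates in parallel from stale values generally no longer targets the same posterior; running several independent chains in parallel, by contrast, is always sound. Below is a minimal univariate slice sampler in the style of Neal's stepping-out/shrinkage procedure, not COMBO's implementation; the function names and defaults are illustrative assumptions.

```python
# Minimal univariate slice sampler sketch (stepping-out + shrinkage).
# NOT COMBO's code; names, defaults, and the target density are assumptions.
import math
import random

def slice_sample_1d(logp, x0, w=1.0, n_steps=100, rng=random.Random(0)):
    """Draw n_steps samples from the density exp(logp), up to a constant."""
    x = x0
    samples = []
    for _ in range(n_steps):
        # 1. Draw an auxiliary height uniformly under the density at x.
        log_y = logp(x) + math.log(rng.random())
        # 2. Step out to find an interval [l, r] that contains the slice.
        l = x - w * rng.random()
        r = l + w
        while logp(l) > log_y:
            l -= w
        while logp(r) > log_y:
            r += w
        # 3. Shrinkage: sample uniformly on [l, r], shrinking the interval
        #    toward x until the proposal lands inside the slice.
        while True:
            x_new = l + rng.random() * (r - l)
            if logp(x_new) > log_y:
                x = x_new
                break
            if x_new < x:
                l = x_new
            else:
                r = x_new
        samples.append(x)
    return samples

# Example target: a standard normal, via its log-density up to a constant.
samples = slice_sample_1d(lambda x: -0.5 * x * x, x0=0.0, n_steps=2000)
```

Note that within each call the loop over steps is inherently sequential (each sample conditions on the previous one), which mirrors why the elementwise loop over coordinates is hard to parallelize without changing the chain's stationary distribution.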