Is there a way to customize the input to train an adapter?
Environment info
- adapter-transformers version: 3.0.1
- Platform: MacOS Monterey 12.3.1
- Python version: 3.9.12
- PyTorch version (GPU?): 1.12.0.dev20220520 (CPU or MPS)
Details
Dear all,
I am working on combining another pre-trained vector with the hidden states of the transformer (for example, taking a product of the two) and feeding the combined vector into an adapter layer to train a new adapter. Is there a way to do that using adapter-transformers?
I have been trying to build this from scratch, but for further experiments I would love to use the features of adapter-transformers, such as stacking and fusing adapters. It would be amazing if this were feasible with the package.
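To make the idea concrete, here is a rough sketch of what I have in mind. The hidden size, the external vector, and the import path of the internal Adapter module are placeholders and assumptions on my side, not taken from the library docs:

import torch
from transformers import AdapterConfig
from transformers.adapters.modeling import Adapter  # assumed internal module path

hidden_size = 768  # placeholder hidden size
extra_vec = torch.randn(hidden_size)  # stands in for the externally pre-trained vector

config = AdapterConfig(mh_adapter=True, output_adapter=True, reduction_factor=16, non_linearity="relu")
adapter = Adapter(adapter_name="custom_input", input_size=hidden_size, down_sample=64, config=config)

# Hidden states as they come out of a transformer layer: (batch, seq_len, hidden_size)
hidden_states = torch.randn(2, 10, hidden_size)

# Combine with the external vector, e.g. an element-wise product
# broadcast over the batch and sequence dimensions.
combined = hidden_states * extra_vec

# Feed the combined representation through the adapter bottleneck,
# keeping the original hidden states as the residual input.
output, down, up = adapter(combined, hidden_states)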
Thank you very much and best regards,
Hi,
I tried to solve this as follows; could you please help me check whether it works?
import torch
from transformers import AdapterConfig
from transformers.adapters.modeling import Adapter  # assuming the adapter-transformers 3.0.x internal module layout

# Define the adapter configuration and the adapter layer itself.
adapter_config = AdapterConfig(mh_adapter=True, output_adapter=True, reduction_factor=16, non_linearity="relu")
self.lang_adapter = Adapter(adapter_name=name, input_size=hidden_size, down_sample=64, config=adapter_config)

# After replacing the first hidden_state in hidden_states -> hidden_states_new,
# put the adapter into training mode.
self.lang_adapter.train()

# Initialize the residual with zeros.
residual = torch.zeros(self.hidden_size)
for idx, hidden_state in enumerate(self.hidden_states_new):
    print(f"idx {idx} hidden_state {hidden_state.size()}")
    # The Adapter forward returns the output plus the down- and up-projected states.
    output, down, up = self.lang_adapter(hidden_state, residual)
    residual = output
Would calling .train() and running this loop update the parameters of the adapter?
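Or, in case .train() alone is not enough, I assume I would need an optimizer step like the following around the loop above; the loss function here is just a placeholder:

# Hypothetical training step around the loop above.
optimizer = torch.optim.AdamW(self.lang_adapter.parameters(), lr=1e-4)

loss = some_loss_fn(output)  # placeholder; depends on the actual training objective
optimizer.zero_grad()
loss.backward()
optimizer.step()  # only this step actually updates the adapter parameters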
Thank you very much!
This issue has been automatically marked as stale because it has been without activity for 90 days. This issue will be closed in 14 days unless you comment or remove the stale label.
This issue was closed because it was stale for 14 days without any activity.