
questions about paper "A Structural Analysis of Pre-Trained Language Models for Source Code"

Open skye95git opened this issue 2 years ago • 8 comments

  1. The high variability would suggest a content-dependent head, while low variability would indicate a content-independent head.

Figure 7: Visualization of attention heads in CodeBERT, along with the value of attention analysis (p_α(f)) and attention variability, given a Python code snippet.

What do high and low attention variability mean here?

  1. What are the inputs and outputs of models in Syntax Tree Induction?
  2. Why is it content-dependent?

skye95git avatar May 27 '22 03:05 skye95git

Thanks for raising these questions! First, the variability is the attention variability; we consider a high value to indicate a content-dependent head and a low value a content-independent head. Second, our input is the pruned AST and the code snippet, i.e. with no symbols (for example: https://drive.google.com/file/d/1FMgABZMACAv8OjU7wcMliMqc9m3_APQ1/view?usp=sharing). Third, it has high variability, and we consider that the attention distribution at this head does not depend on position. Hope this helps!

timetub avatar May 27 '22 07:05 timetub
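To make the high/low distinction concrete, attention variability can be computed roughly as follows. This is a minimal sketch of the idea behind the paper's Formula 5 (how much a head's attention map changes across inputs, normalised to [0, 1]), not a verbatim reimplementation:

```python
import numpy as np

def attention_variability(attn_maps):
    """Normalised mean absolute deviation of one head's attention maps
    across inputs. `attn_maps` has shape (num_inputs, seq, seq), each
    row of each map summing to 1. Returns 0 when the head produces the
    same map for every input, approaching 1 for fully disjoint maps.
    Sketch only; the paper's exact Formula 5 may differ in detail.
    """
    attn_maps = np.asarray(attn_maps, dtype=float)
    mean_map = attn_maps.mean(axis=0)              # average map over inputs
    deviation = np.abs(attn_maps - mean_map).sum()
    # Divide by twice the total attention mass so the score is bounded.
    return deviation / (2.0 * attn_maps.sum())

# A head that always attends to the same positions -> variability 0
same = np.tile(np.eye(4), (3, 1, 1))
print(attention_variability(same))  # 0.0
```

A content-dependent head would score high here because its maps change from input to input; a positional head would score near zero.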

> Thanks for raising these questions! First, the variability is the attention variability; we consider a high value to indicate a content-dependent head and a low value a content-independent head. Second, our input is the pruned AST and the code snippet, i.e. with no symbols (for example: https://drive.google.com/file/d/1FMgABZMACAv8OjU7wcMliMqc9m3_APQ1/view?usp=sharing). Third, it has high variability, and we consider that the attention distribution at this head does not depend on position. Hope this helps!

Thanks for your reply!

  1. So, the attention variability is calculated according to Formula 5, right? If the calculated value is high, the head is considered content-dependent; otherwise, it is considered content-independent.
  2. Does it also reflect that different heads attend to different information?

skye95git avatar May 27 '22 07:05 skye95git

If the input is the pruned AST and the code snippet, what are the outputs of the models in Syntax Tree Induction? Is the output whether there is an edge between two nodes?

skye95git avatar May 27 '22 08:05 skye95git

Actually, the pruned AST structure is the gold standard; we use the method described in our paper to induce a binary tree and compute the similarity between the two trees.

timetub avatar May 27 '22 09:05 timetub
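One common way to score the similarity between an induced binary tree and a gold tree is F1 over constituent spans. The sketch below assumes trees are represented as nested tuples of tokens; it illustrates one standard measure, and the paper may use a different one:

```python
def spans(tree, start=0):
    """Return (end, span_set): the end position of `tree` and the set of
    (start, end) constituent spans it covers. Leaves contribute no span."""
    if not isinstance(tree, tuple):
        return start + 1, set()
    pos, all_spans = start, set()
    for child in tree:
        pos, child_spans = spans(child, pos)
        all_spans |= child_spans
    all_spans.add((start, pos))
    return pos, all_spans

def span_f1(gold, induced):
    """F1 over constituent spans of two trees over the same token sequence.
    A sketch of one common tree-similarity measure, not necessarily the
    paper's exact metric."""
    _, g = spans(gold)
    _, p = spans(induced)
    tp = len(g & p)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(p), tp / len(g)
    return 2 * precision * recall / (precision + recall)

print(span_f1(("a", ("b", "c")), (("a", "b"), "c")))  # 0.5
```

Identical trees score 1.0; trees that only agree on the root span score lower.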

> Actually, the pruned AST structure is the gold standard; we use the method described in our paper to induce a binary tree and compute the similarity between the two trees.

Thanks for your reply! I don't understand what "induce" means. Does inducing a binary tree mean generating an AST from scratch, or only predicting edges?

skye95git avatar May 30 '22 02:05 skye95git

> Actually, the pruned AST structure is the gold standard; we use the method described in our paper to induce a binary tree and compute the similarity between the two trees.
>
> Thanks for your reply! I don't understand what "induce" means. Does inducing a binary tree mean generating an AST from scratch, or only predicting edges?

Yes, inducing a binary tree means generating a tree from scratch.

timetub avatar May 30 '22 03:05 timetub
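As a rough illustration of what "generating a tree from scratch" can look like: assuming a syntactic-distance score is available for each boundary between adjacent tokens (e.g., derived from the model's attention), a binary tree can be induced by recursively splitting the span at the largest distance. This is a sketch of generic distance-based induction, not the paper's exact procedure:

```python
def induce_tree(tokens, distances):
    """Recursively split the token span at the largest syntactic
    distance, yielding a binary tree as nested tuples.

    `distances[i]` scores the boundary between tokens[i] and
    tokens[i+1]; a higher score means a more likely constituent break.
    Illustrative sketch only.
    """
    if len(tokens) == 1:
        return tokens[0]
    split = max(range(len(distances)), key=lambda i: distances[i])
    left = induce_tree(tokens[:split + 1], distances[:split])
    right = induce_tree(tokens[split + 1:], distances[split + 1:])
    return (left, right)

print(induce_tree(["if", "x", ":", "return", "x"], [0.2, 0.9, 0.1, 0.05]))
# (('if', 'x'), (':', ('return', 'x')))
```

The induced tree is then compared against the gold pruned AST, as described above.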

Thanks. There are two final questions:

  1. So, the attention variability is calculated according to Formula 5, right? If the calculated value is high, the head is considered content-dependent; otherwise, it is considered content-independent.
  2. Does it also reflect that different heads attend to different information?

skye95git avatar May 30 '22 03:05 skye95git

Yes, your understanding is right.

timetub avatar Jun 15 '22 06:06 timetub