
Commit 8c767bd

isHuangXin authored and mallamanis committed
add 2 papers
1 parent 6637c71 commit 8c767bd

2 files changed: +26 −0

_publications/gui4cross.markdown

Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
---
layout: publication
title: "Cross-Language Binary-Source Code Matching with Intermediate Representations"
authors: Yi Gui, Yao Wan, Hongyu Zhang, Huifang Huang, Yulei Sui, Guandong Xu, Zhiyuan Shao, Hai Jin
conference: SANER
year: 2022
bibkey: gui2022cross
additional_links:
  - {name: "ArXiV", url: "https://arxiv.org/abs/2201.07420"}
  - {name: "Code", url: "https://github.com/CGCL-codes/naturalcc"}
tags: ["code similarity", "clone"]
---
Binary-source code matching plays an important role in many security and software engineering tasks such as malware detection, reverse engineering, and vulnerability assessment. Several approaches have been proposed for binary-source code matching that jointly learn embeddings of binary code and source code in a common vector space. Despite much effort, existing approaches target matching binary and source code written in a single programming language. In practice, however, software applications are often written in different programming languages to cater for different requirements and computing platforms. Matching binary and source code across programming languages introduces additional challenges when maintaining multi-language and multi-platform applications. To this end, this paper formulates the problem of cross-language binary-source code matching and develops a new dataset for it. We present XLIR, a novel Transformer-based neural network that learns intermediate representations for both binary and source code. To validate the effectiveness of XLIR, comprehensive experiments are conducted on two tasks, cross-language binary-source code matching and cross-language source-source code matching, on top of our curated dataset. Experimental results and analysis show that XLIR, with its intermediate representations, significantly outperforms state-of-the-art models in both tasks.
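The abstract describes matching binary and source code by embedding both into a shared vector space learned over intermediate representations (IR). As a rough, non-authoritative sketch of that general idea (not the paper's actual XLIR implementation), the PyTorch snippet below pairs two small Transformer encoders over IR token sequences and scores pairs by cosine similarity; all class names, hyperparameters, and the contrastive objective here are illustrative assumptions.

```python
# Illustrative sketch only: a shared-embedding matcher over IR token sequences,
# loosely in the spirit of cross-language binary-source matching. All names and
# hyperparameters are hypothetical; they are not taken from the paper or its code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IREncoder(nn.Module):
    """Encodes a sequence of IR token ids into a single L2-normalized vector."""
    def __init__(self, vocab_size=10000, dim=256, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, ids):                        # ids: (batch, seq_len)
        h = self.encoder(self.embed(ids))          # (batch, seq_len, dim)
        return F.normalize(h.mean(dim=1), dim=-1)  # mean-pool to (batch, dim)

# Two encoders share one vector space: one for IR lifted from binaries,
# one for IR compiled from source, so matching reduces to cosine similarity.
bin_encoder, src_encoder = IREncoder(), IREncoder()
bin_ids = torch.randint(0, 10000, (8, 128))        # toy batch of binary-side IR
src_ids = torch.randint(0, 10000, (8, 128))        # toy batch of source-side IR

bin_vec, src_vec = bin_encoder(bin_ids), src_encoder(src_ids)
scores = bin_vec @ src_vec.t()                     # pairwise cosine similarities
# Simple contrastive objective for illustration: matched pairs on the diagonal.
loss = F.cross_entropy(scores / 0.07, torch.arange(8))
print(scores.shape, loss.item())
```

For a faithful setup, the released code linked above (https://github.com/CGCL-codes/naturalcc) should be followed rather than this sketch.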

_publications/wan2022what.markdown

Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
---
layout: publication
title: "What Do They Capture? -- A Structural Analysis of Pre-Trained Language Models for Source Code"
authors: Yao Wan, Wei Zhao, Hongyu Zhang, Yulei Sui, Guandong Xu, Hai Jin
conference: ICSE
year: 2022
bibkey: wan2022what
additional_links:
  - {name: "ArXiV", url: "https://arxiv.org/abs/2202.06840"}
  - {name: "Code", url: "https://github.com/CGCL-codes/naturalcc"}
tags: ["Transformer", "pretraining", "program analysis"]
---
Recently, many pre-trained language models for source code have been proposed to model the context of code and serve as a basis for downstream code intelligence tasks such as code completion, code search, and code summarization. These models leverage masked pre-training and the Transformer architecture and have achieved promising results. However, there is still little work on the interpretability of existing pre-trained code models: it is not clear why these models work and what feature correlations they can capture. In this paper, we conduct a thorough structural analysis aiming to interpret pre-trained language models for source code (e.g., CodeBERT and GraphCodeBERT) from three distinctive perspectives: (1) attention analysis, (2) probing of word embeddings, and (3) syntax tree induction. Through comprehensive analysis, this paper reveals several insightful findings that may inspire future studies: (1) attention aligns strongly with the syntax structure of code; (2) pre-trained language models of code preserve the syntax structure of code in the intermediate representations of each Transformer layer; and (3) pre-trained models of code are able to induce syntax trees of code. These findings suggest that incorporating the syntax structure of code into pre-training may yield better code representations.
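The first of the abstract's three perspectives, attention analysis, starts from the raw self-attention maps of a pre-trained code model. As a minimal, non-authoritative sketch of how such maps can be obtained (the paper's actual analysis protocol is more involved), the snippet below loads CodeBERT via Hugging Face Transformers and computes a toy per-layer statistic; the "attention to previous token" measure is an illustrative assumption, not the paper's metric.

```python
# Illustrative sketch only: extracting per-layer self-attention from CodeBERT
# so the weights can later be compared with syntactic relations (e.g. AST edges).
from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base", output_attentions=True)

code = "def add(a, b):\n    return a + b"
inputs = tokenizer(code, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
for layer_idx, attn in enumerate(outputs.attentions):
    # Toy statistic: how much attention each head puts on the immediately
    # preceding token, a crude proxy for local structure (illustrative only).
    prev_token_mass = attn[0].diagonal(offset=-1, dim1=-2, dim2=-1).mean().item()
    print(f"layer {layer_idx}: mean attention to previous token = {prev_token_mass:.3f}")
```

The paper's other two perspectives (probing of word embeddings and syntax tree induction) require additional machinery beyond this attention extraction; see the linked code release for the authors' implementation.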

0 commit comments
