
LLM4Decompile: Open-Source Large Language Models Focused on Decompilation, with a Strong Emphasis on Code Execution and Recompilation Capabilities

Decompilation is a pivotal process in software reverse engineering: it enables the analysis and interpretation of binary executables when the source code is not directly accessible. It is valuable for security analysis, bug detection, and the recovery of legacy code, yet producing human-readable and semantically accurate source code from a binary remains a substantial challenge.

Traditional decompilation research has relied on a variety of tools and methods for translating binary code back into source code, with varying degrees of success. Tools like Ghidra and IDA Pro perform well in many scenarios, but their output frequently requires manual rework before it reads like code a human would write. The task is further complicated by the inherent difficulty of reconstructing finer details of the original source, such as variable names and control structures like loops and conditionals, which are commonly lost during compilation.

LLM4Decompile, recently introduced by the Southern University of Science and Technology and the Hong Kong Polytechnic University, takes a different approach. It uses large language models pretrained on extensive C source code and matching assembly code, leveraging their predictive capabilities to accurately reconstruct source code from binary executables. The approach prioritizes the executability of the generated code, which is treated as the key measure of functional correctness.
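To give a concrete sense of how such a model is used, the sketch below loads a causal language model with Hugging Face transformers and prompts it with the assembly of a single function. The model ID, input file, and prompt wording are placeholders for illustration, not the project's published checkpoint or exact prompt format.

```python
# Illustrative sketch only: the model ID, input file, and prompt template are
# assumptions, not the published LLM4Decompile checkpoint or prompt format.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "path/to/llm4decompile-checkpoint"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Assembly for one function, e.g. produced by `gcc -S` or a disassembler.
with open("func0.s") as f:
    asm = f.read()

prompt = f"# This is the assembly code:\n{asm}\n# What is the source code?\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)

# Keep only the newly generated tokens (the reconstructed C function).
generated = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
```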

To train models ranging from 1B to 33B parameters, the team created a dataset of four billion tokens covering a wide range of C and assembly code pairs. This preparation is designed to give the models an in-depth understanding of code structure and semantics. Unlike earlier tools, which typically produced non-functional code or code that was difficult for humans to parse, LLM4Decompile aims to generate code that both mirrors the original source syntax and remains executable.
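As a rough illustration of how such source/assembly pairs can be assembled, the sketch below compiles each C file to assembly at several optimization levels and pairs the result with the original source. It assumes a local gcc toolchain and a flat directory of C files; it is not the authors' actual data pipeline.

```python
# Sketch of building C/assembly training pairs; the corpus path and the choice
# of gcc optimization levels are illustrative assumptions.
import pathlib
import subprocess

def c_to_asm(c_file: pathlib.Path, opt: str) -> str:
    """Compile one C file to assembly text with gcc at the given -O level."""
    asm_file = c_file.with_suffix(f".{opt}.s")
    subprocess.run(["gcc", f"-{opt}", "-S", str(c_file), "-o", str(asm_file)], check=True)
    return asm_file.read_text()

pairs = []
for c_file in sorted(pathlib.Path("corpus").glob("*.c")):
    source = c_file.read_text()
    for opt in ("O0", "O1", "O2", "O3"):  # vary optimization to diversify the assembly
        pairs.append({"opt": opt, "source": source, "asm": c_to_asm(c_file, opt)})
```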

LLM4Decompile’s effectiveness was evaluated on the newly introduced Decompile-Eval benchmark, which scores decompiled code on two measures: recompilability and re-executability. Recompilability asks whether the generated code compiles again; re-executability asks whether the recompiled binary still passes the original test assertions, reflecting how well the model preserves code semantics rather than just syntax. The 6B LLM4Decompile model reached a 90% recompilability rate and a 21% re-executability rate, a 50% improvement in decompilation performance over GPT-4, demonstrating a significant advance in decompilation accuracy and utility.
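In spirit, both metrics can be checked with an ordinary C toolchain. The sketch below assumes gcc is available and that each benchmark task supplies a test driver whose assertions determine the exit code; the file names and harness layout are illustrative, not Decompile-Eval's actual implementation.

```python
# Hedged sketch of Decompile-Eval-style checks; paths, file names, and the
# assertion-driver convention are assumptions for illustration.
import os
import subprocess
import tempfile

def recompilable(decompiled_c: str) -> bool:
    """Recompilability: does the decompiled C code compile to an object file?"""
    with tempfile.TemporaryDirectory() as d:
        src = os.path.join(d, "func.c")
        with open(src, "w") as f:
            f.write(decompiled_c)
        result = subprocess.run(["gcc", "-c", src, "-o", os.path.join(d, "func.o")])
        return result.returncode == 0

def re_executable(decompiled_c: str, test_driver_c: str) -> bool:
    """Re-executability: does the recompiled function pass the driver's assertions?"""
    with tempfile.TemporaryDirectory() as d:
        src = os.path.join(d, "func.c")
        driver = os.path.join(d, "main.c")
        exe = os.path.join(d, "a.out")
        with open(src, "w") as f:
            f.write(decompiled_c)
        with open(driver, "w") as f:
            f.write(test_driver_c)
        if subprocess.run(["gcc", src, driver, "-o", exe]).returncode != 0:
            return False
        try:
            return subprocess.run([exe], timeout=10).returncode == 0
        except subprocess.TimeoutExpired:
            return False
```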

In conclusion, LLM4Decompile marks a significant step for software reverse engineering. The tool not only addresses long-standing challenges in decompilation but also opens new paths for research and development. With its methodology and performance, LLM4Decompile offers a model for future work, signaling a future in which decompiled output can be as nuanced and refined as the code it aims to recover. For software engineering, this is an exciting moment, with LLM4Decompile spearheading a more capable and effective approach to decompilation.
