Towards AI-Assisted Synthesis of Verified Dafny Methods

Abstract

Large stochastic language models show great promise in many domains, including programming. A promise is easy to make but hard to keep, and language models often fail to keep their promises when applied to programming, generating erroneous code. One promising avenue to keep models honest is to have them generate code in a language that supports formal verification: the model then provides a proof along with the code, and that proof can be checked automatically. Unfortunately, existing large language models show a severe lack of proficiency in verified programming languages.
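As an illustration of what such verified generation targets look like (our sketch, not an artifact from the paper), the Dafny method below carries its specification in ensures clauses, and Dafny's verifier automatically proves that the body satisfies them.

// Minimal illustrative Dafny method: the verifier automatically checks
// that the body satisfies the ensures clauses (the specification).
method Abs(x: int) returns (r: int)
  ensures r >= 0
  ensures r == x || r == -x
{
  if x < 0 {
    r := -x;
  } else {
    r := x;
  }
}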
In this paper we demonstrate how to improve two pretrained models’ proficiency in the Dafny verified programming language. Using 178 programming problems from the MBPP dataset, we prompt two contemporary models (GPT-4 and PaLM-2) to generate methods in Dafny. We use three different types of prompts: a direct, contextless prompt; a second that includes the method’s signature and test cases; and a third that decomposes the problem into steps and includes dynamically chosen similar examples. Our results show that GPT-4 performs better than PaLM-2, but that, in both models, the third prompt greatly improves the success of the generation task relative to the direct prompt. With the third prompt, GPT-4 generated verified (and human-evaluated) Dafny methods in 58% of the cases, while the first prompt succeeded in only 19% of the cases. Surprisingly, the second prompt had the worst performance, at only 10%.
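As a hedged illustration of the second prompt style, the sketch below shows the kind of method signature and test method such a prompt might supply for an MBPP-style task. The task (minimum of two integers), the names, and the specification are our own invention, not taken from the paper's prompts, and a trivial body is included so the example verifies standalone.

// Hypothetical sketch of a "signature plus test cases" setup for an
// MBPP-style task; the task and all names here are illustrative only.
method Min(a: int, b: int) returns (m: int)
  ensures m <= a && m <= b
  ensures m == a || m == b
{
  if a < b { m := a; } else { m := b; }
}

// A test case expressed as a Dafny method: the assertion is discharged
// statically by the verifier using Min's postconditions.
method MinTest() {
  var r := Min(2, 5);
  assert r == 2;
}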
One tangible contribution of our work is a collection of 153 MBPP problems that are implemented and formally verified in Dafny, 50 of which were written by us and 103 of which were automatically synthesized by GPT-4. Additionally, our results demonstrate that the benefits of formal program verification (proof of correctness) are now within reach of large stochastic language models used to generate code. These results also demonstrate that program verification systems can likewise benefit from incorporating large language models, whether to synthesize code wholesale, to generate specifications, or to construct internal verification annotations, such as loop invariants, that are hard for programmers and verification tools to find directly. Finally, we expect these capabilities to extend to other domains (e.g. legal arguments, transport signaling, structural engineering, etc.) where solutions must be correct, and where that correctness needs to be verifiable by existing formal tools, or explained to (and understood by) designers and end-users.
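The loop invariants mentioned above are the annotations a verifier needs to reason about loops. The sketch below is our own illustration rather than one of the paper's 153 problems; it shows a typical pair of invariants for summing an array, without which Dafny cannot prove the postcondition.

// Illustrative example of the loop invariants the abstract refers to:
// the verifier uses them to carry the proof across loop iterations.
function SumSpec(a: array<int>, n: int): int
  requires 0 <= n <= a.Length
  reads a
  decreases n
{
  if n == 0 then 0 else SumSpec(a, n - 1) + a[n - 1]
}

method SumArray(a: array<int>) returns (s: int)
  ensures s == SumSpec(a, a.Length)
{
  s := 0;
  var i := 0;
  while i < a.Length
    invariant 0 <= i <= a.Length   // i stays within the array bounds
    invariant s == SumSpec(a, i)   // s is the sum of the first i elements
  {
    s := s + a[i];
    i := i + 1;
  }
}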

Publication
In ACM International Conference on the Foundations of Software Engineering (FSE 2024), Porto de Galinhas, Brazil.
Iris Ma
Research & Teaching Assistant

My research interests include program verification, LLM applications, and LLM testing.