VPT 2013: Papers with Abstracts

Papers
Abstract. Ranking functions are a tool successfully used in termination analysis, complexity analysis, and program parallelization.
Among the different types of ranking functions and approaches to finding them, this talk will concentrate on
functions that are found by linear programming techniques. The setting is that of
a loop that has been pre-abstracted so that
it is described by linear constraints over a finite set of numeric variables.
I will review results (more or less recent) regarding the search for
ranking functions which are either linear or lexicographic-linear.
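As a minimal illustration of the linear-programming setting (a toy example assumed here, not taken from the talk), consider a loop whose abstraction is the single transition relation below; a linear ranking function is found by solving linear constraints on the template coefficients:
\[
\begin{array}{ll}
\text{transition relation:} & x \ge 1 \;\wedge\; x' = x - 1\\
\text{template:} & \rho(x) = c\,x + d\\
\text{requirements:} & \text{for all } (x,x') \text{ in the relation: } \rho(x) \ge 0 \text{ and } \rho(x) - \rho(x') \ge 1
\end{array}
\]
By Farkas' lemma the universally quantified requirements translate into existentially quantified linear constraints on the coefficients c and d, which an LP solver can discharge; here c = 1 and d = 0, i.e. the ranking function rho(x) = x, is a solution.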
Abstract. Synthesis holds the promise to revolutionize the development of
complex systems by automating the translation from specifications to
implementations. Synthesis algorithms provide the same level of
mathematical rigor as verification algorithms but can be applied at
earlier development stages, when only parts of the design are
available. Given a formal specification of the desired system
properties, for example in a temporal logic, we determine if the
partial design can be completed into a full design that satisfies the
properties.

For general distributed systems, the synthesis problem is undecidable.
However, there has been a sequence of discoveries where the
decidability was established for specific system architectures, such
as pipelines and rings, or other restrictions on the problem, such as
local specifications. Encouraged by these findings, new specification
languages like Coordination Logic aim for a uniform treatment of the
synthesis problem.

In this talk, I will review several techniques that transform
undecidable synthesis problems into decidable problems.
Abstract. The reachability problem for Petri nets is a central problem of net theory. The problem is known to be decidable by inductive invariants definable in the Presburger arithmetic. When the reachability set is definable in the Presburger arithmetic, the existence of such an inductive invariant is immediate. However, in this case, the computation of a Presburger formula denoting the reachability set was until recently an open problem. This problem was closed by proving that if the reachability set of a Petri net is definable in the Presburger arithmetic, then the Petri net is flatable, i.e., its reachability set can be obtained from runs labeled by words in a bounded language. As a direct consequence, classical algorithms based on acceleration techniques effectively compute a formula in the Presburger arithmetic denoting the reachability set.
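A minimal example (not from the paper) of a flatable net: two places p1 and p2 and a single transition t that moves one token from p1 to p2. From the initial marking (n, 0), the reachability set is
\[
\mathit{Reach}(n,0) \;=\; \{(x_1,x_2)\in\mathbb{N}^2 \mid x_1 + x_2 = n\},
\]
which is definable in the Presburger arithmetic, and every reachable marking is reached by a run labeled by a word in the bounded language t*; accelerating the single transition t computes the whole set.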
Abstract. We present a transformational approach to program verification and software model checking that uses three main ingredients:
(i) Constraint Logic Programming (CLP),
(ii) metaprogramming and program specialization, and
(iii) proof by transformation.
Abstract. The bar for adoption of refactoring tools is high: not only does a refactoring extract information from your source code, it also transforms it, often in a radical way.

After discussing what users require from their tools, we will examine ways in which tool builders can try to increase their users' confidence in the tools. These mechanisms include visualisation, unit testing, property-based testing and verification, and are based on the Kent functional programming group's experience of building the HaRe and Wrangler refactoring systems for Haskell and Erlang.
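As a small sketch of what property-based testing of a refactoring can look like (the toy expression type, the inlineLet refactoring, and the evaluator below are assumptions for illustration; this is not HaRe or Wrangler code), one states that the refactored program has the same meaning as the original and lets QuickCheck search for counterexamples:

-- A minimal sketch: property-based testing of a toy "inline let" refactoring.
import Test.QuickCheck

data Expr = Lit Int
          | Add Expr Expr
          | Let String Expr Expr   -- let x = e1 in e2
          | Var String
          deriving (Show)

-- Evaluate an expression in an environment of bound variables.
eval :: [(String, Int)] -> Expr -> Int
eval _   (Lit n)       = n
eval env (Add a b)     = eval env a + eval env b
eval env (Let x e1 e2) = eval ((x, eval env e1) : env) e2
eval env (Var x)       = maybe 0 id (lookup x env)   -- unbound variables default to 0

-- The "refactoring": replace every use of a let-bound variable by its
-- right-hand side and drop the binding.
inlineLet :: Expr -> Expr
inlineLet (Let x e1 e2) = subst x (inlineLet e1) (inlineLet e2)
inlineLet (Add a b)     = Add (inlineLet a) (inlineLet b)
inlineLet e             = e

subst :: String -> Expr -> Expr -> Expr
subst x e (Var y) | x == y = e
subst x e (Add a b)        = Add (subst x e a) (subst x e b)
subst x e (Let y e1 e2)
  | x == y    = Let y (subst x e e1) e2          -- inner binding shadows x
  | otherwise = Let y (subst x e e1) (subst x e e2)
subst _ _ e                = e

-- Generator for expressions over a fixed pool of variable names.
instance Arbitrary Expr where
  arbitrary = sized gen
    where
      vars  = ["x", "y", "z"]
      gen 0 = oneof [Lit <$> arbitrary, Var <$> elements vars]
      gen n = oneof
        [ Lit <$> arbitrary
        , Var <$> elements vars
        , Add <$> gen (n `div` 2) <*> gen (n `div` 2)
        , Let <$> elements vars <*> gen (n `div` 2) <*> gen (n `div` 2)
        ]

-- Behaviour preservation: the refactored program evaluates to the same value.
prop_inlineLetPreservesMeaning :: Expr -> Bool
prop_inlineLetPreservesMeaning e = eval [] e == eval [] (inlineLet e)

main :: IO ()
main = quickCheck prop_inlineLetPreservesMeaning

Running main checks the behaviour-preservation property on randomly generated programs and reports a counterexample if the refactoring ever changes the meaning.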
Abstract. It has been known for a while that program transformation techniques, in particular program specialization, can be used to prove properties of programs automatically. For example, if a program actually implements (in a given context of use) a constant function, a sufficiently powerful, semantics-preserving program transformation may reduce the program to a syntactically trivial "constant" program, pruning unreachable branches and thereby proving the property. The viability of such an approach to verification has been demonstrated in previous work, where it was applied to the verification of parameterized cache coherence protocols and Petri net models.
In this paper we further extend the method and present a case study on its application to the verification of a cryptographic protocol. The protocol is modeled by functional programs at different levels of abstraction, and verification via program specialization is done by using Turchin's supercompilation method.
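As a toy illustration of the "constant function" idea (assumed here for exposition; it is not the protocol model studied in the paper), consider a safety check that, in its context of use, can only return True. Driving and folding by a positive supercompiler can be expected to prune the False branch of even' and residualize the check into a program whose only possible answer is True:

-- Illustrative sketch only: a check that is constant (True) in its context of use.
data Nat = Z | S Nat

even' :: Nat -> Bool
even' Z         = True
even' (S Z)     = False
even' (S (S n)) = even' n

double :: Nat -> Nat
double Z     = Z
double (S n) = S (S (double n))

-- In this context of use the check can only succeed.
safe :: Nat -> Bool
safe n = even' (S (S (double n)))

-- Expected shape of the residual program after supercompilation: the
-- 'False' branch of even' has been pruned, so every answer is True,
-- which establishes the property.
safeResidual :: Nat -> Bool
safeResidual Z     = True
safeResidual (S m) = safeResidual m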
Abstract. We present a method for verifying partial correctness properties of imperative programs by using techniques based on the transformation of constraint logic programs (CLP). We consider: (i) imperative programs that manipulate integers and arrays, and (ii) first-order logic properties that define configurations of program executions. We use CLP as a metalanguage for representing imperative programs, their executions, and their properties. First, we encode the correctness of an imperative program, say Prog, as the negation of a predicate 'incorrect' defined by a CLP program T. By construction, 'incorrect' holds in the least model of T if and only if the execution of Prog from an initial configuration eventually halts in an error configuration. Then, we apply to program T a sequence of transformations that preserve its least model semantics. These transformations are based on well-known transformation rules, such as unfolding and folding, guided by suitable transformation strategies, such as specialization and generalization. The objective of the transformations is to derive a new CLP program TransfT where the predicate 'incorrect' is defined either by (i) the fact 'incorrect.' (and in this case Prog is incorrect), or by (ii) the empty set of clauses (and in this case Prog is correct). In the case where we derive a CLP program such that neither (i) nor (ii) holds, we iterate the transformation. Since the problem is undecidable, this process may not terminate. We show through examples that our method can be applied in a rather systematic way, and is amenable to automation by transferring to the field of program verification many techniques developed in the field of program transformation.
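To fix the shape of such an encoding, here is a schematic sketch (the predicate names initConf, tr, and errorConf are chosen for illustration; they stand for the initial configurations, the transition relation of Prog, and the error configurations, all expressed as constraints):
\[
\begin{array}{l}
\mathit{incorrect} \leftarrow \mathit{initConf}(X),\ \mathit{reach}(X,Y),\ \mathit{errorConf}(Y)\\
\mathit{reach}(X,X) \leftarrow \mathit{true}\\
\mathit{reach}(X,Z) \leftarrow \mathit{tr}(X,Y),\ \mathit{reach}(Y,Z)
\end{array}
\]
Unfolding, folding, specialization, and generalization are then applied to clauses of this kind, and the outcome is read off from the clauses that remain for 'incorrect'.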
Abstract. The verification of program transformation systems requires that we prove their termination.
For positive supercompilation, ensuring termination usually requires the memoisation of expressions
which are subsequently used to determine when to perform generalization and folding.
However, the choice of which expressions to memoise can greatly affect the results obtained.
Memoising too many expressions entails far more of the expensive checking for possible
generalization or folding. More new functions will also be created and generalization will
be performed more often, resulting in residual programs that are less improved. We would therefore
like to memoise as few expressions as possible, but this could lead to non-termination.
In this paper, we describe a simple pre-processing step which can be applied to programs
prior to transformation by positive supercompilation to ensure that any potentially infinite
sequence of transformation steps must include function unfolding. We prove, for programs
that have been pre-processed in this way, that it is only necessary to memoise expressions
immediately before function unfolding to ensure termination, and we demonstrate this on
a number of tricky examples.
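A minimal sketch of the memoisation point the abstract refers to (the term representation and the homeomorphic-embedding whistle below are common choices assumed for illustration, not the paper's formal development): expressions are memoised only immediately before a function call is unfolded, and the whistle is checked only against that memoised set.

data Term = Var String | Con String [Term] | App String [Term]  -- App: function call
  deriving (Eq, Show)

-- Pure homeomorphic embedding on terms (diving plus coupling).
embeds :: Term -> Term -> Bool
embeds s t = couple s t || dive s t
  where
    dive s' (Con _ ts) = any (embeds s') ts
    dive s' (App _ ts) = any (embeds s') ts
    dive _  _          = False
    couple (Var _)    (Var _)    = True
    couple (Con c ss) (Con d ts) = c == d && length ss == length ts
                                   && and (zipWith embeds ss ts)
    couple (App f ss) (App g ts) = f == g && length ss == length ts
                                   && and (zipWith embeds ss ts)
    couple _          _          = False

-- Decision taken immediately before unfolding a function call: fold or
-- generalise if some previously memoised term embeds into it, otherwise
-- memoise it and unfold.
data Step = FoldOrGeneralise Term   -- whistle blew against this memoised term
          | Unfold [Term]           -- continue driving with the (possibly extended) memo set

beforeUnfold :: [Term] -> Term -> Step
beforeUnfold memo t@(App _ _) =
  case [ m | m <- memo, m `embeds` t ] of
    (m:_) -> FoldOrGeneralise m
    []    -> Unfold (t : memo)
beforeUnfold memo _ = Unfold memo   -- not a call: nothing is memoised here

When FoldOrGeneralise is returned, the supercompiler would fold against or generalize with respect to the embedded memoised term; otherwise driving continues with the call unfolded.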
Abstract. The refinement-based approach to developing software is based on the
correct-by-construction paradigm, where software systems are constructed via the step-by-step refinement of an initial high-level specification into a final concrete specification. Proof obligations generated during this process are discharged to ensure the consistency between refinement levels and hence the system's overall correctness.

Here, we are concerned with the refinement of specifications using the Event B modelling language and its associated toolset, the Rodin platform. In particular, we focus on the final steps of the process, where the final concrete specification is transformed into an executable algorithm. The transformations involved are (a) the transformation from an Event B specification into a concrete recursive algorithm and (b) the transformation from the recursive algorithm into its equivalent iterative version. We prove both transformations correct and verify the correctness of the final code in a static program verification environment for C# programs, namely the Spec# programming system.
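As a schematic example of transformation (b) (illustrative only, written here as Haskell rather than as an Event B or Spec# artefact), a recursive algorithm is turned into its iterative, accumulator-passing equivalent, with the loop invariant being the kind of obligation one discharges to prove the transformation correct:

-- Recursive algorithm: sum of 0..n.
sumRec :: Int -> Int
sumRec 0 = 0
sumRec n = n + sumRec (n - 1)

-- Iterative equivalent: the call stack is replaced by an explicit
-- accumulator, i.e. the loop "while n > 0 { acc := acc + n; n := n - 1 }".
sumIter :: Int -> Int
sumIter n0 = go n0 0
  where
    go 0 acc = acc
    go n acc = go (n - 1) (acc + n)

-- Loop invariant (for n >= 0): go n acc == acc + sumRec n,
-- hence sumIter n == sumRec n.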
Abstract. The paper describes how to verify cryptographic protocols by means of a general-purpose program
transformation technique based on unfolding. The representation and analysis
of the protocols as prefix rewriting grammars are discussed. In this setting, the Higman and
Turchin embeddings on computational paths are considered, and a refinement of Turchin's
relation is presented that makes it possible to decide the empty word problem for
prefix rewriting grammars algorithmically.
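For reference, the Higman embedding on words over a finite alphabet is the scattered-subword relation, a well-quasi-order, so in any infinite sequence of words some earlier word embeds into a later one. A minimal sketch (illustrative only, not the refinement of Turchin's relation developed in the paper):

-- Higman embedding on words: u embeds into v iff u is a scattered subword of v.
higman :: Eq a => [a] -> [a] -> Bool
higman []     _  = True
higman _      [] = False
higman (x:xs) (y:ys)
  | x == y    = higman xs ys
  | otherwise = higman (x:xs) ys

For example, higman "ab" "acab" is True, while higman "ba" "ab" is False.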