We develop formal tools for programming distributed computation with effective security guarantees. Our goal is to enable programmers to express and prove high-level security properties with a reasonable amount of effort—sometimes automatically, sometimes with mechanical assistance—as part of the development process. These properties should hold in a hostile environment, with realistic (partial) trust assumptions on the principals and machines involved in the computation.
We are located in Orsay, within the Inria-Microsoft Research Joint Lab, with participants from the Programming Principles and Tools group at Microsoft Research Cambridge, the Moscova team at Inria Rocquencourt, and the Everest team at Inria Sophia-Antipolis. Please contact us for additional information, visits, internships, etc.
Over the last decade, as security expectations have evolved, the task of writing secure code has become more challenging and widespread. A large amount of code now has security implications, and programmers often struggle to analyze them. In principle, a security analysis should involve a precise mapping from security goals to enforcement mechanisms, but these high-level goals are often left informal, whereas lower-level enforcement mechanisms are largely hidden in library implementations.
Partly because their design predates security concerns, the abstractions embedded in programming languages are usually of little help for security. In a distributed system, for example, these abstractions are trustworthy only inasmuch as every machine involved in the computation “plays by the rules” and correctly implements them. This assumption is clearly unrealistic when some of these machines may be controlled by an adversary. Hence, for instance, a cautious programmer who relies on remote procedure calls should not reason about them as if they were local calls, and must instead examine the details of their implementation, including the network stack, the underlying protocols, and the system configuration. Moreover, functional demands such as flexibility and ease of deployment blur the boundary between applications and protocols: modern application code often needs to select protocols dynamically, or to change their configuration. Hence, even the experts who design the protocols may not be able to guarantee their security in advance, without knowledge of the application. In fact, most interesting high-level security properties (privacy, integrity, or secrecy invariants) necessarily depend on the whole application. As a result, those properties must somehow be exposed to the programmer, rather than hidden behind “transparent” abstractions.
Distributed applications can be structured as parties that exchange messages according to some pre-arranged communication patterns. These sessions (or contracts, or protocols) simplify distributed programming: when coding a role for a given session, each party just has to follow the intended message flow, under the assumption that the other parties are also compliant. In an adversarial setting, remote parties may not be trusted to play their role. Hence, defensive implementations also have to monitor one another, in order to detect any deviation from the assigned roles of a session. This task involves low-level coding below session abstractions, thus giving up most of their benefits.
We explore language-based support for sessions. We extend the ML language with session types that express flows of messages between roles, such that well-typed programs always play their roles. We compile session type declarations to cryptographic communication protocols that can shield programs from any low-level attempt by coalitions of remote peers to deviate from their roles. (…)
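To make this concrete, consider a toy two-role session in which a client sends a request and a web service answers with either a reply or a fault. The F# sketch below is illustrative only: the declaration syntax in the comment and all type and function names are invented, and the toy run function stands in for the compiled cryptographic proxies. It shows the kind of typed, continuation-style role interface such a compiler can generate, where well-typed role code cannot send or accept a message outside the declared flow.

    // Hypothetical session declaration (syntax invented for this example):
    //
    //   session Quote =
    //     Client -> Ws : Request;
    //     ( Ws -> Client : Reply  |  Ws -> Client : Fault )

    // One constructor per message the Ws role may send back.
    type reply = Reply of string | Fault of string

    // Handlers for the Client role: one initial send, one branch per answer.
    type clientRole =
      { request : string
        onReply : string -> unit
        onFault : string -> unit }

    // The Ws role maps the received request to one of its two permitted answers.
    type wsRole = { onRequest : string -> reply }

    // Toy runtime wiring both roles locally; the compiled code would instead
    // run each role on its own host, exchanging cryptographically protected
    // messages so that remote peers cannot deviate from the session.
    let run (c: clientRole) (w: wsRole) =
      match w.onRequest c.request with
      | Reply r -> c.onReply r
      | Fault e -> c.onFault e

    run { request = "price?"
          onReply = printfn "quoted: %s"
          onFault = printfn "failed: %s" }
        { onRequest = fun _ -> Reply "42 euros" }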
We consider cryptographic enforcement mechanisms for imperative programs with untrusted components. In source programs, security depends on an abstract information-flow policy for accessing the shared memory. In their implementations, shared memory is unprotected and security depends instead on encryption and signing. We build compilers from source programs and policies to cryptographic implementations. To establish their correctness, we develop type systems that enforce a correct usage of cryptographic primitives against active adversaries, under standard (probabilistic, polytime) security assumptions. (…)
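As a rough illustration of the target-level discipline (a minimal F# sketch, not the compilers themselves; the policy and all names are invented): a source write to a high-confidentiality, high-integrity variable whose contents live in untrusted shared memory becomes encrypt-then-MAC, and the corresponding read verifies the MAC before decrypting.

    open System.Security.Cryptography
    open System.Text

    // Keys held by the trusted components (freshly generated here).
    let aes = Aes.Create()        // confidentiality
    let mac = new HMACSHA256()    // integrity (fresh random key)

    // Untrusted shared memory: the adversary may read or overwrite this cell.
    let mutable cell : byte[] * byte[] = [||], [||]

    // Compiled write: encrypt, then MAC the ciphertext.
    let writeHigh (x: string) =
      let p = Encoding.UTF8.GetBytes x
      let c = aes.CreateEncryptor().TransformFinalBlock(p, 0, p.Length)
      cell <- (c, mac.ComputeHash c)

    // Compiled read: verify the MAC, then decrypt. (A production
    // implementation would use a constant-time comparison.)
    let readHigh () =
      let c, tag = cell
      if mac.ComputeHash c <> tag then failwith "integrity violation"
      let p = aes.CreateDecryptor().TransformFinalBlock(c, 0, c.Length)
      Encoding.UTF8.GetString p

    writeHigh "balance := 42"
    printfn "%s" (readHigh ())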
Many protocols rely on secure audit logs to store traces of protocol runs. These logs can be used a posteriori to verify the compliance of each principal with its role: principals who attempt non-compliant actions will be blamed using the logged evidence. Secure logging is widely used in practice, and much research effort has been devoted to techniques for implementing logs securely and efficiently. Still, which data should be logged, and why? (…)
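A minimal sketch of the basic mechanism, assuming each principal signs the entries it contributes (all names below are illustrative, not those of any particular system): the log itself need not be trusted to assign blame, since a valid signature on a non-compliant action is non-repudiable evidence against its author.

    open System.Security.Cryptography
    open System.Text

    // An illustrative log entry: an action and its author's signature.
    type entry = { action : string; signature : byte[] }
    let log = ResizeArray<entry>()

    let alice = RSA.Create()   // Alice's key pair; only she can sign.

    let append (key: RSA) (action: string) =
      let s = key.SignData(Encoding.UTF8.GetBytes action,
                           HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1)
      log.Add { action = action; signature = s }

    // A posteriori audit: check each entry against the claimed author's
    // public key; valid entries are evidence that she performed the action.
    let audit (pub: RSA) =
      for e in log do
        let ok = pub.VerifyData(Encoding.UTF8.GetBytes e.action, e.signature,
                                HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1)
        printfn "%s: %s" (if ok then "attributable" else "rejected") e.action

    append alice "Alice: debit 10, credit Bob 10"
    audit alice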
There has been much progress in formal methods and tools for cryptography, enabling, in principle, the automated verification of complex security protocols. In practice, however, these methods and tools remain difficult to apply. Indeed, the verification of a system that uses a given protocol involves more than the cryptographic verification of an abstract model. For these reasons, we are interested in the integration of protocol verifiers into the arsenal of software testing and verification tools.
In recent work, Bhargavan et al. advocate the automatic extraction and verification of cryptographic models from executable code; they verify protocol implementations written in F#. Their approach relies on sharing as much code as possible between implementations and models. We explore a similar approach to extend the benefits of computational cryptographic verification to protocol implementations. To this end, we develop a tool for extracting cryptographic models from executable code written in F#, then rely on CryptoVerif to obtain computational verification results for that code.
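The following F# sketch illustrates the code-sharing idea (the interface and names are hypothetical, not those of the actual tool): the protocol is written once against a small cryptographic interface; linking it with a concrete module yields the executable, while extraction substitutes a symbolic module whose operations become function symbols of the CryptoVerif model.

    // A hypothetical cryptographic interface shared by both builds.
    type Crypto =
      abstract Hmac : key: byte[] * msg: byte[] -> byte[]

    // Concrete module, linked in when the protocol actually runs.
    let concrete =
      { new Crypto with
          member _.Hmac(key, msg) =
            use h = new System.Security.Cryptography.HMACSHA256(key)
            h.ComputeHash msg }

    // Protocol code, written once against the interface; this is the part
    // shared between the executable and the extracted model, in which Hmac
    // is reinterpreted as a symbolic function.
    let makeAuthenticatedMsg (c: Crypto) (key: byte[]) (payload: byte[]) =
      payload, c.Hmac(key, payload)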
As an extended case study, we verify implementations of TLS, one of the most widely deployed communication protocols.
Provable security, whose origins can be traced back to the pioneering work of Goldwasser and Micali, advocates a mathematical approach based on complexity theory, in which the goals and requirements of cryptosystems are specified precisely, and the security proof is carried out rigorously, making all underlying assumptions explicit. In a typical provable-security setting, one reasons about effective adversaries, modelled as arbitrary probabilistic polynomial-time Turing machines, and about their probability of thwarting a security objective, e.g., secrecy. In a similar fashion, security assumptions about cryptographic primitives bound the probability that polynomial-time algorithms solve difficult problems, e.g., computing discrete logarithms. The security proof proceeds by reduction: one shows that the existence of an effective adversary with a certain advantage in breaking security implies the existence of an effective algorithm contradicting the security assumptions.
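For instance (notation ours, for illustration), the IND-CPA advantage of an adversary A against an encryption scheme E is defined by the first equation below, and a reduction to, say, the Decisional Diffie-Hellman (DDH) assumption over a group G exhibits an algorithm B of comparable running time satisfying the second:

    \[
      \mathbf{Adv}^{\mathrm{ind\mbox{-}cpa}}_{\mathcal{E}}(\mathcal{A})
        \;=\; \Bigl|\,\Pr[b' = b] - \tfrac{1}{2}\,\Bigr|
      \qquad\qquad
      \mathbf{Adv}^{\mathrm{ind\mbox{-}cpa}}_{\mathcal{E}}(\mathcal{A})
        \;\le\; \mathbf{Adv}^{\mathrm{ddh}}_{\mathbb{G}}(\mathcal{B})
    \]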
The game-playing technique is a general method to structure and unify cryptographic proofs. Its central idea is to view the interaction between an adversary and the cryptosystem as a game, and to study game transformations that preserve security, making it possible to transform an initial game, which explicitly encodes a security property, into a game where the advantage of the adversary is easy to bound. Code-based techniques (CBT) are an instance of the game-playing technique that has been used successfully to verify state-of-the-art cryptographic schemes; see, e.g., Bellare and Rogaway. Their distinguishing feature is a code-centric view of games, security hypotheses, and computational assumptions, all expressed as (probabilistic, imperative, polynomial-time) programs. Under this view, game transformations become program transformations, and can be justified rigorously by semantic means. Although CBT proofs are easier to verify and more amenable to machine-checking, they go far beyond established theories of program equivalence and exhibit a rich and broad set of reasoning principles drawing from program verification, algebraic reasoning, and probability and complexity theory. Thus, despite the beneficial structure imposed by their underlying framework, CBT proofs remain inherently difficult to verify. In an inspiring paper, Halevi argues that formal verification techniques are mandatory to improve trust in game-based proofs, going as far as describing an interface to a tool for computer-assisted code-based proofs. As a first step towards the completion of Halevi’s programme, we develop CertiCrypt, a framework to construct machine-checked code-based proofs in the Coq proof assistant. (…)
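A typical reasoning principle in this style is the fundamental lemma of game-playing (due to Bellare and Rogaway): if two games G1 and G2 are syntactically identical until a flag bad is set, then the difference in the probability of any event E between the two games is bounded by the probability of bad. In LaTeX notation (ours, for illustration):

    \[
      \bigl|\,\Pr_{G_1}[E] \;-\; \Pr_{G_2}[E]\,\bigr| \;\le\; \Pr_{G_2}[\mathsf{bad}]
    \]

A code-based proof then proceeds as a sequence of such transformations, each either preserving the adversary's advantage exactly or accounting for the change with a probability bound of this kind.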