ARA SSIA FormaCrypt: Detailed description

The verification of cryptographic protocols has been a very active research area in recent years. Most works on this topic use one of two historically independent approaches: the formal (or Dolev-Yao) approach [21], in which cryptographic primitives are treated as idealized black boxes and messages as symbolic terms, and the computational approach, in which messages are bit-strings and adversaries are arbitrary probabilistic polynomial-time machines.

The FormaCrypt project focuses on bridging the gap between the two approaches.

This path promises tremendous benefits: protocols can be analyzed and proved secure using the simpler, automated methods of the formal approach, while the resulting security guarantees hold in the more comprehensive computational model. It also raises significant challenges: the two approaches have been developed by largely independent communities, each with its own models and notions of security, and reconciling these models presents considerable difficulties.

In order to bridge the gap between the two points of view on cryptography, we plan to use several approaches, drawing to varying degrees on the computational and the formal worlds: the direct approach will formalize and automate computational proofs as cryptographers usually carry them out (Theme 1); at the other extreme, the modular approach will prove the computational soundness of unmodified formal proofs (Theme 3); in between, we will adapt a formal approach to computationally sound proofs. More precisely, we will design a computationally sound logic for proving protocols (Theme 2).

All these approaches will rely on formalizations of cryptographic protocols in process calculi that are essentially equivalent to each other, even if they may differ in details for practical reasons. (The most appropriate language may depend on the classes of protocols and properties considered.)

Theme 1: The direct approach. A computationally sound prover

In the direct approach, we will formalize the computational proofs and, relying on this formalization, build a specialized automatic prover for cryptographic protocols that is sound in the computational model.

Computational security proofs

The basic idea of the computational approach to the security of cryptographic schemes is to show that breaking a specific security notion is at least as hard as breaking a well-known computational problem, such as integer factoring, the RSA problem [34], or the Diffie-Hellman problem [20]. These proofs are carried out in the framework of complexity theory, by providing reductions between the two problems: an instance of the well-known computational problem can be reduced to an attack on the scheme. For a long time, therefore, a security proof in cryptography consisted of the description of a reduction, followed by informal arguments explaining why the reduction works.
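
Quantitatively, such a reduction typically yields a concrete bound of the following generic shape (the notation below is generic and not tied to any particular scheme):

\[
\mathrm{Adv}^{\mathrm{break}}_{\mathrm{scheme}}(t) \;\le\; c \cdot \mathrm{Adv}^{P}(t'), \qquad t' \approx t + t_{\mathrm{red}},
\]

meaning that any adversary attacking the scheme within time t can be turned into an algorithm solving the underlying problem P within a comparable time t', up to a loss factor c (often depending on the number of oracle queries) and the overhead t_red of the reduction itself.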

This kind of proof technique is very effective when one wants to prove the security of an asymmetric encryption scheme [23] or a signature scheme [24,25], but it becomes much more intricate when interactive protocols are under consideration (such as key exchange schemes) or when several computational assumptions are needed (the scheme is secure if the RSA problem is hard and the family (f_k) is a family of pseudo-random functions, etc.).

A recent proof technique

Recently, proofs in computational cryptography have applied a new technique introduced by Shoup [37,36,38]: the use of a sequence of games. Each game defines the interaction between the participants of a protocol and an adversary. The first game is the real attack: the adversary interacts with all the participants and has access to several oracles that formalize the various kinds of information he may obtain, according to the security model. The rest of the proof then aims at showing that, in this game, the probability that the adversary obtains a certain piece of information (for example, distinguishes the key exchanged between honest participants of the protocol from a random key) is negligible. More precisely, we consider a specific event that occurs when the adversary has broken the scheme, and we want to bound the probability of this event in the probability space defined by all the random coins of the adversary, the participants, and the oracles.

The following games are obtained by successive transformations of the first game, chosen so that the difference in probabilities between consecutive games can be easily bounded: the probability of our crucial event is thus ``almost'' the same in all the games. The transformations are also chosen so that the probability of this event in the last game is obvious from the form of that game. The probability corresponding to the initial game can then be easily computed.
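
The probability bookkeeping behind such a sequence of games G_0, ..., G_n rests on the following elementary bound (a standard fact, recalled here for concreteness):

\[
\bigl|\Pr[E_0] - \Pr[E_n]\bigr| \;\le\; \sum_{i=0}^{n-1} \bigl|\Pr[E_i] - \Pr[E_{i+1}]\bigr|,
\]

where E_i denotes the crucial event in game G_i. If each term of the sum is bounded, either by a negligible quantity or by the advantage of some adversary against a cryptographic primitive, and Pr[E_n] is obvious from the form of the last game, then a bound on Pr[E_0], the probability of the event in the real attack, follows immediately.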

Formalizing security proofs

The various games are usually described informally in English, together with arguments establishing upper bounds on the probabilities involved. Our goal is to build a formal framework in which these proofs can be described and verified. Since games correspond to processes in the formal model, they will be formalized in a process calculus. The spi calculus [3] and the applied pi calculus [2] (which are often used to describe protocols in the formal approach), as well as the probabilistic polynomial-time calculus of [29,30], will be obvious sources of inspiration for this process calculus. However, some adaptations will be necessary. For example, we plan to bound the number of sessions of the protocol by a variable N, and to obtain probabilities expressed as functions of N. In contrast, [29,30] bound the number of sessions by a polynomial and obtain asymptotic security results. The drawback of such results is that they prove security for sufficiently long keys, but cannot be used to evaluate the minimal key length that guarantees a given level of security. In the formal world, the number of sessions is either bounded by a small constant (typically 1, 2, or 3), in which case the protocol is proved secure only when the number of sessions stays below that bound, or unbounded, which is impossible in the computational approach: if runs can be unbounded, the adversary always ends up finding the secrets by exhaustive exploration. We also plan to include additional constructs in our calculus, for example to simulate lists, which are often used in computational proofs to represent the global state of a system of N sessions.

We will of course define a semantics for this calculus corresponding to the computational model: this semantics will interpret messages as bit-strings and nonces as random numbers, while the primitives will have their actual (probabilistic) semantics.
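
As a very rough illustration of what such a semantics assigns to the basic constructs (the code below is a hypothetical Python sketch, not part of any planned tool, and the handling of the security parameter is deliberately simplified), nonces become uniformly random bit-strings whose length is fixed by the security parameter, and message operations become concrete operations on bit-strings:

import secrets

# Hypothetical sketch of a computational interpretation of a few constructs.
# The security parameter eta fixes the length (in bits) of fresh nonces.

def new_nonce(eta: int) -> bytes:
    """A nonce is interpreted as a uniformly random bit-string of eta bits."""
    return secrets.token_bytes(eta // 8)

def pair(m1: bytes, m2: bytes) -> bytes:
    """Pairing is an injective encoding of two bit-strings;
    here, a simple length-prefixed concatenation."""
    return len(m1).to_bytes(4, "big") + m1 + m2

def split(m: bytes) -> tuple[bytes, bytes]:
    """Projection: inverse of pair."""
    n = int.from_bytes(m[:4], "big")
    return m[4:4 + n], m[4 + n:]

# Two sessions draw independent nonces; pairing and projection are concrete.
eta = 128
nonce_a, nonce_b = new_nonce(eta), new_nonce(eta)
assert split(pair(nonce_a, nonce_b)) == (nonce_a, nonce_b)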

The direct approach

In the direct approach, our goal is to formalize directly the computational proofs by sequences of games. To this end, we will formalize the allowed game transformations and the conditions under which they can be applied. These transformations come from security properties of the cryptographic primitives, from general properties of probabilities, or from allowed syntactic transformations of the process calculus. We will prove these transformations correct with respect to the computational semantics of the calculus. This formalization will provide the necessary theoretical background for implementing a specialized prover, designed for proving cryptographic protocols in the computational model.

This tool will take as input a description of the protocol in our process calculus and an indication of the security property to prove. The prover will then automatically search for a proof of the desired property. To do this, it will try to apply the game transformations associated with the definition of each cryptographic primitive used in the protocol. (These are the main transformation steps.) When these transformations are not directly applicable, it will determine and perform the syntactic transformations needed to make one of them applicable. It will then compute the new game after transformation, together with the relation between probabilities in the two games. The process is then repeated on the obtained game, until the game is simple enough for the system to compute the desired probability directly, at which point the desired security property is proved.
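
The overall proof search can be pictured schematically as follows (a purely illustrative Python sketch; the function and parameter names are hypothetical and do not describe an existing implementation):

def prove(game, crypto_steps, syntactic_steps, is_simple, final_probability):
    """Schematic proof search by successive game transformations (illustrative only).

    crypto_steps: objects with applicable(game) and apply(game) -> (new_game, delta),
    where delta bounds the probability difference introduced by the step.
    syntactic_steps: objects with applicable(game) and apply(game) -> new_game,
    leaving the probability of the target event unchanged.
    is_simple(game) says when to stop; final_probability(game) computes the
    probability of the target event in a simple enough game.
    Returns an upper bound on that probability in the initial game, or None.
    """
    bound = 0.0  # would be a symbolic expression in an actual prover
    while not is_simple(game):
        step = next((t for t in crypto_steps if t.applicable(game)), None)
        if step is not None:
            game, delta = step.apply(game)
            bound += delta
            continue
        # No cryptographic transformation applies: look for a syntactic rewrite
        # that may make one applicable on the transformed game.
        rewrite = next((s for s in syntactic_steps if s.applicable(game)), None)
        if rewrite is None:
            return None  # proof search is stuck
        game = rewrite.apply(game)
    return bound + final_probability(game)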

Theme 2: A computationally sound logic

The formal approach, which uses a highly idealized representation of cryptographic primitives, has been a successful basis for formal logics and automated tools. Our second theme aims at adapting such logics to the computational approach, which yields more insight into the strengths and vulnerabilities of protocols but is more difficult to apply, since it involves explicit reasoning about probability and computational complexity. The purpose of this theme is to show that formal reasoning, based on an abstract treatment of cryptographic primitives, can be used to reason about probabilistic polynomial-time protocols in the face of probabilistic polynomial-time attacks.

The method

This approach aims at describing a cryptographically sound formal logic for proving protocol security properties without explicitly reasoning about probability, asymptotic complexity, or the actions of a malicious attacker. A fragment of this logic has already been defined in [19], showing that the method works and is reasonable for usual properties. However, many difficult extensions of this first fragment remain to be done. The approach rests on a new probabilistic, polynomial-time semantics for an existing protocol security logic, replacing an earlier semantics that uses nondeterministic formal evaluation. Naturally, the advantage of reusing an existing protocol security logic is that any existing security proof in this logic with respect to the formal semantics becomes a computational proof with respect to the (new) computationally sound semantics, with only minor corrections. While the basic form of the protocol logic remains unchanged from previous work, there are interesting technical problems involving the difference between efficiently recognizing and efficiently producing a value, and involving a reinterpretation of standard logical connectives that seems necessary to support certain forms of reasoning. Such distinctions were not available in the coarser formal model, and they raise some apparently fundamental issues about the inherent logic of asymptotic probabilistic properties.

The protocol composition logic (PCL)

The most natural way to represent a protocol specification for use with this method is the simple ``protocol programming language'' first defined in [22,17,18]. It represents a protocol by a set of roles, such as ``Initiator'', ``Responder'', or ``Server'', each specifying a sequence of actions to be executed by an honest participant. The set of actions must be expressive enough to allow us to encode any usual protocol. In particular, we need actions to create a nonce, encrypt or decrypt a term, match a term against a given pattern, and send or receive a message. More actions might also be needed to model some specific protocols (signature-based actions, hashing, among others).
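
Purely as an illustration (the concrete syntax below is a hypothetical Python rendering of such a role, not the actual notation of [22,17,18]), an initiator role could be written as a sequence of atomic actions:

# Hypothetical rendering of an initiator role as a list of tagged actions.
initiator_role = [
    ("new", "n"),                         # create a fresh nonce n
    ("encrypt", "c", "(A, n)", "pk(B)"),  # c := encryption of (A, n) under B's public key
    ("send", "c"),                        # send c on the (attacker-controlled) network
    ("receive", "x"),                     # receive some message into x
    ("decrypt", "y", "x", "sk(A)"),       # y := decryption of x under A's private key
    ("match", "y", "(n, B)"),             # check that y has the expected shape
]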

The Protocol Composition Logic (PCL) [6,17,18,22] is a logic designed to express security properties of cryptographic protocols. It consists of both logical formulas and modal formulas. Modal formulas use a modal operator similar to that of Floyd-Hoare logic. Intuitively, the formula psi [P]_X phi means that if psi is true at some point in the execution of a protocol (in the presence of a malicious attacker), then phi will be true after agent X performs the sequence P of actions. The pre- and post-conditions are formulas; they may describe actions taken by various principals and characterize the information that is available to, or hidden from, them. For example, the pre- or post-condition

psi = for all x.Fresh(X,x) => (for all Y.X!=Y => not Possess(Y,x))

specifies that for any fresh nonce x generated by participant X, no participant Y different from X knows x. A nonce is fresh when it has been generated but not yet sent. In a (computational) probabilistic semantics of this formula, not Possess(Y,x) becomes: Y knows x with only negligible probability.

Moreover, it seems that we should not limit ourselves to the predicates already defined in the PCL for its formal semantics: there are predicates which had no meaning in the formal world but are very useful in the computational one. For example, it would be very helpful to have a specific predicate Indist(Y,x) saying that Y cannot distinguish the message x from a random bit-string with probability significantly higher than one half. Such operators introduce a finer granularity into our model than we had in the formal world.
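
The intended computational content of such a predicate would presumably be the standard notion of indistinguishability: writing A for any probabilistic polynomial-time distinguisher available to Y, Indist(Y,x) should hold when

\[
\bigl|\Pr[A(x) = 1] - \Pr[A(r) = 1]\bigr| \;\le\; \varepsilon(\eta)
\]

for some negligible function epsilon of the security parameter eta, where r is a uniformly random bit-string of the same length as x; equivalently, A guesses which of x and r it was given with probability at most 1/2 + epsilon(eta)/2.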

A probabilistic polynomial-time semantics

The semantics we want to explore recasts the methods of Micciancio and Warinschi [32] in a logical setting, and reflects accepted modeling approaches used in the field of cryptography, particularly [10,35]. The central organizing idea is to interpret formulas as operators on probability distributions on traces. Informally, representing a probability distribution by a set of equi-probable traces (each tagged by the random sequence used to produce it), the meaning of a formula phi on a set T of traces is the subset T' of T in which phi holds. This interpretation yields a probability: the probability that phi holds is the ratio |T'|/|T|. Conjunction and disjunction are simply intersection and union. There are several possible interpretations for implication, and it is not clear at this point which will prove most fruitful in the long run. For example, we can interpret phi => psi as the union of not(phi) and the composition of psi with phi; the latter is also the conditional probability of psi given phi. This interpretation supports a soundness proof for a sizable fragment of the protocol logic, and resembles the probabilistic interpretation of implication in [33]. Since the logic does not mention probability explicitly, the idea is to consider a formula ``true'' if it holds with asymptotically overwhelming probability.
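
The following toy sketch (hypothetical Python, with formulas represented simply as Boolean predicates over individual traces) makes this interpretation concrete, including the two readings of implication discussed above:

from typing import Callable, Iterable

Trace = tuple                     # a trace, tagged by the coins that produced it
Formula = Callable[[Trace], bool]

def meaning(phi: Formula, traces: Iterable[Trace]) -> list[Trace]:
    """Subset T' of the equi-probable traces T on which phi holds."""
    return [t for t in traces if phi(t)]

def probability(phi: Formula, traces: list[Trace]) -> float:
    """Probability that phi holds: |T'| / |T|."""
    return len(meaning(phi, traces)) / len(traces)

def conj(phi: Formula, psi: Formula) -> Formula:     # conjunction = intersection
    return lambda t: phi(t) and psi(t)

def disj(phi: Formula, psi: Formula) -> Formula:     # disjunction = union
    return lambda t: phi(t) or psi(t)

def implies(phi: Formula, psi: Formula) -> Formula:
    """One reading: the union of not(phi) with the composition of psi and phi."""
    return lambda t: (not phi(t)) or psi(t)

def conditional(psi: Formula, phi: Formula, traces: list[Trace]) -> float:
    """The other reading: the conditional probability of psi given phi."""
    restricted = meaning(phi, traces)
    return probability(psi, restricted) if restricted else 1.0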

While this approach seems promising and has already produced some good results, various issues remain to be settled and extensions to be included. For example, it remains to be decided how best to model a predicate like Indist(Y,x), i.e. how to make it fully coherent with the other predicates. The problem is to design a generic way to combine predicates defined with different probability thresholds (1/2 + epsilon for Indist and 1 - epsilon for Possess). If we can define a generic framework solving this issue, we will be able to combine any kind of computational predicates in this logic. Along the same lines, it also remains to be decided which new computational predicates we can, or should, include in the logic. Moreover, any new predicate usually comes with a set of associated axioms or rules that also need to be interpreted properly in the probabilistic polynomial-time semantics. Another problem is to analyze precisely why we are able to define several different implication operators in the probabilistic semantics of PCL, and in which cases each of them should, or should not, be used. In the end, we expect to obtain a very useful, flexible, and direct method for proving the security of protocols in a computationally sound logic, as easily as proving their security with respect to the previous, formal semantics.

Theme 3: The modular approach

In the modular approach, our goal is to justify formal notions of security from a computational point of view. This has two main benefits. First, it makes it possible to directly reuse existing formal tools or standard formal techniques in order to obtain computationally valid proofs of security. In addition, it helps to understand which implicit cryptographic hypotheses are made in formal models; in particular, what is the computational meaning of ``perfect cryptography''?

We call this approach modular because it relies on a computational soundness theorem, proved once and for all, and on specific formal analyses for each protocol. The computational security proof of a protocol can then be achieved modularly: one only has to prove that the idealized protocol satisfies the desired property in the formal model and that the cryptographic primitives satisfy the required computational security properties.

Theme 3.1: Justifying formal models

Existing works on modular approaches in the active case, such as [7,32,8,27,16], have so far concentrated on trace properties (e.g. authentication) and, more recently and in a limited way, on secrecy [8,16]. In this part of the project, we plan to consider security properties and cryptographic primitives that have not yet been studied.

Regarding security properties, we plan to consider branching properties such as fairness, which are relevant for the analysis of branching protocols (intuitively, protocols in which honest participants can make choices, such as contract-signing protocols). Our goal is to obtain soundness results for these properties similar to those for trace properties, namely: if the property holds in the formal world, then its computational counterpart holds for the concrete protocol with overwhelming probability.

We also plan to study secrecy properties further. Indeed, [8,16] relate formal and computational secrecy only for nonces. We wish to extend these results to other primitives, in particular keys. This cannot be a trivial extension of previous work since, as already noticed by D. Pointcheval, defining the secrecy of a key requires changing the adversary's game. In addition, the fact that secrecy on traces implies indistinguishability seems to rely strongly on the particular cryptographic primitives (non-deterministic encryption, signatures). For more general cryptographic primitives, we will rely on other formal definitions of secrecy, such as observational equivalence, which relates to Theme 3.3.

Regarding cryptographic primitives, we plan to consider more general primitives such as symmetric encryption, hash functions, and MACs. Moreover, even for asymmetric encryption and signatures (for which soundness results have already been established), one might need to consider specialized primitives that do not satisfy the strong cryptographic hypotheses required by such soundness theorems. For example, one might need to consider encryption primitives that are only IND-CPA. For electronic voting [28], the signature has special properties (for instance, one can unblind a blind signature). We plan to study the soundness of such particular primitives. This might require adding equational theories to the formal model, which will also be studied in Theme 3.2.

Theme 3.2: Equational theories

Previous computational soundness results rely on formal models that use free (term) algebras. Unfortunately, this is not sufficient to model cryptographic primitives that cannot be split into constructors, used for building messages (e.g. encryption, pairing), and destructors, used for decomposing them (e.g. decryption, projection). This is why the most recent formal models, such as [2,13,1,12,14], rely on equational theories and their corresponding initial algebras.

Our goal will be to exhibit sufficient conditions on protocols and their implementations that ensure the computational soundness of formal models based on equational theories. Relevant primitives that we plan to study include XOR, modular (Diffie-Hellman) exponentiation, and CBC encryption.
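
For instance, the equational theories for exclusive or and modular exponentiation are standard; in the usual notation they include:

\[
x \oplus (y \oplus z) = (x \oplus y) \oplus z, \qquad
x \oplus y = y \oplus x, \qquad
x \oplus x = 0, \qquad
x \oplus 0 = x,
\]
\[
\mathsf{exp}(\mathsf{exp}(g, x), y) = \mathsf{exp}(\mathsf{exp}(g, y), x).
\]

The difficulty addressed in this theme is not writing down such equations, but ensuring that the formal models built on them are computationally sound with respect to concrete, probabilistic implementations.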

For each of these primitives (together with pairing and projections), a number of issues will be successively considered.

To answer these questions, we will rely in particular on the recent work of Baudet et al. [9] concerning passive adversaries. This work provides a general criterion for the computational soundness of static equivalence and applies it to two example theories, modeling XOR and ciphers. A very satisfactory way to achieve our goals would be to generalize Baudet et al.'s criterion to active adversaries and then show how to instantiate it. For that purpose, we will also study the possibility of using composition theorems to deduce such a criterion for a given set of primitives from its validity on smaller sets of primitives.

Theme 3.3: Application to guessing attacks

Guessing attacks occur in cryptography when an attacker is able to recover the value of a secret by trying every possible value for it. Secrets that are vulnerable to guessing attacks---sometimes called weak secrets---are those whose entropy is low enough for an exhaustive search to be practical. This is typically the case for passwords chosen by regular (human) users.
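
A minimal sketch of an off-line guessing attack (hypothetical Python; the ``protocol'' fragment is deliberately naive and serves only to illustrate the phenomenon) shows why low-entropy secrets are problematic: any message that lets the attacker verify a candidate password allows an exhaustive search over a dictionary.

import hashlib

def response(password: str, challenge: bytes) -> bytes:
    """Naive authenticator: hash of the password concatenated with a challenge."""
    return hashlib.sha256(password.encode() + challenge).digest()

def offline_guessing_attack(challenge: bytes, observed: bytes, dictionary):
    """Try every candidate password against the observed (challenge, response)."""
    for candidate in dictionary:
        if response(candidate, challenge) == observed:
            return candidate          # the weak secret is recovered
    return None

# A small dictionary already suffices when users choose predictable passwords.
dictionary = ["123456", "password", "qwerty", "letmein"]
challenge = b"\x00" * 16
observed = response("letmein", challenge)
assert offline_guessing_attack(challenge, observed, dictionary) == "letmein"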

The security of protocols against guessing attacks has received a lot of attention since the first attacks were discovered (see e.g. Gong [26]). Various formalizations of guessing attacks have been proposed since the seminal work of Lowe [31]. However, except for the very recent definition of Abadi and Warinschi [5], none has yet received a computational justification.

Abadi and Warinschi [5] define a notion of guessing attacks based on patterns, in the spirit of [4], for passive adversaries. Our first goal in this theme will be to compare the definition of Abadi and Warinschi [5] with that of Corin et al. [15], based on static equivalence. Specifically, we will look for a specialized equational theory such that the corresponding notion of static equivalence (together with [15]) leads to the same notion of guessing attacks as Abadi and Warinschi's.

In other words, the point of view we will adopt is the following: rather than proposing new (somewhat arbitrary) security models to take weak secrets into account, we wish to introduce a dedicated equational theory able to model practical implementations of protocols that involve weak secrets. Then, notably for the active case, we will obtain security against guessing attacks as a particular case of equivalence between processes. This theme is thus closely related to Theme 3.2, and we expect to reuse a number of the techniques mentioned above.

Theme 4: Case studies and comparison of the various approaches

We plan to apply the techniques presented above to various protocols, from protocols of the literature (such as the Otway-Rees, Yahalom, and Needham-Schroeder protocols) to more complex, realistic protocols, such as SSH (secure shell) and IKEv2 (the new key establishment protocol for IPsec).

We will obviously compare the results obtained by the different approaches, to determine the advantages and difficulties for each of them. All these approaches will provide proofs in the computational model, either directly, or via a proof in the formal model. (If the modular approach is successful, we may modify a Dolev-Yao-style protocol verification tool, so that it checks the additional hypotheses needed to guarantee the soundness of the proof in the computational model, and thus provides a proof in the latter model.)

References

1
M. Abadi and V. Cortier.
Deciding knowledge in security protocols under (many more) equational theories.
In Proc. 18th IEEE Computer Security Foundations Workshop (CSFW'05), pages 62-76, Aix-en-Provence, France, June 2005. IEEE Comp. Soc. Press.

2
M. Abadi and C. Fournet.
Mobile values, new names, and secure communication.
In 28th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL'01), pages 104-115, London, United Kingdom, Jan. 2001. ACM Press.

3
M. Abadi and A. D. Gordon.
A calculus for cryptographic protocols: The spi calculus.
Information and Computation, 148(1):1-70, Jan. 1999.
An extended version appeared as Digital Equipment Corporation Systems Research Center report No. 149, January 1998.

4
M. Abadi and P. Rogaway.
Reconciling two views of cryptography (the computational soundness of formal encryption).
Journal of Cryptology, 15(2):103-127, 2002.

5
M. Abadi and B. Warinschi.
Password-based encryption analyzed.
In L. Caires and L. Monteiro, editors, Proceedings of the 32nd International Colloquium on Automata, Languages and Programming (ICALP'05), volume 3580 of Lecture Notes in Computer Science, Lisboa, Portugal, July 2005. Springer.
To appear.

6
M. Backes, A. Datta, A. Derek, J. Mitchell, and M. Turuani.
Compositional analysis of contract signing protocols.
In 18th IEEE Computer Security Foundations Workshop, 2005.
To appear.

7
M. Backes and B. Pfitzmann.
Symmetric encryption in a simulatable Dolev-Yao style cryptographic library.
In 17th IEEE Computer Security Foundations Workshop, Pacific Grove, CA, June 2004. IEEE.

8
M. Backes and B. Pfitzmann.
Relating symbolic and cryptographic secrecy.
In 26th IEEE Symposium on Security and Privacy, pages 171-182, Oakland, CA, May 2005. IEEE.

9
M. Baudet, V. Cortier, and S. Kremer.
Computationally sound implementations of equational theories against passive adversaries.
In L. Caires and L. Monteiro, editors, Proceedings of the 32nd International Colloquium on Automata, Languages and Programming (ICALP'05), volume 3580 of Lecture Notes in Computer Science, pages 652-663, Lisboa, Portugal, July 2005. Springer.
To appear.

10
M. Bellare and P. Rogaway.
Entity authentication and key distribution.
In 13th Annual International Cryptology Conference on Advances in Cryptology (CRYPTO '93), pages 232-249. Springer-Verlag, 1994.

11
B. Blanchet.
ProVerif 1.10 (automatic protocol verifier).
Available at http://proverif.inria.fr, Feb. 2004.

12
B. Blanchet, M. Abadi, and C. Fournet.
Automated verification of selected equivalences for security protocols.
In 20th IEEE Symposium on Logic in Computer Science (LICS 2005), pages 331-340, Chicago, IL, June 2005. IEEE Computer Society.

13
Y. Chevalier, R. Küsters, M. Rusinowitch, and M. Turuani.
Deciding the security of protocols with Diffie-Hellman exponentiation and products in exponents.
In P. K. Pandya and J. Radhakrishnan, editors, FST TCS 2003: Foundations of Software Technology and Theoretical Computer Science, 23rd Conference, volume 2914 of Lecture Notes in Computer Science, pages 124-135, Mumbai, India, Dec. 2003. Springer.

14
Y. Chevalier, R. Küsters, M. Rusinowitch, and M. Turuani.
An NP decision procedure for protocol insecurity with XOR.
Theoretical Computer Science, 338(1-3):247-274, June 2005.

15
R. Corin, J. M. Doumen, and S. Etalle.
Analysing password protocol security against off-line dictionary attacks.
In 2nd Int. Workshop on Security Issues with Petri Nets and other Computational Models (WISP), Electronic Notes in Theoretical Computer Science, June 2004.

16
V. Cortier and B. Warinschi.
Computationally sound, automated proofs for security protocols.
In M. Sagiv, editor, Proc. 14th European Symposium on Programming (ESOP'05), volume 3444 of Lecture Notes in Computer Science, pages 157-171, Edinburgh, U.K., Apr. 2005. Springer.

17
A. Datta, A. Derek, J. Mitchell, and D. Pavlovic.
A derivation system for security protocols and its logical formalization.
In 16th IEEE Computer Security Foundations Workshop, pages 109-125. IEEE Comp. Soc. Press, 2003.

18
A. Datta, A. Derek, J. Mitchell, and D. Pavlovic.
A derivation system and compositional logic for security protocols.
Journal of Computer Security, 2005.
To appear.

19
A. Datta, A. Derek, J. C. Mitchell, V. Shmatikov, and M. Turuani.
Probabilistic polynomial-time semantics for a protocol security logic.
In L. Caires and L. Monteiro, editors, ICALP 2005: the 32nd International Colloquium on Automata, Languages and Programming, volume 3580 of Lecture Notes in Computer Science, Lisboa, Portugal, July 2005. Springer.
To appear.

20
W. Diffie and M. E. Hellman.
New Directions in Cryptography.
IEEE Transactions on Information Theory, IT-22(6):644-654, November 1976.

21
D. Dolev and A. C. Yao.
On the security of public key protocols.
IEEE Transactions on Information Theory, IT-29(2):198-208, Mar. 1983.

22
N. Durgin, J. Mitchell, and D. Pavlovic.
A compositional logic for proving security properties of protocols.
Journal of Computer Security, 11:677-721, 2003.

23
S. Goldwasser and S. Micali.
Probabilistic Encryption.
Journal of Computer and System Sciences, 28:270-299, 1984.

24
S. Goldwasser, S. Micali, and R. Rivest.
A ``Paradoxical'' Solution to the Signature Problem.
In Proceedings of the 25th Symposium on the Foundations of Computer Science (FOCS '84), pages 441-448, Singer Island, U.S.A., 1984. IEEE, New York.

25
S. Goldwasser, S. Micali, and R. Rivest.
A Digital Signature Scheme Secure Against Adaptive Chosen-Message Attacks.
SIAM Journal on Computing, 17(2):281-308, April 1988.

26
L. Gong, T. M. A. Lomas, R. M. Needham, and J. H. Saltzer.
Protecting poorly chosen secrets from guessing attacks.
IEEE Journal on Selected Areas in Communications, 11(5):648-656, June 1993.

27
R. Janvier, Y. Lakhnech, and L. Mazaré.
Completing the picture: Soundness of formal encryption in the presence of active adversaries.
In M. Sagiv, editor, Proc. 14th European Symposium on Programming (ESOP'05), volume 3444 of Lecture Notes in Computer Science, pages 172-185, Edinburgh, U.K., Apr. 2005. Springer.

28
S. Kremer and M. D. Ryan.
Analysis of an electronic voting protocol in the applied pi calculus.
In M. Sagiv, editor, Programming Languages and Systems: 14th European Symposium on Programming, ESOP 2005, volume 3444 of Lecture Notes in Computer Science, pages 186-200, Edinburgh, UK, Apr. 2005. Springer.

29
P. D. Lincoln, J. C. Mitchell, M. Mitchell, and A. Scedrov.
A probabilistic poly-time framework for protocol analysis.
In ACM Computer and Communication Security (CCS-5), pages 112-121, San Francisco, California, Nov. 1998.

30
P. D. Lincoln, J. C. Mitchell, M. Mitchell, and A. Scedrov.
Probabilistic polynomial-time equivalence and security protocols.
In J. Wing, J. Woodcock, and J. Davies, editors, FM'99 World Congress on Formal Methods in the Development of Computing Systems, volume 1708 of Lecture Notes in Computer Science, pages 776-793, Toulouse, France, Sept. 1999. Springer.

31
G. Lowe.
Analyzing protocols subject to guessing attacks.
In Workshop on Issues in the Theory of Security (WITS'02), Portland, Oregon, Jan. 2002.

32
D. Micciancio and B. Warinschi.
Soundness of formal encryption in the presence of active adversaries.
In M. Naor, editor, Theory of Cryptography Conference (TCC'04), volume 2951 of Lecture Notes in Computer Science, pages 133-151, Cambridge, MA, USA, Feb. 2004. Springer.

33
N. J. Nilsson.
Probabilistic logic.
Artificial Intelligence, 28(1):71-87, 1986.

34
R. Rivest, A. Shamir, and L. Adleman.
A Method for Obtaining Digital Signatures and Public Key Cryptosystems.
Communications of the ACM, 21(2):120-126, February 1978.

35
V. Shoup.
On formal models for secure key exchange (version 4).
Technical Report RZ 3120, IBM, 1999.

36
V. Shoup.
A Proposal for an ISO Standard for Public-Key Encryption, December 2001.
ISO/IEC JTC 1/SC27.

37
V. Shoup.
OAEP Reconsidered.
In J. Kilian, editor, Advances in Cryptology - Proceedings of CRYPTO '01, volume 2139 of Lecture Notes in Computer Science, pages 239-259, Santa Barbara, California, 2001. Springer-Verlag, Berlin.

38
V. Shoup.
OAEP Reconsidered.
Journal of Cryptology, 15(4):223-249, September 2002.
