Still, if we study the complexity of Wagner's attack, we observe that it is exactly the square of the complexity of a differential attack against the simplified version of Coconut98 with decorrelation omitted. This seems to confirm one of the original motivations of decorrelation theory. The design uses the following paradigm: we take a regular cipher and plug in, at strategic points of the computation, extra primitives called "decorrelation modules" in order to ensure decorrelation. This provides formal protection against attacks which use the correlation between two plaintext/ciphertext pairs. We believe that attacks which use correlations among a larger number of pairs are intrinsically harder to mount, and that security against these attacks comes from the basic design and not from the added decorrelation modules.
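To make the "decorrelation of order two" idea concrete, here is a toy sketch (our own illustration, not DFC's or Coconut98's actual module): an affine map over a small prime field with a uniformly random key (a, b). For any fixed non-zero input difference, every non-zero output difference is equally likely over the key, so no differential characteristic on two plaintext/ciphertext pairs stands out.

```python
# Toy decorrelation module: y = a*x + b over Z_p with random key (a, b), a != 0.
# The output difference is a*dx, which is uniform over non-zero values when a is,
# so the differential spectrum is perfectly flat -- the order-2 decorrelation effect.
from collections import Counter

p = 257                    # small prime standing in for the much larger field of a real cipher

def module(x, a, b):
    return (a * x + b) % p

dx = 5                     # an arbitrary fixed non-zero input difference
counts = Counter()
for a in range(1, p):      # a != 0 keeps the map bijective
    for b in range(p):
        dy = (module(0 + dx, a, b) - module(0, a, b)) % p
        counts[dy] += 1

# Every non-zero output difference dy occurs for exactly p keys (one per choice of b).
print(len(counts), set(counts.values()))
```

Each of the p - 1 non-zero output differences is hit by exactly p keys, so a differential attacker learns nothing about which characteristic to favor.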
In order to achieve actual security, the basic design must already be secure, which was not the case for Coconut98, as Wagner showed. We believe that it is, on the contrary, the case for DFC; indeed, we even issued a challenge to break DFC with partially known keys.
The first argument refers to the authors' previous investigations of differential cryptanalysis. Usually, one needs what Xuejia Lai calls the "hypothesis of stochastic equivalence", which basically says that what is true on average over all keys holds for any given key. Decorrelation theory does not rely on this hypothesis: it proves security "on average" over the distribution of the keys. This means that some classes of weak keys may exist (they actually do, as shown by Don Coppersmith), but their fraction must necessarily be small (Coppersmith's weak keys represent a fraction of only 1/2^128 of the keys). This paradigm is quite different from Rijmen and Knudsen's approach, which is presumably why they overlooked this point. In addition, we announce that, although decorrelation theory investigates averages, new results on deviations will be presented at Eurocrypt '99.
The second argument misconstrues the original goal of decorrelation theory. The confusion may come from the fact that one of the authors has taken part in research with comparable goals: constructing ciphers provably secure against differential cryptanalysis. The Nyberg-Knudsen approach aims at building ciphers with this property through an ad hoc construction. This is not the case for decorrelation theory. Indeed, decorrelation theory strengthens conservative ciphers: protection against attacks beyond the scope of decorrelation of order two is provided heuristically, as for other "ordinary" ciphers. Exhibiting dedicated weak constructions is thus irrelevant to our paradigm.
Their third argument still stands. Indeed, the best known security result on DFC considers differential attacks in which the input and output differences must be fixed prior to the attack and cannot be changed in the meantime. Decorrelation theory is still young, and wider security results are in progress. For instance, security against general iterated attacks of a given order will be demonstrated at the next Eurocrypt conference. In particular, it is conjectured that no iterated attack can work in which each iteration uses one plaintext/ciphertext pair and is allowed to keep only a small amount of information between iterations. The famous linear cryptanalysis is an example of such an attack; security against it is, at this time, formally proven by a different method. Another (unpublished) result shows that our model extends to adaptive plaintext and ciphertext attacks as well.
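The shape of such an iterated attack can be sketched as follows. This is our own toy illustration of the model, not any concrete attack on DFC: each iteration looks at a single plaintext/ciphertext pair and carries only a small counter between iterations, exactly as linear cryptanalysis does. The "cipher", masks, and threshold below are all hypothetical.

```python
import random

def parity(v):
    return bin(v).count("1") % 2

def iterated_attack(oracle, mask, n_iter, rng):
    # Each iteration uses ONE plaintext/ciphertext pair; only the counter
    # survives between iterations -- the "small amount of information".
    counter = 0
    for _ in range(n_iter):
        x = rng.randrange(2 ** 16)
        y = oracle(x)
        counter += parity((x & mask) ^ (y & mask))
    # Decide "real cipher" when the counter deviates strongly from n_iter / 2.
    return abs(counter - n_iter / 2) > n_iter / 4

rng = random.Random(0)
key = 0xBEEF
weak = lambda x: x ^ key                 # toy "cipher" with a perfect linear relation
table = list(range(2 ** 16))
rng.shuffle(table)
strong = lambda x: table[x]              # random permutation standing in for an ideal cipher

biased = iterated_attack(weak, 0xFFFF, 200, rng)    # linear bias is detected
ideal = iterated_attack(strong, 0xFFFF, 200, rng)   # expected to show no bias
print(biased, ideal)
```

A distinguisher of this form succeeds against the toy cipher with its built-in linear relation, but not against a random permutation; the conjecture above is that sufficient decorrelation rules out every attack of this shape.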
Provable security always leads to controversy. Since DFC was built on a regular design with additional combinatorial properties, we believe that potential users who are skeptical about the role of provable security can still consider DFC as if it were an ordinary design. At the very least, the design of DFC has its "raison d'être" and leads to a cipher that has not been challenged so far.