friends [entries|archive|friends|userinfo]

The El Gordo Massive Galaxy Cluster [Apr. 23rd, 2014|05:19 am]

It is bigger than a bread box.

LinkLeave a comment

Dan Geer on Heartbleed and Software Monocultures [Apr. 22nd, 2014|12:52 pm]

Good essay:

To repeat, Heartbleed is a common mode failure. We would not know about it were it not open source (Good). That it is open source has been shown to be no talisman against error (Sad). Because errors are statistical while exploitation is not, either errors must be stamped out (which can only result in dampening the rate of innovation and rewarding corporate bigness) or that which is relied upon must be field upgradable (Real Politik). If the device is field upgradable, then it pays to regularly exercise that upgradability both to keep in fighting trim and to make the opponent suffer from the rapidity with which you change his target.

The whole thing is worth reading.

LinkLeave a comment

frivol: Word of the Day [Apr. 22nd, 2014|12:00 am]

frivol: to behave frivolously; trifle.
LinkLeave a comment

Massive Nearby Spiral Galaxy NGC 2841 [Apr. 22nd, 2014|05:13 am]

It is one of the more massive galaxies known.

LinkLeave a comment

Differential Fault Analysis on the families of SIMON and SPECK ciphers, by Harshal Tupsamudre and Sh [Apr. 21st, 2014|09:44 pm]

In 2013, the US National Security Agency proposed two new families of lightweight block ciphers: SIMON and SPECK. However, no security analysis was provided for these ciphers. Currently, linear and differential cryptanalytic results for SIMON are available in the literature, but no fault attacks on these two cipher families have been reported so far. In this paper, we present the first fault attack on the SIMON and SPECK families. The attack assumes a fault model that can flip only one bit of an intermediate result. Using this attack, the n-bit secret key used in SIMON can be recovered using (n/2) bit faults on average, while the n-bit secret key used in SPECK can be recovered using (n/3) bit faults. Furthermore, we demonstrate a more practical attack on SIMON that employs a random byte fault model. This attack retrieves multiple bits of the key, depending on the Hamming weight of the byte fault. The average number of byte faults required to retrieve all n bits of the last round key is (n/8).
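The single-bit fault model described above is easy to simulate. The sketch below is illustrative only (it does not model the SIMON/SPECK round functions or the key-recovery logic): it flips one uniformly chosen bit of an intermediate word and shows how an attacker locates the flipped position by XORing the correct and faulty values.

```python
import random

def inject_bit_fault(word, width):
    """Flip one uniformly chosen bit of an intermediate value,
    matching the one-bit fault model assumed by the attack."""
    pos = random.randrange(width)
    return word ^ (1 << pos), pos

def locate_fault(correct, faulty):
    """The XOR difference has Hamming weight 1; its position tells
    the attacker which bit was disturbed."""
    diff = correct ^ faulty
    assert bin(diff).count("1") == 1
    return diff.bit_length() - 1

word = 0xDEADBEEF
faulty, pos = inject_bit_fault(word, 32)
assert locate_fault(word, faulty) == pos
```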
LinkLeave a comment

A low complexity bit-parallel Montgomery multiplier based on squaring for trinomials , by Yin Li and [Apr. 21st, 2014|09:44 pm]

In this paper, we present a new bit-parallel Montgomery multiplier for $GF(2^m)$ generated with irreducible trinomials. A newly proposed divide-and-conquer approach is applied to simplify the polynomial multiplication, while Montgomery squaring is used to simplify the modular reduction. Meanwhile, the design exploits the overlapped elements in the squaring and reduction operations to reduce the space complexity. As a result, the proposed multiplier has about 25% lower space complexity than previous multipliers, at the cost of a slight increase in time complexity. Among the five binary fields recommended by NIST for the ECDSA (Elliptic Curve Digital Signature Algorithm), two, namely $GF(2^{409})$ and $GF(2^{233})$, are defined by trinomials. For these two fields, we show that our proposal outperforms the previous best known results when the space and time complexities are both considered.
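For readers unfamiliar with trinomial-based fields, the following sketch shows plain (non-Montgomery, bit-serial) multiplication in $GF(2^m)$ modulo a trinomial $x^m + x^k + 1$, with polynomials packed into integers. It illustrates why trinomials make reduction cheap: each oversized term is replaced by just two lower-degree terms. This is not the paper's bit-parallel Montgomery design.

```python
def clmul(a, b):
    """Carry-less (GF(2)[x]) multiplication of bit-packed polynomials."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def reduce_trinomial(c, m, k):
    """Reduce c modulo f(x) = x^m + x^k + 1, using
    x^d = x^(d-m+k) + x^(d-m) for any d >= m."""
    while c.bit_length() > m:
        d = c.bit_length() - 1
        c ^= (1 << d) ^ (1 << (d - m + k)) ^ (1 << (d - m))
    return c

def gf_mul(a, b, m, k):
    return reduce_trinomial(clmul(a, b), m, k)

# GF(2^3) with f(x) = x^3 + x + 1: x * x^2 = x^3 = x + 1
assert gf_mul(0b010, 0b100, 3, 1) == 0b011
```

The same routine works for the NIST field $GF(2^{233})$ with $f(x) = x^{233} + x^{74} + 1$, e.g. `gf_mul(1 << 1, 1 << 232, 233, 74)` yields $x^{74} + 1$.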
LinkLeave a comment

Chosen Ciphertext Security via Point Obfuscation, by Takahiro Matsuda and Goichiro Hanaoka [Apr. 21st, 2014|09:44 pm]

In this paper, we show two new constructions of chosen ciphertext secure (CCA secure) public key encryption (PKE) from general assumptions. The key ingredient in our constructions is an obfuscator for point functions with multi-bit output (MBPF obfuscators, for short), that satisfies some (average-case) indistinguishability-based security, which we call AIND security, in the presence of hard-to-invert auxiliary input. Specifically, our first construction is based on a chosen plaintext secure PKE scheme and an MBPF obfuscator satisfying the AIND security in the presence of computationally hard-to-invert auxiliary input. Our second construction is based on a lossy encryption scheme and an MBPF obfuscator satisfying the AIND security in the presence of statistically hard-to-invert auxiliary input. To clarify the relative strength of AIND security, we show the relations among security notions for MBPF obfuscators, and show that AIND security with computationally (resp. statistically) hard-to-invert auxiliary input is implied by the average-case virtual black-box (resp. virtual grey-box) property with the same type of auxiliary input. Finally, we show that a lossy encryption scheme can be constructed from an obfuscator for point functions (point obfuscator) that satisfies re-randomizability and a weak form of composability in the worst-case virtual grey-box sense. This result, combined with our second generic construction and several previous results on point obfuscators and MBPF obfuscators, yields a CCA secure PKE scheme that is constructed \emph{solely} from a re-randomizable and composable point obfuscator. We believe that our results make an interesting bridge that connects CCA secure PKE and program obfuscators, two seemingly isolated but important cryptographic primitives in the area of cryptography.
LinkLeave a comment

Faster Maliciously Secure Two-Party Computation Using the GPU, by Tore Kasper Frederiksen and Thomas [Apr. 21st, 2014|09:44 pm]

We present a new protocol for maliciously secure two-party computation based on cut-and-choose of garbled circuits, using the recent idea of ``forge-and-loose'', which eliminates around a factor of 3 of the garbled circuits that need to be constructed and evaluated. Our protocol introduces a new way to realize the ``forge-and-loose'' approach that avoids an auxiliary secure two-party computation protocol, does not rely on any number-theoretic assumptions, and parallelizes well in a same-instruction, multiple-data (SIMD) framework.

With this approach we prove our protocol universally composable (UC) secure against a malicious adversary, assuming access to oblivious transfer, commitment and coin-tossing functionalities, in the random oracle model.

Finally, we construct, and benchmark, a SIMD implementation of this protocol using a GPU as a massive SIMD device. The findings compare favorably with all previous implementations of maliciously secure, two-party computation.
LinkLeave a comment

STRIBOB: Authenticated Encryption from GOST R 34.11-2012 LPS Permutation, by Markku-Juhani O. Saarin [Apr. 21st, 2014|09:44 pm]

Authenticated encryption algorithms protect both the confidentiality and integrity of messages with a single processing pass. We show how to utilize the $L \circ P \circ S$ transform of the Russian GOST R 34.11-2012 standard hash ``Streebog'' to build an efficient, lightweight algorithm for Authenticated Encryption with Associated Data (AEAD) via the Sponge construction. The proposed algorithm ``StriBob'' has attractive security properties, is faster than the Streebog hash alone, twice as fast as the GOST 28147-89 encryption algorithm, and requires only a modest amount of running-time memory. StriBob is a Round 1 candidate in the CAESAR competition.
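The sponge/duplex idea behind such single-pass AEAD constructions can be sketched generically. The toy below uses SHA-256 as a stand-in permutation (the real design uses Streebog's LPS transform) and omits the padding, domain separation and associated-data handling a real duplex mode needs; sizes are illustrative.

```python
import hashlib

RATE, CAP = 8, 24                      # toy rate/capacity in bytes

def perm(state):
    # Stand-in permutation; NOT the Streebog LPS transform.
    return hashlib.sha256(state).digest()[:RATE + CAP]

def duplex_encrypt(key, nonce, msg):
    state = perm(key.ljust(16, b"\0") + nonce.ljust(16, b"\0"))
    ct = b""
    for i in range(0, len(msg), RATE):
        block = msg[i:i + RATE]
        ks = state[:len(block)]                       # keystream from the rate
        c = bytes(m ^ k for m, k in zip(block, ks))
        ct += c
        # absorb the ciphertext block into the rate, keep the capacity
        state = perm(c.ljust(RATE, b"\0") + state[RATE:])
    tag = perm(state)[:8]
    return ct, tag

def duplex_decrypt(key, nonce, ct, tag):
    state = perm(key.ljust(16, b"\0") + nonce.ljust(16, b"\0"))
    msg = b""
    for i in range(0, len(ct), RATE):
        c = ct[i:i + RATE]
        msg += bytes(x ^ k for x, k in zip(c, state[:len(c)]))
        state = perm(c.ljust(RATE, b"\0") + state[RATE:])
    if perm(state)[:8] != tag:
        raise ValueError("authentication failed")
    return msg

ct, tag = duplex_encrypt(b"key", b"nonce", b"attack at dawn!")
assert duplex_decrypt(b"key", b"nonce", ct, tag) == b"attack at dawn!"
```

Because ciphertext blocks are absorbed back into the state, both confidentiality and the authentication tag come out of the same pass over the data.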
LinkLeave a comment

Impossible differential cryptanalysis of LBlock with concrete investigation of key scheduling algori [Apr. 21st, 2014|09:44 pm]

Impossible differential cryptanalysis has proved to be one of the most powerful techniques for attacking block ciphers. Based on impossible differential paths, we can usually add several rounds before or after them to launch a key recovery attack. Impossible differential cryptanalysis is powerful not only because the number of rounds it can break is very competitive compared to other attacks, but also because, unlike differential attacks, which are statistical in essence, impossible differential analysis does not require many statistical assumptions. In this paper, we investigate the key recovery part of impossible differential cryptanalysis. We point out that when the (non-linear) key scheduling algorithm is taken into consideration, we can further derive redundancy among the subkeys, and thus can filter out wrong keys at a rather early stage. This helps us control the time complexity and increase the number of rounds we can attack. As an application, we analyze the recently proposed lightweight block cipher LBlock; as a result, we can break 23 rounds with a complexity of $2^{77.4}$ encryptions without using the whole codebook, which is by far the best attack against this cipher.
LinkLeave a comment

Witness Encryption from Instance Independent Assumptions, by Craig Gentry and Allison Bishop Lewko a [Apr. 21st, 2014|09:44 pm]

Witness encryption was proposed by Garg, Gentry, Sahai, and Waters as a means to encrypt to an instance, x, of an NP language and produce a ciphertext. In such a system, any decryptor that knows a witness w that x is in the language can decrypt the ciphertext and learn the message. In addition to proposing the concept, their work provided a candidate witness encryption scheme built using multilinear encodings. However, one significant limitation of the work is that the candidate had no proof of security (other than essentially assuming the scheme secure).

In this work we provide a framework for proving witness encryption schemes secure under instance-independent assumptions. At the highest level we introduce the abstraction of positional witness encryption, which allows a proof reduction of a witness encryption scheme via a sequence of 2^n hybrid experiments, where n is the witness length of the NP statement. Each hybrid step proceeds by looking at a single witness candidate and using the fact that it does not satisfy the NP relation to move the proof forward. We show that this isolation strategy enables one to create a witness encryption system that is provably secure from assumptions that are (maximally) independent of any particular encryption instance. We demonstrate the viability of our approach by implementing this strategy using level-n linear encodings, where n is the witness length. Our complexity assumption has approximately n group elements, but does not otherwise depend on the NP instance x.
LinkLeave a comment

Weak instances of composite order protocols, by Sorina Ionica and Malika Izabach{\`e}ne [Apr. 21st, 2014|09:44 pm]

In pairing-based cryptography, the security of protocols using composite order groups relies on the difficulty of factoring a composite number $N$. Boneh et al. proposed the Cocks-Pinch method to construct ordinary pairing-friendly elliptic curves having a subgroup of composite order $N$. Displaying such a curve as a public parameter implies revealing a square root of the complex multiplication discriminant $-D$ modulo $N$. We exploit this information leak and the structure of the endomorphism ring of the curve to factor the RSA modulus, by computing a square root $\lambda$ of $-D$ modulo one of its factors. Our attack is based on a generic discrete logarithm algorithm. We recommend that $\lambda$ be chosen as a high-entropy input parameter when running the Cocks-Pinch algorithm, in order to ensure protection from our attack.
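The underlying number-theoretic fact can be checked on a toy example: whoever learns two essentially different square roots of the same value modulo $N = pq$ can factor $N$ with a single gcd. The sketch below builds the second root via the CRT; in the real attack that second root is what the leaked discriminant plus a discrete logarithm computation provides. The primes and values here are illustrative only.

```python
from math import gcd

p, q = 1009, 1013            # toy primes standing in for an RSA modulus
N = p * q
a = 12345 % N                # any value; its square is what both roots share
x = a
# Second square root: y = a (mod p) but y = -a (mod q), built via CRT.
y = (a * q * pow(q, -1, p) + (-a) * p * pow(p, -1, q)) % N

assert pow(x, 2, N) == pow(y, 2, N)   # same square...
assert x not in (y, N - y)            # ...but essentially different roots
assert gcd(x - y, N) == p             # gcd reveals a prime factor of N
```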
LinkLeave a comment

Identity-based encryption and digital signature schemes using extended chaotic maps, by SK Hafizul I [Apr. 21st, 2014|09:44 pm]

This paper designs a new extended chaotic map-based identity-based encryption (ECM-IBE) scheme and an extended chaotic map-based identity-based digital signature (ECM-IDS) scheme. The security of the ECM-IBE scheme is based on the hardness of the chaotic maps-based decisional Diffie-Hellman (CDDH) problem, whereas the ECM-IDS scheme is secure based on the difficulty of the chaotic maps-based discrete logarithm (CDL) problem.
LinkLeave a comment

Design of identity-based digital signature schemes using extended chaotic maps, by SK Hafizul Islam [Apr. 21st, 2014|09:44 pm]

Inspired by the identity-based cryptosystems proposed by Adi Shamir and by Boneh and Franklin, this paper designs a new identity-based digital signature (ECM-IDS) scheme using extended chaotic maps. The ECM-IDS scheme is secure based on the difficulty of the integer factorization problem.
LinkLeave a comment

New Treatment of the BSW Sampling and Its Applications to Stream Ciphers, by Lin Ding and Chenhui Ji [Apr. 21st, 2014|09:44 pm]

By combining the time-memory-data trade-off (TMDTO) attack independently proposed by Babbage and Goli\'{c} (BG) with the BSW sampling technique, this paper mounts a new TMDTO attack on stream ciphers. The new attack gives a wider variety of trade-offs than the original BG-TMDTO attack. It is efficient when the attacker can obtain data generated from the same key under multiple different IVs, even when the internal state size is twice the key size. We apply the new attack to the MICKEY and Grain stream ciphers, and improve the existing TMDTO attacks on them. Our attacks on the Grain v1 and Grain-128 stream ciphers are rather attractive in that the online time, offline time and memory complexities are all better than an exhaustive key search, while the amount of keystream needed remains practical. Finally, we generalize the new attack to a guess-and-determine TMDTO attack on stream ciphers, and mount a guess-and-determine TMDTO attack on the SOSEMANUK stream cipher with online and offline time complexities both equal to $2^{128}$, which achieves the best time complexity among all existing attacks on SOSEMANUK so far.
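The basic Babbage-Golić trade-off that the paper builds on is easy to simulate: precompute a table of M random states with their keystream prefixes offline; online, look up each of D observed prefixes, succeeding with high probability once M·D approaches the state-space size N. The toy below uses a hash as a stand-in keystream function; all parameters are illustrative, not those of any real cipher.

```python
import hashlib
import random

def prefix(state):
    """Toy keystream prefix produced from an internal state."""
    return hashlib.sha256(state.to_bytes(8, "big")).digest()[:8]

N = 1 << 16                     # state-space size
M = 1 << 12                     # offline table size
D = 1 << 10                     # online data (observed prefixes)

random.seed(1)
# Offline phase: invert-by-lookup table over M random states.
table = {prefix(s): s for s in random.sample(range(N), M)}

# Online phase: D states produced keystream we observed.
observed = random.sample(range(N), D)
hits = [s for s in observed if prefix(s) in table]

# Expected hits ~ M*D/N = 64, so recovery of some state succeeds.
assert hits
assert all(table[prefix(s)] == s for s in hits)
```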
LinkLeave a comment

Info on Russian Bulk Surveillance [Apr. 21st, 2014|10:55 am]

Good information:

Russian law gives Russia’s security service, the FSB, the authority to use SORM (“System for Operative Investigative Activities”) to collect, analyze and store all data that is transmitted or received on Russian networks, including calls, email, website visits and credit card transactions. SORM has been in use since 1990 and collects both metadata and content. SORM-1 collects mobile and landline telephone calls. SORM-2 collects internet traffic. SORM-3 collects from all media (including Wi-Fi and social networks) and stores data for three years. Russian law requires all internet service providers to install an FSB monitoring device (called “Punkt Upravlenia”) on their networks that allows the direct collection of traffic without the knowledge or cooperation of the service provider. The providers must pay for the device and the cost of installation.

Collection requires a court order, but these are secret and not shown to the service provider. According to the data published by Russia’s Supreme Court, almost 540,000 intercepts of phone and internet traffic were authorized in 2012. While the FSB is the principal agency responsible for communications surveillance, seven other Russian security agencies can have access to SORM data on demand. SORM is routinely used against political opponents and human rights activists to monitor them and to collect information to use against them in “dirty tricks” campaigns. Russian courts have upheld the FSB’s authority to surveil political opponents even if they have committed no crime. Russia used SORM during the Olympics to monitor athletes, coaches, journalists, spectators, and the Olympic Committee, publicly explaining this was necessary to protect against terrorism. The system was an improved version of SORM that can combine video surveillance with communications intercepts.

Link4 comments | Leave a comment

immiscible: Word of the Day [Apr. 21st, 2014|12:00 am]

immiscible: incapable of being mixed.
LinkLeave a comment

Ash and Lightning above an Icelandic Volcano [Apr. 21st, 2014|05:10 am]

Why did a picturesque …

LinkLeave a comment

Dual System Groups and its Applications --- Compact HIBE and More, by Jie Chen and Hoeteck Wee [Apr. 20th, 2014|04:13 pm]

We introduce the notion of *dual system groups*.

- We show how to derive compact HIBE by instantiating the dual system framework in Waters (Crypto '09) and Lewko and Waters (TCC '10) with dual system groups. Our construction provides a unified treatment of the prior compact HIBE schemes from static assumptions.

- We show how to instantiate dual system groups under the decisional subgroup assumption in composite-order groups and the decisional linear assumption ($d$-LIN) in prime-order groups. Along the way, we provide new tools for simulating properties of composite-order bilinear groups in prime-order groups. In particular, we present new randomization and parameter-hiding techniques in prime-order groups.

Combining the two, we obtain a number of new encryption schemes, notably

- a new construction of IBE in prime-order groups with shorter parameters;

- a new construction of compact HIBE in prime-order
groups whose structure closely mirrors the selectively secure HIBE
scheme of Boneh, Boyen and Goh (Eurocrypt '05);

- a new construction of compact spatial encryption in prime-order groups.
LinkLeave a comment

ICEPOLE: High-speed, Hardware-oriented Authenticated Encryption, by Pawel Morawiecki and Kris Gaj an [Apr. 20th, 2014|04:13 pm]

This paper introduces our dedicated authenticated encryption scheme ICEPOLE. ICEPOLE is a high-speed hardware-oriented scheme, suitable for high-throughput network nodes or generally any environment where specialized hardware (such as FPGAs or ASICs) can be used to provide high data processing rates. ICEPOLE-128 (the primary ICEPOLE variant) is very fast. On the modern FPGA device Virtex 6, a basic iterative architecture of ICEPOLE reaches 41 Gbits/s, which is over 10 times faster than the equivalent implementation of AES-128-GCM. The throughput-to-area ratio is also substantially better when compared to AES-128-GCM. We have carefully examined the security of the algorithm through a range of cryptanalytic techniques, and our findings indicate that ICEPOLE offers a high level of security.
LinkLeave a comment

A Generic Scan Attack on Hardware based eStream Winners, by Sandip Karmakar and Dipanwita Roy Chowdh [Apr. 20th, 2014|03:43 pm]

Scan chains, a design-for-testability (DFT) feature, are included in most modern-day ICs. However, they open a side channel for attacking cryptographic chips. We propose a methodology by which we can recover internal states of any stream cipher using scan chains without knowledge of its design. We consider conventional scan-chain design, which is normally not scrambled or protected in any other way. In this scenario the challenge for the adversary is to obtain the correspondence between the output of the scan chain and the internal state registers of the stream cipher. We present a mathematical model of the attack, and the correspondence between the scan-chain outputs and the internal state bits is proved under this model. We propose an algorithm that, through off-line and on-line simulation, forms a bijection between the above-mentioned sets and thus finds the required correspondence. We also give an estimate of the number of off-line simulations necessary for finding the correspondence.

The proposed strategy is successfully applied to the eStream hardware-based finalists MICKEY-128 2.0, Trivium and Grain-128. To the best of our knowledge, this is the first scan-based attack against full-round Grain-128 and only the fourth reported cryptanalysis. Our attack on Trivium is better than the previously published scan attack on Trivium. This is also the first reported scan-based cryptanalysis against MICKEY-128 2.0.
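The off-line phase described above, finding which scan-chain position corresponds to which internal state register, can be illustrated with a toy simulation: load one-hot states into a hypothetical unscrambled chain and watch where the single 1 emerges. Real attacks must work through the cipher's state-update logic; this sketch models the unknown wiring only.

```python
import random

def make_scan_chain(n, seed=0):
    """Unknown wiring: scan output position j exposes state register perm[j]."""
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def scan_out(state, perm):
    return [state[p] for p in perm]

def recover_correspondence(n, scan_fn):
    """Off-line simulation: one-hot loads reveal the scan-position ->
    state-register bijection the attack needs."""
    mapping = {}
    for i in range(n):
        state = [0] * n
        state[i] = 1
        mapping[scan_fn(state).index(1)] = i
    return mapping

n = 16
perm = make_scan_chain(n, seed=42)
mapping = recover_correspondence(n, lambda s: scan_out(s, perm))
assert all(mapping[j] == perm[j] for j in range(n))
```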
LinkLeave a comment

Continuous After-the-fact Leakage-Resilient Key Exchange (full version), by Janaka Alawatugoda and C [Apr. 20th, 2014|03:43 pm]

Security models for two-party authenticated key exchange (AKE) protocols have developed over time to provide security even when the adversary learns certain secret keys. In this work, we advance the modelling of AKE protocols by considering more granular, continuous leakage of long-term secrets of protocol participants: the adversary can adaptively request arbitrary leakage of long-term secrets even after the test session is activated, with limits on the amount of leakage per query but no bounds on the total leakage. We present a security model supporting continuous leakage even when the adversary learns certain ephemeral secrets or session keys, and give a generic construction of a two-pass leakage-resilient key exchange protocol that is secure in the model; our protocol achieves continuous, after-the-fact leakage resilience with not much more cost than a previous protocol with only bounded, non-after-the-fact leakage.
LinkLeave a comment

Forgery on Stateless CMCC, by Guy Barwell [Apr. 20th, 2014|03:43 pm]

We present attacks against CMCC that invalidate the claimed security of integrity protection and misuse resistance. We exploit the fact that zero-padding is used on both the message and the authenticated data, and demonstrate how one may generate a forgery with a single call to the encryption oracle. From this we calculate the ciphertext of the chosen message, yielding a forgery and so breaking INT-CTXT. In the nonce-reuse setting, the existence of a forgery leads directly to a 2-query distinguisher.
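The zero-padding hazard the attack exploits is generic: two distinct messages can pad to the same input, so the cipher processes them identically. A minimal illustration of the ambiguity (not CMCC's exact padding or mode):

```python
def zero_pad(data, block=8):
    """Pad with zero bytes up to a block boundary; ambiguous by design flaw."""
    return data + b"\x00" * (-len(data) % block)

m1 = b"attack"          # 6 bytes -> two zero bytes appended
m2 = b"attack\x00"      # 7 bytes -> one zero byte appended
assert m1 != m2
assert zero_pad(m1) == zero_pad(m2)   # identical padded inputs
```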
LinkLeave a comment

Making RSA-PSS Provably Secure Against Non-Random Faults, by Gilles Barthe and François Dupressoir a [Apr. 20th, 2014|03:43 pm]

RSA-CRT is the most widely used implementation for RSA signatures. However, deterministic and many probabilistic RSA signatures based on CRT are vulnerable to fault attacks. Nevertheless, Coron and Mandal (Asiacrypt 2009) show that the randomized PSS padding protects RSA signatures against random faults. In contrast, Fouque et al. (CHES 2012) show that PSS padding does not protect against certain non-random faults that can be injected in widely used implementations based on the Montgomery modular multiplication. In this article, we prove the security of an infective countermeasure against a large class of non-random faults; the proof extends Coron and Mandal's result to a strong model where the adversary can force the faulty signatures to be a multiple of one of the prime factors of the RSA modulus. Such non-random faults induce more complex probability distributions than in the original proof, which we analyze using careful estimates of exponential sums attached to suitable rational functions. The security proof is formally verified using appropriate extensions of EasyCrypt, and provides the first application of formal verification to provable (i.e. reductionist) security in the context of fault attacks.
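The classic Boneh-DeMillo-Lipton observation explains why such proofs matter: a single fault in one half of an RSA-CRT signature lets anyone factor the modulus with a gcd. A toy demonstration with tiny illustrative primes and no padding, which is exactly the gap that PSS-style randomization and the countermeasures above address:

```python
from math import gcd

p, q = 1009, 1013
N, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def crt_sign(m, fault_q=False):
    """RSA signature via the CRT; optionally flip one bit of the mod-q half."""
    sp = pow(m, d % (p - 1), p)
    sq = pow(m, d % (q - 1), q)
    if fault_q:
        sq ^= 1                      # the injected fault
    return (sp * q * pow(q, -1, p) + sq * p * pow(p, -1, q)) % N

m = 42
assert pow(crt_sign(m), e, N) == m                  # correct signature verifies
s_faulty = crt_sign(m, fault_q=True)
assert gcd(pow(s_faulty, e, N) - m, N) == p         # the fault leaks a factor
```

The faulty signature is still correct modulo p but wrong modulo q, so s^e - m is divisible by p only, and the gcd isolates it.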
LinkLeave a comment

Practical and Secure Query Processing for Large-scale Encrypted Cloud Storage Systems, by Fangquan C [Apr. 20th, 2014|03:43 pm]

With the increasing popularity of cloud-based data services, data owners are highly motivated to store their huge amount of (potentially sensitive) personal data files on remote servers in encrypted form. Clients later can query over the encrypted database to retrieve files of interest while preventing database servers from learning private information about the contents of files and queries.
In this paper, we investigate novel searchable symmetric encryption (SSE) designs that meet all practical properties, including one-round multi-keyword queries, comprehensive and practical privacy protection, sublinear search time, and efficient support for dynamic data operations. Moreover, our solutions support parallel search well and scale to very large cloud databases. Compared to existing SSE solutions, our solution is highly compact, efficient and flexible. Its performance and security are carefully characterized by rigorous analysis. Experimental evaluations conducted over large, representative real-world data sets demonstrate that, compared with the state of the art, our solution indeed achieves the desirable properties for large-scale encrypted database systems.
LinkLeave a comment

Enhanced Lattice-Based Signatures on Reconfigurable Hardware, by Thomas P\"oppelmann and L{\'e}o Duc [Apr. 20th, 2014|03:43 pm]

The recent BLISS signature scheme showed that lattice-based constructions have evolved into practical alternatives to RSA or ECC. Besides reasonably small signatures of 5600 bits at a 128-bit security level, BLISS enables extremely fast signing and signature verification in software. However, due to the complex sampling of Gaussian noise with high precision, it is not clear whether this scheme can be mapped efficiently to embedded devices. In particular, the software approach of using large precomputed tables for Gaussian sampling cannot be transferred to constrained computing environments, such as FPGAs with limited memory. In this work we present techniques for an efficient CDT-based Gaussian sampler on reconfigurable hardware involving Peikert's convolution lemma and the Kullback-Leibler divergence. Based on our enhanced sampler design, we provide a first BLISS architecture for Xilinx Spartan-6 FPGAs that integrates fast FFT/NTT-based polynomial multiplication, sparse multiplication, and the Keccak hash function. On our core, a signing operation requires 123 \textmu s on average, using 2,584 slices, 8 BRAMs, and 6 DSPs. Verification takes slightly less time, at 70 \textmu s.
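A CDT (cumulative distribution table) Gaussian sampler of the kind discussed works by binary-searching a uniform draw in a precomputed cumulative table. The sketch below is floating-point Python rather than the paper's fixed-point hardware design, and its tail cut and precision are simplified relative to a production sampler:

```python
import bisect
import math
import random

def build_cdt(sigma, tail=12):
    """Cumulative table for the non-negative half of a discrete Gaussian,
    with rho(0) halved so that a uniform sign flip gives the two-sided law."""
    bound = int(tail * sigma)
    rho = [math.exp(-x * x / (2 * sigma * sigma)) for x in range(bound + 1)]
    rho[0] /= 2
    total = sum(rho)
    cdt, acc = [], 0.0
    for w in rho:
        acc += w
        cdt.append(acc / total)
    return cdt

def sample(cdt, rng=random):
    x = bisect.bisect_left(cdt, rng.random())   # binary search in the table
    return -x if rng.random() < 0.5 else x      # uniform sign

random.seed(0)
cdt = build_cdt(sigma=4.0)
xs = [sample(cdt) for _ in range(10000)]
assert all(abs(x) < len(cdt) for x in xs)       # within the tail bound
assert abs(sum(xs) / len(xs)) < 0.5             # roughly centered at zero
```

The hardware contribution of the paper is precisely about shrinking this table (via convolution of narrower Gaussians) and bounding the precision loss with the Kullback-Leibler divergence.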
LinkLeave a comment

Certification and Efficient Proofs of Committed Topology Graphs, by Thomas Gross [Apr. 20th, 2014|03:43 pm]

Digital signature schemes are a foundational cryptographic building block in certification and the projection of trust. Based on a signature scheme on committed graphs, we propose a toolkit of certification and proof methods to sign committed topology graphs and to prove properties of their certificates in zero-knowledge. This toolkit allows an issuer, such as an auditor, to sign the topology representation of an infrastructure. The prover, such as an infrastructure provider, can then convince a verifier of topology properties, such as partitions, connectivity or isolation, without disclosing the structure of the topology itself. By this, we can achieve certification of the structure of critical systems, such as infrastructure clouds or outsourced systems, while still maintaining confidentiality. We offer zero-knowledge proofs of knowledge for a general specification language of security goals for virtualized infrastructures, such that high-level security goals can be proven over the topology certificate. Our method builds upon the Camenisch-Lysyanskaya signature scheme and is based on honest-verifier proofs and the strong RSA assumption.
LinkLeave a comment

Private and Dynamic Time-Series Data Aggregation with Trust Relaxation, by Iraklis Leontiadis and Ka [Apr. 20th, 2014|03:43 pm]

With the advent of networking applications collecting user data on a massive scale, the privacy of individual users appears to be a major concern. The main challenge is the design of a solution that allows the data analyzer to compute global statistics over the set of individual inputs that are protected by some confidentiality mechanism. Joye et al. [7] recently suggested a solution that allows a centralized party to compute the sum of encrypted inputs collected through a smart metering network. The main shortcomings of this solution are its reliance on a trusted dealer for key distribution and the need for frequent key updates. In this paper we introduce a secure protocol for aggregation of time-series data that is based on the Joye et al. [7] scheme and in which the main shortcomings of the latter, namely the requirement for key updates and for the trusted dealer, are eliminated. As such, during the protocol execution none of the parties apart from the users themselves are aware of the secret keys. Moreover, our scheme supports dynamic group management, whereby, as opposed to Joye et al. [7], leave and join operations do not trigger a key update at the users.
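The core trick in this line of work, user keys that sum to zero so that the blinding terms cancel at the aggregator, can be sketched in a few lines. This is a simplified additive variant for illustration only; the actual Joye et al. scheme works with modular exponentiations in $\mathbb{Z}_{N^2}$.

```python
import hashlib
import random

M = 1 << 64

def blind(t):
    """Public, per-period blinding base derived from the time slot t."""
    return int.from_bytes(hashlib.sha256(str(t).encode()).digest()[:8], "big")

random.seed(7)
n = 5
keys = [random.randrange(M) for _ in range(n - 1)]
keys.append((-sum(keys)) % M)          # secret keys sum to zero mod M

readings = [7, 3, 11, 2, 5]            # individual (sensitive) inputs
t = 2014
cts = [(x + k * blind(t)) % M for x, k in zip(readings, keys)]

# The aggregator only ever sees cts; since the keys sum to zero,
# the blinding terms cancel and only the sum of the readings remains.
assert sum(cts) % M == sum(readings)
```

The dealer-removal and dynamic-join/leave contributions of the paper are about how users establish such zero-sum keys among themselves, without a trusted party handing them out.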
LinkLeave a comment

Handycipher: a Low-tech, Randomized, Symmetric-key Cryptosystem, by Bruce Kallick [Apr. 20th, 2014|03:43 pm]

Handycipher is a low-tech, randomized, symmetric-key stream cipher, simple enough to permit pen-and-paper encryption and decryption of messages, while providing a significant level of security through a nondeterministic encryption procedure, multiple encryption, and randomly generated session keys.
LinkLeave a comment

A realtime key recovery attack on the authenticated cipher FASER128, by Xiutao FENG and Fan ZHANG [Apr. 20th, 2014|03:43 pm]

FASER is a family of authenticated ciphers submitted to the CAESAR competition, which contains two parent ciphers: FASER128 and FASER256. In this work we focus on FASER128 and present a key recovery attack on it, which needs at most 64 key words and runs in real time on a PC. The result shows that FASER128 is very insecure. Moreover, our attack can easily be applied to FASER256 and breaks it entirely.
LinkLeave a comment

Practical Complexity Cube Attacks on Round-Reduced Keccak Sponge Function, by Itai Dinur and Pawel M [Apr. 20th, 2014|03:43 pm]

In this paper we mount the cube attack on the Keccak sponge function. The cube attack, formally introduced in 2008, is an algebraic technique applicable to cryptographic primitives whose output can be described as a low-degree polynomial in the input. Our results show that the 5- and 6-round Keccak sponge function is vulnerable to this technique. All the presented attacks have practical complexities and were verified on a desktop PC.
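The mechanics of a cube attack can be seen on a toy polynomial: XOR-summing the output over all assignments of a chosen cube of public variables cancels everything except that cube's superpoly, which here is a single key bit. The polynomial below is invented for illustration; it is not Keccak.

```python
from itertools import product

def f(v, k):
    """Toy output bit: v are public (IV) bits, k are secret key bits."""
    return (v[0] & v[1] & k[0]) ^ (v[0] & v[1] & v[2] & k[1]) ^ (v[2] & k[0]) ^ k[1]

def cube_sum(cube, fixed, k):
    """XOR f over all 2^|cube| assignments of the cube variables."""
    s = 0
    for bits in product((0, 1), repeat=len(cube)):
        v = list(fixed)
        for i, b in zip(cube, bits):
            v[i] = b
        s ^= f(v, k)
    return s

for k in product((0, 1), repeat=2):
    # cube {v0,v1} with v2 = 0 has superpoly k0; cube {v0,v1,v2} has superpoly k1
    assert cube_sum([0, 1], [0, 0, 0], k) == k[0]
    assert cube_sum([0, 1, 2], [0, 0, 0], k) == k[1]
```

All lower-degree terms appear an even number of times in the sum and vanish over GF(2), leaving only the coefficient of the cube monomial, which the attacker then solves for key bits.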
LinkLeave a comment

Locally Decodable Codes for edit distance, by Rafail Ostrovsky and Anat Paskin-Cherniavsky [Apr. 20th, 2014|03:43 pm]

Locally decodable codes (LDC)~\cite{BFLS91,KT00} are error-correcting codes that allow decoding of (any) individual symbol of the message by reading only a few symbols of the codeword. Consider an application such as storage solutions for large data, where errors may occur in the disks (or some disks may simply crash). In such an application, it is often desirable to recover only small portions of the data (to have random access). Thus, in such applications, using LDCs provides enormous efficiency gains over standard error-correcting codes (ECCs), which need to read the entire encoded message to learn even a single bit of information. Typically, LDCs, as well as standard ECCs, decode the encoded message if up to some bounded fraction of the symbols has been modified. This corresponds to decoding strings of bounded Hamming distance from a valid codeword. An often more realistic metric is the edit distance, measuring the shortest sequence of insertions and deletions (indels) of symbols leading from one word to another. For example, (few) indel modifications is a more realistic model for mutations occurring in a genome. Even more commonly, communication over the web may sustain deletions (lost packets) and insertions (noise).\footnote{Edit distance is indeed ``more expressive'' than Hamming distance in the sense that $dist_E(x,y)\leq 2dist_H(x,y)$ always holds, while edit distance 2 may translate to Hamming distance $n$. For instance, consider $x=1010\ldots 10$, $y=0101\ldots 01$.} Standard ECCs for edit distance have been previously considered~\cite{SZ97}. Furthermore,~\cite{SZ97} devised codes with rate and distance (error tolerance) optimal up to constants. LDCs, originally considered in the setting of PCPs~\cite{BFLS91}, have found many additional applications and generated a lot of fascinating work (see~\cite{Yek11} and references within). However, combining these two useful settings, LDCs and robustness against indel errors, has never been considered.

In this work, we study the question of constructing LDCs for edit distance. We demonstrate a strong positive result: LDCs for edit distance can be achieved, with parameters similar to those of LDCs for Hamming distance. More precisely, we devise a generic transformation from LDCs for Hamming distance to LDCs for edit distance with related parameters.
LinkLeave a comment

Fault Analysis of Grain Family of Stream Ciphers, by Sandip Karmakar and Dipanwita Roy Chowdhury [Apr. 20th, 2014|03:43 pm]

In this paper, we present a fault attack on the Grain family of stream ciphers, an eStream finalist. Earlier fault attacks on Grain target the LFSR, whereas our target for fault induction is the NFSR. Our attack requires only a small number of injected faults: just 150 for Grain v1, and only 312 and 384 for Grain-128 and Grain-128a, respectively. These counts are much smaller than those of the earlier reported fault attacks: 1587 for Grain-128 and 1831 for Grain-128a.
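The abstract's fault model (flip one bit of the cipher's internal state, then compare keystreams) can be illustrated on a toy register. This is a minimal sketch, assuming a generic 8-bit Fibonacci LFSR with made-up taps; it is not Grain's actual NFSR/LFSR construction, only the fault-injection idea.

```python
def keystream(state, taps, n):
    # Toy Fibonacci LFSR: output s[0] each clock, feedback is the XOR
    # of the tapped positions, appended at the tail.
    s = list(state)
    out = []
    for _ in range(n):
        out.append(s[0])
        fb = 0
        for t in taps:
            fb ^= s[t]
        s = s[1:] + [fb]
    return out

def inject_fault(state, pos):
    # Single-bit fault model: flip exactly one state bit.
    s = list(state)
    s[pos] ^= 1
    return s

state = [1, 0, 0, 1, 1, 0, 1, 0]   # hypothetical secret state
taps = [0, 2, 3, 5]                # hypothetical feedback taps
clean = keystream(state, taps, 16)
faulty = keystream(inject_fault(state, 4), taps, 16)
# The keystream XOR-difference is what a fault attack analyses to
# derive equations in the unknown state/key bits.
diff = [a ^ b for a, b in zip(clean, faulty)]
print(diff)
```

In the real attack, the position and effect of each induced fault yield equations that, over roughly 150 faults for Grain v1, pin down the secret state.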
LinkLeave a comment

Differential Fault Analysis of MICKEY Family of Stream Ciphers, by Sandip Karmakar and Dipanwita Roy [Apr. 20th, 2014|03:43 pm]

This paper presents a differential fault analysis of the MICKEY family of stream ciphers, one of the winners of the eStream project. The attacks presented here perform best among all attacks against the MICKEY ciphers reported to date: the number of faults required is about 1.5 times the state size. We obtain linear equations that determine the state bits. The fault model required is reasonable, and we further relax it by not requiring faults to be reproducible and by allowing multiple-bit faults. In these scenarios, more faults are required when reproduction is not allowed, whereas the number of faults is shown to remain the same for multiple-bit faults.
LinkLeave a comment

Linear Extension Cube Attack on Stream Ciphers, by Liren Ding, Yongjuan Wang, Zhufeng Li [Apr. 20th, 2014|03:43 pm]

Building on the original cube attack, this paper proposes an improved cube attack on stream ciphers that refines the pre-processing phase of the original attack. The new method induces maxterms of higher order from those of lower order via a trade-off between time and space, thus recovering more key bits and reducing the search complexity in higher dimensions. We apply the improved attack to the Lili-128 algorithm and to reduced variants of the Trivium algorithm: 88 key bits of Lili-128 can be recovered with time complexity 2^14, and 48 key bits of Trivium can be recovered using cubes of dimension at most 8 when the initialization is reduced to 576 rounds. These results are much better than those of the original attacks.
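The core mechanism of any cube attack, which the paper's pre-processing phase improves, can be shown on a toy example. This is a minimal sketch, assuming a made-up output polynomial `f` over GF(2) in place of a real cipher's first keystream bit: summing `f` over all assignments of the cube variables isolates the superpoly, whose linear coefficients are read off at unit key vectors.

```python
from itertools import product

def f(v, k):
    # Toy "cipher" output polynomial over GF(2); in a real attack this
    # is the black-box first keystream bit of the cipher.
    return (v[0] & v[1] & (k[0] ^ k[1])) ^ (v[2] & k[0]) ^ v[0] ^ k[1]

def cube_sum(cube, k, n_pub=3):
    # XOR f over all assignments of the cube variables (other public
    # variables fixed to 0); over GF(2) this isolates the superpoly.
    acc = 0
    for bits in product((0, 1), repeat=len(cube)):
        v = [0] * n_pub
        for i, b in zip(cube, bits):
            v[i] = b
        acc ^= f(v, k)
    return acc

# Pre-processing: recover the superpoly's coefficients by evaluating
# the cube sum at the all-zero key and at unit key vectors.
cube = [0, 1]                       # cube over v0, v1
c0 = cube_sum(cube, [0, 0])         # constant term
a0 = c0 ^ cube_sum(cube, [1, 0])    # coefficient of k0
a1 = c0 ^ cube_sum(cube, [0, 1])    # coefficient of k1
# Here the superpoly turns out to be k0 ^ k1.

# Online phase: one cube sum over the real keystream gives one linear
# equation in the secret key bits.
secret = [1, 0]
rhs = cube_sum(cube, secret)
```

The paper's contribution sits in the pre-processing step: inducing higher-order maxterms from lower-order ones rather than searching for each cube from scratch.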
LinkLeave a comment

Cryptanalysis of the MORE symmetric key fully homomorphic encryption scheme, by Boaz Tsaban and Noam [Apr. 20th, 2014|03:43 pm]

The fully homomorphic symmetric encryption scheme \emph{MORE} encrypts
keys by conjugation with a random invertible matrix over an RSA modulus.
We provide a cryptanalysis that, given two known ciphertexts, recovers a linear dependence between
the two encrypted keys.
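The conjugation structure the abstract describes is easy to sketch. Below is a toy model, assuming the commonly described form of MORE in which a scalar m is encrypted as C = M·diag(m, r)·M⁻¹ for a secret invertible matrix M and fresh random r; a small prime stands in for the RSA modulus, and 2×2 matrices are hand-rolled. Conjugation commutes with matrix addition and multiplication, which is what makes the scheme homomorphic, and also what leaves m exposed as an eigenvalue.

```python
import random

N = 10007  # small prime for illustration only; MORE uses an RSA modulus

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % N
             for j in range(2)] for i in range(2)]

def matadd(A, B):
    return [[(A[i][j] + B[i][j]) % N for j in range(2)] for i in range(2)]

def inv2(M):
    # Inverse of a 2x2 matrix mod N via the adjugate formula.
    det = (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % N
    d = pow(det, -1, N)
    return [[M[1][1] * d % N, -M[0][1] * d % N],
            [-M[1][0] * d % N, M[0][0] * d % N]]

random.seed(1)
while True:  # sample an invertible secret key matrix M
    M = [[random.randrange(N) for _ in range(2)] for _ in range(2)]
    if (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % N:
        break
Minv = inv2(M)

def enc(m):
    # C = M * diag(m, r) * M^{-1}: m hides as an eigenvalue of C.
    r = random.randrange(N)
    return matmul(matmul(M, [[m, 0], [0, r]]), Minv)

def dec(C):
    # Undo the conjugation and read off the plaintext eigenvalue.
    return matmul(matmul(Minv, C), M)[0][0]

c1, c2 = enc(3), enc(5)
# Homomorphism: matrix sum/product of ciphertexts encrypts the
# sum/product of plaintexts, since conjugation by M commutes with both.
print(dec(matadd(c1, c2)))   # 8
print(dec(matmul(c1, c2)))   # 15
```

The attack in the paper exploits exactly this rigid linear-algebraic structure: two ciphertexts already constrain the encrypted keys linearly.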
LinkLeave a comment

Introducing Fault Tolerance into Threshold Password-Authenticated Key Exchange, by Ivan Pryvalov and [Apr. 20th, 2014|03:13 pm]

A threshold password-authenticated key exchange (T-PAKE) protocol allows a set of n servers to collectively authenticate a client with a human-memorizable password such that any subset of size greater than a threshold t can authenticate the client, while smaller subsets of servers learn no information about the password. With its protection against offline dictionary attacks, T-PAKE provides a practical solution for an important real-life problem with password authentication. However, existing T-PAKE constructions cannot tolerate any misbehavior---not even a crash---by a participating server during a protocol execution; the protocol has to be re-executed until all participating servers behave correctly. This not only presents a fault management challenge for the servers but, more importantly, can also leave clients frustrated.
In this work, we present a novel T-PAKE protocol which solves the above fault management problem by employing a batched, offline phase of distributed key generation (DKG). Our protocol is secure against malicious behavior by up to any t < n servers under the decisional Diffie-Hellman assumption in the random oracle model, and it ensures protocol completion for t < n/2. Moreover, it is efficient (16n + 7 exponentiations per client, 20n + 14 per server), performs explicit authentication in three communication rounds, and requires significantly fewer broadcast rounds than previous secure T-PAKE constructions. We have implemented our protocol and verified its efficiency using micro-benchmark experiments. Our experimental results show that the protocol introduces a computation overhead of only a few milliseconds at both the client and the server ends, and that it is practical for use in real-life authentication scenarios.
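The t-of-n threshold idea underlying T-PAKE and DKG can be illustrated with plain Shamir secret sharing. This is only a conceptual sketch of the threshold property (any t+1 servers suffice, so a crashed server is tolerated), not the paper's protocol: real DKG generates such shares jointly without any party ever holding the secret, and T-PAKE layers password authentication on top.

```python
import random

P = 2**61 - 1  # a prime field; the actual protocol works in a DDH group

def share(secret, t, n):
    # Shamir sharing: a degree-t polynomial with constant term 'secret';
    # any t+1 shares reconstruct it, t or fewer reveal nothing.
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

random.seed(7)
shares = share(123456, t=2, n=5)
# Any t+1 = 3 of the 5 servers can reconstruct, so two servers may
# crash without blocking the protocol.
print(reconstruct(shares[:3]))   # 123456
print(reconstruct(shares[2:]))   # 123456
```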
LinkLeave a comment

Fine grain Cross-VM Attacks on Xen and VMware are possible!, by Gorka Irazoqui Apecechea and Mehmet [Apr. 20th, 2014|03:13 pm]

This work exposes further vulnerabilities in virtualized cloud servers by mounting cross-VM cache attacks in Xen and VMware VMs, targeting AES running in the victim VM. Even though there is a rich literature on cache attacks on AES, so far only a single work, demonstrating a working attack on an ARM platform running an L4Re virtualization layer, has been published. Here we show that AES in a number of popular cryptographic libraries, including OpenSSL, PolarSSL, and Libgcrypt, is vulnerable to Bernstein's correlation attack when run in Xen and VMware (bare-metal version) VMs, the most popular VMs used by cloud service providers (CSPs) such as Amazon and Rackspace. We also show that the vulnerability persists even if the VMs are placed on different cores of the same machine. The results of this study show that AES, and data encrypted under AES, face a great security risk on popular cloud services.
LinkLeave a comment

leveret: Word of the Day [Apr. 20th, 2014|12:00 am]

leveret: a young hare.
LinkLeave a comment

Earth-size Kepler-186f [Apr. 20th, 2014|04:43 am]

Earth-size Kepler-186f

LinkLeave a comment

[ viewing | most recent entries ]
[ go | earlier ]