Papers by Pratyay Mukherjee

IACR Cryptology ePrint Archive, 2020
In a threshold symmetric-key encryption (TSE) scheme, encryption/decryption is performed by interacting with any threshold number of parties who hold parts of the secret keys. Security holds as long as the number of corrupt (possibly colluding) parties stays below the threshold. Recently, Agrawal et al. [CCS 2018] (alternatively called DiSE) initiated the study of TSE. They proposed a generic TSE construction based on any distributed pseudorandom function (DPRF). Instantiating with DPRF constructions by Naor, Pinkas and Reingold [Eurocrypt 1999] (also called NPR), they obtained several efficient TSE schemes with various merits. However, their security models and corresponding analyses consider only static (and malicious) corruption, in that the adversary fixes the set of corrupt parties at the beginning of the execution, before acquiring any information (except the public parameters), and is not allowed to change that set later. In this work we augment the DiSE TSE definitions to the fully adaptive (and malicious) setting, in that the adversary is allowed to corrupt parties dynamically at any time during the execution. The adversary may choose to corrupt a party depending on the information acquired thus far, as long as the total number of corrupt parties stays below the threshold. We also augment DiSE's DPRF definitions to support adaptive corruption. We show that their generic TSE construction, when plugged in with an adaptive DPRF (satisfying our definition), meets our adaptive TSE definitions. We provide an efficient instantiation of the adaptive DPRF, proven secure under the decisional Diffie-Hellman (DDH) assumption in the random oracle model. Our construction borrows ideas from Naor, Pinkas and Reingold's [Eurocrypt 1999] statically secure DDH-based DPRF (used in DiSE) and Libert, Joye and Yung's [PODC 2014] adaptively secure threshold signature.
Similar to DiSE, we also give an extension satisfying a strengthened adaptive DPRF definition, which in turn yields a stronger adaptive TSE scheme. For that, we construct a simple and efficient adaptive NIZK protocol for proving a specific commit-and-prove style statement in the random oracle model assuming DDH.
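The NPR-style DDH-based DPRF referenced above follows a simple pattern: the master key k is Shamir-shared, each party returns the partial evaluation H(x)^{k_i}, and any threshold-sized set of partial evaluations is combined by Lagrange interpolation "in the exponent". A minimal toy sketch, with insecure illustrative parameters of our own choosing (not the paper's code):

```python
import hashlib, random

# Toy sketch of an NPR-style DDH-based DPRF. The tiny parameters below are
# NOT secure; they only make the algebra visible.
q = 11          # prime order of the subgroup
p = 23          # safe prime, p = 2q + 1; group = quadratic residues mod p

def hash_to_group(x: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % p
    return pow(h, 2, p)          # squaring maps into the order-q subgroup

def shamir_share(secret: int, t: int, n: int) -> dict:
    coeffs = [secret] + [random.randrange(q) for _ in range(t - 1)]
    return {i: sum(c * pow(i, j, q) for j, c in enumerate(coeffs)) % q
            for i in range(1, n + 1)}

def lagrange_at_zero(i: int, ids) -> int:
    num = den = 1
    for j in ids:
        if j != i:
            num = num * (-j) % q
            den = den * (i - j) % q
    return num * pow(den, -1, q) % q     # Python 3.8+ modular inverse

def partial_eval(share_i: int, x: bytes) -> int:
    return pow(hash_to_group(x), share_i, p)       # H(x)^{k_i}

def combine(parts: dict) -> int:
    # Lagrange interpolation "in the exponent" recovers H(x)^k without
    # any party ever reconstructing k itself.
    out = 1
    for i, v in parts.items():
        out = out * pow(v, lagrange_at_zero(i, parts), p) % p
    return out

k = random.randrange(1, q)                 # master DPRF key (never reconstructed)
shares = shamir_share(k, t=3, n=5)         # 3-out-of-5 sharing
x = b"some input"
parts = {i: partial_eval(shares[i], x) for i in (1, 3, 5)}
assert combine(parts) == pow(hash_to_group(x), k, p)
```

Any three of the five parties suffice; the combination step is public and needs no secrets.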

Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021
Threshold cryptography enables cryptographic operations while keeping the secret keys distributed at all times. Agrawal et al. (CCS'18) propose a framework for Distributed Symmetric-key Encryption (DiSE). They introduce a new notion of Threshold Symmetric-key Encryption (TSE), in which encryption and decryption are performed by interacting with a threshold number of servers. However, the necessity of interaction on each invocation limits performance when encrypting large datasets, incurring heavy computation and communication on the servers. This paper proposes a new approach to resolve this problem by introducing a new notion called Amortized Threshold Symmetric-key Encryption (ATSE), which allows a "privileged" client (with access to sensitive data) to encrypt a large group of messages using a single interaction. Importantly, our notion requires a client to interact for decrypting each ciphertext, thus providing the same security (privacy and authenticity) guarantee as DiSE with respect to a "not-so-privileged" client. We construct an ATSE scheme based on a new primitive that we formalize as flexible threshold key-derivation (FTKD), which allows parties to interactively derive pseudorandom keys in different modes in a threshold manner. Our FTKD construction, which uses bilinear pairings, is based on a distributed variant of the left/right constrained PRF by Boneh and Waters (Asiacrypt'13). Despite our use of bilinear maps, our scheme achieves significant speed-ups due to the amortized interaction. Our experiments show 40x lower latency and 30x more throughput in some settings.
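The amortization pattern behind ATSE, one interaction yielding material from which many per-message keys are then derived locally, can be illustrated with a KDF-based toy. The real FTKD uses bilinear pairings and a constrained PRF; the sketch below is only our simplification, and all names in it are illustrative:

```python
import hashlib, hmac

def kdf(key: bytes, info: bytes) -> bytes:
    # HMAC-SHA256 as a simple key-derivation function
    return hmac.new(key, info, hashlib.sha256).digest()

# One (simulated) threshold interaction yields a seed scoped to a batch of
# messages; the input below stands in for what the servers would jointly derive.
batch_seed = kdf(b"jointly-derived-key-material", b"batch:2021-07")

# The privileged client then derives per-message keys locally,
# with no further rounds of interaction.
keys = [kdf(batch_seed, f"msg:{i}".encode()) for i in range(1000)]
assert len(set(keys)) == 1000   # all derived keys are distinct
```

The saving is exactly the per-message round trips: one interaction instead of one thousand.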
Efficient Attribute-Based Proxy Re-Encryption with Constant Size Ciphertexts
Progress in Cryptology – INDOCRYPT 2020, 2020

Theoretical Computer Science, 2015
Random Oracles serve as an important heuristic for proving security of many popular and important cryptographic primitives. At the same time, they are criticized due to the impossibility of practical instantiation. Programmability is one of the most important features behind the power of Random Oracles. Unfortunately, in standard hash functions, programmability is limited. In recent years, there have been endeavours to restrict the programmability of random oracles. However, we observe that the existing models allow adaptive programming, that is, the reduction can program the random oracle adaptively in the online phase of the security game depending on the queries received from the adversary, and thus they are quite far from the standard model. In this paper, we introduce a new feature called non-adaptive programmability of random oracles, where the reduction can program the random oracle only in the pre-processing phase. In particular, it might non-adaptively program the RO by setting a regular function as a post-processor on the oracle output. We call this new model the Non-Adaptively-Programmable Random Oracle (NAPRO) model and show that it is actually equivalent to the so-called Non-Programmable Random Oracle (NPRO) model introduced by Fischlin et al. [8], hence too restrictive. However, we also propose a slightly stronger model, called the Weak-Non-Adaptively-Programmable Random Oracle (WNAPRO) model, where in addition to non-adaptive programming, the reduction is allowed to adaptively extract some "auxiliary information" from the RO; this "auxiliary information" interestingly plays a crucial role in the security proof, allowing several important RO proofs to go through. In particular, we prove the following results in the WNAPRO model:
1. The RSA-Full-Domain-Hash signature scheme (RSA-FDH) and the Boneh-Franklin ID-based encryption scheme (BF-IDE) are secure in the WNAPRO model. This is in sharp contrast to strong blackbox proofs of FDH schemes, where full programmability seems to be necessary.
2. Shoup's trapdoor-permutation-based key-encapsulation mechanism (TDP-KEM) cannot be proven secure via blackbox reduction from ideal trapdoor permutations in the WNAPRO model.
Parallelization of the Wiedemann Large Sparse System Solver over Large Prime Fields
An overview of eSTREAM ciphers

Advances in Cryptology – CRYPTO 2017, 2017
Non-malleable codes, introduced by Dziembowski, Pietrzak and Wichs at ICS 2010, are key-less coding schemes in which mauling attempts on an encoding of a given message, w.r.t. some class of tampering adversaries, result in a decoded value that is either identical or unrelated to the original message. Such codes are very useful for protecting arbitrary cryptographic primitives against tampering attacks on memory. Clearly, non-malleability is hopeless if the class of tampering adversaries includes the decoding and encoding algorithms. To circumvent this obstacle, the majority of past research focused on designing non-malleable codes for various tampering classes, albeit assuming that the adversary is unable to decode. Nonetheless, in many concrete settings, this assumption is not realistic. In this paper, we explore one particular such scenario where the class of tampering adversaries naturally includes the decoding (but not the encoding) algorithm. In particular, we consider the class of adversaries that are restricted in terms of memory/space. Our main contributions can be summarized as follows: We initiate a general study of non-malleable codes resisting space-bounded tampering. In our model, the encoding procedure requires large space, but decoding can be done in small space, and thus can also be performed by the adversary. Unfortunately, in such a setting it is impossible to achieve non-malleability in the standard sense, and we ...
Continuous Space-Bounded Non-malleable Codes from Stronger Proofs-of-Space
Advances in Cryptology – CRYPTO 2019, 2019
Non-malleable codes are encoding schemes that provide protections against various classes of tamp... more Non-malleable codes are encoding schemes that provide protections against various classes of tampering attacks. Recently Faust et al. (CRYPTO 2017) initiated the study of space-bounded non-malleable codes that provide such protections against tampering within small-space devices. They put forward a construction based on any non-interactive proof-of-space (NIPoS). However, the scheme only protects against an a priori bounded number of tampering attacks.

Progress in Cryptology – INDOCRYPT 2018, 2018
Multilinear maps enable homomorphic computation on encoded values and a public procedure to check whether a computation on the encoded values results in a zero. Encodings in known candidate constructions of multilinear maps have a (growing) noise component, which is crucial for security. For example, noise in GGH13 multilinear maps grows with the number of levels that need to be supported and must remain below the maximal noise supported by the multilinear map for correctness. A smaller maximal supported noise is desirable both for security and for efficiency. In this work, we put forward new candidate constructions of obfuscation for which the maximal supported noise is polynomial (in the security parameter). Our constructions are obtained by instantiating a modification of Lin's obfuscation construction (EUROCRYPT 2016) with composite-order variants of the GGH13 multilinear maps. For these schemes, we show that the maximal supported noise only needs to grow polynomially in the security parameter. We prove the security of these constructions in the weak multilinear map model that captures all known vulnerabilities of GGH13 maps. Finally, we investigate the security of the considered composite-order variants of GGH13 multilinear maps from a cryptanalytic standpoint.

IACR Cryptol. ePrint Arch., 2021
We study the problem of secure multiparty computation for functionalities where only one party receives the output, which we refer to as solitary MPC. Recently, Halevi et al. (TCC 2019) studied fully secure (i.e., with guaranteed output delivery) solitary MPC and showed the impossibility of such protocols for certain functionalities when there is no honest majority among the parties. In this work, we study fully secure solitary MPC in the honest-majority setting and focus on its round complexity. We note that a broadcast channel or public-key infrastructure (PKI) setup is necessary for an n-party protocol against malicious adversaries corrupting up to t parties where n/3 ≤ t < n/2. Therefore, we study the following settings and ask: can fully secure solitary MPC be achieved in fewer rounds than fully secure standard MPC, in which all parties receive the output? • When there is a broadcast channel and no PKI: – We start with a negative answer to the above question. In part...

Non-malleable codes, introduced by Dziembowski, Pietrzak, and Wichs (ICS'10), provide the guarantee that if a codeword c of a message m is modified by a tampering function f to c', then c' either decodes to m or to "something unrelated" to m. In recent literature, a lot of focus has been on explicitly constructing such codes against large and natural classes of tampering functions, such as the split-state model, in which the tampering function operates on different parts of the codeword independently. In this work, we consider a stronger adversarial model called the block-wise tampering model, in which we allow tampering to depend on more than one block: if a codeword consists of two blocks c = (c1, c2), then the first tampering function f1 could produce a tampered part c'_1 = f1(c1) and the second tampering function f2 could produce c'_2 = f2(c1, c2), depending on both c1 and c2. The notion similarly extends to multiple blocks, where tampering of block ci could ha...
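The block-wise model described above can be made concrete with a toy pair of tampering functions: the first sees only the first block, while the second may depend on both. The functions below are arbitrary examples of ours, not from the paper:

```python
# Toy illustration of the block-wise tampering model: f1 depends only on
# the first block, f2 may depend on both blocks.
c1, c2 = b"first-block", b"second-block"

def f1(c1: bytes) -> bytes:
    return c1[::-1]                      # depends on c1 alone

def f2(c1: bytes, c2: bytes) -> bytes:
    pad = c1.ljust(len(c2), b"\x00")[:len(c2)]
    return bytes(a ^ b for a, b in zip(pad, c2))   # depends on c1 AND c2

tampered = (f1(c1), f2(c1, c2))
assert tampered != (c1, c2)
```

The extra power over the split-state model is precisely f2's view of c1.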

IACR Cryptol. ePrint Arch., 2016
Indistinguishability obfuscation is a central primitive in cryptography. Security of the existing multilinear map constructions on which current obfuscation candidates are based is poorly understood. In a few words, multilinear maps allow for checking whether an arbitrary bounded-degree polynomial on hidden values evaluates to zero. All known attacks on multilinear maps depend on the information revealed by computations that result in encodings of zero. This includes the recent annihilation attacks of Miles, Sahai and Zhandry [EPRINT 2016/147] on obfuscation candidates as a special case. Building on a modification of the Garg, Gentry and Halevi [EUROCRYPT 2013] multilinear maps (GGH for short), we present a new obfuscation candidate that is resilient to these vulnerabilities. Specifically, in our construction the results of all computations yielding a zero provably hide all the secret system parameters. This is the first obfuscation candidate that weakens the security needed from the...

Annihilation attacks, introduced in the work of Miles, Sahai, and Zhandry (CRYPTO 2016), are a class of polynomial-time attacks against several candidate indistinguishability obfuscation (IO) schemes built from the Garg, Gentry, and Halevi (EUROCRYPT 2013) multilinear maps. In this work, we provide a general, efficiently-testable property of two single-input branching programs, called partial inequivalence, which we show is sufficient for our variant of annihilation attacks on several obfuscation constructions based on GGH13 multilinear maps. We give examples of pairs of natural NC1 circuits which, when processed via Barrington's Theorem, yield pairs of branching programs that are partially inequivalent. As a consequence, we are also able to show that examples of "bootstrapping circuits" (albeit somewhat artificially crafted) used to obtain obfuscations for all circuits (given an obfuscator for NC1 circuits) in certain settings also yield partially inequivalent branchi...

Theory of Cryptography, 2020
We present a reusable two-round multi-party computation (MPC) protocol from the Decisional Diffie-Hellman (DDH) assumption. In particular, we show how to upgrade any secure two-round MPC protocol to allow reusability of its first message across multiple computations, using Homomorphic Secret Sharing (HSS) and pseudorandom functions in NC1, each of which can be instantiated from DDH. In our construction, if the underlying two-round MPC protocol is secure against semi-honest adversaries (in the plain model), then so is our reusable two-round MPC protocol. Similarly, if the underlying two-round MPC protocol is secure against malicious adversaries (in the common random/reference string model), then so is our reusable two-round MPC protocol. Previously, such reusable two-round MPC protocols were only known under assumptions on lattices. At a technical level, we show how to upgrade any two-round MPC protocol to a first-message-succinct two-round MPC protocol, where the first message of the protocol is generated independently of the computed circuit (though it is not reusable). This step uses homomorphic secret sharing (HSS) and low-depth pseudorandom functions. Next, we show a generic transformation that upgrades any first-message-succinct two-round MPC to allow for reusability of its first message.
Lecture Notes in Computer Science, 2019

Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, 2018
Threshold cryptography provides a mechanism for protecting secret keys by sharing them among multiple parties, who then jointly perform cryptographic operations. An attacker who corrupts up to a threshold number of parties cannot recover the secrets or violate security. Prior works in this space have mostly focused on definitions and constructions for public-key cryptography and digital signatures, and thus do not capture the security concerns and efficiency challenges of symmetric-key based applications, which commonly use long-term (unprotected) master keys to protect data at rest, authenticate clients on enterprise networks, and secure data and payments on IoT devices. We put forth the first formal treatment of distributed symmetric-key encryption, proposing new notions of correctness, privacy and authenticity in the presence of malicious attackers. We provide strong and intuitive game-based definitions that are easy to understand and yield efficient constructions. We propose a generic construction of threshold authenticated encryption based on any distributed pseudorandom function (DPRF). When instantiated with the two different DPRF constructions proposed by Naor, Pinkas and Reingold (Eurocrypt 1999) and our enhanced versions, we obtain several efficient constructions meeting different security definitions. We implement these variants and provide extensive performance comparisons. Our most efficient instantiation uses only symmetric-key primitives and achieves a throughput of up to 1 million encryptions/decryptions per second, or alternatively sub-millisecond latency with up to 18 participating parties.
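The generic construction from a DPRF can be sketched at a high level: commit to the message, evaluate the DPRF on the commitment, and use the output as an encryption key, so that decryption can re-verify the commitment. In the toy below the distributed evaluation is replaced by a single HMAC call and the commitment by a plain hash; both are stand-ins of ours, not the paper's protocol:

```python
import hashlib, hmac, os

def dprf(master_key: bytes, x: bytes) -> bytes:
    # Stand-in: in the real scheme this value is computed jointly by a
    # threshold number of parties, none of whom holds the full key.
    return hmac.new(master_key, x, hashlib.sha256).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def stream(key: bytes, n: int) -> bytes:
    # Hash-counter keystream, long enough to mask n bytes
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

MASTER = os.urandom(32)

def encrypt(m: bytes):
    rho = os.urandom(16)
    alpha = hashlib.sha256(m + rho).digest()     # commitment to m
    w = dprf(MASTER, alpha)                      # PRF evaluated on the commitment
    return alpha, xor(stream(w, len(m) + 16), m + rho)

def decrypt(alpha: bytes, ct: bytes) -> bytes:
    w = dprf(MASTER, alpha)
    pt = xor(stream(w, len(ct)), ct)
    m, rho = pt[:-16], pt[-16:]
    assert hashlib.sha256(m + rho).digest() == alpha  # authenticity check
    return m

alpha, ct = encrypt(b"secret data")
assert decrypt(alpha, ct) == b"secret data"
```

Because the key is derived from the commitment, a tampered ciphertext fails the commitment check at decryption, which is the source of the authenticity guarantee.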

Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, 2018
Token-based authentication is commonly used to enable a single-sign-on experience on the web, in mobile applications and on enterprise networks using a wide range of open standards and network authentication protocols: clients sign on to an identity provider using their username/password to obtain a cryptographic token generated with a master secret key, and store the token for future accesses to various services and applications. The authentication server(s) are single points of failure that, if breached, enable attackers to forge arbitrary tokens or mount offline dictionary attacks to recover client credentials. Our work is the first to introduce and formalize the notion of password-based threshold token authentication, which distributes the role of an identity provider among n servers. Any t servers can collectively verify passwords and generate tokens, while no t − 1 servers can forge a valid token or mount offline dictionary attacks. We then introduce PASTA, a general framework that can be instantiated using any threshold token generation scheme, wherein clients can "sign on" using a two-round (optimal) protocol that meets our strong notions of unforgeability and password-safety. We instantiate and implement our framework in C++ using two threshold message authentication codes (MACs) and two threshold digital signatures with different trade-offs. Our experiments show that the overhead of protecting secrets and credentials against breaches in PASTA, i.e. compared to a naïve single-server solution, is extremely low (1-5%) in the most likely setting where client and servers communicate over the internet. The overhead is higher in the case of MAC-based tokens over a LAN (though still only a few milliseconds) due to public-key operations in PASTA. We show, however, that this cost is inherent by proving a symmetric-key-only solution impossible.

Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, 2020
We use biometrics like fingerprints and facial images to identify ourselves to our mobile devices and log on to applications every day. Such authentication is internal-facing: we provide a measurement on the same device where the template is stored. If our personal devices could participate in external-facing authentication too, where the biometric measurement is captured by a nearby external sensor, then we could also enjoy a frictionless authentication experience in a variety of physical spaces like grocery stores, convention centers, ATMs, etc. The open setting of a physical space brings forth important privacy concerns, though. We design a suite of secure protocols for external-facing authentication based on the cosine similarity metric, which provide privacy both for user templates stored on their devices and for the biometric measurement captured by external sensors in this open setting. The protocols provide different levels of security, ranging from passive security with some leakage to active security with no leakage at all. With the help of new packing techniques, zero-knowledge proofs for Paillier encryption, and careful protocol design, our protocols achieve very practical performance numbers. For templates of length 256 with elements of size 16 bits each, our fastest protocol takes merely 0.024 seconds to compute a match, but even the slowest one takes no more than 0.12 seconds. The communication overhead of our protocols is very small too. The passive and actively secure protocols (with some leakage) need to exchange just 16.5KB and 27.8KB of data, respectively. The first message is designed to be reusable and, if sent in advance, would cut the overhead down to just 0.5KB and 0.8KB, respectively.
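The underlying matching metric is plain cosine similarity between the stored template and the fresh measurement; the protocols in the paper compute it under encryption, but the plaintext computation (with vectors and an acceptance threshold of our own choosing, purely for illustration) looks like:

```python
import math

def cosine_similarity(u, v):
    # dot(u, v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

template = [12, 7, 3, 9]       # stored on the user's device (illustrative)
measurement = [11, 8, 3, 10]   # captured by the external sensor (illustrative)
THRESHOLD = 0.95               # assumed acceptance threshold, not from the paper

match = cosine_similarity(template, measurement) >= THRESHOLD
```

In the actual protocols the template elements are small fixed-width integers (16 bits in the reported benchmarks), which is what makes the packing of many elements into one Paillier plaintext effective.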

Theory of Cryptography, 2016
All known candidate indistinguishability obfuscation (iO) schemes rely on candidate multilinear maps. Until recently, the strongest proofs of security available for iO candidates were in a generic model that only allows "honest" use of the multilinear map. Most notably, in this model the zero-test procedure only reveals whether an encoded element is 0, and nothing more. However, this model is inadequate: there have been several attacks on multilinear maps that exploit extra information revealed by the zero-test procedure. In particular, Miles, Sahai and Zhandry (Crypto'16) recently gave a polynomial-time attack on several iO candidates when instantiated with the multilinear maps of Garg, Gentry, and Halevi (Eurocrypt'13), and also proposed a new "weak multilinear map model" that captures all known polynomial-time attacks on GGH13. In this work, we give a new iO candidate which can be seen as a small modification or generalization of the original candidate of Garg, Gentry, and Halevi. This paper is a merged version of [GMS16, MSZ16b].