Multi-Factor Credential Hashing for Asymmetric Brute-Force Attack Resistance
Vivek Nair (UC Berkeley), Dawn Song (UC Berkeley)
Since the introduction of bcrypt in 1999, adaptive password hashing functions,
whereby brute-force resistance increases symmetrically with computational
difficulty for legitimate users, have been our most powerful post-breach
countermeasure against credential disclosure. Unfortunately, the relatively low
tolerance of users to added latency places an upper bound on the deployment of
this technique in most applications. In this paper, we present a multi-factor
credential hashing function (MFCHF) that incorporates the additional entropy of
multi-factor authentication into password hashes to provide asymmetric
resistance to brute-force attacks. MFCHF provides full backward compatibility
with existing authentication software (e.g., Google Authenticator) and hardware
(e.g., YubiKeys), with support for common usability features like factor
recovery.
The result is a 10^6 to 10^48 times increase in the difficulty of cracking
hashed credentials, with little added latency or usability impact.
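To make the asymmetry concrete, below is a minimal, hypothetical sketch in Python (not the paper's actual MFCHF construction) in which the stored verifier is derived from both the password and a TOTP secret, so an attacker holding only the hash database must guess both factors. How the verifier obtains the second-factor material without storing it in the clear is the crux of the paper and is not shown here; all function names and parameters are illustrative.

import hashlib
import hmac
import os

def hash_credentials(password: str, totp_secret: bytes, salt: bytes = None):
    """Derive a single slow hash from both authentication factors."""
    if salt is None:
        salt = os.urandom(16)
    # Bind the two factors together before the adaptive (slow) hashing step.
    combined = hmac.new(totp_secret, password.encode(), hashlib.sha256).digest()
    # scrypt stands in here for any adaptive hash (bcrypt, Argon2, ...).
    digest = hashlib.scrypt(combined, salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify(password: str, totp_secret: bytes, salt: bytes, expected: bytes) -> bool:
    return hmac.compare_digest(hash_credentials(password, totp_secret, salt)[1], expected)

salt, stored = hash_credentials("correct horse battery staple", b"JBSWY3DPEHPK3PXP")
assert verify("correct horse battery staple", b"JBSWY3DPEHPK3PXP", salt, stored)

In this toy version, guessing the password alone no longer suffices to test candidates against the stolen digest, which is the source of the asymmetric hardening the abstract describes.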
CHEX-MIX: Combining Homomorphic Encryption with Trusted Execution Environments
for Oblivious Inference in the Cloud
Deepika Natarajan (University of Michigan-Ann Arbor), Andrew Loveless (University of Michigan-Ann Arbor), Wei Dai (Microsoft Research), Ron Dreslinski (University of Michigan-Ann Arbor)
Data, when coupled with state-of-the-art machine learning models, can enable
remarkable applications. But there exists an underlying tension: users wish to
keep their data private, and model providers wish to protect their intellectual
property. Homomorphic encryption (HE) and
multi-party computation (MPC) techniques have been proposed as solutions to this
problem; however, both techniques require model providers to fully trust the
server performing the machine learning computation. This limits the scale of
inference applications since it prevents model providers from leveraging shared
public cloud infrastructures.
In this work, we present CHEX-MIX, a solution to the problem of privacy-preserving
machine learning between two mutually distrustful parties in an untrusted cloud
setting. CHEX-MIX relies on a combination of HE and trusted execution environments
(TEEs), using HE to provide clients
with confidentiality guarantees and TEEs to provide model providers with
confidentiality guarantees and protect the integrity of computation from
malicious cloud adversaries. Unlike prior solutions to this problem, such as
multi-key HE, single-key HE, MPC, or TEE-only techniques, our solution assumes
that both clients and the cloud can be malicious, makes no collusion
assumptions, and frees model providers from needing to maintain private online
infrastructures. Our implementation results show that CHEX-MIX can execute with
high efficiency and low communication cost, while providing security guarantees
unaddressed by prior work. Compared, for example, to a recent multi-key HE work
that allows (partial) offload to an untrusted cloud, CHEX-MIX achieves 3× lower
communication cost and 3× faster computation time.
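The following is a structural sketch, in Python with placeholder stand-ins (no real cryptography, and not the CHEX-MIX API), of the data flow we read from the abstract: the client's input stays encrypted under the client's HE key, the model is provisioned only into an attested enclave on the untrusted cloud, and homomorphic evaluation inside the TEE yields a result only the client can decrypt. All class and function names are illustrative.

from dataclasses import dataclass

@dataclass
class Ciphertext:
    payload: list  # stands in for an HE ciphertext under the client's key

class EnclaveServer:
    """Runs inside a TEE on the untrusted cloud and holds the plaintext model."""
    def __init__(self):
        self._weights = None

    def attest(self) -> str:
        # Remote attestation: the model provider checks this measurement before
        # releasing its weights, so only the expected enclave code sees them.
        return "measurement-of-enclave-code"

    def provision_model(self, weights: list) -> None:
        self._weights = weights

    def infer(self, ct: Ciphertext) -> Ciphertext:
        # Homomorphic evaluation: the enclave computes on encrypted inputs with
        # plaintext weights and never sees the client's data. (Placeholder math.)
        return Ciphertext([w * x for w, x in zip(self._weights, ct.payload)])

enclave = EnclaveServer()
assert enclave.attest() == "measurement-of-enclave-code"  # provider-side check
enclave.provision_model([0.5, 0.25, 0.125])               # model stays in the TEE
encrypted_input = Ciphertext([1, 2, 3])                    # sent by the client
encrypted_result = enclave.infer(encrypted_input)          # cloud learns nothing
print(encrypted_result.payload)                            # only the client can decrypt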
A Generic Obfuscation Framework for Preventing ML-Attacks on Strong-PUFs through
Exploitation of DRAM-PUFs
Owen Millwood (University of Sheffield), Meltem Kurt Pehlivanoğlu (Kocaeli University), Jack Miskelly (Queen's University Belfast), Aryan Mohammadi Pasikhani (University of Sheffield), Prosanta Gope (University of Sheffield), Elif Bilge Kavun (University of Passau)
Designing sufficiently secure systems for low-power devices is a difficult
problem to tackle considering the limited power and computational resources
available. With the ubiquitous adoption of the Internet-of-Things (IoT) not
appearing to be slowing any time soon, resource-constrained security is more
important than ever. Physical Unclonable Functions (PUFs) have gained momentum
in recent years for their potential in enabling strong security through
generation of unique identifiers based on entropy derived from unique
manufacturing variations. Strong-PUFs, however, which are desirable for
authentication protocols, have often been shown to be insecure against Machine
Learning Modelling Attacks (ML-MA). Recently, some schemes have been proposed to
enhance security against ML-MA through post-processing of the PUF; however, such
schemes often fail to sufficiently uphold security, require too large an
additional overhead, or must insecurely store key data in Non-Volatile Memory.
In this work, we propose a generic framework for securing Strong-PUFs against
ML-MA through obfuscation of challenge and response data by exploiting a
DRAM-PUF to supplement a One-Way Function (OWF) which can be implemented using
the available resources on an FPGA platform. Our proposed scheme enables
reconfigurability, strong security, and one-wayness. We conduct ML-MA using
various classifiers to rigorously evaluate the performance of our scheme across
multiple 16-bit and 32-bit Arbiter-PUF (APUF) variants, showing that it reduces
model accuracy to around 50% (random guessing) for each PUF, and we evaluate
the uniformity and uniqueness of the final responses, demonstrating that ideal
properties are maintained. Even though we demonstrate our proposal through a
DRAM-PUF, our scheme can be extended to work with memory-based PUFs in general.
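As a rough illustration of the general pattern described above (not the paper's exact construction), the Python sketch below keys a one-way function with a DRAM-PUF read-out so that the challenge actually applied to the Arbiter-PUF is hidden from an eavesdropper; both PUFs are simulated with hashes, and all names and parameters are illustrative.

import hashlib
import hmac

def dram_puf(address: int) -> bytes:
    """Placeholder for a DRAM-PUF read-out at a given address (real devices
    derive this from decay or latency behaviour of the DRAM cells)."""
    return hashlib.sha256(b"dram-cell-fingerprint" + address.to_bytes(4, "big")).digest()

def arbiter_puf(challenge: bytes) -> int:
    """Placeholder for a 16-bit Arbiter-PUF response."""
    return int.from_bytes(hashlib.sha256(b"apuf-delays" + challenge).digest()[:2], "big")

def respond(public_challenge: bytes, address: int) -> int:
    # The DRAM-PUF keys a one-way function that derives the internal challenge,
    # so observed (public challenge, response) pairs do not expose a learnable
    # mapping of the Arbiter-PUF itself.
    key = dram_puf(address)
    internal_challenge = hmac.new(key, public_challenge, hashlib.sha256).digest()[:2]
    return arbiter_puf(internal_challenge)

print(respond(b"\x12\x34", address=7))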
Automatic verification of transparency protocols
Vincent Cheval (INRIA Paris, France), José Moreira (Valory AG, Switzerland), Mark Ryan (University of Birmingham)
We introduce a new methodology and new features in ProVerif, an automatic tool
for verifying security protocols. This methodology and these features are aimed
at protocols which involve sophisticated data types that have strong properties,
such as Merkle trees, which allow compact proofs of data presence and tree
extension. Such data types are widely used in protocols in systems that use
distributed ledgers and/or blockchains.
With our methodology, it is possible to describe the data type quite abstractly,
using ProVerif axioms, and prove the correctness of the protocol using those
axioms as assumptions. Then, in separate steps, one can define one or more
concrete implementations of the data type, and again use ProVerif to show that
the implementations satisfy the assumptions that were coded as axioms. This
helps make compositional proofs, splitting the proof burden into several
manageable pieces.
To enable this methodology, we introduce new capabilities in ProVerif, by
extending the class of lemmas and axioms that it can reason with. Specifically,
we allow user-defined predicates, attacker predicates and message predicates to
appear in lemmas and axioms, and define their semantics. We show the soundness
of the implementation of this idea with respect to the semantics.
We illustrate the methodology and features by providing the first formal
verification of two transparency protocols which precisely models the Merkle
tree data structure. The two protocols are transparent decryption (sometimes
called accountable decryption), and certificate transparency. Transparent
decryption is a way of ensuring that decryption operations are visible to people
who are affected by them. This can be used, for example, to support privacy: it
can mean that a subject is alerted to the fact that information about them has
been decrypted. Certificate transparency is an Internet security standard for
monitoring and auditing the issuance of digital certificates.
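Since the verification hinges on modelling Merkle trees precisely, a compact worked example may help. The Python sketch below (a generic construction, not the paper's ProVerif model) builds a Merkle root and checks an inclusion proof of logarithmic size, which is the "compact proof of data presence" the abstract refers to.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves: list, index: int) -> list:
    """Sibling hashes from the leaf up to the root (logarithmic in tree size)."""
    proof, level = [], [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(leaf: bytes, proof: list, root: bytes) -> bool:
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

leaves = [b"cert-A", b"cert-B", b"cert-C", b"cert-D"]
root = merkle_root(leaves)
proof = inclusion_proof(leaves, 2)               # prove b"cert-C" is in the tree
assert verify_inclusion(b"cert-C", proof, root)

A verifier holding only the published root can check such a proof without seeing the rest of the tree, which is what makes Merkle trees suitable for certificate transparency logs and transparent-decryption ledgers.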