Cryptographic Security Reduction (2018, Guo)
Introduction to Security Reduction
Fuchun Guo • Willy Susilo • Yi Mu
Fuchun Guo
School of Computing & Information Technology
University of Wollongong
Wollongong, New South Wales, Australia

Willy Susilo
School of Computing & Information Technology
University of Wollongong
Wollongong, New South Wales, Australia
Yi Mu
School of Computing
& Information Technology
University of Wollongong
Wollongong, New South Wales, Australia
This Springer imprint is published by the registered company Springer International Publishing AG
part of Springer Nature.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
To my lovely wife Yizhen,
two adorable sons John and Kevin,
and my kindly mother Suhua.
To the memory of my father Yongming.
–Fuchun Guo
To my family!
–Yi Mu
Preface
Acknowledgements
We finally accomplished something that is meaningful and useful for our research
community. Our primary goal is to make confusing concepts of security reduction
vanish, and to provide a clear guide on how to program correct security reductions.
Here, we would like to record some of our major milestones as well as to acknowl-
edge several people who have helped us complete this book.
The time that we started to write this book can be traced back to the second half
of 2013, after Dr. Fuchun Guo received his PhD degree in July of that year. At that
time, being a research assistant, Fuchun was invited to co-supervise Professor Willy
Susilo and Professor Yi Mu’s PhD students at the University of Wollongong, Aus-
tralia. Fuchun’s primary task was to help Willy and Yi train PhD students with very
little background in public-key cryptography. It is evident that there is a big gap
between savvy researchers and PhD students just starting on their PhD journeys.
Furthermore, we found it was really ineffective for our students to read papers by
themselves to understand security proofs and some tricky methods in security
reductions. How to train our students quickly remained an elusive problem, as we
had to repeat the same interactions with each individual student day after day. We collected some
basic but important knowledge that all our students must master in their studies to
conduct research in public-key cryptography. Then we decided to write this book to-
gether to help our students. Hence, the original motivation of writing this book was
to save our time in the training of our students. We do hope that this book will also
benefit others who want to start their research careers in public-key cryptography,
or others who want to study the techniques used in programming correct security
reductions.
The first version of this book was completed in April 2015. In that version, Chap-
ter 4 had only about 50 pages. That version was rather incomprehensible with a lot
of logic and consistency problems. Then, we started to polish the book, which was
completed in August 2017. It took 28 months to clarify many important concepts
that are contained in this book. We patiently crafted Chapter 4 to ensure that all con-
cepts and knowledge are presented clearly and are easy to understand. Originally,
we either did not fully understand many concepts or did not clearly know how to
explain them. A significant amount of time was used to think about how to explain
each concept and exemplify it with a simple yet clear example. We were passionate
about completing this book, without regard to any time constraints. The
external proofreading by our students was started in September 2017 and completed
in March 2018. More than ten PhD students were involved in the proofreading. We
believe this was an invaluable experience for us, and one that makes a very nice story to
share and remember. This book would never have been completed without the hard
work of our students.
At the early stage of this book writing, we received a lot of feedback from the
process of training our students. This invaluable experience helped us see which
concepts are hard for students to understand and how to clearly explain these. We
are indebted to our colleagues and students: Rongmao Chen, Jianchang Lai, Peng
Jiang, Nan Li, and Yinhao Jiang. They provided insightful feedback and thoughts
when we trained them in public-key cryptography. We can now proudly say that
these people have now completed their PhD studies and they have mastered the
required skills as independent researchers in public-key cryptography, thanks to the
information and training that are provided in this book.
When we completed the writing of this book, we decided to invite our PhD
students to read it first. Unsurprisingly, our students still found many confusing
concepts and unclear explanations. They provided a lot of invaluable
comments and advice that have been used to improve the quality of this book. In
particular, more than 20 pages were added to Chapter 4 to improve the clarity of
this important chapter. Specifically, we would like to thank these people: Jianchang
Lai, Zhen Zhao, Ge Wu, Peng Jiang, Zhongyuan Yao, Tong Wu, and Shengming Xu
for their helpful advice and feedback.
The first manuscript given to students for proofreading was full of typos and
grammatical errors. We would like to thank the following people for their help in
improving this book: Jianchang Lai, Fatemeh Rezaeibagha, Zhen Zhao, Ge Wu,
Peng Jiang, Xueqiao Liu, Zhongyuan Yao, Tong Wu, Shengming Xu, Binrui Zhu, Ke
Wang, Yannan Li, and Yanwei Zhou.
We would also like to thank all authors of published references that have been
cited in this book, especially those authors whose schemes have been used as exam-
ples. We merely reorganized this knowledge and put it together with our understand-
ing and our logic. We would also like to thank the Springer editor Ronan Nugent and
the copy-editors of Springer, who gave us a lot of insightful comments and advice
that have indeed improved the quality and clarity of this book.
Last but not least, we would like to thank our families who have always been very
supportive. We spent so much time in editing and correcting this book, and without
their patience, it would have been impossible to complete. Thank you.
Finally, we hope that the reader will find this book useful.
Chapter 1
Guide to This Book
Fig. 1.1 Steps for constructing a provably secure scheme in public-key cryptography
There are two popular methods for security proofs in public-key cryptography,
namely game-based proof and simulation-based proof. The former can also be clas-
sified into two categories, i.e., security reduction and game hopping. This book cov-
ers only security reduction, which starts with the assumption that there exists an
adversary who can break the proposed scheme. In security proofs with security re-
duction, a concrete security reduction depends on the corresponding cryptosystem,
the scheme, and the underlying hard problem. There is no universal approach to
program the security reduction for all schemes. This book introduces security re-
ductions for three specific cryptosystems: digital signatures, public-key encryption,
and identity-based encryption. All examples and schemes given in this book are
constructed over cyclic groups with or without a bilinear map.
The contents of each chapter are outlined as follows. Chapter 2 briefly revisits
cryptographic notions, algorithms, and security models. This chapter can be skipped
if the reader is familiar with the definitions. Chapter 3 introduces the foundations of
group-based cryptography: we introduce finite fields, cyclic groups, bilinear pairing,
and hash functions. Our introduction mainly focuses on efficiently computable oper-
ations and the group representation. We minimize the description of the preliminary
knowledge of group-based cryptography. Chapter 4 is the most important chapter
in this book. In this chapter, we classify and explain the fundamental concepts of
security reduction, and also summarize how, in general, to program a full security
reduction for digital signatures and encryption. We draw examples from group-based
cryptography where necessary to explain the concepts. The remaining chapters of
this book are dedicated to the security proofs of some selected schemes, in order to
help the reader understand how to program security reductions correctly. The
security proof of each selected scheme corresponds to a useful reduction technique.
About Notation. This book prefers to use the following notation. The same notation
may have different meanings in different applications.
For mathematical primitives:
• q, p: prime numbers.
• F_{q^n}: the finite field with characteristic q, where n is a positive integer.
• k: the embedding degree of an extension field, denoted by F_{(q^n)^k}.
• Z_p: the integer set {0, 1, 2, ..., p − 1}.
• Z_p^*: the integer set {1, 2, ..., p − 1}.
• H: a general group.
• G: a cyclic group of prime order p.
• u, v: general elements in a field or a group.
• g, h: group elements of a cyclic group.
• w, x, y, z: integers in an integer set, such as Z_p.
• e: a bilinear map.
“Seeing once is better than hearing 100 times, but doing once is better than
seeing 100 times.”
To best use this book, you can try to prove (Doing) the schemes provided in the book
based on the knowledge in Chapter 4, prior to reading the security proofs given in
the book (Seeing). You will understand more about which part is the most difficult
for you and how security reductions can be programmed correctly. The reader can
visit the authors’ homepages to find supplementary resources for this book.
Chapter 2
Notions, Definitions, and Models
A digital signature is a fundamental tool in cryptography that has been widely ap-
plied to authentication and non-repudiation. Take authentication as an example. A
party, say Alice, wants to convince all other parties that a message m is published
by her. To do so, Alice generates a public/secret key pair (pk, sk) and publishes the
public key pk to all verifiers. To generate a signature σm on m, she digitally signs m
with her secret key sk. Upon receiving (m, σm ), any receiver who already knows pk
can verify the signature σm and confirm the origin of the message m.
A digital signature scheme consists of the following four algorithms.
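The four algorithms are not reproduced here, but a common formulation splits a signature scheme into system-parameter generation, key generation, signing, and verification. The following sketch is a toy Schnorr-style signature over a prime-order subgroup of Z_q^*; the parameter sizes, the use of SHA-256 as the hash, and all function names are illustrative assumptions (the parameters are far too small for real security), not the schemes studied in this book.

```python
import hashlib
import random

def is_prime(n):
    # Deterministic trial division; fine for the toy parameter sizes below.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def sysgen():
    # SysGen: find a safe prime q = 2p + 1 near 2^28; then g = 4 generates
    # the subgroup G of Z_q^* of prime order p (toy-sized, not secure).
    q = 2**28 + 1
    while not (is_prime((q - 1) // 2) and is_prime(q)):
        q += 2
    return q, (q - 1) // 2, 4

def keygen(q, p, g, rng):
    x = rng.randrange(1, p)          # secret key x in Z_p
    return pow(g, x, q), x           # public key y = g^x mod q

def h(p, R, m):
    # Hash (R, m) into Z_p (SHA-256 is an illustrative instantiation).
    data = str(R).encode() + m
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % p

def sign(q, p, g, x, m, rng):
    r = rng.randrange(1, p)
    R = pow(g, r, q)                 # commitment R = g^r
    c = h(p, R, m)                   # challenge c = H(R, m)
    return c, (r + c * x) % p        # response s = r + c*x mod p

def verify(q, p, g, y, m, sig):
    c, s = sig
    R = pow(g, s, q) * pow(y, (-c) % p, q) % q   # R' = g^s * y^{-c} = g^r
    return c == h(p, R, m)

q, p, g = sysgen()
rng = random.Random(7)
y, x = keygen(q, p, g, rng)
sig = sign(q, p, g, x, b"a message", rng)
assert verify(q, p, g, y, b"a message", sig)
assert not verify(q, p, g, y, b"another message", sig)
```

Correctness follows from g^s · y^{−c} = g^{r+cx} · g^{−cx} = g^r, so the verifier recomputes the same commitment R and hence the same challenge c.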
In the definition of (standard) digital signatures, the secret key sk does not need
to be updated during signature generation. We call such a signature a stateless
signature. In contrast, if the secret key sk must be updated before the generation
of each signature, we call it a stateful signature. Stateful signature schemes will be
introduced in Section 6.3 and Section 6.5 of this book.
Correctness. Given any (SP, pk, sk, m,CT ), if CT = E[SP, pk, m] is a ciphertext
encrypted with pk on the message m, the decryption of CT with the secret key sk
will return the message m.
Security. Without the secret key sk, it is hard for any PPT adversary to extract
the message m from the given ciphertext CT = E[SP, pk, m].
The indistinguishability security of public-key encryption is modeled by a game
played by a challenger and an adversary. The challenger generates an encryption
scheme, while the adversary tries to break the scheme. To start, the challenger gen-
erates a key pair (pk, sk), sends the public key pk to the adversary, and keeps the
secret key sk. The adversary outputs two distinct messages m0 , m1 from the same
message space to be challenged. The challenger generates a challenge ciphertext
Correctness. Given any (mpk, msk, ID, dID , m,CT ), if CT = E[mpk, ID, m] is
a ciphertext encrypted with ID on the message m, the decryption of CT with the
private key dID will return the message m.
Security. Without the private key dID , it is hard for any PPT adversary to extract
the message from the given ciphertext CT = E[mpk, ID, m].
The indistinguishability security of identity-based encryption is modeled by a
game played by a challenger and an adversary. The challenger generates an IBE
scheme, while the adversary tries to break the scheme. To start, the challenger gen-
erates a master key pair (mpk, msk), sends the master public key mpk to the ad-
versary, and keeps the master secret key msk. The adversary outputs two distinct
In Phase 1 and Phase 2 of the security model, the adversary can alternately make
private-key queries and decryption queries. The total numbers of private-key and
decryption queries made by the adversary are q_k and q_d, respectively, but the
adversary can adaptively decide by itself the number of private-key queries, denoted
by q_1, and the number of decryption queries, denoted by q_2, made in Phase 1, as
long as q_1 ≤ q_k and q_2 ≤ q_d.
3.1.1 Definition
Definition 3.1.1.1 (Finite Field) A finite field (Galois field), denoted by (F, +, ∗),
is a set containing a finite number of elements with two binary operations “+”
(addition) and “∗” (multiplication) defined as follows.
• ∀u, v ∈ F, we have u + v ∈ F and u ∗ v ∈ F.
• ∀u1 , u2 , u3 ∈ F, (u1 + u2 ) + u3 = u1 + (u2 + u3 ) and (u1 ∗ u2 ) ∗ u3 = u1 ∗ (u2 ∗ u3 ).
• ∀u, v ∈ F, we have u + v = v + u and u ∗ v = v ∗ u.
• ∃0F , 1F ∈ F (identity elements), ∀u ∈ F, we have u + 0F = u and u ∗ 1F = u.
• ∀u ∈ F, ∃ −u ∈ F such that u + (−u) = 0F .
• ∀u ∈ F∗ , ∃u−1 ∈ F∗ such that u ∗ u−1 = 1F . Here, F∗ = F\{0F }.
• ∀u1 , u2 , v ∈ F, we have (u1 + u2 ) ∗ v = u1 ∗ v + u2 ∗ v.
We denote by the symbol 0F ∈ F the identity element under the addition operation
and by the symbol 1F ∈ F the identity element under the multiplication operation.
We denote by −u the additive inverse of u and by u−1 the multiplicative inverse of
u. Note that the binary operations defined in a finite field need not coincide with
ordinary arithmetic addition and multiplication.
A finite field, denoted by (Fqn , +, ∗) in this book, is a specific field where n is a
positive integer, and q is a prime number called the characteristic of Fqn . This finite
field has qn elements. Each element in the finite field can be seen as an n-length
vector, where each scalar in the vector is from the finite field Fq . Therefore, the bit
length of each element in this finite field is n · |q|.
In a finite field, the two binary operations of addition and multiplication can be
extended to subtraction and division through the corresponding inverses, described
as follows.
• The subtraction operation is defined from the addition: ∀u, v ∈ F, we have

u − v = u + (−v).

• The division operation is defined from the multiplication: ∀u ∈ F and v ∈ F^*,
  we have

u/v = u ∗ v^{−1}.
We introduce three common classes of finite fields, namely prime fields, binary
fields, and extension fields.
• Prime Field F_q is the field of residue classes modulo q. There are q elements in this
  field, represented as Z_q = {0, 1, 2, ..., q − 1}, and two operations: the modular
  addition and the modular multiplication. Furthermore,

−u = q − u mod q and u^{−1} = u^{q−2} mod q.
where the corresponding element in this field is an−1 an−2 · · · a1 a0 . The addition
in this field is calculated by applying XOR to each pair of two polynomial coef-
ficients, while the multiplication in this field requires an operation of reduction
modulo an irreducible binary polynomial f (x) of degree n. Furthermore,
−u = u and u^{−1} = u^{2^n−2} mod f(x).
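As a concrete sketch of binary-field arithmetic, the code below implements F_{2^8} with the AES irreducible polynomial x^8 + x^4 + x^3 + x + 1 (an illustrative choice; the book's f(x) is generic). Addition is coefficient-wise XOR, and the inverse is computed as u^{2^n − 2} exactly as in the formula above.

```python
IRRED = 0x11B  # x^8 + x^4 + x^3 + x + 1, the AES polynomial (illustrative choice)

def gf_add(u, v):
    # Addition in F_{2^8} is coefficient-wise XOR of the polynomial coefficients.
    return u ^ v

def gf_mul(u, v):
    # Polynomial multiplication with reduction modulo the irreducible IRRED.
    prod = 0
    while v:
        if v & 1:
            prod ^= u
        u <<= 1
        if u & 0x100:      # degree reached 8: reduce modulo f(x)
            u ^= IRRED
        v >>= 1
    return prod

def gf_inv(u):
    # u^{-1} = u^{2^8 - 2} = u^254, computed by square-and-multiply.
    result, base, e = 1, u, 254
    while e:
        if e & 1:
            result = gf_mul(result, base)
        base = gf_mul(base, base)
        e >>= 1
    return result

# Worked example from FIPS 197: {57} * {83} = {c1} in the AES field.
assert gf_mul(0x57, 0x83) == 0xC1
assert all(gf_mul(u, gf_inv(u)) == 1 for u in range(1, 256))
assert gf_add(0x57, 0x57) == 0    # -u = u in characteristic 2
```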
• Extension Field F(qn1 )n2 is an extension field of the field Fqn1 . The integer n2 is
called the embedding degree. Similarly to the binary field, the representation can
be described as
where the corresponding element in this field is an2 −1 an2 −2 · · · a1 a0 . The addi-
tion in this field denotes the addition of polynomials with coefficient arithmetic
performed in the field Fqn1 . The multiplication is performed by an operation of
reduction modulo an irreducible polynomial f (x) of degree n2 in Fqn1 [x]. The
computations of −u and u−1 are much more complicated than for the previous
two fields. We omit them in this book.
Among the three fields mentioned above, the prime field F_p is the most important field
in group-based cryptography. This is due to the fact that the group order is usu-
ally a prime number. In the prime field F p , all elements are numbers in the set
Z p = {0, 1, 2, · · · , p − 1}. All the following modular arithmetic operations over the
prime field are efficiently computable. The detailed algorithms for conducting the
corresponding computations are outside the scope of this book.
• Modular Additive Inverse. Given y ∈ Z_p, compute

−y mod p.

• Modular Multiplicative Inverse. Given z ∈ Z_p^*, compute

1/z = z^{−1} mod p.

• Modular Addition. Given y, z ∈ Z_p, compute

y + z mod p.

• Modular Subtraction. Given y, z ∈ Z_p, compute

y − z mod p.

• Modular Multiplication. Given y, z ∈ Z_p, compute

y ∗ z mod p.

• Modular Exponentiation. Given y, z ∈ Z_p, compute

y^z mod p.
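All the modular operations listed above are available through Python built-ins; a minimal sketch (the prime p = 1019 is an arbitrary toy choice, and `pow(z, -1, p)` requires Python 3.8+):

```python
p = 1019                      # a prime (toy size)
y, z = 123, 456

add = (y + z) % p             # modular addition
sub = (y - z) % p             # modular subtraction
mul = (y * z) % p             # modular multiplication
exp = pow(y, z, p)            # modular exponentiation y^z mod p
neg = (-y) % p                # modular additive inverse
inv = pow(z, -1, p)           # modular multiplicative inverse (Python 3.8+)

assert (y + neg) % p == 0     # y + (-y) = 0 mod p
assert (z * inv) % p == 1     # z * z^{-1} = 1 mod p
assert (sub + z) % p == y     # subtraction undoes addition
```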
3.2.1 Definitions
Definition 3.2.1.1 (Abelian Group) An abelian group, denoted by (H, ·), is a set
of elements with one binary operation “·” defined as follows.
• ∀u, v ∈ H, we have u · v ∈ H.
• ∀u1 , u2 , u3 ∈ H, we have (u1 · u2 ) · u3 = u1 · (u2 · u3 ).
• ∀u, v ∈ H, we have u · v = v · u.
• ∃1H ∈ H, ∀u ∈ H, we have u · 1H = u.
• ∀u ∈ H, ∃u−1 ∈ H, such that u · u−1 = 1H .
We denote by 1_H the identity element of this group. The group operation can be
extended to another operation called group division: the division u/v is defined as
u · v^{−1}, where v^{−1} is the inverse of v.
Definition 3.2.1.2 (Cyclic Group) An abelian group H is a cyclic group if there
exists (at least) one generator, denoted by h, which can generate the group H:
H = {h^1, h^2, ..., h^{|H|}} = {h^0, h^1, h^2, ..., h^{|H|−1}},

where |H| denotes the order of H and h^{|H|} = h^0 = 1_H.
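As a tiny illustrative check (the group Z_11^* with generator 2 is an assumption chosen for this sketch), the powers of a generator enumerate the whole group:

```python
# The powers of h = 2 generate the cyclic group Z_11^* of order 10.
h, q = 2, 11
H = [pow(h, i, q) for i in range(1, 11)]   # h^1, h^2, ..., h^{|H|}
assert set(H) == set(range(1, 11))         # every element of Z_11^* appears
assert pow(h, 10, q) == 1                  # h^{|H|} = h^0 = identity
```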
Let (G, g, p) be a cyclic group and x be a positive integer. We denote by gx the group
exponentiation, where gx is defined as
g^x = g · g · ... · g · g  (x copies of g).

Since the group G has order p, we also have

g^x = g^{x mod p}.
Therefore, when an integer x is chosen for the group exponentiation, we can assume
that x is chosen from the set Z p and call the integer x the exponent.
In public-key cryptography, x is an extremely large exponent whose length is at
least 160 bits in its binary representation. Therefore, it is impractical to compute
g^x by performing x − 1 group operations one by one. Group exponentiation is
frequently used in group-based cryptography, and there exist polynomial-time
algorithms for computing it. The simplest algorithm is the square-and-multiply
algorithm, which is described as follows.
• Let g_i = g^{2^i}. Starting from g_0 = g, compute g_i = g_{i−1} · g_{i−1} for all
  i ∈ [1, n − 1].
• Let x_{n−1}x_{n−2}···x_1x_0 be the binary representation of x. Set X to be the
  subset of {0, 1, 2, ..., n − 1} such that j ∈ X if x_j = 1.
• Compute g^x by

∏_{j∈X} g_j = ∏_{i=0}^{n−1} g_i^{x_i} = g^{∑_{i=0}^{n−1} x_i 2^i} = g^x.
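The steps above can be transcribed directly into code; this sketch works in Z_q^* and checks the result against Python's built-in modular exponentiation (the concrete numbers are arbitrary toy values):

```python
def square_and_multiply(g, x, q):
    # Compute g^x mod q following the square-and-multiply steps above.
    bits = bin(x)[2:][::-1]                    # x_0, x_1, ..., x_{n-1}
    n = len(bits)
    gi = [g % q]                               # g_0 = g^{2^0} = g
    for i in range(1, n):
        gi.append(gi[i - 1] * gi[i - 1] % q)   # g_i = g_{i-1} * g_{i-1}
    X = [j for j in range(n) if bits[j] == "1"]
    result = 1
    for j in X:                                # multiply together g_j for j in X
        result = result * gi[j] % q
    return result

assert square_and_multiply(4, 1019, 2039) == pow(4, 1019, 2039)
assert square_and_multiply(3, 0, 7) == 1      # empty product gives the identity
```

Only about 2·|x| group operations are needed, instead of x − 1.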
The algebraic structure of an abelian group is simpler than that of a finite field. The
reason is that the abelian group defines only one binary operation while the finite
field defines two. The required properties of the group operation are already
satisfied by either field operation, and thus we can immediately obtain an abelian
group from a finite field. For example, (F_{q^n}, +) and (F_{q^n}^*, ∗)
are both abelian groups. It seems that there is no need to explore other implementa-
tions of cyclic groups.
However, we do need to construct more advanced cyclic groups with a finite field
for various reasons. For example, the Elliptic Curve Group was invented to reduce
the size of group representation for the same security level. The operation “·” in
the abelian group can be the same as or different from the operations “+, ∗” in the
finite field. For example, the group operation in the Elliptic Curve Group is a curve
operation over a finite field requiring both “+” and “∗” operations.
The first group choice is a multiplicative group (F∗qn , ∗) from a finite field under the
multiplication operation. The multiplicative group of a finite field is a cyclic group,
where the finite field can be a prime field, a binary field or an extension field.
Here, we introduce the multiplicative group modulo q from a prime field (F∗q , ∗).
The group elements, group generator, group order, and group operation are de-
scribed as follows.
• Group Elements. The element set of the modular multiplicative group is Z_q^* =
  {1, 2, ..., q − 1}. Therefore, each group element has |q| bits in its binary
  representation.
• Group Generator. There exists a generator h ∈ Z∗q , which can generate the group
Z∗q . However, not all elements of Z∗q are generators. The group element h is a
generator if and only if the minimum positive integer x satisfying hx mod q = 1
is equal to q − 1.
• Group Order. The order of this group is q − 1. Since q is a prime number, q − 1
  is even, and hence Z_q^* is not a group of prime order for any prime q > 3.
• Group Operation. The group operation “ · ” in this group is integer multiplica-
tion modulo the prime number q. To be precise, let u, v ∈ Z∗q and “ × ” be the
mathematical multiplication operation. We have u · v = u × v mod q.
This modular multiplicative group is not a group of prime order. We can extract
a subgroup G of prime order p from it if p divides q − 1, namely p|(q − 1). To find
a generator g of G, the simplest approach is to search u from 2 to q − 1 and select the
first u such that

u^{(q−1)/p} ≠ 1 mod q.

The generator of G is g = u^{(q−1)/p}, which satisfies g^p = 1 mod q, and the group is denoted by
G = {g, g^2, g^3, ..., g^p}.
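This extraction procedure is easy to run on toy numbers; the parameters q = 23, p = 11 (with 11 | 22) below are assumptions chosen for illustration only:

```python
def prime_order_subgroup(q, p):
    # Find a generator g of the subgroup of Z_q^* of prime order p, where p | q - 1:
    # search u from 2 upward and take the first u with u^{(q-1)/p} != 1 mod q.
    assert (q - 1) % p == 0
    for u in range(2, q):
        g = pow(u, (q - 1) // p, q)
        if g != 1:
            return g

q, p = 23, 11                    # toy parameters: p divides q - 1 = 22
g = prime_order_subgroup(q, p)
assert g == 4                    # u = 2 already works: 2^{22/11} = 4 != 1 mod 23
assert pow(g, p, q) == 1         # g^p = 1 mod q
G = {pow(g, i, q) for i in range(1, p + 1)}
assert len(G) == p               # g generates a subgroup of exactly p elements
```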
The second group choice is an elliptic curve group. An elliptic curve is a plane curve
defined over a finite field F_{q^n}, where all points are on the following curve:

Y^2 = X^3 + aX + b
along with a distinguished point at infinity, denoted by ∞. Here, a, b ∈ Fqn and the
space of points is denoted by E(Fqn ). The finite field can be a prime field or a binary
field or others, and each field has a different computational efficiency.
The group elements, group generator, group order, and group operation of the
elliptic curve group are described as follows.
Notice that given an x-coordinate x and the curve, we can compute two y-
coordinates +y and −y. Therefore, with the curve, each group element (x, y) can
be simplified: (x, 1) to denote (x, +y), or (x, 0) to denote (x, −y). Sometimes, we
can even represent the group element with x only, because we can handle both
group elements (x, +y) and (x, −y) in computations that will return one correct
result. Therefore, the bit length of a group element is about n|q|.
• Group Generator. There exists a generator h ∈ E(Fqn ), which can generate the
group E(Fqn ). The point at infinity serves as the identity group element.
• Group Order. The group order of the elliptic curve group is denoted by
|E(F_{q^n})| = q^n + 1 − t,

where |t| ≤ 2√(q^n) and t is the trace of the Frobenius of the elliptic curve over the
field. Note that the group order is not a prime number for most curves.
• Group Operation. The group operation “ ·” in the elliptic curve group has two
different types of operations, which depend on the input of two group elements
u and v.
– If u = (x_u, y_u) and v = (x_v, y_v) are two distinct points, we draw a line through
u and v. This line will intersect the elliptic curve at a third point. We define
u · v as the reflection of the third point in the x-axis.
– Otherwise, if u = v, we draw the tangent line to the elliptic curve at u. This
line will intersect the elliptic curve at a second point. We define u · u as the
reflection of the second point in the x-axis.
The detailed group operation is dependent on the given group elements, curve,
and finite field. We omit a detailed description of the group operation here.
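Although the detailed formulas are omitted above, the chord-and-tangent rule has a standard affine-coordinate sketch over a prime field. The toy curve y^2 = x^3 + 2x + 2 over F_17 and the point (5, 1) are assumptions chosen purely for illustration, and the point at infinity is represented as `None`:

```python
# Toy curve y^2 = x^3 + A*x + B over F_Pm (parameters chosen for illustration).
A, B, Pm = 2, 2, 17
INF = None                   # the point at infinity (group identity)

def ec_add(u, v):
    # Chord-and-tangent addition on the elliptic curve group.
    if u is INF:
        return v
    if v is INF:
        return u
    (x1, y1), (x2, y2) = u, v
    if x1 == x2 and (y1 + y2) % Pm == 0:
        return INF                                           # u + (-u) = identity
    if u == v:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, Pm) % Pm   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, Pm) % Pm          # chord slope
    x3 = (lam * lam - x1 - x2) % Pm
    y3 = (lam * (x1 - x3) - y1) % Pm                         # reflection in x-axis
    return (x3, y3)

P = (5, 1)                                           # a point on the curve
assert (P[1] ** 2 - (P[0] ** 3 + A * P[0] + B)) % Pm == 0
assert ec_add(P, P) == (6, 3)                        # doubling (tangent case)
assert ec_add(ec_add(P, P), P) == (10, 6)            # chord case
assert ec_add(P, (5, 16)) is INF                     # the inverse of (x, y) is (x, -y)
```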
This elliptic curve group is not always a group of prime order. However, we can
extract a subgroup G of prime order p from it if p is a divisor of the group order.
The extraction approach is the same as that in the modular multiplicative group. We
define the group (G, g, p) as an elliptic curve group for scheme constructions, where
G is a group, g is the generator of G, and p is the group order.
The DL problem over the elliptic curve group is also hard as it does not have
any polynomial-time solution. Furthermore, there is no sub-exponential-time algo-
rithm for solving the DL problem in a general elliptic curve group, which means that
we can choose the finite field as small as possible to reduce the size of the group
representation. This short representation property is the primary motivation for con-
structing a cyclic group from an elliptic curve. For example, to have an elliptic curve
group where the time complexity of solving the DL problem over this group is 2^80,
the bit length of the prime q in the prime field Fq for the elliptic curve group im-
plementation can be as small as 160, rather than 1,024 in the modular multiplicative
group. The tradeoff is less computationally efficient of the group operation in the
elliptic curve group compared to that in the modular multiplicative group.
In an elliptic curve group, the group element in the binary representation can be
as small as the group order in the binary representation. That is, |g| = |p|. However,
this does not mean that all elliptic curve groups have this nice feature. For l-bit
security level, we must have at least |p| = 2 · l. The size of each group element g
depends on the choice of the finite field, and we have |g| ≥ |p| for all choices.
The following computations are the most common operations over a group G of
prime order p.
• Group Operation. Given g, h ∈ G, compute

g · h.

• Group Exponentiation. Given g ∈ G and x ∈ Z_p, compute

g^x.
Note that the operations mentioned above do not represent all computations for
a group. We should also include all operations over the prime field, where the prime
number is the group order. For example, given the group (G, g, p), an additional
group element h, and x, y ∈ Z_p, we can compute g^{1/x} h^{−y}.
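The key point is that the exponent arithmetic happens modulo the group order p: 1/x means the inverse of x mod p, and −y means p − y. A sketch in the toy group q = 23, p = 11, g = 4 (these parameters are illustrative assumptions):

```python
q, p, g = 23, 11, 4     # toy group: g generates the order-11 subgroup of Z_23^*
h = pow(g, 5, q)        # an additional group element (here h = g^5 = 12)
x, y = 3, 2

# Exponent arithmetic is done in Z_p (the prime field of the group order).
e1 = pow(x, -1, p)      # 1/x mod p
e2 = (-y) % p           # -y mod p
result = pow(g, e1, q) * pow(h, e2, q) % q   # g^{1/x} * h^{-y}

assert pow(pow(g, e1, q), x, q) == g         # (g^{1/x})^x = g
assert result == 12
```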
Roughly speaking, a bilinear pairing provides a bilinear map that maps two group
elements in elliptic curve groups to a third group element in a multiplicative group
without losing its isomorphic property. Bilinear pairing was originally introduced to
solve hard problems in elliptic curve groups by mapping a given problem instance
in the elliptic curve group into a problem instance in a multiplicative group, running
a sub-exponential-time algorithm to find the answer to the problem instance in the
multiplicative group, and then using the answer to solve the hard problem in the
elliptic curve group.
Bilinear pairing for scheme constructions is built from a pairing-friendly elliptic
curve where it should be easy to find an isomorphism from the elliptic curve group
to the multiplicative group. The instantiations of bilinear pairing, denoted by G1 ×
G2 → GT , fall into the following three types.
• Symmetric. G1 = G2 = G. We denote a symmetric pairing by G × G → GT .
This completes the definition of the symmetric pairing. Now, we introduce its
size efficiency.
Let E(Fqn )[p] be the elliptic curve subgroup of E(Fqn ) with order p over the basic
field Fqn , and Fqnk [p] be the multiplicative subgroup of the extension field Fqnk with
order p, where k is the embedding degree. The bilinear pairing is actually defined
over
E(Fqn )[p] × E(Fqn )[p] → Fqnk [p].
A secure bilinear pairing requires the DL problem to be hard over both the elliptic
curve group G and the multiplicative group GT . We should also make these groups
as small as possible for efficient group operations. However, the DL problem in the
multiplicative group defined over the extension field suffers from sub-exponential
attacks. Therefore, the size of qnk must be large enough to resist sub-exponential
attacks. That is why we need an embedding degree k to extend the field. For l-bit
security level, we have the following parameters.
Therefore, we have |p| = |g| = 160 and |e(g, g)| = 1,120. Unfortunately, no such
curve has been found for any k ≥ 7. This means that we cannot construct a
symmetric pairing where the size of group elements in G is 160 bits for 80-bit
security.
• Option 2. We choose the pairing group with embedding degree k = 2. For
  k · |F_{q^n}| = 1,024, we have |F_{q^n}| = 512. Therefore, |p| = 160, |g| = 512, and
  |e(g, g)| = 1,024. There exists such an elliptic curve with a minimum size of
  G_T for 80-bit security, but we cannot use it to construct schemes with short
  representations for group elements, particularly in G.
This completes the definition of the asymmetric pairing. Now, we introduce its
size efficiency.
Let E(Fqn )[p] be the elliptic curve subgroup of E(Fqn ) with order p over the
basic field Fqn , E(Fqnk )[p] be one of the elliptic curve subgroups of E(Fqnk ) with
order p over the extension field Fqnk , and Fqnk [p] be the multiplicative subgroup of
extension field Fqnk with order p, where k is the embedding degree. The bilinear
pairing is actually defined over
E(F_{q^n})[p] × E(F_{q^{nk}})[p] → F_{q^{nk}}[p].
group elements in G2 can be compressed into half or quarter size or even shorter
representations if the bilinear pairing is the third type, in which there is no efficient
homomorphism between G1 and G2 .
• H : {0, 1}∗ → {0, 1}n . The output space is the set containing all n-bit strings. To
resist birthday attacks, n must be at least 2 · l bits for l-bit security. We mainly
use this kind of hash function to generate a symmetric key from the key space
{0, 1}n for hybrid encryption.
• H : {0, 1}∗ → Z p . The output space is {0, 1, 2, · · · , p − 1}, where p is the group
order. We use this kind of hash function to embed hashing values in group expo-
nents, when the input values are not in the Z p space.
• H : {0, 1}∗ → G. The output space is a cyclic group. That is, this hash function
will hash the input string into a group element. This hash function exists only
for some groups. The main groups we can hash are the group G in the symmetric
bilinear pairing G×G → GT and the group G1 in the asymmetric bilinear pairing
G1 × G2 → GT .
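The first two hash types above can be instantiated from any cryptographic hash; the sketch below uses SHA-256 as an illustrative assumption (for H into Z_p, reducing a 256-bit digest mod p introduces a bias that is negligible when p ≪ 2^256). Hashing into a group is curve-specific and is omitted, as in the text.

```python
import hashlib

p = 1019   # group order (toy size; in practice p is a large prime with p << 2^256)

def hash_to_bits(m: bytes) -> bytes:
    # H: {0,1}* -> {0,1}^256, e.g., to derive a symmetric key for hybrid encryption.
    return hashlib.sha256(m).digest()

def hash_to_zp(m: bytes, p: int) -> int:
    # H: {0,1}* -> Z_p, e.g., to embed a hash value in a group exponent.
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % p

assert len(hash_to_bits(b"msg")) == 32                  # 256 bits of output
assert 0 <= hash_to_zp(b"msg", p) < p                   # output lies in Z_p
assert hash_to_zp(b"msg", p) == hash_to_zp(b"msg", p)   # deterministic
```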
How to construct cryptographic hash functions is outside the scope of this book.
The above definitions and descriptions of hash functions are sufficient for construct-
ing schemes and security proofs.
The generic algorithms, such as the baby-step giant-step algorithm [95] and the Pol-
lard Rho algorithm [87], work for all groups. The particular algorithms, such as the
index calculus algorithm [3, 58], only work for some particular groups, such as the
modular multiplicative group. A survey of these algorithms was given in [78] and
[81] (Chapter 3.6).
The time complexity of generic algorithms is normally O(√p) for a group of order p,
while the time complexity of the index calculus algorithm is sub-exponential.
The most efficient algorithm, which is called the number field sieve and is a variant
of the index calculus algorithm, has a complexity of L_p[1/3, 1.923] (refer to [72] for
the definition of L-notation). The record for solving a discrete logarithm in GF(p)
is a 768-bit prime [67]. In a group construction with a finite field of characteristic 2,
the calculation of a logarithm in F_{2^{1279}} was announced in [66]. In a group
construction with a finite field of characteristic 3, the latest result is given in [2]. A full list
of the records sorted by date can be found in [52].
Elliptic Curve Cryptography. The notion of elliptic curve groups was indepen-
dently suggested by Koblitz [69] and Miller [83]. In comparison with the modular
multiplicative groups, there exists no sub-exponential algorithm for solving the dis-
crete logarithm problem over the elliptic curve groups. The current record is the
discrete logarithm of a 113-bit Koblitz curve [102] and a curve over F_{2^{127}} [16]. For
a survey of recent progress, we refer the reader to the work [43].
The US National Institute of Standards and Technology (NIST) published the
recommended size of an elliptic curve group (Table 2 of [9]). For a more direct
comparison of the key size, we refer the reader to [17]. A collection of recommen-
dations from well-known organizations is available at [19]. There are also many
helpful textbooks such as [17, 18, 57, 99], which provide a detailed introduction to
elliptic curve cryptography.
Bilinear Pairings. The use of bilinear pairings was first proposed in [80, 41] to
attack cryptosystems. Later, numerous schemes became achievable with the help
of pairings. We refer the reader to [39] for a survey of these constructions from
the first few years after the bilinear pairing was invented.
Galbraith et al.’s paper [44] provides background on pairings and classifies the
pairing G1 × G2 → GT into three types. There is another, rarely used type of pairing,
which was introduced by Shacham [92]. These four types are denoted Type I, II, III,
and IV, respectively. The difference among these four types lies in the structures of
the groups G1 and G2. Meanwhile, the Weil pairing and the Tate pairing are classified
with respect to the computation, and the work in [70] gives an efficiency comparison
of these two pairings. Elegant explanations of pairings can be found in [76, 31, 97],
where the structure of r-torsion, the Miller algorithm, and optimizations of the
pairing computation are explained.
The elliptic curves that we use to construct a bilinear pairing are referred to as
pairing-friendly curves. Finding pairing-friendly curves with an optimized group
size requires considering the embedding degree, the modulus of the underlying
group, and the ρ-value together. The most commonly applied method is the complex
multiplication (CM) method [5]. Summaries of pairing-friendly curve constructions
were given in [40, 63].
Chapter 4
Foundations of Security Reduction
In this chapter, we introduce what a security reduction is and how to program a cor-
rect one. We start by presenting an overview of important concepts
and techniques, and then proof structures for digital signatures and encryption. We
classify each concept into several categories in order to guide the reader to a deep
understanding of security reduction. We devise and select some examples to show
how to correctly program a full security reduction. Some definitions adopted in this
book may be defined differently elsewhere in the literature.
                                Security Reduction
Mathematical Hard Problem ←−−−−−−−−−−−−−−−−−−−−−−− Cryptographic Scheme
                    ↘                            ↙
                        Mathematical Primitive
In this book, cryptography, cryptosystem, and scheme have the following meanings.
• Cryptography, such as public-key cryptography and group-based cryptography,
is a security mechanism to provide security services for authentication, confiden-
tiality, integrity, etc.
• A cryptosystem, such as digital signatures, public-key encryption, and identity-
based encryption, is a suite of algorithms that provides a security service.
• A scheme, such as the BLS signature scheme [26], is a specific construction or
implementation of the corresponding algorithms for a cryptosystem.
A cryptosystem might have many different scheme constructions. For example,
many signature schemes with distinct features have been proposed in the literature.
Suppose a scheme is generated with a security parameter λ . The level of “hardness”
of breaking the scheme can be denoted by a function S(λ ) of λ for this scheme. The
functions for different proposed schemes are different, and thus their levels of hard-
ness are not the same even though they are constructed over the same mathematical
primitive.
t(λ) = O(λ^{n_0}).

t(λ) = O(e^λ),    ε(λ) = 1/Θ(e^λ).

That is, the value ε(λ) tends to zero very quickly as the input λ grows.
4.1 Introduction to Basic Concepts
We can classify all mathematical problems into “easy” and “hard” as follows.
• Easy. A problem generated with a security parameter λ is easy if there exists
an algorithm that can solve the problem in polynomial time with non-negligible
advantage associated with λ .
• Hard. A problem generated with a security parameter λ is hard if there exists
no (known) algorithm that can solve the problem in polynomial time with non-
negligible advantage associated with λ .
Hard problems are those mathematical problems only believed to be hard based
on the fact that all known algorithms cannot efficiently solve them. There is no
mathematical proof for the hardness of a mathematical hard problem. We can only
prove that solving a problem is not easier than solving another problem. Notice that
some believed-to-be-hard problems might become easy in the future.
Suppose there exists an algorithm that can break a scheme or solve a hard problem
in time t with advantage ε, where the scheme or the problem is generated with a
security parameter λ .
• An algorithm that can break a scheme or solve a hard problem with (t, ε) is com-
putationally efficient if t is polynomial time and ε is non-negligible associated
with the security parameter λ .
• An algorithm that can break a scheme or solve a hard problem with (t, ε) is
computationally inefficient if t is polynomial time but ε is negligible associated
with the security parameter λ .
In this book, a computationally efficient algorithm is treated as a probabilistic
polynomial-time (PPT) algorithm. In the following introduction, a computationally
efficient algorithm is called an efficient algorithm for short. An algorithm requiring
exponential time to solve a hard problem is also computationally inefficient.
All algorithms in public-key cryptography can be classified into the following four
types, and each type is defined for a different purpose.
• Scheme Algorithm. This algorithm is proposed to implement a cryptosystem. A
scheme algorithm might be composed of multiple algorithms for different com-
putation tasks. For example, a digital signature scheme usually consists of four
algorithms: system parameter generation, key pair generation, signature genera-
tion, and signature verification. We require the scheme algorithm to return correct
results except with negligible probability.
• Attack Algorithm. This algorithm is proposed to break a scheme. A scheme is
secure if all attack algorithms are computationally inefficient. Suppose there ex-
ists an adversary who can break the proposed scheme in polynomial time with
non-negligible advantage. This means that the adversary knows a computation-
ally efficient attack algorithm. However, this algorithm is a black-box algorithm
only known to the adversary. The steps inside the algorithm are unknown.
• Solution Algorithm. This algorithm is proposed to solve a hard problem. Sim-
ilarly, a problem is hard if all solution algorithms for this problem are compu-
tationally inefficient. In a security reduction, if there exists a computationally
efficient attack algorithm that can break a proposed scheme, we prove that there
exists a computationally efficient solution algorithm that can solve a mathemati-
cal hard problem.
• Reduction Algorithm. This algorithm is proposed to describe how a security
reduction works. A security reduction is merely a reduction algorithm. If the
attack indeed exists, it shows how to use an adversary’s attack on a simulated
scheme to solve an underlying hard problem.
All mathematical hard problems can be classified into the following two types.
• Computationally Hard Problems. These problems, such as the discrete loga-
rithm problem, cannot be solved in polynomial time with non-negligible advan-
tage. This type of hard problem is used as the underlying hard problem in the
security reduction.
• Absolutely Hard Problems. These problems cannot be solved with non-negligible
advantage, even if the adversary can solve all computational hard problems in
polynomial time with non-negligible advantage. Absolutely hard problems are
unconditionally secure against any adversary. This type of hard problem is used
in security reductions to hide secret information from the adversary.
A simple example of an absolutely hard problem is to compute x from (g, g^{x+y}),
where x, y are both randomly chosen from Z_p. More absolutely hard problems will
be introduced in Section 4.7.6. We will explain why it is essential to utilize ab-
solutely hard problems in security reductions in Section 4.5.7. When we need to
assume that an adversary can solve all computational hard problems in polynomial
time with non-negligible advantage, we say that the adversary is a computationally
unbounded adversary who has unbounded computational power.
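The one-time-pad style argument behind the example above can be checked mechanically: for any fixed x, as y ranges uniformly over Z_p, the exponent x + y mod p is exactly uniform over Z_p, so g^{x+y} carries no information about x, even for a computationally unbounded adversary. A small Python sketch with a toy modulus (illustrative parameters only):

```python
# The exponent x + y mod p perfectly hides x: for ANY fixed x, as y ranges
# uniformly over Z_p, the value (x + y) mod p is exactly uniform over Z_p.
# Hence even an unbounded adversary seeing g^{x+y} learns nothing about x.
p = 101                                   # toy prime modulus

def exponent_distribution(x):
    # multiset of (x + y) mod p over all choices of y in Z_p
    return sorted((x + y) % p for y in range(p))

# Two different secrets induce identical distributions on what is revealed.
assert exponent_distribution(5) == exponent_distribution(42) == list(range(p))
print("distribution of x+y mod p is independent of x")
```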
Suppose all potential attack algorithms to break a proposed scheme have been
found with the following distinct time costs and advantages:

(t_1, ε_1), (t_2, ε_2), · · · , (t_l, ε_l).

For a simple analysis, we say that the proposed scheme has k-bit security if the
minimum value within the following set

{ t_1/ε_1, t_2/ε_2, · · · , t_l/ε_l }

is 2^k, where the time unit is one step/operation. This definition will be used to ana-
lyze the concrete security of a proposed scheme in this book.
The security level of a proposed scheme is not fixed. Suppose a proposed scheme
has k-bit security against all existing attack algorithms. If an attack algorithm with
(t*, ε*) satisfying t*/ε* = 2^{k*} < 2^k is found in the future, the proposed scheme will
then have k*-bit security instead.
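The definition above is easy to mechanize. A small Python sketch (the attack costs below are hypothetical numbers, only for illustration):

```python
# Concrete security as defined above: a scheme has k-bit security when the
# minimum of t_i / eps_i over all known attack algorithms equals 2^k.
from math import log2

def bit_security(attacks):
    """attacks: list of (time_in_steps, advantage) pairs for known attacks."""
    return min(log2(t / eps) for t, eps in attacks)

# Hypothetical attack costs (illustrative numbers only).
attacks = [(2**90, 2**-10), (2**70, 2**-15)]   # t/eps = 2^100 and 2^85
print(bit_security(attacks))                    # 85.0

# A newly found attack with t*/eps* = 2^60 lowers the security level.
attacks.append((2**50, 2**-10))
print(bit_security(attacks))                    # 60.0
```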
In this book, the concepts of hard problem and hardness assumption are treated as
equivalent. However, the descriptions of these two concepts are slightly different.
• We can say that breaking a proposed scheme implies solving an underlying hard
problem, denoted by A, such that the scheme is secure under the hardness as-
sumption on A.
• We can also say that a hardness assumption is a weak assumption or a strong
assumption. Weak or strong is not related to the problem but to the strength of
the assumption.
A hard problem is described in terms of finding a solution, while a hardness as-
sumption is described in terms of an assumption about security. In a security
reduction, we aim to solve an underlying hard problem or, equivalently, break an
underlying hardness assumption.
In this book, security reduction and security proof are assumed to be different con-
cepts with different components. We clarify them as follows.
• A security reduction is a part of a security proof focusing on how to reduce
breaking a proposed scheme to solving an underlying hard problem. A security
reduction consists of a simulation algorithm and a solution algorithm.
4.2 An Overview of Easy/Hard Problems
All mathematical problems can be classified into the following four types for
scheme constructions and security reductions: computational easy problems, com-
putational hard problems, decisional easy problems, and decisional hard problems.
In this section, we collect some popular problems that have been widely used in the
literature.
where f_i ∈ Z_p is the coefficient of x^i for all i ∈ [0, n]. Therefore, this element is
computable by computing

g^{f(a)} = ∏_{i=0}^{n} (g^{a^i})^{f_i}.
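Polynomial Problem 1 can be checked concretely in a toy subgroup of Z_q^* (the group, generator, and polynomial below are illustrative assumptions; real parameters are far larger):

```python
# Polynomial Problem 1 in a toy multiplicative group Z_q^*: given the powers
# g^{a^i} (but not a itself), compute g^{f(a)} as the product of (g^{a^i})^{f_i}.
# Exponents live in Z_p, where p is the prime order of g; toy sizes only.
q = 107                        # prime; its order-53 subgroup is used below
p = 53                         # prime order of the subgroup generated by g
g = 4                          # generator of the order-53 subgroup of Z_q^*
a = 17                         # secret: only its powers g^{a^i} are published
powers = [pow(g, pow(a, i, p), q) for i in range(4)]   # g, g^a, g^{a^2}, g^{a^3}

f = [7, 0, 3, 1]               # f(x) = 7 + 3x^2 + x^3, coefficients in Z_p

# Compute g^{f(a)} using only the published powers.
res = 1
for gi, fi in zip(powers, f):
    res = (res * pow(gi, fi, q)) % q

# Sanity check against a direct computation that uses the secret a.
fa = sum(fi * pow(a, i, p) for i, fi in enumerate(f)) % p
assert res == pow(g, fa, q)
print("g^{f(a)} computed without knowing a")
```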
• Polynomial Problem 2. Given g, g^a, g^{a^2}, · · · , g^{a^{n−1}} ∈ G, f(x) ∈ Z_p[x], and any
w ∈ Z_p satisfying f(w) = 0, we can compute the group element

g^{f(a)/(a−w)}.

This element is computable because f(x)/(x−w) is a polynomial of degree n − 1,
where all coefficients are computable.
• Polynomial Problem 3. Given g, g^a, g^{a^2}, · · · , g^{a^{n−1}} ∈ G, f(x) ∈ Z_p[x], and any
w ∈ Z_p, we can compute the group element

g^{(f(a)−f(w))/(a−w)}.

This element is computable because (f(x) − f(w))/(x − w) is a polynomial of degree
n − 1, where all coefficients are computable.
• Polynomial Problem 4. Given g, g^a, g^{a^2}, · · · , g^{a^{n−1}}, g^{f(a)/(a−w)} ∈ G, f(x) ∈ Z_p[x], and
any w ∈ Z_p satisfying f(w) ≠ 0, we can compute the group element

g^{1/(a−w)}.

Write f(x) = f′(x)(x − w) + d, where f′(x) = f′_{n−1}x^{n−1} + · · · + f′_1 x + f′_0 and
d = f(w) ≠ 0. This element is computable as

g^{1/(a−w)} = ( g^{f(a)/(a−w)} / ∏_{i=0}^{n−1} (g^{a^i})^{f′_i} )^{1/d}.
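The division trick behind Polynomial Problem 4 can be sketched in Python over the same kind of toy subgroup (all parameters below are illustrative assumptions; `divide_by_linear` is a helper written here, not a library function):

```python
# Polynomial Problem 4 sketch (toy group): from g^{a^i} (i < n) and
# g^{f(a)/(a-w)}, with d = f(w) != 0, recover g^{1/(a-w)} via
# f(x) = f'(x)(x-w) + d  =>  1/(a-w) = (f(a)/(a-w) - f'(a)) / d.
q, p = 107, 53                 # toy: q prime, p = order of the subgroup
g = 4                          # generator of the order-53 subgroup of Z_q^*
a, w = 17, 5                   # a is secret; w is public, f(w) != 0

f = [7, 0, 3, 1]               # f(x) = 7 + 3x^2 + x^3 over Z_p

def divide_by_linear(coeffs, w, p):
    """Synthetic division: f(x) = f'(x)(x - w) + d; returns (f', d)."""
    quot, rem = [], 0
    for c in reversed(coeffs):         # from leading coefficient down
        rem = (rem * w + c) % p        # Horner evaluation doubles as division
        quot.append(rem)
    d = quot.pop()                     # remainder = f(w)
    return list(reversed(quot)), d

fprime, d = divide_by_linear(f, w, p)
assert d != 0

# Published instance values (computed here with the secret, used without it).
fa = sum(c * pow(a, i, p) for i, c in enumerate(f)) % p
inv_aw = pow(a - w, -1, p)
G_big = pow(g, fa * inv_aw % p, q)     # g^{f(a)/(a-w)}
powers = [pow(g, pow(a, i, p), q) for i in range(len(f) - 1)]

# g^{f'(a)} from the powers, then strip it off and take the d-th root.
gfp = 1
for gi, c in zip(powers, fprime):
    gfp = (gfp * pow(gi, c, q)) % q
result = pow(G_big * pow(gfp, -1, q) % q, pow(d, -1, p), q)
assert result == pow(g, inv_aw, q)     # equals g^{1/(a-w)}
print("g^{1/(a-w)} recovered without knowing a")
```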
• Polynomial Problem 5. Given g, g^a, g^{a^2}, · · · , g^{a^{n−1}}, h^{as} ∈ G, and e(g, h)^{f(a)s} ∈ G_T
where f(0) ≠ 0, we can compute the group element

e(g, h)^s.
• Polynomial Problem 6. Given g, g^a, g^{a^2}, · · · , g^{a^n} ∈ G, and F(x) ∈ Z_p[x], we can
compute the group element

e(g, g)^{F(a)}.

Let F(x) = F_{2n}x^{2n} + F_{2n−1}x^{2n−1} + · · · + F_1 x + F_0 be a polynomial of degree 2n. It
can be rewritten as F(x) = x^n f_1(x) + f_2(x) for polynomials f_1(x), f_2(x) of degree at
most n, so that e(g, g)^{F(a)} = e(g^{a^n}, g^{f_1(a)}) · e(g, g^{f_2(a)}), where g^{f_1(a)} and g^{f_2(a)} are
computable from the given group elements.
• Polynomial Problem 7. Given g^{1/(a−x_1)}, g^{1/(a−x_2)}, · · · , g^{1/(a−x_n)} ∈ G, and all distinct x_i ∈
Z_p, we can compute the group element

g^{1/((a−x_1)(a−x_2)(a−x_3)···(a−x_n))}.

Let f_i(x) = (x−x_1)(x−x_2)···(x−x_i). Since f_i(x)/((x−x_1)(x−x_2)···(x−x_i)(x−x_{i+1})) =
1/(x−x_{i+1}), there exist computable constants w_1, w_2, · · · , w_i, w ∈ Z_p such that

g^{1/(a−x_{i+1})} = g^{f_i(a)/((a−x_1)(a−x_2)···(a−x_i)(a−x_{i+1}))}
= g^{w_1/(a−x_1) + w_2/((a−x_1)(a−x_2)) + · · · + w_i/((a−x_1)(a−x_2)···(a−x_i)) + w/((a−x_1)(a−x_2)···(a−x_i)(a−x_{i+1}))}.

Let S_i be the set { g^{1/(a−x_1)}, g^{1/((a−x_1)(a−x_2))}, · · · , g^{1/((a−x_1)(a−x_2)···(a−x_i))} },
which contains i group elements. Given all elements in S_i, we can compute the
new element

g^{1/((a−x_1)(a−x_2)···(a−x_i)(a−x_{i+1}))} = ( g^{1/(a−x_{i+1})} / ( g^{w_1/(a−x_1)} · g^{w_2/((a−x_1)(a−x_2))} · · · g^{w_i/((a−x_1)(a−x_2)···(a−x_i))} ) )^{1/w}

by the above approach, which is the (i + 1)-th group element in the set S_{i+1}.
Therefore, with the given group elements, we immediately have S_1 and then can
compute S_2, S_3, · · · until S_n. We solve this problem because the n-th group ele-
ment in S_n is the solution to the problem instance.
Another type of computational easy problem can be seen as a structured problem,
where the solution to a structured problem must satisfy a defined structure. For
example, given g, g^a ∈ G, a structured problem is to compute a pair (r, g^{ar}) for
an integer r ∈ Z p . Here, the integer r can be any number chosen by the party that
returns the answer. We have the following structured problems that are efficiently
solvable. How to solve these problems is important, especially in the simulation of
digital signatures and private keys of identity-based cryptography.
• Structured Problem 1. Given g, g^a ∈ G, we can compute a pair

(g^r, g^{ar}).

Let r be randomly chosen from Z_p. The pair (g^r, (g^a)^r) is the solution to the
problem instance.
• Structured Problem 2. Given g, g^a ∈ G, we can compute a pair

(g^{1/(a+r)}, g^r).

Let r = r′ − a ∈ Z_p for a randomly chosen r′. We have

(g^{1/(a+r)}, g^r) = (g^{1/(a+r′−a)}, g^{r′−a}) = (g^{1/r′}, g^{r′−a}),

and thus the computed pair is the solution to the problem instance.
• Structured Problem 3. Given g, g^a ∈ G and w ∈ Z_p, we can compute a pair

(g^{r/(a+w)}, g^r).

Let r = r′(a + w) ∈ Z_p. We have

(g^{r/(a+w)}, g^r) = (g^{r′(a+w)/(a+w)}, g^{r′(a+w)}) = (g^{r′}, g^{r′(a+w)}),

and thus the computed pair is the solution to the problem instance.
• Structured Problem 4. Given g, g^a, g^b ∈ G and w ∈ Z_p^*, we can compute a pair

(g^{ab} g^{(wa+1)r}, g^r).

Let r = −b/w + r′ ∈ Z_p. We have

(g^{ab} g^{(wa+1)r}, g^r) = (g^{ab} g^{(wa+1)(−b/w+r′)}, g^{−b/w+r′}) = (g^{−b/w + war′ + r′}, g^{−b/w+r′}),

and thus the computed pair is the solution to the problem instance.
In the above structured problems, the integer r in the computed pair is also a
random number from the point of view of the adversary if r′ is secretly and randomly
chosen from Z_p. The randomness of r is extremely important for indistinguishable
simulation, introduced in Section 4.7.
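Structured Problem 3, for instance, can be checked concretely: the party below outputs a valid pair knowing only g, g^a, and w, by implicitly setting r = r′(a + w). A toy-group Python sketch (all parameters are illustrative assumptions):

```python
# Structured Problem 3 sketch (toy group): knowing only g, g^a and w, output a
# valid pair (g^{r/(a+w)}, g^r) by implicitly setting r = r'(a+w) for random r'.
import random

q, p = 107, 53                  # toy subgroup of Z_q^* with prime order p
g = 4                           # generator of the order-53 subgroup
a = 17                          # secret exponent; only g^a is available
ga = pow(g, a % p, q)
w = 9                           # public value, a + w != 0 mod p

rp = random.randrange(1, p)     # r', chosen by the party answering
first = pow(g, rp, q)                        # g^{r/(a+w)} = g^{r'}
second = pow(ga * pow(g, w, q) % q, rp, q)   # g^r = (g^a * g^w)^{r'}

# Verify against the definition using the secret a (test only).
r = rp * (a + w) % p
assert second == pow(g, r, q)
assert first == pow(g, r * pow(a + w, -1, p) % p, q)
print("valid pair produced without knowing a")
```

Note that r = r′(a + w) is uniform over Z_p whenever r′ is, which is exactly the randomness property stressed above.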
because this equation holds if and only if Z is true. Here, e(g, g)^{F(a)} is computable
as has been explained in Polynomial Problem 6.
A decisional problem generated with a security parameter λ is hard if, given as input
a problem instance whose target is Z, the advantage of returning a correct guess in
polynomial time is a negligible function of λ , denoted by ε(λ ) (ε for short),
ε = |Pr[Guess Z = True | Z = True] − Pr[Guess Z = True | Z = False]|,

where

• Pr[Guess Z = True | Z = True] is the probability of correctly guessing Z if Z is true.
unless there exists an analysis. Here, we introduce three popular methods for the
hardness analysis.
• The first method is by reduction. Suppose there exists an efficient solution algo-
rithm that can solve a new hard problem, denoted by A. We construct a reduction
algorithm that transforms a random instance of an existing hard problem, de-
noted by B, into an instance of the proposed problem A, such that a solution
to the problem instance of A implies a solution to the problem instance of B.
Since the problem B is hard, the assumption that the new hard problem A is easy
is false. Therefore, the problem A is hard without any computationally efficient
solution algorithm.
For example, let the variant DDH problem be a new problem; we want to reduce
its hardness to the DDH problem. Given a random instance (g, g^a, g^b, Z) of the
DDH problem, we randomly choose z and generate an instance of the variant
DDH problem as

(g, g^z, g^b, g^{az}, Z).

We have that Z is true in the variant DDH problem if and only if Z = g^{ab}, which
is also true in the DDH problem. Therefore, the solution to the variant DDH
problem instance is the solution to the DDH problem instance, and the variant
DDH problem is not easier than the DDH problem. This reduction seems to be
the same as a security reduction from breaking a proposed scheme to solving an
underlying hard problem. However, this reduction is static and much easier than
the security reduction. The reasons will be explained in Section 4.5.
• The second method is by membership proof. Suppose there exists a general prob-
lem that has been proved hard without any computationally efficient solution al-
gorithm. We only need to prove that the new hard problem is a particular case of
this general hard problem.
For example, the decisional (P, Q, f )-GDHE problem is a general hard problem,
and the decisional ( f , g, F)-GDDHE problem is a specific problem. We only need
to prove that the decisional ( f , g, F)-GDDHE problem is a member of the deci-
sional (P, Q, f )-GDHE problem.
• The third method is by intractability analysis in the generic group model. In
this model, an adversary is only given a randomly chosen encoding of a group,
instead of a specific group. Roughly speaking, the adversary cannot perform any
group operation directly and must query all operations to an oracle, where only
basic group operations are allowed to be queried. We analyze that the adversary
cannot solve the hard problem under such an oracle. For example, the decisional
(P, Q, f )-GDHE problem was analyzed in the generic group model in [22].
The methods mentioned above for hardness analysis are just used to convince us
that a new hard problem is at least as hard as an existing hard problem, or that a new
hard problem is hard under ideal conditions. The first two methods are much easier
for the beginner than the third one. Note that the third method is only suitable for
group-based hard problems.
4.3 An Overview of Security Reduction
All hardness assumptions can be classified into weak assumptions and strong as-
sumptions, but the classification is not very precise.
• Weak assumptions over the group-based mathematical primitive are those hard
problems, such as the CDH problem, whose security levels are very close to
the DL problem. The security level is only associated with the input security
parameter for the generation of the underlying mathematical primitive. A weak
assumption is also regarded as a standard assumption.
• Strong assumptions over the group-based mathematical primitive are those hard
problems, such as the q-SDH problem, whose security levels are lower than the
DL problem. The security level is not only associated with the input security
parameter for the generation of the underlying mathematical primitive, but also
other parameters, such as the size of each problem instance.
Here, “weak” means that the time cost of breaking a hardness assumption is much
greater than that for “strong.” The word “weak” is better than the word “strong” in
hardness assumptions, because it is harder to break a weak assumption than to break
a strong assumption. A strong assumption means that the hardness assumption is
relatively risky and unreliable. Weak assumption and strong assumption are two
concepts used to judge whether an underlying hardness assumption for a proposed
scheme is good or not.
Security reduction was invented to prove that breaking a proposed scheme implies
solving a mathematical hard problem. In this section, we describe how a security
reduction works and explain some important concepts in security reduction.
When we propose a scheme for a cryptosystem, we usually do not analyze the se-
curity of the proposed scheme against a list of attacks, such as the replay attack and
the collusion attack. Instead, we analyze the security of the proposed scheme in a se-
curity model. A security model can be seen as an abstraction of multiple attacks for a
cryptosystem. If a proposed scheme is secure in a security model, it is secure against
any attack that can be described and captured in this security model.
To model the security for a cryptosystem, a virtual party, called the challenger,
is invented to interact with an adversary. A security model can be seen as a game
(interactively) played between the challenger and the adversary. The challenger cre-
ates a scheme following the algorithm (definition) of the cryptosystem and knows
secrets, such as the secret key, while the adversary aims to break this scheme. A
security model mainly consists of the following definitions.
• What information the adversary can query.
• When the adversary can query information.
• How the adversary wins the game (breaks the scheme).
The security models for different cryptosystems might be entirely different, because
the security services are not the same.
We give an example to show that the security model named IND-ID-CPA for IBE
captures the collusion attack. The security model can be simply revisited as follows.
Setup. The challenger runs the setup algorithm of IBE, gives the master public key
to the adversary, and keeps the master secret key.
Phase 1. The adversary makes private-key queries in this phase. The challenger
responds to queries on any identity following the key generation algorithm of IBE.
Challenge. The adversary outputs two distinct messages m0 , m1 from the same mes-
sage space and an identity ID∗ to be challenged, whose private key has not been
queried in Phase 1. The challenger randomly flips a coin c ∈ {0, 1} and returns the
challenge ciphertext CT* = E[mpk, ID*, m_c] to the adversary.
Phase 2. The challenger responds to private-key queries in the same way as in Phase
1 with the restriction that no private-key query is allowed on ID∗ .
Guess. The adversary outputs a guess c′ of c and wins the game if c′ = c.
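The game above can be sketched as a short program. The sketch below keeps the IBE algorithms abstract (setup, keygen, and encrypt are placeholders supplied by the caller, not a real scheme), and the demo "scheme" at the end is deliberately broken so that the adversary always wins:

```python
# Skeleton of the IND-ID-CPA game: Setup, Phase 1, Challenge, Phase 2, Guess.
import random

def ind_id_cpa_game(setup, keygen, encrypt, adversary):
    mpk, msk = setup()                                   # Setup
    queried = set()
    def key_oracle(identity):                            # Phase 1
        queried.add(identity)
        return keygen(msk, identity)
    m0, m1, target_id, state = adversary.choose(mpk, key_oracle)   # Challenge
    assert target_id not in queried                      # no trivial query
    c = random.randint(0, 1)
    ct = encrypt(mpk, target_id, (m0, m1)[c])
    def key_oracle2(identity):                           # Phase 2 restriction
        assert identity != target_id
        return key_oracle(identity)
    guess = adversary.guess(state, ct, key_oracle2)      # Guess
    return guess == c                                    # did the adversary win?

# Demo with a deliberately broken "scheme" (ciphertext = message) and an
# adversary that simply reads it: the adversary wins with probability 1.
class Adv:
    def choose(self, mpk, oracle):
        return 0, 1, "alice", None         # m0, m1, ID*, saved state
    def guess(self, state, ct, oracle):
        return ct                          # the ciphertext leaks the coin c

setup = lambda: ("mpk", "msk")
keygen = lambda msk, identity: ("key", identity)
encrypt = lambda mpk, identity, m: m
print(ind_id_cpa_game(setup, keygen, encrypt, Adv()))    # True
```

A scheme is IND-ID-CPA secure when every efficient adversary wins this game with probability only negligibly better than 1/2.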
The collusion attack on an IBE scheme is stated as follows. If a proposed IBE
scheme is insecure against the collusion attack, two users, namely ID1 , ID2 , can
together use their private keys dID1 , dID2 to decrypt a ciphertext CT created for a
third identity, namely ID3 . We now investigate whether or not the proposed scheme
is secure in the above security model. Following the security model, the adversary
can query ID1 , ID2 for private keys and set ID∗ = ID3 as the challenge identity.
If the proposed scheme is insecure against the collusion attack, the adversary can
always correctly guess the encrypted message and so win the game. Therefore, if
a proposed IBE scheme is secure in this security model, the proposed scheme is
secure against the collusion attack.
A correct security model definition for a cryptosystem requires that the adversary
cannot win the game in the security model. Otherwise, no matter how the scheme
is constructed, the adversary can win the game and then the scheme is insecure in
such a security model. To satisfy this requirement, the adversary must be prevented
from making any trivial query. For example, the adversary must be prevented from
querying the private key of ID∗ , which would allow the adversary to simply win
the game by running the decryption algorithm on the challenge ciphertext with the
private key of ID∗ . A security model is ideal and the best if the adversary can make
any queries at any time excluding those queries that will allow trivial attacks. A
proposed scheme provably secure in an ideal security model is more secure than a
scheme provably secure in other security models.
A cryptosystem might have more than one security model for the same security
service. These security models can be classified into the following two types.
• Weak Security Model. A security model is weak if the adversary is restricted in
its set of allowed queries or has to reveal some queries in advance to the chal-
lenger. For example, in the security model of IND-sID-CPA for identity-based
encryption, the adversary cannot make any decryption query and has to specify
the challenge identity before seeing the master public key.
• Strong Security Model. A security model is strong if the adversary is not re-
stricted in the queries it can make (except those queries allowing trivial attacks)
and the adversary does not need to reveal any query in advance to the challenger.
For example, in the security model of IND-ID-CCA for identity-based encryp-
tion, the adversary can make decryption queries on any ciphertext different from
the challenge ciphertext, and the adversary does not need to specify the challenge
identity before the challenge phase.
If a proposed scheme is secure in a strong security model, it indicates that it
has strong security. The word “strong” is better than the word “weak” in the secu-
rity model. Recall that these two words for hardness assumptions have the opposite
senses. The reader might find that some security models are regarded as standard se-
curity models. A standard security model is the security model that has been widely
accepted as a standard to define a security service for a cryptosystem. For example,
existential unforgeability against chosen-message attacks is the standard security
model for digital signatures. Note that a standard security model is not necessarily
the strongest security model for a cryptosystem.
The process from insecure to easy in the proof by contradiction is called security
reduction. A security reduction works if we can find a solution to a problem instance
of the mathematical hard problem with the help of the adversary’s attack. However,
security reduction cannot directly reduce the adversary’s attack on the proposed
scheme to solving an underlying hard problem. This is because the proposed scheme
and the problem instance are generated independently.
In the security reduction, the proposed scheme is replaced with a different but
well-prepared scheme, which is associated with a problem instance. We extract a
solution to the problem instance from the adversary’s attack on such a different but
well-prepared scheme to solve the mathematical problem. The core and difficulty of
the security reduction is to generate such a different but well-prepared scheme. In
the following introduction, we will introduce the following important concepts.
• The concepts of real scheme, challenger, and real attack associated with the
proposed scheme.
• The concepts of simulated scheme, simulator, and simulation associated with the
different but well-prepared scheme.
In a security reduction, both the real scheme and the simulated scheme are schemes.
However, their generation and application are completely different.
• A real scheme is a scheme generated with a security parameter following the
scheme algorithm described in the proposed scheme. A real scheme can be seen
as a specific instantiation of the proposed scheme (algorithm). When the adver-
sary interacts with the real scheme following the defined security model, we as-
sume that the adversary can break this scheme. For simplicity, we can view the
proposed scheme as the real scheme.
• A simulated scheme is a scheme generated with a random instance of an under-
lying hard problem following the reduction algorithm. In the security reduction,
we want the adversary to interact with such a simulated scheme and break it with
the same advantage as that of breaking the real scheme.
When the adversary interacts with a scheme, the scheme needs to respond to queries
made by the adversary. To easily distinguish between the interaction with a real
scheme and the interaction with a simulated scheme, two virtual parties, called the
challenger and the simulator, are adopted.
• When the adversary interacts with a real scheme, we say that the adversary is
interacting with the challenger, who creates a real scheme and responds to queries
from the adversary. The challenger only appears in the security model and in the
security description where the adversary needs to interact with a real scheme.
• When the adversary interacts with a simulated scheme, we say that the adversary
is interacting with the simulator, who creates a simulated scheme and responds to
queries from the adversary. The simulator only appears in the security reduction
and is the party who runs the reduction algorithm.
These two parties appear in different circumstances (i.e., security model and se-
curity reduction) and perform different computations because the challenger runs
the real scheme while the simulator runs the simulated scheme. We can even de-
scribe the interaction between the adversary and the scheme without mentioning the
entity who runs the scheme.
In a security reduction, to make sure that the adversary is able to break the simulated
scheme with the advantage defined in the breaking assumption, we always need
to prove that the simulation is indistinguishable from the real attack (on the real
scheme). The concepts of real attack and simulation can be further explained as
follows.
• The real attack is the interaction between the adversary and the challenger, who
runs the real (proposed) scheme following the security model.
• The simulation is the interaction between the adversary and the simulator, who
runs the simulated scheme following the reduction algorithm. Simulation is a part
of security reduction.
If the simulation is indistinguishable from the real attack, the adversary cannot tell
whether the scheme it is interacting with is a real scheme or a simulated scheme.
That is, the simulated scheme is indistinguishable from the real scheme from the
point of view of the adversary. In this book, the indistinguishability between the
simulation and the real attack is equivalent to that between the simulated scheme
and the real scheme.
When the adversary is asked to interact with a given scheme, we stress that the
given scheme can be a real scheme or a simulated scheme, or can be neither the real
scheme nor the simulated scheme. In the breaking assumption, we assume that the
adversary is able to break the real scheme, but we cannot directly use this assump-
tion to deduce that the adversary will also break the simulated scheme, unless the
simulated scheme is indistinguishable from the real scheme.
Suppose there exists an adversary who can break a proposed scheme in polynomial
time t with non-negligible advantage ε. Generally speaking, in the security reduc-
tion, we will construct a simulator to solve an underlying hard problem with (t 0 , ε 0 )
defined as follows:
t′ = t + T,    ε′ = ε/L.
• T is referred to as the reduction cost, which is also known as the time cost. The
size of T is mainly dependent on the number of queries from the adversary and
the computation cost for a response to each query.
• L is referred to as the reduction loss, also called the security loss or loss factor.
The size of L is dependent on the proposed security reduction. The minimum
loss factor is 1, which means that there is no loss in the reduction. Many proposed
schemes in the literature have loss factors that are linear in the number of queries,
such as signature queries or hash queries.
In a security reduction, solving an underlying hard problem with (t 0 , ε 0 ) is ac-
ceptable as long as T and L are polynomial, because this means that we can solve
the underlying hard problem in polynomial time with non-negligible advantage. If
one of them is exponentially large, the security reduction will fail as there is no
contradiction.
In a security reduction, loose reduction and tight reduction are two concepts intro-
duced to measure the reduction loss.
The security level of the proposed scheme is at most t/ε. Firstly, the upper bound
security level of the scheme is 80 bits. Secondly, since the security level of the
underlying hard problem B is 60 bits, we have

2^{15} · (t/ε) = (2^5 · t) / (ε/2^{10}) ≥ 2^{60}.

Thus, we obtain t/ε ≥ 2^{45}, and the lower bound security level of the scheme is 45
bits. Therefore, the range of the proposed scheme’s security level in bits is [45, 80].
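Assuming the numbers in the deduction above (problem security 2^60, reduction time expansion 2^5, and reduction loss 2^10), the bounds can be computed as follows:

```python
# Concrete-security arithmetic: t'/eps' = (2^expansion * t) * 2^loss / eps
# must be at least 2^problem_bits, so t/eps >= 2^(problem - expansion - loss).
def provable_lower_bound(problem_bits, time_expansion_bits, loss_bits):
    return problem_bits - time_expansion_bits - loss_bits

lower = provable_lower_bound(60, 5, 10)   # 60 - 5 - 10
upper = 80                # assumed bound on any adversary's running time
print([lower, upper])     # [45, 80]
```

The same arithmetic shows why tightening the reduction (smaller expansion and loss exponents) or strengthening the underlying problem raises the provable lower bound.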
The lower bound security level of the proposed scheme is not 60 but 45 due to
the reduction cost and the reduction loss. To make sure that the security level of the
proposed scheme is at least 80 bits from the above deduction, we have the following
two methods.
• We program a security reduction for the proposed scheme S under the discrete
logarithm assumption without any reduction cost or reduction loss. That is, the
quality of the security reduction from the proposed scheme to the discrete loga-
rithm problem is perfect. However, few schemes in the literature can be tightly
reduced to the discrete logarithm assumption.
• We generate the group G with a larger security parameter such that the underlying
hard problem B has 95-bit security, and then the lower bound security level of the
scheme is 80 bits. This solution works with the tradeoff that we have to increase the length of the group representation, which will decrease the computational efficiency of group operations.
The security range [45, 80] does not mean that there must exist an attack algorithm that can break the scheme in 2^45 steps. It only states that 45-bit security is
the provable lower bound security level. Whether the provable lower bound security
level can be increased or not is unknown and is dependent on the security reduction
that we can propose.
We emphasize that in a real security reduction, it is actually very hard to calculate
the lower bound security level because the reduction cost is t + T, and we cannot calculate the security level t/ε from (t + T, ε/L). The above argument and discussion
are artificial and only given to help the reader understand the concrete security of
the proposed scheme. However, it is always correct that the underlying hardness as-
sumption should be as weak as the discrete logarithm assumption and (T, L) should
be as small as possible.
An ideal security reduction is the best security reduction that we can program for a
proposed scheme. It should capture the following four features.
• Security Model. The security model should be the strongest security model that
allows the adversary to maximally, flexibly, and adaptively make queries to the
challenger and win the game with a minimum requirement.
• Hard Problem. The underlying hard problem adopted for the security reduction
must be the hardest one among all hard problems defined over the same math-
ematical primitive. For example, the discrete logarithm problem is the hardest
problem among all problems defined over a group.
• Reduction Cost and Reduction Loss. The reduction cost T and the reduction
loss L are the minimized values. That is, T is linear in the number of queries
made by the adversary and L = 1.
• Computational Restrictions on Adversary. There is no computational restric-
tion on the adversary except time and advantage. For example, the adversary is
allowed to access a hash function by itself. However, in the random oracle model,
the adversary is not allowed to access a hash function but has to query a random
oracle instead.
Unfortunately, an inherent tradeoff among these features is very common in all se-
curity reductions proposed in the literature. For example, we can construct an ef-
ficient signature scheme whose security is under a weak hardness assumption, but
the security reduction must use random oracles. We can also construct a signature
scheme without random oracles in the security reduction, but it is accompanied by a strong assumption or a long public key. Currently, it seems technically impossible
to construct a scheme with an ideal security reduction satisfying all four features
mentioned above.
Suppose Bob has constructed a scheme along with its security reduction. In the given
security reduction, Bob proves that if there exists an adversary who can break his
scheme in polynomial time with non-negligible advantage, he can construct a sim-
ulator to solve an underlying hard problem in polynomial time with non-negligible
advantage. Therefore, Bob has shown by contradiction that there exists no adversary
who can break his proposed scheme, since the hard problem cannot be solved. Now,
we have the following question:
How can Bob convince us that his security reduction is truly correct?
In this book, successful simulation and indistinguishable simulation are two differ-
ent concepts. They are explained as follows.
• Successful Simulation. A simulation is successful from the point of view of the
simulator if the simulator does not abort in the simulation while interacting with
the adversary. The simulator makes the decision to abort the simulation or not
according to the reduction algorithm. We assume that the adversary cannot abort
the attack before the simulation is completed. In this book, a simulation refers to
a successful simulation unless specified otherwise.
• Indistinguishable Simulation. A successful simulation is indistinguishable from
the real attack if the adversary cannot distinguish the simulated scheme from the
real scheme. An unsuccessful simulation must be distinguishable from the real at-
tack. Whether a (successful) simulation is distinguishable or indistinguishable is
judged by the adversary. An indistinguishable simulation is desirable, especially
when we want the adversary to break the simulated scheme with the advantage
defined in the breaking assumption.
We emphasize that a correct security reduction might fail with a certain proba-
bility to generate a successful simulation. In this book, a successful simulation only
means that the simulator does not abort in the simulation. That is, in a success-
ful simulation, the simulator’s responses to the adversary’s queries might even be
incorrect. However, to simplify the proof, the reduction algorithm should tell the
simulator to abort if queries from the adversary cannot be correctly answered. For
example, in the security reduction for a digital signature scheme, the simulator must
abort if it cannot compute valid signatures on queried messages for the adversary.
However, even with such an assumption, a successful simulation does not mean that
the simulation is indistinguishable from the real attack. In Section 4.5.11, we discuss
how the adversary can distinguish the simulated scheme from the real scheme.
We define a failed attack and a successful attack in order to clarify the adversary’s
attack on the simulated scheme.
• Failed Attack. An attack by the adversary fails if the attack cannot break the
simulated scheme following the security model. Any output such as an error
symbol ⊥, a random string, a wrong answer, or an abort from the adversary is a
failed attack.
• Successful Attack. An attack by the adversary is successful if the attack can
break the simulated scheme following the security model. In this book, an attack
refers to a successful attack unless specified otherwise.
We define these two types of attacks to simplify the description of reduction. In
particular, there is no abort from the adversary at the end of the simulation. Any
output that is not a successful attack is treated as a failed attack. The simulator may
abort during the simulation because it cannot generate a successful simulation. An
attack by the adversary is either failed or successful. If the adversary returns a failed attack, it is equivalent to the adversary returning a successful attack with probability 0. Therefore, at the end of the simulation, the adversary will launch a successful
attack with a certain probability.
4.4 An Overview of Correct Security Reduction 59
Suppose the adversary is given a simulated scheme. The adversary’s attack on the
simulated scheme can be classified into the following two types.
• Useless Attack. A useless attack is an attack by the adversary that cannot be
reduced to solving an underlying hard problem.
• Useful Attack. A useful attack is an attack by the adversary that can be reduced
to solving an underlying hard problem.
According to the above definitions, an attack by the adversary on the simulated
scheme must be either useless or useful. We emphasize that a failed attack can be
a useful attack and a successful attack can be a useless attack, depending on the
cryptosystem, proposed scheme, and its security reduction.
The concepts of successful security reduction and correct security reduction are
regarded as different in this book. They are explained as follows.
• Successful Security Reduction. We say that a security reduction is successful
if the simulation is successful and the adversary’s attack in the simulation is a
useful attack.
• Correct Security Reduction. We say that a security reduction for a proposed
scheme is correct if the advantage of solving an underlying hard problem us-
ing the adversary’s attack is non-negligible in polynomial time if the breaking
assumption holds.
A successful security reduction is desired in order to obtain a correct security
reduction. That is, a security reduction is correct if the security reduction can be
successful in solving an underlying hard problem in polynomial time with non-
negligible advantage.
In this section, we take a close look at the adversary, who is assumed to be able to
break the proposed scheme. It is important to understand which attack the adversary
can launch or will launch on the simulated scheme.
The breaking assumption states that there exists an adversary who can break the
proposed scheme in polynomial time with non-negligible advantage. There is no
restriction on the adversary except time and advantage. The adversary in the security
reduction is a black-box adversary. The most important property of a black box
is that what the adversary will query and which specific attack the adversary will
launch are not restricted and are unknown to the simulator.
For such a black-box adversary, we use adaptive attack to describe the black-box
adversary’s behavior. Adaptive attacks will be introduced in the next subsection. We
emphasize that the adversary in the security reduction is far more than a black-box
adversary. The reason will be explained soon after introducing the adaptive attack.
Let a be an integer chosen from the set {0, 1}. If a is randomly chosen, we have
    Pr[a = 0] = Pr[a = 1] = 1/2.
However, if a is adaptively chosen, the two probabilities Pr[a = 0] and Pr[a = 1] are
unknown. An adaptive attack is a specific attack where the adversary’s choices from
the given space are not uniformly distributed but based on an unknown probability
distribution.
An adaptive attack is composed of the following three parts in a security reduc-
tion between an adversary and a simulator. We take the security reduction for a
digital signature scheme in the EU-CMA security model as an example to explain
these three parts. Suppose the message space is {m1 , m2 , m3 , m4 , m5 } with five dis-
tinct messages and the adversary will first query the signatures of two messages
before forging a valid signature of a new message.
• What the adversary will query to the simulator is adaptive. We cannot claim
that a particular message, for example m3 , will be queried for its signature with
probability 2/5. Instead, the adversary will query the signature of message mi
with unknown probability.
• How the adversary will query to the simulator is adaptive. The adversary might
output two messages for signature queries at the same time or one at a time.
For the latter, the adversary decides the message for its first signature query.
Upon seeing the received signature, it will then decide the second message to be
queried.
• What the adversary will output for the simulator is adaptive. If the adversary
makes signature queries on the messages m3 and m4 , we cannot claim that the
forged signature will be on a random message m∗ from {m1, m2, m5}. Instead, the adversary will forge a signature of one of the messages from {m1, m2, m5} with unknown probabilities in [0, 1] satisfying

    Pr[m∗ = m1] + Pr[m∗ = m2] + Pr[m∗ = m5] = 1.
An adaptive attack is not just about how the adversary will query to the simula-
tor. All choices are also made adaptively by the adversary unless restricted in the
corresponding security model. For example, in a weak security model for digital
signatures, the adversary must forge the signature of a message m∗ designated by
the simulator. In this case, m∗ is not adaptively chosen by the adversary. There are
some security models, such as IND-sID-CPA for IBE, where the adversary needs
to output a challenge identity before seeing the master public key, but it can still
adaptively choose this identity from the identity space.
Suppose there are only two distinct attacks that can break the simulated scheme
in a security reduction. One attack is useful and the other is useless. Consider the
following question.

What is the probability that the adversary will launch the useful attack?
According to the description of the black-box adversary, we know that this prob-
ability is unknown due to the adaptive attack by the adversary. However, a correct
security reduction requires us to calculate the probability of returning a useful at-
tack. To solve this problem, we amplify the black-box adversary into a malicious
adversary and consider the maximum probability of returning a useless attack by the
malicious adversary.
The malicious adversary is still a black-box adversary who will launch an adap-
tive attack. However, the malicious adversary will try its best to launch a useless
attack unless the adversary does not know how to, as long as the useless attack does
not contradict the breaking assumption. If the maximum probability of returning a
useless attack is not the overwhelming probability 1, this means that the probabil-
ity of returning a useful attack is non-negligible, and thus the security reduction is
correct. If a security reduction works against such a malicious adversary, the security reduction definitely works against any adversary who can break the proposed
scheme. The reason is that this maximum probability is the biggest likelihood that
all adversaries can make the attack useless. From now on, an adversary refers to a
malicious adversary unless specified otherwise.
To help the reader better understand the meaning of the malicious adversary when
the simulation is indistinguishable, we create the following toy game to explain the
difficulty of security reduction. In this toy game,
• The simulator generates the simulated scheme with a random b ∈ {0, 1}.
• The adversary adaptively chooses a ∈ {0, 1} as an attack.
• The adversary’s attack is useful if and only if a ≠ b.
In the security reduction, a can be seen as the adaptive attack launched by the
adversary where both a = 0 and a = 1 can break the scheme, and b can be seen as
the secret information in the simulated scheme. In the simulation, all the parameters
given to the adversary may include the secret information about how to launch a
useless attack. The malicious adversary intends to make this attack useless. It will
try to guess b from the simulated scheme and then output an attack a in such a way
that Pr[a = b] = 1.
Security reduction is hard because we must program the simulation in such a way
that the adversary does not know how to launch a useless attack. In the correctness
analysis of the security reduction, the probability Pr[a ≠ b] must be non-negligible. To achieve this, b must be random and independent of all the parameters given to the adversary, so that the adversary can only correctly guess b with probability 1/2. In this case, we will have Pr[a ≠ b] = 1/2 even though a is adaptively chosen by the adversary. The corresponding probability analysis will be given in Section 4.6.4.
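A quick simulation of the toy game illustrates this point. The two adversary strategies below are hypothetical, not from the text; any strategy that cannot see b wins only half the time when b is uniform and independent of the adversary's view:

```python
import random

def toy_game(adversary, trials=100_000):
    """Estimate Pr[a != b] when the secret bit b is uniform and
    independent of everything the adversary sees."""
    useful = 0
    for _ in range(trials):
        b = random.randrange(2)   # secret bit hidden in the simulated scheme
        a = adversary()           # adaptive choice, but made without seeing b
        useful += (a != b)
    return useful / trials

# Two hypothetical strategies: a constant guess and a heavily biased one.
always_zero = lambda: 0
biased = lambda: 0 if random.random() < 0.9 else 1
print(round(toy_game(always_zero), 2), round(toy_game(biased), 2))
```

Both estimates come out near 1/2: independence of b, not the adversary's strategy, is what pins down the probability of a useful attack.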
attack with such malicious probability does not contradict the breaking assump-
tion, because the simulated scheme is different from the real scheme.
The first case is very straightforward, but the second case is a little complex,
because it varies and depends on the security reduction. The probability P∗ is quite
different in the security reductions for digital signatures and for encryption. The
details can be found in Section 4.9 and Section 4.10.
All public-key schemes are only computationally secure. If there exists an adver-
sary who has unbounded computational power and can solve all computational hard
problems, it can definitely break any proposed scheme in polynomial time with non-
negligible advantage. For example, the adversary can use its unbounded computa-
tional power to solve the DL problem, where the discrete logarithm can be applied
to break all group-based schemes. Therefore, any proposed scheme is only secure
against an adversary with bounded computational power.
An inherent interesting question is to explore which problems the adversary can-
not solve when analyzing the security of a proposed scheme. A compact theorem
template for claiming that a proposed scheme is provably secure (without mention-
ing its security model) can be stated as follows.
Theorem 4.5.6.1 If the mathematical problem P is hard, the proposed scheme is
secure and there exists no adversary who can break the proposed scheme in polyno-
mial time with non-negligible advantage.
The above theorem states that the proposed scheme is secure under the hardness
assumption of problem P. It seems that the adversary’s computational ability should
be bounded in solving the hard problem P. That is, we only prove that the proposed
scheme is secure against an adversary who cannot solve the hard problem P or other
problems harder than P. This proof strategy is acceptable because we assume that
the problem P is hard, and hence do not care whether the proposed scheme is secure
or not against an adversary who can solve the problem P.
However, in the security reduction, the adversary’s computational ability is taken
to be unbounded to simplify the correctness analysis. Roughly speaking, the pro-
posed scheme is only secure against a computationally bounded adversary, but the
corresponding security reduction should even work against a computationally un-
bounded adversary. The reason will be given in the next subsection.
sion). This piece of information tells the adversary how to generate a useless attack
on the simulated scheme. For example, b in the toy game of Section 4.5.4 is the
secret information, which should be unknown to the adversary. Other examples can
be found at the end of this chapter. The simulator must hide this secret informa-
tion. Otherwise, once the malicious adversary obtains the information I from the
given parameters in the simulation, it will always launch a useless attack. Here,
given parameters are all the information that the adversary knows from the simu-
lated scheme, such as public key and signatures.
The simplest way to hide the information I is for the simulator never to reveal the
information I to the adversary. Unfortunately, we found that all security reductions
in the literature have to respond to queries where responses include the information
I. To make sure that the adversary does not know the information I, the simulator
must use some hard problems to hide it. Let P be the underlying hard problem in
the security reduction. There are three different methods for the simulator to hide
the information I in the security reduction.
• The simulator programs the security reduction in such a way that the information
I is hidden by a set of new hard problems P1 , P2 , · · · , Pq . The correctness of the
security reduction requires that these new hard problems are not easier than the
problem P. Otherwise, the adversary can solve them to obtain the information
I. To achieve this, we must prove that the information I can only be obtained
by solving these hard problems, which are not easier than the underlying hard
problem P. This method is challenging and impractical because we may not be
able to reduce the hard problems P1 , P2 , · · · , Pq to the underlying hard problem P.
• The simulator programs the security reduction in such a way that the information
I is hidden by the problem P. This method works because we assume that it is
hard for the adversary to solve the problem P. However, how to use the problem
P to hide the information I from the adversary is challenging. For example, P
is the CDH problem. Suppose the information I is hidden with g^{ab} where the adversary knows (g, g^a, g^b). The simulator should not provide additional group elements, such as the group element g^{a^2 b}, to the adversary. Otherwise, it is no longer a CDH problem.
• The simulator programs the security reduction in such a way that the information
I is hidden with some absolutely hard problems that cannot be solved, even if
the adversary has unbounded computational power. This method is very efficient
because those absolutely hard problems can be universally used and independent
of the underlying hard problem P. In this case, we only need to prove that the
information I is hidden with absolutely hard problems.
This book only introduces security reductions where the secret information is
hidden with the third method. Therefore, the adversary can be a computationally
unbounded adversary. This method is sufficient, but not necessary. However, it pro-
vides the most efficient method of analyzing the correctness of a security reduction.
We note that the third method was adopted in most security reductions proposed in
the literature.
A security reduction starts with the breaking assumption that there exists an adver-
sary who can break the proposed scheme in polynomial time with non-negligible
advantage. Then, we construct a simulator to generate a simulated scheme and use
the adversary’s attack to solve an underlying hard problem. According to the previ-
ous explanations, the adversary in the security reduction is summarized as follows.
• The adversary has unbounded computational power in solving all computational
hard problems defined over the adopted mathematical primitive and breaking the
simulated scheme.
• The adversary will maliciously try its best to launch a useless attack to break the
simulated scheme and make the security reduction fail.
We assume that the adversary has unbounded computational power, but we can-
not directly ask the adversary to solve an underlying hard problem for us. All that
the adversary will do for us is to launch a successful attack on a scheme that looks
like a real scheme. This is what the adversary can do and will do in a security re-
duction. Now, it is time to understand what the adversary knows and never knows
in a security reduction.
We assume that the adversary knows the reduction algorithm. However, this does
not mean that the adversary knows all the secret parameters chosen in the simulated
scheme, although the simulated scheme is generated by the reduction algorithm.
There are some secrets that the adversary never knows. Otherwise, it is impossible
to obtain a successful security reduction.
There are three types of secrets that the adversary never knows.
• Random Numbers. The adversary does not know those random numbers (in-
cluding group elements) chosen by the simulator when the simulator generates
the simulated scheme unless they can be computed by the adversary. For example, if the simulator randomly chooses two secret numbers x, y ∈ Z_p, we assume that they are unknown to the adversary. However, once (g, g^{x+y}) is given to the adversary, the adversary knows x + y according to the previous subsection.
• Problem Instance. The adversary does not know the random instance of the
underlying hard problem given to the simulator. This assumption is desired to
simplify the proof of indistinguishability. For example, suppose Bob proposes
a scheme and a security reduction that shows that if there exists an adversary
who can break the scheme, the reduction can find the solution to the instance
(g, g^a) of the DL problem. In the security reduction, the adversary receives a key pair (g, g^α), which is equal to (g, g^a). Since the adversary knows that (g, g^a)
is a problem instance, it will immediately find out that the given scheme is a
simulated scheme and stop the attack.
• How to Solve an Absolutely Hard Problem. The adversary does not know how
to solve an absolutely hard problem, such as computing (x, y) from the group elements (g, g^{x+y}). Another example is to compute a pair (m∗, f(m∗)) when given (m1, m2, f(m1), f(m2)) for a distinct m∗ ∉ {m1, m2}, where f(x) ∈ Z_p[x] is a random polynomial of degree 2. Some absolutely hard problems will be introduced
in Section 4.7.6.
Roughly speaking, the adversary will utilize what it knows to launch a useless at-
tack on the simulated scheme, while the simulator should utilize what the adversary
never knows to force the adversary to launch a useful attack with non-negligible
probability.
In the security reduction, the adversary interacts with a given scheme that is the
simulated scheme. The adversary distinguishes the simulated scheme from the real
scheme by correctness and randomness, which are described as follows.
The way that the adversary launches a useless attack cannot be described here in de-
tail because it is highly dependent on the proposed scheme, the reduction algorithm,
and the underlying hard problem. Here, we only give a high-level overview of what
a useless attack looks like in digital signatures and encryption.
• Suppose a security reduction for a signature scheme uses a forged signature from
the adversary to solve an underlying hard problem. A useless attack is a special
forged signature for the simulated scheme that is valid and also computable by
the simulator, so that it cannot be used to solve an underlying hard problem.
• Suppose a security reduction for an encryption scheme uses the guess of the
encrypted message from the adversary to solve an underlying decisional hard
problem. A useless attack is a special way of guessing the encrypted message m_c such that the message in the challenge ciphertext can always be correctly guessed (c′ = c), no matter whether the target Z in the decisional problem instance is true or false.
The adversary can launch a useless attack because it knows the secret information
in the simulation. How to hide the secret information I from the adversary using
absolutely hard problems is an important step in a security reduction.
At the end of this section, we summarize the malicious adversary in a security re-
duction as follows.
• When the adversary is asked to interact with a given scheme, it considers this
scheme to be a simulated scheme. The adversary will use what it knows and
what it can query (following the defined security model) to find whether the
given scheme is indeed distinguishable from the real scheme or not.
• When the adversary finds out that the given scheme is a simulated scheme, the
adversary will launch a successful attack with malicious and adaptive probability
P∗ ∈ [0, 1]. The detailed probability P∗ is dependent on the security reduction.
• When the adversary cannot distinguish the simulation from the real attack, it
will launch a successful attack with probability Pε according to the breaking
assumption.
• Without contradicting the breaking assumption, the adversary will use what it
knows and what it receives to launch a useless attack on the given scheme.
In the security reduction, we prove that if there exists an adversary who can
break the proposed scheme, we can construct a simulator to solve an underlying
hard problem. To be more precise, a correct security reduction requires that even
if the attack on the simulated scheme is launched by a malicious adversary who
has unbounded computational power, the advantage of solving an underlying hard
problem is still non-negligible.
Probability is the measure of the likelihood that an event will occur. In a security
proof, this event is mainly about a successful attack on a scheme or a correct so-
lution to a problem instance. There are four important probability definitions for
digital signatures, encryption, computational problems, and decisional problems,
respectively.
• Digital Signatures. Let Pr[WinSig ] be the probability that the adversary success-
fully forges a valid signature. Obviously, this probability satisfies
0 ≤ Pr[WinSig ] ≤ 1.
• Encryption. Let Pr[WinEnc ] be the probability that the adversary correctly guesses
the message in the challenge ciphertext in the security model of indistinguisha-
bility. This probability satisfies
    1/2 ≤ Pr[WinEnc] ≤ 1.
The message in the challenge ciphertext is m_c where c ∈ {0, 1}, and the adversary will output 0 or 1 to guess c. Since c is randomly chosen by the challenger, no matter what the guess c′ is, the probability Pr[WinEnc] = Pr[c′ = c] is at least 1/2.
• Computational Problems. Let Pr[WinC ] be the probability of computing a cor-
rect solution to an instance of a computational problem. This probability satisfies
0 ≤ Pr[WinC ] ≤ 1.
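The lower bound 1/2 for encryption comes from blind guessing, which a short simulation (illustrative, not from the text) confirms:

```python
import random

def ind_game(guesser, trials=100_000):
    """Fraction of games in which the guess c' equals the challenger's
    random bit c, i.e. an estimate of Pr[WinEnc]."""
    wins = 0
    for _ in range(trials):
        c = random.randrange(2)   # challenger encrypts m_c for a random c
        wins += (guesser() == c)  # adversary outputs its guess c'
    return wins / trials

# A guesser that ignores the ciphertext entirely still wins about half the time.
print(ind_game(lambda: random.randrange(2)))
```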
In the above four definitions, the adversary refers to a general adversary who intends
to break a scheme or solve a problem. Note that this is not the adversary who can
break the scheme or solve the problem in the breaking assumption. Otherwise, the
probability cannot be, for example, Pr[WinSig ] = 0.
4.6 An Overview of Probability and Advantage 71
The minimum and maximum probabilities in the above four definitions are dif-
ferent. We cannot use a given probability to universally measure whether a proposed
scheme is secure or insecure and whether a problem is hard or easy. Due to the dif-
ference, advantage is defined in such a way that we can use the same measurement
to judge security/insecurity for schemes and hardness/easiness for problems.
In the advantage definition, we are not interested in how large the advantage
is, but only in two different results: Negligible and Non-negligible. Let ε be the
advantage of breaking a proposed scheme or solving a problem. If the advantage
is negligible, the proposed scheme is secure or the problem is hard. Otherwise, the
proposed scheme is insecure or the problem is easy. A precise non-negligible value
is not important because any non-negligible advantage indicates that the proposed
scheme is insecure or the problem is easy.
There is no standard definition of maximum advantage. For example, in the def-
inition of indistinguishability for encryption, some researchers prefer 1/2 as the maximum advantage while others prefer 1. In this book, we prefer 1 as the maximum
advantage for encryption to keep the consistency with digital signatures.
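For concreteness, the two conventions can be written side by side. The formulas below are the standard normalizations associated with these two choices of maximum advantage, stated here as an assumption since the chapter only names the conventions:

```python
def adv_max_half(p_win):
    """Advantage normalized so the maximum is 1/2."""
    return p_win - 0.5

def adv_max_one(p_win):
    """Advantage normalized so the maximum is 1 (the convention in this book)."""
    return 2 * p_win - 1

# A blind guesser (p = 1/2) has advantage 0 under both conventions;
# a perfect adversary (p = 1) has advantage 1/2 or 1 respectively.
print(adv_max_half(0.5), adv_max_one(0.5), adv_max_half(1.0), adv_max_one(1.0))
# -> 0.0 0.0 0.5 1.0
```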
The definitions of advantage for digital signatures, encryption, computational
problems, and decisional problems are described as follows.
ε = Pr[WinSig]  (digital signatures),
ε = Pr[WinC]  (computational problems).
That is, we have ε = 1. Otherwise, the decisional problem is absolutely hard, and
we have
    Pr[Guess Z = True | Z = True] = Pr[Guess Z = True | Z = False] = 1/2.
That is, we have ε = 0, which means that the probability of correctly/wrongly
guessing Z is the same. Therefore, the advantage ε is also within the range [0, 1].
The ranges in all the above advantage definitions are the same. If ε = 0, then the
scheme is absolutely secure or the problem is absolutely hard. If ε = 1, then the
scheme can be broken or the problem can be solved with success probability 1.
The reason why we strengthen a black-box adversary into a malicious adversary can
be explained as follows with probability and advantage.
• In a security reduction, we cannot calculate the probability of returning a useful
attack by a black-box adversary unless the adversary’s attack is always useful.
The reason is that the adversary’s attack is adaptive.
• In a security reduction, we can calculate the advantage of returning a useless
attack by a malicious adversary, who tries its best to make the security reduction
fail. If the advantage is 1, this means that the security reduction is incorrect.
We have to consider the advantage in the security reduction because it is hard or
impossible to make the attack always useful in a security reduction. Take the security
reduction for digital signatures as an example. In the EU-CMA security model, the
simulator must program the security reduction in such a way that some signatures
can be computed by the simulator in order to respond to signature queries. If the
adversary happens to choose one of these signatures as the forged signature, the
forged signature is a useless attack and thus the forged signature cannot always be a
useful attack.
Pr[a = 0] + Pr[a = 1] = 1.
The above two probability formulas are simple but they are the core of the prob-
ability analysis in all security reductions. We can only calculate the success proba-
bility associated with an adaptive attack by these two formulas.
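As a small illustration (the numeric values are ours, not the book's), a success probability that depends on an adaptive bit a splits by the law of total probability, together with Pr[a = 0] + Pr[a = 1] = 1:

```python
# Toy illustration of splitting a success probability over a bit a.
# The probabilities below are arbitrary toy values.
pa0, pa1 = 0.25, 0.75
assert pa0 + pa1 == 1.0            # Pr[a = 0] + Pr[a = 1] = 1

# Suppose the attack is useful only when a = 0 (a toy assumption).
ps_given_a0, ps_given_a1 = 1.0, 0.0

# Law of total probability over the two cases of a.
ps = ps_given_a0 * pa0 + ps_given_a1 * pa1
print(ps)   # 0.25
```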
Let ε denote the advantage of breaking a proposed scheme and εR denote the advan-
tage of solving an underlying hard problem by a security reduction. The concepts
of useless attack, useful attack, tight reduction, and loose reduction in a security
reduction can be explained as follows.
• Useless. The advantage εR is negligible. There exists an adversary who can break
the proposed scheme in polynomial time with non-negligible advantage ε, but the
advantage εR is negligible.
• Useful. The advantage εR is non-negligible. If there exists an adversary who can
break the proposed scheme in polynomial time with non-negligible advantage ε,
then the advantage εR is also non-negligible.
• Loose. The advantage εR is equal to ε/O(q), where q denotes the number of queries made by the adversary. In practice, the size of q can be as large as q = 2^30 or q = 2^60, depending on the definition of q. For example, q can denote the number of signature queries or the number of hash queries made by the adversary. The number q = 2^30 is derived based on the fact that a key pair can be used to generate up to 2^30 signatures, while the number q = 2^60 is derived based on the fact that the adversary can make up to 2^60 hash queries in polynomial time.
• Tight. The advantage εR is equal to ε/O(1), where O(1) is constant and independent of the number of queries. For example, a security reduction with O(1) = 2 is a tight reduction. When the reduction loss is related to the security parameter λ, we still call it a tight reduction although it is only almost tight.
In the above descriptions, we do not consider the time cost and assume that the
security reduction is completed in polynomial time. The above four concepts are
associated with the advantage only.
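The gap between tight and loose can be made concrete by counting bits of security lost; the following sketch (ours, with the parameter values quoted above) does exactly that:

```python
import math

def reduction_advantage(eps, loss):
    """Advantage eps_R = eps / loss remaining after a reduction with the given loss."""
    return eps / loss

eps = 1.0                                  # adversary's advantage against the scheme

tight = reduction_advantage(eps, 2)        # tight: loss O(1), e.g. 2
loose = reduction_advantage(eps, 2**30)    # loose: loss O(q) with q = 2^30 queries

# A reduction loss of L costs about log2(L) bits of concrete security.
print(math.log2(eps / tight))   # 1.0  bit lost
print(math.log2(eps / loose))   # 30.0 bits lost
```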
• Inequations.

Pr[A_1 ∨ A_2 ∨ · · · ∨ A_q] ≤ ∑_{i=1}^{q} Pr[A_i]          (6.11)

Pr[A_1 ∧ A_2 ∧ · · · ∧ A_q] ≥ ∏_{i=1}^{q} Pr[A_i]          (6.12)

Pr[A_1 ∨ A_2 ∨ · · · ∨ A_q] ≤ 1 − ∏_{i=1}^{q} Pr[A_i^c]    (6.13)

Pr[A_1 ∧ A_2 ∧ · · · ∧ A_q] ≥ 1 − ∑_{i=1}^{q} Pr[A_i^c]    (6.14)

Pr[A] ≥ Pr[A|B] · Pr[B]                                    (6.15)
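As a quick numeric sanity check (ours), these bounds can be evaluated for independent events, in which case (6.12) and (6.13) hold with equality:

```python
import math

# Probabilities of three independent events A_1, A_2, A_3 (arbitrary toy values).
p = [0.3, 0.5, 0.2]
pc = [1 - x for x in p]                    # Pr[A_i^c]

pr_or = 1 - math.prod(pc)                  # exact Pr[A_1 v A_2 v A_3] = 0.72
pr_and = math.prod(p)                      # exact Pr[A_1 ^ A_2 ^ A_3] = 0.03

assert pr_or <= sum(p)                     # (6.11): union bound, 0.72 <= 1.0
assert pr_and >= math.prod(p) - 1e-12      # (6.12): equality for independent events
assert pr_or <= 1 - math.prod(pc) + 1e-12  # (6.13): equality for independent events
assert pr_and >= 1 - sum(pc)               # (6.14): here 0.03 >= -1.0
print(pr_or, pr_and)
```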
• Conditional Equations.
Random numbers (including random group elements) are very common in con-
structing cryptographic schemes, such as digital signature schemes and encryption
schemes. Suppose each number in the set {A_1, A_2, · · ·, A_n} ⊆ Z_p is a random number. This means that each number is chosen randomly and independently from Z_p.
Let (A, B,C) be three random integers chosen from the space Z p . The concept of
random and independent can be explained as follows.
• Random. C is equal to any integer in Z_p with the same probability 1/p.
• Independent. C cannot be computed from A and B.
The concept of random and independent is applied in the security reduction as fol-
lows. Let A, B,C be random integers chosen from Z p . Suppose an adversary is only
given A and B. The adversary then has no advantage in guessing the integer C and
can only guess the integer C correctly with probability 1/p. If A, B are two integers
randomly chosen from the space Z p and C = A + B mod p, we still have that C is
equivalent to a random number chosen from Z p . However, A, B,C are not indepen-
dent, because C can be computed from A and B.
In a scheme construction and a security proof, when we say that A1 , A2 , · · · , Aq
are all randomly chosen from an exponentially large space, such as Z p , we assume
that they are all distinct. That is, A_i ≠ A_j for any i ≠ j. This assumption will simplify the probability analysis and the proof description. We also note that for some
proposed schemes in the literature, if they generate random numbers that are equal,
the proposed schemes will become insecure.
In a real scheme, suppose (A, B,C) are integers randomly chosen from Z p . However,
in the simulated scheme, (A, B,C) are generated from a function with other random
integers. For example, all integers A, B,C are simulated from a function with w, x, y, z
as input, where (w, x, y, z) are integers randomly chosen from Z p by the simulator
when running the reduction algorithm. We want to investigate whether the simulated
(A, B,C) are also random and independent from the point of view of the adversary.
If the simulated random integers are also random and independent, the simulated
scheme is indistinguishable from the real scheme from the point of view of the
random number generation.
We have the following simplified lemma for checking whether the simulated ran-
dom numbers (A, B,C) are also random and independent.
Lemma 4.7.1 Suppose a real scheme and a simulated scheme generate integers
(A, B,C) with different methods described as follows.
• In the real scheme, let (A, B,C) be three integers randomly chosen from Z p .
• In the simulated scheme, let (A, B,C) be computed by a function with random
integers (w, x, y, z) from Z p as the input to the function, denoted by (A, B,C) =
F(w, x, y, z).
Suppose the adversary knows the function F from the reduction algorithm but not
(w, x, y, z). The simulated scheme is indistinguishable from the real scheme if for any
given (A, B,C) from Z p , the number of solutions (w, x, y, z) satisfying (A, B,C) =
F(w, x, y, z) is the same. That is, any (A, B,C) from Z p will be generated with the
same probability in the simulated scheme.
It is not hard to verify the correctness of this lemma. We prove this lemma by arguing that any three given values (A, B, C) will appear with the same probability. Let ⟨w, x, y, z⟩ be a vector that represents one choice of random (w, x, y, z) from Z_p. There are p^4 different vectors in the vector space, and each vector will be chosen with the same probability 1/p^4. Suppose that for any (A, B, C), the number of ⟨w, x, y, z⟩ generating (A, B, C) via F is n, so the probability of choosing random (w, x, y, z) satisfying (A, B, C) = F(w, x, y, z) is n/p^4. Therefore, the simulated scheme is indistinguishable from the real scheme.
Consider the simulation of (A, B,C) from Z p with the following functions under
modular operations using random integers (w, x, y, z) from Z p .
( A, B, C ) = F(x, y) = ( x , y , x + y ) (7.19)
( A, B, C ) = F(x, y, z) = ( x , y , z + 3 ) (7.20)
( A, B, C ) = F(x, y, z) = ( x , y , z + 4 · xy ) (7.21)
( A, B, C ) = F(w, x, y, z) = ( x + w , y , z + w · x ) (7.22)
Setting (A, B, C) = F(x, y) in (7.19) yields the equations

x = A,  y = B,  x + y = C.

If the given (A, B, C) satisfies A + B = C, the function has one solution ⟨x, y⟩ = ⟨A, B⟩. Otherwise, there is no solution. Therefore, the simulated A, B, C are not random and independent. To be precise, C can be computed from A + B.
Setting (A, B, C) = F(x, y, z) in (7.20) yields the equations

x = A,  y = B,  z + 3 = C.

For any given (A, B, C), the function has one solution ⟨x, y, z⟩ = ⟨A, B, C − 3⟩, so the simulated A, B, C are random and independent.
Setting (A, B, C) = F(x, y, z) in (7.21) yields the equations

x = A,  y = B,  z + 4xy = C.

For any given (A, B, C), the function has one solution ⟨x, y, z⟩ = ⟨A, B, C − 4AB⟩, so the simulated A, B, C are random and independent.
Setting (A, B, C) = F(w, x, y, z) in (7.22) yields the equations

x + w = A,  y = B,  z + w · x = C.

For any given (A, B, C), the function has p different solutions

⟨w, A − w, B, C − w · (A − w)⟩,

where w can be any integer from Z_p. Therefore, A, B, C are random and independent.
The full lemma can be stated as follows. This lemma will frequently be used in the correctness analysis of the schemes in this book. We stress that when (A_1, A_2, · · ·, A_q) are randomly chosen from Z_p, we have that (g^{A_1}, g^{A_2}, · · ·, g^{A_q}) are random and independent in G. Therefore, in this book, the analysis of randomness and independence is associated with integers or exponents from Z_p only.
Lemma 4.7.2 Suppose a real scheme and a simulated scheme generate integers
(A1 , A2 , · · ·, Aq ) with different methods described as follows.
• In the real scheme, let (A1 , A2 , · · · , Aq ) be integers randomly chosen from Z p .
• In the simulated scheme, let (A_1, A_2, · · ·, A_q) be computed by a function with random integers (x_1, x_2, · · ·, x_{q′}) from Z_p as the input to the function, denoted by (A_1, A_2, · · ·, A_q) = F(x_1, x_2, · · ·, x_{q′}).

Suppose the adversary knows the function F from the reduction algorithm but not (x_1, x_2, · · ·, x_{q′}). The simulated scheme is indistinguishable from the real scheme if, for any (A_1, A_2, · · ·, A_q) from Z_p, the number of solutions (x_1, x_2, · · ·, x_{q′}) satisfying (A_1, A_2, · · ·, A_q) = F(x_1, x_2, · · ·, x_{q′}) is the same. That is, every (A_1, A_2, · · ·, A_q) from Z_p will be generated with the same probability in the simulated scheme.
We stress that if q′ < q, the simulated scheme is definitely distinguishable from the real scheme. For an indistinguishable simulation, the condition q′ ≥ q must hold, but this condition alone is not sufficient.
A general system of n linear equations (or linear system) over Z_p with n unknown secrets (x_1, x_2, · · ·, x_n) can be written as

a_11 x_1 + a_12 x_2 + · · · + a_1n x_n = y_1
a_21 x_1 + a_22 x_2 + · · · + a_2n x_n = y_2
          ⋮
a_n1 x_1 + a_n2 x_2 + · · · + a_nn x_n = y_n,

where the a_ij are the coefficients of the system, and y_1, y_2, · · ·, y_n are constant terms from Z_p. We define A as the coefficient matrix,

A = [ a_11 a_12 a_13 · · · a_1n ]
    [ a_21 a_22 a_23 · · · a_2n ]
    [  ⋮    ⋮    ⋮   · · ·  ⋮  ]
    [ a_n1 a_n2 a_n3 · · · a_nn ]
We have the following lemma for the simulation of random numbers by a linear
system.
Lemma 4.7.3 Suppose a real scheme and a simulated scheme generate integers
(A1 , A2 , · · · , An ) with different methods described as follows.
• In the real scheme, let (A1 , A2 , · · · , An ) be n integers randomly chosen from Z p .
• In the simulated scheme, let (A_1, A_2, · · ·, A_n) be computed by

(A_1, A_2, · · ·, A_n)^T = A · X^T =
[ a_11 a_12 · · · a_1n ]   [ x_1 ]
[ a_21 a_22 · · · a_2n ] · [ x_2 ]  mod p,
[  ⋮    ⋮   · · ·  ⋮  ]   [  ⋮  ]
[ a_n1 a_n2 · · · a_nn ]   [ x_n ]

where X = (x_1, x_2, · · ·, x_n) consists of n integers randomly chosen from Z_p.
Suppose the adversary knows A but not X. If the determinant of A is nonzero, the
simulated scheme is indistinguishable from the real scheme.
According to our knowledge of linear systems, if |A| ≠ 0 there is only one solution ⟨x_1, x_2, · · ·, x_n⟩ for any given (A_1, A_2, · · ·, A_n). According to Lemma 4.7.2, the A_i are random and independent, so the simulated scheme is indistinguishable from the real scheme. If |A| = 0, the number of solutions can be zero or p depending on the given (A_1, A_2, · · ·, A_n). The A_i are then not random and independent, so the simulated scheme is distinguishable from the real scheme.
Consider the simulation of (A1 , A2 , A3 ) from Z p with the following functions
using random integers (x1 , x2 , x3 ) from Z p .
x1 + 3x2 + 3x3 = A1 ,
x1 + x2 + x3 = A2 ,
3x1 + 5x2 + 5x3 = A3 .
The determinant of the coefficient matrix is

| 1 3 3 |
| 1 1 1 | = 0,
| 3 5 5 |

so the simulated (A_1, A_2, A_3) are not random and independent.
x1 + 3x2 + 3x3 = A1 ,
2x1 + 3x2 + 5x3 = A2 ,
9x1 + 5x2 + 2x3 = A3 .
The determinant of the coefficient matrix is

| 1 3 3 |
| 2 3 5 | = 53 ≠ 0,
| 9 5 2 |

so the simulated (A_1, A_2, A_3) are random and independent.
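Both determinants are easy to reproduce; the sketch below (ours, using plain cofactor expansion) evaluates them over the integers, which suffices for any prime p not dividing the result:

```python
def det3(m):
    """Determinant of a 3x3 integer matrix by cofactor expansion."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A1 = [[1, 3, 3], [1, 1, 1], [3, 5, 5]]   # first linear system above
A2 = [[1, 3, 3], [2, 3, 5], [9, 5, 2]]   # second linear system above

print(det3(A1))  # 0  -> the A_i are not random and independent
print(det3(A2))  # 53 -> nonzero mod p for any prime p != 53
```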
where there are q coefficients, and all coefficients ai are randomly chosen from
Z p . We have the following lemma for the simulation of random numbers by the
polynomial function f (x).
Lemma 4.7.5 Suppose a real scheme and a simulated scheme generate integers
(A1 , A2 , · · · , An ) with different methods described as follows.
• In the real scheme, let (A1 , A2 , · · · , An ) be n integers randomly chosen from Z p .
• In the simulated scheme, let (A1 , A2 , · · · , An ) be computed by
The correctness of the security reduction requires that the simulated scheme is
indistinguishable from the real scheme, and thus (A1 , A2 , · · · , An ) must be random
and independent.
• The adversary outputs an attack and this attack on the simulated scheme is useless
if the adversary can compute
A∗ = F ∗ (x1 , x2 , · · · , xq ).
The correctness of the security reduction requires that the adversary cannot com-
pute A∗ except with negligible probability.
In the security reduction, we do not need to prove an indistinguishable simulation
and a useful attack separately. Instead, we only need to prove that
(A_1, · · ·, A_n, A^*) = (F_1(x_1, x_2, · · ·, x_q), · · ·, F_n(x_1, x_2, · · ·, x_q), F^*(x_1, x_2, · · ·, x_q))
We give several absolutely hard problems and introduce the advantage and the prob-
ability of solving these problems by an adversary who has unbounded computational
power. These examples are summarized from existing security reductions in the lit-
erature.
• Suppose (a, Z, c, x) satisfies Z = ac + x mod p, where a, x ∈ Z p and c ∈ {0, 1} are
randomly chosen. Given (a, Z), the adversary has no advantage in distinguishing
whether Z is computed from either a · 0 + x or a · 1 + x except with probability
1/2. The reason is that a, Z, c are random and independent.
• Suppose (a, Z1 , Z2 , · · · , Zn−1 , Zn , x1 , x2 , · · · , xn ) satisfies Zi = a + xi mod p, where
a, xi for all i ∈ [1, n] are randomly chosen from Z p . Given (a, Z1 , Z2 , · · · , Zn−1 ),
the adversary has no advantage in computing Zn = a + xn except with probability
1/p. The reason is that a, Z1 , Z2 , · · · , Zn are random and independent.
• Suppose (f(x), Z_1, Z_2, · · ·, Z_n, x_1, x_2, · · ·, x_n) satisfies Z_i = f(x_i), where f(x) ∈ Z_p[x] is an n-degree polynomial whose coefficients are randomly chosen from Z_p. Given (Z_1, Z_2, · · ·, Z_n, x_1, x_2, · · ·, x_n), the adversary has no advantage in computing a pair (x^*, f(x^*)) for a new x^* different from all x_i except with probability 1/p. The reason is that Z_1, Z_2, · · ·, Z_n, f(x^*) are random and independent.
• Suppose (A, Z_1, Z_2, · · ·, Z_{n−1}, Z_n, x_1, x_2, · · ·, x_n) satisfies |A| ≠ 0 mod p and Z_i is computed by Z_i = ∑_{j=1}^{n} a_{i,j} x_j mod p, where A is an n × n matrix whose elements are from Z_p, and x_j for all j ∈ [1, n] are randomly chosen from Z_p. Given (A, Z_1, Z_2, · · ·, Z_{n−1}), the adversary has no advantage in computing Z_n = ∑_{j=1}^{n} a_{n,j} x_j except with probability 1/p. The reason is that Z_1, Z_2, · · ·, Z_n are random and independent.
• Suppose (g, h, Z, x, y) satisfies Z = g^x h^y, where x, y ∈ Z_p are randomly chosen. Given (g, h, Z) ∈ G, the adversary has no advantage in computing (x, y) except with probability 1/p. Once the adversary finds x, it can immediately compute y from Z. However, g, h, Z, x are random and independent.
• Suppose (g, h, Z, x, c) satisfies Z = g^x h^c, where x ∈ Z_p and c ∈ {0, 1} are randomly chosen. Given (g, h, Z) ∈ G, the adversary has no advantage in distinguishing whether Z is computed from either g^x h^0 or g^x h^1, except with probability 1/2. The reason is that g, h, Z, c are random and independent.
In real security reductions, if the adversary has advantage 1 in computing the targets in the above examples, it can always launch a useless attack. More absolutely hard problems can be found in the examples given in this book.
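The first example above can be verified exhaustively for a toy prime: every observable pair (a, Z) is consistent with exactly one x for c = 0 and exactly one for c = 1, so even an unbounded adversary can only guess c with probability 1/2 (a sketch of ours):

```python
from itertools import product

p = 11  # toy prime

# Z = a*c + x mod p with a, x random in Z_p and c a random bit.
# For every observable pair (a, Z), count the values of x consistent
# with each choice of the hidden bit c.
for a, Z in product(range(p), repeat=2):
    n0 = sum(1 for x in range(p) if (a * 0 + x) % p == Z)
    n1 = sum(1 for x in range(p) if (a * 1 + x) % p == Z)
    assert n0 == n1 == 1   # both bits equally consistent -> guess with prob 1/2

print("every (a, Z) hides the bit c perfectly")
```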
A security proof in the random oracle model [14] (namely with random oracles) for a proposed scheme means that at least one hash function in the proposed scheme is treated as a random oracle. This section introduces how to use random oracles in security reductions.
Consider a proposed scheme that uses a hash function H, denoted by the combination

Scheme + H.
Roughly speaking, a security proof with random oracles for this scheme is the secu-
rity proof for the following combination
Scheme + O,
where the hash function H is set as a random oracle O. That is, we do not analyze
the security of Scheme + H but Scheme + O, and believe that Scheme + H is secure
if Scheme + O is secure. Notice that the random oracle O and a real hash function
H are treated differently in the security reduction. The security gap between these
two combinations was discussed in [85].
Proving the security of a proposed scheme in the random oracle model requires
at least one hash function to be set as a random oracle. However, it does not need all
hash functions to be set as random oracles. To keep consistency with the concept of
indistinguishable simulation, we assume that the real scheme in the random oracle
model refers to the combination Scheme + O. Otherwise, the adversary can imme-
diately distinguish the simulated scheme with random oracles from the real scheme
with hash functions.
The differences between hash functions and random oracles in security reductions
are summarized as follows.
• Knowledge. Given any arbitrary string x, if H is a hash function, the adversary
knows the function algorithm of H and so knows how to compute H(x). However,
if H is set as a random oracle, the adversary does not know H(x) unless it queries
x to the random oracle.
• Input. Hash functions and random oracles have the same input space, which is
dependent on the definition of the hash function. Although the number of inputs
to a hash function is exponential, the number of inputs to a random oracle is
polynomial. This is due to the fact that the random oracle only accepts queries in
polynomial time, and thus the random oracle only has a polynomial number of
inputs.
• Output. Hash functions and random oracles have the same output space, which
is dependent on the definition of the hash function. Given an input, the output
from a hash function is determined by the input and the hash function algorithm.
However, the output from a random oracle for an input is defined by the simu-
lator who controls the random oracle. The outputs from a hash function are not
required to be uniformly distributed, but the outputs from a random oracle must
be random and uniformly distributed.
• Representation. A hash function can be seen as a mapping from an input space
to an output space, where the mapping is calculated according to the hash func-
tion algorithm. A random oracle can be viewed as a virtual hash function that is
represented by a list composed of input and output only. The random oracle itself
does not have any rule or algorithm to define the mapping, as long as all outputs
are random and independent. See Table 4.1 for comparison.
Random oracles are very helpful for the simulator in programming security re-
ductions. The reason is that the simulator can control and select any output that looks
random and helps the simulator complete the simulation or force the adversary to
launch a useful attack. Security proofs in the random oracle model are, therefore,
believed to be much easier than those without random oracles.
In a security reduction with random oracles, hash queries and their responses from
an oracle look like a list as described in Table 4.1, where only inputs and outputs are
known to the adversary. The outputs can be adaptively computed by the simulator,
as long as they are random and independent. How to compute these outputs should
be recorded because they can be helpful for the simulator to program the security
reduction. Let x be a query, y be its response, and S be the secret state used to compute y. After the simulator responds to the query on x with y = H(x), the simulator
should add a tuple (x, y, S) to a hash list, denoted by L .
The hash list created by the simulator is composed of input, output, and the cor-
responding state S. This hash list should satisfy the following conditions.
• The hash list is empty at the beginning before any hash queries are made.
• All tuples associated with queries will be added to this hash list.
• The secret state S must be unknown to the adversary.
How to choose S in computing y is completely dependent on the proposed scheme
and the security reduction. In the security reduction for some encryption schemes,
we note that y can be randomly chosen without using a secret state. Examples can
be found in the schemes given in this book.
For a security proof in the random oracle model, the simulator should add one more
phase called H-Query (usually after the Setup phase) in the simulation to describe
hash queries and responses. Note that this phase only appears in the security reduc-
tion, and it should not appear in the security model.
H-Query. The adversary makes hash queries in this phase. The simulator prepares
a hash list L to record all queries and responses, where the hash list is empty at the
beginning.
For a query x to the random oracle, if x is already in the hash list, the simulator
responds to this query following the hash list. Otherwise, the simulator generates a
secret state S and uses it to compute the hash output y adaptively. Then, the simulator
responds to this query with y = H(x) and adds the tuple (x, y, S) to the hash list.
This completes the description of the random oracle performance in the simula-
tion. If more than one hash function is set as a random oracle, then the simulator
must describe how to program each random oracle accordingly. In the random or-
acle model, the adversary can make queries to random oracles at any time even
after the adversary wins the game. The simulator should generate all outputs in an
adaptive way in order to make sure that the random oracle can help the simulator
program the simulation, such as signature simulation and private-key simulation.
How to adaptively respond to the hash queries from the adversary is fully dependent
on the proposed scheme, the underlying hard problem, and the security reduction.
Examples can be found in Section 4.12.
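The H-Query phase above can be sketched in code. This is our illustration, not a construction from the book: the class name and toy parameters are ours, and the derivation of the output y from the secret state S (here y = g^S mod p, so S stays hidden behind a discrete logarithm) is a placeholder for whatever the concrete reduction needs.

```python
import secrets

class RandomOracleSimulator:
    """Simulator-controlled random oracle keeping a hash list of (x, y, S)."""

    def __init__(self, p, g):
        self.p, self.g = p, g
        self.hash_list = {}        # x -> (y, S); empty before any query

    def query(self, x):
        # A repeated query is answered consistently from the hash list.
        if x in self.hash_list:
            return self.hash_list[x][0]
        # Otherwise: pick a secret state S, derive the output y from it,
        # record the tuple (x, y, S), and return y to the adversary.
        S = secrets.randbelow(self.p - 1) + 1
        y = pow(self.g, S, self.p)
        self.hash_list[x] = (y, S)
        return y

H = RandomOracleSimulator(p=2**127 - 1, g=3)   # toy parameters
assert H.query(b"m") == H.query(b"m")          # consistent responses
```

The adversary only ever sees the (x, y) pairs; the secret states S remain with the simulator, exactly as required of the hash list.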
Suppose the hash function H is set as a random oracle. We study a general case of
oracle response and analyze its success probability.
H-Query. The simulator prepares a hash list L to record all queries and responses,
where the hash list is empty at the beginning.
For a query x to the random oracle, if x is already in the hash list, the simulator
responds to this query following the hash list. Otherwise, the simulator works as
follows.
• The simulator chooses a random secret value z and a secret bit c ∈ {0, 1} (how
to choose c has not yet been defined) to compute y. Here, S = (z, c) is the secret
state used to compute y.
• The simulator then sets H(x) = y and sends y to the adversary.
• Finally, the simulator adds (x, y, z, c) to the hash list.
This completes the description of the random oracle performance in the simulation.
Suppose q_H hash queries are made to the random oracle. The hash list is then composed of q_H tuples as follows:

(x_1, y_1, z_1, c_1), (x_2, y_2, z_2, c_2), · · ·, (x_{q_H}, y_{q_H}, z_{q_H}, c_{q_H}).

Suppose the adversary does not know (z_1, c_1), (z_2, c_2), (z_3, c_3), · · ·, (z_{q_H}, c_{q_H}).
From these q_H hash queries, the adversary adaptively chooses q + 1 out of q_H queries

x′_1, x′_2, · · ·, x′_q, x^*,

where q + 1 ≤ q_H. Let c′_1, c′_2, · · ·, c′_q, c^* be the corresponding secret bits for the chosen hash queries. We want to calculate the success probability, defined as

P = Pr[c′_1 = 0 ∧ c′_2 = 0 ∧ · · · ∧ c′_q = 0 ∧ c^* = 1].

We stress that this probability cannot be computed once all c_i are known to the adversary. That is why we assume that the adversary does not know all c_i at the beginning.
This probability appears in many security proofs. For example, in the security
proof of digital signatures in the random oracle model, the security reduction is
programmed in such a way that a signature of message x is simulatable if c = 0
and is reducible if c = 1. Suppose the adversary first queries signatures on messages
x′_1, x′_2, · · ·, x′_q and then returns a forged signature of the message x^*. The probability of successful simulation and useful attack is equal to P = Pr[c′_1 = 0 ∧ · · · ∧ c′_q = 0 ∧ c^* = 1].
• In the first approach, the simulator randomly chooses i∗ ∈ [1, qH ] and guesses that
the adversary will output the i∗ -th query as x∗ . Then, for a query xi , the simulator
sets
c_i = 1 if i = i^*,  and  c_i = 0 otherwise.
In this setting, P is the probability of successfully guessing which query is chosen as x^*. Since the adversary makes q_H queries, and one of the queries is chosen to be x^*, we have P = 1/q_H. The success probability is linear in the number of all hash queries.
• In the second approach, the simulator guesses more than one query as the poten-
tial x∗ to increase the success probability. To be precise, the simulator flips a bit
bi ∈ {0, 1} in such a way that bi = 0 occurs with probability Pb , and bi = 1 occurs
with probability 1 − Pb . Then, for a query xi , the simulator sets
c_i = 1 if b_i = 1,  and  c_i = 0 otherwise.
In this setting, the success probability is P = Pb^q · (1 − Pb). The value is maximized at Pb = 1 − 1/(1 + q), which gives P ≈ 1/(eq) since (1 + 1/q)^q ≈ e. This success probability is linear in the number of chosen hash queries instead of the number of all hash queries.
In the security proof, q_H is believed to be much larger than q (for example, q_H = 2^60 compared to q = 2^30). Therefore, the second approach has a larger success
probability than the first one. The first approach, however, is much simpler to under-
stand than the second approach. In the security proofs for selected schemes in this
book, we always adopt the first approach when one of the two approaches should be
used in a security reduction. We can naturally modify the security proofs with the
second approach in order to have a smaller reduction loss.
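The claimed optimum for the second approach is easy to confirm numerically; in this sketch (ours, with a toy value of q) the success probability Pb^q · (1 − Pb) peaks at Pb = 1 − 1/(1 + q) and lands close to 1/(eq):

```python
import math

q = 2**10                       # number of signature queries (toy value)

def success(Pb):
    """P = Pb^q * (1 - Pb): all q queried messages get c_i = 0,
    and the forged message gets c* = 1."""
    return Pb ** q * (1 - Pb)

best = 1 - 1 / (1 + q)          # the maximizing choice of Pb

# best is the true maximizer, so no grid point can beat it.
assert all(success(best) >= success(i / 1000) for i in range(1, 1000))

# And the maximum is close to 1/(e*q).
assert abs(success(best) - 1 / (math.e * q)) < 0.01 / q
print(success(best))
```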
We summarize the use of random oracles in security proofs, especially for digital
signatures and encryption, as follows.
• A random oracle is useful in a security proof not because its outputs are random
and uniformly distributed but because the adversary must query x to the random
oracle in order to know the corresponding output H(x), and the computations of
all outputs are fully controlled by the simulator.
• When a hash function is set as a random oracle, the adversary will not receive the
hash function algorithm from the simulator in the security proof. The adversary
can only access the “hash function” by asking the random oracle.
• A random oracle is an ideal hash function. However, we do not need to con-
struct such an ideal hash function in the simulation. Instead, the main task for the
simulator in the security proof is to think how to respond to each input query.
• The simulator can adaptively return any element as the response to a given in-
put query as long as all responses look random from the point of view of the
adversary. This tip is very useful for security proofs in “hash-then-sign” digital
signatures. To be precise, we program H(m) in such a way that the corresponding
signature of H(m) is either simulatable or reducible.
• The hash list is empty when it is created, but the simulator can pre-define the
tuple (x, H(x), S) in the hash list before the adversary makes a query on x. This
tip is useful when a signature generation needs to use H(x), and x has not yet
been queried by the adversary.
• The secret state S for x is useful for the simulator to compute signatures on x in
digital signatures, or private keys on x in identity-based encryption, or to perform
the decryption without knowing the corresponding secret key.
• If breaking a scheme must use a particular pair (x, H(x)), then the adversary
must have queried x to the random oracle. This is essential in the security proof
of encryption under a computational hardness assumption.
• To simplify the security proof, we assume that the simulator already knows the
maximum number of queries that the adversary will make to the random oracle
before the simulation. This assumption is useful in the probability calculation.
More details about how to use random oracles to program a security reduction can
be found in the schemes given in this book.
Suppose there exists an adversary A who can break the proposed signature scheme in the corresponding security model. We construct a simulator B to solve a computational hard problem. Given as input an instance of this hard problem, in the security proof we must show (1) how the simulator generates the simulated scheme; (2) how the simulator solves the underlying hard problem using the adversary's attack; and (3) why the security reduction is correct. A security proof is composed of the following three parts.
• Simulation. In this part, we show how the simulator uses the problem instance
to generate a simulated scheme and interacts with the adversary following the
unforgeability security model. If the simulator has to abort, the security reduction
fails.
• Solution. In this part, we show how the simulator solves the underlying hard
problem using the forged signature generated by the adversary. To be precise, the
simulator should be able to extract a solution to the problem instance from the
forged signature.
• Analysis. In this part, we need to provide the following analysis.
1. The simulation is indistinguishable from the real attack.
2. The probability PS of successful simulation.
3. The probability PU of useful attack.
4. The advantage εR of solving the underlying hard problem.
5. The time cost of solving the underlying hard problem.
The simulation will be successful if the simulator does not abort in computing
the public key and responding to all signature queries from the adversary. The simu-
lation is indistinguishable if all computed signatures can pass the signature verifica-
tion, and the simulation has the randomness property. An attack by the adversary is
useful if the simulator can extract a solution to the problem instance from the forged
signature.
Many security proofs only calculate the probability of successful simulation
without calculating the probability of useful attack. Such an analysis is the same
as ours because the probability of successful simulation in their definitions includes
the usefulness of the attack. The difference is due to the different definition of suc-
cessful simulation.
A security reduction for digital signatures does not have to use the forged signa-
ture to solve an underlying hard problem. With random oracles, the simulator can
use hash queries instead of the forged signature to solve an underlying hard prob-
lem. However, this is a rare case. The motivation for this kind of security reduction
will be explained later.
Let ε be the advantage of the adversary in breaking the proposed signature scheme.
The advantage of solving the underlying hard problem, denoted by εR , is
εR = PS · ε · PU .
If the simulation is successful and indistinguishable from the real attack with prob-
ability PS , the adversary can successfully forge a valid signature with probability ε.
With probability PU , the forged signature is a useful attack and can be reduced to
solving the underlying hard problem. Therefore, we obtain εR as the advantage of
solving the underlying hard problem in the security reduction.
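Plugging in concrete values makes the formula tangible; the numbers below are our toy choices (e.g. PU = 1/q_H as in the first partition approach described earlier):

```python
qH = 2**10           # toy number of hash queries
PS = 1.0             # toy assumption: the simulation never aborts
PU = 1 / qH          # toy value: the forged query is guessed correctly
eps = 0.5            # toy advantage of the adversary against the scheme

eps_R = PS * eps * PU   # advantage of solving the underlying hard problem
print(eps_R)            # 0.00048828125, i.e. 0.5 / 1024
```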
In the security reduction, if the solution to the problem instance is extracted from the
adversary’s forged signature, we can classify all signatures in the simulated scheme
into two types: simulatable and reducible.
• Simulatable. A signature is simulatable if it can be computed by the simulator. If the forged signature is simulatable, the forgery attack is useless: the simulator could compute such a forged signature and play the role of the adversary by itself, so the security reduction would succeed without the help of the adversary. That is, the simulator could solve the underlying hard problem by itself, and such a security reduction is wrong.
• Reducible. A signature is reducible if it can be used to solve the underlying hard
problem. If the forged signature is reducible, the attack is useful. Similarly, a re-
ducible signature in the security reduction cannot be computed by the simulator.
Otherwise, the simulator could solve the underlying hard problem by itself.
In a security reduction for digital signatures, each signature in the simulated scheme
should be either simulatable or reducible. A successful security reduction requires
that all queried signatures are simulatable, and the forged signature is reducible.
Otherwise, the simulator cannot respond to signature queries from the adversary or
use the forged signature to solve the underlying hard problem.
Most security proofs for digital signatures in the literature program the security reduction in such a way that the simulator does not know the corresponding secret key. Intuitively, if the simulator knows the secret key, then all signatures must be simulatable, including the forged signature, and this reduction must be unsuccessful. However, this observation is not correct. It is possible to program a simulation where the simulator knows the secret key. An example can be found in [7]. We stress that some signatures must still be reducible even though the secret key is known to the simulator; otherwise, the security reduction must be incorrect. This is a paradox that must be addressed in this type of security reduction.
In this book, all introductions and given schemes program the security reduction
in such a way that the secret key is unknown to the simulator. That is, in such a
simulation, if the secret key is known to the simulator, the simulator can immediately
solve the underlying hard problem.
92 4 Foundations of Security Reduction
4.9.5 Partition
In the simulation, the simulator must also hide from the adversary which signatures are simulatable and which are reducible. If the adversary can always return a simulatable signature as the forged signature, the reduction has no advantage in solving the underlying hard problem. We call the approach of splitting signatures into the above two sets partition. The simulator must prevent the adversary (who knows the reduction algorithm and can make signature queries) from finding the partition. Two different approaches are currently used to deal with the partition.
• Intractable in Finding. Given the simulation including queried signatures, even a computationally unbounded adversary cannot determine the partition with certainty, so the forged signature returned by the adversary is still reducible with non-negligible probability PU. To hide the partition from the computationally unbounded adversary in the security reduction, the simulator should utilize absolutely hard problems in hiding the partition in the simulation. The secret information I, introduced in Section 4.5.7, can be treated as the partition.
• Intractable in Distinguishing. Given the simulation including queried signatures and two complementary partitions, the computationally unbounded adversary has no advantage in distinguishing which partition is used, so the adversary can only guess the partition correctly with probability 1/2. Here, complementary partitions mean that any signature in the simulation must be either simulatable using one partition or reducible using the other partition. In this case, the adversary will return a forged signature that is reducible with probability 1/2. The secret information I here is which partition is adopted in the simulation. Note that it is not necessary to construct two partitions that are complementary, as long as the adversary cannot always find which signature is simulatable in both partitions.
In comparison with the first approach, the second approach does not need to hide
the partition from the adversary. We found that the partition is fixed in the reduc-
tion algorithm. To make sure that there are two different partitions in the second
approach, we must propose two distinct simulation algorithms. That is, a reduction
algorithm consists of at least two simulation algorithms and the corresponding so-
lution algorithms. One such example can be found in Section 4.14.3.
Recalling the definitions of tight reduction and loose reduction, we have the follow-
ing observations.
• If all signatures of the same message are either simulatable or reducible, and a randomly chosen message is simulatable with probability P, then the reduction must be loose. Let qs be the number of signature queries. If the adversary randomly chooses messages for signature queries, the probability of successful simulation and useful attack is P^{qs}(1 − P) ≤ 1/qs. The reduction loss is linear in the number qs, and thus the security reduction is loose.
• If the signatures of a message can be generated to be simulatable or reducible, we can achieve a tight reduction. For a signature query on a message, the simulator makes it simulatable. In this case, there is no abort in the responses to signature queries. Let 1 − P be the probability that the forged signature is reducible. The probability of successful simulation and useful attack is 1 · (1 − P) instead of P^{qs}(1 − P). If 1 − P is a constant, the reduction loss 1/(1 − P) is also constant, and thus the security reduction is tight.
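The gap between the two cases above can be made concrete with a few lines of arithmetic. The sketch below uses illustrative numbers only (qs = 1000 is an assumption): it evaluates P^{qs}(1 − P) at the maximizing choice P = qs/(qs + 1) and compares the result with the tight case.

```python
import math

def success_probability(P, qs):
    """Probability that all qs queried signatures are simulatable (each
    with independent probability P) and the forgery is reducible (1 - P)."""
    return (P ** qs) * (1 - P)

qs = 1000                      # illustrative number of signature queries
best_P = qs / (qs + 1)         # the choice of P maximizing the product
best = success_probability(best_P, qs)

# Even the best partition loses a factor linear in qs: the maximum is
# roughly 1/(e * qs), and in particular at most 1/qs.
assert best <= 1 / qs
assert abs(best - 1 / (math.e * qs)) / best < 0.01

# A tight reduction answers every query (no abort) and keeps the forgery
# reducible with a constant probability 1 - P, e.g. 1/2.
tight = 1.0 * (1 - 0.5)
assert tight == 0.5
```

The asserts confirm the observation in the text: a message-based partition costs a factor of at least qs, while switching simulatable/reducible per signature keeps the loss constant.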
We found that all previous signature schemes with tight reductions use a random
salt in a signature generation, where the random salt is used to switch the function-
ality between simulatable and reducible. Therefore, all unique signatures [77] that
do not have any random salt in the signature generation seem unable to achieve tight
reductions. However, with random oracles, we can program a tight reduction for a
specially constructed unique signature scheme [53], where the solution to the under-
lying hard problem is from one of the hash queries. A simplified signature scheme
is given in Section 5.8.
A correct security reduction for a digital signature scheme, where the simulator does
not know the secret key, should satisfy the following conditions. We can use these
conditions to check whether a security reduction is correct or not.
• The underlying hard problem is a computational hard problem.
• The simulator does not know the secret key.
• All queried signatures are simulatable without knowing the secret key.
• The simulation is indistinguishable from the real attack.
• The partition is intractable or indistinguishable.
• The forged signature is reducible.
• The advantage εR of solving the underlying hard problem is non-negligible.
• The time cost of the simulation is polynomial time.
A security reduction where the simulator uses hash queries to solve an underlying
hard problem or knows the secret key will change the method of the security reduc-
tion. However, since these two cases are very special, we omit their introductions.
Suppose there exists an adversary A who can break the proposed encryption scheme
in the corresponding security model. We construct a simulator B to solve a deci-
sional hard problem. Given as input an instance of this hard problem (X, Z), in the
security proof we must show (1) how the simulator generates the simulated scheme;
(2) how the simulator solves the underlying hard problem using the adversary’s at-
tack; and (3) why the security reduction is correct. A security proof is composed of
the following three parts.
• Simulation. In this part, we show how the simulator uses the problem instance
(X, Z) to generate a simulated scheme and interacts with the adversary follow-
ing the indistinguishability security model. Most importantly, the target Z in the
problem instance must be embedded in the challenge ciphertext. If the simulator
has to abort, it outputs a random guess of Z.
• Solution. In this part, we show how the simulator solves the decisional hard problem using the adversary's guess c′ of c, where the message in the challenge ciphertext is mc. The method of guessing Z is the same in all security reductions. To be precise, if c′ = c, the simulator outputs that Z is true. Otherwise, c′ ≠ c, and it outputs that Z is false.
• Analysis. In this part, we need to provide the following analysis.
1. The simulation is indistinguishable from the real attack if Z is true.
2. The probability PS of successful simulation.
3. The probability PT of breaking the challenge ciphertext if Z is true.
4. The probability PF of breaking the challenge ciphertext if Z is false.
5. The advantage εR of solving the underlying hard problem.
6. The time cost of solving the underlying hard problem.
The simulation will be successful if the simulator does not abort in computing
the public key, responding to queries, and computing the challenge ciphertext. The
simulation with a true Z is indistinguishable from the real attack if (1) all responses
to queries are correct; (2) the challenge ciphertext generated with a true Z is a correct
ciphertext as defined in the proposed scheme; (3) and the randomness property holds
in the simulation.
In this book, breaking the ciphertext means that the adversary correctly guesses
the message in the ciphertext. This proof structure is regarded as the standard struc-
ture for an encryption scheme under a decisional hardness assumption in the indis-
tinguishability security model. The proof structure for an encryption scheme under
a computational hardness assumption in the random oracle model is quite different
and will be introduced in Section 4.11.5.
4.10 Security Proofs for Encryption Under Decisional Assumptions 95
A strong security model for encryption allows the adversary to make decryption
queries on any ciphertexts with the restriction that no decryption query is allowed
on the challenge ciphertext. We found that all ciphertexts in the simulation can be
classified into the following four types.
• Correct Ciphertext. A ciphertext is correct if it can be generated by the encryption algorithm. For example, taking as input pk = (g, g1, g2) ∈ G^3 and a message m ∈ G, an encryption algorithm randomly chooses r ∈ Zp and computes CT = (g^r, g1^r, g2^r · m). Any ciphertext that can be generated with a message m and a number r for pk is a correct ciphertext.
• Incorrect Ciphertext. A ciphertext is incorrect if it cannot be generated by the encryption algorithm. Continuing the above example, (g^r, g1^{r+1}, g2^r · m) is an incorrect ciphertext because it cannot be generated by the encryption algorithm with (pk, m) as the input using any random number.
• Valid Ciphertext. A ciphertext is valid if the decryption of the ciphertext returns
a message. We stress that the message returned from the decryption can be any
message as long as the output is not ⊥.
• Invalid Ciphertext. A ciphertext is invalid if the decryption of the ciphertext
returns an error ⊥, without returning any message.
In the above classifications, correct ciphertext and incorrect ciphertext are associ-
ated with the ciphertext structure, while valid ciphertext and invalid ciphertext are
associated with the decryption result. Note that correct ciphertext and valid cipher-
text may be treated as equivalent elsewhere in the literature.
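The four classifications can be made concrete with a toy instance of the example scheme above. Everything concrete below is an assumption for illustration: the group is Z_P^* for the Mersenne prime P = 2^61 − 1, the discrete logarithms a1, a2 are known only so we can inspect ciphertext structure, and the decryption rule m = C3 · C1^{−a2} is an assumed rule matching this pk (the text leaves decryption unspecified).

```python
import random

# Hypothetical instantiation: Z_P^* for the Mersenne prime P = 2^61 - 1,
# with g1 = g^a1 and g2 = g^a2. The exponents a1, a2 are visible here only
# to inspect structure; the book works over an abstract prime-order group.
P = 2**61 - 1
g = 3
a1, a2 = 1234567, 7654321
g1, g2 = pow(g, a1, P), pow(g, a2, P)

def encrypt(m, r):
    """Correct ciphertext (g^r, g1^r, g2^r * m) for pk = (g, g1, g2)."""
    return (pow(g, r, P), pow(g1, r, P), pow(g2, r, P) * m % P)

m = 42
r = random.randrange(1, P - 1)
C1, C2, C3 = encrypt(m, r)
assert C2 == pow(C1, a1, P)        # structure of a correct ciphertext

# The incorrect ciphertext from the text: (g^r, g1^{r+1}, g2^r * m).
D1, D2, D3 = pow(g, r, P), pow(g1, r + 1, P), pow(g2, r, P) * m % P
assert D2 != pow(D1, a1, P)        # no randomness yields this structure

# Yet it is still *valid*: the assumed decryption m = C3 * C1^{-a2}
# returns a message rather than an error.
dec = D3 * pow(D1, P - 1 - a2, P) % P
assert dec == m
```

The last assert illustrates exactly the distinction the text draws: the ciphertext is incorrect (its structure is unreachable by the encryption algorithm) yet valid (decryption returns a message, not ⊥).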
When constructing an encryption scheme, an ideal decryption algorithm should
be able to accept all correct ciphertexts as valid ciphertexts, and reject all incorrect
ciphertexts. In the security reduction, the simulated decryption algorithm should be
able to perform the decryption identically to the proposed decryption algorithm.
Otherwise, the simulated scheme may be distinguishable from the real scheme.
However, it is not easy to construct such a perfect decryption in both the real scheme
and the simulated scheme.
We classify ciphertexts into the above four types in order to clarify whether the decryption simulation helps the adversary break the challenge ciphertext, especially when the challenge ciphertext is generated with a false Z. A ciphertext from the adversary for a decryption query can be any of the above four types, and each type must be responded to in a correct way. This classification is desirable because, in the security reduction for the proposed scheme, an incorrect ciphertext for a decryption query might be accepted, such that the decryption result will help the adversary break the challenge ciphertext.
The challenge ciphertext is either a correct ciphertext or an incorrect ciphertext,
which depends on Z in the ciphertext generation. For the challenge ciphertext, we
further define two special types in the next subsection.
The target Z in the instance of the underlying decisional hard problem is either a
true or a false element. The challenge ciphertext must be computed with the target
Z, and it can be classified into the following two types.
• True Challenge Ciphertext. The challenge ciphertext created with the target Z is a true challenge ciphertext if Z is true. We denote by PT the probability of breaking the true challenge ciphertext.
• False Challenge Ciphertext. The challenge ciphertext created with the target Z is a false challenge ciphertext if Z is false. We denote by PF the probability of breaking the false challenge ciphertext.
In a security reduction for encryption, the simulator must embed the target Z in the
challenge ciphertext such that it satisfies the following conditions.
• If Z is true, the true challenge ciphertext is a correct ciphertext whose encrypted
message is mc ∈ {m0 , m1 }, where m0 , m1 are two messages from the same mes-
sage space provided by the adversary, and c is randomly chosen by the simulator.
We should program the simulation in such a way that the simulation is indistin-
guishable and then the adversary can guess the encrypted message correctly with
non-negligible advantage defined in the breaking assumption.
• If Z is false, the false challenge ciphertext can be either a correct ciphertext or an
incorrect ciphertext. However, the challenge ciphertext cannot be an encryption
of the message mc from the point of view of the adversary. We program the
simulation in such a way that the adversary cannot guess the encrypted message
correctly except with negligible advantage.
If the challenge ciphertext is independent of Z, the guess of the message in the
challenge ciphertext is independent of Z, and thus the guess is useless. This is why
Z must be embedded in the challenge ciphertext.
The above analysis yields the advantage of solving the underlying hard problem,
which is
εR = Pr[The simulator guesses Z is true | Z = True] − Pr[The simulator guesses Z is true | Z = False]
   = (PT · PS + (1/2)(1 − PS)) − (PF · PS + (1/2)(1 − PS))
   = PS (PT − PF).
We assume that there exists an adversary who can break the proposed scheme in
polynomial time with non-negligible advantage ε. If the message mc is encrypted in
the real scheme, according to the definition of the advantage in the security model,
we have
ε = 2 (Pr[c′ = c] − 1/2).

That is, the adversary can correctly guess the message in the challenge ciphertext of the real scheme with probability Pr[c′ = c] = 1/2 + ε/2.
If Z is true, and the simulated scheme is indistinguishable from the real scheme from the point of view of the adversary, the adversary will break the simulated scheme as it can break the real scheme, and correctly guess the encrypted message with probability 1/2 + ε/2. That is, we have

PT = Pr[c′ = c | Z = True] = 1/2 + ε/2.
According to the deduction in Section 4.10.5, the advantage of solving the underlying hard problem is

εR = Pr[The simulator guesses Z is true | Z = True] − Pr[The simulator guesses Z is true | Z = False]
   = PS (PT − PF).
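A quick numeric check of this derivation, with illustrative values for ε and PS (both assumed) and a false challenge ciphertext that is a perfect one-time pad:

```python
# Illustrative values (assumptions, not fixed by the text): the
# adversary's advantage eps and the simulation success probability PS.
eps, PS = 0.2, 0.9
PT = 0.5 + eps / 2     # success against the true challenge ciphertext
PF = 0.5               # false challenge ciphertext is a one-time pad

# If the simulator aborts (probability 1 - PS), it guesses Z at random.
guess_true_given_true = PT * PS + 0.5 * (1 - PS)
guess_true_given_false = PF * PS + 0.5 * (1 - PS)
eps_R = guess_true_given_true - guess_true_given_false

assert abs(eps_R - PS * (PT - PF)) < 1e-12   # the identity above
assert abs(eps_R - PS * eps / 2) < 1e-12     # here: 0.9 * 0.2 / 2 = 0.09
```

The random guess on abort contributes identically to both conditional probabilities, which is why it cancels and only PS(PT − PF) survives.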
One-time pad plays an important role in the security proof of encryption. The sim-
plest example of a one-time pad is as follows:
CT = m ⊕ K,
where m ∈ {0, 1}^n is a message, and K ∈ {0, 1}^n is a random key unknown to the adversary. For such a one-time pad, the adversary has no advantage in guessing the message in CT beyond success probability 1/2^n, even if it has unbounded computational power. If CT is created for a randomly chosen message from two distinct messages {m0, m1}, the adversary still has no advantage and can only guess the encrypted message with success probability 1/2. Therefore, a one-time pad captures perfect security in the indistinguishability security model, where the adversary has no advantage in breaking the ciphertext.
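The perfect secrecy of the XOR pad can be checked exhaustively for small n; the 8-bit messages and ciphertext below are arbitrary illustrative values:

```python
n = 8                                  # toy message length
m0, m1 = 0b10101010, 0b01010101        # two distinct messages (arbitrary)
CT = 0b11001100                        # any observed ciphertext

# Count the keys K that explain CT as an encryption of m0, and of m1.
keys_m0 = [K for K in range(2**n) if m0 ^ K == CT]
keys_m1 = [K for K in range(2**n) if m1 ^ K == CT]

# Exactly one key each, so under a uniform key both messages are equally
# likely: even an unbounded adversary guesses c with probability 1/2.
assert len(keys_m0) == len(keys_m1) == 1
```

Every ciphertext is explained by exactly one key per candidate message, which is the counting argument behind the "same probability" condition in the definition that follows.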
The notion of one-time pad is defined as follows.
Definition 4.10.9.1 (One-Time Pad) Let E(m, r, R) be an encryption of a given message m with a public parameter R and a secret parameter r. The ciphertext CT = E(m, r, R) is a one-time pad if, for any two distinct messages m0, m1 from the same message space, CT can be seen as an encryption of either the message m0 with a secret parameter r0 or the message m1 with a secret parameter r1 under the public parameter R with the same probability:

Pr[CT = E(m0, r0, R)] = Pr[CT = E(m1, r1, R)].
In the security proof of encryption, we need to prove that the false challenge
ciphertext is a one-time pad from the point of view of the adversary. To be precise,
given an instance (X, Z) of a decisional hard problem, we must program the security
reduction in such a way that Z is embedded in the challenge ciphertext CT ∗ , and
CT ∗ is a one-time pad from the point of view of the adversary if Z is false. We stress
that “from the point of view of the adversary” is extremely important. We cannot
simply prove that the false challenge ciphertext is a one-time pad. The reasons will
be explained in Section 4.10.11.
In the security proof of group-based encryption, we are more interested in those
one-time pads constructed from a cyclic group. We can use the following lemma
to check whether or not a general ciphertext constructed over a cyclic group is a
one-time pad from the point of view of the adversary, even if it has unbounded
computational power.
Lemma 4.10.1 Let CT be a general ciphertext defined as follows, where mc ∈ {m0, m1} and the adversary knows the group (G, g, p) and the messages m0, m1 ∈ G:

CT = (g^{x1}, g^{x2}, g^{x3}, · · · , g^{xn}, g^{x∗} · mc).

The ciphertext CT is a one-time pad from the point of view of the adversary if and only if x∗ is random and independent of x1, x2, · · · , xn.
We give several examples to introduce what a one-time pad looks like in group-
based cryptography. Suppose the adversary knows the following information.
• The cyclic group (G, g, h, p), where g, h are generators, and p is the group order.
• Two distinct messages m0 , m1 from G and how the ciphertext is created.
In the following ciphertexts, c ∈ {0, 1} and x, y, z ∈ Zp are randomly chosen by the simulator. The aim of the adversary is to guess c in CT. We want to investigate whether the following constructed ciphertexts are one-time pads or not.

CT = (g^x, g^4 · mc) (10.25)
CT = (g^x, g^y · mc) (10.26)
CT = (g^x, h^x · mc) (10.27)
CT = (g^x, g^y, g^{xy} · mc) (10.28)
CT = (g^x, h^{x+y} · mc) (10.29)
CT = (g^{2x+y+z}, g^{x+3y+z}, g^{4x+7y+3z} · mc) (10.30)
CT = (g^{x+3y+3z}, g^{2x+3y+5z}, g^{9x+5y+2z} · mc) (10.31)
CT = (g^{x+3y+3z}, g^{2x+6y+6z}, g^{9x+5y+2z} · mc) (10.32)
The determinant of the coefficient matrix is zero because x2 = 2x1. However, x∗ and x1 are independent because there exists a 2 × 2 sub-matrix whose determinant is nonzero. Therefore, x∗ is random and independent of x1, x2.
The last example shows one interesting result. Even though x1, x2, · · · , xn, x∗ are not random and independent, it does not mean that x∗ can be computed from x1, x2, · · · , xn; it means only that at least one value in {x1, x2, · · · , xn, x∗} can be computed from the others. To make sure that this case does not occur in the analysis, we can first remove some xi if {x1, x2, · · · , xn} are dependent, until all remaining xi are random and independent. A detailed example can be found in Section 4.14.2.
In the correctness analysis of encryption, it is not sufficient to prove that the false
challenge ciphertext is a one-time pad. Instead, we must prove that breaking the
false challenge ciphertext is as hard as breaking a one-time pad. That is, the false
challenge ciphertext is a one-time pad from the point of view of the adversary who
receives some other parameters from the simulated scheme. We have to analyze in
this way because other parameters might help the adversary break the false chal-
lenge ciphertext.
Let CT∗ be the challenge ciphertext defined as

CT∗ = (g^{x1}, g^{x2}, g^{x3}, · · · , g^{xn}, g^{x∗} · mc).

Let g^{x′1}, g^{x′2}, · · · , g^{x′n} be additional information that the adversary obtained from other phases. If Z is false, we must analyze that the following ciphertext, an extension of the false challenge ciphertext, is a one-time pad:

(g^{x′1}, g^{x′2}, · · · , g^{x′n}, g^{x1}, g^{x2}, g^{x3}, · · · , g^{xn}, g^{x∗} · mc).
However, if the adversary can obtain a new group element such as g^{x+y+z} from a response to a decryption query, the challenge ciphertext is no longer a one-time pad, because the adversary can compute the element g^{9x+5y+2z} from the other three group elements to decrypt the message mc.
In the security analysis, we must prove that the simulation of decryption is correct.
To be precise, the simulation of decryption must satisfy the following conditions.
• If Z is true, the simulation of decryption is indistinguishable from the decryption
in the real scheme. Otherwise, we cannot prove that the simulation is indistin-
guishable from the real attack. Here, the indistinguishability means that the sim-
ulated scheme will accept correct ciphertexts and reject incorrect ciphertexts in
the same way as the real scheme. To make sure that the simulation of decryption
is indistinguishable, the simplest way is for the simulator to be able to generate a
valid secret key for the ciphertext decryption.
• If Z is false, the adversary cannot break the false challenge ciphertext with the
help of decryption queries, i.e., the adversary cannot successfully guess the mes-
sage in the challenge ciphertext with the help of decryption queries. This condi-
tion is desired when proving that the adversary has no (or negligible) advantage in
breaking the false challenge ciphertext. How to stop the adversary from breaking
the false challenge ciphertext using decryption queries is the most challenging
task in security reduction. This is due to the fact that all ciphertexts for decryp-
tion queries can be adaptively generated by the adversary, e.g., by modifying the
challenge ciphertext in the CCA security model.
The above two different conditions are desired for proving the security of en-
cryption schemes under decisional hardness assumptions. However, when we prove
encryption schemes under computational hardness assumptions in the random ora-
cle model, the required conditions of the decryption simulation are slightly different.
The reason is that there is no target Z in the given problem instance.
Let (pk∗ , sk∗ ) be the key pair in a public-key encryption scheme, and (ID∗ , dID∗ ) be
the key pair in an identity-based encryption scheme that an adversary aims to chal-
lenge, where the challenge ciphertext will be created for pk∗ or ID∗ . For simplicity,
in this section, we denote by sk∗ and dID∗ the challenge decryption keys.
In the security reduction for encryption, it is not necessary for the simulator to
program the simulation without knowing the challenge decryption key. Currently,
there are two different methods associated with the decryption key.
• In the first method, the simulator knows the challenge decryption key. The sim-
ulator can easily simulate the decryption by following the decryption algorithm
because it knows the decryption key. The decryption simulation in the simulated
scheme is therefore indistinguishable from the decryption in the real scheme.
However, it is challenging to simulate the challenge ciphertext, so that the adver-
sary has negligible advantage in breaking the false challenge ciphertext.
• In the second method, the simulator does not know the challenge decryption key.
If Z is false, we found that it is relatively easy to generate the false challenge
ciphertext satisfying the requirements. However, it is challenging to simulate the
decryption correctly without knowing the decryption key. The simulator does not
know the challenge decryption key in this case, but the simulator has to be able
to simulate the decryption.
We cannot freely choose one of these methods to program a security reduction for a proposed scheme. Which method can be used depends on the proposed scheme and the underlying hard problem.
We have explained that if Z is false, we cannot simply analyze that the false chal-
lenge ciphertext is a one-time pad, but we must analyze that the adversary cannot
break the false challenge ciphertext except with negligible advantage. To calculate
the probability PF of breaking the false challenge ciphertext, we might need to cal-
culate the following probabilities and advantages.
• Probability PFW = 1/2. The probability PFW of breaking the false challenge ciphertext without any decryption query is 1/2. This is actually the probability of breaking the false challenge ciphertext in the CPA security model, where there is no decryption query. The security proof for CPA is relatively easy, because we do not need to analyze the following probability or advantages.
• Advantage AKF = 0 or 1. The advantage AKF of breaking the false challenge ciphertext with the help of the challenge decryption key is either 0 or 1. If AKF = 0, the adversary cannot use the challenge decryption key to guess the message in the false challenge ciphertext, and this completes the probability analysis of PF. Otherwise, AKF = 1, and we need to analyze the following probability or advantages.
• Advantage ACF = 0. The advantage ACF of breaking the false challenge ciphertext with the help of decryption queries on correct ciphertexts must be 0. Since a correct ciphertext will always be accepted for decryption, a correct security reduction requires that the decryption of correct ciphertexts must not help the adversary break the false challenge ciphertext. Otherwise, it is impossible to obtain PF ≈ 1/2, and the security reduction fails.
• Advantage AIF = 0 or 1. The advantage AIF of breaking the false challenge ciphertext with the help of decryption queries on incorrect ciphertexts is either 0 or 1. If AIF = 0, the decryption of incorrect ciphertexts will not help the adversary break the false challenge ciphertext, and this completes the probability analysis of PF. Otherwise, AIF = 1, and we need to analyze the following probability.
• Probability PFA ≈ 0. The probability PFA of accepting an incorrect ciphertext for a decryption query must be negligible; that is, the probability that the adversary can generate an incorrect ciphertext that the simulator accepts for a decryption query is negligible. If PFA = 0, all incorrect ciphertexts for decryption queries will be rejected by the simulator.

We can use the following formula to define the probability PF with the above probabilities and advantages:

PF = PFW + AKF · ACF + AIF · PFA.
The flowchart for analyzing the probability PF is given in Figure 4.2. The probabil-
ity analysis for CCA security is complicated because we may have to additionally
analyze up to four different cases.
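One possible reading of this case analysis (the function name and the encoding of the 0-or-1 advantages as flags are assumptions for illustration; the early exits follow the order of the bullets above) is a small decision function:

```python
def PF_analysis(AKF, ACF, AIF, PFA, PFW=0.5):
    """Hypothetical encoding of the PF case analysis: AKF, ACF, AIF are
    the 0/1 advantages from the bullets; PFA is the probability that an
    incorrect ciphertext is accepted for decryption."""
    if AKF == 0:          # the challenge decryption key does not help
        return PFW
    if ACF == 1:          # decryption of correct ciphertexts helps:
        return 1.0        # PF cannot stay near 1/2, the reduction fails
    if AIF == 0:          # incorrect ciphertexts do not help either
        return PFW
    return PFW + PFA      # only accepted incorrect ciphertexts can help

assert PF_analysis(0, 0, 0, 0) == 0.5            # CPA-like easy case
assert PF_analysis(1, 0, 0, 2**-80) == 0.5       # key helps, queries do not
assert PF_analysis(1, 0, 1, 0.25) == 0.75        # exaggerated PFA for arithmetic
```

With a negligible PFA the last branch gives PF ≈ 1/2, which is exactly the condition the reduction needs.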
where the target (false) Z is randomly chosen from G. This challenge ciphertext is
a one-time pad from the point of view of the adversary if and only if Z is random
and unknown to the adversary. Even if the challenge decryption key α can be
computed by the adversary, it does not help the adversary guess the encrypted
message. Therefore, we have AKF = 0.
• AKF = 1. In a public-key encryption scheme, the public key is pk = h, and the secret key is sk = (α, β, γ), where SP = (G, g1, g2, g3, p), h = g1^α · g2^β · g3^γ, and α, β, γ ∈ Zp are randomly chosen. Here, g1, g2, g3 are three distinct group elements. Therefore, the challenge decryption key is (α, β, γ) in this construction. Suppose the false challenge ciphertext is equal to

CT∗ = (C1∗, C2∗, C3∗) = (g1^x, Z, Z^α · mc),

where the (false) target Z is randomly chosen from G. The message is encrypted with Z^α, and Z is given as the second element in the challenge ciphertext. Therefore, this challenge ciphertext is a one-time pad from the point of view of the adversary if and only if α is random and unknown to the adversary. It is easy to see that with the help of (α, β, γ), the adversary can easily break the false challenge ciphertext. Therefore, we have AKF = 1.
• AIF = 0. Continuing the above example for AKF = 1: even if the adversary has unbounded computational power, it still cannot compute the challenge decryption key (α, β, γ) from the public key. The false challenge ciphertext is a one-time pad from the point of view of the adversary when α is random and unknown to the adversary. Let CT = (C1, C2, C3) be an incorrect ciphertext, and suppose the decryption of this ciphertext returns to the adversary the group element m = C3 · C2^γ as the message. The computationally unbounded adversary can easily obtain γ from this group element because C2, C3 are known. However, the computed γ cannot help the adversary break the false challenge ciphertext. Therefore, we have AIF = 0. Notice that the advantage AIF = 0 holds if and only if α will not be revealed to the adversary by the decryption of any incorrect ciphertext.
• AIF = 1. Continuing the examples for AKF = 1 and AIF = 0: suppose the decryption of that incorrect ciphertext returns not m = C3 · C2^γ but m = C3 · C2^{−α} instead. By a similar analysis, the adversary can obtain α from decryption queries on incorrect ciphertexts and then break the false challenge ciphertext. Therefore, we have AIF = 1.
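The AIF = 1 leak can be replayed in a toy group. Everything concrete below is an assumed instantiation: the order-q subgroup of squares in Z_P^* for the safe prime P = 2q + 1 = 2027, elements g1, g2, g3, and α = 123, all chosen small enough that brute-force search stands in for the computationally unbounded adversary.

```python
import random

# Assumed toy instantiation: the order-q subgroup of squares in Z_P^*
# for the safe prime P = 2q + 1 = 2027. g1, g2, g3 are squares (2^2, 3^2,
# 5^2) and alpha = 123 is the key component the one-time pad relies on.
P, q = 2027, 1013
g1, g2, g3 = 4, 9, 25
alpha = 123

def inv(a):
    return pow(a, -1, P)

# False challenge ciphertext CT* = (g1^x, Z, Z^alpha * mc), Z random.
mc = 16                                       # encrypted message (a square)
Z = pow(6, 2 * random.randrange(1, q), P)     # random subgroup element
CT = (pow(g1, random.randrange(1, q), P), Z, pow(Z, alpha, P) * mc % P)

# The adversary submits an incorrect ciphertext of its own making, and
# the flawed simulation answers with m = C3 * C2^{-alpha}.
C2, C3 = pow(g2, 77, P), pow(g3, 33, P)
m = C3 * inv(pow(C2, alpha, P)) % P

# Unbounded adversary: C2^alpha = C3 / m, so recover alpha by search ...
target = C3 * inv(m) % P
alpha_found = next(a for a in range(q) if pow(C2, a, P) == target)
assert alpha_found == alpha

# ... and strip the pad Z^alpha from the false challenge ciphertext.
recovered = CT[2] * inv(pow(CT[1], alpha_found, P)) % P
assert recovered == mc
```

A single decryption answer of the form C3 · C2^{−α} on an adversarially chosen pair (C2, C3) determines α, after which the false challenge ciphertext is no longer a one-time pad, exactly as the bullet argues.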
Whether the challenge decryption key can be computed by the adversary signifi-
cantly affects the analysis. We have the following interesting observations.
• If the challenge decryption key can be computed from the public key by the
computationally unbounded adversary, the decryption of either correct or incor-
rect ciphertexts will not help the adversary break the false challenge ciphertext.
That is, ACF = AIF = 0. The reason is that the adversary can use the challenge
decryption key to decrypt ciphertexts by itself. Therefore, in this case, we do not
need to consider whether decryption queries can help the adversary break the
false challenge ciphertext or not.
• If the challenge decryption key cannot be computed by the computationally un-
bounded adversary, the decryption of incorrect ciphertexts might help the adver-
sary break the false challenge ciphertext. The reason is that the challenge decryp-
tion key might play an important role in the construction of a one-time pad, but
the challenge decryption key might be obtained by the adversary by decryption
queries on incorrect ciphertexts, so that the false challenge ciphertext is no longer
a one-time pad. We stress that the decryption of incorrect ciphertexts does not al-
ways help the adversary. It depends on how the one-time pad is constructed in
the security reduction.
Giving the challenge decryption key to the adversary in analysis is sufficient but
not necessary, because the simulator only responds to decryption queries made by
the adversary. We can even skip the analysis of whether the challenge decryption key
helps the adversary. However, this is a useful assumption to simplify the analysis,
especially when the challenge decryption key cannot help the adversary.
A correct security reduction for an encryption scheme should satisfy the following
conditions. We can use these conditions to check whether a security reduction is
correct or not.
• The underlying hard problem is a decisional problem.
• The simulator uses the adversary’s guess to solve the underlying hard problem.
• The simulation is indistinguishable from the real attack if Z is true.
• The probability of successful simulation is non-negligible.
• The advantage of breaking the true challenge ciphertext is ε.
• The advantage of breaking the false challenge ciphertext is negligible.
• The advantage εR of solving the underlying hard problem is non-negligible.
• The time cost of the simulation is polynomial time.
In a security proof with random oracles, we can prove the security of a proposed
scheme under a computational hardness assumption. One main difference is how to
solve the underlying hard problem. The reduction method with random oracles is
given in the next section.
In this section, we introduce how to use random oracles to program a security reduc-
tion for an encryption scheme under a computational hardness assumption. Security
reductions for encryption schemes under computational hardness assumptions and
for encryption schemes under decisional hardness assumptions are quite different in
the challenge ciphertext simulation, how to find a solution to the problem instance,
and the correctness analysis.
Let H : {0, 1}∗ → {0, 1}n be a cryptographic hash function. We consider the follow-
ing encryption of the message mc with an arbitrary string x, where m0 , m1 are any
two distinct messages chosen from the message space {0, 1}n and mc is randomly
chosen from {m0 , m1 }:
CT = (x, H(x) ⊕ mc).
If H is a hash function, the above ciphertext is not a one-time pad. Given x and the
hash function H, the adversary can compute H(x) by itself and decrypt the ciphertext
to obtain the message mc . However, if H is set as a random oracle, the adversary
cannot compute H(x) by itself. The adversary must query x to the random oracle to
know H(x). Then, we have the following two interesting results.
• Before Querying x. Since H(x) is random and unknown to the adversary, the
message is encrypted with a random and unknown encryption key. Therefore,
the above ciphertext is equivalent to a one-time pad, and the adversary has no
advantage: it guesses the message mc correctly only with probability 1/2.
• After Querying x. Once the adversary queries x to the random oracle and re-
ceives the response H(x), it can immediately decrypt the message with H(x).
Therefore, the above ciphertext is no longer a one-time pad, and the adversary
can guess the encrypted message with advantage 1.
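The two cases can be reproduced with a lazily sampled random-oracle table (our sketch; the class and names are illustrative, not from the book):

```python
import secrets

class RandomOracle:
    """Lazy random oracle: each fresh query is answered with a uniform string."""
    def __init__(self, n_bytes=16):
        self.n_bytes = n_bytes
        self.table = {}
    def query(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = secrets.token_bytes(self.n_bytes)
        return self.table[x]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

H = RandomOracle()
x = b"arbitrary string"
m0, m1 = b"message-number-0", b"message-number-1"   # two distinct 16-byte messages
mc = secrets.choice([m0, m1])

# CT = (x, H(x) XOR mc): from the adversary's view, before it queries x,
# H(x) is uniform and unknown, so the second component is a one-time pad on mc.
ct = (x, xor(H.query(x), mc))

# After querying x, the pad is known and the ciphertext is an open book:
recovered = xor(H.query(ct[0]), ct[1])
assert recovered == mc
```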
With the above features, the simulator is able to use random oracles to program
a security reduction for an encryption scheme in the indistinguishability security
model to solve a computational hard problem.
The breaking assumption states that there exists an adversary who can break the
above ciphertext with non-negligible advantage. That is, the adversary has non-
negligible advantage in guessing mc correctly. According to the features introduced
in the previous subsection, the adversary must query y to the random oracle.
Otherwise, the challenge ciphertext remains a one-time pad and the adversary has
no advantage in guessing mc.
4.11 Security Proofs for Encryption Under Computational Assumptions 111
The simulator cannot directly simulate H(y) ⊕ mc in the challenge ciphertext be-
cause y is unknown to the simulator. Fortunately, this problem can be easily solved
with the help of random oracles. To be precise, the simulator chooses a random
string R ∈ {0, 1}n to replace H(y) ⊕ mc . The challenge ciphertext in the simulated
scheme is set as
CT ∗ = (X, R).
The challenge ciphertext can be seen as an encryption of the message mc ∈ {m0 , m1 }
if H(y) = R ⊕ mc , where we have
CT∗ = (X, R) = (X, H(y) ⊕ mc).
Unfortunately, after sending this challenge ciphertext to the adversary, the simula-
tor may not know which hash query from the adversary is the solution y and will
respond to the query on y with a random element different from R ⊕ mc. Therefore,
the query response is wrong, and the challenge ciphertext in the simulation is
distinguishable from that in the real scheme. We have the following results.
• Before Querying y. From the point of view of the adversary, the challenge ci-
phertext is an encryption of m0 if H(y) = R ⊕ m0 and an encryption of m1 if
H(y) = R ⊕ m1 . Without making a hash query on y to the random oracle, the
adversary never knows H(y), and thus the challenge ciphertext in the simulated
scheme is indistinguishable from the challenge ciphertext in the real scheme.
Before querying y to the random oracle, H(y) is random and unknown to the
adversary, so the challenge ciphertext is an encryption of m0 or m1 with the same
probability. Therefore, the challenge ciphertext is equivalent to a one-time pad
from the point of view of the adversary, and thus the adversary has no advantage
in guessing the encrypted message.
• After Querying y. Once the adversary makes a hash query on y to the random
oracle, the simulator will respond to the query with a random element Y = H(y).
If the simulator does not know which query is the challenge hash query Q∗ = y,
the response to the query on y is independent of R and therefore we have
Y ⊕ m0 ≠ R and Y ⊕ m1 ≠ R,
except with negligible probability. The challenge ciphertext is then no longer an
encryption of m0 or m1 from the point of view of the adversary.
We are ready to summarize the proof structure for encryption schemes under com-
putational hardness assumptions, where at least one hash function is set as a random
oracle. Suppose there exists an adversary A who can break the proposed encryption
scheme in the corresponding security model. We construct a simulator B to solve a
computational hard problem. Given as input an instance of this hard problem, in the
security proof we must show (1) how the simulator generates the simulated scheme;
(2) how the simulator solves the underlying hard problem; and (3) why the security
reduction is correct. A security proof is composed of the following three parts.
• Simulation. In this part, we show how the simulator programs the random or-
acle simulation, generates the simulated scheme using the received problem in-
stance, and interacts with the adversary following the indistinguishability secu-
rity model. If the simulator aborts, the security reduction fails.
• Solution. In this part (at the end of the guess phase), we show how the simulator
solves the computational hard problem using hash queries to the random oracle
made by the adversary. To be precise, we should point out which hash query is
the challenge hash query Q∗ in this simulation, how to pick the challenge hash
query from all hash queries, and how to use the challenge hash query to solve the
underlying hard problem.
• Analysis. In this part, we need to provide the following analysis.
1. The simulation is indistinguishable from the real attack if no challenge hash
query is made by the adversary.
2. The probability PS of successful simulation.
3. The adversary has no advantage in breaking the challenge ciphertext if it does
not make the challenge hash query to the random oracle.
4. The probability PC of finding the correct solution from hash queries.
5. The advantage εR of solving the underlying hard problem.
6. The time cost of solving the underlying hard problem.
The simulation will be successful if the simulator does not abort in computing the
public key, responding to queries, and computing the challenge ciphertext. Before
the adversary makes the challenge hash query, the simulation is indistinguishable
if (1) all responses to queries are correct; (2) the challenge ciphertext is a correct
ciphertext as defined in the proposed scheme from the point of view of the adversary;
and (3) the randomness property holds for the simulation.
If the security reduction is correct, the solution to the problem instance can be
extracted from the challenge hash query. For simplicity, we can view the challenge
hash query as the solution to the problem instance. In this type of security reduction,
there is no true challenge ciphertext or false challenge ciphertext.
In the real scheme, the challenge ciphertext is a correct ciphertext whose encrypted
message is mc . If the adversary makes the challenge hash query to the random oracle,
it can use the response to decrypt the encrypted message and then break the scheme.
However, in the simulated scheme, the challenge ciphertext is a correct ciphertext if
and only if there is no challenge hash query. Once the adversary makes the challenge
hash query to the random oracle and the response to the challenge hash query is
wrong, it will immediately find out that the challenge ciphertext is incorrect and the
simulation is distinguishable.
According to the explanation in Section 4.11.4, we do not care that the simula-
tion becomes distinguishable after the adversary has made the challenge hash query
to the random oracle. In the security reduction, making the challenge hash query
can be seen as a successful and useful attack by the adversary. Before the adversary
launches such an attack with non-negligible probability, the simulated scheme must
be indistinguishable from the real scheme, and the adversary has no advantage in
breaking the challenge ciphertext. That is, the challenge ciphertext must be an en-
cryption of either m0 or m1 with the same probability (i.e., one-time pad) from the
point of view of the adversary.
Suppose there exists an adversary who can break the proposed encryption scheme in
the random oracle model with advantage ε. We have the following important lemma
(originally from [24]) about calculating the advantage.
Lemma 4.11.1 If the adversary has no advantage in breaking the challenge cipher-
text without making the challenge hash query to the random oracle, the adversary
will make the challenge hash query to the random oracle with probability ε.
Proof. According to the breaking assumption, we have
Pr[c′ = c] = 1/2 + ε/2.
This is the success probability that the adversary can correctly guess the encrypted
message in the real scheme according to the breaking assumption.
Let H ∗ denote the event of making the challenge hash query to the random ora-
cle, and H ∗c be the complementary event of H ∗ . According to the statement in the
lemma, we have
Pr[c′ = c | H∗] = 1,   Pr[c′ = c | H∗c] = 1/2.
Then, we obtain
Pr[c′ = c] = Pr[c′ = c | H∗] Pr[H∗] + Pr[c′ = c | H∗c] Pr[H∗c] = Pr[H∗] + (1 − Pr[H∗])/2.
Setting this equal to 1/2 + ε/2 yields Pr[H∗] = ε, which completes the proof.
Since the adversary can break the challenge ciphertext with advantage 1 after
making qH hash queries, one of the hash queries
must be the challenge hash query. Therefore, the probability that a randomly picked
query from the hash list is the challenge hash query is 1/qH . There is no reduction
loss in finding the solution from hash queries if the decisional variant of the compu-
tational hard problem is easy. The reason is that the simulator can test all the hash
queries one by one until it finds the challenge hash query. However, if the decisional
variant is also hard, it seems that this finding loss cannot be avoided.
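The algebra of Lemma 4.11.1 can be sanity-checked with a toy Monte Carlo experiment (ours, not from the book): an adversary that makes the challenge hash query with probability ε then wins with advantage 1, and otherwise faces a one-time pad and guesses uniformly.

```python
import random

def run_trials(eps, trials=200_000, seed=1):
    """Adversary model from the lemma: with probability eps it makes the
    challenge hash query (event H*) and then guesses c with advantage 1;
    otherwise the ciphertext is a one-time pad and it guesses uniformly."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        c = rng.randrange(2)
        guess = c if rng.random() < eps else rng.randrange(2)
        wins += (guess == c)
    return wins / trials

p_hat = run_trials(eps=0.4)
# Pr[c' = c] = eps * 1 + (1 - eps) * 1/2 = 1/2 + eps/2 = 0.7 here
assert abs(p_hat - 0.7) < 0.01
```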
CT ∗ = (x, R),
where R is a random string chosen from {0, 1}^{n+1}. Let CT∗ = (C1, C2). We have the
following observations.
• The challenge ciphertext can be seen as an encryption of a message from
{m0 , m1 } if no challenge hash query on x is made to the random oracle. There-
fore, the simulation is indistinguishable from the real attack.
• However, the message mc in the challenge ciphertext can easily be identified from
the least significant bit of C2 , because the LSB of the message is not encrypted.
According to the choice of messages m0 , m1 , the bit c is equal to the LSB of C2 .
Therefore, the adversary can correctly guess the encrypted message with proba-
bility 1 without making the challenge hash query to the random oracle.
The above encryption scheme is not IND-CPA secure. The adversary has non-
negligible advantage in guessing the encrypted message in the challenge ciphertext,
and this is why we cannot prove its security in the IND-CPA security model.
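The scheme definition precedes this excerpt; assuming the flaw is that the last bit of the (n+1)-bit message is copied into C2 unencrypted (our reading), the attack is one line:

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

def encrypt(pad: bytes, m: bytes) -> bytes:
    """Assumed flaw: the pad H(x) covers only the first n bits, and the
    message's last byte (carrying the LSB) is copied through in the clear."""
    return xor(pad, m[:-1]) + m[-1:]

pad = secrets.token_bytes(15)            # plays the role of H(x), unknown to us
m0 = b"fifteen-bytes-0" + b"\x00"        # LSB of m0 is 0
m1 = b"fifteen-bytes-1" + b"\x01"        # LSB of m1 is 1
c = secrets.randbelow(2)
C2 = encrypt(pad, [m0, m1][c])

guess = C2[-1] & 1                       # just read the unencrypted LSB
assert guess == c                        # advantage 1, no hash query needed
```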
In this type of security reduction, the challenge decryption key is usually pro-
grammed as a key unknown to the simulator. How to simulate the decryption with-
out knowing the challenge decryption key then becomes tricky and difficult. For-
tunately, the random oracle also provides a big help in the decryption simulation.
We now give a simple example to see how the decryption simulation works without
having the corresponding decryption key.
Let the system parameters SP be (G, g, p, H), where H : {0, 1}∗ → {0, 1}n is a
cryptographic hash function satisfying n = |p| (the same bit length as p). Suppose
a public-key encryption scheme generates a key pair (pk, sk), where pk = g1 = gα
and sk = α. The encryption algorithm takes as input the public key pk, a message
m ∈ {0, 1}n , and the system parameters SP. It chooses a random number r ∈ Z p and
returns the ciphertext as
CT = (C1, C2, C3) = (g^r, H(0||g1^r) ⊕ r, H(1||g1^r) ⊕ m).
The decryption algorithm first computes C1^α = g1^r and then extracts r by
H(0||C1^α) ⊕ C2 and m by H(1||C1^α) ⊕ C3. Finally, it outputs the message m if
and only if CT can be created with the decrypted number r and the decrypted
message m.
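A toy implementation of this scheme (our sketch; H is instantiated with truncated SHA-256, and the group parameters are ad hoc and insecure, purely for illustration) makes the re-encryption validity check concrete:

```python
import hashlib, secrets

# Toy parameters: Z_p* with an ad hoc generator; insecure sizes, illustration only.
p = 2**127 - 1
g = 3

def H(prefix: int, k: int) -> int:
    """n-bit hash, modelling H(prefix || g1^r) with n = 128 here."""
    d = hashlib.sha256(bytes([prefix]) + k.to_bytes(16, "big")).digest()
    return int.from_bytes(d[:16], "big")

def keygen():
    a = secrets.randbelow(p - 2) + 1
    return pow(g, a, p), a                    # pk = g^alpha, sk = alpha

def encrypt_with_r(pk, m: int, r: int):
    k = pow(pk, r, p)                         # g1^r
    return pow(g, r, p), H(0, k) ^ r, H(1, k) ^ m

def encrypt(pk, m: int):
    return encrypt_with_r(pk, m, secrets.randbelow(p - 2) + 1)

def decrypt(sk, ct):
    C1, C2, C3 = ct
    k = pow(C1, sk, p)                        # C1^alpha = g1^r
    r, m = H(0, k) ^ C2, H(1, k) ^ C3
    # validity: output m iff ct can be re-created from the decrypted (r, m)
    if encrypt_with_r(pow(g, sk, p), m, r) != ct:
        return None
    return m

pk, sk = keygen()
m = secrets.randbelow(2**64)
assert decrypt(sk, encrypt(pk, m)) == m
```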
This encryption scheme is not secure in the IND-CCA security model but only
secure in the IND-CCA1 security model, where the adversary can only make de-
cryption queries before the challenge phase. The proposed encryption scheme is
secure under the CDH assumption in the random oracle model, where it is hard to
compute gab from a problem instance (g, ga , gb ).
In the security reduction, the simulator sets α = a with the unknown exponent
a in the problem instance. We are now interested in knowing how the simulator
responds to decryption queries from the adversary with the help of random oracles.
In this encryption scheme, a queried ciphertext CT is valid if CT can be created
with the decrypted random number r′ and the decrypted message m′. That is,
(g^{r′}, H(0||g1^{r′}) ⊕ r′, H(1||g1^{r′}) ⊕ m′) = CT.
This completes the description of the decryption simulation. To simulate the de-
cryption with the help of random oracles, the security reduction must satisfy two
conditions. Firstly, we can use hash queries instead of the challenge decryption key
to simulate the decryption. A ciphertext is valid if and only if the adversary ever
made the correct hash query to the random oracle. This condition is necessary if
the simulator is to simulate the decryption correctly. Secondly, there should be a
mechanism for checking which hash query is the correct hash query for a decryp-
tion. Otherwise, given a ciphertext for a decryption query, the simulator might return
many distinct results depending on the used hash queries.
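The mechanism can be sketched as follows (our illustration, with toy parameters): the simulator records every random oracle query and, given a decryption query, searches for the recorded query (0, g1^r) that explains the ciphertext, never touching the secret exponent a.

```python
import hashlib, secrets

p, g = 2**127 - 1, 3                     # toy group (illustration only)
hash_list = {}                           # the random oracle transcript

def H(prefix: int, k: int) -> int:
    """Random oracle stand-in that records every query it answers."""
    key = (prefix, k)
    if key not in hash_list:
        d = hashlib.sha256(bytes([prefix]) + k.to_bytes(16, "big")).digest()
        hash_list[key] = int.from_bytes(d[:16], "big")
    return hash_list[key]

# The adversary builds a valid ciphertext honestly, querying H along the way.
a = secrets.randbelow(p - 2) + 2         # secret exponent, NOT given to the simulator
pk = pow(g, a, p)
m = secrets.randbelow(2**64)
r = secrets.randbelow(p - 2) + 2
k = pow(pk, r, p)
ct = (pow(g, r, p), H(0, k) ^ r, H(1, k) ^ m)

def simulate_decrypt(pk, ct):
    """Decrypt without sk: find the recorded query (0, g1^r) that explains ct."""
    C1, C2, C3 = ct
    for (prefix, kq), h in list(hash_list.items()):
        if prefix != 0:
            continue
        r_cand = h ^ C2                  # candidate randomness from this query
        if pow(g, r_cand, p) == C1 and pow(pk, r_cand, p) == kq:
            return H(1, kq) ^ C3         # consistent: recover the message
    return None                          # reject: no hash query explains ct

assert simulate_decrypt(pk, ct) == m
```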
• Without making the challenge hash query, the adversary cannot distinguish the
simulated scheme from the real scheme and has no advantage in breaking the
challenge ciphertext.
• The simulator uses the challenge hash query to solve the hard problem.
• The advantage εR of solving the underlying hard problem is non-negligible.
• The time cost of the simulation is polynomial time.
We find that any provably secure encryption scheme under a decisional hardness
assumption can be modified into a provably secure encryption scheme under a com-
putational hardness assumption with random oracles. Therefore, the above summary
is only suitable for some encryption schemes, especially those schemes that cannot
be proved secure under decisional hardness assumptions.
σm = H(m)^a,
where a is the unknown secret from the problem instance, and all xi are randomly
chosen by the simulator. The simulatable and reducible conditions are described as
follows:
σm is: simulatable, if x ∈ {x1, x2, · · · , xq}; reducible, otherwise.
f (a) = (a − x1 )(a − x2 ) · · · (a − xq ).
Since x ∉ {x1, x2, · · · , xq}, we have z = f(x) ≠ 0 and
4.13 Examples of Incorrect Security Reductions 123
(f(a) − f(x)) / (a − x) = w_{q−1} a^{q−1} + · · · + w_1 a + w_0.
Then g^{1/(a−H(m))} can be computed by

  (σm / ∏_{i=0}^{q−1} (g^{a^i})^{w_i})^{1/z}
= (g^{f(a)/(a−H(m))} / g^{∑_{i=0}^{q−1} a^i w_i})^{1/z}
= (g^{(f(a)−z+z)/(a−H(m))} / g^{∑_{i=0}^{q−1} a^i w_i})^{1/z}
= ((g^{(f(a)−z)/(a−H(m))} · g^{z/(a−H(m))}) / g^{∑_{i=0}^{q−1} a^i w_i})^{1/z}
= (g^{z/(a−H(m))})^{1/z}
= g^{1/(a−H(m))}.

The pair (−H(m), g^{1/(a−H(m))}) is the solution to the q-SDH problem instance.
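The coefficients w_0, ..., w_{q−1} come from dividing f(A) − z by (A − x). The identity the reduction relies on, f(a) − z = (a − x)·∑ w_i a^i, can be checked numerically over a toy prime field (our sketch):

```python
import random

p = 2**61 - 1                          # toy prime field for exponent arithmetic
rng = random.Random(7)
q = 5
xs = [rng.randrange(1, p) for _ in range(q)]   # the simulator's x_1, ..., x_q
x = rng.randrange(1, p)                        # x = H(m), not among the x_i

def mul_linear(coeffs, rt):
    """Multiply a polynomial (coeffs low-to-high, over Z_p) by (A - rt)."""
    out = [0] * (len(coeffs) + 1)
    for i, c in enumerate(coeffs):
        out[i] = (out[i] - rt * c) % p
        out[i + 1] = (out[i + 1] + c) % p
    return out

def horner(coeffs, t):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * t + c) % p
    return acc

f = [1]                                 # f(A) = (A - x_1)(A - x_2)...(A - x_q)
for rt in xs:
    f = mul_linear(f, rt)
z = horner(f, x)                        # z = f(x), nonzero since x is not a root

# Synthetic division of f(A) - z by (A - x) gives w_{q-1}, ..., w_0.
hi = list(reversed(f))                  # coefficients high-to-low
hi[-1] = (hi[-1] - z) % p               # constant term of f(A) - z
acc, w_hi = 0, []
for c in hi:
    acc = (acc * x + c) % p
    w_hi.append(acc)
rem = w_hi.pop()                        # final value is the remainder: must be 0
w = list(reversed(w_hi))                # w_0, ..., w_{q-1}, low-to-high

a = rng.randrange(1, p)                 # stand-in for the unknown exponent
assert rem == 0
assert (horner(f, a) - z) % p == ((a - x) * horner(w, a)) % p
```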
In a security reduction, the malicious adversary can utilize what it knows (scheme
algorithm, reduction algorithm, and how to solve all computational hard problems)
to distinguish the given scheme or find a way to launch a useless attack on the
scheme. In this section, we give three examples to explain why security reductions
fail. Note that the security reductions in our given examples are incorrect but this
does not mean that the proposed schemes are insecure. Actually, all the schemes in
our examples can be proved secure with other security reductions.
pk = (g1 , g2 , g3 ) = (gα , gβ , gγ ),
sk = (α, β , γ).
Sign: The signing algorithm takes as input a message m ∈ Z p , the secret key
sk, and the system parameters SP. It computes the signature σm on m as
σm = g^{1/(α + mβ + H(m)γ)}.
Theorem 4.13.1.1 Suppose the hash function H is a random oracle. If the 1-SDH
problem is hard, the proposed scheme is provably secure in the EU-CMA secu-
rity model with only two signature queries, where the adversary must select two
messages m1 , m2 for signature queries before making hash queries to the random
oracle.
Incorrect Proof. Suppose there exists an adversary A who can break the proposed
scheme in the corresponding security model. We construct a simulator B to solve
the 1-SDH problem. Given as input a problem instance (g, ga ) over the pairing group
PG, B controls the random oracle, runs A , and works as follows.
Setup. Let SP = PG and H be the random oracle controlled by the simulator. The
simulator randomly chooses x1 , x2 , x3 , y from Z p and sets the secret key as
α = x1 a, β = x2 a + y, γ = x3 a,
where a is the unknown random number in the instance of the hard problem. The
corresponding public key is
(g1, g2, g3) = ((g^a)^{x1}, (g^a)^{x2} g^y, (g^a)^{x3}),
x1 + x2 mi + x3 wi = 0,
as the signature of the message m. Let H(m) = w. According to the random oracle,
we have x1 + x2 m + x3 w = 0 and
g^{1/(α + mβ + H(m)γ)} = g^{1/(x1·a + (x2·a + y)m + x3·H(m)·a)} = g^{1/(a(x1 + x2·m + x3·w) + ym)} = g^{1/(ym)}.
The simulated scheme is distinguishable from the real scheme, because this event
occurs in the real scheme only with negligible probability 1/p. The adversary
therefore breaks the security reduction by returning an invalid signature in the
forgery phase. The proposed scheme is therefore not provably secure.
Comments on the technique. In this security reduction, a signature of m is simu-
latable if and only if the pair (m, H(m)) satisfies x1 + x2 m + x3 H(m) = 0. Without
the use of random oracles, it is hard for the simulator to simulate the two queried
signatures on m1 , m2 if H(m1 ) and H(m2 ) cannot be controlled by the simulator.
This is the advantage of using random oracles in the security reduction.
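The cancellation that makes a signature simulatable can be checked numerically (toy field, our sketch): programming H(m) = w with x1 + x2·m + x3·w ≡ 0 (mod p) removes the unknown a from the signing exponent.

```python
import random

p = 2**61 - 1                                # toy prime (illustration only)
rng = random.Random(3)
x1, x2, x3, y, a = (rng.randrange(1, p) for _ in range(5))
alpha, beta, gamma = (x1 * a) % p, (x2 * a + y) % p, (x3 * a) % p

m = rng.randrange(1, p)
# Program the random oracle: H(m) = w with x1 + x2*m + x3*w = 0 (mod p).
w = (-(x1 + x2 * m) * pow(x3, -1, p)) % p

exponent = (alpha + m * beta + w * gamma) % p
assert exponent == (y * m) % p               # the unknown a has cancelled out
```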
pk = (g1 , g2 , g3 ) = (gα , gβ , gγ ),
sk = (α, β , γ).
Sign: The signing algorithm takes as input a message m ∈ Z p , the secret key
sk, and the system parameters SP. It chooses a random number r ∈ Z p and
computes the signature σm on m as
σm = (σ1, σ2) = (r, g^{1/(α + mβ + rγ)}).
Theorem 4.13.2.1 If the 1-SDH problem is hard, the proposed scheme is existen-
tially unforgeable in the key-only security model, where the adversary is not allowed
to query signatures.
Incorrect Proof. Suppose there exists an adversary A who can break the proposed
scheme in the key-only security model. We construct a simulator B to solve the
1-SDH problem. Given as input a problem instance (g, ga ) over the pairing group
PG, B runs A and works as follows.
Setup. Let SP = PG. The simulator randomly chooses x, y from Z p and sets the
secret key as
α = a, β = ya + x, γ = xa,
where a is the unknown random number in the instance of the hard problem. The
corresponding public key is
(g1, g2, g3) = (g^a, (g^a)^y g^x, (g^a)^x),
x = γ/α,   y = (β − γ/α)/α.
Then, the adversary breaks the security reduction by returning a forged signature
(r∗, g^{1/(α + m∗β + r∗γ)}) on the message m∗ with a particular number r∗ satisfying
1 + ((β − γ/α)/α)·m∗ + (γ/α)·r∗ = 0.
That is, r∗ is equal to
r∗ = −(1 + ym∗)/x mod p.
The forged signature is useless for the simulator because 1 + ym∗ + xr∗ = 0. The
proposed scheme is therefore not provably secure.
Unbounded computational power revisited. The above attack is based on the as-
sumption that the adversary, who has unbounded computational power, knows how
to compute α, β , γ from the public key. That is, the adversary knows how to solve
the DL problem, which is harder than the 1-SDH problem. This example raises an
interesting question. Can this security reduction be secure against an adversary who
cannot solve the 1-SDH problem? This question is not easy to answer. The answer
“yes” requires us to prove that the adversary has no advantage in finding (m∗ , r∗ )
such that 1 + ym∗ + xr∗ = 0 from ga , gya+x , and gxa . Actually, the adversary does
not need to solve the DL problem. An equivalent problem is, given gα , gβ , gγ , to
find (m∗ , r∗ ) such that
g^α · g^{βm∗} · g^{γr∗} = g^{(γ/α)·m∗},
which implies 1 + ym∗ + xr∗ = 0. The corresponding proof is rather complicated
because we need to prove that the adversary cannot solve this problem if the 1-SDH
problem is hard for the adversary. There might be other approaches for the adversary
to find (m∗ , r∗ ) such that 1 + ym∗ + xr∗ = 0. We must prove that the adversary cannot
find (m∗ , r∗ ) in all cases if we want to program a correct reduction. This is why we
assume that the adversary has unbounded computational power for simple analysis.
Comments on the technique. In this security reduction, a signature of m is either
simulatable or reducible depending on the corresponding random number r. If 1 +
ym + xr = 0, the signature is simulatable. Otherwise, it is reducible. The partition is
whether or not 1 + ym + xr = 0 in the given signature, and it must be hard for the
adversary to find it.
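The attack itself is mechanical once x and y are known. The sketch below (toy field, ours) recovers x and y from (α, β, γ), picks the bad r∗, and confirms the forgery exponent collapses to x·m∗, a value the simulator could compute by itself:

```python
import random

p = 2**61 - 1                                # toy prime (illustration only)
rng = random.Random(5)
a, x, y = (rng.randrange(1, p) for _ in range(3))
alpha, beta, gamma = a, (y * a + x) % p, (x * a) % p   # the simulator's programming

# Unbounded adversary: recover x and y from (alpha, beta, gamma).
x_adv = (gamma * pow(alpha, -1, p)) % p            # x = gamma / alpha
y_adv = ((beta - x_adv) * pow(alpha, -1, p)) % p   # y = (beta - x) / alpha
assert (x_adv, y_adv) == (x, y)

# Pick the "bad" randomness r* with 1 + y*m* + x*r* = 0 (mod p).
m_star = rng.randrange(1, p)
r_star = (-(1 + y_adv * m_star) * pow(x_adv, -1, p)) % p

# The forgery exponent alpha + m*beta + r*gamma collapses to x*m*, free of a:
exponent = (alpha + m_star * beta + r_star * gamma) % p
assert exponent == (x * m_star) % p          # the simulator can compute this itself
```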
Sign: The signing algorithm takes as input a message m ∈ Z p , the secret key
sk, and the system parameters SP. It chooses a random coin c ∈ {0, 1} and
computes the signature σm on m as
σm = g^{1/(αc + m)}.
Theorem 4.13.3.1 If the 1-SDH problem is hard, the proposed scheme is provably
secure in the EU-CMA security model with only one signature query.
Incorrect Proof. Suppose there exists an adversary A who can break the proposed
scheme in the corresponding security model. We construct a simulator B to solve
the 1-SDH problem. Given as input a problem instance (g, ga ) over the pairing group
PG, B runs A and works as follows.
Setup. Let SP = PG. The simulator randomly chooses x ∈ Zp, b ∈ {0, 1} and sets
the public key as
(g^{α0}, g^{α1}) = (g^x, g^a) if b = 0, and (g^{α0}, g^{α1}) = (g^a, g^x) otherwise,
where a is the unknown random number in the instance of the hard problem.
Query. For a signature query on m, the simulator computes
σm = g^{1/(x + m)} = g^{1/(αb + m)}.
In this section, we give three examples to introduce how to program a correct secu-
rity reduction. These examples are one-time signature schemes where each proposed
scheme can generate one signature at most. We choose one-time signature schemes
as examples because it is easy to program correct security reductions, especially the
correctness analysis.
Sign: The signing algorithm takes as input a message m ∈ {0, 1}∗ , the secret
key osk, and the system parameters SP. It computes the signature σm on m as
σm = α + H(m) · β mod p.
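A toy sign/verify pair for this one-time scheme (our sketch; we assume the natural verification equation g^σ = g1 · g2^{H(m)} against opk = (g^α, g^β), and tiny, insecure group parameters):

```python
import hashlib, random

q, P, g = 509, 1019, 4          # g generates the order-q subgroup of Z_P* (toy sizes)
rng = random.Random(11)
alpha, beta = rng.randrange(1, q), rng.randrange(1, q)
opk = (pow(g, alpha, P), pow(g, beta, P))   # opk = (g^alpha, g^beta)

def H(m: bytes) -> int:
    """Hash into Z_q; stands in for the random oracle."""
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % q

def sign(m: bytes) -> int:
    return (alpha + H(m) * beta) % q        # sigma_m = alpha + H(m)*beta mod q

def verify(m: bytes, sigma: int) -> bool:
    # assumed verification equation: g^sigma == g1 * g2^H(m)
    return pow(g, sigma, P) == (opk[0] * pow(opk[1], H(m), P)) % P

assert verify(b"hello", sign(b"hello"))
assert not verify(b"hello", sign(b"hello") + 1)
```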
4.14 Examples of Correct Security Reductions 131
Theorem 4.14.1.1 Suppose the hash function H is a random oracle. If the DL prob-
lem is hard, the proposed one-time signature scheme is provably secure in the EU-
CMA security model with only one signature query and reduction loss qH , where qH
is the number of hash queries to the random oracle.
Proof Idea. Let (g, ga ) be an instance of the DL problem that the simulator receives.
To solve the DL problem with the forged signature, the simulation should satisfy the
following conditions.
• Both α and β must be simulated with a. Otherwise, it is impossible to have
both reducible signatures and simulatable signatures in the simulation. That is,
all signatures will be either reducible or simulatable.
• The forged signature on the message m∗ is reducible if α + H(m∗ )β contains a.
When α, β are both simulated with a, α + H(m∗)β contains a for a random value
H(m∗), except with negligible probability.
• The queried signature on the queried message m is simulatable if α + H(m)β
does not contain a. When α, β are both simulated with a, to make sure that a can
be removed in this queried signature, H(m) must be very special and related to
the setting of α, β .
Proof. Suppose there exists an adversary A who can break the one-time signature
scheme in the EU-CMA security model with only one signature query. We construct
a simulator B to solve the DL problem. Given as input a problem instance (g, ga )
over the cyclic group (G, p, g), B controls the random oracle, runs A , and works
as follows.
Setup. Let SP = (G, p, g) and H be set as a random oracle controlled by the simu-
lator. The simulator randomly chooses x, y ∈ Z p and sets the secret key as
α = a,   β = −a/x + y.
The public key is
opk = (g1, g2) = (g^a, (g^a)^{−1/x} g^y),
which can be computed from the problem instance and the chosen parameters.
H-Query. The adversary makes hash queries in this phase. Before receiving queries
from the adversary, B randomly chooses an integer i∗ ∈ [1, qH ], where qH denotes
the number of hash queries to the random oracle. Then, B prepares a hash list
to record all queries and responses as follows, where the hash list is empty at the
beginning.
For a query on mi , if mi is already in the hash list, B responds to this query fol-
lowing the hash list. Otherwise, let mi be the i-th new queried message. B randomly
chooses wi ∈ Z p and sets H(mi ) as
H(mi) = x if i = i∗, and H(mi) = wi otherwise.
Then, B responds to this query with H(mi ) and adds (mi , H(mi )) to the hash list.
Query. The adversary makes a signature query on m. If m is not the i∗ -th queried
message in the hash list, abort. Otherwise, B computes σm as
σm = xy mod p.
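The cancellation behind σm = xy is easy to verify numerically (toy field, ours): with α = a and β = −a/x + y, programming H(m) = x makes the unknown a vanish from the queried signature.

```python
import random

q = 509                                  # toy prime group order (illustration only)
rng = random.Random(2)
a, x, y = (rng.randrange(1, q) for _ in range(3))
alpha = a                                # alpha = a
beta = (y - a * pow(x, -1, q)) % q       # beta = -a/x + y  (mod q)

# Programming H(m) = x cancels the unknown a in the queried signature:
sigma = (alpha + x * beta) % q           # alpha + H(m)*beta with H(m) = x
assert sigma == (x * y) % q              # equals xy, computable without knowing a
```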
Advantage and time cost. Suppose the adversary breaks the scheme with
(t, 1, ε) after making qH hash queries. The advantage of solving the DL problem
is therefore ε/qH. Let Ts denote the time cost of the simulation. We have Ts = O(1).
B will solve the DL problem with (t + Ts , ε/qH ).
This completes the proof of the theorem.
Sign: The signing algorithm takes as input a message m ∈ {0, 1}∗ , the secret
key sk, and the system parameters SP. It chooses a random number r ∈ Z p and
computes the signature σm on m as
σm = (σ1, σ2) = (r, α + H(m)·β + r·γ mod p).
Theorem 4.14.2.1 If the DL problem is hard, the above one-time signature scheme
is provably secure in the EU-CMA security model with only one signature query and
reduction loss about L = 1.
Proof Idea. Let (g, ga ) be an instance of the DL problem. Let σm be the queried sig-
nature generated with a random number r, and σm∗ be the forged signature generated
with a random number r∗ , where
σm = (r, α + H(m)β + rγ),
σm∗ = (r∗, α + H(m∗)β + r∗γ).
Let H(m∗ ) = u · H(m) for a number u ∈ Z p . If the adversary knows the reduction
algorithm and has unbounded computational power, we have the following interest-
ing observations on the simulation of α, β , γ if the security reduction provides only
one simulation.
• If α does not contain a, we have that H(m)β +rγ is simulatable for the simulator.
Let r∗ = ru. We have
σm∗ = (r∗, α + H(m∗)β + r∗γ) = (ru, α + u(H(m)β + rγ)),
which can be computed from the problem instance and the chosen parameters.
Query. The adversary makes a signature query on m. B computes σm as
σm = (σ1, σ2) = (−x1 − H(m)x2, y1 + H(m)y2).
Finally, B computes
a = (σ2 − y1 − H(m∗)y2) / (x1 + H(m∗)x2 + r∗)
as the solution to the DL problem instance.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been ex-
plained above. The randomness of the simulation includes all random numbers in
the key generation and the signature generation. They are
α = x1a + y1,   β = x2a + y2,   γ = a,   σ1 = −x1 − H(m)x2.
The simulation is indistinguishable from the real attack because they are random
and independent following the analysis below.
Probability of successful simulation and useful attack. There is no abort in the
simulation. The forged signature is reducible if r∗ 6= −x1 − H(m∗ )x2 . To prove that
the adversary has no advantage in computing −x1 −H(m∗ )x2 , we only need to prove
that −x1 − H(m∗ )x2 is random and independent of the given parameters. Since σ2
in the queried signature can be computed from the secret key and σ1 , we only need
to prove that the following elements
α = x1a + y1,   β = x2a + y2,   σ1 = −x1 − H(m)x2,   −x1 − H(m∗)x2
are random and independent. Written as linear functions of (x1, x2, y1, y2), their
coefficient matrix is
( a    0        1  0 )
( 0    a        0  1 )
( −1   −H(m)    0  0 )
( −1   −H(m∗)   0  0 )
It is not hard to find that the absolute value of the determinant of the matrix is
|H(m∗ ) − H(m)|, which is nonzero. Therefore, we have that α, β , r, −x1 − H(m∗ )x2
are random and independent. Combining Lemma 4.7.2 with the above result, we
have that r∗ is random and independent of the given parameters. Therefore, the
probability of successful simulation and useful attack is 1 − 1/p ≈ 1.
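The determinant claim can be checked mechanically (our sketch): write α, β, σ1 and −x1 − H(m∗)x2 as linear functions of the hidden randomness (x1, x2, y1, y2) and take an exact determinant of the coefficient matrix.

```python
import random
from fractions import Fraction

rng = random.Random(9)
a, Hm, Hm_star = (rng.randrange(2, 10**6) for _ in range(3))

# Rows: alpha, beta, sigma_1, and -x1 - H(m*)x2 as linear functions of
# the hidden randomness (x1, x2, y1, y2).
M = [
    [a,  0,        1, 0],   # alpha   = a*x1 + y1
    [0,  a,        0, 1],   # beta    = a*x2 + y2
    [-1, -Hm,      0, 0],   # sigma_1 = -x1 - H(m)*x2
    [-1, -Hm_star, 0, 0],   # target  = -x1 - H(m*)*x2
]

def det(matrix):
    """Exact determinant via fraction-based Gaussian elimination."""
    m = [[Fraction(v) for v in row] for row in matrix]
    n, d = len(m), Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if m[r][i] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            m[i], m[piv] = m[piv], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, n):
            fac = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= fac * m[i][c]
    return d

assert abs(det(M)) == abs(Hm_star - Hm)      # nonzero whenever H(m*) != H(m)
```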
Advantage and time cost. Suppose the adversary breaks the scheme with
(t, 1, ε). Let Ts denote the time cost of the simulation. We have Ts = O(1). B will
solve the DL problem with (t + Ts , ε).
This completes the proof of the theorem.
The security proof in Theorem 4.14.2.1 is based on the fact that the adversary cannot
find the partition x1 + H(m∗ )x2 + r∗ = 0 from the given parameters. In this section,
we introduce a security proof composed of two different simulations whose parti-
tions are opposite. That is, a signature generated with a random number is simulat-
able in one simulation and reducible in another simulation. If the adversary cannot
distinguish which simulation is adopted by the simulator, any forged signature gen-
erated by the adversary will be reducible with probability 12 .
Theorem 4.14.3.1 If the DL problem is hard, the proposed one-time signature
scheme in Section 4.14.2 is provably secure in the EU-CMA security model with
only one signature query and reduction loss at most L = 2.
Proof. Suppose there exists an adversary A who can break the one-time signature
scheme in the EU-CMA security model with only one signature query. We construct
a simulator B to solve the DL problem. Given as input a problem instance (g, ga )
over the cyclic group (G, g, p), B runs A and works as follows.
B randomly chooses a secret bit µ ∈ {0, 1} and programs the simulation in two
different ways.
• The reduction for µ = 0 is programmed as follows.
Setup. Let SP = (G, g, p, H). B randomly chooses x1 , y1 , y2 ∈ Z p and sets the
secret key as α = x1a + y1, β = 0·a + y2, γ = a. The public key is
opk = (g1, g2, g3) = ((g^a)^{x1} g^{y1}, g^{y2}, g^a),
which can be computed from the problem instance and the chosen parameters.
Query. The adversary makes a signature query on m. B computes σm as
σm = (σ1, σ2) = (−x1, y1 + H(m)y2).
Finally, B computes
a = (σ2 − y1 − H(m∗)y2) / (x1 + r∗)
as the solution to the DL problem instance.
• The reduction for µ = 1 is programmed as follows.
Setup. Let SP = (G, g, p, H). B randomly chooses x2 , y1 , y2 ∈ Z p and sets the
secret key as α = 0·a + y1, β = x2a + y2, γ = a. The public key is
opk = (g1, g2, g3) = (g^{y1}, (g^a)^{x2} g^{y2}, g^a),
which can be computed from the problem instance and the chosen parameters.
Query. The adversary makes a signature query on m. B computes σm as
σm = (σ1, σ2) = (−H(m)x2, y1 + H(m)y2).
Finally, B computes
a = (σ2 − y1 − H(m∗)y2) / (H(m∗)x2 + r∗)
as the solution to the DL problem instance.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the key
generation and the signature generation. They are
(α, β, γ, r) = (x1a + y1, 0·a + y2, a, −x1) if µ = 0, and
(α, β, γ, r) = (0·a + y1, x2a + y2, a, −H(m)x2) if µ = 1.
In the last section of this chapter, we revisit and classify concepts used in security
proofs of public-key cryptography. It is important to fully understand these concepts
and master where/how to apply them in a security proof. Note that some concepts,
such as advantage and valid ciphertexts, may have different explanations elsewhere
in the literature.
Various concepts related to “proof” are interpreted differently for different purposes.
So far, we have the following proof concepts.
• Proof by Contradiction. This concept explains what provable security via re-
duction means in public-key cryptography. We will revisit preliminaries and some
important concepts about proof by contradiction in Section 4.15.2.
• Security Proof. A security proof is mainly composed of a reduction algorithm
and its correctness analysis. To be precise, the correctness analysis should show
that the advantage of solving an underlying hard problem using an adaptive attack
by a malicious adversary is non-negligible.
• Security Reduction. This is a reduction run by the simulator following the re-
duction algorithm. A security reduction should introduce how to generate a simu-
lated scheme and how to reduce an attack on this scheme to solving an underlying
hard problem. We will revisit some important concepts about security reduction
in Section 4.15.3.
• Simulation. This is an interaction between the adversary and the simulator who
generates a simulated scheme with a problem instance. In comparison with se-
curity reduction, simulation focuses on how to program a simulation indistin-
guishable from the real attack. We will revisit some important concepts about
simulation in Section 4.15.4.
A security proof is not a real mathematical proof showing that a proposed scheme
is secure. Instead, it merely proposes a reduction algorithm and shows that if there
exists an adversary who can break the proposed scheme, we can run the reduction
algorithm and reduce the adversary’s attack to solving an underlying hard problem.
That is, a security proof is an algorithm only. Unfortunately, we cannot demonstrate
this reduction algorithm to convince people that the proposed scheme is secure.
What we do instead is a theoretical analysis showing that the proposed reduction
algorithm indeed works. That is, we can reduce any adaptive attack by the malicious
adversary who has unbounded computational power to solving an underlying hard
problem with non-negligible advantage.
the security reduction, we assume that the adversary knows the scheme algorithm of
the proposed scheme, the reduction algorithm, and how to solve all computational
hard problems. On the other hand, we can program a security reduction successfully,
because the adversary does not know the random number(s) chosen by the simula-
tor, the problem instance given to the simulator, and how to solve an absolutely hard
problem.
In the security reduction, the attack by the adversary and the underlying hard
problem for security reductions are related. We reduce a computational attack to
solving a computational hard problem. For example, in a security reduction for dig-
ital signatures, we mainly use the forged signature from the adversary to solve a
computational hard problem. We reduce a decisional attack to solving a decisional
hard problem. For example, in a security reduction for encryption, we mainly use
the guess of the randomly chosen message mc in the challenge ciphertext to solve
a decisional hard problem. With the help of random oracles, in a security reduction
for encryption, we can also reduce a decisional attack in the indistinguishability se-
curity model to solving a computational hard problem. We stress that each type of
reduction is quite different in simulation, solution, and analysis.
The first step of the security reduction is the simulation. The simulator uses the
given problem instance to generate a simulated scheme, and may abort in the sim-
ulation so that the simulation is not successful. The adversary will launch a failed
attack or a successful attack in the simulation. It is not necessary to implement the
full simulated scheme. Only those algorithms involved in the responses to queries
are desired. The simulated scheme is indistinguishable from the real scheme when
correctness and randomness hold. To be precise, all responses to queries, such as
signature queries and decryption queries, must be correct. All simulated random
numbers (group elements) must be truly random and independent. The indistin-
guishability is necessary because we only assume that the adversary can break a
given scheme that looks like a real scheme from the point of view of the adversary.
We cannot guarantee that the adversary will also break the given scheme with the
same advantage as breaking the real scheme, if the given scheme is distinguishable
from the real scheme.
In a security reduction with random oracles, the simulator controls random ora-
cles and can embed any special integers/elements in responses to hash queries, as
long as all responses are uniformly distributed. The simulator controls responses
to hash queries to help program the simulation, especially for signature generation
and private-key generation in identity-based encryption, without knowing the corre-
sponding secret key. The number of hash queries to random oracles is polynomial.
As long as the adversary does not query x to the random oracle, H(x) is random and
unknown to the adversary. A hash list is used to record all queries made by the ad-
versary, all responses to hash queries, and all secret states for computing responses.
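The hash-list bookkeeping described above is commonly implemented by lazy sampling. A minimal sketch (hypothetical class and names; responses are toy stand-ins for group elements):

```python
import random

P = 2**127 - 1                        # stand-in modulus for "group elements"

class SimulatedOracle:
    def __init__(self):
        self.hash_list = {}           # query x -> (secret state w, response)

    def H(self, x):
        if x not in self.hash_list:   # unqueried: H(x) is still undetermined
            w = random.randrange(1, P)             # secret state for B
            self.hash_list[x] = (w, pow(3, w, P))  # e.g. respond with g^w
        return self.hash_list[x][1]   # repeated queries: same response

ro = SimulatedOracle()
assert ro.H("m") == ro.H("m")         # responses are consistent
assert "m2" not in ro.hash_list       # H(m2) is random and unknown until queried
```

A real reduction would embed special elements such as g^(b+w) in some responses; the hash list is exactly what lets B recall the secret state w when answering later queries.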
For digital signatures, most security reductions in the literature program the sim-
ulation in such a way that the simulator does not know the corresponding secret key,
and the simulator utilizes the forged signature from the adversary to solve an un-
derlying hard problem. All digital signatures in the simulation can be classified into
simulatable signatures and reducible signatures. In the simulation, there should be
a partition that specifies which signatures are simulatable and which signatures are
reducible. If a security reduction is to use the forged signature to solve an underlying
hard problem, all queried signatures must be simulatable, and the forged signature
must be reducible. The adversary should not be able to find or distinguish the parti-
tion from the simulation. Otherwise, the adversary can always choose a simulatable
signature as the forged signature so that the forgery is a useless attack.
The simulation is much more complicated for encryption than that for digital
signatures. We can program the security reduction to solve either a decisional hard
problem or a computational hard problem depending on the proposed scheme and
the security reduction. The corresponding security reductions are very different.
• If we program the security reduction to solve a decisional hard problem, we use
the adversary’s guess of the message in the challenge ciphertext to solve the un-
derlying hard problem. In the corresponding simulation, the target Z from the
problem instance must be embedded in the challenge ciphertext in a way that
the challenge ciphertext fulfills several conditions. To be precise, if Z is true,
the simulated scheme is indistinguishable from the real scheme so that the adversary
will guess the encrypted message correctly with probability 1/2 + ε/2 according
to the breaking assumption. Otherwise, it should be as hard for the adversary to
break the false challenge ciphertext as it is to break a one-time pad, so that the
adversary has success probability only about 1/2 of guessing the encrypted message.
It is not easy to analyze the probability of breaking the false challenge ciphertext
when the adversary can make decryption queries. To analyze the probability PF of
breaking the false challenge ciphertext, we need to analyze the probabilities and
advantages of PFW , AKF , ACF , AIF , and PFA . In the simulation of encryption schemes,
the challenge decryption key can be either known or unknown to the simulator,
which is dependent on the proposed scheme and the security reduction.
• If we program a security reduction in the indistinguishability security model to
solve a computational hard problem with the help of random oracles, the security
reduction is very different from that to solve a decisional hard problem. In partic-
ular, the simulator should use one of the hash queries, namely the challenge hash
query, to solve the underlying computational hard problem. There is no true/false
challenge ciphertext definition in this type of simulation. Before the adversary
makes the challenge hash query to the random oracle, the simulation must be
indistinguishable from the real scheme, and the adversary must have no advan-
tage in breaking the challenge ciphertext. If these conditions hold, the adversary
must make the challenge hash query to the random oracle in order to fulfill the
breaking assumption. In this security reduction, we usually program the simula-
tion in such a way that the simulator does not know the challenge decryption key.
For CCA security, the hash queries from the adversary must be able to help the
decryption simulation. That is, all correct ciphertexts can be decrypted with the
correct hash queries, and all incorrect ciphertexts will be rejected according to
those hash queries made by the adversary.
In a security reduction for encryption under a decisional hardness assumption,
the decryption simulation should be indistinguishable from the real attack if Z is
true, and the decryption simulation should not help the adversary break the false
challenge ciphertext if Z is false. In a security reduction for encryption under a
computational hardness assumption, the decryption simulation should be indistin-
guishable from the real attack and should not help the adversary distinguish the
challenge ciphertext in the simulated scheme from that in the real scheme.
algorithm. The simulation is successful if the simulator does not abort during the
simulation, where the simulator aborts because it cannot correctly respond to the
adversary’s queries.
• If the simulation is successful, the adversary will launch an attack on the simu-
lated scheme. The attack is a successful attack with a certain probability, depend-
ing on the simulation and the reduction algorithm. To be precise, if the simulated
scheme is indistinguishable (IND) from the real scheme, the adversary should
launch a successful attack on the simulated scheme with probability Pε defined in
the breaking assumption. If the simulated scheme is distinguishable (DIS) from
the real scheme, the adversary will launch a successful attack on the simulated
scheme with malicious probability P∗ ∈ [0, 1], depending on the reduction algo-
rithm and explained as follows.
– For digital signatures, the adversary will launch a successful attack with prob-
ability P∗ = 0, no matter whether the reduction is by the forged signature or
hash queries.
– For encryption under a decisional hard problem, the adversary will try to
launch a successful attack with probability P∗ = Pε if the simulation is dis-
tinguishable because Z is false.
– For encryption under a computational hard problem, the adversary will try to
launch a successful attack with probability P∗ = 0 if the simulation is distin-
guishable before the adversary makes the challenge hash query to the random
oracle.
• An attack by the adversary, no matter whether it is successful or failed, can be a
useful attack depending on the cryptosystem and the reduction algorithm. Only
a useful attack can be reduced to solving an underlying hard problem.
There are many concepts associated with the word “model” including security
model, standard security model, standard model, random oracle model, and generic
group model. They have totally different meanings.
• Security Model. A security model is defined for modeling attacks. It is a game
played by the adversary and the challenger who generates a scheme for the ad-
versary to attack. Each security model can be seen as an abstracted class of at-
tacks, where we define how the adversary breaks the scheme. The security model
should appear in the security definition for a cryptosystem.
• Standard Security Model. A cryptosystem can have many security models. One
of these security models is selected as the standard security model. The standard
security model should be widely accepted and good enough to prove the security
of the proposed scheme. We emphasize that the standard security model is not
the strongest security model for a cryptosystem.
• Standard Model. A standard model is a model of computation where the adver-
sary is only restricted by the amount of time it takes to break a cryptosystem.
Differing from the standard model, the random oracle model is a special model
that has defined some further restrictions on the adversary.
• Random Oracle Model. The random oracle model is not a security model for
a cryptosystem but an assumption where at least one hash function is set as a
random oracle controlled by the simulator, and the adversary must access the
random oracle to know the corresponding hash values. The random oracle model
only appears in the security proof.
• Generic Group Model. The generic group model is an assumption proposed for
analyzing whether a problem defined over cyclic groups is hard or not. In this
model, the adversary cannot see the group implementation or any group element,
only their encodings. Then, a problem is analyzed to be hard within this assump-
tion. The generic group model only appears in the hardness analysis for a new
hard problem.
In a security reduction, we often mention the word “indistinguishable.” It is used
in two different places.
• Indistinguishable Security. In the definition of indistinguishable security for en-
cryption, a random message chosen by the challenger or by the simulator, either
m0 or m1 , is encrypted in the challenge ciphertext. An encryption scheme is in-
distinguishably secure if the adversary has negligible advantage in guessing the
encrypted message in the challenge ciphertext.
• Indistinguishable Simulation. The adversary is given a simulated scheme, and
we want the adversary to launch an attack on it with the advantage defined in
the breaking assumption. This requires that the simulated scheme should be in-
distinguishable from the real scheme. Otherwise, the adversary could launch any
attack which might be failed or successful without any restriction.
Chapter 5
Digital Signatures with Random Oracles
In this chapter, we mainly introduce the BLS scheme [26], the BBRO scheme [20],
and the ZSS scheme [104] under the H-Type, the C-Type, and the I-Type structures,
respectively. With a random salt, we can modify the BLS scheme into the BLS+
scheme using a random bit [65] and the BLS# scheme using a random number [48]
with tight reductions. The same approach can also be applied to the ZSS scheme,
where the responses to hash queries are different due to the adoption of the q-SDH
assumption. At the end, we introduce the BLSG scheme, a simplified version of
[53], based on the BLS signature scheme with a completely new and tight security
reduction without the use of a random salt. The given schemes and/or proofs may
be different from the original ones.
pk = h, sk = α.
Sign: The signing algorithm takes as input a message m ∈ {0, 1}∗ , the secret
key sk, and the system parameters SP. It returns the signature σm on m as
σm = H(m)^α .
Theorem 5.1.0.1 Suppose the hash function H is a random oracle. If the CDH
problem is hard, the BLS signature scheme is provably secure in the EU-CMA se-
curity model with reduction loss L = qH , where qH is the number of hash queries to
the random oracle.
Proof. Suppose there exists an adversary A who can (t, qs , ε)-break the signature
scheme in the EU-CMA security model. We construct a simulator B to solve the
CDH problem. Given as input a problem instance (g, g^a , g^b ) over the pairing group
PG, B controls the random oracle, runs A , and works as follows.
Setup. Let SP = PG and H be the random oracle controlled by the simulator. B
sets the public key as h = g^a , where the secret key α is equivalent to a. The public
key is available from the problem instance.
H-Query. The adversary makes hash queries in this phase. Before receiving queries
from the adversary, B randomly chooses an integer i∗ ∈ [1, qH ], where qH denotes
the number of hash queries to the random oracle. Then, B prepares a hash list
to record all queries and responses as follows, where the hash list is empty at the
beginning.
Let the i-th hash query be mi . If mi is already in the hash list, B responds to this
query following the hash list. Otherwise, B randomly chooses wi from Z p and sets
H(mi ) as follows:
H(mi ) = g^{b+wi} if i = i∗
H(mi ) = g^{wi} otherwise.
The simulator B responds to this query with H(mi ) and adds (i, mi , wi , H(mi )) to
the hash list.
Query. The adversary makes signature queries in this phase. For a signature query
on mi , if mi is the i∗ -th queried message in the hash list, abort. Otherwise, we have
H(mi ) = g^{wi} .
B computes σmi as
σmi = H(mi )^α = (g^a )^{wi} .
According to the signature definition and simulation, we have
σm∗ / (g^a )^{wi∗} = g^{ab + a wi∗} / g^{a wi∗} = g^{ab} .
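The reduction above can be exercised end to end over a toy group in which a test harness, playing the adversary, knows a and b, while the simulator uses only the problem instance and its hash-list state. A sketch (hypothetical small parameters; the pairing-based verification equation is omitted):

```python
import random

p, q, g = 1019, 2039, 4            # order-p subgroup of Z_q*, q = 2p + 1
a, b = random.randrange(1, p), random.randrange(1, p)
A, B = pow(g, a, q), pow(g, b, q)  # problem instance (g, g^a, g^b)

msgs = ["m%d" % i for i in range(5)]
i_star = random.randrange(len(msgs))           # guess of the forgery target
w = {m: random.randrange(1, p) for m in msgs}  # hash-list secret state

def H(m):   # programmed oracle: g^(b + w) for the guess, g^w otherwise
    return B * pow(g, w[m], q) % q if m == msgs[i_star] else pow(g, w[m], q)

def sign(m):   # simulatable iff H(m) = g^w, since H(m)^a = (g^a)^w
    assert m != msgs[i_star], "abort: target message queried"
    return pow(A, w[m], q)

m0 = next(m for m in msgs if m != msgs[i_star])
assert sign(m0) == pow(H(m0), a, q)            # simulated signature is correct

forged = pow(H(msgs[i_star]), a, q)            # adversary's H(m*)^a = g^(ab+aw)
cdh = forged * pow(A, -w[msgs[i_star]], q) % q # divide out (g^a)^w
assert cdh == pow(g, a * b % p, q)             # g^(ab): CDH instance solved
```

The reduction works only when the forgery lands on the guessed i∗-th query, which is where the loss factor qH comes from.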
KeyGen: The key generation algorithm takes as input the system parameters
SP. It randomly chooses α ∈ Z p , computes h = g^α , and returns a public/secret
key pair (pk, sk) as follows:
pk = h, sk = α.
Sign: The signing algorithm takes as input a message m ∈ {0, 1}∗ , the secret
key sk, and the system parameters SP. It chooses a random coin c ∈ {0, 1} and
returns the signature σm on m as
σm = (σ1 , σ2 ) = (c, H(m, c)^α ).
We require that the signing algorithm always uses the same random coin c on
the message m.
Verify: The verification algorithm takes as input a message-signature pair
(m, σm ), the public key pk, and the system parameters SP. Let σm = (σ1 , σ2 ).
It accepts the signature if
e(σ2 , g) = e(H(m, σ1 ), h).
Theorem 5.2.0.1 Suppose the hash function H is a random oracle. If the CDH
problem is hard, the BLS+ signature scheme is provably secure in the EU-CMA
security model with reduction loss L = 2.
Proof. Suppose there exists an adversary A who can (t, qs , ε)-break the signature
scheme in the EU-CMA security model. We construct a simulator B to solve the
CDH problem. Given as input a problem instance (g, g^a , g^b ) over the pairing
PG, B controls the random oracle, runs A , and works as follows.
Setup. Let SP = PG and H be the random oracle controlled by the simulator. B
sets the public key as h = g^a , where the secret key α is equivalent to a. The public
key is available from the problem instance.
H-Query. The adversary makes hash queries in this phase. B prepares a hash list
to record all queries and responses as follows, where the hash list is empty at the
beginning.
For a query on (mi , ci ) where ci ∈ {0, 1}, if mi is already in the hash list, B
responds to this query following the hash list. Otherwise, B randomly chooses xi ∈
{0, 1}, yi , zi ∈ Z p and sets H(mi , 0), H(mi , 1) as
H(mi , 0) = g^{b+yi} , H(mi , 1) = g^{zi} if xi = 0
H(mi , 0) = g^{yi} , H(mi , 1) = g^{b+zi} if xi = 1.
The simulator B responds to this query with H(mi , ci ) and adds (mi , xi , yi , zi , H(mi , 0),
H(mi , 1)) to the hash list.
Query. The adversary makes signature queries in this phase. For a signature query
on mi , let the corresponding hashing tuple be (mi , xi , yi , zi , H(mi , 0), H(mi , 1)).
B computes σmi as
σmi = (ci , H(mi , ci )^α ) = (1, (g^a )^{zi} ) if xi = 0
σmi = (ci , H(mi , ci )^α ) = (0, (g^a )^{yi} ) if xi = 1.
pk : a,
H(mi , 0), H(mi , 1) : (b + yi , zi ) or (yi , b + zi )
ci : xi .
The adversary has no advantage in guessing which hash query is answered with
b when y∗ , z∗ are randomly chosen by the simulator. Therefore, x∗ is random and
unknown to the adversary, so that c∗ = x∗ holds with probability 1/2. Therefore, the
probability of successful simulation and useful attack is 1/2.
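The claim that c∗ = x∗ holds with probability 1/2 can be spot-checked empirically: any forgery strategy that is independent of the hidden bit matches it about half the time. A quick sketch:

```python
import random

trials = 200_000
# c* (adversary's coin) vs x* (hidden bit): both independent draws here;
# any view-independent strategy for c* gives the same matching rate.
hits = sum(random.getrandbits(1) == random.getrandbits(1)
           for _ in range(trials))
assert abs(hits / trials - 0.5) < 0.01   # approximately 1/2, as claimed
```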
Advantage and time cost. Suppose the adversary breaks the scheme with (t, qs , ε)
after making qH queries to the random oracle. The advantage of solving the CDH
problem is therefore ε/2. Let Ts denote the time cost of the simulation. We have Ts =
O(qH + qs ), which is mainly dominated by the oracle response and the signature
generation. Therefore, B will solve the CDH problem with (t + Ts , ε/2).
This completes the proof of the theorem.
pk = h, sk = α.
Sign: The signing algorithm takes as input a message m ∈ {0, 1}∗ , the secret
key sk, and the system parameters SP. It chooses a random number r ∈ Z p and
returns the signature σm on m as
σm = (σ1 , σ2 ) = (r, H(m, r)^α ).
We require that the signing algorithm always uses the same random number r
for signature generation on the message m.
Verify: The verification algorithm takes as input a message-signature pair
(m, σm ), the public key pk, and the system parameters SP. Let σm = (σ1 , σ2 ).
It accepts the signature if
e(σ2 , g) = e(H(m, σ1 ), h).
Theorem 5.3.0.1 Suppose the hash function H is a random oracle. If the CDH
problem is hard, the BLS# signature scheme is provably secure in the EU-CMA
security model with reduction loss about L = 1.
Proof. Suppose there exists an adversary A who can (t, qs , ε)-break the signature
scheme in the EU-CMA security model. We construct a simulator B to solve the
CDH problem. Given as input a problem instance (g, g^a , g^b ) over the pairing
PG, B controls the random oracle, runs A , and works as follows.
Setup. Let SP = PG and H be the random oracle controlled by the simulator. B
sets the public key as h = g^a , where the secret key α is equivalent to a. The public
key is available from the problem instance.
H-Query. The adversary makes hash queries in this phase. B prepares a hash list
to record all queries and responses as follows, where the hash list is empty at the
beginning.
For a query on (m, r) from the adversary where r ∈ Z p , if (m, r) is already
in the hash list, B responds to this query following the hash list. Otherwise, it
randomly chooses z ∈ Z p , responds to this query with H(m, r) = g^{b+z} , and adds
(m, r, z, H(m, r), A ) to the hash list. Here, A in the tuple means that the query (m, r)
is generated by the adversary.
Query. The adversary makes signature queries in this phase. For a signature query
on mi , if there exists a tuple (mi , ri , yi , H(mi , ri ), B) in the hash list, the simulator
uses this tuple to generate the signature. Otherwise, B randomly chooses ri , yi ∈ Z p ,
sets H(mi , ri ) = g^{yi} , and adds (mi , ri , yi , H(mi , ri ), B) to the hash list. If (mi , ri ) was
ever generated by the adversary, the simulator aborts.
B computes σmi as
σmi = (ri , H(mi , ri )^α ) = (ri , (g^a )^{yi} ).
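The tagged hash list and the abort rule above can be sketched as follows (hypothetical names; only the exponent bookkeeping is modeled):

```python
import random

p = 2**61 - 1
hash_list = {}                       # (m, r) -> (exponent, tag)

def h_query(m, r):                   # adversary-made hash query
    if (m, r) not in hash_list:
        hash_list[(m, r)] = (random.randrange(p), "A")  # response g^(b+z)
    return hash_list[(m, r)][0]

def sign_query(m):                   # simulator picks a fresh salt r
    r = random.randrange(p)
    tup = hash_list.get((m, r))
    if tup and tup[1] == "A":        # adversary already fixed H(m, r)
        raise RuntimeError("abort")
    if tup is None:
        hash_list[(m, r)] = (random.randrange(p), "B")  # response g^y
    return r

h_query("m", 7)
r = sign_query("m")                  # same message, fresh salt: no abort
assert hash_list[("m", 7)][1] == "A" and hash_list[("m", r)][1] == "B"
```

The abort happens only when the simulator's fresh salt collides with a pair the adversary already queried, which occurs with negligible probability, so the reduction loss is about 1.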
Forgery. The adversary returns a forged signature σm∗ on some m∗ that has not
been queried. Let the forged signature be (σ1∗ , σ2∗ ) = (r∗ , H(m∗ , r∗ )^α ), and the
corresponding hashing tuple be (m∗ , r∗ , z∗ , H(m∗ , r∗ )). We should have
H(m∗ , r∗ ) = g^{b+z∗} .
pk : a,
H(m, r) : z + b if r is chosen by the adversary,
H(mi , ri ) : yi if ri is chosen by the simulator,
random number : ri .
pk = (g1 , g2 ), sk = α.
Sign: The signing algorithm takes as input a message m ∈ {0, 1}∗ , the secret
key sk, and the system parameters SP. It chooses a random number r ∈ Z p and
returns the signature σm on m as
σm = (σ1 , σ2 ) = (g2^α H(m)^r , g^r ).
Theorem 5.4.0.1 Suppose the hash function H is a random oracle. If the CDH
problem is hard, the BBRO signature scheme is provably secure in the EU-CMA
security model with reduction loss L = qH , where qH is the number of hash queries
to the random oracle.
Proof. Suppose there exists an adversary A who can (t, qs , ε)-break the signature
scheme in the EU-CMA security model. We construct a simulator B to solve the
CDH problem. Given as input a problem instance (g, g^a , g^b ) over the pairing
PG, B controls the random oracle, runs A , and works as follows.
Setup. Let SP = PG and H be the random oracle controlled by the simulator. B
sets the public key as
g1 = g^a , g2 = g^b ,
where the secret key α is equivalent to a. The public key is available from the
problem instance.
H-Query. The adversary makes hash queries in this phase. Before any hash queries
are made, B randomly chooses i∗ ∈ [1, qH ], where qH denotes the number of hash
queries to the random oracle. Then, B prepares a hash list to record all queries and
responses as follows, where the hash list is empty at the beginning.
Let the i-th hash query be mi . If mi is already in the hash list, B responds to this
query following the hash list. Otherwise, B randomly chooses wi from Z p and sets
H(mi ) as follows:
H(mi ) = g^{wi} if i = i∗
H(mi ) = g^{b+wi} otherwise.
The simulator B responds to this query with H(mi ) and adds (i, mi , wi , H(mi )) to
the hash list.
Query. The adversary makes signature queries in this phase. For a signature query
on mi , if mi is the i∗ -th queried message in the hash list, abort. Otherwise, we have
H(mi ) = g^{b+wi} .
B chooses a random ri′ ∈ Z p and computes σmi as
σmi = ((g^a )^{−wi} · H(mi )^{ri′} , (g^a )^{−1} g^{ri′} ).
pk : a, b,
According to the setting of the simulation, where a, b, wi , ri′ are randomly chosen, it
is easy to see that they are random and independent from the point of view of the
adversary. Therefore, the simulation is indistinguishable from the real attack.
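The implicit signing randomness here is r = ri′ − a, so the simulated pair equals a correctly distributed signature. A numeric check of this algebra over a toy group (hypothetical small parameters; the harness knows a and b only to verify the result):

```python
import random

p, q, g = 1019, 2039, 4                 # order-p subgroup of Z_q*, q = 2p + 1
a, b = random.randrange(1, p), random.randrange(1, p)
A, g2 = pow(g, a, q), pow(g, b, q)      # g1 = g^a (public key), g2 = g^b
wi, ri_prime = random.randrange(p), random.randrange(p)
Hm = g2 * pow(g, wi, q) % q             # programmed H(mi) = g^(b + wi)

# Simulated signature: uses only A, Hm, wi, ri_prime (never a or b).
sim = (pow(A, -wi, q) * pow(Hm, ri_prime, q) % q,
       pow(A, -1, q) * pow(g, ri_prime, q) % q)

# Real signature (g2^alpha * H(mi)^r, g^r) with alpha = a, r = ri_prime - a.
r = (ri_prime - a) % p
real = (pow(g2, a, q) * pow(Hm, r, q) % q, pow(g, r, q))
assert sim == real                      # identical, with implicit r = ri' - a
```

Since ri′ is uniform, r = ri′ − a is uniform too, which is why the simulated signature is distributed exactly like a real one.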
Probability of successful simulation and useful attack. If the simulator success-
fully guesses i∗ , all queried signatures are simulatable, and the forged signature is
reducible because the message mi∗ cannot be chosen for a signature query, and it will
be used for the signature forgery. Therefore, the probability of successful simulation
and useful attack is 1/qH for qH queries.
Advantage and time cost. Suppose the adversary breaks the scheme with (t, qs , ε)
after making qH queries to the random oracle. The advantage of solving the CDH
problem is therefore ε/qH . Let Ts denote the time cost of the simulation. We have Ts =
O(qH + qs ), which is mainly dominated by the oracle response and the signature
generation. Therefore, B will solve the CDH problem with (t + Ts , ε/qH ).
This completes the proof of the theorem.
pk = (g1 , h), sk = α.
Sign: The signing algorithm takes as input a message m ∈ {0, 1}∗ , the secret
key sk, and the system parameters SP. It returns the signature σm on m as
σm = h^{1/(α+H(m))} .
Theorem 5.5.0.1 Suppose the hash function H is a random oracle. If the q-SDH
problem is hard, the ZSS signature scheme is provably secure in the EU-CMA secu-
rity model with reduction loss L = qH , where qH is the number of hash queries to
the random oracle.
Proof. Suppose there exists an adversary A who can (t, qs , ε)-break the signature
scheme in the EU-CMA security model. We construct a simulator B to solve the q-
SDH problem. Given as input a problem instance (g, g^a , g^{a^2} , · · · , g^{a^q} ) over the pairing
group PG, B controls the random oracle, runs A , and works as follows.
Setup. Let SP = PG and H be the random oracle controlled by the simulator. B
randomly chooses w1 , w2 , · · · , wq from Z p and sets the public key as
g1 = g^a , h = g^{(a+w1 )(a+w2 )···(a+wq )} ,
where the secret key α is equivalent to a. This requires q = qH , where qH denotes
the number of hash queries. The public key can be computed from the problem
instance and the chosen parameters.
H-Query. The adversary makes hash queries in this phase. Before receiving queries
from the adversary, B randomly chooses an integer i∗ ∈ [1, qH ] and w∗ ∈ Z p . Let
the i-th hash query be mi . If mi is already in the hash list, B responds to this query
following the hash list. Otherwise, B sets H(mi ) = wi if i ≠ i∗ , and H(mi∗ ) = w∗ .
The simulator B responds to this query with H(mi ) and adds (i, mi , wi , H(mi )) or
(i∗ , mi∗ , w∗ , H(mi∗ )) to the hash list. Notice that w∗ ≠ wi∗ in our simulation.
Query. The adversary makes signature queries in this phase. For a signature query
on mi , if mi is the i∗ -th queried message in the hash list, abort. Otherwise, we have
H(mi ) = wi .
B computes σmi as
σmi = h^{1/(α+H(mi ))} = g^{(a+w1 )···(a+wi−1 )(a+wi+1 )···(a+wq )} ,
which can be computed from the problem instance and the chosen parameters. Finally,
B outputs (w∗ , g^{1/(a+w∗ )} ) as the solution to the q-SDH problem instance.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the key
generation and the responses to hash queries. They are as follows:
pk : a, (a + w1 )(a + w2 ) · · · (a + wq ),
H(mi ) : w1 , · · · , wi∗ −1 , w∗ , wi∗ +1 , · · · , wqH .
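The public key component h = g^{(a+w1)···(a+wq)} listed above is computable from the q-SDH instance by expanding f(x) = (x+w1)···(x+wq) into coefficients c_i and setting h = ∏_i (g^{a^i})^{c_i}. A sketch over a toy group (hypothetical small parameters; a is known only to the test harness for the final check):

```python
import random

p, qm, g = 1019, 2039, 4                 # group order p, modulus qm = 2p + 1
Q = 4                                    # the q of the q-SDH instance
a = random.randrange(1, p)
inst = [pow(g, pow(a, i, p), qm) for i in range(Q + 1)]   # g^{a^i}
w = [random.randrange(1, p) for _ in range(Q)]

coeff = [1]                              # f(x) = 1, then multiply by (x + w_i)
for wi in w:
    new = [0] * (len(coeff) + 1)
    for i, c in enumerate(coeff):
        new[i] = (new[i] + wi * c) % p   # constant part: w_i * c * x^i
        new[i + 1] = (new[i + 1] + c) % p  # shifted part: c * x^(i+1)
    coeff = new

h = 1
for c, gai in zip(coeff, inst):          # h = prod_i (g^{a^i})^{c_i}
    h = h * pow(gai, c, qm) % qm

fa = 1                                   # harness check: evaluate f(a) directly
for wi in w:
    fa = fa * (a + wi) % p
assert h == pow(g, fa, qm)               # same group element, computed blind
```

The same coefficient expansion is what lets the simulator compute each signature g^{∏_{j≠i}(a+wj)} from the instance powers without ever knowing a.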
pk = (g1 , h), sk = α.
Sign: The signing algorithm takes as input a message m ∈ {0, 1}∗ , the secret
key sk, and the system parameters SP. It chooses a random coin c ∈ {0, 1} and
returns the signature σm on m as
σm = (σ1 , σ2 ) = (c, h^{1/(α+H(m,c))} ).
We require that the signing algorithm always uses the same random coin c on
the message m.
Verify: The verification algorithm takes as input a message-signature pair
(m, σm ), the public key pk, and the system parameters SP. Let σm = (σ1 , σ2 ).
It accepts the signature if
e(σ2 , g1 · g^{H(m,σ1 )} ) = e(h, g).
Theorem 5.6.0.1 Suppose the hash function H is a random oracle. If the q-SDH
problem is hard, the ZSS+ signature scheme is provably secure in the EU-CMA
security model with reduction loss L = 2.
Proof. Suppose there exists an adversary A who can (t, qs , ε)-break the signature
scheme in the EU-CMA security model. We construct a simulator B to solve the q-
SDH problem. Given as input a problem instance (g, g^a , g^{a^2} , · · · , g^{a^q} ) over the pairing
group PG, B controls the random oracle, runs A , and works as follows.
Setup. Let SP = PG and H be the random oracle controlled by the simulator. B
randomly chooses y1 , y2 , · · · , yq , w from Z p and sets the public key as
g1 = g^a , h = g^{w(a+y1 )(a+y2 )···(a+yq )} ,
where the secret key α is equivalent to a. This requires q = qH , where qH denotes the
number of hash queries. The public key can be computed from the problem instance
and the chosen parameters.
H-Query. The adversary makes hash queries in this phase. B prepares a hash list
to record all queries and responses as follows, where the hash list is empty at the
beginning.
For a query on (mi , ci ) where ci ∈ {0, 1}, if mi is already in the hash list, B
responds to this query following the hash list. Otherwise, B randomly chooses xi ∈
{0, 1}, yi , zi ∈ Z p and sets H(mi , 0), H(mi , 1) as follows:
H(mi , 0) = yi , H(mi , 1) = zi if xi = 0
H(mi , 0) = zi , H(mi , 1) = yi if xi = 1.
The simulator B responds to this query with H(mi , ci ) and adds (mi , xi , yi , zi ,
H(mi , 0), H(mi , 1)) to the hash list.
Query. The adversary makes signature queries in this phase. For a signature query
on mi , let the corresponding hashing tuple be (mi , xi , yi , zi , H(mi , 0), H(mi , 1)).
B computes σmi as
σmi = (xi , h^{1/(α+H(mi ,xi ))} ) = (xi , g^{w(a+y1 )···(a+yi−1 )(a+yi+1 )···(a+yq )} ),
using g, g^a , · · · , g^{a^q} and y1 , y2 , · · · , yq .
According to the signature definition and simulation, we have ci = xi and H(mi , xi )
= yi such that
h^{1/(α+H(mi ,ci ))} = g^{w(a+y1 )···(a+yi )···(a+yq )/(a+yi )} = g^{w(a+y1 )···(a+yi−1 )(a+yi+1 )···(a+yq )} .
pk : a, w(a + y1 )(a + y2 ) · · · (a + yq ),
H(mi , 0), H(mi , 1) : (yi , zi ) or (zi , yi ),
ci : xi .
H(m∗ , 0) = y∗ , H(m∗ , 1) = z∗ if x∗ = 0
H(m∗ , 0) = z∗ , H(m∗ , 1) = y∗ if x∗ = 1.
The adversary knows x∗ if it finds which hash value H(m∗ , 0) or H(m∗ , 1) is a root of
w(a+y1 ) · · · (a+yq ). Since w(a+y1 ) · · · (a+yq ), y∗ , z∗ are random and independent,
the adversary has no advantage in finding x∗ . Therefore, the probability of successful
simulation and useful attack is 1/2.
Advantage and time cost. Suppose the adversary breaks the scheme with (t, qs , ε)
after making qH queries to the random oracle. The advantage of solving the q-SDH
problem is therefore ε/2. Let Ts denote the time cost of the simulation. We have Ts =
O(qs qH ), which is mainly dominated by the signature generation. Therefore, B will
solve the q-SDH problem with (t + Ts , ε/2).
This completes the proof of the theorem.
KeyGen: The key generation algorithm takes as input the system parameters
SP. It randomly chooses h ∈ G, α ∈ Z p , computes g1 = g^α , and returns a
public/secret key pair (pk, sk) as follows:
pk = (g1 , h), sk = α.
Sign: The signing algorithm takes as input a message m ∈ {0, 1}∗ , the secret
key sk, and the system parameters SP. It chooses a random number r ∈ Z p and
returns the signature σm on m as
σm = (σ1 , σ2 ) = (r, h^{1/(α+H(m,r))} ).
We require that the signing algorithm always uses the same random number r
on the message m.
Verify: The verification algorithm takes as input a message-signature pair
(m, σm ), the public key pk, and the system parameters SP. Let σm = (σ1 , σ2 ).
It accepts the signature if
e(σ_2, g_1 g^{H(m,σ_1)}) = e(h, g).
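The verification equation can be checked with a toy sketch that tracks each group element by its discrete logarithm to base g, so the pairing identity e(σ_2, g_1 g^{H(m,σ_1)}) = e(h, g) becomes an equation in the exponents. The modulus, the hash encoding, and the deterministic derivation of the per-message r are illustrative assumptions, not part of the scheme; a real verifier evaluates the pairing and never sees α.

```python
import hashlib
import random

p = (1 << 61) - 1          # toy prime group order (illustrative)

def H(m, r):
    # random-oracle-style hash into Z_p
    d = hashlib.sha256(f"{m}|{r}".encode()).digest()
    return int.from_bytes(d, "big") % p

# KeyGen: every group element is represented by its discrete log to base g.
alpha = random.randrange(1, p)      # sk = alpha; pk element g_1 = g^alpha
log_h = random.randrange(1, p)      # h = g^log_h

def sign(m):
    # the scheme reuses one r per message; derive it deterministically here
    r = H(m, "fixed-r")
    # sigma_2 = h^{1/(alpha + H(m,r))}, stored as its discrete log
    log_sigma2 = log_h * pow((alpha + H(m, r)) % p, -1, p) % p
    return (r, log_sigma2)

def verify(m, sig):
    r, log_sigma2 = sig
    # pairing check e(sigma_2, g_1 g^{H(m,r)}) = e(h, g), in the exponents:
    return log_sigma2 * (alpha + H(m, r)) % p == log_h

assert verify("hello", sign("hello"))
```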
Theorem 5.7.0.1 Suppose the hash function H is a random oracle. If the q-SDH
problem is hard, the ZSS# signature scheme is provably secure in the EU-CMA se-
curity model with reduction loss L = 1.
Proof. Suppose there exists an adversary A who can (t, qs , ε)-break the signature
scheme in the EU-CMA security model. We construct a simulator B to solve the q-SDH problem. Given as input a problem instance (g, g^a, g^{a^2}, ..., g^{a^q}) over the pairing group PG, B controls the random oracle, runs A, and works as follows.
Setup. Let SP = PG and H be the random oracle controlled by the simulator. B
randomly chooses y_1, y_2, ..., y_q, w from Z_p and sets the public key as

pk = (g_1, h) = (g^a, g^{w(a+y_1)(a+y_2)···(a+y_q)}),

where the secret key α is equivalent to a. This requires q = q_s. The public key can be computed from the problem instance and the chosen parameters.
H-Query. The adversary makes hash queries in this phase. B prepares a hash list
to record all queries and responses as follows, where the hash list is empty at the
beginning.
For a query on (m, r) from the adversary, if (m, r) is already in the hash list, it
responds to this query following the hash list. Otherwise, it randomly chooses z ∈ Z p
and sets H(m, r) = z. The simulator B responds to this query with H(m, r) and adds
164 5 Digital Signatures with Random Oracles
(m, r, z, H(m, r), A ) to the hash list. Here, A in the tuple means that the query (m, r)
is generated by the adversary.
Query. The adversary makes signature queries in this phase. For a signature query
on mi , if there exists a tuple (mi , ri , yi , H(mi , ri ), B) in the hash list, the simulator
uses this tuple to generate the signature. Otherwise, B randomly chooses ri , yi ∈ Z p ,
sets H(mi , ri ) = yi , and adds (mi , ri , yi , H(mi , ri ), B) to the hash list. If (mi , ri ) was
ever generated by the adversary, the simulator aborts.
B computes σmi as
σ_{m_i} = (r_i, h^{1/(α+H(m_i,r_i))}) = (r_i, g^{w(a+y_1)···(a+y_{i−1})(a+y_{i+1})···(a+y_q)})

using g, g^a, ..., g^{a^q}, w, y_1, y_2, ..., y_q.
According to the signature definition and simulation, we have H(m_i, r_i) = y_i such that

h^{1/(α+H(m_i,r_i))} = g^{w(a+y_1)···(a+y_{i−1})(a+y_i)(a+y_{i+1})···(a+y_q)/(a+y_i)} = g^{w(a+y_1)···(a+y_{i−1})(a+y_{i+1})···(a+y_q)}.
can be rewritten as

g^{f(a) + d/(a+z*)},

where f(a) is a (q−1)-degree polynomial function in a, and d is a nonzero integer. The simulator B computes

(σ_2*/g^{f(a)})^{1/d} = (g^{f(a)+d/(a+z*)}/g^{f(a)})^{1/d} = g^{1/(a+z*)}

and outputs (z*, g^{1/(a+z*)}) as the solution to the q-SDH problem instance.
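The rewriting used here is ordinary polynomial division: dividing w(x+y_1)···(x+y_q) by (x+z*) leaves a quotient f(x) of degree q−1 and a nonzero remainder d whenever z* avoids the roots −y_i. A small sketch over a toy prime (all parameters illustrative):

```python
import random

p = 10**9 + 7              # toy prime (illustrative)

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % p
    return out

def divmod_linear(poly, c):
    """Divide poly(x) (coefficients low -> high) by (x + c) over Z_p."""
    n = len(poly) - 1
    f = [0] * n
    acc = 0
    for k in range(n - 1, -1, -1):          # synthetic division at root -c
        acc = (poly[k + 1] - c * acc) % p
        f[k] = acc
    d = (poly[0] - c * f[0]) % p            # remainder = poly(-c)
    return f, d

q, w = 5, random.randrange(1, p)
zs = random.randrange(1, p)                  # plays the role of z*
poly = [w]
for y in random.sample(range(1, p), q):      # w * (x+y_1)...(x+y_q)
    poly = poly_mul(poly, [y, 1])
f, d = divmod_linear(poly, zs)
assert d != 0                                # z* is none of the -y_i (w.h.p.)
x0 = random.randrange(p)                     # spot-check: poly = f*(x+z*) + d
ev = lambda cs, x: sum(c * pow(x, i, p) for i, c in enumerate(cs)) % p
assert ev(poly, x0) == (ev(f, x0) * (x0 + zs) + d) % p
```

Knowing the coefficients of f, the simulator can evaluate g^{f(a)} from the instance (g, g^a, ..., g^{a^q}) without knowing a.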
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the key
generation, the responses to hash queries, and the signature generation. They are as
follows:
5.8 BLSG Scheme 165
pk : a, w(a+y_1)(a+y_2)···(a+y_q),
H(m,r) : z, if r is chosen by the adversary,
H(m_i,r_i) : y_i, if r_i is chosen by the simulator,
random number : r_i.
pk = h, sk = α.
Sign: The signing algorithm takes as input a message m ∈ {0, 1}∗ , the secret
key sk, and the system parameters SP. It returns the signature σm on m as
σ_m = (σ_1, σ_2, σ_3) = (H(m)^α, H(m||σ_m^1)^α, H(m||σ_m^2)^α), where σ_m^1 = σ_1 and σ_m^2 = (σ_1, σ_2).
e(σ_3, g) = e(H(m||σ_m^2), h).
Theorem 5.8.0.1 Suppose the hash function H is a random oracle. If the CDH
problem is hard, the BLSG signature scheme is provably secure in the EU-CMA
security model with reduction loss L = 2√(q_H), where q_H is the number of hash queries
to the random oracle.
Proof. Suppose there exists an adversary A who can (t, qs , ε)-break the signature
scheme in the EU-CMA security model. We construct a simulator B to solve the
CDH problem. Given as input a problem instance (g, ga , gb ) over the pairing group
PG, B controls the random oracle, runs A , and works as follows.
Setup. Let SP = PG and H be the random oracle controlled by the simulator. B
sets the public key as h = ga , where the secret key α is equivalent to a. The public
key is available from the problem instance.
H-Query. The adversary makes hash queries in this phase. Before receiving queries
from the adversary, B randomly chooses an integer c* ∈ {0, 1} and then chooses another random value k* from the range [1, q_H^{1−c*/2}], where q_H denotes the number of hash queries to the random oracle. To be precise, the range is [1, q_H] if c* = 0, and the range is [1, q_H^{1/2}] if c* = 1. Then, B prepares a hash list to record all queries
and responses as follows, where the hash list is empty at the beginning.
For each tuple, the format is defined and described as follows:
(x, Ix , Tx , Ox , Ux , zx ),
Response of object I_x. The object I_x identifies who is the first to generate and submit x to the random oracle. This matters because both the adversary and the simulator can make queries to the random oracle, although the random oracle is controlled by the simulator. If the query on x is first generated and submitted by the adversary, we say that this query is made first by the adversary and set I_x = A. Otherwise, we set I_x = B.
Take a new message m as an example. Suppose the adversary first queries H(m),
H(m||σm1 ) to the random oracle and then queries the signature of m to the simulator.
Notice that the signature generation on the message m requires the simulator to
know all the following values
The hash list does not record how to respond to hash query H(m||σm2 ). Therefore,
the simulator must query H(m||σm2 ) to the random oracle first before generating
its signature. Notice that the adversary might query H(m||σm2 ) again for signature
verification after receiving the signature, but this hash query is first generated and
made by the simulator. Therefore, we define
• For x ∈ {m, m||σm1 }, the corresponding Ix for x is Ix = A .
• For x = m||σm2 , the corresponding Ix for x is Ix = B.
Response of object Tx . We assume “||” is a concatenation notation that will never
appear within messages. The simulator can also run the verification algorithm to ver-
ify whether each block signature is correct or not. Therefore, it is easy to distinguish
the input structure of all hash queries. We define four types of hash queries to the
random oracle.
Type i. x = m||σmi . Here, σmi denotes the first i block signatures of m, and i refers
to any integer i ∈ {0, 1, 2}. We assume m||σm0 = m for easy analysis.
Type D. x is a query different from the previous three types. For example, x = m||R_m but R_m ≠ σ_m^i for any i ∈ {0, 1, 2}, or x = m||σ_m^{i'} for any i' ≥ 3.
The object T_x is set as follows. If I_x = B, then T_x = ⊥. Otherwise, suppose I_x = A. Then, the simulator can run the verification algorithm to know which type x belongs to and set

T_x = i, if x belongs to Type i for any i ∈ {0, 1, 2}; T_x = ⊥, otherwise (x belongs to Type D).
U_x = H(x) = g^{b+z_x}, if (T_x, O_x) = (c*, k*); U_x = H(x) = g^{z_x}, otherwise.
We denote by zx the secret for the response to x. In the following, if the query x
needs to be written as x = m||σmi , the corresponding secret will be rewritten as zim .
Finally, the simulator adds the defined tuple (x, Ix , Tx , Ox ,Ux , zx ) for the new query
x to the hash list. This completes the description of the hash query and its response.
For the tuple (x, I_x, T_x, O_x, U_x, z_x), we have that H(x)^α = U_x^a = (g^a)^{z_x} is computable by the simulator for any query x as long as (T_x, O_x) ≠ (c*, k*). If (T_x, O_x) = (c*, k*), we have

H(x)^α = U_x^a = (g^{b+z_x})^a = g^{ab+az_x}.
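The hash-list bookkeeping described above can be sketched as a dictionary keyed by the query string. This is a hypothetical rendering: the response U_x is stored as a symbolic discrete log (z_x, or b + z_x at the one slot (c*, k*)), and O_x is taken to be the per-type query counter, whose formal definition falls outside this excerpt.

```python
import random

p = (1 << 61) - 1
qH = 100
c_star = random.randrange(2)                     # c* in {0, 1}
k_star = random.randrange(1, (qH if c_star == 0 else int(qH ** 0.5)) + 1)

hash_list = {}                                   # x -> (I, T, O, log_U, z)
type_count = {0: 0, 1: 0, 2: 0}

def h_query(x, who, qtype):
    """who: 'A' or 'B'; qtype: 0/1/2 for Type i, None for Type D."""
    if x in hash_list:                           # answered before: replay
        return hash_list[x]
    T = qtype if (who == 'A' and qtype is not None) else None   # None = bottom
    O = None
    z = random.randrange(p)
    special = False
    if T is not None:
        type_count[T] += 1
        O = type_count[T]                        # index among Type-T queries
        special = (T, O) == (c_star, k_star)
    # discrete log of U = H(x): b + z at the one programmed slot, else z
    log_U = ("b+z", z) if special else ("z", z)
    hash_list[x] = (who, T, O, log_U, z)
    return hash_list[x]

t = h_query("m||sig1", "A", 1)
assert h_query("m||sig1", "A", 1) == t           # consistent oracle
```

In this picture, H(x)^α = (g^a)^{z_x} is computable exactly when the stored entry is not the programmed one; at the slot (c*, k*) the response embeds the unknown b.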
For the tuple (x, Ix , Tx , Ox ,Ux , zx ), we denote by mi, j the message in the query
input x if (Tx , Ox ) = (i, j). We define
to be the message set with qi messages, where Mi contains all messages in those
tuples belonging to Type i (Tx = i). According to the setting of the oracle responses,
for those hash queries belonging to Type i for all i ∈ {0, 1, 2}, there are three mes-
sage sets M0 , M1 , M2 at most to capture all messages in these queries. All queried
messages mentioned above are described in Table 5.1, where the query associated with the message m_{i,j} is made before another query associated with the message m_{i,j'} if j < j'.
Without knowing the signature σm of m before making hash queries associated
with m, the adversary must make hash queries H(m), H(m||σm1 ), H(m||σm2 ) in se-
quence because σmi in the query m||σmi contains
For a message m, the adversary can query all three hash queries H(m), H(m||σm1 ),
H(m||σm2 ) or fewer, such as H(m), H(m||σm1 ), for this message before its signature
query. Therefore, the following inequality and subset relationships hold:
q2 ≤ q1 ≤ q0 , M2 ⊆ M1 ⊆ M0 .
Suppose the adversary can finally forge a valid signature of a message m∗ . The
adversary must at least make the hash query H(m∗ ||σm2 ∗ ) in order to compute
H(m∗ ||σm2 ∗ )α , which guarantees q2 ≥ 1. Since the number of hash queries is at most
qH , we have q0 < qH . We stress that the number q1 is adaptively decided by the
adversary. However, it must be
q_1 < √(q_H) or q_1 ≥ √(q_H).
Query: The adversary makes signature queries in this phase. For a signature query
on the message m that is adaptively chosen by the adversary, the simulator computes
the signature σm as follows.
If m is never queried to the random oracle, the simulator works as follows from
i = 1 to i = 3, where i is increased by one each time.
• Add a query on m||σmi−1 and its response to the hash list (m||σm0 = m). According
to the setting of the random oracle simulation, the corresponding tuple is
(m||σ_m^{i−1}, B, ⊥, ⊥, g^{z_m^{i−1}}, z_m^{i−1}).
In the above signature generation, σ_i for all i ∈ {1, 2, 3} is computable by the simulator, and the signature σ_m is equal to σ_m^3 = (σ_1, σ_2, σ_3). Therefore, the signature of m is computable by the simulator.
Suppose the message m was ever queried to the random oracle by the adver-
sary, where the following queries associated with the message m were made by the
adversary
m||σ_m^0, ..., m||σ_m^{r_m} : r_m ∈ {0, 1, 2}.
Here, the integer r_m is adaptively decided by the adversary. Let (x, I_x, T_x, O_x, U_x, z_x) be the tuple for x = m||σ_m^{r_m}. That is, T_x = r_m.
which cannot be computed by the simulator, and thus the simulator fails to simulate the signature for the adversary, in particular the block signature σ_{r_m+1}.
• Otherwise, (T_x, O_x) ≠ (c*, k*). Then, σ_{r_m+1} is computable by the simulator because

H(m||σ_m^{r_m}) = g^{z_x}, σ_{r_m+1} = H(m||σ_m^{r_m})^α = (g^a)^{z_x}.
Similarly to the case that m is never queried to the random oracle, the simulator
can generate and make hash queries
H(m||σ_m^{r_m+1}), ..., H(m||σ_m^2)
to the random oracle. Finally, it computes the signature σm for the adversary.
Therefore, σm is a valid signature of the message m. This completes the descrip-
tion of the signature generation.
Forgery: The adversary returns a forged signature σ_{m*} on some m* whose signature has not been queried. Since the adversary did not make a signature query on m*, we have that the following queries to the random oracle were made by the adversary:
The solution to the problem instance does not have to be associated with the
forged message m∗ . The simulator solves the hard problem as follows.
• The simulator searches the hash list to find the first tuple (x, Ix , Tx , Ox ,Ux , zx )
satisfying
(Tx , Ox ) = (c∗ , k∗ ).
If this tuple does not exist, abort. Otherwise, let the message m_{c*,k*} in this tuple be denoted by m̂ for short. That is, m_{c*,k*} = m̂ and we have m̂ ∈ M_{c*}. Note that m̂ may be different from m*. This tuple is therefore equivalent to

(x, I_x, T_x, O_x, U_x, z_x) = (m̂||σ_m̂^{c*}, A, c*, k*, g^{b+z_m̂^{c*}}, z_m̂^{c*}).

That is, H(m̂||σ_m̂^{c*}) = g^{b+z_m̂^{c*}} contains the instance g^b.
• The simulator searches the hash list to find the second tuple (x', I_{x'}, T_{x'}, O_{x'}, U_{x'}, z_{x'}), where x' is the query about the message m̂ and T_{x'} = c* + 1. If this tuple does not exist, abort. Otherwise, we have m̂ ∈ M_{c*+1} and
x' = m̂||σ_m̂^{c*+1},

where σ_m̂^{c*+1} contains σ_{c*+1} = H(m̂||σ_m̂^{c*})^α.
• The simulator computes and outputs
H(m̂||σ_m̂^{c*})^α / (g^a)^{z_m̂^{c*}} = g^{ab+a·z_m̂^{c*}} / (g^a)^{z_m̂^{c*}} = g^{ab}

as the solution to the CDH problem instance.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been ex-
plained above. The randomness of the simulation includes all random numbers in
the key generation and the responses to hash queries. They are a, the values z_x, and b + z_x for the special query. According to the setting of the simulation, where a, b, and all z_x are randomly chosen, it is easy to
see that they are random and independent from the point of view of the adversary.
Therefore, the simulation is indistinguishable from the real attack. In particular, the
adversary does not know which hash query is answered with b. That is, c∗ is random
and unknown to the adversary.
Probability of successful simulation and useful attack. According to the assumption, the adversary will break the signature scheme with advantage ε. The adversary will make the hash query H(m*||σ_{m*}^2) with probability at least ε, such that m* ∈ M_2 and thus q_2 ≥ 1. The number of hash queries is at most q_H. Since q_0 + q_1 + q_2 ≤ q_H, we have q_0 < q_H.
The reduction is successful if the simulator does not abort in either the query
phase or the forgery phase. According to the setting of the simulation, the reduction
is successful if m̂ ∈ Mc∗ and m̂ ∈ Mc∗ +1 .
• If c* = 0, we have m̂ ∈ M_0 and m̂ ∈ M_1. In this case, k* ∈ [1, q_H] and |M_0| = q_0 < q_H. We have that any message in M_0 will be chosen as m̂ with probability 1/q_H according to the simulation. Since M_1 ⊆ M_0, the success probability is q_1/q_H.
• If c* = 1, we have m̂ ∈ M_1 and m̂ ∈ M_2. In this case, k* ∈ [1, √(q_H)] and |M_1| = q_1. If q_1 < √(q_H), we have that any message in M_1 will be chosen as m̂ with probability 1/√(q_H) according to the simulation. Since M_2 ⊆ M_1, the success probability is q_2/√(q_H).
Let Pr[suc] be the probability of successful simulation and useful attack when
q2 ≥ 1. We calculate the following probability of success:
Pr[suc] ≥ (1/2)·Pr[m̂ ∈ M_0 ∩ M_1 | c* = 0, q_1 ≥ √(q_H)]·Pr[q_1 ≥ √(q_H)]
+ (1/2)·Pr[m̂ ∈ M_1 ∩ M_2 | c* = 1, q_1 < √(q_H)]·Pr[q_1 < √(q_H)]
≥ (1/(2√(q_H)))·Pr[q_1 ≥ √(q_H)] + (1/(2√(q_H)))·Pr[q_1 < √(q_H)]
= 1/(2√(q_H)).
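The case split behind the bound 1/(2√q_H) can be checked numerically: whichever value of q_1 the adversary induces (with q_2 ≥ 1), one of the two branches succeeds with probability at least 1/√q_H, and the simulator picks that branch with probability 1/2. A sketch with an illustrative perfect-square q_H:

```python
import math

qH = 10_000                      # illustrative, a perfect square
root = math.isqrt(qH)
bound = 1 / (2 * root)           # the claimed 1/(2*sqrt(qH))

for q1 in range(1, qH + 1):      # every adversarial choice of q1
    q2 = 1                       # worst case: a single completed chain
    if q1 >= root:
        success = 0.5 * (q1 / qH)        # c* = 0 branch: q1/qH
    else:
        success = 0.5 * (q2 / root)      # c* = 1 branch: q2/sqrt(qH)
    assert success >= bound - 1e-12
```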
Advantage and time cost. Suppose the adversary breaks the scheme with
(t, q_s, ε) after making q_H queries to the random oracle. The advantage of solving the CDH problem is therefore ε/(2√(q_H)). Let T_s denote the time cost of the simulation. We have T_s = O(q_H + q_s), which is mainly dominated by the oracle response and the signature generation. Therefore, B will solve the CDH problem with (t + T_s, ε/(2√(q_H))).
This completes the proof of the theorem.
Chapter 6
Digital Signatures Without Random Oracles
In this chapter, we mainly introduce signature schemes under the q-SDH hardness
assumption and the CDH assumption. We start by introducing the Boneh-Boyen
short signature scheme [21] and then its variant, namely the Gentry scheme, which
is modified from his IBE scheme [47]. Most stateless signature schemes without
random oracles must produce signatures at least 320 bits in length for 80-bit security. Then, we introduce the GMS scheme [54], which achieves signatures shorter than 320 bits in the stateful setting. The other two signature schemes are the Waters scheme modified from his IBE scheme [101] and the Hohenberger-Waters scheme [61]. The Waters scheme requires a long public key, and the Hohenberger-Waters scheme addresses this problem in the stateful setting. The given schemes and/or
proofs may be different from the original ones.
Sign: The signing algorithm takes as input a message m ∈ Z p , the secret key
sk, and the system parameters SP. It randomly chooses r ∈ Z p and returns the
signature σm on m as
σ_m = (σ_1, σ_2) = (r, h^{1/(α+mβ+r)}).
We require that the signing algorithm always uses the same random number r
for signature generation on the message m.
Verify: The verification algorithm takes as input a message-signature pair
(m, σm ), the public key pk, and the system parameters SP. Let σm = (σ1 , σ2 ).
It accepts the signature if
e(σ_2, g_1 g_2^m g^{σ_1}) = e(g, h).
Theorem 6.1.0.1 If the q-SDH problem is hard, the Boneh-Boyen signature scheme
is provably secure in the EU-CMA security model with reduction loss L = 2.
Proof. Suppose there exists an adversary A who can (t, qs , ε)-break the signature
scheme in the EU-CMA security model. We construct a simulator B to solve the q-SDH problem. Given as input a problem instance (g, g^a, g^{a^2}, ..., g^{a^q}) over the pairing group PG, B runs A and works as follows.
B chooses a secret bit µ ∈ {0, 1} and programs the reduction in two different
ways.
• The reduction for µ = 0 is programmed as follows.
Setup. Let SP = PG. B randomly chooses y, w0 , w1 , w2 , · · · , wq from Z p and sets
the public key as
using g, g^a, ..., g^{a^q}, y, w_0, w_1, ..., w_q.
Let r_i = w_i − y·m_i. According to the signature definition and simulation, we have

h^{1/(α+m_iβ+r_i)} = g^{w_0(a+w_1)(a+w_2)···(a+w_q)/(a+m_i y+w_i−y m_i)} = g^{w_0(a+w_1)···(a+w_{i−1})(a+w_{i+1})···(a+w_q)}.
using g, g^a, ..., g^{a^q}, x, w_0, w_1, ..., w_q.
Let r_i = w_i·m_i − x. According to the signature definition and simulation, we have

h^{1/(α+m_iβ+r_i)} = g^{w_0(a+w_1)(a+w_2)···(a+w_q)/(x+m_i a+w_i m_i−x)} = g^{(w_0/m_i)(a+w_1)···(a+w_{i−1})(a+w_{i+1})···(a+w_q)}.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the key
generation and the signature generation. Let h = g^γ. They are α, β, γ, r_1, r_2, ..., r_{q_s}, equivalent to

a, y, w_0(a+w_1)···(a+w_q), w_1 − y·m_1, w_2 − y·m_2, ..., w_{q_s} − y·m_{q_s} : μ = 0,
x, a, w_0(a+w_1)···(a+w_q), w_1 m_1 − x, w_2 m_2 − x, ..., w_{q_s} m_{q_s} − x : μ = 1.
Pr[Success]
= Pr[Success|μ = 0]·Pr[μ = 0] + Pr[Success|μ = 1]·Pr[μ = 1]
= Pr[m*β + r* ≠ m_iβ + r_i]·(1/2) + Pr[m*β + r* = m_iβ + r_i]·(1/2)
= 1/2.

Therefore, the probability of successful simulation and useful attack is 1/2.
Advantage and time cost. Suppose the adversary breaks the scheme with (t, q_s, ε). The advantage of solving the q-SDH problem is ε/2. Let T_s denote the time cost of the simulation.
6.2 Gentry Scheme 177
pk = (g1 , g2 ), sk = (α, β ).
Sign: The signing algorithm takes as input a message m ∈ Z p , the secret key
sk, and the system parameters SP. It randomly chooses r ∈ Z p and computes
the signature σm on m as
σ_m = (σ_1, σ_2) = (r, g^{(β−r)/(α−m)}).
We require that the signing algorithm always uses the same random number r
for signature generation on the message m.
Verify: The verification algorithm takes as input a message-signature pair
(m, σm ), the public key pk, and the system parameters SP. Let σm = (σ1 , σ2 ).
It accepts the signature if
e(σ_2, g_1 g^{−m}) = e(g_2 g^{−σ_1}, g).
Theorem 6.2.0.1 If the q-SDH problem is hard, the Gentry signature scheme is
provably secure in the EU-CMA security model with reduction loss about L = 1.
Proof. Suppose there exists an adversary A who can (t, qs , ε)-break the signature
scheme in the EU-CMA security model. We construct a simulator B to solve the q-SDH problem. Given as input a problem instance (g, g^a, g^{a^2}, ..., g^{a^q}) over the pairing group PG, B runs A and works as follows.
Setup. Let SP = PG. B randomly chooses w0 , w1 , w2 , · · · , wq from Z p and sets the
public key as
g_1 = g^a, g_2 = g^{w_q a^q + w_{q−1} a^{q−1} + ··· + w_1 a + w_0},
f_{m_i}(x) = (f(x) − f(m_i)) / (x − m_i).

We have that σ_{m_i} is computable using g, g^a, ..., g^{a^q}, f_{m_i}(x), f(x). Let r_i = f(m_i). According to the signature definition and simulation, we have

g^{(β−r_i)/(α−m_i)} = g^{(f(a)−f(m_i))/(a−m_i)} = g^{f_{m_i}(a)}.
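The quotient f_{m_i}(x) = (f(x) − f(m_i))/(x − m_i) always has Z_p coefficients, computable by synthetic division; a toy check of the signing identity in the exponent, with illustrative parameters:

```python
import random

p = 10**9 + 7                    # toy prime (illustrative)
q = 6
w = [random.randrange(p) for _ in range(q + 1)]   # f(x), coefficients low -> high

def f_eval(cs, x):
    return sum(c * pow(x, i, p) for i, c in enumerate(cs)) % p

def f_m(cs, m):
    """Coefficients of (f(x) - f(m)) / (x - m), by synthetic division."""
    out = [0] * (len(cs) - 1)
    acc = 0
    for k in range(len(cs) - 2, -1, -1):
        acc = (cs[k + 1] + m * acc) % p
        out[k] = acc
    return out

a, m = random.randrange(p), random.randrange(p)
fm = f_m(w, m)
# signing identity in the exponent: (f(a) - f(m)) / (a - m) = f_m(a)
lhs = (f_eval(w, a) - f_eval(w, m)) * pow(a - m, -1, p) % p
assert lhs == f_eval(fm, a)
```

Since f_{m_i} has degree q−1, the simulator can compute g^{f_{m_i}(a)} from the instance without knowing a.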
(σ_2*/g^{f*(a)})^{1/d} = (g^{f*(a)+d/(a−m*)}/g^{f*(a)})^{1/d} = g^{1/(a−m*)}

and outputs (−m*, g^{1/(a−m*)}) as the solution to the q-SDH problem instance.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been ex-
plained above. The randomness of the simulation includes all random numbers in
the key generation and the signature generation. They are
The simulation is indistinguishable from the real attack because the random num-
bers are random and independent following the analysis below.
Probability of successful simulation and useful attack. There is no abort in
the simulation. The forged signature is reducible if r∗ 6= f (m∗ ). To prove that the
adversary has no advantage in computing f (m∗ ), we only need to prove that the
following integers are random and independent:
(α, β, r_1, ..., r_{q_s}, f(m*)) = (a, f(a), f(m_1), f(m_2), ..., f(m_{q_s}), f(m*)).
f(a) = w_q a^q + ··· + w_1 a + w_0,
f(m_1) = w_q m_1^q + ··· + w_1 m_1 + w_0,
f(m_2) = w_q m_2^q + ··· + w_1 m_2 + w_0,
···
f(m_{q_s}) = w_q m_{q_s}^q + ··· + w_1 m_{q_s} + w_0,
f(m*) = w_q m*^q + ··· + w_1 m* + w_0.
Since w0 , w1 , · · · , wq are all random and independent, the randomness property holds
because the determinant of the coefficient matrix is nonzero:
| a^q        a^{q−1}       ···  a        1 |
| m_1^q      m_1^{q−1}     ···  m_1      1 |
| m_2^q      m_2^{q−1}     ···  m_2      1 |
| ···                                      |
| m_{q_s}^q  m_{q_s}^{q−1} ···  m_{q_s}  1 |
| m*^q       m*^{q−1}      ···  m*       1 |
= ∏_{1≤i<j≤q+1} (x_i − x_j) : x_i, x_j ∈ {a, m_1, ..., m_{q_s}, m*}.
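The determinant argument can be verified directly for small parameters: build the matrix of decreasing powers at distinct points and compare it with the Vandermonde product formula. A sketch over a toy prime:

```python
import random

p = 10**9 + 7

def det_mod(mat):
    """Determinant over Z_p by Gaussian elimination."""
    n = len(mat)
    m = [row[:] for row in mat]
    det = 1
    for col in range(n):
        piv = next((r for r in range(col, n) if m[r][col] % p), None)
        if piv is None:
            return 0
        if piv != col:
            m[col], m[piv] = m[piv], m[col]
            det = -det
        det = det * m[col][col] % p
        inv = pow(m[col][col], -1, p)
        for r in range(col + 1, n):
            factor = m[r][col] * inv % p
            for c in range(col, n):
                m[r][c] = (m[r][c] - factor * m[col][c]) % p
    return det % p

pts = random.sample(range(p), 6)   # plays the role of a, m_1, ..., m*
V = [[pow(x, e, p) for e in range(len(pts) - 1, -1, -1)] for x in pts]
vand = 1
for i in range(len(pts)):
    for j in range(i + 1, len(pts)):
        vand = vand * (pts[i] - pts[j]) % p
assert det_mod(V) == vand          # product formula for decreasing powers
assert det_mod(V) != 0             # distinct points => nonzero determinant
```

A nonzero determinant means the linear map from (w_0, ..., w_q) to the listed values is invertible, so f(m*) is uniform and independent from the adversary's view.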
pk = (g1 , u0,1 , u1,1 , u0,2 , u1,2 , · · · , u0,n , u1,n , N), sk = (α, c),
We require that the algorithm always uses the same bit b for the same message.
Here, “|” denotes bitwise concatenation.
Verify: The verification algorithm takes as input a message-signature pair
(m, σm ), the public key pk, and the system parameters SP. Let σm =
(σ_1, σ_2, σ_3). It accepts the signature if σ_2 ≤ N, σ_3 ∈ {0, 1} and

e(σ_1, g_1 g^{σ_2|σ_3}) = e(∏_{i=1}^n u_{m[i],i}, g).
Theorem 6.3.0.1 If the q-SDH problem is hard, the GMS signature scheme is prov-
ably secure in the EU-CMA security model with reduction loss at most L = 2n.
Proof. Suppose there exists an adversary A who can (t, q_s, ε)-break the signature scheme in the EU-CMA security model. We construct a simulator B to solve the q-SDH problem. Given as input a problem instance (g, g^a, g^{a^2}, ..., g^{a^q}) over the pairing group PG, B runs A and works as follows.
B chooses a secret bit µ ∈ {0, 1} and programs the reduction in two different
ways. If µ = 0, the simulator guesses that the adversary will forge a signature with
a tuple (c∗ , b∗ ) that was used by the simulator in the signature generation. If µ = 1,
the simulator guesses that the adversary will forge a signature with a tuple (c∗ , b∗ )
that was never used by the simulator in the signature generation.
6.3 GMS Scheme 181
and sets d1,c = 1 − d0,c for all c ∈ [1, qs ]. Let F(x) be the polynomial of degree
2qs defined as
F(x) = ∏_{c=1}^{q_s} (x + c|0)(x + c|1) = ∏_{c=1}^{q_s} (x + c|d_{0,c})(x + c|d_{1,c}).
For all c ∈ [1, qs ], the polynomials F0,kc (x), F1,kc (x) will be replaced by
F_{0,k_c}(x) := F(x)/(x + c|d_{1,c}), F_{1,k_c}(x) := F(x)/(x + c|d_{0,c}).
g_1 = g^a, u_{0,i} = g^{w_{0,i}·F_{0,i}(a)}, u_{1,i} = g^{w_{1,i}·F_{1,i}(a)}, i ∈ [1, n],
where α = a and we require q = 2qs . The public key can be computed from the
problem instance and the chosen parameters.
Query. The adversary makes signature queries in this phase. For a signature
query on m, let the updated counter for this signature generation be c, and m[i]
be the i-th bit of message m. Then, we have that the polynomials Fm[i],i (x) for all
i ∈ [1, n] including Fm[kc ],kc (x) contain the root c|dm[kc ],c . B sets b = dm[kc ],c . Let
Fm (x) be
F_m(x) = (∑_{i=1}^n w_{m[i],i}·F_{m[i],i}(x)) / (x + c|b).
We have that Fm (x) is a polynomial of degree at most (q − 1).
B computes the signature σ_m as

σ_m = (σ_1, σ_2, σ_3) = (g^{F_m(a)}, c, b),

using g, g^a, ..., g^{a^q}, F_m(x).
According to the signature definition and simulation, we have
(∏_{i=1}^n u_{m[i],i})^{1/(α+c|b)} = (g^{∑_{i=1}^n w_{m[i],i}·F_{m[i],i}(a)})^{1/(a+c|b)} = g^{F_m(a)}.
where α = a and we require q = qs . The public key can be computed from the
problem instance and the chosen parameters.
Query. The adversary makes signature queries in this phase. For a signature
query on m, let the updated counter for this signature generation be c. B sets
b = bc . Let Fm (x) be
using g, g^a, ..., g^{a^q}, F_m(x).
According to the signature definition and simulation, we have

(∏_{i=1}^n u_{m[i],i})^{1/(α+c|b)} = (g^{∑_{i=1}^n w_{m[i],i}·F_{m[i],i}(a)})^{1/(a+c|b)} = g^{F_m(a)}.
The simulator continues the simulation if the tuple (c∗ , b∗ ) was never used by
the simulator in the signature generation for any message, so that the polynomial
F(x) does not contain the root c∗ |b∗ . Therefore,
can be rewritten as

f(x) + z/(x + c*|b*),
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the key
generation and the signature generation. They are
a, w_{0,i}·F_{0,i}(a), w_{1,i}·F_{1,i}(a), d_{m[k_c],c} : μ = 0,
a, w_{0,i}·F(a), w_{1,i}·F(a), b_c : μ = 1.
According to the setting of the simulation, where a, w0,i , w1,i , dm[kc ],c , bc are ran-
domly chosen, it is easy to see that they are random and independent from the point
of view of the adversary no matter whether µ = 0 or µ = 1. Therefore, the simula-
tion is indistinguishable from the real attack, and the adversary has no advantage in
guessing µ from the simulation.
Probability of successful simulation and useful attack. There is no abort in the
simulation. Let the random bit in the forged signature of m∗ be b∗ . We have
• Case μ = 0. The forged signature is reducible if m*[k_{c*}] ≠ m[k_{c*}] using the tuple (c*, b*), where m[k_{c*}] is the k_{c*}-th bit of the message m queried by the adversary. Since m and m* differ in at least one bit and k_{c*} is randomly chosen by the simulator, we have that the success probability of m*[k_{c*}] ≠ m[k_{c*}] is at least 1/n.
• Case µ = 1. The forged signature is always reducible with success probability
1, because the tuple (c∗ , b∗ ) was never used by the simulator in the signature
generation.
Let µ ∗ ∈ {0, 1} be the type of attack launched by the adversary, where µ ∗ = 0
means that c∗ |b∗ in the forged signature was used by the simulator in the signature
generation, and µ ∗ = 1 means that c∗ |b∗ in the forged signature was never used by
the simulator in the signature generation. Since the two simulations are indistin-
guishable and the simulator randomly chooses one simulation, the forged signature
is reducible with success probability Pr[Success] described as follows:

Pr[Success] = (1/(2n))·Pr[μ* = 0] + (1/2)·Pr[μ* = 1]
≥ (1/(2n))·Pr[μ* = 0] + (1/(2n))·Pr[μ* = 1]
= (1/(2n))·(Pr[μ* = 0] + Pr[μ* = 1])
= 1/(2n).

Therefore, the probability of successful simulation and useful attack is at least 1/(2n).
Advantage and time cost. Suppose the adversary breaks the scheme with (t, q_s, ε). The advantage of solving the q-SDH problem is therefore ε/(2n). Let T_s denote the time cost of the simulation. We have T_s = O(q_s^2), which is mainly dominated by the signature generation. Therefore, B will solve the q-SDH problem with (t + T_s, ε/(2n)).
This completes the proof of the theorem.
pk = (g1 , g2 , u0 , u1 , u2 , · · · , un ), sk = α.
Sign: The signing algorithm takes as input a message m ∈ {0, 1}n , the secret
key sk, and the system parameters SP. Let m[i] be the i-th bit of message m. It
chooses a random number r ∈ Z p and returns the signature σm on m as
σ_m = (σ_1, σ_2) = (g_2^α (u_0 ∏_{i=1}^n u_i^{m[i]})^r, g^r).
Theorem 6.4.0.1 If the CDH problem is hard, the Waters signature scheme is prov-
ably secure in the EU-CMA security model with reduction loss L = 4(n+1)qs , where
qs is the number of signature queries.
Proof. Suppose there exists an adversary A who can (t, qs , ε)-break the signature
scheme in the EU-CMA security model. We construct a simulator B to solve the
CDH problem. Given as input a problem instance (g, ga , gb ) over the pairing group
PG, B runs A and works as follows.
Setup. Let SP = PG. B sets q = 2qs and randomly chooses integers k, x0 , x1 , · · · , xn ,
y0 , y1 , · · · , yn satisfying
k ∈ [0, n],
x0 , x1 , · · · , xn ∈ [0, q − 1],
y0 , y1 , · · · , yn ∈ Z p .
where α = a. The public key can be computed from the problem instance and the
chosen parameters.
We define F(m), J(m), K(m) as

F(m) = −kq + x_0 + ∑_{i=1}^n m[i]·x_i,
J(m) = y_0 + ∑_{i=1}^n m[i]·y_i,
K(m) = 0 if x_0 + ∑_{i=1}^n m[i]·x_i = 0 mod q, and K(m) = 1 otherwise.
Then, we have

u_0 ∏_{i=1}^n u_i^{m[i]} = g^{F(m)a+J(m)}.
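The functions F, J, K are plain integer computations over the bits of m; a small sketch with illustrative toy sizes (the values of n and q here are assumptions for the example, not the proof's parameters):

```python
import random

n, q = 8, 16                                     # toy sizes (q = 2*qs in the proof)
p = 10**9 + 7
k = random.randrange(0, n + 1)
x = [random.randrange(q) for _ in range(n + 1)]  # x_0..x_n in [0, q-1]
y = [random.randrange(p) for _ in range(n + 1)]  # y_0..y_n in Z_p

def FJK(m_bits):
    X = x[0] + sum(b * x[i + 1] for i, b in enumerate(m_bits))
    F = -k * q + X
    J = (y[0] + sum(b * y[i + 1] for i, b in enumerate(m_bits))) % p
    K = 0 if X % q == 0 else 1
    return F, J, K

F, J, K = FJK([random.randrange(2) for _ in range(n)])
if F == 0:            # F(m) = 0 forces X = kq, hence K(m) = 0
    assert K == 0
```

The simulator signs exactly when K(m) = 1 (so F(m) is invertible mod p) and extracts from the forgery exactly when F(m*) = 0.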
Query. The adversary makes signature queries in this phase. For a signature query
on m, if K(m) = 0, the simulator aborts. Otherwise, B randomly chooses r' ∈ Z_p and computes the signature σ_m as

σ_m = (σ_1, σ_2) = (g_2^{−J(m)/F(m)} (u_0 ∏_{i=1}^n u_i^{m[i]})^{r'}, g_2^{−1/F(m)} g^{r'}).

We have that σ_m is computable using g, g_2, F(m), J(m), r', m and the public key.
Let r = −(1/F(m))·b + r'. We have
6.4 Waters Scheme 187
g_2^α (u_0 ∏_{i=1}^n u_i^{m[i]})^r = g^{ab}·(g^{F(m)a+J(m)})^{−(1/F(m))b+r'}
= g^{ab}·g^{−ab + r'F(m)a − (J(m)/F(m))b + J(m)r'}
= g^{−(J(m)/F(m))b}·g^{r'(F(m)a+J(m))}
= g_2^{−J(m)/F(m)} (u_0 ∏_{i=1}^n u_i^{m[i]})^{r'},

g^r = g^{−(1/F(m))b+r'} = g_2^{−1/F(m)} g^{r'}.
a, b, x_0·a + y_0, x_1·a + y_1, x_2·a + y_2, ..., x_n·a + y_n, −b/F(m_i) + r_i'.
According to the setting of the simulation, where a, b, yi , ri0 are randomly chosen, it
is easy to see that the simulation is indistinguishable from the real attack.
We have
0 ≤ x_0 + ∑_{i=1}^n m[i]·x_i ≤ (n+1)(q−1),
where the range [0, (n + 1)(q − 1)] contains integers 0q, 1q, 2q, · · · , nq (n < q).
Let X = x_0 + ∑_{i=1}^n m[i]·x_i. Since all x_i and k are randomly chosen, we have

Pr[F(m*) = 0] = Pr[X = 0 mod q]·Pr[X = kq | X = 0 mod q] = (1/q)·(1/(n+1)) = 1/((n+1)q).
Since the pair (mi , m∗ ) for any i differ on at least one bit, K(mi ) and F(m∗ ) differ on
the coefficient of at least one x j , and then
Pr[K(m_i) = 0 | F(m*) = 0] = 1/q.
Based on the above results, we obtain
g_1 = g^a,
u_1 = g^{bx_1+y_1}, u_2 = g^{bx_2+y_2}, u_3 = g^{bx_3+y_3},
v_1 = g^{−b+z_1}, v_2 = g^{c_0·b+z_2},
where α = a and a, b are unknown secrets from the problem instance. The public
key can be computed from the problem instance and the chosen parameters.
Query. The adversary makes signature queries in this phase. For a signature query
on m, let the updated counter for this signature generation be c. The simulation of
this signature falls into the following two cases.
u_1^m u_2^r u_3 = g^{b(mx_1+rx_2+x_3)+(my_1+ry_2+y_3)}, v_1^c v_2 = g^{b(c_0−c)+z_1·c+z_2}.
σ_m = (σ_1, σ_2, σ_3, σ_4)
= (g_1^{my_1+ry_2+y_3 − ((z_1·c+z_2)/(c_0−c))·(mx_1+rx_2+x_3)} · (v_1^c v_2)^{s'}, g_1^{−(mx_1+rx_2+x_3)/(c_0−c)} · g^{s'}, r, c).
We have

(u_1^m u_2^r u_3)^α (v_1^c v_2)^s = g^{ab(mx_1+rx_2+x_3)+a(my_1+ry_2+y_3)} · (v_1^c v_2)^s
= g_1^{my_1+ry_2+y_3} · g^{ab(mx_1+rx_2+x_3)} · (v_1^c v_2)^s
= g_1^{my_1+ry_2+y_3 − ((z_1·c+z_2)/(c_0−c))·(mx_1+rx_2+x_3)} · (v_1^c v_2)^{s'} = σ_1,

where the last equality substitutes s = −a(mx_1+rx_2+x_3)/(c_0−c) + s'.
6.5 Hohenberger-Waters Scheme 191
Forgery. The adversary returns a forged signature σm∗ on some m∗ that has not been
queried. Let the signature be
σ_{m*} = (σ_1*, σ_2*, σ_3*, σ_4*) = ((u_1^{m*} u_2^{r*} u_3)^α (v_1^{c*} v_2)^s, g^s, r*, c*).
c* = c_0 and m*·x_1 + r*·x_2 + x_3 ≠ 0.
If it is successful, we have

σ_1* = (u_1^{m*} u_2^{r*} u_3)^α (v_1^{c*} v_2)^s
= g^{ab(m*x_1+r*x_2+x_3)+a(m*y_1+r*y_2+y_3)} · (g^{z_1·c*+z_2})^s
= g^{ab(m*x_1+r*x_2+x_3)+a(m*y_1+r*y_2+y_3)} · (σ_2*)^{z_1·c*+z_2}.
Since s_c', r_c, s_{c_0} are randomly chosen, we only need to consider the randomness of

a, bx_1 + y_1, bx_2 + y_2, bx_3 + y_3, −b + z_1, c_0·b + z_2, −(m_{c_0}·x_1 + x_3)/x_2.
According to the setting of the simulation, where a, b, x1 , x3 , y1 , y2 , y3 , z1 , z2 are ran-
domly chosen, it is easy to see that the simulation is indistinguishable from the real
attack. In particular, the adversary has no advantage in guessing c0 from the given
parameters.
c* = c_0, m*·x_1 + r*·x_2 + x_3 ≠ 0.
Since c_0 is random and unknown to the adversary according to the above analysis, we have that c* = c_0 holds with probability 1/N for any adaptive choice c*. To prove that the adversary has no advantage in computing r* satisfying m*·x_1 + r*·x_2 + x_3 = 0, we only need to prove that

−(m*·x_1 + x_3)/x_2

is random and independent from the point of view of the adversary. It is easy to see that the following integers associated with x_1, x_2, x_3 are random and independent:

bx_1 + y_1, bx_2 + y_2, bx_3 + y_3, −(m_{c_0}·x_1 + x_3)/x_2, −(m*·x_1 + x_3)/x_2.

Any adaptive choice r* satisfies m*·x_1 + r*·x_2 + x_3 = 0 with probability 1/p. Therefore, the probability of successful simulation and useful attack is (1/N)·(1 − 1/p) ≈ 1/N.
Advantage and time cost. Suppose the adversary breaks the scheme with (t, qs, ε). The advantage of solving the CDH problem is therefore ε/N. Let Ts denote the time cost of the simulation. We have Ts = O(qs), which is mainly dominated by the signature generation. Therefore, B will solve the CDH problem with (t + Ts, ε/N).
This completes the proof of the theorem.
Chapter 7
Public-Key Encryption with Random Oracles
pk = g1 , sk = α.
Encrypt: The encryption algorithm takes as input a message m ∈ {0, 1}n , the
public key pk, and the system parameters SP. It chooses a random number
r ∈ Z p and returns the ciphertext CT as
CT = (C1, C2) = ( g^r, H(g1^r) ⊕ m ).
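As an illustration (ours, not part of the original description), the Hashed ElGamal algorithms can be sketched in Python over a toy group. The parameters p, q, g, the SHA-256 instantiation of H, and all helper names below are our own illustrative assumptions; the group is far too small for real security.

```python
import hashlib
import secrets

# Toy parameters: p = 2039 is a safe prime, so the quadratic residues form a
# subgroup of prime order q = 1019 generated by g = 4 (illustrative only).
p = 2039
q = (p - 1) // 2
g = 4

def H(x: int, n: int = 16) -> bytes:
    """Hash a group element to an n-byte string (models H: G -> {0,1}^n)."""
    return hashlib.sha256(str(x).encode()).digest()[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def keygen():
    alpha = secrets.randbelow(q - 1) + 1     # sk = alpha
    return pow(g, alpha, p), alpha           # pk = g1 = g^alpha

def encrypt(pk: int, m: bytes):
    r = secrets.randbelow(q - 1) + 1
    C1 = pow(g, r, p)                        # C1 = g^r
    C2 = xor(H(pow(pk, r, p), len(m)), m)    # C2 = H(g1^r) XOR m
    return C1, C2

def decrypt(sk: int, C1: int, C2: bytes) -> bytes:
    # C1^alpha = g^{r*alpha} = g1^r, so the mask cancels.
    return xor(H(pow(C1, sk, p), len(C2)), C2)
```

Correctness follows from C1^α = g1^r, exactly the relation the decryption algorithm relies on.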
Theorem 7.1.0.1 Suppose the hash function H is a random oracle. If the CDH
problem is hard, the Hashed ElGamal encryption scheme is provably secure in the
IND-CPA security model with reduction loss L = qH , where qH is the number of
hash queries to the random oracle.
Proof. Suppose there exists an adversary A who can (t, ε)-break the encryption
scheme in the IND-CPA security model. We construct a simulator B to solve the
CDH problem. Given as input a problem instance (g, g^a, g^b) over the cyclic group
(G, g, p), B controls the random oracle, runs A , and works as follows.
Setup. Let SP = (G, g, p) and H be the random oracle controlled by the simulator.
B sets the public key as g1 = g^a where α = a. The public key is available from the
problem instance.
H-Query. The adversary makes hash queries in this phase. B prepares a hash list
to record all queries and responses as follows, where the hash list is empty at the
beginning.
Let the i-th hash query be xi . If xi is already in the hash list, B responds to this
query following the hash list. Otherwise, B randomly chooses yi ∈ {0, 1}n and sets
H(xi ) = yi . The simulator B responds to this query with H(xi ) and adds (xi , yi ) to
the hash list.
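The lazy sampling of the random oracle described above can be sketched as a small class; the class and method names are our own illustrative choices.

```python
import secrets

class RandomOracle:
    """Lazily sampled random oracle H: queries -> {0,1}^n (n-byte strings).

    The hash list starts empty; each fresh query x_i gets an independent
    uniform response y_i, and repeated queries are answered consistently,
    exactly as in the simulation described above.
    """
    def __init__(self, n: int = 16):
        self.n = n
        self.hash_list = {}                    # records the pairs (x_i, y_i)

    def query(self, x) -> bytes:
        if x not in self.hash_list:            # fresh query: sample y_i
            self.hash_list[x] = secrets.token_bytes(self.n)
        return self.hash_list[x]               # repeated query: same answer

H = RandomOracle()
assert H.query("g^ab") == H.query("g^ab")      # consistent replay
```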
Challenge. A outputs two distinct messages m0 , m1 ∈ {0, 1}n to be challenged. The
simulator randomly chooses R ∈ {0, 1}n and sets the challenge ciphertext CT ∗ as
CT∗ = (g^b, R),

where g^b is from the problem instance. The challenge ciphertext can be seen as an encryption of the message mc ∈ {m0, m1} using the random number b if H(g1^b) = R ⊕ mc:

CT∗ = (g^b, R) = ( g^b, H(g1^b) ⊕ mc ).

The challenge ciphertext is therefore a correct ciphertext from the point of view of the adversary, if there is no hash query on g1^b to the random oracle.
Guess. A outputs a guess or ⊥. The challenge hash query is defined as

Q∗ = g1^b = g^{ab}.

The simulator randomly selects one value x from the hash list (x1, y1), (x2, y2), ..., (x_{qH}, y_{qH}) as the challenge hash query. The simulator can immediately use this hash query to solve the CDH problem.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the key
generation, the responses to hash queries, and the challenge ciphertext generation.
They are
a, y1 , y2 , · · · , yqH , b.
According to the setting of the simulation, where a, b, yi are randomly chosen, it is
easy to see that the randomness property holds, and thus the simulation is indistin-
guishable from the real attack.
Probability of successful simulation. There is no abort in the simulation, and thus
the probability of successful simulation is 1.
Advantage of breaking the challenge ciphertext. If H(gab ) = R ⊕ m0 , the chal-
lenge ciphertext is an encryption of m0 . If H(gab ) = R ⊕ m1 , the challenge ciphertext
is an encryption of m1 . If the query gab is not made, H(gab ) is random and unknown
to the adversary, so that it has no advantage in breaking the challenge ciphertext.
Probability of finding solution. Since the adversary has advantage ε in guessing the chosen message according to the breaking assumption, the adversary will query g^{ab} to the random oracle with probability ε according to Lemma 4.11.1. The adversary makes qH hash queries in total. Therefore, a random choice of x is equal to g^{ab} with probability ε/qH.
Advantage and time cost. Let Ts denote the time cost of the simulation. We have Ts = O(1). Therefore, the simulator B will solve the CDH problem with (t + Ts, ε/qH).
This completes the proof of the theorem.
pk = (g1 , g2 ), sk = (α, β ).
Encrypt: The encryption algorithm takes as input a message m ∈ {0, 1}n , the
public key pk, and the system parameters SP. It chooses a random number
r ∈ Z p and returns the ciphertext CT as
CT = (C1, C2) = ( g^r, H(g1^r || g2^r) ⊕ m ).
Theorem 7.2.0.1 Suppose the hash function H is a random oracle. If the CDH
problem is hard, the Twin Hashed ElGamal encryption scheme is provably secure in
the IND-CPA security model with reduction loss L = 1.
Proof. Suppose there exists an adversary A who can (t, ε)-break the encryption
scheme in the IND-CPA security model. We construct a simulator B to solve the
CDH problem. Given as input a problem instance (g, g^a, g^b) over the cyclic group
(G, g, p), B controls the random oracle, runs A , and works as follows.
Setup. Let SP = (G, g, p) and H be the random oracle controlled by the simulator.
B randomly chooses z1 , z2 ∈ Z p and sets the public key as
(g1, g2) = ( g^a, g^{z1}(g^a)^{z2} ),
where α = a and β = z1 + z2 a. The public key can be computed from the problem
instance and the chosen parameters.
H-Query. The adversary makes hash queries in this phase. B prepares a hash list
to record all queries and responses as follows, where the hash list is empty at the
beginning.
Let the i-th hash query be xi . If xi is already in the hash list, B responds to this
query following the hash list. Otherwise, B randomly chooses yi ∈ {0, 1}n and sets
H(xi ) = yi . The simulator B responds to this query with H(xi ) and adds (xi , yi ) to
the hash list.
Challenge. A outputs two distinct messages m0 , m1 ∈ {0, 1}n to be challenged. The
simulator randomly chooses R ∈ {0, 1}n and sets the challenge ciphertext CT ∗ as
CT∗ = (g^b, R),
7.2 Twin Hashed ElGamal Scheme 197
where g^b is from the problem instance. The challenge ciphertext can be seen as an encryption of the message mc ∈ {m0, m1} using the random number b if H(g1^b || g2^b) = R ⊕ mc:

CT∗ = (g^b, R) = ( g^b, H(g1^b || g2^b) ⊕ mc ).

The challenge ciphertext is therefore a correct ciphertext from the point of view of the adversary, if there is no hash query on g1^b || g2^b to the random oracle.
Guess. A outputs a guess or ⊥. In the above simulation, the challenge hash query is defined as

Q∗ = g1^b || g2^b = g^{ab} || g^{z1 b + z2 ab}.
Suppose (x1 , y1 ), (x2 , y2 ), · · · , (xqH , yqH ) are in the hash list, where each query xi can
be denoted by xi = ui ||vi . If xi does not satisfy this structure, we can delete it.
The simulator finds the query x∗ = u∗||v∗ from the hash list satisfying

v∗ = (g^b)^{z1} · (u∗)^{z2}

as the challenge hash query and returns u∗ = g^{ab} as the solution to the CDH problem instance. In this security reduction, the second group element is only used for helping the simulator find the challenge hash query from the hash list.
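How the second group element helps can be sketched numerically: since g2 = g^{z1 + z2·a}, the simulator can test a candidate pair (u, v) by checking v = (g^b)^{z1} · u^{z2} without knowing g^{ab}. The toy group and the fixed exponents below are our own illustrative choices, chosen so the example is reproducible.

```python
# Toy subgroup of order q = 1019 in Z_p*, p = 2039 (illustrative sizes only).
p, q, g = 2039, 1019, 4
a, b, z1, z2 = 123, 456, 78, 90        # a, b from the instance; z1, z2 simulator's

g_a, g_b = pow(g, a, p), pow(g, b, p)  # the simulator is given g^a and g^b
g_ab = pow(g_a, b, p)                  # unknown to the simulator; used below
                                       # only to play the adversary's part

def is_challenge_query(u: int, v: int) -> bool:
    """Trapdoor test: since g2 = g^(z1 + z2*a), a query u||v has the form
    g1^b || g2^b exactly when v = (g^b)^z1 * u^z2."""
    return v == (pow(g_b, z1, p) * pow(u, z2, p)) % p

v_good = (pow(g_b, z1, p) * pow(g_ab, z2, p)) % p    # = g2^b
assert is_challenge_query(g_ab, v_good)              # challenge query passes
assert not is_challenge_query(pow(g, 3, p), v_good)  # a mismatched u fails
```

This is exactly why the reduction needs no guessing among the hash queries, giving reduction loss L = 1.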
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the key
generation, the responses to hash queries, and the challenge ciphertext generation.
They are
a, z1 + z2 a, y1 , y2 , · · · , yqH , b.
According to the setting of the simulation, where a, b, z1 , z2 , yi are randomly cho-
sen, it is easy to see that the randomness property holds, and thus the simulation is
indistinguishable from the real attack.
Probability of successful simulation. There is no abort in the simulation, and thus
the probability of successful simulation is 1.
Advantage of breaking the challenge ciphertext. If H(g1^b || g2^b) = R ⊕ m0, the challenge ciphertext is an encryption of m0. If H(g1^b || g2^b) = R ⊕ m1, the challenge ciphertext is an encryption of m1. If the query g1^b || g2^b is not made, we have that H(g1^b || g2^b) is random and unknown to the adversary, so that it has no advantage in breaking the challenge ciphertext.
Probability of finding solution. Since the adversary has advantage ε in guessing the chosen message according to the breaking assumption, the adversary will query Q∗ to the random oracle with probability ε according to Lemma 4.11.1. The adversary makes qH hash queries in total. We claim that the adversary has no advantage in generating a query x = u||v with u ≠ g^{ab} that satisfies the simulator's check v = (g^b)^{z1} u^{z2}. Writing u = g^{a'b} for some a' ≠ a, such a query requires v = (g^b)^w for

w = z1 + z2 a'.

Since z1, z2 are randomly chosen and the public key only reveals z1 + z2 a, the value z1 + z2 a' is random and independent from the point of view of the adversary.
pk = (g1 , g2 ) , sk = (α1 , α2 ) .
Encrypt: The encryption algorithm takes as input a message m ∈ {0, 1}n , the
public key pk, and the system parameters SP. It picks a random r ∈ Z p and
returns the ciphertext CT as
CT = (C1, C2) = ( g^r, H(A2) ⊕ m ),

where A1 = H(0)||g1^r||1 and A2 = H(A1)||g2^r||2. Here, H(0) denotes an arbitrary but fixed string for all ciphertext generations.
Decrypt: The decryption algorithm takes as input a ciphertext CT , the secret
key sk, and the system parameters SP. Let CT = (C1 ,C2 ). It computes
7.3 Iterated Hashed ElGamal Scheme 199
Theorem 7.3.0.1 Suppose the hash function H is a random oracle. If the CDH problem is hard, the Iterated Hashed ElGamal encryption scheme is provably secure in the IND-CPA security model with reduction loss L = 2√qH, where qH is the number of hash queries to the random oracle.
Proof. Suppose there exists an adversary A who can (t, ε)-break the encryption
scheme in the IND-CPA security model. We construct a simulator B to solve the
CDH problem. Given as input a problem instance (g, g^a, g^b) over the cyclic group
(G, g, p), B controls the random oracle, runs A , and works as follows.
Setup. Let SP = (G, g, p) and H be the random oracle controlled by the simulator.
B randomly picks i∗ ∈ {1, 2} and sets the secret key in such a way that αi∗ = a from the problem instance, while the other value, denoted by z ∈ Zp, is randomly chosen by the simulator. The public key pk = (g1, g2) = (g^{α1}, g^{α2}) can therefore be computed from the problem instance and the chosen parameters.
H-Query. The adversary makes hash queries in this phase. B prepares a hash list
to record all queries and responses as follows, where the hash list is empty at the
beginning.
Let the i-th hash query be xi . If xi is already in the hash list, B responds to this
query following the hash list. Otherwise, B randomly chooses yi ∈ {0, 1}n and sets
H(xi ) = yi . The simulator B responds to this query with H(xi ) and adds (xi , yi ) to
the hash list.
Challenge. A outputs two distinct messages m0 , m1 ∈ {0, 1}n to be challenged. The
simulator randomly chooses R ∈ {0, 1}n and sets the challenge ciphertext CT ∗ as
CT∗ = (g^b, R),

where g^b is from the problem instance. The challenge ciphertext can be seen as an encryption of the message mc ∈ {m0, m1} using the random number b if H(Q2∗) = R ⊕ mc, where Q1∗ = H(0)||g1^b||1 and Q2∗ = H(Q1∗)||g2^b||2:

CT∗ = (g^b, R) = ( g^b, H(Q2∗) ⊕ mc ).
The challenge ciphertext is therefore a correct ciphertext from the point of view of
the adversary, if there is no hash query on Q∗2 to the random oracle.
Guess. A outputs a guess or ⊥. In the above simulation, there are two challenge hash queries, defined as

Q1∗ = H(0)||g1^b||1,   Q2∗ = H(Q1∗)||g2^b||2.

The solution to the CDH problem instance is g^{αi∗ b} = g^{ab} within the challenge hash query Q_{i∗}∗. Suppose (x1, y1), (x2, y2), ..., (x_{qH}, y_{qH}) are in the hash list, where each query xi can be denoted by xi = ui||vi||wi. Here, ui ∈ {H(0), y1, y2, ..., y_{qH}}, vi ∈ G, and wi ∈ {1, 2}. If xi does not satisfy this structure, we can delete it.
Table 7.1 All hash queries with valid structure in the hash list

(u1||v1||1, y1)            (u2||v2||1, y2)            ···   (uk||vk||1, yk)

      (y1||v11||2, y11)          (y2||v21||2, y21)                (yk||vk1||2, yk1)
      (y1||v12||2, y12)          (y2||v22||2, y22)                (yk||vk2||2, yk2)
Y1 =        ...            Y2 =        ...             ···   Yk =        ...
      (y1||v1n1||2, y1n1)        (y2||v2n2||2, y2n2)              (yk||vknk||2, yknk)
All hash queries are of one of the forms shown in Table 7.1. Suppose all hash queries in the first row are in the query set Y0. In the first column of the second row, all hash queries in the query set, denoted by Y1, use y1. All other rows and columns have a similar structure and definition. If the challenge hash queries exist in the hash list, the set Y0 must have exactly one query whose v is equal to g^{α1 b}, because all distinct hash queries in the set Y0 share the same u = H(0) and w = 1. Similarly, each set Yi must have at most one query whose v is equal to g^{α2 b}. We stress that v_{ij} = v_{i'j'} may hold, where v_{ij} is from Yi and v_{i'j'} is from Yi'.
In the above simulation, if i∗ = 1, this means that α1 = a and α2 = z, where z is randomly chosen by the simulator. Therefore, g^{α2 b} = (g^b)^z can be computed by the simulator, and the simulator can check whether v in a hash query u||v||w from Yi is equal to g^{bz} or not. Next, we describe how to pick the challenge hash query.
• If i∗ = 1, the simulator checks each hash query in Y0 as follows. For the query ui||vi||1 whose response is yi, the simulator checks whether there exists j ∈ [1, ni] such that v_{ij} = g^{bz} (a query from Yi). If yes, this query is kept in Y0. Otherwise, this query is removed from Y0. Suppose Y0∗ is the final set after removing all hash queries as described above. The simulator randomly picks one query from Y0∗ as the challenge hash query Q1∗ = u∗||v∗||1 and extracts v∗ as the solution to the CDH problem instance.
• If i∗ = 2, the simulator randomly picks one query from the sets Y1 ∪ Y2 ∪ · · · ∪ Yk
as the challenge hash query Q∗2 = u∗ ||v∗ ||2 and extracts v∗ as the solution to the
CDH problem instance.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the key
generation, the responses to hash queries, and the challenge ciphertext generation.
They are
a, z, y1 , y2 , · · · , yqH , b.
Without making the challenge query Q∗2 to the random oracle, which requires the
adversary to query Q∗1 first, H(Q∗2 ) is random and unknown to the adversary, so that
it has no advantage in breaking the challenge ciphertext.
Probability of finding solution. Since the adversary has advantage ε in guessing
the chosen message according to the breaking assumption, the adversary will query
Q∗1 and Q∗2 to the random oracle with probability ε according to Lemma 4.11.1. The
adversary makes qH hash queries in total. Therefore, we have
n1 + n2 + · · · + nk + k ≤ qH .
Let suc be the probability of successfully picking the challenge hash query. We
have the following probability of success:
Pr[suc | i∗ = d] ≥ 1/√qH,  for some d ∈ {1, 2}.
On the other hand, if the adversary can adaptively make hash queries against the
above probability, this means that
Pr[suc | i∗ = 1] < 1/√qH   (1),    and    Pr[suc | i∗ = 2] < 1/√qH   (2).
The above two probabilities must hold because the adversary does not know the
value i∗ .
• If i∗ = 1, the adversary must make hash queries in such a way that k ≥ 1 + √qH. Suppose {(x1, y1), (x2, y2), ..., (xk, yk)} are all the hash queries and responses from the set Y0. It is also the case that the query set Yi for all i ∈ [1, k] must have one hash query whose v is equal to g^{α2 b}. Otherwise, (xi, yi) will be removed from Y0 because of how the simulator picks the challenge hash query. If some queries are removed, and the remaining number is less than √qH, we have Pr[suc | i∗ = 1] ≥ 1/√qH.
• Suppose the probability (1) holds. If i∗ = 2, there are k hash queries in the set Y1 ∪ Y2 ∪ ··· ∪ Yk whose v is equal to g^{α2 b}. Let N = |Y1 ∪ Y2 ∪ ··· ∪ Yk|. In this case, to make sure that the probability (2) holds, the total number of hash queries must satisfy N ≥ k√qH + 1. Otherwise, N < k√qH + 1, and we have

k/N ≥ k/(k√qH) = 1/√qH,
and then obtain Pr[suc] ≥ 1/(2√qH). Therefore, the simulator can find the correct solution from hash queries with probability at least ε/(2√qH).
Advantage and time cost. Let Ts denote the time cost of the simulation. We have Ts = O(√qH). Therefore, the simulator B will solve the CDH problem with (t + Ts, ε/(2√qH)).
This completes the proof of the theorem.
hash functions H1 : {0, 1}∗ → Z p , H2 , H3 : {0, 1}∗ → {0, 1}n , and returns the
system parameters SP = (G, p, g, H1 , H2 , H3 ).
KeyGen: The key generation algorithm takes as input the system parameters
SP. It randomly chooses α ∈ Z p , computes g1 = gα , and returns a public/secret
key pair (pk, sk) as follows:
pk = g1 , sk = α.
Encrypt: The encryption algorithm takes as input a message m ∈ {0, 1}n , the
public key pk, and the system parameters SP. It works as follows:
• Choose a random string σ ∈ {0, 1}^n.
• Compute C3 = H3(σ) ⊕ m and r = H1(σ||m||C3).
• Compute C1 = g^r and C2 = H2(g1^r) ⊕ σ.

The ciphertext CT is defined as

CT = (C1, C2, C3) = ( g^r, H2(g1^r) ⊕ σ, H3(σ) ⊕ m ).
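The three encryption steps, together with the re-encryption check performed during decryption, can be sketched as follows. The toy group, the SHA-256 instantiations of H1, H2, H3, and all helper names are our own illustrative assumptions.

```python
import hashlib
import secrets

# Toy group as before (illustrative only); n is the seed/message length.
p, q, g = 2039, 1019, 4
n = 16

def _H(tag: bytes, data: bytes, out: int) -> bytes:
    return hashlib.sha256(tag + data).digest()[:out]

def H1(data: bytes) -> int:                  # {0,1}* -> exponents
    return int.from_bytes(_H(b"H1", data, 32), "big") % q

def H2(x: int) -> bytes:                     # G -> {0,1}^n
    return _H(b"H2", str(x).encode(), n)

def H3(s: bytes) -> bytes:                   # {0,1}^n -> {0,1}^n
    return _H(b"H3", s, n)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(pk: int, m: bytes):
    sigma = secrets.token_bytes(n)           # random seed sigma
    C3 = xor(H3(sigma), m)
    r = H1(sigma + m + C3)                   # r = H1(sigma || m || C3)
    return pow(g, r, p), xor(H2(pow(pk, r, p)), sigma), C3

def decrypt(sk: int, C1: int, C2: bytes, C3: bytes):
    sigma = xor(H2(pow(C1, sk, p)), C2)      # recover the seed
    m = xor(H3(sigma), C3)                   # recover the message
    r = H1(sigma + m + C3)
    if pow(g, r, p) != C1:                   # re-encryption check
        return None                          # reject invalid ciphertexts
    return m
```

The re-encryption check is what lets the decryption simulation reject ciphertexts that were not built from a hash query, which is the heart of the CCA argument above.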
Theorem 7.4.0.1 Suppose the hash functions H1 , H2 , H3 are random oracles. If the
CDH problem is hard, the Fujisaki-Okamoto Hashed ElGamal encryption scheme is
provably secure in the IND-CCA security model with reduction loss L = qH2 , where
qH2 is the number of hash queries to the random oracle H2 .
Proof. Suppose there exists an adversary A who can (t, qd , ε)-break the encryption
scheme in the IND-CCA security model. We construct a simulator B to solve the
CDH problem. Given as input a problem instance (g, g^a, g^b) over the cyclic group
(G, g, p), B controls the random oracles, runs A , and works as follows.
Setup. Let SP = (G, g, p) and H1 , H2 , H3 be random oracles controlled by the sim-
ulator. B sets the public key as g1 = g^a where α = a. The public key is available
from the problem instance.
H-Query. The adversary makes hash queries in this phase. B prepares three hash
lists to record all queries and responses as follows, where the hash lists are empty at
the beginning.
• Let the i-th hash query to H1 be xi . If xi is already in the hash list, B responds to
this query following the hash list. Otherwise, B randomly chooses Xi ∈ Z p , sets
H1 (xi ) = Xi and adds (xi , Xi ) to the hash list.
• Let the i-th hash query to H2 be yi . If yi is already in the hash list, B responds to
this query following the hash list. Otherwise, B randomly chooses Yi ∈ {0, 1}n ,
sets H2 (yi ) = Yi and adds (yi ,Yi ) to the hash list.
• Let the i-th hash query to H3 be zi . If zi is already in the hash list, B responds to
this query following the hash list. Otherwise, B randomly chooses Zi ∈ {0, 1}n ,
sets H3 (zi ) = Zi and adds (zi , Zi ) to the hash list.
Let the number of hash queries to random oracles H1 , H2 , H3 be qH1 , qH2 , qH3 , re-
spectively.
Phase 1. The adversary makes decryption queries in this phase. For a decryption
query on CT = (C1 ,C2 ,C3 ), the simulator searches the hash lists to see whether
there exist three pairs (x, X), (y,Y ), (z, Z) such that
x = z||m||C3,
y = g1^X = g1^{H1(x)},
C1 = g^X = g^{H1(x)},
C2 = Y ⊕ z = H2(y) ⊕ z,
C3 = Z ⊕ m = H3(z) ⊕ m.
The challenge ciphertext is therefore a correct ciphertext from the point of view of the adversary if it does not query g1^b, σ∗, σ∗||mc||C3∗ to the random oracles H2, H3, H1, respectively.
Phase 2. The simulator responds to decryption queries in the same way as in Phase
1 with the restriction that no decryption query is allowed on CT ∗ .
Guess. A outputs a guess or ⊥. The challenge hash query is defined as

Q∗ = g1^b = g^{ab},

which is a query to the random oracle H2. The simulator randomly selects one value y from the hash list (y1, Y1), (y2, Y2), ..., (y_{qH2}, Y_{qH2}) as the challenge hash query. The simulator can immediately use this hash query to solve the CDH problem.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The decryption simulation is correct except with
negligible probability according to the following analysis.
• For case 1, the simulator can correctly respond to a decryption query on CT =
(C1 ,C2 ,C3 ) following the description in Phase 1.
• For case 2, with (x, H1(x)) where x = (z||m||C3) satisfying C1 = g^{H1(x)}, the simulator can compute y = g1^{H1(x)} and extract z. Since the adversary did not query y, z to the random oracles, we have that H2(y) ⊕ C2 and H3(z) ⊕ C3 must be random in {0, 1}^n, so that they are equal to z and m provided by the adversary in x with negligible probability.
• For case 3, without (x, H1(x)) satisfying C1 = g^{H1(x)}, we have the following two sub-cases.
– C1 = g^b. The ciphertext CT for every decryption query must be different from the challenge ciphertext CT∗ = (C1∗, C2∗, C3∗). For such a decryption query, the simulator cannot compute g1^b to simulate the decryption. However, the hash query H1(σ∗||mc||R2) = b will determine C2∗, C3∗. That is, all ciphertexts where C1 = g^b are invalid except the challenge ciphertext. Therefore, the simulator can perform the decryption correctly by returning ⊥ to the adversary.
– C1 6= gb . The simulator performs an incorrect decryption simulation for CT
if and only if there exist three hash queries and responses (x, X), (y,Y ), (z, Z)
after this decryption simulation, where these three queries satisfy
x = (z||m||C3),
y = g1^{H1(x)},
C1 = g^{H1(x)},
C2 = H2(y) ⊕ z,
C3 = H3(z) ⊕ m.
The simulation fails because the decryption returns ⊥ before these hash
queries, but the decryption should return m after these hash queries. Since
H1 (x) = X is randomly chosen, C1 = gH1 (x) holds with negligible probability.
Therefore, the simulator performs the decryption simulation correctly except with
negligible probability. In particular, the decryption response will not generate any
new hash query and its response for the adversary.
The correctness of the simulation including the public key, decryption, and the
challenge ciphertext has been explained above. The randomness of the simulation
includes all random numbers in the key generation, the responses to hash queries,
and the challenge ciphertext generation. They are
a, Xi ,Yi , Zi , b.
According to the setting of the simulation, where a, b, Xi ,Yi , Zi are randomly cho-
sen, it is easy to see that the randomness property holds, and thus the simulation is
indistinguishable from the real attack.
Probability of successful simulation. There is no abort in the simulation, and thus
the probability of successful simulation is 1.
Advantage of breaking the challenge ciphertext. According to the simulation, we
have
• The challenge ciphertext is an encryption of the message m0 if H3(σ∗) = R2 ⊕ m0, and an encryption of the message m1 if H3(σ∗) = R2 ⊕ m1.
Without making the challenge query Q∗ = g1^b to the random oracle, the adversary
can break the challenge ciphertext if and only if there exist queries to H3 , H1 satisfy-
ing the equations. Note that the upper bound of the success probability is the sum of
the probability that a query to H3 is answered with R2 ⊕ mc and the probability that
a query to H1 is answered with b. The success probability is at most (2qH3 + qH1 )/p,
which is negligible. Furthermore, any decryption response will not help the adver-
sary obtain additional hash queries or their responses. Therefore, the adversary has
no advantage in breaking the challenge ciphertext except with negligible probability.
Probability of finding solution. According to the definition and simulation, if the
adversary does not query g1^b = g^{ab} to the random oracle H2, the adversary has no
advantage in guessing the encrypted message except with negligible probability.
Since the adversary has advantage ε in guessing the chosen message according to
the breaking assumption, the adversary will query gab to the random oracle with
7.4 Fujisaki-Okamoto Hashed ElGamal Scheme 207
In this chapter, we introduce the ElGamal encryption scheme and the Cramer-Shoup
encryption scheme [32]. The first scheme is widely known, and we give it here to
help the reader understand how to analyze the correctness of a security reduction.
The second scheme is the first practical encryption scheme without random oracles
with CCA security. The given schemes and/or proofs may be different from the
original ones.
pk = g1 , sk = α.
Theorem 8.1.0.1 If the DDH problem is hard, the ElGamal encryption scheme is
provably secure in the IND-CPA security model with reduction loss L = 2.
Proof. Suppose there exists an adversary A who can (t, ε)-break the encryption
scheme in the IND-CPA security model. We construct a simulator B to solve the
DDH problem. Given as input a problem instance (g, g^a, g^b, Z) over the cyclic group
(G, g, p), B runs A and works as follows.
Setup. Let SP = (G, g, p). B sets the public key as g1 = g^a where α = a. The public
key is available from the problem instance.
Challenge. A outputs two distinct messages m0 , m1 ∈ G to be challenged. The
simulator randomly chooses c ∈ {0, 1} and sets the challenge ciphertext CT ∗ as
CT∗ = ( g^b, Z · mc ),

where g^b and Z are from the problem instance. Let r = b. If Z = g^{ab}, we have

CT∗ = ( g^b, Z · mc ) = ( g^r, g1^r · mc ).
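The two cases of the DDH instance can be illustrated with a short sketch (toy group and all names ours): when Z = g^{ab}, the pair (g^b, Z · mc) decrypts correctly under sk = a, while for a random Z the factor Z · mc is a one-time pad on the message.

```python
import secrets

# Toy group (illustrative only).
p, q, g = 2039, 1019, 4

a = secrets.randbelow(q - 1) + 1
b = secrets.randbelow(q - 1) + 1
g_a, g_b = pow(g, a, p), pow(g, b, p)

def challenge(Z: int, m: int):
    """Simulator's challenge ciphertext (g^b, Z * mc) for a message m in G."""
    return g_b, (Z * m) % p

m0 = pow(g, 7, p)                  # a sample message in the group
Z_true = pow(g_a, b, p)            # Z = g^ab: a real encryption under pk = g^a
C1, C2 = challenge(Z_true, m0)

# With Z = g^ab, decryption under sk = a recovers m0, since C2 / C1^a = m0;
# with a uniformly random Z, C2 is uniform in G and hides m0 perfectly.
assert (C2 * pow(pow(C1, a, p), -1, p)) % p == m0
```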
Advantage and time cost. The advantage of solving the DDH problem is

PS (PT − PF) = (1/2 + ε/2) − 1/2 = ε/2.
Let Ts denote the time cost of the simulation. We have Ts = O(1). Therefore, the
simulator B will solve the DDH problem with (t + Ts , ε/2).
This completes the proof of the theorem.
Theorem 8.2.0.1 If the Variant DDH problem is hard, the Cramer-Shoup encryp-
tion scheme is provably secure in the IND-CCA security model with reduction loss
about L = 2.
Proof. Suppose there exists an adversary A who can (t, qd , ε)-break the encryption
scheme in the IND-CCA security model. We construct a simulator B to solve the
Variant DDH problem. Given as input a problem instance X = (g1, g2, g1^{a1}, g2^{a2}) over
the cyclic group (G, g, p), B runs A and works as follows.
Setup. Let SP = (G, g, p, H), where H : {0, 1}∗ → Zp is a cryptographic hash function. B randomly chooses α1, α2, β1, β2, γ1, γ2 ∈ Zp and sets the public key as

pk = (g1, g2, u, v, h) = ( g1, g2, g1^{α1} g2^{α2}, g1^{β1} g2^{β2}, g1^{γ1} g2^{γ2} ).
The public key can be computed from the problem instance and the chosen param-
eters.
Phase 1. The adversary makes decryption queries in this phase. For a decryption
query on CT , since the simulator knows the secret key, it runs the decryption algo-
rithm and returns the decryption result to the adversary.
Challenge. A outputs two distinct messages m0, m1 ∈ G to be challenged. The simulator randomly chooses c ∈ {0, 1} and sets the challenge ciphertext CT∗ as

CT∗ = (C1∗, C2∗, C3∗, C4∗) = ( g1^{a1}, g2^{a2}, g1^{a1 γ1} g2^{a2 γ2} · mc, (g1^{a1})^{α1 + w∗ β1} (g2^{a2})^{α2 + w∗ β2} ),

where w∗ = H(C1∗, C2∗, C3∗) and g1^{a1}, g2^{a2} are from the problem instance. Let r = a1. If a1 = a2, we have

CT∗ = ( g1^{a1}, g2^{a2}, g1^{a1 γ1} g2^{a2 γ2} · mc, (g1^{a1})^{α1 + w∗ β1} (g2^{a2})^{α2 + w∗ β2} )
    = ( g1^r, g2^r, h^r mc, u^r v^{w∗ r} ).
from the challenge ciphertext. If γ1, γ2 are unknown to the adversary, and there is no decryption query, we have that γ1 + zγ2 and a1γ1 + a2zγ2 are random and independent because the determinant of the following coefficient matrix is nonzero:

| 1    z   |
| a1  a2 z |  = z(a2 − a1) ≠ 0.
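The determinant claim can be checked mechanically on sample values; this is a sketch of ours, not part of the proof.

```python
# det [[1, z], [a1, a2*z]] = a2*z - a1*z = z*(a2 - a1), which is nonzero
# whenever z != 0 and a1 != a2 (as in a non-true Variant DDH tuple).
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

z, a1, a2 = 5, 3, 9                                     # sample values
assert det2([[1, z], [a1, a2 * z]]) == z * (a2 - a1)    # = 30, nonzero
```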
In this case, the challenge ciphertext can be seen as a one-time pad encryption of
the message mc from the point of view of the adversary, so that the adversary has
no advantage in guessing the bit c. Next, we show that the decryption queries do not
help the adversary break the challenge ciphertext except with negligible probability.
If a decryption query on CT = ( g1^{r1}, g2^{r2}, C3, C4 ) passes the verification, the decryption will return C3 · g1^{−r1 γ1} g2^{−r2 γ2} to the adversary, and the adversary will know r1 γ1 + r2 z γ2.
That is, it can pass the verification if the adversary can compute the number
r1 (α1 + wβ1 ) + r2 z(α2 + wβ2 ).
According to the simulation, all the parameters associated with α1, α2, β1, β2, including the target r1(α1 + wβ1) + r2z(α2 + wβ2), are

α1 + zα2,
β1 + zβ2,
a1(α1 + w∗β1) + za2(α2 + w∗β2),
r1(α1 + wβ1) + r2z(α2 + wβ2),

whose coefficient matrix in the unknowns (α1, α2, β1, β2) is

| 1     z      0       0       |
| 0     0      1       z       |
| a1    z a2   a1 w∗   z a2 w∗ |
| r1    z r2   r1 w    z r2 w  |.
Advantage and time cost. The advantage of solving the Variant DDH problem is

PS (PT − PF) = (1/2 + ε/2) − (1/2 + qd/(p − qd)) = ε/2 − qd/(p − qd) ≈ ε/2.
Let Ts denote the time cost of the simulation. We have Ts = O(qd ), which is mainly
dominated by the decryption. Therefore, the simulator B will solve the Variant
DDH problem with (t + Ts , ε/2).
This completes the proof of the theorem.
Chapter 9
Identity-Based Encryption with Random
Oracles
In this chapter, we introduce the Boneh-Franklin IBE scheme [24] under the H-Type,
the Boneh-BoyenRO IBE scheme [20] and the Park-Lee IBE scheme [86] under the
C-Type, and the Sakai-Kasahara IBE scheme [91, 30] under the I-Type. The given
schemes and/or proofs may be different from the original ones.
KeyGen: The key generation algorithm takes as input an identity ID ∈ {0, 1}∗
and the master key pair (mpk, msk). It returns the private key dID of ID as
dID = H1 (ID)α .
Decrypt: The decryption algorithm takes as input a ciphertext CT for ID, the
private key dID , and the master public key mpk. Let CT = (C1 ,C2 ). It decrypts
Theorem 9.1.0.1 Suppose the hash functions H1 , H2 are random oracles. If the
BDH problem is hard, the Boneh-Franklin identity-based encryption scheme is prov-
ably secure in the IND-ID-CPA security model with reduction loss L = qH1 qH2 ,
where qH1 , qH2 are the number of hash queries to the random oracles H1 , H2 , re-
spectively.
Proof. Suppose there exists an adversary A who can (t, qk , ε)-break the encryption
scheme in the IND-ID-CPA security model. We construct a simulator B to solve
the BDH problem. Given as input a problem instance (g, g^a, g^b, g^c) over the pairing
group PG, B controls the random oracles, runs A , and works as follows.
Setup. B sets g1 = g^a where α = a. The master public key, except the two hash functions, is therefore available from the problem instance, where the two hash functions
are set as random oracles controlled by the simulator.
H-Query. The adversary makes hash queries in this phase. Before any hash queries
are made, B randomly chooses i∗ ∈ [1, qH1 ], where qH1 denotes the number of hash
queries to the random oracle H1 . Then, B prepares two hash lists to record all
queries and responses as follows, where the hash lists are empty at the beginning.
• Let the i-th hash query to H1 be IDi . If IDi is already in the hash list, B responds
to this query following the hash list. Otherwise, B randomly chooses xi ∈ Z p and
sets H1(IDi) as

H1(IDi) = g^{xi} if i ≠ i∗,  and  H1(IDi) = g^b otherwise.
The simulator responds to this query with H1 (IDi ) and adds (i, IDi , xi , H1 (IDi ))
to the hash list.
• Let the i-th hash query to H2 be yi . If yi is already in the hash list, B responds to
this query following the hash list. Otherwise, B randomly chooses Yi ∈ {0, 1}n ,
responds to this query with H2 (yi ) = Yi , and adds (yi , H2 (yi )) to the hash list.
Phase 1. The adversary makes private-key queries in this phase. For a private-key
query on ID_i, let (i, ID_i, x_i, H1(ID_i)) be the corresponding tuple. If i = i*, abort.
Otherwise, according to the simulation, we have H1(ID_i) = g^{x_i}. The simulator
computes d_{ID_i} = (g^a)^{x_i}, which is equal to H1(ID_i)^α. Therefore, d_{ID_i} is a
valid private key for ID_i.
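The H-Query and Phase 1 simulations above can be sketched in code. The sketch below is a toy model, not the real scheme: a group element g^x is represented by its exponent x modulo a small prime, so the pairing e(g^x, g^y) = e(g,g)^{xy} becomes multiplication of exponents; the prime, identities, query bound, and seed are all illustrative assumptions.

```python
import random

p = 1_000_003                 # toy prime group order; real schemes use ~256-bit pairing groups
random.seed(1)
a = random.randrange(1, p)    # exponent of g^a in the BDH instance
b = random.randrange(1, p)    # exponent of g^b

q_H1 = 8                                # assumed bound on H1 queries
i_star = random.randrange(1, q_H1 + 1)  # B's guess of the challenge query index

hash_list = {}                          # ID -> (i, x_i, exponent of H1(ID))

def H1(identity):
    """Programmed random oracle: g^{x_i} normally, g^b at position i_star."""
    if identity not in hash_list:
        i = len(hash_list) + 1
        if i != i_star:
            x = random.randrange(1, p)
            hash_list[identity] = (i, x, x)     # H1(ID_i) = g^{x_i}
        else:
            hash_list[identity] = (i, None, b)  # H1(ID_{i*}) = g^b embeds the instance
    return hash_list[identity][2]

def private_key(identity):
    """Phase 1: d_ID = (g^a)^{x_i}; aborts on the guessed challenge identity."""
    H1(identity)
    i, x, _ = hash_list[identity]
    if i == i_star:
        raise RuntimeError("abort")
    return a * x % p                            # exponent of (g^a)^{x_i}

# The simulated key equals the real key H1(ID)^alpha with alpha = a.
for ID in ("alice", "bob", "carol"):
    try:
        assert private_key(ID) == a * H1(ID) % p
    except RuntimeError:
        pass                                    # the i_star-th query triggers abort
```

The oracle must answer repeated queries consistently, which is why the hash list is consulted before fresh randomness is drawn.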
Challenge. A outputs two distinct messages m0 , m1 ∈ {0, 1}n and one identity ID∗
to be challenged. Let the corresponding tuple of a hash query on ID∗ to H1 be
(i, ID*, x, H1(ID*)). If i ≠ i*, abort. Otherwise, we have i = i* and H1(ID*) = g^b.
9.1 Boneh-Franklin Scheme 217
The simulator randomly chooses R ∈ {0,1}^n and sets the challenge ciphertext
CT* as

CT* = (g^c, R),

where g^c is from the problem instance. The challenge ciphertext can be seen as
an encryption of the message m_coin ∈ {m0, m1} under the random number c, if
H2(e(g,g)^{abc}) = R ⊕ m_coin:

CT* = (g^c, R) = ( g^c, H2( e(H1(ID*), g1)^c ) ⊕ m_coin ).
The challenge ciphertext is therefore a correct ciphertext from the point of view of
the adversary if there is no hash query on e(g,g)^{abc} to the random oracle H2.
Phase 2. The simulator responds to private-key queries in the same way as in Phase
1 with the restriction that no private-key query is allowed on ID∗ .
Guess. A outputs a guess or ⊥. The challenge hash query is defined as

Q* = e(g,g)^{abc},

which is a query to the random oracle H2. The simulator randomly selects one value
y from the hash list (y1, Y1), (y2, Y2), ..., (y_{qH2}, Y_{qH2}) as the challenge hash query. The
simulator can immediately use this hash query to solve the BDH problem.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the
master-key generation, the responses to hash queries, the private-key generation,
and the challenge ciphertext generation. They are

a,  b,  c,  the values x_i and Y_i in the hash-list responses,  and R.

According to the setting of the simulation, where all of them are randomly chosen,
the randomness property holds, and thus the simulation is indistinguishable from
the real attack.
Probability of successful simulation. If the identity ID∗ to be challenged is the i∗ -
th identity queried to the random oracle, the adversary cannot query its private key
so that the simulation will be successful in the query phase and the challenge phase.
The success probability is therefore 1/qH1 for qH1 queries to H1 .
Advantage of breaking the challenge ciphertext. According to the simulation, we
have
• The challenge ciphertext is an encryption of the message m0 if

H2( e(g,g)^{abc} ) = R ⊕ m0.

• The challenge ciphertext is an encryption of the message m1 if

H2( e(g,g)^{abc} ) = R ⊕ m1.

Without making the challenge query Q* = e(g,g)^{abc} to the random oracle, the ad-
versary has no advantage in breaking the challenge ciphertext.
Probability of finding solution. According to the definition and simulation, if the
adversary does not query Q* = e(g,g)^{abc} to the random oracle H2, the adversary has
no advantage in guessing the encrypted message. Since the adversary has advan-
tage ε in guessing the chosen message according to the breaking assumption, the
adversary will query e(g,g)^{abc} to the random oracle with probability ε according to
Lemma 4.11.1. Therefore, a random choice of y from the hash list for H2 will be
equal to e(g,g)^{abc} with probability ε/qH2.
Advantage and time cost. Let Ts denote the time cost of the simulation. We
have Ts = O(qH1 + qk), which is mainly dominated by the oracle responses and the
key generation. Therefore, the simulator B will solve the BDH problem with

( t + Ts,  ε/(qH1·qH2) ).
This completes the proof of the theorem.
KeyGen: The key generation algorithm takes as input an identity ID ∈ {0, 1}∗
and the master key pair (mpk, msk). It randomly chooses r ∈ Z p and returns the
private key dID of ID as
d_ID = (d1, d2) = ( g2^α · H(ID)^r,  g^r ).
Theorem 9.2.0.1 Suppose the hash function H is a random oracle. If the DBDH
problem is hard, the Boneh-BoyenRO identity-based encryption scheme is provably
secure in the IND-ID-CPA security model with reduction loss L = 2qH , where qH is
the number of hash queries to the random oracle.
Proof. Suppose there exists an adversary A who can (t, qk , ε)-break the encryption
scheme in the IND-ID-CPA security model. We construct a simulator B to solve the
DBDH problem. Given as input a problem instance (g, g^a, g^b, g^c, Z) over the pairing
group PG, B controls the random oracle, runs A , and works as follows.
Setup. B sets g1 = g^a, g2 = g^b, which implicitly defines α = a. The master public
key, except the hash function, is therefore available from the problem instance; the
hash function is set as a random oracle controlled by the simulator.
H-Query. The adversary makes hash queries in this phase. Before any hash queries
are made, B randomly chooses i∗ ∈ [1, qH ], where qH denotes the number of hash
queries to the random oracle. Then, B prepares a hash list to record all queries and
responses as follows, where the hash list is empty at the beginning.
Let the i-th hash query to H be IDi . If IDi is already in the hash list, B responds
to this query following the hash list. Otherwise, B randomly chooses xi ∈ Z p and
sets H(ID_i) as

H(ID_i) = g^{b + x_i} if i ≠ i*,  and  H(ID_i) = g^{x_i} otherwise.

The simulator B responds to this query with H(ID_i) and adds the tuple
(i, ID_i, x_i, H(ID_i)) to the hash list.
Phase 1. The adversary makes private-key queries in this phase. For a private-key
query on ID_i, let (i, ID_i, x_i, H(ID_i)) be the corresponding tuple. If i = i*, abort. Oth-
erwise, according to the simulation, we have H(ID_i) = g^{b + x_i}.
The simulator randomly chooses r'_i ∈ Z_p and computes d_{ID_i} as

d_{ID_i} = ( (g^a)^{−x_i} · H(ID_i)^{r'_i},  (g^a)^{−1} · g^{r'_i} ),

which equals ( g2^α · H(ID_i)^{r_i}, g^{r_i} ) for the implicit randomness r_i = r'_i − a.
According to the setting of the simulation, where a, b, c, x_i, r'_i are randomly chosen,
it is easy to see that the randomness property holds, and thus the simulation is indis-
tinguishable from the real attack.
Probability of successful simulation. If the identity ID∗ to be challenged is the i∗ -
th identity queried to the random oracle, the adversary cannot query its private key
so that the simulation will be successful in the query phase and the challenge phase.
The success probability is therefore 1/qH for qH queries to H.
Probability of breaking the challenge ciphertext.
If Z is true, the simulation is indistinguishable from the real attack, and thus the
adversary has probability 1/2 + ε/2 of guessing the encrypted message correctly.
If Z is false, it is easy to see that the challenge ciphertext is a one-time pad
because the message is encrypted using Z, which is random and cannot be calculated
from the other parameters given to the adversary. Therefore, the adversary only has
probability 1/2 of guessing the encrypted message correctly.
Advantage and time cost. The advantage of solving the DBDH problem is
P_S · (P_T − P_F) = (1/qH) · ( 1/2 + ε/2 − 1/2 ) = ε/(2·qH).
Let Ts denote the time cost of the simulation. We have Ts = O(qH + qk ), which is
mainly dominated by the oracle response and the key generation.
Therefore, the simulator B will solve the DBDH problem with ( t + Ts, ε/(2·qH) ).
KeyGen: The key generation algorithm takes as input an identity ID ∈ {0, 1}∗
and the master key pair (mpk, msk). It randomly chooses r,tk ∈ Z p and returns
the private key dID of ID as
d_ID = (d1, d2, d3, d4) = ( g2^{α + r},  g^r,  ( H(ID) · g2^{tk} )^r,  tk ).
Theorem 9.3.0.1 Suppose the hash function H is a random oracle. If the DBDH
problem is hard, the Park-Lee identity-based encryption scheme is provably secure
in the IND-ID-CPA security model with reduction loss L = 2.
Proof. Suppose there exists an adversary A who can (t, qk , ε)-break the encryption
scheme in the IND-ID-CPA security model. We construct a simulator B to solve the
DBDH problem. Given as input a problem instance (g, g^a, g^b, g^c, Z) over the pairing
group PG, B controls the random oracle, runs A , and works as follows.
Setup. B sets g1 = g^a, g2 = g^b, which implicitly defines α = a. The master public
key, except the hash function, is therefore available from the problem instance; the
hash function is set as a random oracle controlled by the simulator.
H-Query. The adversary makes hash queries in this phase. B prepares a hash list
to record all queries and responses as follows, where the hash list is empty at the
beginning.
For a query on ID, B randomly chooses x_ID, y_ID ∈ Z_p and sets H(ID) as

H(ID) = g^{y_ID} · (g^b)^{−x_ID} = g^{ y_ID − x_ID·b }.

The simulator B responds to this query with H(ID) and adds (ID, x_ID, y_ID, H(ID))
to the hash list.
Phase 1. The adversary makes private-key queries in this phase. For a private-key
query on ID, let (ID, xID , yID , H(ID)) be the corresponding tuple in the hash list.
The simulator randomly chooses r' ∈ Z_p and computes d_ID as

d_ID = ( (g^b)^{r'},  g^{r'} · (g^a)^{−1},  g^{r'·y_ID} · (g^a)^{−y_ID},  x_ID ),

which is a correctly distributed private key with the implicit randomness r = r' − a
and tk = x_ID.
The randomness of the simulation includes the following random numbers:

mpk : a, b,
H(ID) : y_ID − x_ID·b,
(r, tk) : r' − a, x_ID,
CT* : c, x_{ID*}.
According to the setting of the simulation, where a, b, c, x_ID, y_ID, r' are randomly
chosen, it is easy to see that the randomness property holds, and thus the simulation
is indistinguishable from the real attack.
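The key-simulation step above can be checked mechanically in the exponent. In the toy sketch below (a group element g^x is represented by x mod p; the prime and seed are illustrative), the simulated private key is compared term by term with the real key under the implicit randomness r = r' − a and tk = x_ID.

```python
import random

p = 1_000_003            # toy prime order
random.seed(2)
a = random.randrange(1, p)   # alpha = a
b = random.randrange(1, p)   # g2 = g^b
x_ID, y_ID, r_prime = (random.randrange(1, p) for _ in range(3))

H_ID = (y_ID - x_ID * b) % p         # exponent of H(ID) = g^{y_ID - x_ID * b}
r, tk = (r_prime - a) % p, x_ID      # implicit randomness of the simulated key

# Real key: d = (g2^{alpha+r}, g^r, (H(ID) * g2^{tk})^r, tk)
real = (b * (a + r) % p, r, (H_ID + b * tk) * r % p, tk)

# Simulated key: ((g^b)^{r'}, g^{r'} * (g^a)^{-1}, g^{r' y_ID} * (g^a)^{-y_ID}, x_ID)
sim = (b * r_prime % p, (r_prime - a) % p, (r_prime * y_ID - a * y_ID) % p, x_ID)

assert real == sim    # the simulated key is a correctly formed Park-Lee key
```

The check works because H(ID) · g2^{tk} collapses to g^{y_ID}, so every unknown exponent involving a and b cancels.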
Probability of successful simulation. There is no abort in the simulation, and thus
the probability of successful simulation is 1.
Probability of breaking the challenge ciphertext.
If Z is true, the simulation is indistinguishable from the real attack, and thus the
adversary has probability 1/2 + ε/2 of guessing the encrypted message correctly.
If Z is false, it is easy to see that the challenge ciphertext is a one-time pad
because the message is encrypted using Z, which is random and cannot be calculated
from the other parameters given to the adversary. Therefore, the adversary only has
probability 1/2 of guessing the encrypted message correctly.
Advantage and time cost. The advantage of solving the DBDH problem is
P_S · (P_T − P_F) = 1 · ( 1/2 + ε/2 − 1/2 ) = ε/2.
Let Ts denote the time cost of the simulation. We have Ts = O(qH + qk), which is
mainly dominated by the oracle responses and the key generation. Therefore, the
simulator B will solve the DBDH problem with ( t + Ts, ε/2 ).
KeyGen: The key generation algorithm takes as input an identity ID ∈ {0, 1}∗
and the master key pair (mpk, msk). It returns the private key dID of ID as
d_ID = h^{ 1/(α + H1(ID)) }.
Decrypt: The decryption algorithm takes as input a ciphertext CT for ID, the
private key dID , and the master public key mpk. Let CT = (C1 ,C2 ). It decrypts
the message by computing
C2 ⊕ H2( e(C1, d_ID) ) = m.
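Decryption works because e(C1, d_ID) = e(g, h)^s. A toy exponent-arithmetic check (g^x represented by x mod p, pairing as multiplication of exponents; the prime and sampled values are illustrative):

```python
import random

p = 1_000_003                     # toy prime order
random.seed(3)
alpha   = random.randrange(1, p)  # master secret
h_exp   = random.randrange(1, p)  # h = g^{h_exp}
id_hash = random.randrange(1, p)  # H1(ID)
s       = random.randrange(1, p)  # encryption randomness

# d_ID = h^{1/(alpha + H1(ID))}
d = h_exp * pow(alpha + id_hash, -1, p) % p
# C1 = (g1 * g^{H1(ID)})^s = g^{(alpha + H1(ID)) * s}
C1 = (alpha + id_hash) * s % p

# e(C1, d_ID) = e(g,g)^{C1 * d} must equal e(g, h)^s = e(g,g)^{h_exp * s}
assert C1 * d % p == h_exp * s % p
```

The inverse exponent 1/(α + H1(ID)) cancels against the matching factor in C1, which is exactly why C2 ⊕ H2(e(C1, d_ID)) recovers m.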
Theorem 9.4.0.1 Suppose the hash functions H1 , H2 are random oracles. If the q-
BDHI problem is hard, the Sakai-Kasahara identity-based encryption scheme is
provably secure in the IND-ID-CPA security model with reduction loss L = qH1 qH2 ,
where qH1 , qH2 are the number of hash queries to the random oracles H1 , H2 , re-
spectively.
Proof. Suppose there exists an adversary A who can (t, qk, ε)-break the encryption
scheme in the IND-ID-CPA security model. We construct a simulator B to solve
the q-BDHI problem. Given as input a problem instance (g, g^a, g^{a^2}, ..., g^{a^q}) over the
pairing group PG, B controls the random oracles, runs A, and works as follows.
Setup. B randomly chooses w*, w1, w2, ..., wq ∈ Z_p. Let f(x) be the polynomial in
Z_p[x] defined as

f(x) = ∏_{i=1}^{q} ( x − w* + w_i ).

The simulator implicitly sets α = a − w* and h = g^{f(a)}, both of which are
computable from the problem instance.

H-Query. Before any hash queries are made, B randomly chooses i* ∈ [1, qH1]. Let
the i-th hash query to H1 be ID_i. B sets H1(ID_i) as

H1(ID_i) = w_i if i ≠ i*,  and  H1(ID_i) = w* otherwise,
where w_i, w* are random numbers chosen in the setup phase. In the simulation,
w_{i*} is not used in the responses to hash queries. The simulator B responds
to this query with H1(ID_i) and adds the tuple (i, ID_i, w_i, H1(ID_i)) or the tuple
(i*, ID_{i*}, w*, H1(ID_{i*})) to the hash list.
• Let the i-th hash query to H2 be yi . If yi is already in the hash list, B responds to
this query following the hash list. Otherwise, B randomly chooses Yi ∈ {0, 1}n ,
responds to this query with H2 (yi ) = Yi , and adds (yi , H2 (yi )) to the hash list.
Phase 1. The adversary makes private-key queries in this phase. For a private-key
query on ID_i, let (i, ID_i, w_i, H1(ID_i)) be the corresponding tuple. If i = i*, abort. Oth-
erwise, according to the simulation, H1(ID_i) = w_i. Let f_{ID_i}(x) be defined as

f_{ID_i}(x) = f(x) / ( x − w* + w_i ),

which is a polynomial of degree q − 1, so that d_{ID_i} = g^{f_{ID_i}(a)} = h^{1/(α + H1(ID_i))}
is computable from the problem instance.
Challenge. A outputs two distinct messages m0, m1 ∈ {0,1}^n and one identity ID*
to be challenged. If ID* is not the i*-th identity queried to H1, abort. Otherwise,
H1(ID*) = w*, and the simulator randomly chooses r' ∈ Z_p and R ∈ {0,1}^n and sets
the challenge ciphertext CT* as CT* = (g^{r'}, R). The challenge ciphertext can be
seen as an encryption of the message m_coin ∈ {m0, m1} under the implicit randomness
r* = r'/a, if H2( e(g,h)^{r*} ) = R ⊕ m_coin:

CT* = (g^{r'}, R) = ( ( g1 · g^{H1(ID*)} )^{r*}, H2( e(g,h)^{r*} ) ⊕ m_coin ),

because g1 · g^{H1(ID*)} = g^{(a − w*) + w*} = g^a. The challenge ciphertext is therefore a
correct ciphertext from the point of view of the adversary, if there is no hash query on
e(g,h)^{r*} to the random oracle.
Phase 2. The simulator responds to private-key queries in the same way as in Phase
1 with the restriction that no private-key query is allowed on ID∗ .
Guess. A outputs a guess or ⊥. The challenge hash query is defined as

Q* = e(g,h)^{r*} = e(g,g)^{ r'·f(a)/a },

which is a query to the random oracle H2. The simulator randomly selects one value
y from the hash list (y1, Y1), (y2, Y2), ..., (y_{qH2}, Y_{qH2}) as the challenge hash query. We
define

r'·f(x)/x = F(x) + d/x,

where F(x) is a (q − 1)-degree polynomial, and d is a nonzero integer according to
the fact that x ∤ f(x). The simulator can use this hash query to compute

( Q* / e(g,g)^{F(a)} )^{1/d} = ( e(g,g)^{ r'·f(a)/a − F(a) } )^{1/d} = ( e(g,g)^{d/a} )^{1/d} = e(g,g)^{1/a},

which is the solution to the q-BDHI problem instance.
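The extraction step can be traced with toy polynomial arithmetic in the exponent: d is the constant term of r'·f(x), F(x) is the quotient of r'·f(x) by x, and the identity ( r'·f(a)/a − F(a) )/d = 1/a mod p can be checked directly. The prime, degree q, and seed below are illustrative.

```python
import random

p = 1_000_003
random.seed(4)
q = 4
a       = random.randrange(1, p)
w_star  = random.randrange(1, p)
ws      = [random.randrange(1, p) for _ in range(q)]
r_prime = random.randrange(1, p)

def mul_linear(poly, c):
    """Multiply a polynomial (lowest degree first) by (x + c) mod p."""
    out = [0] * (len(poly) + 1)
    for j, coef in enumerate(poly):
        out[j] = (out[j] + c * coef) % p
        out[j + 1] = (out[j + 1] + coef) % p
    return out

# f(x) = prod_i (x - w* + w_i)
f = [1]
for w in ws:
    f = mul_linear(f, (w - w_star) % p)

def evaluate(poly, x):
    acc = 0
    for coef in reversed(poly):
        acc = (acc * x + coef) % p
    return acc

# r' f(x) / x = F(x) + d/x: d is the constant term, F the quotient
d = r_prime * f[0] % p
F = [r_prime * c % p for c in f[1:]]
assert d != 0                     # holds because x does not divide f(x)

# Q* = e(g,h)^{r*} = e(g,g)^{r' f(a)/a}; extract e(g,g)^{1/a} in the exponent:
q_exp = r_prime * evaluate(f, a) % p * pow(a, -1, p) % p
solution = (q_exp - evaluate(F, a)) * pow(d, -1, p) % p
assert solution == pow(a, -1, p)  # exponent of e(g,g)^{1/a}
```

Note that d ≠ 0 is exactly the condition that w* does not coincide with any w_i, which holds except with negligible probability.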
Without making the challenge query Q* = e(g,h)^{r*} to the random oracle, the adver-
sary has no advantage in breaking the challenge ciphertext.
Probability of finding solution. According to the definition and simulation, if the
adversary does not query Q* = e(g,h)^{r*} to the random oracle H2, the adversary has
no advantage in guessing the encrypted message. Since the adversary has advan-
tage ε in guessing the chosen message according to the breaking assumption, the
adversary will query e(g,h)^{r*} to the random oracle with probability ε according to
Lemma 4.11.1. Therefore, a random choice of y from the hash list for H2 will be
equal to e(g,h)^{r*} with probability ε/qH2.
Advantage and time cost. Let Ts denote the time cost of the simulation. We have
Ts = O(qk·qH1), which is mainly dominated by the key generation. Therefore, the
simulator B will solve the q-BDHI problem with ( t + Ts, ε/(qH1·qH2) ).
This completes the proof of the theorem.
Chapter 10
Identity-Based Encryption Without Random
Oracles
In this chapter, we start by introducing the Boneh-Boyen IBE scheme [20], which is
selectively secure under the DBDH assumption. Then, we introduce a variant of the
Boneh-Boyen IBE scheme that achieves CCA security without one-time signatures
[28], using a chameleon hash function [94] instead. Then, we introduce the Waters IBE
scheme [101] and the Gentry IBE scheme [47], which are both fully secure under
the C-type and the I-type, respectively. The given schemes and/or proofs may be
different from the original ones.
• Run the key generation algorithm of S to generate a key pair (opk, osk).
• Choose a random number s ∈ Z_p and compute

(C1, C2, C3, C4) = ( (h·u^{ID})^s, (h·v^{H(opk)})^s, g^s, e(g1, g2)^s · m ).

• Run the signing algorithm of S to sign (C1, C2, C3, C4) using the secret key
osk. Let the corresponding signature be σ.
The final ciphertext on m is CT = (opk, C1, C2, C3, C4, σ).
Theorem 10.1.0.1 If the DBDH problem is hard and the adopted one-time sig-
nature scheme is strongly unforgeable, the Boneh-Boyen identity-based encryption
scheme is provably secure in the IND-sID-CCA security model with reduction loss
L = 2.
Proof. Suppose there exists an adversary A who can (t, qk , qd , ε)-break the encryp-
tion scheme in the IND-sID-CCA security model. We construct a simulator B to
solve the DBDH problem. Given as input a problem instance (g, g^a, g^b, g^c, Z) over
the pairing group PG, B runs A and works as follows.
Init: The adversary outputs an identity ID∗ ∈ Z p to be challenged.
Setup. Let H be a cryptographic hash function, and S be a secure one-time signa-
ture scheme. B runs the key generation algorithm of S to generate (opk*, osk*) and
simulates the other parameters in the master public key as

g1 = g^a,  g2 = g^b,  h = g^{−b + x1},  u = g^{ b/ID* + x2 },  v = g^{ b/H(opk*) + x3 },

for randomly chosen x1, x2, x3 ∈ Z_p, where α = a.
Phase 1. The adversary makes private-key queries and decryption queries in this
phase.
For a private-key query on ID ≠ ID*, let h·u^{ID} = g^{w1·b + w2}. The simulator randomly
chooses r' ∈ Z_p and computes d_ID as

d_ID = ( (g^a)^{−w2/w1} · (h·u^{ID})^{r'},  (g^a)^{−1/w1} · g^{r'} ),

which is a correctly distributed private key with the implicit randomness r = −a/w1 + r'.
For a decryption query, the simulator derives a decryption key d_{ID|opk} by implicitly
setting t = −a/w3 + t' for a randomly chosen t' ∈ Z_p, where h·v^{H(opk)} = g^{w3·b + w4}:

g2^α · (h·u^{ID})^r · (h·v^{H(opk)})^t = g^{ab} · (h·u^{ID})^r · ( g^{w3·b + w4} )^{ −a/w3 + t' }
  = (h·u^{ID})^r · (g^a)^{−w4/w3} · (h·v^{H(opk)})^{t'},

g^t = (g^a)^{−1/w3} · g^{t'}.
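The implicit substitution t = −a/w3 + t' above can be verified in the exponent (toy prime, g^x represented by x mod p; the sampled w1, ..., w4, r, t' are illustrative):

```python
import random

random.seed(5)
p = 1_000_003                 # toy prime order
a, b = random.randrange(1, p), random.randrange(1, p)
w1, w2, w3, w4, r, t_prime = (random.randrange(1, p) for _ in range(6))

hu = (w1 * b + w2) % p        # exponent of h * u^{ID}
hv = (w3 * b + w4) % p        # exponent of h * v^{H(opk)}
inv_w3 = pow(w3, -1, p)
t = (-a * inv_w3 + t_prime) % p   # implicit t = -a/w3 + t'

# g2^alpha * (h u^ID)^r * (h v^{H(opk)})^t  with alpha = a, g2 = g^b
lhs = (a * b + hu * r + hv * t) % p
# (h u^ID)^r * (g^a)^{-w4/w3} * (h v^{H(opk)})^{t'}
rhs = (hu * r - a * w4 * inv_w3 + hv * t_prime) % p

assert lhs == rhs             # the unknown g^{ab} term cancels
```

The cancellation of g^{ab} is the whole point: the simulator never knows a·b, yet the derived key has exactly the right distribution.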
Therefore, dID|opk is valid for ID|opk. The simulator uses this private key to decrypt
the ciphertext CT .
Challenge. A outputs two distinct messages m0, m1 ∈ GT to be challenged. The
simulator randomly chooses coin ∈ {0,1} and sets the challenge ciphertext CT* as

CT* = ( opk*, (g^c)^{x1 + ID*·x2}, (g^c)^{x1 + H(opk*)·x3}, g^c, Z·m_coin, σ* ),

where σ* is a signature of (C1*, C2*, C3*, C4*) using osk*. Let s = c. If Z = e(g,g)^{abc},
we have
(h·u^{ID*})^s = g^{ bc·(ID*/ID* − 1) + c·(x1 + ID*·x2) } = (g^c)^{ x1 + ID*·x2 },
(h·v^{H(opk*)})^s = g^{ bc·(H(opk*)/H(opk*) − 1) + c·(x1 + H(opk*)·x3) } = (g^c)^{ x1 + H(opk*)·x3 },
g^s = g^c,
e(g1, g2)^s · m_coin = Z · m_coin.
mpk : a, b, −b + x1, b/ID* + x2, b/H(opk*) + x3,
sk_ID : −a/w1 + r',
CT* : c.
The simulator sets the master public key as

g1 = g^a,  g2 = g^b,  h = g^{−b + x1},  u = g^{ b/ID* + x2 },  v = g^{ y1·b + x3 },  w = g^{ y2·b + x4 },

where α = a and a, b are from the problem instance. The master public key can
therefore be computed from the problem instance and the chosen parameters.
According to the above simulation, we have
h·u^{ID} = g^{ b·(ID/ID* − 1) + (x1 + ID·x2) },
h·v^{H(C)}·w^z = g^{ b·(y1·H(C) + y2·z − 1) + (x1 + H(C)·x3 + z·x4) }.
Phase 1. The adversary makes private-key queries and decryption queries in this
phase.
For a private-key query on ID ≠ ID*, let h·u^{ID} = g^{w1·b + w2}. The simulator randomly
chooses r' ∈ Z_p and computes d_ID as

d_ID = ( (g^a)^{−w2/w1} · (h·u^{ID})^{r'},  (g^a)^{−1/w1} · g^{r'} ).
For a decryption query on (ID, CT), the simulator derives a decryption key by implicitly
setting t = −a/w3 + t' for a randomly chosen t' ∈ Z_p, where h·v^{H(C)}·w^{C5} = g^{w3·b + w4}:

g2^α · (h·u^{ID})^r · (h·v^{H(C)}·w^{C5})^t = g^{ab} · (h·u^{ID})^r · ( g^{w3·b + w4} )^{ −a/w3 + t' }
  = (h·u^{ID})^r · (g^a)^{−w4/w3} · (h·v^{H(C)}·w^{C5})^{t'},

g^t = (g^a)^{−1/w3} · g^{t'}.
Therefore, dID|CT is valid for ID|CT . The simulator uses this private key to decrypt
the ciphertext CT .
Challenge. A outputs two distinct messages m0 , m1 ∈ GT to be challenged. The
simulator randomly chooses coin ∈ {0, 1} and sets the challenge ciphertext CT ∗ as
where C∗ = (C1∗ ,C3∗ ,C4∗ ), and z∗ is the value satisfying y1 H(C∗ ) + y2 z∗ − 1 = 0. Let
s = c. If Z = e(g,g)^{abc}, we have
(h·u^{ID*})^s = g^{ bc·(ID*/ID* − 1) + c·(x1 + ID*·x2) } = (g^c)^{ x1 + ID*·x2 },
(h·v^{H(C*)}·w^{z*})^s = g^{ bc·(y1·H(C*) + y2·z* − 1) + c·(x1 + H(C*)·x3 + z*·x4) } = (g^c)^{ x1 + H(C*)·x3 + z*·x4 },
g^s = g^c,
e(g1, g2)^s · m_coin = Z · m_coin.
a,  b,  −b + x1,  b/ID* + x2,  y1·b + x3,  y2·b + x4,  −a/w1 + r',  c,  (1 − y1·H(C*))/y2.

It is not hard to see that the above integers and (1 − y1·H(C))/y2 are random and
independent, because x1, x2, x3, x4, y1, y2 are randomly chosen.
The correctness of the simulation has been explained above. The randomness
of the simulation includes all random numbers in the master-key generation, the
private-key generation, and the challenge ciphertext generation. They are
a,  b,  −b + x1,  b/ID* + x2,  y1·b + x3,  y2·b + x4,  −a/w1 + r',  c,  (1 − y1·H(C*))/y2.
They are random and independent according to the above analysis of the decryption
simulation. Therefore, the simulation is indistinguishable from the real attack.
Probability of successful simulation. There is no abort in the simulation except
when a decryption query uses C5 = (1 − y1·H(C))/y2, and thus the probability of
successful simulation is

1 − qd/(p − qd) ≈ 1,

where the i-th adaptive choice of C5 is equal to the random number (1 − y1·H(C))/y2
with probability 1/(p − i + 1).
Probability of breaking the challenge ciphertext.
If Z is true, the simulation is indistinguishable from the real attack, and thus the
adversary has probability 1/2 + ε/2 of guessing the encrypted message correctly.
If Z is false, and there is no decryption query, it is easy to see that the challenge
ciphertext is a one-time pad because the message is encrypted using Z, which is
random and unknown to the adversary. In the simulation, Z is independent of the
challenge decryption key. Therefore, the decryption queries cannot help the adver-
sary find Z to break the challenge ciphertext. The adversary only has probability 1/2
of guessing the encrypted message correctly.
Advantage and time cost. The advantage of solving the DBDH problem is
P_S · (P_T − P_F) = 1 · ( 1/2 + ε/2 − 1/2 ) = ε/2.
Let Ts denote the time cost of the simulation. We have Ts = O(qk + qd), which is
mainly dominated by the key generation and the decryption. Therefore, the simula-
tor B will solve the DBDH problem with ( t + Ts, ε/2 ).
KeyGen: The key generation algorithm takes as input an identity ID ∈ {0, 1}n
and the master key pair (mpk, msk). Let ID[i] be the i-th bit of ID. It chooses a
random number r ∈ Z p . It returns the private key dID of ID as
d_ID = (d1, d2) = ( g2^α · ( u0 · ∏_{i=1}^{n} u_i^{ID[i]} )^r,  g^r ).
Decrypt: The decryption algorithm takes as input a ciphertext CT for ID, the
private key d_ID, and the master public key mpk. Let CT = (C1, C2, C3). It de-
crypts the message by computing

e(C1, d2) / e(C2, d1) · C3
  = e( (u0·∏_{i=1}^{n} u_i^{ID[i]})^s, g^r ) / e( g^s, g2^α·(u0·∏_{i=1}^{n} u_i^{ID[i]})^r ) · C3
  = e(g1, g2)^{−s} · e(g1, g2)^s · m = m.
Theorem 10.3.0.1 If the DBDH problem is hard, the Waters identity-based encryp-
tion scheme is provably secure in the IND-ID-CPA security model with reduction
loss L = 8(n + 1)qk , where qk is the number of private-key queries.
Proof. Suppose there exists an adversary A who can (t, qk , ε)-break the encryption
scheme in the IND-ID-CPA security model. We construct a simulator B to solve the
DBDH problem. Given as input a problem instance (g, g^a, g^b, g^c, Z) over the pairing
group PG, B runs A and works as follows.
Setup. B sets q = 2qk and randomly chooses k, x0, x1, ..., xn, y0, y1, ..., yn satisfying

k ∈ [0, n],  x0, x1, ..., xn ∈ [0, q − 1],  y0, y1, ..., yn ∈ Z_p.

It sets the master public key as

g1 = g^a,  g2 = g^b,  u0 = g1^{−kq + x0} · g^{y0},  u_i = g1^{x_i} · g^{y_i} for all i ∈ [1, n],

where α = a. The master public key can therefore be computed from the problem
instance and the chosen parameters.
We define F(ID), J(ID), K(ID) as

F(ID) = −kq + x0 + ∑_{i=1}^{n} ID[i]·x_i,
J(ID) = y0 + ∑_{i=1}^{n} ID[i]·y_i,
K(ID) = 0 if x0 + ∑_{i=1}^{n} ID[i]·x_i = 0 mod q,  and  K(ID) = 1 otherwise.
Then, we have

u0 · ∏_{i=1}^{n} u_i^{ID[i]} = g^{ F(ID)·a + J(ID) }.
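The functions F, J, K and the identity above can be exercised directly. The sketch below uses toy parameters (n, qk, the prime, and the sampled identity are illustrative) and represents each u_i = g1^{x_i}·g^{y_i} by its exponent:

```python
import random

random.seed(6)
p = 1_000_003
n, qk = 8, 4                  # toy identity length and query bound
q = 2 * qk
k = random.randrange(0, n + 1)
xs = [random.randrange(0, q) for _ in range(n + 1)]  # x_0..x_n in [0, q-1]
ys = [random.randrange(0, p) for _ in range(n + 1)]  # y_0..y_n in Z_p
a = random.randrange(1, p)

def F(ID):
    return -k * q + xs[0] + sum(ID[i] * xs[i + 1] for i in range(n))

def J(ID):
    return (ys[0] + sum(ID[i] * ys[i + 1] for i in range(n))) % p

def K(ID):
    return 0 if (xs[0] + sum(ID[i] * xs[i + 1] for i in range(n))) % q == 0 else 1

# With u_0 = g1^{-kq+x_0} g^{y_0} and u_i = g1^{x_i} g^{y_i}, the exponent of
# u_0 * prod u_i^{ID[i]} is F(ID)*a + J(ID); check for a random identity.
ID = [random.randrange(2) for _ in range(n)]
u_exp = (xs[0] - k * q) * a + ys[0]
for i in range(n):
    u_exp += ID[i] * (xs[i + 1] * a + ys[i + 1])
assert u_exp % p == (F(ID) * a + J(ID)) % p
assert K(ID) in (0, 1)
```

K(ID) = 1 is what allows key extraction below, while F(ID*) = 0 is what allows the challenge ciphertext to be built; the x_i being small (modulo q) is what makes both events simultaneously likely.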
Phase 1. The adversary makes private-key queries in this phase. For a private-key
query on ID, if K(ID) = 0, the simulator aborts. Otherwise, B randomly chooses
r' ∈ Z_p and computes the private key d_ID as

d_ID = (d1, d2) = ( g2^{ −J(ID)/F(ID) } · ( u0 · ∏_{i=1}^{n} u_i^{ID[i]} )^{r'},  g2^{ −1/F(ID) } · g^{r'} ).

We have that d_ID is computable using g, g2, F(ID), J(ID), r', ID and the master pub-
lic key.
Let r = −b/F(ID) + r'. We have

g2^α · ( u0 · ∏_{i=1}^{n} u_i^{ID[i]} )^r = g^{ab} · ( g^{ F(ID)·a + J(ID) } )^{ −b/F(ID) + r' }
  = g^{ab} · g^{ −ab − J(ID)·b/F(ID) + r'·(F(ID)·a + J(ID)) }
  = g^{ −J(ID)·b/F(ID) } · g^{ r'·(F(ID)·a + J(ID)) }
  = g2^{ −J(ID)/F(ID) } · ( u0 · ∏_{i=1}^{n} u_i^{ID[i]} )^{r'},

g^r = g^{ −b/F(ID) + r' } = g2^{ −1/F(ID) } · g^{r'}.
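The derivation above, substituting r = −b/F(ID) + r', can be checked in the exponent. In the sketch below, F(ID) and J(ID) are replaced by arbitrary residues (with F(ID) nonzero, as guaranteed by K(ID) = 1), which suffices for the algebraic identity; the prime and seed are illustrative.

```python
import random

random.seed(7)
p = 1_000_003
a, b = random.randrange(1, p), random.randrange(1, p)
F_ID = random.randrange(1, p)      # F(ID) mod p, nonzero since K(ID) = 1
J_ID = random.randrange(0, p)
r_prime = random.randrange(1, p)

u_exp = (F_ID * a + J_ID) % p      # exponent of u_0 * prod u_i^{ID[i]}
inv_F = pow(F_ID, -1, p)
r = (-b * inv_F + r_prime) % p     # implicit r = -b/F(ID) + r'

# Real key: d1 = g2^alpha * (u0 prod)^r, d2 = g^r  (alpha = a, g2 = g^b)
real = ((a * b + u_exp * r) % p, r)
# Simulated key: d1 = g2^{-J/F} * (u0 prod)^{r'}, d2 = g2^{-1/F} * g^{r'}
sim = ((-J_ID * b * inv_F + u_exp * r_prime) % p,
       (-b * inv_F + r_prime) % p)

assert real == sim                 # g^{ab} cancels exactly as in the derivation
```

Again the unknown g^{ab} term cancels, which is why the simulator can answer the query without knowing the master secret a.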
Challenge. A outputs two distinct messages m0, m1 and one identity ID* to be
challenged. If F(ID*) ≠ 0, the simulator aborts. Otherwise, the simulator randomly
chooses coin ∈ {0,1} and sets the challenge ciphertext CT* as

CT* = (C1*, C2*, C3*) = ( (g^c)^{J(ID*)}, g^c, Z · m_coin ),

where c, Z are from the problem instance. Let s = c. If Z = e(g,g)^{abc}, then using
F(ID*) = 0 we have

( u0 · ∏_{i=1}^{n} u_i^{ID*[i]} )^s = g^{ J(ID*)·c } = (g^c)^{J(ID*)},
g^s = g^c,
e(g1, g2)^s · m_coin = Z · m_coin.
mpk : a, b, −kqa + x0·a + y0, x1·a + y1, x2·a + y2, ..., xn·a + yn,
d_ID : −b/F(ID) + r',
CT* : c.
We have

0 ≤ x0 + ∑_{i=1}^{n} ID[i]·x_i ≤ (n + 1)(q − 1),

where the range [0, (n + 1)(q − 1)] contains the integers 0q, 1q, 2q, ..., nq (n < q).
Let X = x0 + ∑_{i=1}^{n} ID[i]·x_i. Since all x_i and k are randomly chosen, we have
Pr[ F(ID*) = 0 ] = Pr[ X = 0 mod q ] · Pr[ X = kq | X = 0 mod q ] = 1/((n + 1)·q).
Since the pair (ID_i, ID*) for any i differ in at least one bit, K(ID_i) and F(ID*) differ
in the coefficient of at least one x_j, so that

Pr[ K(ID_i) = 0 | F(ID*) = 0 ] = 1/q.
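The probability Pr[F(ID*) = 0] = 1/((n+1)·q) can be confirmed exhaustively for toy parameters by enumerating all choices of k and x0, ..., xn; the fixed all-ones identity below is an arbitrary illustrative choice (x0 alone already makes X uniform modulo q, so any identity gives the same count).

```python
from itertools import product

n, q = 2, 4                  # toy parameters; the proof sets q = 2*qk
ID = [1] * n                 # fixed challenge identity

hits = total = 0
for k in range(n + 1):
    for xs in product(range(q), repeat=n + 1):
        X = xs[0] + sum(ID[i] * xs[i + 1] for i in range(n))
        total += 1
        if -k * q + X == 0:  # F(ID*) = 0  <=>  X = k*q
            hits += 1

# Exact agreement with 1/((n+1) * q): hits/total == 1/12 here
assert hits * (n + 1) * q == total
```

The two factors in the proof are visible in the enumeration: X ≡ 0 mod q happens for exactly a 1/q fraction of the x-choices, and the guess k then matches the right multiple with probability 1/(n+1).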
Based on the above results, we have

Pr[ successful simulation ] ≥ ( 1 − qk/q ) · 1/((n + 1)·q) = 1/(4(n + 1)·qk).

Let Ts denote the time cost of the simulation. We have Ts = O(qk), which is mainly
dominated by the key generation. Therefore, the simulator B will solve the DBDH
problem with ( t + Ts, ε/(8(n + 1)·qk) ).
This completes the proof of the theorem.
We require the same random numbers r1 , r2 , r3 for the same identity ID.
Encrypt: The encryption algorithm takes as input a message m ∈ GT, an iden-
tity ID, and the master public key mpk. It chooses a random number s ∈ Z_p and
returns the ciphertext CT as

CT = (C1, C2, C3, C4) = ( (g1·g^{−ID})^s, e(g,g)^s, e(h3, g)^s · m, e(h1, g)^s · e(h2, g)^{s·w} ),

where w = H(C1, C2, C3).
C3 / ( e(C1, d6) · C2^{d5} )
  = ( e(h3, g)^s · m ) / ( e( g^{(α−ID)·s}, g^{(β3−r3)/(α−ID)} ) · e(g,g)^{s·r3} ) = m.
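The decryption equation reduces to the exponent identity s·(α−ID)·(β3−r3)/(α−ID) + s·r3 = s·β3. A toy check (g^x represented by x mod p, pairing as multiplication of exponents; the sampled values are illustrative):

```python
import random

random.seed(8)
p = 1_000_003
alpha, ID, beta3, r3, s = (random.randrange(1, p) for _ in range(5))

# d5 = r3, d6 = g^{(beta3 - r3)/(alpha - ID)}; C1 = g^{(alpha - ID) s}, C2 = e(g,g)^s
d6 = (beta3 - r3) * pow((alpha - ID) % p, -1, p) % p
pairing_part = ((alpha - ID) * s % p) * d6 % p   # exponent of e(C1, d6)
masked = (pairing_part + s * r3) % p             # exponent of e(C1, d6) * C2^{d5}

# e(C1, d6) * C2^{d5} = e(h3, g)^s, so C3 divided by it recovers m
assert masked == beta3 * s % p
```

The (α − ID) factor cancels against the denominator in d6, so the mask depends only on β3 and s.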
Proof. Suppose there exists an adversary A who can (t, qk, qd, ε)-break the encryp-
tion scheme in the IND-ID-CCA security model. We construct a simulator B to
solve the q-DABDHE problem. Given as input a problem instance
(g0, g0^{a^{q+2}}, g, g^a, g^{a^2}, ..., g^{a^q}, Z) over the pairing group PG, B runs A and
works as follows.
Setup. B randomly chooses three q-degree polynomials F1(x), F2(x), F3(x) in Z_p[x].
It sets the master public key as

g1 = g^a,  h1 = g^{F1(a)},  h2 = g^{F2(a)},  h3 = g^{F3(a)},

where α = a; all of these are computable from the problem instance.
Phase 1. For a private-key query on ID, let f_{ID,i}(x) be defined as

f_{ID,i}(x) = ( F_i(x) − F_i(ID) ) / ( x − ID ).
We have that f_{ID,i}(x) for all i ∈ {1, 2, 3} are polynomials.
The simulator computes the private key d_ID as

d_ID = ( F1(ID), g^{f_{ID,1}(a)}, F2(ID), g^{f_{ID,2}(a)}, F3(ID), g^{f_{ID,3}(a)} ),

which is computable from g, g^a, g^{a^2}, ..., g^{a^q} and f_{ID,1}(x), f_{ID,2}(x), f_{ID,3}(x). Let
r1 = F1(ID), r2 = F2(ID), r3 = F3(ID). We have

g^{f_{ID,i}(a)} = g^{ (F_i(a) − F_i(ID))/(a − ID) } = g^{ (β_i − r_i)/(α − ID) } for h_i = g^{β_i},

so d_ID is a correctly distributed private key for ID.
= e(g0, g)^{a^{q+1}} · ∏_{i=0}^{q} e(g0, g)^{ f_i·a^i }
= Z · e( g0, ∏_{i=0}^{q} g^{ f_i·a^i } ),
e(h3, g)^s · m_coin = e( g^{(α−ID*)·s}, g^{ (β3−d5*)/(α−ID*) } ) · e(g,g)^{s·d5*} · m_coin
  = e(C1*, d6*) · (C2*)^{d5*} · m_coin,

e(h1, g)^s · e(h2, g)^{s·w*} = e( g^{(α−ID*)·s}, g^{ (β1−d1* + w*·(β2−d3*))/(α−ID*) } ) · e(g,g)^{ s·(d1* + d3*·w*) }
  = e( C1*, d2*·(d4*)^{w*} ) · (C2*)^{ d1* + d3*·w* }.
Since the polynomials are randomly chosen, all of their coefficients are random and
independent. We also have that ( F_i(a), F_i(ID*), F_i(ID_1), ..., F_i(ID_{qk}) ) is determined
by the coefficients of F_i through a Vandermonde-type matrix over the points
a, ID*, ID_1, ..., ID_{qk}. The matrix is a (qk + 2) × (qk + 2) matrix, and the determinant
of this matrix is
∏_{ y_i, y_j ∈ {a, ID*, ID_1, ID_2, ..., ID_{qk}}, i ≠ j } ( y_i − y_j ) ≠ 0.
Therefore, Fi (ID∗ ) and Fi (a), Fi (ID j ) for all i ∈ {1, 2, 3} and j ∈ [1, qk ] are random
and independent.
Probability of successful simulation. There is no abort in the simulation, and thus
the probability of successful simulation is 1.
Probability of breaking the challenge ciphertext.
If Z is true, the simulation is indistinguishable from the real attack, and thus the
adversary has probability 1/2 + ε/2 of guessing the encrypted message correctly.
If Z is false, we show that the adversary only has success probability
1/2 + qd/(p − qd) of guessing the encrypted message, as follows.
Since Z is random, without loss of generality, let Z = e(g0, g)^{a^{q+1}} · e(g,g)^z for
some random and nonzero integer z. Then C1*, C2*, C3* in the challenge ciphertext
can be rewritten as

C1* = g^{ s·(α−ID*) },  C2* = e(g,g)^{s+z},  C3* = e(g,g)^{ z·d5* } · e(h3, g)^s · m_coin.
Furthermore, only C3∗ contains the random number d5∗ . Therefore, if the adversary
cannot learn d5∗ from the query, the challenge ciphertext is a one-time pad from the
point of view of the adversary.
According to the randomness property, the adversary can only learn d5* from
a decryption query on (ID*, CT). For a decryption query on (ID*, CT), let
CT = (C1, C2, C3, C4), where C1 = (g1·g^{−ID*})^{s'}, C2 = e(g,g)^{s''}, and
w = H(C1, C2, C3). If s' = s'' (treated as a correct ciphertext) and the ciphertext is
accepted, the simulator will return

C3 / ( e(C1, d6*) · C2^{d5*} ) = C3 / e(h3, g)^{s'},

where the adversary learns nothing about d5* from the decryption result. Otherwise,
s' ≠ s'' (treated as an incorrect ciphertext), and in the following we prove that such
an incorrect ciphertext will be rejected except with negligible probability.
• Suppose (C1 ,C2 ,C3 ) = (C1∗ ,C2∗ ,C3∗ ) such that H(C1 ,C2 ,C3 ) = w = w∗ . For such
an incorrect ciphertext to pass the verification requires C4 = C4∗ . However, this
ciphertext cannot be queried because it is the challenge ciphertext.
• Suppose (C1, C2, C3) ≠ (C1*, C2*, C3*). Since the hash function is secure, we have
H(C1, C2, C3) = w ≠ w*. For such an incorrect ciphertext to pass the verification
requires the adversary to be able to compute C4 satisfying

C4 = e( C1, d2*·(d4*)^w ) · C2^{ d1* + d3*·w }
  = e( g^{(α−ID*)·s'}, g^{ (β1−d1* + w·(β2−d3*))/(α−ID*) } ) · e(g,g)^{ s''·(d1* + d3*·w) }
  = e(g,g)^{ s'·(β1 + w·β2) } · e(g,g)^{ (d1* + d3*·w)·(s'' − s') },

where d1* + d3*·w is random and unknown to the adversary, so the correct C4 can
only be guessed with negligible probability.
Advantage and time cost. The advantage of solving the q-DABDHE problem is

P_S · (P_T − P_F) = 1 · ( 1/2 + ε/2 − ( 1/2 + qd/(p − qd) ) ) ≈ ε/2.
Let Ts denote the time cost of the simulation. We have Ts = O(qk^2 + qd), which is
mainly dominated by the key generation and the decryption. Therefore, the simula-
tor B will solve the q-DABDHE problem with ( t + Ts, ε/2 ).
This completes the proof of the theorem.
References
1. Abdalla, M., Bellare, M., Rogaway, P.: The oracle Diffie-Hellman assumptions and an analy-
sis of DHIES. In: D. Naccache (ed.) CT-RSA 2001, LNCS, vol. 2020, pp. 143–158. Springer
(2001)
2. Adj, G., Canales-Martínez, I., Cruz-Cortés, N., Menezes, A., Oliveira, T., Rivera-Zamarripa,
L., Rodríguez-Henríquez, F.: Computing discrete logarithms in cryptographically-interesting
characteristic-three finite fields. IACR Cryptology ePrint Archive 2016, 914 (2016)
3. Adleman, L.M.: A subexponential algorithm for the discrete logarithm problem with appli-
cations to cryptography. In: FOCS 1979, pp. 55–60. IEEE Computer Society (1979)
4. An, J.H., Dodis, Y., Rabin, T.: On the security of joint signature and encryption. In: L.R.
Knudsen (ed.) EUROCRYPT 2002, LNCS, vol. 2332, pp. 83–107. Springer (2002)
5. Atkin, A.O.L., Morain, F.: Elliptic curves and primality proving. Mathematics of computa-
tion 61(203), 29–68 (1993)
6. Attrapadung, N., Cui, Y., Galindo, D., Hanaoka, G., Hasuo, I., Imai, H., Matsuura, K., Yang,
P., Zhang, R.: Relations among notions of security for identity based encryption schemes.
In: J.R. Correa, A. Hevia, M.A. Kiwi (eds.) LATIN 2006, LNCS, vol. 3887, pp. 130–141.
Springer (2006)
7. Bader, C., Hofheinz, D., Jager, T., Kiltz, E., Li, Y.: Tightly-secure authenticated key ex-
change. In: Y. Dodis, J.B. Nielsen (eds.) TCC 2015, LNCS, vol. 9014, pp. 629–658. Springer
(2015)
8. Bader, C., Jager, T., Li, Y., Schäge, S.: On the impossibility of tight cryptographic reductions.
In: M. Fischlin, J. Coron (eds.) EUROCRYPT 2016, LNCS, vol. 9666, pp. 273–304. Springer
(2016)
9. Barker, E., Barker, W., Burr, W., Polk, W., Smid, M.: Recommendation for key management
part 1: General (revision 3). NIST special publication 800(57), 1–147 (2012)
10. Bellare, M., Boldyreva, A., Micali, S.: Public-key encryption in a multi-user setting: Security
proofs and improvements. In: B. Preneel (ed.) EUROCRYPT 2000, LNCS, vol. 1807, pp.
259–274. Springer (2000)
11. Bellare, M., Desai, A., Pointcheval, D., Rogaway, P.: Relations among notions of security for
public-key encryption schemes. In: H. Krawczyk (ed.) CRYPTO 1998, LNCS, vol. 1462, pp.
26–45. Springer (1998)
12. Bellare, M., Miner, S.K.: A forward-secure digital signature scheme. In: M.J. Wiener (ed.)
CRYPTO 1999, LNCS, vol. 1666, pp. 431–448. Springer (1999)
13. Bellare, M., Namprempre, C.: Authenticated encryption: Relations among notions and anal-
ysis of the generic composition paradigm. In: T. Okamoto (ed.) ASIACRYPT 2000, LNCS,
vol. 1976, pp. 531–545. Springer (2000)
14. Bellare, M., Rogaway, P.: Random oracles are practical: A paradigm for designing efficient
protocols. In: D.E. Denning, R. Pyle, R. Ganesan, R.S. Sandhu, V. Ashby (eds.) CCS 1993,
pp. 62–73. ACM (1993)
15. Bellare, M., Rogaway, P.: Optimal asymmetric encryption. In: A.D. Santis (ed.) EURO-
CRYPT 1994, LNCS, vol. 950, pp. 92–111. Springer (1994)
16. Bernstein, D.J., Engels, S., Lange, T., Niederhagen, R., Paar, C., Schwabe, P., Zimmer-
mann, R.: Faster elliptic-curve discrete logarithms on FPGAs. Tech. rep., Cryptology ePrint
Archive, Report 2016/382 (2016)
17. Blake, I., Seroussi, G., Smart, N.: Elliptic Curves in Cryptography, London Mathematical
Society Lecture Note Series, vol. 265. Cambridge University Press (1999)
18. Blake, I., Seroussi, G., Smart, N.: Advances in Elliptic Curve Cryptography, London
Mathematical Society Lecture Note Series, vol. 317. Cambridge University Press (2005)
19. BlueKrypt: Cryptographic Key Length Recommendation. Available at:
https://www.keylength.com
20. Boneh, D., Boyen, X.: Efficient selective-ID secure identity-based encryption without random oracles. In: C. Cachin, J. Camenisch (eds.) EUROCRYPT 2004, LNCS, vol. 3027, pp. 223–238. Springer (2004)
21. Boneh, D., Boyen, X.: Short signatures without random oracles. In: C. Cachin, J. Camenisch
(eds.) EUROCRYPT 2004, LNCS, vol. 3027, pp. 56–73. Springer (2004)
22. Boneh, D., Boyen, X., Goh, E.: Hierarchical identity based encryption with constant size
ciphertext. In: R. Cramer (ed.) EUROCRYPT 2005, LNCS, vol. 3494, pp. 440–456. Springer
(2005)
23. Boneh, D., Boyen, X., Shacham, H.: Short group signatures. In: M.K. Franklin (ed.)
CRYPTO 2004, LNCS, vol. 3152, pp. 41–55. Springer (2004)
24. Boneh, D., Franklin, M.K.: Identity-based encryption from the Weil pairing. In: J. Kilian
(ed.) CRYPTO 2001, LNCS, vol. 2139, pp. 213–229. Springer (2001)
25. Boneh, D., Franklin, M.K.: Identity-based encryption from the Weil pairing. SIAM J. Comput. 32(3), 586–615 (2003)
26. Boneh, D., Lynn, B., Shacham, H.: Short signatures from the Weil pairing. In: C. Boyd (ed.)
ASIACRYPT 2001, LNCS, vol. 2248, pp. 514–532. Springer (2001)
27. Canetti, R., Halevi, S., Katz, J.: A forward-secure public-key encryption scheme. In: E. Biham (ed.) EUROCRYPT 2003, LNCS, vol. 2656, pp. 255–271. Springer (2003)
28. Canetti, R., Halevi, S., Katz, J.: Chosen-ciphertext security from identity-based encryption.
In: C. Cachin, J. Camenisch (eds.) EUROCRYPT 2004, LNCS, vol. 3027, pp. 207–222.
Springer (2004)
29. Cash, D., Kiltz, E., Shoup, V.: The twin Diffie-Hellman problem and applications. In: N.P.
Smart (ed.) EUROCRYPT 2008, LNCS, vol. 4965, pp. 127–145. Springer (2008)
30. Chen, L., Cheng, Z.: Security proof of Sakai-Kasahara’s identity-based encryption scheme.
In: N.P. Smart (ed.) IMA 2005, LNCS, vol. 3796, pp. 442–459. Springer (2005)
31. Costello, C.: Pairings for beginners. Available at:
http://www.craigcostello.com.au/pairings/PairingsForBeginners.pdf
32. Cramer, R., Shoup, V.: A practical public key cryptosystem provably secure against adaptive
chosen ciphertext attack. In: H. Krawczyk (ed.) CRYPTO 1998, LNCS, vol. 1462, pp. 13–25.
Springer (1998)
33. Delerablée, C.: Identity-based broadcast encryption with constant size ciphertexts and private
keys. In: K. Kurosawa (ed.) ASIACRYPT 2007, LNCS, vol. 4833, pp. 200–215. Springer
(2007)
34. Diffie, W., Hellman, M.E.: New directions in cryptography. IEEE Trans. Information Theory
22(6), 644–654 (1976)
35. Dodis, Y., Franklin, M.K., Katz, J., Miyaji, A., Yung, M.: Intrusion-resilient public-key encryption. In: M. Joye (ed.) CT-RSA 2003, LNCS, vol. 2612, pp. 19–32. Springer (2003)
36. Dodis, Y., Katz, J., Xu, S., Yung, M.: Key-insulated public key cryptosystems. In: L.R.
Knudsen (ed.) EUROCRYPT 2002, LNCS, vol. 2332, pp. 65–82. Springer (2002)
37. Dolev, D., Dwork, C., Naor, M.: Non-malleable cryptography (extended abstract). In: ACM
STOC, pp. 542–552 (1991)
38. Dolev, D., Dwork, C., Naor, M.: Non-malleable Cryptography. Weizmann Science Press of
Israel (1998)
39. Dutta, R., Barua, R., Sarkar, P.: Pairing-based cryptographic protocols: A survey. IACR Cryptology ePrint Archive 2004, 64 (2004)
40. Freeman, D., Scott, M., Teske, E.: A taxonomy of pairing-friendly elliptic curves. J. Cryptology 23(2), 224–280 (2010)
41. Frey, G., Rück, H.G.: A remark concerning m-divisibility and the discrete logarithm in the divisor class group of curves. Mathematics of Computation 62(206), 865–874 (1994)
42. Fujisaki, E., Okamoto, T.: Secure integration of asymmetric and symmetric encryption
schemes. In: M.J. Wiener (ed.) CRYPTO 1999, LNCS, vol. 1666, pp. 537–554. Springer
(1999)
43. Galbraith, S.D., Gaudry, P.: Recent progress on the elliptic curve discrete logarithm problem.
Des. Codes Cryptography 78(1), 51–72 (2016)
44. Galbraith, S.D., Paterson, K.G., Smart, N.P.: Pairings for cryptographers. Discrete Applied
Mathematics 156(16), 3113–3121 (2008)
45. Gay, R., Hofheinz, D., Kiltz, E., Wee, H.: Tightly CCA-secure encryption without pairings.
In: M. Fischlin, J. Coron (eds.) EUROCRYPT 2016, LNCS, vol. 9665, pp. 1–27. Springer
(2016)
46. Gay, R., Hofheinz, D., Kohl, L.: Kurosawa-Desmedt meets tight security. In: J. Katz,
H. Shacham (eds.) CRYPTO 2017, LNCS, vol. 10403, pp. 133–160. Springer (2017)
47. Gentry, C.: Practical identity-based encryption without random oracles. In: S. Vaudenay (ed.)
EUROCRYPT 2006, LNCS, vol. 4004, pp. 445–464. Springer (2006)
48. Goh, E., Jarecki, S.: A signature scheme as secure as the Diffie-Hellman problem. In: E. Biham (ed.) EUROCRYPT 2003, LNCS, vol. 2656, pp. 401–415. Springer (2003)
49. Goldwasser, S., Micali, S.: Probabilistic encryption. J. Comput. Syst. Sci. 28(2), 270–299
(1984)
50. Goldwasser, S., Micali, S., Rivest, R.L.: A digital signature scheme secure against adaptive
chosen-message attacks. SIAM J. Comput. 17(2), 281–308 (1988)
51. Gordon, D.M.: A survey of fast exponentiation methods. J. Algorithms 27(1), 129–146
(1998)
52. Grémy, L.: Computations of discrete logarithms sorted by date. Available at: http://perso.ens-lyon.fr/laurent.gremy/dldb
53. Guo, F., Chen, R., Susilo, W., Lai, J., Yang, G., Mu, Y.: Optimal security reductions for
unique signatures: Bypassing impossibilities with a counterexample. In: J. Katz, H. Shacham
(eds.) CRYPTO 2017, LNCS, vol. 10402, pp. 517–547. Springer (2017)
54. Guo, F., Mu, Y., Susilo, W.: Short signatures with a tighter security reduction without random
oracles. Comput. J. 54(4), 513–524 (2011)
55. Guo, F., Susilo, W., Mu, Y., Chen, R., Lai, J., Yang, G.: Iterated random oracle: A universal
approach for finding loss in security reduction. In: J.H. Cheon, T. Takagi (eds.) ASIACRYPT
2016, LNCS, vol. 10032, pp. 745–776 (2016)
56. Hanaoka, Y., Hanaoka, G., Shikata, J., Imai, H.: Identity-based hierarchical strongly key-insulated encryption and its application. In: B.K. Roy (ed.) ASIACRYPT 2005, LNCS, vol. 3788, pp. 495–514. Springer (2005)
57. Hankerson, D., Menezes, A.J., Vanstone, S.: Guide to Elliptic Curve Cryptography. Springer
Professional Computing. Springer (2004)
58. Hellman, M.E., Reyneri, J.M.: Fast computation of discrete logarithms in GF(q). In:
D. Chaum, R.L. Rivest, A.T. Sherman (eds.) CRYPTO 1982, pp. 3–13. Plenum Press, New
York (1982)
59. Herzberg, A., Jakobsson, M., Jarecki, S., Krawczyk, H., Yung, M.: Proactive public key and
signature systems. In: R. Graveman, P.A. Janson, C. Neuman, L. Gong (eds.) CCS 1997, pp.
100–110. ACM (1997)
60. Hofheinz, D., Jager, T.: Tightly secure signatures and public-key encryption. In: R. Safavi-Naini, R. Canetti (eds.) CRYPTO 2012, LNCS, vol. 7417, pp. 590–607. Springer (2012)
61. Hohenberger, S., Waters, B.: Realizing hash-and-sign signatures under standard assumptions.
In: A. Joux (ed.) EUROCRYPT 2009, LNCS, vol. 5479, pp. 333–350. Springer (2009)
62. Itkis, G., Reyzin, L.: SiBIR: Signer-base intrusion-resilient signatures. In: M. Yung (ed.) CRYPTO 2002, LNCS, vol. 2442, pp. 499–514. Springer (2002)
63. Kachisa, E.J.: Constructing suitable ordinary pairing-friendly curves: A case of elliptic
curves and genus two hyperelliptic curves. Ph.D. thesis, Dublin City University (2011)
64. Katz, J.: Digital Signatures. Springer (2010)
65. Katz, J., Wang, N.: Efficiency improvements for signature schemes with tight security reductions. In: S. Jajodia, V. Atluri, T. Jaeger (eds.) CCS 2003, pp. 155–164. ACM (2003)
66. Kleinjung, T.: The Certicom ECC Challenge. Available at: https://listserv.nodak.edu/cgi-bin/wa.exe?A2=NMBRTHRY;256db68e.1410 (2014)
67. Kleinjung, T., Diem, C., Lenstra, A.K., Priplata, C., Stahlke, C.: Computation of a 768-bit
prime field discrete logarithm. In: J. Coron, J.B. Nielsen (eds.) EUROCRYPT 2017, LNCS,
vol. 10210, pp. 185–201 (2017)
68. Knuth, D.E.: The Art of Computer Programming, Vol. 2: Seminumerical Algorithms. Addison-Wesley (1997)
69. Koblitz, N.: Elliptic curve cryptosystems. Mathematics of Computation 48(177), 203–209
(1987)
70. Koblitz, N., Menezes, A.: Pairing-based cryptography at high security levels. In: N.P. Smart (ed.) IMA International Conference on Cryptography and Coding, LNCS, vol. 3796, pp. 13–36. Springer (2005)
71. Lamport, L.: Constructing digital signatures from a one-way function. Tech. Rep. CSL-98, SRI International, Palo Alto (1979)
72. Lenstra, A.K., Lenstra, H.W.: Algorithms in number theory. In: Handbook of Theoretical
Computer Science, Volume A: Algorithms and Complexity (A), pp. 673–716 (1990)
73. Lenstra, A.K., Verheul, E.R.: Selecting cryptographic key sizes. J. Cryptology 14(4), 255–293 (2001)
74. Lidl, R., Niederreiter, H.: Finite Fields (2nd Edition). Encyclopedia of Mathematics and its
Applications. Cambridge University Press (1997)
75. Lim, C.H., Lee, P.J.: A key recovery attack on discrete log-based schemes using a prime order subgroup. In: B.S. Kaliski Jr. (ed.) CRYPTO 1997, LNCS, vol. 1294, pp. 249–263. Springer (1997)
76. Lynn, B.: On the implementation of pairing-based cryptosystems. Ph.D. thesis, Stanford
University (2007)
77. Lysyanskaya, A.: Unique signatures and verifiable random functions from the DH-DDH separation. In: M. Yung (ed.) CRYPTO 2002, LNCS, vol. 2442, pp. 597–612. Springer (2002)
78. McCurley, K.S.: The discrete logarithm problem. Cryptology and computational number
theory 42, 49 (1990)
79. McEliece, R.J.: Finite Fields for Computer Scientists and Engineers. The Kluwer International Series in Engineering and Computer Science. Springer (1987)
80. Menezes, A., Okamoto, T., Vanstone, S.A.: Reducing elliptic curve logarithms to logarithms
in a finite field. IEEE Trans. Information Theory 39(5), 1639–1646 (1993)
81. Menezes, A., van Oorschot, P., Vanstone, S.: Handbook of Applied Cryptography. Discrete Mathematics and Its Applications. CRC Press (1996)
82. Menezes, A., Smart, N.P.: Security of signature schemes in a multi-user setting. Des. Codes
Cryptography 33(3), 261–274 (2004)
83. Miller, V.S.: Use of elliptic curves in cryptography. In: H.C. Williams (ed.) CRYPTO 1985,
LNCS, vol. 218, pp. 417–426. Springer (1985)
84. Naor, M., Yung, M.: Public-key cryptosystems provably secure against chosen ciphertext
attacks. In: H. Ortiz (ed.) ACM STOC, pp. 427–437. ACM (1990)
85. Nielsen, J.B.: Separating random oracle proofs from complexity theoretic proofs: The non-committing encryption case. In: M. Yung (ed.) CRYPTO 2002, LNCS, vol. 2442, pp. 111–126. Springer (2002)
86. Park, J.H., Lee, D.H.: An efficient IBE scheme with tight security reduction in the random
oracle model. Des. Codes Cryptography 79(1), 63–85 (2016)
87. Pollard, J.M.: Monte Carlo methods for index computation (mod p). Mathematics of Computation 32(143), 918–924 (1978)
88. Rackoff, C., Simon, D.R.: Non-interactive zero-knowledge proof of knowledge and chosen
ciphertext attack. In: J. Feigenbaum (ed.) CRYPTO 1991, LNCS, vol. 576, pp. 433–444.
Springer (1991)
89. Rosen, K.H.: Elementary Number Theory and Its Applications (5th Edition). Addison-Wesley (2004)
90. Rotman, J.J.: An Introduction to the Theory of Groups. Graduate Texts in Mathematics.
Springer (1995)
91. Sakai, R., Kasahara, M.: ID based cryptosystems with pairing on elliptic curve. IACR Cryptology ePrint Archive 2003, 54 (2003)
92. Shacham, H.: New paradigms in signature schemes. Ph.D. thesis, Stanford University (2006)
93. Shamir, A.: Identity-based cryptosystems and signature schemes. In: G.R. Blakley, D. Chaum
(eds.) CRYPTO 1984, LNCS, vol. 196, pp. 47–53. Springer (1984)
94. Shamir, A., Tauman, Y.: Improved online/offline signature schemes. In: J. Kilian (ed.)
CRYPTO 2001, LNCS, vol. 2139, pp. 355–367. Springer (2001)
95. Shanks, D.: Class number, a theory of factorization and genera. In: Proc. Symp. Pure Math,
vol. 20, pp. 415–440 (1971)
96. Shoup, V.: A Computational Introduction to Number Theory and Algebra (2nd Edition). Cambridge University Press (2009)
97. Silverman, J.H.: The Arithmetic of Elliptic Curves (2nd Edition). Graduate Texts in Mathematics. Springer (2009)
98. Vasco, M.I., Magliveras, S., Steinwandt, R.: Group Theoretic Cryptography. Cryptography
and Network Security Series. CRC Press (2015)
99. Washington, L.C.: Elliptic Curves: Number Theory and Cryptography (2nd Edition). Discrete Mathematics and Its Applications. CRC Press (2008)
100. Watanabe, Y., Shikata, J., Imai, H.: Equivalence between semantic security and indistinguishability against chosen ciphertext attacks. In: Y. Desmedt (ed.) PKC 2003, LNCS, vol. 2567, pp. 71–84. Springer (2003)
101. Waters, B.: Efficient identity-based encryption without random oracles. In: R. Cramer (ed.)
EUROCRYPT 2005, LNCS, vol. 3494, pp. 114–127. Springer (2005)
102. Wenger, E., Wolfger, P.: Solving the discrete logarithm of a 113-bit Koblitz curve with an
FPGA cluster. In: A. Joux, A.M. Youssef (eds.) SAC 2014, LNCS, vol. 8781, pp. 363–379.
Springer (2014)
103. Yao, D., Fazio, N., Dodis, Y., Lysyanskaya, A.: ID-based encryption for complex hierarchies
with applications to forward security and broadcast encryption. In: V. Atluri, B. Pfitzmann,
P.D. McDaniel (eds.) CCS 2004, pp. 354–363. ACM (2004)
104. Zhang, F., Safavi-Naini, R., Susilo, W.: An efficient signature scheme from bilinear pairings
and its applications. In: F. Bao, R.H. Deng, J. Zhou (eds.) PKC 2004, LNCS, vol. 2947, pp.
277–290. Springer (2004)