
Fuchun Guo · Willy Susilo · Yi Mu

Introduction to Security Reduction
Fuchun Guo
School of Computing & Information Technology
University of Wollongong
Wollongong, New South Wales, Australia

Willy Susilo
School of Computing & Information Technology
University of Wollongong
Wollongong, New South Wales, Australia

Yi Mu
School of Computing & Information Technology
University of Wollongong
Wollongong, New South Wales, Australia

ISBN 978-3-319-93048-0 ISBN 978-3-319-93049-7 (eBook)


https://doi.org/10.1007/978-3-319-93049-7

Library of Congress Control Number: 2018946564

© Springer International Publishing AG, part of Springer Nature 2018


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or
information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt
from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein
or for any errors or omissions that may have been made. The publisher remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.

Printed on acid-free paper

This Springer imprint is published by the registered company Springer International Publishing AG
part of Springer Nature.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
To my lovely wife Yizhen,
two adorable sons John and Kevin,
and my kindly mother Suhua.
To the memory of my father Yongming.
–Fuchun Guo

To my wife Aurelia and our beloved son
Jayden, without whom this work would never
have been accomplished.
–Willy Susilo

To my family!
–Yi Mu
Preface

Security reduction is a very popular approach for proving security in public-key
cryptography. With security reduction, roughly speaking, we can show that breaking
a proposed scheme is as difficult as solving a mathematical hard problem. However,
how to program a correct security reduction using an adversary’s adaptive attack is
rather complicated. The reason is that there is no universal security reduction for all
proposed schemes.
Security reductions given in cryptographic research papers are often hard for be-
ginners to fully comprehend. To aid the beginners, some cryptography textbooks
have illustrated how to correctly program security reductions with simpler exam-
ples. However, security reductions mentioned in research papers and previous text-
books are usually for specific schemes. The difference in security reductions for
different schemes leads to confusion for the beginners. There is a need for a book
that systematically introduces how to correctly program a security reduction for a
cryptosystem, not for a specific scheme. With this in mind, we wrote this book,
which we hope will help the reader understand how to correctly program a security
reduction.
The contents of this book, especially the foundations of security reductions, are
based on our understanding and experience. The reader might find that the expla-
nations of concepts are slightly different from those in other sources, because we
have added some “condiments” to help the reader understand these concepts. For
example, in a security reduction, the adversary is not a black-box adversary but a
malicious adversary who has unbounded computational power.
We thought this book would be completed within one year, but we underesti-
mated its difficulty. It has taken more than four years to complete the writing of
this book. There must still be errors that have not yet been found. We welcome any
comments and suggestions.

University of Wollongong, Australia Fuchun Guo, Willy Susilo, and Yi Mu


May 2018

Acknowledgements

We finally accomplished something which is meaningful and useful for our research
society. Our primary goal is to make confusing concepts of security reductions van-
ish, and to provide a clear guide on how to program correct security reductions.
Here, we would like to record some of our major milestones as well as to acknowl-
edge several people who have helped us complete this book.
The time that we started to write this book can be traced back to the second half
of 2013, after Dr. Fuchun Guo received his PhD degree in July of that year. At that
time, being a research assistant, Fuchun was invited to co-supervise Professor Willy
Susilo and Professor Yi Mu’s PhD students at the University of Wollongong, Aus-
tralia. Fuchun’s primary task was to help Willy and Yi train PhD students with very
little background in public-key cryptography. It is evident that there is a big gap
between savvy researchers and PhD students just starting on their PhD journeys.
Furthermore, we found it was really ineffective for our students to read papers by
themselves to understand security proofs and some tricky methods in security reduc-
tions. How to quickly train our students remains an elusive problem as we have to
repeat the interactions with each individual student day by day. We collected some
basic but important knowledge that all our students must master in their studies to
conduct research in public-key cryptography. Then we decided to write this book to-
gether to help our students. Hence, the original motivation of writing this book was
to save our time in the training of our students. We do hope that this book will also
benefit others who want to start their research careers in public-key cryptography,
or others who want to study the techniques used in programming correct security
reductions.
The first version of this book was completed in April 2015. In that version, Chap-
ter 4 had only about 50 pages. That version was rather incomprehensible with a lot
of logic and consistency problems. Then, we started to polish the book, which was
completed in August 2017. It took 28 months to clarify many important concepts
that are contained in this book. We patiently crafted Chapter 4 to ensure that all con-
cepts and knowledge are presented clearly and are easy to understand. Originally,
we either did not fully understand many concepts or did not clearly know how to
explain them. A significant amount of time was used to think about how to explain
each concept and exemplify it with a simple, yet clear, example. We were very pas-
sionate in completing this book without thinking about any time constraints. The
external proofreading by our students was started in September 2017 and completed
in March 2018. More than ten PhD students were involved in the proofreading. We
believe this was an invaluable experience for us, which paints a very nice story to
share and remember. This book would never have been completed without the hard
work of our students.
At the early stage of this book writing, we received a lot of feedback from the
process of training our students. This invaluable experience helped us see which
concepts are hard for students to understand and how to clearly explain these. We
are indebted to our colleagues and students: Rongmao Chen, Jianchang Lai, Peng
Jiang, Nan Li, and Yinhao Jiang. They provided insightful feedback and thoughts
when we trained them in public-key cryptography. We can now proudly say that
these people have now completed their PhD studies and they have mastered the
required skills as independent researchers in public-key cryptography, thanks to the
information and training that are provided in this book.
When we completed the writing of this book, we decided to invite our PhD stu-
dents to read it first. Without too much surprise, our students still found many con-
fusing concepts and unclear knowledge points. They provided a lot of invaluable
comments and advice that have been used to improve the quality of this book. In
particular, more than 20 pages were added to Chapter 4 to improve the clarity of
this important chapter. Specifically, we would like to thank these people: Jianchang
Lai, Zhen Zhao, Ge Wu, Peng Jiang, Zhongyuan Yao, Tong Wu, and Shengming Xu
for their helpful advice and feedback.
The first manuscript given to students for proofreading was full of typos and
grammatical errors. We would like to thank the following people for their help in
improving this book: Jianchang Lai, Fatemeh Rezaeibagha, Zhen Zhao, Ge Wu,
Peng Jiang, Xueqiao Liu, Zhongyuan Yao, Tong Wu, Shengming Xu, Binrui Zhu, Ke
Wang, Yannan Li, and Yanwei Zhou.
We would also like to thank all authors of published references that have been
cited in this book, especially those authors whose schemes have been used as exam-
ples. We merely reorganized this knowledge and put it together with our understand-
ing and our logic. We would also like to thank the Springer editor Ronan Nugent and
the copy-editors of Springer, who gave us a lot of insightful comments and advice
that have indeed improved the quality and clarity of this book.
Last but not least, we would like to thank our families who have always been very
supportive. We spent so much time in editing and correcting this book, and without
their patience, it would have been impossible to complete. Thank you.
Finally, we hope that the reader will find that this book is useful.

University of Wollongong, Australia Fuchun Guo, Willy Susilo, and Yi Mu


May 2018
Contents

1 Guide to This Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

2 Notions, Definitions, and Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5


2.1 Digital Signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 Public-Key Encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.3 Identity-Based Encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.4 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

3 Foundations of Group-Based Cryptography . . . . . . . . . . . . . . . . . . . . . . 13


3.1 Finite Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.1.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.1.2 Field Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.1.3 Field Choices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.1.4 Computations over a Prime Field . . . . . . . . . . . . . . . . . . . . . . . 15
3.2 Cyclic Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.2.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.2.2 Cyclic Groups of Prime Order . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.2.3 Group Exponentiations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.2.4 Discrete Logarithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.2.5 Cyclic Groups from Finite Fields . . . . . . . . . . . . . . . . . . . . . . . 19
3.2.6 Group Choice 1: Multiplicative Groups . . . . . . . . . . . . . . . . . . 19
3.2.7 Group Choice 2: Elliptic Curve Groups . . . . . . . . . . . . . . . . . . 20
3.2.8 Computations over a Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.3 Bilinear Pairings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.3.1 Symmetric Pairing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.3.2 Asymmetric Pairing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.3.3 Computations over a Pairing Group . . . . . . . . . . . . . . . . . . . . . 25
3.4 Hash Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.5 Further Reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26


4 Foundations of Security Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29


4.1 Introduction to Basic Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.1.1 Mathematical Primitives and Superstructures . . . . . . . . . . . . . 29
4.1.2 Mathematical Problems and Problem Instances . . . . . . . . . . . 30
4.1.3 Cryptography, Cryptosystems, and Schemes . . . . . . . . . . . . . . 31
4.1.4 Algorithm Classification 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4.1.5 Polynomial Time and Exponential Time . . . . . . . . . . . . . . . . . 32
4.1.6 Negligible and Non-negligible . . . . . . . . . . . . . . . . . . . . . . . . . 32
4.1.7 Insecure and Secure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.1.8 Easy and Hard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.1.9 Algorithm Classification 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.1.10 Algorithms in Cryptography . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.1.11 Hard Problems in Cryptography . . . . . . . . . . . . . . . . . . . . . . . . 35
4.1.12 Security Levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.1.13 Hard Problems and Hardness Assumptions . . . . . . . . . . . . . . . 36
4.1.14 Security Reductions and Security Proofs . . . . . . . . . . . . . . . . . 36
4.2 An Overview of Easy/Hard Problems . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.2.1 Computational Easy Problems . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.2.2 Computational Hard Problems . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.2.3 Decisional Easy Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.2.4 Decisional Hard Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.2.5 How to Prove New Hard Problems . . . . . . . . . . . . . . . . . . . . . . 45
4.2.6 Weak Assumptions and Strong Assumptions . . . . . . . . . . . . . 47
4.3 An Overview of Security Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.3.1 Security Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.3.2 Weak Security Models and Strong Security Models . . . . . . . . 49
4.3.3 Proof by Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.3.4 Proof by Contradiction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.3.5 What Is Security Reduction? . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.3.6 Real Scheme and Simulated Scheme . . . . . . . . . . . . . . . . . . . . 51
4.3.7 Challenger and Simulator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.3.8 Real Attack and Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.3.9 Attacks and Hard Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.3.10 Reduction Cost and Reduction Loss . . . . . . . . . . . . . . . . . . . . . 53
4.3.11 Loose Reduction and Tight Reduction . . . . . . . . . . . . . . . . . . . 53
4.3.12 Security Level Revisited . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.3.13 Ideal Security Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.4 An Overview of Correct Security Reduction . . . . . . . . . . . . . . . . . . . . 56
4.4.1 What Should Bob Do? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.4.2 Understanding Security Reduction . . . . . . . . . . . . . . . . . . . . . . 57
4.4.3 Successful Simulation and Indistinguishable Simulation . . . . 57
4.4.4 Failed Attack and Successful Attack . . . . . . . . . . . . . . . . . . . . 58
4.4.5 Useless Attack and Useful Attack . . . . . . . . . . . . . . . . . . . . . . 59
4.4.6 Attack in Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.4.7 Successful/Correct Security Reduction . . . . . . . . . . . . . . . . . . 60

4.4.8 Components of a Security Proof . . . . . . . . . . . . . . . . . . . . . . . . 60


4.5 An Overview of the Adversary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.5.1 Black-Box Adversary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.5.2 What Is an Adaptive Attack? . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.5.3 Malicious Adversary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.5.4 The Adversary in a Toy Game . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.5.5 Adversary’s Successful Attack and Its Probability . . . . . . . . . 63
4.5.6 Adversary’s Computational Ability . . . . . . . . . . . . . . . . . . . . . 64
4.5.7 The Adversary’s Computational Ability in a Reduction . . . . 64
4.5.8 The Adversary in a Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.5.9 What the Adversary Knows . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.5.10 What the Adversary Never Knows . . . . . . . . . . . . . . . . . . . . . . 67
4.5.11 How to Distinguish the Given Scheme . . . . . . . . . . . . . . . . . . . 67
4.5.12 How to Generate a Useless Attack . . . . . . . . . . . . . . . . . . . . . . 68
4.5.13 Summary of Adversary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.6 An Overview of Probability and Advantage . . . . . . . . . . . . . . . . . . . . . 69
4.6.1 Definitions of Probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.6.2 Definitions of Advantage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
4.6.3 Malicious Adversary Revisited . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.6.4 Adaptive Choice Revisited . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.6.5 Useless, Useful, Loose, and Tight Revisited . . . . . . . . . . . . . . 74
4.6.6 Important Probability Formulas . . . . . . . . . . . . . . . . . . . . . . . . 74
4.7 An Overview of Random and Independent . . . . . . . . . . . . . . . . . . . . . . 75
4.7.1 What Are Random and Independent? . . . . . . . . . . . . . . . . . . . . 76
4.7.2 Randomness Simulation with a General Function . . . . . . . . . 76
4.7.3 Randomness Simulation with a Linear System . . . . . . . . . . . . 79
4.7.4 Randomness Simulation with a Polynomial . . . . . . . . . . . . . . 81
4.7.5 Indistinguishable Simulation and Useful Attack Together . . . 82
4.7.6 Advantage and Probability in Absolutely Hard Problems . . . 83
4.8 An Overview of Random Oracles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
4.8.1 Security Proof with Random Oracles . . . . . . . . . . . . . . . . . . . . 84
4.8.2 Hash Functions vs Random Oracles . . . . . . . . . . . . . . . . . . . . . 84
4.8.3 Hash List . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.8.4 How to Program Security Reductions with Random Oracles 86
4.8.5 Oracle Response and Its Probability Analysis . . . . . . . . . . . . . 86
4.8.6 Summary of Using Random Oracles . . . . . . . . . . . . . . . . . . . . 88
4.9 Security Proofs for Digital Signatures . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.9.1 Proof Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.9.2 Advantage Calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
4.9.3 Simulatable and Reducible . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.9.4 Simulation of Secret Key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.9.5 Partition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
4.9.6 Tight Reduction and Loose Reduction Revisited . . . . . . . . . . 92
4.9.7 Summary of Correct Security Reduction . . . . . . . . . . . . . . . . . 93
4.10 Security Proofs for Encryption Under Decisional Assumptions . . . . 94

4.10.1 Proof Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94


4.10.2 Classification of Ciphertexts . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
4.10.3 Classification of the Challenge Ciphertext . . . . . . . . . . . . . . . . 96
4.10.4 Simulation of the Challenge Ciphertext . . . . . . . . . . . . . . . . . . 96
4.10.5 Advantage Calculation 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
4.10.6 Probability PT of Breaking the True Challenge Ciphertext . . 98
4.10.7 Probability PF of Breaking the False Challenge Ciphertext . . 98
4.10.8 Advantage Calculation 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
4.10.9 Definition of One-Time Pad . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
4.10.10 Examples of One-Time Pad . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
4.10.11 Analysis of One-Time Pad . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
4.10.12 Simulation of Decryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
4.10.13 Simulation of Challenge Decryption Key . . . . . . . . . . . . . . . . 104
4.10.14 Probability Analysis for PF . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
4.10.15 Examples of Advantage Results for AKF and AIF . . . . . . . . . . . 106
4.10.16 Advantage Calculation 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4.10.17 Summary of Correct Security Reduction . . . . . . . . . . . . . . . . 109
4.11 Security Proofs for Encryption Under Computational Assumptions . 109
4.11.1 Random and Independent Revisited . . . . . . . . . . . . . . . . . . . . . 109
4.11.2 One-Time Pad Revisited . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
4.11.3 Solution to Hard Problem Revisited . . . . . . . . . . . . . . . . . . . . . 110
4.11.4 Simulation of Challenge Ciphertext . . . . . . . . . . . . . . . . . . . . . 111
4.11.5 Proof Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
4.11.6 Challenge Ciphertext and Challenge Hash Query . . . . . . . . . . 113
4.11.7 Advantage Calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
4.11.8 Analysis of No Advantage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.11.9 Requirements of Decryption Simulation . . . . . . . . . . . . . . . . . 116
4.11.10 An Example of Decryption Simulation . . . . . . . . . . . . . . . . . . 116
4.11.11 Summary of Correct Security Reduction . . . . . . . . . . . . . . . . 118
4.12 Simulatable and Reducible with Random Oracles . . . . . . . . . . . . . . . . 119
4.12.1 H-Type: Hashing to Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
4.12.2 C-Type: Commutative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
4.12.3 I-Type: Inverse of Group Exponent . . . . . . . . . . . . . . . . . . . . . 122
4.13 Examples of Incorrect Security Reductions . . . . . . . . . . . . . . . . . . . . . 123
4.13.1 Example 1: Distinguishable . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
4.13.2 Example 2: Useless Attack by Public Key . . . . . . . . . . . . . . . . 126
4.13.3 Example 3: Useless Attack by Signature . . . . . . . . . . . . . . . . . 128
4.14 Examples of Correct Security Reductions . . . . . . . . . . . . . . . . . . . . . . 130
4.14.1 One-Time Signature with Random Oracles . . . . . . . . . . . . . . . 130
4.14.2 One-Time Signature Without Random Oracles . . . . . . . . . . . . 133
4.14.3 One-Time Signature with Indistinguishable Partition . . . . . . . 136
4.15 Summary of Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
4.15.1 Concepts Related to Proof . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
4.15.2 Preliminaries and Proof by Contradiction . . . . . . . . . . . . . . . . 140
4.15.3 Security Reduction and Its Difficulty . . . . . . . . . . . . . . . . . . . 141

4.15.4 Simulation and Its Requirements . . . . . . . . . . . . . . . . . . . . . . . 142


4.15.5 Towards a Correct Security Reduction . . . . . . . . . . . . . . . . . . . 144
4.15.6 Other Confusing Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146

5 Digital Signatures with Random Oracles . . . . . . . . . . . . . . . . . . . . . . . . . . 147


5.1 BLS Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
5.2 BLS+ Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
5.3 BLS# Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
5.4 BBRO Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
5.5 ZSS Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
5.6 ZSS+ Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
5.7 ZSS# Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
5.8 BLSG Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165

6 Digital Signatures Without Random Oracles . . . . . . . . . . . . . . . . . . . . . . 173


6.1 Boneh-Boyen Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
6.2 Gentry Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
6.3 GMS Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
6.4 Waters Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
6.5 Hohenberger-Waters Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189

7 Public-Key Encryption with Random Oracles . . . . . . . . . . . . . . . . . . . . . 193


7.1 Hashed ElGamal Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
7.2 Twin Hashed ElGamal Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
7.3 Iterated Hashed ElGamal Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
7.4 Fujisaki-Okamoto Hashed ElGamal Scheme . . . . . . . . . . . . . . . . . . . 202

8 Public-Key Encryption Without Random Oracles . . . . . . . . . . . . . . . . . . 209


8.1 ElGamal Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
8.2 Cramer-Shoup Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211

9 Identity-Based Encryption with Random Oracles . . . . . . . . . . . . . . . . . . 215


9.1 Boneh-Franklin Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
9.2 Boneh-BoyenRO Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
9.3 Park-Lee Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
9.4 Sakai-Kasahara Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224

10 Identity-Based Encryption Without Random Oracles . . . . . . . . . . . . . . 229


10.1 Boneh-Boyen Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
10.2 Boneh-Boyen+ Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
10.3 Waters Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
10.4 Gentry Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
Chapter 1
Guide to This Book

The first step in constructing a provably secure cryptosystem in public-key cryp-
tography is to clarify its cryptographic notion and formalize the definitions of the
algorithm and its corresponding security model. A cryptographic notion helps the
reader understand the definition of the algorithm, while the security model is essen-
tial for measuring the strength of a proposed scheme. Both a scheme construction
and its security proof require knowledge of the corresponding cryptographic foun-
dations. With this in mind, we arranged the book chapters in order to capture the
working process of a provably secure scheme, as depicted in Figure 1.1.

Fig. 1.1 Steps for constructing a provably secure scheme in public-key cryptography

There are two popular methods for security proofs in public-key cryptography,
namely game-based proof and simulation-based proof. The former can also be clas-
sified into two categories, i.e., security reduction and game hopping. This book cov-
ers only security reduction, which starts with the assumption that there exists an
adversary who can break the proposed scheme. In security proofs with security re-
duction, a concrete security reduction depends on the corresponding cryptosystem,
the scheme, and the underlying hard problem. There is no universal approach to
program the security reduction for all schemes. This book introduces security re-
ductions for three specific cryptosystems: digital signatures, public-key encryption,
and identity-based encryption. All examples and schemes given in this book are
constructed over cyclic groups with or without a bilinear map.
The contents of each chapter are outlined as follows. Chapter 2 briefly revisits
cryptographic notions, algorithms, and security models. This chapter can be skipped
if the reader is familiar with the definitions. Chapter 3 introduces the foundations of
group-based cryptography: we introduce finite fields, cyclic groups, bilinear pairing,
and hash functions. Our introduction mainly focuses on efficiently computable oper-
ations and the group representation. We minimize the description of the preliminary
knowledge of group-based cryptography. Chapter 4 is the most important chapter
in this book. In this chapter, we classify and explain the fundamental concepts of
security reduction and also summarize how in general to program a full security re-
duction for digital signatures and encryption. We take examples from group-based
cryptography, when it is necessary to give examples to explain the concepts. The re-
maining chapters of this book are dedicated to the security proofs of some selected
schemes in order to help the reader understand how to program the security reduc-
tion correctly. The security proof in each selected scheme corresponds to a useful
reduction technique.
About Notation. This book prefers to use the following notation. The same notation
may have different meanings in different applications.
For mathematical primitives:
• q, p: prime numbers.
• F_{q^n}: the finite field where q is the characteristic, and n is a positive integer.
• k: the size of embedding degree in an extension field denoted by F_{(q^n)^k}.
• Z_p: the integer set {0, 1, 2, · · · , p − 1}.
• Z_p^*: the integer set {1, 2, · · · , p − 1}.
• H: a general group.
• G: a cyclic group of prime order p.
• u, v: general elements in a field or a group.
• g, h: group elements of a cyclic group.
• w, x, y, z: integers in an integer set, such as Z_p.
• e: a bilinear map.

For scheme constructions:


• λ : a security parameter.
• (G, g, p): a general cyclic group G of prime order p where g is a generator of G.
• |p|: the bit length of the number p in the binary representation.
• |g|: the bit length of the group element g in the binary representation.

• |G|: the number of group elements in the group G.


• PG = (G, G_T, g, p, e): a pairing group composed of two groups G, G_T of the
same prime order p with a generator g of G and a bilinear map e : G × G → G_T.
• {0, 1}^∗: the space of all bit strings.
• {0, 1}^n: the space of all n-bit strings.
• α, β, γ: random integers in Z_p as secret keys.
• g, h, u, v: group elements.
• r, s: random numbers in Z_p.
• n: a general positive number associated with the corresponding scenario.
• i, j: indexing numbers.
• m: a plaintext message.
• σm = (σ1 , σ2 , · · · , σn ): a signature of m where σi denotes the i-th element.
• CT = (C1 ,C2 , · · · ,Cn ): a ciphertext where Ci denotes the i-th element.
• (pk, sk): a key pair where pk is the public key and sk is the secret key.
• dID : a private key of identity ID in identity-based cryptography.

For hard problems:


• I: an instance of a mathematical hard problem.
• Z: the target to be decided in an instance of a decisional hard problem in which
Z is either a true element or a false element.
• g, h, u, v: group elements.
• a, b, c: random and unknown exponents from Z_p in the problem instance I.
• F(x), f(x), g(x): (random) polynomials in Z_p[x], namely polynomials in x where
all coefficients are randomly chosen from Z_p.
• F_i, G_i, f_i, a_i: the coefficients of x^i in polynomials.
• n, k, l: general positive integers associated with the corresponding scenario.

For security models and security proofs:


• A : the adversary.
• C : the challenger.
• B: the simulator.
• ε: the advantage of breaking a scheme or solving a hard problem.
• t: the time cost of breaking a scheme.
• q: the number associated with the underlying hard problem.
• qs : the number of signature queries.
• qk : the number of private-key queries in identity-based cryptography.
• qd : the number of decryption queries.
• qH : the number of hash queries to random oracles.
• c, coin: a bit randomly chosen from {0, 1}.
• w, x, y, z: secret and random numbers chosen from Z_p by the simulator.
• Ts : the time cost of the security reduction.

A Note to Research Students. Security reduction requires very tricky analysis.
Even if you can understand a security reduction from others, it may still be chal-
lenging for you to program a correct security reduction for your own scheme. A
traditional Chinese proverb says

“Seeing once is better than hearing 100 times, but doing once is better than
seeing 100 times.”

To best use this book, you can try to prove (Doing) the schemes provided in the book
based on the knowledge in Chapter 4, prior to reading the security proofs given in
the book (Seeing). You will understand more about which part is the most difficult
for you and how security reductions can be programmed correctly. The reader can
visit the authors’ homepages to find supplementary resources for this book.
Chapter 2
Notions, Definitions, and Models

In this chapter, we briefly revisit important knowledge including the cryptographic
notions, algorithms, and security models of digital signatures, public-key encryp-
tion, and identity-based encryption. For convenience in the presentation, we split the
traditional key generation algorithm of digital signatures and public-key encryption
into the system parameter generation algorithm and the key generation algorithm,
where the system parameters can be shared by all users. Each cryptosystem in this
book is composed of four algorithms.

2.1 Digital Signatures

A digital signature is a fundamental tool in cryptography that has been widely ap-
plied to authentication and non-repudiation. Take authentication as an example. A
party, say Alice, wants to convince all other parties that a message m is published
by her. To do so, Alice generates a public/secret key pair (pk, sk) and publishes the
public key pk to all verifiers. To generate a signature σm on m, she digitally signs m
with her secret key sk. Upon receiving (m, σm ), any receiver who already knows pk
can verify the signature σm and confirm the origin of the message m.
A digital signature scheme consists of the following four algorithms.

SysGen: The system parameter generation algorithm takes as input a security
parameter λ. It returns the system parameters SP.
KeyGen: The key generation algorithm takes as input the system parameters
SP. It returns a public/secret key pair (pk, sk).
Sign: The signing algorithm takes as input a message m from its message space,
the secret key sk, and the system parameters SP. It returns a signature of m
denoted by σm .


Verify: The verification algorithm takes as input a message-signature pair
(m, σm ), the public key pk, and the system parameters SP. It returns “accept” if
σm is a valid signature of m signed with sk; otherwise, it returns “reject.”

Correctness. Given any (pk, sk, m, σm ), if σm is a valid signature of m signed
with sk, the verification algorithm on (m, σm , pk) will return “accept.”
Security. Without the secret key sk, it is hard for any probabilistic polynomial-
time (PPT) adversary to forge a valid signature σm on a new message m that can
pass the signature verification.
In the security model of digital signatures, the security is modeled by a game
played by a challenger and an adversary, where during the interaction between them
the challenger generates a signature scheme and the adversary tries to break the
scheme. That is, the challenger first generates a key pair (pk, sk), sends the public
key pk to the adversary, and keeps the secret key. The adversary can then make
signature queries on any messages adaptively chosen by the adversary itself. Finally,
the adversary returns a forged signature of a new message that has not been queried.
This security notion is called existential unforgeability.
The security model of existential unforgeability against chosen-message attacks
(EU-CMA) can be described as follows.
Setup. Let SP be the system parameters. The challenger runs the key generation al-
gorithm to generate a key pair (pk, sk) and sends pk to the adversary. The challenger
keeps sk to respond to signature queries from the adversary.
Query. The adversary makes signature queries on messages that are adaptively cho-
sen by the adversary itself. For a signature query on the message mi , the challenger
runs the signing algorithm to compute σmi and then sends it to the adversary.
Forgery. The adversary returns a forged signature σm∗ on some m∗ and wins the
game if
• σm∗ is a valid signature of the message m∗ .
• A signature of m∗ has not been queried in the query phase.
The advantage ε of winning the game is the probability of returning a valid forged
signature.
Definition 2.1.0.1 (EU-CMA) A signature scheme is (t, qs , ε)-secure in the EU-
CMA security model if there exists no adversary who can win the above game in
time t with advantage ε after it has made qs signature queries.
A stronger security model for digital signatures is defined as follows.
Definition 2.1.0.2 (SU-CMA) A signature scheme is (t, qs , ε)-secure in the security
model of strong unforgeability against chosen-message attacks (SU-CMA) if there
exists no adversary who can win the above game in time t with advantage ε after it
has made qs signature queries, where the forged signature can be on any message
as long as it is different from all queried signatures.

In the definition of (standard) digital signatures, the secret key sk does not need
to be updated during the signature generation. We name this signature stateless sig-
nature. In contrast, if the secret key sk needs to be updated before the generation
of each signature, we name it stateful signature. Stateful signature schemes will be
introduced in Section 6.3 and Section 6.5 in this book.

2.2 Public-Key Encryption

Public-key encryption is another important tool in public-key cryptography, which
has demonstrated many useful applications such as data confidentiality, key ex-
change, oblivious transfer, etc. Take data confidentiality as an example. A party,
say Bob, wants to send a sensitive message m to another party, say Alice, though
they do not share any secret key. Alice first generates a public/secret key pair (pk, sk)
and publishes her public key pk to all senders. With pk, Bob can then encrypt the
sensitive message m and send the resulting ciphertext to Alice. Alice can in turn
decrypt the ciphertext with the secret key sk and obtain the message m.
A public-key encryption scheme consists of the following four algorithms.

SysGen: The system parameter generation algorithm takes as input a security
parameter λ. It returns the system parameters SP.
KeyGen: The key generation algorithm takes as input the system parameters
SP. It returns a public/secret key pair (pk, sk).
Encrypt: The encryption algorithm takes as input a message m from its mes-
sage space, the public key pk, and the system parameters SP. It returns a ci-
phertext CT = E[SP, pk, m].
Decrypt: The decryption algorithm takes as input a ciphertext CT , the secret
key sk, and the system parameters SP. It returns a message m or outputs ⊥ to
denote a failure.

Correctness. Given any (SP, pk, sk, m,CT ), if CT = E[SP, pk, m] is a ciphertext
encrypted with pk on the message m, the decryption of CT with the secret key sk
will return the message m.
Security. Without the secret key sk, it is hard for any PPT adversary to extract
the message m from the given ciphertext CT = E[SP, pk, m].
The indistinguishability security of public-key encryption is modeled by a game
played by a challenger and an adversary. The challenger generates an encryption
scheme, while the adversary tries to break the scheme. To start, the challenger gen-
erates a key pair (pk, sk), sends the public key pk to the adversary, and keeps the
secret key sk. The adversary outputs two distinct messages m0 , m1 from the same
message space to be challenged. The challenger generates a challenge ciphertext
CT ∗ on a message mc randomly chosen from {m0 , m1 }. If decryption queries are al-
lowed, the adversary can make decryption queries on any ciphertexts that are adap-
tively chosen by the adversary itself with the restriction that no decryption query is
allowed on CT ∗ . Finally, the adversary outputs a guess of the chosen message mc in
the challenge ciphertext CT ∗ .
Formally, the security model of indistinguishability against chosen-ciphertext at-
tacks (IND-CCA) can be described as follows.
Setup. Let SP be the system parameters. The challenger runs the key generation al-
gorithm to generate a key pair (pk, sk) and sends pk to the adversary. The challenger
keeps sk to respond to decryption queries from the adversary.
Phase 1. The adversary makes decryption queries on ciphertexts that are adaptively
chosen by the adversary itself. For a decryption query on the ciphertext CTi , the
challenger runs the decryption algorithm and then sends the decryption result to the
adversary.
Challenge. The adversary outputs two distinct messages m0 , m1 from the same
message space, which are adaptively chosen by the adversary itself. The chal-
lenger randomly chooses c ∈ {0, 1} and then computes a challenge ciphertext
CT ∗ = E[SP, pk, mc ], which is given to the adversary.
Phase 2. The challenger responds to decryption queries in the same way as in Phase
1 with the restriction that no decryption query is allowed on CT ∗ .
Guess. The adversary outputs a guess c′ of c and wins the game if c′ = c.
The advantage ε of the adversary in winning this game is defined as

ε = 2 |Pr[c′ = c] − 1/2|.
Definition 2.2.0.1 (IND-CCA) A public-key encryption scheme is (t, qd , ε)-secure
in the IND-CCA security model if there exists no adversary who can win the above
game in time t with advantage ε after it has made qd decryption queries.
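The advantage formula can be read operationally: run the game many times and compare the adversary's success rate with the 1/2 that blind guessing already achieves. The Python sketch below uses hypothetical scheme and adversary interfaces and takes qd = 0 (the IND-CPA case, with no decryption oracle); an adversary that guesses c′ uniformly at random has Pr[c′ = c] = 1/2 and therefore an estimated advantage close to 0.

import random

# One run of the IND-CPA experiment (hypothetical interfaces; no decryption oracle).
def ind_cpa_trial(scheme, adversary, security_parameter):
    SP = scheme.SysGen(security_parameter)
    pk, sk = scheme.KeyGen(SP)

    # The adversary adaptively chooses two distinct challenge messages after seeing pk.
    m0, m1 = adversary.choose_messages(SP, pk)

    c = random.randint(0, 1)                       # the challenger's random coin
    ct_star = scheme.Encrypt(SP, pk, (m0, m1)[c])  # challenge ciphertext CT*

    c_guess = adversary.guess(SP, pk, ct_star)     # the adversary's guess c'
    return c_guess == c

def estimate_advantage(scheme, adversary, security_parameter, trials=10000):
    wins = sum(ind_cpa_trial(scheme, adversary, security_parameter) for _ in range(trials))
    pr = wins / trials
    return 2 * abs(pr - 1 / 2)                     # epsilon = 2|Pr[c' = c] - 1/2|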
In general, we regard the IND-CCA model as the standard security model for the
security of encryption. There is a weaker version of indistinguishability, i.e., indis-
tinguishability against chosen-plaintext attacks (IND-CPA), which is also referred
to as semantic security, defined as follows.
Definition 2.2.0.2 (IND-CPA) A public-key encryption scheme is (t, ε)-secure in
the security model of indistinguishability against chosen-plaintext attacks (IND-
CPA) if the scheme is (t, 0, ε)-secure in the IND-CCA security model, where the
adversary is not allowed to make any decryption query.
In the description of security models, a random coin is chosen by the challenger
to decide which message will be encrypted in the challenge phase. In this book, we
denote the random coin by the symbol c ∈ {0, 1} or by the symbol coin ∈ {0, 1} if c
has been used in the hardness assumption.

2.3 Identity-Based Encryption

Identity-based encryption (IBE) is motivated by a disadvantage of public-key en-
cryption, which is that each public key looks like a random string and thus public-
key encryption needs a certificate system. In the notion of IBE, there is a master key
pair (mpk, msk) generated by a private-key generator (PKG). The master public key
mpk is published to all users, and the master secret key msk is kept by the PKG.
Suppose a party, say Bob, wants to send a sensitive message to another party, say
Alice. Bob simply encrypts the message with the master public key mpk and Alice’s
identity ID, such as Alice’s email address. Alice decrypts the ciphertext with her
private key dID , which is computed by the PKG with the identity ID and the master
secret key msk.
An IBE scheme only requires all encryptors to verify the validity of the mas-
ter public key mpk. Therefore, they do not have to verify the public keys of the
receivers since the public keys are the receivers’ identity information. Only the re-
ceiver matching the identity information is able to receive its private key from the
PKG and decrypt the corresponding ciphertext. IBE allows Bob to encrypt a mes-
sage for Alice using her name as the identity; then Alice applies for the correspond-
ing private key from the PKG. In this book, the decryption key is called the secret
key in PKE, while it is called the private key in IBE.
An identity-based encryption scheme consists of the following four algorithms.

Setup: The setup algorithm takes as input a security parameter λ. It returns a
master public/secret key pair (mpk, msk).
KeyGen: The key generation algorithm takes as input an identity ID and the
master key pair (mpk, msk). It returns the private key dID of ID.
Encrypt: The encryption algorithm takes as input a message m from its mes-
sage space, an identity ID, and the master public key mpk. It returns a ciphertext
CT = E[mpk, ID, m].
Decrypt: The decryption algorithm takes as input a ciphertext CT for ID, the
private key dID , and the master public key mpk. It returns a message m or out-
puts ⊥ to denote a failure.

Correctness. Given any (mpk, msk, ID, dID , m,CT ), if CT = E[mpk, ID, m] is
a ciphertext encrypted with ID on the message m, the decryption of CT with the
private key dID will return the message m.
Security. Without the private key dID , it is hard for any PPT adversary to extract
the message from the given ciphertext CT = E[mpk, ID, m].
The indistinguishability security of identity-based encryption is modeled by a
game played by a challenger and an adversary. The challenger generates an IBE
scheme, while the adversary tries to break the scheme. To start, the challenger gen-
erates a master key pair (mpk, msk), sends the master public key mpk to the ad-
versary, and keeps the master secret key msk. The adversary outputs two distinct
messages m0 , m1 and an identity ID∗ to be challenged. The challenger generates a
challenge ciphertext CT ∗ on a randomly chosen message from {m0 , m1 } for ID∗ .
During the game, the adversary can adaptively make private-key queries on any
identities other than the challenge identity and can make decryption queries on
any ciphertexts other than the challenge ciphertext. In particular, the adversary can
make a decryption query on (ID,CT ) satisfying either (ID = ID∗ ,CT ≠ CT ∗ ) or
(ID ≠ ID∗ ,CT = CT ∗ ). Finally, the adversary outputs a guess of the chosen mes-
sage in the challenge ciphertext CT ∗ .
The security model of indistinguishability against chosen-ciphertext attacks (IND-
ID-CCA) can be described as follows.
Setup. The challenger runs the setup algorithm to generate a master key pair
(mpk, msk) and sends mpk to the adversary. The challenger keeps msk to respond to
queries from the adversary.
Phase 1. The adversary makes private-key queries and decryption queries, where
identities and ciphertexts are adaptively chosen by the adversary itself.
• For a private-key query on IDi , the challenger runs the key generation algorithm
on IDi with the master secret key msk and then sends dIDi to the adversary.
• For a decryption query on (IDi ,CTi ), the challenger runs the decryption algo-
rithm with the private key dIDi and then sends the decryption result to the adver-
sary.
Challenge. The adversary outputs two distinct messages m0 , m1 from the same mes-
sage space and an identity ID∗ to be challenged, where m0 , m1 , ID∗ are all adaptively
chosen by the adversary itself. We require that the private key of ID∗ has not been
queried in Phase 1. The challenger randomly chooses c ∈ {0, 1} and then computes
a challenge ciphertext CT ∗ = E[mpk, ID∗ , mc ], which is given to the adversary.
Phase 2. The challenger responds to private-key queries and decryption queries in
the same way as in Phase 1 with the restriction that no private-key query is allowed
on ID∗ and no decryption query is allowed on (ID∗ ,CT ∗ ).
Guess. The adversary outputs a guess c′ of c and wins the game if c′ = c.
The advantage ε of the adversary in winning this game is defined as

ε = 2 |Pr[c′ = c] − 1/2|.
Definition 2.3.0.1 (IND-ID-CCA) An identity-based encryption scheme is (t, qk , qd ,
ε)-secure in the IND-ID-CCA security model if there exists no adversary who can
win the above game in time t with advantage ε after it has made qk private-key
queries and qd decryption queries.
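The distinctive bookkeeping in the IND-ID-CCA game concerns which queries remain allowed once the challenge is fixed. The hypothetical Python sketch below shows only the challenger's checks: the private key of ID∗ is never revealed, and a decryption query on (ID,CT ) is rejected only when ID = ID∗ and CT = CT ∗ hold simultaneously, so queries on (ID∗ ,CT ) with CT ≠ CT ∗ and on (ID,CT ∗ ) with ID ≠ ID∗ are both answered.

# Query checks in the IND-ID-CCA game (sketch; identities and ciphertexts are opaque values).
class IndIdCcaChallenger:
    def __init__(self):
        self.id_star = None        # challenge identity, fixed in the challenge phase
        self.ct_star = None        # challenge ciphertext, fixed in the challenge phase
        self.key_queried = set()   # identities whose private keys have been revealed

    def allow_private_key_query(self, identity):
        # The private key of the challenge identity is never revealed (Phase 1 or Phase 2).
        if identity == self.id_star:
            return False
        self.key_queried.add(identity)
        return True

    def allow_challenge(self, id_star):
        # ID* must not have had its private key queried in Phase 1.
        return id_star not in self.key_queried

    def allow_decryption_query(self, identity, ciphertext):
        # Only the exact pair (ID*, CT*) is forbidden.
        return not (identity == self.id_star and ciphertext == self.ct_star)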
There are two weaker security models, defined as follows.
Definition 2.3.0.2 (IND-sID-CCA) An identity-based encryption scheme is (t, qk , qd ,
ε)-secure in the selective-ID security model (IND-sID-CCA) if the encryption
scheme is (t, qk , qd , ε)-secure in the IND-ID-CCA security model but the adversary
must choose the challenge identity ID∗ before the setup phase.
Definition 2.3.0.3 (IND-ID-CPA) An identity-based encryption scheme is (t, qk , ε)-
secure in the security model of indistinguishability against chosen-plaintext at-
tacks (IND-ID-CPA) if the scheme is (t, qk , 0, ε)-secure in the IND-ID-CCA security
model, where the adversary is not allowed to make any decryption query.

In Phase 1 and Phase 2 of the security model, the adversary can alternately make
private-key queries and decryption queries. The total numbers of queries made by
the adversary are qk and qd in the security model, but the adversary can adaptively
decide the number of private-key queries denoted by q1 and the number of decryp-
tion queries denoted by q2 in Phase 1 by itself as long as q1 ≤ qk and q2 ≤ qd .

2.4 Further Reading

In this section, we briefly introduce developments of security models for digital
signatures, public-key encryption (PKE), and identity-based encryption (IBE).
Digital Signatures. Digital signatures were first introduced by Diffie and Hellman
[34] and formally defined by Goldwasser, Micali, and Rivest [50], where the EU-
CMA security model was first defined. One-time signature is a very special digital
signature invented by Lamport [71] and is an important building block for crypto-
graphic constructions.
Many security models for digital signatures have been proposed in the litera-
ture, where the security models of signatures are defined to model signature query
and signature forgery. The notion of strong unforgeability (SU) was discussed in
[13, 4]. If the adversary can query signatures but cannot decide which messages
are to be signed, the security model is defined as known-message attacks [50]
or random-message attacks [64]. If the adversary can choose the messages to be
queried but must do so before seeing the public key, the security model is defined as
weak chosen-message attacks [21], generic chosen-message attacks [50], or known-
message attacks [64]. We refer the reader to the book [64] authored by Jonathan
Katz for further reading on these models, the relations among them, and how to
transfer a weaker model to a stronger model. Note that the EU-CMA model is not
the strongest security model. Some security models (e.g., [12, 36, 59, 62]) have been
defined to capture leakage-resistant security for digital signatures, and some secu-
rity models (e.g., [82, 7, 8]) have been defined to consider the security under the
multi-user setting.
Public-Key Encryption. The security model for public-key encryption is defined
to model the decryption query and the security goal.
For the decryption query, we have the definitions of chosen-plaintext attacks
(CPA) [49], chosen-ciphertext attacks (CCA) [11, 88], and non-adaptive chosen-
ciphertext attacks (CCA1) [84]. In the CCA1 security model, the adversary is only
allowed to make decryption queries prior to receiving the challenge ciphertext. For
the security goal, we have the following four definitions.
• The definition of indistinguishability (IND) [49]: the adversary cannot distin-
guish the encrypted message in the challenge ciphertext.
• The definition of semantic security (SS) [49]: the adversary cannot compute the
encrypted message from the ciphertext.
• The definition of non-malleability (NM) [37, 38]: given a challenge ciphertext,
the adversary is unable to output another ciphertext such that the corresponding
encrypted messages are “meaningfully related.”
• The definition of plaintext awareness (PA) [15]: the adversary is unable to create
a ciphertext without knowing the underlying message for encryption.
The notion of semantic security is proved [100] to be equal to indistinguishability,
and non-malleability implies [11] indistinguishability under any type of attacks.
We refer the reader to the work [11] to see the relations among the secu-
rity models mentioned above. There also exist stronger security models (e.g.,
[59, 36, 27, 35]) defined to capture leakage-resistant security for PKE. Some se-
curity models (e.g., [10, 60, 45, 46]) have been defined to consider the security
under the multi-user setting.
Identity-Based Encryption. Identity-based cryptosystems were first introduced by
Shamir [93]. The security models of IND-ID-CPA and IND-ID-CCA were defined
in several works (e.g., [24, 25]). The security model of IND-sID-CCA was first
defined in [27, 28, 20]. Similarly to PKE, there are some variants of the IBE secu-
rity model such as IND-ID-CCA1, IND-sID-CCA1, NM-ID-CPA, NM-ID-CCA1,
NM-ID-CCA, NM-sID-CPA, SS-ID-CPA, SS-ID-CCA1, SS-ID-CCA, and SS-sID-
CPA. The work in [6] shows that non-malleability still implies indistinguishability
under any type of attacks, and semantic security still equals indistinguishability for
IBE. The stronger security models introduced in [103, 56] were proposed to capture
leakage-resistant security for IBE.
Chapter 3
Foundations of Group-Based Cryptography

In this chapter, we introduce some mathematical primitives including finite fields,
groups, and bilinear pairing that are the foundations of group-based cryptography.
We also describe three types of hash functions that play an important role in the
scheme constructions. We mainly focus on introducing the feasibility of basic oper-
ations and the size of binary representations.

3.1 Finite Fields

3.1.1 Definition

Definition 3.1.1.1 (Finite Field) A finite field (Galois field), denoted by (F, +, ∗),
is a set containing a finite number of elements with two binary operations “+”
(addition) and “∗” (multiplication) defined as follows.
• ∀u, v ∈ F, we have u + v ∈ F and u ∗ v ∈ F.
• ∀u1 , u2 , u3 ∈ F, (u1 + u2 ) + u3 = u1 + (u2 + u3 ) and (u1 ∗ u2 ) ∗ u3 = u1 ∗ (u2 ∗ u3 ).
• ∀u, v ∈ F, we have u + v = v + u and u ∗ v = v ∗ u.
• ∃0F , 1F ∈ F (identity elements), ∀u ∈ F, we have u + 0F = u and u ∗ 1F = u.
• ∀u ∈ F, ∃–u ∈ F such that u + (–u) = 0F .
• ∀u ∈ F∗ , ∃u−1 ∈ F∗ such that u ∗ u−1 = 1F . Here, F∗ = F\{0F }.
• ∀u1 , u2 , v ∈ F, we have (u1 + u2 ) ∗ v = u1 ∗ v + u2 ∗ v.
We denote by the symbol 0F ∈ F the identity element under the addition operation
and by the symbol 1F ∈ F the identity element under the multiplication operation.
We denote by −u the additive inverse of u and by u−1 the multiplicative inverse of
u. Note that the binary operations defined in the finite field are different from the
traditional arithmetical addition and multiplication.
A finite field, denoted by (Fqn , +, ∗) in this book, is a specific field where n is a
positive integer, and q is a prime number called the characteristic of Fqn . This finite

field has qn elements. Each element in the finite field can be seen as an n-length
vector, where each scalar in the vector is from the finite field Fq . Therefore, the bit
length of each element in this finite field is n · |q|.

3.1.2 Field Operations

In a finite field, two binary operations are defined: addition and multiplication. They
can be extended to subtraction and division through their inverses described as fol-
lows.
• The subtraction operation is defined from the addition. ∀u, v ∈ F, we have

u − v = u + (−v),

which calculates the addition of u and the additive inverse of v.


• The division operation is defined from the multiplication. ∀u ∈ F, v ∈ F∗ , we have

u/v = u ∗ v−1 ,

which calculates the multiplication of u and the multiplicative inverse of v.

3.1.3 Field Choices

We introduce three common classes of finite fields, namely prime fields, binary
fields, and extension fields.
• Prime Field Fq is a field of residue classes modulo q. There are q elements in this
field represented as Zq = {0, 1, 2, · · · , q − 1}, and two operations: the modular
addition and the modular multiplication. Furthermore,

−u = q − u and u^{−1} = u^{q−2} mod q.

• Binary Field F2n can be represented as a field of equivalence classes of polyno-


mials whose degree is n − 1 and whose coefficients are from F2 :

F_{2^n} = { a_{n−1}x^{n−1} + a_{n−2}x^{n−2} + · · · + a_1x + a_0 : a_i ∈ F_2 },




where the corresponding element in this field is an−1 an−2 · · · a1 a0 . The addition
in this field is calculated by applying XOR to each pair of two polynomial coef-
ficients, while the multiplication in this field requires an operation of reduction
modulo an irreducible binary polynomial f (x) of degree n. Furthermore,
−u = u and u^{−1} = u^{2^n − 2} mod f(x).
• Extension Field F(qn1 )n2 is an extension field of the field Fqn1 . The integer n2 is
called the embedding degree. Similarly to the binary field, the representation can
be described as

F_{(q^{n_1})^{n_2}} = { a_{n_2−1}x^{n_2−1} + a_{n_2−2}x^{n_2−2} + · · · + a_1x + a_0 : a_i ∈ F_{q^{n_1}} },




where the corresponding element in this field is an2 −1 an2 −2 · · · a1 a0 . The addi-
tion in this field denotes the addition of polynomials with coefficient arithmetic
performed in the field Fqn1 . The multiplication is performed by an operation of
reduction modulo an irreducible polynomial f (x) of degree n2 in Fqn1 [x]. The
computations of −u and u−1 are much more complicated than for the previous
two fields. We omit them in this book.

3.1.4 Computations over a Prime Field

Among three fields mentioned above, the prime field F p is the most important field
in group-based cryptography. This is due to the fact that the group order is usu-
ally a prime number. In the prime field F p , all elements are numbers in the set
Z p = {0, 1, 2, · · · , p − 1}. All the following modular arithmetic operations over the
prime field are efficiently computable. The detailed algorithms for conducting the
corresponding computations are outside the scope of this book.
• Modular Additive Inverse. Given y ∈ Z p , compute

−y mod p.

• Modular Multiplicative Inverse. Given z ∈ Z∗_p, compute

1/z = z^{−1} mod p.
• Modular Addition. Given y, z ∈ Z p , compute

y + z mod p.

• Modular Subtraction. Given y, z ∈ Z p , compute

y − z mod p.

• Modular Multiplication. Given y, z ∈ Z p , compute

y ∗ z mod p.

• Modular Division. Given y ∈ Z_p and z ∈ Z∗_p, compute

y/z mod p.

• Modular Exponentiation. Given y, z ∈ Z_p, compute

y^z mod p.

The modular multiplicative inverse requires z to be a nonzero integer. However,


in cryptography, we cannot avoid the case z = 0 during the calculation z−1 , although
it happens with negligible probability. If this occurs, we define 1/0 = 0 in this book
for all hard problems and schemes.
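To make these operations concrete, the following Python sketch implements them over a toy prime field; the prime p below is illustrative only and far too small for cryptographic use, and the convention 1/0 = 0 follows the definition above.

```python
# A minimal sketch of the modular operations above over a toy prime field F_p.
# The prime p is illustrative only; real schemes use primes of at least 160 bits.
p = 1019

def add(y, z): return (y + z) % p          # modular addition
def sub(y, z): return (y - z) % p          # modular subtraction
def mul(y, z): return (y * z) % p          # modular multiplication
def neg(y):    return (-y) % p             # modular additive inverse

def inv(z):
    # Modular multiplicative inverse via Fermat's little theorem: z^(p-2) mod p.
    # Following the convention above, we define 1/0 = 0.
    return pow(z, p - 2, p) if z % p != 0 else 0

def div(y, z): return mul(y, inv(z))       # modular division
def exp(y, z): return pow(y, z, p)         # modular exponentiation

assert mul(div(7, 3), 3) == 7              # (7/3) * 3 = 7 in F_p
assert add(5, neg(5)) == 0                 # 5 + (-5) = 0 in F_p
```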

3.2 Cyclic Groups

3.2.1 Definitions

Definition 3.2.1.1 (Abelian Group) An abelian group, denoted by (H, ·), is a set
of elements with one binary operation “·” defined as follows.
• ∀u, v ∈ H, we have u · v ∈ H.
• ∀u1 , u2 , u3 ∈ H, we have (u1 · u2 ) · u3 = u1 · (u2 · u3 ).
• ∀u, v ∈ H, we have u · v = v · u.
• ∃1H ∈ H, ∀u ∈ H, we have u · 1H = u.
• ∀u ∈ H, ∃u−1 ∈ H, such that u · u−1 = 1H .
We denote by 1H the identity element in this group. The only group operation
can be extended to another operation called group division, i.e., given u, the aim is
to compute u−1 . Note that the division u/v is equivalent to u · v−1 .
Definition 3.2.1.2 (Cyclic Group) An abelian group H is a cyclic group if there
exists (at least) one generator, denoted by h, which can generate the group H:
H = { h^1, h^2, · · · , h^{|H|} } = { h^0, h^1, h^2, · · · , h^{|H|−1} },

where |H| denotes the group order of H and h^{|H|} = h^0 = 1_H.


Definition 3.2.1.3 (Cyclic Subgroup of Prime Order) A group G is a cyclic sub-
group of prime order if it is a subgroup of a cyclic group H and |G| is a prime
number, where
• |G| is a divisor of |H|;
• There exists a generator g ∈ H, which generates G.
3.2.2 Cyclic Groups of Prime Order

In group-based cryptography, we prefer to use a cyclic group G of prime order p


due to the following reasons. Firstly, the cyclic group G is the smallest subgroup
without confinement attacks [75]. Secondly, any integer in {1, 2, · · · , p − 1} has a
modular multiplicative inverse, which is very useful in the scheme constructions.
For example, if g^x is a group element of G for any x ∈ {1, 2, · · · , p − 1}, then we
have that g^{1/x} is also a group element. Finally, any group element except 1_G in G is a
generator of this group. These three properties are desired for security and flexibility
in the construction of public-key cryptosystems. In this book, a (cyclic) group refers
to a cyclic group of prime order unless specified otherwise. Note that it is not nec-
essary to construct all group-based cryptosystems from a group of prime order. For
example, the ElGamal signature scheme can be constructed from any cyclic group.
To define a group for scheme constructions, we need to specify
• The space of the group, denoted by G.
• The generator of the group, denoted by g.
• The order of the group, denoted by p.
That is, (G, g, p) are the basic components when defining a group in the scheme con-
structions. Notice that the group operation can be simplified or omitted depending
on the choice of the group.

3.2.3 Group Exponentiations

Let (G, g, p) be a cyclic group and x be a positive integer. We denote by g^x the group
exponentiation, where g^x is defined as

g^x = g · g · · · g · g  (x copies of g).

The group exponentiation is composed of x − 1 copies of the group operation from
the above definition. According to the definition of the group (G, g, p), we have

g^x = g^{x mod p}.

Therefore, when an integer x is chosen for the group exponentiation, we can assume
that x is chosen from the set Z p and call the integer x the exponent.
In public-key cryptography, x is an extremely large exponent whose length is at
least 160 in the binary representation. Therefore, it is impractical to perform x − 1
copies of the group operations. Group exponentiation is frequently used in group-
based cryptography. There exist other polynomial-time algorithms for the compu-
tation of group exponentiation. The simplest algorithm is the square-and-multiply
algorithm, which is described as follows.
• Convert x into an n-bit binary string:

x = x_{n−1} · · · x_1x_0 = ∑_{i=0}^{n−1} x_i 2^i.

• Let g_i = g^{2^i}. Compute g_i = g_{i−1} · g_{i−1} for all i ∈ [1, n − 1].
• Set X to be the subset of {0, 1, 2, · · · , n − 1} where j ∈ X if x_j = 1.
• Compute g^x by

∏_{j∈X} g_j = ∏_{i=0}^{n−1} g_i^{x_i} = g^{∑_{i=0}^{n−1} x_i 2^i} = g^x.

The group exponentiation costs at most 2n − 2 group operations, which is linear


in the bit length of x. The time complexity is O(log x), which is much faster than
O(x) = x − 1 group operations.
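For illustration, the square-and-multiply algorithm above can be sketched in Python as follows, with the group operation instantiated as multiplication modulo a toy prime q; the parameters are hypothetical and chosen only to keep the example small.

```python
# Sketch of the square-and-multiply algorithm described above.  The group
# operation is instantiated as multiplication modulo a toy prime q for illustration.
def group_exp(g, x, q):
    bits = bin(x)[2:][::-1]                 # x = x_{n-1} ... x_1 x_0, least significant bit first
    result, power = 1, g                    # power holds g_i = g^(2^i)
    for bit in bits:
        if bit == '1':
            result = (result * power) % q   # multiply in g_i whenever x_i = 1
        power = (power * power) % q         # square: g_{i+1} = g_i * g_i
    return result

q = 1019
assert group_exp(7, 65537, q) == pow(7, 65537, q)
```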
Note that group exponentiation is just a common name for a cyclic group. It has
different names in specific groups. For example, we call it modular exponentiation
in a modular multiplicative group and point multiplication (or scalar multiplication)
in an elliptic curve group, respectively. These groups will be introduced in Section
3.2.6 and Section 3.2.7, respectively.

3.2.4 Discrete Logarithms

The integer x satisfying g^x = h, where g, h ∈ G are not the identity element 1_G, is


called the discrete logarithm to the base g of h. Computing x is known as the discrete
logarithm (DL) problem.
The DL problem is the fundamental hard problem in group-based cryptography.
There is no polynomial-time algorithm for solving the DL problem over a general
cyclic group. The only relatively efficient algorithm (the Pollard Rho Algorithm)

still requires O( p) steps where p is the group order. For example, if the group
order of G is as large as 2160 , solving the DL problem over the group G requires
280 steps, which an adversary cannot run in polynomial time. Solving a problem
with time complexity 2l means that the problem has l-bit security. Note that the
time complexity of solving the DL problem over a general cyclic group of order

p is O( p). However, for some specific groups, such as a cyclic group of order p
constructed from a prime field, the time complexity of solving the DL problem over

this group can be much less than O( p).
Given a cyclic group G, if |G| is not a prime number, for some group elements
g, h ∈ G, there exists either no discrete logarithm or more than one discrete loga-
rithm. However, if |G| is a prime number, there must exist only one solution x in
Z p . This is why we prefer to use a cyclic subgroup whose group order is a prime
number.
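For illustration, the following Python sketch implements the baby-step giant-step algorithm (a generic O(√p)-step method discussed further in Section 3.5, comparable in cost to the Pollard Rho algorithm mentioned above) over a toy prime-order subgroup; all parameters are hypothetical and far too small for real security.

```python
# Sketch of the baby-step giant-step algorithm, a generic method that solves the
# DL problem in about sqrt(p) group operations.  Toy parameters for illustration.
from math import isqrt

def discrete_log(g, h, modulus, order):
    """Find x with g^x = h (mod modulus), where g has the given prime order."""
    m = isqrt(order) + 1
    baby = {pow(g, j, modulus): j for j in range(m)}      # baby steps: g^j
    factor = pow(g, order - m, modulus)                   # g^(-m) = g^(order - m)
    gamma = h
    for i in range(m):                                    # giant steps: h * g^(-i*m)
        if gamma in baby:
            return (i * m + baby[gamma]) % order
        gamma = (gamma * factor) % modulus
    return None

# Toy subgroup of prime order 1013 inside Z_2027^* (2027 = 2 * 1013 + 1).
modulus, order = 2027, 1013
g = pow(3, 2, modulus)                    # a generator of the order-1013 subgroup
x = 777
assert discrete_log(g, pow(g, x, modulus), modulus, order) == x
```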
3.2.5 Cyclic Groups from Finite Fields

The algebraic structure of an abelian group is simpler than that of a finite field. The
reason is that the abelian group defines only one binary operation while the finite
field defines two. The operation properties are identical, and thus we can immedi-
ately obtain an abelian group from a finite field. For example, (Fqn , +) and (F∗qn , ∗)
are both abelian groups. It seems that there is no need to explore other implementa-
tions of cyclic groups.
However, we do need to construct more advanced cyclic groups with a finite field
for various reasons. For example, the Elliptic Curve Group was invented to reduce
the size of group representation for the same security level. The operation “·” in
the abelian group can be the same as or different from the operations “+, ∗” in the
finite field. For example, the group operation in the Elliptic Curve Group is a curve
operation over a finite field requiring both “+” and “∗” operations.

3.2.6 Group Choice 1: Multiplicative Groups

The first group choice is a multiplicative group (F∗qn , ∗) from a finite field under the
multiplication operation. The multiplicative group of a finite field is a cyclic group,
where the finite field can be a prime field, a binary field or an extension field.
Here, we introduce the multiplicative group modulo q from a prime field (F∗q , ∗).
The group elements, group generator, group order, and group operation are de-
scribed as follows.
• Group Elements. The space of the modulo multiplicative group is Z∗q = {1, 2, · · ·,
q − 1}. Therefore, each group element has |q| bits in the binary representation.
• Group Generator. There exists a generator h ∈ Z∗q , which can generate the group
Z∗q . However, not all elements of Z∗q are generators. The group element h is a
generator if and only if the minimum positive integer x satisfying h^x mod q = 1
is equal to q − 1.
• Group Order. The order of this group is q − 1. Since q is a prime number, Z∗q is
not a group of prime order for a large prime q (excluding q = 3).
• Group Operation. The group operation “ · ” in this group is integer multiplica-
tion modulo the prime number q. To be precise, let u, v ∈ Z∗q and “ × ” be the
mathematical multiplication operation. We have u · v = u × v mod q.
This modular multiplicative group is not a group of prime order. We can extract
a subgroup G of prime order p from it if p divides q − 1, namely p|(q − 1). To find
a generator g of G, the simplest approach is to search from 2 to q − 1 and select the
first u such that

u^{(q−1)/p} ≠ 1 mod q.

The generator of G is g = u^{(q−1)/p}, where g^p = 1 mod q, and the group is denoted by

G = { g, g^2, g^3, · · · , g^p }.

Here, g^p = 1_G is the identity group element.
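For illustration, the extraction described above can be sketched in Python with toy primes; a real instantiation requires a much larger q (at least 1,024 bits, as discussed below).

```python
# Sketch of extracting a generator g of the subgroup G of prime order p from
# Z_q^*, where p | (q - 1).  The primes below are toy values for illustration.
def find_generator(p, q):
    assert (q - 1) % p == 0
    for u in range(2, q):
        g = pow(u, (q - 1) // p, q)        # g = u^((q-1)/p) mod q
        if g != 1:                         # if g != 1, then g has order exactly p
            return g
    raise ValueError("no generator found")

p, q = 1013, 2027                          # p | (q - 1) since 2027 = 2 * 1013 + 1
g = find_generator(p, q)
assert pow(g, p, q) == 1                   # g^p = 1 mod q, so G = {g, g^2, ..., g^p}
```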


We use the group (G, g, p, q) for the scheme constructions, where g is the gener-
ator of G, p is the group order, and q is a large prime number satisfying p|(q − 1).
The multiplicative group can be used for group-based cryptography because the DL
problem over this group is hard, without polynomial-time solutions. However, there
exist sub-exponential-time algorithms for solving the DL problem over this multiplicative
group, whose time complexity is sub-exponential in the bit length of q,
such as 2^{8·∛(log_2 q)}. To make sure that the DL problem over the multiplicative group
has 2^80 time complexity or an 80-bit security level, the bit length of q must be at least
1,024 to resist sub-exponential attacks. Therefore, the length of each group element
is at least 1,024 bits.
In this group description, p, q are both prime numbers but p ≪ q. Note that in
the description of security proofs, we denote by q a different meaning, where q is
a number as large as the query number and p is the group order satisfying q ≪ p.
There is an interesting question. Notice that (Fq , +) from the prime field under
the modular addition operation is also a cyclic group. This modular additive group
has group order q, which is a prime number. It provides better features than the
modular multiplicative group whose group order is q − 1. So, it seems better to use
this modular additive group than the modular multiplicative group. However, this is
wrong and the reason is omitted here.

3.2.7 Group Choice 2: Elliptic Curve Groups

The second group choice is an elliptic curve group. An elliptic curve is a plane curve
defined over a finite field Fqn , where all points are on the following curve:

Y^2 = X^3 + aX + b

along with a distinguished point at infinity, denoted by ∞. Here, a, b ∈ Fqn and the
space of points is denoted by E(Fqn ). The finite field can be a prime field or a binary
field or others, and each field has a different computational efficiency.
The group elements, group generator, group order, and group operation of the
elliptic curve group are described as follows.

• Group Elements. We denote by E(Fqn ) the space of an elliptic curve group,


where all group elements in this group are points described with coordinates
(x, y) ∈ Fqn × Fqn . Theoretically, the bit length of each group element is

|x| + |y| = 2|Fqn | = 2n · |q|.

Notice that given an x-coordinate x and the curve, we can compute two y-
coordinates +y and −y. Therefore, with the curve, each group element (x, y) can
be simplified: (x, 1) to denote (x, +y), or (x, 0) to denote (x, −y). Sometimes, we
can even represent the group element with x only, because we can handle both
group elements (x, +y) and (x, −y) in computations that will return one correct
result. Therefore, the bit length of a group element is about n|q|.
• Group Generator. There exists a generator h ∈ E(Fqn ), which can generate the
group E(Fqn ). The point at infinity serves as the identity group element.
• Group Order. The group order of the elliptic curve group is denoted by

|E(F_{q^n})| = q^n + 1 − t,

where |t| ≤ 2√(q^n) and t is the trace of the Frobenius of the elliptic curve over the
field. Note that the group order is not a prime number for most curves.
• Group Operation. The group operation “ ·” in the elliptic curve group has two
different types of operations, which depend on the input of two group elements
u and v.
– If u = (x_u, y_u) and v = (x_v, y_v) are two distinct points, we draw a line through
u and v. This line will intersect the elliptic curve at a third point. We define
u · v as the reflection of the third point in the x-axis.
– Otherwise, if u = v, we draw the tangent line to the elliptic curve at u. This
line will intersect the elliptic curve at a second point. We define u · u as the
reflection of the second point in the x-axis.
The detailed group operation is dependent on the given group elements, curve,
and finite field. We omit a detailed description of the group operation here.
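For illustration, the chord-and-tangent rule can be sketched with the standard affine addition formulas over a prime field; the curve coefficients and the prime below are hypothetical toy choices, and the point at infinity is represented by None.

```python
# Sketch of the chord-and-tangent group operation using the standard affine
# formulas over F_q for the curve Y^2 = X^3 + aX + b.  Toy parameters only;
# None represents the point at infinity.
q, a, b = 97, 2, 3

def ec_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % q == 0:
        return None                                        # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, q) % q   # slope of the tangent line
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, q) % q          # slope of the chord through P and Q
    x3 = (lam * lam - x1 - x2) % q
    y3 = (lam * (x1 - x3) - y1) % q                        # reflect the third point in the x-axis
    return (x3, y3)

P = (3, 6)                                                 # on the curve: 6^2 = 3^3 + 2*3 + 3 mod 97
assert (P[1] ** 2 - (P[0] ** 3 + a * P[0] + b)) % q == 0
print(ec_add(P, P), ec_add(P, ec_add(P, P)))               # the points 2P and 3P
```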
This elliptic curve group is not always a group of prime order. However, we can
extract a subgroup G of prime order p from it if p is a divisor of the group order.
The extraction approach is the same as that in the modular multiplicative group. We
define the group (G, g, p) as an elliptic curve group for scheme constructions, where
G is a group, g is the generator of G, and p is the group order.
The DL problem over the elliptic curve group is also hard as it does not have
any polynomial-time solution. Furthermore, there is no sub-exponential-time algo-
rithm for solving the DL problem in a general elliptic curve group, which means that
we can choose the finite field as small as possible to reduce the size of the group
representation. This short representation property is the primary motivation for con-
structing a cyclic group from an elliptic curve. For example, to have an elliptic curve
group where the time complexity of solving the DL problem over this group is 2^80,
the bit length of the prime q in the prime field Fq for the elliptic curve group im-
plementation can be as small as 160, rather than 1,024 in the modular multiplicative
group. The tradeoff is that the group operation in the elliptic curve group is less
computationally efficient than that in the modular multiplicative group.
In an elliptic curve group, the group element in the binary representation can be
as small as the group order in the binary representation. That is, |g| = |p|. However,
this does not mean that all elliptic curve groups have this nice feature. For l-bit
security level, we must have at least |p| = 2 · l. The size of each group element g
depends on the choice of the finite field, and we have |g| ≥ |p| for all choices.
3.2.8 Computations over a Group

The following computations are the most common operations over a group G of
prime order p.
• Group Operation. Given g, h ∈ G, compute

g · h.

• Group Inverse. Given g ∈ G, compute

1/g = g^{−1}.

Since g^p = g · g^{p−1} = 1, we have g^{−1} = g^{p−1}.

• Group Division. Given g, h ∈ G, compute

g/h = g · h^{−1}.

• Group Exponentiation. Given g ∈ G and x ∈ Z_p, compute

g^x.

Note that the operations mentioned above do not represent all computations for
a group. We should also include all operations over the prime field, where the prime
number is the group order. For example, given the group (G, g, p), an additional
group element h, and x, y ∈ Z_p, we can compute g^{1/x} h^{−y}.
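For illustration, the following Python sketch performs one such combined computation, g^{1/x} h^{−y}, over the toy order-p subgroup of Z∗_q used in the earlier sketches; all values are hypothetical.

```python
# Sketch of a combined computation over a group (G, g, p): given an additional
# group element h and x, y in Z_p, compute g^(1/x) * h^(-y).  Here G is the toy
# order-p subgroup of Z_q^* with p = 1013 and q = 2027, as in the earlier sketches.
p, q = 1013, 2027
g = pow(2, (q - 1) // p, q)             # a generator of the order-p subgroup
h = pow(g, 123, q)                      # an additional group element
x, y = 45, 678

exp1 = pow(x, p - 2, p)                 # 1/x computed in Z_p (exponents live in Z_p)
exp2 = (-y) % p                         # -y in Z_p
value = (pow(g, exp1, q) * pow(h, exp2, q)) % q        # g^(1/x) * h^(-y)

# Sanity check: raising the result to the power x gives g * h^(-xy).
assert pow(value, x, q) == (g * pow(h, (-x * y) % p, q)) % q
```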

3.3 Bilinear Pairings

Roughly speaking, a bilinear pairing provides a bilinear map that maps two group
elements in elliptic curve groups to a third group element in a multiplicative group
without losing its isomorphic property. Bilinear pairing was originally introduced to
solve hard problems in elliptic curve groups by mapping a given problem instance
in the elliptic curve group into a problem instance in a multiplicative group, running
a sub-exponential-time algorithm to find the answer to the problem instance in the
multiplicative group, and then using the answer to solve the hard problem in the
elliptic curve group.
Bilinear pairing for scheme constructions is built from a pairing-friendly elliptic
curve where it should be easy to find an isomorphism from the elliptic curve group
to the multiplicative group. The instantiations of bilinear pairing, denoted by G1 ×
G2 → GT , fall into the following three types.
• Symmetric. G1 = G2 = G. We denote a symmetric pairing by G × G → GT .
• Asymmetric 1. G1 6= G2 with an efficient homomorphism ψ : G2 → G1 .


• Asymmetric 2. G1 6= G2 with no efficient homomorphism between G2 and G1 .
Bilinear pairing can be built from the prime-order groups (G1 , G2 , GT ) or the
composite-order groups (G1 , G2 , GT ). In the following two subsections, we only
introduce the symmetric pairing and the asymmetric pairing in prime-order groups,
and focus on their group representations.

3.3.1 Symmetric Pairing

The definition of symmetric pairing is stated as follows. Let PG = (G, GT , g, p, e)


be a symmetric-pairing group. Here, G is an elliptic curve subgroup, GT is a multi-
plicative subgroup, |G| = |GT | = p, g is a generator of G, and e is a map satisfying
the following three properties.
• For all u, v ∈ G and a, b ∈ Z_p, e(u^a, v^b) = e(u, v)^{ab}.
• e(g, g) is a generator of group GT .
• For all u, v ∈ G, there exist efficient algorithms to compute e(u, v).

This completes the definition of the symmetric pairing. Now, we introduce its
size efficiency.
Let E(F_{q^n})[p] be the elliptic curve subgroup of E(F_{q^n}) with order p over the basic
field F_{q^n}, and F_{q^{nk}}[p] be the multiplicative subgroup of the extension field F_{q^{nk}} with
order p, where k is the embedding degree. The bilinear pairing is actually defined
over
E(F_{q^n})[p] × E(F_{q^n})[p] → F_{q^{nk}}[p].
A secure bilinear pairing requires the DL problem to be hard over both the elliptic
curve group G and the multiplicative group GT . We should also make these groups
as small as possible for efficient group operations. However, the DL problem in the
multiplicative group defined over the extension field suffers from sub-exponential
attacks. Therefore, the size of qnk must be large enough to resist sub-exponential
attacks. That is why we need an embedding degree k to extend the field. For l-bit
security level, we have the following parameters.

|p| = 2 · l, to resist Pollard Rho attacks.


|g| ≈ |Fqn | ≥ 2 · l, which depends on the chosen elliptic curve.
|e(g, g)| = k · |Fqn |, which should be large enough to resist sub-exponential attacks.
We study the different choices of the security parameter for 80-bit security level,
namely l = 80.
• Option 1. We choose an elliptic curve where |Fqn | = 2 · l = 160. Since the ex-
tension field k · |Fqn | must be at least 1,024 bits in the binary representation to
resist sub-exponential attacks for 80-bit security, we should choose at least k = 7.
Therefore, we have |p| = |g| = 160 and |e(g, g)| = 1, 120. Unfortunately, no such
curve has been found for any k ≥ 7. Therefore, this means that we cannot con-
struct a symmetric pairing where the size of group elements in G is 160 bits for
80-bit security.
• Option 2. We choose the pairing group with embedding degree k = 2. For
k · |Fqn | = 1, 024, we have |Fqn | = 512. Therefore, |p| = 160, |g| = 512 and
|e(g, g)| = 1, 024. There exists such an elliptic curve with a minimum size of
GT for 80-bit security, but we cannot use it to construct schemes with short rep-
resentation for group elements particularly in G.

3.3.2 Asymmetric Pairing

The definition of asymmetric pairing (Asymmetric 2) is stated as follows. Let PG =


(G1 , G2 , GT , g1 , g2 , p, e) be an asymmetric-pairing group. Here, G1 , G2 are elliptic
curve subgroups, GT is a multiplicative subgroup, |G1 | = |G2 | = |GT | = p, g1 is
a generator of G1 , g2 is a generator of G2 , and e is a map satisfying the following
three properties.
• For all u ∈ G1, v ∈ G2, and a, b ∈ Z_p, e(u^a, v^b) = e(u, v)^{ab}.
• e(g1 , g2 ) is a generator of group GT .
• For all u ∈ G1 , v ∈ G2 , there exist efficient algorithms to compute e(u, v).

This completes the definition of the asymmetric pairing. Now, we introduce its
size efficiency.
Let E(F_{q^n})[p] be the elliptic curve subgroup of E(F_{q^n}) with order p over the
basic field F_{q^n}, E(F_{q^{nk}})[p] be one of the elliptic curve subgroups of E(F_{q^{nk}}) with
order p over the extension field F_{q^{nk}}, and F_{q^{nk}}[p] be the multiplicative subgroup of
the extension field F_{q^{nk}} with order p, where k is the embedding degree. The bilinear
pairing is actually defined over

E(F_{q^n})[p] × E(F_{q^{nk}})[p] → F_{q^{nk}}[p].

Similarly, for l-bit security level, we have the following parameters.

|p| = 2 · l, to resist Pollard Rho attacks.


|g1 | ≈ |Fqn | ≥ 2 · l, which depends on the chosen elliptic curve.
|g2 | ≈ |Fqnk | ≥ k · 2l, which depends on the chosen elliptic curve and k.
|e(g1 , g2 )| = k · |Fqn |, which should be large enough to resist sub-exponential attacks.
We study the choice of security parameter for 80-bit security level, namely l =
80, towards short group representation in G1 such that |Fqn | = 2 · l = 160. If k · |Fqn |
must be at least 1,024 to resist sub-exponential attacks for 80-bit security, we should
choose at least k = 7. However, the minimum k we have found for this pairing is 10.
Therefore, we have |p| = |g1 | = 160 and |g2 | = |e(g1 , g2 )| = 1, 600. Note that the
group elements in G2 can be compressed into half or quarter size or even shorter
representations if the bilinear pairing is the third type, in which there is no efficient
homomorphism between G1 and G2 .

3.3.3 Computations over a Pairing Group

A pairing group is composed of groups (G, GT ) or (G1 , G2 , GT ) of prime order


p and a bilinear map e. All computations over a pairing group are summarized as
follows.
• All modular operations over Z p .
• All group operations over the groups (G, GT ) or (G1 , G2 , GT ).
• The pairing computation e(u, v) for all u, v ∈ G or u ∈ G1 , v ∈ G2 .
Note that all group-based schemes in the literature are constructed with the above
computations, which are all efficiently computable. Some widely known computa-
tions more complicated than the above basic computations but still efficiently com-
putable are introduced in Section 4.2.

3.4 Hash Functions

A hash function takes an arbitrary-length string as an input and returns a much


shorter string as an output. The primary motivation for using hash functions, espe-
cially for group-based cryptography, is due to the limited space of Z p or G, such
that we cannot embed all values in them. With the adoption of hash functions, we
can securely embed strings of any length into group elements/exponents to improve
the computational efficiency without using a large group. The tradeoff is that the
security of group-based cryptography also depends on hash functions. If an adopted
hash function is broken, the corresponding scheme will no longer be secure.
Hash functions can be classified into the following two main types according to
the security definition.
• One-Way Hash Function. Given a one-way hash function H and an output
string y, it is hard to find a pre-image input x satisfying y = H(x).
• Collision-Resistant Hash Function. Given a collision-resistant hash function
H, it is hard to find two different inputs x1 and x2 satisfying H(x1 ) = H(x2 ).
When a hash function is claimed to be a cryptographic hash function in this book,
it is a one-way hash function, or a collision-resistant hash function, or an ideal hash
function that will be set as a random oracle in the security proof. Hash functions can
be classified into the following three important types according to the output space,
where the input can be an arbitrary string.
• H : {0, 1}∗ → {0, 1}n . The output space is the set containing all n-bit strings. To
resist birthday attacks, n must be at least 2 · l bits for l-bit security. We mainly
use this kind of hash function to generate a symmetric key from the key space
{0, 1}n for hybrid encryption.
• H : {0, 1}∗ → Z p . The output space is {0, 1, 2, · · · , p − 1}, where p is the group
order. We use this kind of hash function to embed hashing values in group expo-
nents, when the input values are not in the Z p space.
• H : {0, 1}∗ → G. The output space is a cyclic group. That is, this hash function
will hash the input string into a group element. This hash function exists only
for some groups. The main groups we can hash are the group G in the symmetric
bilinear pairing G×G → GT and the group G1 in the asymmetric bilinear pairing
G1 × G2 → GT .
How to construct cryptographic hash functions is outside the scope of this book.
The above definitions and descriptions of hash functions are sufficient for construct-
ing schemes and security proofs.
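For illustration, the second type of hash function, H : {0, 1}∗ → Z_p, can be sketched from SHA-256 as follows; the prime and the construction are illustrative assumptions only, and a production design must also address the small bias introduced by reducing a fixed-length digest modulo p.

```python
# Sketch of a hash function H : {0,1}* -> Z_p built from SHA-256.  Illustrative
# only; reducing a fixed-length digest modulo p introduces a small bias that a
# real design must handle.
import hashlib

def hash_to_zp(message: bytes, p: int) -> int:
    digest = hashlib.sha256(message).digest()
    return int.from_bytes(digest, "big") % p

p = 2**255 - 19                        # an illustrative 255-bit prime
print(hash_to_zp(b"hello world", p))
```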

3.5 Further Reading

In this section, we briefly introduce group-based cryptography including algebraic


structures, exponentiations, the discrete logarithm problem, elliptic curve cryptog-
raphy, and bilinear pairings.
Algebraic Structures. Number theory and algebraic structures, such as groups,
rings, and finite fields, are the foundations of modern cryptography. We refer the
reader to [89, 96] for further reading about number theory. Basic knowledge of fi-
nite fields can be found in [79], while a detailed introduction can be found in [74].
Group theory plays an important role in public-key cryptography. The author in [90]
gave an introduction to group theory. For how to apply group theory in cryptogra-
phy, the reader is referred to the book [98].
Exponentiations. Group exponentiation is the basic computation of group-based
cryptography. The simplest approach is the square-and-multiply algorithm [68].
There also exist many improved algorithms, such as the m-ary method, the sliding-
window method, and the Montgomery method. A technique called Montgomery’s
ladder improves the computation process to withstand side-channel attacks. The
work in [51] provided a useful survey of these algorithms. More specialized algo-
rithms for elliptic curve groups were introduced in [57].
The security level that a group can achieve is determined by the group order
and the algorithm for solving its discrete logarithm problem. The group order, in
turn, determines the computational efficiency of group-based schemes. We refer the
reader to the work in [73] for a detailed elaboration of the group size and the security
level.
Discrete Logarithm Problem. Algorithms for solving the discrete logarithm prob-
lem can be divided into two categories: generic algorithms and particular algorithms.
The generic algorithms, such as the baby-step giant-step algorithm [95] and the Pol-
lard Rho algorithm [87], work for all groups. The particular algorithms, such as the
index calculus algorithm [3, 58], only work for some particular groups, such as the
modular multiplicative group. A survey of these algorithms was given in [78] and
[81] (Chapter 3.6).

The time complexity of generic algorithms is normally O(√p) for a group of or-
der p, while the time complexity of the index calculus algorithm is sub-exponential.
The most efficient algorithm, which is called the number field sieve and is a variant
of the index calculus algorithm, has a complexity of L_p[1/3, 1.923] (refer to [72] for
the definition of L-notation). The record for solving a discrete logarithm in GF(p)
is a 768-bit prime [67]. In a group construction with a finite field of characteristic 2,
the calculation of a logarithm in F_{2^1279} was announced in [66]. In a group construc-
tion with a finite field of characteristic 3, the latest result is given in [2]. A full list
of the records sorted by date can be found in [52].
Elliptic Curve Cryptography. The notion of elliptic curve groups was indepen-
dently suggested by Koblitz [69] and Miller [83]. In comparison with the modular
multiplicative groups, there exists no sub-exponential algorithm for solving the dis-
crete logarithm problem over the elliptic curve groups. The current record is the
discrete logarithm of a 113-bit Koblitz curve [102] and a curve over F_{2^127} [16]. For
a survey of recent progress, we refer the reader to the work [43].
The US National Institute of Standards and Technology (NIST) published the
recommended size of an elliptic curve group (Table 2 of [9]). For a more direct
comparison of the key size, we refer the reader to [17]. A collection of recommen-
dations from well-known organizations is available at [19]. There are also many
helpful textbooks such as [17, 18, 57, 99], which provide a detailed introduction to
elliptic curve cryptography.
Bilinear Pairings. The use of bilinear pairing was first proposed in [80, 41] to
attack cryptosystems. Later, numerous schemes became achievable with the help
of pairings. We refer the reader to the survey in [39] of these constructions during
the first few years after the bilinear pairing was invented.
Galbraith et al.’s paper [44] provides a background to pairing and classifies the
pairing G1 × G2 → GT into three types. There is another rarely used type of pairing,
which was introduced by Shacham [92]. These four types are denoted Type I, II, III,
and IV, respectively. The difference in these four types is the structures of groups G1
and G2 . Meanwhile, the Weil pairing and the Tate pairing are classified with respect
to the computation, and the work in [70] gives an efficiency comparison of these
two pairings. Elegant explanations of pairings can be found in [76, 31, 97], where
the structure of r-torsion, the Miller algorithm, and optimizations of the pairing
computation were explained.
The elliptic curves that we use to construct a bilinear pairing are referred to as
pairing-friendly curves. Finding pairing-friendly curves with an optimized group
size needs the embedding degree, the modulo of the underlying group, and the ρ-
value all to be considered. The most commonly applied method is called the com-
plex multiplication (CM method) [5]. Summaries of pairing-friendly curve construc-
tions were given in [40, 63].
Chapter 4
Foundations of Security Reduction

In this chapter, we introduce what a security reduction is and how to program a
correct security reduction. We start by presenting an overview of important concepts
and techniques, and then proof structures for digital signatures and encryption. We
classify each concept into several categories in order to guide the reader to a deep
understanding of security reduction. We devise and select some examples to show
how to correctly program a full security reduction. Some definitions adopted in this
book may be defined differently elsewhere in the literature.

4.1 Introduction to Basic Concepts

4.1.1 Mathematical Primitives and Superstructures

Mathematics is the foundation of modern cryptography. With a mathematical prim-


itive, we define mathematical hard problems and construct cryptographic schemes.
Generally, the structure of a cryptographic scheme is more complicated than the
structure of a mathematical hard problem (e.g., interactive vs. non-interactive). It is
relatively hard to analyze the security of a cryptographic scheme compared to the
hardness of a mathematical hard problem. Therefore, security reduction was intro-
duced to analyze the security of cryptographic schemes. In a security reduction, if
a scheme is constructed over a mathematical primitive, its underlying hard problem
must be defined over the same mathematical primitive. For example, in group-based
cryptography, a cyclic group or a pairing group is the mathematical primitive. If
a scheme is proposed over a cyclic group G, the underlying hard problem for the
security reduction must also be defined over the same cyclic group G. Figure 4.1
provides an overview of the relationship among these four concepts.
In computational complexity theory, a reduction transforms one problem into
another problem, while in public-key cryptography, a security reduction reduces
breaking a proposed scheme into solving a mathematical hard problem. How to

                              Security Reduction
Mathematical Hard Problem ←−−−−−−−−−−−−−−−−−−−−−−− Cryptographic Scheme

              ↘                                        ↙

                           Mathematical Primitive

Fig. 4.1 The relationship among the four concepts

correctly program a security reduction is highly dependent on the cryptosystem,


security model, proposed scheme, and hard problem. We assume that there exists a
proposed scheme that needs to be proved secure and an adversary who is capable
of breaking the proposed scheme. In this book, “the proposed scheme” and “the
adversary” will frequently be mentioned when explaining the concepts of security
reduction.
Roughly speaking, each mathematical primitive is implemented with a string
as input. The bit length of the input string is a security parameter denoted by λ ,
which is an integer. In group-based cryptography, the security parameter λ refers
in particular to the bit length of a group element, such as 160 bits or 1,024 bits. In
this book, when we say that a cryptographic scheme (or a mathematical problem)
is generated with a security parameter λ , we mean that its underlying mathematical
primitive is generated with the security parameter λ .

4.1.2 Mathematical Problems and Problem Instances

A mathematical problem defined over a mathematical primitive is a mathematical


object representing certain questions and answers. For each mathematical problem,
there should be some descriptions of input (question) and output (answer). Math-
ematical problems can be classified into computational problems and decisional
problems. A decisional problem can be seen as a special case of a computational
problem whose output has only two answers, such as true and false.
An input string for a (mathematical) problem is referred to as a problem instance.
A problem should have an infinite number of instances. In a security reduction, we
find a correct solution (answer) to a randomly chosen instance of a problem, which
indicates that this problem is efficiently solvable. Suppose a problem is generated
with a security parameter λ . The level of “hardness” of solving this problem can
be denoted by a function P(λ ) of λ for this problem. The functions for different
problems are different, which means that their levels of hardness are not the same
even though they are defined over the same mathematical primitive.
4.1.3 Cryptography, Cryptosystems, and Schemes

In this book, cryptography, cryptosystem, and scheme have the following meanings.
• Cryptography, such as public-key cryptography and group-based cryptography,
is a security mechanism to provide security services for authentication, confiden-
tiality, integrity, etc.
• A cryptosystem, such as digital signatures, public-key encryption, and identity-
based encryption, is a suite of algorithms that provides a security service.
• A scheme, such as the BLS signature scheme [26], is a specific construction or
implementation of the corresponding algorithms for a cryptosystem.
A cryptosystem might have many different scheme constructions. For example,
many signature schemes with distinct features have been proposed in the literature.
Suppose a scheme is generated with a security parameter λ . The level of “hardness”
of breaking the scheme can be denoted by a function S(λ ) of λ for this scheme. The
functions for different proposed schemes are different, and thus their levels of hard-
ness are not the same even though they are constructed over the same mathematical
primitive.

4.1.4 Algorithm Classification 1

In mathematics and computer science, an algorithm is a set of steps to compute an


output from an input. All algorithms can be classified into deterministic algorithms
and probabilistic algorithms.
A deterministic algorithm is an algorithm where, given as an input a problem
instance, it will always return a correct result. A probabilistic (randomized) algo-
rithm is an algorithm where, given as an input a problem instance, it will return a
correct result by chance only, meaning that the obtained result may be either incor-
rect or correct with some likelihood. We denote by (t, ε) that an algorithm returns
a correct result in time t with success probability ε. In comparison with determinis-
tic algorithms, probabilistic algorithms are believed to be more efficient for solving
problems. A deterministic algorithm can be seen as a specific probabilistic algorithm
where the success probability is 100%. In this book, all algorithms are probabilistic
algorithms unless specified otherwise.
An algorithm with (t, ε) can be applied differently in cryptography as follows.
• If this algorithm is used to measure how successfully it can return a correct result,
ε is regarded as a probability as described above.
• If this algorithm is particularly used to measure how successfully it can break a
scheme or solve a hard problem compared to other algorithms that cannot break
the scheme or solve the hard problem, ε is regarded as an advantage. Advantage
is a variant definition of probability, introduced in Section 4.6.2.
We have the above two different applications because it is confusing to measure


whether a scheme is secure or insecure and whether a problem is hard or easy with
probability. The difference between probability and advantage can be found in Sec-
tion 4.6. In this book, an algorithm is mainly used for the second application and the
default ε is therefore referred to as an advantage. When the algorithm is specifically
proposed to break a digital signature scheme or solve a computational problem, we
can also call ε probability, because the probability and the advantage are equivalent.

4.1.5 Polynomial Time and Exponential Time

Suppose a scheme is constructed (or a problem is generated) with a security param-


eter λ . Let t(λ ) be the time cost of an algorithm for breaking the scheme or solving
the problem, where t(λ ) is a function of λ .
• We say that t(λ) is polynomial time if there exists n_0 > 0 such that

t(λ) = O(λ^{n_0}).

• We say that t(λ) is exponential time if t(λ) can be expressed as

t(λ) = O(e^λ),

where e is the base of the natural logarithm.


Note that intermediate time between polynomial time and exponential time is
called sub-exponential time. If t_se(λ) is a sub-exponential time associated with the
factor λ, such as t_se(λ) = 2^{8·∛λ}, we can still choose a proper λ in such a way that
t_se(λ) is as large as 2^80 or even larger.

4.1.6 Negligible and Non-negligible

Suppose a scheme is constructed (or a problem is generated) with a security parame-


ter λ . Let ε(λ ) be the advantage of an algorithm for breaking the scheme or solving
the problem, where ε(λ ) is a function of λ . To explain the concepts of negligible
and non-negligible clearly, we do not use the traditional definitions but borrow the
definitions of polynomial time and exponential time to define these two concepts.
• We say that ε(λ ) is negligible associated with λ if ε(λ ) can be expressed as

ε(λ) = 1/Θ(e^λ).

That is, the value ε(λ ) tends to zero very quickly as the input λ grows.
• We say that ε(λ ) is non-negligible associated with λ if there exists n0 ≥ 0 such


that

ε(λ) = 1/O(λ^{n_0}).
The minimum advantage ε(λ ) is 0, which means that there is no advantage. The
maximum advantage ε(λ ) is equal to 1 in this book, which is independent of the
input security parameter. The details will be introduced in Section 4.6.2.

4.1.7 Insecure and Secure

We can classify all schemes into “insecure” and “secure” as follows.


• Insecure. A scheme generated with a security parameter λ is insecure in a secu-
rity model if there exists an adversary who can break the scheme in polynomial
time with non-negligible advantage associated with λ .
• Secure. A scheme generated with a security parameter λ is secure in a security
model if there exists no adversary who can break the scheme in polynomial time
with non-negligible advantage associated with λ .
We cannot simply say that a scheme is insecure or secure, because it is associ-
ated with the input security parameter and the security model. A scheme might be
insecure in one security model, but secure in another security model.

4.1.8 Easy and Hard

We can classify all mathematical problems into “easy” and “hard” as follows.
• Easy. A problem generated with a security parameter λ is easy if there exists
an algorithm that can solve the problem in polynomial time with non-negligible
advantage associated with λ .
• Hard. A problem generated with a security parameter λ is hard if there exists
no (known) algorithm that can solve the problem in polynomial time with non-
negligible advantage associated with λ .
Hard problems are those mathematical problems only believed to be hard based
on the fact that all known algorithms cannot efficiently solve them. There is no
mathematical proof for the hardness of a mathematical hard problem. We can only
prove that solving a problem is not easier than solving another problem. Notice that
some believed-to-be-hard problems might become easy in the future.
4.1.9 Algorithm Classification 2

Suppose there exists an algorithm that can break a scheme or solve a hard problem
in time t with advantage ε, where the scheme or the problem is generated with a
security parameter λ .
• An algorithm that can break a scheme or solve a hard problem with (t, ε) is com-
putationally efficient if t is polynomial time and ε is non-negligible associated
with the security parameter λ .
• An algorithm that can break a scheme or solve a hard problem with (t, ε) is
computationally inefficient if t is polynomial time but ε is negligible associated
with the security parameter λ .
In this book, a computationally efficient algorithm is treated as a probabilistic
polynomial-time (PPT) algorithm. In the following introduction, a computationally
efficient algorithm is called an efficient algorithm for short. An algorithm requiring
exponential time to solve a hard problem is also computationally inefficient.

4.1.10 Algorithms in Cryptography

All algorithms in public-key cryptography can be classified into the following four
types, and each type is defined for a different purpose.
• Scheme Algorithm. This algorithm is proposed to implement a cryptosystem. A
scheme algorithm might be composed of multiple algorithms for different com-
putation tasks. For example, a digital signature scheme usually consists of four
algorithms: system parameter generation, key pair generation, signature genera-
tion, and signature verification. We require the scheme algorithm to return correct
results except with negligible probability.
• Attack Algorithm. This algorithm is proposed to break a scheme. A scheme is
secure if all attack algorithms are computationally inefficient. Suppose there ex-
ists an adversary who can break the proposed scheme in polynomial time with
non-negligible advantage. This means that the adversary knows a computation-
ally efficient attack algorithm. However, this algorithm is a black-box algorithm
only known to the adversary. The steps inside the algorithm are unknown.
• Solution Algorithm. This algorithm is proposed to solve a hard problem. Sim-
ilarly, a problem is hard if all solution algorithms for this problem are compu-
tationally inefficient. In a security reduction, if there exists a computationally
efficient attack algorithm that can break a proposed scheme, we prove that there
exists a computationally efficient solution algorithm that can solve a mathemati-
cal hard problem.
• Reduction Algorithm. This algorithm is proposed to describe how a security
reduction works. A security reduction is merely a reduction algorithm. If the
attack indeed exists, it shows how to use an adversary’s attack on a simulated
scheme (see Section 4.3.6) to solve a mathematical hard problem. A reduction


algorithm at least consists of a simulation algorithm (how to simulate the scheme
algorithm) and a solution algorithm (how to solve an underlying hard problem).
Among the aforementioned algorithms, we only require that the advantage ε in
the attack algorithm, the solution algorithm, and the reduction algorithm is non-
negligible; while the probability ε in the scheme algorithm is close to 1. When we
say that an adversary can break a scheme or solve a hard problem, we mean that the
corresponding attack algorithm or the corresponding solution algorithm known to
the adversary is computationally efficient.

4.1.11 Hard Problems in Cryptography

All mathematical hard problems can be classified into the following two types.
• Computationally Hard Problems. These problems, such as the discrete loga-
rithm problem, cannot be solved in polynomial time with non-negligible advan-
tage. This type of hard problem is used as the underlying hard problem in the
security reduction.
• Absolutely Hard Problems. These problems cannot be solved with non-negligible
advantage, even if the adversary can solve all computational hard problems in
polynomial time with non-negligible advantage. Absolutely hard problems are
unconditionally secure against any adversary. This type of hard problem is used
in security reductions to hide secret information from the adversary.
A simple example of an absolutely hard problem is to compute x from (g, g^{x+y}),
where x, y are both randomly chosen from Z p . More absolutely hard problems will
be introduced in Section 4.7.6. We will explain why it is essential to utilize ab-
solutely hard problems in security reductions in Section 4.5.7. When we need to
assume that an adversary can solve all computational hard problems in polynomial
time with non-negligible advantage, we say that the adversary is a computationally
unbounded adversary who has unbounded computational power.

4.1.12 Security Levels

In public-key cryptography, we need to know how secure a proposed scheme is and


how hard a mathematical problem is. We say that a scheme or a problem has k-bit
security if an adversary must take 2^k steps/operations to break the scheme or to solve
the problem. The security level indicates the strength of an adversary in breaking a
scheme or solving a problem, which can be seen as the time cost of breaking a
scheme or solving a problem. Generating a scheme with a security parameter λ
does not mean that this scheme has λ -bit security. The real security level depends
on the mathematical primitive and the scheme construction.
Suppose all potential attack algorithms to break a proposed scheme have been
found with the following distinct time cost and advantage

(t_1, ε_1), (t_2, ε_2), · · · , (t_l, ε_l).

For a simple analysis, we say that the proposed scheme has k-bit security if the
minimum value within the following set

{ t_1/ε_1, t_2/ε_2, · · · , t_l/ε_l }

is 2^k, where the time unit is one step/operation. This definition will be used to ana-
lyze the concrete security of a proposed scheme in this book.
The security level of a proposed scheme is not fixed. Suppose a proposed scheme
has k-bit security against all existing attack algorithms. If an attack algorithm with
(t*, ε*) satisfying t*/ε* = 2^{k*} < 2^k is found in the future, the proposed scheme will
then have k*-bit security instead.
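For illustration, the following snippet evaluates this definition on hypothetical (t_i, ε_i) pairs: the bit security is the base-2 logarithm of the minimum t_i/ε_i.

```python
# Tiny sketch of the definition above on hypothetical attack algorithms:
# k-bit security, where 2^k is the minimum t_i / eps_i over all known attacks.
from math import log2

attacks = [(2**90, 1.0), (2**70, 2**-20), (2**40, 2**-45)]   # hypothetical (time, advantage) pairs
k = min(log2(t / eps) for t, eps in attacks)
print(k)   # 85.0, so the scheme offers about 85-bit security against these attacks
```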

4.1.13 Hard Problems and Hardness Assumptions

In this book, the concepts of hard problem and hardness assumption are treated as
equivalent. However, the descriptions of these two concepts are slightly different.
• We can say that breaking a proposed scheme implies solving an underlying hard
problem, denoted by A, such that the scheme is secure under the hardness as-
sumption on A.
• We can also say that a hardness assumption is a weak assumption or a strong
assumption. Weak or strong is not related to the problem but to the strength of
the assumption.
A hard problem is associated with the solution while a hardness assumption is as-
sociated with the security assumption. In a security reduction, we are to solve an
underlying hard problem or break an underlying hardness assumption.

4.1.14 Security Reductions and Security Proofs

In this book, security reduction and security proof are assumed to be different con-
cepts with different components. We clarify them as follows.
• A security reduction is a part of a security proof focusing on how to reduce
breaking a proposed scheme to solving an underlying hard problem. A security
reduction consists of a simulation algorithm and a solution algorithm.
• A security proof consists of all components required to convince us that a pro-


posed scheme is indeed secure. Besides a given security reduction, it should also
include a correctness analysis for the proposed security reduction.
We will introduce which components should be included in the security proof
of digital signatures and encryption in Sections 4.9.1, 4.10.1, and 4.11.5. Note that
these two concepts may be regarded as equivalent elsewhere in the literature.

4.2 An Overview of Easy/Hard Problems

All mathematical problems can be classified into the following four types for
scheme constructions and security reductions: computational easy problems, com-
putational hard problems, decisional easy problems, and decisional hard problems.
In this section, we collect some popular problems that have been widely used in the
literature.

4.2.1 Computational Easy Problems

A computational problem generated with a security parameter λ is easy if there


exists a polynomial-time solution algorithm that can find a correct solution to a
given problem instance with overwhelming probability 1.
Let f (x) and F(x) be polynomials in Z p [x] of degree n and 2n, respectively. Let
a ∈ Z p be a random and unknown exponent. We have several interesting polynomial
problems that are efficiently solvable.
• Polynomial Problem 1. Given g, g^a, g^{a^2}, · · · , g^{a^n} ∈ G and f(x) ∈ Z_p[x], we can
compute the group element

g^{f(a)}.

The polynomial f(x) can be written as

f(x) = f_n x^n + f_{n−1} x^{n−1} + · · · + f_1 x + f_0,

where f_i ∈ Z_p is the coefficient of x^i for all i ∈ [0, n]. Therefore, this element is
computable by computing

g^{f(a)} = ∏_{i=0}^{n} (g^{a^i})^{f_i}.
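For illustration, the following Python sketch instantiates Polynomial Problem 1 over the toy order-p subgroup of Z∗_q used in the Chapter 3 sketches; the exponent a is sampled only to create the problem instance and is never used in the computation of g^{f(a)}.

```python
# Sketch of Polynomial Problem 1 over a toy order-p subgroup of Z_q^*:
# given g, g^a, g^(a^2), ..., g^(a^n) and f(x), compute g^(f(a)) without using a.
import random

p, q = 1013, 2027
g = pow(2, (q - 1) // p, q)
a = random.randrange(1, p)                                  # used only to build the instance
n = 4
powers = [pow(g, pow(a, i, p), q) for i in range(n + 1)]    # instance: g^(a^i) for i = 0, ..., n
f = [7, 0, 3, 1, 9]                                         # f(x) = 7 + 3x^2 + x^3 + 9x^4

g_fa = 1
for gi, fi in zip(powers, f):                               # multiply (g^(a^i))^(f_i) over all i
    g_fa = (g_fa * pow(gi, fi, q)) % q

f_a = sum(fi * pow(a, i, p) for i, fi in enumerate(f)) % p  # f(a), used for checking only
assert g_fa == pow(g, f_a, q)
```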
• Polynomial Problem 2. Given g, g^a, g^{a^2}, · · · , g^{a^{n−1}} ∈ G, f(x) ∈ Z_p[x], and any
w ∈ Z_p satisfying f(w) = 0, we can compute the group element

g^{f(a)/(a−w)}.

If f(w) = 0 for an integer w ∈ Z_p, we have that x − w divides f(x), and

f(x)/(x − w)

is a polynomial of degree n − 1, where all coefficients are computable. Therefore,
this element is computable because f(x)/(x − w) is a polynomial of degree n − 1.
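For illustration, Polynomial Problem 2 can be sketched similarly: synthetic division over Z_p yields the coefficients of f(x)/(x − w), after which the answer is assembled exactly as in Problem 1; all parameters below are toy values.

```python
# Sketch of Polynomial Problem 2: when f(w) = 0, synthetic division gives the
# coefficients of f(x)/(x - w), and g^(f(a)/(a-w)) is then computed from
# g, g^a, ..., g^(a^(n-1)) exactly as in Problem 1.  Toy parameters as above.
import random

def divide_by_linear(coeffs, w, p):
    """Divide f(x) = coeffs[0] + coeffs[1]x + ... by (x - w) over Z_p.
    Returns (quotient coefficients, remainder f(w))."""
    quotient = [0] * (len(coeffs) - 1)
    carry = 0
    for i in range(len(coeffs) - 1, 0, -1):                 # synthetic division, highest degree first
        carry = (coeffs[i] + carry * w) % p
        quotient[i - 1] = carry
    return quotient, (coeffs[0] + carry * w) % p

p, q = 1013, 2027
g = pow(2, (q - 1) // p, q)
w = 5
f = [978, 1010, 1010, 1]                     # f(x) = (x - 5)(x^2 + 2x + 7) mod p, so f(5) = 0
a = random.randrange(1, p)
while a == w:                                # ensure a - w is invertible in the check below
    a = random.randrange(1, p)
powers = [pow(g, pow(a, i, p), q) for i in range(len(f))]   # instance: g^(a^i)

quot, rem = divide_by_linear(f, w, p)
assert rem == 0
answer = 1
for gi, qi in zip(powers, quot):             # g^(f(a)/(a-w)) = prod (g^(a^i))^(quot_i)
    answer = (answer * pow(gi, qi, q)) % q

f_a = sum(c * pow(a, i, p) for i, c in enumerate(f)) % p
assert answer == pow(g, (f_a * pow(a - w, p - 2, p)) % p, q)
```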
• Polynomial Problem 3. Given g, g^a, g^{a^2}, · · · , g^{a^{n−1}} ∈ G, f(x) ∈ Z_p[x], and any
w ∈ Z_p, we can compute the group element

g^{(f(a)−f(w))/(a−w)}.

It is easy to see that x − w divides f(x) − f(w) and then

(f(x) − f(w))/(x − w)

is a polynomial of degree n − 1, where all coefficients are computable. Therefore,
this element is computable because (f(x) − f(w))/(x − w) is a polynomial of degree n − 1.
• Polynomial Problem 4. Given g, g^a, g^{a^2}, · · · , g^{a^{n−1}}, g^{f(a)/(a−w)} ∈ G, f(x) ∈ Z_p[x], and
any w ∈ Z_p satisfying f(w) ≠ 0, we can compute the group element

g^{1/(a−w)}.

Since x − w divides f(x) − f(w), we have

f(x)/(x − w) = (f(x) − f(w) + f(w))/(x − w) = (f(x) − f(w))/(x − w) + f(w)/(x − w),

which can be rewritten as

f(x)/(x − w) = f′_{n−1} x^{n−1} + f′_{n−2} x^{n−2} + · · · + f′_1 x + f′_0 + d/(x − w)

for coefficients f′_i ∈ Z_p, which are computable, and d = f(w), which is a nonzero
integer. Therefore, this element is computable by computing

g^{1/(a−w)} = ( g^{f(a)/(a−w)} / ∏_{i=0}^{n−1} (g^{a^i})^{f′_i} )^{1/d}.

• Polynomial Problem 5. Given $g, g^a, g^{a^2}, \cdots, g^{a^{n-1}}, h^{as} \in \mathbb{G}$ and $e(g, h)^{f(a)s} \in \mathbb{G}_T$ where $f(0) \neq 0$, we can compute the group element
$$e(g, h)^s.$$
Let $f(x) = f_n x^n + f_{n-1} x^{n-1} + \cdots + f_1 x + f_0$. This polynomial can be rewritten as
$$f(x) = x\left(f_n x^{n-1} + f_{n-1} x^{n-2} + \cdots + f_1\right) + f_0.$$
Since $f_0 = f(0) \neq 0$, this element is computable by computing
$$e(g, h)^s = \left(\frac{e(g, h)^{f(a)s}}{e\!\left(h^{as},\ \prod_{i=0}^{n-1}\left(g^{a^i}\right)^{f_{i+1}}\right)}\right)^{\frac{1}{f_0}}.$$

• Polynomial Problem 6. Given $g, g^a, g^{a^2}, \cdots, g^{a^n} \in \mathbb{G}$ and $F(x) \in \mathbb{Z}_p[x]$, we can compute the group element
$$e(g, g)^{F(a)}.$$
Let $F(x) = F_{2n} x^{2n} + F_{2n-1} x^{2n-1} + \cdots + F_1 x + F_0$ be a polynomial of degree $2n$. It can be rewritten as
$$F(x) = x^n\left(F_{2n} x^n + F_{2n-1} x^{n-1} + \cdots + F_{n+1} x\right) + \left(F_n x^n + \cdots + F_1 x + F_0\right).$$
Therefore, this element is computable by computing
$$e(g, g)^{F(a)} = e\!\left(g^{a^n},\ \prod_{i=1}^{n}\left(g^{a^i}\right)^{F_{n+i}}\right) \cdot e\!\left(g,\ \prod_{i=0}^{n}\left(g^{a^i}\right)^{F_i}\right).$$

• Polynomial Problem 7. Given $g^{\frac{1}{a-x_1}}, g^{\frac{1}{a-x_2}}, \cdots, g^{\frac{1}{a-x_n}} \in \mathbb{G}$ and all distinct $x_i \in \mathbb{Z}_p$, we can compute the group element
$$g^{\frac{1}{(a-x_1)(a-x_2)(a-x_3)\cdots(a-x_n)}}.$$
For any $x_1, x_2, \cdots, x_n \in \mathbb{Z}_p$, a polynomial $f(x)$ can be rewritten as
$$f(x) = w_1(x-x_1)(x-x_2)(x-x_3)\cdots(x-x_{n-1})(x-x_n) + w_2(x-x_2)(x-x_3)\cdots(x-x_{n-1})(x-x_n) + w_3(x-x_3)\cdots(x-x_{n-1})(x-x_n) + \cdots + w_n(x-x_n) + w$$
for some $w_1, w_2, \cdots, w_n, w$ from $\mathbb{Z}_p$. The above element is computable and can be explained as follows.
We have
$$g^{\frac{1}{x-x_1}+\frac{1}{x-x_2}+\cdots+\frac{1}{x-x_i}+\frac{1}{x-x_{i+1}}} = g^{\frac{f_i(x)}{(x-x_1)(x-x_2)\cdots(x-x_i)(x-x_{i+1})}},$$
where $f_i(x)$ is a polynomial of degree $i$. Rewrite $f_i(x)$ as
$$f_i(x) = w_1(x-x_2)(x-x_3)(x-x_4)\cdots(x-x_i)(x-x_{i+1}) + w_2(x-x_3)(x-x_4)\cdots(x-x_i)(x-x_{i+1}) + w_3(x-x_4)\cdots(x-x_i)(x-x_{i+1}) + \cdots + w_i(x-x_{i+1}) + w.$$
If $w = 0$, we can choose a different integer $k \neq 1$ and compute
$$g^{\frac{k}{x-x_1}+\frac{1}{x-x_2}+\cdots+\frac{1}{x-x_{i+1}}}.$$
Otherwise, $w \neq 0$ and we have
$$g^{\frac{1}{x-x_1}+\frac{1}{x-x_2}+\cdots+\frac{1}{x-x_{i+1}}} = g^{\frac{f_i(x)}{(x-x_1)(x-x_2)\cdots(x-x_i)(x-x_{i+1})}} = g^{\frac{w_1}{x-x_1}+\frac{w_2}{(x-x_1)(x-x_2)}+\cdots+\frac{w_i}{(x-x_1)(x-x_2)\cdots(x-x_i)}+\frac{w}{(x-x_1)(x-x_2)\cdots(x-x_i)(x-x_{i+1})}}.$$
Let $S_i$ be the set of group elements defined as
$$S_i = \left\{g^{\frac{1}{x-x_1}},\ g^{\frac{1}{(x-x_1)(x-x_2)}},\ g^{\frac{1}{(x-x_1)(x-x_2)(x-x_3)}},\ \cdots,\ g^{\frac{1}{(x-x_1)(x-x_2)(x-x_3)\cdots(x-x_i)}}\right\},$$
which contains $i$ group elements. Given all elements in $S_i$, we can compute the new element
$$g^{\frac{1}{(x-x_1)(x-x_2)\cdots(x-x_i)(x-x_{i+1})}} = \left(\frac{g^{\frac{1}{x-x_1}+\frac{1}{x-x_2}+\cdots+\frac{1}{x-x_{i+1}}}}{g^{\frac{w_1}{x-x_1}} \cdot g^{\frac{w_2}{(x-x_1)(x-x_2)}} \cdots g^{\frac{w_i}{(x-x_1)(x-x_2)\cdots(x-x_i)}}}\right)^{\frac{1}{w}}$$
by the above approach, which is the $(i+1)$-th group element in the set $S_{i+1}$.
Therefore, with the given group elements, we immediately have $S_1$ and then can compute $S_2, S_3, \cdots$ until $S_n$. We solve this problem because the $n$-th group element in $S_n$ is the solution to the problem instance.
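To make the computation in Polynomial Problem 1 concrete, the following minimal Python sketch evaluates $g^{f(a)}$ from the powers $g, g^a, \cdots, g^{a^n}$ without using $a$. The parameters (a subgroup of prime order $p = 101$ inside $\mathbb{Z}_{607}^*$) are illustrative toy values chosen only so the code runs; they are far too small to be cryptographically meaningful.

# Polynomial Problem 1 in a toy multiplicative group: a subgroup of prime
# order p inside Z_q^* (illustrative parameters only, not secure).
p = 101                      # order of the subgroup; exponent arithmetic is mod p
q = 607                      # modulus with q = 6*p + 1
g = pow(3, (q - 1) // p, q)  # generator of the order-p subgroup

a = 37                                                    # secret exponent
n = 4
powers = [pow(g, pow(a, i, p), q) for i in range(n + 1)]  # g, g^a, ..., g^{a^n}

f = [5, 0, 7, 2, 9]          # coefficients f_0, ..., f_n of f(x) in Z_p[x]

# g^{f(a)} = prod_{i=0}^{n} (g^{a^i})^{f_i}, computed without knowing a
g_fa = 1
for g_ai, f_i in zip(powers, f):
    g_fa = (g_fa * pow(g_ai, f_i, q)) % q

# sanity check using the secret a (only to confirm the result)
assert g_fa == pow(g, sum(f_i * pow(a, i, p) for i, f_i in enumerate(f)) % p, q)

The same pattern, pairing the known coefficients with the given powers $g^{a^i}$, underlies the other polynomial problems above; only the coefficients change.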
Another type of computational easy problem can be seen as a structured problem, where the solution to a structured problem must satisfy a defined structure. For example, given $g, g^a \in \mathbb{G}$, a structured problem is to compute a pair $(r, g^{ar})$ for an integer $r \in \mathbb{Z}_p$. Here, the integer $r$ can be any number chosen by the party that returns the answer. We have the following structured problems that are efficiently solvable. How to solve these problems is important, especially in the simulation of digital signatures and private keys of identity-based cryptography.
• Structured Problem 1. Given $g, g^a \in \mathbb{G}$, we can compute a pair
$$\left(g^r,\ g^{ar}\right)$$
for an integer $r \in \mathbb{Z}_p$, where a correct pair, denoted by $(u, v)$, satisfies
$$e(u, g^a) = e(v, g).$$
We solve this problem by randomly choosing $r' \in \mathbb{Z}_p$ and computing
$$\left(g^{r'},\ (g^a)^{r'}\right).$$
Let $r = r'$. The computed pair is the solution to the problem instance.


• Structured Problem 2. Given $g, g^a \in \mathbb{G}$, we can compute a pair
$$\left(g^{\frac{1}{a+r}},\ g^r\right)$$
for an integer $r \in \mathbb{Z}_p$, where a correct pair, denoted by $(u, v)$, satisfies
$$e\left(u,\ g^a \cdot v\right) = e(g, g).$$
We solve this problem by randomly choosing $r' \in \mathbb{Z}_p^*$ and computing
$$\left(g^{\frac{1}{r'}},\ g^{r'-a}\right).$$
Let $r = r' - a \in \mathbb{Z}_p$. We have
$$\left(g^{\frac{1}{a+r}},\ g^r\right) = \left(g^{\frac{1}{a+r'-a}},\ g^{r'-a}\right) = \left(g^{\frac{1}{r'}},\ g^{r'-a}\right),$$
and thus the computed pair is the solution to the problem instance (a numerical sketch of this solution is given after this list).
• Structured Problem 3. Given $g, g^a \in \mathbb{G}$ and $w \in \mathbb{Z}_p$, we can compute a pair
$$\left(g^{\frac{r}{a+w}},\ g^r\right)$$
for an integer $r \in \mathbb{Z}_p$, where a correct pair, denoted by $(u, v)$, satisfies
$$e\left(u,\ g^a \cdot g^w\right) = e(v, g).$$
We solve this problem by randomly choosing $r' \in \mathbb{Z}_p$ and computing
$$\left(g^{r'},\ g^{r'(a+w)}\right).$$
Let $r = r'(a+w) \in \mathbb{Z}_p$. We have
$$\left(g^{\frac{r}{a+w}},\ g^r\right) = \left(g^{\frac{r'(a+w)}{a+w}},\ g^{r'(a+w)}\right) = \left(g^{r'},\ g^{r'(a+w)}\right),$$
and thus the computed pair is the solution to the problem instance.
• Structured Problem 4. Given $g, g^a, g^b \in \mathbb{G}$ and $w \in \mathbb{Z}_p^*$, we can compute a pair
$$\left(g^{ab} g^{(wa+1)r},\ g^r\right)$$
for an integer $r \in \mathbb{Z}_p$, where a correct pair, denoted by $(u, v)$, satisfies
$$e(u, g) = e(g^a, g^b)\, e(g^{wa} g,\ v).$$
We solve this problem by randomly choosing $r' \in \mathbb{Z}_p$ and computing
$$\left(g^{-\frac{1}{w}b + wr'a + r'},\ g^{-\frac{b}{w}+r'}\right).$$
Let $r = -\frac{b}{w} + r' \in \mathbb{Z}_p$. We have
$$\left(g^{ab} g^{(wa+1)r},\ g^r\right) = \left(g^{ab} g^{(wa+1)\left(-\frac{b}{w}+r'\right)},\ g^{-\frac{b}{w}+r'}\right) = \left(g^{-\frac{1}{w}b + wr'a + r'},\ g^{-\frac{b}{w}+r'}\right),$$
and thus the computed pair is the solution to the problem instance.
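As referenced in Structured Problem 2, the following sketch shows the solver's computation in the same toy subgroup used earlier (illustrative parameters only). The pairing-based verification is omitted here; instead, the secret $a$, which the solver never uses, appears only at the end to sanity-check that the pair has the required structure.

import random

# Structured Problem 2 in the toy order-p subgroup (illustrative parameters only).
p, q = 101, 607
g = pow(3, (q - 1) // p, q)
a = random.randrange(1, p)            # secret exponent
g_a = pow(g, a, q)                    # the instance is (g, g_a)

r_prime = random.randrange(1, p)      # the solver picks a random nonzero r'
u = pow(g, pow(r_prime, -1, p), q)              # u = g^{1/r'}
v = (pow(g, r_prime, q) * pow(g_a, -1, q)) % q  # v = g^{r'} * g^{-a} = g^{r'-a}

# sanity check with the secret a: (u, v) = (g^{1/(a+r)}, g^r) for r = r' - a
r = (r_prime - a) % p
assert v == pow(g, r, q)
assert pow(u, (a + r) % p, q) == g    # u^{a+r} = g, so u = g^{1/(a+r)}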
In the above structured problems, the integer $r$ in the computed pair is also a random number from the point of view of the adversary if $r'$ is secretly and randomly chosen from $\mathbb{Z}_p$. The randomness of $r$ is extremely important for indistinguishable simulation, introduced in Section 4.7.

4.2.2 Computational Hard Problems

A computational problem generated with a security parameter $\lambda$ is hard if, given as input a problem instance, the probability of finding a correct solution to this problem instance in polynomial time is a negligible function of $\lambda$, denoted by $\varepsilon(\lambda)$ ($\varepsilon$ for short). A computational problem is definitely easy if $\lambda$ is not large enough.
We give some computational hard problems in the following, where $\mathbb{G}$ is the pairing group from $e: \mathbb{G} \times \mathbb{G} \to \mathbb{G}_T$ unless it is specified otherwise.
Discrete Logarithm Problem (DL)
Instance: $g, g^a \in \mathbb{G}$, where $\mathbb{G}$ is a general cyclic group
Compute: $a$

Computational Diffie-Hellman Problem (CDH)
Instance: $g, g^a, g^b \in \mathbb{G}$, where $\mathbb{G}$ is a general cyclic group
Compute: $g^{ab}$

q-Strong Diffie-Hellman Problem (q-SDH) [21]
Instance: $g, g^a, g^{a^2}, \cdots, g^{a^q} \in \mathbb{G}$
Compute: $\left(s,\ g^{\frac{1}{a+s}}\right) \in \mathbb{Z}_p \times \mathbb{G}$ for any $s$

q-Strong Diffie-Hellman Inversion Problem (q-SDHI) [21]
Instance: $g, g^a, g^{a^2}, \cdots, g^{a^q} \in \mathbb{G}$
Compute: $g^{\frac{1}{a}}$

Bilinear Diffie-Hellman Problem (BDH) [24]
Instance: $g, g^a, g^b, g^c \in \mathbb{G}$
Compute: $e(g, g)^{abc}$

q-Bilinear Diffie-Hellman Inversion Problem (q-BDHI) [20]
Instance: $g, g^a, g^{a^2}, \cdots, g^{a^q} \in \mathbb{G}$
Compute: $e(g, g)^{\frac{1}{a}}$

q-Bilinear Diffie-Hellman Exponent Problem (q-BDHE) [22]
Instance: $g, g^a, g^{a^2}, \cdots, g^{a^q}, g^{a^{q+2}}, g^{a^{q+3}}, \cdots, g^{a^{2q}}, h \in \mathbb{G}$
Compute: $e(g, h)^{a^{q+1}}$
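All of these problems are only hard when the security parameter makes the group exponentially large. As a negative illustration, the following Python sketch brute-forces the DL problem in the toy order-101 subgroup used earlier, showing why such small, illustrative parameters are trivially easy.

# Brute-forcing DL in the toy subgroup: with p = 101 the search space is tiny.
p, q = 101, 607
g = pow(3, (q - 1) // p, q)
a = 73                                 # the secret exponent
g_a = pow(g, a, q)                     # the DL instance is (g, g_a)

recovered = next(x for x in range(p) if pow(g, x, q) == g_a)
assert recovered == a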

4.2.3 Decisional Easy Problems

A decisional problem is to guess whether a target, denoted by $Z$, in a problem instance is true or false. If the answer to the decisional problem is true, then $Z$ is equal to a specific element, called the true element; otherwise, $Z$ is different from the specific element, called the false element. A decisional problem generated with a security parameter $\lambda$ is easy if there exists a solution algorithm that can correctly guess $Z$ in a problem instance in polynomial time with probability 1.
We denote by "$Z = \text{True}$" that $Z$ is a true element, or $Z$ is true for short. Similarly, we denote by "$Z = \text{False}$" that $Z$ is a false element, or $Z$ is false for short. When the true element is clearly defined, we can alternatively write "$Z = \text{True Element}$" instead of $Z = \text{True}$. We denote by "$Z \stackrel{?}{=} \text{True Element}$" deciding whether $Z$ is a true element or not. Examples can be found at the end of this subsection.
Let I be an instance of a computational problem. The decisional variant of this
computational problem can be seen as setting (I, Z) as the problem instance. The aim
is to decide whether Z is a correct solution to the instance I of the computational
problem. Therefore, each computational problem can be modified into a decisional
problem. We have the following interesting observations.
• If a computational problem is easy, its decisional variant must be easy. Otherwise,
the computational problem is hard and its decisional variant can be either easy or
hard depending on the problem definition.
• If the decisional variant of a computational problem is hard, the computational
problem must also be hard.
We now list two decisional easy problems defined over a pairing group whose cor-
responding computational problems are hard.

• Decisional Problem 1. Given $g, g^a, g^b, Z \in \mathbb{G}$, the decisional problem is to decide whether $Z = g^{ab}$ or $Z$ is a random element from $\mathbb{G} \setminus \{g^{ab}\}$.
We can easily solve this problem by verifying
$$e(Z, g) \stackrel{?}{=} e(g^a, g^b),$$
because this equation holds if and only if $Z$ is true.
• Decisional Problem 2. Given $g, g^a, \cdots, g^{a^n}, Z \in \mathbb{G}$ and $f(x), F(x) \in \mathbb{Z}_p[x]$, where $f(x), F(x)$ are polynomials of degree $n, 2n$, respectively, satisfying $f(x) \nmid F(x)$, the problem is to decide whether $Z = g^{F(a)/f(a)}$ or $Z$ is a random element from $\mathbb{G} \setminus \{g^{F(a)/f(a)}\}$.
We can easily solve this problem by verifying
$$e\left(Z,\ g^{f(a)}\right) \stackrel{?}{=} e(g, g)^{F(a)},$$
because this equation holds if and only if $Z$ is true. Here, $e(g, g)^{F(a)}$ is computable as has been explained in Polynomial Problem 6.

4.2.4 Decisional Hard Problems

A decisional problem generated with a security parameter $\lambda$ is hard if, given as input a problem instance whose target is $Z$, the advantage of returning a correct guess in polynomial time is a negligible function of $\lambda$, denoted by $\varepsilon(\lambda)$ ($\varepsilon$ for short):
$$\varepsilon = \Pr\big[\text{Guess } Z = \text{True} \mid Z = \text{True}\big] - \Pr\big[\text{Guess } Z = \text{True} \mid Z = \text{False}\big],$$
where
• $\Pr\big[\text{Guess } Z = \text{True} \mid Z = \text{True}\big]$ is the probability of correctly guessing $Z$ if $Z$ is true.
• $\Pr\big[\text{Guess } Z = \text{True} \mid Z = \text{False}\big]$ is the probability of wrongly guessing $Z$ if $Z$ is false.
Similarly, a decisional problem is definitely easy if $\lambda$ is not large enough.
We give some decisional hard problems in the following, where $\mathbb{G}$ is the pairing group from $e: \mathbb{G} \times \mathbb{G} \to \mathbb{G}_T$ unless it is specified otherwise.
Decisional Diffie-Hellman Problem (DDH)
Instance: $g, g^a, g^b, Z \in \mathbb{G}$, where $\mathbb{G}$ is a general cyclic group
Decide: $Z \stackrel{?}{=} g^{ab}$

Variant Decisional Diffie-Hellman Problem (Variant DDH) [32]
Instance: $g, g^a, g^b, g^{ac}, Z \in \mathbb{G}$, where $\mathbb{G}$ is a general cyclic group
Decide: $Z \stackrel{?}{=} g^{bc}$

Decisional Bilinear Diffie-Hellman Problem (DBDH) [101]
Instance: $g, g^a, g^b, g^c \in \mathbb{G}$, $Z \in \mathbb{G}_T$
Decide: $Z \stackrel{?}{=} e(g, g)^{abc}$

Decisional Linear Problem [23]
Instance: $g, g^a, g^b, g^{ac_1}, g^{bc_2}, Z \in \mathbb{G}$
Decide: $Z \stackrel{?}{=} g^{c_1+c_2}$

q-DABDHE Problem [47]
Instance: $g, g^a, g^{a^2}, \cdots, g^{a^q}, h, h^{a^{q+2}} \in \mathbb{G}$, $Z \in \mathbb{G}_T$
Decide: $Z \stackrel{?}{=} e(g, h)^{a^{q+1}}$

Decisional (P, Q, f)-GDHE Problem [22]
Instance: $g^{P(x_1, x_2, \cdots, x_m)} \in \mathbb{G}$, $e(g, g)^{Q(x_1, x_2, \cdots, x_m)}$, $Z \in \mathbb{G}_T$
  $P = (p_1, p_2, \cdots, p_s) \in \mathbb{Z}_p[X_1, \cdots, X_m]^s$ is an $s$-tuple of $m$-variate polynomials
  $Q = (q_1, q_2, \cdots, q_s) \in \mathbb{Z}_p[X_1, \cdots, X_m]^s$ is an $s$-tuple of $m$-variate polynomials
  $f \in \mathbb{Z}_p[X_1, X_2, \cdots, X_m]$
  $f \neq \sum a_{i,j}\, p_i p_j + \sum b_i q_i$ holds for all $a_{i,j}, b_i$
Decide: $Z \stackrel{?}{=} e(g, g)^{f(x_1, x_2, \cdots, x_m)}$

Decisional (f, g, F)-GDDHE Problem [33]
Instance: $g, g^a, g^{a^2}, \cdots, g^{a^{n-1}}, g^{a f(a)}, g^{b \cdot a f(a)} \in \mathbb{G}$
  $h, h^a, h^{a^2}, \cdots, h^{a^{2k}}, h^{b \cdot g(a)} \in \mathbb{G}$
  $Z \in \mathbb{G}_T$
  $f(x), g(x)$ are co-prime polynomials of degree $n, k$, respectively
Decide: $Z \stackrel{?}{=} e(g, h)^{b \cdot f(a)}$

In the definition of decisional hard problems, the answer to the problem instance is either true or false. In particular, false in this book means $Z \neq g^{ab}$ in the DDH problem. There also exists a slightly different definition of false, where $Z$ is randomly chosen from $\mathbb{G}$. In this case, it is possible that $Z = g^{ab}$ holds with probability $\frac{1}{p}$ when the DDH problem is defined over a group of order $p$. We do not adopt this definition, in order to simplify the probability analysis. In this book, $Z = \text{True}$ in the DDH problem means $Z = g^{ab}$, while $Z = \text{False}$ means that $Z$ is randomly chosen from $\mathbb{G} \setminus \{g^{ab}\}$. The same rule will be applied to all decisional hard problems.

4.2.5 How to Prove New Hard Problems

In public-key cryptography, it is possible that the proposed scheme looks secure without any efficient attack, but there is no hard problem that can be adopted for the security reduction. In this case, we have to create a new hard problem. A new hard problem is like a newly proposed scheme: its hardness is not convincing unless it comes with an analysis. Here, we introduce three popular methods for the hardness analysis.
• The first method is by reduction. Suppose there exists an efficient solution algorithm that can solve a new hard problem, denoted by A. We construct a reduction algorithm that transforms a random instance of an existing hard problem, denoted by B, into an instance of the proposed problem A, such that a solution to the problem instance of A implies a solution to the problem instance of B. Since the problem B is hard, the assumption that the new hard problem A is easy is false. Therefore, the problem A is hard, without any computationally efficient solution algorithm.
For example, let the variant DDH problem be a new problem; we want to reduce its hardness to the DDH problem. Given a random instance $(g, g^a, g^b, Z)$ of the DDH problem, we randomly choose $z$ and generate an instance of the variant DDH problem as
$$(g, g^z, g^b, g^{az}, Z).$$
We have that $Z$ is true in the variant DDH problem if and only if $Z = g^{ab}$, which is also true in the DDH problem. Therefore, the solution to the variant DDH problem instance is the solution to the DDH problem instance, and the variant DDH problem is not easier than the DDH problem. This reduction seems to be the same as a security reduction from breaking a proposed scheme to solving an underlying hard problem. However, this reduction is static and much easier than the security reduction. The reasons will be explained in Section 4.5. (A short sketch of this instance transformation is given after this list.)
• The second method is by membership proof. Suppose there exists a general problem that has been proved hard without any computationally efficient solution algorithm. We only need to prove that the new hard problem is a particular case of this general hard problem.
For example, the decisional (P, Q, f)-GDHE problem is a general hard problem, and the decisional (f, g, F)-GDDHE problem is a specific problem. We only need to prove that the decisional (f, g, F)-GDDHE problem is a member of the decisional (P, Q, f)-GDHE problem.
• The third method is by intractability analysis in the generic group model. In this model, an adversary is only given a randomly chosen encoding of a group, instead of a specific group. Roughly speaking, the adversary cannot perform any group operation directly and must query all operations to an oracle, where only basic group operations are allowed to be queried. We analyze that the adversary cannot solve the hard problem under such an oracle. For example, the decisional (P, Q, f)-GDHE problem was analyzed in the generic group model in [22].
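As referenced in the first method above, the hardness reduction for the variant DDH problem is static: it only transforms one problem instance into another. A minimal Python sketch of this transformation over the toy subgroup used earlier (illustrative parameters only) might look as follows; the true/false status of Z is preserved by the transformation.

import random

# Transform a DDH instance (g, g^a, g^b, Z) into a variant DDH instance
# (g, g^z, g^b, g^{az}, Z) in the toy order-p subgroup.
p, q = 101, 607
g = pow(3, (q - 1) // p, q)

def ddh_to_variant_ddh(instance):
    gen, g_a, g_b, Z = instance
    z = random.randrange(1, p)
    # In the output, Z is a true variant DDH target if and only if Z = g^{ab}.
    return (gen, pow(gen, z, q), g_b, pow(g_a, z, q), Z)

# build a true DDH instance (secrets a, b known only to the instance generator)
a, b = random.randrange(1, p), random.randrange(1, p)
ddh_instance = (g, pow(g, a, q), pow(g, b, q), pow(g, (a * b) % p, q))
print(ddh_to_variant_ddh(ddh_instance))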
The methods mentioned above for hardness analysis are just used to convince us
that a new hard problem is at least as hard as an existing hard problem, or that a new
hard problem is hard under ideal conditions. The first two methods are much easier
for the beginner than the third one. Note that the third method is only suitable for
group-based hard problems.
4.3 An Overview of Security Reduction 47

4.2.6 Weak Assumptions and Strong Assumptions

All hardness assumptions can be classified into weak assumptions and strong as-
sumptions, but the classification is not very precise.
• Weak assumptions over the group-based mathematical primitive are those hard problems, such as the CDH problem, whose security levels are very close to that of the DL problem. The security level is only associated with the input security parameter for the generation of the underlying mathematical primitive. A weak assumption is also regarded as a standard assumption.
• Strong assumptions over the group-based mathematical primitive are those hard problems, such as the q-SDH problem, whose security levels are lower than that of the DL problem. The security level is associated not only with the input security parameter for the generation of the underlying mathematical primitive, but also with other parameters, such as the size of each problem instance.
Here, “weak” means that the time cost of breaking a hardness assumption is much
greater than that for “strong.” The word “weak” is better than the word “strong” in
hardness assumptions, because it is harder to break a weak assumption than to break
a strong assumption. A strong assumption means that the hardness assumption is
relatively risky and unreliable. Weak assumption and strong assumption are two
concepts used to judge whether an underlying hardness assumption for a proposed
scheme is good or not.

4.3 An Overview of Security Reduction

Security reduction was invented to prove that breaking a proposed scheme implies
solving a mathematical hard problem. In this section, we describe how a security
reduction works and explain some important concepts in security reduction.

4.3.1 Security Models

When we propose a scheme for a cryptosystem, we usually do not analyze the se-
curity of the proposed scheme against a list of attacks, such as replay attack and
collusion attack. Instead, we analyze that the proposed scheme is secure in a se-
curity model. A security model can be seen as an abstract of multiple attacks for a
cryptosystem. If a proposed scheme is secure in a security model, it is secure against
any attack that can be described and captured in this security model.
To model the security for a cryptosystem, a virtual party, called the challenger,
is invented to interact with an adversary. A security model can be seen as a game
(interactively) played between the challenger and the adversary. The challenger cre-
ates a scheme following the algorithm (definition) of the cryptosystem and knows
secrets, such as the secret key, while the adversary aims to break this scheme. A
security model mainly consists of the following definitions.
• What information the adversary can query.
• When the adversary can query information.
• How the adversary wins the game (breaks the scheme).
The security models for different cryptosystems might be entirely different, because
the security services are not the same.
We give an example to show that the security model named IND-ID-CPA for IBE
captures the collusion attack. The security model can be simply revisited as follows.
Setup. The challenger runs the setup algorithm of IBE, gives the master public key to the adversary, and keeps the master secret key.
Phase 1. The adversary makes private-key queries in this phase. The challenger responds to queries on any identity following the key generation algorithm of IBE.
Challenge. The adversary outputs two distinct messages m_0, m_1 from the same message space and an identity ID* to be challenged, whose private key has not been queried in Phase 1. The challenger randomly flips a coin c ∈ {0, 1} and returns the challenge ciphertext CT* = E[mpk, ID*, m_c] to the adversary.
Phase 2. The challenger responds to private-key queries in the same way as in Phase 1, with the restriction that no private-key query is allowed on ID*.
Guess. The adversary outputs a guess c′ of c and wins the game if c′ = c.
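The game above can be written as a short driver that mediates between the challenger's algorithms and the adversary. The sketch below is a hypothetical Python skeleton of one game run; the ibe and adversary interfaces are assumptions made only for illustration and do not refer to any concrete scheme.

import random

def ind_id_cpa_game(ibe, adversary):
    # Assumed (hypothetical) interfaces:
    #   ibe.setup() -> (mpk, msk); ibe.keygen(msk, id) -> dk; ibe.encrypt(mpk, id, m) -> ct
    #   adversary.phase1(mpk, oracle); adversary.challenge() -> (m0, m1, id*);
    #   adversary.guess(ct, oracle) -> bit
    mpk, msk = ibe.setup()
    queried = set()

    def keygen_oracle(identity):               # Phase 1 private-key queries
        queried.add(identity)
        return ibe.keygen(msk, identity)

    adversary.phase1(mpk, keygen_oracle)

    m0, m1, target_id = adversary.challenge()
    assert target_id not in queried            # rule out the trivial win
    c = random.randint(0, 1)                   # challenger's secret coin
    ct = ibe.encrypt(mpk, target_id, (m0, m1)[c])

    def keygen_oracle_phase2(identity):        # Phase 2: the challenge identity is forbidden
        assert identity != target_id
        return keygen_oracle(identity)

    return adversary.guess(ct, keygen_oracle_phase2) == c   # True iff the adversary wins

In particular, nothing stops the adversary from querying the private keys of two identities in Phase 1 and choosing a third identity as the challenge identity, which is exactly how the collusion attack discussed next is captured.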
The collusion attack on an IBE scheme is stated as follows. If a proposed IBE scheme is insecure against the collusion attack, two users, namely ID_1 and ID_2, can together use their private keys d_{ID_1}, d_{ID_2} to decrypt a ciphertext CT created for a third identity, namely ID_3. We now investigate whether or not the proposed scheme is secure in the above security model. Following the security model, the adversary can query ID_1, ID_2 for private keys and set ID* = ID_3 as the challenge identity. If the proposed scheme is insecure against the collusion attack, the adversary can always correctly guess the encrypted message and so win the game. Therefore, if a proposed IBE scheme is secure in this security model, the proposed scheme is secure against the collusion attack.
A correct security model definition for a cryptosystem requires that the adversary cannot trivially win the game in the security model. Otherwise, no matter how the scheme is constructed, the adversary can win the game, and then the scheme is insecure in such a security model. To satisfy this requirement, the adversary must be prevented
from making any trivial query. For example, the adversary must be prevented from
querying the private key of ID∗ , which would allow the adversary to simply win
the game by running the decryption algorithm on the challenge ciphertext with the
private key of ID∗ . A security model is ideal and the best if the adversary can make
any queries at any time excluding those queries that will allow trivial attacks. A
proposed scheme provably secure in an ideal security model is more secure than a
scheme provably secure in other security models.

4.3.2 Weak Security Models and Strong Security Models

A cryptosystem might have more than one security model for the same security
service. These security models can be classified into the following two types.
• Weak Security Model. A security model is weak if the adversary is restricted in
its set of allowed queries or has to reveal some queries in advance to the chal-
lenger. For example, in the security model of IND-sID-CPA for identity-based
encryption, the adversary cannot make any decryption query and has to specify
the challenge identity before seeing the master public key.
• Strong Security Model. A security model is strong if the adversary is not re-
stricted in the queries it can make (except those queries allowing trivial attacks)
and the adversary does not need to reveal any query in advance to the challenger.
For example, in the security model of IND-ID-CCA for identity-based encryp-
tion, the adversary can make decryption queries on any ciphertext different from
the challenge ciphertext, and the adversary does not need to specify the challenge
identity before the challenge phase.
If a proposed scheme is secure in a strong security model, it indicates that it
has strong security. The word “strong” is better than the word “weak” in the secu-
rity model. Recall that these two words for hardness assumptions have the opposite
senses. The reader might find that some security models are regarded as standard se-
curity models. A standard security model is the security model that has been widely
accepted as a standard to define a security service for a cryptosystem. For example,
existential unforgeability against chosen-message attacks is the standard security
model for digital signatures. Note that a standard security model is not necessarily
the strongest security model for a cryptosystem.

4.3.3 Proof by Testing

To measure whether a proposed scheme is secure or not in the corresponding security model, we can give the proposed scheme to the challenger for testing. The
challenger runs the proposed scheme and calls for attacks. Any adversary can in-
teract with the challenger and make queries to the challenger, while the challenger
will respond to these queries following the security model. The proposed scheme
is claimed to be secure if no adversary can win this game in polynomial time with
non-negligible advantage.
Unfortunately, we cannot prove the security of the proposed scheme by testing
in this way. Even though no adversary can win in this game, it does not mean that
the proposed scheme is truly secure. The reason is that the adversary might hide
its ability to break the proposed scheme during the call-for-attacks phase and will
launch an attack to break the scheme when the proposed scheme has been adopted
as a standard for applications.

4.3.4 Proof by Contradiction

A proof by contradiction is described as follows.

    A mathematical problem is believed to be hard.
    If a proposed scheme is insecure, we prove that this problem is easy.
    The assumption is then false, and the scheme is secure.

The proof by contradiction for public-key cryptography is explained as follows.


Firstly, we have a mathematical problem that is believed to be hard. Then, we give
a breaking assumption that there exists an adversary who can break the proposed
scheme in polynomial time with non-negligible advantage. That is, the adversary is
assumed to be able to break the proposed scheme by following the steps described
in the proof by testing. Next, we show that this mathematical hard problem is easy
because such an adversary exists. The contradiction indicates that the breaking as-
sumption must be false. In other words, the scheme is secure and cannot be broken.
Therefore, the proposed scheme is secure.
The contradiction occurs if and only if we can efficiently solve an underlying
hard problem with the help of the adversary. If the underlying hard problem is actu-
ally easy or we cannot efficiently solve the underlying hard problem, the proof will
fail to obtain a contradiction. A proof without contradiction does not mean that the
proposed scheme is insecure but that the proposed scheme is not provably secure.
That is, the given proof cannot convince us that the proposed scheme is provably
secure.

4.3.5 What Is Security Reduction?

The process in the proof by contradiction that turns "the proposed scheme is insecure" into "the underlying problem is easy" is called security reduction. A security reduction works if we can find a solution to a problem instance
of the mathematical hard problem with the help of the adversary’s attack. However,
security reduction cannot directly reduce the adversary’s attack on the proposed
scheme to solving an underlying hard problem. This is because the proposed scheme
and the problem instance are generated independently.
In the security reduction, the proposed scheme is replaced with a different but
well-prepared scheme, which is associated with a problem instance. We extract a
solution to the problem instance from the adversary’s attack on such a different but
well-prepared scheme to solve the mathematical problem. The core and difficulty of
the security reduction is to generate such a different but well-prepared scheme. We next introduce the following important concepts.
• The concepts of real scheme, challenger, and real attack associated with the
proposed scheme.
• The concepts of simulated scheme, simulator, and simulation associated with the
different but well-prepared scheme.

4.3.6 Real Scheme and Simulated Scheme

In a security reduction, both the real scheme and the simulated scheme are schemes.
However, their generation and application are completely different.
• A real scheme is a scheme generated with a security parameter following the
scheme algorithm described in the proposed scheme. A real scheme can be seen
as a specific instantiation of the proposed scheme (algorithm). When the adver-
sary interacts with the real scheme following the defined security model, we as-
sume that the adversary can break this scheme. For simplicity, we can view the
proposed scheme as the real scheme.
• A simulated scheme is a scheme generated with a random instance of an under-
lying hard problem following the reduction algorithm. In the security reduction,
we want the adversary to interact with such a simulated scheme and break it with
the same advantage as that of breaking the real scheme.

In a security reduction, it is not necessary to fully implement the simulated scheme. We only need to implement those algorithms involved in the responses
to queries from the adversary. For example, when proving an encryption scheme in
the IND-CPA security model, we do not need to implement the decryption algo-
rithm for the simulated scheme, because the simulator is not required to respond to
decryption queries from the adversary.

4.3.7 Challenger and Simulator

When the adversary interacts with a scheme, the scheme needs to respond to queries
made by the adversary. To easily distinguish between the interaction with a real
scheme and the interaction with a simulated scheme, two virtual parties, called the
challenger and the simulator, are adopted.
• When the adversary interacts with a real scheme, we say that the adversary is
interacting with the challenger, who creates a real scheme and responds to queries
from the adversary. The challenger only appears in the security model and in the
security description where the adversary needs to interact with a real scheme.
• When the adversary interacts with a simulated scheme, we say that the adversary
is interacting with the simulator, who creates a simulated scheme and responds to
queries from the adversary. The simulator only appears in the security reduction
and is the party who runs the reduction algorithm.
These two parties appear in different circumstances (i.e., security model and se-
curity reduction) and perform different computations because the challenger runs
the real scheme while the simulator runs the simulated scheme. We can even de-
scribe the interaction between the adversary and the scheme without mentioning the
entity who runs the scheme.

4.3.8 Real Attack and Simulation

In a security reduction, to make sure that the adversary is able to break the simulated
scheme with the advantage defined in the breaking assumption, we always need
to prove that the simulation is indistinguishable from the real attack (on the real
scheme). The concepts of real attack and simulation can be further explained as
follows.
• The real attack is the interaction between the adversary and the challenger, who
runs the real (proposed) scheme following the security model.
• The simulation is the interaction between the adversary and the simulator, who
runs the simulated scheme following the reduction algorithm. Simulation is a part
of security reduction.
If the simulation is indistinguishable from the real attack, the adversary cannot tell whether the scheme it is interacting with is a real scheme or a simulated scheme. That is, the simulated scheme is indistinguishable from the real scheme from the
point of view of the adversary. In this book, the indistinguishability between the
simulation and the real attack is equivalent to that between the simulated scheme
and the real scheme.
When the adversary is asked to interact with a given scheme, we stress that the
given scheme can be a real scheme or a simulated scheme, or can be neither the real
scheme nor the simulated scheme. In the breaking assumption, we assume that the
adversary is able to break the real scheme, but we cannot directly use this assump-
tion to deduce that the adversary will also break the simulated scheme, unless the
simulated scheme is indistinguishable from the real scheme.

4.3.9 Attacks and Hard Problems

The aim of a security reduction is to reduce an adversary’s attack to solving an


underlying hard problem. An attack can be a computational attack or a decisional
attack. A computational attack, such as forging a valid signature, requires the adver-
sary to find a correct answer from an exponential-size answer space. A decisional
attack, such as guessing the message in the challenge ciphertext in the IND-CPA
security model, only requires the adversary to guess 0 or 1. Security against a deci-
sional attack is also known as indistinguishability security. According to the types
of attacks and the types of hard problems, we can classify security reductions into
the following three types.
• Computational Attacks to Computational Hard Problems. For example, in
the security model of EU-CMA for digital signatures, the adversary wins the
game if it can forge a valid signature of a new message that has not been queried.
The forgery is a computational attack. In the security reduction, the simulator
will use the forged signature to solve a computational hard problem.

• Decisional Attacks to Decisional Hard Problems. For example, in the security model of IND-CPA for encryption, the adversary wins the game if it can cor-
rectly guess the message mc in the challenge ciphertext. The guess is a decisional
attack. In the security reduction, the simulator will use the adversary’s guess of
the encrypted message to solve a decisional hard problem.
• Decisional Attacks to Computational Hard Problems. This type is a special
reduction because it is only available for security reductions in the random oracle
model, where the simulator uses hash queries made by the adversary to solve a
computational hard problem. The details of this type will be introduced in Section
4.11 for encryption schemes under computational hardness assumptions.
We rarely reduce a computational attack to solving a decisional hard problem,
especially for digital signatures and encryption, although this type of reduction is
not wrong.

4.3.10 Reduction Cost and Reduction Loss

Suppose there exists an adversary who can break a proposed scheme in polynomial time $t$ with non-negligible advantage $\varepsilon$. Generally speaking, in the security reduction, we will construct a simulator to solve an underlying hard problem with $(t', \varepsilon')$ defined as follows:
$$t' = t + T, \qquad \varepsilon' = \frac{\varepsilon}{L}.$$
• T is referred to as the reduction cost, which is also known as the time cost. The
size of T is mainly dependent on the number of queries from the adversary and
the computation cost for a response to each query.
• L is referred to as the reduction loss, also called the security loss or loss factor.
The size of L is dependent on the proposed security reduction. The minimum
loss factor is 1, which means that there is no loss in the reduction. Many proposed
schemes in the literature have loss factors that are linear in the number of queries,
such as signature queries or hash queries.
In a security reduction, solving an underlying hard problem with $(t', \varepsilon')$ is acceptable as long as T and L are polynomial, because this means that we can solve
the underlying hard problem in polynomial time with non-negligible advantage. If
one of them is exponentially large, the security reduction will fail as there is no
contradiction.

4.3.11 Loose Reduction and Tight Reduction

In a security reduction, loose reduction and tight reduction are two concepts intro-
duced to measure the reduction loss.

• We say that a security reduction is loose if L is at least linear in the number of queries, such as signature queries or hash queries, made by the adversary.
• We say that a security reduction is tight if L is a constant number or is small (e.g.,
sub-linear in the number of queries).
If L is as large as $2^k$ for an integer k, we say the reduction has a k-bit security loss. Theoretically, for group-based cryptography, we must enlarge the group so that it provides an additional k bits of security to make sure that the proposed scheme is as secure as the
underlying hard problem. We rarely consider the time cost in measuring whether a
security reduction is tight or not. The main reason is that the time cost of a security
reduction is mainly determined by the hardness assumption and the security model,
and is independent of the proposed security reduction.

4.3.12 Security Level Revisited

Suppose a proposed scheme, denoted by S, is generated with a security parameter λ. It is really hard to calculate its concrete security level because we do not know
which attack is the most efficient one. However, we can calculate the lower bound
security level and the upper bound security level of the proposed scheme S.
• Suppose solving a hard problem, denoted by A, can immediately be used to break
the scheme S. The upper bound security level of the scheme S is the security level
of the problem A. In group-based cryptography, the discrete logarithm problem
is the fundamental hard problem. Solving the discrete logarithm problem implies
breaking all group-based schemes. Therefore, the upper bound security level of
all group-based schemes is the security level of the discrete logarithm problem
defined over the group.
• Suppose breaking the scheme S can be reduced to solving an underlying hard
problem, denoted by B. The lower bound security level of the scheme S is calcu-
lated from the underlying hard problem B. However, we cannot simply say that
the lower bound security level of the scheme S is equal to the security level of
the problem B. It still depends on the reduction cost and the reduction loss. The
equality holds if and only if there is no reduction cost and no reduction loss.
We now use an example to explain the range of the security level for a proposed
scheme based on the following statements.
1. A cyclic group G is generated for scheme constructions.
2. The discrete logarithm problem over the group G has 80-bit security.
3. Another problem, denoted by B, over the group G has only 60-bit security.
4. The proposed scheme is constructed over the group G, and its security is reduced to the underlying hard problem B. To be precise, if there exists an adversary who can (t, ε)-break the proposed scheme, there exists a simulator who can solve the underlying hard problem B in time $2^5 \cdot t$ with advantage $\frac{\varepsilon}{2^{10}}$.

The security level of the proposed scheme is at most $\frac{t}{\varepsilon}$. Firstly, the upper bound security level of the scheme is 80 bits. Secondly, since the security level of the underlying hard problem B is 60 bits, we have
$$\frac{2^5 \cdot t}{\varepsilon / 2^{10}} = 2^{15} \cdot \frac{t}{\varepsilon} \geq 2^{60}.$$
Thus, we obtain $\frac{t}{\varepsilon} \geq 2^{45}$, and the lower bound security level of the scheme is 45 bits. Therefore, the range of the proposed scheme's security level in bits is [45, 80].
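The same deduction can be packaged as a small helper that computes the provable lower bound from the underlying problem's bit security, the time factor, and the loss factor. This Python sketch is only a restatement of the inequality above, under the simplifying assumption that the reduction time is a multiplicative factor.

import math

def lower_bound_bits(problem_bits, time_factor, loss_factor):
    # If a (t, eps)-adversary yields a solver running in time time_factor * t
    # with advantage eps / loss_factor, and the problem requires 2^problem_bits
    # work per unit advantage, then t / eps >= 2^problem_bits / (time_factor * loss_factor).
    return problem_bits - math.log2(time_factor * loss_factor)

print(lower_bound_bits(60, 2**5, 2**10))   # 45.0, matching the example above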
The lower bound security level of the proposed scheme is not 60 but 45 due to
the reduction cost and the reduction loss. To make sure that the security level of the
proposed scheme is at least 80 bits from the above deduction, we have the following
two methods.
• We program a security reduction for the proposed scheme S under the discrete
logarithm assumption without any reduction cost or reduction loss. That is, the
quality of the security reduction from the proposed scheme to the discrete loga-
rithm problem is perfect. However, few schemes in the literature can be tightly
reduced to the discrete logarithm assumption.
• We generate the group G with a larger security parameter such that the underlying
hard problem B has 95-bit security, and then the lower bound security level of the
scheme is 80 bits. This solution works with the tradeoff that we have to increase the length of the group representation, which will decrease the computational efficiency of group operations.
The security range [45, 80] does not mean that there must exist an attack algorithm that can break the scheme in $2^{45}$ steps. It only states that 45-bit security is
the provable lower bound security level. Whether the provable lower bound security
level can be increased or not is unknown and is dependent on the security reduction
that we can propose.
We emphasize that in a real security reduction, it is actually very hard to calculate the lower bound security level because the reduction cost is $t + T$, and we cannot calculate the security level $t/\varepsilon$ from $(t + T, \frac{\varepsilon}{L})$. The above argument and discussion
are artificial and only given to help the reader understand the concrete security of
the proposed scheme. However, it is always correct that the underlying hardness as-
sumption should be as weak as the discrete logarithm assumption and (T, L) should
be as small as possible.

4.3.13 Ideal Security Reduction

An ideal security reduction is the best security reduction that we can program for a
proposed scheme. It should capture the following four features.

• Security Model. The security model should be the strongest security model that
allows the adversary to maximally, flexibly, and adaptively make queries to the
challenger and win the game with a minimum requirement.
• Hard Problem. The underlying hard problem adopted for the security reduction
must be the hardest one among all hard problems defined over the same math-
ematical primitive. For example, the discrete logarithm problem is the hardest
problem among all problems defined over a group.
• Reduction Cost and Reduction Loss. The reduction cost T and the reduction
loss L are the minimized values. That is, T is linear in the number of queries
made by the adversary and L = 1.
• Computational Restrictions on Adversary. There is no computational restric-
tion on the adversary except time and advantage. For example, the adversary is
allowed to access a hash function by itself. However, in the random oracle model,
the adversary is not allowed to access a hash function but has to query a random
oracle instead.

Unfortunately, an inherent tradeoff among these features is very common in all se-
curity reductions proposed in the literature. For example, we can construct an ef-
ficient signature scheme whose security is under a weak hardness assumption, but
the security reduction must use random oracles. We can also construct a signature
scheme without random oracles in the security reduction, but it is accompanied with
a strong assumption or a long public key. Currently, it seems technically impossible
to construct a scheme with an ideal security reduction satisfying all four features
mentioned above.

4.4 An Overview of Correct Security Reduction

4.4.1 What Should Bob Do?

Suppose Bob has constructed a scheme along with its security reduction. In the given
security reduction, Bob proves that if there exists an adversary who can break his
scheme in polynomial time with non-negligible advantage, he can construct a sim-
ulator to solve an underlying hard problem in polynomial time with non-negligible
advantage. Therefore, Bob has shown by contradiction that there exists no adversary
who can break his proposed scheme, since the hard problem cannot be solved. Now,
we have the following question:

How can Bob convince us that his security reduction is truly correct?

The simplest way of proving the correctness of his security reduction is to demonstrate the security reduction, which outputs a solution to a hard problem.

Unfortunately, Bob cannot demonstrate this because a successful demonstration requires a successful attack, and this attack indicates that the proposed scheme is inse-
cure. Without the demonstration, another way is that Bob analyzes the correctness
of his security reduction. We stress that the correctness analysis of the security re-
duction is the most difficult part of the security proof of public-key cryptography.
In the following subsections, we introduce the basic preliminaries for understanding
when a security reduction is correct.

4.4.2 Understanding Security Reduction

To program a security reduction and prove the security of a proposed scheme, we have the following important observations.
• At the beginning of the security proof, we assume that there exists an adversary
who can break the proposed scheme. To be precise, when the adversary interacts
with a real scheme following the corresponding security model, the adversary is
able to break the real scheme.
• The adversary is assumed to be able to break any given scheme following the se-
curity model if the given scheme looks like a real scheme during the interaction.
That is, the adversary can break any scheme that looks like a real scheme from
the point of view of the adversary.
• In the security reduction, the adversary interacts with a given scheme that is a
simulated scheme. We want the adversary to believe that the given scheme is a
real scheme and break it. At the end, the adversary’s attack is reduced to solving
an underlying hard problem.
• We do not know whether or not the adversary will break the simulated scheme
with the same advantage as that of breaking the real scheme if the adversary
finds out that the given scheme is not a real scheme. We also do not know how
the adversary breaks the simulated scheme when the given scheme looks like a
real scheme.
The difficulties of the security reduction include how to ensure that the adversary
accepts the simulated scheme as a real scheme and how to ensure that the adversary’s
attack can be reduced to solving an underlying hard problem.

4.4.3 Successful Simulation and Indistinguishable Simulation

In this book, successful simulation and indistinguishable simulation are two differ-
ent concepts. They are explained as follows.
• Successful Simulation. A simulation is successful from the point of view of the
simulator if the simulator does not abort in the simulation while interacting with
the adversary. The simulator makes the decision to abort the simulation or not
according to the reduction algorithm. We assume that the adversary cannot abort
the attack before the simulation is completed. In this book, a simulation refers to
a successful simulation unless specified otherwise.
• Indistinguishable Simulation. A successful simulation is indistinguishable from
the real attack if the adversary cannot distinguish the simulated scheme from the
real scheme. An unsuccessful simulation must be distinguishable from the real at-
tack. Whether a (successful) simulation is distinguishable or indistinguishable is
judged by the adversary. An indistinguishable simulation is desirable, especially
when we want the adversary to break the simulated scheme with the advantage
defined in the breaking assumption.
We emphasize that a correct security reduction might fail with a certain proba-
bility to generate a successful simulation. In this book, a successful simulation only
means that the simulator does not abort in the simulation. That is, in a success-
ful simulation, the simulator’s responses to the adversary’s queries might even be
incorrect. However, to simplify the proof, the reduction algorithm should tell the
simulator to abort if queries from the adversary cannot be correctly answered. For
example, in the security reduction for a digital signature scheme, the simulator must
abort if it cannot compute valid signatures on queried messages for the adversary.
However, even with such an assumption, a successful simulation does not mean that
the simulation is indistinguishable from the real attack. In Section 4.5.11, we discuss
how the adversary can distinguish the simulated scheme from the real scheme.

4.4.4 Failed Attack and Successful Attack

We define a failed attack and a successful attack in order to clarify the adversary’s
attack on the simulated scheme.
• Failed Attack. An attack by the adversary fails if the attack cannot break the
simulated scheme following the security model. Any output such as an error
symbol ⊥, a random string, a wrong answer, or an abort from the adversary is a
failed attack.
• Successful Attack. An attack by the adversary is successful if the attack can
break the simulated scheme following the security model. In this book, an attack
refers to a successful attack unless specified otherwise.
We define these two types of attacks to simplify the description of reduction. In
particular, there is no abort from the adversary at the end of the simulation. Any
output that is not a successful attack is treated as a failed attack. The simulator may
abort during the simulation because it cannot generate a successful simulation. An
attack by the adversary is either failed or successful. If the adversary returns a failed attack, this is equivalent to returning a successful attack with probability 0. Therefore, at the end of the simulation, the adversary will launch a successful
attack with a certain probability.

4.4.5 Useless Attack and Useful Attack

Suppose the adversary is given a simulated scheme. The adversary’s attack on the
simulated scheme can be classified into the following two types.
• Useless Attack. A useless attack is an attack by the adversary that cannot be
reduced to solving an underlying hard problem.
• Useful Attack. A useful attack is an attack by the adversary that can be reduced
to solving an underlying hard problem.
According to the above definitions, an attack by the adversary on the simulated
scheme must be either useless or useful. We emphasize that a failed attack can be
a useful attack and a successful attack can be a useless attack, depending on the
cryptosystem, proposed scheme, and its security reduction.

4.4.6 Attack in Simulation

We have classified three types of simulations (i.e., successful simulation, distinguishable simulation, and indistinguishable simulation) and four types of attacks
(i.e., failed attack, successful attack, useless attack, and useful attack). In Section
4.5 we will introduce and explain the following relationships that are extremely
important in the security reduction.
• An attack at the end of a successful simulation can be either a failed attack or
a successful attack, which is dependent on whether the successful simulation is
distinguishable or not.
• An attack at the end of a distinguishable simulation can be either a failed attack or
a successful attack. That is, the adversary can decide which attack it will launch.
This does not contradict the breaking assumption.
• An attack at the end of an indistinguishable simulation is a successful attack
with probability defined in the breaking assumption. The attack on the simulated
scheme, however, can be either a useful attack or a useless attack. We emphasize
that an indistinguishable simulation cannot ensure a useful attack in the security
reduction.
The indistinguishability analysis of a simulation is important in all security re-
ductions. However, it is not necessary to program an indistinguishable simulation in
the entire simulation. For encryption under a decisional hardness assumption (see
Section 4.10), the simulation must be distinguishable if Z is false. For encryption
under a computational hardness assumption (see Section 4.11), we only require that
the simulation is indistinguishable before the adversary makes a specific hash query
to the random oracle.

4.4.7 Successful/Correct Security Reduction

The concepts of successful security reduction and correct security reduction are
regarded as different in this book. They are explained as follows.
• Successful Security Reduction. We say that a security reduction is successful
if the simulation is successful and the adversary’s attack in the simulation is a
useful attack.
• Correct Security Reduction. We say that a security reduction for a proposed
scheme is correct if the advantage of solving an underlying hard problem us-
ing the adversary’s attack is non-negligible in polynomial time if the breaking
assumption holds.
A successful security reduction is desired in order to obtain a correct security
reduction. That is, a security reduction is correct if the security reduction can be
successful in solving an underlying hard problem in polynomial time with non-
negligible advantage.

4.4.8 Components of a Security Proof

A security proof by a security reduction should have the following components in order to prove that the proposed scheme is secure.
• Simulation. The reduction algorithm should show how the simulator generates
a simulated scheme and interacts with the adversary.
• Solution. The reduction algorithm should show how the simulator solves the
underlying hard problem by returning a solution to a problem instance with the
help of the adversary’s attack on the simulated scheme.
• Analysis. After the simulation and solution, there should be an analysis showing
that the advantage of solving the underlying hard problem is non-negligible if the
breaking assumption holds.
The above three components are essential for proving that the security reduc-
tion is correct or the proposed scheme is truly secure under the underlying hard-
ness assumption in the corresponding security model. These three components are
quite different in detail depending on the cryptosystem, proposed scheme, underly-
ing hard problem, and reduction approach.
In this book, we only show these three components for digital signatures, encryp-
tion under decisional hardness assumptions, and encryption under computational
hardness assumptions.

4.5 An Overview of the Adversary

In this section, we take a close look at the adversary, who is assumed to be able to
break the proposed scheme. It is important to understand which attack the adversary
can launch or will launch on the simulated scheme.

4.5.1 Black-Box Adversary

The breaking assumption states that there exists an adversary who can break the
proposed scheme in polynomial time with non-negligible advantage. There is no
restriction on the adversary except time and advantage. The adversary in the security
reduction is a black-box adversary. The most important property of a black box
is that what the adversary will query and which specific attack the adversary will
launch are not restricted and are unknown to the simulator.
For such a black-box adversary, we use adaptive attack to describe the black-box
adversary’s behavior. Adaptive attacks will be introduced in the next subsection. We
emphasize that the adversary in the security reduction is far more than a black-box
adversary. The reason will be explained soon after introducing the adaptive attack.

4.5.2 What Is an Adaptive Attack?

Let a be an integer chosen from the set {0, 1}. If a is randomly chosen, we have
$$\Pr[a = 0] = \Pr[a = 1] = \frac{1}{2}.$$
However, if a is adaptively chosen, the two probabilities Pr[a = 0] and Pr[a = 1] are
unknown. An adaptive attack is a specific attack where the adversary’s choices from
the given space are not uniformly distributed but based on an unknown probability
distribution.
An adaptive attack is composed of the following three parts in a security reduc-
tion between an adversary and a simulator. We take the security reduction for a
digital signature scheme in the EU-CMA security model as an example to explain
these three parts. Suppose the message space is {m1 , m2 , m3 , m4 , m5 } with five dis-
tinct messages and the adversary will first query the signatures of two messages
before forging a valid signature of a new message.
• What the adversary will query to the simulator is adaptive. We cannot claim
that a particular message, for example m3 , will be queried for its signature with
probability 2/5. Instead, the adversary will query the signature of message mi
with unknown probability.

• How the adversary will query to the simulator is adaptive. The adversary might
output two messages for signature queries at the same time or one at a time.
For the latter, the adversary decides the message for its first signature query.
Upon seeing the received signature, it will then decide the second message to be
queried.
• What the adversary will output for the simulator is adaptive. If the adversary
makes signature queries on the messages m3 and m4 , we cannot claim that the
forged signature will be on a random message m∗ from {m1 , m2 , m5 }. Instead,
the adversary will forge a signature of one of the messages from {m1 , m2 , m5 }
with unknown probabilities between [0, 1] satisfying

Pr[m∗ = m1 ] + Pr[m∗ = m2 ] + Pr[m∗ = m5 ] = 1.

An adaptive attack is not just about how the adversary will query to the simula-
tor. All choices are also made adaptively by the adversary unless restricted in the
corresponding security model. For example, in a weak security model for digital
signatures, the adversary must forge the signature of a message m∗ designated by
the simulator. In this case, m∗ is not adaptively chosen by the adversary. There are
some security models, such as IND-sID-CPA for IBE, where the adversary needs
to output a challenge identity before seeing the master public key, but it can still
adaptively choose this identity from the identity space.

4.5.3 Malicious Adversary

Suppose there are only two distinct attacks that can break the simulated scheme
in a security reduction. One attack is useful and the other is useless. Consider the
following question.

What is the probability of returning a useful attack by the adversary?

According to the description of the black-box adversary, we know that this prob-
ability is unknown due to the adaptive attack by the adversary. However, a correct
security reduction requires us to calculate the probability of returning a useful at-
tack. To solve this problem, we amplify the black-box adversary into a malicious
adversary and consider the maximum probability of returning a useless attack by the
malicious adversary.
The malicious adversary is still a black-box adversary who launches an adaptive
attack. However, as long as a useless attack does not contradict the breaking
assumption, the malicious adversary will try its best to launch one, unless it does
not know how to. If the maximum probability of returning a useless attack is not
overwhelmingly close to 1, the probability of returning a useful attack is
non-negligible, and thus the security reduction is correct. If a security reduction
works against such a malicious adversary, the security reduction definitely works
against any adversary who can break the proposed scheme. The reason is that this
maximum probability upper-bounds the likelihood with which any adversary can make
its attack useless. From now on, an adversary refers to a
malicious adversary unless specified otherwise.

4.5.4 The Adversary in a Toy Game

To help the reader better understand the meaning of the malicious adversary when
the simulation is indistinguishable, we create the following toy game to explain the
difficulty of security reduction. In this toy game,
• The simulator generates the simulated scheme with a random b ∈ {0, 1}.
• The adversary adaptively chooses a ∈ {0, 1} as an attack.
• The adversary’s attack is useful if and only if a 6= b.
In the security reduction, a can be seen as the adaptive attack launched by the
adversary where both a = 0 and a = 1 can break the scheme, and b can be seen as
the secret information in the simulated scheme. In the simulation, all the parameters
given to the adversary may include the secret information about how to launch a
useless attack. The malicious adversary intends to make this attack useless. It will
try to guess b from the simulated scheme and then output an attack a in such a way
that Pr[a = b] = 1.
Security reduction is hard because we must program the simulation in such a way
that the adversary does not know how to launch a useless attack. In the correctness
analysis of the security reduction, the probability Pr[a ≠ b] must be non-negligible.
To achieve this, b must be random and independent of all the parameters given to
the adversary, so that the adversary can only correctly guess b with probability 1/2.
In this case, we will have Pr[a ≠ b] = 1/2 even though a is adaptively chosen by the
adversary. The corresponding probability analysis will be given in Section 4.6.4.
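To see the toy game concretely, the following sketch (our own illustration, not part
of the formal treatment) simulates it. When b is hidden, any adaptive choice of a that
cannot depend on b behaves the same, so we model it as a coin flip; when b leaks from
the simulation, the malicious adversary simply sets a = b and the attack is never useful.

```python
import random

TRIALS = 100_000

def toy_game(b_leaks):
    useful = 0
    for _ in range(TRIALS):
        b = random.randint(0, 1)       # secret embedded in the simulated scheme
        if b_leaks:
            a = b                      # malicious adversary learns b and forces a useless attack
        else:
            a = random.randint(0, 1)   # b is hidden, so any adaptive choice is independent of b
        useful += (a != b)             # the attack is useful iff a != b
    return useful / TRIALS

print("Pr[a != b], b hidden:", toy_game(b_leaks=False))   # approximately 1/2
print("Pr[a != b], b leaked:", toy_game(b_leaks=True))    # 0.0
```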

4.5.5 Adversary’s Successful Attack and Its Probability

Let Pε be the probability of returning a successful attack on the proposed scheme


under the breaking assumption. In a security reduction the adversary will launch a
successful attack on the simulated scheme with probability described as follows.
• If the simulated scheme is indistinguishable from the real scheme, according
to the breaking assumption, the adversary will return a successful attack on the
simulated scheme with probability Pε .
• If the simulated scheme is distinguishable from the real scheme, the adversary
will return a successful attack on the simulated scheme with malicious and adap-
tive probability P∗ ∈ [0, 1] decided by the adversary itself. Returning a successful
attack with such malicious probability does not contradict the breaking assump-
tion, because the simulated scheme is different from the real scheme.
The first case is very straightforward, but the second case is a little complex,
because it varies and depends on the security reduction. The probability P∗ is quite
different in the security reductions for digital signatures and for encryption. The
details can be found in Section 4.9 and Section 4.10.

4.5.6 Adversary’s Computational Ability

All public-key schemes are only computationally secure. If there exists an adver-
sary who has unbounded computational power and can solve all computational hard
problems, it can definitely break any proposed scheme in polynomial time with non-
negligible advantage. For example, the adversary can use its unbounded computa-
tional power to solve the DL problem, where the discrete logarithm can be applied
to break all group-based schemes. Therefore, any proposed scheme is only secure
against an adversary with bounded computational power.
An inherent interesting question is to explore which problems the adversary can-
not solve when analyzing the security of a proposed scheme. A compact theorem
template for claiming that a proposed scheme is provably secure (without mention-
ing its security model) can be stated as follows.
Theorem 4.5.6.1 If the mathematical problem P is hard, the proposed scheme is
secure and there exists no adversary who can break the proposed scheme in polyno-
mial time with non-negligible advantage.
The above theorem states that the proposed scheme is secure under the hardness
assumption of problem P. It seems that the adversary’s computational ability should
be bounded in solving the hard problem P. That is, we only prove that the proposed
scheme is secure against an adversary who cannot solve the hard problem P or other
problems harder than P. This proof strategy is acceptable because we assume that
the problem P is hard, and hence do not care whether the proposed scheme is secure
or not against an adversary who can solve the problem P.
However, in the security reduction, the adversary’s computational ability is taken
to be unbounded to simplify the correctness analysis. Roughly speaking, the pro-
posed scheme is only secure against a computationally bounded adversary, but the
corresponding security reduction should even work against a computationally un-
bounded adversary. The reason will be given in the next subsection.

4.5.7 The Adversary’s Computational Ability in a Reduction

In a security reduction, the simulation, such as a response computed by the simu-


lator, might include some secret information (denoted by I in the following discus-
sion). This piece of information tells the adversary how to generate a useless attack
on the simulated scheme. For example, b in the toy game of Section 4.5.4 is the
secret information, which should be unknown to the adversary. Other examples can
be found at the end of this chapter. The simulator must hide this secret informa-
tion. Otherwise, once the malicious adversary obtains the information I from the
given parameters in the simulation, it will always launch a useless attack. Here,
given parameters are all the information that the adversary knows from the simu-
lated scheme, such as public key and signatures.
The simplest way to hide the information I is for the simulator never to reveal the
information I to the adversary. Unfortunately, we found that all security reductions
in the literature have to respond to queries where responses include the information
I. To make sure that the adversary does not know the information I, the simulator
must use some hard problems to hide it. Let P be the underlying hard problem in
the security reduction. There are three different methods for the simulator to hide
the information I in the security reduction.
• The simulator programs the security reduction in such a way that the information
I is hidden by a set of new hard problems P1 , P2 , · · · , Pq . The correctness of the
security reduction requires that these new hard problems are not easier than the
problem P. Otherwise, the adversary can solve them to obtain the information
I. To achieve this, we must prove that the information I can only be obtained
by solving these hard problems, which are not easier than the underlying hard
problem P. This method is challenging and impractical because we may not be
able to reduce the hard problems P1 , P2 , · · · , Pq to the underlying hard problem P.
• The simulator programs the security reduction in such a way that the information
I is hidden by the problem P. This method works because we assume that it is
hard for the adversary to solve the problem P. However, how to use the problem
P to hide the information I from the adversary is challenging. For example, P
is the CDH problem. Suppose the information I is hidden with g^{ab}, where the
adversary knows (g, g^a, g^b). The simulator should not provide additional group
elements, such as the group element g^{a²b}, to the adversary. Otherwise, it is no
longer a CDH problem.
• The simulator programs the security reduction in such a way that the information
I is hidden with some absolutely hard problems that cannot be solved, even if
the adversary has unbounded computational power. This method is very efficient
because those absolutely hard problems can be universally used and independent
of the underlying hard problem P. In this case, we only need to prove that the
information I is hidden with absolutely hard problems.
This book only introduces security reductions where the secret information is
hidden with the third method. Therefore, the adversary can be a computationally
unbounded adversary. This method is sufficient, but not necessary. However, it pro-
vides the most efficient method of analyzing the correctness of a security reduction.
We note that the third method was adopted in most security reductions proposed in
the literature.

4.5.8 The Adversary in a Reduction

A security reduction starts with the breaking assumption that there exists an adver-
sary who can break the proposed scheme in polynomial time with non-negligible
advantage. Then, we construct a simulator to generate a simulated scheme and use
the adversary’s attack to solve an underlying hard problem. According to the previ-
ous explanations, the adversary in the security reduction is summarized as follows.
• The adversary has unbounded computational power in solving all computational
hard problems defined over the adopted mathematical primitive and breaking the
simulated scheme.
• The adversary will maliciously try its best to launch a useless attack to break the
simulated scheme and make the security reduction fail.
We assume that the adversary has unbounded computational power, but we can-
not directly ask the adversary to solve an underlying hard problem for us. All that
the adversary will do for us is to launch a successful attack on a scheme that looks
like a real scheme. This is what the adversary can do and will do in a security re-
duction. Now, it is time to understand what the adversary knows and never knows
in a security reduction.

4.5.9 What the Adversary Knows

There are three types of information that the adversary knows.


• Scheme Algorithm. The adversary knows the scheme algorithm of the proposed
scheme. That is, the adversary knows how to precisely compute an output from
an input. For example, to break a signature scheme, the adversary knows the
system parameter generation algorithm, the key generation algorithm, the signing
algorithm, and the verification algorithm.
• Reduction Algorithm. The adversary knows the reduction algorithm proposed
for proving the security of the proposed scheme. Otherwise, we must prove that
the adversary cannot find the reduction algorithm by itself from the scheme algo-
rithm. For example, the adversary knows how the public key will be simulated;
how the random numbers will be chosen; how queries will be answered; and how
an underlying hard problem will be solved with a useful attack.
• How to Solve All Computational Hard Problems. The adversary has un-
bounded computational power and can solve all computational hard problems.
For example, in group-based cryptography, suppose (g, g^a) is given to the ad-
versary. We assume that the adversary can compute a before launching an attack.
However, the adversary cannot break other cryptographic primitives, such as hash
functions, when they are adopted as building blocks for constructing schemes.
Otherwise, it can break the building blocks to break the simulated scheme, so
that the security reduction is not successful.

We assume that the adversary knows the reduction algorithm. However, this does
not mean that the adversary knows all the secret parameters chosen in the simulated
scheme, although the simulated scheme is generated by the reduction algorithm.
There are some secrets that the adversary never knows. Otherwise, it is impossible
to obtain a successful security reduction.

4.5.10 What the Adversary Never Knows

There are three types of secrets that the adversary never knows.
• Random Numbers. The adversary does not know those random numbers (in-
cluding group elements) chosen by the simulator when the simulator generates
the simulated scheme unless they can be computed by the adversary. For exam-
ple, if the simulator randomly chooses two secret numbers x, y ∈ Z p , we assume
that they are unknown to the adversary. However, once (g, g^{x+y}) is given to the
adversary, the adversary knows x + y according to the previous subsection.
• Problem Instance. The adversary does not know the random instance of the
underlying hard problem given to the simulator. This assumption is desired to
simplify the proof of indistinguishability. For example, suppose Bob proposes
a scheme and a security reduction that shows that if there exists an adversary
who can break the scheme, the reduction can find the solution to the instance
(g, g^a) of the DL problem. In the security reduction, the adversary receives a
key pair (g, g^α), which is equal to (g, g^a). Since the adversary knows that (g, g^a)
is a problem instance, it will immediately find out that the given scheme is a
simulated scheme and stop the attack.
• How to Solve an Absolutely Hard Problem. The adversary does not know how
to solve an absolutely hard problem, such as computing (x, y) from the group
elements (g, g^{x+y}). Another example is to compute a pair (m∗, f(m∗)) when given
(m1, m2, f(m1), f(m2)) for a distinct m∗ ∉ {m1, m2}, where f(x) ∈ Z p [x] is a ran-
dom polynomial of degree 2. Some absolutely hard problems will be introduced
in Section 4.7.6.
Roughly speaking, the adversary will utilize what it knows to launch a useless at-
tack on the simulated scheme, while the simulator should utilize what the adversary
never knows to force the adversary to launch a useful attack with non-negligible
probability.

4.5.11 How to Distinguish the Given Scheme

In the security reduction, the adversary interacts with a given scheme that is the
simulated scheme. The adversary distinguishes the simulated scheme from the real
scheme by correctness and randomness, which are described as follows.

• Correctness. All responses to queries from the adversary in the simulated


scheme must be exactly the same as in the real scheme. When the adversary
makes queries to the simulated scheme, it will receive the corresponding re-
sponses accordingly. If a response is not correct, the adversary can judge the
scheme to be a simulated scheme because the real scheme should respond to all
queries correctly. For example, the adversary judges a signature scheme to be a
simulated scheme when the received signature of message m cannot pass the ver-
ification. Another example in encryption is the decryption query. The adversary
will find out that the given encryption scheme is a simulated scheme if a decryp-
tion query on a ciphertext that should be rejected by the real scheme is accepted
by the given scheme.
• Randomness. Random numbers and random group elements in the simulated
scheme should be truly random and independent. Random numbers and random
group elements are used in many constructions, such as digital signatures and
private keys in identity-based cryptography. If the real scheme produces random
numbers/elements, they must be truly random and independent. Then, all gen-
erated random numbers/elements in the simulated scheme must also be random
and independent. Otherwise, the adversary can easily determine that the given
scheme is a simulated scheme. We will introduce the concept of random and
independent in Section 4.7.
According to the descriptions of successful simulation and indistinguishable sim-
ulation in Section 4.4.3, we have the following interesting formula.
Successful Simulation + Correctness + Randomness = Indistinguishable Simulation.

An indistinguishable simulation cannot guarantee that the adversary’s attack is use-


ful. The malicious adversary can still utilize what it knows and what it receives from
the simulator to find out how to launch a useless attack.

4.5.12 How to Generate a Useless Attack

The way that the adversary launches a useless attack cannot be described here in de-
tail because it is highly dependent on the proposed scheme, the reduction algorithm,
and the underlying hard problem. Here, we only give a high-level overview of what
a useless attack looks like in digital signatures and encryption.
• Suppose a security reduction for a signature scheme uses a forged signature from
the adversary to solve an underlying hard problem. A useless attack is a special
forged signature for the simulated scheme that is valid and also computable by
the simulator, so that it cannot be used to solve an underlying hard problem.
• Suppose a security reduction for an encryption scheme uses the guess of the
encrypted message from the adversary to solve an underlying decisional hard
problem. A useless attack is a special way of guessing the encrypted message mc
such that the message in the challenge ciphertext can always be correctly guessed
(c′ = c), no matter whether the target Z in the decisional problem instance is true
or false.
The adversary can launch a useless attack because it knows the secret information
in the simulation. How to hide the secret information I from the adversary using
absolutely hard problems is an important step in a security reduction.

4.5.13 Summary of Adversary

At the end of this section, we summarize the malicious adversary in a security re-
duction as follows.
• When the adversary is asked to interact with a given scheme, it considers this
scheme to be a simulated scheme. The adversary will use what it knows and
what it can query (following the defined security model) to find whether the
given scheme is indeed distinguishable from the real scheme or not.
• When the adversary finds out that the given scheme is a simulated scheme, the
adversary will launch a successful attack with malicious and adaptive probability
P∗ ∈ [0, 1]. The detailed probability P∗ is dependent on the security reduction.
• When the adversary cannot distinguish the simulation from the real attack, it
will launch a successful attack with probability Pε according to the breaking
assumption.
• Without contradicting the breaking assumption, the adversary will use what it
knows and what it receives to launch a useless attack on the given scheme.
In the security reduction, we prove that if there exists an adversary who can
break the proposed scheme, we can construct a simulator to solve an underlying
hard problem. To be more precise, a correct security reduction requires that even
if the attack on the simulated scheme is launched by a malicious adversary who
has unbounded computational power, the advantage of solving an underlying hard
problem is still non-negligible.

4.6 An Overview of Probability and Advantage

4.6.1 Definitions of Probability

Probability is the measure of the likelihood that an event will occur. In a security
proof, this event is mainly about a successful attack on a scheme or a correct so-
lution to a problem instance. There are four important probability definitions for
digital signatures, encryption, computational problems, and decisional problems,
respectively.

• Digital Signatures. Let Pr[WinSig ] be the probability that the adversary success-
fully forges a valid signature. Obviously, this probability satisfies

0 ≤ Pr[WinSig ] ≤ 1.

• Encryption. Let Pr[WinEnc ] be the probability that the adversary correctly guesses
the message in the challenge ciphertext in the security model of indistinguisha-
bility. This probability satisfies
1/2 ≤ Pr[WinEnc ] ≤ 1.
The message in the challenge ciphertext is mc where c ∈ {0, 1}, and the adversary
will output 0 or 1 to guess c. Since c is randomly chosen by the challenger, the
adversary can always achieve Pr[WinEnc ] = Pr[c′ = c] ≥ 1/2, for example by
outputting a uniformly random guess c′.
• Computational Problems. Let Pr[WinC ] be the probability of computing a cor-
rect solution to an instance of a computational problem. This probability satisfies

0 ≤ Pr[WinC ] ≤ 1.

• Decisional Problems. Let Pr[WinD ] be the probability of correctly guessing the


target Z in an instance of a decisional problem. That is, if Z is a true element,
the guess output is true. Otherwise, Z is a false element and the guess output is
false. The probability Pr[WinD ] is dependent on the definition of the decisional
problem.
– Suppose the target Z is randomly chosen from a space having two elements
where one element is true and the other is false. Then, we have
1/2 ≤ Pr[WinD ] ≤ 1.
– Suppose the target Z is randomly chosen from a space having n elements
where only one element is true. Then, we have
1 − 1/n ≤ Pr[WinD ] ≤ 1.

This probability is calculated in such a way that if we cannot correctly guess
the target with probability 1, we guess that Z is false. Since n − 1 out of n
elements are false and Z is randomly chosen, we have that Z is false with
probability (n − 1)/n = 1 − 1/n.

In the above four definitions, the adversary refers to a general adversary who intends
to break a scheme or solve a problem. Note that this is not the adversary who can
break the scheme or solve the problem in the breaking assumption. Otherwise, the
probability cannot be, for example, Pr[WinSig ] = 0.

The minimum and maximum probabilities in the above four definitions are dif-
ferent. We cannot use a given probability to universally measure whether a proposed
scheme is secure or insecure and whether a problem is hard or easy. Due to the dif-
ference, advantage is defined in such a way that we can use the same measurement
to judge security/insecurity for schemes and hardness/easiness for problems.

4.6.2 Definitions of Advantage

Informally speaking, advantage is the measure of how successfully an attack algo-


rithm can break a proposed scheme or a solution algorithm can solve a problem,
compared to the idealized probability Pideal in the corresponding security model.
For example, Pideal = 0 in the EU-CMA security model for digital signatures and
Pideal = 1/2 in the indistinguishability security model.
Advantage is an adjusted probability, where the minimum advantage must be
zero. Roughly speaking, we use the following method to define an advantage.
• If Pideal is non-negligible, we define the advantage as

Advantage = Probability of Successful Attack − Pideal .

• If Pideal is negligible, we define the advantage as

Advantage = Probability of Successful Attack.

In the advantage definition, we are not interested in how large the advantage
is, but only in two different results: Negligible and Non-negligible. Let ε be the
advantage of breaking a proposed scheme or solving a problem. If the advantage
is negligible, the proposed scheme is secure or the problem is hard. Otherwise, the
proposed scheme is insecure or the problem is easy. A precise non-negligible value
is not important because any non-negligible advantage indicates that the proposed
scheme is insecure or the problem is easy.
There is no standard definition of maximum advantage. For example, in the def-
inition of indistinguishability for encryption, some researchers prefer 1/2 as the max-
imum advantage while others prefer 1. In this book, we prefer 1 as the maximum
advantage for encryption to keep the consistency with digital signatures.
The definitions of advantage for digital signatures, encryption, computational
problems, and decisional problems are described as follows.

• Advantage for Digital Signatures. The advantage of forging a valid signature


in the security model of EU-CMA for digital signatures is defined as

ε = Pr[WinSig ].

We have ε ∈ [0, 1] according to its probability definition.



• Advantage for Encryption. The advantage of correctly guessing the encrypted


message in the security model of indistinguishability for encryption is defined as
ε = (Pr[WinEnc |c = 0] − 1/2) + (Pr[WinEnc |c = 1] − 1/2)
  = 2 (Pr[WinEnc |c = 0] Pr[c = 0] + Pr[WinEnc |c = 1] Pr[c = 1] − 1/2)
  = 2 (Pr[WinEnc ] − 1/2).
We have ε ∈ [0, 1] according to its probability definition.
• Advantage for Computational Problems. The advantage of finding a solution
to an instance of a computational problem is defined as

ε = Pr[WinC ].

We have ε ∈ [0, 1] according to its probability definition.


• Advantage for Decisional Problems. The advantage of correctly guessing the
target Z in an instance of a decisional problem is defined as
ε = (Pr[WinD |Z = True] − 1/2) + (Pr[WinD |Z = False] − 1/2)
  = Pr[WinD |Z = True] − (1 − Pr[WinD |Z = False])
  = Pr[Guess Z = True|Z = True] − Pr[Guess Z = True|Z = False],

where the first conditional probability is the probability of correctly guessing Z


if Z is true, and the second conditional probability is the probability of wrongly
guessing Z if Z is false. The advantage definition for a decisional problem does
not directly use the probability Pr[WinD ].
If the decisional problem is easy, then

Pr[Guess Z=True|Z = True] = 1, Pr[Guess Z=True|Z = False] = 0.

That is, we have ε = 1. Otherwise, the decisional problem is absolutely hard, and
we have
Pr[Guess Z = True|Z = True] = Pr[Guess Z = True|Z = False] = 1/2.
That is, we have ε = 0, which means that the probability of correctly/wrongly
guessing Z is the same. Therefore, the advantage ε is also within the range [0, 1].
The ranges in all the above advantage definitions are the same. If ε = 0, then the
scheme is absolutely secure or the problem is absolutely hard. If ε = 1, then the
scheme can be broken or the problem can be solved with success probability 1.
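As a quick sanity check of the definitions above, the sketch below (illustrative only;
all names and estimators are ours) computes the encryption advantage 2(Pr[WinEnc] − 1/2)
and the decisional advantage Pr[Guess Z = True|Z = True] − Pr[Guess Z = True|Z = False]
for an adversary that guesses uniformly at random; both estimates should be close to 0.

```python
import random

TRIALS = 200_000

# Encryption: the challenger picks c in {0, 1}; a random-guessing adversary outputs c'.
wins = sum(random.randint(0, 1) == random.randint(0, 1) for _ in range(TRIALS))
adv_enc = 2 * (wins / TRIALS - 1 / 2)

# Decisional problem: estimate the two conditional probabilities separately
# for an adversary that answers "true" with probability 1/2 regardless of Z.
true_given_true = sum(random.random() < 1 / 2 for _ in range(TRIALS)) / TRIALS
true_given_false = sum(random.random() < 1 / 2 for _ in range(TRIALS)) / TRIALS
adv_dec = true_given_true - true_given_false

print("encryption advantage ~", round(adv_enc, 3))   # close to 0
print("decisional advantage ~", round(adv_dec, 3))   # close to 0
```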

4.6.3 Malicious Adversary Revisited

The reason why we strengthen a black-box adversary into a malicious adversary can
be explained as follows with probability and advantage.
• In a security reduction, we cannot calculate the probability of returning a useful
attack by a black-box adversary unless the adversary’s attack is always useful.
The reason is that the adversary’s attack is adaptive.
• In a security reduction, we can calculate the advantage of returning a useless
attack by a malicious adversary, who tries its best to make the security reduction
fail. If the advantage is 1, this means that the security reduction is incorrect.
We have to consider the advantage in the security reduction because it is hard or
impossible to make the attack always useful in a security reduction. Take the security
reduction for digital signatures as an example. In the EU-CMA security model, the
simulator must program the security reduction in such a way that some signatures
can be computed by the simulator in order to respond to signature queries. If the
adversary happens to choose one of these signatures as the forged signature, the
forged signature is a useless attack and thus the forged signature cannot always be a
useful attack.

4.6.4 Adaptive Choice Revisited

The probability of an adaptive choice cannot be calculated. Fortunately, there are


two important probability formulas associated with an adaptive choice that we can
still use.
Take the toy game in Section 4.5.4 as an example, where the adversary adaptively
chooses a ∈ {0, 1}, and the simulator randomly chooses b ∈ {0, 1}.
• The complementary event holds for all adaptive choices. That is,

Pr[a = 0] + Pr[a = 1] = 1.

• If b is unknown to the adversary, we have that the choice of a is independent of


b, and then

Pr[a = b] = Pr[a = 0|b = 0] Pr[b = 0] + Pr[a = 1|b = 1] Pr[b = 1]
          = Pr[a = 0] Pr[b = 0] + Pr[a = 1] Pr[b = 1]
          = (1/2) Pr[a = 0] + (1/2) Pr[a = 1]
          = (1/2) (Pr[a = 0] + Pr[a = 1])
          = 1/2.

The above two probability formulas are simple but they are the core of the prob-
ability analysis in all security reductions. We can only calculate the success proba-
bility associated with an adaptive attack by these two formulas.

4.6.5 Useless, Useful, Loose, and Tight Revisited

Let ε denote the advantage of breaking a proposed scheme and εR denote the advan-
tage of solving an underlying hard problem by a security reduction. The concepts
of useless attack, useful attack, tight reduction, and loose reduction in a security
reduction can be explained as follows.
• Useless. The advantage εR is negligible. There exists an adversary who can break
the proposed scheme in polynomial time with non-negligible advantage ε, but the
advantage εR is negligible.
• Useful. The advantage εR is non-negligible. If there exists an adversary who can
break the proposed scheme in polynomial time with non-negligible advantage ε,
then the advantage εR is also non-negligible.
• Loose. The advantage εR is equal to ε/O(q), where q denotes the number of queries
made by the adversary. In practice, the size of q can be as large as q = 2^30 or
q = 2^60 , depending on the definition of q. For example, q can denote the number
of signature queries or the number of hash queries made by the adversary. The
number q = 2^30 is derived based on the fact that a key pair can be used to generate
up to 2^30 signatures, while the number q = 2^60 is derived based on the fact that
the adversary can make up to 2^60 hash queries in polynomial time.
• Tight. The advantage εR is equal to ε/O(1), where O(1) is constant and independent
of the number of queries. For example, a security reduction with O(1) = 2 is a
tight reduction. When the reduction loss is related to the security parameter λ ,
we still call it a tight reduction although it is only almost tight.
In the above descriptions, we do not consider the time cost and assume that the
security reduction is completed in polynomial time. The above four concepts are
associated with the advantage only.
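A rough numerical comparison (the concrete figures below are only illustrative assumptions)
shows why the reduction loss matters when parameters are chosen in practice.

```python
import math

eps = 2 ** -20   # assumed adversarial advantage against the scheme (illustrative)

for name, loss in [("tight,  L = O(1) = 2", 2),
                   ("loose,  L = q = 2^30 (signature queries)", 2 ** 30),
                   ("loose,  L = q = 2^60 (hash queries)", 2 ** 60)]:
    eps_R = eps / loss   # advantage of solving the hard problem via the reduction
    print(f"{name:45s} eps_R = 2^{math.log2(eps_R):+.0f}")
```

With a loose reduction, the same breaking advantage therefore translates into a much
smaller advantage against the underlying hard problem.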

4.6.6 Important Probability Formulas

Let A1 , A2 , · · · , Aq , B denote different events, and A^c be the complementary event
that the event A does not occur. The following probability formulas have been used in
many security reductions for calculating the advantage.
many security reductions for calculating the advantage.
• Equations.

Pr[B] = 1 − Pr[B^c]   (6.1)
Pr[A] = Pr[A|B] Pr[B] + Pr[A|B^c] Pr[B^c]   (6.2)
Pr[A ∧ B] = Pr[B] · Pr[A|B]   (6.3)
Pr[A ∧ B] = Pr[A] · Pr[B|A]   (6.4)
Pr[A|B] = Pr[B|A] · Pr[A] / Pr[B]   (6.5)
Pr[A|B] = 1 − Pr[A^c|B]   (6.6)
Pr[A1 ∧ A2 ∧ · · · ∧ Aq] = 1 − Pr[A1^c ∨ A2^c ∨ · · · ∨ Aq^c]   (6.7)
Pr[A1 ∨ A2 ∨ · · · ∨ Aq] = 1 − Pr[A1^c ∧ A2^c ∧ · · · ∧ Aq^c]   (6.8)
Pr[(A1 ∧ A2 ∧ · · · ∧ Aq)|B] = 1 − Pr[(A1^c ∨ A2^c ∨ · · · ∨ Aq^c)|B]   (6.9)
Pr[(A1 ∨ A2 ∨ · · · ∨ Aq)|B] = 1 − Pr[(A1^c ∧ A2^c ∧ · · · ∧ Aq^c)|B]   (6.10)

• Inequations.

Pr[A1 ∨ A2 ∨ · · · ∨ Aq] ≤ ∑_{i=1}^{q} Pr[Ai]   (6.11)
Pr[A1 ∧ A2 ∧ · · · ∧ Aq] ≥ ∏_{i=1}^{q} Pr[Ai]   (6.12)
Pr[A1 ∨ A2 ∨ · · · ∨ Aq] ≤ 1 − ∏_{i=1}^{q} Pr[Ai^c]   (6.13)
Pr[A1 ∧ A2 ∧ · · · ∧ Aq] ≥ 1 − ∑_{i=1}^{q} Pr[Ai^c]   (6.14)
Pr[A] ≥ Pr[A|B] · Pr[B]   (6.15)

• Conditional Equations.

Pr[A|B] = Pr[A]   if A and B are independent   (6.16)
Pr[A1 ∨ A2 ∨ · · · ∨ Aq] = ∑_{i=1}^{q} Pr[Ai]   if all events are independent   (6.17)
Pr[A1 ∧ A2 ∧ · · · ∧ Aq] = ∏_{i=1}^{q} Pr[Ai]   if all events are independent   (6.18)
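These formulas are straightforward to sanity-check numerically; the sketch below (ours)
verifies the union bound (6.11) and the independence equation (6.18) with independently
sampled events.

```python
import random

TRIALS = 200_000
probs = [0.1, 0.2, 0.3]   # Pr[A1], Pr[A2], Pr[A3]; the events are sampled independently

union = intersection = 0
for _ in range(TRIALS):
    events = [random.random() < p for p in probs]
    union += any(events)
    intersection += all(events)

print("Pr[A1 or A2 or A3]   =", union / TRIALS, "<=", sum(probs))              # (6.11)
print("Pr[A1 and A2 and A3] =", intersection / TRIALS, "~", 0.1 * 0.2 * 0.3)   # (6.18)
```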

4.7 An Overview of Random and Independent

Random numbers (including random group elements) are very common in con-
structing cryptographic schemes, such as digital signature schemes and encryption
schemes. Suppose each number in the set {A1 , A2 , · · · , An } ⊆ Z p is a random num-
ber. This means that each number is chosen randomly and independently from Z p ,
and all numbers are uniformly distributed in Z p . In a simulated scheme, if random


numbers are not randomly chosen, but generated from a function, we must prove
that these simulated random numbers generated by the function are also random
and independent from the point of view of the adversary. Otherwise, the simula-
tion is distinguishable from the real attack. In this section, we explain the concept
of random and independent and introduce how to simulate randomness, where all
simulated random numbers are truly random and independent.

4.7.1 What Are Random and Independent?

Let (A, B,C) be three random integers chosen from the space Z p . The concept of
random and independent can be explained as follows.
• Random. C is equal to any integer in Z p with the same probability 1/p.
• Independent. C cannot be computed from A and B.
The concept of random and independent is applied in the security reduction as fol-
lows. Let A, B,C be random integers chosen from Z p . Suppose an adversary is only
given A and B. The adversary then has no advantage in guessing the integer C and
can only guess the integer C correctly with probability 1p . If A, B are two integers
randomly chosen from the space Z p and C = A + B mod p, we still have that C is
equivalent to a random number chosen from Z p . However, A, B,C are not indepen-
dent, because C can be computed from A and B.
In a scheme construction and a security proof, when we say that A1 , A2 , · · · , Aq
are all randomly chosen from an exponentially large space, such as Z p , we assume
that they are all distinct. That is, Ai ≠ A j for any i ≠ j. This assumption will sim-
plify the probability analysis and the proof description. We also note that for some
proposed schemes in the literature, if they generate random numbers that are equal,
the proposed schemes will become insecure.
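The distinction between random and independent can also be seen numerically. In the
sketch below (illustrative only), C = A + B mod p is uniformly distributed over Z p ,
yet an adversary given A and B predicts C with probability 1, so C is not independent
of (A, B).

```python
import random
from collections import Counter

p = 101
SAMPLES = 200_000

counts = Counter()
correct_guesses = 0
for _ in range(SAMPLES):
    A, B = random.randrange(p), random.randrange(p)
    C = (A + B) % p
    counts[C] += 1
    correct_guesses += ((A + B) % p == C)   # adversary knowing A and B guesses C exactly

print("C looks uniform, max/min frequency ratio:", max(counts.values()) / min(counts.values()))
print("adversary's success in guessing C:", correct_guesses / SAMPLES)   # 1.0
```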

4.7.2 Randomness Simulation with a General Function

In a real scheme, suppose (A, B,C) are integers randomly chosen from Z p . However,
in the simulated scheme, (A, B,C) are generated from a function with other random
integers. For example, all integers A, B,C are simulated from a function with w, x, y, z
as input, where (w, x, y, z) are integers randomly chosen from Z p by the simulator
when running the reduction algorithm. We want to investigate whether the simulated
(A, B,C) are also random and independent from the point of view of the adversary.
If the simulated random integers are also random and independent, the simulated
scheme is indistinguishable from the real scheme from the point of view of the
random number generation.

We have the following simplified lemma for checking whether the simulated ran-
dom numbers (A, B,C) are also random and independent.
Lemma 4.7.1 Suppose a real scheme and a simulated scheme generate integers
(A, B,C) with different methods described as follows.
• In the real scheme, let (A, B,C) be three integers randomly chosen from Z p .
• In the simulated scheme, let (A, B,C) be computed by a function with random
integers (w, x, y, z) from Z p as the input to the function, denoted by (A, B,C) =
F(w, x, y, z).
Suppose the adversary knows the function F from the reduction algorithm but not
(w, x, y, z). The simulated scheme is indistinguishable from the real scheme if for any
given (A, B,C) from Z p , the number of solutions (w, x, y, z) satisfying (A, B,C) =
F(w, x, y, z) is the same. That is, any (A, B,C) from Z p will be generated with the
same probability in the simulated scheme.
It is not hard to verify the correctness of this lemma. We prove this lemma by
arguing that any three given values (A, B,C) will appear with the same probabil-
ity. Let < w, x, y, z > be a vector that represents one choice of random (w, x, y, z)
from Z p . There are p4 different vectors in the vector space and each vector will be
chosen with the same probability 1/p4 . Suppose that for any (A, B,C), the number
of < w, x, y, z > generating (A, B,C) via F is n, so the probability of choosing ran-
dom (w, x, y, z) satisfying (A, B,C) = F(w, x, y, z) is n/p4 . Therefore, the simulated
scheme is indistinguishable from the real scheme.
Consider the simulation of (A, B,C) from Z p with the following functions under
modular operations using random integers (w, x, y, z) from Z p .

( A, B, C ) = F(x, y) = ( x , y , x + y ) (7.19)
( A, B, C ) = F(x, y, z) = ( x , y , z + 3 ) (7.20)
( A, B, C ) = F(x, y, z) = ( x , y , z + 4 · xy ) (7.21)
( A, B, C ) = F(w, x, y, z) = ( x + w , y , z + w · x ) (7.22)

We have the following observations.


• 7.19 Distinguishable. In this function, we have

x = A,
y = B,
x + y = C.

If the given (A, B,C) satisfies A + B = C, the function has one solution

< x, y >=< A, B > .

Otherwise, there is no solution. Therefore, the simulated A, B,C are not random
and independent. To be precise, C can be computed from A + B.

• 7.20 Indistinguishable. In this function, we have

x = A,
y = B,
z + 3 = C.

For any given (A, B,C), the function has one solution

< x, y, z >=< A, B,C − 3 > .

Therefore, A, B,C are random and independent.


• 7.21 Indistinguishable. In this function, we have

x = A,
y = B,
z + 4xy = C.

For any given (A, B,C), the function has one solution

< x, y, z >=< A, B,C − 4AB > .

Therefore, A, B,C are random and independent.


• 7.22 Indistinguishable. In this function, we have

x + w = A,
y = B,
z + w · x = C.

For any given (A, B,C), the function has p different solutions

< w, x, y, z >=< w, A − w, B,C − w(A − w) >,

where w can be any integer from Z p . Therefore, A, B,C are random and indepen-
dent.
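The observations above can be checked mechanically over a small prime by counting,
for every output (A, B, C), how many inputs map to it; by Lemma 4.7.1 the simulation
is indistinguishable exactly when this count is the same for every output. The sketch
below (ours, toy parameters) does this for functions (7.19) and (7.20).

```python
from collections import Counter
from itertools import product

p = 7   # toy prime; the counting argument is identical for a cryptographic-size p

def preimage_counts(F, arity):
    counts = Counter()
    for inputs in product(range(p), repeat=arity):
        counts[F(*inputs)] += 1
    return counts

F19 = lambda x, y: (x, y, (x + y) % p)        # function (7.19)
F20 = lambda x, y, z: (x, y, (z + 3) % p)     # function (7.20)

for name, F, arity in [("(7.19)", F19, 2), ("(7.20)", F20, 3)]:
    counts = preimage_counts(F, arity)
    # Indistinguishable iff every (A, B, C) in Zp^3 is hit by the same number of inputs.
    uniform = len(counts) == p ** 3 and len(set(counts.values())) == 1
    print(name, "outputs covered:", len(counts), "of", p ** 3, "-> uniform:", uniform)
```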
The full lemma can be stated as follows. This lemma will frequently be used
in the correctness analysis of the schemes in this book. We stress that when
(A1 , A2 , · · · , Aq ) are randomly chosen from Z p , we have that (g^{A1} , g^{A2} , · · · , g^{Aq} ) are
random and independent in G. Therefore, in this book, the analysis of random-
ness and independence is associated with integers or exponents from Z p only.
Lemma 4.7.2 Suppose a real scheme and a simulated scheme generate integers
(A1 , A2 , · · ·, Aq ) with different methods described as follows.
• In the real scheme, let (A1 , A2 , · · · , Aq ) be integers randomly chosen from Z p .
• In the simulated scheme, let (A1 , A2 , · · · , Aq ) be computed by a function with ran-
dom integers (x1 , x2 , · · · , xq′ ) from Z p as the input to the function, denoted by

(A1 , A2 , · · · , Aq ) = F(x1 , x2 , · · · , xq′ ).

Suppose the adversary knows the function F from the reduction algorithm but not
(x1 , x2 , · · · , xq′ ). The simulated scheme is indistinguishable from the real scheme if,
for any (A1 , A2 , · · · , Aq ) from Z p , the number of solutions (x1 , x2 , · · · , xq′ ) satisfying
(A1 , A2 , · · · , Aq ) = F(x1 , x2 , · · · , xq′ ) is the same. That is, every (A1 , A2 , · · · , Aq ) from
Z p will be generated with the same probability in the simulated scheme.
We stress that if q′ < q, the simulated scheme is definitely distinguishable from the
real scheme. For indistinguishable simulation, it is required that q′ ≥ q must hold,
but this condition is not sufficient.

4.7.3 Randomness Simulation with a Linear System

A general system of n linear equations (or linear system) over Z p with n unknown
secrets (x1 , x2 , · · · , xn ) can be written as

a11 x1 + a12 x2 + · · · + a1n xn = y1
a21 x1 + a22 x2 + · · · + a2n xn = y2
···
an1 x1 + an2 x2 + · · · + ann xn = yn ,

where the aij are the coefficients of the system, and y1 , y2 , · · · , yn are constant terms
from Z p . We define A as the coefficient matrix,

A = [ a11 a12 a13 · · · a1n ]
    [ a21 a22 a23 · · · a2n ]
    [  ⋮    ⋮    ⋮   · · ·  ⋮  ]
    [ an1 an2 an3 · · · ann ]

We have the following lemma for the simulation of random numbers by a linear
system.
Lemma 4.7.3 Suppose a real scheme and a simulated scheme generate integers
(A1 , A2 , · · · , An ) with different methods described as follows.
• In the real scheme, let (A1 , A2 , · · · , An ) be n integers randomly chosen from Z p .
• In the simulated scheme, let (A1 , A2 , · · · , An ) be computed by

(A1 , A2 , · · · , An )^⊤ = A · X^⊤ =
[ a11 a12 a13 · · · a1n ]   [ x1 ]
[ a21 a22 a23 · · · a2n ] · [ x2 ]   mod p,
[  ⋮    ⋮    ⋮   · · ·  ⋮  ]   [ ⋮  ]
[ an1 an2 an3 · · · ann ]   [ xn ]

where x1 , x2 , · · · , xn are random integers chosen from Z p .



Suppose the adversary knows A but not X. If the determinant of A is nonzero, the
simulated scheme is indistinguishable from the real scheme.
According to our knowledge of linear systems, if |A| ≠ 0 there is only one solu-
tion < x1 , x2 , · · · , xn > for any given (A1 , A2 , · · · , An ). According to Lemma 4.7.2, the
Ai are random and independent, so the simulated scheme is indistinguishable from
the real scheme. If |A| = 0 the number of solutions can be zero or p depending on
the given (A1 , A2 , · · · , An ). The Ai are not random and independent, so the simulated
scheme is distinguishable from the real scheme.
Consider the simulation of (A1 , A2 , A3 ) from Z p with the following functions
using random integers (x1 , x2 , x3 ) from Z p .

( A1 , A2 , A3 ) = ( x1 + 3x2 + 3x3 , x1 + x2 + x3 , 3x1 + 5x2 + 5x3 ) (7.23)


( A1 , A2 , A3 ) = ( x1 + 3x2 + 3x3 , 2x1 + 3x2 + 5x3 , 9x1 + 5x2 + 2x3 ) (7.24)

• 7.23 Distinguishable. In this function, we have

x1 + 3x2 + 3x3 = A1 ,
x1 + x2 + x3 = A2 ,
3x1 + 5x2 + 5x3 = A3 .

It is easy to verify that the determinant of the coefficient matrix satisfies

| 1 3 3 |
| 1 1 1 | = 0.
| 3 5 5 |

Therefore, (A1 , A2 , A3 ) are not random and independent. To be precise, given A1


and A2 , we can compute A3 by A3 = A1 + 2A2 .
• 7.24 Indistinguishable. In this function, we have

x1 + 3x2 + 3x3 = A1 ,
2x1 + 3x2 + 5x3 = A2 ,
9x1 + 5x2 + 2x3 = A3 .

It is easy to verify that the determinant of the coefficient matrix satisfies

| 1 3 3 |
| 2 3 5 | = 53 ≠ 0.
| 9 5 2 |

Therefore, (A1 , A2 , A3 ) are random and independent.
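The determinant test can be carried out mechanically, as in the following sketch
(illustrative; any prime p larger than the determinant values works here).

```python
p = 1009   # the determinants of (7.23) and (7.24) are small integers, so any such prime works

def det3(M):
    """Determinant of a 3x3 matrix, reduced modulo p."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return (a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)) % p

M_23 = [[1, 3, 3], [1, 1, 1], [3, 5, 5]]   # coefficient matrix of (7.23)
M_24 = [[1, 3, 3], [2, 3, 5], [9, 5, 2]]   # coefficient matrix of (7.24)

print("det (7.23) mod p =", det3(M_23))   # 0  -> (A1, A2, A3) are not independent
print("det (7.24) mod p =", det3(M_24))   # 53 -> nonzero, simulation is indistinguishable
```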


Simulation by a more general linear system can be described as follows.
Lemma 4.7.4 Suppose a real scheme and a simulated scheme generate integers
(A1 , A2 , · · ·, An ) with different methods described as follows.

• In the real scheme, let (A1 , A2 , · · · , An ) be n integers randomly chosen from Z p .


• In the simulated scheme, let (A1 , A2 , · · · , An ) be computed by

(A1 , A2 , · · · , An )^⊤ = A · X^⊤ =
[ a11 a12 a13 · · · a1q ]   [ x1 ]
[ a21 a22 a23 · · · a2q ] · [ x2 ]   mod p,
[  ⋮    ⋮    ⋮   · · ·  ⋮  ]   [ ⋮  ]
[ an1 an2 an3 · · · anq ]   [ xq ]

where x1 , x2 , · · · , xq are random integers chosen from Z p .


Suppose the adversary knows A but not X.
• The simulated scheme is distinguishable from the real scheme if q < n.
• The simulated scheme is indistinguishable from the real scheme if q ≥ n and there
exists an n × n sub-matrix whose determinant is nonzero.

4.7.4 Randomness Simulation with a Polynomial

Let f (x) ∈ Z p [x] be a (q − 1)-degree polynomial function defined as

f (x) = a_{q−1} x^{q−1} + a_{q−2} x^{q−2} + · · · + a1 x + a0 ,

where there are q coefficients, and all coefficients ai are randomly chosen from
Z p . We have the following lemma for the simulation of random numbers by the
polynomial function f (x).
Lemma 4.7.5 Suppose a real scheme and a simulated scheme generate integers
(A1 , A2 , · · · , An ) with different methods described as follows.
• In the real scheme, let (A1 , A2 , · · · , An ) be n integers randomly chosen from Z p .
• In the simulated scheme, let (A1 , A2 , · · · , An ) be computed by

(A1 , A2 , · · · , An ) = ( f (m1 ), f (m2 ), · · · , f (mn )),

where m1 , m2 , · · · , mn are n distinct integers in Z p and f is a (q − 1)-degree poly-


nomial.
Suppose the adversary knows m1 , m2 , · · · , mn but not f (x). The simulated scheme is
indistinguishable from the real scheme if q ≥ n.
We can rewrite the simulation as

(A1 , A2 , · · · , An )^⊤ = ( f (m1 ), f (m2 ), · · · , f (mn ))^⊤ =
[ m1^{q−1} m1^{q−2} m1^{q−3} · · · m1^0 ]   [ a_{q−1} ]
[ m2^{q−1} m2^{q−2} m2^{q−3} · · · m2^0 ] · [ a_{q−2} ]   mod p.
[    ⋮         ⋮         ⋮     · · ·    ⋮  ]   [    ⋮    ]
[ mn^{q−1} mn^{q−2} mn^{q−3} · · · mn^0 ]   [   a0    ]

The coefficient matrix is a Vandermonde matrix; since m1 , m2 , · · · , mn are distinct,
it contains an n × n sub-matrix whose determinant is nonzero. According to
Lemma 4.7.4, the simulated scheme is indistinguishable from the real scheme.
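For the square case q = n, the nonzero determinant follows from the Vandermonde
formula ∏_{i<j}(m_j − m_i) mod p, which is nonzero whenever the m_i are distinct. The
sketch below (ours, toy parameters) checks this by comparing Gaussian elimination
modulo p with the product formula; the columns are written in ascending powers, and
reversing them as in the matrix above only changes the sign of the determinant.

```python
p = 101
ms = [3, 10, 25, 42]   # distinct evaluation points m_i in Z_p
n = len(ms)

# Vandermonde matrix with rows (1, m_i, m_i^2, ..., m_i^(n-1)).
V = [[pow(m, j, p) for j in range(n)] for m in ms]

def det_mod_p(M, p):
    """Determinant modulo a prime p via Gaussian elimination."""
    M = [row[:] for row in M]
    det = 1
    for col in range(len(M)):
        pivot = next((r for r in range(col, len(M)) if M[r][col]), None)
        if pivot is None:
            return 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            det = -det
        det = det * M[col][col] % p
        inv = pow(M[col][col], -1, p)
        for r in range(col + 1, len(M)):
            factor = M[r][col] * inv % p
            M[r] = [(M[r][j] - factor * M[col][j]) % p for j in range(len(M))]
    return det % p

vandermonde_product = 1
for i in range(n):
    for j in range(i + 1, n):
        vandermonde_product = vandermonde_product * (ms[j] - ms[i]) % p

print(det_mod_p(V, p), vandermonde_product)   # equal and nonzero since the m_i are distinct
```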

4.7.5 Indistinguishable Simulation and Useful Attack Together

Suppose the real scheme generates random numbers (A1 , A2 , · · · , An ) from Z p . In


the simulated scheme, let x1 , x2 , · · · , xq be integers randomly chosen from Z p by the
simulator, and F1 , F2 , · · · , Fn , F ∗ be functions of (x1 , x2 , · · · , xq ) defined over Z p given
in the reduction algorithm (thus, they are known to the adversary). In the security
reduction, the following requirements might occur at the same time.
• In the simulated scheme, (A1 , A2 , · · · , An ) is computed as

Ai = Fi (x1 , x2 , · · · , xq ), for all i ∈ [1, n].

The correctness of the security reduction requires that the simulated scheme is
indistinguishable from the real scheme, and thus (A1 , A2 , · · · , An ) must be random
and independent.
• The adversary outputs an attack and this attack on the simulated scheme is useless
if the adversary can compute

A∗ = F ∗ (x1 , x2 , · · · , xq ).

The correctness of the security reduction requires that the adversary cannot com-
pute A∗ except with negligible probability.
In the security reduction, we do not need to prove an indistinguishable simulation
and a useful attack separately. Instead, we only need to prove that
 
(A1 , · · · , An , A∗ ) = ( F1 (x1 , x2 , · · · , xq ), · · · , Fn (x1 , x2 , · · · , xq ), F ∗ (x1 , x2 , · · · , xq ) )

are random and independent, so that the randomness property of (A1 , A2 , · · · , An )


holds, and the adversary has no advantage in computing A∗ , because A∗ is random
and independent of all given Ai . An example is given in Section 4.14.2.

4.7.6 Advantage and Probability in Absolutely Hard Problems

We give several absolutely hard problems and introduce the advantage and the prob-
ability of solving these problems by an adversary who has unbounded computational
power. These examples are summarized from existing security reductions in the lit-
erature.
• Suppose (a, Z, c, x) satisfies Z = a · c + x mod p, where a, x ∈ Z p and c ∈ {0, 1} are
randomly chosen. Given (a, Z), the adversary has no advantage in distinguishing
whether Z is computed from either a · 0 + x or a · 1 + x except with probability
1/2. The reason is that a, Z, c are random and independent.
• Suppose (a, Z1 , Z2 , · · · , Zn−1 , Zn , x1 , x2 , · · · , xn ) satisfies Zi = a + xi mod p, where
a, xi for all i ∈ [1, n] are randomly chosen from Z p . Given (a, Z1 , Z2 , · · · , Zn−1 ),
the adversary has no advantage in computing Zn = a + xn except with probability
1/p. The reason is that a, Z1 , Z2 , · · · , Zn are random and independent.
• Suppose ( f (x), Z1 , Z2 , · · · , Zn , x1 , x2 , · · · , xn ) satisfies Zi = f (xi ), where f (x) ∈
Z p [x] is an n-degree polynomial randomly chosen from Z p . Given (Z1 , Z2 , · · · , Zn ,
x1 , x2 , · · · , xn ), the adversary has no advantage in computing a pair (x∗ , f (x∗ ))
for a new x∗ different from xi except with probability 1/p. The reason is that
Z1 , Z2 , · · · , Zn , f (x∗ ) are random and independent.
• Suppose (A, Z1 , Z2 , · · · , Zn−1 , Zn , x1 , x2 , · · · , xn ) satisfies |A| ≠ 0 mod p and Zi is
computed by Zi = ∑_{j=1}^{n} a_{i,j} x_j mod p, where A is an n × n matrix whose el-
ements are from Z p , and x_j for all j ∈ [1, n] are randomly chosen from Z p .
Given (A, Z1 , Z2 , · · · , Zn−1 ), the adversary has no advantage in computing Zn =
∑_{j=1}^{n} a_{n,j} x_j except with probability 1/p. The reason is that Z1 , Z2 , · · · , Zn are ran-
dom and independent.
• Suppose (g, h, Z, x, y) satisfies Z = g^x h^y, where x, y ∈ Z p are randomly chosen.
Given (g, h, Z) ∈ G, the adversary has no advantage in computing (x, y) except
with probability 1/p. Once the adversary finds x, it can immediately compute y
with Z. However, g, h, Z, x are random and independent.
• Suppose (g, h, Z, x, c) satisfies Z = g^x h^c, where x ∈ Z p and c ∈ {0, 1} are ran-
domly chosen. Given (g, h, Z) ∈ G, the adversary has no advantage in distinguish-
ing whether Z is computed from either g^x h^0 or g^x h^1, except with probability 1/2.
The reason is that g, h, Z, c are random and independent.
In the real security reductions, if the adversary has advantage 1 in computing the
target in the above examples, it can always launch a useless attack. More absolutely
hard problems can be found in the examples given in this book.
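The first example can be verified exhaustively for a small modulus: for every observable
pair (a, Z), exactly one hidden x is consistent with c = 0 and exactly one with c = 1, so
the pair (a, Z) carries no information about c. A small sketch (ours, toy prime) follows.

```python
from itertools import product

p = 11   # toy prime; the same counting works for any prime p

# For every observable pair (a, Z), count how many hidden x are consistent with each c.
consistent = {}
for a, x, c in product(range(p), range(p), (0, 1)):
    Z = (a * c + x) % p
    consistent.setdefault((a, Z), [0, 0])[c] += 1

# Each (a, Z) is explained by exactly one x for c = 0 and one x for c = 1,
# so even an unbounded adversary guesses c correctly with probability only 1/2.
print(all(counts == [1, 1] for counts in consistent.values()))   # True
```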

4.8 An Overview of Random Oracles

A random oracle, denoted by O, is typically used to represent an ideal hash func-


tion H whose outputs are random and uniformly distributed in its output space. A
security proof in the random oracle model [14] (namely with random oracles) for a
proposed scheme means that at least one hash function in the proposed scheme is
treated as a random oracle. This section introduces how to use random oracles in
security reductions.

4.8.1 Security Proof with Random Oracles

Suppose a proposed scheme is constructed using a hash function H as one of the


primitives. We can represent the proposed scheme using the following combination

Scheme + H.

Roughly speaking, a security proof with random oracles for this scheme is the secu-
rity proof for the following combination

Scheme + O,

where the hash function H is set as a random oracle O. That is, we do not analyze
the security of Scheme + H but Scheme + O, and believe that Scheme + H is secure
if Scheme + O is secure. Notice that the random oracle O and a real hash function
H are treated differently in the security reduction. The security gap between these
two combinations was discussed in [85].
Proving the security of a proposed scheme in the random oracle model requires
at least one hash function to be set as a random oracle. However, it does not need all
hash functions to be set as random oracles. To keep consistency with the concept of
indistinguishable simulation, we assume that the real scheme in the random oracle
model refers to the combination Scheme + O. Otherwise, the adversary can imme-
diately distinguish the simulated scheme with random oracles from the real scheme
with hash functions.

4.8.2 Hash Functions vs Random Oracles

The differences between hash functions and random oracles in security reductions
are summarized as follows.
• Knowledge. Given any arbitrary string x, if H is a hash function, the adversary
knows the function algorithm of H and so knows how to compute H(x). However,
if H is set as a random oracle, the adversary does not know H(x) unless it queries
x to the random oracle.
• Input. Hash functions and random oracles have the same input space, which is
dependent on the definition of the hash function. Although the number of inputs
to a hash function is exponential, the number of inputs to a random oracle is
polynomial. This is due to the fact that the random oracle only answers the queries
made by the polynomial-time adversary, and thus the random oracle only has a
polynomial number of inputs.
• Output. Hash functions and random oracles have the same output space, which
is dependent on the definition of the hash function. Given an input, the output
from a hash function is determined by the input and the hash function algorithm.
However, the output from a random oracle for an input is defined by the simu-
lator who controls the random oracle. The outputs from a hash function are not
required to be uniformly distributed, but the outputs from a random oracle must
be random and uniformly distributed.
• Representation. A hash function can be seen as a mapping from an input space
to an output space, where the mapping is calculated according to the hash func-
tion algorithm. A random oracle can be viewed as a virtual hash function that is
represented by a list composed of input and output only. The random oracle itself
does not have any rule or algorithm to define the mapping, as long as all outputs
are random and independent. See Table 4.1 for comparison.

Table 4.1 Hash function and random oracle in representation

Input    Hash Function    Output     |    Input    Random Oracle    Output
x1                        y1         |    x1                        y1
x2                        y2         |    x2                        y2
x3       H(xi) = yi       y3         |    x3       Simulator        y3
x4                        y4         |    ⋮                         ⋮
⋮                         ⋮          |    xq                        yq

Random oracles are very helpful for the simulator in programming security re-
ductions. The reason is that the simulator can control and select any output that looks
random and helps the simulator complete the simulation or force the adversary to
launch a useful attack. Security proofs in the random oracle model are, therefore,
believed to be much easier than those without random oracles.

4.8.3 Hash List

In a security reduction with random oracles, hash queries and their responses from
an oracle look like a list as described in Table 4.1, where only inputs and outputs are
known to the adversary. The outputs can be adaptively computed by the simulator,
as long as they are random and independent. How to compute these outputs should
be recorded because they can be helpful for the simulator to program the security
reduction. Let x be a query, y be its response, and S be the secret state used to com-
pute y. After the simulator responds to the query on x with y = H(x), the simulator
should add a tuple (x, y, S) to a hash list, denoted by L .
The hash list created by the simulator is composed of input, output, and the cor-
responding state S. This hash list should satisfy the following conditions.
• The hash list is empty at the beginning before any hash queries are made.
• All tuples associated with queries will be added to this hash list.
• The secret state S must be unknown to the adversary.
How to choose S in computing y is completely dependent on the proposed scheme
and the security reduction. In the security reduction for some encryption schemes,
we note that y can be randomly chosen without using a secret state. Examples can
be found in the schemes given in this book.

4.8.4 How to Program Security Reductions with Random Oracles

For a security proof in the random oracle model, the simulator should add one more
phase called H-Query (usually after the Setup phase) in the simulation to describe
hash queries and responses. Note that this phase only appears in the security reduc-
tion, and it should not appear in the security model.
H-Query. The adversary makes hash queries in this phase. The simulator prepares
a hash list L to record all queries and responses, where the hash list is empty at the
beginning.
For a query x to the random oracle, if x is already in the hash list, the simulator
responds to this query following the hash list. Otherwise, the simulator generates a
secret state S and uses it to compute the hash output y adaptively. Then, the simulator
responds to this query with y = H(x) and adds the tuple (x, y, S) to the hash list.
This completes the description of the random oracle performance in the simula-
tion. If more than one hash function is set as a random oracle, then the simulator
must describe how to program each random oracle accordingly. In the random or-
acle model, the adversary can make queries to random oracles at any time even
after the adversary wins the game. The simulator should generate all outputs in an
adaptive way in order to make sure that the random oracle can help the simulator
program the simulation, such as signature simulation and private-key simulation.
How to adaptively respond to the hash queries from the adversary is fully dependent
on the proposed scheme, the underlying hard problem, and the security reduction.
Examples can be found in Section 4.12.
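A minimal sketch of such a simulator-controlled random oracle is given below (our own
illustration; the secret state S and the way y is derived are placeholders that a concrete
reduction would instantiate so that responses become simulatable or reducible).

```python
import os
import secrets

class SimulatedRandomOracle:
    """Random oracle controlled by the simulator, backed by a hash list of tuples (x, y, S)."""

    def __init__(self, output_bytes=32):
        self.output_bytes = output_bytes
        self.hash_list = {}   # x -> (y, S); the secret state S is never revealed to the adversary

    def query(self, x):
        # Repeated queries must be answered consistently, so consult the hash list first.
        if x in self.hash_list:
            return self.hash_list[x][0]
        # Otherwise choose a fresh secret state S and a uniformly random output y.
        # (In a concrete reduction, y would be computed from S and the problem instance.)
        S = secrets.token_bytes(16)
        y = os.urandom(self.output_bytes)
        self.hash_list[x] = (y, S)
        return y

# Usage: distinct queries receive independent uniform answers; repeated queries are consistent.
O = SimulatedRandomOracle()
assert O.query(b"m1") == O.query(b"m1")
assert O.query(b"m1") != O.query(b"m2")
print("hash list size:", len(O.hash_list))   # 2
```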

4.8.5 Oracle Response and Its Probability Analysis

Suppose the hash function H is set as a random oracle. We study a general case of
oracle response and analyze its success probability.

H-Query. The simulator prepares a hash list L to record all queries and responses,
where the hash list is empty at the beginning.
For a query x to the random oracle, if x is already in the hash list, the simulator
responds to this query following the hash list. Otherwise, the simulator works as
follows.
• The simulator chooses a random secret value z and a secret bit c ∈ {0, 1} (how
to choose c has not yet been defined) to compute y. Here, S = (z, c) is the secret
state used to compute y.
• The simulator then sets H(x) = y and sends y to the adversary.
• Finally, the simulator adds (x, y, z, c) to the hash list.
This completes the description of the random oracle performance in the simulation.
Suppose qH hash queries are made to the random oracle. The hash list is composed
of qH tuples as follows.

(x1 , y1 , z1 , c1 ), (x2 , y2 , z2 , c2 ), · · · , (xqH , yqH , zqH , cqH ).

Suppose the adversary does not know (z1 , c1 ), (z2 , c2 ), (z3 , c3 ), · · · , (zqH , cqH ).
From these qH hash queries, the adversary adaptively chooses q+1 out of qH queries

x′1, x′2, · · · , x′q, x∗,

where q + 1 ≤ qH. Let c′1, c′2, · · · , c′q, c∗ be the corresponding secret bits for the chosen hash queries. We want to calculate the success probability, defined as

P = Pr[c′1 = c′2 = · · · = c′q = 0 ∧ c∗ = 1].

We stress that this probability cannot be computed once all ci are known to the
adversary. That is why we assume that the adversary does not know all ci at the
beginning.
This probability appears in many security proofs. For example, in the security
proof of digital signatures in the random oracle model, the security reduction is
programmed in such a way that a signature of message x is simulatable if c = 0
and is reducible if c = 1. Suppose the adversary first queries signatures on messages
x′1, x′2, · · · , x′q and then returns a forged signature of the message x∗. The probability
of successful simulation and useful attack is equal to

Pr[c′1 = c′2 = · · · = c′q = 0 ∧ c∗ = 1].

The above success probability depends on the method of choosing ci. Here we
introduce two approaches proposed in the literature. The first approach is easy to
understand, but the probability is relatively small, and the loss factor is linear in the
number of all hash queries, denoted by qH . The second approach is a bit complex,
but has a larger success probability than the first one, and the loss factor is linear in
the number of chosen hash queries, denoted by q.

• In the first approach, the simulator randomly chooses i∗ ∈ [1, qH ] and guesses that
the adversary will output the i∗ -th query as x∗ . Then, for a query xi , the simulator
sets

ci = 1 if i = i∗, and ci = 0 otherwise.
In this setting, P is equivalent to successfully guessing which query is chosen as
x∗ . Since the adversary makes qH queries, and one of queries is chosen to be x∗ ,
we have P = 1/qH . The success probability is linear in the number of all hash
queries.
• In the second approach, the simulator guesses more than one query as the poten-
tial x∗ to increase the success probability. To be precise, the simulator flips a bit
bi ∈ {0, 1} in such a way that bi = 0 occurs with probability Pb , and bi = 1 occurs
with probability 1 − Pb . Then, for a query xi , the simulator sets

ci = 1 if bi = 1, and ci = 0 otherwise.

Since all bi are chosen according to the probability Pb , we have

P = Pr[c′1 = c′2 = · · · = c′q = 0 ∧ c∗ = 1]
  = Pr[b1 = b2 = · · · = bq = 0 ∧ b∗ = 1]
  = Pb^q (1 − Pb).

The value is maximized at Pb = 1 − 1/(1 + q), and then we get P ≈ 1/(eq) since
(1 + 1/q)^q ≈ e. This success probability is linear in the number of chosen hash
queries instead of the number of all hash queries.
In the security proof, qH is believed to be much larger than q (for example
qH = 260 compared to q = 230 ). Therefore, the second approach has a larger success
probability than the first one. The first approach, however, is much simpler to under-
stand than the second approach. In the security proofs for selected schemes in this
book, we always adopt the first approach when one of the two approaches should be
used in a security reduction. We can naturally modify the security proofs with the
second approach in order to have a smaller reduction loss.
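As a quick sanity check of the two approaches, the following sketch (with hypothetical query counts, kept small so the exact evaluation is cheap) compares the success probability 1/qH of the first approach with Pb^q (1 − Pb) of the second approach at the optimal Pb = 1 − 1/(1 + q), and with the approximation 1/(eq).

```python
import math

qH, q = 2**20, 2**10                 # hypothetical numbers of all / chosen hash queries

# First approach: guess which one of the qH queries will be returned as x*.
p1 = 1 / qH

# Second approach: set c_i = 0 with probability Pb; success probability Pb^q * (1 - Pb).
Pb = 1 - 1 / (1 + q)                 # the maximising choice of Pb
p2 = Pb**q * (1 - Pb)

print(p1)                            # about 9.5e-07
print(p2, 1 / (math.e * q))          # both about 3.6e-04, so the loss is ~ q, not qH
```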

4.8.6 Summary of Using Random Oracles

We summarize the use of random oracles in security proofs, especially for digital
signatures and encryption, as follows.
• A random oracle is useful in a security proof not because its outputs are random
and uniformly distributed but because the adversary must query x to the random
oracle in order to know the corresponding output H(x), and the computations of
all outputs are fully controlled by the simulator.

• When a hash function is set as a random oracle, the adversary will not receive the
hash function algorithm from the simulator in the security proof. The adversary
can only access the “hash function” by asking the random oracle.
• A random oracle is an ideal hash function. However, we do not need to con-
struct such an ideal hash function in the simulation. Instead, the main task for the
simulator in the security proof is to think how to respond to each input query.
• The simulator can adaptively return any element as the response to a given in-
put query as long as all responses look random from the point of view of the
adversary. This tip is very useful for security proofs in “hash-then-sign” digital
signatures. To be precise, we program H(m) in such a way that the corresponding
signature of H(m) is either simulatable or reducible.
• The hash list is empty when it is created, but the simulator can pre-define the
tuple (x, H(x), S) in the hash list before the adversary makes a query on x. This
tip is useful when a signature generation needs to use H(x), and x has not yet
been queried by the adversary.
• The secret state S for x is useful for the simulator to compute signatures on x in
digital signatures, or private keys on x in identity-based encryption, or to perform
the decryption without knowing the corresponding secret key.
• If breaking a scheme must use a particular pair (x, H(x)), then the adversary
must have queried x to the random oracle. This is essential in the security proof
of encryption under a computational hardness assumption.
• To simplify the security proof, we assume that the simulator already knows the
maximum number of queries that the adversary will make to the random oracle
before the simulation. This assumption is useful in the probability calculation.
More details about how to use random oracles to program a security reduction can
be found in the schemes given in this book.

4.9 Security Proofs for Digital Signatures

4.9.1 Proof Structure

Suppose there exists an adversary A who can break the proposed signature scheme
in the corresponding security model. We construct a simulator B to solve a com-
putational hard problem. Given as input an instance of this hard problem in the
security proof we must show (1) how the simulator generates the simulated scheme;
(2) how the simulator solves the underlying hard problem using the adversary’s at-
tack; and (3) why the security reduction is correct. A security proof is composed of
the following three parts.
• Simulation. In this part, we show how the simulator uses the problem instance
to generate a simulated scheme and interacts with the adversary following the
unforgeability security model. If the simulator has to abort, the security reduction
fails.

• Solution. In this part, we show how the simulator solves the underlying hard
problem using the forged signature generated by the adversary. To be precise, the
simulator should be able to extract a solution to the problem instance from the
forged signature.
• Analysis. In this part, we need to provide the following analysis.
1. The simulation is indistinguishable from the real attack.
2. The probability PS of successful simulation.
3. The probability PU of useful attack.
4. The advantage εR of solving the underlying hard problem.
5. The time cost of solving the underlying hard problem.
The simulation will be successful if the simulator does not abort in computing
the public key and responding to all signature queries from the adversary. The simu-
lation is indistinguishable if all computed signatures can pass the signature verifica-
tion, and the simulation has the randomness property. An attack by the adversary is
useful if the simulator can extract a solution to the problem instance from the forged
signature.
Many security proofs only calculate the probability of successful simulation
without calculating the probability of useful attack. Such an analysis is the same
as ours because the probability of successful simulation in their definitions includes
the usefulness of the attack. The difference is due to the different definition of suc-
cessful simulation.
A security reduction for digital signatures does not have to use the forged signa-
ture to solve an underlying hard problem. With random oracles, the simulator can
use hash queries instead of the forged signature to solve an underlying hard prob-
lem. However, this is a rare case. The motivation for this kind of security reduction
will be explained later.

4.9.2 Advantage Calculation

Let ε be the advantage of the adversary in breaking the proposed signature scheme.
The advantage of solving the underlying hard problem, denoted by εR , is

εR = PS · ε · PU .

If the simulation is successful and indistinguishable from the real attack with prob-
ability PS , the adversary can successfully forge a valid signature with probability ε.
With probability PU , the forged signature is a useful attack and can be reduced to
solving the underlying hard problem. Therefore, we obtain εR as the advantage of
solving the underlying hard problem in the security reduction.

4.9.3 Simulatable and Reducible

In the security reduction, if the solution to the problem instance is extracted from the
adversary’s forged signature, we can classify all signatures in the simulated scheme
into two types: simulatable and reducible.
• Simulatable. A signature is simulatable if it can be computed by the simulator.
If the forged signature is simulatable, the forgery attack is useless. Otherwise,
the simulator can compute the forged signature and perform as the adversary by
itself. The security reduction will be successful without the help of the adver-
sary. That is, the simulator can solve the underlying hard problem by itself. This
security reduction is wrong.
• Reducible. A signature is reducible if it can be used to solve the underlying hard
problem. If the forged signature is reducible, the attack is useful. Similarly, a re-
ducible signature in the security reduction cannot be computed by the simulator.
Otherwise, the simulator could solve the underlying hard problem by itself.
In a security reduction for digital signatures, each signature in the simulated scheme
should be either simulatable or reducible. A successful security reduction requires
that all queried signatures are simulatable, and the forged signature is reducible.
Otherwise, the simulator cannot respond to signature queries from the adversary or
use the forged signature to solve the underlying hard problem.

4.9.4 Simulation of Secret Key

In security proofs for digital signatures, most security proofs in the literature pro-
gram the security reduction in such a way that the simulator does not know the
corresponding secret key. Intuitively, if the simulator knows the secret key, then all
signatures must be simulatable including the forged signature. Therefore, this reduc-
tion must be unsuccessful. However, this observation is not correct. It is possible to
program a simulation where the simulator knows the secret key. An example can
be found in [7]. We stress that some signatures must be still reducible even though
the secret key is known to the simulator; otherwise, the security reduction must be
incorrect. This is a paradox that must be addressed in this type of security reduction.
In this book, all introductions and given schemes program the security reduction
in such a way that the secret key is unknown to the simulator. That is, in such a
simulation, if the secret key is known to the simulator, the simulator can immediately
solve the underlying hard problem.

4.9.5 Partition

In the simulation, the simulator must also hide from the adversary which signatures
are simulatable and which signatures are reducible. If the adversary can always re-
turn a simulatable signature as the forged signature, the reduction has no advantage
of solving the underlying hard problem. We call the approach of splitting signatures
into the above two sets partition. The simulator must stop the adversary (who knows
the reduction algorithm and can make signature queries) from finding the partition.
Two different approaches are currently used to deal with the partition.
• Intractable in Finding. Given the simulation including queried signatures, the
computationally unbounded adversary cannot find the partition with probabil-
ity 1, so the forged signature returned by the adversary is still reducible with
non-negligible probability PU . To hide the partition from the computationally
unbounded adversary in the security reduction, the simulator should utilize ab-
solutely hard problems in hiding the partition in the simulation. The secret infor-
mation I, introduced in Section 4.5.7, can be treated as the partition.
• Intractable in Distinguishing. Given the simulation including queried signa-
tures and two complementary partitions, the computationally unbounded adver-
sary has no advantage in distinguishing which partition is used, so the adversary
can only guess the partition correctly with probability 1/2. Here, complementary
partitions means that any signature in the simulation must be either simulatable
using one partition or reducible using another partition. In this case, the adversary
will return a forged signature that is reducible with probability 1/2. The secret
information I here is which partition is adopted in the simulation. Note that it is
not necessary to construct two partitions that are complementary, as long as the
adversary cannot always find which signature is simulatable in both partitions.
In comparison with the first approach, the second approach does not need to hide
the partition from the adversary. We found that the partition is fixed in the reduc-
tion algorithm. To make sure that there are two different partitions in the second
approach, we must propose two distinct simulation algorithms. That is, a reduction
algorithm consists of at least two simulation algorithms and the corresponding so-
lution algorithms. One such example can be found in Section 4.14.3.

4.9.6 Tight Reduction and Loose Reduction Revisited

Recalling the definitions of tight reduction and loose reduction, we have the follow-
ing observations.
• If all signatures of the same message are either simulatable or reducible and a ran-
domly chosen message is simulatable with probability P, then the reduction must
be loose. Let qs be the number of signature queries. If the adversary randomly
chooses messages for signature queries, the probability of successful simulation
and useful attack is P^qs (1 − P) ≤ 1/qs. The reduction loss is linear in the number
qs, and thus the security reduction is loose.
• If the signatures of a message can be generated to be simulatable or reducible, we
can achieve a tight reduction. For a signature query on a message, the simulator
makes it simulatable. In this case, there is no abort in the responses to signature
queries. Let 1 − P be the probability that the forged signature is reducible. The
probability of successful simulation and useful attack is 1 · (1 − P) instead of
P^qs (1 − P). If 1 − P is a constant (e.g., 1/2), the reduction loss is constant, and the security reduction is tight.
We found that all previous signature schemes with tight reductions use a random
salt in a signature generation, where the random salt is used to switch the function-
ality between simulatable and reducible. Therefore, all unique signatures [77] that
do not have any random salt in the signature generation seem unable to achieve tight
reductions. However, with random oracles, we can program a tight reduction for a
specially constructed unique signature scheme [53], where the solution to the under-
lying hard problem is from one of the hash queries. A simplified signature scheme
is given in Section 5.8.

4.9.7 Summary of Correct Security Reduction

A correct security reduction for a digital signature scheme, where the simulator does
not know the secret key, should satisfy the following conditions. We can use these
conditions to check whether a security reduction is correct or not.
• The underlying hard problem is a computational hard problem.
• The simulator does not know the secret key.
• All queried signatures are simulatable without knowing the secret key.
• The simulation is indistinguishable from the real attack.
• The partition is intractable or indistinguishable.
• The forged signature is reducible.
• The advantage εR of solving the underlying hard problem is non-negligible.
• The time cost of the simulation is polynomial time.
A security reduction where the simulator uses hash queries to solve an underlying
hard problem or knows the secret key will change the method of the security reduc-
tion. However, since these two cases are very special, we omit their introductions.

4.10 Security Proofs for Encryption Under Decisional Assumptions

4.10.1 Proof Structure

Suppose there exists an adversary A who can break the proposed encryption scheme
in the corresponding security model. We construct a simulator B to solve a deci-
sional hard problem. Given as input an instance of this hard problem (X, Z), in the
security proof we must show (1) how the simulator generates the simulated scheme;
(2) how the simulator solves the underlying hard problem using the adversary’s at-
tack; and (3) why the security reduction is correct. A security proof is composed of
the following three parts.
• Simulation. In this part, we show how the simulator uses the problem instance
(X, Z) to generate a simulated scheme and interacts with the adversary follow-
ing the indistinguishability security model. Most importantly, the target Z in the
problem instance must be embedded in the challenge ciphertext. If the simulator
has to abort, it outputs a random guess of Z.
• Solution. In this part, we show how the simulator solves the decisional hard
problem using the adversary’s guess c′ of c, where the message in the challenge
ciphertext is mc. The method of guessing Z is the same in all security reductions.
To be precise, if c′ = c, the simulator outputs that Z is true. Otherwise, c′ ≠ c,
and it outputs that Z is false.
• Analysis. In this part, we need to provide the following analysis.
1. The simulation is indistinguishable from the real attack if Z is true.
2. The probability PS of successful simulation.
3. The probability PT of breaking the challenge ciphertext if Z is true.
4. The probability PF of breaking the challenge ciphertext if Z is false.
5. The advantage εR of solving the underlying hard problem.
6. The time cost of solving the underlying hard problem.
The simulation will be successful if the simulator does not abort in computing
the public key, responding to queries, and computing the challenge ciphertext. The
simulation with a true Z is indistinguishable from the real attack if (1) all responses
to queries are correct; (2) the challenge ciphertext generated with a true Z is a correct
ciphertext as defined in the proposed scheme; (3) and the randomness property holds
in the simulation.
In this book, breaking the ciphertext means that the adversary correctly guesses
the message in the ciphertext. This proof structure is regarded as the standard struc-
ture for an encryption scheme under a decisional hardness assumption in the indis-
tinguishability security model. The proof structure for an encryption scheme under
a computational hardness assumption in the random oracle model is quite different
and will be introduced in Section 4.11.5.

4.10.2 Classification of Ciphertexts

A strong security model for encryption allows the adversary to make decryption
queries on any ciphertexts with the restriction that no decryption query is allowed
on the challenge ciphertext. We found that all ciphertexts in the simulation can be
classified into the following four types.
• Correct Ciphertext. A ciphertext is correct if it can be generated by the en-
cryption algorithm. For example, taking as input pk = (g, g1 , g2 ) ∈ G and mes-
sage m ∈ G, an encryption algorithm randomly chooses r ∈ Z p and computes
CT = (gr , gr1 , gr2 · m). Any ciphertext that can be generated with a message m
and a number r for pk is a correct ciphertext.
• Incorrect Ciphertext. A ciphertext is incorrect if it cannot be generated by the
encryption algorithm. Continued from the above example, (gr , gr+1 r
1 , g2 · m) is an
incorrect ciphertext because it cannot be generated by the encryption algorithm
with (pk, m) as the input using any random number.
• Valid Ciphertext. A ciphertext is valid if the decryption of the ciphertext returns
a message. We stress that the message returned from the decryption can be any
message as long as the output is not ⊥.
• Invalid Ciphertext. A ciphertext is invalid if the decryption of the ciphertext
returns an error ⊥, without returning any message.
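The following toy sketch (tiny, insecure parameters chosen purely for illustration) instantiates the pk = (g, g1, g2) example over Z_q^* and places a correct ciphertext next to an incorrect one; a party who knows log_g g1 can detect that the incorrect ciphertext has inconsistent randomness.

```python
# Toy parameters (hypothetical and far too small to be secure).
q = 467                                   # small prime modulus
g = 2
a, b = 33, 57                             # discrete logs of g1, g2, known only for this demo
g1, g2 = pow(g, a, q), pow(g, b, q)

def encrypt(m, r):
    """Correct ciphertext: every component uses the same randomness r."""
    return (pow(g, r, q), pow(g1, r, q), (pow(g2, r, q) * m) % q)

def consistent(ct):
    """Check C2 = C1^a, i.e., whether a single r explains the first two components."""
    C1, C2, _ = ct
    return pow(C1, a, q) == C2

m, r = 101, 123
correct = encrypt(m, r)
# Incorrect ciphertext: the second component uses r+1, so no single r explains all parts.
incorrect = (pow(g, r, q), pow(g1, r + 1, q), (pow(g2, r, q) * m) % q)

print(consistent(correct), consistent(incorrect))   # True False
```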
In the above classifications, correct ciphertext and incorrect ciphertext are associ-
ated with the ciphertext structure, while valid ciphertext and invalid ciphertext are
associated with the decryption result. Note that correct ciphertext and valid cipher-
text may be treated as equivalent elsewhere in the literature.
When constructing an encryption scheme, an ideal decryption algorithm should
be able to accept all correct ciphertexts as valid ciphertexts, and reject all incorrect
ciphertexts. In the security reduction, the simulated decryption algorithm should be
able to perform the decryption identically to the proposed decryption algorithm.
Otherwise, the simulated scheme may be distinguishable from the real scheme.
However, it is not easy to construct such a perfect decryption in both the real scheme
and the simulated scheme.
We classify ciphertexts into the above four types in order to clarify the analysis of
whether the decryption simulation helps the adversary break the challenge cipher-
text or not, especially when the challenge ciphertext is generated with a false Z. A
ciphertext from the adversary for a decryption query can be one of the above four
types, and each type should be differently responded in a correct way. This classi-
fication is desirable because in the security reduction for the proposed scheme, an
incorrect ciphertext for a decryption query might be accepted such that the decryp-
tion result will help the adversary break the challenge ciphertext.
The challenge ciphertext is either a correct ciphertext or an incorrect ciphertext,
which depends on Z in the ciphertext generation. For the challenge ciphertext, we
further define two special types in the next subsection.

4.10.3 Classification of the Challenge Ciphertext

The target Z in the instance of the underlying decisional hard problem is either a
true or a false element. The challenge ciphertext must be computed with the target
Z, and it can be classified into the following two types.
• True Challenge Ciphertext. The challenge ciphertext created with the target Z
is a true challenge ciphertext if Z is true. We have that the probability of breaking
the true challenge ciphertext is

PT = Pr[c′ = c | Z = True].

• False Challenge Ciphertext. The challenge ciphertext created with the target
Z is a false challenge ciphertext if Z is false. We have that the probability of
breaking the false challenge ciphertext is

PF = Pr[c′ = c | Z = False].

In the simulation, if the challenge ciphertext is the true challenge ciphertext, we
require that the adversary has non-negligible advantage defined in the breaking as-
sumption in guessing the encrypted message. Otherwise, the adversary should only
have negligible advantage in guessing the encrypted message in the false challenge
ciphertext. To achieve this difference, the simulation of the challenge ciphertext
should satisfy the conditions given in the next subsection.

4.10.4 Simulation of the Challenge Ciphertext

In a security reduction for encryption, the simulator must embed the target Z in the
challenge ciphertext such that it satisfies the following conditions.
• If Z is true, the true challenge ciphertext is a correct ciphertext whose encrypted
message is mc ∈ {m0 , m1 }, where m0 , m1 are two messages from the same mes-
sage space provided by the adversary, and c is randomly chosen by the simulator.
We should program the simulation in such a way that the simulation is indistin-
guishable and then the adversary can guess the encrypted message correctly with
non-negligible advantage defined in the breaking assumption.
• If Z is false, the false challenge ciphertext can be either a correct ciphertext or an
incorrect ciphertext. However, the challenge ciphertext cannot be an encryption
of the message mc from the point of view of the adversary. We program the
simulation in such a way that the adversary cannot guess the encrypted message
correctly except with negligible advantage.
If the challenge ciphertext is independent of Z, the guess of the message in the
challenge ciphertext is independent of Z, and thus the guess is useless. This is why
Z must be embedded in the challenge ciphertext.

4.10.5 Advantage Calculation 1

The advantage of solving the underlying hard problem is

εR = Pr[Guess Z = True | Z = True] − Pr[Guess Z = True | Z = False]
   = Pr[The simulator guesses Z is true | Z = True] − Pr[The simulator guesses Z is true | Z = False].

Let US be the event of unsuccessful simulation, and SS be the event of successful
simulation. If the simulation is unsuccessful, the simulator will randomly guess Z
by itself, and thus we have
Pr[The simulator guesses Z is true | US] = 1/2.
Otherwise, according to the proof structure, we have
Pr[The simulator guesses Z is true | SS] = Pr[c′ = c].

By applying the law of total probability, we have

Pr[The simulator guesses Z is true | Z = True]
  = Pr[The simulator guesses Z is true | Z = True ∧ SS] · Pr[SS] + Pr[The simulator guesses Z is true | Z = True ∧ US] · Pr[US]
  = Pr[c′ = c | Z = True] · Pr[SS] + (1/2) · Pr[US]
  = PT · PS + (1/2)(1 − PS).

Pr[The simulator guesses Z is true | Z = False]
  = Pr[The simulator guesses Z is true | Z = False ∧ SS] · Pr[SS] + Pr[The simulator guesses Z is true | Z = False ∧ US] · Pr[US]
  = Pr[c′ = c | Z = False] · Pr[SS] + (1/2) · Pr[US]
  = PF · PS + (1/2)(1 − PS).

The above analysis yields the advantage of solving the underlying hard problem,
which is
εR = Pr[The simulator guesses Z is true | Z = True] − Pr[The simulator guesses Z is true | Z = False]
   = (PT · PS + (1/2)(1 − PS)) − (PF · PS + (1/2)(1 − PS))
   = PS (PT − PF).

In the security reduction, to solve a decisional hard problem with non-negligible
advantage, we should program the security reduction in such a way that PT is as
large as possible, and PF is as small as possible. On the contrary, to make the security
reduction fail, the aim of the adversary is to achieve PT ≈ PF . According to the
descriptions of a useful attack and a useless attack, the adversary’s attack is useful
if PT − PF is non-negligible. Otherwise, it is useless.
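The identity εR = PS (PT − PF) can also be checked numerically. The sketch below (hypothetical probability values) simulates the reduction: with probability 1 − PS the simulator aborts and guesses Z at random; otherwise it outputs "Z is true" exactly when the adversary guesses c correctly, which happens with probability PT or PF depending on Z.

```python
import random

def simulator_says_true(Z_is_true, PS, PT, PF):
    """One run of the reduction: does the simulator output 'Z is true'?"""
    if random.random() > PS:                 # unsuccessful simulation: random guess of Z
        return random.random() < 0.5
    p_correct = PT if Z_is_true else PF      # adversary's chance of guessing c correctly
    return random.random() < p_correct       # output 'true' iff c' = c

def estimate_eps_R(PS, PT, PF, trials=200_000):
    t = sum(simulator_says_true(True, PS, PT, PF) for _ in range(trials)) / trials
    f = sum(simulator_says_true(False, PS, PT, PF) for _ in range(trials)) / trials
    return t - f

PS, PT, PF = 0.8, 0.9, 0.5                   # hypothetical values
print(estimate_eps_R(PS, PT, PF), PS * (PT - PF))   # both close to 0.32
```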

4.10.6 Probability PT of Breaking the True Challenge Ciphertext

We assume that there exists an adversary who can break the proposed scheme in
polynomial time with non-negligible advantage ε. If the message mc is encrypted in
the real scheme, according to the definition of the advantage in the security model,
we have

ε = 2 (Pr[c′ = c] − 1/2).

That is, the adversary can correctly guess the message in the challenge ciphertext of
the real scheme with probability Pr[c′ = c] = 1/2 + ε/2.
If Z is true, and the simulated scheme is indistinguishable from the real scheme
from the point of view of the adversary, the adversary will break the simulated
scheme as it can break the real scheme and correctly guess the encrypted message
with probability 1/2 + ε/2. That is, we have

PT = Pr[c′ = c | Z = True] = 1/2 + ε/2.

4.10.7 Probability PF of Breaking the False Challenge Ciphertext

If Z is false, the false challenge ciphertext should be an incorrect ciphertext or
a correct ciphertext whose encrypted message is neither m0 nor m1. Therefore, the
adversary knows that the given scheme is not a real scheme, but a simulated scheme,
because the challenge ciphertext in the real scheme should be a correct ciphertext
whose encrypted message is from {m0 , m1 }.
Since the adversary is malicious, even though the adversary finds out that the
challenge ciphertext is false, it will not abort but try its best to guess c using what it
knows in order to have PF ≈ PT . Therefore, the probability PF is
PF = Pr[c′ = c | Z = False] ≥ 1/2,

which is highly dependent on the simulation and is no smaller than 1/2, where the
probability 1/2 is obtained by a random guess.

4.10.8 Advantage Calculation 2

According to the deduction in Section 4.10.5, the advantage of solving the underly-
ing hard problem is

εR = Pr[Guess Z = True | Z = True] − Pr[Guess Z = True | Z = False]
   = Pr[The simulator guesses Z is true | Z = True] − Pr[The simulator guesses Z is true | Z = False]
   = PS (PT − PF).

If the probability PS is non-negligible, and the simulation with a true Z is indistinguishable, by putting the probabilities PT and PF together, we have

εR = PS (PT − PF) = PS (1/2 + ε/2 − PF).

The advantage is non-negligible if and only if PF ≈ 1/2. Otherwise, if PF = 1/2 + ε/2, the
advantage of solving the underlying hard problem is zero.
The aim of the simulator is to obtain a non-negligible εR from the security re-
duction against a malicious adversary. According to the above deduction, a correct
security reduction requires that
• PS is non-negligible.
• The simulated scheme is indistinguishable from the real scheme if Z is true.
• PF ≈ 1/2, which means that the malicious adversary has almost no advantage in
breaking the false challenge ciphertext.
The ideal probability PF = 1/2 holds if and only if the message mc is encrypted with
a one-time pad from the point of view of the adversary. The probability PF ≈ 1/2
means that the false challenge ciphertext is a one-time pad from the point of view
of the adversary except with negligible probability. A one-time pad is a specific
encryption where the adversary has no advantage in guessing the message in the
challenge ciphertext, even if it has unbounded computational power. We introduce
the concept of one-time pad in the next subsection.
We found an interesting reduction result according to the above analysis. For
example, even if the simulation satisfies PS = 1, ε = 1, PF = 1/2, and the simulation
is indistinguishable, we obtain

εR = PS (1/2 + ε/2 − PF) = 1/2 ≠ 1.
That is, the maximum advantage of solving the decisional hard problem is not 100%,
but at most 50%. The guess of Z is not always correct because the adversary, who
is given the false challenge ciphertext (Z is false), might still guess c correctly with
probability 1/2 and output c′ = c, so that the simulator will guess that Z is true, but
actually Z is false.

4.10.9 Definition of One-Time Pad

One-time pad plays an important role in the security proof of encryption. The sim-
plest example of a one-time pad is as follows:

CT = m ⊕ K,

where m ∈ {0, 1}^n is a message, and K ∈ {0, 1}^n is a random key unknown to the
adversary. For such a one-time pad, the adversary has no advantage in guessing the
message in CT, except with success probability 1/2^n, even if it has unbounded computational power. If CT is created for a randomly chosen message from two distinct
messages {m0, m1}, the adversary still has no advantage and can only guess the
encrypted message with success probability 1/2. Therefore, a one-time pad captures
perfect security in the indistinguishability security model, where the adversary has
no advantage in breaking the ciphertext.
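A minimal sketch of this XOR one-time pad (Python, with illustrative message strings) makes the perfect-security claim concrete: from the ciphertext alone, both candidate messages are explained by equally likely keys.

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

m0, m1 = b"attack at dawn!!", b"retreat at noon!"   # two distinct 16-byte messages
K = secrets.token_bytes(16)                          # uniform key, used once
c = secrets.randbelow(2)
CT = xor([m0, m1][c], K)                             # CT = m_c XOR K

# Without K, CT is consistent with either message: each candidate key below is
# a perfectly valid, equally likely key, so guessing c succeeds with probability 1/2.
K0 = xor(CT, m0)
K1 = xor(CT, m1)
```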
The notion of one-time pad is defined as follows.
Definition 4.10.9.1 (One-Time Pad) Let E(m, r, R) be an encryption of a given
message m with a public parameter R and a secret parameter r. The ciphertext
E(m, r, R) is a one-time pad if, for any two distinct messages m0 , m1 from the same
message space, CT can be seen as an encryption of either the message m0 with the
secret parameter r0 or an encryption of the message m1 with the secret parameter
r1 under the public parameter R with the same probability:
Pr[CT = E(m0, r0, R)] = Pr[CT = E(m1, r1, R)].

In the security proof of encryption, we need to prove that the false challenge
ciphertext is a one-time pad from the point of view of the adversary. To be precise,
given an instance (X, Z) of a decisional hard problem, we must program the security
reduction in such a way that Z is embedded in the challenge ciphertext CT ∗ , and
CT ∗ is a one-time pad from the point of view of the adversary if Z is false. We stress
that “from the point of view of the adversary” is extremely important. We cannot
simply prove that the false challenge ciphertext is a one-time pad. The reasons will
be explained in Section 4.10.11.
In the security proof of group-based encryption, we are more interested in those
one-time pads constructed from a cyclic group. We can use the following lemma
to check whether or not a general ciphertext constructed over a cyclic group is a
one-time pad from the point of view of the adversary, even if it has unbounded
computational power.
Lemma 4.10.1 Let CT be a general ciphertext defined as follows, where mc ∈
{m0 , m1 } and the adversary knows the group (G, g, p) and messages m0 , m1 ∈ G:

CT = (g^{x1}, g^{x2}, g^{x3}, · · · , g^{xn}, g^{x∗} · mc).

The ciphertext CT is a one-time pad if



• x∗ is a random number from Z p , and


• x∗ is independent of x1 , x2 , · · · , xn .
The ciphertext CT is not a one-time pad if
• x∗ is not a random number from Z p (can be known to the adversary), or
• x∗ is dependent on x1 , x2 , · · · , xn (can be computed from x1 , x2 , · · · , xn ).

Proof. Let C∗ = g^{x∗} · mc. From the point of view of the adversary, we have
• CT can be seen as an encryption of m0 where r0 = x∗ = log_g C∗ − log_g m0 ∈ Zp.
• CT can be seen as an encryption of m1 where r1 = x∗ = log_g C∗ − log_g m1 ∈ Zp.
Since x∗ is random and independent of x1, x2, · · · , xn, we have that both x∗ =
log_g C∗ − log_g m0 and x∗ = log_g C∗ − log_g m1 hold with the same probability 1/p.
Therefore, the general ciphertext is a one-time pad. Otherwise, if the adversary
knows x∗ , or can compute x∗ from x1 , x2 , · · · , xn , the message mc in the general ci-
phertext can be decrypted by the adversary, and thus it is not a one-time pad.
This completes the proof. 

To prove that CT is a one-time pad, we can also prove that x∗, x1, x2, · · · , xn are
random and independent. However, this is a sufficient but not necessary condition.
An example is given at the end of the next subsection.

4.10.10 Examples of One-Time Pad

We give several examples to introduce what a one-time pad looks like in group-
based cryptography. Suppose the adversary knows the following information.
• The cyclic group (G, g, h, p), where g, h are generators, and p is the group order.
• Two distinct messages m0 , m1 from G and how the ciphertext is created.
In the following ciphertexts, c ∈ {0, 1} and x, y, z ∈ Zp are randomly chosen by the
simulator. The aim of the adversary is to guess c in CT. We want to investigate
whether the following constructed ciphertexts are one-time pads or not.

CT = (g^x, g^4 · mc)                                         (10.25)
CT = (g^x, g^y · mc)                                         (10.26)
CT = (g^x, h^x · mc)                                         (10.27)
CT = (g^x, g^y, g^{xy} · mc)                                 (10.28)
CT = (g^x, h^{x+y} · mc)                                     (10.29)
CT = (g^{2x+y+z}, g^{x+3y+z}, g^{4x+7y+3z} · mc)             (10.30)
CT = (g^{x+3y+3z}, g^{2x+3y+5z}, g^{9x+5y+2z} · mc)          (10.31)
CT = (g^{x+3y+3z}, g^{2x+6y+6z}, g^{9x+5y+2z} · mc)          (10.32)

According to Lemma 4.10.1, we have the following results.


• 10.25 No. We have x∗ = 4 which is not random.
• 10.26 Yes. We have
(x1 , x∗ ) = (x, y).
x∗ is random and independent of x1 , because (x, y) are both random numbers.
• 10.27 No. We have
(x1, x∗) = (x, x log_g h).
x∗ is dependent on x1 and log_g h, satisfying the equation x∗ = x1 · log_g h.
• 10.28 No. We have
(x1 , x2 , x∗ ) = (x, y, xy).
x∗ is dependent on x1 and x2 , satisfying the equation x∗ = x1 x2 .
• 10.29 Yes. We have

(x1, x∗) = (x, x log_g h + y log_g h).

x∗ is independent of x1 , because y is a random number that only appears in x∗ .


• 10.30 No. We have

(x1 , x2 , x∗ ) = (2x + y + z, x + 3y + z, 4x + 7y + 3z).

The determinant of the coefficient matrix is zero, and x∗ is dependent on (x1, x2),
satisfying x∗ = x1 + 2x2.
• 10.31 Yes. We have

(x1 , x2 , x∗ ) = (x + 3y + 3z, 2x + 3y + 5z, 9x + 5y + 2z).

The determinant of the coefficient matrix is nonzero. Therefore, x∗ is random and
independent of x1, x2.
• 10.32 Yes. We have

(x1 , x2 , x∗ ) = (x + 3y + 3z, 2x + 6y + 6z, 9x + 5y + 2z).

The determinant of the coefficient matrix is zero where x2 = 2x1 . However, x∗ and
x2 are independent because there exists a 2 × 2 sub-matrix whose determinant is
nonzero. Therefore, we have that x∗ is random and independent of x1 , x2 .
The last example shows one interesting result. Even though x1 , x2 , · · · , xn , x∗ are
not random and independent, it does not mean that x∗ can be computed from
x1 , x2 , · · · , xn , but that at least one value in {x1 , x2 , · · · , xn , x∗ } can be computed from
others. To make sure that the last example will not occur in the analysis, we can first
remove some xi if {x1 , x2 , · · · , xn } are dependent until all remaining xi are random
and independent. A detailed example can be found in Section 4.14.2.
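The independence checks above reduce to linear algebra over Zp: x∗ is random and independent of x1, ..., xn exactly when the coefficient vector of x∗ is not in the span of the coefficient vectors of x1, ..., xn. The sketch below (plain Gaussian elimination modulo a hypothetical prime standing in for the group order) reproduces the verdicts for 10.30, 10.31, and 10.32.

```python
p = 1_000_000_007          # hypothetical prime standing in for the group order

def rank_mod_p(rows):
    """Rank of a matrix over Z_p via Gaussian elimination."""
    rows = [[c % p for c in r] for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        inv = pow(rows[rank][col], -1, p)
        rows[rank] = [c * inv % p for c in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                f = rows[i][col]
                rows[i] = [(a - f * b) % p for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def is_one_time_pad(known, target):
    """x* is independent of x1..xn iff adding its coefficient vector raises the rank."""
    return rank_mod_p(known + [target]) > rank_mod_p(known)

# Coefficient vectors over (x, y, z):
print(is_one_time_pad([[2, 1, 1], [1, 3, 1]], [4, 7, 3]))   # 10.30: False (not a one-time pad)
print(is_one_time_pad([[1, 3, 3], [2, 3, 5]], [9, 5, 2]))   # 10.31: True
print(is_one_time_pad([[1, 3, 3], [2, 6, 6]], [9, 5, 2]))   # 10.32: True
```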

4.10.11 Analysis of One-Time Pad

In the correctness analysis of encryption, it is not sufficient to prove that the false
challenge ciphertext is a one-time pad. Instead, we must prove that breaking the
false challenge ciphertext is as hard as breaking a one-time pad. That is, the false
challenge ciphertext is a one-time pad from the point of view of the adversary who
receives some other parameters from the simulated scheme. We have to analyze in
this way because other parameters might help the adversary break the false chal-
lenge ciphertext.
Let CT∗ be the challenge ciphertext defined as

CT∗ = (g^{x1}, g^{x2}, g^{x3}, · · · , g^{xn}, g^{x∗} · mc).

Let g^{x′1}, g^{x′2}, · · · , g^{x′n} be additional information that the adversary obtained from other
phases. If Z is false, we must analyze that the following ciphertext, an extension of
the false challenge ciphertext, is a one-time pad:

(g^{x′1}, g^{x′2}, · · · , g^{x′n}, g^{x1}, g^{x2}, g^{x3}, · · · , g^{xn}, g^{x∗} · mc).

That is, x∗ is random and independent of x′1, x′2, · · · , x′n, x1, x2, · · · , xn.


The above explanation is just for an ideal result. The adversary might still have
negligible advantage in breaking the false challenge ciphertext. The reason is that
the adversary might obtain useful information from other phases such as the public
key and responses to queries with a certain probability. For example, the following
challenge ciphertext is a one-time pad if x, y, z ∈ Z p are randomly chosen by the
simulator in the simulation:

CT∗ = (g^{x+3y+3z}, g^{2x+6y+6z}, g^{9x+5y+2z} · mc).

However, if the adversary can obtain a new group element such as g^{x+y+z} from a
response to a decryption query, the challenge ciphertext is no longer a one-time pad
because the adversary can compute the element g^{9x+5y+2z} from the other three group
elements to decrypt the message mc .

4.10.12 Simulation of Decryption

In the security analysis, we must prove that the simulation of decryption is correct.
To be precise, the simulation of decryption must satisfy the following conditions.
• If Z is true, the simulation of decryption is indistinguishable from the decryption
in the real scheme. Otherwise, we cannot prove that the simulation is indistin-
guishable from the real attack. Here, the indistinguishability means that the sim-
ulated scheme will accept correct ciphertexts and reject incorrect ciphertexts in
the same way as the real scheme. To make sure that the simulation of decryption
is indistinguishable, the simplest way is for the simulator to be able to generate a
valid secret key for the ciphertext decryption.
• If Z is false, the adversary cannot break the false challenge ciphertext with the
help of decryption queries, i.e., the adversary cannot successfully guess the mes-
sage in the challenge ciphertext with the help of decryption queries. This condi-
tion is desired when proving that the adversary has no (or negligible) advantage in
breaking the false challenge ciphertext. How to stop the adversary from breaking
the false challenge ciphertext using decryption queries is the most challenging
task in security reduction. This is due to the fact that all ciphertexts for decryp-
tion queries can be adaptively generated by the adversary, e.g., by modifying the
challenge ciphertext in the CCA security model.
The above two different conditions are desired for proving the security of en-
cryption schemes under decisional hardness assumptions. However, when we prove
encryption schemes under computational hardness assumptions in the random ora-
cle model, the required conditions of the decryption simulation are slightly different.
The reason is that there is no target Z in the given problem instance.

4.10.13 Simulation of Challenge Decryption Key

Let (pk∗ , sk∗ ) be the key pair in a public-key encryption scheme, and (ID∗ , dID∗ ) be
the key pair in an identity-based encryption scheme that an adversary aims to chal-
lenge, where the challenge ciphertext will be created for pk∗ or ID∗ . For simplicity,
in this section, we denote by sk∗ and dID∗ the challenge decryption keys.
In the security reduction for encryption, it is not necessary for the simulator to
program the simulation without knowing the challenge decryption key. Currently,
there are two different methods associated with the decryption key.
• In the first method, the simulator knows the challenge decryption key. The sim-
ulator can easily simulate the decryption by following the decryption algorithm
because it knows the decryption key. The decryption simulation in the simulated
scheme is therefore indistinguishable from the decryption in the real scheme.
However, it is challenging to simulate the challenge ciphertext, so that the adver-
sary has negligible advantage in breaking the false challenge ciphertext.
• In the second method, the simulator does not know the challenge decryption key.
If Z is false, we found that it is relatively easy to generate the false challenge
ciphertext satisfying the requirements. However, it is challenging to simulate the
decryption correctly without knowing the decryption key. The simulator does not
know the challenge decryption key in this case, but the simulator has to be able
to simulate the decryption.
We cannot adaptively choose one of them to program a security reduction for a
proposed scheme. Which method can be used is dependent on the proposed scheme
and the underlying hard problem.

4.10.14 Probability Analysis for PF

We have explained that if Z is false, we cannot simply analyze that the false chal-
lenge ciphertext is a one-time pad, but we must analyze that the adversary cannot
break the false challenge ciphertext except with negligible advantage. To calculate
the probability PF of breaking the false challenge ciphertext, we might need to cal-
culate the following probabilities and advantages.
• Probability PFW = 1/2. The probability PFW of breaking the false challenge cipher-
text without any decryption query is 1/2. This probability is actually the probability
of breaking the false challenge ciphertext in the CPA security model, where there
is no decryption query. The security proof for CPA is relatively easy, because we
do not need to analyze the following probability or advantages.
• Advantage AKF =? 0. The advantage AKF of breaking the false challenge cipher-
text with the help of the challenge decryption key is either 0 or 1. If AKF = 0,
this means that the adversary cannot use the challenge decryption key to guess
the message in the false challenge ciphertext, and this completes the probability
analysis of PF . Otherwise, AKF = 1, and we need to analyze the following proba-
bility or advantages.
• Advantage ACF = 0. The advantage ACF of breaking the false challenge ciphertext
with the help of decryption queries on correct ciphertexts is 0. Since a correct
ciphertext will always be accepted for decryption, a correct security reduction
requires that decryption of correct ciphertexts must not help the adversary break
the false challenge ciphertext. Otherwise, it is impossible to obtain PF ≈ 1/2, and
the security reduction fails.
• Advantage AIF =? 0. The advantage AIF of breaking the false challenge ciphertext
with the help of decryption queries on incorrect ciphertexts is either 0 or 1. If
AIF = 0, decryption of incorrect ciphertexts will not help the adversary break
the false challenge ciphertext, and this completes the probability analysis of PF .
Otherwise, AIF = 1, and we need to analyze the following probability.
• Probability PFA ≈ 0. The probability PFA of accepting an incorrect ciphertext for
decryption query is negligible, or the probability that the adversary can gener-
ate an incorrect ciphertext for decryption query to be accepted by the simulator
is negligible. If PFA = 0 all incorrect ciphertexts for decryption queries will be
rejected by the simulator.
We can use the following formula to define the probability PF with the above
probabilities and advantages:

PF = PFW + AKF · (ACF + AIF · PFA).

The flowchart for analyzing the probability PF is given in Figure 4.2. The probabil-
ity analysis for CCA security is complicated because we may have to additionally
analyze up to four different cases.

Fig. 4.2 The flowchart for analyzing probability PF

4.10.15 Examples of Advantage Results for AKF and AIF

In the previous subsection, we introduced which probabilities and advantages we
need to analyze for CCA security if Z is false. In a correct security reduction, the
advantages AKF and AIF can be either 0 or 1 depending on the proposed scheme and
the security reduction. We give four artificial examples to introduce AKF = 0, AKF =
1, AIF = 0, and AIF = 1 in the security reduction.
• AKF = 0. In a public-key encryption scheme, the key pair is (pk, sk) = (h, α) where
SP = (G, g, p), h = g^α, and α ∈ Zp is randomly chosen. Therefore, the challenge
decryption key is α in this construction.
Suppose the false challenge ciphertext is equal to

CT∗ = (C1∗, C2∗) = (g^x, Z · mc),

where the target (false) Z is randomly chosen from G. This challenge ciphertext is
a one-time pad from the point of view of the adversary if and only if Z is random
and unknown to the adversary. Even if the challenge decryption key α can be
computed by the adversary, it does not help the adversary guess the encrypted
message. Therefore, we have AKF = 0.
• AKF = 1. In a public-key encryption scheme, the public key is pk = h, and the
secret key is sk = (α, β, γ), where SP = (G, g1, g2, g3, p), h = g1^α g2^β g3^γ, and
α, β, γ ∈ Zp are randomly chosen. Here, g1, g2, g3 are three distinct group elements. Therefore, the challenge decryption key is (α, β, γ) in this construction.
Suppose the false challenge ciphertext is equal to

CT∗ = (C1∗, C2∗, C3∗) = (g1^x, Z, Z^α · mc),

where the (false) target Z is randomly chosen from G. The message is encrypted
with Z^α, and Z is given as the second element in the challenge ciphertext. There-
fore, this challenge ciphertext is a one-time pad from the point of view of the
adversary if and only if α is random and unknown to the adversary. It is easy
to see that with the help of (α, β , γ), the adversary can easily break the false
challenge ciphertext. Therefore, we have AKF = 1.
• AIF = 0. Continue the above example for AKF = 1. Even if the adversary has un-
bounded computational power, it still cannot compute the challenge decryption
key (α, β , γ) from the public key. The false challenge ciphertext is a one-time
pad from the point of view of the adversary when α is random and unknown to
the adversary.
Let CT = (C1, C2, C3) be an incorrect ciphertext. Suppose the decryption of this
ciphertext will return to the adversary the group element m = C3 · C2^γ as the mes-
sage. The computationally unbounded adversary can easily obtain γ from this
group element because C2 ,C3 are known. However, the computed γ cannot help
the adversary break the false challenge ciphertext. Therefore, we have AIF = 0.
Notice that the advantage AIF = 0 holds if and only if α will not be known to the
adversary from the decryption of all incorrect ciphertexts.
• AIF = 1. Continue the examples for AKF = 1 and AIF = 0. Suppose the decryption
of that incorrect ciphertext will not return m = C3 · C2^γ but m = C3 · C2^{−α} instead.
Following similar analysis, the adversary can obtain α from decryption queries
on incorrect ciphertexts and then break the false challenge ciphertext. Therefore,
we have AIF = 1.
Whether the challenge decryption key can be computed by the adversary signifi-
cantly affects the analysis. We have the following interesting observations.

• If the challenge decryption key can be computed from the public key by the
computationally unbounded adversary, the decryption of either correct or incor-
rect ciphertexts will not help the adversary break the false challenge ciphertext.
That is, ACF = AIF = 0. The reason is that the adversary can use the challenge
decryption key to decrypt ciphertexts by itself. Therefore, in this case, we do not
need to consider whether decryption queries can help the adversary break the
false challenge ciphertext or not.
• If the challenge decryption key cannot be computed by the computationally un-
bounded adversary, the decryption of incorrect ciphertexts might help the adver-
sary break the false challenge ciphertext. The reason is that the challenge decryp-
tion key might play an important role in the construction of a one-time pad, but
the challenge decryption key might be obtained by the adversary by decryption
queries on incorrect ciphertexts, so that the false challenge ciphertext is no longer
a one-time pad. We stress that the decryption of incorrect ciphertexts does not al-
ways help the adversary. It depends on how the one-time pad is constructed in
the security reduction.

Giving the challenge decryption key to the adversary in analysis is sufficient but
not necessary, because the simulator only responds to decryption queries made by
the adversary. We can even skip the analysis of whether the challenge decryption key
helps the adversary. However, this is a useful assumption to simplify the analysis,
especially when the challenge decryption key cannot help the adversary.

4.10.16 Advantage Calculation 3

In the advantage calculation, we have shown that

εR = Pr[Guess Z = True | Z = True] − Pr[Guess Z = True | Z = False] = PS (PT − PF).

By applying the probability of breaking the true/false challenge ciphertext, when
the simulation is indistinguishable from the real attack, we finally have

εR = PS (1/2 + ε/2 − PFW − AKF · (ACF + AIF · PFA)).
• For CPA security, the advantage is equivalent to

  εR = PS (1/2 + ε/2 − PFW),

  where we only need to analyze that PS is non-negligible and PFW = 1/2.
• For CCA security where AKF = 0, the advantage is equivalent to

  εR = PS (1/2 + ε/2 − PFW),

  where we only need to analyze that PS is non-negligible and PFW = 1/2.
• For CCA security where AKF = 1, the advantage is equivalent to

  εR = PS (1/2 + ε/2 − PFW − ACF − AIF · PFA),

  where we need to analyze that PS is non-negligible and

  PFW = 1/2,  ACF = 0,  and AIF = 0 or PFA ≈ 0.
The property of indistinguishable simulation is desired in all cases. Note that the
indistinguishability analysis is much more complicated in the CCA security model
than in the CPA security model, because we also need to analyze that the decryption
simulation is indistinguishable if Z is true.
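Putting the pieces of Sections 4.10.14 and 4.10.16 together, a tiny helper (hypothetical numeric inputs) evaluates εR = PS (1/2 + ε/2 − PF) with the decomposition PF = PFW + AKF (ACF + AIF · PFA):

```python
def advantage(PS, eps, PFW, AKF, ACF, AIF, PFA):
    """epsilon_R = PS * (PT - PF) with PT = 1/2 + eps/2 and the PF decomposition above."""
    PF = PFW + AKF * (ACF + AIF * PFA)
    return PS * (0.5 + eps / 2 - PF)

# CPA-style case: PFW = 1/2 and the decryption-related terms vanish.
print(advantage(PS=1.0, eps=0.5, PFW=0.5, AKF=0, ACF=0, AIF=0, PFA=0))       # 0.25 = eps/2
# CCA case with AKF = 1: the reduction survives only if ACF = 0 and AIF*PFA is negligible.
print(advantage(PS=1.0, eps=0.5, PFW=0.5, AKF=1, ACF=0, AIF=1, PFA=1e-9))    # about 0.25
```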

4.10.17 Summary of Correct Security Reduction

A correct security reduction for an encryption scheme should satisfy the following
conditions. We can use these conditions to check whether a security reduction is
correct or not.
• The underlying hard problem is a decisional problem.
• The simulator uses the adversary’s guess to solve the underlying hard problem.
• The simulation is indistinguishable from the real attack if Z is true.
• The probability of successful simulation is non-negligible.
• The advantage of breaking the true challenge ciphertext is ε.
• The advantage of breaking the false challenge ciphertext is negligible.
• The advantage εR of solving the underlying hard problem is non-negligible.
• The time cost of the simulation is polynomial time.
In a security proof with random oracles, we can prove the security of a proposed
scheme under a computational hardness assumption. One main difference is how to
solve the underlying hard problem. The reduction method with random oracles is
given in the next section.

4.11 Security Proofs for Encryption Under Computational Assumptions

In this section, we introduce how to use random oracles to program a security reduc-
tion for an encryption scheme under a computational hardness assumption. Security
reductions for encryption schemes under computational hardness assumptions and
for encryption schemes under decisional hardness assumptions are quite different in
the challenge ciphertext simulation, how to find a solution to the problem instance,
and the correctness analysis.

4.11.1 Random and Independent Revisited

Let H be a cryptographic hash function, and x be a random input string.


• If H is a cryptographic hash function, H(x) is dependent on x and the hash func-
tion algorithm. That is, H(x) is computable from x and the hash function H.
• If H is set as a random oracle, H(x) is random and independent of x. This is due
to the fact that H(x) is an element randomly chosen by the simulator.
In a security proof with random oracles, if the adversary does not query x to the
random oracle, H(x) is random and unknown to the adversary. This is the core of
security reduction for encryption with random oracles.

4.11.2 One-Time Pad Revisited

Let H : {0, 1}∗ → {0, 1}n be a cryptographic hash function. We consider the follow-
ing encryption of the message mc with an arbitrary string x, where m0 , m1 are any
two distinct messages chosen from the message space {0, 1}n and mc is randomly
chosen from {m0, m1}:

CT = (x, H(x) ⊕ mc).

If H is a hash function, the above ciphertext is not a one-time pad. Given x and the
hash function H, the adversary can compute H(x) by itself and decrypt the ciphertext
to obtain the message mc . However, if H is set as a random oracle, the adversary
cannot compute H(x) by itself. The adversary must query x to the random oracle to
know H(x). Then, we have the following two interesting results.
• Before Querying x. Since H(x) is random and unknown to the adversary, the
message is encrypted with a random and unknown encryption key. Therefore,
the above ciphertext is equivalent to a one-time pad, and the adversary has no
advantage in guessing the message mc except with probability 1/2.
• After Querying x. Once the adversary queries x to the random oracle and re-
ceives the response H(x), it can immediately decrypt the message with H(x).
Therefore, the above ciphertext is no longer a one-time pad, and the adversary
can guess the encrypted message with advantage 1.
With the above features, the simulator is able to use random oracles to program
a security reduction for an encryption scheme in the indistinguishability security
model to solve a computational hard problem.
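This behaviour is easy to see in a short Python sketch, where the random oracle is modelled as a lazily sampled table held by the simulator; the input x, the two messages, and the output length n are arbitrary choices for illustration.

import os, secrets

n = 16                      # output length of H in bytes (illustrative choice)
oracle_table = {}           # the simulator's lazily sampled random oracle

def H(x: bytes) -> bytes:
    # a fresh uniformly random value is fixed the first time x is queried
    if x not in oracle_table:
        oracle_table[x] = os.urandom(n)
    return oracle_table[x]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

m0, m1 = b"attack at dawn!!", b"retreat at dusk!"    # two n-byte messages
c = secrets.randbelow(2)
x = b"arbitrary string"
CT = (x, xor(H(x), [m0, m1][c]))

# Before the adversary queries x, H(x) lives only inside oracle_table, so the
# second component of CT is a one-time pad and both messages are equally likely.
# After querying x, decryption is immediate:
assert xor(H(CT[0]), CT[1]) == [m0, m1][c]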

4.11.3 Solution to Hard Problem Revisited

In a security reduction for an encryption scheme in the indistinguishability secu-


rity model without random oracles, we use the guess of the random coin c of the
adversary to solve a decisional hard problem. With the help of random oracles, we
can program a security reduction to solve a computational hard problem. However,
instead of using the guess of c, the simulator uses one of the hash queries.
Let H : {0, 1}∗ → {0, 1}n be a cryptographic hash function set as a random oracle.
Let X be a given instance of a computational hard problem, and y be its solution.
Suppose the challenge ciphertext in the simulation is created as
 
CT = (X, H(y) ⊕ mc).

The breaking assumption states that there exists an adversary who can break the
above ciphertext with non-negligible advantage. That is, the adversary has non-
negligible advantage in guessing mc correctly. According to the features introduced
in the previous subsection, the adversary must query y to the random oracle. Oth-

erwise, without making the query on y to the random oracle, CT is equivalent to a


one-time pad, which is contrary to the breaking assumption. Therefore, the solution
to the problem instance will appear in one of the hash queries made by the adversary.
We call a hash query Q∗ the challenge hash query if the adversary has no advantage in
guessing the encrypted message without making a hash query on Q∗ to the random
oracle. According to the breaking assumption, the adversary will make the chal-
lenge hash query on Q∗ = y to the random oracle with non-negligible probability.
Since the number of hash queries to the random oracle is polynomial, the simulator
then uses these hash queries to solve the underlying hard problem. For example, the
simulator can randomly pick one of the hash queries as the solution. The success
probability is 1/qH for qH hash queries, which is non-negligible. This is the magi-
cal part of using random oracles in proving security of encryption schemes under
computational hardness assumptions in the indistinguishability security model.

4.11.4 Simulation of Challenge Ciphertext

In a security reduction for an encryption scheme in the indistinguishability security


model without random oracles, we embed the target Z from the problem instance in
the challenge ciphertext to obtain a true challenge ciphertext if Z is true or a false
challenge ciphertext if Z is false. However, in a security reduction for an encryption
scheme under a computational hardness assumption, there is no target Z in the prob-
lem instance, and thus the simulation of the challenge ciphertext must be different.
The simulation of the challenge ciphertext has one important step, which is to
simulate a component associated with the challenge hash query. We give the fol-
lowing example to explain how to simulate the challenge ciphertext. We continue
the example in the previous subsection, where the challenge ciphertext in the real
scheme is set as  
CT ∗ = (X, H(y) ⊕ mc).

The simulator cannot directly simulate H(y) ⊕ mc in the challenge ciphertext be-
cause y is unknown to the simulator. Fortunately, this problem can be easily solved
with the help of random oracles. To be precise, the simulator chooses a random
string R ∈ {0, 1}n to replace H(y) ⊕ mc . The challenge ciphertext in the simulated
scheme is set as
CT ∗ = (X, R).
The challenge ciphertext can be seen as an encryption of the message mc ∈ {m0 , m1 }
if H(y) = R ⊕ mc , where we have
 
CT ∗ = (X, R) = (X, H(y) ⊕ mc).

Unfortunately, after sending this challenge ciphertext to the adversary, the simula-
tor may not know which hash query from the adversary is the solution y and will
respond to the query on y with a random element different from R ⊕ mc . There-

fore, the query response is wrong, and the challenge ciphertext in the simulation is
distinguishable from that in the real scheme. We have the following results.
• Before Querying y. From the point of view of the adversary, the challenge ci-
phertext is an encryption of m0 if H(y) = R ⊕ m0 and an encryption of m1 if
H(y) = R ⊕ m1 . Without making a hash query on y to the random oracle, the
adversary never knows H(y), and thus the challenge ciphertext in the simulated
scheme is indistinguishable from the challenge ciphertext in the real scheme.
Before querying y to the random oracle, H(y) is random and unknown to the
adversary, so the challenge ciphertext is an encryption of m0 or m1 with the same
probability. Therefore, the challenge ciphertext is equivalent to a one-time pad
from the point of view of the adversary, and thus the adversary has no advantage
in guessing the encrypted message.
• After Querying y. Once the adversary makes a hash query on y to the random
oracle, the simulator will respond to the query with a random element Y = H(y).
If the simulator does not know which query is the challenge hash query Q∗ = y,
the response to the query on y is independent of R, and therefore

Y ⊕ m0 = R or Y ⊕ m1 = R

holds only with negligible probability. Then, the simulated scheme is distinguishable


from the real scheme because the encrypted message in the challenge ciphertext
is neither m0 nor m1 . However, we do not care that the simulation becomes dis-
tinguishable now, because the simulator has already received the challenge hash
query from the adversary and can solve the underlying hard problem.
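As a minimal sketch (with a hypothetical problem instance X whose unknown solution is y, and placeholder names throughout), the simulator's side of this step can be pictured as follows: it answers oracle queries lazily, records them in a hash list, and replaces the component it cannot compute with a random string R.

import os, secrets

n = 16                                 # output length of H in bytes (illustrative)
oracle_table, hash_list = {}, []       # random oracle state recorded by the simulator

def H(query: bytes) -> bytes:
    # every new query is answered with a fresh random string and written to the hash list
    if query not in oracle_table:
        oracle_table[query] = os.urandom(n)
        hash_list.append(query)
    return oracle_table[query]

def simulate_challenge(X: bytes):
    # H(y) XOR m_c cannot be computed because y is unknown, so the whole component
    # is replaced by a random R; this implicitly defines H(y) := R XOR m_c as long
    # as y has not been queried to the random oracle
    R = os.urandom(n)
    return (X, R)

def extract_candidate():
    # after the attack, y appears in hash_list with probability about epsilon,
    # and a uniformly random pick from the list is correct with probability 1/qH
    return secrets.choice(hash_list) if hash_list else None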

4.11.5 Proof Structure

We are ready to summarize the proof structure for encryption schemes under com-
putational hardness assumptions, where at least one hash function is set as a random
oracle. Suppose there exists an adversary A who can break the proposed encryption
scheme in the corresponding security model. We construct a simulator B to solve a
computational hard problem. Given as input an instance of this hard problem, in the
security proof we must show (1) how the simulator generates the simulated scheme;
(2) how the simulator solves the underlying hard problem; and (3) why the security
reduction is correct. A security proof is composed of the following three parts.

• Simulation. In this part, we show how the simulator programs the random or-
acle simulation, generates the simulated scheme using the received problem in-
stance, and interacts with the adversary following the indistinguishability secu-
rity model. If the simulator aborts, the security reduction fails.
• Solution. In this part (at the end of the guess phase), we show how the simulator
solves the computational hard problem using hash queries to the random oracle
made by the adversary. To be precise, we should point out which hash query is

the challenge hash query Q∗ in this simulation, how to pick the challenge hash
query from all hash queries, and how to use the challenge hash query to solve the
underlying hard problem.
• Analysis. In this part, we need to provide the following analysis.
1. The simulation is indistinguishable from the real attack if no challenge hash
query is made by the adversary.
2. The probability PS of successful simulation.
3. The adversary has no advantage in breaking the challenge ciphertext if it does
not make the challenge hash query to the random oracle.
4. The probability PC of finding the correct solution from hash queries.
5. The advantage εR of solving the underlying hard problem.
6. The time cost of solving the underlying hard problem.
The simulation will be successful if the simulator does not abort in computing the
public key, responding to queries, and computing the challenge ciphertext. Before
the adversary makes the challenge hash query, the simulation is indistinguishable
if (1) all responses to queries are correct; (2) the challenge ciphertext is a correct
ciphertext as defined in the proposed scheme from the point of view of the adversary;
and (3) the randomness property holds for the simulation.
If the security reduction is correct, the solution to the problem instance can be
extracted from the challenge hash query. For simplicity, we can view the challenge
hash query as the solution to the problem instance. In this type of security reduction,
there is no true challenge ciphertext or false challenge ciphertext.

4.11.6 Challenge Ciphertext and Challenge Hash Query

In the real scheme, the challenge ciphertext is a correct ciphertext whose encrypted
message is mc . If the adversary makes the challenge hash query to the random oracle,
it can use the response to decrypt the encrypted message and then break the scheme.
However, in the simulated scheme, the challenge ciphertext is a correct ciphertext if
and only if there is no challenge hash query. Once the adversary makes the challenge
hash query to the random oracle and the response to the challenge hash query is
wrong, it will immediately find out that the challenge ciphertext is incorrect and the
simulation is distinguishable.
According to the explanation in Section 4.11.4, we do not care that the simula-
tion becomes distinguishable after the adversary has made the challenge hash query
to the random oracle. In the security reduction, making the challenge hash query
can be seen as a successful and useful attack by the adversary. Before the adversary
launches such an attack with non-negligible probability, the simulated scheme must
be indistinguishable from the real scheme, and the adversary has no advantage in
breaking the challenge ciphertext. That is, the challenge ciphertext must be an en-
cryption of either m0 or m1 with the same probability (i.e., one-time pad) from the
point of view of the adversary.

4.11.7 Advantage Calculation

Suppose there exists an adversary who can break the proposed encryption scheme in
the random oracle model with advantage ε. We have the following important lemma
(originally from [24]) about calculating the advantage.
Lemma 4.11.1 If the adversary has no advantage in breaking the challenge cipher-
text without making the challenge hash query to the random oracle, the adversary
will make the challenge hash query to the random oracle with probability ε.
Proof. According to the breaking assumption, we have
Pr[c′ = c] = 1/2 + ε/2.
This is the success probability that the adversary can correctly guess the encrypted
message in the real scheme according to the breaking assumption.
Let H ∗ denote the event of making the challenge hash query to the random ora-
cle, and H ∗c be the complementary event of H ∗ . According to the statement in the
lemma, we have
Pr[c′ = c | H∗] = 1,    Pr[c′ = c | H∗c] = 1/2.
Then, we obtain

Pr[c′ = c] = Pr[c′ = c | H∗] Pr[H∗] + Pr[c′ = c | H∗c] Pr[H∗c]
           = Pr[H∗] + (1/2) Pr[H∗c]
           = Pr[H∗] + (1/2)(1 − Pr[H∗])
           = 1/2 + (1/2) Pr[H∗],

and deduce Pr[H∗] = ε. This completes the proof. □
The advantage εR of solving the underlying computational hard problem is de-
fined as
εR = PS · ε · PC .
If the probability that the simulation is successful and indistinguishable is PS , and
the adversary has no advantage in breaking the challenge ciphertext without making
the challenge hash query, the challenge hash query will appear in the hash list with
probability ε. Finally, the probability of picking the challenge hash query from the
hash list is PC . Therefore, the advantage of solving the computational hard problem
is PS · ε · PC .
The simulator needs to pick one hash query as the challenge hash query and ex-
tract the solution to the problem instance from it. If the simulator cannot verify
which hash query is the correct one, it has to randomly pick one of the hash queries
as the challenge hash query. For simplicity, if the adversary can break the challenge

ciphertext with advantage 1 after making qH hash queries, one of the hash queries
must be the challenge hash query. Therefore, the probability that a randomly picked
query from the hash list is the challenge hash query is 1/qH . There is no reduction
loss in finding the solution from hash queries if the decisional variant of the compu-
tational hard problem is easy. The reason is that the simulator can test all the hash
queries one by one until it finds the challenge hash query. However, if the decisional
variant is also hard, it seems that this finding loss cannot be avoided.
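As an illustration with hypothetical numbers, suppose the simulation never aborts, so that PS = 1, and the adversary makes at most qH hash queries. If the simulator must pick the challenge hash query at random from the hash list, then PC = 1/qH and

εR = PS · ε · PC = 1 · ε · (1/qH) = ε/qH,

whereas if the decisional variant of the problem is easy, every hash query can be tested, PC = 1, and εR = ε.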

4.11.8 Analysis of No Advantage

A successful reduction requires that the adversary has no advantage in breaking


the challenge ciphertext if it never makes the challenge hash query to the random
oracle. To satisfy this condition, the challenge ciphertext should look like a one-
time pad, where c ∈ {0, 1} is random and unknown, from the point of view of the
adversary. That is, the generated challenge ciphertext is an encryption of either m0 or
m1 with the same probability. We stress that an indistinguishable simulation cannot
guarantee that the adversary has no advantage. The reason is that c may be non-
random in the simulated challenge ciphertext.
For example, let the message space be {0, 1}n+1 , and H : {0, 1}∗ → {0, 1}n be a
cryptographic hash function. Suppose the adversary chooses two distinct messages
m0 , m1 to be challenged, where the least significant bits (LSB) of m0 and m1 are 0
and 1, respectively. Suppose the challenge ciphertext in the real scheme is set as
 
(x, H(x) ⊕ mc),

and the challenge ciphertext in the simulated scheme is set as

CT ∗ = (x, R),

where R is a random string chosen from {0, 1}n+1 . Let CT ∗ = (C1 ,C2 ). We have the
following observations.
• The challenge ciphertext can be seen as an encryption of a message from
{m0 , m1 } if no challenge hash query on x is made to the random oracle. There-
fore, the simulation is indistinguishable from the real attack.
• However, the message mc in the challenge ciphertext can easily be identified from
the least significant bit of C2 , because the LSB of the message is not encrypted.
According to the choice of messages m0 , m1 , the bit c is equal to the LSB of C2 .
Therefore, the adversary can correctly guess the encrypted message with proba-
bility 1 without making the challenge hash query to the random oracle.
The above encryption scheme is not IND-CPA secure. The adversary has non-
negligible advantage in guessing the encrypted message in the challenge ciphertext,
and this is why we cannot prove its security in the IND-CPA security model.
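The leak can be reproduced in a few lines of Python that model the real scheme's challenge ciphertext; the bit length n and the string x are illustrative choices, and H is again a lazily sampled random oracle.

import secrets

n = 15                                  # H outputs n bits, but messages have n+1 bits
table = {}
def H(x):                               # random oracle: a fresh n-bit value per input
    if x not in table:
        table[x] = secrets.randbits(n)
    return table[x]

m0 = (secrets.randbits(n) << 1) | 0     # adversary's choice: LSB(m0) = 0
m1 = (secrets.randbits(n) << 1) | 1     # adversary's choice: LSB(m1) = 1
c = secrets.randbelow(2)
x = b"any string"
C2 = (H(x) << 1) ^ [m0, m1][c]          # only the top n bits are masked by H(x)

# the adversary reads c directly from the unmasked bit, without any hash query
assert C2 & 1 == c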

4.11.9 Requirements of Decryption Simulation

In a security reduction for an encryption scheme under a decisional hardness as-


sumption, the decryption simulation must be indistinguishable from the real attack
if Z is true, and the decryption simulation must not help the adversary break the
false challenge ciphertext if Z is false. However, the requirements of the decryp-
tion simulation for this type of security reduction are slightly different. We have the
following requirements before the adversary makes the challenge hash query to the
random oracle.
• We require that the decryption simulation is indistinguishable from the decryp-
tion in the real scheme. Otherwise, the simulation is distinguishable from the real
attack. To achieve this, in the simulated scheme, any ciphertext will be accepted
or rejected identically to the real scheme.
• We require that the decryption simulation cannot help the adversary distinguish
the challenge ciphertext in the simulated scheme from that in the real scheme.
Otherwise, the simulation is distinguishable from the real attack. To achieve this,
we can construct the scheme and program the security reduction in such a way
that all incorrect ciphertexts will be rejected and only ciphertexts created by the
adversary are correct. In this case, any modification of the challenge ciphertext
will be judged to be an incorrect ciphertext.
The core of the decryption simulation is to find an approach that can determine
whether a ciphertext for a decryption query from the adversary is correct or incor-
rect. If the ciphertext is correct, it must be completely generated by the adversary
and the simulator must be able to simulate the decryption. Otherwise, it must be re-
jected. The details of the approach depend on the proposed scheme and the security
reduction. We cannot give any general summary here but will provide an example
in the following subsection.

4.11.10 An Example of Decryption Simulation

In this type of security reduction, the challenge decryption key is usually pro-
grammed as a key unknown to the simulator. How to simulate the decryption with-
out knowing the challenge decryption key then becomes tricky and difficult. For-
tunately, the random oracle also provides a big help in the decryption simulation.
We now give a simple example to see how the decryption simulation works without
having the corresponding decryption key.
Let the system parameters SP be (G, g, p, H), where H : {0, 1}∗ → {0, 1}n is a
cryptographic hash function satisfying n = |p| (the same bit length as p). Suppose
a public-key encryption scheme generates a key pair (pk, sk), where pk = g1 = gα
and sk = α. The encryption algorithm takes as input the public key pk, a message
m ∈ {0, 1}n , and the system parameters SP. It chooses a random number r ∈ Z p and
returns the ciphertext as
 
CT = (C1, C2, C3) = (g^r, H(0||g1^r) ⊕ r, H(1||g1^r) ⊕ m).

The decryption algorithm first computes C1^α = g1^r and then extracts r by H(0||C1^α) ⊕
C2 and m by H(1||C1^α) ⊕ C3. Finally, it outputs the message m if and only if CT can
be created with the decrypted number r and the decrypted message m.
This encryption scheme is not secure in the IND-CCA security model but only
secure in the IND-CCA1 security model, where the adversary can only make de-
cryption queries before the challenge phase. The proposed encryption scheme is
secure under the CDH assumption in the random oracle model, where it is hard to
compute gab from a problem instance (g, ga , gb ).
In the security reduction, the simulator sets α = a with the unknown exponent
a in the problem instance. We are now interested in knowing how the simulator
responds to decryption queries from the adversary with the help of random oracles.
In this encryption scheme, a queried ciphertext CT is valid if CT can be created
with the decrypted random number r′ and the decrypted message m′. That is,

(g^{r′}, H(0||g1^{r′}) ⊕ r′, H(1||g1^{r′}) ⊕ m′) = CT.

Otherwise, it is invalid, and the simulator outputs ⊥. Consider the ciphertext CT =
(C1, C2, C3) for a decryption query. Let C1 = g^r for some exponent r ∈ Zp. The
ciphertext will be a correct ciphertext if CT = (g^r, H(0||g1^r) ⊕ r, H(1||g1^r) ⊕ m) for
a message m. We have the following observations.
• If the adversary does not query 0||g1^r to the random oracle, H(0||g1^r) is random
and unknown to the adversary, so that C2 = H(0||g1^r) ⊕ r holds with negligible
probability 1/2^n for any adaptive choice of C2.
• If the adversary does not query 1||g1^r to the random oracle, H(1||g1^r) is random
and unknown to the adversary, but any C3 can still be seen as an encryption of a
message, where the message is C3 ⊕ H(1||g1^r).
From the above observation, the adversary cannot generate a valid ciphertext except
with negligible probability 1/2^n for a large n unless it queries 0||g1^r to the random
oracle and uses H(0||g1^r) in the ciphertext generation.
Suppose (x1, y1), (x2, y2), · · · , (xq, yq) are in the hash list, where xi and yi denote a
query and its response, respectively. If CT is a valid ciphertext, one of the hash queries
must be equal to 0||g1^r. Otherwise, the ciphertext is invalid. Therefore, the simula-
tor can simulate the decryption without knowing the challenge decryption key as
follows.

• For all i ∈ [1, q], it starts with i = 1 and computes r′ = yi ⊕ C2.
• It uses r′ to decrypt the message m′ by computing H(1||g1^{r′}) ⊕ C3.
• It checks whether the ciphertext CT can be generated with (r′, m′). If yes, the
simulator returns m′ as the decrypted message. Otherwise, the simulator sets i =
i + 1 and repeats the above procedure.
• If all yi cannot decrypt the ciphertext correctly, the simulator outputs ⊥ as the
decryption result on the queried ciphertext CT . That is, CT is invalid.

This completes the description of the decryption simulation. To simulate the de-
cryption with the help of random oracles, the security reduction must satisfy two
conditions. Firstly, we can use hash queries instead of the challenge decryption key
to simulate the decryption. A ciphertext is valid if and only if the adversary ever
made the correct hash query to the random oracle. This condition is necessary if
the simulator is to simulate the decryption correctly. Secondly, there should be a
mechanism for checking which hash query is the correct hash query for a decryp-
tion. Otherwise, given a ciphertext for a decryption query, the simulator might return
many distinct results depending on the used hash queries.
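The following is a minimal sketch of this decryption simulation over a toy Schnorr-type group (an order-q subgroup of Z_P^*); the parameters are illustrative and far too small for real use. The simulator knows the public key g1 but holds no secret key, and it answers a decryption query by scanning its recorded hash queries exactly as described above.

import secrets

P, q, g = 2039, 1019, 4                 # toy group: g generates the order-q subgroup of Z_P^*
nbits = q.bit_length()
table = {}                              # random oracle state: (tag, group element) -> value

def H(tag, elem):
    key = (tag, elem)
    if key not in table:
        table[key] = secrets.randbits(nbits)
    return table[key]

def encrypt(g1, m):                     # the scheme itself, run here by the adversary
    r = secrets.randbelow(q - 1) + 1
    y = pow(g1, r, P)
    return (pow(g, r, P), H(0, y) ^ r, H(1, y) ^ m)

def simulated_decrypt(g1, CT):          # the simulator: no secret key, only the hash list
    C1, C2, C3 = CT
    for (tag, elem), value in list(table.items()):
        if tag != 0:
            continue
        r_cand = value ^ C2             # candidate randomness from this hash query
        point = pow(g1, r_cand, P)      # candidate g1^r
        m_cand = H(1, point) ^ C3       # candidate message
        # accept only if CT could have been created with (r_cand, m_cand)
        if (pow(g, r_cand, P), H(0, point) ^ r_cand, H(1, point) ^ m_cand) == CT:
            return m_cand
    return None                         # no hash query explains CT, so it is rejected

# sanity check: an honestly generated ciphertext is decrypted without the secret key
alpha = secrets.randbelow(q - 1) + 1
g1 = pow(g, alpha, P)
m = secrets.randbits(nbits)
assert simulated_decrypt(g1, encrypt(g1, m)) == m

The loop realizes the two conditions stated above: validity is decided by re-encryption, and the correct hash query is identified by that test.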

4.11.11 Summary of Correct Security Reduction

A successful security reduction for an encryption scheme under a computational


hardness assumption is slightly different from a security reduction for an encryption
scheme under a decisional hardness assumption. We stress that the difference is
not due to using random oracles but due to the underlying hardness assumption.
An encryption scheme under a decisional hardness assumption can also be proved
secure with random oracles.
Public-Key Encryption. A correct security reduction for a public-key encryp-
tion scheme under a computational hardness assumption must satisfy the following
conditions.
• The underlying hard problem is a computational problem.
• The simulator does not know the secret key.
• The simulator can simulate the decryption for CCA security.
• The probability of successful simulation is non-negligible.
• Without making the challenge hash query, the adversary cannot distinguish the
simulated scheme from the real scheme and has no advantage in breaking the
challenge ciphertext.
• The simulator uses the challenge hash query to solve the hard problem.
• The advantage εR of solving the underlying hard problem is non-negligible.
• The time cost of the simulation is polynomial time.

Identity-Based Encryption. A correct security reduction for an identity-based


encryption scheme under a computational hardness assumption must satisfy the fol-
lowing conditions.
• The underlying hard problem is a computational problem.
• The simulator does not know the master secret key.
• The simulator can simulate the private-key generation.
• The simulator can simulate the decryption for CCA security.
• The probability of successful simulation is non-negligible.
• The simulator does not know the private key of the challenge identity.

• Without making the challenge hash query, the adversary cannot distinguish the
simulated scheme from the real scheme and has no advantage in breaking the
challenge ciphertext.
• The simulator uses the challenge hash query to solve the hard problem.
• The advantage εR of solving the underlying hard problem is non-negligible.
• The time cost of the simulation is polynomial time.
We find that any provably secure encryption scheme under a decisional hardness
assumption can be modified into a provably secure encryption scheme under a com-
putational hardness assumption with random oracles. Therefore, the above summary
is only suitable for some encryption schemes, especially those schemes that cannot
be proved secure under decisional hardness assumptions.

4.12 Simulatable and Reducible with Random Oracles

In previous sections, we have introduced what is simulatable and what is reducible


for digital signatures. These two concepts are important for digital signatures and
for private keys in identity-based encryption. In this section, we summarize three
different structures used in the constructions of signature schemes and other cryp-
tographic schemes that are quite popular in group-based cryptography. These three
types are introduced in the random oracle model, where random oracles are used to
decide whether a signature is simulatable or reducible.

4.12.1 H-Type: Hashing to Group

The first H-type of signature structure is described as

σm = H(m)^a,

where H : {0, 1}∗ → G is a cryptographic hash function. Here, (g, ga , gb ) ∈ G is an


instance of the CDH problem, and the aim is to compute gab .
Suppose H is set as a random oracle. For a query on m, the simulator responds
with
H(m) = g^{xb+y},
where b is the unknown secret in the problem instance, x ∈ Z p is adaptively chosen,
and y ∈ Z p is randomly chosen by the simulator. Because y is randomly chosen from
Z p , H(m) is random in G.
The simulatable and reducible conditions are described as follows: σm is simulatable
if x = 0, and reducible otherwise.


• The H-type is simulatable if x = 0 because we have

σm = H(m)^a = (g^{0·b+y})^a = g^{ya} = (g^a)^y,

which is computable from ga and y without knowing a.


• The H-type is reducible if x ≠ 0 because we have

(σm / (g^a)^y)^{1/x} = (H(m)^a / (g^a)^y)^{1/x} = (g^{(xb+y)a} / g^{ay})^{1/x} = (g^{x·ab})^{1/x} = g^{ab},

which is the solution to the CDH problem instance.
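These two conditions can be checked numerically in a toy order-q subgroup of Z_P^*; the parameters below are illustrative only, and no pairing is needed to verify the algebra. Note that pow(x, -1, q) computes an inverse modulo the group order (Python 3.8+).

import secrets

P, q, g = 2039, 1019, 4                        # toy group parameters
a, b = (secrets.randbelow(q - 1) + 1 for _ in range(2))
ga, gb = pow(g, a, P), pow(g, b, P)            # CDH instance (g, g^a, g^b)

def oracle_response(x, y):                     # simulator's answer H(m) = g^(x*b + y)
    return pow(g, (x * b + y) % q, P)

# simulatable case (x = 0): sigma_m = H(m)^a = (g^a)^y, computable without a
y = secrets.randbelow(q)
assert pow(oracle_response(0, y), a, P) == pow(ga, y, P)

# reducible case (x != 0): (sigma_m / (g^a)^y)^(1/x) = g^(ab)
x, y = secrets.randbelow(q - 1) + 1, secrets.randbelow(q)
sigma = pow(oracle_response(x, y), a, P)       # the adversary's forgery H(m)^a
quotient = sigma * pow(pow(ga, y, P), P - 2, P) % P       # division in Z_P^*
assert pow(quotient, pow(x, -1, q), P) == pow(g, a * b % q, P)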

The second H-type of signature structure is described as


σm = H(m)^{1/a},

where H : {0, 1}∗ → G∗ is a cryptographic hash function. Here, (g, ga ) ∈ G is an


instance of the DHI problem, and the aim is to compute g^{1/a}.
Suppose H is set as a random oracle. For a query on m, the simulator responds
with
H(m) = g^{y·a+x},
where a is the unknown secret in the problem instance, x ∈ Z p is adaptively chosen,
and y ∈ Z∗p is randomly chosen by the simulator. Because y is randomly chosen from
Z∗p , H(m) is random in G∗ .
The simulatable and reducible conditions are described as follows: σm is simulatable
if x = 0, and reducible otherwise.

• The H-type is simulatable if x = 0 because we have


σm = H(m)^{1/a} = (g^{ya+x})^{1/a} = g^y,

which is computable from g and y without knowing a.


• The H-type is reducible if x ≠ 0 because we have

(σm / g^y)^{1/x} = (H(m)^{1/a} / g^y)^{1/x} = (g^{y + x/a} / g^y)^{1/x} = g^{1/a},

which is the solution to the DHI problem instance.



4.12.2 C-Type: Commutative

The C-type of signature structure is described as


 
σm = (g^{ab} H(m)^r, g^r),

where H : {0, 1}∗ → G is a cryptographic hash function and r ∈ Z p is a random


number. Here, (g, ga , gb ) ∈ G is an instance of the CDH problem, and the aim is to
compute gab .
Suppose H is set as a random oracle. For a query on m, the simulator responds
with
H(m) = g^{xb+y},
where b is the unknown secret in the problem instance, x ∈ Z p is adaptively chosen,
and y ∈ Z p is randomly chosen by the simulator. Because y is randomly chosen from
Z p , H(m) is random in G.
The simulatable and reducible conditions are described as follows: σm is simulatable
if x ≠ 0, and reducible otherwise.

• The C-type is simulatable if x ≠ 0 because we can choose a random r′ ∈ Zp and
set r = −a/x + r′. Then, we have

g^{ab} H(m)^r = g^{ab} (g^{xb+y})^{−a/x + r′}
             = g^{ab} · g^{−ab + xr′b − ya/x + r′y}
             = (g^b)^{xr′} · (g^a)^{−y/x} · g^{r′y},

g^r = g^{−a/x + r′} = (g^a)^{−1/x} · g^{r′},

which are computable from g, g^a, g^b and x, y, r′. Since r′ is random in Zp, we have
that r is also random in this simulation.
• The C-type is reducible if x = 0 because we have

g^{ab} H(m)^r / (g^r)^y = g^{ab} (g^{0·b+y})^r / g^{ry} = g^{ab},

which is the solution to the CDH problem instance.


This type of signature structure can also be proved secure without random oracles
if the hash function H(m) can be replaced with a similar function. An example can
be found in Section 6.4.
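As with the H-type, the C-type conditions can be checked numerically in a toy order-q subgroup of Z_P^*; the parameters are illustrative, and the pairing plays no role in the identities being verified.

import secrets

P, q, g = 2039, 1019, 4
a, b = (secrets.randbelow(q - 1) + 1 for _ in range(2))
ga, gb, gab = pow(g, a, P), pow(g, b, P), pow(g, a * b % q, P)

# simulatable case (x != 0): with r = -a/x + r', both signature components
# are computable from g, g^a, g^b and x, y, r' alone
x, y, rp = secrets.randbelow(q - 1) + 1, secrets.randbelow(q), secrets.randbelow(q)
inv_x = pow(x, -1, q)                                    # inverse modulo the group order
r = (-a * inv_x + rp) % q
hm = pow(g, (x * b + y) % q, P)                          # H(m) = g^(x*b + y)
first = gab * pow(hm, r, P) % P                          # g^(ab) * H(m)^r, uses a and b
simulated_first = (pow(gb, x * rp % q, P)
                   * pow(ga, (-y * inv_x) % q, P)
                   * pow(g, rp * y % q, P)) % P
assert first == simulated_first
assert pow(g, r, P) == pow(ga, (-inv_x) % q, P) * pow(g, rp, P) % P

# reducible case (x = 0): dividing the first component by (g^r)^y exposes g^(ab)
r = secrets.randbelow(q)
first = gab * pow(pow(g, y, P), r, P) % P                # g^(ab) * (g^(0*b+y))^r
denom = pow(pow(g, r, P), y, P)
assert first * pow(denom, P - 2, P) % P == gab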

4.12.3 I-Type: Inverse of Group Exponent

The I-type of signature structure is described as


σm = h^{1/(a−H(m))},

where H : {0, 1}∗ → Zp is a cryptographic hash function. Here, (g, g^a, g^{a^2}, · · · , g^{a^q}) ∈ G
is an instance of the q-SDH problem, and the aim is to compute a pair (s, g^{1/(a+s)})
for any s ∈ Zp.
Suppose H is set as a random oracle. For a query on m, the simulator responds
with
H(m) = x ∈ Z p
where x ∈ Z p is randomly chosen by the simulator, and thus H(m) is random in Z p .
In the simulated scheme, suppose the group element h is computed by

h = g^{(a−x1)(a−x2)···(a−xq)},

where a is the unknown secret from the problem instance, and all xi are randomly
chosen by the simulator. The simulatable and reducible conditions are described as
follows: σm is simulatable if x ∈ {x1, x2, · · · , xq}, and reducible otherwise.

• The I-type is simulatable if x ∈ {x1 , x2 , · · · , xq }. Without loss of generality, let


x = x1 . We have
h^{1/(a−H(m))} = g^{(a−x1)(a−x2)···(a−xq)/(a−H(m))}
             = g^{(a−x2)(a−x3)···(a−xq)}
             = (g^{a^{q−1}})^{w′_{q−1}} · (g^{a^{q−2}})^{w′_{q−2}} · · · (g^a)^{w′_1} · g^{w′_0},

where w′_i is the coefficient of a^i for all i in

(a − x2)(a − x3) · · · (a − xq) = a^{q−1} w′_{q−1} + · · · + a w′_1 + w′_0.

• The I-type is reducible if x ∉ {x1, x2, · · · , xq}. Let f(a) be the polynomial function
in Zp[a] defined as

f(a) = (a − x1)(a − x2) · · · (a − xq).

Since x ∉ {x1, x2, · · · , xq}, we have z = f(x) ≠ 0 and

(f(a) − f(x))/(a − x)

is a polynomial function in a of degree q − 1, which can be rewritten as

f (a) − f (x)
= aq−1 wq−1 + · · · + a1 w1 + w0 .
a−x
1
Then g a−H(m) can be computed by
!1 f (a) !1
z z
σm g a−H(m)
q−1 i
= q−1 i
∏i=0 (ga )wi ∏i=0 (ga )wi
f (a)−z+z ! 1
z
g a−H(m)
= q−1 i
g∑i=0 a wi
f (a)−z z !1
z
g a−H(m) · g a−H(m)
= q−1 i
g∑i=0 a wi
 z 1
z
= g a−H(m)
1
= g a−H(m) .
 1 
The pair − H(m), g a−H(m) is the solution to the q-SDH problem instance.

In this structure, it is important to have all values in {x1 , x2 , · · · , xq } also random


and independent in Zp. Otherwise, the adversary will be able to distinguish the
simulation when most queries are answered with hash values from {x1 , x2 , · · · , xq }.
For example, in a digital signature scheme, suppose the adversary makes q + 1 hash
queries to the random oracle and q signature queries to the simulator. It is required
that q out of q + 1 hash queries must be answered with values from {x1 , x2 , · · · , xq }
to simulate signatures. For example, if {H(m1 ), H(m2 ), · · · , H(mq )} = {1, 2, · · · , q},
the simulator cannot simulate the random oracle correctly, because the values H(m1),
H(m2), · · · , H(mq) are then not random in Zp.

4.13 Examples of Incorrect Security Reductions

In a security reduction, the malicious adversary can utilize what it knows (scheme
algorithm, reduction algorithm, and how to solve all computational hard problems)
to distinguish the simulated scheme from the real scheme or find a way to launch a useless attack on the
scheme. In this section, we give three examples to explain why security reductions
fail. Note that the security reductions in our given examples are incorrect but this
does not mean that the proposed schemes are insecure. Actually, all the schemes in
our examples can be proved secure with other security reductions.

4.13.1 Example 1: Distinguishable

SysGen: The system parameter generation algorithm takes as input a security


parameter λ . It chooses a pairing group PG = (G, GT , g, p, e), selects a cryp-
tographic hash function H : {0, 1}∗ → Z p , and returns the system parameters
SP = (PG, H).
KeyGen: The key generation algorithm takes as input the system parameters
SP. It chooses random numbers α, β , γ ∈ Z p , computes g1 = gα , g2 = gβ , g3 =
gγ , and returns a public/secret key pair (pk, sk) as follows:

pk = (g1 , g2 , g3 ) = (gα , gβ , gγ ),
sk = (α, β , γ).

Sign: The signing algorithm takes as input a message m ∈ Z p , the secret key
sk, and the system parameters SP. It computes the signature σm on m as
σm = g^{1/(α+mβ+H(m)γ)}.

Verify: The verification algorithm takes as input a message-signature pair


(m, σm ), the public key pk, and the system parameters SP. It accepts the signa-
ture if

e(σm, g1 g2^m g3^{H(m)}) = e(g, g).

Theorem 4.13.1.1 Suppose the hash function H is a random oracle. If the 1-SDH
problem is hard, the proposed scheme is provably secure in the EU-CMA secu-
rity model with only two signature queries, where the adversary must select two
messages m1 , m2 for signature queries before making hash queries to the random
oracle.
Incorrect Proof. Suppose there exists an adversary A who can break the proposed
scheme in the corresponding security model. We construct a simulator B to solve
the 1-SDH problem. Given as input a problem instance (g, ga ) over the pairing group
PG, B controls the random oracle, runs A , and works as follows.
Setup. Let SP = PG and H be the random oracle controlled by the simulator. The
simulator randomly chooses x1 , x2 , x3 , y from Z p and sets the secret key as

α = x1 a, β = x2 a + y, γ = x3 a,

where a is the unknown random number in the instance of the hard problem. The
corresponding public key is
 
(g1, g2, g3) = ((g^a)^{x1}, (g^a)^{x2} g^y, (g^a)^{x3}),

where all group elements are computable.


Selection. Let the message space be Z p . The adversary selects two messages m1 , m2
for signature queries.
H-Query. The adversary makes hash queries in this phase. B prepares a hash list
to record all queries and responses as follows, where the hash list is empty at the
beginning.
• For a hash query on mi ∈ {m1 , m2 }, the simulator computes wi such that

x1 + x2 mi + x3 wi = 0,

and sets H(mi ) = wi as the response to the hash query on mi .


• For a hash query on m ∈ / {m1 , m2 }, the simulator randomly chooses w ∈ Z p and
sets H(m) = w as the response to the hash query on m.
The corresponding pair (m, H(m)) is added to the hash list.
Query. For a signature query on m ∈ {m1 , m2 }, the simulator computes
1
σm = g ym

as the signature of the message m. Let H(m) = w. According to the random oracle,
we have x1 + x2 m + x3 w = 0 and
1 1 1 1
g α+mβ +H(m)γ = g x1 a+(x2 a+y)m+x3 H(m)a = g a(x1 +x2 m+x3 w)+ym = g ym .

Therefore, σm is a valid signature of the message m.


Forgery. The adversary returns a forged signature σm∗ on some m∗ , where
1
σm∗ = g α+m∗ β +H(m∗ )γ .

Let H(m∗) = w∗. Since m∗ ∉ {m1, m2}, we must have x1 + x2 m∗ + x3 w∗ ≠ 0.
Rewrite

α + m∗β + H(m∗)γ = (x1 + m∗x2 + w∗x3)a + ym∗,

and then

( ym∗/(x1 + m∗x2 + w∗x3),  g^{(x1 + m∗x2 + w∗x3)/(α+m∗β+H(m∗)γ)} ) = (s, g^{1/(a+s)})
is the solution to the 1-SDH problem instance.
This completes the simulation and the solution. The analysis is omitted here. 
Attack on the security reduction. The queried signature σm on the message m is
equal to g^{1/(ym)}. Upon receiving the two queried signatures, the adversary finds

(σm1)^{m1} = (σm2)^{m2} = g^{1/y}.

The simulated scheme is distinguishable from the real scheme, because this event
occurs in the real scheme with negligible probability 1/p. The adversary therefore

breaks the security reduction by returning an invalid signature in the forgery phase.
This security reduction therefore fails to prove the scheme secure.
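The distinguishing test can be demonstrated numerically in a toy order-q subgroup of Z_P^* (illustrative parameters, no pairing needed): against the simulator the test always succeeds, while in the real scheme it succeeds only with probability about 1/q.

import secrets

P, q, g = 2039, 1019, 4
inv = lambda z: pow(z, -1, q)            # inverse modulo the group order (Python 3.8+)
m1, m2 = 17, 42                          # the two messages selected for signature queries

# signatures produced by the flawed simulator: sigma_m = g^(1/(y*m))
y = secrets.randbelow(q - 1) + 1
sim_sig = {m: pow(g, inv(y * m % q), P) for m in (m1, m2)}
assert pow(sim_sig[m1], m1, P) == pow(sim_sig[m2], m2, P)        # always equal: g^(1/y)

# signatures produced in the real scheme: sigma_m = g^(1/(alpha + m*beta + H(m)*gamma))
while True:
    alpha, beta, gamma = (secrets.randbelow(q) for _ in range(3))
    hv = {m: secrets.randbelow(q) for m in (m1, m2)}             # H modelled by random values
    d = {m: (alpha + m * beta + hv[m] * gamma) % q for m in (m1, m2)}
    if d[m1] and d[m2]:
        break
real_sig = {m: pow(g, inv(d[m]), P) for m in (m1, m2)}
print(pow(real_sig[m1], m1, P) == pow(real_sig[m2], m2, P))      # almost always False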
Comments on the technique. In this security reduction, a signature of m is simu-
latable if and only if the pair (m, H(m)) satisfies x1 + x2 m + x3 H(m) = 0. Without
the use of random oracles, it is hard for the simulator to simulate the two queried
signatures on m1 , m2 if H(m1 ) and H(m2 ) cannot be controlled by the simulator.
This is the advantage of using random oracles in the security reduction.

4.13.2 Example 2: Useless Attack by Public Key

SysGen: The system parameter generation algorithm takes as input a security


parameter λ . It chooses a pairing group PG = (G, GT , g, p, e) and returns the
system parameters SP = PG.
KeyGen: The key generation algorithm takes as input the system parameters
SP. It chooses random numbers α, β , γ ∈ Z p , computes g1 = gα , g2 = gβ , g3 =
gγ , and returns a public/secret key pair (pk, sk) as follows:

pk = (g1 , g2 , g3 ) = (gα , gβ , gγ ),
sk = (α, β , γ).

Sign: The signing algorithm takes as input a message m ∈ Z p , the secret key
sk, and the system parameters SP. It chooses a random number r ∈ Z p and
computes the signature σm on m as

σm = (σ1, σ2) = (r, g^{1/(α+mβ+rγ)}).

Verify: The verification algorithm takes as input a message-signature pair


(m, σm ), the public key pk, and the system parameters SP. It accepts the signa-
ture if

e(σ2, g1 g2^m g3^{σ1}) = e(g, g).

Theorem 4.13.2.1 If the 1-SDH problem is hard, the proposed scheme is existen-
tially unforgeable in the key-only security model, where the adversary is not allowed
to query signatures.
Incorrect Proof. Suppose there exists an adversary A who can break the proposed
scheme in the key-only security model. We construct a simulator B to solve the
1-SDH problem. Given as input a problem instance (g, ga ) over the pairing group
PG, B runs A and works as follows.

Setup. Let SP = PG. The simulator randomly chooses x, y from Z p and sets the
secret key as
α = a, β = ya + x, γ = xa,
where a is the unknown random number in the instance of the hard problem. The
corresponding public key is
 
(g1, g2, g3) = (g^a, (g^a)^y g^x, (g^a)^x),

where all group elements are computable.


Forgery. The adversary returns a forged signature σm∗ on some m∗, where

σm∗ = (r∗, g^{1/(α+m∗β+r∗γ)}).

If 1 + ym∗ + xr∗ = 0, abort. Otherwise, we have

α + m∗ β + r∗ γ = (1 + ym∗ + xr∗ )a + xm∗ .

The simulator computes

( xm∗/(1 + ym∗ + xr∗),  g^{(1+ym∗+xr∗)/(α+m∗β+r∗γ)} ) = (s, g^{1/(a+s)}),

which is the solution to the 1-SDH problem instance.


This completes the simulation and the solution. The analysis is omitted here. 
Attack on the security reduction. The adversary is able to break the security reduc-
tion and returns a useless forgery that cannot be reduced to solving the underlying
hard problem, upon receiving the public key from the simulator.
The above security reduction is correct if and only if the adversary has no ad-
vantage in picking (m∗ , r∗ ) satisfying 1 + ym∗ + xr∗ = 0. Otherwise, the forged sig-
nature will be a useless attack. Unfortunately, from the reduction algorithm and the
received public key, the computationally unbounded adversary knows how α, β , γ
in the public key will be simulated and then knows how to compute x, y from the
received public key following the functions described in the reduction algorithm.
That is,
α = a, β = ya + x, γ = xa.
To be precise, the adversary knows a by solving the DL problem in the problem in-
stance (g, gα ), knows x by solving the DL problem in the problem instance (gα , gγ )
and knows ya + x by solving the DL problem in the problem instance (g, gβ ), where

x = γ/α,   y = (β − γ/α)/α.
Then, the adversary breaks the security reduction by returning a forged signature
(r∗, g^{1/(α+m∗β+r∗γ)}) on the message m∗ with a particular number r∗ satisfying

1 + ((β − γ/α)/α) · m∗ + (γ/α) · r∗ = 0.

That is, r∗ is equal to

r∗ = −(1 + ym∗)/x mod p.
The forged signature is useless for the simulator because 1 + ym∗ + xr∗ = 0. This
security reduction therefore fails to prove the scheme secure.
Unbounded computational power revisited. The above attack is based on the as-
sumption that the adversary, who has unbounded computational power, knows how
to compute α, β , γ from the public key. That is, the adversary knows how to solve
the DL problem, which is harder than the 1-SDH problem. This example raises an
interesting question. Can this security reduction be secure against an adversary who
cannot solve the 1-SDH problem? This question is not easy to answer. The answer
“yes” requires us to prove that the adversary has no advantage in finding (m∗ , r∗ )
such that 1 + ym∗ + xr∗ = 0 from ga , gya+x , and gxa . Actually, the adversary does
not need to solve the DL problem. An equivalent problem is, given gα , gβ , gγ , to
find (m∗ , r∗ ) such that
g^α · g^{βm∗} · g^{γr∗} = g^{(γ/α)·m∗},
which implies 1 + ym∗ + xr∗ = 0. The corresponding proof is rather complicated
because we need to prove that the adversary cannot solve this problem if the 1-SDH
problem is hard for the adversary. There might be other approaches for the adversary
to find (m∗ , r∗ ) such that 1 + ym∗ + xr∗ = 0. We must prove that the adversary cannot
find (m∗ , r∗ ) in all cases if we want to program a correct reduction. This is why we
assume that the adversary has unbounded computational power for simple analysis.
Comments on the technique. In this security reduction, a signature of m is either
simulatable or reducible depending on the corresponding random number r. If 1 +
ym + xr = 0, the signature is simulatable. Otherwise, it is reducible. The partition is
whether or not 1 + ym + xr = 0 in the given signature, and it must be hard for the
adversary to find it.
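The attack can be illustrated numerically in a toy group, with the unbounded adversary modelled by a brute-force discrete-log search; all parameters below are illustrative.

import secrets

P, q, g = 2039, 1019, 4                  # toy group; exhaustive search stands in for unbounded power

def dlog(base, target):                  # brute-force discrete logarithm
    acc = 1
    for e in range(q):
        if acc == target:
            return e
        acc = acc * base % P
    raise ValueError("target not in the subgroup generated by base")

# simulator's setup: alpha = a, beta = y*a + x, gamma = x*a
a, x, y = (secrets.randbelow(q - 1) + 1 for _ in range(3))
g1, g2, g3 = pow(g, a, P), pow(g, (y * a + x) % q, P), pow(g, x * a % q, P)

# from the public key alone, the unbounded adversary recovers x and y ...
a_rec = dlog(g, g1)
x_rec = dlog(g1, g3)                     # g3 = g1^x
y_rec = (dlog(g, g2) - x_rec) * pow(a_rec, -1, q) % q
assert (x_rec, y_rec) == (x, y)

# ... and picks r* with 1 + y*m* + x*r* = 0 (mod q), making any forgery on m* useless
m_star = 77
r_star = -(1 + y_rec * m_star) * pow(x_rec, -1, q) % q
assert (1 + y * m_star + x * r_star) % q == 0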

4.13.3 Example 3: Useless Attack by Signature

SysGen: The system parameter generation algorithm takes as input a security


parameter λ . It chooses a pairing group PG = (G, GT , g, p, e). The algorithm
returns the system parameters SP = PG.
KeyGen: The key generation algorithm takes as input the system parameters
SP. It chooses random numbers α0 , α1 ∈ Z p , computes g0 = gα0 , g1 = gα1 , and
returns a public/secret key pair (pk, sk) as follows:

pk = (g0 , g1 ) = (gα0 , gα1 ),


sk = (α0 , α1 ).

Sign: The signing algorithm takes as input a message m ∈ Z p , the secret key
sk, and the system parameters SP. It chooses a random coin c ∈ {0, 1} and
computes the signature σm on m as
σm = g^{1/(α_c + m)}.

Verify: The verification algorithm takes as input a message-signature pair


(m, σm ), the public key pk, and the system parameters SP. It accepts the signa-
ture if
e(σm, g0 g^m) = e(g, g) or e(σm, g1 g^m) = e(g, g).

Theorem 4.13.3.1 If the 1-SDH problem is hard, the proposed scheme is provably
secure in the EU-CMA security model with only one signature query.

Incorrect Proof. Suppose there exists an adversary A who can break the proposed
scheme in the corresponding security model. We construct a simulator B to solve
the 1-SDH problem. Given as input a problem instance (g, ga ) over the pairing group
PG, B runs A and works as follows.
Setup. Let SP = PG. The simulator randomly chooses x ∈ Z p , b ∈ {0, 1} and sets
the public key as

(g^{α_0}, g^{α_1}) = (g^x, g^a) if b = 0, and (g^{α_0}, g^{α_1}) = (g^a, g^x) otherwise,
where a is the unknown random number in the instance of the hard problem.
Query. For a signature query on m, the simulator computes
σm = g^{1/(x+m)} = g^{1/(α_b + m)}.

σm is a valid signature because either e(σm , g0 gm ) = e(g, g) or e(σm , g1 gm ) = e(g, g).


Forgery. The adversary returns a forged signature σm∗ on some m∗ , where
σm∗ = g^{1/(α_{c∗} + m∗)}.

If c∗ = b, abort. Otherwise, we have α_{c∗} = a and then

(m∗, σm∗) = (s, g^{1/(a+s)})

is the solution to the 1-SDH problem instance.


This completes the simulation and the solution. The analysis is omitted here. 

Comments on the technique. In this security reduction, the simulator programs


the simulation in such a way that any signature of a message computed with αb is
simulatable while any signature of a message computed with α1−b is reducible. The
partition thus is based on the value b. If the adversary does not know the value b,
the forged signature with an adaptive choice α from {α0 , α1 } will be reducible with
probability Pr[c∗ = 1 − b] = 1/2. The partition b is secretly chosen by the simulator
and assumed to be unknown to the adversary.
Attack on the security reduction. However, the adversary is able to break the secu-
rity reduction and returns a useless forgery, upon receiving the simulated signature
from the simulator. Given the reduction algorithm and the simulated signature from
the simulator, the adversary knows which α was used in the signature generation
by verifying the queried signature and thus knows b, and knows how to generate a
forged signature that is useless. Therefore, the security reduction cannot solve the
1-SDH problem.

4.14 Examples of Correct Security Reductions

In this section, we give three examples to introduce how to program a correct secu-
rity reduction. These examples are one-time signature schemes where each proposed
scheme can generate one signature at most. We choose one-time signature schemes
as examples because it is easy to program correct security reductions, especially the
correctness analysis.

4.14.1 One-Time Signature with Random Oracles

SysGen: The system parameter generation algorithm takes as input a security


parameter λ . It chooses a cyclic group (G, p, g), selects a cryptographic hash
function H : {0, 1}∗ → Z p , and returns the system parameters SP = (G, p, g, H).
KeyGen: The key generation algorithm takes as input the system parameters
SP. It chooses random numbers α, β ∈ Z p , computes g1 = gα , g2 = gβ , and
returns a one-time public/secret key pair (opk, osk) as follows:

opk = (g1 , g2 ), osk = (α, β ).

Sign: The signing algorithm takes as input a message m ∈ {0, 1}∗ , the secret
key osk, and the system parameters SP. It computes the signature σm on m as

σm = α + H(m) · β mod p.

Verify: The verification algorithm takes as input a message-signature pair


(m, σm ), the public key opk, and the system parameters SP. It accepts the sig-
nature if
g^{σm} = g1 g2^{H(m)}.
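Before the security theorem, a toy instantiation of this scheme may help fix the notation. It uses a small order-q subgroup of Z_P^* (so q plays the role of p above) and SHA-256 reduced modulo q in place of H; both choices are illustrative only, and in the proof H is treated as a random oracle.

import hashlib, secrets

P, q, g = 2039, 1019, 4                  # toy group of prime order q (illustrative)

def H(m: bytes) -> int:                  # hash into Z_q (stand-in for the random oracle)
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % q

def keygen():
    alpha, beta = secrets.randbelow(q), secrets.randbelow(q)
    return (pow(g, alpha, P), pow(g, beta, P)), (alpha, beta)

def sign(osk, m: bytes) -> int:
    alpha, beta = osk
    return (alpha + H(m) * beta) % q

def verify(opk, m: bytes, sig: int) -> bool:
    g1, g2 = opk
    return pow(g, sig, P) == g1 * pow(g2, H(m), P) % P

opk, osk = keygen()
sig = sign(osk, b"the one signed message")
assert verify(opk, b"the one signed message", sig)
assert not verify(opk, b"the one signed message", (sig + 1) % q)   # a tampered signature is rejected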

Theorem 4.14.1.1 Suppose the hash function H is a random oracle. If the DL prob-
lem is hard, the proposed one-time signature scheme is provably secure in the EU-
CMA security model with only one signature query and reduction loss qH , where qH
is the number of hash queries to the random oracle.
Proof Idea. Let (g, ga ) be an instance of the DL problem that the simulator receives.
To solve the DL problem with the forged signature, the simulation should satisfy the
following conditions.
• Both α and β must be simulated with a. Otherwise, it is impossible to have
both reducible signatures and simulatable signatures in the simulation. That is,
all signatures will be either reducible or simulatable.
• The forged signature on the message m∗ is reducible if α + H(m∗ )β contains a.
When α, β are both simulated with a, the value α + H(m∗)β contains a for a random
H(m∗), except with negligible probability.
• The queried signature on the queried message m is simulatable if α + H(m)β
does not contain a. When α, β are both simulated with a, to make sure that a can
be removed in this queried signature, H(m) must be very special and related to
the setting of α, β .
Proof. Suppose there exists an adversary A who can break the one-time signature
scheme in the EU-CMA security model with only one signature query. We construct
a simulator B to solve the DL problem. Given as input a problem instance (g, ga )
over the cyclic group (G, p, g), B controls the random oracle, runs A , and works
as follows.
Setup. Let SP = (G, p, g) and H be set as a random oracle controlled by the simu-
lator. The simulator randomly chooses x, y ∈ Z p and sets the secret key as
α = a,   β = −a/x + y.

The public key is

opk = (g1, g2) = (g^a, (g^a)^{−1/x} g^y),

which can be computed from the problem instance and the chosen parameters.
H-Query. The adversary makes hash queries in this phase. Before receiving queries
from the adversary, B randomly chooses an integer i∗ ∈ [1, qH ], where qH denotes
the number of hash queries to the random oracle. Then, B prepares a hash list
to record all queries and responses as follows, where the hash list is empty at the
beginning.

For a query on mi , if mi is already in the hash list, B responds to this query fol-
lowing the hash list. Otherwise, let mi be the i-th new queried message. B randomly
chooses wi ∈ Z p and sets H(mi ) as

H(mi) = x if i = i∗, and H(mi) = wi otherwise.

Then, B responds to this query with H(mi ) and adds (mi , H(mi )) to the hash list.
Query. The adversary makes a signature query on m. If m is not the i∗ -th queried
message in the hash list, abort. Otherwise, B computes σm as

σm = xy mod p.

Since H(m) = H(mi∗) = x, we have

σm = α + H(m)β = a + x(−a/x + y) = xy,
which is a valid signature of m.
Forgery. The adversary returns a forged signature σm∗ on some m∗ . Since H(m∗ ) =
w∗ ≠ H(m) = x, we have

σm∗ = a + w∗(−a/x + y) = ((x − w∗)/x) · a + w∗y.

B computes

a = (σm∗ − w∗y) · x/(x − w∗)
as the solution to the DL problem instance.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been ex-
plained above. The randomness of the simulation includes all random numbers in
the key generation and the responses to hash queries. They are
a, −a/x + y, w1, · · · , wi∗−1, x, wi∗+1, · · · , wqH.
According to the setting of the simulation, where a, x, y, wi are all randomly chosen,
it is easy to see that they are random and independent from the point of view of the
adversary. Therefore, the simulation is indistinguishable from the real attack.
Probability of successful simulation and useful attack. If the simulator suc-
cessfully guesses i∗ , the queried signature on the message m = mi∗ is simulatable,
and the forged signature is reducible because the message chosen for signature query
must be different from mi∗ . Therefore, the probability of successful simulation and
useful attack is 1/qH .

Advantage and time cost. Suppose the adversary breaks the scheme with
(t, 1, ε) after making qH hash queries. The advantage of solving the DL problem
is therefore ε/qH. Let Ts denote the time cost of the simulation. We have Ts = O(1).
B will solve the DL problem with (t + Ts , ε/qH ).
This completes the proof of the theorem. 

4.14.2 One-Time Signature Without Random Oracles

SysGen: The system parameter generation algorithm takes as input a security


parameter λ . It chooses a cyclic group (G, p, g), selects a cryptographic hash
function H : {0, 1}∗ → Z p , and returns the system parameters SP = (G, p, g, H).
KeyGen: The key generation algorithm takes as input the system parameters
SP. It chooses random numbers α, β , γ ∈ Z p , computes g1 = gα , g2 = gβ , g3 =
gγ , and returns a one-time public/secret key pair (opk, osk) as follows:

opk = (g1 , g2 , g3 ), osk = (α, β , γ).

Sign: The signing algorithm takes as input a message m ∈ {0, 1}∗ , the secret
key sk, and the system parameters SP. It chooses a random number r ∈ Z p and
computes the signature σm on m as
 
σm = (σ1, σ2) = (r, α + H(m) · β + r · γ mod p).

Verify: The verification algorithm takes as input a message-signature pair


(m, σm ), the public key opk, and the system parameters SP. It accepts the sig-
nature if
g^{σ2} = g1 g2^{H(m)} g3^{σ1}.

Theorem 4.14.2.1 If the DL problem is hard, the above one-time signature scheme
is provably secure in the EU-CMA security model with only one signature query and
reduction loss about L = 1.

Proof Idea. Let (g, ga ) be an instance of the DL problem. Let σm be the queried sig-
nature generated with a random number r, and σm∗ be the forged signature generated
with a random number r∗, where

σm = (r, α + H(m)β + rγ),
σm∗ = (r∗, α + H(m∗)β + r∗γ).

Let H(m∗ ) = u · H(m) for a number u ∈ Z p . If the adversary knows the reduction
algorithm and has unbounded computational power, we have the following interest-
ing observations on the simulation of α, β , γ if the security reduction provides only
one simulation.
• If α does not contain a, we have that H(m)β +rγ is simulatable for the simulator.
Let r∗ = ru. We have
    
σm∗ = r∗ , α + H(m∗ )β + r∗ γ = ru, α + u H(m)β + rγ

must be simulatable. Therefore, if the simulation of α does not contain a, the


adversary can generate such a signature to launch a useless attack.
• If β does not contain a, we have that α + rγ is simulatable for the simulator. Let
r∗ = r. We have
   
σm∗ = r∗ , α + H(m∗ )β + r∗ γ = r, α + H(m∗ )β + rγ

must be simulatable. Therefore, if the simulation of β does not contain a, the


adversary can generate such a signature to launch a useless attack.
• If only α, β contain a, we have that there exists only one message H(m) whose
signature is simulatable. However, the simulator does not know which message
H(m) the adversary will query. The probability of successful simulation is then
negligible.
With the above analysis, all secret keys must contain a in the simulation. The
simulator uses the chosen random number r to make sure that the queried signature
is simulatable for any message. It also requires that the adversary cannot find the
partition. Otherwise, the forged signature will be useless.
Proof. Suppose there exists an adversary A who can break the one-time signature
scheme in the EU-CMA security model with only one signature query. We construct
a simulator B to solve the DL problem. Given as input a problem instance (g, ga )
over the cyclic group (G, g, p), B runs A and works as follows.
Setup. Let SP = (G, p, g, H). B randomly chooses x1 , x2 , y1 , y2 ∈ Z p and sets the
secret key as
α = x1 a + y1 , β = x2 a + y2 , γ = a.
The public key is
 
opk = (g1 , g2 , g3 ) = (ga )x1 gy1 , (ga )x2 gy2 , ga ,

which can be computed from the problem instance and the chosen parameters.
Query. The adversary makes a signature query on m. B computes σm as
 
σm = (σ1, σ2) = (−x1 − H(m)x2, y1 + H(m)y2).

Let r = −x1 − H(m)x2 . We have



α + H(m)β + rγ = (x1 a + y1 ) + H(m)(x2 a + y2 ) + ra


= a(x1 + H(m)x2 + r) + y1 + H(m)y2
= y1 + H(m)y2 .

Therefore, σm is a valid signature of the message m.


Forgery. The adversary returns a forged signature σm∗ on a message m∗ . Let σm∗ be
 
σm∗ = (σ1, σ2) = (r∗, α + H(m∗)β + r∗γ).

If x1 + H(m∗ )x2 + r∗ = 0, abort. Otherwise, we have

α + H(m∗ )β + r∗ γ = a(x1 + H(m∗ )x2 + r∗ ) + y1 + H(m∗ )y2 .

Finally, B computes
a = (σ2 − y1 − H(m∗)y2)/(x1 + H(m∗)x2 + r∗)
as the solution to the DL problem instance.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been ex-
plained above. The randomness of the simulation includes all random numbers in
the key generation and the signature generation. They are

x1 a + y1 , x2 a + y2 , a, − x1 − H(m)x2 .

The simulation is indistinguishable from the real attack because they are random
and independent following the analysis below.
Probability of successful simulation and useful attack. There is no abort in the
simulation. The forged signature is reducible if r∗ ≠ −x1 − H(m∗)x2. To prove that
the adversary has no advantage in computing −x1 −H(m∗ )x2 , we only need to prove
that −x1 − H(m∗ )x2 is random and independent of the given parameters. Since σ2
in the queried signature can be computed from the secret key and σ1 , we only need
to prove that the following elements

(α, β , γ, r, −x1 − H(m∗ )x2 )


 
= x1 a + y1 , x2 a + y2 , a, − x1 − H(m)x2 , − x1 − H(m∗ )x2

are random and independent.


According to the simulation, we have
 
(α, β, r, −x1 − H(m∗)x2) = (x1 a + y1, x2 a + y2, −x1 − H(m)x2, −x1 − H(m∗)x2),

which can be rewritten as


[  a      0        1   0 ] [ x1 ]
[  0      a        0   1 ] [ x2 ]
[ −1    −H(m)      0   0 ] [ y1 ] .
[ −1    −H(m∗)     0   0 ] [ y2 ]

It is not hard to find that the absolute value of the determinant of the matrix is
|H(m∗ ) − H(m)|, which is nonzero. Therefore, we have that α, β , r, −x1 − H(m∗ )x2
are random and independent. Combining lemma 4.7.2 with the above result, we
have that r∗ is random and independent of the given parameters. Therefore, the
probability of successful simulation and useful attack is 1 − 1/p ≈ 1.
Advantage and time cost. Suppose the adversary breaks the scheme with
(t, 1, ε). Let Ts denote the time cost of the simulation. We have Ts = O(1). B will
solve the DL problem with (t + Ts , ε).
This completes the proof of the theorem. 
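The algebra of this reduction is easy to check numerically; the sketch below uses a toy order-q subgroup of Z_P^* and a random element of Z_q in place of each hash value H(m), both illustrative assumptions.

import secrets

P, q, g = 2039, 1019, 4
a = secrets.randbelow(q - 1) + 1                        # DL instance (g, g^a)
ga = pow(g, a, P)

# simulated key: alpha = x1*a + y1, beta = x2*a + y2, gamma = a
x1, x2, y1, y2 = (secrets.randbelow(q) for _ in range(4))
g1 = pow(ga, x1, P) * pow(g, y1, P) % P
g2 = pow(ga, x2, P) * pow(g, y2, P) % P
g3 = ga

def verify(hm, s1, s2):                                 # g^s2 == g1 * g2^hm * g3^s1
    return pow(g, s2, P) == g1 * pow(g2, hm, P) * pow(g3, s1, P) % P

# the simulated signature on the queried message verifies under the simulated key
hm = secrets.randbelow(q)
s1, s2 = (-x1 - hm * x2) % q, (y1 + hm * y2) % q
assert verify(hm, s1, s2)

# a forgery with r* != -x1 - H(m*)*x2 lets the simulator extract a
hm_star, r_star = secrets.randbelow(q), secrets.randbelow(q)
sigma2 = (a * (x1 + hm_star * x2 + r_star) + y1 + hm_star * y2) % q   # what a valid forgery carries
denom = (x1 + hm_star * x2 + r_star) % q
if denom:                                               # fails only with probability about 1/q
    assert (sigma2 - y1 - hm_star * y2) * pow(denom, -1, q) % q == a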

4.14.3 One-Time Signature with Indistinguishable Partition

The security proof in Theorem 4.14.2.1 is based on the fact that the adversary cannot
find the partition x1 + H(m∗ )x2 + r∗ = 0 from the given parameters. In this section,
we introduce a security proof composed of two different simulations whose parti-
tions are opposite. That is, a signature generated with a random number is simulat-
able in one simulation and reducible in another simulation. If the adversary cannot
distinguish which simulation is adopted by the simulator, any forged signature gen-
erated by the adversary will be reducible with probability 1/2.
Theorem 4.14.3.1 If the DL problem is hard, the proposed one-time signature
scheme in Section 4.14.2 is provably secure in the EU-CMA security model with
only one signature query and reduction loss at most L = 2.
Proof. Suppose there exists an adversary A who can break the one-time signature
scheme in the EU-CMA security model with only one signature query. We construct
a simulator B to solve the DL problem. Given as input a problem instance (g, ga )
over the cyclic group (G, g, p), B runs A and works as follows.
B randomly chooses a secret bit µ ∈ {0, 1} and programs the simulation in two
different ways.
• The reduction for µ = 0 is programmed as follows.
Setup. Let SP = (G, g, p, H). B randomly chooses x1 , y1 , y2 ∈ Z p and sets the
secret key as α = x1 a + y1 , β = 0a + y2 , γ = a. The public key is
 
opk = (g1 , g2 , g3 ) = (ga )x1 gy1 , gy2 , ga ,

which can be computed from the problem instance and the chosen parameters.
Query. The adversary makes a signature query on m. B computes σm as
 
σm = (σ1 , σ2 ) = − x1 , y1 + H(m)y2 .

Let r = −x1 . We have

α + H(m)β + rγ = x1 a + y1 + H(m)y2 − x1 a = y1 + H(m)y2 ,

which is a valid signature of the message m.


Forgery. The adversary returns a forged signature σm∗ on some m∗ . Let σm∗ be
 
σm∗ = (σ1 , σ2 ) = r∗ , α + H(m∗ )β + r∗ γ .

If r∗ = −x1 , abort. Otherwise, we have

σ2 = α + H(m∗ )β + r∗ γ = a(x1 + r∗ ) + y1 + H(m∗ )y2 .

Finally, B computes
a = (σ2 − y1 − H(m∗ )y2 )/(x1 + r∗ )
as the solution to the DL problem instance.
• The reduction for µ = 1 is programmed as follows.
Setup. Let SP = (G, g, p, H). B randomly chooses x2 , y1 , y2 ∈ Z p and sets the
secret key as α = 0a + y1 , β = x2 a + y2 , γ = a. The public key is
 
opk = (g1 , g2 , g3 ) = gy1 , (ga )x2 gy2 , ga ,

which can be computed from the problem instance and the chosen parameters.
Query. The adversary makes a signature query on m. B computes σm as
 
σm = (σ1 , σ2 ) = − H(m)x2 , y1 + H(m)y2 .

Let r = −H(m)x2 . We have

α + H(m)β + rγ = y1 + H(m)(x2 a + y2 ) − H(m)x2 a = y1 + H(m)y2 ,

which is a valid signature of the message m.


Forgery. The adversary returns a forged signature σm∗ on some m∗ . Let σm∗ be
 
σm∗ = (σ1 , σ2 ) = r∗ , α + H(m∗ )β + r∗ γ .

If r∗ = −H(m∗ )x2 , abort. Otherwise, we have

σ2 = α + H(m∗ )β + r∗ γ = a(H(m∗ )x2 + r∗ ) + y1 + H(m∗ )y2 .

Finally, B computes
a = (σ2 − y1 − H(m∗ )y2 )/(H(m∗ )x2 + r∗ )
as the solution to the DL problem instance.

This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the key
generation and the signature generation. They are

(α, β , γ, r) = (x1 a + y1 , 0 + y2 , a, −x1 )           if µ = 0,
(α, β , γ, r) = (0 + y1 , x2 a + y2 , a, −H(m)x2 )       if µ = 1.

According to the setting of the simulation, where x1 , x2 , y1 , y2 , a are randomly cho-


sen, it is easy to see that they are random and independent from the point of view of
the adversary no matter whether µ = 0 or µ = 1. Therefore, the simulation is indis-
tinguishable from the real attack, and the adversary has no advantage in guessing µ
from the simulation.
Probability of successful simulation and useful attack. There is no abort in the
simulation. Let the random numbers in the queried signature and the forged signa-
ture be r and r∗ , respectively. We have
• Case µ = 0. The forged signature is reducible if r∗ ≠ r.
• Case µ = 1. The forged signature is reducible if r∗ /r ≠ H(m∗ )/H(m).
Since the two simulations are indistinguishable, and the simulator randomly chooses
one of them, the forged signature is therefore reducible with success probability
Pr[Success] described as follows.

Pr[Success] = Pr[Success|µ = 0] Pr[µ = 0] + Pr[Success|µ = 1] Pr[µ = 1]
            = Pr[r∗ ≠ r] Pr[µ = 0] + Pr[r∗ ≠ rH(m∗ )/H(m)] Pr[µ = 1]
            ≥ Pr[r∗ ≠ r] Pr[µ = 0] + Pr[r∗ = r] Pr[µ = 1]
            = (1/2) Pr[r∗ ≠ r] + (1/2) Pr[r∗ = r]
            = 1/2.

Note that if the adversary returns a forged signature satisfying r∗ ∉ {r, rH(m∗ )/H(m)}, the
signature must be reducible for both cases. Therefore, the probability of successful
simulation and useful attack is at least 1/2.
Advantage and time cost. Suppose the adversary breaks the scheme with (t, 1, ε).
The advantage of solving the DL problem is at least ε/2. Let Ts denote the time cost of
the simulation. We have Ts = O(1). B will solve the DL problem with (t + Ts , ε/2).
This completes the proof of the theorem. 
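The following small Python check is an added illustration of the opposite-partition argument under toy assumptions (a tiny prime and fixed values of x1 , x2 , H(m), H(m∗ ) chosen so that the two non-reducible values differ): it runs over every possible forged randomness r∗ and confirms that each r∗ is reducible in at least one of the two simulations, so with the bit µ hidden and uniform the simulator succeeds with probability at least 1/2.

    p = 1019                          # toy prime group order (assumption)
    Hm, Hm_star = 123, 456            # toy values of H(m) and H(m*), assumed distinct
    x1, x2 = 77, 91                   # toy simulator secrets

    bad0 = (-x1) % p                  # the only non-reducible r* when mu = 0
    bad1 = (-Hm_star * x2) % p        # the only non-reducible r* when mu = 1
    assert bad0 != bad1               # holds for these toy constants

    for r_star in range(p):
        reducible0 = (r_star != bad0)
        reducible1 = (r_star != bad1)
        assert reducible0 or reducible1   # never useless for both hidden bits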

4.15 Summary of Concepts

In the last section of this chapter, we revisit and classify concepts used in security
proofs of public-key cryptography. It is important to fully understand these concepts
and master where/how to apply them in a security proof. Note that some concepts,
such as advantage and valid ciphertexts, may have different explanations elsewhere
in the literature.

4.15.1 Concepts Related to Proof

Various concepts related to “proof” are interpreted differently for different purposes.
So far, we have the following proof concepts.
• Proof by Contradiction. This concept introduces what provable security via
reduction means in public-key cryptography. We will revisit preliminaries and some
important concepts about proof by contradiction in Section 4.15.2.
• Security Proof. A security proof is mainly composed of a reduction algorithm
and its correctness analysis. To be precise, the correctness analysis should show
that the advantage of solving an underlying hard problem using an adaptive attack
by a malicious adversary is non-negligible.
• Security Reduction. This is a reduction run by the simulator following the re-
duction algorithm. A security reduction should introduce how to generate a simu-
lated scheme and how to reduce an attack on this scheme to solving an underlying
hard problem. We will revisit some important concepts about security reduction
in Section 4.15.3.
• Simulation. This is an interaction between the adversary and the simulator who
generates a simulated scheme with a problem instance. In comparison with se-
curity reduction, simulation focuses on how to program a simulation indistin-
guishable from the real attack. We will revisit some important concepts about
simulation in Section 4.15.4.
A security proof is not a real mathematical proof showing that a proposed scheme
is secure. Instead, it merely proposes a reduction algorithm and shows that if there
exists an adversary who can break the proposed scheme, we can run the reduction
algorithm and reduce the adversary’s attack to solving an underlying hard problem.
That is, a security proof is an algorithm only. Unfortunately, we cannot demonstrate
this reduction algorithm to convince people that the proposed scheme is secure.
What we do instead is a theoretical analysis showing that the proposed reduction
algorithm indeed works. That is, we can reduce any adaptive attack by the malicious
adversary who has unbounded computational power to solving an underlying hard
problem with non-negligible advantage.

4.15.2 Preliminaries and Proof by Contradiction

Mathematics is the foundation of modern cryptography. We define mathematical


hard problems and construct schemes over mathematical primitives. Each math-
ematical primitive is generated with an input security parameter λ . The compu-
tational complexity of solving a specific mathematical problem P defined over a
mathematical primitive can be denoted by P(λ ), which is dependent on the prob-
lem definition and the size of λ . The complexity is also known as the security
level. Suppose there are two hard problems A, B defined over the same mathematical
primitive, and their security levels are PA (λ ), PB (λ ), respectively. If PA (λ ) is much
higher than PB (λ ), we say that A is a relatively weak hardness assumption while B
is a relatively strong hardness assumption. Here, “weak” is better than “strong.” In
group-based cryptography, a cyclic group is the mathematical primitive, where the
security parameter is the bit length of a group element. The security level of the dis-
crete logarithm problem defined over a cyclic group depends on the implementation
of this cyclic group and the input security parameter λ in the group implementation.
For example, a modular multiplicative group generated with the security parameter
λ = 1,024 has a security level of roughly 2^80 , while an elliptic curve group generated
with the security parameter λ = 1,024 can have a security level up to 2^512 .
There are two types of algorithms: deterministic algorithms and probabilistic al-
gorithms, where a probabilistic algorithm is believed to be more efficient than a
deterministic algorithm. A probabilistic algorithm is measured by a time cost t and
a value ε ∈ [0, 1], which is either probability or advantage depending on the def-
inition. If t is polynomial and ε is non-negligible, the corresponding algorithm is
computationally efficient. Otherwise, if t is exponential or ε is negligible, the corre-
sponding algorithm is computationally inefficient. All algorithms for cryptography
can be classified into (1) scheme algorithms for implementing a cryptosystem, (2)
attack algorithms for breaking a proposed scheme, (3) solution algorithms for solv-
ing a mathematical hard problem, and (4) reduction algorithms for specifying how
to reduce an attack on the simulated scheme to solving an underlying hard problem.
The probability ε for scheme algorithms must be approximately equal to 1, while
the advantage ε for the other three types of algorithms can be non-negligible. A
proposed scheme is secure if all attack algorithms are computationally inefficient.
A mathematical problem is easy if there exists a computationally efficient solution
algorithm, while a mathematical problem is believed to be hard if all known solution
algorithms are computationally inefficient. We cannot mathematically prove that a
problem is hard but only prove that solving this problem is not easier than solving
an existing believed-to-be-hard problem by a reduction.
Proof by contradiction is adopted to prove that a proposed scheme is secure
against any adversary in polynomial time with non-negligible advantage. Firstly,
we have a mathematical problem that is believed to be hard. Then, we assume that
there exists an adversary who can break the proposed scheme, and prove that the
adversary’s attack can be reduced to solving the underlying hard problem by a secu-
rity reduction. Finally, we conclude that the proposed scheme is secure because the
underlying hard problem becomes easy if the breaking assumption is true.

4.15.3 Security Reduction and Its Difficulty

A security model models attacks as a game played by a challenger and an adversary.


It defines what/when the adversary can query and how the adversary wins the game.
Every cryptosystem must have a corresponding security model. There may be many
security models proposed to define the same security service for a cryptosystem.
The reason is that some proposed schemes can only be proved secure in a weak
security model, where the adversary’s interactions with the scheme are restricted.
A strong security model allows the adversary to flexibly and adaptively query more
information than a weak security model. Therefore, a strong security model is better
than a weak security model. To define a security model for a new cryptosystem, all
queries from the adversary that will make the adversary trivially win the game must
be forbidden. Otherwise, the adversary can always win the game no matter how the
proposed scheme is constructed.
In the security reduction, we begin with the breaking assumption that there exists
an adversary who can break the proposed scheme in polynomial time with non-
negligible advantage under the corresponding security model. We construct a sim-
ulator who uses a given problem instance to generate a simulated scheme and then
uses the adversary’s attack on the simulated scheme to solve an underlying hard
problem. A security reduction is a reduction algorithm only. It merely specifies
what the simulator should do in the security reduction. In the security reduction,
there always exist a reduction cost and a reduction loss when reducing the adver-
sary’s attack to solving an underlying hard problem. A security reduction is a tight
reduction if the reduction loss is sub-linear in the number of queries or constant-
size, i.e., independent of the number of queries made by the adversary. Otherwise,
it is a loose reduction.
The adversary’s attack can be classified into four categories: (1) failed attack that
cannot break the scheme, (2) successful attack that can break the scheme, (3) useful
attack that can be reduced to solving an underlying hard problem, and (4) useless
attack that cannot be reduced to solving an underlying hard problem. The adversary
may fail in launching a successful attack on the simulated scheme. The adversary’s
attack on the simulated scheme may be useless. A successful security reduction
requires that the adversary’s attack is useful, but a successful attack by the adversary
cannot guarantee that the attack is useful. A correct security reduction must provide
a correctness analysis showing that the advantage of solving an underlying hard
problem using the adversary’s attack is non-negligible.
In the breaking assumption, there is no restriction on the adversary in break-
ing the proposed scheme except time and advantage. The adversary is a black-box
adversary, who will launch an adaptive attack including adaptive queries and an
adaptive output. To be able to calculate the advantage of solving an underlying hard
problem, we amplify the black-box adversary into a malicious black-box adversary
who has unbounded computational power. A correct security reduction is compli-
cated because we must give a correctness analysis showing that the advantage of
solving an underlying hard problem using an adaptive attack from a computation-
ally unbounded adversary is non-negligible. To simplify the correctness analysis of

the security reduction, we assume that the adversary knows the scheme algorithm of
the proposed scheme, the reduction algorithm, and how to solve all computational
hard problems. On the other hand, we can program a security reduction successfully,
because the adversary does not know the random number(s) chosen by the simula-
tor, the problem instance given to the simulator, and how to solve an absolutely hard
problem.
In the security reduction, the attack by the adversary and the underlying hard
problem for security reductions are related. We reduce a computational attack to
solving a computational hard problem. For example, in a security reduction for dig-
ital signatures, we mainly use the forged signature from the adversary to solve a
computational hard problem. We reduce a decisional attack to solving a decisional
hard problem. For example, in a security reduction for encryption, we mainly use
the guess of the randomly chosen message mc in the challenge ciphertext to solve
a decisional hard problem. With the help of random oracles, in a security reduction
for encryption, we can also reduce a decisional attack in the indistinguishability se-
curity model to solving a computational hard problem. We stress that each type of
reduction is quite different in simulation, solution, and analysis.

4.15.4 Simulation and Its Requirements

The first step of the security reduction is the simulation. The simulator uses the
given problem instance to generate a simulated scheme, and may abort in the sim-
ulation so that the simulation is not successful. The adversary will launch a failed
attack or a successful attack in the simulation. It is not necessary to implement the
full simulated scheme. Only those algorithms involved in the responses to queries
are desired. The simulated scheme is indistinguishable from the real scheme when
correctness and randomness hold. To be precise, all responses to queries, such as
signature queries and decryption queries, must be correct. All simulated random
numbers (group elements) must be truly random and independent. The indistin-
guishability is necessary because we only assume that the adversary can break a
given scheme that looks like a real scheme from the point of view of the adversary.
We cannot guarantee that the adversary will also break the given scheme with the
same advantage as breaking the real scheme, if the given scheme is distinguishable
from the real scheme.
In a security reduction with random oracles, the simulator controls random ora-
cles and can embed any special integers/elements in responses to hash queries, as
long as all responses are uniformly distributed. The simulator controls responses
to hash queries to help program the simulation, especially for signature generation
and private-key generation in identity-based encryption, without knowing the corre-
sponding secret key. The number of hash queries to random oracles is polynomial.
As long as the adversary does not query x to the random oracle, H(x) is random and
unknown to the adversary. A hash list is used to record all queries made by the ad-
versary, all responses to hash queries, and all secret states for computing responses.

For digital signatures, most security reductions in the literature program the sim-
ulation in such a way that the simulator does not know the corresponding secret key,
and the simulator utilizes the forged signature from the adversary to solve an un-
derlying hard problem. All digital signatures in the simulation can be classified into
simulatable signatures and reducible signatures. In the simulation, there should be
a partition that specifies which signatures are simulatable and which signatures are
reducible. If a security reduction is to use the forged signature to solve an underlying
hard problem, all queried signatures must be simulatable, and the forged signature
must be reducible. The adversary should not be able to find or distinguish the parti-
tion from the simulation. Otherwise, the adversary can always choose a simulatable
signature as the forged signature so that the forgery is a useless attack.
The simulation is much more complicated for encryption than that for digital
signatures. We can program the security reduction to solve either a decisional hard
problem or a computational hard problem depending on the proposed scheme and
the security reduction. The corresponding security reductions are very different.
• If we program the security reduction to solve a decisional hard problem, we use
the adversary’s guess of the message in the challenge ciphertext to solve the un-
derlying hard problem. In the corresponding simulation, the target Z from the
problem instance must be embedded in the challenge ciphertext in a way that
the challenge ciphertext fulfills several conditions. To be precise, if Z is true,
the simulated scheme is indistinguishable from the real scheme so that the adver-
sary will guess the encrypted message correctly with probability 1/2 + ε/2 according
to the breaking assumption. Otherwise, it should be as hard for the adversary to
break the false challenge ciphertext as it is to break a one-time pad, so that the ad-
versary has success probability only about 1/2 of guessing the encrypted message.
It is not easy to analyze the probability of breaking the false challenge ciphertext
when the adversary can make decryption queries. To analyze the probability PF of
breaking the false challenge ciphertext, we need to analyze the probabilities and
advantages of PFW , AKF , ACF , AIF , and PFA . In the simulation of encryption schemes,
the challenge decryption key can be either known or unknown to the simulator,
which is dependent on the proposed scheme and the security reduction.
• If we program a security reduction in the indistinguishability security model to
solve a computational hard problem with the help of random oracles, the security
reduction is very different from that to solve a decisional hard problem. In partic-
ular, the simulator should use one of the hash queries, namely the challenge hash
query, to solve the underlying computational hard problem. There is no true/false
challenge ciphertext definition in this type of simulation. Before the adversary
makes the challenge hash query to the random oracle, the simulation must be
indistinguishable from the real scheme, and the adversary must have no advan-
tage in breaking the challenge ciphertext. If these conditions hold, the adversary
must make the challenge hash query to the random oracle in order to fulfill the
breaking assumption. In this security reduction, we usually program the simula-
tion in such a way that the simulator does not know the challenge decryption key.
For CCA security, the hash queries from the adversary must be able to help the
decryption simulation. That is, all correct ciphertexts can be decrypted with the

correct hash queries, and all incorrect ciphertexts will be rejected according to
those hash queries made by the adversary.
In a security reduction for encryption under a decisional hardness assumption,
the decryption simulation should be indistinguishable from the real attack if Z is
true, and the decryption simulation should not help the adversary break the false
challenge ciphertext if Z is false. In a security reduction for encryption under a
computational hardness assumption, the decryption simulation should be indistin-
guishable from the real attack and should not help the adversary distinguish the
challenge ciphertext in the simulated scheme from that in the real scheme.

4.15.5 Towards a Correct Security Reduction

A security reduction is merely a reduction algorithm specifying how to reduce the


adversary’s attack to solving an underlying hard problem, if such an adversary in-
deed exists. From a security reduction to a correct security reduction, there should
be some steps analyzing that the advantage of solving the underlying hard problem
with any attack by the adversary is non-negligible. In Figure 4.3, we have given all
related steps. They are described as follows.

Fig. 4.3 Towards a correct security reduction

• A security reduction mainly consists of a simulation algorithm and a solution


algorithm. A simulation is an interaction between the adversary and the simulated
scheme that is generated using the problem instance following the simulation

algorithm. The simulation is successful if the simulator does not abort during the
simulation, where the simulator aborts because it cannot correctly respond to the
adversary’s queries.
• If the simulation is successful, the adversary will launch an attack on the simu-
lated scheme. The attack is a successful attack with a certain probability, depend-
ing on the simulation and the reduction algorithm. To be precise, if the simulated
scheme is indistinguishable (IND) from the real scheme, the adversary should
launch a successful attack on the simulated scheme with probability Pε defined in
the breaking assumption. If the simulated scheme is distinguishable (DIS) from
the real scheme, the adversary will launch a successful attack on the simulated
scheme with malicious probability P∗ ∈ [0, 1], depending on the reduction algo-
rithm and explained as follows.
– For digital signatures, the adversary will launch a successful attack with prob-
ability P∗ = 0, no matter whether the reduction is by the forged signature or
hash queries.
– For encryption under a decisional hard problem, the adversary will try to
launch a successful attack with probability P∗ = Pε if the simulation is dis-
tinguishable because Z is false.
– For encryption under a computational hard problem, the adversary will try to
launch a successful attack with probability P∗ = 0 if the simulation is distin-
guishable before the adversary makes the challenge hash query to the random
oracle.
• An attack by the adversary, no matter whether it is successful or failed, can be a
useful attack depending on the cryptosystem and the reduction algorithm. Only
a useful attack can be reduced to solving an underlying hard problem.

– For digital signatures whose security reductions involve a forged signature,


the adversary’s attack is useful if the forged signature is reducible.
– For encryption under a decisional hard problem, an attack is useful if the prob-
ability of correctly guessing the message in the true challenge ciphertext is
1/2 + ε/2, and the probability of correctly guessing the message in the false
challenge ciphertext is at most 1/2 except with negligible advantage.
– For all cryptosystems whose security reductions involve hash queries, an at-
tack is useful if the hash queries made by the adversary include the challenge
hash query that can be used to solve an underlying hard problem.

• Finally, a security reduction is correct if the advantage of solving an underlying


hard problem is non-negligible, assuming that the adversary can break the real
scheme in polynomial time with non-negligible advantage.
This completes all the steps in a correctness analysis to show that a proposed secu-
rity reduction is a correct security reduction.

4.15.6 Other Confusing Concepts

There are many concepts associated with the word “model” including security
model, standard security model, standard model, random oracle model, and generic
group model. They have totally different meanings.
• Security Model. A security model is defined for modeling attacks. It is a game
played by the adversary and the challenger who generates a scheme for the ad-
versary to attack. Each security model can be seen as an abstracted class of at-
tacks, where we define how the adversary breaks the scheme. The security model
should appear in the security definition for a cryptosystem.
• Standard Security Model. A cryptosystem can have many security models. One
of these security models is selected as the standard security model. The standard
security model should be widely accepted and good enough to prove the security
of the proposed scheme. We emphasize that the standard security model is not
the strongest security model for a cryptosystem.
• Standard Model. A standard model is a model of computation where the adver-
sary is only restricted by the amount of time it takes to break a cryptosystem.
Differing from the standard model, the random oracle model is a special model
that has defined some further restrictions on the adversary.
• Random Oracle Model. The random oracle model is not a security model for
a cryptosystem but an assumption where at least one hash function is set as a
random oracle controlled by the simulator, and the adversary must access the
random oracle to know the corresponding hash values. The random oracle model
only appears in the security proof.
• Generic Group Model. The generic group model is an assumption proposed for
analyzing whether a problem defined over cyclic groups is hard or not. In this
model, the adversary cannot see the group implementation or any group element,
only their encodings. Then, a problem is analyzed to be hard within this assump-
tion. The generic group model only appears in the hardness analysis for a new
hard problem.
In a security reduction, we often mention the word “indistinguishable.” It is used
in two different places.
• Indistinguishable Security. In the definition of indistinguishable security for en-
cryption, a random message chosen by the challenger or by the simulator, either
m0 or m1 , is encrypted in the challenge ciphertext. An encryption scheme is in-
distinguishably secure if the adversary has negligible advantage in guessing the
encrypted message in the challenge ciphertext.
• Indistinguishable Simulation. The adversary is given a simulated scheme, and
we want the adversary to launch an attack on it with the advantage defined in
the breaking assumption. This requires that the simulated scheme should be in-
distinguishable from the real scheme. Otherwise, the adversary could launch any
attack which might be failed or successful without any restriction.
Chapter 5
Digital Signatures with Random Oracles

In this chapter, we mainly introduce the BLS scheme [26], the BBRO scheme [20],
and the ZSS scheme [104] under the H-Type, the C-Type, and the I-Type structures,
respectively. With a random salt, we can modify the BLS scheme into the BLS+
scheme using a random bit [65] and the BLS# scheme using a random number [48]
with tight reductions. The same approach can also be applied to the ZSS scheme,
where the responses to hash queries are different due to the adoption of the q-SDH
assumption. At the end, we introduce the BLSG scheme, a simplified version of
[53], based on the BLS signature scheme with a completely new and tight security
reduction without the use of a random salt. The given schemes and/or proofs may
be different from the original ones.

5.1 BLS Scheme

SysGen: The system parameter generation algorithm takes as input a security


parameter λ . It chooses a pairing group PG = (G, GT , g, p, e), selects a cryp-
tographic hash function H : {0, 1}∗ → G, and returns the system parameters
SP = (PG, H).
KeyGen: The key generation algorithm takes as input the system parameters
SP. It randomly chooses α ∈ Z p , computes h = gα , and returns a public/secret
key pair (pk, sk) as follows:

pk = h, sk = α.

Sign: The signing algorithm takes as input a message m ∈ {0, 1}∗ , the secret
key sk, and the system parameters SP. It returns the signature σm on m as

σm = H(m)α .


Verify: The verification algorithm takes as input a message-signature pair


(m, σm ), the public key pk, and the system parameters SP. It accepts the signa-
ture if 
e(σm , g) = e(H(m), h).
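As a small added illustration of this verification equation (not from the original text), the following Python sketch works over a toy group of prime order p = 1019 inside Z q^* with q = 2p + 1; the pairing is emulated by brute-force discrete logarithms and kept as an exponent, and the hash-to-group function is a simple stand-in. It offers no security and only exercises the algebra.

    import hashlib, random

    p, q, g = 1019, 2039, 9            # toy group: g generates the order-p subgroup of Z_q^*

    def dlog(y):                       # brute-force discrete log base g (toy sizes only)
        x, acc = 0, 1
        while acc != y:
            acc, x = acc * g % q, x + 1
        return x

    def e(A, B):                       # emulated pairing e(A, B), returned as an exponent
        return dlog(A) * dlog(B) % p

    def H(m):                          # stand-in hash onto the group (illustrative only)
        return pow(g, int.from_bytes(hashlib.sha256(m).digest(), 'big') % p, q)

    alpha = random.randrange(1, p)     # KeyGen: secret key
    h = pow(g, alpha, q)               # public key h = g^alpha

    m = b"hello"
    sigma = pow(H(m), alpha, q)        # Sign: sigma_m = H(m)^alpha
    assert e(sigma, g) == e(H(m), h)   # Verify: e(sigma_m, g) = e(H(m), h)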

Theorem 5.1.0.1 Suppose the hash function H is a random oracle. If the CDH
problem is hard, the BLS signature scheme is provably secure in the EU-CMA se-
curity model with reduction loss L = qH , where qH is the number of hash queries to
the random oracle.
Proof. Suppose there exists an adversary A who can (t, qs , ε)-break the signature
scheme in the EU-CMA security model. We construct a simulator B to solve the
CDH problem. Given as input a problem instance (g, ga , gb ) over the pairing group
PG, B controls the random oracle, runs A , and works as follows.
Setup. Let SP = PG and H be the random oracle controlled by the simulator. B
sets the public key as h = ga , where the secret key α is equivalent to a. The public
key is available from the problem instance.
H-Query. The adversary makes hash queries in this phase. Before receiving queries
from the adversary, B randomly chooses an integer i∗ ∈ [1, qH ], where qH denotes
the number of hash queries to the random oracle. Then, B prepares a hash list
to record all queries and responses as follows, where the hash list is empty at the
beginning.
Let the i-th hash query be mi . If mi is already in the hash list, B responds to this
query following the hash list. Otherwise, B randomly chooses wi from Z p and sets
H(mi ) as

H(mi ) = gb+wi if i = i∗ , and H(mi ) = gwi otherwise.
The simulator B responds to this query with H(mi ) and adds (i, mi , wi , H(mi )) to
the hash list.
Query. The adversary makes signature queries in this phase. For a signature query
on mi , if mi is the i∗ -th queried message in the hash list, abort. Otherwise, we have
H(mi ) = gwi .
B computes σmi as
σmi = (ga )wi .
According to the signature definition and simulation, we have

σmi = H(mi )α = (gwi )a = (ga )wi .

Therefore, σmi is a valid signature of mi .


Forgery. The adversary returns a forged signature σm∗ on some m∗ that has not been
queried. If m∗ is not the i∗ -th queried message in the hash list, abort. Otherwise, we
have H(m∗ ) = gb+wi∗ .

According to the signature definition and simulation, we have

σm∗ = H(m∗ )α = (gb+wi∗ )a = gab+awi∗ .

The simulator B computes

σm∗ /(ga )wi∗ = gab+awi∗ /(ga )wi∗ = gab

as the solution to the CDH problem instance.


This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been ex-
plained above. The randomness of the simulation includes all random numbers in
the key generation and the responses to hash queries. They are

a, w1 , · · · , wi∗ −1 , b + wi∗ , wi∗ +1 , · · · , wqH .

According to the setting of the simulation, where a, b, wi are randomly chosen, it


is easy to see that they are random and independent from the point of view of the
adversary. Therefore, the simulation is indistinguishable from the real attack.
Probability of successful simulation and useful attack. If the simulator suc-
cessfully guesses i∗ , all queried signatures are simulatable, and the forged signature
is reducible because the message mi∗ cannot be chosen for a signature query, and
it will be used for the signature forgery. Therefore, the probability of successful
simulation and useful attack is 1/qH for qH queries.
Advantage and time cost. Suppose the adversary breaks the scheme with
(t, qs , ε) after making qH queries to the random oracle. The advantage of solving
the CDH problem is therefore ε/qH . Let Ts denote the time cost of the simulation. We
have Ts = O(qH + qs ), which is mainly dominated by the oracle response and the
signature generation. Therefore, B will solve the CDH problem with (t + Ts , ε/qH ).
This completes the proof of the theorem. 
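A Python sketch of this reduction over the same toy group may also help. It is an added illustration under toy assumptions: a tiny group, a fixed list of stand-in hash queries, and a "forger" that simply evaluates H(m∗ )^a with the exponent a. The last assertion checks that B's output is indeed g^{ab}.

    import random

    p, q, g = 1019, 2039, 9                      # toy group as in the previous sketch
    a, b = random.randrange(1, p), random.randrange(1, p)
    ga, gb = pow(g, a, q), pow(g, b, q)          # CDH instance (g, g^a, g^b); a, b unknown to B

    messages = [b"m%d" % i for i in range(8)]    # stand-in hash queries
    i_star = random.randrange(len(messages))     # B's guess of the forgery message

    # H-Query: program the random oracle
    w, H = {}, {}
    for i, m in enumerate(messages):
        w[m] = random.randrange(p)
        H[m] = gb * pow(g, w[m], q) % q if i == i_star else pow(g, w[m], q)

    # Query: every m_i with i != i_star is simulatable as (g^a)^{w_i} without a
    for i, m in enumerate(messages):
        if i != i_star:
            assert pow(ga, w[m], q) == pow(H[m], a, q)

    # Forgery on m_{i*}: the adversary outputs sigma* = H(m*)^a
    m_star = messages[i_star]
    sigma_star = pow(H[m_star], a, q)            # stands in for the adversary's attack
    g_ab = sigma_star * pow(ga, (p - w[m_star]) % p, q) % q
    assert g_ab == pow(g, a * b % p, q)          # B's CDH solution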

5.2 BLS+ Scheme

SysGen: The system parameter generation algorithm takes as input a security


parameter λ . It chooses a pairing group PG = (G, GT , g, p, e), selects a cryp-
tographic hash function H : {0, 1}∗ → G, and returns the system parameters
SP = (PG, H).

KeyGen: The key generation algorithm takes as input the system parameters
SP. It randomly chooses α ∈ Z p , computes h = gα , and returns a public/secret
key pair (pk, sk) as follows:

pk = h, sk = α.

Sign: The signing algorithm takes as input a message m ∈ {0, 1}∗ , the secret
key sk, and the system parameters SP. It chooses a random coin c ∈ {0, 1} and
returns the signature σm on m as
 
σm = (σ1 , σ2 ) = c, H(m, c)α .

We require that the signing algorithm always uses the same random coin c on
the message m.
Verify: The verification algorithm takes as input a message-signature pair
(m, σm ), the public key pk, and the system parameters SP. Let σm = (σ1 , σ2 ).
It accepts the signature if
 
e(σ2 , g) = e(H(m, σ1 ), h).

Theorem 5.2.0.1 Suppose the hash function H is a random oracle. If the CDH
problem is hard, the BLS+ signature scheme is provably secure in the EU-CMA
security model with reduction loss L = 2.
Proof. Suppose there exists an adversary A who can (t, qs , ε)-break the signature
scheme in the EU-CMA security model. We construct a simulator B to solve the
CDH problem. Given as input a problem instance (g, ga , gb ) over the pairing group
PG, B controls the random oracle, runs A , and works as follows.
Setup. Let SP = PG and H be the random oracle controlled by the simulator. B
sets the public key as h = ga , where the secret key α is equivalent to a. The public
key is available from the problem instance.
H-Query. The adversary makes hash queries in this phase. B prepares a hash list
to record all queries and responses as follows, where the hash list is empty at the
beginning.
For a query on (mi , ci ) where ci ∈ {0, 1}, if mi is already in the hash list, B
responds to this query following the hash list. Otherwise, B randomly chooses xi ∈
{0, 1}, yi , zi ∈ Z p and sets H(mi , 0), H(mi , 1) as

H(mi , 0) = gb+yi , H(mi , 1) = gzi :     xi = 0
H(mi , 0) = gyi , H(mi , 1) = gb+zi :     xi = 1.

The simulator B responds to this query with H(mi , ci ) and adds (mi , xi , yi , zi , H(mi , 0),
H(mi , 1)) to the hash list.

Query. The adversary makes signature queries in this phase. For a signature query
on mi , let the corresponding hashing tuple be (mi , xi , yi , zi , H(mi , 0), H(mi , 1)).
B computes σmi as

σmi = (ci , H(mi , ci )α ) = (1, (ga )zi )     if xi = 0,
σmi = (ci , H(mi , ci )α ) = (0, (ga )yi )     if xi = 1.

According to the signature definition and simulation, we have


• If xi = 0, then ci = 1 and H(mi , ci )α = H(mi , 1)α = (gzi )a = (ga )zi .
• Otherwise xi = 1, then ci = 0 and H(mi , ci )α = H(mi , 0)α = (gyi )a = (ga )yi .
Therefore, σmi is a valid signature of mi .
Forgery. The adversary returns a forged signature σm∗ on some m∗ that has not
been queried. Let the forged signature be (σ1∗ , σ2∗ ) = (c∗ , H(m∗ , c∗ )α ), and the cor-
responding hashing tuple be (m∗ , x∗ , y∗ , z∗ , H(m∗ , 0), H(m∗ , 1)). If c∗ ≠ x∗ , abort.
Otherwise, we have c∗ = x∗ and

H(m∗ , c∗ ) = gb+y∗ :     c∗ = x∗ = 0
H(m∗ , c∗ ) = gb+z∗ :     c∗ = x∗ = 1.

According to the signature definition and simulation, we have

σ2∗ = H(m∗ , c∗ )α = (gb+w∗ )a = gab+aw∗ , where w∗ = y∗ or w∗ = z∗ .

The simulator B computes

σ2∗ /(ga )w∗ = gab+aw∗ /(ga )w∗ = gab

as the solution to the CDH problem instance.


This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the key
generation, the responses to hash queries, and the signature generation. They are as
follows:

pk : a,
H(mi , 0), H(mi , 1) : (b + yi , zi ) or (yi , b + zi )
ci : xi .

According to the setting of the simulation, where a, b, xi , yi , zi are randomly chosen,


it is easy to see that they are random and independent from the point of view of the
adversary. Therefore, the simulation is indistinguishable from the real attack.
Probability of successful simulation and useful attack. All queried signatures
are simulatable, and the forged signature is reducible if c∗ = x∗ . According to the
setting, we have

H(m∗ , 0) = gb+y∗ , H(m∗ , 1) = gz∗ :     x∗ = 0
H(m∗ , 0) = gy∗ , H(m∗ , 1) = gb+z∗ :     x∗ = 1.

The adversary has no advantage in guessing which hash query is answered with
b when y∗ , z∗ are randomly chosen by the simulator. Therefore, x∗ is random and
unknown to the adversary, so that c∗ = x∗ holds with probability 1/2. Therefore, the
probability of successful simulation and useful attack is 1/2.
Advantage and time cost. Suppose the adversary breaks the scheme with (t, qs , ε)
after making qH queries to the random oracle. The advantage of solving the CDH
problem is therefore ε/2. Let Ts denote the time cost of the simulation. We have Ts =
O(qH + qs ), which is mainly dominated by the oracle response and the signature
generation. Therefore, B will solve the CDH problem with (t + Ts , ε/2).
This completes the proof of the theorem. 
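The hidden-bit partition can also be replayed in a few lines of Python. This added sketch uses the same toy group and stand-in randomness as before: it programs both hash slots of a message as in H-Query, answers a signature query with the b-free slot, and extracts g^{ab} exactly when the forged coin c∗ equals the hidden bit x∗ , which happens with probability 1/2.

    import random

    p, q, g = 1019, 2039, 9
    a, b = random.randrange(1, p), random.randrange(1, p)
    ga, gb = pow(g, a, q), pow(g, b, q)              # CDH instance (g, g^a, g^b)

    def program():
        """Program H(m,0), H(m,1): a hidden bit x decides which slot hides b."""
        x, y, z = random.randrange(2), random.randrange(p), random.randrange(p)
        H0 = gb * pow(g, y, q) % q if x == 0 else pow(g, y, q)
        H1 = pow(g, z, q) if x == 0 else gb * pow(g, z, q) % q
        return x, y, z, (H0, H1)

    # Signature query on m: B answers with coin c = 1 - x, whose slot is b-free
    x, y, z, H = program()
    c, exp = (1, z) if x == 0 else (0, y)
    assert pow(ga, exp, q) == pow(H[c], a, q)        # valid, computed without a

    # Forgery (c*, H(m*, c*)^a) on a fresh m*: reducible exactly when c* = x*
    xs, ys, zs, Hs = program()
    c_star = random.randrange(2)                     # adversary's coin; x* stays hidden
    sigma2_star = pow(Hs[c_star], a, q)              # stands in for the adversary's forgery
    if c_star == xs:                                 # happens with probability 1/2
        w = ys if xs == 0 else zs
        g_ab = sigma2_star * pow(ga, (p - w) % p, q) % q
        assert g_ab == pow(g, a * b % p, q)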

5.3 BLS# Scheme

SysGen: The system parameter generation algorithm takes as input a security


parameter λ . It chooses a pairing group PG = (G, GT , g, p, e), selects a cryp-
tographic hash function H : {0, 1}∗ → G, and returns the system parameters
SP = (PG, H).
KeyGen: The key generation algorithm takes as input the system parameters
SP. It randomly chooses α ∈ Z p , computes h = gα , and returns a public/secret
key pair (pk, sk) as follows:

pk = h, sk = α.

Sign: The signing algorithm takes as input a message m ∈ {0, 1}∗ , the secret
key sk, and the system parameters SP. It chooses a random number r ∈ Z p and
returns the signature σm on m as
 
σm = (σ1 , σ2 ) = r, H(m, r)α .

We require that the signing algorithm always uses the same random number r
for signature generation on the message m.
Verify: The verification algorithm takes as input a message-signature pair
(m, σm ), the public key pk, and the system parameters SP. Let σm = (σ1 , σ2 ).
It accepts the signature if
 
e(σ2 , g) = e(H(m, σ1 ), h).

Theorem 5.3.0.1 Suppose the hash function H is a random oracle. If the CDH
problem is hard, the BLS# signature scheme is provably secure in the EU-CMA
security model with reduction loss about L = 1.
Proof. Suppose there exists an adversary A who can (t, qs , ε)-break the signature
scheme in the EU-CMA security model. We construct a simulator B to solve the
CDH problem. Given as input a problem instance (g, ga , gb ) over the pairing group
PG, B controls the random oracle, runs A , and works as follows.
Setup. Let SP = PG and H be the random oracle controlled by the simulator. B
sets the public key as h = ga , where the secret key α is equivalent to a. The public
key is available from the problem instance.
H-Query. The adversary makes hash queries in this phase. B prepares a hash list
to record all queries and responses as follows, where the hash list is empty at the
beginning.
For a query on (m, r) from the adversary where r ∈ Z p , if (m, r) is already
in the hash list, B responds to this query following the hash list. Otherwise, it
randomly chooses z ∈ Z p , responds to this query with H(m, r) = gb+z , and adds
(m, r, z, H(m, r), A ) to the hash list. Here, A in the tuple means that the query (m, r)
is generated by the adversary.
Query. The adversary makes signature queries in this phase. For a signature query
on mi , if there exists a tuple (mi , ri , yi , H(mi , ri ), B) in the hash list, the simulator
uses this tuple to generate the signature. Otherwise, B randomly chooses ri , yi ∈ Z p ,
sets H(mi , ri ) = gyi , and adds (mi , ri , yi , H(mi , ri ), B) to the hash list. If (mi , ri ) was
ever generated by the adversary, the simulator aborts.
B computes σmi as
   
σmi = ri , H(mi , ri )α = ri , (ga )yi .

According to the signature definition and simulation, we have

H(mi , ri )α = (gyi )a = (ga )yi .

Therefore, σmi is a valid signature of mi .



Forgery. The adversary returns a forged signature σm∗ on some m∗ that has not
been queried. Let the forged signature be (σ1∗ , σ2∗ ) = (r∗ , H(m∗ , r∗ )α ), and the cor-
responding hashing tuple be (m∗ , r∗ , z∗ , H(m∗ , r∗ )). We should have

H(m∗ , r∗ ) = gb+z∗ .

According to the signature definition and simulation, we have

σ2∗ = H(m∗ , r∗ )α = (gb+z∗ )a = gab+az∗ .

The simulator B computes

σ2∗ /(ga )z∗ = gab+az∗ /(ga )z∗ = gab

as the solution to the CDH problem instance.


This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the key
generation, the responses to hash queries, and the signature generation. They are as
follows:

pk : a,
H(m, r) : z + b if r is chosen by the adversary,
H(mi , ri ) : yi if ri is chosen by the simulator,
random number : ri .

According to the setting of the simulation, where a, b, z, yi , ri are randomly chosen,


it is easy to see that they are random and independent from the point of view of the
adversary. Therefore, the simulation is indistinguishable from the real attack.
Probability of successful simulation and useful attack. The forged signature is
reducible, and all queried signatures are simulatable when the random number ri
chosen in the signature query phase is different from all random numbers in the hash
query phase. A randomly chosen number is different from all of the numbers in the
hash query phase with probability 1 − qH /p. Therefore, the probability of successful
simulation and useful attack is (1 − qH /p)^qs ≈ 1.
Advantage and time cost. Suppose the adversary breaks the scheme with (t, qs , ε)
after making qH queries to the random oracle. The advantage of solving the CDH
problem is therefore approximately ε. Let Ts denote the time cost of the simulation.
We have Ts = O(qH + qs ), which is mainly dominated by the oracle response and
the signature generation. Therefore, B will solve the CDH problem with (t + Ts , ε).
This completes the proof of the theorem. 
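The sketch below, an added illustration with the same toy group and a stand-in hash-list layout, shows how the hash list separates adversary-made queries, whose responses embed b, from simulator-made queries, whose responses do not; the simulator aborts only on the rare collision between its fresh r i and an adversary-chosen r, and any forgery built on an adversary-made hash query yields g^{ab}.

    import random

    p, q, g = 1019, 2039, 9
    a, b = random.randrange(1, p), random.randrange(1, p)
    ga, gb = pow(g, a, q), pow(g, b, q)                 # CDH instance (g, g^a, g^b)

    hash_list = {}                                      # (m, r) -> (owner, exponent, H value)

    def h_query(m, r):                                  # hash query made by the adversary
        if (m, r) not in hash_list:
            z = random.randrange(p)
            hash_list[(m, r)] = ('A', z, gb * pow(g, z, q) % q)   # H(m, r) = g^{b+z}
        return hash_list[(m, r)][2]

    def sign_query(m):                                  # B's response to a signature query
        r = random.randrange(p)
        if (m, r) in hash_list and hash_list[(m, r)][0] == 'A':
            raise RuntimeError("abort: r collides with an adversary hash query")
        y = random.randrange(p)
        hash_list[(m, r)] = ('B', y, pow(g, y, q))      # H(m, r) = g^{y}
        return r, pow(ga, y, q)                         # (r, H(m, r)^a) without knowing a

    r_i, sig = sign_query(b"queried m")                 # simulatable signature
    assert sig == pow(hash_list[(b"queried m", r_i)][2], a, q)

    h_star = h_query(b"m*", 42)                         # adversary's hash query on (m*, r*)
    sigma2_star = pow(h_star, a, q)                     # stands in for the forged signature
    z_star = hash_list[(b"m*", 42)][1]
    g_ab = sigma2_star * pow(ga, (p - z_star) % p, q) % q
    assert g_ab == pow(g, a * b % p, q)                 # B's CDH solution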

5.4 BBRO Scheme

SysGen: The system parameter generation algorithm takes as input a security


parameter λ . It chooses a pairing group PG = (G, GT , g, p, e), selects a cryp-
tographic hash function H : {0, 1}∗ → G, and returns the system parameters
SP = (PG, H).
KeyGen: The key generation algorithm takes as input the system parameters
SP. It randomly chooses g2 ∈ G, α ∈ Z p , computes g1 = gα , and returns a
public/secret key pair (pk, sk) as follows:

pk = (g1 , g2 ), sk = α.

Sign: The signing algorithm takes as input a message m ∈ {0, 1}∗ , the secret
key sk, and the system parameters SP. It chooses a random number r ∈ Z p and
returns the signature σm on m as
 
σm = (σ1 , σ2 ) = (gα2 H(m)r , gr ).

Verify: The verification algorithm takes as input a message-signature pair


(m, σm ), the public key pk, and the system parameters SP. Let σm = (σ1 , σ2 ).
It accepts the signature if
 
e(σ1 , g) = e(g1 , g2 ) e(H(m), σ2 ).

Theorem 5.4.0.1 Suppose the hash function H is a random oracle. If the CDH
problem is hard, the BBRO signature scheme is provably secure in the EU-CMA
security model with reduction loss L = qH , where qH is the number of hash queries
to the random oracle.

Proof. Suppose there exists an adversary A who can (t, qs , ε)-break the signature
scheme in the EU-CMA security model. We construct a simulator B to solve the
CDH problem. Given as input a problem instance (g, ga , gb ) over the pairing group
PG, B controls the random oracle, runs A , and works as follows.
Setup. Let SP = PG and H be the random oracle controlled by the simulator. B
sets the public key as
g1 = ga , g2 = gb ,
where the secret key α is equivalent to a. The public key is available from the
problem instance.
H-Query. The adversary makes hash queries in this phase. Before any hash queries
are made, B randomly chooses i∗ ∈ [1, qH ], where qH denotes the number of hash

queries to the random oracle. Then, B prepares a hash list to record all queries and
responses as follows, where the hash list is empty at the beginning.
Let the i-th hash query be mi . If mi is already in the hash list, B responds to this
query following the hash list. Otherwise, B randomly chooses wi from Z p and sets
H(mi ) as

H(mi ) = gwi if i = i∗ , and H(mi ) = gb+wi otherwise.
The simulator B responds to this query with H(mi ) and adds (i, mi , wi , H(mi )) to
the hash list.
Query. The adversary makes signature queries in this phase. For a signature query
on mi , if mi is the i∗ -th queried message in the hash list, abort. Otherwise, we have
H(mi ) = gb+wi .
B chooses a random ri′ ∈ Z p and computes σmi as

σmi = ((ga )−wi · H(mi )ri′ , (ga )−1 gri′ ).

Let ri = −a + ri′ . According to the signature definition and simulation, we have

gα2 H(mi )ri = gab · (gb+wi )−a+ri′ = (ga )−wi · H(mi )ri′ ,
gri = g−a+ri′ = (ga )−1 gri′ .

Therefore, σmi is a valid signature of mi .


Forgery. The adversary returns a forged signature σm∗ on some m∗ that has not been
queried. If m∗ is not the i∗ -th queried message in the hash list, abort. Otherwise, we
have H(m∗ ) = gwi∗ .
According to the signature definition and simulation, we have
   
σm∗ = (σ1∗ , σ2∗ ) = (gα2 H(m∗ )r , gr ) = (gab (gwi∗ )r , gr ).

The simulator B computes

σ1∗ /(σ2∗ )wi∗ = gab (gwi∗ )r /(gr )wi∗ = gab

as the solution to the CDH problem instance.


This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the key
generation, the responses to hash queries, and the signature generation. They are as
follows:

pk : a, b,

H(mi ) : b + w1 , · · · , b + wi∗ −1 , wi∗ , b + wi∗ +1 , · · · , b + wqH ,


ri : −a + ri′ .

According to the setting of the simulation, where a, b, wi , ri′ are randomly chosen, it
is easy to see that they are random and independent from the point of view of the
adversary. Therefore, the simulation is indistinguishable from the real attack.
Probability of successful simulation and useful attack. If the simulator success-
fully guesses i∗ , all queried signatures are simulatable, and the forged signature is
reducible because the message mi∗ cannot be chosen for a signature query, and it will
be used for the signature forgery. Therefore, the probability of successful simulation
and useful attack is 1/qH for qH queries.
Advantage and time cost. Suppose the adversary breaks the scheme with (t, qs , ε)
after making qH queries to the random oracle. The advantage of solving the CDH
problem is therefore ε/qH . Let Ts denote the time cost of the simulation. We have Ts =
O(qH + qs ), which is mainly dominated by the oracle response and the signature
generation. Therefore, B will solve the CDH problem with (t + Ts , ε/qH ).
This completes the proof of the theorem. 
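The following added Python sketch checks, over the same toy group with the emulated pairing (values are kept as exponents, so a product in GT becomes an addition mod p), that the signature simulated with r = −a + r′ passes the BBRO verification equation even though the simulator never uses a directly. The group parameters and hash value are toy assumptions.

    import random

    p, q, g = 1019, 2039, 9

    def dlog(y):                                  # brute-force discrete log (toy sizes only)
        x, acc = 0, 1
        while acc != y:
            acc, x = acc * g % q, x + 1
        return x

    def e(A, B):                                  # emulated pairing, kept as an exponent
        return dlog(A) * dlog(B) % p

    a, b = random.randrange(1, p), random.randrange(1, p)
    g1, g2 = pow(g, a, q), pow(g, b, q)           # public key from the CDH instance

    # Simulated signature on m_i with H(m_i) = g^{b + w_i}, computed without a
    w_i, r_prime = random.randrange(p), random.randrange(p)
    H_mi = g2 * pow(g, w_i, q) % q
    sigma1 = pow(g1, (p - w_i) % p, q) * pow(H_mi, r_prime, q) % q   # (g^a)^{-w_i} H(m_i)^{r'}
    sigma2 = pow(g1, p - 1, q) * pow(g, r_prime, q) % q              # (g^a)^{-1} g^{r'}

    # Verification: e(sigma1, g) = e(g1, g2) * e(H(m_i), sigma2)
    assert e(sigma1, g) == (e(g1, g2) + e(H_mi, sigma2)) % p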

5.5 ZSS Scheme

SysGen: The system parameter generation algorithm takes as input a security


parameter λ . It chooses a pairing group PG = (G, GT , g, p, e), selects a cryp-
tographic hash function H : {0, 1}∗ → Z p , and returns the system parameters
SP = (PG, H).
KeyGen: The key generation algorithm takes as input the system parameters
SP. It randomly chooses h ∈ G, α ∈ Z p , computes g1 = gα , and returns a
public/secret key pair (pk, sk) as follows:

pk = (g1 , h), sk = α.

Sign: The signing algorithm takes as input a message m ∈ {0, 1}∗ , the secret
key sk, and the system parameters SP. It returns the signature σm on m as

σm = h^{1/(α+H(m))} .

Verify: The verification algorithm takes as input a message-signature pair


(m, σm ), the public key pk, and the system parameters SP. It accepts the signa-
ture if  
e(σm , g1 gH(m) ) = e(h, g).

Theorem 5.5.0.1 Suppose the hash function H is a random oracle. If the q-SDH
problem is hard, the ZSS signature scheme is provably secure in the EU-CMA secu-
rity model with reduction loss L = qH , where qH is the number of hash queries to
the random oracle.
Proof. Suppose there exists an adversary A who can (t, qs , ε)-break the signature
scheme in the EU-CMA security model. We construct a simulator B to solve the q-SDH
problem. Given as input a problem instance (g, g^a , g^{a^2} , · · · , g^{a^q} ) over the pairing
group PG, B controls the random oracle, runs A , and works as follows.
Setup. Let SP = PG and H be the random oracle controlled by the simulator. B
randomly chooses w1 , w2 , · · · , wq from Z p and sets the public key as

g1 = ga , h = g(a+w1 )(a+w2 )···(a+wq ) ,

where the secret key α is equivalent to a. This requires q = qH , where qH denotes


the number of hash queries to the random oracle. The public key can be computed
from the problem instance and the chosen parameters.
H-Query. The adversary makes hash queries in this phase. Before any hash queries
are made, B randomly chooses i∗ ∈ [1, qH ] and an integer w∗ ∈ Z p . Then, B pre-
pares a hash list to record all queries and responses as follows, where the hash list
is empty at the beginning.
Let the i-th hash query be mi . If mi is already in the hash list, B responds to this
query following the hash list. Otherwise, B sets H(mi ) as

H(mi ) = w∗ if i = i∗ , and H(mi ) = wi otherwise.

The simulator B responds to this query with H(mi ) and adds (i, mi , wi , H(mi )) or
(i∗ , mi∗ , w∗ , H(mi∗ )) to the hash list. Notice that w∗ ≠ wi∗ in our simulation.
Query. The adversary makes signature queries in this phase. For a signature query
on mi , if mi is the i∗ -th queried message in the hash list, abort. Otherwise, we have
H(mi ) = wi .
B computes σmi as

σmi = g^{(a+w1 )···(a+wi−1 )(a+wi+1 )···(a+wq )}

using g, g^a , · · · , g^{a^q} , w1 , w2 , · · · , wq .
According to the signature definition and simulation, we have

σmi = h^{1/(α+H(mi ))} = g^{(a+w1 )···(a+wi )···(a+wq )/(a+wi )} = g^{(a+w1 )···(a+wi−1 )(a+wi+1 )···(a+wq )} .

Therefore, σmi is a valid signature of mi .


Forgery. The adversary returns a forged signature σm∗ on some m∗ that has not been
queried. If m∗ is not the i∗ -th queried message in the hash list, abort. Otherwise, we
have H(m∗ ) = w∗ .

According to the signature definition and simulation, we have

σm∗ = h^{1/(α+H(m∗ ))} = g^{(a+w1 )(a+w2 )···(a+wq )/(a+w∗ )} ,

which can be rewritten as

g^{ f (a)+d/(a+w∗ )} ,
where f (a) is a (q − 1)-degree polynomial function in a, and d is a nonzero integer.
The simulator B computes
(σm∗ /g^{ f (a)})^{1/d} = (g^{ f (a)+d/(a+w∗ )} /g^{ f (a)})^{1/d} = g^{1/(a+w∗ )}

and outputs (w∗ , g^{1/(a+w∗ )}) as the solution to the q-SDH problem instance.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the key
generation and the responses to hash queries. They are as follows:

pk : a, (a + w1 )(a + w2 ) · · · (a + wq ),
H(mi ) : w1 , · · · , wi∗ −1 , w∗ , wi∗ +1 , · · · , wqH .

According to the setting of the simulation, where a, w1 , w2 , · · · , wq , w∗ are randomly


chosen, it is easy to see that they are random and independent from the point of view
of the adversary. Therefore, the simulation is indistinguishable from the real attack.
Probability of successful simulation and useful attack. If the simulator success-
fully guesses i∗ , all queried signatures are simulatable, and the forged signature is
reducible because the message mi∗ cannot be chosen for a signature query, and it will
be used for the signature forgery. Therefore, the probability of successful simulation
and useful attack is 1/qH for qH queries.
Advantage and time cost. Suppose the adversary breaks the scheme with (t, qs , ε)
after making qH queries to the random oracle. The advantage of solving the q-SDH
problem is therefore ε/qH . Let Ts denote the time cost of the simulation. We have
Ts = O(qs qH ), which is mainly dominated by the signature generation. Therefore,
B will solve the q-SDH problem with (t + Ts , ε/qH ).
This completes the proof of the theorem. 
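The only computational step that may look mysterious is rewriting the forged signature exponent as f (a) + d/(a + w∗ ). The Python sketch below is an added illustration with toy parameters: it performs the synthetic division of (a + w1 )···(a + wq ) by (a + w∗ ) over Z p and checks, purely in the exponent, that stripping g^{ f (a)} and taking the d-th root indeed leaves g^{1/(a+w∗ )}. The modular inverse pow(x, -1, p) needs Python 3.8 or later.

    import random

    p = 1019                                        # toy prime group order (assumption)

    def poly_mul(A, B):                             # coefficient lists, lowest degree first
        C = [0] * (len(A) + len(B) - 1)
        for i, ai in enumerate(A):
            for j, bj in enumerate(B):
                C[i + j] = (C[i + j] + ai * bj) % p
        return C

    def poly_eval(P, x):
        return sum(c * pow(x, i, p) for i, c in enumerate(P)) % p

    w = [random.randrange(p) for _ in range(5)]     # hash values w_1..w_q fixed in Setup
    w_star = random.choice([v for v in range(p) if v not in w])    # forged hash value w*

    num = [1]                                       # numerator (x + w_1)...(x + w_q)
    for wi in w:
        num = poly_mul(num, [wi, 1])

    # Synthetic division by (x + w*): numerator(x) = f(x)(x + w*) + d
    s = (-w_star) % p
    coeffs = num[::-1]                              # highest degree first
    quot = [coeffs[0]]
    for c in coeffs[1:-1]:
        quot.append((c + s * quot[-1]) % p)
    d = (coeffs[-1] + s * quot[-1]) % p             # remainder d = numerator(-w*), nonzero
    f = quot[::-1]                                  # f(x), lowest degree first

    a = random.choice([v for v in range(1, p) if (v + w_star) % p != 0])
    assert poly_eval(num, a) == (poly_eval(f, a) * (a + w_star) + d) % p

    # In the exponent: sigma has exponent numerator(a)/(a + w*); removing f(a)
    # and dividing by d leaves 1/(a + w*), i.e., the q-SDH solution g^{1/(a+w*)}.
    inv = lambda x: pow(x, -1, p)
    sigma_exp = poly_eval(num, a) * inv((a + w_star) % p) % p
    assert (sigma_exp - poly_eval(f, a)) * inv(d) % p == inv((a + w_star) % p)

The same polynomial-division step reappears in the ZSS+ proof below with numerator w(a + y1 )···(a + yq ) and divisor (a + z∗ ).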

5.6 ZSS+ Scheme

SysGen: The system parameter generation algorithm takes as input a security


parameter λ . It chooses a pairing group PG = (G, GT , g, p, e), selects a cryp-
tographic hash function H : {0, 1}∗ → Z p , and returns the system parameters
SP = (PG, H).
KeyGen: The key generation algorithm takes as input the system parameters
SP. It randomly chooses h ∈ G, α ∈ Z p , computes g1 = gα , and returns a
public/secret key pair (pk, sk) as follows:

pk = (g1 , h), sk = α.

Sign: The signing algorithm takes as input a message m ∈ {0, 1}∗ , the secret
key sk, and the system parameters SP. It chooses a random coin c ∈ {0, 1} and
returns the signature σm on m as

σm = (σ1 , σ2 ) = (c, h^{1/(α+H(m,c))} ).

We require that the signing algorithm always uses the same random coin c on
the message m.
Verify: The verification algorithm takes as input a message-signature pair
(m, σm ), the public key pk, and the system parameters SP. Let σm = (σ1 , σ2 ).
It accepts the signature if
 
e(σ2 , g1 gH(m,σ1 ) ) = e(h, g).

Theorem 5.6.0.1 Suppose the hash function H is a random oracle. If the q-SDH
problem is hard, the ZSS+ signature scheme is provably secure in the EU-CMA
security model with reduction loss L = 2.
Proof. Suppose there exists an adversary A who can (t, qs , ε)-break the signature
scheme in the EU-CMA security model. We construct a simulator B to solve the q-SDH
problem. Given as input a problem instance (g, g^a , g^{a^2} , · · · , g^{a^q} ) over the pairing
group PG, B controls the random oracle, runs A , and works as follows.
Setup. Let SP = PG and H be the random oracle controlled by the simulator. B
randomly chooses y1 , y2 , · · · , yq , w from Z p and sets the public key as

g1 = ga , h = gw(a+y1 )(a+y2 )···(a+yq ) ,

where the secret key α is equivalent to a. This requires q = qH , where qH denotes the
number of hash queries. The public key can be computed from the problem instance
and the chosen parameters.

H-Query. The adversary makes hash queries in this phase. B prepares a hash list
to record all queries and responses as follows, where the hash list is empty at the
beginning.
For a query on (mi , ci ) where ci ∈ {0, 1}, if mi is already in the hash list, B
responds to this query following the hash list. Otherwise, B randomly chooses xi ∈
{0, 1}, yi , zi ∈ Z p and sets H(mi , 0), H(mi , 1) as follows:

H(mi , 0) = yi , H(mi , 1) = zi   if xi = 0,
H(mi , 0) = zi , H(mi , 1) = yi   if xi = 1.

The simulator B responds to this query with H(mi , ci ) and adds (mi , xi , yi , zi ,
H(mi , 0), H(mi , 1)) to the hash list.
Query. The adversary makes signature queries in this phase. For a signature query
on mi , let the corresponding hashing tuple be (mi , xi , yi , zi , H(mi , 0), H(mi , 1)).
B computes σmi as

σmi = ( xi , h^{1/(α+H(mi ,xi ))} ) = ( xi , g^{w(a+y1 )···(a+yi−1 )(a+yi+1 )···(a+yq )} )

using g, g^a , · · · , g^{a^q} , w, y1 , y2 , · · · , yq .
According to the signature definition and simulation, we have ci = xi and H(mi , xi )
= yi such that
h^{1/(α+H(mi ,ci ))} = g^{w(a+y1 )···(a+yi−1 )(a+yi )(a+yi+1 )···(a+yq ) / (a+yi )} = g^{w(a+y1 )···(a+yi−1 )(a+yi+1 )···(a+yq )} .

Therefore, σmi is a valid signature of mi .


Forgery. The adversary returns a forged signature σm∗ on some m∗ that has not been
queried. Let σm∗ = (σ1∗ , σ2∗ ) = ( c∗ , h^{1/(α+H(m∗ ,c∗ ))} ), and the corresponding hashing
tuple be (m∗ , x∗ , y∗ , z∗ , H(m∗ , 0), H(m∗ , 1)). If c∗ = x∗ , abort. Otherwise, we have
c∗ ≠ x∗ and then

H(m∗ , c∗ ) = H(m∗ , 1 − x∗ ) = z∗ .
According to the signature definition and simulation, we have z∗ ∉ {y1 , y2 , · · · , yq }
and

σ2∗ = h^{1/(α+H(m∗ ,c∗ ))} = g^{w(a+y1 )(a+y2 )···(a+yq ) / (a+z∗ )} ,

which can be rewritten as g^{ f (a) + d/(a+z∗ ) } ,
where f (a) is a (q − 1)-degree polynomial function in a, and d is a nonzero integer.
The simulator B computes
( σ2∗ / g^{ f (a)} )^{1/d} = ( g^{ f (a)+d/(a+z∗ )} / g^{ f (a)} )^{1/d} = g^{1/(a+z∗ )}

and outputs (z∗ , g^{1/(a+z∗ )} ) as the solution to the q-SDH problem instance.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the key
generation, the responses to hash queries, and the signature generation. They are as
follows:

pk : a, w(a + y1 )(a + y2 ) · · · (a + yq ),
H(mi , 0), H(mi , 1) : (yi , zi ) or (zi , yi ),
ci : xi .

According to the setting of the simulation, where a, w, yi , zi , xi are randomly chosen,


it is easy to see that they are random and independent from the point of view of the
adversary. Therefore, the simulation is indistinguishable from the real attack.
Probability of successful simulation and useful attack. All queried signatures
are simulatable, and the forged signature is reducible if c∗ 6= x∗ . According to the
setting, we have

H(m∗ , 0) = y∗ , H(m∗ , 1) = z∗   if x∗ = 0,
H(m∗ , 0) = z∗ , H(m∗ , 1) = y∗   if x∗ = 1.

The adversary knows x∗ if it finds which hash value H(m∗ , 0) or H(m∗ , 1) is a root of
w(a+y1 ) · · · (a+yq ). Since w(a+y1 ) · · · (a+yq ), y∗ , z∗ are random and independent,
the adversary has no advantage in finding x∗ . Therefore, the probability of successful
simulation and useful attack is 1/2.
Advantage and time cost. Suppose the adversary breaks the scheme with (t, qs , ε)
after making qH queries to the random oracle. The advantage of solving the q-SDH
problem is therefore ε/2. Let Ts denote the time cost of the simulation. We have Ts =
O(qs qH ), which is mainly dominated by the signature generation. Therefore, B will
solve the q-SDH problem with (t + Ts , ε/2).
This completes the proof of the theorem. 

5.7 ZSS# Scheme

SysGen: The system parameter generation algorithm takes as input a security


parameter λ . It chooses a pairing group PG = (G, GT , g, p, e), selects a cryp-
tographic hash function H : {0, 1}∗ → Z p , and returns the system parameters
SP = (PG, H).

KeyGen: The key generation algorithm takes as input the system parameters
SP. It randomly chooses h ∈ G, α ∈ Z p , computes g1 = gα , and returns a
public/secret key pair (pk, sk) as follows:

pk = (g1 , h), sk = α.

Sign: The signing algorithm takes as input a message m ∈ {0, 1}∗ , the secret
key sk, and the system parameters SP. It chooses a random number r ∈ Z p and
returns the signature σm on m as

σm = (σ1 , σ2 ) = ( r, h^{1/(α+H(m,r))} ).

We require that the signing algorithm always uses the same random number r
on the message m.
Verify: The verification algorithm takes as input a message-signature pair
(m, σm ), the public key pk, and the system parameters SP. Let σm = (σ1 , σ2 ).
It accepts the signature if
 
e( σ2 , g1 · g^{H(m,σ1 )} ) = e(h, g).

Theorem 5.7.0.1 Suppose the hash function H is a random oracle. If the q-SDH
problem is hard, the ZSS# signature scheme is provably secure in the EU-CMA se-
curity model with reduction loss L = 1.
Proof. Suppose there exists an adversary A who can (t, qs , ε)-break the signature
scheme in the EU-CMA security model. We construct a simulator B to solve the q-SDH
problem. Given as input a problem instance (g, g^a , g^{a^2} , · · · , g^{a^q} ) over the pairing
group PG, B controls the random oracle, runs A , and works as follows.
Setup. Let SP = PG and H be the random oracle controlled by the simulator. B
randomly chooses y1 , y2 , · · · , yq , w from Z p and sets the public key as

g1 = ga , h = gw(a+y1 )(a+y2 )···(a+yq ) ,

where the secret key α is equivalent to a. This requires q = qs . The public key can
be computed from the problem instance and the chosen parameters.
H-Query. The adversary makes hash queries in this phase. B prepares a hash list
to record all queries and responses as follows, where the hash list is empty at the
beginning.
For a query on (m, r) from the adversary, if (m, r) is already in the hash list, it
responds to this query following the hash list. Otherwise, it randomly chooses z ∈ Z p
and sets H(m, r) = z. The simulator B responds to this query with H(m, r) and adds
(m, r, z, H(m, r), A ) to the hash list. Here, A in the tuple means that the query (m, r)
is generated by the adversary.
Query. The adversary makes signature queries in this phase. For a signature query
on mi , if there exists a tuple (mi , ri , yi , H(mi , ri ), B) in the hash list, the simulator
uses this tuple to generate the signature. Otherwise, B randomly chooses ri , yi ∈ Z p ,
sets H(mi , ri ) = yi , and adds (mi , ri , yi , H(mi , ri ), B) to the hash list. If (mi , ri ) was
ever generated by the adversary, the simulator aborts.
B computes σmi as

σmi = ( ri , h^{1/(α+H(mi ,ri ))} ) = ( ri , g^{w(a+y1 )···(a+yi−1 )(a+yi+1 )···(a+yq )} )

using g, g^a , · · · , g^{a^q} , w, y1 , y2 , · · · , yq .
According to the signature definition and simulation, we have H(mi , ri ) = yi such
that
h^{1/(α+H(mi ,ri ))} = g^{w(a+y1 )···(a+yi−1 )(a+yi )(a+yi+1 )···(a+yq ) / (a+yi )} = g^{w(a+y1 )···(a+yi−1 )(a+yi+1 )···(a+yq )} .

Therefore, σmi is a valid signature of mi .


Forgery. The adversary returns a forged signature σm∗ on some m∗ that has not been
queried. Let σm∗ = (σ1∗ , σ2∗ ) = ( r∗ , h^{1/(α+H(m∗ ,r∗ ))} ), and the corresponding hashing
tuple be (m∗ , r∗ , z∗ , H(m∗ , r∗ )).


According to the signature definition and simulation, we have that z∗ is randomly
chosen from Z p and is different from {y1 , y2 , · · · , yq }. Then
σ2∗ = h^{1/(α+H(m∗ ,r∗ ))} = g^{w(a+y1 )(a+y2 )···(a+yq ) / (a+z∗ )} ,

which can be rewritten as g^{ f (a) + d/(a+z∗ ) } ,
where f (a) is a (q − 1)-degree polynomial function in a, and d is a nonzero integer.
The simulator B computes
( σ2∗ / g^{ f (a)} )^{1/d} = ( g^{ f (a)+d/(a+z∗ )} / g^{ f (a)} )^{1/d} = g^{1/(a+z∗ )}

and outputs (z∗ , g^{1/(a+z∗ )} ) as the solution to the q-SDH problem instance.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the key
generation, the responses to hash queries, and the signature generation. They are as
follows:

pk : a, w(a + y1 )(a + y2 ) · · · (a + yq ),
H(m, r) : z if r is chosen by the adversary
H(mi , ri ) : yi if ri is chosen by the simulator
random number : ri .

According to the setting of the simulation, where a, w, z, yi , ri are randomly chosen,


it is easy to see that they are random and independent from the point of view of the
adversary. Therefore, the simulation is indistinguishable from the real attack.
Probability of successful simulation and useful attack. The forged signature is
reducible, and all queried signatures are simulatable when the random number ri
chosen in the signature query phase is different from all random numbers in the
hash query phase. A randomly chosen number is not equal to the numbers in the
hash query phase with probability 1 − qH /p. Therefore, the probability of successful
simulation and useful attack is (1 − qH /p)^{qs} ≈ 1.
Advantage and time cost. Suppose the adversary breaks the scheme with (t, qs , ε)
after making qH queries to the random oracle. The advantage of solving the q-SDH
problem is therefore approximately ε. Let Ts denote the time cost of the simula-
tion. We have Ts = O(qs^2 ), which is mainly dominated by the signature generation.
Therefore, B will solve the q-SDH problem with (t + Ts , ε).
This completes the proof of the theorem. 

5.8 BLSG Scheme

SysGen: The system parameter generation algorithm takes as input a security


parameter λ . It chooses a pairing group PG = (G, GT , g, p, e), selects a cryp-
tographic hash function H : {0, 1}∗ → G, and returns the system parameters
SP = (PG, H).
KeyGen: The key generation algorithm takes as input the system parameters
SP. It randomly chooses α ∈ Z p , computes h = gα , and returns a public/secret
key pair (pk, sk) as follows:

pk = h, sk = α.

Sign: The signing algorithm takes as input a message m ∈ {0, 1}∗ , the secret
key sk, and the system parameters SP. It returns the signature σm on m as

σm = (σ1 , σ2 , σ3 ) = ( H(m)^α , H(m||σm1 )^α , H(m||σm2 )^α ),

where σmi = (σ1 , σ2 , · · · , σi ). We call σi a block signature in this signature


scheme. The final signature σm is equivalent to σm3 .
Verify: The verification algorithm takes as input a message-signature pair
(m, σm ), the public key pk, and the system parameters SP. It accepts the signa-
ture if and only if

e(σ1 , g) = e( H(m), h ), and
e(σ2 , g) = e( H(m||σm1 ), h ), and
e(σ3 , g) = e( H(m||σm2 ), h ).

Theorem 5.8.0.1 Suppose the hash function H is a random oracle. If the CDH
problem is hard, the BLSG signature scheme is provably secure in the EU-CMA

security model with reduction loss 2√qH , where qH is the number of hash queries
to the random oracle.
Proof. Suppose there exists an adversary A who can (t, qs , ε)-break the signature
scheme in the EU-CMA security model. We construct a simulator B to solve the
CDH problem. Given as input a problem instance (g, ga , gb ) over the pairing group
PG, B controls the random oracle, runs A , and works as follows.
Setup. Let SP = PG and H be the random oracle controlled by the simulator. B
sets the public key as h = ga , where the secret key α is equivalent to a. The public
key is available from the problem instance.
H-Query. The adversary makes hash queries in this phase. Before receiving queries
from the adversary, B randomly chooses an integer c∗ ∈ {0, 1} and then chooses
another random value k∗ from the range [1, qH^{1−c∗ /2} ], where qH denotes the number
of hash queries to the random oracle. To be precise, the range is [1, qH ] if c∗ = 0,
and the range is [1, √qH ] if c∗ = 1. Then, B prepares a hash list to record all queries
and responses as follows, where the hash list is empty at the beginning.
For each tuple, the format is defined and described as follows:

(x, Ix , Tx , Ox , Ux , zx ),

x refers to the query input.


Ix refers to the identity, either the adversary A or the simulator B.
Tx refers to the type of the hash query.
Ox refers to the order index of the query within the same type.
Ux refers to the response to x, i.e., Ux = H(x).
zx refers to the secret for computing Ux .
Let the hash query be x or the adversary query H(x). If x is already in this hash
list, B responds to this query following the hash list. Otherwise, the simulator re-
sponds to this query as follows:

Response of object Ix . The object Ix is to identify who is the first to generate and
submit x to the random oracle. This distinction is meaningful because both the adversary
and the simulator can make queries to the random oracle, although the random oracle is
controlled by the simulator.
the adversary, we say that this query is made first by the adversary and set Ix = A .
Otherwise, we set Ix = B.
Take a new message m as an example. Suppose the adversary first queries H(m),
H(m||σm1 ) to the random oracle and then queries the signature of m to the simulator.
Notice that the signature generation on the message m requires the simulator to
know all the following values

H(m), H(m||σm1 ), H(m||σm2 ).

The hash list does not record how to respond to hash query H(m||σm2 ). Therefore,
the simulator must query H(m||σm2 ) to the random oracle first before generating
its signature. Notice that the adversary might query H(m||σm2 ) again for signature
verification after receiving the signature, but this hash query is first generated and
made by the simulator. Therefore, we define
• For x ∈ {m, m||σm1 }, the corresponding Ix for x is Ix = A .
• For x = m||σm2 , the corresponding Ix for x is Ix = B.
Response of object Tx . We assume “||” is a concatenation notation that will never
appear within messages. The simulator can also run the verification algorithm to ver-
ify whether each block signature is correct or not. Therefore, it is easy to distinguish
the input structure of all hash queries. We define four types of hash queries to the
random oracle.
Type i. x = m||σmi . Here, σmi denotes the first i block signatures of m, and i refers
to any integer i ∈ {0, 1, 2}. We assume m||σm0 = m for easy analysis.
Type D. x is a query different from the previous three types. For example, x =
m||Rm but Rm ≠ σmi for any i ∈ {0, 1, 2}, or x = m||σm^{i′} for any i′ ≥ 3.
The object Tx is set as follows. If Ix = B, then Tx = ⊥. Otherwise, suppose Ix = A .
Then, the simulator can run the verification algorithm to know which type x belongs
to and set

Tx = i if x belongs to Type i for any i ∈ {0, 1, 2}, and Tx = ⊥ otherwise (x belongs to Type D).

We emphasize that Tx and Ox are used to mark “valid” queries generated by


the adversary only. We define Type D because the adversary can generate any
arbitrary string as a query to the random oracle. The last type of queries will never
be used in the signature generation or the signature forgery.
Response of object Ox . The object Ox is set as follows.
• If Tx = ⊥, then Ox = ⊥.

• Otherwise, suppose Tx = c. Then, Ox = k if x is the k-th new query added to the


hash list in those queries where Tx = c.
To calculate the integer k for the new query x, the simulator must count how many
queries have been added to the hash list, where only those queries with the same
Tx will be counted. We emphasize that the setting of Ox needs to know the value Tx
first.
For the objects Ix , Tx , and Ox , there are only three cases in all tuples in the hash
list. They are

(Ix , Tx , Ox ) = (A , c, k), (Ix , Tx , Ox ) = (A , ⊥, ⊥), (Ix , Tx , Ox ) = (B, ⊥, ⊥),

where c ∈ {0, 1, 2} and k ∈ [1, qH ].


Response of objects (Ux , zx ). Let (Ix , Tx , Ox ) be the response to the query x ac-
cording to the above description. The simulator randomly chooses zx ∈ Z p and sets
the response Ux to x according to the chosen (c∗ , k∗ ) as follows:

Ux = H(x) = g^{b+zx}   if (Tx , Ox ) = (c∗ , k∗ ),
Ux = H(x) = g^{zx}     otherwise.

We denote by zx the secret for the response to x. In the following, if the query x
needs to be written as x = m||σmi , the corresponding secret will be rewritten as zim .
Finally, the simulator adds the defined tuple (x, Ix , Tx , Ox ,Ux , zx ) for the new query
x to the hash list. This completes the description of the hash query and its response.
For the tuple (x, Ix , Tx , Ox ,Ux , zx ), we have that H(x)α = Uxa = (ga )zx is com-
putable by the simulator for any query x as long as (Tx , Ox ) ≠ (c∗ , k∗ ). If (Tx , Ox ) =
(c∗ , k∗ ), we have
H(x)α = Uxa = (gb+zx )a = gab+azx .
For the tuple (x, Ix , Tx , Ox ,Ux , zx ), we denote by mi, j the message in the query
input x if (Tx , Ox ) = (i, j). We define

Mi = {mi,1 , mi,2 , · · · , mi,qi }

to be the message set with qi messages, where Mi contains all messages in those
tuples belonging to Type i (Tx = i). According to the setting of the oracle responses,
for those hash queries belonging to Type i for all i ∈ {0, 1, 2}, there are three mes-
sage sets M0 , M1 , M2 at most to capture all messages in these queries. All queried
messages mentioned above are described in Table 5.1, where the query associated
with the message mi, j is made before another query associated with the message
mi, j0 if j < j0 .
Without knowing the signature σm of m before making hash queries associated
with m, the adversary must make hash queries H(m), H(m||σm1 ), H(m||σm2 ) in se-
quence because σmi in the query m||σmi contains

H(m)α , H(m||σm1 )α , · · · , H(m||σmi−1 )α .



Table 5.1 Messages queried by the adversary where Tx ≠ ⊥


M0 = { m0,1 , m0,2 , m0,3 , · · · , · · · , · · · , · · · , m0,q0 }
M1 = { m1,1 , m1,2 , m1,3 , · · · , · · · , · · · , m1,q1 }
M2 = { m2,1 , m2,2 , m2,3 , · · · , · · · , m2,q2 }

For a message m, the adversary can query all three hash queries H(m), H(m||σm1 ),
H(m||σm2 ) or fewer, such as H(m), H(m||σm1 ), for this message before its signature
query. Therefore, the following inequality and subset relationships hold:

q2 ≤ q1 ≤ q0 , M2 ⊆ M1 ⊆ M0 .

Suppose the adversary can finally forge a valid signature of a message m∗ . The
adversary must at least make the hash query H(m∗ ||σm2 ∗ ) in order to compute
H(m∗ ||σm2 ∗ )α , which guarantees q2 ≥ 1. Since the number of hash queries is at most
qH , we have q0 < qH . We stress that the number q1 is adaptively decided by the
adversary. However, it must be
q1 < √qH or q1 ≥ √qH .

Query: The adversary makes signature queries in this phase. For a signature query
on the message m that is adaptively chosen by the adversary, the simulator computes
the signature σm as follows.
If m is never queried to the random oracle, the simulator works as follows from
i = 1 to i = 3, where i is increased by one each time.
• Add a query on m||σmi−1 and its response to the hash list (m||σm0 = m). According
to the setting of the random oracle simulation, the corresponding tuple is

( m||σm^{i−1} , B, ⊥, ⊥, g^{zm^{i−1}} , zm^{i−1} ).

• Compute the block signature σi as


σi = H( m||σm^{i−1} )^α = (g^a )^{zm^{i−1}} .

In the above signature generation, σi for all i ∈ {1, 2, 3} is computable by the simu-
lator, and the signature of σm is equal to σm3 = (σ1 , σ2 , σ3 ). Therefore, the signature
of m is computable by the simulator.
Suppose the message m was ever queried to the random oracle by the adver-
sary, where the following queries associated with the message m were made by the
adversary
m||σm0 , · · · , m||σmrm : rm ∈ {0, 1, 2}.
Here, the integer rm is adaptively decided by the adversary. Let (x, Ix , Tx , Ox ,Ux , zx )
be the tuple for x = m||σmrm . That is, Tx = rm .

• If (Tx , Ox ) = (c∗ , k∗ ), the simulator aborts because

H(m||σmrm ) = gb+zx , σrm +1 = H(m||σmrm )α = Uxa = (gb+zx )a = gab+azx ,

which cannot be computed by the simulator, and thus the simulator fails to sim-
ulate the signature for the adversary, in particular the block signature σrm +1 .
• Otherwise, (Tx , Ox ) ≠ (c∗ , k∗ ). Then, σrm +1 is computable by the simulator be-
cause
H(m||σmrm ) = gzx , σrm +1 = H(m||σmrm )α = (ga )zx .
Similarly to the case that m is never queried to the random oracle, the simulator
can generate and make hash queries

H(m||σmrm +1 ), · · · , H(m||σm2 )

to the random oracle. Finally, it computes the signature σm for the adversary.
Therefore, σm is a valid signature of the message m. This completes the descrip-
tion of the signature generation.
Forgery: The adversary returns a forged signature σm∗ on some m∗ that has not
been queried. Since the adversary cannot make a signature query on some m∗ , we
have that the following queries to the random oracle were made by the adversary:

m∗ ||σm0 ∗ , m∗ ||σm1 ∗ , m∗ ||σm2 ∗ .

The solution to the problem instance does not have to be associated with the
forged message m∗ . The simulator solves the hard problem as follows.
• The simulator searches the hash list to find the first tuple (x, Ix , Tx , Ox ,Ux , zx )
satisfying
(Tx , Ox ) = (c∗ , k∗ ).
If this tuple does not exist, abort. Otherwise, let the message mc∗ ,k∗ in this tuple
be denoted by m̂ for short. That is, mc∗ ,k∗ = m̂ and we have m̂ ∈ Mc∗ . Note that
m̂ may be different from m∗ . This tuple is therefore equivalent to
(x, Ix , Tx , Ox , Ux , zx ) = ( m̂||σm̂^{c∗} , A , c∗ , k∗ , g^{b+zm̂^{c∗}} , zm̂^{c∗} ).

That is, H(m̂||σm̂^{c∗} ) = g^{b+zm̂^{c∗}} contains the instance g^b .
• The simulator searches the hash list to find the second tuple (x′ , Ix′ , Tx′ , Ox′ , Ux′ , zx′ ),
where x′ is the query about the message m̂ and Tx′ = c∗ + 1. If this tuple does not
exist, abort. Otherwise, we have m̂ ∈ Mc∗ +1 and

x′ = m̂||σm̂^{c∗ +1} ,

where σm̂^{c∗ +1} contains σ_{c∗ +1} = H(m̂||σm̂^{c∗} )^α .
• The simulator computes and outputs
H(m̂||σm̂^{c∗} )^α / (g^a )^{zm̂^{c∗}} = g^{ab+a·zm̂^{c∗}} / (g^a )^{zm̂^{c∗}} = g^{ab}

as the solution to the CDH problem instance.

This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been ex-
plained above. The randomness of the simulation includes all random numbers in
the key generation and the responses to hash queries. They are a, zx , b + zx′ . Accord-
ing to the setting of the simulation, where a, b, zi are randomly chosen, it is easy to
see that they are random and independent from the point of view of the adversary.
Therefore, the simulation is indistinguishable from the real attack. In particular, the
adversary does not know which hash query is answered with b. That is, c∗ is random
and unknown to the adversary.
Probability of successful simulation and useful attack. According to the as-
sumption, the adversary will break the signature scheme with advantage ε. The ad-
versary will make the hash query H(m∗ ||σm2 ∗ ) with probability at least ε, such that
m∗ ∈ M2 and thus q2 ≥ 1. The number of hash query is qH . Since q0 + q1 + q2 = qH ,
we have q0 < qH .
The reduction is successful if the simulator does not abort in either the query
phase or the forgery phase. According to the setting of the simulation, the reduction
is successful if m̂ ∈ Mc∗ and m̂ ∈ Mc∗ +1 .
• If c∗ = 0, we have m̂ ∈ M0 and m̂ ∈ M1 . In this case, k∗ ∈ [1, qH ] and |M0 | =
q0 < qH . We have that any message in M0 will be chosen as m̂ with probability
1/qH according to the simulation. Since M1 ⊆ M0 , the success probability is
q1 /qH .

• If c∗ = 1, we have m̂ ∈ M1 and m̂ ∈ M2 . In this case, k∗ ∈ [1, √qH ] and
|M1 | = q1 . If q1 < √qH , we have that any message in M1 will be chosen as
m̂ with probability 1/√qH according to the simulation. Since M2 ⊆ M1 , the suc-
cess probability is q2 /√qH .
Let Pr[suc] be the probability of successful simulation and useful attack when
q2 ≥ 1. We calculate the following probability of success:

Pr[suc] = Pr[suc|c∗ = 0] Pr[c∗ = 0] + Pr[suc|c∗ = 1] Pr[c∗ = 1]
= Pr[m̂ ∈ M0 ∩ M1 |c∗ = 0] Pr[c∗ = 0] + Pr[m̂ ∈ M1 ∩ M2 |c∗ = 1] Pr[c∗ = 1]
= (1/2) Pr[m̂ ∈ M0 ∩ M1 |c∗ = 0] + (1/2) Pr[m̂ ∈ M1 ∩ M2 |c∗ = 1]
≥ (1/2) Pr[m̂ ∈ M0 ∩ M1 |c∗ = 0, q1 ≥ √qH ] Pr[q1 ≥ √qH ]
  + (1/2) Pr[m̂ ∈ M1 ∩ M2 |c∗ = 1, q1 < √qH ] Pr[q1 < √qH ]
= (1/(2√qH )) Pr[q1 ≥ √qH ] + (1/(2√qH )) Pr[q1 < √qH ]
= 1/(2√qH ).

Therefore, the success probability is at least 1/(2√qH ) for qH queries.
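For a sense of scale, assume an illustrative bound of qH = 2^30 hash queries (not a value fixed by the theorem): a reduction that must guess a single hash query loses a factor qH, whereas the bound above loses only 2√qH.

# Numeric comparison of reduction losses with an assumed query bound (illustrative only).
import math

qH = 2**30                                 # assumed adversarial hash-query bound
print("guessing one query :", qH)          # loss qH = 1073741824
print("BLSG bound 2*sqrt  :", 2 * math.isqrt(qH))   # loss 2*sqrt(qH) = 65536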

Advantage and time cost. Suppose the adversary breaks the scheme with
(t, qs , ε) after making qH queries to the random oracle. The advantage of solving

the CDH problem is therefore ε/(2√qH ). Let Ts denote the time cost of the sim-
ulation. We have Ts = O(qH + qs ), which is mainly dominated by the oracle re-
sponse and the signature generation. Therefore, B will solve the CDH problem
with (t + Ts , ε/(2√qH )).
This completes the proof of the theorem. 
Chapter 6
Digital Signatures Without Random Oracles

In this chapter, we mainly introduce signature schemes under the q-SDH hardness
assumption and the CDH assumption. We start by introducing the Boneh-Boyen
short signature scheme [21] and then its variant, namely the Gentry scheme, which
is modified from his IBE scheme [47]. Most stateless signature schemes without
random oracles must produce signatures at least 320 bits in length for 80-bit security.
Then, we introduce the GMS scheme [54], which achieves a less than 320-bit length
of signature in the stateful setting. The other two signature schemes are the Waters
scheme modified from his IBE scheme [101] and the Hohenberger-Waters scheme
[61]. The Waters scheme requires a long public key, and the Hohenberger-Waters
scheme addressed this problem in the stateful setting. The given schemes and/or
proofs may be different from the original ones.

6.1 Boneh-Boyen Scheme

SysGen: The system parameter generation algorithm takes as input a security


parameter λ . It chooses a pairing group PG = (G, GT , g, p, e) and returns the
system parameters SP = PG.
KeyGen: The key generation algorithm takes as input the system parameters
SP. It randomly chooses h ∈ G, α, β ∈ Z p , computes g1 = gα , g2 = gβ , and
returns a public/secret key pair (pk, sk) as follows:

pk = (g1 , g2 , h), sk = (α, β ).

Sign: The signing algorithm takes as input a message m ∈ Z p , the secret key
sk, and the system parameters SP. It randomly chooses r ∈ Z p and returns the
signature σm on m as

σm = (σ1 , σ2 ) = ( r, h^{1/(α+mβ +r)} ).

We require that the signing algorithm always uses the same random number r
for signature generation on the message m.
Verify: The verification algorithm takes as input a message-signature pair
(m, σm ), the public key pk, and the system parameters SP. Let σm = (σ1 , σ2 ).
It accepts the signature if
 
e( σ2 , g1 g2^m g^{σ1} ) = e(g, h).

Theorem 6.1.0.1 If the q-SDH problem is hard, the Boneh-Boyen signature scheme
is provably secure in the EU-CMA security model with reduction loss L = 2.
Proof. Suppose there exists an adversary A who can (t, qs , ε)-break the signature
scheme in the EU-CMA security model. We construct a simulator B to solve the
q-SDH problem. Given as input a problem instance (g, g^a , g^{a^2} , · · · , g^{a^q} ) over the
pairing group PG, B runs A and works as follows.
B chooses a secret bit µ ∈ {0, 1} and programs the reduction in two different
ways.
• The reduction for µ = 0 is programmed as follows.
Setup. Let SP = PG. B randomly chooses y, w0 , w1 , w2 , · · · , wq from Z p and sets
the public key as

g1 = ga , g2 = gy , h = gw0 (a+w1 )(a+w2 )···(a+wq ) ,

where α = a, β = y and we require q = qs . The public key can be computed from


the problem instance and the chosen parameters.
Query. The adversary makes signature queries in this phase. For the i-th signa-
ture query on mi , B computes the signature σmi as
 
σmi = (σ1 , σ2 ) = wi − ymi , gw0 (a+w1 )···(a+wi−1 )(a+wi+1 )···(a+wq )

q
using g, ga , · · · , ga , y, w0 , w1 , · · · , wq .
Let ri = wi − ymi . According to the signature definition and simulation, we have
h^{1/(α+mi β +ri )} = g^{w0 (a+w1 )(a+w2 )···(a+wq ) / (a+mi y+wi −ymi )} = g^{w0 (a+w1 )···(a+wi−1 )(a+wi+1 )···(a+wq )} .

Therefore, σmi is a valid signature of mi .


Forgery. The adversary returns a forged signature σm∗ on some m∗ that has not
been queried. Let σm∗ = (σ1∗ , σ2∗ ) = ( r∗ , h^{1/(α+m∗ β +r∗ )} ). According to the signature

definition and simulation, if m∗ β + r∗ = mi β + ri for some queried signature


of mi , abort. Otherwise, let c = m∗ β + r∗ . We have c ≠ wi = mi β + ri for all
i ∈ [1, qs ], and then

σ2∗ = h^{1/(α+m∗ β +r∗ )} = g^{w0 (a+w1 )(a+w2 )···(a+wq ) / (a+c)} ,

which can be rewritten as g^{ f (a) + d/(a+c) } ,
where f (a) is a (q − 1)-degree polynomial function, and d is a nonzero integer.
The simulator B computes
( σ2∗ / g^{ f (a)} )^{1/d} = ( g^{ f (a)+d/(a+c)} / g^{ f (a)} )^{1/d} = g^{1/(a+c)}

and outputs ( c, g^{1/(a+c)} ) as the solution to the q-SDH problem instance.

• The reduction for µ = 1 is programmed as follows.


Setup. Let SP = PG. B randomly chooses x, w0 , w1 , w2 , · · · , wq from Z p and sets
the public key as

g1 = gx , g2 = ga , h = gw0 (a+w1 )(a+w2 )···(a+wq ) ,

where α = x, β = a and we require q = qs . The public key can be computed from


the problem instance and the chosen parameters.
Query. The adversary makes signature queries in this phase. For the i-th signa-
ture query on mi ≠ 0, B computes the signature σmi as

σmi = (σ1 , σ2 ) = ( wi mi − x, g^{(w0 /mi )(a+w1 )···(a+wi−1 )(a+wi+1 )···(a+wq )} )

using g, g^a , · · · , g^{a^q} , x, w0 , w1 , · · · , wq .
Let ri = wi mi − x. According to the signature definition and simulation, we have

h^{1/(α+mi β +ri )} = g^{w0 (a+w1 )(a+w2 )···(a+wq ) / (x+mi a+wi mi −x)} = g^{(w0 /mi )(a+w1 )···(a+wi−1 )(a+wi+1 )···(a+wq )} .

Therefore, σmi is a valid signature of mi . Note that if mi = 0, B can randomly


choose ri ∈ Z p to generate the signature, because α + mi β + ri = x + ri , which is
computable by the simulator.
Forgery. The adversary returns a forged signature σm∗ on some m∗ that has not
been queried. Let σm∗ = (σ1∗ , σ2∗ ) = ( r∗ , h^{1/(α+m∗ β +r∗ )} ). According to the signature
definition and simulation, if m∗ β + r∗ ≠ mi β + ri for all i ∈ [1, qs ], abort. Other-
wise, we have
m∗ β + r∗ = mi β + ri

for some i. That is, m∗ a + r∗ = mi a + ri .


The simulator B computes
a = (ri − r∗ )/(m∗ − mi )
and solves the q-SDH problem immediately using a.
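Concretely, the µ = 1 branch ends with one modular inversion: from m∗ a + r∗ = mi a + ri mod p, the simulator recovers a directly. A tiny self-contained check with illustrative values (assumed, not from the scheme):

# Recovering a from a colliding exponent, as in the mu = 1 branch (illustrative values).
p = 2**61 - 1
a = 123456789                               # the unknown exponent (known here only to set up the test)
m_i, r_i = 42, 1000                         # a queried message and its signature randomness
m_star = 7
r_star = (m_i * a + r_i - m_star * a) % p   # forgery satisfying m*·a + r* = m_i·a + r_i

recovered = (r_i - r_star) * pow((m_star - m_i) % p, -1, p) % p
assert recovered == a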

This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the key
generation and the signature generation. Let h = gγ . They are α, β , γ, r1 , r2 , · · · , rqs
equivalent to

( a, y, w0 (a + w1 ) · · · (a + wq ), w1 − ym1 , w2 − ym2 , · · · , wqs − ymqs )   if µ = 0,
( x, a, w0 (a + w1 ) · · · (a + wq ), w1 m1 − x, w2 m2 − x, · · · , wqs mqs − x )   if µ = 1.

According to the setting of the simulation, where a, x, y, w0 , wi are randomly chosen,


it is easy to see that they are random and independent from the point of view of the
adversary no matter whether µ = 0 or µ = 1. Therefore, the simulation is indistin-
guishable from the real attack, and the adversary has no advantage in guessing µ
from the simulation.
Probability of successful simulation and useful attack. There is no abort in the
simulation. Let the random numbers in the queried signature and the forged signa-
ture be ri and r∗ , respectively. We have
• Case µ = 0. The forged signature is reducible if m∗ β + r∗ ≠ mi β + ri for all i.
• Case µ = 1. The forged signature is reducible if m∗ β + r∗ = mi β + ri for some i.
Since the two simulations are indistinguishable and the simulator randomly chooses
one simulation, the forged signature is therefore reducible with success probability
Pr[Success] described as follows:

Pr[Success]
= Pr[Success|µ = 0] Pr[µ = 0] + Pr[Success|µ = 1] Pr[µ = 1]
= Pr[m∗ β + r∗ ≠ mi β + ri ] Pr[µ = 0] + Pr[m∗ β + r∗ = mi β + ri ] Pr[µ = 1]
= (1/2) ( Pr[m∗ β + r∗ ≠ mi β + ri ] + Pr[m∗ β + r∗ = mi β + ri ] )
= 1/2 .
Therefore, the probability of successful simulation and useful attack is 1/2.
Advantage and time cost. Suppose the adversary breaks the scheme with (t, qs , ε).
The advantage of solving the q-SDH problem is ε/2. Let Ts denote the time cost of
the simulation. We have Ts = O(qs^2 ), which is mainly dominated by the signature


generation. Therefore, B will solve the q-SDH problem with (t + Ts , ε/2).
This completes the proof of the theorem. 

6.2 Gentry Scheme

SysGen: The system parameter generation algorithm takes as input a security


parameter λ . It chooses a pairing group PG = (G, GT , g, p, e) and returns the
system parameters SP = PG.
KeyGen: The key generation algorithm takes as input the system parameters
SP. It randomly chooses α, β ∈ Z p , computes g1 = gα , g2 = gβ , and returns a
public/secret key pair (pk, sk) as follows:

pk = (g1 , g2 ), sk = (α, β ).

Sign: The signing algorithm takes as input a message m ∈ Z p , the secret key
sk, and the system parameters SP. It randomly chooses r ∈ Z p and computes
the signature σm on m as
σm = (σ1 , σ2 ) = ( r, g^{(β −r)/(α−m)} ).

We require that the signing algorithm always uses the same random number r
for signature generation on the message m.
Verify: The verification algorithm takes as input a message-signature pair
(m, σm ), the public key pk, and the system parameters SP. Let σm = (σ1 , σ2 ).
It accepts the signature if
   
e( σ2 , g1 g^{−m} ) = e( g2 g^{−σ1} , g ).

Theorem 6.2.0.1 If the q-SDH problem is hard, the Gentry signature scheme is
provably secure in the EU-CMA security model with reduction loss about L = 1.
Proof. Suppose there exists an adversary A who can (t, qs , ε)-break the signature
scheme in the EU-CMA security model. We construct a simulator B to solve the q-
SDH problem. Given as input a problem instance (g, g^a , g^{a^2} , · · · , g^{a^q} ) over the pairing
group PG, B runs A and works as follows.
Setup. Let SP = PG. B randomly chooses w0 , w1 , w2 , · · · , wq from Z p and sets the
public key as
g1 = g^a , g2 = g^{wq a^q + wq−1 a^{q−1} + ··· + w1 a + w0} ,

where α = a, β = f (a) = wq a^q + wq−1 a^{q−1} + · · · + w1 a + w0 and we require q =


qs + 1. Here, f (a) ∈ Z p [a] is a q-degree polynomial function in a. The public key
can be computed from the problem instance and the chosen parameters.
Query. The adversary makes signature queries in this phase. For the i-th signature
query on mi , B computes the signature σmi as
 
σmi = (σ1 , σ2 ) = f (mi ), g fmi (a) ,

where fmi (x) is a (q − 1)-degree polynomial defined as

fmi (x) = ( f (x) − f (mi )) / (x − mi ).
We have that σmi is computable using g, g^a , · · · , g^{a^q} , fmi (x), f (x). Let ri = f (mi ).
According to the signature definition and simulation, we have

g^{(β −ri )/(α−mi )} = g^{( f (a)− f (mi ))/(a−mi )} = g^{ fmi (a)} .

Therefore, σmi is a valid signature of mi .


Forgery. The adversary returns a forged signature σm∗ on some m∗ that has not been
queried. Let σm∗ = (σ1∗ , σ2∗ ) = ( r∗ , g^{(β −r∗ )/(α−m∗ )} ). According to the signature definition
and simulation, if f (m∗ ) = r∗ , abort. Otherwise, we have r∗ ≠ f (m∗ ) and then

σ2∗ = g^{(β −r∗ )/(α−m∗ )} = g^{( f (a)−r∗ )/(a−m∗ )} ,

which can be rewritten as g^{ f ∗ (a) + d/(a−m∗ ) } ,

where f ∗ (a) is a (q − 1)-degree polynomial function in a, and d is a nonzero integer.
The simulator B computes

( σ2∗ / g^{ f ∗ (a)} )^{1/d} = ( g^{ f ∗ (a)+d/(a−m∗ )} / g^{ f ∗ (a)} )^{1/d} = g^{1/(a−m∗ )}

and outputs ( −m∗ , g^{1/(a−m∗ )} ) as the solution to the q-SDH problem instance.

This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been ex-
plained above. The randomness of the simulation includes all random numbers in
the key generation and the signature generation. They are

a, f (a), f (m1 ), f (m2 ), · · · , f (mqs ).



The simulation is indistinguishable from the real attack because the random num-
bers are random and independent following the analysis below.
Probability of successful simulation and useful attack. There is no abort in
the simulation. The forged signature is reducible if r∗ 6= f (m∗ ). To prove that the
adversary has no advantage in computing f (m∗ ), we only need to prove that the
following integers are random and independent:
 
(α, β , r1 , · · · , rqs , f (m∗ )) = a, f (a), f (m1 ), f (m2 ), · · · , f (mqs ), f (m∗ ) .

This can be rewritten as

f (a) = wq a^q + · · · + w1 a + w0 ,
f (m1 ) = wq m1^q + · · · + w1 m1 + w0 ,
f (m2 ) = wq m2^q + · · · + w1 m2 + w0 ,
· · ·
f (mqs ) = wq mqs^q + · · · + w1 mqs + w0 ,
f (m∗ ) = wq m∗^q + · · · + w1 m∗ + w0 .

Since w0 , w1 , · · · , wq are all random and independent, the randomness property holds
because the determinant of the coefficient matrix is nonzero:
| a^q      a^{q−1}     · · ·   a      1 |
| m1^q     m1^{q−1}    · · ·   m1     1 |
| m2^q     m2^{q−1}    · · ·   m2     1 |
|  · · ·                                |
| mqs^q    mqs^{q−1}   · · ·   mqs    1 |
| m∗^q     m∗^{q−1}    · · ·   m∗     1 |

= ∏_{1≤i< j≤q+1} (xi − x j ) ,   where xi , x j ∈ {a, m1 , · · · , mqs , m∗ }.

Therefore, any adaptive choice of r∗ satisfies r∗ = f (m∗ ) only with probability 1/p, and
the probability of successful simulation and useful attack is 1 − 1/p ≈ 1 for any adaptive
choice of r∗ from the adversary.
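The independence argument is exactly the statement that a Vandermonde system built from the q + 1 distinct points a, m1 , . . . , mqs , m∗ is solvable, because its determinant is nonzero modulo p. The following self-contained Python sketch (toy parameters, with a generic prime standing in for the group order) verifies this numerically by computing the determinant with modular Gaussian elimination and comparing it with the product formula.

# Toy check that the Vandermonde determinant over Z_p is nonzero for distinct points.
import random

p = 2**61 - 1
pts = random.sample(range(p), 6)        # stands in for {a, m1, ..., m_qs, m*} (illustrative)
n = len(pts)
M = [[pow(x, n - 1 - j, p) for j in range(n)] for x in pts]   # rows (x^{n-1}, ..., x, 1)

def det_mod_p(mat):
    """Determinant modulo p via Gaussian elimination with modular inverses."""
    mat = [row[:] for row in mat]
    det = 1
    for col in range(len(mat)):
        pivot = next(r for r in range(col, len(mat)) if mat[r][col] != 0)
        if pivot != col:
            mat[col], mat[pivot] = mat[pivot], mat[col]
            det = -det % p
        det = det * mat[col][col] % p
        inv = pow(mat[col][col], -1, p)
        for r in range(col + 1, len(mat)):
            factor = mat[r][col] * inv % p
            mat[r] = [(mat[r][c] - factor * mat[col][c]) % p for c in range(len(mat))]
    return det % p

expected = 1
for i in range(n):
    for j in range(i + 1, n):
        expected = expected * (pts[i] - pts[j]) % p

assert det_mod_p(M) == expected != 0    # distinct points => the system is invertible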
Advantage and time cost. Suppose the adversary breaks the scheme with
(t, qs , ε). Let Ts denote the time cost of the simulation. We have Ts = O(qs^2 ), which is
mainly dominated by the signature generation. Therefore, B will solve the q-SDH
problem with (t + Ts , ε).
This completes the proof of the theorem. 

6.3 GMS Scheme

SysGen: The system parameter generation algorithm takes as input a security


parameter λ . It chooses a pairing group PG = (G, GT , g, p, e) and returns the
system parameters SP = PG.
KeyGen: The key generation algorithm takes as input the system parameters
SP. It randomly chooses u0,1 , u1,1 , u0,2 , u1,2 , · · · , u0,n , u1,n ∈ G, α ∈ Z p , com-
putes g1 = gα , selects the upper bound on the number of signatures, denoted
by N, and returns a public/secret key pair (pk, sk) as follows:

pk = (g1 , u0,1 , u1,1 , u0,2 , u1,2 , · · · , u0,n , u1,n , N), sk = (α, c),

where c is a counter initialized with c = 0.


Sign: The signing algorithm takes as input a message m ∈ {0, 1}n , the secret
key sk, and the system parameters SP. It increases its counter by one, c := c + 1.
If c > N, abort. Otherwise, it chooses a random bit b ∈ {0, 1} and returns the
signature σm on m as

σm = (σ1 , σ2 , σ3 ) = ( ( ∏_{i=1}^{n} u_{m[i],i} )^{1/(α+c|b)} , c, b ).

We require that the algorithm always uses the same bit b for the same message.
Here, “|” denotes bitwise concatenation.
Verify: The verification algorithm takes as input a message-signature pair
(m, σm ), the public key pk, and the system parameters SP. Let σm =
(σ1 , σ2 , σ3 ). It accepts the signature if σ2 ≤ N, σ3 ∈ {0, 1} and

e( σ1 , g1 g^{σ2 |σ3} ) = e( ∏_{i=1}^{n} u_{m[i],i} , g ).

Theorem 6.3.0.1 If the q-SDH problem is hard, the GMS signature scheme is prov-
ably secure in the EU-CMA security model with reduction loss at most L = 2n.
Proof. Suppose there exists an adversary A who can (t, qs , ε)-break the signature
scheme in the EU-CMA security model. We construct a simulator B to solve the q-
SDH problem. Given as input a problem instance (g, g^a , g^{a^2} , · · · , g^{a^q} ) over the pairing
group PG, B runs A and works as follows.
B chooses a secret bit µ ∈ {0, 1} and programs the reduction in two different
ways. If µ = 0, the simulator guesses that the adversary will forge a signature with
a tuple (c∗ , b∗ ) that was used by the simulator in the signature generation. If µ = 1,
the simulator guesses that the adversary will forge a signature with a tuple (c∗ , b∗ )
that was never used by the simulator in the signature generation.

• The reduction for µ = 0 is programmed as follows.


Setup. Let SP = PG. B randomly chooses

w0,1 , w1,1 , w0,2 , w1,2 , · · · , w0,n , w1,n ∈ Z p ,


d0,1 , d0,2 , d0,3 , · · · , d0,qs ∈ {0, 1},
k1 , k2 , · · · , kqs ∈ [1, n],

and sets d1,c = 1 − d0,c for all c ∈ [1, qs ]. Let F(x) be the polynomial of degree
2qs defined as
F(x) = ∏_{c=1}^{qs} (x + c|0)(x + c|1) = ∏_{c=1}^{qs} (x + c|d0,c )(x + c|d1,c ).

For simplicity, we set polynomials

F0,i (x) = F(x), F1,i (x) = F(x), i ∈ [1, n].

For all c ∈ [1, qs ], the polynomials F0,kc (x), F1,kc (x) will be replaced by

F0,kc (x) := F(x)/(x + c|d1,c ) , F1,kc (x) := F(x)/(x + c|d0,c ) .

After all replacements, for all c ∈ [1, qs ], we have that


– F0,i (x), F1,i (x) for any i ∈ [1, n]\{kc } include the roots c|d0,c and c|d1,c .
– F0,kc (x) does not include the root c|d1,c but only c|d0,c .
– F1,kc (x) does not include the root c|d0,c but only c|d1,c .
The simulator sets the public key as

g1 = ga , u0,i = gw0,i ·F0,i (a) , u1,i = gw1,i ·F1,i (a) , i ∈ [1, n],

where α = a and we require q = 2qs . The public key can be computed from the
problem instance and the chosen parameters.
Query. The adversary makes signature queries in this phase. For a signature
query on m, let the updated counter for this signature generation be c, and m[i]
be the i-th bit of message m. Then, we have that the polynomials Fm[i],i (x) for all
i ∈ [1, n] including Fm[kc ],kc (x) contain the root c|dm[kc ],c . B sets b = dm[kc ],c . Let
Fm (x) be
Fm (x) = ( ∑_{i=1}^{n} wm[i],i · Fm[i],i (x) ) / (x + c|b) .
We have that Fm (x) is a polynomial of degree at most (q − 1).
B computes the signature σm as
 
σm = (σ1 , σ2 , σ3 ) = ( g^{Fm (a)} , c, b ),

using g, g^a , · · · , g^{a^q} , Fm (x).
According to the signature definition and simulation, we have

( ∏_{i=1}^{n} u_{m[i],i} )^{1/(α+c|b)} = ( g^{∑_{i=1}^{n} wm[i],i ·Fm[i],i (a)} )^{1/(a+c|b)} = g^{Fm (a)} .

Therefore, σm is a valid signature of m.


Forgery. The adversary returns a forged signature σm∗ on some m∗ that has not
been queried. Let σm∗ be
σm∗ = (σ1∗ , σ2∗ , σ3∗ ) = ( ( ∏_{i=1}^{n} u_{m∗ [i],i} )^{1/(α+c∗ |b∗ )} , c∗ , b∗ ).

The simulator continues the simulation if


– The tuple (c∗ , b∗ ) was ever used by the simulator in the signature generation
for a message, denoted by m.
– m[kc∗ ] 6= m∗ [kc∗ ], where m[kc∗ ] and m∗ [kc∗ ] are the kc∗ -th bit of messages m
and m∗ , respectively.
According to the signature definition and simulation, we have that only one of
the polynomials F0,kc∗ (x), F1,kc∗ (x) contains the root c∗ |b∗ , and this polynomial
is Fm[kc∗ ],kc∗ (x). That is, the polynomial Fm∗ [kc∗ ],kc∗ (x) does not contain the root
c∗ |b∗ . Therefore,
Fm∗ (x) = ( ∑_{i=1}^{n} wm∗ [i],i · Fm∗ [i],i (x) ) / (x + c∗ |b∗ ) ,
can be rewritten as
f (x) + z/(x + c∗ |b∗ ) ,
where f (x) is a (q − 1)-degree polynomial function in x, and z is a nonzero inte-
ger.
The simulator B computes
( σ1∗ / g^{ f (a)} )^{1/z} = ( g^{ f (a)+z/(a+c∗ |b∗ )} / g^{ f (a)} )^{1/z} = g^{1/(a+c∗ |b∗ )}

and outputs ( c∗ |b∗ , g^{1/(a+c∗ |b∗ )} ) as the solution to the q-SDH problem instance.

• The reduction for µ = 1 is programmed as follows.


Setup. Let SP = PG. B randomly chooses

w0,1 , w1,1 , w0,2 , w1,2 , · · · , w0,n , w1,n ∈ Z p ,


b1 , b2 , b3 , · · · , bqs ∈ {0, 1}.

Let F(x) be the polynomial of degree qs defined as


F(x) = ∏_{c=1}^{qs} (x + c|bc ).

The simulator sets the public key as

g1 = ga , u0,i = gw0,i ·F(a) , u1,i = gw1,i ·F(a) , i ∈ [1, n],

where α = a and we require q = qs . The public key can be computed from the
problem instance and the chosen parameters.
Query. The adversary makes signature queries in this phase. For a signature
query on m, let the updated counter for this signature generation be c. B sets
b = bc . Let Fm (x) be

Fm (x) = ( ∑_{i=1}^{n} wm[i],i · Fm[i],i (x) ) / (x + c|b) .

We have that Fm (x) is a polynomial of degree at most q − 1.


B computes the signature σm as
 
σm = (σ1 , σ2 , σ3 ) = ( g^{Fm (a)} , c, b ),

using g, g^a , · · · , g^{a^q} , Fm (x).
According to the signature definition and simulation, we have

( ∏_{i=1}^{n} u_{m[i],i} )^{1/(α+c|b)} = ( g^{∑_{i=1}^{n} wm[i],i ·Fm[i],i (a)} )^{1/(a+c|b)} = g^{Fm (a)} .

Therefore, σm is a valid signature of m.


Forgery. The adversary returns a forged signature σm∗ = (σ1∗ , σ2∗ , σ3∗ ) on some
m∗ that has not been queried. Let σm∗ be
σm∗ = (σ1∗ , σ2∗ , σ3∗ ) = ( ( ∏_{i=1}^{n} u_{m∗ [i],i} )^{1/(α+c∗ |b∗ )} , c∗ , b∗ ).

The simulator continues the simulation if the tuple (c∗ , b∗ ) was never used by
the simulator in the signature generation for any message, so that the polynomial
F(x) does not contain the root c∗ |b∗ . Therefore,

Fm∗ (x) = ( ∑_{i=1}^{n} wm∗ [i],i · F(x) ) / (x + c∗ |b∗ ) ,

which can be rewritten as

f (x) + z/(x + c∗ |b∗ ) ,

where f (x) is a (q − 1)-degree polynomial function in x, and z is a nonzero inte-


ger.
The simulator B computes
( σ1∗ / g^{ f (a)} )^{1/z} = ( g^{ f (a)+z/(a+c∗ |b∗ )} / g^{ f (a)} )^{1/z} = g^{1/(a+c∗ |b∗ )}

and outputs ( c∗ |b∗ , g^{1/(a+c∗ |b∗ )} ) as the solution to the q-SDH problem instance.

This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the key
generation and the signature generation. They are

a, w0,i · F0,i (a), w1,i · F1,i (a), dm[kc ],c   if µ = 0,
a, w0,i · F(a), w1,i · F(a), bc   if µ = 1.

According to the setting of the simulation, where a, w0,i , w1,i , dm[kc ],c , bc are ran-
domly chosen, it is easy to see that they are random and independent from the point
of view of the adversary no matter whether µ = 0 or µ = 1. Therefore, the simula-
tion is indistinguishable from the real attack, and the adversary has no advantage in
guessing µ from the simulation.
Probability of successful simulation and useful attack. There is no abort in the
simulation. Let the random bit in the forged signature of m∗ be b∗ . We have
• Case µ = 0. The forged signature is reducible if m∗ [kc∗ ] ≠ m[kc∗ ] using the tuple
(c∗ , b∗ ), where m[kc∗ ] is the kc∗ -th bit of the message m queried by the adversary.
Since m and m∗ differ in at least one bit and kc∗ is randomly chosen by the
simulator, we have that the success probability of m∗ [kc∗ ] ≠ m[kc∗ ] is at least 1/n.
• Case µ = 1. The forged signature is always reducible with success probability
1, because the tuple (c∗ , b∗ ) was never used by the simulator in the signature
generation.
Let µ ∗ ∈ {0, 1} be the type of attack launched by the adversary, where µ ∗ = 0
means that c∗ |b∗ in the forged signature was used by the simulator in the signature
generation, and µ ∗ = 1 means that c∗ |b∗ in the forged signature was never used by
the simulator in the signature generation. Since the two simulations are indistin-
guishable and the simulator randomly chooses one simulation, the forged signature
is reducible with success probability Pr[Success] described as follows:

Pr[Success] = Pr[Success|µ = 0] Pr[µ = 0] + Pr[Success|µ = 1] Pr[µ = 1]


= Pr[µ∗ = 0 ∧ m∗ [kc∗ ] ≠ m[kc∗ ]] Pr[µ = 0] + Pr[µ∗ = 1] Pr[µ = 1]
= Pr[µ∗ = 0] Pr[m∗ [kc∗ ] ≠ m[kc∗ ]] Pr[µ = 0] + Pr[µ∗ = 1] Pr[µ = 1]
= (1/(2n)) Pr[µ∗ = 0] + (1/2) Pr[µ∗ = 1]
≥ (1/(2n)) Pr[µ∗ = 0] + (1/(2n)) Pr[µ∗ = 1]
= (1/(2n)) ( Pr[µ∗ = 0] + Pr[µ∗ = 1] )
= 1/(2n).
Therefore, the probability of successful simulation and useful attack is at least 1/(2n).
Advantage and time cost. Suppose the adversary breaks the scheme with (t, qs , ε).
The advantage of solving the q-SDH problem is therefore ε/(2n). Let Ts denote the
time cost of the simulation. We have Ts = O(qs^2 ), which is mainly dominated by the
signature generation. Therefore, B will solve the q-SDH problem with (t + Ts , ε/(2n)).
This completes the proof of the theorem. 

6.4 Waters Scheme

SysGen: The system parameter generation algorithm takes as input a security


parameter λ . It chooses a pairing group PG = (G, GT , g, p, e) and returns the
system parameters SP = PG.
KeyGen: The key generation algorithm takes as input the system parameters
SP. It randomly chooses g2 , u0 , u1 , u2 , · · · , un ∈ G, α ∈ Z p , computes g1 = gα ,
and returns a public/secret key pair (pk, sk) as follows:

pk = (g1 , g2 , u0 , u1 , u2 , · · · , un ), sk = α.

Sign: The signing algorithm takes as input a message m ∈ {0, 1}n , the secret
key sk, and the system parameters SP. Let m[i] be the i-th bit of message m. It
chooses a random number r ∈ Z p and returns the signature σm on m as
σm = (σ1 , σ2 ) = ( g2^α ( u0 ∏_{i=1}^{n} ui^{m[i]} )^r , g^r ).

Verify: The verification algorithm takes as input a message-signature pair


(m, σm ), the public key pk, and the system parameters SP. Let σm = (σ1 , σ2 ).
It accepts the signature if

e(σ1 , g) = e(g1 , g2 ) · e( u0 ∏_{i=1}^{n} ui^{m[i]} , σ2 ).

Theorem 6.4.0.1 If the CDH problem is hard, the Waters signature scheme is prov-
ably secure in the EU-CMA security model with reduction loss L = 4(n+1)qs , where
qs is the number of signature queries.

Proof. Suppose there exists an adversary A who can (t, qs , ε)-break the signature
scheme in the EU-CMA security model. We construct a simulator B to solve the
CDH problem. Given as input a problem instance (g, ga , gb ) over the pairing group
PG, B runs A and works as follows.
Setup. Let SP = PG. B sets q = 2qs and randomly chooses integers k, x0 , x1 , · · · , xn ,
y0 , y1 , · · · , yn satisfying

k ∈ [0, n],
x0 , x1 , · · · , xn ∈ [0, q − 1],
y0 , y1 , · · · , yn ∈ Z p .

It sets the public key as

g1 = ga , g2 = gb , u0 = g−kqa+x0 a+y0 , ui = gxi a+yi ,

where α = a. The public key can be computed from the problem instance and the
chosen parameters.
We define F(m), J(m), K(m) as
F(m) = −kq + x0 + ∑_{i=1}^{n} m[i] · xi ,
J(m) = y0 + ∑_{i=1}^{n} m[i] · yi ,
K(m) = 0 if x0 + ∑_{i=1}^{n} m[i] · xi = 0 mod q, and K(m) = 1 otherwise.

Then, we have
u0 ∏_{i=1}^{n} ui^{m[i]} = g^{F(m)a+J(m)} .
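Since F, J, and K involve only integer arithmetic on the simulator's choices, they are easy to prototype. The sketch below uses assumed toy parameters (the group, the unknown exponents a, b, and the actual key material are not modeled) and shows which messages the simulator can sign (K(m) = 1) and when a forgery would be useful (F(m∗) = 0).

# Illustrative prototype of the Waters partitioning functions F, J, K (parameters assumed).
import random

n, qs = 8, 4                      # toy message length and signature-query bound (assumption)
p = 2**61 - 1
q = 2 * qs
k = random.randint(0, n)
x = [random.randrange(q) for _ in range(n + 1)]   # x_0, ..., x_n in [0, q-1]
y = [random.randrange(p) for _ in range(n + 1)]   # y_0, ..., y_n in Z_p

def F(m):  # m is a list of n bits
    return -k * q + x[0] + sum(m[i] * x[i + 1] for i in range(n))

def J(m):
    return (y[0] + sum(m[i] * y[i + 1] for i in range(n))) % p

def K(m):
    return 0 if (x[0] + sum(m[i] * x[i + 1] for i in range(n))) % q == 0 else 1

m = [random.randint(0, 1) for _ in range(n)]
print("K(m) =", K(m), " F(m) =", F(m), " J(m) =", J(m))
# The simulator can answer a signature query on m exactly when K(m) = 1 (so F(m) != 0),
# and a forgery on m* is useful exactly when F(m*) = 0.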

Query. The adversary makes signature queries in this phase. For a signature query
on m, if K(m) = 0, the simulator aborts. Otherwise, B randomly chooses r′ ∈ Z p
and computes the signature σm as

σm = (σ1 , σ2 ) = ( g2^{−J(m)/F(m)} ( u0 ∏_{i=1}^{n} ui^{m[i]} )^{r′} , g2^{−1/F(m)} g^{r′} ).

We have that σm is computable using g, g1 , F(m), J(m), r′ , m and the public key.
Let r = −b/F(m) + r′ . We have

g2^α ( u0 ∏_{i=1}^{n} ui^{m[i]} )^r = g^{ab} ( g^{F(m)a+J(m)} )^{−b/F(m)+r′}
= g^{ab} · g^{−ab − (J(m)/F(m))b + r′ (F(m)a+J(m))}
= g^{−(J(m)/F(m))b} g^{r′ (F(m)a+J(m))}
= g2^{−J(m)/F(m)} ( u0 ∏_{i=1}^{n} ui^{m[i]} )^{r′} ,

g^r = g^{−b/F(m)+r′} = g2^{−1/F(m)} g^{r′} .

Therefore, σm is a valid signature of m.


Forgery. The adversary returns a forged signature σm∗ on some m∗ that has not been
queried. Let the signature be
σm∗ = (σ1∗ , σ2∗ ) = ( g2^α ( u0 ∏_{i=1}^{n} ui^{m∗ [i]} )^r , g^r ).

According to the signature definition and simulation, if F(m∗ ) ≠ 0, abort. Otherwise,
we have F(m∗ ) = 0 and then

σ1∗ = g2^α ( u0 ∏_{i=1}^{n} ui^{m∗ [i]} )^r = g^{ab} ( g^{F(m∗ )a+J(m∗ )} )^r = g^{ab} (g^r )^{J(m∗ )} .

The simulator B computes


σ1∗ / (σ2∗ )^{J(m∗ )} = g^{ab} g^{r·J(m∗ )} / g^{r·J(m∗ )} = g^{ab}

as the solution to the CDH problem instance.


This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the key
generation and the signature generation. They are

a, b, x0 b + y0 , x1 b + y1 , x2 b + y2 , · · · , xn b + yn , −b/F(mi ) + ri′ .

According to the setting of the simulation, where a, b, yi , ri0 are randomly chosen, it
is easy to see that the simulation is indistinguishable from the real attack.

Probability of successful simulation and useful attack. A successful simulation


and a useful attack require that

K(m1 ) = 1, K(m2 ) = 1, · · · , K(mqs ) = 1, F(m∗ ) = 0.

We have
n
0 ≤ x0 + ∑ m[i]xi ≤ (n + 1)(q − 1),
i=1

where the range [0, (n + 1)(q − 1)] contains integers 0q, 1q, 2q, · · · , nq (n < q).
Let X = x0 + ∑ni=1 m[i]xi . Since all xi and k are randomly chosen, we have
Pr[F(m∗ ) = 0] = Pr[ X = 0 mod q ] · Pr[ X = kq | X = 0 mod q ] = 1/((n + 1)q) .

Since the pair (mi , m∗ ) for any i differ on at least one bit, K(mi ) and F(m∗ ) differ on
the coefficient of at least one x j , and then

Pr[K(mi ) = 0 | F(m∗ ) = 0] = 1/q .
Based on the above results, we obtain

Pr[K(m1 ) = 1 ∧ · · · ∧ K(mqs ) = 1 ∧ F(m∗ ) = 0]


= Pr[K(m1 ) = 1 ∧ · · · ∧ K(mqs ) = 1|F(m∗ ) = 0] · Pr[F(m∗ ) = 0]
= (1 − Pr[K(m1 ) = 0 ∨ · · · ∨ K(mqs ) = 0|F(m∗ ) = 0]) · Pr[F(m∗ ) = 0]
≥ ( 1 − ∑_{i=1}^{qs} Pr[K(mi ) = 0|F(m∗ ) = 0] ) · Pr[F(m∗ ) = 0]
= (1/((n + 1)q)) · ( 1 − qs /q )
= 1/(4(n + 1)qs ),

which is the probability of successful simulation and useful attack.
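For a sense of scale with assumed values n = 256 and qs = 2^30 (illustrative only, not fixed by the theorem), the loss 4(n + 1)qs evaluates as follows.

# Illustrative size of the Waters reduction loss 4(n+1)qs (assumed parameters).
import math

n, qs = 256, 2**30               # assumed message length and signature-query bound
loss = 4 * (n + 1) * qs
print(loss, "which is about 2^%.1f" % math.log2(loss))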


Advantage and time cost. Suppose the adversary breaks the scheme with (t, qs , ε).
The advantage of solving the CDH problem is therefore ε/(4(n + 1)qs ). Let Ts denote
the time cost of the simulation. We have Ts = O(qs ), which is mainly dominated
by the signature generation. Therefore, B will solve the CDH problem with
(t + Ts , ε/(4(n + 1)qs )).
This completes the proof of the theorem. 

6.5 Hohenberger-Waters Scheme

SysGen: The system parameter generation algorithm takes as input a security


parameter λ . It chooses a pairing group PG = (G, GT , g, p, e) and returns the
system parameters SP = PG.
KeyGen: The key generation algorithm takes as input the system parameters
SP. It randomly chooses u1 , u2 , u3 , v1 , v2 ∈ G, α ∈ Z p , computes g1 = gα , se-
lects the upper bound on the number of signatures, denoted by N, and returns a
public/secret key pair (pk, sk) as follows:

pk = (g1 , u1 , u2 , u3 , v1 , v2 , N), sk = (α, c),

where c is a counter initialized with c = 0.


Sign: The signing algorithm takes as input a message m ∈ Z p , the secret key
sk, and the system parameters SP. It chooses random numbers r, s ∈ Z p and in-
creases the counter by one, c := c + 1. If c > N, abort. Otherwise, the algorithm
returns the signature σm on m as

σm = (σ1 , σ2 , σ3 , σ4 ) = ( (u1^m u2^r u3 )^α (v1^c v2 )^s , g^s , r, c ).

Verify: The verification algorithm takes as input a message-signature pair


(m, σm ), the public key pk, and the system parameters SP. Let σm =
(σ1 , σ2 , σ3 , σ4 ). It accepts the signature if σ4 ≤ N and
   
e(σ1 , g) = e( u1^m u2^{σ3} u3 , g1 ) · e( v1^{σ4} v2 , σ2 ).

Theorem 6.5.0.1 If the CDH problem is hard, the Hohenberger-Waters signature


scheme is provably secure in the EU-CMA security model with reduction loss L = N.
Proof. Suppose there exists an adversary A who can (t, qs , ε)-break the signature
scheme in the EU-CMA security model. We construct a simulator B to solve the
CDH problem. Given as input a problem instance (g, ga , gb ) over the pairing group
PG, B runs A and works as follows.
Setup. Let SP = PG. B randomly chooses x1 , y1 , y2 , x3 , y3 , z1 , z2 ∈ Z p , x2 ∈ Z∗p , and
c0 ∈ [1, N]. It sets the public key as

g1 = ga ,
u1 = gbx1 +y1 , u2 = gbx2 +y2 , u3 = gbx3 +y3 ,
v1 = g−b+z1 , v2 = gc0 b+z2 ,

where α = a and a, b are unknown secrets from the problem instance. The public
key can be computed from the problem instance and the chosen parameters.
Query. The adversary makes signature queries in this phase. For a signature query
on m, let the updated counter for this signature generation be c. The simulation of
this signature falls into the following two cases.

• c ≠ c0 . B randomly chooses r, s′ ∈ Z p such that mx1 + rx2 + x3 ≠ 0. We have

u1^m u2^r u3 = g^{b(mx1 +rx2 +x3 )+(my1 +ry2 +y3 )} , v1^c v2 = g^{b(c0 −c)+z1 c+z2} .
It computes the signature σm as

σm = (σ1 , σ2 , σ3 , σ4 )
z +z mx1 +rx2 +x3
!
my1 +ry2 +y3 − c1 −c2 ·(mx1 +rx2 +x3 ) 0 − c0 −c s0
= g1 0
· (vc1 v2 )s , g1 · g , r, c .

Let s = − mx1 +rx 2 +x3


c0 −c a + s0 . We have

(um r α c
1 u2 u3 ) (v1 v2 )
s

 − mx1 +rx2 +x3 a+s0


c0 −c
= gab(mx1 +rx2 +x3 )+a(my1 +ry2 +y3 ) · gb(c0 −c)+z1 c+z2
z c+z
−a 1c −c2 ·(mx1 +rx2 +x3 ) 0
= gab(mx1 +rx2 +x3 )+a(my1 +ry2 +y3 ) · g−ab(mx1 +rx2 +x3 ) · g 0 · (vc1 v2 )s
z c+z
my1 +ry2 +y3 − 1c −c2 ·(mx1 +rx2 +x3 ) 0
= g1 0
· (vc1 v2 )s ,
mx1 +rx2 +x3
− c0 −c 0
gs = g1 · gs .

Therefore, σm is a valid signature of m.


• c = c0 . B randomly chooses s ∈ Z p and computes r satisfying mx1 +rx2 +x3 = 0.
That is, s is randomly chosen and
r = −(mx1 + x3 )/x2 .
It computes the signature σm as
 
σm = (σ1 , σ2 , σ3 , σ4 ) = ( g1^{my1 +ry2 +y3} · (v1^c v2 )^s , g^s , r, c ).

We have

(u1^m u2^r u3 )^α (v1^c v2 )^s = g^{ab(mx1 +rx2 +x3 )+a(my1 +ry2 +y3 )} · (v1^c v2 )^s
= g^{a(my1 +ry2 +y3 )} (v1^c v2 )^s
= g1^{my1 +ry2 +y3} · (v1^c v2 )^s .

Therefore, σm is a valid signature of m.
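The only computation needed in this case is one modular inversion of x2 . A minimal check with illustrative values (assumed, not from the scheme):

# Choosing r with m*x1 + r*x2 + x3 = 0 (mod p) by one modular inversion (illustrative values).
p = 2**61 - 1
x1, x2, x3 = 11, 22, 33          # toy secrets chosen by the simulator (x2 != 0)
m = 123456                       # the queried message
r = (-(m * x1 + x3) * pow(x2, -1, p)) % p
assert (m * x1 + r * x2 + x3) % p == 0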

Forgery. The adversary returns a forged signature σm∗ on some m∗ that has not been
queried. Let the signature be

σm∗ = (σ1∗ , σ2∗ , σ3∗ , σ4∗ ) = ( (u1^{m∗} u2^{r∗} u3 )^α (v1^{c∗} v2 )^s , g^s , r∗ , c∗ ).

According to the signature definition and simulation, the simulation is successful if

c∗ = c0 and m∗ x1 + r∗ x2 + x3 ≠ 0.

If it is successful, we have

σ1∗ = (u1^{m∗} u2^{r∗} u3 )^α (v1^{c∗} v2 )^s
= g^{ab(m∗ x1 +r∗ x2 +x3 )+a(m∗ y1 +r∗ y2 +y3 )} · (g^{z1 c∗ +z2} )^s
= g^{ab(m∗ x1 +r∗ x2 +x3 )+a(m∗ y1 +r∗ y2 +y3 )} · (σ2∗ )^{z1 c∗ +z2} .

The simulator B computes


( σ1∗ / ( g1^{m∗ y1 +r∗ y2 +y3} (σ2∗ )^{z1 c∗ +z2} ) )^{1/(m∗ x1 +r∗ x2 +x3 )} = g^{ab}

as the solution to the CDH problem instance.


This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the key
generation and the signature generation. They are

$$\text{pk}: \quad a,\ bx_1+y_1,\ bx_2+y_2,\ bx_3+y_3,\ -b+z_1,\ c_0 b+z_2,$$
$$(r,s)\ \text{when}\ c \neq c_0: \quad r_c,\ -\frac{m_c x_1 + r_c x_2 + x_3}{c_0-c}\,a + s'_c \qquad (r = r_c,\ s = s_c),$$
$$(r,s)\ \text{when}\ c = c_0: \quad -\frac{m_{c_0} x_1 + x_3}{x_2},\ s_{c_0} \qquad \left(r = -\frac{m_{c_0} x_1 + x_3}{x_2},\ s = s_{c_0}\right).$$
Since $s'_c, r_c, s_{c_0}$ are randomly chosen, we only need to consider the randomness of
$$a,\ bx_1+y_1,\ bx_2+y_2,\ bx_3+y_3,\ -b+z_1,\ c_0 b+z_2,\ -\frac{m_{c_0} x_1 + x_3}{x_2}.$$
According to the setting of the simulation, where a, b, x1 , x3 , y1 , y2 , y3 , z1 , z2 are ran-
domly chosen, it is easy to see that the simulation is indistinguishable from the real
attack. In particular, the adversary has no advantage in guessing c0 from the given
parameters.

Probability of successful simulation and useful attack. There is no abort in the simulation. The forged signature is reducible if
$$c^* = c_0, \qquad m^* x_1 + r^* x_2 + x_3 \neq 0.$$

Since c0 is random and unknown to the adversary according to the above analysis, we have that c∗ = c0 holds with probability 1/N for any adaptive choice c∗. To prove that the adversary has no advantage in computing r∗ satisfying m∗x1 + r∗x2 + x3 = 0, we only need to prove that
$$\frac{m^* x_1 + x_3}{x_2}$$
is random and independent from the point of view of the adversary. It is easy to see that the following integers associated with x1, x2, x3 are random and independent:
$$bx_1+y_1,\quad bx_2+y_2,\quad bx_3+y_3,\quad -\frac{m_{c_0} x_1 + x_3}{x_2},\quad -\frac{m^* x_1 + x_3}{x_2}.$$
Any adaptive choice r∗ satisfies m∗x1 + r∗x2 + x3 = 0 with probability 1/p. Therefore, the probability of successful simulation and useful attack is (1/N)(1 − 1/p) ≈ 1/N.
Advantage and time cost. Suppose the adversary breaks the scheme with (t, qs , ε).
The advantage of solving the CDH problem is therefore ε/N. Let Ts denote the time cost of the simulation. We have Ts = O(qs), which is mainly dominated by the signature generation. Therefore, B will solve the CDH problem with (t + Ts, ε/N).
This completes the proof of the theorem. 
Chapter 7
Public-Key Encryption with Random Oracles

In this chapter, we mainly use a variant of ElGamal encryption to introduce how to prove the security of encryption schemes under computational hardness assump-
tions. The basic scheme is called the hashed ElGamal scheme [1]. The twin ElGa-
mal scheme and the iterated ElGamal scheme are from [29] and [55], respectively,
and introduce two totally different approaches for addressing the reduction loss of
finding a correct solution from hash queries. The ElGamal encryption scheme with
CCA security is introduced using the Fujisaki-Okamoto transformation [42]. The
given schemes and/or proofs may be different from the original ones.

7.1 Hashed ElGamal Scheme

SysGen: The system parameter generation algorithm takes as input a secu-


rity parameter λ . It chooses a cyclic group (G, p, g), selects a cryptographic
hash function H : {0, 1}∗ → {0, 1}n , and returns the system parameters SP =
(G, p, g, H).
KeyGen: The key generation algorithm takes as input the system parameters
SP. It randomly chooses α ∈ Z p , computes g1 = gα , and returns a public/secret
key pair (pk, sk) as follows:

pk = g1 , sk = α.

Encrypt: The encryption algorithm takes as input a message m ∈ {0, 1}n , the
public key pk, and the system parameters SP. It chooses a random number
r ∈ Z p and returns the ciphertext CT as
 
$$CT = (C_1, C_2) = \left(g^r,\ H(g_1^r) \oplus m\right).$$


Decrypt: The decryption algorithm takes as input a ciphertext CT, the secret key sk, and the system parameters SP. Let CT = (C1, C2). It decrypts the message by computing
$$C_2 \oplus H(C_1^{\alpha}) = H(g_1^r) \oplus m \oplus H\left(g^{\alpha r}\right) = m.$$
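To make the algorithms concrete, the following is a minimal toy sketch in Python over the order-11 subgroup of Z*_23 generated by 4, with H instantiated by SHA-256; these parameters and encodings are illustrative assumptions only, not a secure instantiation.

```python
# Toy sketch of the hashed ElGamal scheme (illustrative parameters, not secure).
import hashlib, secrets

MOD, p, g = 23, 11, 4                      # toy modulus, subgroup order, generator

def H(x: int, n: int) -> bytes:            # random-oracle-style hash to n bytes
    return hashlib.sha256(str(x).encode()).digest()[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

def keygen():
    alpha = secrets.randbelow(p)           # sk = alpha, pk = g^alpha
    return pow(g, alpha, MOD), alpha

def encrypt(pk: int, m: bytes):
    r = secrets.randbelow(p)               # CT = (g^r, H(pk^r) XOR m)
    return pow(g, r, MOD), xor(H(pow(pk, r, MOD), len(m)), m)

def decrypt(sk: int, ct):
    C1, C2 = ct                            # m = H(C1^alpha) XOR C2
    return xor(H(pow(C1, sk, MOD), len(C2)), C2)

pk, sk = keygen()
assert decrypt(sk, encrypt(pk, b"message")) == b"message"
```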




Theorem 7.1.0.1 Suppose the hash function H is a random oracle. If the CDH
problem is hard, the Hashed ElGamal encryption scheme is provably secure in the
IND-CPA security model with reduction loss L = qH , where qH is the number of
hash queries to the random oracle.
Proof. Suppose there exists an adversary A who can (t, ε)-break the encryption
scheme in the IND-CPA security model. We construct a simulator B to solve the
CDH problem. Given as input a problem instance (g, ga , gb ) over the cyclic group
(G, g, p), B controls the random oracle, runs A , and works as follows.
Setup. Let SP = (G, g, p) and H be the random oracle controlled by the simulator.
B sets the public key as g1 = ga where α = a. The public key is available from the
problem instance.
H-Query. The adversary makes hash queries in this phase. B prepares a hash list
to record all queries and responses as follows, where the hash list is empty at the
beginning.
Let the i-th hash query be xi . If xi is already in the hash list, B responds to this
query following the hash list. Otherwise, B randomly chooses yi ∈ {0, 1}n and sets
H(xi ) = yi . The simulator B responds to this query with H(xi ) and adds (xi , yi ) to
the hash list.
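This bookkeeping can be sketched in a few lines of Python; the class name and the output length n are our own illustrative choices.

```python
# Sketch of a simulator-controlled random oracle maintained as a hash list.
import secrets

class RandomOracle:
    def __init__(self, n: int = 32):
        self.n = n
        self.hash_list = {}                 # records (x_i, y_i) pairs

    def query(self, x):
        if x not in self.hash_list:         # fresh query: choose y_i at random
            self.hash_list[x] = secrets.token_bytes(self.n)
        return self.hash_list[x]            # repeated query: reuse the recorded y_i
```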
Challenge. A outputs two distinct messages m0 , m1 ∈ {0, 1}n to be challenged. The
simulator randomly chooses R ∈ {0, 1}n and sets the challenge ciphertext CT ∗ as

CT ∗ = (gb , R),

where gb is from the problem instance. The challenge ciphertext can be seen as an
encryption of the message mc ∈ {m0 , m1 } using the random number b if H(gb1 ) =
R ⊕ mc:
$$CT^* = (g^b, R) = \left(g^b,\ H(g_1^b) \oplus m_c\right).$$

The challenge ciphertext is therefore a correct ciphertext from the point of view of
the adversary, if there is no hash query on gb1 to the random oracle.
Guess. A outputs a guess or ⊥. The challenge hash query is defined as

Q∗ = gb1 = (gb )α = gab .



The simulator randomly selects one value x from the hash list (x1 , y1 ), (x2 , y2 ), · · ·,
(xqH , yqH ) as the challenge hash query. The simulator can immediately use this hash
query to solve the CDH problem.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the key
generation, the responses to hash queries, and the challenge ciphertext generation.
They are
a, y1 , y2 , · · · , yqH , b.
According to the setting of the simulation, where a, b, yi are randomly chosen, it is
easy to see that the randomness property holds, and thus the simulation is indistin-
guishable from the real attack.
Probability of successful simulation. There is no abort in the simulation, and thus
the probability of successful simulation is 1.
Advantage of breaking the challenge ciphertext. If H(gab ) = R ⊕ m0 , the chal-
lenge ciphertext is an encryption of m0 . If H(gab ) = R ⊕ m1 , the challenge ciphertext
is an encryption of m1 . If the query gab is not made, H(gab ) is random and unknown
to the adversary, so that it has no advantage in breaking the challenge ciphertext.
Probability of finding solution. Since the adversary has advantage ε in guessing the
chosen message according to the breaking assumption, the adversary will query gab
to the random oracle with probability ε according to Lemma 4.11.1. The adversary
makes qH hash queries in total. Therefore, a random choice of x is equal to gab with
probability ε/qH.
Advantage and time cost. Let Ts denote the time cost of the simulation. We have Ts = O(1). Therefore, the simulator B will solve the CDH problem with (t + Ts, ε/qH).
This completes the proof of the theorem. 

7.2 Twin Hashed ElGamal Scheme

SysGen: The system parameter generation algorithm takes as input a secu-


rity parameter λ . It chooses a cyclic group (G, p, g), selects a cryptographic
hash function H : {0, 1}∗ → {0, 1}n , and returns the system parameters SP =
(G, p, g, H).
KeyGen: The key generation algorithm takes as input the system parameters
SP. It randomly chooses α, β ∈ Z p , computes g1 = gα , g2 = gβ , and returns a
public/secret key pair (pk, sk) as follows:

pk = (g1 , g2 ), sk = (α, β ).

Encrypt: The encryption algorithm takes as input a message m ∈ {0, 1}n , the
public key pk, and the system parameters SP. It chooses a random number
r ∈ Z p and returns the ciphertext CT as
 
$$CT = (C_1, C_2) = \left(g^r,\ H(g_1^r \,\|\, g_2^r) \oplus m\right).$$

Decrypt: The decryption algorithm takes as input a ciphertext CT, the secret key sk, and the system parameters SP. Let CT = (C1, C2). It decrypts the message by computing
$$C_2 \oplus H\left(C_1^{\alpha} \,\|\, C_1^{\beta}\right) = H\left(g_1^r \,\|\, g_2^r\right) \oplus m \oplus H\left(g^{\alpha r} \,\|\, g^{\beta r}\right) = m.$$

Theorem 7.2.0.1 Suppose the hash function H is a random oracle. If the CDH
problem is hard, the Twin Hashed ElGamal encryption scheme is provably secure in
the IND-CPA security model with reduction loss L = 1.
Proof. Suppose there exists an adversary A who can (t, ε)-break the encryption
scheme in the IND-CPA security model. We construct a simulator B to solve the
CDH problem. Given as input a problem instance (g, ga , gb ) over the cyclic group
(G, g, p), B controls the random oracle, runs A , and works as follows.
Setup. Let SP = (G, g, p) and H be the random oracle controlled by the simulator.
B randomly chooses z1 , z2 ∈ Z p and sets the public key as
 
$$(g_1, g_2) = \left(g^a,\ g^{z_1}(g^a)^{z_2}\right),$$

where α = a and β = z1 + z2 a. The public key can be computed from the problem
instance and the chosen parameters.
H-Query. The adversary makes hash queries in this phase. B prepares a hash list
to record all queries and responses as follows, where the hash list is empty at the
beginning.
Let the i-th hash query be xi . If xi is already in the hash list, B responds to this
query following the hash list. Otherwise, B randomly chooses yi ∈ {0, 1}n and sets
H(xi ) = yi . The simulator B responds to this query with H(xi ) and adds (xi , yi ) to
the hash list.
Challenge. A outputs two distinct messages m0 , m1 ∈ {0, 1}n to be challenged. The
simulator randomly chooses R ∈ {0, 1}n and sets the challenge ciphertext CT ∗ as

CT ∗ = (gb , R),

where gb is from the problem instance. The challenge ciphertext can be seen as an
encryption of the message mc ∈ {m0 , m1 } using the random number b if H(gb1 ||gb2 ) =
R ⊕ mc:
$$CT^* = (g^b, R) = \left(g^b,\ H(g_1^b \,\|\, g_2^b) \oplus m_c\right).$$

The challenge ciphertext is therefore a correct ciphertext from the point of view of
the adversary, if there is no hash query on gb1 ||gb2 to the random oracle.
Guess. A outputs a guess or ⊥. In the above simulation, the challenge hash query
is defined as
$$Q^* = g_1^b \,\|\, g_2^b = g^{ab} \,\|\, g^{z_1 b + z_2 ab}.$$
Suppose (x1 , y1 ), (x2 , y2 ), · · · , (xqH , yqH ) are in the hash list, where each query xi can
be denoted by xi = ui ||vi . If xi does not satisfy this structure, we can delete it.
The simulator finds the query x∗ = u∗ ||v∗ from the hash list satisfying

(gb )z1 · (u∗ )z2 = v∗

as the challenge hash query and returns u∗ = gab as the solution to the CDH prob-
lem instance. In this security reduction, the second group element is only used for
helping the simulator find the challenge hash query from the hash list.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the key
generation, the responses to hash queries, and the challenge ciphertext generation.
They are
a, z1 + z2 a, y1 , y2 , · · · , yqH , b.
According to the setting of the simulation, where a, b, z1 , z2 , yi are randomly cho-
sen, it is easy to see that the randomness property holds, and thus the simulation is
indistinguishable from the real attack.
Probability of successful simulation. There is no abort in the simulation, and thus
the probability of successful simulation is 1.
Advantage of breaking the challenge ciphertext. If H(gb1 ||gb2 ) = R ⊕ m0 , the chal-
lenge ciphertext is an encryption of m0 . If H(gb1 ||gb2 ) = R ⊕ m1 , the challenge cipher-
text is an encryption of m1 . If the query gb1 ||gb2 is not made, we have that H(gb1 ||gb2 )
is random and unknown to the adversary, so that it has no advantage in breaking the
challenge ciphertext.
Probability of finding solution. Since the adversary has advantage ε in guessing the
chosen message according to the breaking assumption, the adversary will query Q∗ = g1^b||g2^b (which contains g^{ab}) to the random oracle with probability ε according to Lemma 4.11.1. The adversary
makes qH hash queries in total. We claim that the adversary has no advantage in
generating a query x = u||v satisfying

$$u \neq g^{ab} \quad\text{and}\quad g^{bz_1} u^{z_2} = v.$$


Let u = g^{a′b} and v = g^{wb} where a′ ≠ a. If the adversary can compute such a query, the adversary must be able to find w satisfying
$$z_1 + z_2 a' = w.$$

According to our simulation, a, z1 + z2 a, z1 + z2 a′ are random and independent for any a′ ≠ a, so that w is random in Z_p from the point of view of the adversary. There-
fore, the adversary has probability at most qH /p in generating an incorrect query
that passes the verification, which is negligible. In other words, only the challenge
hash query can pass the verification, and thus the simulator can find the correct
solution from hash queries with probability 1.
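A sketch of this extraction step in Python, where the group is modelled as integers modulo a toy modulus MOD and the hash list is assumed to be given as parsed (u, v) pairs; names and data layout are our own:

```python
# Sketch: scan the hash list for the query u||v with (g^b)^{z1} * u^{z2} = v
# and output u as the CDH solution; only the challenge query passes this test.
def extract_cdh(hash_queries, g_b, z1, z2, MOD):
    for (u, v) in hash_queries:                        # each query parsed as u||v
        if pow(g_b, z1, MOD) * pow(u, z2, MOD) % MOD == v:
            return u                                   # u = g^{ab} except with negligible probability
    return None
```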
Advantage and time cost. Let Ts denote the time cost of the simulation. We have
Ts = O(qH ), which is mainly dominated by finding the solution. Therefore, the sim-
ulator B will solve the CDH problem with (t + Ts , ε).
This completes the proof of the theorem. 

7.3 Iterated Hashed ElGamal Scheme

SysGen: The system parameter generation algorithm takes as input a secu-


rity parameter λ . It chooses a cyclic group (G, p, g), selects a cryptographic
hash function H : {0, 1}∗ → {0, 1}n , and returns the system parameters SP =
(G, p, g, H).
KeyGen: The key generation algorithm takes as input the system parameters
SP. It randomly chooses α1 , α2 ∈ Z p , computes g1 = gα1 , g2 = gα2 , and returns
a public/secret key pair (pk, sk) as follows:

pk = (g1 , g2 ) , sk = (α1 , α2 ) .

Encrypt: The encryption algorithm takes as input a message m ∈ {0, 1}n , the
public key pk, and the system parameters SP. It picks a random r ∈ Z p and
returns the ciphertext CT as
 
$$CT = (C_1, C_2) = \left(g^r,\ H(A_2) \oplus m\right),$$
where A1 = H(0)||g_1^r||1 and A2 = H(A1)||g_2^r||2. Here, H(0) denotes an arbitrary
but fixed string for all ciphertext generations.
Decrypt: The decryption algorithm takes as input a ciphertext CT, the secret key sk, and the system parameters SP. Let CT = (C1, C2). It computes
$$B_1 = H(0)\,\|\,C_1^{\alpha_1}\,\|\,1, \qquad B_2 = H(B_1)\,\|\,C_1^{\alpha_2}\,\|\,2,$$
and decrypts the message by computing C2 ⊕ H(B2) = m.
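The chained structure of A1 and A2 can be sketched as follows in Python, with group elements encoded as byte strings; the encoding and names are our own illustrative choices.

```python
# Sketch of the chained hash inputs used by the iterated scheme.
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def chained_inputs(g1_r: bytes, g2_r: bytes):
    A1 = H(b"0") + g1_r + b"1"     # A_1 = H(0) || g_1^r || 1
    A2 = H(A1) + g2_r + b"2"       # A_2 = H(A_1) || g_2^r || 2
    return A1, A2                  # the message mask is H(A_2)

A1, A2 = chained_inputs(b"elem1", b"elem2")
```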

Theorem 7.3.0.1 Suppose the hash function H is a random oracle. If the CDH problem is hard, the Iterated Hashed ElGamal encryption scheme is provably secure in the IND-CPA security model with reduction loss L = 2√qH, where qH is the
number of hash queries to the random oracle.
Proof. Suppose there exists an adversary A who can (t, ε)-break the encryption
scheme in the IND-CPA security model. We construct a simulator B to solve the
CDH problem. Given as input a problem instance (g, ga , gb ) over the cyclic group
(G, g, p), B controls the random oracle, runs A , and works as follows.
Setup. Let SP = (G, g, p) and H be the random oracle controlled by the simulator.
B randomly picks i∗ ∈ {1, 2} and sets the secret key in such a way that αi∗ = a from
the problem instance and another value, denoted by z ∈ Z p , is randomly chosen by
the simulator. The public key pk = (g1 , g2 ) = (gα1 , gα2 ) can therefore be computed
from the problem instance and the chosen parameters.
H-Query. The adversary makes hash queries in this phase. B prepares a hash list
to record all queries and responses as follows, where the hash list is empty at the
beginning.
Let the i-th hash query be xi . If xi is already in the hash list, B responds to this
query following the hash list. Otherwise, B randomly chooses yi ∈ {0, 1}n and sets
H(xi ) = yi . The simulator B responds to this query with H(xi ) and adds (xi , yi ) to
the hash list.
Challenge. A outputs two distinct messages m0 , m1 ∈ {0, 1}n to be challenged. The
simulator randomly chooses R ∈ {0, 1}n and sets the challenge ciphertext CT ∗ as

CT ∗ = (gb , R),

where gb is from the problem instance. The challenge ciphertext can be seen as an
encryption of the message mc ∈ {m0 , m1 } using the random number b if H(Q∗2 ) =
R ⊕ mc , where Q∗1 = H(0)||gb1 ||1, Q∗2 = H(Q∗1 )||gb2 ||2:
 
CT ∗ = (gb , R) = gb , H(Q∗2 ) ⊕ mc .

The challenge ciphertext is therefore a correct ciphertext from the point of view of
the adversary, if there is no hash query on Q∗2 to the random oracle.
Guess. A outputs a guess or ⊥. In the above simulation, there are two challenge
hash queries defined as

Q∗1 = H(0)||gb1 ||1 = H(0)||gα1 b ||1,


Q∗2 = H(Q∗1 )||gb2 ||2 = H(Q∗1 )||gα2 b ||2.

The solution to the CDH problem instance is gαi∗ b = gab within the challenge hash
query Q∗i∗ . Suppose (x1 , y1 ), (x2 , y2 ), · · · , (xqH , yqH ) are in the hash list, where each
query xi can be denoted by xi = ui ||vi ||wi . Here, ui ∈ {H(0), y1 , y2 , · · · , yqH }, vi ∈ G,
and wi ∈ {1, 2}. If xi does not satisfy this structure, we can delete it.

Table 7.1 All hash queries with valid structure in the hash list

Row 1:   (u1||v1||1, y1)    (u2||v2||1, y2)    ···    (uk||vk||1, yk)

Row 2:   Y1 = { (y1||v11||2, y11), (y1||v12||2, y12), ..., (y1||v1n1||2, y1n1) }
         Y2 = { (y2||v21||2, y21), (y2||v22||2, y22), ..., (y2||v2n2||2, y2n2) }
         ···
         Yk = { (yk||vk1||2, yk1), (yk||vk2||2, yk2), ..., (yk||vknk||2, yknk) }
All hash queries are of one of the forms shown in Table 7.1. Suppose all hash
queries in the first row are in the query set Y0 . In the first column of the second row,
all hash queries in the query set, denoted by Y1 , use y1 . All other rows and columns
have a similar structure and definition. If the challenge hash queries exist in the hash
list, all hash queries in the set Y0 must have only one query whose v is equal to gα1 b
because all distinct hash queries in the set Y0 have the same u = H(0) and w = 1.
Similarly, all hash queries in the set Yi must have at most one query whose v is equal
to g^{α2 b}. We stress that v_{ij} = v_{i′j′} may hold, where v_{ij} is from Y_i, and v_{i′j′} is from Y_{i′}.
In the above simulation, if i∗ = 1, this means that α1 = a and α2 = z, where z is
randomly chosen by the simulator. Therefore, gα2 b = (gb )z can be computed by the
simulator, and the simulator can check whether v in the hash query u||v||w from Yi
is equal to gbz or not. Next, we describe how to pick the challenge hash query.
• If i∗ = 1, the simulator checks each hash query in Y0 as follows. For the query
ui ||vi ||1 whose response is yi , the simulator checks whether there exists j ∈ [1, ni ]
such that v_{ij} = g^{bz} (a query from Yi). If yes, this query is kept in Y0. Otherwise,
this query is removed from Y0 . Suppose Y∗0 is the final set after removing all
hash queries as described above. The simulator randomly picks one query from
Y∗0 as the challenge hash query Q∗1 = u∗ ||v∗ ||1 and extracts v∗ as the solution to
the CDH problem instance.
• If i∗ = 2, the simulator randomly picks one query from the sets Y1 ∪ Y2 ∪ · · · ∪ Yk
as the challenge hash query Q∗2 = u∗ ||v∗ ||2 and extracts v∗ as the solution to the
CDH problem instance.
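A sketch of the filtering rule used when i∗ = 1 (Python; the data layout and names are our own assumptions):

```python
# Keep a first-level query (u||v||1 -> y) only if some recorded second-level
# query y||v'||2 has v' = g^{bz}; the simulator can test this because it knows z.
def filter_Y0(Y0, second_level, g_b, z, MOD):
    """Y0: list of (v, y); second_level: dict mapping y to the list of v' values."""
    target = pow(g_b, z, MOD)                  # g^{bz} = g_2^b when alpha_2 = z (i.e., i* = 1)
    return [(v, y) for (v, y) in Y0
            if any(vp == target for vp in second_level.get(y, []))]
```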

This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the key
generation, the responses to hash queries, and the challenge ciphertext generation.
They are
a, z, y1 , y2 , · · · , yqH , b.

According to the setting of the simulation, where a, b, z, yi are randomly chosen, it is easy to see that the randomness property holds, and thus the simulation is indis-
tinguishable from the real attack. In particular, the adversary has no advantage in
guessing the randomly chosen i∗ ∈ {1, 2}.
Probability of successful simulation. There is no abort in the simulation, and thus
the probability of successful simulation is 1.
Advantage of breaking the challenge ciphertext. According to the simulation, we
have
• The challenge ciphertext is an encryption of the message m0 if

Q∗1 = H(0)||gα1 b ||1, Q∗2 = H(Q∗1 )||gα2 b ||2, H(Q∗2 ) = R ⊕ m0 .

• The challenge ciphertext is an encryption of the message m1 if

Q∗1 = H(0)||gα1 b ||1, Q∗2 = H(Q∗1 )||gα2 b ||2, H(Q∗2 ) = R ⊕ m1 .

Without making the challenge query Q∗2 to the random oracle, which requires the
adversary to query Q∗1 first, H(Q∗2 ) is random and unknown to the adversary, so that
it has no advantage in breaking the challenge ciphertext.
Probability of finding solution. Since the adversary has advantage ε in guessing
the chosen message according to the breaking assumption, the adversary will query
Q∗1 and Q∗2 to the random oracle with probability ε according to Lemma 4.11.1. The
adversary makes qH hash queries in total. Therefore, we have

n1 + n2 + · · · + nk + k ≤ qH .

Let suc be the probability of successfully picking the challenge hash query. We
have the following probability of success:

$$\Pr[suc] = \Pr[suc \mid i^* = 1]\Pr[i^* = 1] + \Pr[suc \mid i^* = 2]\Pr[i^* = 2] = \frac{1}{2}\Pr[suc \mid i^* = 1] + \frac{1}{2}\Pr[suc \mid i^* = 2].$$
To prove $\Pr[suc] \geq \frac{1}{2\sqrt{q_H}}$, we only need to prove
$$\Pr[suc \mid i^* = d] \geq \frac{1}{\sqrt{q_H}} \quad \text{for some } d \in \{1, 2\}.$$
On the other hand, if the adversary can adaptively make hash queries against the above probability, this means that
$$\Pr[suc \mid i^* = 1] < \frac{1}{\sqrt{q_H}} \quad (1), \qquad\text{and}\qquad \Pr[suc \mid i^* = 2] < \frac{1}{\sqrt{q_H}} \quad (2).$$

The above two probabilities must hold because the adversary does not know the
value i∗ .

• If i∗ = 1, the adversary must make hash queries in such a way that k ≥ 1 + √qH. Suppose {(x1, y1), (x2, y2), ..., (xk, yk)} are all the hash queries and responses from the set Y0. It is also the case that the query set Yi for all i ∈ [1, k] must have one hash query whose v is equal to g^{α2 b}. Otherwise, (xi, yi) will be removed from Y0 because of how the simulator picks the challenge hash query. If some queries are removed, and the remaining number is less than √qH, we have Pr[suc | i∗ = 1] ≥ 1/√qH.
• Suppose the probability (1) holds. If i∗ = 2, there are k hash queries in the set Y1 ∪ Y2 ∪ · · · ∪ Yk whose v is equal to g^{α2 b}. Let N = |Y1 ∪ Y2 ∪ · · · ∪ Yk|. In this case, to make sure that the probability (2) holds, the total number of hash queries must satisfy N ≥ k√qH + 1. Otherwise, N < k√qH + 1, and we have
$$\frac{k}{N} \geq \frac{k}{k\sqrt{q_H}} = \frac{1}{\sqrt{q_H}},$$
which contradicts the probability (2) requirement.

If the probabilities (1) and (2) hold, we have k ≥ 1 + √qH and then
$$N \geq k\sqrt{q_H} + 1 > \sqrt{q_H} \cdot \sqrt{q_H} = q_H,$$

which contradicts the assumption of qH hash queries at most. Therefore, we have
$$\Pr[suc \mid i^* = 1] \geq \frac{1}{\sqrt{q_H}} \qquad\text{or}\qquad \Pr[suc \mid i^* = 2] \geq \frac{1}{\sqrt{q_H}},$$
and then obtain Pr[suc] ≥ 1/(2√qH). Therefore, the simulator can find the correct solution from hash queries with probability at least ε/(2√qH).
Advantage and time cost. Let Ts denote the time cost of the simulation. We have Ts = O(qH). Therefore, the simulator B will solve the CDH problem with (t + Ts, ε/(2√qH)).
This completes the proof of the theorem. 

7.4 Fujisaki-Okamoto Hashed ElGamal Scheme

SysGen: The system parameter generation algorithm takes as input a security


parameter λ . It chooses a cyclic group (G, p, g), selects three cryptographic
hash functions H1 : {0, 1}∗ → Z p , H2 , H3 : {0, 1}∗ → {0, 1}n , and returns the
system parameters SP = (G, p, g, H1 , H2 , H3 ).
KeyGen: The key generation algorithm takes as input the system parameters
SP. It randomly chooses α ∈ Z p , computes g1 = gα , and returns a public/secret
key pair (pk, sk) as follows:

pk = g1 , sk = α.

Encrypt: The encryption algorithm takes as input a message m ∈ {0, 1}n , the
public key pk, and the system parameters SP. It works as follows:
• Choose a random string σ ∈ {0, 1}n .
• Compute C3 = H3 (σ ) ⊕ m and r = H1 (σ ||m||C3 ).
• Compute C1 = gr and C2 = H2 (gr1 ) ⊕ σ .
The ciphertext CT is defined as
 
$$CT = (C_1, C_2, C_3) = \left(g^r,\ H_2(g_1^r) \oplus \sigma,\ H_3(\sigma) \oplus m\right).$$

Decrypt: The decryption algorithm takes as input a ciphertext CT , the secret


key sk, and the system parameters SP. Let CT = (C1 ,C2 ,C3 ). The decryption
works as follows:
• Decrypt σ by computing C2 ⊕ H2(C1^α) = H2(g1^r) ⊕ σ ⊕ H2(g^{αr}) = σ.
• Decrypt the message by computing C3 ⊕ H3 (σ ) = H3 (σ ) ⊕ m ⊕ H3 (σ ) = m.
• Return the message m if C1 = gH1 (σ ||m||C3 ) .
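As a concrete illustration, the following is a minimal toy sketch of the scheme in Python over the order-11 subgroup of Z*_23; the parameters, byte encodings, and the restriction of messages to at most n bytes are our own illustrative assumptions, not a secure instantiation.

```python
# Toy sketch of the Fujisaki-Okamoto transformed hashed ElGamal scheme.
import hashlib, secrets

MOD, p, g, n = 23, 11, 4, 16                         # toy group and sigma length

def Hb(tag: bytes, data: bytes, outlen: int) -> bytes:
    return hashlib.sha256(tag + data).digest()[:outlen]

def H1(data: bytes) -> int:                          # H1 : {0,1}* -> Z_p
    return int.from_bytes(Hb(b"H1", data, 8), "big") % p

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

def encrypt(pk: int, m: bytes):
    sigma = secrets.token_bytes(n)
    C3 = xor(Hb(b"H3", sigma, n), m)                 # C3 = H3(sigma) XOR m
    r = H1(sigma + m + C3)                           # r = H1(sigma || m || C3)
    C1 = pow(g, r, MOD)
    C2 = xor(Hb(b"H2", str(pow(pk, r, MOD)).encode(), n), sigma)
    return C1, C2, C3

def decrypt(sk: int, ct):
    C1, C2, C3 = ct
    sigma = xor(C2, Hb(b"H2", str(pow(C1, sk, MOD)).encode(), n))
    m = xor(C3, Hb(b"H3", sigma, n))
    return m if C1 == pow(g, H1(sigma + m + C3), MOD) else None   # re-encryption check

alpha = secrets.randbelow(p); pk = pow(g, alpha, MOD)
assert decrypt(alpha, encrypt(pk, b"hi")) == b"hi"
```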

Theorem 7.4.0.1 Suppose the hash functions H1 , H2 , H3 are random oracles. If the
CDH problem is hard, the Fujisaki-Okamoto Hashed ElGamal encryption scheme is
provably secure in the IND-CCA security model with reduction loss L = qH2 , where
qH2 is the number of hash queries to the random oracle H2 .
Proof. Suppose there exists an adversary A who can (t, qd , ε)-break the encryption
scheme in the IND-CCA security model. We construct a simulator B to solve the
CDH problem. Given as input a problem instance g, ga , gb over the cyclic group


(G, g, p), B controls the random oracles, runs A , and works as follows.
Setup. Let SP = (G, g, p) and H1 , H2 , H3 be random oracles controlled by the sim-
ulator. B sets the public key as g1 = ga where α = a. The public key is available
from the problem instance.
H-Query. The adversary makes hash queries in this phase. B prepares three hash
lists to record all queries and responses as follows, where the hash lists are empty at
the beginning.

• Let the i-th hash query to H1 be xi . If xi is already in the hash list, B responds to
this query following the hash list. Otherwise, B randomly chooses Xi ∈ Z p , sets
H1 (xi ) = Xi and adds (xi , Xi ) to the hash list.
• Let the i-th hash query to H2 be yi . If yi is already in the hash list, B responds to
this query following the hash list. Otherwise, B randomly chooses Yi ∈ {0, 1}n ,
sets H2 (yi ) = Yi and adds (yi ,Yi ) to the hash list.
• Let the i-th hash query to H3 be zi . If zi is already in the hash list, B responds to
this query following the hash list. Otherwise, B randomly chooses Zi ∈ {0, 1}n ,
sets H3 (zi ) = Zi and adds (zi , Zi ) to the hash list.

Let the number of hash queries to random oracles H1 , H2 , H3 be qH1 , qH2 , qH3 , re-
spectively.
Phase 1. The adversary makes decryption queries in this phase. For a decryption
query on CT = (C1 ,C2 ,C3 ), the simulator searches the hash lists to see whether
there exist three pairs (x, X), (y,Y ), (z, Z) such that

$$\begin{aligned} x &= z\,\|\,m\,\|\,C_3, \\ y &= g_1^X = g_1^{H_1(x)}, \\ C_1 &= g^X = g^{H_1(x)}, \\ C_2 &= Y \oplus z = H_2(y) \oplus z, \\ C_3 &= Z \oplus m = H_3(z) \oplus m. \end{aligned}$$

We have the following cases.


• Case 1. All three queries exist. The simulator returns m as the decryption result.
• Case 2. There exists only one query (x, H1 (x)) = (x, X) to the random oracle H1
satisfying C1 = gH1 (x) . Let x = z||m||C3 . With such a query, the simulator knows
z and can compute y. Then, the simulator adds the queries (y, H2(y)) and (z, H3(z)) to the random oracles. Based on these three queries, the simulator can easily de-
cide whether the queried ciphertext is valid or not. If valid, return the message;
otherwise, return ⊥.
• Case 3. No query exists satisfying the ciphertext structure. Then, the simulator
returns ⊥.
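A sketch of this decryption simulation in Python, where the hash lists L1, L2, L3 are dictionaries from recorded queries to responses and the byte encodings and helper names are our own assumptions:

```python
# Sketch: answer a decryption query (C1, C2, C3) using only the hash lists,
# without the secret key. Queries to H1 are byte strings sigma||m||C3.
import secrets
n = 16                                                # length of sigma (toy choice)

def xor(a, b): return bytes(u ^ v for u, v in zip(a, b))

def simulate_decrypt(CT, L1, L2, L3, g, g1, MOD):
    C1, C2, C3 = CT
    for x, X in list(L1.items()):                     # look for x with C1 = g^{H1(x)}
        if pow(g, X, MOD) != C1 or not x.endswith(C3):
            continue
        sigma, m = x[:n], x[n:len(x) - len(C3)]       # parse x = sigma || m || C3
        y = pow(g1, X, MOD)                           # y = g1^{H1(x)}
        Y = L2.setdefault(y, secrets.token_bytes(n))  # program H2, H3 if not yet queried
        Z = L3.setdefault(sigma, secrets.token_bytes(n))
        if xor(Y, sigma) == C2 and xor(Z, m) == C3:
            return m                                  # consistent ciphertext: return m
        return None                                   # structure matched but invalid
    return None                                       # no matching H1 query: reject
```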

Challenge. A outputs two distinct messages m0 , m1 ∈ {0, 1}n to be challenged. The


simulator randomly chooses R1 , R2 ∈ {0, 1}n and sets the challenge ciphertext CT ∗
as
CT ∗ = (gb , R1 , R2 ),
where gb is from the problem instance. The challenge ciphertext can be seen as an
encryption of the message mc ∈ {m0 , m1 } using the random number σ ∗ if
• H3 (σ ∗ ) ⊕ mc = R2 ,
• H1 (σ ∗ ||mc ||R2 ) = b,
• H2(g1^b) ⊕ σ∗ = R1.

That is, we have
$$CT^* = (g^b, R_1, R_2) = \left(g^b,\ H_2(g_1^b) \oplus \sigma^*,\ H_3(\sigma^*) \oplus m_c\right).$$

The challenge ciphertext is therefore a correct ciphertext from the point of view of
the adversary if it does not query gb1 , σ ∗ , σ ∗ ||mc ||C3 to random oracles H2 , H3 , H1 ,
respectively.
Phase 2. The simulator responds to decryption queries in the same way as in Phase
1 with the restriction that no decryption query is allowed on CT ∗ .
Guess. A outputs a guess or ⊥. The challenge hash query is defined as

Q∗ = gb1 = (gb )α = gab ,

which is a query to the random oracle H2 . The simulator randomly selects one value
y from the hash list (y1 ,Y1 ), (y2 ,Y2 ), · · · , (yqH2 ,YqH2 ) as the challenge hash query. The
simulator can immediately use this hash query to solve the CDH problem.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The decryption simulation is correct except with
negligible probability according to the following analysis.
• For case 1, the simulator can correctly respond to a decryption query on CT =
(C1 ,C2 ,C3 ) following the description in Phase 1.
• For case 2, with (x, H1 (x)) where x = (z||m||C3 ) satisfying C1 = gH1 (x) , the simu-
lator can compute y = g1^{H1(x)} and extract z. Since the adversary did not query y, z
to the random oracles, we have that H2 (y) ⊕C2 and H3 (z) ⊕C3 must be random
in {0, 1}n , so that they are equal to z and m provided by the adversary in x with
negligible probability.
• For case 3 without (x, H1 (x)) satisfying C1 = gH1 (x) , we have the following two
sub-cases.
– C1 = gb . The ciphertext CT for every decryption query must be different from
the challenge ciphertext CT ∗ = (C1∗ ,C2∗ ,C3∗ ). For such a decryption query, the
simulator cannot compute gb1 to simulate the decryption. However, the hash
query H1(σ∗||mc||R2) = b will determine C2∗, C3∗. That is, all ciphertexts where
C1 = gb are invalid except the challenge ciphertext. Therefore, the simulator
can perform the decryption correctly by returning ⊥ to the adversary.
– C1 6= gb . The simulator performs an incorrect decryption simulation for CT
if and only if there exist three hash queries and responses (x, X), (y,Y ), (z, Z)
after this decryption simulation, where these three queries satisfy

$$x = z\,\|\,m\,\|\,C_3, \qquad y = g_1^{H_1(x)}, \qquad C_1 = g^{H_1(x)}, \qquad C_2 = H_2(y) \oplus z, \qquad C_3 = H_3(z) \oplus m.$$

The simulation fails because the decryption returns ⊥ before these hash
queries, but the decryption should return m after these hash queries. Since
H1 (x) = X is randomly chosen, C1 = gH1 (x) holds with negligible probability.
Therefore, the simulator performs the decryption simulation correctly except with
negligible probability. In particular, the decryption response will not generate any
new hash query and its response for the adversary.
The correctness of the simulation including the public key, decryption, and the
challenge ciphertext has been explained above. The randomness of the simulation
includes all random numbers in the key generation, the responses to hash queries,
and the challenge ciphertext generation. They are

a, Xi ,Yi , Zi , b.

According to the setting of the simulation, where a, b, Xi ,Yi , Zi are randomly cho-
sen, it is easy to see that the randomness property holds, and thus the simulation is
indistinguishable from the real attack.
Probability of successful simulation. There is no abort in the simulation, and thus
the probability of successful simulation is 1.
Advantage of breaking the challenge ciphertext. According to the simulation, we
have
• The challenge ciphertext is an encryption of the message m0 if

H2 (gb1 ) = R1 ⊕ σ ∗ , H3 (σ ∗ ) = R2 ⊕ m0 , H1 (σ ∗ ||m0 ||R2 ) = b.

• The challenge ciphertext is an encryption of the message m1 if

H2 (gb1 ) = R1 ⊕ σ ∗ , H3 (σ ∗ ) = R2 ⊕ m1 , H1 (σ ∗ ||m1 ||R2 ) = b.

Without making the challenge query Q∗ = gb1 to the random oracle, the adversary
can break the challenge ciphertext if and only if there exist queries to H3 , H1 satisfy-
ing the equations. Note that the upper bound of the success probability is the sum of
the probability that a query to H3 is answered with R2 ⊕ mc and the probability that
a query to H1 is answered with b. The success probability is at most (2qH3 + qH1 )/p,
which is negligible. Furthermore, any decryption response will not help the adver-
sary obtain additional hash queries or their responses. Therefore, the adversary has
no advantage in breaking the challenge ciphertext except with negligible probability.
Probability of finding solution. According to the definition and simulation, if the
adversary does not query gb1 = gab to the random oracle H2 , the adversary has no
advantage in guessing the encrypted message except with negligible probability.
Since the adversary has advantage ε in guessing the chosen message according to
the breaking assumption, the adversary will query gab to the random oracle with

probability ε according to Lemma 4.11.1. Therefore, a random choice of y from the hash list for H2 will be equal to g^{ab} with probability ε/qH2.
Advantage and time cost. Let Ts denote the time cost of the simulation. We have
Ts = O(qH1 ), which is mainly dominated by the decryption. Therefore, the simulator
B will solve the CDH problem with (t + Ts, ε/qH2).
This completes the proof of the theorem. 
Chapter 8
Public-Key Encryption Without Random
Oracles

In this chapter, we introduce the ElGamal encryption scheme and the Cramer-Shoup
encryption scheme [32]. The first scheme is widely known, and we give it here to
help the reader understand how to analyze the correctness of a security reduction.
The second scheme is the first practical encryption scheme without random oracles
with CCA security. The given schemes and/or proofs may be different from the
original ones.

8.1 ElGamal Scheme

SysGen: The system parameter generation algorithm takes as input a security


parameter λ . It chooses a cyclic group (G, p, g) and returns the system param-
eters SP = (G, p, g).
KeyGen: The key generation algorithm takes as input the system parameters
SP. It randomly chooses α ∈ Z p , computes g1 = gα , and returns a public/secret
key pair (pk, sk) as follows:

pk = g1 , sk = α.

Encrypt: The encryption algorithm takes as input a message m ∈ G, the public


key pk, and the system parameters SP. It chooses a random number r ∈ Z p and
returns the ciphertext CT as
 
$$CT = (C_1, C_2) = \left(g^r,\ g_1^r \cdot m\right).$$

Decrypt: The decryption algorithm takes as input a ciphertext CT , the secret


key sk, and the system parameters SP. Let CT = (C1 ,C2 ). It decrypts the mes-
sage by computing


$$C_2 \cdot C_1^{-\alpha} = g_1^r m \cdot (g^r)^{-\alpha} = m.$$
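A minimal toy sketch in Python, using the same kind of small subgroup of Z*_23 as an illustrative assumption (messages are group elements):

```python
# Toy sketch of the ElGamal scheme (illustrative parameters, not secure).
import secrets

MOD, p, g = 23, 11, 4                                  # toy modulus, order, generator

def keygen():
    alpha = secrets.randbelow(p)
    return pow(g, alpha, MOD), alpha                   # pk = g^alpha, sk = alpha

def encrypt(pk: int, m: int):                          # m is a subgroup element
    r = secrets.randbelow(p)
    return pow(g, r, MOD), (pow(pk, r, MOD) * m) % MOD # CT = (g^r, g1^r * m)

def decrypt(sk: int, ct):
    C1, C2 = ct
    return (C2 * pow(C1, (p - sk) % p, MOD)) % MOD     # m = C2 * C1^{-alpha}

pk, sk = keygen()
m = pow(g, 7, MOD)                                     # some subgroup element
assert decrypt(sk, encrypt(pk, m)) == m
```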

Theorem 8.1.0.1 If the DDH problem is hard, the ElGamal encryption scheme is
provably secure in the IND-CPA security model with reduction loss L = 2.

Proof. Suppose there exists an adversary A who can (t, ε)-break the encryption
scheme in the IND-CPA security model. We construct a simulator B to solve the
DDH problem. Given as input a problem instance (g, ga , gb , Z) over the cyclic group
(G, g, p), B runs A and works as follows.
Setup. Let SP = (G, g, p). B sets the public key as g1 = ga where α = a. The public
key is available from the problem instance.
Challenge. A outputs two distinct messages m0 , m1 ∈ G to be challenged. The
simulator randomly chooses c ∈ {0, 1} and sets the challenge ciphertext CT ∗ as
 
$$CT^* = \left(g^b,\ Z \cdot m_c\right),$$

where gb and Z are from the problem instance. Let r = b. If Z = gab , we have
   
$$CT^* = \left(g^b,\ Z \cdot m_c\right) = \left(g^r,\ g_1^r \cdot m_c\right).$$

Therefore, CT ∗ is a correct challenge ciphertext whose encrypted message is mc .


Guess. A outputs a guess c′ of c. The simulator outputs true if c′ = c. Otherwise,
false.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the key
generation and the challenge ciphertext generation. They are a in the secret key and
b in the challenge ciphertext. According to the setting of the simulation, where a, b
are randomly chosen, it is easy to see that the randomness property holds, and thus
the simulation is indistinguishable from the real attack.
Probability of successful simulation. There is no abort in the simulation, and thus
the probability of successful simulation is 1.
Probability of breaking the challenge ciphertext.
If Z is true, the simulation is indistinguishable from the real attack, and thus the
adversary has probability 1/2 + ε/2 of guessing the encrypted message correctly.
If Z is false, it is easy to see that the challenge ciphertext is a one-time pad
because the message is encrypted using Z, which is random and cannot be calculated
from the other parameters given to the adversary. Therefore, the adversary only has
probability 1/2 of guessing the encrypted message correctly.

Advantage and time cost. The advantage of solving the DDH problem is
$$P_S (P_T - P_F) = \left(\frac{1}{2} + \frac{\varepsilon}{2}\right) - \frac{1}{2} = \frac{\varepsilon}{2}.$$

Let Ts denote the time cost of the simulation. We have Ts = O(1). Therefore, the
simulator B will solve the DDH problem with (t + Ts , ε/2).
This completes the proof of the theorem. 

8.2 Cramer-Shoup Scheme

SysGen: The system parameter generation algorithm takes as input a security


parameter λ . It chooses a cyclic group (G, p, g), selects a cryptographic hash
function H : {0, 1}∗ → Z p , and returns the system parameters SP = (G, p, g, H).
KeyGen: The key generation algorithm takes as input the system parameters
SP. It randomly chooses g1, g2 ∈ G, α1, α2, β1, β2, γ1, γ2 ∈ Z_p, computes u = g_1^{α_1} g_2^{α_2}, v = g_1^{β_1} g_2^{β_2}, h = g_1^{γ_1} g_2^{γ_2}, and returns a public/secret key pair (pk, sk) as follows:
$$pk = (g_1, g_2, u, v, h), \qquad sk = (\alpha_1, \alpha_2, \beta_1, \beta_2, \gamma_1, \gamma_2).$$
Encrypt: The encryption algorithm takes as input a message m ∈ G, the public
key pk, and the system parameters SP. It chooses a random number r ∈ Z p and
returns the ciphertext CT as
 
$$CT = (C_1, C_2, C_3, C_4) = \left(g_1^r,\ g_2^r,\ h^r m,\ u^r v^{wr}\right),$$

where w = H(C1 ,C2 ,C3 ).


Decrypt: The decryption algorithm takes as input a ciphertext CT , the secret
key sk, and the system parameters SP. Let CT = (C1 ,C2 ,C3 ,C4 ). It works as
follows:
• Compute w = H(C1, C2, C3).
• Verify that $C_4 = C_1^{\alpha_1 + w\beta_1} \cdot C_2^{\alpha_2 + w\beta_2}$.
• Return the message m by computing $m = C_3 \cdot C_1^{-\gamma_1} C_2^{-\gamma_2}$.
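A toy Python sketch of the scheme over the order-11 subgroup of Z*_23, with a toy hash into Z_p; all parameter choices and encodings are our own illustrative assumptions:

```python
# Toy sketch of the Cramer-Shoup scheme (illustrative parameters, not secure).
import hashlib, secrets

MOD, p = 23, 11

def Hw(*elems) -> int:                                # toy hash H : {0,1}* -> Z_p
    return int.from_bytes(hashlib.sha256(str(elems).encode()).digest(), "big") % p

def keygen():
    g1, g2 = 4, pow(4, 3, MOD)                        # two fixed subgroup generators (toy)
    a1, a2, b1, b2, c1, c2 = (secrets.randbelow(p) for _ in range(6))
    u = pow(g1, a1, MOD) * pow(g2, a2, MOD) % MOD
    v = pow(g1, b1, MOD) * pow(g2, b2, MOD) % MOD
    h = pow(g1, c1, MOD) * pow(g2, c2, MOD) % MOD
    return (g1, g2, u, v, h), (a1, a2, b1, b2, c1, c2)

def encrypt(pk, m):
    g1, g2, u, v, h = pk
    r = secrets.randbelow(p)
    C1, C2, C3 = pow(g1, r, MOD), pow(g2, r, MOD), pow(h, r, MOD) * m % MOD
    w = Hw(C1, C2, C3)
    C4 = pow(u, r, MOD) * pow(v, w * r % p, MOD) % MOD
    return C1, C2, C3, C4

def decrypt(pk, sk, ct):
    a1, a2, b1, b2, c1, c2 = sk
    C1, C2, C3, C4 = ct
    w = Hw(C1, C2, C3)
    if C4 != pow(C1, (a1 + w * b1) % p, MOD) * pow(C2, (a2 + w * b2) % p, MOD) % MOD:
        return None                                   # reject inconsistent ciphertexts
    inv = pow(pow(C1, c1, MOD) * pow(C2, c2, MOD) % MOD, -1, MOD)
    return C3 * inv % MOD                             # m = C3 * C1^{-gamma1} * C2^{-gamma2}

pk, sk = keygen()
m = pow(4, 5, MOD)
assert decrypt(pk, sk, encrypt(pk, m)) == m
```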

Theorem 8.2.0.1 If the Variant DDH problem is hard, the Cramer-Shoup encryp-
tion scheme is provably secure in the IND-CCA security model with reduction loss
about L = 2.

Proof. Suppose there exists an adversary A who can (t, qd , ε)-break the encryption
scheme in the IND-CCA security model. We construct a simulator B to solve the
Variant DDH problem. Given as input a problem instance X = (g_1, g_2, g_1^{a_1}, g_2^{a_2}) over
the cyclic group (G, g, p), B runs A and works as follows.
Setup. Let SP = (G, g, p, H), where H : {0, 1}∗ → Z p is a cryptographic hash func-
tion. B randomly chooses α1 , α2 , β1 , β2 , γ1 , γ2 ∈ Z p and sets the public key as
 
β β γ γ
pk = (g1 , g2 , u, v, h) = g1 , g2 , gα1 1 gα2 2 , g1 1 g2 2 , g11 g22 .

The public key can be computed from the problem instance and the chosen param-
eters.
Phase 1. The adversary makes decryption queries in this phase. For a decryption
query on CT , since the simulator knows the secret key, it runs the decryption algo-
rithm and returns the decryption result to the adversary.
Challenge. A outputs two distinct messages m0 , m1 ∈ G to be challenged. The
simulator randomly chooses c ∈ {0, 1} and sets the challenge ciphertext CT∗ as
$$CT^* = (C_1^*, C_2^*, C_3^*, C_4^*) = \left(g_1^{a_1},\ g_2^{a_2},\ g_1^{a_1\gamma_1} g_2^{a_2\gamma_2} \cdot m_c,\ (g_1^{a_1})^{\alpha_1 + w^*\beta_1} (g_2^{a_2})^{\alpha_2 + w^*\beta_2}\right),$$
where w∗ = H(C1∗, C2∗, C3∗) and g_1^{a_1}, g_2^{a_2} are from the problem instance. Let r = a1. If a1 = a2, we have
$$CT^* = \left(g_1^{a_1},\ g_2^{a_2},\ g_1^{a_1\gamma_1} g_2^{a_2\gamma_2} \cdot m_c,\ (g_1^{a_1})^{\alpha_1 + w^*\beta_1} (g_2^{a_2})^{\alpha_2 + w^*\beta_2}\right) = \left(g_1^r,\ g_2^r,\ h^r m_c,\ u^r v^{w^* r}\right).$$

Therefore, CT ∗ is a correct challenge ciphertext whose encrypted message is mc .


Phase 2. The simulator responds to decryption queries in the same way as in Phase
1 with the restriction that no decryption query is allowed on CT ∗ .
Guess. A outputs a guess c′ of c. The simulator outputs true if c′ = c. Otherwise,
false.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. According to the setting of the simulation, the sim-
ulator knows the secret key, and thus can perform a decryption simulation indistin-
guishable from the real attack.
The correctness of the simulation including the public key, decryption, and the
challenge ciphertext has been explained above. The randomness of the simulation
includes all random numbers in the key generation and the challenge ciphertext
generation. They are
α1 , α2 , β1 , β2 , γ1 , γ2 , a1 = a2 .
According to the setting of the simulation, where α1 , α2 , β1 , β2 , γ1 , γ2 , a1 are ran-
domly chosen, it is easy to see that the randomness property holds, and thus the
simulation is indistinguishable from the real attack.

Probability of successful simulation. There is no abort in the simulation, and thus


the probability of successful simulation is 1.
Probability of breaking the challenge ciphertext.
If the problem instance is true (that is, a1 = a2), the simulation is indistinguishable from the real attack, and thus the adversary has probability 1/2 + ε/2 of guessing the encrypted message correctly.
If the problem instance is false (that is, a1 ≠ a2), in the following we show that the adversary has success probability at most 1/2 + qd/(p − qd) of guessing the encrypted message.
Let g2 = g_1^z for some integer z ∈ Z_p. The adversary knows

z, α1 + zα2 , β1 + zβ2 , γ1 + zγ2

from the public key and

a1 , a2 , a1 (α1 + w∗ β1 ) + za2 (α2 + w∗ β2 ), a1 γ1 + a2 zγ2 + logg1 mc

from the challenge ciphertext. If γ1 , γ2 are unknown to the adversary, and there is no
decryption query, we have that γ1 +zγ2 and a1 γ1 +a2 zγ2 are random and independent
because the determinant of the following coefficient matrix is nonzero:
$$\begin{vmatrix} 1 & z \\ a_1 & a_2 z \end{vmatrix} = z(a_2 - a_1) \neq 0.$$

In this case, the challenge ciphertext can be seen as a one-time pad encryption of
the message mc from the point of view of the adversary, so that the adversary has
no advantage in guessing the bit c. Next, we show that the decryption queries do not
help the adversary break the challenge ciphertext except with negligible probability.
If a decryption query on CT = (g_1^{r_1}, g_2^{r_2}, C_3, C_4) passes the verification, the decryption will return $C_3 \cdot g_1^{-r_1\gamma_1} g_2^{-r_2\gamma_2}$ to the adversary, and the adversary will know
$$r_1\gamma_1 + r_2 z\gamma_2.$$

If r1 = r2 (the ciphertext is treated as a correct ciphertext), what the adversary knows is equivalent to γ1 + zγ2, and thus the adversary gets no additional information to break the one-time pad property. Otherwise, with γ1 + zγ2 and r1γ1 + r2zγ2, the adversary can compute γ1, γ2 to break the one-time pad property. Therefore, in the remainder of this proof, we prove that any incorrect ciphertext CT = (g_1^{r_1}, g_2^{r_2}, C_3, C_4)
different from the challenge ciphertext will be rejected except with negligible prob-
ability.
• If (gr11 , gr22 ,C3 ) = (C1∗ ,C2∗ ,C3∗ ) and C4 6= C4∗ , such an incorrect ciphertext will be
rejected because only the ciphertext with C4 = C4∗ can pass the verification.
• If (gr11 , gr22 ,C3 ) 6= (C1∗ ,C2∗ ,C3∗ ), we have H(gr11 , gr22 ,C3 ) = w 6= w∗ because the hash
function is secure. This ciphertext can pass the verification if
$$C_4 = g_1^{r_1(\alpha_1+w\beta_1)} \cdot g_2^{r_2(\alpha_2+w\beta_2)} = g_1^{r_1(\alpha_1+w\beta_1)+r_2 z(\alpha_2+w\beta_2)}.$$

That is, it can pass the verification if the adversary can compute the number
r1 (α1 + wβ1 ) + r2 z(α2 + wβ2 ).
According to the simulation, all the parameters associated with α1 , α2 , β1 , β2 in-
cluding the target r1 (α1 + wβ1 ) + r2 z(α2 + wβ2 ) are

α1 + zα2
β1 + zβ2
a1 (α1 + w∗ β1 ) + za2 (α2 + w∗ β2 )
r1 (α1 + wβ1 ) + r2 z(α2 + wβ2 ).

The matrix of the corresponding coefficients for (α1, α2, β1, β2) is
$$\begin{pmatrix} 1 & z & 0 & 0 \\ 0 & 0 & 1 & z \\ a_1 & za_2 & a_1 w^* & za_2 w^* \\ r_1 & zr_2 & r_1 w & zr_2 w \end{pmatrix},$$
where the absolute value of its determinant is z²(r2 − r1)(a2 − a1)(w∗ − w) ≠ 0.
Therefore, r1(α1 + wβ1) + r2z(α2 + wβ2) is random and independent of the other given parameters, so that the adversary has no advantage except with probability 1/p of generating C4 that passes the verification.
When the adversary generates an incorrect ciphertext for a decryption query, the adaptive choice of C4 the first time has success probability 1/p, and the adaptive choice of C4 the second time has success probability 1/(p−1). Therefore, the probability of successfully generating an incorrect ciphertext that can pass the verification is at most qd/(p − qd) for qd decryption queries. The adversary also has probability 1/2 of guessing c correctly from the encryption. Therefore, the adversary has success probability at most 1/2 + qd/(p − qd) of guessing the encrypted message.

Advantage and time cost. The advantage of solving the Variant DDH problem is
$$P_S (P_T - P_F) = \left(\frac{1}{2} + \frac{\varepsilon}{2}\right) - \left(\frac{1}{2} + \frac{q_d}{p - q_d}\right) = \frac{\varepsilon}{2} - \frac{q_d}{p - q_d} \approx \frac{\varepsilon}{2}.$$

Let Ts denote the time cost of the simulation. We have Ts = O(qd ), which is mainly
dominated by the decryption. Therefore, the simulator B will solve the Variant
DDH problem with (t + Ts , ε/2).
This completes the proof of the theorem. 
Chapter 9
Identity-Based Encryption with Random
Oracles

In this chapter, we introduce the Boneh-Franklin IBE scheme [24] under the H-Type,
the Boneh-BoyenRO IBE scheme [20] and the Park-Lee IBE scheme [86] under the
C-Type, and the Sakai-Kasahara IBE scheme [91, 30] under the I-Type. The given
schemes and/or proofs may be different from the original ones.

9.1 Boneh-Franklin Scheme

Setup: The setup algorithm takes as input a security parameter λ . It selects a


pairing group PG = (G, GT , g, p, e), selects two cryptographic hash functions
H1 : {0, 1}∗ → G, H2 : {0, 1}∗ → {0, 1}n , randomly chooses α ∈ Z p , computes
g1 = gα , and returns a master public/secret key pair (mpk, msk) as follows:

mpk = (PG, g1 , H1 , H2 ), msk = α.

KeyGen: The key generation algorithm takes as input an identity ID ∈ {0, 1}∗
and the master key pair (mpk, msk). It returns the private key dID of ID as

dID = H1 (ID)α .

Encrypt: The encryption algorithm takes as input a message m ∈ {0, 1}n , an


identity ID, and the master public key mpk. It chooses a random number r ∈ Z p
and returns the ciphertext CT as
   
$$CT = (C_1, C_2) = \left(g^r,\ H_2\left(e(H_1(ID), g_1)^r\right) \oplus m\right).$$

Decrypt: The decryption algorithm takes as input a ciphertext CT for ID, the
private key dID , and the master public key mpk. Let CT = (C1 ,C2 ). It decrypts


the message by computing

m = H2 (e(C1 , dID )) ⊕C2 .
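Correctness of decryption follows from the bilinearity of the pairing:
$$e(C_1, d_{ID}) = e\left(g^r, H_1(ID)^{\alpha}\right) = e\left(H_1(ID), g^{\alpha}\right)^r = e\left(H_1(ID), g_1\right)^r,$$
so $H_2(e(C_1, d_{ID})) \oplus C_2 = H_2\left(e(H_1(ID), g_1)^r\right) \oplus H_2\left(e(H_1(ID), g_1)^r\right) \oplus m = m$.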

Theorem 9.1.0.1 Suppose the hash functions H1 , H2 are random oracles. If the
BDH problem is hard, the Boneh-Franklin identity-based encryption scheme is prov-
ably secure in the IND-ID-CPA security model with reduction loss L = qH1 qH2 ,
where qH1 , qH2 are the number of hash queries to the random oracles H1 , H2 , re-
spectively.
Proof. Suppose there exists an adversary A who can (t, qk , ε)-break the encryption
scheme in the IND-ID-CPA security model. We construct a simulator B to solve
the BDH problem. Given as input a problem instance (g, ga , gb , gc ) over the pairing
group PG, B controls the random oracles, runs A , and works as follows.
Setup. B sets g1 = ga where α = a. The master public key except two hash func-
tions is therefore available from the problem instance, where the two hash functions
are set as random oracles controlled by the simulator.
H-Query. The adversary makes hash queries in this phase. Before any hash queries
are made, B randomly chooses i∗ ∈ [1, qH1 ], where qH1 denotes the number of hash
queries to the random oracle H1 . Then, B prepares two hash lists to record all
queries and responses as follows, where the hash lists are empty at the beginning.
• Let the i-th hash query to H1 be IDi . If IDi is already in the hash list, B responds
to this query following the hash list. Otherwise, B randomly chooses xi ∈ Z p and
sets H1 (IDi ) as
$$H_1(ID_i) = \begin{cases} g^{x_i} & \text{if } i \neq i^* \\ g^b & \text{otherwise.} \end{cases}$$
The simulator responds to this query with H1 (IDi ) and adds (i, IDi , xi , H1 (IDi ))
to the hash list.
• Let the i-th hash query to H2 be yi . If yi is already in the hash list, B responds to
this query following the hash list. Otherwise, B randomly chooses Yi ∈ {0, 1}n ,
responds to this query with H2 (yi ) = Yi , and adds (yi , H2 (yi )) to the hash list.

Phase 1. The adversary makes private-key queries in this phase. For a private-key
query on IDi , let (i, IDi , xi , H1 (IDi )) be the corresponding tuple. If i = i∗ , abort. Oth-
erwise, according to the simulation, we have H1 (IDi ) = gxi . The simulator computes
dIDi = (ga )xi , which is equal to H1 (IDi )α . Therefore, dIDi is a valid private key for
IDi .
Challenge. A outputs two distinct messages m0 , m1 ∈ {0, 1}n and one identity ID∗
to be challenged. Let the corresponding tuple of a hash query on ID∗ to H1 be
(i, ID∗ , x, H1 (ID∗ )). If i 6= i∗ , abort. Otherwise, we have i = i∗ and H1 (ID∗ ) = gb .

The simulator randomly chooses R ∈ {0, 1}n and sets the challenge ciphertext
CT ∗ as
CT ∗ = (gc , R),
where gc is from the problem instance. The challenge ciphertext can be seen as
an encryption of the message mcoin ∈ {m0 , m1 } using the random number c, if
H2(e(g, g)^{abc}) = R ⊕ mcoin:
$$CT^* = (g^c, R) = \left(g^c,\ H_2\left(e(H_1(ID^*), g_1)^c\right) \oplus m_{coin}\right).$$

The challenge ciphertext is therefore a correct ciphertext from the point of view of
the adversary if there is no hash query on e(g, g)abc to the random oracle H2 .
Phase 2. The simulator responds to private-key queries in the same way as in Phase
1 with the restriction that no private-key query is allowed on ID∗ .
Guess. A outputs a guess or ⊥. The challenge hash query is defined as

Q∗ = e(H1 (ID∗ ), g1 )c = e(g, g)abc ,

which is a query to the random oracle H2 . The simulator randomly selects one value
y from the hash list (y1 ,Y1 ), (y2 ,Y2 ), · · · , (yqH2 ,YqH2 ) as the challenge hash query. The
simulator can immediately use this hash query to solve the BDH problem.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the
master-key generation, the responses to hash queries, the private-key generation,
and the challenge ciphertext generation. They are

a, x1 , · · · , xi∗ −1 , b, xi∗ +1 , · · · , xqH1 ,Y1 ,Y2 , · · · ,YqH2 , c.

According to the setting of the simulation, where all of them are randomly chosen,
the randomness property holds, and thus the simulation is indistinguishable from
the real attack.
Probability of successful simulation. If the identity ID∗ to be challenged is the i∗ -
th identity queried to the random oracle, the adversary cannot query its private key
so that the simulation will be successful in the query phase and the challenge phase.
The success probability is therefore 1/qH1 for qH1 queries to H1 .
Advantage of breaking the challenge ciphertext. According to the simulation, we
have
• The challenge ciphertext is an encryption of the message m0 if
$$H_2\left(e(g,g)^{abc}\right) = R \oplus m_0.$$
• The challenge ciphertext is an encryption of the message m1 if
$$H_2\left(e(g,g)^{abc}\right) = R \oplus m_1.$$

Without making the challenge query Q∗ = e(g, g)abc to the random oracle, the ad-
versary has no advantage in breaking the challenge ciphertext.
Probability of finding solution. According to the definition and simulation, if the
adversary does not query Q∗ = e(g, g)abc to the random oracle H2 , the adversary has
no advantage in guessing the encrypted message. Since the adversary has advan-
tage ε in guessing the chosen message according to the breaking assumption, the
adversary will query e(g, g)abc to the random oracle with probability ε according to
Lemma 4.11.1. Therefore, a random choice of y from the hash list for H2 will be
equal to e(g, g)^{abc} with probability ε/qH2.

Advantage and time cost. Let Ts denote the time cost of the simulation. We
have Ts = O(qH1 + qk ), which is mainly dominated by the oracle response and
the key generation. Therefore, the simulator B will solve the BDH problem with (t + Ts, ε/(qH1 qH2)).
This completes the proof of the theorem. 

9.2 Boneh-BoyenRO Scheme

Setup: The setup algorithm takes as input a security parameter λ . It selects


a pairing group PG = (G, GT , g, p, e), selects a cryptographic hash function
H : {0, 1}∗ → G, randomly chooses g2 ∈ G, α ∈ Z p , computes g1 = gα , and
returns a master public/secret key pair (mpk, msk) as follows:

mpk = (PG, g1 , g2 , H), msk = α.

KeyGen: The key generation algorithm takes as input an identity ID ∈ {0, 1}∗
and the master key pair (mpk, msk). It randomly chooses r ∈ Z p and returns the
private key dID of ID as
 
$$d_{ID} = (d_1, d_2) = \left(g_2^{\alpha} H(ID)^r,\ g^r\right).$$

Encrypt: The encryption algorithm takes as input a message m ∈ GT , an iden-


tity ID, and the master public key mpk. It chooses a random number s ∈ Z p and
returns the ciphertext CT as
 
$$CT = (C_1, C_2, C_3) = \left(H(ID)^s,\ g^s,\ e(g_1, g_2)^s \cdot m\right).$$

Decrypt: The decryption algorithm takes as input a ciphertext CT for ID,


the private key dID = (d1 , d2 ), and the master public key mpk. Let CT =
(C1, C2, C3). It decrypts the message by computing
$$C_3 \cdot \frac{e(d_2, C_1)}{e(d_1, C_2)} = e(g_1, g_2)^s m \cdot \frac{e\left(g^r, H(ID)^s\right)}{e\left(g_2^{\alpha} H(ID)^r,\ g^s\right)} = e(g_1, g_2)^s m \cdot \frac{1}{e(g_1, g_2)^s} = m.$$

Theorem 9.2.0.1 Suppose the hash function H is a random oracle. If the DBDH
problem is hard, the Boneh-BoyenRO identity-based encryption scheme is provably
secure in the IND-ID-CPA security model with reduction loss L = 2qH , where qH is
the number of hash queries to the random oracle.
Proof. Suppose there exists an adversary A who can (t, qk , ε)-break the encryption
scheme in the IND-ID-CPA security model. We construct a simulator B to solve the
DBDH problem. Given as input a problem instance (g, ga , gb , gc , Z) over the pairing
group PG, B controls the random oracle, runs A , and works as follows.
Setup. B sets g1 = ga , g2 = gb where α = a. The master public key except the hash
function is therefore available from the problem instance, where the hash function
is set as a random oracle controlled by the simulator.
H-Query. The adversary makes hash queries in this phase. Before any hash queries
are made, B randomly chooses i∗ ∈ [1, qH ], where qH denotes the number of hash
queries to the random oracle. Then, B prepares a hash list to record all queries and
responses as follows, where the hash list is empty at the beginning.
Let the i-th hash query to H be IDi . If IDi is already in the hash list, B responds
to this query following the hash list. Otherwise, B randomly chooses xi ∈ Z p and
sets H(IDi ) as
$$H(ID_i) = \begin{cases} g^{b+x_i} & \text{if } i \neq i^* \\ g^{x_i} & \text{otherwise.} \end{cases}$$
The simulator B responds to this query with H(IDi ) and adds the tuple (i, IDi , xi ,
H(IDi )) to the hash list.
Phase 1. The adversary makes private-key queries in this phase. For a private-key
query on IDi , let (i, IDi , xi , H(IDi )) be the corresponding tuple. If i = i∗ , abort. Oth-
erwise, according to the simulation, we have H(IDi ) = gb+xi .
The simulator randomly chooses r′i ∈ Z_p and computes dIDi as
$$d_{ID_i} = \left((g^a)^{-x_i} \cdot H(ID_i)^{r'_i},\ (g^a)^{-1} g^{r'_i}\right).$$
Let $r_i = -a + r'_i$. We have
$$g_2^{\alpha} H(ID_i)^{r_i} = g^{ab} \cdot \left(g^{b+x_i}\right)^{-a+r'_i} = (g^a)^{-x_i} \cdot H(ID_i)^{r'_i}, \qquad g^{r_i} = (g^a)^{-1} g^{r'_i}.$$

Therefore, dIDi is a valid private key for IDi .


Challenge. A outputs two distinct messages m0 , m1 ∈ GT and one identity ID∗
to be challenged. Let the corresponding tuple of a hash query on ID∗ to H be

(i, ID∗ , x∗ , H(ID∗ )). If i 6= i∗ , abort. Otherwise, we have i = i∗ and H(ID∗ ) = gx .
The simulator randomly chooses coin ∈ {0, 1} and sets the challenge ciphertext
CT ∗ as

CT ∗ = (gcx , gc , Z · mcoin ).
Let s = c. If Z = e(g, g)^{abc}, we have

    CT* = (g^{c·x*}, g^c, Z · m_coin) = (H(ID*)^c, g^c, e(g1, g2)^c · m_coin).

Therefore, CT ∗ is a correct challenge ciphertext for ID∗ whose encrypted message


is mcoin .
Phase 2. The simulator responds to private-key queries in the same way as in Phase
1 with the restriction that no private-key query is allowed on ID∗ .
Guess. A outputs a guess coin0 of coin. The simulator outputs true if coin0 = coin.
Otherwise, false.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the
master-key generation, the responses to hash queries, the private-key generation,
and the challenge ciphertext generation. They are

    a, b,  b + x1, ..., b + x_{i*−1},  x_{i*},  b + x_{i*+1}, ..., b + x_{qH},  −a + ri′,  c.

According to the setting of the simulation, where a, b, c, xi , ri0 are randomly chosen,
it is easy to see that the randomness property holds, and thus the simulation is indis-
tinguishable from the real attack.
Probability of successful simulation. If the identity ID∗ to be challenged is the i∗ -
th identity queried to the random oracle, the adversary cannot query its private key
so that the simulation will be successful in the query phase and the challenge phase.
The success probability is therefore 1/qH for qH queries to H.
Probability of breaking the challenge ciphertext.
If Z is true, the simulation is indistinguishable from the real attack, and thus the
adversary has probability 1/2 + ε/2 of guessing the encrypted message correctly.
If Z is false, it is easy to see that the challenge ciphertext is a one-time pad
because the message is encrypted using Z, which is random and cannot be calculated
from the other parameters given to the adversary. Therefore, the adversary only has
probability 1/2 of guessing the encrypted message correctly.
Advantage and time cost. The advantage of solving the DBDH problem is
    PS (PT − PF) = (1/qH) · (1/2 + ε/2 − 1/2) = ε/(2qH).

Let Ts denote the time cost of the simulation. We have Ts = O(qH + qk), which is mainly dominated by the oracle response and the key generation. Therefore, the simulator B will solve the DBDH problem with (t + Ts, ε/(2qH)).

This completes the proof of the theorem. 

9.3 Park-Lee Scheme

Setup: The setup algorithm takes as input a security parameter λ. It selects a pairing group PG = (G, GT, g, p, e), selects a cryptographic hash function H: {0, 1}* → G, randomly chooses g2 ∈ G, α ∈ Zp, computes g1 = g^α, and returns a master public/secret key pair (mpk, msk) as follows:

    mpk = (PG, g1, g2, H),   msk = α.

KeyGen: The key generation algorithm takes as input an identity ID ∈ {0, 1}* and the master key pair (mpk, msk). It randomly chooses r, tk ∈ Zp and returns the private key dID of ID as

    dID = (d1, d2, d3, d4) = (g2^{α+r}, g^r, (H(ID) g2^{tk})^r, tk).

Encrypt: The encryption algorithm takes as input a message m ∈ GT, an identity ID, and the master public key mpk. It randomly chooses s, tc ∈ Zp and returns the ciphertext CT as

    CT = (C1, C2, C3, C4) = ((H(ID) g2^{tc})^s, g^s, tc, e(g1, g2)^s · m).

Decrypt: The decryption algorithm takes as input a ciphertext CT for ID, the private key dID = (d1, d2, d3, d4), and the master public key mpk. Let CT = (C1, C2, C3, C4). If C3 = d4, output ⊥. Otherwise, it decrypts the message by computing

    C4 · (e(d2, C1)/e(d3, C2))^{1/(C3−d4)} · 1/e(d1, C2)
        = e(g1, g2)^s m · (e(H(ID), g)^{rs} e(g2, g)^{rs·tc} / (e(H(ID), g)^{rs} e(g2, g)^{rs·tk}))^{1/(tc−tk)} · 1/(e(g1, g2)^s · e(g2, g)^{rs})
        = e(g1, g2)^s m · e(g2, g)^{rs} · 1/(e(g1, g2)^s · e(g2, g)^{rs}) = m.

Theorem 9.3.0.1 Suppose the hash function H is a random oracle. If the DBDH
problem is hard, the Park-Lee identity-based encryption scheme is provably secure
in the IND-ID-CPA security model with reduction loss L = 2.
Proof. Suppose there exists an adversary A who can (t, qk , ε)-break the encryption
scheme in the IND-ID-CPA security model. We construct a simulator B to solve the
DBDH problem. Given as input a problem instance (g, ga , gb , gc , Z) over the pairing
group PG, B controls the random oracle, runs A , and works as follows.
Setup. B sets g1 = ga , g2 = gb where α = a. The master public key except the hash
function is therefore available from the problem instance, where the hash function
is set as a random oracle controlled by the simulator.
H-Query. The adversary makes hash queries in this phase. B prepares a hash list
to record all queries and responses as follows, where the hash list is empty at the
beginning.
For a query on ID, B randomly chooses xID , yID ∈ Z p and sets H(ID) as

    H(ID) = g^{yID} · g2^{−xID}.

The simulator B responds to this query with H(ID) and adds (ID, xID, yID, H(ID)) to the hash list.
Phase 1. The adversary makes private-key queries in this phase. For a private-key
query on ID, let (ID, xID , yID , H(ID)) be the corresponding tuple in the hash list.
The simulator randomly chooses r′ ∈ Zp and computes dID as

    dID = ((g^b)^{r′}, g^{r′}(g^a)^{−1}, g^{r′·yID}(g^a)^{−yID}, xID).

Let r = r′ − a and tk = xID. We have

    g2^{α+r} = g^{b(a+r′−a)} = (g^b)^{r′},
    g^r = g^{r′−a} = g^{r′}(g^a)^{−1},
    (H(ID) g2^{tk})^r = (g^{yID} g2^{−xID} g2^{xID})^{r′−a} = g^{r′·yID}(g^a)^{−yID},
    tk = xID.

Therefore, dID is a valid private key for ID.


Challenge. A outputs two distinct messages m0 , m1 ∈ GT and one identity ID∗
to be challenged. Let the corresponding tuple of a hash query on ID∗ to H be
(ID∗ , xID∗ , yID∗ , H(ID∗ )).
The simulator randomly chooses coin ∈ {0, 1} and sets the challenge ciphertext CT* as

    CT* = (g^{c·yID*}, g^c, xID*, Z · m_coin).

Let s = c and tc = xID*. If Z = e(g, g)^{abc}, we have

    CT* = (g^{c·yID*}, g^c, xID*, Z · m_coin) = ((H(ID*) g2^{tc})^s, g^s, tc, e(g1, g2)^s · m_coin).

Therefore, CT ∗ is a correct challenge ciphertext for ID∗ whose encrypted message


is mcoin .
Phase 2. The simulator responds to private-key queries in the same way as in Phase
1 with the restriction that no private-key query is allowed on ID∗ .
Guess. A outputs a guess coin0 of coin. The simulator outputs true if coin0 = coin.
Otherwise, false.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the
master-key generation, the responses to hash queries, the private-key generation,
and the challenge ciphertext generation. They are

    mpk : a, b,
    H(ID) : yID − xID·b,
    (r, tk) : r′ − a, xID,
    CT* : c, xID*.

According to the setting of the simulation, where a, b, c, xID , yID , r0 are randomly
chosen, it is easy to see that the randomness property holds, and thus the simulation
is indistinguishable from the real attack.
Probability of successful simulation. There is no abort in the simulation, and thus
the probability of successful simulation is 1.
Probability of breaking the challenge ciphertext.
If Z is true, the simulation is indistinguishable from the real attack, and thus the
adversary has probability 1/2 + ε/2 of guessing the encrypted message correctly.
If Z is false, it is easy to see that the challenge ciphertext is a one-time pad
because the message is encrypted using Z, which is random and cannot be calculated
from the other parameters given to the adversary. Therefore, the adversary only has
probability 1/2 of guessing the encrypted message correctly.
Advantage and time cost. The advantage of solving the DBDH problem is
    PS (PT − PF) = 1/2 + ε/2 − 1/2 = ε/2.
Let Ts denote the time cost of the simulation. We have Ts = O(qH + qk ), which is
mainly dominated by the oracle response and the key generation. Therefore, the
simulator B will solve the DBDH problem with (t + Ts, ε/2).


This completes the proof of the theorem. 



9.4 Sakai-Kasahara Scheme

Setup: The setup algorithm takes as input a security parameter λ. It selects a pairing group PG = (G, GT, g, p, e), selects two cryptographic hash functions H1: {0, 1}* → Zp and H2: {0, 1}* → {0, 1}^n, randomly chooses h ∈ G, α ∈ Zp, computes g1 = g^α, and returns a master public/secret key pair (mpk, msk) as follows:

    mpk = (PG, g1, h, H1, H2),   msk = α.

KeyGen: The key generation algorithm takes as input an identity ID ∈ {0, 1}* and the master key pair (mpk, msk). It returns the private key dID of ID as

    dID = h^{1/(α+H1(ID))}.

Encrypt: The encryption algorithm takes as input a message m ∈ {0, 1}^n, an identity ID, and the master public key mpk. It chooses a random number r ∈ Zp and returns the ciphertext CT as

    CT = (C1, C2) = ((g1 g^{H1(ID)})^r, H2(e(g, h)^r) ⊕ m).

Decrypt: The decryption algorithm takes as input a ciphertext CT for ID, the private key dID, and the master public key mpk. Let CT = (C1, C2). It decrypts the message by computing

    C2 ⊕ H2(e(C1, dID)) = m.
Theorem 9.4.0.1 Suppose the hash functions H1 , H2 are random oracles. If the q-
BDHI problem is hard, the Sakai-Kasahara identity-based encryption scheme is
provably secure in the IND-ID-CPA security model with reduction loss L = qH1 qH2 ,
where qH1 , qH2 are the number of hash queries to the random oracles H1 , H2 , re-
spectively.
Proof. Suppose there exists an adversary A who can (t, qk , ε)-break the encryption
scheme in the IND-ID-CPA security model. We construct a simulator B to solve
the q-BDHI problem. Given as input a problem instance (g, g^a, g^{a^2}, ..., g^{a^q}) over the
pairing group PG, B controls the random oracles, runs A , and works as follows.
Setup. B randomly chooses w∗ , w1 , w2 , · · · , wq ∈ Z p . Let f (x) be the polynomial in
Z p [x] defined as
    f(x) = ∏_{i=1}^{q} (x − w* + wi).

We have that f (x) does not contain the root zero.



The simulator B sets



    g1 = g^{a−w*},   h = g^{f(a)},
where α = a − w∗ . We require q = qH1 , where qH1 is the number of hash queries
to H1 . The master public key except two hash functions can be computed from the
problem instance and the chosen parameters, where the two hash functions are set
as random oracles controlled by the simulator.
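It is worth making concrete how h = g^{f(a)} is assembled from the problem instance alone: the simulator expands f(x) into coefficients f0, ..., fq and combines the given powers g^{a^i}. The toy Python sketch below is our own illustration; it uses a tiny multiplicative group (p = 23 with an order-11 subgroup) in place of a pairing group, and all parameter values are arbitrary.

    p, r, g = 23, 11, 4          # toy group: g generates the order-r subgroup of Z_p^*

    def poly_mul(f1, f2, mod):
        # Multiply two polynomials given as coefficient lists (lowest degree first).
        out = [0] * (len(f1) + len(f2) - 1)
        for i, c1 in enumerate(f1):
            for j, c2 in enumerate(f2):
                out[i + j] = (out[i + j] + c1 * c2) % mod
        return out

    a, w_star = 7, 3
    ws = [2, 5, 9]               # w_1, ..., w_q
    f = [1]
    for wi in ws:                # expand f(x) = prod_i (x - w* + w_i)
        f = poly_mul(f, [(wi - w_star) % r, 1], r)

    powers = [pow(g, pow(a, i, r), p) for i in range(len(f))]   # the instance elements g^{a^i}
    h = 1
    for g_ai, f_i in zip(powers, f):                            # h = prod_i (g^{a^i})^{f_i}
        h = h * pow(g_ai, f_i, p) % p

    f_a = sum(f_i * pow(a, i, r) for i, f_i in enumerate(f)) % r   # f(a), known here only for checking
    assert h == pow(g, f_a, p)

The same combine-the-powers step is used again in Phase 1 to compute the private keys g^{fIDi(a)}.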
H-Query. The adversary makes hash queries in this phase. Before any hash queries
are made, B randomly chooses i∗ ∈ [1, qH1 ]. Then, B prepares two hash lists to
record all queries and responses as follows, where the hash lists are empty at the
beginning.
• Let the i-th hash query to H1 be IDi . If IDi is already in the hash list, B responds
to this query following the hash list. Otherwise, B sets H1 (IDi ) as

    H1(IDi) = wi   if i ≠ i*,
    H1(IDi) = w*   otherwise,

where wi , w∗ are random numbers chosen in the setup phase. In the simulation,
wi∗ is not used in the responses to hash queries. The simulator B responds
to this query with H1 (IDi ) and adds the tuple (i, IDi , wi , H1 (IDi )) or the tuple
(i∗ , IDi∗ , w∗ , H1 (IDi∗ )) to the hash list.
• Let the i-th hash query to H2 be yi . If yi is already in the hash list, B responds to
this query following the hash list. Otherwise, B randomly chooses Yi ∈ {0, 1}n ,
responds to this query with H2 (yi ) = Yi , and adds (yi , H2 (yi )) to the hash list.

Phase 1. The adversary makes private-key queries in this phase. For a private-key
query on ID, let (i, IDi, wi, H1(IDi)) be the corresponding tuple. If i = i*, abort. Otherwise, according to the simulation, H1(IDi) = wi. Let fIDi(x) be defined as

    fIDi(x) = f(x) / (x − w* + wi).

We have that fIDi(x) is a polynomial in x according to the definition of f(x).
    The simulator computes dIDi = g^{fIDi(a)} from g, g^a, ..., g^{a^q} and fIDi(x). Since

    g^{fIDi(a)} = g^{f(a)/(a−w*+wi)} = h^{1/(α+H1(IDi))},

we have that dIDi is a valid private key for IDi .


Challenge. A outputs two distinct messages m0 , m1 ∈ {0, 1}n and one identity ID∗
to be challenged. Let the corresponding tuple of a hash query on ID∗ to H1 be
(i, ID∗ , w∗ , H1 (ID∗ )). If i 6= i∗ , abort. Otherwise, we have i = i∗ and H1 (ID∗ ) = w∗ .
The simulator randomly chooses r′ ∈ Zp and R ∈ {0, 1}^n, and sets the challenge ciphertext CT* as

    CT* = (g^{r′}, R).

The challenge ciphertext can be seen as an encryption of the message mc ∈ {m0, m1} using the random number r* = r′/a if H2(e(g, h)^{r*}) = R ⊕ mc:

    CT* = (g^{r′}, R) = ((g1 g^{H1(ID*)})^{r*}, H2(e(g, h)^{r*}) ⊕ mc).

The challenge ciphertext is therefore a correct ciphertext from the point of view of the adversary, if there is no hash query on e(g, h)^{r*} to the random oracle.
Phase 2. The simulator responds to private-key queries in the same way as in Phase
1 with the restriction that no private-key query is allowed on ID∗ .
Guess. A outputs a guess or ⊥. The challenge hash query is defined as

    Q* = e(g, h)^{r*} = e(g, g)^{r′·f(a)/a},

which is a query to the random oracle H2. The simulator randomly selects one value y from the hash list (y1, Y1), (y2, Y2), ..., (y_{qH2}, Y_{qH2}) as the challenge hash query. We define

    r′·f(x)/x = F(x) + d/x,

where F(x) is a (q − 1)-degree polynomial, and d is a nonzero integer according to the fact that x ∤ f(x). The simulator can use this hash query to compute

    (Q* / e(g, g)^{F(a)})^{1/d} = (e(g, g)^{r′·f(a)/a} / e(g, g)^{F(a)})^{1/d} = (e(g, g)^{d/a})^{1/d} = e(g, g)^{1/a}

as the solution to the q-BDHI problem instance.
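The division r′f(x)/x = F(x) + d/x is only a shift of coefficients: d = r′·f0 (nonzero because x ∤ f(x)) and F(x) collects the terms r′·f_{i+1}·x^i. The following Python sketch, our own check with toy numbers, verifies the exponent arithmetic behind this extraction, i.e., that (r′f(a)/a − F(a))/d = 1/a modulo the group order; no pairing is needed for the identity itself.

    r = 2**31 - 1                     # toy prime standing in for the group order p
    f = [5, 11, 7, 1]                 # toy coefficients f0 + f1*x + f2*x^2 + f3*x^3, with f0 != 0
    a, r_prime = 123456789, 987654321

    d = r_prime * f[0] % r
    F = [r_prime * c % r for c in f[1:]]             # coefficients of F(x)
    f_a = sum(c * pow(a, i, r) for i, c in enumerate(f)) % r
    F_a = sum(c * pow(a, i, r) for i, c in enumerate(F)) % r

    Q_exponent = (r_prime * f_a % r) * pow(a, -1, r) % r    # exponent of Q* = e(g,g)^{r' f(a)/a}
    extracted = (Q_exponent - F_a) * pow(d, -1, r) % r      # divide out e(g,g)^{F(a)}, take the 1/d power
    assert extracted == pow(a, -1, r)                       # equals 1/a, the exponent of the q-BDHI solution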


This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the
master-key generation, the responses to hash queries, and the challenge ciphertext
generation. They are
    mpk : a − w*,  f(a) = ∏_{i=1}^{q} (a − w* + wi),
    H1(ID) : w1, ..., w_{i*−1}, w*, w_{i*+1}, ..., wq,
    CT* : r′/a.
According to the setting of the simulation, where a, w1 , · · · , wq , w∗ , r0 are randomly
chosen, it is easy to see that the randomness property holds, and thus the simulation
is indistinguishable from the real attack.

Probability of successful simulation. If the identity ID∗ to be challenged is the i∗ -


th identity queried to the random oracle, the adversary cannot query its private key
so that the simulation will be successful in the query phase and the challenge phase.
The success probability is therefore 1/qH1 for qH1 queries to H1 .
Advantage of breaking the challenge ciphertext. According to the simulation, we
have
• The challenge ciphertext is an encryption of the message m0 if

    H2(e(g, h)^{r*}) = R ⊕ m0.

• The challenge ciphertext is an encryption of the message m1 if

    H2(e(g, h)^{r*}) = R ⊕ m1.

Without making the challenge query Q* = e(g, h)^{r*} to the random oracle, the adver-
sary has no advantage in breaking the challenge ciphertext.
Probability of finding solution. According to the definition and simulation, if the adversary does not query Q* = e(g, h)^{r*} to the random oracle H2, the adversary has no advantage in guessing the encrypted message. Since the adversary has advantage ε in guessing the chosen message according to the breaking assumption, the adversary will query e(g, h)^{r*} to the random oracle with probability ε according to Lemma 4.11.1. Therefore, a random choice of y from the hash list for H2 will be equal to e(g, h)^{r*} with probability ε/qH2.

Advantage and time cost. Let Ts denote the time cost of the simulation. We have Ts = O(qk qH1), which is mainly dominated by the key generation. Therefore, the simulator B will solve the q-BDHI problem with (t + Ts, ε/(qH1 qH2)).
This completes the proof of the theorem. 
Chapter 10
Identity-Based Encryption Without Random
Oracles

In this chapter, we start by introducing the Boneh-Boyen IBE scheme [20], which is
selectively secure under the DBDH assumption. Then, we introduce a variant of the
Boneh-Boyen IBE scheme that achieves CCA security without the use of one-time signatures
[28], using a chameleon hash function [94] instead. Then, we introduce the Waters IBE
scheme [101] and the Gentry IBE scheme [47], which are fully secure under
the C-Type and the I-Type, respectively. The given schemes and/or proofs may be
different from the original ones.

10.1 Boneh-Boyen Scheme

Setup: The setup algorithm takes as input a security parameter λ . It selects


a pairing group PG = (G, GT , g, p, e), randomly chooses g2 , h, u, v ∈ G, α ∈
Z p , computes g1 = gα , selects a cryptographic hash function H : {0, 1}∗ →
Z p , selects a strongly unforgeable one-time signature scheme S , and returns a
master public/secret key pair (mpk, msk) as follows:

mpk = (PG, g1 , g2 , h, u, v, H, S ), msk = α.

KeyGen: The key generation algorithm takes as input an identity ID ∈ Z p and


the master key pair (mpk, msk). It chooses a random number r ∈ Z p . It returns
the private key dID of ID as
 
    dID = (d1, d2) = (g2^α (h u^{ID})^r, g^r).

Encrypt: The encryption algorithm takes as input a message m ∈ GT , an iden-


tity ID, and the master public key mpk. The encryption works as follows:


• Run the key generation algorithm of S to generate a key pair (opk, osk).
• Choose a random number s ∈ Zp and compute

    (C1, C2, C3, C4) = ((h u^{ID})^s, (h v^{H(opk)})^s, g^s, e(g1, g2)^s · m).

• Run the signing algorithm of S to sign (C1 ,C2 ,C3 ,C4 ) using the secret key
osk. Let the corresponding signature be σ .
The final ciphertext on m is

    CT = (C1, C2, C3, C4, C5, C6) = ((h u^{ID})^s, (h v^{H(opk)})^s, g^s, e(g1, g2)^s · m, σ, opk).

Decrypt: The decryption algorithm takes as input a ciphertext CT = (C1 ,C2 ,


C3 ,C4 , C5 ,C6 ) for ID, the private key dID , and the master public key mpk. It
works as follows:
• Verify that C5 is a signature of (C1 ,C2 ,C3 ,C4 ) using the public key C6 .
• Choose a random number t ∈ Z p and compute dID|opk as
 
    dID|opk = (d1′, d2′, d3′) = (g2^α (h u^{ID})^r (h v^{H(opk)})^t, g^r, g^t).

• Decrypt the message by computing


   
    (e(C1, d2′) · e(C2, d3′) / e(C3, d1′)) · C4
        = (e((h u^{ID})^s, g^r) · e((h v^{H(opk)})^s, g^t) / e(g^s, g2^α (h u^{ID})^r (h v^{H(opk)})^t)) · C4
        = e(g1, g2)^{−s} · e(g1, g2)^s m
        = m.

Theorem 10.1.0.1 If the DBDH problem is hard and the adopted one-time sig-
nature scheme is strongly unforgeable, the Boneh-Boyen identity-based encryption
scheme is provably secure in the IND-sID-CCA security model with reduction loss
L = 2.
Proof. Suppose there exists an adversary A who can (t, qk , qd , ε)-break the encryp-
tion scheme in the IND-sID-CCA security model. We construct a simulator B to
solve the DBDH problem. Given as input a problem instance (g, ga , gb , gc , Z) over
the pairing group PG, B runs A and works as follows.
Init: The adversary outputs an identity ID∗ ∈ Z p to be challenged.
Setup. Let H be a cryptographic hash function, and S be a secure one-time signa-
ture scheme. B simulates other parameters in the master public key as follows.

• Run S to generate a key pair (opk∗ , osk∗ ).


• Randomly choose x1 , x2 , x3 ∈ Z p .
• Set the master public key as
    g1 = g^a,  g2 = g^b,  h = g^{−b+x1},  u = g^{b/ID* + x2},  v = g^{b/H(opk*) + x3},

where α = a and a, b are from the problem instance.


The master public key can therefore be computed from the problem instance and the
chosen parameters.
According to the above simulation, we have
    h u^{ID} = g^{b(ID/ID* − 1) + (x1 + ID·x2)},
    h v^{H(opk)} = g^{b(H(opk)/H(opk*) − 1) + (x1 + H(opk)·x3)}.

Phase 1. The adversary makes private-key queries and decryption queries in this
phase.
For a private-key query on ID 6= ID∗ , let huID = gw1 b+w2 . The simulator randomly
chooses r′ ∈ Zp and computes dID as

    dID = ((g^a)^{−w2/w1} (h u^{ID})^{r′}, (g^a)^{−1/w1} g^{r′}).

Let r = −a/w1 + r′. We have

    g2^α (h u^{ID})^r = g^{ab} · (g^{w1·b+w2})^{−a/w1 + r′} = (g^a)^{−w2/w1} (h u^{ID})^{r′},
    g^r = (g^a)^{−1/w1} g^{r′}.

Therefore, dID is a valid private key for ID.
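Once more the key point is that the unknown g^{ab} cancels. The following symbolic check (our own sketch) verifies that with r = −a/w1 + r′ the exponent of g2^α (h u^{ID})^r equals the exponent of the simulated d1, writing h u^{ID} = g^{w1·b+w2} with w1 ≠ 0.

    import sympy as sp

    a, b, w1, w2, r_prime = sp.symbols("a b w1 w2 r_prime")
    r = -a/w1 + r_prime
    real_d1 = a*b + (w1*b + w2)*r                       # exponent of g2^alpha (h u^ID)^r
    simulated_d1 = -(w2/w1)*a + (w1*b + w2)*r_prime     # exponent of (g^a)^(-w2/w1) (h u^ID)^(r')
    assert sp.simplify(real_d1 - simulated_d1) == 0

Note that w1 = ID/ID* − 1 is nonzero exactly when ID ≠ ID*, which is why this simulation works for every identity other than the challenge identity.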


For a decryption query on (ID,CT ), let CT = (C1 ,C2 ,C3 ,C4 ,C5 ,C6 ). If the ci-
phertext passes the verification, C6 = opk must not be equal to opk∗ according to the
definition and simulation. We also have H(opk) 6= H(opk∗ ). Let hvH(opk) = gw3 b+w4 .
The simulator randomly chooses r, t′ ∈ Zp and computes dID|opk = (d1′, d2′, d3′) as

    dID|opk = ((h u^{ID})^r (g^a)^{−w4/w3} (h v^{H(opk)})^{t′}, g^r, (g^a)^{−1/w3} g^{t′}).

Let t = −a/w3 + t′. We have

    g2^α (h u^{ID})^r (h v^{H(opk)})^t = g^{ab} · (h u^{ID})^r · (g^{w3·b+w4})^{−a/w3 + t′}
                                       = (h u^{ID})^r · (g^a)^{−w4/w3} (h v^{H(opk)})^{t′},
    g^t = (g^a)^{−1/w3} g^{t′}.

Therefore, dID|opk is valid for ID|opk. The simulator uses this private key to decrypt
the ciphertext CT .
Challenge. A outputs two distinct messages m0 , m1 ∈ GT to be challenged. The
simulator randomly chooses coin ∈ {0, 1} and sets the challenge ciphertext CT* as

    CT* = (C1*, C2*, C3*, C4*, C5*, C6*)
        = (g^{c(x1 + ID*·x2)}, g^{c(x1 + H(opk*)·x3)}, g^c, Z · m_coin, σ*, opk*),

where σ* is a signature of (C1*, C2*, C3*, C4*) using osk*. Let s = c. If Z = e(g, g)^{abc}, we have

    (h u^{ID*})^s = g^{bc(ID*/ID* − 1) + c(x1 + ID*·x2)} = (g^c)^{x1 + ID*·x2},
    (h v^{H(opk*)})^s = g^{bc(H(opk*)/H(opk*) − 1) + c(x1 + H(opk*)·x3)} = (g^c)^{x1 + H(opk*)·x3},
    g^s = g^c,
    e(g1, g2)^s · m_coin = Z · m_coin.

Therefore, CT ∗ is a correct challenge ciphertext for ID∗ whose encrypted message


is mcoin .
Phase 2. The simulator responds to private-key queries and decryption queries in
the same way as in Phase 1 with the restriction that no private-key query is allowed
on ID∗ and no decryption query is allowed on (ID∗ ,CT ∗ ).
Guess. A outputs a guess coin0 of coin. The simulator outputs true if coin0 = coin.
Otherwise, false.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. According to the assumption of a strongly unforge-
able one-time signature scheme, the adversary cannot make a decryption query on
a ciphertext whose opk is equal to opk∗ . If opk 6= opk∗ , the simulator can always
generate the corresponding private key to decrypt the ciphertext. Therefore, the sim-
ulator performs a perfect decryption simulation.
The correctness of the simulation has been explained above. The randomness
of the simulation includes all random numbers in the master-key generation, the
private-key generation, and the challenge ciphertext generation. They are

    mpk : a, b, −b + x1, b/ID* + x2, b/H(opk*) + x3,
    skID : −a/w1 + r′,
    CT* : c.

According to the setting of the simulation, where a, b, x1 , x2 , x3 , r0 , c are randomly


chosen, it is easy to see that the randomness property holds, and thus the simulation
is indistinguishable from the real attack.
Probability of successful simulation. There is no abort in the simulation, and thus
the probability of successful simulation is 1.
Probability of breaking the challenge ciphertext.
If Z is true, the simulation is indistinguishable from the real attack, and thus the
adversary has probability 1/2 + ε/2 of guessing the encrypted message correctly.
If Z is false, and there is no decryption query, it is easy to see that the challenge
ciphertext is a one-time pad because the message is encrypted using Z, which is
random and unknown to the adversary. In the simulation, Z is independent of the
challenge decryption key. Therefore, the decryption queries cannot help the adver-
sary find Z to break the challenge ciphertext. The adversary only has probability 1/2
of guessing the encrypted message correctly.
Advantage and time cost. The advantage of solving the DBDH problem is
    PS (PT − PF) = 1/2 + ε/2 − 1/2 = ε/2.
Let Ts denote the time cost of the simulation. We have Ts = O(qk + qd ), which is
mainly dominated by the key generation and the decryption. Therefore, the simula-
tor B will solve the DBDH problem with (t + Ts, ε/2).


This completes the proof of the theorem. 

10.2 Boneh-Boyen+ Scheme

Setup: The setup algorithm takes as input a security parameter λ . It selects a


pairing group PG = (G, GT , g, p, e), randomly chooses g2 , h, u, v, w ∈ G, α ∈
Z p , computes g1 = gα , selects a cryptographic hash function H : {0, 1}∗ → Z p ,
and returns a master public/secret key pair (mpk, msk) as follows:

mpk = (PG, g1 , g2 , h, u, v, w, H), msk = α.

KeyGen: The key generation algorithm takes as input an identity ID ∈ Z p and


the master key pair (mpk, msk). It chooses a random number r ∈ Z p and returns
the private key dID of ID as
 
dID = (d1 , d2 ) = gα2 (huID )r , gr .

Encrypt: The encryption algorithm takes as input a message m ∈ GT , an iden-


tity ID, and the master public key mpk. It chooses random numbers s, z ∈ Z p
and returns the ciphertext as

    CT = (C1, C2, C3, C4, C5) = ((h u^{ID})^s, (h v^{H(C)} w^z)^s, g^s, e(g1, g2)^s · m, z),

where C = (C1 ,C3 ,C4 ).


Decrypt: The decryption algorithm takes as input a ciphertext CT =
(C1 ,C2 ,C3 ,C4 ,C5 ) for ID, the private key dID , and the master public key mpk.
Let C = (C1 ,C3 ,C4 ). It works as follows.
• Verify that
 
    e(C1, g) = e(h u^{ID}, C3)   and   e(C2, g) = e(h v^{H(C)} w^{C5}, C3).


• Choose a random number t ∈ Zp and compute dID|CT as

    dID|CT = (d1′, d2′, d3′) = (g2^α (h u^{ID})^r (h v^{H(C)} w^{C5})^t, g^r, g^t).

• Decrypt the message by computing

    (e(C1, d2′) · e(C2, d3′) / e(C3, d1′)) · C4
        = (e((h u^{ID})^s, g^r) · e((h v^{H(C)} w^{C5})^s, g^t) / e(g^s, g2^α (h u^{ID})^r (h v^{H(C)} w^{C5})^t)) · C4
        = e(g1, g2)^{−s} · e(g1, g2)^s m
        = m.

Theorem 10.2.0.1 If the DBDH problem is hard, the Boneh-Boyen+ identity-based


encryption scheme is provably secure in the IND-sID-CCA security model with re-
duction loss L = 2.
Proof. Suppose there exists an adversary A who can (t, qk , qd , ε)-break the encryp-
tion scheme in the IND-sID-CCA security model. We construct a simulator B to
solve the DBDH problem. Given as input a problem instance (g, ga , gb , gc , Z) over
the pairing group PG, B runs A and works as follows.
Init: The adversary outputs an identity ID∗ ∈ Z p to be challenged.
Setup. Let H be a cryptographic hash function. The simulator B randomly chooses
x1 , x2 , x3 , x4 , y1 , y2 ∈ Z p and sets the master public key as
    g1 = g^a,  g2 = g^b,  h = g^{−b+x1},  u = g^{b/ID* + x2},  v = g^{y1·b + x3},  w = g^{y2·b + x4},

where α = a and a, b are from the problem instance. The master public key can
therefore be computed from the problem instance and the chosen parameters.
According to the above simulation, we have
    h u^{ID} = g^{b(ID/ID* − 1) + (x1 + ID·x2)},
    h v^{H(C)} w^z = g^{b(y1·H(C) + y2·z − 1) + (x1 + H(C)·x3 + z·x4)}.

Phase 1. The adversary makes private-key queries and decryption queries in this
phase.
For a private-key query on ID 6= ID∗ , let huID = gw1 b+w2 . The simulator randomly
chooses r′ ∈ Zp and computes dID as

    dID = ((g^a)^{−w2/w1} (h u^{ID})^{r′}, (g^a)^{−1/w1} g^{r′}).

Let r = −a/w1 + r′. We have

    g2^α (h u^{ID})^r = g^{ab} · (g^{w1·b+w2})^{−a/w1 + r′} = (g^a)^{−w2/w1} (h u^{ID})^{r′},
    g^r = (g^a)^{−1/w1} g^{r′}.

Therefore, dID is a valid private key for ID.


For a decryption query on (ID,CT ), let CT = (C1 ,C2 ,C3 ,C4 ,C5 ) and C =
(C1 ,C3 ,C4 ). If ID 6= ID∗ , the simulator can generate the corresponding private key to
perform decryption. For ID = ID∗ , the simulator continues the following decryption
if it passes the verification. If y1 H(C) + y2C5 − 1 = 0, abort the simulation. Other-
wise, let hvH(C) wC5 = gw3 b+w4 . We have w3 6= 0. The simulator randomly chooses
r, t′ ∈ Zp and computes dID|CT = (d1′, d2′, d3′) as

    dID|CT = ((h u^{ID})^r (g^a)^{−w4/w3} (h v^{H(C)} w^{C5})^{t′}, g^r, (g^a)^{−1/w3} g^{t′}).

Let t = −a/w3 + t′. We have

    g2^α (h u^{ID})^r (h v^{H(C)} w^{C5})^t = g^{ab} · (h u^{ID})^r · (g^{w3·b+w4})^{−a/w3 + t′}
                                            = (h u^{ID})^r · (g^a)^{−w4/w3} (h v^{H(C)} w^{C5})^{t′},
    g^t = (g^a)^{−1/w3} g^{t′}.

Therefore, dID|CT is valid for ID|CT . The simulator uses this private key to decrypt
the ciphertext CT .
Challenge. A outputs two distinct messages m0 , m1 ∈ GT to be challenged. The
simulator randomly chooses coin ∈ {0, 1} and sets the challenge ciphertext CT* as

    CT* = (C1*, C2*, C3*, C4*, C5*)
        = (g^{c(x1 + ID*·x2)}, g^{c(x1 + H(C*)·x3 + z*·x4)}, g^c, Z · m_coin, z*),

where C* = (C1*, C3*, C4*), and z* is the value satisfying y1·H(C*) + y2·z* − 1 = 0. Let s = c. If Z = e(g, g)^{abc}, we have

    (h u^{ID*})^s = g^{bc(ID*/ID* − 1) + c(x1 + ID*·x2)} = (g^c)^{x1 + ID*·x2},
    (h v^{H(C*)} w^{C5*})^s = g^{bc(y1·H(C*) + y2·z* − 1) + c(x1 + H(C*)·x3 + z*·x4)} = (g^c)^{x1 + H(C*)·x3 + z*·x4},
    g^s = g^c,
    e(g1, g2)^s · m_coin = Z · m_coin.

Therefore, CT ∗ is a correct challenge ciphertext for ID∗ whose encrypted message


is mcoin .
Phase 2. The simulator responds to private-key queries and decryption queries in
the same way as in Phase 1 with the restriction that no private-key query is allowed
on ID∗ and no decryption query is allowed on (ID∗ ,CT ∗ ).
Guess. A outputs a guess coin0 of coin. The simulator outputs true if coin0 = coin.
Otherwise, false.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. According to the setting of the simulation, given a
ciphertext CT = (C1 ,C2 ,C3 ,C4 ,C5 ) for a decryption query, the simulator can per-
form a perfect decryption simulation if ID 6= ID∗ . If ID = ID∗ , we have the follow-
ing cases.
• y1 H(C) + y2C5 − 1 = 0. In this case, the simulator has to abort because it cannot
compute the corresponding private key dID∗ |CT for decryption.
• y1 H(C) + y2C5 − 1 6= 0. In this case, the simulator can compute the private key
dID∗ |CT for decryption.

If the adversary has no advantage in computing C5 = (1 − y1·H(C))/y2, the simulator will perform the decryption simulation successfully except with negligible probability.
    From the given parameters, the adversary knows the following information:

    a, b, −b + x1, b/ID* + x2, y1·b + x3, y2·b + x4, −a/w1 + r′, c, (1 − y1·H(C*))/y2.

It is not hard to see that the above integers and (1 − y1·H(C))/y2 are random and independent
because x1 , x2 , x3 , x4 , y1 , y2 are randomly chosen.
The correctness of the simulation has been explained above. The randomness
of the simulation includes all random numbers in the master-key generation, the
private-key generation, and the challenge ciphertext generation. They are
    a, b, −b + x1, b/ID* + x2, y1·b + x3, y2·b + x4, −a/w1 + r′, c, (1 − y1·H(C*))/y2.
They are random and independent according to the above analysis of the decryption
simulation. Therefore, the simulation is indistinguishable from the real attack.
Probability of successful simulation. There is no abort in the simulation except when C5 = (1 − y1·H(C))/y2 in a decryption query, and thus the probability of successful simulation is 1 − qd/(p − qd) ≈ 1, where the i-th adaptive choice of C5 is equal to the random number (1 − y1·H(C))/y2 with probability 1/(p − i + 1).
Probability of breaking the challenge ciphertext.
If Z is true, the simulation is indistinguishable from the real attack, and thus the
adversary has probability 1/2 + ε/2 of guessing the encrypted message correctly.
If Z is false, and there is no decryption query, it is easy to see that the challenge
ciphertext is a one-time pad because the message is encrypted using Z, which is
random and unknown to the adversary. In the simulation, Z is independent of the
challenge decryption key. Therefore, the decryption queries cannot help the adver-
sary find Z to break the challenge ciphertext. The adversary only has probability 1/2
of guessing the encrypted message correctly.
Advantage and time cost. The advantage of solving the DBDH problem is
    PS (PT − PF) = 1/2 + ε/2 − 1/2 = ε/2.
Let Ts denote the time cost of the simulation. We have Ts = O(qk + qd ), which is
mainly dominated by the key generation and the decryption. Therefore, the simula-
tor B will solve the DBDH problem with (t + Ts, ε/2).


This completes the proof of the theorem. 

10.3 Waters Scheme

Setup: The setup algorithm takes as input a security parameter λ . It selects a


pairing group PG = (G, GT , g, p, e), randomly chooses g2 , u0 , u1 , u2 , · · · , un ∈
G, α ∈ Z p , computes g1 = gα , and returns a master public/secret key pair
(mpk, msk) as follows:

mpk = (PG, g1 , g2 , u0 , u1 , · · · , un ), msk = α.

KeyGen: The key generation algorithm takes as input an identity ID ∈ {0, 1}n
and the master key pair (mpk, msk). Let ID[i] be the i-th bit of ID. It chooses a
random number r ∈ Z p . It returns the private key dID of ID as
    dID = (d1, d2) = (g2^α (u0 ∏_{i=1}^{n} ui^{ID[i]})^r, g^r).

Encrypt: The encryption algorithm takes as input a message m ∈ GT , an iden-


tity ID, and the master public key mpk. It chooses a random number s ∈ Z p and
returns the ciphertext CT as
    CT = ((u0 ∏_{i=1}^{n} ui^{ID[i]})^s, g^s, e(g1, g2)^s · m).

Decrypt: The decryption algorithm takes as input a ciphertext CT for ID, the
private key dID , and the master public key mpk. Let CT = (C1 ,C2 ,C3 ). It de-
crypts the message by computing
 
    (e(C1, d2)/e(C2, d1)) · C3 = (e((u0 ∏_{i=1}^{n} ui^{ID[i]})^s, g^r) / e(g^s, g2^α (u0 ∏_{i=1}^{n} ui^{ID[i]})^r)) · C3 = e(g1, g2)^{−s} · e(g1, g2)^s m = m.

Theorem 10.3.0.1 If the DBDH problem is hard, the Waters identity-based encryp-
tion scheme is provably secure in the IND-ID-CPA security model with reduction
loss L = 8(n + 1)qk , where qk is the number of private-key queries.
Proof. Suppose there exists an adversary A who can (t, qk , ε)-break the encryption
scheme in the IND-ID-CPA security model. We construct a simulator B to solve the
DBDH problem. Given as input a problem instance (g, ga , gb , gc , Z) over the pairing
group PG, B runs A and works as follows.
Setup. B sets q = 2qk and randomly chooses k, x0 , x1 , · · · , xn , y0 , y1 , · · · , yn satisfying

k ∈ [0, n],
x0 , x1 , · · · , xn ∈ [0, q − 1],
y0 , y1 , · · · , yn ∈ Z p .

It sets the master public key as

    g1 = g^a,  g2 = g^b,  u0 = g^{−kqa + x0·a + y0},  ui = g^{xi·a + yi},

where α = a. The master public key can therefore be computed from the problem
instance and the chosen parameters.
We define F(ID), J(ID), K(ID) as
    F(ID) = −kq + x0 + ∑_{i=1}^{n} ID[i] · xi,
    J(ID) = y0 + ∑_{i=1}^{n} ID[i] · yi,
    K(ID) = 0 if x0 + ∑_{i=1}^{n} ID[i] · xi = 0 mod q, and K(ID) = 1 otherwise.

Then, we have

    u0 ∏_{i=1}^{n} ui^{ID[i]} = g^{F(ID)·a + J(ID)}.

Phase 1. The adversary makes private-key queries in this phase. For a private-key
query on ID, if K(ID) = 0, the simulator aborts. Otherwise, B randomly chooses
r′ ∈ Zp and computes the private key dID as

    dID = (d1, d2) = (g2^{−J(ID)/F(ID)} (u0 ∏_{i=1}^{n} ui^{ID[i]})^{r′}, g2^{−1/F(ID)} g^{r′}).

We have that dID is computable using g, g1, F(ID), J(ID), r′, ID and the master public key.
    Let r = −b/F(ID) + r′. We have

    g2^α (u0 ∏_{i=1}^{n} ui^{ID[i]})^r = g^{ab} (g^{F(ID)·a + J(ID)})^{−b/F(ID) + r′}
                                       = g^{ab} · g^{−ab + r′·F(ID)·a − (J(ID)/F(ID))·b + J(ID)·r′}
                                       = g^{−(J(ID)/F(ID))·b} g^{r′(F(ID)·a + J(ID))}
                                       = g2^{−J(ID)/F(ID)} (u0 ∏_{i=1}^{n} ui^{ID[i]})^{r′},
    g^r = g^{−b/F(ID) + r′} = g2^{−1/F(ID)} g^{r′}.

Therefore, dID is a valid private key for ID.
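The algebra above can also be confirmed symbolically. The following sketch (our own) checks that with r = −b/F + r′ the exponent of g2^α (u0 ∏ ui^{ID[i]})^r equals that of g2^{−J/F} (u0 ∏ ui^{ID[i]})^{r′}, writing u0 ∏ ui^{ID[i]} = g^{F·a+J} and assuming F = F(ID) ≠ 0, which holds whenever K(ID) = 1.

    import sympy as sp

    a, b, F, J, r_prime = sp.symbols("a b F J r_prime")
    r = -b/F + r_prime
    real_d1 = a*b + (F*a + J)*r                        # exponent of g2^alpha (u0 prod ui^ID[i])^r
    simulated_d1 = -(J/F)*b + (F*a + J)*r_prime        # exponent of g2^(-J/F) (u0 prod ui^ID[i])^(r')
    assert sp.simplify(real_d1 - simulated_d1) == 0    # the unknown ab term cancels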


Challenge. A outputs two distinct messages m0 , m1 ∈ GT and one identity ID∗ ∈
{0, 1}n to be challenged. According to the encryption and the simulation, if F(ID∗ ) 6=
0, abort. Otherwise, we have F(ID∗ ) = 0 and
    u0 ∏_{i=1}^{n} ui^{ID*[i]} = g^{F(ID*)·a + J(ID*)} = g^{J(ID*)}.

The simulator randomly chooses coin ∈ {0, 1} and sets the challenge ciphertext CT* as

    CT* = (C1*, C2*, C3*) = ((g^c)^{J(ID*)}, g^c, Z · m_coin),

where c, Z are from the problem instance. Let s = c. If Z = e(g, g)^{abc}, we have

    (u0 ∏_{i=1}^{n} ui^{ID*[i]})^s = (g^{J(ID*)})^c = (g^c)^{J(ID*)},
    g^s = g^c,
    e(g1, g2)^s · m_coin = Z · m_coin.

Therefore, CT ∗ is a correct challenge ciphertext for ID∗ whose encrypted message


is mcoin .
Phase 2. The simulator responds to private-key queries in the same way as in Phase
1 with the restriction that no private-key query is allowed on ID∗ .
Guess. A outputs a guess coin0 of coin. The simulator outputs true if coin0 = coin.
Otherwise, false.
This completes the simulation and the solution. The correctness is analyzed as
follows.
Indistinguishable simulation. The correctness of the simulation has been explained
above. The randomness of the simulation includes all random numbers in the
master-key generation, the private-key generation, and the challenge ciphertext gen-
eration. They are

    mpk : a, b, −kqa + x0·a + y0, x1·a + y1, x2·a + y2, ..., xn·a + yn,
    dID : −b/F(ID) + r′,
    CT* : c.

It is easy to see that the randomness property holds because a, b, y0 , y1 , · · · , yn , r0 , c


are randomly chosen. Therefore, the simulation is indistinguishable from the real
attack.
Probability of successful simulation. The simulation is successful if no abort oc-
curs in the query phase or the challenge phase. That is,

K(ID1 ) = 1, K(ID2 ) = 1, · · · , K(IDqk ) = 1, F(ID∗ ) = 0.

We have
    0 ≤ x0 + ∑_{i=1}^{n} ID[i]·xi ≤ (n + 1)(q − 1),

where the range [0, (n + 1)(q − 1)] contains integers 0q, 1q, 2q, · · · , nq (n < q).
Let X = x0 + ∑_{i=1}^{n} ID[i]·xi. Since all xi and k are randomly chosen, we have

    Pr[F(ID*) = 0] = Pr[X = 0 mod q] · Pr[X = kq | X = 0 mod q] = 1/((n + 1)q).

Since the pair (IDi , ID∗ ) for any i differ on at least one bit, K(IDi ) and F(ID∗ ) differ
on the coefficient of at least one x j so that

    Pr[K(IDi) = 0 | F(ID*) = 0] = 1/q.
Based on the above results, we have

    Pr[K(ID1) = 1 ∧ ... ∧ K(IDqk) = 1 ∧ F(ID*) = 0]
        = Pr[K(ID1) = 1 ∧ ... ∧ K(IDqk) = 1 | F(ID*) = 0] · Pr[F(ID*) = 0]
        = (1 − Pr[K(ID1) = 0 ∨ ... ∨ K(IDqk) = 0 | F(ID*) = 0]) · Pr[F(ID*) = 0]
        ≥ (1 − ∑_{i=1}^{qk} Pr[K(IDi) = 0 | F(ID*) = 0]) · Pr[F(ID*) = 0]
        = (1/((n + 1)q)) · (1 − qk/q)
        = 1/(4(n + 1)qk).
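The bound can also be checked empirically. The Python sketch below is our own Monte Carlo estimate with small toy parameters (it is not part of the proof): it samples the simulator's secret k and x0, ..., xn, draws qk random queried identities and a distinct challenge identity, and counts how often no abort occurs; the measured frequency should be at least roughly 1/(4(n + 1)qk).

    import random

    n, q_k = 8, 4
    q = 2 * q_k                    # as set by the simulator

    def X(ID, x):                  # X = x0 + sum_i ID[i]*x_i
        return x[0] + sum(bit * xi for bit, xi in zip(ID, x[1:]))

    trials, good = 200000, 0
    for _ in range(trials):
        k = random.randrange(0, n + 1)
        x = [random.randrange(0, q) for _ in range(n + 1)]
        queried = [[random.randrange(2) for _ in range(n)] for _ in range(q_k)]
        target = [random.randrange(2) for _ in range(n)]
        while target in queried:   # the challenge identity differs from every queried identity
            target = [random.randrange(2) for _ in range(n)]
        no_abort = all(X(ID, x) % q != 0 for ID in queried)   # K(ID_i) = 1 for every query
        hit = (X(target, x) == k * q)                         # F(ID*) = 0
        if no_abort and hit:
            good += 1

    print(good / trials, ">= about", 1 / (4 * (n + 1) * q_k))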

Probability of breaking the challenge ciphertext.


If Z is true, the simulation is indistinguishable from the real attack, and thus the
adversary has probability 1/2 + ε/2 of guessing the encrypted message correctly.
If Z is false, it is easy to see that the challenge ciphertext is a one-time pad
because the message is encrypted using Z, which is random and unknown to the ad-
versary. Therefore, the adversary only has probability 1/2 of guessing the encrypted
message correctly.
Advantage and time cost. The advantage of solving the DBDH problem is
    PS (PT − PF) = (1/(4(n + 1)qk)) · (1/2 + ε/2 − 1/2) = ε/(8(n + 1)qk).

Let Ts denote the time cost of the simulation. We have Ts = O(qk), which is mainly dominated by the key generation. Therefore, the simulator B will solve the DBDH problem with (t + Ts, ε/(8(n + 1)qk)).
This completes the proof of the theorem. 

10.4 Gentry Scheme

Setup: The setup algorithm takes as input a security parameter λ . It selects


a pairing group PG = (G, GT , g, p, e), selects a cryptographic hash function
H : {0, 1}∗ → Z p , randomly chooses α, β1 , β2 , β3 ∈ Z p , computes g1 = gα , h1 =
gβ1 , h2 = gβ2 , h3 = gβ3 , and returns a master public/secret key pair (mpk, msk)
as follows:

mpk = (PG, g1 , h1 , h2 , h3 , H), msk = (α, β1 , β2 , β3 ).

KeyGen: The key generation algorithm takes as input an identity ID ∈ Z p and


the master key pair (mpk, msk). It chooses random numbers r1 , r2 , r3 ∈ Z p and
returns the private key dID of ID as

    dID = (d1, d2, d3, d4, d5, d6) = (r1, g^{(β1−r1)/(α−ID)}, r2, g^{(β2−r2)/(α−ID)}, r3, g^{(β3−r3)/(α−ID)}).

We require the same random numbers r1 , r2 , r3 for the same identity ID.
Encrypt: The encryption algorithm takes as input a message m ∈ GT , an iden-
tity ID, and the master public key mpk. It chooses a random number s ∈ Z p and
returns the ciphertext CT as
    CT = (C1, C2, C3, C4) = ((g1 g^{−ID})^s, e(g, g)^s, e(h3, g)^s · m, e(h1, g)^s e(h2, g)^{sw}),

where w = H(C1 ,C2 ,C3 ).


Decrypt: The decryption algorithm takes as input a ciphertext CT for ID, the
private key dID , and the master public key mpk. Let CT = (C1 ,C2 ,C3 ,C4 ). It
works as follows:
• Compute w = H(C1 ,C2 ,C3 ).
• Verify that

    e(C1, d2 d4^w) · C2^{d1 + d3·w} = e(g^{(α−ID)s}, g^{(β1−r1+w(β2−r2))/(α−ID)}) · e(g, g)^{s(r1 + r2·w)} = C4.

• Decrypt the message by computing

    C3 / (e(C1, d6) · C2^{d5}) = e(h3, g)^s · m / (e(g^{(α−ID)s}, g^{(β3−r3)/(α−ID)}) · e(g, g)^{s·r3}) = m.

Theorem 10.4.0.1 If the q-DABDHE problem is hard, the Gentry identity-based


encryption scheme is provably secure in the IND-ID-CCA security model with re-
duction loss L = 2.

Proof. Suppose there exists an adversary A who can (t, qk , qd , ε)-break the encryp-
tion scheme in the IND-ID-CCA security model. We construct a simulator B to
solve the q-DABDHE problem. Given as input a problem instance (g0, g0^{a^{q+2}}, g, g^a, g^{a^2}, ..., g^{a^q}, Z) over the pairing group PG, B runs A and works as follows.
Setup. B randomly chooses three q-degree polynomials F1 (x), F2 (x), F3 (x) in Z p [x].
It sets the master public key as

    g1 = g^a,  h1 = g^{F1(a)},  h2 = g^{F2(a)},  h3 = g^{F3(a)},

where α = a, β1 = F1(α), β2 = F2(α), β3 = F3(α), and we require q = qk + 1. The


master public key can therefore be computed from the problem instance and the
chosen parameters.
In the following simulation, we assume that the queried ID for a private-key
query, a decryption query, and challenge is not equal to α = a, which can be verified
by gID = g1 . Otherwise, we can use a to solve the hard problem immediately.
Phase 1. The adversary makes private-key queries and decryption queries in this
phase.
For a private-key query on ID, let fID,i (x) for all i ∈ {1, 2, 3} be defined as

    fID,i(x) = (Fi(x) − Fi(ID)) / (x − ID).

We have that fID,i(x) for all i ∈ {1, 2, 3} are polynomials.
    The simulator computes the private key dID as

    dID = (F1(ID), g^{fID,1(a)}, F2(ID), g^{fID,2(a)}, F3(ID), g^{fID,3(a)}),

which is computable from g, g^a, g^{a^2}, ..., g^{a^q} and fID,1(x), fID,2(x), fID,3(x). Let r1 = F1(ID), r2 = F2(ID), r3 = F3(ID). We have

    ri = Fi(ID), i ∈ {1, 2, 3},
    g^{(βi−ri)/(α−ID)} = g^{(Fi(α)−Fi(ID))/(α−ID)} = g^{fID,i(a)}, i ∈ {1, 2, 3}.

Therefore, dID is a valid private key for ID.
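That (Fi(x) − Fi(ID))/(x − ID) is a polynomial, and how its coefficients are obtained, is ordinary synthetic division: by the remainder theorem, x − ID divides Fi(x) − Fi(ID) exactly. The Python sketch below is our own illustration over a toy prime field; the resulting quotient coefficients are what the simulator combines with the powers g, g^a, ..., g^{a^q} to form g^{fID,i(a)}.

    r = 2**31 - 1                            # toy prime standing in for the group order p
    F = [17, 4, 9, 1]                        # toy F(x) = 17 + 4x + 9x^2 + x^3 (degree q = 3)
    ID = 42

    F_ID = sum(c * pow(ID, i, r) for i, c in enumerate(F)) % r
    numerator = F[:]                         # coefficients of F(x) - F(ID)
    numerator[0] = (numerator[0] - F_ID) % r

    # Synthetic division of F(x) - F(ID) by (x - ID), highest coefficient first.
    quotient = [0] * (len(F) - 1)
    carry = 0
    for i in range(len(F) - 1, 0, -1):
        carry = (numerator[i] + carry * ID) % r
        quotient[i - 1] = carry
    remainder = (numerator[0] + carry * ID) % r
    assert remainder == 0                    # x - ID divides F(x) - F(ID) exactly

    # Spot-check the identity F(a) - F(ID) = (a - ID) * f_ID(a) at a point a.
    a = 123456
    f_ID_a = sum(c * pow(a, i, r) for i, c in enumerate(quotient)) % r
    F_a = sum(c * pow(a, i, r) for i, c in enumerate(F)) % r
    assert (F_a - F_ID) % r == (a - ID) * f_ID_a % r

The same quotient construction reappears in the challenge phase, where the coefficients fi of (x^{q+2} − (ID*)^{q+2})/(x − ID*) are used to build C2*.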


For a decryption query on (ID,CT ), the simulator runs the private-key simulation
as above to compute dID and runs the decryption algorithm on CT using dID .
Challenge. A outputs two distinct messages m0, m1 ∈ GT and one identity ID* ∈ Zp to be challenged. Let dID* = (d1*, d2*, d3*, d4*, d5*, d6*) be the private key for
ID∗ , which is also computable. The simulator randomly chooses c ∈ {0, 1} and sets
the challenge ciphertext CT ∗ = (C1∗ ,C2∗ ,C3∗ ,C4∗ ) as

    C1* = g0^{a^{q+2} − (ID*)^{q+2}},
    C2* = Z · e(g0, ∏_{i=0}^{q} g^{fi·a^i}),
    C3* = e(C1*, d6*) · (C2*)^{d5*} · mc,
    C4* = e(C1*, d2*(d4*)^{w*}) · (C2*)^{d1* + d3*·w*},

where w* = H(C1*, C2*, C3*) and fi is the coefficient of x^i in the polynomial

    (x^{q+2} − (ID*)^{q+2}) / (x − ID*).
Let s = (log_g g0) · (a^{q+2} − (ID*)^{q+2})/(a − ID*). If Z = e(g0, g)^{a^{q+1}}, we have

    (g1 g^{−ID*})^s = g^{(α−ID*)s} = g0^{a^{q+2} − (ID*)^{q+2}},
    e(g, g)^s = e(g, g)^{(log_g g0) · (a^{q+2} − (ID*)^{q+2})/(a − ID*)}
              = e(g0, g)^{a^{q+1}} · ∏_{i=0}^{q} e(g0, g)^{fi·a^i}
              = Z · e(g0, ∏_{i=0}^{q} g^{fi·a^i}),
    e(h3, g)^s · mc = e(g^{(α−ID*)s}, g^{(β3−d5*)/(α−ID*)}) · (e(g, g)^s)^{d5*} · mc
                    = e(C1*, d6*) · (C2*)^{d5*} · mc,
    e(h1, g)^s e(h2, g)^{s·w*} = e(g^{(α−ID*)s}, g^{(β1−d1*+w*(β2−d3*))/(α−ID*)}) · (e(g, g)^s)^{d1*+d3*·w*}
                               = e(C1*, d2*(d4*)^{w*}) · (C2*)^{d1*+d3*·w*}.

Therefore, CT ∗ is a correct challenge ciphertext for ID∗ whose encrypted message


is mc .
Phase 2. The simulator responds to private-key queries and decryption queries in
the same way as in Phase 1 with the restriction that no private-key query is allowed
on ID∗ and no decryption query is allowed on (ID∗ ,CT ∗ ).
Guess. A outputs a guess c0 of c. The simulator outputs true if c0 = c. Otherwise,
false.
This completes the simulation and the solution. The correctness is analyzed as
follows.

Indistinguishable simulation. According to the setting of the simulation, the sim-


ulator can compute any private key, and thus it performs the decryption simulation
correctly.
The correctness of the simulation has been explained above. The randomness
of the simulation includes all random numbers in the master-key generation, the
private-key generation, and the challenge ciphertext generation. They are

    mpk : a, F1(a), F2(a), F3(a),
    r1, r2, r3 : F1(ID), F2(ID), F3(ID),
    s : (log_g g0) · (a^{q+2} − (ID*)^{q+2})/(a − ID*).
It is easy to see that the randomness property holds because a, F1 (x), F2 (x), F3 (x), logg g0
are randomly chosen with the help of the following discussion, where F1 (x), F2 (x), F3 (x)
are (qk +1)-degree polynomials. Therefore, the simulation is indistinguishable from
the real attack.
To prove that the randomness property holds, we only need to prove that Fi (ID∗ )
and Fi (a), Fi (ID j ) for all i ∈ {1, 2, 3} and j ∈ [1, qk ] are random and independent.
Without loss of generality, let Fi (x) = F(x) be the polynomial defined as

    F(x) = x_q·x^q + x_{q−1}·x^{q−1} + · · · + x_1·x + x_0.

Since the polynomial is randomly chosen, all xi are random and independent. We
also have

    F(a) = x_q·a^q + x_{q−1}·a^{q−1} + · · · + x_1·a + x_0,
    F(ID*) = x_q·(ID*)^q + x_{q−1}·(ID*)^{q−1} + · · · + x_1·ID* + x_0,
    F(ID1) = x_q·(ID1)^q + x_{q−1}·(ID1)^{q−1} + · · · + x_1·ID1 + x_0,
    F(ID2) = x_q·(ID2)^q + x_{q−1}·(ID2)^{q−1} + · · · + x_1·ID2 + x_0,
    · · ·
    F(IDqk) = x_q·(IDqk)^q + x_{q−1}·(IDqk)^{q−1} + · · · + x_1·IDqk + x_0,

which can be rewritten as

    (F(a), F(ID*), F(ID1), ..., F(IDqk))^T = M · (x_q, x_{q−1}, ..., x_1, x_0)^T,

where M is the matrix

    [ a^q        a^{q−1}       ...  a      1 ]
    [ (ID*)^q    (ID*)^{q−1}   ...  ID*    1 ]
    [ (ID1)^q    (ID1)^{q−1}   ...  ID1    1 ]
    [ (ID2)^q    (ID2)^{q−1}   ...  ID2    1 ]
    [ ...        ...           ...  ...  ... ]
    [ (IDqk)^q   (IDqk)^{q−1}  ...  IDqk   1 ]

The matrix M is a (qk + 2) × (qk + 2) Vandermonde matrix, and its determinant is, up to sign,

    ∏_{yi, yj ∈ {a, ID*, ID1, ID2, ..., IDqk}, i < j} (yi − yj) ≠ 0.

Therefore, Fi (ID∗ ) and Fi (a), Fi (ID j ) for all i ∈ {1, 2, 3} and j ∈ [1, qk ] are random
and independent.
Probability of successful simulation. There is no abort in the simulation, and thus
the probability of successful simulation is 1.
Probability of breaking the challenge ciphertext.
If Z is true, the simulation is indistinguishable from the real attack, and thus the
adversary has probability 1/2 + ε/2 of guessing the encrypted message correctly.
    If Z is false, we show that the adversary only has success probability 1/2 + qd/(p − qd)
of guessing the encrypted message as follows.
    Since Z is random, without loss of generality, let Z = e(g0, g)^{a^{q+1}} · e(g, g)^z for some random and nonzero integer z. C1*, C2*, C3* in the challenge ciphertext can be rewritten as

    C1* = g^{s(α−ID*)},  C2* = e(g, g)^{s+z},  C3* = e(g, g)^{z·d5*} · e(h3, g)^s · mc.

Furthermore, only C3* contains the random number d5*. Therefore, if the adversary cannot learn d5* from the query, the challenge ciphertext is a one-time pad from the point of view of the adversary.
    According to the randomness property, the adversary can only learn d5* from a decryption query on (ID*, CT). For a decryption query on (ID*, CT), let CT = (C1, C2, C3, C4) where C1 = (g1 g^{−ID*})^{s′}, C2 = e(g, g)^{s″}, and w = H(C1, C2, C3). If s′ = s″ (treated as a correct ciphertext) and the ciphertext is accepted, the simulator will return

    C3 / (e(C1, d6*) · C2^{d5*}) = C3 / e(h3, g)^{s′},

where the adversary learns nothing about d5∗ from the decryption result. Otherwise,
s0 6= s00 (treated as an incorrect ciphertext), and in the following we prove that such
an incorrect ciphertext will be rejected except with negligible probability.

• Suppose (C1 ,C2 ,C3 ) = (C1∗ ,C2∗ ,C3∗ ) such that H(C1 ,C2 ,C3 ) = w = w∗ . For such
an incorrect ciphertext to pass the verification requires C4 = C4∗ . However, this
ciphertext cannot be queried because it is the challenge ciphertext.
• Suppose (C1 ,C2 ,C3 ) 6= (C1∗ ,C2∗ ,C3∗ ). Since the hash function is secure, we have
H(C1 ,C2 ,C3 ) = w 6= w∗ . For such an incorrect ciphertext to pass the verification
requires the adversary to be able to compute C4 satisfying

    C4 = e(C1, d2*(d4*)^w) · C2^{d1* + d3*·w}
       = e(g^{(α−ID*)s′}, g^{(β1−d1*+w(β2−d3*))/(α−ID*)}) · e(g, g)^{s″(d1* + d3*·w)}
       = e(g, g)^{s′(β1 + w·β2)} · e(g, g)^{(d1* + d3*·w)(s″ − s′)},
which requires the computationally unbounded adversary to know d1∗ + d3∗ w.


On the other hand, according to the simulation of the challenge ciphertext, the
adversary will know
s(β1 + w∗ β2 ) + (d1∗ + d3∗ w∗ )z
from C4∗ , which is equivalent to d1∗ + d3∗ w∗ from the point of view of the com-
putationally unbounded adversary. Furthermore, similarly to d5∗ , we have that d1∗
and d3∗ are random and independent of the master public key and the private
keys. Therefore, we only need to prove that d1∗ + d3∗ w cannot be computed from
d1∗ + d3∗ w∗ . It is easy to see that d1∗ + d3∗ w and d1∗ + d3∗ w∗ are random and inde-
pendent.
Therefore, the adversary has no advantage in generating the correct C4 to pass
the verification. Suppose the adversary generates an incorrect ciphertext for a
decryption query by randomly choosing C4. The adaptive choice of C4 the first time has success probability 1/p, and the adaptive choice of C4 the second time has success probability 1/(p − 1). Therefore, the probability of successfully generating an incorrect ciphertext that can pass the verification is at most qd/(p − qd) for qd decryption queries. The adversary also has probability 1/2 of guessing c correctly from the encryption. Therefore, the adversary has success probability at most 1/2 + qd/(p − qd) of guessing the encrypted message.
of guessing the encrypted message.

Advantage and time cost. The advantage of solving the q-DABDHE problem is
    PS (PT − PF) = 1/2 + ε/2 − (1/2 + qd/(p − qd)) ≈ ε/2.

Let Ts denote the time cost of the simulation. We have Ts = O(qk^2 + qd), which is mainly dominated by the key generation and the decryption. Therefore, the simulator B will solve the q-DABDHE problem with (t + Ts, ε/2).
This completes the proof of the theorem. 
References

1. Abdalla, M., Bellare, M., Rogaway, P.: The oracle Diffie-Hellman assumptions and an analy-
sis of DHIES. In: D. Naccache (ed.) CT-RSA 2001, LNCS, vol. 2020, pp. 143–158. Springer
(2001)
2. Adj, G., Canales-Martínez, I., Cruz-Cortés, N., Menezes, A., Oliveira, T., Rivera-Zamarripa,
L., Rodríguez-Henríquez, F.: Computing discrete logarithms in cryptographically-interesting
characteristic-three finite fields. IACR Cryptology ePrint Archive 2016, 914 (2016)
3. Adleman, L.M.: A subexponential algorithm for the discrete logarithm problem with appli-
cations to cryptography. In: FOCS 1979, pp. 55–60. IEEE Computer Society (1979)
4. An, J.H., Dodis, Y., Rabin, T.: On the security of joint signature and encryption. In: L.R.
Knudsen (ed.) EUROCRYPT 2002, LNCS, vol. 2332, pp. 83–107. Springer (2002)
5. Atkin, A.O.L., Morain, F.: Elliptic curves and primality proving. Mathematics of computa-
tion 61(203), 29–68 (1993)
6. Attrapadung, N., Cui, Y., Galindo, D., Hanaoka, G., Hasuo, I., Imai, H., Matsuura, K., Yang,
P., Zhang, R.: Relations among notions of security for identity based encryption schemes.
In: J.R. Correa, A. Hevia, M.A. Kiwi (eds.) LATIN 2006, LNCS, vol. 3887, pp. 130–141.
Springer (2006)
7. Bader, C., Hofheinz, D., Jager, T., Kiltz, E., Li, Y.: Tightly-secure authenticated key ex-
change. In: Y. Dodis, J.B. Nielsen (eds.) TCC 2015, LNCS, vol. 9014, pp. 629–658. Springer
(2015)
8. Bader, C., Jager, T., Li, Y., Schäge, S.: On the impossibility of tight cryptographic reductions.
In: M. Fischlin, J. Coron (eds.) EUROCRYPT 2016, LNCS, vol. 9666, pp. 273–304. Springer
(2016)
9. Barker, E., Barker, W., Burr, W., Polk, W., Smid, M.: Recommendation for key management
part 1: General (revision 3). NIST special publication 800(57), 1–147 (2012)
10. Bellare, M., Boldyreva, A., Micali, S.: Public-key encryption in a multi-user setting: Security
proofs and improvements. In: B. Preneel (ed.) EUROCRYPT 2000, LNCS, vol. 1807, pp.
259–274. Springer (2000)
11. Bellare, M., Desai, A., Pointcheval, D., Rogaway, P.: Relations among notions of security for
public-key encryption schemes. In: H. Krawczyk (ed.) CRYPTO 1998, LNCS, vol. 1462, pp.
26–45. Springer (1998)
12. Bellare, M., Miner, S.K.: A forward-secure digital signature scheme. In: M.J. Wiener (ed.)
CRYPTO 1999, LNCS, vol. 1666, pp. 431–448. Springer (1999)
13. Bellare, M., Namprempre, C.: Authenticated encryption: Relations among notions and anal-
ysis of the generic composition paradigm. In: T. Okamoto (ed.) ASIACRYPT 2000, LNCS,
vol. 1976, pp. 531–545. Springer (2000)
14. Bellare, M., Rogaway, P.: Random oracles are practical: A paradigm for designing efficient
protocols. In: D.E. Denning, R. Pyle, R. Ganesan, R.S. Sandhu, V. Ashby (eds.) CCS 1993,
pp. 62–73. ACM (1993)


15. Bellare, M., Rogaway, P.: Optimal asymmetric encryption. In: A.D. Santis (ed.) EURO-
CRYPT 1994, LNCS, vol. 950, pp. 92–111. Springer (1994)
16. Bernstein, D.J., Engels, S., Lange, T., Niederhagen, R., Paar, C., Schwabe, P., Zimmer-
mann, R.: Faster elliptic-curve discrete logarithms on FPGAs. Tech. rep., Cryptology ePrint
Archive, Report 2016/382 (2016)
17. Blake, I., Seroussi, G., Smart, N.: Elliptic Curves in Cryptography, London Mathematical
Society Lecture Note Series, vol. 265. Cambridge University Press (1999)
18. Blake, I., Seroussi, G., Smart, N.: Advances in Elliptic Curve Cryptography, London
Mathematical Society Lecture Note Series, vol. 317. Cambridge University Press (2005)
19. BlueKrypt: Cryptographic Key Length Recommendation. Available at:
https://www.keylength.com
20. Boneh, D., Boyen, X.: Efficient selective-ID secure identity-based encryption without ran-
dom oracles. In: C. Cachin, J. Camenisch (eds.) EUROCRYPT 2004, LNCS, vol. 3027, pp.
223–238. Springer (2004)
21. Boneh, D., Boyen, X.: Short signatures without random oracles. In: C. Cachin, J. Camenisch
(eds.) EUROCRYPT 2004, LNCS, vol. 3027, pp. 56–73. Springer (2004)
22. Boneh, D., Boyen, X., Goh, E.: Hierarchical identity based encryption with constant size
ciphertext. In: R. Cramer (ed.) EUROCRYPT 2005, LNCS, vol. 3494, pp. 440–456. Springer
(2005)
23. Boneh, D., Boyen, X., Shacham, H.: Short group signatures. In: M.K. Franklin (ed.)
CRYPTO 2004, LNCS, vol. 3152, pp. 41–55. Springer (2004)
24. Boneh, D., Franklin, M.K.: Identity-based encryption from the Weil pairing. In: J. Kilian
(ed.) CRYPTO 2001, LNCS, vol. 2139, pp. 213–229. Springer (2001)
25. Boneh, D., Franklin, M.K.: Identity-based encryption from the Weil pairing. SIAM J. Com-
put. 32(3), 586–615 (2003)
26. Boneh, D., Lynn, B., Shacham, H.: Short signatures from the Weil pairing. In: C. Boyd (ed.)
ASIACRYPT 2001, LNCS, vol. 2248, pp. 514–532. Springer (2001)
27. Canetti, R., Halevi, S., Katz, J.: A forward-secure public-key encryption scheme. In: E. Bi-
ham (ed.) EUROCRYPT 2003, LNCS, vol. 2656, pp. 255–271. Springer (2003)
28. Canetti, R., Halevi, S., Katz, J.: Chosen-ciphertext security from identity-based encryption.
In: C. Cachin, J. Camenisch (eds.) EUROCRYPT 2004, LNCS, vol. 3027, pp. 207–222.
Springer (2004)
29. Cash, D., Kiltz, E., Shoup, V.: The twin Diffie-Hellman problem and applications. In: N.P.
Smart (ed.) EUROCRYPT 2008, LNCS, vol. 4965, pp. 127–145. Springer (2008)
30. Chen, L., Cheng, Z.: Security proof of Sakai-Kasahara’s identity-based encryption scheme.
In: N.P. Smart (ed.) IMA 2005, LNCS, vol. 3796, pp. 442–459. Springer (2005)
31. Costello, C.: Pairings for beginners. Available at:
http://www.craigcostello.com.au/pairings/PairingsForBeginners.pdf
32. Cramer, R., Shoup, V.: A practical public key cryptosystem provably secure against adaptive
chosen ciphertext attack. In: H. Krawczyk (ed.) CRYPTO 1998, LNCS, vol. 1462, pp. 13–25.
Springer (1998)
33. Delerablée, C.: Identity-based broadcast encryption with constant size ciphertexts and private
keys. In: K. Kurosawa (ed.) ASIACRYPT 2007, LNCS, vol. 4833, pp. 200–215. Springer
(2007)
34. Diffie, W., Hellman, M.E.: New directions in cryptography. IEEE Trans. Information Theory
22(6), 644–654 (1976)
35. Dodis, Y., Franklin, M.K., Katz, J., Miyaji, A., Yung, M.: Intrusion-resilient public-key en-
cryption. In: M. Joye (ed.) CT-RSA 2003, LNCS, vol. 2612, pp. 19–32. Springer (2003)
36. Dodis, Y., Katz, J., Xu, S., Yung, M.: Key-insulated public key cryptosystems. In: L.R.
Knudsen (ed.) EUROCRYPT 2002, LNCS, vol. 2332, pp. 65–82. Springer (2002)
37. Dolev, D., Dwork, C., Naor, M.: Non-malleable cryptography (extended abstract). In: ACM
STOC, pp. 542–552 (1991)
38. Dolev, D., Dwork, C., Naor, M.: Non-malleable Cryptography. Weizmann Science Press of
Israel (1998)
39. Dutta, R., Barua, R., Sarkar, P.: Pairing-based cryptographic protocols: A survey. IACR
Cryptology ePrint Archive 2004, 64 (2004)
40. Freeman, D., Scott, M., Teske, E.: A taxonomy of pairing-friendly elliptic curves. J. Cryp-
tology 23(2), 224–280 (2010)
41. Frey, G., Rück, H.G.: A remark concerning m-divisibility and the discrete logarithm in the
divisor class group of curves. Mathematics of Computation 62(206), 865–874 (1994)
42. Fujisaki, E., Okamoto, T.: Secure integration of asymmetric and symmetric encryption
schemes. In: M.J. Wiener (ed.) CRYPTO 1999, LNCS, vol. 1666, pp. 537–554. Springer
(1999)
43. Galbraith, S.D., Gaudry, P.: Recent progress on the elliptic curve discrete logarithm problem.
Des. Codes Cryptography 78(1), 51–72 (2016)
44. Galbraith, S.D., Paterson, K.G., Smart, N.P.: Pairings for cryptographers. Discrete Applied
Mathematics 156(16), 3113–3121 (2008)
45. Gay, R., Hofheinz, D., Kiltz, E., Wee, H.: Tightly CCA-secure encryption without pairings.
In: M. Fischlin, J. Coron (eds.) EUROCRYPT 2016, LNCS, vol. 9665, pp. 1–27. Springer
(2016)
46. Gay, R., Hofheinz, D., Kohl, L.: Kurosawa-Desmedt meets tight security. In: J. Katz,
H. Shacham (eds.) CRYPTO 2017, LNCS, vol. 10403, pp. 133–160. Springer (2017)
47. Gentry, C.: Practical identity-based encryption without random oracles. In: S. Vaudenay (ed.)
EUROCRYPT 2006, LNCS, vol. 4004, pp. 445–464. Springer (2006)
48. Goh, E., Jarecki, S.: A signature scheme as secure as the Diffie-Hellman problem. In: E. Bi-
ham (ed.) EUROCRYPT 2003, LNCS, vol. 2656, pp. 401–415. Springer (2003)
49. Goldwasser, S., Micali, S.: Probabilistic encryption. J. Comput. Syst. Sci. 28(2), 270–299
(1984)
50. Goldwasser, S., Micali, S., Rivest, R.L.: A digital signature scheme secure against adaptive
chosen-message attacks. SIAM J. Comput. 17(2), 281–308 (1988)
51. Gordon, D.M.: A survey of fast exponentiation methods. J. Algorithms 27(1), 129–146
(1998)
52. Grémy, L.: Computations of discrete logarithms sorted by date. Available at: http://perso.ens-lyon.fr/laurent.gremy/dldb
53. Guo, F., Chen, R., Susilo, W., Lai, J., Yang, G., Mu, Y.: Optimal security reductions for
unique signatures: Bypassing impossibilities with a counterexample. In: J. Katz, H. Shacham
(eds.) CRYPTO 2017, LNCS, vol. 10402, pp. 517–547. Springer (2017)
54. Guo, F., Mu, Y., Susilo, W.: Short signatures with a tighter security reduction without random
oracles. Comput. J. 54(4), 513–524 (2011)
55. Guo, F., Susilo, W., Mu, Y., Chen, R., Lai, J., Yang, G.: Iterated random oracle: A universal
approach for finding loss in security reduction. In: J.H. Cheon, T. Takagi (eds.) ASIACRYPT
2016, LNCS, vol. 10032, pp. 745–776 (2016)
56. Hanaoka, Y., Hanaoka, G., Shikata, J., Imai, H.: Identity-based hierarchical strongly key-
insulated encryption and its application. In: B.K. Roy (ed.) ASIACRYPT 2005, LNCS, vol.
3788, pp. 495–514. Springer (2005)
57. Hankerson, D., Menezes, A.J., Vanstone, S.: Guide to Elliptic Curve Cryptography. Springer
Professional Computing. Springer (2004)
58. Hellman, M.E., Reyneri, J.M.: Fast computation of discrete logarithms in GF(q). In:
D. Chaum, R.L. Rivest, A.T. Sherman (eds.) CRYPTO 1982, pp. 3–13. Plenum Press, New
York (1982)
59. Herzberg, A., Jakobsson, M., Jarecki, S., Krawczyk, H., Yung, M.: Proactive public key and
signature systems. In: R. Graveman, P.A. Janson, C. Neuman, L. Gong (eds.) CCS 1997, pp.
100–110. ACM (1997)
60. Hofheinz, D., Jager, T.: Tightly secure signatures and public-key encryption. In: R. Safavi-
Naini, R. Canetti (eds.) CRYPTO 2012, LNCS, vol. 7417, pp. 590–607. Springer (2012)
61. Hohenberger, S., Waters, B.: Realizing hash-and-sign signatures under standard assumptions.
In: A. Joux (ed.) EUROCRYPT 2009, LNCS, vol. 5479, pp. 333–350. Springer (2009)
62. Itkis, G., Reyzin, L.: Sibir: Signer-base intrusion-resilient signatures. In: M. Yung (ed.)
CRYPTO 2002, LNCS, vol. 2442, pp. 499–514. Springer (2002)
63. Kachisa, E.J.: Constructing suitable ordinary pairing-friendly curves: A case of elliptic
curves and genus two hyperelliptic curves. Ph.D. thesis, Dublin City University (2011)
64. Katz, J.: Digital Signatures. Springer (2010)
65. Katz, J., Wang, N.: Efficiency improvements for signature schemes with tight security reduc-
tions. In: S. Jajodia, V. Atluri, T. Jaeger (eds.) CCS 2003, pp. 155–164. ACM (2003)
66. Kleinjung, T.: The Certicom ECC Challenge. Available at: https://listserv.nodak.edu/cgi-bin/wa.exe?A2=NMBRTHRY;256db68e.1410 (2014)
67. Kleinjung, T., Diem, C., Lenstra, A.K., Priplata, C., Stahlke, C.: Computation of a 768-bit
prime field discrete logarithm. In: J. Coron, J.B. Nielsen (eds.) EUROCRYPT 2017, LNCS,
vol. 10210, pp. 185–201 (2017)
68. Knuth, D.E.: The art of computer programming. Vol. 2: Seminumerical algorithms. Addison-
Wesley (1997)
69. Koblitz, N.: Elliptic curve cryptosystems. Mathematics of Computation 48(177), 203–209
(1987)
70. Koblitz, N., Menezes, A.: Pairing-based cryptography at high security levels. In: N.P. Smart
(ed.) IMA International Conference on Cryptography and Coding, LNCS, vol. 3796, pp. 13–
36. Springer (2005)
71. Lamport, L.: Constructing digital signatures from a one-way function. Technical Report CSL-98, SRI International, Palo Alto (1979)
72. Lenstra, A.K., Lenstra, H.W.: Algorithms in number theory. In: Handbook of Theoretical
Computer Science, Volume A: Algorithms and Complexity (A), pp. 673–716 (1990)
73. Lenstra, A.K., Verheul, E.R.: Selecting cryptographic key sizes. J. Cryptology 14(4), 255–
293 (2001)
74. Lidl, R., Niederreiter, H.: Finite Fields (2nd Edition). Encyclopedia of Mathematics and its
Applications. Cambridge University Press (1997)
75. Lim, C.H., Lee, P.J.: A key recovery attack on discrete log-based schemes using a prime
order subgroup. In: B.S. Kaliski Jr. (ed.) CRYPTO 1997, LNCS, vol. 1294, pp. 249–263.
Springer (1997)
76. Lynn, B.: On the implementation of pairing-based cryptosystems. Ph.D. thesis, Stanford
University (2007)
77. Lysyanskaya, A.: Unique signatures and verifiable random functions from the DH-DDH sep-
aration. In: M. Yung (ed.) CRYPTO 2002, LNCS, vol. 2442, pp. 597–612. Springer (2002)
78. McCurley, K.S.: The discrete logarithm problem. Cryptology and computational number
theory 42, 49 (1990)
79. McEliece, R.J.: Finite Fields for Computer Scientists and Engineers. The Kluwer Interna-
tional Series in Engineering and Computer Science. Springer (1987)
80. Menezes, A., Okamoto, T., Vanstone, S.A.: Reducing elliptic curve logarithms to logarithms
in a finite field. IEEE Trans. Information Theory 39(5), 1639–1646 (1993)
81. Menezes, A., van Oorschot, P., Vanstone, S.: Handbook of applied cryptography. Discrete
Mathematics and Its Applications. CRC Press (1996)
82. Menezes, A., Smart, N.P.: Security of signature schemes in a multi-user setting. Des. Codes
Cryptography 33(3), 261–274 (2004)
83. Miller, V.S.: Use of elliptic curves in cryptography. In: H.C. Williams (ed.) CRYPTO 1985,
LNCS, vol. 218, pp. 417–426. Springer (1985)
84. Naor, M., Yung, M.: Public-key cryptosystems provably secure against chosen ciphertext
attacks. In: H. Ortiz (ed.) ACM STOC, pp. 427–437. ACM (1990)
85. Nielsen, J.B.: Separating random oracle proofs from complexity theoretic proofs: The non-
committing encryption case. In: M. Yung (ed.) CRYPTO 2002, LNCS, vol. 2442, pp. 111–
126. Springer (2002)
86. Park, J.H., Lee, D.H.: An efficient IBE scheme with tight security reduction in the random
oracle model. Des. Codes Cryptography 79(1), 63–85 (2016)
87. Pollard, J.M.: Monte Carlo methods for index computation (mod p). Mathematics of Computation 32(143), 918–924 (1978)
88. Rackoff, C., Simon, D.R.: Non-interactive zero-knowledge proof of knowledge and chosen
ciphertext attack. In: J. Feigenbaum (ed.) CRYPTO 1991, LNCS, vol. 576, pp. 433–444.
Springer (1991)
89. Rosen, K.H.: Elementary Number Theory and Its Applications (5th Edition). Addison-
Wesley (2004)
90. Rotman, J.J.: An Introduction to the Theory of Groups. Graduate Texts in Mathematics.
Springer (1995)
91. Sakai, R., Kasahara, M.: ID based cryptosystems with pairing on elliptic curve. IACR Cryp-
tology ePrint Archive 2003, 54 (2003)
92. Shacham, H.: New paradigms in signature schemes. Ph.D. thesis, Stanford University (2006)
93. Shamir, A.: Identity-based cryptosystems and signature schemes. In: G.R. Blakley, D. Chaum
(eds.) CRYPTO 1984, LNCS, vol. 196, pp. 47–53. Springer (1984)
94. Shamir, A., Tauman, Y.: Improved online/offline signature schemes. In: J. Kilian (ed.)
CRYPTO 2001, LNCS, vol. 2139, pp. 355–367. Springer (2001)
95. Shanks, D.: Class number, a theory of factorization and genera. In: Proc. Symp. Pure Math,
vol. 20, pp. 415–440 (1971)
96. Shoup, V.: A computational introduction to number theory and algebra (2nd Edition). Cam-
bridge University Press (2009)
97. Silverman, J.H.: The Arithmetic of Elliptic Curves (2nd Edition). Graduate Texts in Mathe-
matics. Springer (2009)
98. Vasco, M.I., Magliveras, S., Steinwandt, R.: Group Theoretic Cryptography. Cryptography
and Network Security Series. CRC Press (2015)
99. Washington, L.C.: Elliptic Curves: Number Theory and Cryptography (2nd Edition). Dis-
crete Mathematics and Its Applications. CRC Press (2008)
100. Watanabe, Y., Shikata, J., Imai, H.: Equivalence between semantic security and indistin-
guishability against chosen ciphertext attacks. In: Y. Desmedt (ed.) PKC 2003, LNCS, vol.
2567, pp. 71–84. Springer (2003)
101. Waters, B.: Efficient identity-based encryption without random oracles. In: R. Cramer (ed.)
EUROCRYPT 2005, LNCS, vol. 3494, pp. 114–127. Springer (2005)
102. Wenger, E., Wolfger, P.: Solving the discrete logarithm of a 113-bit Koblitz curve with an
FPGA cluster. In: A. Joux, A.M. Youssef (eds.) SAC 2014, LNCS, vol. 8781, pp. 363–379.
Springer (2014)
103. Yao, D., Fazio, N., Dodis, Y., Lysyanskaya, A.: ID-based encryption for complex hierarchies
with applications to forward security and broadcast encryption. In: V. Atluri, B. Pfitzmann,
P.D. McDaniel (eds.) CCS 2004, pp. 354–363. ACM (2004)
104. Zhang, F., Safavi-Naini, R., Susilo, W.: An efficient signature scheme from bilinear pairings
and its applications. In: F. Bao, R.H. Deng, J. Zhou (eds.) PKC 2004, LNCS, vol. 2947, pp.
277–290. Springer (2004)