Matthew F. Dixon • Igor Halperin • Paul Bilokon

Machine Learning in Finance
From Theory to Practice
Matthew F. Dixon
Department of Applied Mathematics
Illinois Institute of Technology
Chicago, IL, USA

Igor Halperin
Tandon School of Engineering
New York University
Brooklyn, NY, USA

Paul Bilokon
Department of Mathematics
Imperial College London
London, UK

Additional material to this book can be downloaded from
http://mypages.iit.edu/~mdixon7/book/ML_Finance_Codes-Book.zip

ISBN 978-3-030-41067-4    ISBN 978-3-030-41068-1 (eBook)
https://doi.org/10.1007/978-3-030-41068-1

© Springer Nature Switzerland AG 2020


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology
now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Once you eliminate the impossible, whatever
remains, no matter how improbable, must be
the truth.
—Arthur Conan Doyle
Introduction

Machine learning in finance sits at the intersection of a number of emergent
and established disciplines including pattern recognition, financial econometrics,
statistical computing, probabilistic programming, and dynamic programming. With
the trend towards increasing computational resources and larger datasets, machine
learning has grown into a central computational engineering field, with an emphasis
placed on plug-and-play algorithms made available through open-source machine
learning toolkits. Algorithm-focused areas of finance, such as algorithmic trading,
have been the primary adopters of this technology. But outside of engineering-based
research groups and business activities, much of the field remains a mystery.
A key barrier to understanding machine learning for non-engineering students
and practitioners is the absence of the well-established theories and concepts that
financial time series analysis equips us with. These serve as the basis for the
development of financial modeling intuition and scientific reasoning. Moreover,
machine learning is heavily entrenched in engineering ontology, which makes devel-
opments in the field somewhat intellectually inaccessible for students, academics,
and finance practitioners from the quantitative disciplines such as mathematics,
statistics, physics, and economics. Consequently, there is a great deal of miscon-
ception and limited understanding of the capacity of this field. While machine
learning techniques are often effective, they remain poorly understood and are
often mathematically indefensible. How do we place key concepts in the field of
machine learning in the context of more foundational theory in time series analysis,
econometrics, and mathematical statistics? Under which simplifying conditions are
advanced machine learning techniques such as deep neural networks mathematically
equivalent to well-known statistical models such as linear regression? How should
we reason about the perceived benefits of using advanced machine learning methods
over more traditional econometrics methods, for different financial applications?
What theory supports the application of machine learning to problems in financial
modeling? How does reinforcement learning provide a model-free approach to
the Black–Scholes–Merton model for derivative pricing? How does Q-learning
generalize discrete-time stochastic control problems in finance?

This book is written for advanced graduate students and academics in financial
econometrics, management science, and applied statistics, in addition to quants and
data scientists in the field of quantitative finance. We present machine learning
as a non-linear extension of various topics in quantitative economics such as
financial econometrics and dynamic programming, with an emphasis on novel
algorithmic representations of data, regularization, and techniques for controlling
the bias-variance tradeoff leading to improved out-of-sample forecasting. The book
is presented in three parts, each part covering theory and applications. The first
part presents supervised learning for cross-sectional data from both a Bayesian
and frequentist perspective. The more advanced material places a firm emphasis
on neural networks, including deep learning, as well as Gaussian processes, with
examples in investment management and derivatives. The second part covers
supervised learning for time series data, arguably the most common data type
used in finance with examples in trading, stochastic volatility, and fixed income
modeling. Finally, the third part covers reinforcement learning and its applications
in trading, investment, and wealth management. We provide Python code examples
to support the readers’ understanding of the methodologies and applications. As
a bridge to research in this emergent field, we present the frontiers of machine
learning in finance from a researcher’s perspective, highlighting how many well-
known concepts in statistical physics are likely to emerge as research topics for
machine learning in finance.

Prerequisites

This book is targeted at graduate students in data science, mathematical finance,
financial engineering, and operations research seeking a career in quantitative
finance, data science, analytics, and fintech. Students are expected to have com-
pleted upper-level undergraduate courses in linear algebra, multivariate calculus,
advanced probability theory and stochastic processes, statistics for time series
(econometrics), and gained some basic introduction to numerical optimization and
computational mathematics. Students shall find the later chapters of this book,
on reinforcement learning, more accessible with some background in investment
science. Students should also have prior experience with Python programming and,
ideally, taken a course in computational finance and introductory machine learning.
The material in this book is more mathematical and less engineering focused than
most courses on machine learning, and for this reason we recommend reviewing
the recent book, Linear Algebra and Learning from Data by Gilbert Strang as
background reading.

Advantages of the Book

Readers will find this book useful as a bridge from well-established foundational
topics in financial econometrics to applications of machine learning in finance.
Statistical machine learning is presented as a non-parametric extension of financial
econometrics and quantitative finance, with an emphasis on novel algorithmic rep-
resentations of data, regularization, and model averaging to improve out-of-sample
forecasting. The key distinguishing feature from classical financial econometrics
and dynamic programming is the absence of an assumption on the data generation
process. This has important implications for modeling and performance assessment
which are emphasized with examples throughout the book. Some of the main
contributions of the book are as follows:
• The textbook market is saturated with excellent books on machine learning.
However, few present the topic from the perspective of financial econometrics
and cast fundamental concepts in machine learning into canonical modeling and
decision frameworks already well established in finance such as financial time
series analysis, investment science, and financial risk management. Only through
the integration of these disciplines can we develop an intuition into how machine
learning theory informs the practice of financial modeling.
• Machine learning is entrenched in engineering ontology, which makes develop-
ments in the field somewhat intellectually inaccessible for students, academics,
and finance practitioners from quantitative disciplines such as mathematics,
statistics, physics, and economics. Moreover, financial econometrics has not kept
pace with this transformative field, and there is a need to reconcile various
modeling concepts between these disciplines. This textbook is built around
powerful mathematical ideas that shall serve as the basis for a graduate course for
students with prior training in probability and advanced statistics, linear algebra,
time series analysis, and Python programming.
• This book provides financial market motivated and compact theoretical treatment
of financial modeling with machine learning for the benefit of regulators, wealth
managers, federal research agencies, and professionals in other heavily regulated
business functions in finance who seek a more theoretical exposition to allay
concerns about the “black-box” nature of machine learning.
• Reinforcement learning is presented as a model-free framework for stochastic
control problems in finance, covering portfolio optimization, derivative pricing,
and wealth management applications without assuming a data generation
process. We also provide a model-free approach to problems in market
microstructure, such as optimal execution, with Q-learning. Furthermore,
our book is the first to present methods of inverse reinforcement
learning.
• Multiple-choice questions, numerical examples, and more than 80 end-of-
chapter exercises are used throughout the book to reinforce key technical
concepts.

• This book provides Python codes demonstrating the application of machine
learning to algorithmic trading and financial modeling in risk management
and equity research. These codes make use of powerful open-source software
toolkits such as Google’s TensorFlow and Pandas, a data processing environment
for Python.

Overview of the Book

Chapter 1

Chapter 1 provides the industry context for machine learning in finance, discussing
the critical events that have shaped the finance industry’s need for machine learning
and the unique barriers to adoption. The finance industry has adopted machine
learning to varying degrees of sophistication. How it has been adopted is heavily
fragmented by the academic disciplines underpinning the applications. We view
some key mathematical examples that demonstrate the nature of machine learning
and how it is used in practice, with the focus on building intuition for more technical
expositions in later chapters. In particular, we begin to address many finance
practitioners’ concerns that neural networks are a “black-box” by showing how they
are related to existing well-established techniques such as linear regression, logistic
regression, and autoregressive time series models. Such arguments are developed
further in later chapters.
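
As a minimal illustration of this line of argument, the sketch below (ours, not one
of the book’s accompanying notebooks) shows that a feedforward network with no
hidden layer and an identity activation is nothing other than ordinary least squares
regression:

    import numpy as np

    # Synthetic linear data: y = X w + noise (illustrative only).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    w_true = np.array([0.5, -1.0, 2.0])
    y = X @ w_true + 0.1 * rng.normal(size=500)

    # A "network" with identity activation and no hidden layer: minimizing
    # its squared error over the weights is exactly the OLS problem.
    w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(w_hat)  # recovers something close to w_true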

Chapter 2

Chapter 2 introduces probabilistic modeling and reviews foundational concepts
in Bayesian econometrics such as Bayesian inference, model selection, online
learning, and Bayesian model averaging. We develop more versatile representations
of complex data with probabilistic graphical models such as mixture models.
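
As a flavor of the chapter, the following sketch (ours, assuming the conjugate
Beta-Bernoulli setting listed in the table of contents) shows that sequential Bayesian
updating of a success probability reduces to incrementing two pseudo-counts:

    import numpy as np

    # Sequential Bayesian updating of a Bernoulli success probability with a
    # conjugate Beta(a, b) prior: each observation increments a pseudo-count.
    rng = np.random.default_rng(1)
    data = rng.binomial(1, 0.3, size=100)   # hypothetical coin flips

    a, b = 1.0, 1.0                         # uniform Beta(1, 1) prior
    for x in data:
        a += x
        b += 1 - x

    print(a / (a + b))                      # posterior mean of the success probability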

Chapter 3

Chapter 3 introduces Bayesian regression and shows how it extends many of
the concepts in the previous chapter. We develop kernel-based machine learning
methods—specifically Gaussian process regression, an important class of Bayesian
machine learning methods—and demonstrate their application to “surrogate” mod-
els of derivative prices. This chapter also provides a natural point from which to
develop intuition for the role and functional form of regularization in a frequentist
setting—the subject of subsequent chapters.
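
To give a concrete sense of a “surrogate” pricing model, the sketch below (ours,
using scikit-learn rather than the book’s own code) fits a Gaussian process to a
handful of Black–Scholes call prices and then queries the surrogate, together with
its predictive uncertainty, at new spot levels:

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Black-Scholes call price, used here only to generate training data.
    def bs_call(S, K=100.0, T=1.0, r=0.01, sigma=0.2):
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    S_train = np.linspace(80, 120, 25).reshape(-1, 1)
    y_train = bs_call(S_train.ravel())

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0), alpha=1e-6)
    gp.fit(S_train, y_train)

    mean, std = gp.predict(np.array([[95.0], [105.0]]), return_std=True)
    print(mean, std)   # surrogate price and its uncertainty at new spot levels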

Chapter 4

Chapter 4 provides a more in-depth description of supervised learning, deep
learning, and neural networks—presenting the foundational mathematical and sta-
tistical learning concepts and explaining how they relate to real-world examples in
trading, risk management, and investment management. These applications present
challenges for forecasting and model design and are presented as a recurring
theme throughout the book. This chapter moves towards a more engineering-style
exposition of neural networks, applying concepts in the previous chapters to
elucidate various model design choices.
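
A minimal feedforward network of the kind studied in this chapter can be sketched
in a few lines of TensorFlow; the architecture and data below are illustrative
assumptions of ours rather than an example taken from the text:

    import numpy as np
    import tensorflow as tf

    # A small two-hidden-layer MLP regressor fit to a noisy non-linear target.
    rng = np.random.default_rng(2)
    X = rng.normal(size=(1000, 5)).astype("float32")
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=1000).astype("float32")

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(5,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=10, batch_size=32, verbose=0)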

Chapter 5

Chapter 5 presents a method for interpreting neural networks which imposes mini-
mal restrictions on the neural network design. The chapter demonstrates techniques
for interpreting a feedforward network, including how to rank the importance of
the features. In particular, an example demonstrating how to apply interpretability
analysis to deep learning models for factor modeling is also presented.
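
One simple form of sensitivity analysis can be sketched with automatic
differentiation: rank each input of a network by the average magnitude of the
model Jacobian. The code below is our own illustration of this general idea, on an
untrained toy network, rather than the chapter’s specific procedure:

    import numpy as np
    import tensorflow as tf

    # Toy network; in practice this would be a fitted model.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(3,)),
        tf.keras.layers.Dense(8, activation="tanh"),
        tf.keras.layers.Dense(1),
    ])

    X = tf.constant(np.random.default_rng(3).normal(size=(256, 3)), dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(X)
        y = model(X)
    grads = tape.gradient(y, X)                           # per-sample dy/dx
    print(tf.reduce_mean(tf.abs(grads), axis=0).numpy())  # one score per feature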

Chapter 6

Chapter 6 provides an overview of the most important modeling concepts in
financial econometrics. Such methods form the conceptual basis and performance
baseline for more advanced neural network architectures presented in the next
chapter. In fact, each type of architecture is a generalization of many of the models
presented here. This chapter is especially useful for students from an engineering or
science background, with little exposure to econometrics and time series analysis.
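
For readers new to this material, the baseline models in question can be fit in a few
lines; the sketch below (ours, using statsmodels on simulated data) estimates an
AR(2) model and produces a short out-of-sample forecast:

    import numpy as np
    from statsmodels.tsa.ar_model import AutoReg

    # Simulate an AR(2) process and recover its coefficients.
    rng = np.random.default_rng(4)
    y = np.zeros(500)
    for t in range(2, 500):
        y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + rng.normal(scale=0.1)

    res = AutoReg(y, lags=2).fit()
    print(res.params)                                 # intercept and AR coefficients
    print(res.predict(start=len(y), end=len(y) + 4))  # five-step-ahead forecast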

Chapter 7

Chapter 7 presents a powerful class of probabilistic models for financial data.
Many of these models overcome some of the severe stationarity limitations of the
frequentist models in the previous chapters. The fitting procedure demonstrated is
also different—the use of Kalman filtering algorithms for state-space models rather
than maximum likelihood estimation or Bayesian inference. Simple examples of
hidden Markov models and particle filters in finance and various algorithms are
presented.
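
The flavor of the filtering approach can be conveyed with a one-dimensional
local-level model; the following sketch (ours, with illustrative noise variances) runs
the predict and update steps of a Kalman filter over a simulated series:

    import numpy as np

    # Local-level state-space model: x_t = x_{t-1} + w_t,  y_t = x_t + v_t.
    rng = np.random.default_rng(5)
    T, q, r = 200, 0.01, 0.25
    x = np.cumsum(rng.normal(scale=np.sqrt(q), size=T))   # latent state
    y = x + rng.normal(scale=np.sqrt(r), size=T)          # noisy observations

    m, P, filtered = 0.0, 1.0, []
    for obs in y:
        P = P + q                     # predict: propagate the state variance
        K = P / (P + r)               # Kalman gain
        m = m + K * (obs - m)         # update with the new observation
        P = (1 - K) * P
        filtered.append(m)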

Chapter 8

Chapter 8 presents various neural network models for financial time series analysis,
providing examples of how they relate to well-known techniques in financial econo-
metrics. Recurrent neural networks (RNNs) are presented as non-linear time series
models and generalize classical linear time series models such as AR(p). They
provide a powerful approach for prediction in financial time series and generalize
to non-stationary data. The chapter also presents convolutional neural networks for
filtering time series data and exploiting different scales in the data. Finally, this
chapter demonstrates how autoencoders are used to compress information and
generalize principal component analysis.
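
As an illustration of the first point (our own sketch, not one of the book’s
notebooks), a plain recurrent network can be trained as a non-linear autoregressive
model that maps a window of p lagged observations to a one-step-ahead forecast:

    import numpy as np
    import tensorflow as tf

    # Build lag windows of length p from a simulated series.
    rng = np.random.default_rng(6)
    series = np.sin(np.arange(1000) * 0.05) + 0.05 * rng.normal(size=1000)
    p = 5
    X = np.stack([series[i:i + p] for i in range(len(series) - p)])[..., None]
    y = series[p:]

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(p, 1)),
        tf.keras.layers.SimpleRNN(8, activation="tanh"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X.astype("float32"), y.astype("float32"), epochs=5, verbose=0)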

Chapter 9

Chapter 9 introduces Markov decision processes and the classical methods of
dynamic programming, before building familiarity with the ideas of reinforcement
learning and other approximate methods for solving MDPs. After describing Bell-
man optimality and iterative value and policy updates, the chapter quickly advances
towards a more engineering-style exposition of the topic, covering key computational
concepts such as greediness, batch learning, and
Q-learning. Through a number of mini-case studies, the chapter provides insight
into how RL is applied to optimization problems in asset management and trading.
These examples are each supported with Python notebooks.
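
The core Q-learning update itself fits in a few lines; the sketch below (ours, on a toy
five-state chain rather than any of the book’s case studies) learns the action values
off-policy from purely random behavior:

    import numpy as np

    # Toy chain: actions move left/right, reaching the right-most state pays 1.
    n_states, n_actions = 5, 2
    Q = np.zeros((n_states, n_actions))
    alpha, gamma = 0.1, 0.95
    rng = np.random.default_rng(7)

    for episode in range(2000):
        s = 0
        while s != n_states - 1:
            a = rng.integers(n_actions)          # random behavior policy
            s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # Bellman optimality backup: bootstrap off the greedy next-state value.
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next

    print(Q[:-1].argmax(axis=1))   # greedy policy: move right in every non-terminal state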

Chapter 10

Chapter 10 considers real-world applications of reinforcement learning in finance
and further advances the theory presented in the previous chapter. We start
with one of the most common problems of quantitative finance, the problem
of optimal portfolio trading in discrete time. Many practical problems of trading or
risk management amount to different forms of dynamic portfolio optimization, with
different optimization criteria, portfolio composition, and constraints. The chapter
introduces a reinforcement learning approach to option pricing that generalizes the
classical Black–Scholes model to a data-driven approach using Q-learning. It then
presents a probabilistic extension of Q-learning called G-learning and shows how it
can be used for dynamic portfolio optimization. For certain specifications of reward
functions, G-learning is semi-analytically tractable and amounts to a probabilistic
version of linear quadratic regulators (LQRs). Detailed analyses of such cases are
presented and we show their solutions with examples from problems of dynamic
portfolio optimization and wealth management.
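
Schematically, and in the standard free-energy notation rather than the chapter’s own
(so the symbols below are our assumptions), G-learning replaces the hard max of
Q-learning with an entropy-regularized soft-max over a reference policy $\pi_0$:

    G^{\pi}(s_t, a_t) = r(s_t, a_t) + \gamma \, \mathbb{E}\left[ F^{\pi}(s_{t+1}) \mid s_t, a_t \right],
    \qquad
    F^{\pi}(s_t) = \frac{1}{\beta} \log \sum_{a} \pi_0(a \mid s_t) \, e^{\beta G^{\pi}(s_t, a)} .

As the inverse temperature $\beta \to \infty$, the free energy $F$ recovers the usual
maximum over actions and G-learning reduces to standard Q-learning.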

Chapter 11

Chapter 11 provides an overview of the most popular methods of inverse reinforce-
ment learning (IRL) and imitation learning (IL). These methods solve the problem
of optimal control in a data-driven way, similarly to reinforcement learning, but
with the critical difference that rewards are not observed. The problem is rather
to learn the reward function from the observed behavior of an agent. As behavioral
data without rewards are widely available, the problem of learning from such data
is certainly very interesting. The chapter provides a moderate-level description of
the most promising IRL methods, equips the reader with sufficient knowledge to
understand and follow the current literature on IRL, and presents examples that use
simple simulated environments to see how these methods perform when we know
the “ground truth” rewards. We then present use cases for IRL in quantitative finance
that include applications to trading strategy identification, sentiment-based trading,
option pricing, inference of portfolio investors, and market modeling.
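
The central modeling device of MaxEnt IRL can be stated in one line, written here
in the standard notation of the literature rather than the chapter’s (so the symbols
are our assumptions): an observed trajectory $\tau = (s_0, a_0, s_1, a_1, \ldots)$ is assumed
to be exponentially more likely the more cumulative reward it accrues,

    P_{\theta}(\tau) = \frac{1}{Z(\theta)} \exp\Big( \sum_{t} r_{\theta}(s_t, a_t) \Big),

and the reward parameters $\theta$ are then estimated by maximizing the likelihood of
the demonstrated trajectories, which in turn requires estimating or approximating
the partition function $Z(\theta)$.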

Chapter 12

Chapter 12 takes us forward to emerging research topics in quantitative finance
and machine learning. Among many interesting emerging topics, we focus here
on two broad themes. The first one deals with unification of supervised learning
and reinforcement learning as two tasks of perception-action cycles of agents. We
outline some recent research ideas in the literature including in particular informa-
tion theory-based versions of reinforcement learning and discuss their relevance for
financial applications. We explain why these ideas might have interesting practical
implications for RL financial models, where feature selection could be done within
the general task of optimization of a long-term objective, rather than outside of it,
as is usually performed in “alpha-research.”
The second topic presented in this chapter deals with using methods of reinforce-
ment learning to construct models of market dynamics. We also introduce some
advanced physics-based approaches to computation for such RL-inspired market
models.

Source Code

Many of the chapters are accompanied by Python notebooks to illustrate some
of the main concepts and demonstrate the application of machine learning methods.
Each notebook is lightly annotated. Many of these notebooks use TensorFlow.
We recommend loading these notebooks, together with any accompanying Python
source files and data, in Google Colab. Please see the appendices of each chapter
accompanied by notebooks, and the README.md in the subfolder of each chapter,
for further instructions and details.
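
For example, one minimal way to pull the accompanying code bundle into a Colab
session is sketched below; the URL is the one given on the title page, while the
commands themselves are our suggestion and assume the link is still live:

    import urllib.request, zipfile

    url = "http://mypages.iit.edu/~mdixon7/book/ML_Finance_Codes-Book.zip"
    urllib.request.urlretrieve(url, "ML_Finance_Codes-Book.zip")
    zipfile.ZipFile("ML_Finance_Codes-Book.zip").extractall("ML_Finance_Codes")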

Scope

We recognize that the field of machine learning is developing rapidly and that keeping
abreast of the research in this field is a challenging pursuit. Machine learning is an
umbrella term for a number of methodology classes, including supervised learning,
unsupervised learning, and reinforcement learning. This book focuses on supervised
learning and reinforcement learning because these are the areas with the most
overlap with econometrics, predictive modeling, and optimal control in finance.
Supervised machine learning can be categorized as generative and discriminative.
Our focus is on discriminative learners, which attempt to partition the input
space either directly through affine transformations or through projections onto
a manifold. Neural networks have been shown to provide a universal approximation
to a wide class of functions. Moreover, they can be shown to reduce to other well-
known statistical techniques and are adaptable to time series data.
Extending time series models, a number of chapters in this book are devoted to
an introduction to reinforcement learning (RL) and inverse reinforcement learning
(IRL) that deal with problems of optimal control of such time series and show how
many classical financial problems such as portfolio optimization, option pricing, and
wealth management can naturally be posed as problems for RL and IRL. We present
simple RL methods that can be applied for these problems, as well as explain how
neural networks can be used in these applications.
There are already several excellent textbooks covering other classical machine
learning methods, and we instead choose to focus on how to cast machine learning
into various financial modeling and decision frameworks. We emphasize that much
of this material is not unique to neural networks, but comparisons of alternative
supervised learning approaches, such as random forests, are beyond the scope of
this book.

Multiple-Choice Questions

Multiple-choice questions are included after introducing a key concept. The correct
answers to all questions are provided at the end of each chapter, with selected partial
explanations for some of the more challenging material.

Exercises

The exercises that appear at the end of every chapter form an important component
of the book. Each exercise has been chosen to reinforce concepts explained in the
text, to stimulate the application of machine learning in finance, and to gently bridge
material in other chapters. It is graded according to difficulty ranging from (*),
which denotes a simple exercise which might take a few minutes to complete,
through to (***), which denotes a significantly more complex exercise. Unless
specified otherwise, all equations referenced in each exercise correspond to those
in the corresponding chapter.

Instructor Materials

The book is supplemented by a separate Instructor’s Manual which provides worked
solutions to the end-of-chapter questions. Full explanations for the solutions to the
multiple-choice questions are also provided. The manual provides additional notes
and example code solutions for some of the programming exercises in the later
chapters.

Acknowledgements

This book is dedicated to the late Mark Davis (Imperial College) who was an
inspiration in the field of mathematical finance and engineering, and formative in
our careers. Peter Carr, Chair of the Department of Financial Engineering at NYU
Tandon, has been instrumental in supporting the growth of the field of machine
learning in finance. Through providing speaker engagements and machine learning
instructorship positions in the MS in Algorithmic Finance Program, the authors have
been able to write research papers and identify the key areas required by a
textbook. Miquel Alonso (AIFI), Agostino Capponi (Columbia), Rama Cont (Oxford),
Kay Giesecke (Stanford), Ali Hirsa (Columbia), Sebastian Jaimungal (University
of Toronto), Gary Kazantsev (Bloomberg), Morton Lane (UIUC), Jörg Osterrieder
(ZHAW) have established various academic and joint academic-industry workshops
and community meetings to promote the field and serve as input for this book.
At the same time, there has been growing support for the development of a book
in London, where several SIAM/LMS workshops and practitioner special interest
groups, such as the Thalesians, have identified a number of compelling financial
applications. The material has grown from courses and invited lectures at NYU,
UIUC, Illinois Tech, Imperial College and the 2019 Bootcamp on Machine Learning
in Finance at the Fields Institute, Toronto.
Along the way, we have been fortunate to receive the support of Tomasz Bielecki
(Illinois Tech), Igor Cialenco (Illinois Tech), Ali Hirsa (Columbia University),
and Brian Peterson (DV Trading). Special thanks to research collaborators and
colleagues Kay Giesecke (Stanford University), Diego Klabjan (NWU), Nick
Polson (Chicago Booth), and Harvey Stein (Bloomberg), all of whom have shaped
our understanding of the emerging field of machine learning in finance and the many
practical challenges. We are indebted to Sri Krishnamurthy (QuantUniversity),
Saeed Amen (Cuemacro), Tyler Ward (Google), and Nicole Königstein for their
valuable input on this book. We acknowledge the support of a number of Illinois
Tech graduate students who have contributed to the source code examples and
exercises: Xiwen Jing, Bo Wang, and Siliang Xong. Special thanks to Swaminathan
Sethuraman for his support of the code development, to Volod Chernat and George
Gvishiani who provided support and code development for the course taught at
NYU and Coursera. Finally, we would like to thank the students and especially the
organisers of the MSc Finance and Mathematics course at Imperial College, where
many of the ideas presented in this book have been tested: Damiano Brigo, Antoine
(Jack) Jacquier, Mikko Pakkanen, and Rula Murtada. We would also like to thank
Blanka Horvath for many useful suggestions.

Chicago, IL, USA Matthew F. Dixon
Brooklyn, NY, USA Igor Halperin
London, UK Paul Bilokon
December 2019
Contents

Part I Machine Learning with Cross-Sectional Data


1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1 Big Data—Big Compute in Finance . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Fintech . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2 Machine Learning and Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.1 Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3 Statistical Modeling vs. Machine Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.1 Modeling Paradigms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.2 Financial Econometrics and Machine Learning . . . . . . . . . . . . . . . 18
3.3 Over-fitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4 Reinforcement Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5 Examples of Supervised Machine Learning in Practice . . . . . . . . . . . . . . 28
5.1 Algorithmic Trading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
5.2 High-Frequency Trade Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
5.3 Mortgage Modeling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2 Probabilistic Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2 Bayesian vs. Frequentist Estimation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3 Frequentist Inference from Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4 Assessing the Quality of Our Estimator: Bias and Variance . . . . . . . . . 53
5 The Bias–Variance Tradeoff (Dilemma) for Estimators . . . . . . . . . . . . . . 55
6 Bayesian Inference from Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
6.1 A More Informative Prior: The Beta Distribution . . . . . . . . . . . . . 60
6.2 Sequential Bayesian updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

6.3 Practical Implications of Choosing a Classical
or Bayesian Estimation Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
7 Model Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
7.1 Bayesian Inference. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
7.2 Model Selection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
7.3 Model Selection When There Are Many Models . . . . . . . . . . . . . 66
7.4 Occam’s Razor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
7.5 Model Averaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
8 Probabilistic Graphical Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
8.1 Mixture Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
10 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
3 Bayesian Regression and Gaussian Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
2 Bayesian Inference with Linear Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
2.1 Maximum Likelihood Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
2.2 Bayesian Prediction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
2.3 Schur Identity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
3 Gaussian Process Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
3.1 Gaussian Processes in Finance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.2 Gaussian Processes Regression and Prediction . . . . . . . . . . . . . . . 93
3.3 Hyperparameter Tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.4 Computational Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4 Massively Scalable Gaussian Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
4.1 Structured Kernel Interpolation (SKI) . . . . . . . . . . . . . . . . . . . . . . . . . 97
4.2 Kernel Approximations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
5 Example: Pricing and Greeking with Single-GPs. . . . . . . . . . . . . . . . . . . . . 98
5.1 Greeking. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
5.2 Mesh-Free GPs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
5.3 Massively Scalable GPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
6 Multi-response Gaussian Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
6.1 Multi-Output Gaussian Process Regression
and Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
8.1 Programming Related Questions* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4 Feedforward Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
2 Feedforward Architectures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
2.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
2.2 Geometric Interpretation of Feedforward Networks . . . . . . . . . . 114
2.3 Probabilistic Reasoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
2.4 Function Approximation with Deep Learning* . . . . . . . . . . . . . . . 119
2.5 VC Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
2.6 When Is a Neural Network a Spline?* . . . . . . . . . . . . . . . . . . . . . . . . . 124
2.7 Why Deep Networks? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
3 Convexity and Inequality Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
3.1 Similarity of MLPs with Other Supervised Learners . . . . . . . . . 138
4 Training, Validation, and Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
5 Stochastic Gradient Descent (SGD) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
5.1 Back-Propagation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
5.2 Momentum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
6 Bayesian Neural Networks* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
8.1 Programming Related Questions* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
5 Interpretability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
2 Background on Interpretability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
2.1 Sensitivities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
3 Explanatory Power of Neural Networks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
3.1 Multiple Hidden Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
3.2 Example: Step Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
4 Interaction Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
4.1 Example: Friedman Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
5 Bounds on the Variance of the Jacobian. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
5.1 Chernoff Bounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
5.2 Simulated Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
6 Factor Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
6.1 Non-linear Factor Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
6.2 Fundamental Factor Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
8.1 Programming Related Questions* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188

Part II Sequential Learning


6 Sequence Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
2 Autoregressive Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
2.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
2.2 Autoregressive Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
2.3 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
2.4 Stationarity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
2.5 Partial Autocorrelations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
2.6 Maximum Likelihood Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
2.7 Heteroscedasticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
2.8 Moving Average Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
2.9 GARCH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
2.10 Exponential Smoothing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
3 Fitting Time Series Models: The Box–Jenkins Approach . . . . . . . . . . . . 205
3.1 Stationarity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
3.2 Transformation to Ensure Stationarity . . . . . . . . . . . . . . . . . . . . . . . . . 206
3.3 Identification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
3.4 Model Diagnostics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
4 Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
4.1 Predicting Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
4.2 Time Series Cross-Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
5 Principal Component Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
5.1 Principal Component Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
5.2 Dimensionality Reduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
7 Probabilistic Sequence Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
2 Hidden Markov Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
2.1 The Viterbi Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
2.2 State-Space Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
3 Particle Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
3.1 Sequential Importance Resampling (SIR) . . . . . . . . . . . . . . . . . . . . . 228
3.2 Multinomial Resampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
3.3 Application: Stochastic Volatility Models . . . . . . . . . . . . . . . . . . . . . 230
4 Point Calibration of Stochastic Filters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
5 Bayesian Calibration of Stochastic Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
8 Advanced Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
2 Recurrent Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
2.1 RNN Memory: Partial Autocovariance . . . . . . . . . . . . . . . . . . . . . . . . 244
2.2 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
2.3 Stationarity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
2.4 Generalized Recurrent Neural Networks (GRNNs) . . . . . . . . . . . 248
3 Gated Recurrent Units. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
3.1 α-RNNs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
3.2 Neural Network Exponential Smoothing . . . . . . . . . . . . . . . . . . . . . . 251
3.3 Long Short-Term Memory (LSTM) . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
4 Python Notebook Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
4.1 Bitcoin Prediction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
4.2 Predicting from the Limit Order Book. . . . . . . . . . . . . . . . . . . . . . . . . 256
5 Convolutional Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
5.1 Weighted Moving Average Smoothers . . . . . . . . . . . . . . . . . . . . . . . . 258
5.2 2D Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
5.3 Pooling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
5.4 Dilated Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
5.5 Python Notebooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
6 Autoencoders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
6.1 Linear Autoencoders. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
6.2 Equivalence of Linear Autoencoders and PCA . . . . . . . . . . . . . . . 268
6.3 Deep Autoencoders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
8.1 Programming Related Questions* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275

Part III Sequential Data with Decision-Making


9 Introduction to Reinforcement Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
2 Elements of Reinforcement Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
2.1 Rewards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
2.2 Value and Policy Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
2.3 Observable Versus Partially Observable Environments . . . . . . . 286
3 Markov Decision Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
3.1 Decision Policies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
3.2 Value Functions and Bellman Equations . . . . . . . . . . . . . . . . . . . . . . 293
3.3 Optimal Policy and Bellman Optimality. . . . . . . . . . . . . . . . . . . . . . . 296
4 Dynamic Programming Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
4.1 Policy Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
4.2 Policy Iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
4.3 Value Iteration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
5 Reinforcement Learning Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
5.1 Monte Carlo Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
5.2 Policy-Based Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
5.3 Temporal Difference Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
5.4 SARSA and Q-Learning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
5.5 Stochastic Approximations and Batch-Mode Q-learning . . . . . 316
5.6 Q-learning in a Continuous Space: Function
Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
5.7 Batch-Mode Q-Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
5.8 Least Squares Policy Iteration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
5.9 Deep Reinforcement Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
10 Applications of Reinforcement Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
2 The QLBS Model for Option Pricing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
3 Discrete-Time Black–Scholes–Merton Model . . . . . . . . . . . . . . . . . . . . . . . . 352
3.1 Hedge Portfolio Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
3.2 Optimal Hedging Strategy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
3.3 Option Pricing in Discrete Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
3.4 Hedging and Pricing in the BS Limit . . . . . . . . . . . . . . . . . . . . . . . . . . 359
4 The QLBS Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
4.1 State Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
4.2 Bellman Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
4.3 Optimal Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
4.4 DP Solution: Monte Carlo Implementation . . . . . . . . . . . . . . . . . . . 368
4.5 RL Solution for QLBS: Fitted Q Iteration . . . . . . . . . . . . . . . . . . . . . 370
4.6 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
4.7 Option Portfolios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
4.8 Possible Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
5 G-Learning for Stock Portfolios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
5.2 Investment Portfolio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
5.3 Terminal Condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
5.4 Asset Returns Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
5.5 Signal Dynamics and State Space. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
5.6 One-Period Rewards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384
5.7 Multi-period Portfolio Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
5.8 Stochastic Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386
5.9 Reference Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
5.10 Bellman Optimality Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
5.11 Entropy-Regularized Bellman Optimality Equation . . . . . . . . . . 389
5.12 G-Function: An Entropy-Regularized Q-Function . . . . . . . . . . . . 391
5.13 G-Learning and F-Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
5.14 Portfolio Dynamics with Market Impact . . . . . . . . . . . . . . . . . . . . . . 395
5.15 Zero Friction Limit: LQR with Entropy Regularization . . . . . . 396
5.16 Non-zero Market Impact: Non-linear Dynamics . . . . . . . . . . . . . . 400
6 RL for Wealth Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
6.1 The Merton Consumption Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
6.2 Portfolio Optimization for a Defined Contribution
Retirement Plan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
6.3 G-Learning for Retirement Plan Optimization . . . . . . . . . . . . . . . . 408
6.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
11 Inverse Reinforcement Learning and Imitation Learning . . . . . . . . . . . . . 419
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
2 Inverse Reinforcement Learning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
2.1 RL Versus IRL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
2.2 What Are the Criteria for Success in IRL? . . . . . . . . . . . . . . . . . . . . 426
2.3 Can a Truly Portable Reward Function Be Learned
with IRL?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
3 Maximum Entropy Inverse Reinforcement Learning . . . . . . . . . . . . . . . . . 428
3.1 Maximum Entropy Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
3.2 Maximum Causal Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
3.3 G-Learning and Soft Q-Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
3.4 Maximum Entropy IRL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
3.5 Estimating the Partition Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
4 Example: MaxEnt IRL for Inference of Customer Preferences . . . . . . 443
4.1 IRL and the Problem of Customer Choice. . . . . . . . . . . . . . . . . . . . . 444
4.2 Customer Utility Function. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
4.3 Maximum Entropy IRL for Customer Utility . . . . . . . . . . . . . . . . . 446
4.4 How Much Data Is Needed? IRL and Observational
Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
4.5 Counterfactual Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
4.6 Finite-Sample Properties of MLE Estimators . . . . . . . . . . . . . . . . . 454
4.7 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
5 Adversarial Imitation Learning and IRL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
5.1 Imitation Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
5.2 GAIL: Generative Adversarial Imitation Learning. . . . . . . . . . . . 459
5.3 GAIL as an Art of Bypassing RL in IRL . . . . . . . . . . . . . . . . . . . . . . 461
5.4 Practical Regularization in GAIL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
5.5 Adversarial Training in GAIL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
5.6 Other Adversarial Approaches* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
5.7 f-Divergence Training* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
5.8 Wasserstein GAN*. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
5.9 Least Squares GAN* . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
6 Beyond GAIL: AIRL, f-MAX, FAIRL, RS-GAIL, etc.* . . . . . . . . . . . . . 471
6.1 AIRL: Adversarial Inverse Reinforcement Learning . . . . . . . . . 472
6.2 Forward KL or Backward KL?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
6.3 f-MAX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
6.4 Forward KL: FAIRL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
6.5 Risk-Sensitive GAIL (RS-GAIL) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
6.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
7 Gaussian Process Inverse Reinforcement Learning. . . . . . . . . . . . . . . . . . . 481
7.1 Bayesian IRL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
7.2 Gaussian Process IRL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
8 Can IRL Surpass the Teacher? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484


8.1 IRL from Failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
8.2 Learning Preferences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
8.3 T-REX: Trajectory-Ranked Reward EXtrapolation . . . . . . . . . . . 488
8.4 D-REX: Disturbance-Based Reward EXtrapolation . . . . . . . . . . 490
9 Let Us Try It Out: IRL for Financial Cliff Walking . . . . . . . . . . . . . . . . . . 490
9.1 Max-Causal Entropy IRL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
9.2 IRL from Failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
9.3 T-REX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
9.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
10 Financial Applications of IRL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
10.1 Algorithmic Trading Strategy Identification. . . . . . . . . . . . . . . . . . . 495
10.2 Inverse Reinforcement Learning for Option Pricing . . . . . . . . . . 497
10.3 IRL of a Portfolio Investor with G-Learning . . . . . . . . . . . . . . . . . . 499
10.4 IRL and Reward Learning for Sentiment-Based
Trading Strategies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 504
10.5 IRL and the “Invisible Hand” Inference . . . . . . . . . . . . . . . . . . . . . . . 505
11 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
12 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515
12 Frontiers of Machine Learning and Finance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
2 Market Dynamics, IRL, and Physics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
2.1 “Quantum Equilibrium–Disequilibrium” (QED) Model . . . . . . 522
2.2 The Langevin Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
2.3 The GBM Model as the Langevin Equation . . . . . . . . . . . . . . . . . . . 524
2.4 The QED Model as the Langevin Equation . . . . . . . . . . . . . . . . . . . 525
2.5 Insights for Financial Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
2.6 Insights for Machine Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
3 Physics and Machine Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
3.1 Hierarchical Representations in Deep Learning
and Physics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
3.2 Tensor Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
3.3 Bounded-Rational Agents in a Non-equilibrium
Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
4 A “Grand Unification” of Machine Learning? . . . . . . . . . . . . . . . . . . . . . . . . 535
4.1 Perception-Action Cycles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
4.2 Information Theory Meets Reinforcement Learning. . . . . . . . . . 538
4.3 Reinforcement Learning Meets Supervised Learning:
Predictron, MuZero, and Other New Ideas . . . . . . . . . . . . . . . . . . . . 539
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 543
About the Authors

Matthew F. Dixon is an Assistant Professor of Applied Math at the Illinois Institute of Technology. His research in computational methods for finance is funded by
Intel. Matthew began his career in structured credit trading at Lehman Brothers
in London before pursuing academics and consulting for financial institutions in
quantitative trading and risk modeling. He holds a Ph.D. in Applied Mathematics
from Imperial College (2007) and has held postdoctoral and visiting professor
appointments at Stanford University and UC Davis, respectively. He has published
over 20 peer-reviewed publications on machine learning and financial modeling,
has been cited in Bloomberg Markets and the Financial Times as an AI in fintech
expert, and is a frequently invited speaker in Silicon Valley and on Wall Street. He
has published R packages, served as a Google Summer of Code mentor, and is the
co-founder of the Thalesians Ltd.

Igor Halperin is a Research Professor in Financial Engineering at NYU and an AI Research Associate at Fidelity Investments. He was previously an Executive
Director of Quantitative Research at JPMorgan for nearly 15 years. Igor holds a
Ph.D. in Theoretical Physics from Tel Aviv University (1994). Prior to joining
the financial industry, he held postdoctoral positions in theoretical physics at the
Technion and the University of British Columbia.

Paul Bilokon is CEO and Founder of Thalesians Ltd. and an expert in electronic
and algorithmic trading across multiple asset classes, having helped build such
businesses at Deutsche Bank and Citigroup. Before focusing on electronic trading,
Paul worked on derivatives and has served in quantitative roles at Nomura, Lehman
Brothers, and Morgan Stanley. Paul has been educated at Christ Church College,
Oxford, and Imperial College. Apart from mathematical and computational finance,
his academic interests include machine learning and mathematical logic.

Part I
Machine Learning with Cross-Sectional Data
Chapter 1
Introduction

This chapter introduces the industry context for machine learning in finance, dis-
cussing the critical events that have shaped the finance industry’s need for machine
learning and the unique barriers to adoption. The finance industry has adopted
machine learning to varying degrees of sophistication. How it has been adopted
is heavily fragmented by the academic disciplines underpinning the applications.
We review some key mathematical examples that demonstrate the nature of machine learning and how it is used in practice, with the focus on building intuition for more technical expositions in later chapters. In particular, we begin to address many finance practitioners' concerns that neural networks are a “black-box” by
showing how they are related to existing well-established techniques such as
linear regression, logistic regression, and autoregressive time series models. Such
arguments are developed further in later chapters. This chapter also introduces
reinforcement learning for finance and is followed by more in-depth case studies
highlighting the design concepts and practical challenges of applying machine
learning in practice.

1 Background

In 1955, John McCarthy, then a young Assistant Professor of Mathematics at Dartmouth College in Hanover, New Hampshire, submitted a proposal with Marvin
Minsky, Nathaniel Rochester, and Claude Shannon for the Dartmouth Summer
Research Project on Artificial Intelligence (McCarthy et al. 1955). These organizers
were joined in the summer of 1956 by Trenchard More, Oliver Selfridge, Herbert
Simon, Ray Solomonoff, among others. The stated goal was ambitious:
“The study is to proceed on the basis of the conjecture that every aspect
of learning or any other feature of intelligence can in principle be so precisely
described that a machine can be made to simulate it. An attempt will be made to
find how to make machines use language, form abstractions and concepts, solve
kinds of problems now reserved for humans, and improve themselves.” Thus the
field of artificial intelligence, or AI, was born.
Since this time, AI has perpetually strived to outperform humans on various judg-
ment tasks (Pinar Saygin et al. 2000). The most fundamental metric for this success
is the Turing test—a test of a machine’s ability to exhibit intelligent behavior equiv-
alent to, or indistinguishable from, that of a human (Turing 1995). In recent years,
a pattern of success in AI has emerged—one in which machines outperform humans in the
presence of a large number of decision variables, usually with the best solution being
found through evaluating an exponential number of candidates in a constrained
high-dimensional space. Deep learning models, in particular, have proven remark-
ably successful in a wide field of applications (DeepMind 2016; Kubota 2017;
Esteva et al. 2017) including image processing (Simonyan and Zisserman 2014),
learning in games (DeepMind 2017), neuroscience (Poggio 2016), energy conser-
vation (DeepMind 2016), and skin cancer diagnostics (Kubota 2017; Esteva et al. 2017).
One popular account of this reasoning points to humans’ perceived inability
to process large amounts of information and make decisions beyond a few key
variables. But this view, even if fractionally representative of the field, does no
justice to AI or human learning. Humans are not being replaced any time soon.
The median estimate for human intelligence in terms of gigaflops is about 10^4 times more than the machine that ran AlphaGo. Of course, this figure is caveated by the
important question of whether the human mind is even a Turing machine.

1.1 Big Data—Big Compute in Finance

The growth of machine-readable data to record and communicate activities throughout the financial system combined with persistent growth in computing power and
storage capacity has significant implications for every corner of financial modeling.
Since the financial crises of 2007–2008, regulatory supervisors have reoriented
towards “data-driven” regulation, a prominent example of which is the collection
and analysis of detailed contractual terms for the bank loan and trading book stress-
testing programs in the USA and Europe, instigated by the crisis (Flood et al. 2016).
“Alternative data”—which refers to data and information outside of the usual
scope of securities pricing, company fundamentals, or macroeconomic indicators—
is playing an increasingly important role for asset managers, traders, and decision
makers. Social media is now ranked as one of the top categories of alternative data
currently used by hedge funds. Trading firms are hiring experts in machine learning
with the ability to apply natural language processing (NLP) to financial news and
other unstructured documents such as earnings announcement reports and SEC 10K
reports. Data vendors such as Bloomberg, Thomson Reuters, and RavenPack are
providing processed news sentiment data tailored for systematic trading models.
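To make this concrete, the following is a minimal sketch of how a news-sentiment signal might be built with off-the-shelf tools. The headlines, labels, and model choice are invented for illustration only and do not represent any vendor's methodology.

```python
# Minimal, illustrative news-sentiment classifier (scikit-learn).
# The headlines and labels are invented placeholders, not vendor data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Company beats earnings expectations and raises guidance",
    "Regulator opens probe into accounting irregularities",
    "Strong demand lifts quarterly revenue to a record high",
    "Profit warning issued after weak holiday sales",
]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative (toy labels)

# TF-IDF bag-of-words features feed a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

# Score an unseen headline: predicted probability of positive sentiment.
print(model.predict_proba(["Company announces surprise CEO departure"])[0, 1])
```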
In de Prado (2019), some of the properties of these new, alternative datasets are
explored: (a) many of these datasets are unstructured, non-numerical, and/or non-
categorical, like news articles, voice recordings, or satellite images; (b) they tend
to be high-dimensional (e.g., credit card transactions) and the number of variables
may greatly exceed the number of observations; (c) such datasets are often sparse,
containing NaNs (not-a-numbers); (d) they may implicitly contain information
about networks of agents.
Furthermore, de Prado (2019) explains why classical econometric methods fail
on such datasets. These methods are often based on linear algebra, which fails when
the number of variables exceeds the number of observations. Geometric objects,
such as covariance matrices, fail to recognize the topological relationships that
characterize networks. On the other hand, machine learning techniques offer the
numerical power and functional flexibility needed to identify complex patterns
in a high-dimensional space, offering a significant improvement over econometric
methods.
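As a simple numerical illustration of this point, the sketch below uses synthetic data in which the number of features exceeds the number of observations. Ordinary least squares is ill-posed in this regime, whereas an L1-penalized regression (the Lasso), one of the simplest regularized learners, still recovers a sparse signal. The data-generating process and parameter choices are assumptions made purely for illustration.

```python
# Synthetic illustration of the p > n regime: 200 features, 50 observations.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n_obs, n_feat = 50, 200
X = rng.standard_normal((n_obs, n_feat))
beta = np.zeros(n_feat)
beta[:5] = 1.0                                  # only 5 features carry signal
y = X @ beta + 0.1 * rng.standard_normal(n_obs)

# OLS is ill-posed here: it fits the noise and spreads weight over all features.
ols = LinearRegression().fit(X, y)
# The Lasso's L1 penalty shrinks irrelevant coefficients exactly to zero.
lasso = Lasso(alpha=0.1).fit(X, y)

print("non-zero OLS coefficients:  ", int(np.sum(np.abs(ols.coef_) > 1e-8)))
print("non-zero Lasso coefficients:", int(np.sum(np.abs(lasso.coef_) > 1e-8)))
```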
The “black-box” view of ML is dismissed in de Prado (2019) as a misconception.
Recent advances in ML make it applicable to the evaluation of plausibility of
scientific theories; determination of the relative informational content of variables (usually
referred to as features in ML) for explanatory and/or predictive purposes; causal
inference; and visualization of large, high-dimensional, complex datasets.
Advances in ML remedy the shortcomings of econometric methods in goal
setting, outlier detection, feature extraction, regression, and classification when it
comes to modern, complex alternative datasets. For example, in the presence of p
features there may be up to 2^p − p − 1 multiplicative interaction effects. For two features there is only one such interaction effect, x_1 x_2. For three features, there are x_1 x_2, x_1 x_3, x_2 x_3, x_1 x_2 x_3. For as few as ten features, there are 1,013 multiplicative interaction effects. Unlike ML algorithms, econometric models do not “learn” the structure of the data. The model specification may easily miss some of the interaction effects. The consequences of missing an interaction effect, e.g. fitting y_t = x_{1,t} + x_{2,t} + ε_t instead of y_t = x_{1,t} + x_{2,t} + x_{1,t} x_{2,t} + ε_t, can be dramatic.
A machine learning algorithm, such as a decision tree, will recursively partition
a dataset with complex patterns into subsets with simple patterns, which can then
be fit independently with simple linear specifications. Unlike the classical linear
regression, this algorithm “learns” about the existence of the x_{1,t} x_{2,t} effect, yielding
much better out-of-sample results.
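A minimal sketch of this example on synthetic data is given below: the linear specification omits the x_{1,t} x_{2,t} term, while a decision tree, given only the two raw features, effectively learns the interaction and achieves a better out-of-sample fit. The data-generating process, tree depth, and sample sizes are illustrative assumptions.

```python
# Synthetic illustration: a mis-specified linear model versus a decision tree
# when the true data-generating process contains an x1*x2 interaction.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)
n = 5000
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
y = x1 + x2 + x1 * x2 + 0.1 * rng.standard_normal(n)   # true DGP with interaction

X = np.column_stack([x1, x2])                           # interaction term NOT supplied
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

linear = LinearRegression().fit(X_tr, y_tr)             # fits y = b1*x1 + b2*x2 + e
tree = DecisionTreeRegressor(max_depth=8).fit(X_tr, y_tr)

print("out-of-sample R^2, linear model :", r2_score(y_te, linear.predict(X_te)))
print("out-of-sample R^2, decision tree:", r2_score(y_te, tree.predict(X_te)))
```

In this toy setting, the tree's recursive partitioning approximates the omitted x_{1,t} x_{2,t} term without it ever being specified, which is the sense in which the algorithm “learns” the structure of the data.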
There is a draw towards more empirically driven modeling in asset pricing
research—using ever richer sets of firm characteristics and “factors” to describe and
understand differences in expected returns across assets and model the dynamics
of the aggregate market equity risk premium (Gu et al. 2018). For example,
Harvey et al. (2016) study 316 “factors,” which include firm characteristics and
common factors, for describing stock return behavior. Measurement of an asset’s
risk premium is fundamentally a problem of prediction—the risk premium is the
conditional expectation of a future realized excess return. Methodologies that can
reliably attribute excess returns to tradable anomalies are highly prized. Machine
learning provides a non-linear empirical approach for modeling realized security
returns from firm characteristics. Dixon and Polson (2019) review the formulation
of asset pricing models for measuring asset risk premia and cast neural networks in
canonical asset pricing frameworks.
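The following sketch illustrates this framing on simulated data: a small feed-forward network is fit to realized excess returns as a non-linear function of firm characteristics, so that the fitted network estimates the conditional risk premium. The simulated characteristics, return process, and network architecture are assumptions for illustration and are not the specifications used in Gu et al. (2018) or Dixon and Polson (2019).

```python
# Simulated illustration: estimating a conditional risk premium E[r | z] with a
# small feed-forward network. Characteristics, returns, and architecture are
# invented for illustration; units are arbitrary.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n_stocks, n_chars = 2000, 10
Z = rng.standard_normal((n_stocks, n_chars))                # firm characteristics
risk_premium = np.tanh(Z[:, 0]) + 0.5 * Z[:, 1] * Z[:, 2]   # non-linear "true" premium
excess_ret = risk_premium + 0.5 * rng.standard_normal(n_stocks)  # realized excess returns

Z_tr, Z_te, r_tr, r_te = train_test_split(Z, excess_ret, test_size=0.3, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
net.fit(Z_tr, r_tr)

# The fitted network approximates the conditional expectation of excess returns.
print("out-of-sample R^2:", net.score(Z_te, r_te))
```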

1.2 Fintech

The rise of data and machine learning has led to a “fintech” industry, covering
digital innovations and technology-enabled business model innovations in the
financial sector (Philippon 2016). Examples of innovations that are central to
fintech today include cryptocurrencies and the blockchain, new digital advisory and
trading systems, peer-to-peer lending, equity crowdfunding, and mobile payment
systems. Behavioral prediction is often a critical aspect of product design and risk
management needed for consumer-facing business models; consumers or economic
agents are presented with well-defined choices but have unknown economic needs
and limitations, and in many cases do not behave in a strictly economically rational
fashion. Therefore it is necessary to treat parts of the system as a black-box that
operates under rules that cannot be known in advance.

1.2.1 Robo-Advisors

Robo-advisors are financial advisors that provide financial advice or portfolio management services with minimal human intervention. The focus has been on
portfolio management rather than on estate and retirement planning, although there
are exceptions, such as Blooom. Some limit investors to the ETFs selected by the
service; others are more flexible. Examples include Betterment, Wealthfront, Wise-
Banyan, FutureAdvisor (working with Fidelity and TD Ameritrade), Blooom, Motif
Investing, and Personal Capital. The degree of sophistication and the utilization of
machine learning are on the rise among robo-advisors.

1.2.2 Fraud Detection

In 2011 fraud cost the financial industry approximately $80 billion annually
(Consumer Reports, June 2011). According to PwC’s Global Economic Crime
Survey 2016, 46% of respondents in the Financial Services industry reported being
victims of economic crime in the last 24 months—a small increase from 45%
reported in 2014. 16% of those that reported experiencing economic crime had
suffered more than 100 incidents, with 6% suffering more than 1,000. According
to the survey, the top 5 types of economic crime are asset misappropriation (60%,
down from 67% in 2014), cybercrime (49%, up from 39% in 2014), bribery and
corruption (18%, down from 20% in 2014), money laundering (24%, as in 2014),
and accounting fraud (18%, down from 21% in 2014). Detecting economic crimes is
colours. Wordsworth wooed the beauty of nature immediately and
for itself. His human figures are merely put in roughly to help out the
foreground. But Uhland rarely paints nature directly; he rather uses
natural scenery as a background to his “genre” pictures, which
interest chiefly by presenting the phases of human feeling, and the
joys and sorrows of mankind. All his poems are alive with the breath
of Spring—fresh, luminous, and joyous; but we are aware of his
surroundings rather from the effects they produce upon him than
from any actual descriptions. His poems have the ring of the true
singer; an internal melody permeates his verse, capricious rather
than monotonous, changing its airs and cadences like the voice of a
bird, rather than flowing on with the mechanical jingling of a musical
box. This is the quality which gives the bardic stamp to the
compositions of a Burns, a Béranger, a Tennyson, and a want of
which is felt in the glowing rhetoric of Byron, and in
“The beauty for ever unchangingly bright,
Like the soft sunny lapse of a summer day’s light,”

which belongs to the poetry of Moore. In matter and choice of
subject, and in some measure in respect of treatment, he has much in
common with Walter Scott. His preparatory studies were much of
the same nature, consisting in the history, scenery, and legends of his
own country. He has done for Germany what even Schiller and
Goethe with all their greatness omitted to do in the same degree. He
has immortalised her local recollections. Second only to the man who
leads an army to rescue his country from the stranger, such a man is
a patriot of the true kind, whatever the colour of his politics may be.
Some poems he has written are like those exquisite ancient
miniature pictures on a gold ground, best to be understood and
appreciated by the educated connoisseur, while others are so plain in
language and sentiment that they have sunk into the hearts of the
people, and will flow for ever from the lips of the people in the shape
of national songs. Uhland differs most from the twin stars of
Germany—Schiller and Goethe—in that his poetry is more
exclusively objective than theirs. Goethe was all wrapt in his glorious
self, and his all-absorbing devotion to art. Like Horace’s hero, a
world might have fallen in ruins about him and he would not have
quailed; and, indeed, all the crash of empires and clash of armies in
which he lived left his brow as serene as that of one of the gods of
Epicurus. But Uhland could not sing through the humiliation of his
country, and his voice sank within him through the French
occupation; but when Germany arose at length, and with incredible
hardihood pushed back the flood of invasion, Uhland, like Körner
and others, did manful service, not by fighting and falling among the
foremost, as Körner did, but with even better judgment, as
husbanding his gifts, becoming the Tyrtæus of the Liberation War.
His songs of that time have a deep and manly note peculiarly their
own, and they are such as no lesser circumstances could have called
forth. Uhland, again, as distinguished from Schiller and Goethe, was
the prominent poet of the Romantic school. But he was to them what
Socrates was to the Sophists—counted with them, but not of them.
From whatever source he derived his inspirations, he always
remained fast rooted in truth and nature. The unreal and morbid
sentimentality of Tieck and Novalis was unknown to him; nor did he
share the Romeward tendencies of Friedrich Schlegel, while fully
appreciating the beauty of the Roman Catholic ritual and
associations, and freely interweaving them with the golden tissue of
his compositions. On the whole, he is the most German of German
poets, as he owes none of his inspiration to “the gods of Greece,” and
little to any foreign source, except those old Romance writers whom
he studied at Paris; but then it must be borne in mind that the early
threads of history in France and Germany are closely interwoven,
and the empire of the Franks in particular belonged as much to one
as to the other.
In attempting to present to the English reader some of the best of
the poems of Uhland, we must premise that to translate a perfect
poem from one language into another is simply an impossibility, and
difficult exactly in proportion to the degree in which any poem
approaches perfection. The special difficulty of translating German
poetry into English, and vice versâ, consists in this, that though the
two languages are not in their basis much more than dialects of the
same original stock, yet German is as generally dissyllabic as English
is monosyllabic, owing in part to English having discarded inflection
where German retains it. We are aware that many of Uhland’s poems
are already known through very good translations, one of those most
highly spoken of being that of Mr Platt. Longfellow has also done
freely into English verse the ‘Castle by the Sea,’ ‘The Black Knight,’
the ‘Luck of Edenhall,’ and others, and has succeeded admirably in
catching the spirit of the original. Not having Mr Platt’s translations
before us, as we write in Germany, we must apologise, in our zeal for
Uhland’s memory, for attempts of our own in the same direction, in
which we have tried to reproduce as nearly as we can the ideas of the
original in the metres in which they appeared. It is impossible to find
a song in the whole collection more perfect than ‘Der Wirthin
Töchterlein.’ There is not a word or thought one would wish
changed. The pathos is expressed, without a single pathetic epithet,
solely by the situation. This poem has been interpreted politically, as
alluding to the different feelings with which three classes of patriots
regard the corpse of German liberty. But to our mind this spoils the
simplicity of the picture. It is more likely to be true that the poem
was occasioned by an incident of Uhland’s youth, since it is said that
he once stopped some students who were singing it under his
window, telling them not to end it, as the end had too close a
personal interest for him. If this be true, the poem is more
complimentary to the memory of the fair maid of the inn than to the
lady who became Frau Uhland. But poets will be poets, as boys will
be boys.
THE LANDLADY’S DAUGHTER.
Three students they hied them over the Rhine,
And there they turned in at a landlady’s sign.

“Landlady, hast thou good beer and wine?
And where is that beauteous daughter of thine?”

“My beer and wine are fresh and clear;
My daughter she lies on the funeral-bier.”

And when they did enter the inner room,
There lay she all white in a shrine of gloom.

The first from her face the veil he took,
And, gazing upon her with sorrowful look,

“Oh, wert thou living, thou fairest maid,
’Tis thee I would love from this hour,” he said.

The second let down on the face that slept
The veil, and turned him away and wept:

“Alas for thee there on the funeral-bier!
For thee I have loved full many a year.”

The third, he lifted again the veil,
And kissed her upon the mouth so pale:

“I loved thee before, I love thee to-day,
And I will love thee for ever and aye!”

The last line, “Und werde dich lieben in Ewigkeit,” would be more
correctly rendered, “And I will love thee in eternity.” And we are
equally aware that our “landlady’s sign” is objectionable, as the
original is simply, “They turned in there to a landlady’s.” But it would
be hard to render it otherwise without losing the quadruple rhyme,
which has a certain mournful elegance. ‘The Landlady’s Daughter’
naturally leads us to ‘The Goldsmith’s Daughter.’ In this poem we
must not suppose that the hero and heroine meet for the first time.
The maiden has fallen in love with the knight, her superior in station,
but scarcely dares even confess it to herself, till the knight agreeably
surprises her by adorning her as his bride, taking her acceptance for
granted. We would not spoil the romance by hinting that it may not
have been an uncommon case in the middle ages for young
noblemen of small fortune to seek their brides from the rich
bourgeoisie of the Free Towns.
THE GOLDSMITH’S DAUGHTER.
A goldsmith stood within his stall,
Mid pearl and precious stone:
Of all the gems I own, of all,
Thou art the best, Heléna,
My daughter, darling one.

One day came in a knight so fine:
“Good morrow, maiden fair;
Good morrow, worthy goldsmith mine;
Make me a costly crownlet,
For my sweet bride to wear.”

The crown was made, the work was good,
It shone the eye to charm,
But Helen hung in pensive mood
(I trow, when none was by her)
The trinket on her arm.

“Ah! happy happy she to bear
This glittering bridal toy;
Would that true knight give me to wear
A crownlet but of roses,
How full were I of joy!”

Ere long the knight came in again,
Did well the crown approve:
“Now make me, goldsmith, best of men,
A ring with diamonds set,
To deck my lady-love.”

The ring was made, the work was good,
The diamonds brightly shone,
But Helen drew ’t in pensive mood
(I trow, when none was by her)
Her finger half-way on.

“Ah, happy happy she to bear
This other glittering toy;
Would that true knight give me to wear
But of his hair a ringlet,
How full were I of joy!”

Ere long the knight came in again,
Did well the ring approve:
“Thou’st made me, goldsmith, best of men,
The gifts with rarest cunning,
For my sweet lady-love.

“Yet would I prove them how they sit;
So prithee, maiden, here
Let me on thee for trial fit
My darling’s bridal jewels:
In beauty she’s thy peer.”

’Twas on a Sunday morn betime;
It happed the maiden fair,
Expectant of the matin chime,
Had donned her best of raiment
With more than wonted care.

With coyness all aglow, behold
The maid before him stand;
He crowns her with the crown of gold,
The ring upon her finger
He sets, then takes her hand.

“Heléna sweet, Heléna true,
I’ve ended now the jest;
That fairest bride is none but you,
By whom I would the crownlet
And ring should be possest.

“Mid gold and pearl and jewel fine
Hath been thy childhood’s home;
Be this to thee a welcome sign
That thou to heights of honour
With me shalt duly come.”
There is a great dramatic beauty in the accident of the girl having put
on her best apparel to make ready to go to church, so that the knight
has only to furnish her with the bridal accessaries to prepare her at a
moment’s notice to go to church with him.
A ferry-boat is a favourite subject for painters; and the navigation
of his native Neckar has been to Uhland the occasion of some of his
sweetest verse-pictures. In the poem called ‘The Boat’ he shows how
a freight of people, before unacquainted with each other, and
therefore silent, struck up an intimacy, and parted with regret, when
some improvised music had once furnished an introduction.
THE BOAT.
The boat is swiftly going,
Adown the river’s flowing;
No word beguiles the labour,
For no one knows his neighbour.

What pulls from coat the stranger,
The tawny forest-ranger?
A horn that sounds so mildly,
The stream-banks echo wildly.

Then haft and stopper screwing,
His staff to flute undoing,
Another, deftly playing,
Chimes with the cornet’s braying.

Shy sat the maid, self-chidden,
As speech were thing forbidden,
Now blend her accents willing
With flute and cornet’s trilling.

The rowers with new pleasure
Pull strokes that match the measure;
The boat the stream divideth,
And, lulled by music, glideth.

It strikes with shock the landing,
The folk are all disbanding;
“May we again meet, brother,
On board this boat or other!”

The companion to this little cabinet picture of the boat going with
the stream is the crossing of the ferry. The poet offers the ferryman
three times his fare, because the spirits of two friends, now dead,
who crossed the same ferry with him in past years, are supposed to
have gone with him.
THE FERRY.
Many years have passed for ever
Since I came across the river;
Here’s the tower, in evening’s blushing,
There, as erst, the weir is rushing.

Then with me the boat did carry
Two companions o’er the ferry,
One a friend, a father seeming,
One a youth with high hopes beaming.

That one lived a peaceful story,
And is gone in peace to glory;
This, of all most fiery-hearted,
Hath in fight and storm departed.

So when I, mid blessing cherished,
Dare to think on seasons perished,
Must I still to sorrow waken,
Missing friends that Death hath taken.

Friendship may not be united,
Save when soul to soul is plighted:
Full of soul those hours went by me,
Still to souls a bond doth tie me.

Ferryman, I gladly proffer
Thrice the fare that others offer,
Since two spirits thou didst carry
At my side across the ferry.

Longfellow, in his ‘Hyperion,’ has beautifully rendered the spirit of
this poem, if he has somewhat missed its cadence.
The fine elegy on the death of Tell belongs to Uhland’s ‘Songs of
Freedom.’ Tell’s death is undemonstrative, and he characteristically
comes by it by rescuing a child from a torrent. ‘The Sunken Crown’
stands before it in the collection, probably by way of introduction:—
THE SUNKEN CROWN.
There, over on the hill-top,
A little house doth stand;
One gazes from the threshold
On all the lovely land.
There sits a free-born peasant
Upon the bench at even;
He whets his scythe so blithely,
And sings his thanks to Heaven.

There, under in the hollow,
Where glooms the mere of old,
There lieth deeply sunken
A proud rich crown of gold:
Though in it gleam at nightfall
Carbuncle and sapphire,
Since ages grey it lies there,
To seek it none desire.

In his neighbouring Switzerland the poet seems to see the image of
his ideal freedom, modest and self-respecting; founded on the laws
of decency and order; possessing its ancient charters and title-deeds;
no ephemeral offspring of democratic chaos; a gentle and serene
goddess of justice holding the exact balance between despotism and
universal suffrage. Such freedom as this, in many grand patriotic
strains, he desires for Würtemberg—a country whose praises he
enumerates in soil, products, climate, scenery, and manners, only
lamenting one want, without which it would be a paradise, the want
of “Good Right.” He is certainly justified in his praise of his country,
which, with the Grand-Duchy of Baden, forms a corner in the map of
Europe which is a garden of fertility, a museum of antiquities, and a
labyrinth of natural grandeur; but we question whether Uhland is
not over-sensitive as to its political misery.
When we pass from his ‘Songs of Freedom’ to his ‘Songs of the
Affections,’ we find the same moderation and purity of sentiment.
Uhland always seems afraid of saying too much. His exquisite taste is
a constant check upon him. He leaves the lines of his sketches to
speak for themselves, and shrinks from too much elaboration. The
imaginative reader may, if he pleases, supply for himself much of the
inessential detail. What a picture of a bashful old-world lover he
gives us in his poem called ‘Resolution!’
RESOLUTION.
She comes to walk in this sweet wild;
To-day I’ll banish all alarm;
Why should I tremble at a child
That does no living creature harm?

All give her greeting near and far;
I would, but dare not do the same;
And to my soul’s transcendent star
I cannot lift my eyes for shame.

The flowers that bend as she doth fare,
The birds with their voluptuous song,—
All these their love so well declare,
Why must I only feel it wrong?

To highest Heaven I oft prefer
Through livelong nights a bitter plaint;
Yet would I say three words to her,
“I love thee,” then my heart is faint.

In wait behind the tree I’ll stay
She passes in her daily walk,
And whisper “My sweet life” to-day,
As if in dreaming I did talk.

I will—but oh the fright I feel!
She comes, and she will see me sure;
So here into the bush I’ll steal,
And I shall see her pass secure.

For pathetic simplicity, perhaps none of his love-poems stands
higher than Die Mähderin—the ‘Female Mower.’ There is a pathos in
the very fact of the delicate girl—delicate at least in feeling—being
engaged in rude masculine toil, a case but too common in many
countries; then, again, in her hopeless attachment to the son of the
rich farmer; then in her overtasking her strength in mowing the
whole field without refreshment or repose, because the avaricious
and selfish old man has promised her his son’s hand as the price; and
again, in the killing deception at the close. She dies a martyr to the
combined effects of the labour and the disappointment, and the old
man has virtually murdered her to prevent her marrying his son and
for selfish gain. Another example of a deep and simple pathos,
produced by two pictures of the same place, is ‘The Castle on the
Sea;’ it is a dioramic change of effect produced by a dialogue. First
the castle stands superb in rising or setting sunlight, towering to
heaven and bowing to the deep; the king and queen walk on the
terrace in their royal insignia, and a beautiful princess walks with
them: the scene changes to a weird moonlight effect, where the castle
stands in ghostly grandeur; the king and queen are there on the
terrace, but without their robes or crowns; they are in mourning, and
the princess is no longer with them. This ballad has been effectively
translated by Longfellow. Though verging on the impossible in
subject, ‘The Mournful Tournament’ is a grand tragic sketch. Seven
knights came to joust for the favour of the king’s daughter, but as
they came in through the castle gate they heard the knell of her
funeral. They persist in the tournament; for the one who loves her
most truly holds that still, though dead, she is worthy to be fought
for, the victor gaining her wreath and ring. All fall in the fight but he,
and he is mortally wounded, but, as the prize of victory, is buried
with his lady-love.
Similar in actual improbability of subject, but demonstrating its
bare possibility by its tragic truth, is the ballad of ‘Three Young
Ladies.’ The father brings to mind the Greek bandit, the hero of
About’s ‘Roi des Montagnes,’ who keeps his daughter at school at
Athens, and when she wants a new piano, harries a village. As he
returns from his rides, or raids, the three maidens ask this feudal
tyrant what he has brought for them. The first, he knows, loves gold
and finery; he has killed a knight for her, and brought her the spoil.
But the dead knight was her lover; she strangles herself with the
stolen chain, and dies beside his body. Two maidens only welcomed
the father on his next return. The second, he knows, loves the chase;
so he brings her a hunting-lance with a gold band, having killed a
wild huntsman to obtain it. The wild huntsman was her lover, and
she falls on the lance and dies beside him. One maiden only greets
him the next time. Flowers are her passion; so he brings her flowers,
having slain the bold gardener to obtain them. She takes the flowers
and seeks the body of the dead gardener, who was also her lover; but
flowers can inflict no wounds, so she stays beside him till the flowers