Reactive Programming with Scala, Lagom, Spark, Akka and Play

Reactive programming is gaining momentum
“We believe that a coherent approach to systems architecture is needed, and we believe that all necessary aspects are already recognized individually: we want systems that are Responsive, Resilient, Elastic and Message Driven. We call these Reactive Systems.” – The Reactive Manifesto

Why should anyone adopt reactive programming? Because it allows you to make code more concise and focus on important aspects such as the interdependence of events which describe the business logic. Reactive programming means different things to different people and we are not trying to reinvent the wheel or define this concept. Instead we are allowing our authors to prove how Scala, Lagom, Spark, Akka and Play co-exist and work together to create a reactive universe.

If the definition “stream of events” does not satisfy your thirst for knowledge, get ready to find out what reactive programming means to our experts in Scala, Lagom, Spark, Akka and Play. Plus, we talked to Scala creator Martin Odersky about the impending Scala 2.12, the current state of this programming language and the technical innovations that await us.

Thirsty for more? Open the magazine and see what we have prepared for you.

Gabriela Motroc, Editor
Manuel Bernhardt

Compile-time dependency injection in the Play framework

“Checklist: Why are microservices important for you? – Part 3” – Interview with Daniel Bryant
James Strachan, the creator of Groovy, once said that if somebody had shown him Martin Odersky’s “Programming in Scala” book back in 2003, he would have probably never created Groovy. Scala’s star is still shining bright – we invited Martin Odersky, the creator of Scala, to talk about the impending 2.12 version, the current state of this programming language and the technical innovations that await us.

Martin Odersky designed Scala in 2001. Now version 2.12 is in development and version 3.0 is just around the corner. Even James Gosling, the creator of Java, once said that if he were to choose “a language to use today other than Java, it would be Scala.” We talked to the creator of Scala about the current state of this programming language, what to expect from version 2.12 and more.

JAX Magazine: It’s been 15 years since the work on Scala began at the École polytechnique fédérale de Lausanne (EPFL). If we look back at how Scala looked more than a decade ago, one cannot help but wonder: What is the difference between the original concept and design of Scala and the current state of this programming language?
Martin Odersky: Scala was designed to show that a fusion of functional and object-oriented programming is possible and practical. That’s still its primary role. What has changed is that at the time it came out FP was regarded as an academic niche. So it was difficult to make the case that FP should be needed in a mainstream language. XML literals were added to the language as a specific use case, because we knew that traditional OO techniques had a hard time dealing with XML trees. Nowadays the tide has turned. Functional programming has become respectable, in some areas even mainstream, and it’s sometimes harder to make the case for good object-oriented design.
In terms of language features I believe the biggest development has been the refinements we did to implicit parameter search. Implicits evolved from a somewhat ad-hoc feature into the cornerstone of most Scala libraries. I believe they will become even more important in the future.
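The “cornerstone” role of implicits refers above all to the type-class pattern. As a minimal sketch (all names here are made up for illustration), a library can ask the compiler to supply behavior for a type via an implicit parameter:

```scala
// A made-up type class: how to render a value as text.
trait Show[A] {
  def show(a: A): String
}

object Show {
  // Instances are provided as implicit values ...
  implicit val intShow: Show[Int] = new Show[Int] {
    def show(a: Int): String = a.toString
  }
  implicit def listShow[A](implicit ev: Show[A]): Show[List[A]] =
    new Show[List[A]] {
      def show(as: List[A]): String = as.map(ev.show).mkString("[", ", ", "]")
    }
}

object ShowDemo extends App {
  // ... and the compiler resolves them at the call site.
  def print[A](a: A)(implicit s: Show[A]): Unit = println(s.show(a))

  print(42)            // 42
  print(List(1, 2, 3)) // [1, 2, 3]
}
```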
JAXmag: In your opinion, what are the most important technical milestones for this programming language?
Odersky: The most important step was no doubt Scala 2.8, which came out in 2010. With 2.8 we had for the first time a uniform collections library, refined rules for implicit resolution, and type inference. After that, Scala 2.10 was also a big step because it introduced meta-programming in Scala.

JAXmag: Were there also milestones in relation to the dissemination of the language? The community around Scala was formed rather fast and important projects and companies adopted it quickly. What contributed to the expansion of this language?
Odersky: Adoption has by and large steadily increased for the last eight years, but there were nevertheless a few drivers which led to rapid adoption in certain areas. The first leap was adoption of Scala by Twitter and other new web companies, starting in 2008. The second leap was widespread adoption of reactive programming, notably around Akka and Play, starting around 2011. The third leap was the success of Scala in big and fast data scenarios as well as data science, driven by Spark, Kafka, and many other frameworks.

Portrait
Martin Odersky created the Scala programming language and is a professor in the programming research group at EPFL, the leading technical university in Switzerland. He authored “Programming in Scala”, the best-selling book on Scala. Previously he has held positions at IBM Research, Yale University, University of Karlsruhe and University of South Australia, after having obtained his doctorate from ETH Zürich as a student of Niklaus Wirth, the creator of Pascal.
JAXmag: An entire stack was formed around Scala; it consists of Akka, Play, Lagom, Apache Spark and others. What role do this stack – and the so-called reactive programming paradigm – play in the Scala community and especially in Lightbend?
Odersky: Developing and supporting this technology stack is at the core of what Lightbend does. The stack covers everything from web frameworks to big data backends, with special emphasis on reactive programming and microservices in distributed scenarios. That’s where these technologies are very successful. A large part of the Scala community uses this stack, but there are of course also many other frameworks to choose from.

JAXmag: Right now Scala 2.12 is in development. What is this release all about?
Odersky: The main purpose of 2.12 is to optimize Scala for use on Java 8. Java 8 introduced lambdas and default methods for interfaces. Both are very useful to streamline Scala code generation and make the code size smaller.

JAXmag: Scala 3.0 is fast approaching. Can you offer us some insight into the technical innovations that await us?
Odersky: It’s too early to talk about that yet. The next major version will be Scala 2.13, which will concentrate on modernizing and modularizing some of the core libraries.

JAXmag: Will Scala 3.0 differ greatly from the 2.x line of development, as is often the case with major versions, or will it be a natural evolution of 2.x?
Odersky: Scala has been quite stable over the last five years. We hope that by the time Scala 3.0 comes out we will have the necessary technologies in place to make some bigger changes without too much disruption. For one, we are working on sophisticated rewrite tools that allow code to evolve to new standards. We are also planning to use TASTY, a platform-independent interchange format, to avoid binary compatibility problems through automatic code adaption to specific platforms and versions.

JAXmag: Is there already a release roadmap for Scala 3.0?
Odersky: No, not yet. We want to keep some flexibility in deciding what to ship before we determine when to ship it.

JAXmag: Scala was explicitly designed as a language which should be adopted in research, but should also have an industrial use. Is the gap between industry and research really that big? There has been criticism with regard to the fact that a developer/architect’s job has little to do with what is taught in universities.
Odersky: I can’t speak for other universities. But I find that our students at EPFL have no problems finding interesting work in which they can apply what they have learned during their studies. Scala certainly helps since it has a healthy job market.

JAXmag: What area of research is still insufficiently taken into consideration in today’s mainstream languages?
Odersky: Functional programming is just catching up in industry. I think one of the next big developments will be the refinement of type systems to describe ever more precise properties of programs. This has been researched for a while, and more progress is needed to make this really practical. But I predict that this development will pick up speed in the future.
Event-driven architecture
Reactive microservices
with Scala and Akka
In this article, Vaughn Vernon will teach you a first-approach method to designing microservices, giving
you a workable foundation to build on. He introduces you to reactive software development, and sum-
marizes how you can use Scala and Akka as a go-to toolkit for developing reactive microservices.
by Vaughn Vernon

Do you ever get the impression that our industry is one of extremes? Sometimes I get that impression. Many seem to be polarized in their opinions about various aspects of how software should be written or how it should work, whereas I think we could find a better balance based on the problem space that we face.

One example that I am thinking of is the common opinion that data must be persisted using ACID transactions. Don’t get me wrong. It’s not that I think that ACID transactions are not useful. They are useful, but they are often overused. Should we find a better balance here? Consider the case when data that is spread across different subsystems must reflect some kind of harmony. Some hold to the strong opinion that all such dependent data must be transactionally consistent; that is, up-to-date across all subsystems all at once (Figure 1).

What about you? Do you agree with that? Do some of your software systems use global transactions in this way to bring data on multiple subsystems into harmony all at once? If so, why? Is it the business stakeholders who have insisted on this approach? Or was it a university professor who lectured you on why global transactional consistency is necessary? Or was it a database vendor who sold you
on this service-level agreement? The most impor-
tant opinion of these three is that of the business. It
should be real business drivers that determine when
data must be transactionally consistent. So, when
was the last time you asked the business about its
data’s consistency requirements, and what did the
business say?
If we all took a look back to the time before most
businesses were run by computers, we would find a
very different picture of data consistency. Back then
almost nothing was consistent. It could take hours
or even days for some business transactions to be
carried out to completion. Paper-based systems re-
quired physically carrying forms from one person’s
desk or work area to another. In those days even
a reasonable degree of data consistency was com-
pletely impossible. What is more, it didn’t result in
problems. You know what? If today you got an au-
thentic business answer to the question of necessary
data consistency, you would very likely learn that
a lot of data doesn’t require up-to-the-millisecond
consistency. And yet, extreme viewpoints in our industry will still push many developers to strive for transactional consistency everywhere.

Figure 1: Global transaction
What I find is that a Bounded Context that is truly constrained by an actual business-driven Ubiquitous Language is quite small. It’s typically bigger than one entity (although it may be just one), but also much, much smaller than a monolith. It’s impossible to state an exact number, because the unique business domain drives the full scope of the Ubiquitous Language. But to throw out a number, many Bounded Contexts could be between 5 and 10 entity types total. Possibly 20 entity types is a large Bounded Context. So, a Bounded Context is small, yes “micro” in size, especially when compared with a monolith. Now, if you think about dividing a monolith system into a number of Bounded Contexts, the deployment topology isn’t so extreme.

At a minimum I think that this is a good place to start if your team is determined to break up a legacy monolith into a number of microservices; that is, use Bounded Context and Ubiquitous Language as your first step into microservices and you will go a long way away from the monolith.

Reactive software is ...
Reactive software is defined as having these four key characteristics:

• Responsive
• Resilient
• Elastic
• Message driven

A responsive system is one that responds to user requests and background integrations in an impressive way. Using the Lightbend platform, you and your users will be impressed by the responsiveness that can be achieved. Often a well-designed microservice can handle write-based single requests in 20 milliseconds or less, and even roughly half that time is not uncommon.

Systems based on the Actor model using Akka can be designed with incredible resilience. Using supervisor hierarchies means that the parental chain of components is responsible for detecting and correcting failures, leaving clients to be concerned only about what service they require. Unlike code that is written in Java that throws exceptions, clients of actor-based services never have to concern themselves with dealing with failures from the actor from which they are requesting service. Instead clients only have to understand the request-response contract that they have with a given service, and possibly retry requests if no response is given in some time frame.
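To make the supervision idea concrete, here is a minimal sketch in classic (untyped) Akka; the actor names and the strategy choices are illustrative only, not taken from the article:

```scala
import akka.actor.{Actor, OneForOneStrategy, Props, SupervisorStrategy}
import akka.actor.SupervisorStrategy.{Restart, Stop}
import scala.concurrent.duration._

class Worker extends Actor {
  def receive = {
    case work: String => sender() ! s"done: $work"
  }
}

class Parent extends Actor {
  // The parent, not the client, decides what happens when a child fails.
  override val supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 3, withinTimeRange = 1.minute) {
      case _: IllegalStateException => Restart // recreate the failed child
      case _: Exception             => Stop    // give up on this child
    }

  private val worker = context.actorOf(Props[Worker], "worker")

  def receive = {
    case msg => worker forward msg // clients never see the failure handling
  }
}
```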
An elastic microservices platform is one that can scale up and down and out and in as demands require. One example is an Akka cluster that scaled to 2,400 nodes without degradation. Yet, elastic also means that when you don’t need 2,400 nodes, only what you currently need is allocated. You will probably find that when running Akka and other components of the Lightbend reactive platform, you will become accustomed to using far fewer servers than you would with other platforms (e.g. JEE). This is because Akka’s concurrency capabilities enable your microservices to make non-blocking use of each server’s full computing resources at all times.
The Actor model with Akka is message driven to the core. To request a service of an actor, you send it a message that is delivered to it asynchronously. To respond to a request from a client, the service actor sends a message to the client, which again is delivered asynchronously. With the Lightbend platform, even the web components operate in an asynchronous way. When saving actor persistent state, the persistence mechanism is generally designed as an asynchronous component, meaning that even database interactions are either completely asynchronous or block only minimally. In fact, the point is that the actor that has requested persistence will not cause its thread to block while waiting for the database to do its thing. Asynchronous messaging makes all of this possible.

Reactive components
At each logical layer of the architecture, expect to find the following Lightbend platform components and microservice components that your team employs or develops (Figure 6). For example, an Akka-based persistent actor looks like this (Listing 1).

Listing 1

class Product(productId: String) extends PersistentActor {
  override def persistenceId = productId

  var state: Option[ProductState] = None

  override def receiveCommand: Receive = {
    case command: CreateProduct =>
      val event = ProductCreated(productId, command.name, command.description)
      persist(event) { persistedEvent =>
        updateWith(persistedEvent)
        sender ! CreateProductResult(
          productId,
          command.name,
          command.description,
          command.requestDiscussion)
      }
    ...
  }

  override def receiveRecover: Receive = {
    case event: ProductCreated => updateWith(event)
    ...
  }

  def updateWith(event: ProductCreated) {
    state = Some(ProductState(event.name, event.description, false, None))
  }
}

This is a component based on event sourcing. The Product entity receives commands and emits events. The events are used to constitute the state of the entity, both as the commands are processed and when the entity is stopped and removed from memory and then recovered from its past events.

Now with those events we can derive other benefits. First we can project the events into the views that users need to see, by creating use-case-optimized queries. Secondly, the events can be published to the broader range of microservices that must react to and integrate with the event-originating microservice.
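As an illustration of the first benefit, a read-side projector could consume those events and maintain a query-optimized view. The sketch below compiles against the ProductCreated event from Listing 1 (assuming it is a case class); the view type and the actor itself are hypothetical:

```scala
import akka.actor.Actor
import scala.collection.mutable

// A hypothetical row in a use-case-optimized view (e.g. a product catalog page).
final case class ProductView(productId: String, name: String, description: String)

case object GetCatalog

// Consumes ProductCreated events (as emitted in Listing 1) and keeps a view
// that queries can read without ever touching the write side.
class ProductCatalogProjection extends Actor {
  private val catalog = mutable.Map.empty[String, ProductView]

  def receive = {
    case ProductCreated(id, name, description) =>
      catalog(id) = ProductView(id, name, description)
    case GetCatalog =>
      sender() ! catalog.values.toVector
  }
}
```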
What I am describing here is an event-driven architecture, one that is entirely reactive. The actors within each microservice make each one reactive, and the microservices that consume events from the others are also reactive. This is where eventual consistency in separate transactions comes into play. When other microservices see the events, they create and/or modify the state that they own in their context, making the whole system agree in time.

I have written three books on developing these kinds of microservices based on Domain-Driven Design, and I also teach two workshops on these topics. I encourage you to read my books and contact me for more information on how you can put this practical microservices architecture to work in your enterprise.

Vaughn Vernon is a veteran software craftsman, with more than 30 years of experience in software design, development, and architecture. He is a thought leader in simplifying software design and implementation using innovative methods. Vaughn is the author of the books “Implementing Domain-Driven Design”, “Reactive Messaging Patterns with the Actor Model”, and “Domain-Driven Design Distilled”, all published by Addison-Wesley. Vaughn speaks and presents at conferences internationally, he consults, and has taught his “IDDD” Workshop and “Go Reactive with Akka” workshop around the globe to hundreds of developers. He is also the founder of “For Comprehension”, a training and consulting company, found at http://ForComprehension.com. You may follow him on Twitter: @VaughnVernon.
by Dr. Roland Kuhn

Before we begin, a personal note: On March 7 this year, Patrik Nordwall took over the Akka Tech Lead duties because I co-founded Actyx, a startup that helps small and mid-sized manufacturing companies to reap the benefits of digital technologies – sometimes classified as Industry 4.0. I remain very interested in Akka and I continue to develop some parts as we’ll see in this article, but I no longer speak for Lightbend.

The biggest development within Akka since 2.3.0 (March 2014) has for sure been the Reactive Streams [1] initiative and Akka Streams. The former is now slated for inclusion in Java 9 under the name Flow [2] and it forms the basis for the inner workings of the latter: Akka Streams is fully Reactive Streams-compliant. Streaming data pipelines are becoming ever more common and satisfy a rising demand for properly rate-controlled processing facilities – gone are the days of OutOfMemoryErrors resulting from one overwhelmed actor’s mailbox.
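A tiny Akka Streams pipeline shows the model: each stage only receives data as fast as downstream demands it (backpressure), which is exactly what makes the mailbox-overflow scenario go away. A minimal sketch using the era-appropriate API:

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}

object StreamsDemo extends App {
  implicit val system = ActorSystem("streams-demo")
  implicit val materializer = ActorMaterializer()
  import system.dispatcher

  // A source of a million numbers, a transformation, and a rate-controlled sink;
  // no stage buffers more than it has been asked to produce.
  Source(1 to 1000000)
    .map(_ * 2)
    .runWith(Sink.fold(0L)(_ + _))
    .foreach { sum =>
      println(s"sum = $sum")
      system.terminate()
    }
}
```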
Streams were first developed on a very long-lived feature branch and versioned separately. Their experimental phase ended when they were released as part of Akka 2.4.2 in February 2016. In parallel, Spray – the lean and expressive HTTP library – has been rewritten on top of Akka Streams, a new foundation for reliable and efficient processing of large payloads. The resulting Akka HTTP modules present one of the most modern HTTP stacks available today, which is perfectly positioned for implementing the HTTP/2 standard that is swiftly gaining popularity: HTTP/2 connections are optimized for streaming several independent requests and responses at the same time. Konrad Malawski (maintainer of the Akka HTTP modules) and Johannes Rudolph (co-author of Spray and now member of the Akka core team at Lightbend) have started a proof-of-concept implementation for HTTP/2 support in September and I’m excited to see the results.

The 2.4.9 release has brought its performance up to the level of Spray or even beyond, and the Java and Scala APIs are merely being polished a bit more to straighten out a few remaining wrinkles. The surface area covered by an HTTP implementation as complete and expressive as this one is enormous: the user-facing API has more than 26,000 lines of Scala code, defining all those media types, encodings and other headers as well as the directives for routing requests and formulating responses. In order to allow more people to contribute efficiently – and in order to allow the HTTP modules to evolve at a faster pace than the very stable Akka core – the high-level HTTP modules are moving out of the core Akka repository into a separate new home under the Akka organization on github. For more details see the 2.4.9 release notes and the discussion on the akka-meta repository [3].

Apropos contributions: since the addition of Streams & HTTP, we have seen a marked increase in the breadth of our contributor base. Every minor release contains the work of circa 40 community members. Contributing has never been easier, not only because the high-level features of Streams & HTTP are more accessible, I think it is also due to the use of Gitter for nearly all team communication.

There is also vivid exchange with authors of integration libraries that connect Akka Streams to data sources and sinks like message brokers, files, databases, etc. In order to foster this growing ecosystem, an umbrella project named Alpakka [4] has been created, similar in spirit but of course currently a lot less powerful than Apache Camel. Everyone is invited to think up and implement new integrations – this is your chance to give back to the community. These are currently best targeted at the Akka Stream Contrib repository, but the team is thinking about opening a special Alpakka dedicated repository as well.

Another project that will have deep impact on how you use Akka is going on under the hood and driven by the core team – its codename is Artery. This new remote messaging stack is based on the Aeron [5] library and on Akka Streams and delivers vastly improved throughput and latency for sending messages across the network – just be sure to not use Java Serialization, which is just too slow to keep up. The goal of this rewrite of the basis for clustering is not only improved performance, the main impetus behind it is to use the expressive power and safety of Streams to simplify the implementation and eventually provide verified cross-version stability and compatibility. The current status (as of Sep 19, 2016) is that milestone 4 is available for you to try out and give feedback on [6]. Within the next few months Artery will replace the old remoting implementation and become the default.

The last project to talk about is the one that I am deeply involved in. Akka Typed has been included since version 2.4.0 as a sketch of how to achieve type-safety in actor interactions
in a fashion that is simple to understand, without compiler magic, and inherently built for great and easy test coverage. In a nutshell, it consists of a generic ActorRef that only accepts a certain well-specified type of messages, plus a generic Behavior (i.e. actor implementation) that only expects inputs of a given type. It is the third attempt in this direction within Akka and I firmly believe that this is the one that works. Recently, also due to the intention to use this at Actyx, I have created an entirely new implementation of the actor mechanics for the Akka Typed package, making it available alongside the adapter that offers the new features as an emulation layer on top of the untyped akka-actor package.
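The core idea can be sketched in a few lines. The following is a deliberately simplified toy model of a typed ActorRef and Behavior, not the real Akka Typed API, just to show what the type parameter buys you:

```scala
// A toy model of the idea behind Akka Typed – not the actual Akka API.
sealed trait Greeting
final case class Hello(name: String) extends Greeting

// A behavior is a function from a typed message to the next behavior.
final case class Behavior[T](receive: T => Behavior[T])

// A reference that only accepts messages of type T.
final class TypedRef[T](private var behavior: Behavior[T]) {
  def !(msg: T): Unit = behavior = behavior.receive(msg)
}

object TypedDemo extends App {
  def greeter: Behavior[Greeting] = Behavior {
    case Hello(name) =>
      println(s"Hello, $name!")
      greeter // stay with the same behavior
  }

  val ref = new TypedRef[Greeting](greeter)
  ref ! Hello("Akka Typed") // ok
  // ref ! "untyped string"  // does not compile – the compiler protects us
}
```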
Based on the experience collected over the past six years from developing Akka, we decided to remove features that greatly complicate the internal code while not delivering sufficient benefit to all users. The full list has been discussed on akka-meta [7]; highlights are the configurability of mailboxes and remote deployment. Together with the relocation of the execution of supervisor strategies into the child actors themselves, this has led to a huge reduction in code size, while the user-facing feature set has only been trimmed slightly.

Special focus has been devoted to pervasive testability, both of the implementation and of user-level code. The main difference to untyped actors in this regard is that behavior and execution are fully decoupled now; you can create your actor behaviors directly within tests and exercise them synchronously without using a special TestActorRef or CallingThreadDispatcher. This allows convenient validation of internal logic as well as deterministic stimulation with external inputs. A mock ActorContext can be used to check that child actors are created correctly; all other effects like watching/unwatching are accessible as well.

The new implementation is quite a bit faster than the old one, reducing actor messaging overhead by 20–30 percent even though the mailbox implementation is not yet optimized. Once the full potential has been realized, it will be possible to achieve allocation-free actor messaging. The current breadth of the implementation covers only local actors without persistence; the plan is to add a custom Materializer for Akka Streams and reuse Artery in order to have a new basis for the existing clustering code.

One aspect that I find extremely exciting about Akka Typed is that it opens up actor interactions to even more static verification than is afforded by a type-selective ActorRef. Current research within the ABCD group [8] has the potential to enable an actor-based implementation of a verified communication protocol to be checked for conformance to the specification by the Scala compiler (here it might be that Java’s type system is not expressive enough to achieve the full feature set). This would mean that many bugs would be caught without even running a single test; the compiler will tell you that you forgot to send a certain message or that you expected to receive a message that will never arrive.

Enabling the compiler to see these actions means representing them in the type system and lifting the description of what an actor does into a sequence of actions that can be inspected. This is a powerful tool even without the protocol verification; it allows actor behaviors to be composed in ways that were very cumbersome or impossible before. Think about an actor that speaks some protocol with its clients, in the simplest case request–response. In order to compute the response the actor might have to converse with a handful of other actors in a multi-message exchange. Previously you would implement that by manually stashing all unexpected messages and switching between different behaviors using context.become(). With the upcoming Akka Typed DSL [9] it becomes possible to write down a sequence of send and receive actions using a Process abstraction, where processes can be composed to run in sequence or in parallel within the same actor. This greatly simplifies the code structure because it disentangles the different aspects of an actor’s behavior and sorts them into cohesive strands that each fulfill their separate purposes. All this is currently still in the research phase, so feedback is very much welcome and needed to get it ready soon.

Oh, and one last thing: at Actyx we are currently working on an improved version of distributed data structures (for the curious: an implementation of δCRDTs) that can be used independently of Akka Cluster. If that turns out to be successful, we will open-source this as a cousin to Akka Distributed Data.

Dr. Roland Kuhn is CTO and co-founder of Actyx, author of Reactive Design Patterns, a co-author of the Reactive Manifesto, co-teacher of the Coursera course “Principles of Reactive Programming”, and a passionate open-source hakker. Previously he led the Akka project at Lightbend.

References
[1] http://www.reactive-streams.org/
[2] http://download.java.net/java/jdk9/docs/api/java/util/concurrent/Flow.html
[3] https://github.com/akka/akka-meta/issues/27
[4] http://blog.akka.io/integrations/2016/08/23/intro-alpakka
[5] https://github.com/real-logic/Aeron
[6] https://groups.google.com/forum/#!topic/akka-user/V0DH_e8w7M0
[7] https://github.com/akka/akka-meta/issues/18
[8] http://groups.inf.ed.ac.uk/abcd/
[9] See my presentation at scala.world, slides at http://www.slideshare.net/rolandkuhn/distributed-systems-vs-compositionality
…collect locally undelivered messages, the so-called “dead letters”, providing a means to inspect why certain messages do not make it to their recipient, at least locally. However, this mechanism does not work across network boundaries, where the use of acknowledgements is required to guarantee at-least-once delivery semantics. In order to build distributed applications, Akka offers some very useful extensions.

Akka persistence, Akka cluster and Akka HTTP
Akka Persistence allows actors to recover their state after a crash. Persistent actors have a journal that allows them to replay events after a crash; they can also make use of snapshots to speed up the recovery. Journal entries and snapshots can be stored in a variety of backends such as Cassandra, Kafka, Redis, Couchbase and many more.

Akka Cluster lets an actor system run on several nodes and handles basic concerns such as node lifecycle and location-independent message routing. In combination with Akka Persistence, it provides at-least-once delivery semantics for messages sent across the wire. It uses a lightweight gossip protocol for detecting when nodes are failing. Lightbend’s commercial offering also adds the Split Brain Resolver (SBR) extension that allows the cluster to react correctly in the face of network partitions, where it may not be trivial to decide which nodes should be removed and which ones should survive.

Akka HTTP offers client and server capabilities for HTTP and WebSockets, including DSLs for describing URI-based routing.
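For flavor, here is a minimal Akka HTTP server using that routing DSL; it is a generic sketch rather than an example from the article:

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._
import akka.stream.ActorMaterializer

object HelloHttp extends App {
  implicit val system = ActorSystem("http-demo")
  implicit val materializer = ActorMaterializer()

  // URI-based routing: GET /hello/<name> answers with a greeting.
  val route =
    pathPrefix("hello" / Segment) { name =>
      get {
        complete(s"Hello, $name!")
      }
    }

  Http().bindAndHandle(route, "localhost", 8080)
}
```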
…experiment first with APIs and implementations (Akka Streams, for example, has seen as many as six complete rewrites over the course of three years before it was deemed good enough). This is also why, when working with Akka, you should always be mindful of extensions tagged as experimental in the documentation: there is a real chance that the APIs will change significantly over time, which is not necessarily a bad thing in itself but something to be aware of nonetheless.

Last but not least, Akka has a very active community and excellent documentation – so good, in fact, that it is rather difficult to do better when writing a book about it. I can only recommend downloading the PDF and reading the documentation as a whole when getting started with the project to get a sense of what pieces are already provided by the toolkit and which concepts to be aware of. Happy hAkking!
Compile-time dependency injection in the Play framework
Play introduced Dependency Injection (DI) in version 2.4 to reduce global state, and make it easier to
write isolated, reusable, and testable code by instantiating components on each reload and by provid-
ing stop hooks. In this article, Marius Soutier explains what compile-time dependency injection in the
Play framework is all about.
class AppLoader extends ApplicationLoader {
  override def load(context: Context): Application =
    new AppComponents(context).application
}

class AppComponents(context: Context) extends BuiltInComponentsFromContext(context) with NingWSComponents {
  // NingWSComponents has a lazy val wsClient
  lazy val applicationController = new controllers.Application(wsClient)
  lazy val assets = new controllers.Assets(httpErrorHandler)
  // ...
}

While we now have more boilerplate to write, this way of building our application has several advantages:

• Dependencies on components are more explicit
• We avoid using the current Application
• We can easily switch to a mocked WS implementation when writing tests
• When we refer to a new controller in the routes file, the compiler will tell us that Routes is missing a dependency
Plugins
Play plugins are now deprecated and replaced by DI components, which means the entire plugin system is gone, along with its configuration file and priorities.

You just provide a component which ideally should be compatible with both runtime and compile-time DI. For this, you typically write a generic API trait and implement it using a Module or a JSR-330 Provider class, and a components trait for compile-time DI.

A basic example to get started is the ActorSystemProvider in Play itself, which is also used in compile-time DI via AkkaComponents.

Listing 2

class FakeApplicationComponents(context: Context) extends BuiltInComponentsFromContext(context) {
  // You can use Mockito or https://github.com/leanovate/play-mockws
  val mockWsClient = ...
  lazy val applicationController = new controllers.Application(mockWsClient)
  lazy val assets = new controllers.Assets(httpErrorHandler)

  override def router: Router = new Routes(httpErrorHandler, applicationController)
}
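The components-trait side of that pattern might look as follows; the Counter API and its wiring are hypothetical, only the shape follows the text above:

```scala
// The generic API trait, usable from both runtime and compile-time DI.
trait Counter {
  def next(): Int
}

class AtomicCounter extends Counter {
  private val counter = new java.util.concurrent.atomic.AtomicInteger()
  def next(): Int = counter.incrementAndGet()
}

// The components trait for compile-time DI: mix it into AppComponents
// and the dependency is available as a field, checked by the compiler.
trait CounterComponents {
  lazy val counter: Counter = new AtomicCounter
}
```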
Conclusion
By using compile-time dependency injection we gain more control over how our application is assembled, making it more testable. Writing isolated and testable components is now straightforward and no longer requires an elaborate plugin system. Plus, we don't have to worry about referring to an application too early.

You can find a full example in my PlayBasics repository under https://github.com/mariussoutier/PlayBasics/tree/master/DependencyInjection.

Listing 3

import org.scalatestplus.play.{OneAppPerSuite, PlaySpec}

class ApplicationTest extends PlaySpec with OneAppPerSuite {
  override implicit lazy val app: api.Application = {
    val appLoader = new FakeAppLoader
    val context = ApplicationLoader.createContext(
      new Environment(new File("."), ApplicationLoader.getClass.getClassLoader, Mode.Test)
    )
    appLoader.load(context)
  }
}

Marius Soutier is an independent data engineer. He consults companies on how to best design, build and deploy reactive web applications and realtime big data systems using Scala, Playframework, Kafka, and Spark. @mariussoutier – www.mariussoutier.com/blog
by Lutz Hühnken

The question regarding the meaning of the name is not easy to answer since one cannot literally translate the Swedish idiom Lagom. According to Wikipedia, the meaning is: “Enough, sufficient, adequate, just right.” In our case, this is not supposed to be self-praise but a critical statement on the concept of microservices. Instead of focusing on “micro” and stubbornly following a “the less code, the better” concept, Lagom suggests that we think of the concept of “Bounded Context” from Domain-Driven Design to find the boundaries for a service. The conceptual proximity of Domain-Driven Design and microservices can be found in different locations in the Lagom framework.

Getting started with Lagom
The easiest way to develop an application with Lagom is with the help of a Maven project template:

$ mvn archetype:generate -DarchetypeGroupId=com.lightbend.lagom \
  -DarchetypeArtifactId=maven-archetype-lagom-java \
  -DarchetypeVersion=1.1.0

After the questions regarding names have been answered and you switch into the newly-created directory, you will find the directory structure as displayed in Listing 1.

Listing 1

cassandra-config
hello-api
hello-impl
integration-tests
pom.xml
stream-api
stream-impl

As it should be for microservices, not one, but already two services were generated. After all, the interaction and communication between services are at least as important as the implementation of a single one (and frequently the bigger challenge). Here are the services “hello” and “stream”; each implementation is divided into two subprojects (“api” and “impl”). To launch the application, a simple mvn lagom:runAll is enough. After a few downloads, the application should be running at port 9000. This can be easily checked with a command line tool like HTTPie (Listing 2).

Listing 2

$ http localhost:9000/api/hello/Lagom
HTTP/1.1 200 OK
Content-Type: text/plain

Hello, Lagom!

One particularity is that all components needed in the development – the project’s services, a service registry, an API gateway, and even the database Cassandra (in the embedded version) – are launched through the Maven plug-in. It is not necessary to set up services or a database outside of the project. Lagom stresses the importance of offering the developer an environment which feels interactive – check out the project and get going. This includes the fact that code changes will come into effect right after a reload, without the need for a build/deploy/restart cycle.

The services API — typesafe and asynchronous
As can be seen from the folder structure, every service is divided into an implementation (“-impl”) and an API definition (“-api”). The latter defines the HTTP interface of the service programmatically, as shown in Listing 3. With the help of a builder, the service description will be created, in which the requested path will be mapped on a method call.

Listing 3

public interface HelloService extends Service {

  ServiceCall<NotUsed, String> hello(String id);

  default Descriptor descriptor() {
    return named("hello").withCalls(
      pathCall("/api/hello/:id", this::hello)
    );
  }
}

This interface is not only the template for the implementation; Lagom also generates an appropriate client library. In other Lagom services, this can be injected via dependency injection with Google’s Guice. This way, a type-safe interface is provided when the respective service is selected. The manual construction of an HTTP request and the direct use of a generic HTTP client can be omitted. Still, it is not mandatory to use the client library, because the framework maps
the method calls on HTTP calls, which may also be called directly, especially by non-Lagom services.

By the way, our little “hello” method doesn’t deliver the response directly, but a ServiceCall. This is a functional interface. That is to say, we do not create a simple object but a function – the function which shall be executed by the corresponding request. We deliver the types as type parameters for the request (since our GET call doesn’t submit any data, in this case “NotUsed”) and the response (in our case a simple String). The processing of the request is always asynchronous – the outcome of our function must be a CompletionStage. Lagom extensively uses Java 8 features. A simple implementation would look like this (Listing 4).

Listing 4

public class HelloServiceImpl implements HelloService {
  @Override
  public ServiceCall<NotUsed, String> hello(String id) {
    return request -> CompletableFuture.completedFuture("Hello, " + id);
  }
}

For a simple GET request, the gain of the service descriptors is limited. It gets more interesting when we want to send events between services asynchronously. We can achieve this in Lagom by choosing different type parameters for the ServiceCall. If our request and response types are defined as Source (a type from the Akka Streams library), as shown in Listing 5, the framework will initialize a WebSocket link. Here the service abstraction can score since it simplifies working with the WebSockets. As far as future versions are concerned, there are plans to support the additional “publish/subscribe” pattern so that messages can be placed on a bus and other services can subscribe to it.

Listing 5

public interface StreamService extends Service {

  ServiceCall<Source<String, NotUsed>, Source<String, NotUsed>> stream();

  @Override
  default Descriptor descriptor() {
    return named("stream").withCalls(namedCall("stream", this::stream));
  }
}

Circuit breaker built-in
Let us assume that our service requests information per HTTP request from another service. This doesn’t respond within the expected timeframe, which means there will be a timeout. Requests to this server shouldn’t be repeated constantly, because that would impose an unnecessary amount of idle time on our application: If it’s likely that we won’t be getting a response, why should we wait for a timeout? Furthermore, there would be requests accumulating at the service. As soon as it becomes available again, it will be bombarded with pending requests to such an extent that it will be brought to its knees immediately. A reliable solution for this problem is the circuit breaker pattern. A circuit breaker knows three states:

• As long as everything is running without errors, it is closed.
• If a defined limit of errors (timeouts, exceptions) is reached, it will be open for a defined period of time. Additional requests will fail with a “CircuitBreakerException”. For the client there won’t be additional waiting time and the external service won’t even notice the request.
• As soon as the set time period runs out, the circuit breaker will switch into the state “half open”. Now there will be one request passed through. If it is successful, the circuit breaker will be closed – the external system seems to be available again. If it fails, the next round with the state “open” begins.

Such circuit breakers are already integrated into the Lagom service client. The parameters are adjustable with the configuration file.
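Lagom’s breakers are configured rather than hand-built, but the three states are easy to see in Akka’s standalone akka.pattern.CircuitBreaker, shown here in Scala as an illustrative sketch (the service call is a stand-in):

```scala
import akka.actor.ActorSystem
import akka.pattern.CircuitBreaker
import scala.concurrent.Future
import scala.concurrent.duration._

object BreakerDemo extends App {
  implicit val system = ActorSystem("breaker-demo")
  import system.dispatcher

  val breaker = new CircuitBreaker(
    system.scheduler,
    maxFailures = 5,          // errors tolerated while "closed"
    callTimeout = 2.seconds,  // a slow call counts as a failure
    resetTimeout = 30.seconds // how long it stays "open" before "half open"
  )

  def callOtherService(): Future[String] = Future("response") // stand-in

  // While open, this fails immediately with a CircuitBreakerOpenException
  // instead of letting requests pile up against the struggling service.
  val guarded: Future[String] = breaker.withCircuitBreaker(callOtherService())
}
```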
Lagom persistence
One aspect which proves that Lagom is very different from other micro frameworks is the integration of a framework for Event Sourcing and CQRS. For many developers, working with a relational database is still the “default case”, possibly in connection with an ORM tool. Even this can be implemented in Lagom, but the user is steered into another direction. The standard in Lagom is the use of “Persistent Entities” (corresponding to “Aggregate Roots” in Domain-Driven Design). These Persistent Entities receive messages (commands). Listing 6 shows exactly how this is presented in the code.

Listing 6

public class HelloEntity extends PersistentEntity<HelloCommand, HelloEvent, HelloState> {

  @Override
  public Behavior initialBehavior(Optional<HelloState> snapshotState) {

    /*
     * The behavior defines how the entity reacts on commands.
     */
    BehaviorBuilder b = newBehaviorBuilder(
      snapshotState.orElse(new HelloState("Hello", LocalDateTime.now().toString())));

    /*
     * Command handler for UseGreetingMessage.
     */
    b.setCommandHandler(UseGreetingMessage.class,
      (cmd, ctx) -> ctx.thenPersist(new GreetingMessageChanged(cmd.message),
        evt -> ctx.reply(Done.getInstance())));

    /*
     * Event handler for GreetingMessageChanged.
     */
    b.setEventHandler(GreetingMessageChanged.class,
      evt -> new HelloState(evt.message, LocalDateTime.now().toString()));

    return b.build();
  }
}

Our quite simple entity allows us to change the welcome text for our service. We extend the superclass PersistentEntity, which expects three type parameters: the command type, the event type, and the type of the state. In our case we define the command as a class UseGreetingMessage, which implements the interface HelloCommand and whose instances are immutable. To save yourself some keystrokes, you can leverage a library such as Immutables for your commands, events and states.

The way our entity responds to commands is defined by a behavior. This can change at runtime. This way the entities can implement finite-state machines – the replacement of one behavior with another at runtime correlates with the transition of the machine into another state. The framework obtains the initial behavior via initialBehavior. To construct this, we will make use of the builder pattern.

First, we define a CommandHandler for our command. If a command is valid and demands the entity to be changed, for example, in case it sets an attribute to a new value, the change
won’t occur immediately. Instead, an event will be created, saved and emitted. The EventHandler of the persistent entity, which we also added with the builder to the behavior, reacts to the event and executes the actual change.

A significant difference to an “update” in a relational database is that the current state of the persistent entity does not necessarily have to be saved. It will be merely held in memory (Memory Image). In case it becomes necessary to restore the state, e.g. after a restart of the application, it will be reconstructed through a playback of the events. The optional saving of the current state is called a “Snapshot” in this model and does not replace the event history, but solely represents a “pre-processing”. If an entity experienced thousands of changes of state during its lifetime, there is no need to play back all the events from the very beginning. It is possible to shortcut by starting with the latest snapshot and replaying only the following events.

The strict specifications that Lagom gives for the types and the structure of the behavior are meant to ease the conversion to this principle, called Event Sourcing, for developers. The idea is that I am forced to specify a clear protocol for each entity: Which commands can be processed, which events can be triggered and which values define the state of my class?

Clustering included
The number of Persistent Entities that I can use is not limited by the main memory of a single server. Rather, every Lagom application can be used as a distributed application. During the start of an additional instance I only have to add the address of an already running instance; after that it will register there and form a cluster with the present instances. The Persistent Entities are administered by the framework and will be distributed automatically within the cluster (Cluster Sharding). If nodes are added to or removed from the cluster, the framework will redistribute the instances. Likewise, it can restore instances which were removed from the memory (Passivation).

By the way, the built-in feature to keep the application state in the memory this way and also to scale this hasn’t been developed for Lagom originally. For this, Lagom relies on Akka. Akka has definitely been used in mission-critical applications, therefore any concerns regarding the reliability of the young framework are not well-founded.

Separate writing and reading
While it is easy in SQL databases to request any information from the data model, it is impossible in the case of Event Sourcing. We can only access our entity and request the state with the primary key. Since we only have an Event Log and not a relational data model, queries through secondary indices are impossible to make.

To enable this, the CQRS architecture (Command Query Responsibility Segregation; for further reading: A CQRS Journey, https://msdn.microsoft.com/en-us/library/jj554200.aspx) is applied. The basic principle here is that different data models are used for reading and writing. In our case this means that our Event Log is the write side. It can be used to reconstruct our entities, but we won’t perform any queries on this. Instead, we also generate a read side from the events. Lagom is already offering a ReadSideProcessor. Every event which occurs in combination with a class of PersistentEntities will also be processed and used to create the read side. This is optimized for reading and doesn’t allow for direct writing.

This architectural approach does not only offer technical advantages, since in many application cases the read and write frequency are very different and they can be scaled independently with this method. It also enables some new possibilities. As a consequence of never deleting the saved events, it is possible to add new structures on the read side, the so-called projections. These can be filled with the historical events and thus can give information not only about the future but also about the past.

CQRS allows the use of different technologies on the read side, adjusted to the use case. It is conceivable, while not supported by Lagom yet, that one can build an SQL read side and continue the use of available tooling, but simultaneously feed an ElasticSearch database for quick search and send the events for analysis to Spark Streaming. It is important to keep in mind that the read side will be refreshed asynchronously, with latency (“Eventual Consistency” between the write and the read side). Strong consistency is only available in this model on the level of the PersistentEntity.

Finally, it is also possible to code Lagom without Lagom Persistence. It is not mandatory to use Event Sourcing; the development of “stateless” services, or “CRUD” applications (Create, Read, Update, Delete) with a SQL database in the
backend is also possible. But if someone is interested in Event Sourcing and CQRS, in scalable, distributed systems, Lagom can help them gain access to the topic.

Immutable values — Immutables
As mentioned earlier, the single commands, events and instances of the state must be immutable. Immutable data structures are an important concept from functional programming, especially in the area of concurrency. Let us assume a method gets passed a list of numbers. The result is a value that is calculated from the list (maybe the median of the numbers in the list). By reasoning about this, or maybe in some cases even through mathematical proof, you may state that a function is correct and will always deliver the same output for the same input.

But what if the delivered list is e.g. an ArrayList – how can we be sure? Fixed is only the reference that is delivered. But what if another part of the program that is executed in parallel has the same reference? And adds some values to the list? In asynchronous systems that are based on sending commands, it is essential that a command must not be changed after it has been sent. To rely on the fact that the developer will be careful would be negligent.

Lagom uses third-party libraries for this. For the commands it binds Immutables, for collections pCollections. If I add a value to a collection from this library, the original collection will remain unchanged and I will receive a new instance with an additional value.
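The same persistent-collection behavior can be seen in Scala’s standard immutable collections, which serve here only as an analogy for what pCollections offers on the Java side:

```scala
val original = List(1, 2, 3)

// "Adding" an element returns a new list; the original is untouched.
val extended = 0 :: original

println(original) // List(1, 2, 3)
println(extended) // List(0, 1, 2, 3)
```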
Deployment
Microservices provide a challenge not just for the developer but also for the ongoing operation. In many companies the deployment processes are still set up for the installation of .war or .ear files on application servers. But microservices are running standalone and are often packed into (Docker) containers and administered by so-called service orchestration tools like Kubernetes or Docker Swarm.

Lagom requires such an environment, too. But it does not depend on a certain container standard (like Docker). It requires the runtime environment to have a registry which is searchable by other services. To be accessible, it must make an implementation of the Lagom ServiceLocator API available.

Unfortunately, at the moment such an implementation is only available for the commercial closed-source product ConductR. The open source community is working on implementations for Kubernetes and Consul. Alternatively, a ServiceLocator based on static configuration can be used, but this is not recommended for production use.

Conclusion
Lagom follows an interesting path and is a remarkable framework. It’s fundamentally different in its technical base: Everything is asynchronous, it is based on sending commands and persisting is done per Event Sourcing. This brings tremendous advantages for the scalability of services – but for most developers (including everybody from the Java EE area), this means rethinking. With the change of a programming language always comes the fear of a temporary decrease in productivity because developers cannot revert to familiar practices and resources. It is the same in our case.

Lagom is trying to prevent this by giving the developer a clear path. If I follow the documentation’s textbook approach for service implementation and persistence in Lagom, I will be able to build a reactive system – completely based on messaging and able to cluster, maybe even without realizing it.

In the relatively new area of microservices, standards are yet to be established. We will have to see which frameworks can stand the test of time. In contrast with old acquaintances from Java EE and Spring, Lagom instills new life into this area and is putting a whole different architecture in the balance. Those who wish to try something new and are interested in scalable distributed systems will find Lagom helpful.

Lutz Hühnken is Solutions Architect at Lightbend. He is an experienced software architect, project manager and team leader, with a history of successful projects in systems integration, internet/e-commerce and server-side application development in general.
But even Spark has its limits. Tools for data delivery and persistency are still necessary. This is where we can resort to the experience of recent years.

• Scalability – to deal with millions of data sets
• Fast enough to provide answers in Near Time
• Suitable to implement analyses of any duration
• A unified, comprehensible programming model to handle various data sources

• Raw data persistency: A job that writes incoming raw data to S3, HDFS or Ceph, and prepares it for later processing.
• Speed Layer: Implementation of “quick win” analyses, whose results are measured in seconds.
• Batch Layer: Long-term analysis or machine learning

Figure 2: The SMACK stack: a solid base for Fast Data infrastructures

Results are written to HDFS (5) and Cassandra (4), and can be used as input for other jobs. In the end, there is Akka again as HTTP layer to display the data, e.g. as a web interface.

Automation clinches it
In addition to technical core components, automation is a key point in determining the success or failure of a real Fast Data platform. And Mesos already provides many important basic components for that. Nevertheless, we will continue to need tools like Terraform, Ansible, Kubernetes and comprehensive monitoring infrastructures. At this point it should be clear where I am heading: Without DevOps, it is difficult to achieve the goals set. Cooperation between developer and operator is essential for a system which is intended to elastically scale and work on hundreds of machines.

…which they are being used. Existing components will, thanks to Spark, be usable within one unified programming model. Spark is just a tool. Much more important are the goals behind the catchphrase “Fast Data”:

• Low entry barrier for Data Scientists
• Differences between Speed and Batch Layer will disappear
• Exploratory analyses will be significantly easier
• The deployment of new jobs will be easier and faster
• Existing infrastructure is easier to use

SMACK offers a combination of all those goals and relies on proven technologies. The key lies in their use and in their highly automated combination. The result is a platform which is hard to beat in its flexibility.
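To give a flavor of that unified programming model, here is a classic Spark aggregation in Scala; the path and application name are placeholders:

```scala
import org.apache.spark.sql.SparkSession

object EventCounts extends App {
  val spark = SparkSession.builder
    .appName("event-counts")
    .getOrCreate()
  import spark.implicits._

  // The same Dataset API serves batch files today and, with minor
  // changes, streaming sources tomorrow.
  val counts = spark.read.textFile("hdfs:///data/events.log") // placeholder path
    .flatMap(_.split("\\s+"))
    .groupByKey(identity)
    .count()

  counts.show()
  spark.stop()
}
```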
To Scala or not
Scala is notoriously the parting of the ways. However, I deliberately do not want this article to initiate yet another debate on language features. In this particular case, the normative power of reality prevails, because every framework used is either written in Scala or is very Scala-like: a SMACK developer will not get past Scala code. Scala is the pragmatic choice if you want to succeed with the stack.
“Expert checklist –
Why Scala and not Java?”
Which is the most popular JVM language and where are we heading to? We asked six Scala develop-
ers to weigh in on the state of Scala and answer some questions regarding the past, present and fu-
ture of this programming language.
```
case class User(id: Long, name: String, email: Email)

case object GetUsers
case class AddUser(name: String, email: Email)
case class RemoveUser(id: Long)

class UserRepository extends Actor {
  ...
  override def receive = {
    case GetUsers => // do stuff
    case AddUser(n, e) => // do stuff
    case RemoveUser(i) => // do stuff
  }
  ...
}
```

Markus Hauck works as an IT consultant and Scala trainer at codecentric. His passion lies in functional programming and expressive type systems.

Ivan Kusalic is a software engineer working for HERE, a Nokia business in Berlin. He is an active member of Berlin’s Software Craftsmanship community. Ivan is co-organising SoCraTes 2015, International Software Craftsmanship and Testing Conference.

Daniela Sfregola is tech leader at PayTouch.

Julien Tournay is CTO at @mfg_labs and author of jto/validation.
‘equals’ and ‘hashCode’. And instead of pattern matching you would have to work with ‘instanceof’ and type casts. Even though some modern IDEs help you with that, the resulting code is much more verbose and ambiguous. Unlike Scala, Java is not as focused on the “What.”

JAXmag: Why Scala and not Java? In your opinion, what are the reasons to choose Scala over Java?
Daniel Westheide: In addition to the often mentioned powerful type system, there is an entire list of reasons why I would choose Scala over Java. I would like to emphasize two reasons which are somehow connected to each other. First of all, with Scala you are able to define algebraic data types. The other benefit is pattern matching, which allows you to work with readable code and the aforementioned data types. The following example shows both pattern matching and algebraic data types in action. We define an algebraic data type Session and discriminate between the session of a logged-in user and an anonymous session. We then use pattern matching to return either a personalized suggestion or a general one (Listing 2).
plications that have to react very fast and / or scale to big
JAXmag: Some people say that after Java 8 introduced lamb- amounts of data. At the same time, it is modular enough to
da expressions, Scala lost a bit of its appeal because func- give the user the opportunity to choose and use only those
tional programming is now also possible directly in Java. parts they really need.
What’s your take on that?
Daniela Sfregola: I don’t think Scala lost its charm after JAXmag: Work has begun on Scala 2.12 . What do you find
lambda functions were introduced in Java 8. Quite the oppo- most interesting in this release?
site actually! Java is still missing a lot of features that Scala Ivan Kusalic: I’m really interested in the style checker –
can offer such as implicits, for-comprehensions, traits, type Scala syntax is very flexible which is actually great, but as a
inference, case classes, easy currency support, immutable col- consequence it requires extra effort to have consistency in a
lections….and much more! Introducing lambda expressions bigger codebase. In my team we currently take care of that in
in Java 8 is an opportunity for OO developers to start tasting code reviews, but it takes a while for new team members to
the power of functional programming – before going all the increase the speed.
way to the dark force with languages like Scala or Haskell.
JAXmag: Could you name something that you still miss
JAXmag: An entire stack was formed around Scala; it con- in Scala and would like to be implemented in the next re-
sists of Akka, Play, Lagom, Apache Spark and others. How lease(es)?
can we define this stack? Is it an alternative model for Java Heiko Seeberger: Scala has been around for awhile and has
EE or Spring? Is it a loose set of interesting technologies or collected some burdens. Martin Odersky, the creator of Scala,
is there a closer relationship between these technologies? is currently working on the next major release: 3.0. Some of
Julien Tournay: It’s true that the Scala open-source commu- the old things will be dropped once the 3.0 version is released.
nity is very active. Spark is of course a huge driver of Scala’s
adoption. The technologies you’re mentioning are mostly de- JAXmag: Should Scala move more in the direction of a main-
veloped by Lightbend, the company behind the Scala language. stream language like Java in the future (and possibly value
Java EE and Spring are both large ecosystems so yes, projects more things like backwards compatibility)? Or would you
in the Scala ecosystem are competing with projects in the Java rather welcome more innovative features (which could pos-
sibly break backwards compatibility)?
Julien Tournay: Backward compatibility is extremely im-
Listing 2 portant and one should be wary of breaking it at a language
level. The Python community has learned this the hard way
sealed trait Session since the release of Python 3.0 – which, after all these years,
case class LoggedInAs(userId: String) extends Session has failed to exceed Python 2. Overall I hope the Scala lan-
case object Anonymous extends Session guage will continue to evolve and improve. I follow the work
def showRecommendations(session: Session): List[Recommendation] = session Dotty does (the next major version of Scala) with a lot of
match { interest.
case LoggedInAs(userId) => personalizedRecommendationsFor(userId) Read the full interview on www.JAXenter.com.
case Anonymous => trendingArticles
}
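To make a few of the features Daniela Sfregola lists above concrete, here is a small illustrative snippet of our own (not from the interview; all names are invented) combining case classes, type inference, immutable collections, for-comprehensions and Future-based concurrency:

```
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

object FeatureTour {
  // Case class: equals, hashCode, copy and pattern matching for free.
  case class Article(title: String, views: Int)

  // Two asynchronous computations, stubbed with immutable data.
  def fetchArticles(): Future[List[Article]] =
    Future(List(Article("Reactive systems", 420), Article("SMACK", 99)))

  def fetchThreshold(): Future[Int] = Future(100)

  // A for-comprehension combines the two Futures declaratively;
  // type inference works out Future[List[String]] on its own.
  val popularTitles =
    for {
      articles  <- fetchArticles()
      threshold <- fetchThreshold()
    } yield articles.filter(_.views >= threshold).map(_.title)
}
```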
Modern DevOps: Connecting business and IT
by Sebastian Meyen

Bringing teams from different fields together in a good way is rarely easy when those teams are involved in the same business processes but do not work together directly. That’s why a group of people led by Patrick Debois suggested a new concept back in 2009: DevOps. They offered a solution to a problem which exists at both the development (Dev) and operations (Ops) level. The DevOps movement has since developed substantially and made fundamental changes to basic concepts in IT and their roles in organizations.

Business-driven DevOps
Originating from the idea of making processes in conventional IT settings – classic on-premise servers, separated dev and ops departments – smoother, the DevOps movement is now mostly concerned with consistent digitalisation and areas under high pressure to innovate.
Powered by the internet, many industries are subject to increasing pressure to change. While some are still looking back half-heartedly at their losses in traditional market shares, others are already taking steps toward an open, hard-to-plan future. Consistent digitalisation and high-performance IT structures are imperative – as demonstrated by renowned companies such as Netflix, Spotify, and Uber.

Figure 1: DevOps culture

What exactly are the driving forces in business towards a DevOps culture (Figure 1)? Allow me to start by naming some (although certainly not all) buzzwords:
• Globalization results in increased competition in almost all industries.
• The internet is more than just a modern marketing and sales platform for traditional fields of business. It has the power to transform classic business models, modify them or make them obsolete altogether.
• Disruption is not an exception but will be the norm in most markets. The ability to innovate will therefore become the key to success for companies.
• Markets can therefore no longer be perceived as stable, which makes long-term planning obsolete. Iterative strategies and frequent changes will become essential for companies’ success.

Five factors of DevOps
Modern DevOps does more than just bring together Devs and Ops; it aims to integrate business and IT across teams and systems. We would like to discuss the relationship between business and IT with speakers from around the world at our DevOpsCon conference, which takes place between 5–8 December.
I will now try to outline the most important movements which, brought together, can effect a sustainable change towards DevOps. I would also like to talk about what inspired us – myself and Peter Rossbach – to design the program of our DevOps conference the way we did. If we want to make extensive changes, the gradual improvement of conventional systems is not enough. We need to focus on the following aspects:

1. Continuous Delivery
2. Microservices
3. Cloud Platforms
4. Container Technology
5. Business Culture

Let’s take a closer look at each of these five factors and how they come together.

Continuous Delivery
Continuous Delivery – automating each and every aspect of delivery – has been an important concern for online companies for quite a while. Bringing bugfixes, modifications and new features into production as fast as possible, without taking too big a risk, is a very important goal.
Such companies usually don’t bring new software releases into production every six months; they don’t just do that every month or even every day, but in most cases several times a day! Why are many small releases better suited to such teams than just a few big ones? Because this prevents large backlogs from building up in the first place. Pending work? That doesn’t fit the mindset of continuous delivery proponents. Important changes to usability or improvements to performance don’t have to wait until the next big release; they are put into production immediately. Even if that code does not stay the same for long, these modifications can also be rolled out without delay.
This culture of welcoming well-thought-through experiments – encouraging all contributors (not just application developers) to try something new because they know a path once taken can always be corrected if new insights suggest so – is part of the world we live in right now.
Continuous Delivery also puts gentle pressure on developers to optimize their software for smooth deployment. Developers will put more thought into architectural concerns and technical details that are important for deployment when they are responsible for transferring applications to real life, rather than just taking responsibility for applications in test environments.

Microservices
Microservices are modeled with one goal in mind: to reduce complexity in software systems. The theory reads as follows: by “cutting” software into small “bites”, inherent complexity can be reduced. This insight joins a long history of ideas on the modularity of software in IT (from object-oriented programming and component orientation to SOA and even modularizations like OSGi and Jigsaw).
Dependencies between parts of the system, which are responsible for many problems around complexity, are eliminated this way; when working with microservices, they are resolved by using APIs: when you change a service, you are obligated to consider “neighbouring” services to ensure the API stays consistent. You need to keep this important goal in mind throughout all development and deployment activities. If you have to change an interface, it’s easier to explicitly tell all neighbouring services and initiate a cooperative plan to kick off the change.
There is no need to use the same technologies for all microservices (one can be written in Java, another in Ruby on Rails, the next one in Go, in the cloud …). Many experts see this as an advantage. We are merely mentioning this aspect as a side note; its relevance from the DevOps perspective is not major.
It is important to mention that microservices should not be seen simply as a new style of architecture which can replace other architectures and lead to better technical results. Microservices represent a new solution not only for technology but also for the organisation. It makes sense to use microservices when you wish to change certain things beyond technology. These encompass:

1. Autonomous, self-organising teams, each taking full responsibility for one (micro-)service.
2. Technical considerations are not the driving force behind the design of such services; functional considerations are (which explains the vast popularity of domain-driven design in the field of microservices).
3. “You build it, you run it” – this quote by Werner Vogels (CTO at Amazon) describes the responsibilities of microservice teams. They are not just responsible for developing an application, but also for its full lifecycle, meaning deployment, monitoring, bug fixing, optimization, further development …
4. Furthermore, microservice teams are often cross-functional – that is to say, there might be operations/platform experts in the team in addition to application developers; quite often domain experts, marketing specialists and designers join the team too.
5. Microservices are beneficial in the long run only to those organizations which see them not just as a technical innovation, but as a way to pursue their business goals.

Cloud
Modern cloud platforms represent more than an opportunity to move applications to public data centres. In fact, they offer plenty of technical services which challenge the conventional ways of building and operating software.
Where can the consequences of this cloud paradigm be seen in practice? You need to put some serious effort into deploying an application in a classic environment: you must choose an operating system and set it up, add application servers and a database, manage users and permissions, configure a firewall, and manage compatibilities and dependencies. Having done all this, the application itself finally needs to be configured and adjusted to the given production environment.
Modern cloud environments such as Amazon Web Services, Microsoft Azure or Google Cloud Platform make this process substantially easier. Complicated infrastructures from the traditional on-premise world are almost trivial in comparison! Data management, user and permission management (identity), networking, management and monitoring, and scaling are all at hand as services in such environments. Calling one of those services takes just seconds to complete.

… release frequencies, IT can be seen as an agile and ductile medium instead of a rigid shell that needs huge investments to initiate change.
Microservices facilitate such automated deployments by substantially reducing the volume and complexity of the artefacts to deploy. They help companies focus on business goals, meaning that they do not let software infrastructures defined in the past decide the layout and focus of departments; instead, they help companies concentrate on goals and services that make sense business-wise.
Furthermore, cross-functional microservice teams promise to dissolve the classic boundaries between specialist departments, marketing, design, development and operations, and therefore encourage different stakeholders to collaborate in a lively, multidisciplinary way focussed on the success of the product. When teams put together like this cultivate a communication culture in the spirit of Agile, without constraints from hierarchical structures, iterative development of products guided by (changing) customer needs is facilitated. An agile infrastructure as defined by DevOps can supply a likewise iterative IT.
Cloud platforms, as purely software-defined infrastructures, support such iterative businesses on a technical level; Docker containers are helpful too. They make deployment and changes to the infrastructure a no-brainer and could potentially dispose of the image of IT as the “party pooper” in business development once and for all.
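As a small illustration of how low this barrier has become on the Scala side (our sketch, not from the article): with the sbt-native-packager plugin, a service can be packaged as a Docker image directly from the build definition. The service name and image settings below are invented.

```
// build.sbt – minimal sketch, assuming sbt-native-packager has been
// added to project/plugins.sbt
enablePlugins(JavaAppPackaging, DockerPlugin)

name := "user-service"             // hypothetical service name
dockerBaseImage := "openjdk:8-jre" // base image for the generated Dockerfile
dockerExposedPorts := Seq(8080)    // port the service listens on
```

Running `sbt docker:publishLocal` then builds a local image that can be deployed – and rolled back – like any other container.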
No software architect can resist the temptation to talk about their experience with microservices. We asked an expert to talk about the benefits and challenges of microservices, when people should not use them and what impact they have on an organization. In this interview, Daniel Bryant, Chief Scientist at OpenCredo, agreed to talk about his likes and dislikes about microservices. Here are his answers.

Portrait
Daniel Bryant is the Chief Scientist at OpenCredo. His current work includes enabling agility within organizations by introducing better requirement-gathering and planning techniques, focusing on the relevance of architecture within agile development, and facilitating continuous integration/delivery. Daniel’s current technical expertise focuses on “DevOps” tooling, cloud/container platforms and microservice implementations. He is also a leader within the London Java Community (LJC), contributes to several open source projects, writes for well-known technical websites such as InfoQ, DZone and Voxxed, and regularly presents at international conferences such as QCon, JavaOne and Devoxx.

JAX Magazine: Why did you start using microservices?
Daniel Bryant: I first started using an architectural pattern similar to what we now call microservices on a project in 2011. The reason we chose a service-oriented approach was the early identification of separate areas of functionality within the overall requirements. We also had several teams involved in creating the software (spread across the UK and Europe), and we believed that dividing the system into well-defined services with clear interfaces would allow us to work together on the project more efficiently.
The development of separate services, each with its own interface, functionality, and responsibilities, meant that once the teams understood and designed the overall system-level requirements and interactions, we could minimize the need for constant inter-team communication. Well-defined service contexts and less communication meant that we delivered valuable software more quickly than if we had all been working (and coordinating) within a single codebase.

JAXmag: What is the most important benefit of microservices?
Bryant: When a microservice architecture is implemented correctly, the most important benefit is agility – in particular, the ability to rapidly change a system without unintended consequences. This means that as customer requirements (or the market) change, the software delivery team can quickly react and adapt the software to meet these new requirements, and do so without worrying that a small change will create unforeseen issues (or require large amounts of testing to prevent regression). The properties of a microservice-based system that enable this benefit include:

• Understandable and well-defined cohesive services based around a business function (i.e. bounded contexts)
• Well-defined service interfaces (APIs)
• The ability to make assertions about functionality throughout the system stack, at a local and global level (e.g. component tests, contract tests, and end-to-end tests)
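As a toy illustration of the first, local level of such assertions (our sketch, not Bryant’s; the service and all names are invented), a component-level test in ScalaTest might look like this:

```
import org.scalatest.FunSuite
import scala.collection.mutable

// Hypothetical service under test, standing in for a real microservice API.
case class User(id: Long, name: String)

class InMemoryUserService {
  private val users  = mutable.Map.empty[Long, User]
  private var nextId = 0L
  def create(name: String): Long = {
    nextId += 1
    users(nextId) = User(nextId, name)
    nextId
  }
  def find(id: Long): Option[User] = users.get(id)
}

class UserServiceSpec extends FunSuite {
  test("a created user can be retrieved by id") {
    val service = new InMemoryUserService
    val id      = service.create("Jane")
    assert(service.find(id).map(_.name).contains("Jane"))
  }
}
```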
JAXmag: Have microservices helped you achieve your goals?
Bryant: The use of the microservice architectural style has definitely helped in several projects I have been involved in, due to the reasons mentioned in the previous answer. I work mostly as a consultant, and so am in the privileged position of seeing lots of different projects. Although microservices aren’t a panacea (and I haven’t used them in every project), they are a very useful pattern in my “architectural toolbox”, and I have used them to help teams I work with understand fundamental software development paradigms/qualities like coupling and cohesion.

JAXmag: What do you think should be the optimal size of a microservice?
Bryant: As a consultant, I like to say “context is vital”, and so I believe there is no optimal size for a microservice. My recommendations are to keep services focused around a cohesive business function (e.g. user service, payment service etc.), ensure that the team can use a ubiquitous language within each service (i.e. a concept within a service means only one thing – for example, a “user” within a payment service is simply an identifier for a payer), and make sure that a developer can readily understand the service context and code after a couple of hours of investigation.
Other techniques include the use of code analysis, both in terms of complexity (e.g. cyclomatic complexity or, more crudely, lines of code) and churn (e.g. code change over time, as shown by VCS logs), and the identification of natural friction points (e.g. interfaces, potential seams, locations where data is transformed). Adam Tornhill’s book “Your Code as a Crime Scene” is an excellent resource here, as is Michael Feathers’ “Working Effectively with Legacy Code”.

JAXmag: Should every microservice be written in the same language or is it possible to use more languages?
Bryant: I like the idea of polyglotism, both at the language and the data store level, as it embraces my personal philosophy of using the “right tool for the job”. However, context is again key here, and if an organisation is struggling with understanding core architectural patterns or with using (and operating) a single language platform, then adding more into the stack will only cause more trouble.

… functionality – then the potential impact on an organisation is massive (particularly for a traditional enterprise), as business teams will have to re-organise away from horizontally functional silos (e.g. finance, marketing, PMs, BAs, developers, QAs) to vertically integrated cross-functional teams (e.g. the conversion uptake team or the user signup team). Many people have already written about Conway’s Law, and so I won’t cover it here, but I’ve witnessed the results enough times to know that this is a real thing. In fact, it’s worth noting that many of the pioneers in the microservice space started developing architectures that we now recognise as microservices because of business requirements for agility, team autonomy and decreased time-to-market. This wasn’t necessarily a top-down edict to componentize systems like there was with the traditional SOA approach.

…

• Is the organisation’s leadership well-aligned and are teams capable of working together effectively? Is the organisation “operationally” healthy – are they embracing “DevOps” principles, and do they have a well-functioning continuous delivery build pipeline?
• Are developers/QA being well trained on architectural patterns, domain modelling, and technologies?
• Is there a process for feedback and learning, e.g. retrospectives, career development programs etc.?
Imprint

Publisher: Software & Support Media GmbH

Editorial Office Address:
Software & Support Media
Schwedlerstraße 8
60314 Frankfurt, Germany
www.jaxenter.com

Editor in Chief: Sebastian Meyen
Editors: Gabriela Motroc, Hartmut Schlosser
Authors: Manuel Bernhardt, Lutz Hühnken, Dr. Roland Kuhn, Jochen Mader, Marius Soutier, Vaughn Vernon
Interviews: Daniel Bryant, Markus Hauck, Ivan Kusalic, Martin Odersky, Heiko Seeberger, Daniela Sfregola, Julien Tournay, Daniel Westheide
Copy Editors: Nicole Bechtel, Jennifer Diener
Creative Director: Jens Mainz
Layout: Flora Feher
Sales Clerk: Anika Stock, +49 (0) 69 630089-22, [email protected]

Entire contents copyright © 2016 Software & Support Media GmbH. All rights reserved. No part of this publication may be reproduced, redistributed, posted online, or reused by any means in any form, including print, electronic, photocopy, internal network, Web or any other method, without prior written permission of Software & Support Media GmbH.

The views expressed are solely those of the authors and do not reflect the views or position of their firm, any of their clients, or Publisher. Regarding the information, Publisher disclaims all warranties as to the accuracy, completeness, or adequacy of any information, and is not responsible for any errors, omissions, inadequacies, misuse, or the consequences of using any information provided by Publisher. Rights of disposal of rewarded articles belong to Publisher. All mentioned trademarks and service marks are copyrighted by their respective owners.