From YouTube: OpenShift Coffee Break | Service Mesh for Developers
Description
With the move to microservice architectures came the challenges of distributed computing.
After using libraries like Netflix Hystrix, Feign Client and so on (often bundled in Spring Cloud), service meshes now solve typical problems in the areas of resiliency, security, observability, routing and more. In this session, we’ll show you how to take advantage of service mesh capabilities and to get rid of infrastructure-related code in your microservices.
A: Good morning, everyone. Hi Nick, good morning. Thanks for being here, and welcome everybody to this session of OpenShift Coffee Break. Please take some minutes to introduce yourself, and then we will move on to today's topic.
B: Yeah, thanks for the invitation. My name is Nick Lemberski; I live in Germany, near Munich. By background I have around 20 years of Java development, with Java Enterprise Edition, Spring and all that stuff, then moved to the cloud, and now I'm with Red Hat for around one year.
B: Yeah, so service mesh: I think it's one of the most important topics at the moment. From my experience working with customers moving to Kubernetes or OpenShift: they are moving a lot of apps to Kubernetes, and going away from a monolith to a microservice landscape means you find yourself in distributed computing.
B: That leads to another whole bunch of challenges you have to deal with, and a service mesh can really help here.
B: A lot of times customers have, for example, resiliency patterns in their backlog, trying to implement them via Spring Cloud or something, but it stays in the backlog, because business functionality is always way more important. Introducing a service mesh can really help you, because then all these problems are solved and you can clean up your backlog.
A: Huge topic. So let's say that a service mesh should help developers and Ops people to manage all the challenges that you find with a high level of distribution, when we speak about microservices. Because here we always speak about how cool and effective it is for the business and for IT, for time to market, to put applications based on the microservice pattern into production. But we can't forget that there are still a lot of challenges you have to deal with, like latency, security and consistency, and that's where a service mesh will probably help.
B: Yeah, you can have huge outages, and from my perspective I think the most important thing is to make developers productive and help them really focus on business logic.
B: And if you are working in a distributed environment, in a microservice landscape, and you don't have support from something like a service mesh, then developers have to write infrastructure-related code, and this needs a lot of knowledge about the infrastructure you're running on, and you cannot focus on your business logic. So it will slow down your feature development.
B: So I can skip the next slide, because I introduced myself already. Just to give some context, I have some slides here from a more developer-focused perspective: how we developed apps in the past and now, and what changed. Just some years ago we went from monolithic applications to microservice applications. This was a huge factor in bringing fun back into the development world, because those big monolithic applications are really hard to write apps for: you have a lot of side effects, it's hard to test everything, and it's hard to scale to more than one team working on a big monolith. Microservice applications make this way easier.
B: So our small app soon evolves into a network of microservices, and with a network of microservices we get a new category of problems, like cascading failures. We can see here: a service way at the back crashes, and the error is propagated to the end user.
B: We have to find out what was causing the failure; we have to find ways to solve service discovery, so one service can find another; how to deal with network latency; how to have good observability of what's going on in our mesh of microservices. A new category of challenges. And how did we deal with it in the past? For me, dealing with topics like these...
B: ...this book was the entry into the world of microservice patterns like circuit breakers, retries and service discovery. This book was a great introduction. I think there is now a version two of the book, but as we had already moved on from microservice libraries to service mesh, I didn't read version two; the first version was already a great help for a lot of developers working on microservice architectures.
B: What was the approach here? We developed our microservices with a library service mesh. In the end, with Spring Cloud, we had a lot of libraries originally developed by companies like Netflix, like the famous Netflix Hystrix circuit breaker or the Feign client. These were bundled as libraries, and we could put these libraries into our application, and the libraries, when configured properly, took care of all that stuff: making the application more resilient, exposing metrics for good observability, and so on.
B: But this is kind of problematic. First, the apps get quite bloated: if we put a lot of libraries into our apps, the app becomes bigger and bigger. And the second problem is that these libraries are not available for other languages.
B: So maybe there is something like a circuit breaker for a Go application or a Python app, but maybe it doesn't work exactly the same way as the Java libraries, and it has to be configured differently, and this can be a challenge if you are working with different languages. I think one of the many advantages of microservices is that teams are free to use whatever language they think is best suited for their applications. So you have to find a better solution, and so we went from the library service mesh to the sidecar service mesh.
A: Right, yeah, you're perfectly right. Consistency, whatever kind of language you are using, is a key element of an effective and resilient distributed architecture on Kubernetes; otherwise, managing more than three, four, five applications in a consistent way definitely becomes a mess.
B: Yeah, absolutely. So the sidecar pattern, or the sidecar service mesh: how does it work? With Kubernetes we have the possibility to put more than one container in a pod; a pod is our smallest deployment unit. Every app we put in a pod, and we can have more than one container in a pod, and they share the same local network. This makes it possible to move all this functionality from the libraries into a sidecar container, and the sidecar container takes care of security, resiliency and observability. It works like a normal proxy: if an application makes a request to another application, it goes through the sidecar; the sidecar adds all this functionality and forwards the request to the other application. And if we have it like this, we can configure all our sidecars from a central control plane.
B: Okay, and you can see here, as I already showed on my slide, that we are using different languages. For example, the first service, service A, is a Python app. Service B is a Deno TypeScript app; Deno, as you may know, is the successor of Node.js from Ryan Dahl, his new project. And service C is a Java virtual machine app. And I have the same service mesh functionality for all these apps, as an application developer.
B: How does it work in OpenShift? The service mesh we are using is based on Istio; Istio is the upstream project, and Red Hat OpenShift Service Mesh is based on it, with changes we made on top and, of course, hardening and support. Installation is normally done via our Operator framework; it's quite easy, and I can show later in our demo how it is installed.
B: It's just the operator, and if you install it, you install not only Red Hat OpenShift Service Mesh; you also install additional things like Kiali, where you get a good overview of the traffic flow of your applications, Jaeger for distributed tracing (it supports the OpenTracing API, for example), and a Prometheus/Grafana stack, to get more insights into your applications and your service mesh.
B: So I have a very simple demo. Let's just start by using our three apps without Kubernetes, so you can see how it works. I just start my three apps; it's a simple compose setup.
B: You can see we have service A, service B and service C, and inside service C, just for the experiment, I have a version of my service, a hostname and a counter, and the counter always adds one, so we can see if we make changes. You will see later why I have it.
B: To demonstrate, now let's trigger the crash: crash mode is now activated, and if we make a call to localhost:3000 you can see we get an error here, and this error is normally propagated to the end user; we want to avoid that. Now we move forward to Kubernetes, to show you the same services there, so you can see it in action.
B: So, okay, while the application is starting, I can just show you what I did here in my deployment files: service A deployment, service B deployment, service C deployment. These are quite simple standard deployments, and what is new here: I have an annotation, sidecar.istio.io/inject: "true", and this is the signal for the service mesh to inject a sidecar.
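The annotation-based injection described above looks roughly like this in a deployment manifest. This is a sketch, not the demo's actual file: the app name, labels and image are hypothetical stand-ins.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-a
  template:
    metadata:
      labels:
        app: service-a
      annotations:
        # Signal for the service mesh to inject the Envoy sidecar into this pod
        sidecar.istio.io/inject: "true"
    spec:
      containers:
      - name: service-a
        image: quay.io/example/service-a:latest   # hypothetical image
        imagePullPolicy: Always
```

With the sidecar injected, the pod shows 2/2 in the READY column, as mentioned later in the demo.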
B
So
this
app
is
member
or
becomes
member
of
the
service.
Mesh.
Okay
and
the
rest
is
quite
easy.
Just
an
image
image,
pull
policy,
and
here
I
have
environment
variables
for
the
service
addresses
for
the
downstream
course.
So
nothing
special
here
and,
let's
see
if
our
pot
are
started,
and
yes
up
and
running,
what's
new
here,
if
you
are
using
a
service
mesh,
you
can
see
here
in
the
ready
column
two
of
two.
B: Great, okay. Normally you have to configure the control plane and the member roll; that's already done here, because I don't want to invest too much time in topics like service mesh configuration and installation from an administrator's view. But just so you can see it: we always need a control plane for the service mesh, which configures, for example, components like Grafana, and we have a member roll.
B
We
can
Define
all
the
projects
that
are
part
of
the
that
are
part
of
this
service,
mesh
and
yeah.
This
is
a
special
feature
of
the
openshift
service
mesh
that
we
have
a
multi-tenancy
capability,
so
we
can
isolate
so
having
one
service
mesh
for
one
namespace
or
we
can
have
managing
the
service
mesh
with
one
service
mesh,
multiple
namespaces
and,
of
course
we
can
also
have
a
cluster
byte
installation
yeah.
Let's
finish
everything
yeah.
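A member roll of the kind described might look like this sketch, assuming the control plane lives in a namespace named istio-system and the apps in a project named apps; the demo's actual namespace names may differ.

```yaml
apiVersion: maistra.io/v1
kind: ServiceMeshMemberRoll
metadata:
  name: default
  namespace: istio-system   # the control-plane namespace
spec:
  members:
  - apps                    # project(s) that become part of this mesh
```

Only the projects listed under members get sidecar injection and mesh policies, which is what enables the per-namespace multi-tenancy described above.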
B: Yeah, true. So now our last step: our pods are running, as we can see here, and now we just want to make our app testable. For that I will create a gateway. We are using Istio as the ingress gateway; you can use other gateways as well, but if you are using the Istio ingress gateway, for example, you have the possibility to include gateway traffic in your distributed tracing.
B: And you can see here our gateway: we want to create an Istio Gateway on port 80, and for the gateway we create a virtual service on the service A path. I only expose service A here, because this is our main service, our root service, and the destination is the service A host.
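A gateway plus virtual service of the kind described here could be sketched like this; the resource names, the wildcard host and the /service-a path are assumptions for illustration, not the demo's exact manifests.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: app-gateway
spec:
  selector:
    istio: ingressgateway   # bind to the mesh's Istio ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: service-a
spec:
  hosts:
  - "*"
  gateways:
  - app-gateway
  http:
  - match:
    - uri:
        prefix: /service-a   # hypothetical path
    route:
    - destination:
        host: service-a      # only the root service is exposed
```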
B: If you are working with a service mesh as a developer, or on the AppOps side, there are two resources you normally work with: one is the destination rule and the second is the virtual service. You can configure different things with them, but in the end the destination rule specifies the targets of your services, and with the virtual service you then decide which traffic should go to what destination; these two parts play together. We will work a lot with them, and we will see short, typical examples of destination rules and virtual services, because I have some examples with me for canary releases, circuit breakers and retries, and for all this functionality you use destination rules and virtual services. Cool.
A: Yeah, these two concepts, I think, are key when you start to manage service meshes: the virtual service and the destination rule, because they allow you to implement different kinds of traffic policies independently of what has been configured on the dev side, during development. So the Ops team can decide if there is a better way to manage the service, and then implement different kinds of policies and traffic management in different contexts. Let's see.
B: Yeah, that's true. I will not go into detail on the user roles here; normally we have a bunch of user roles working with the service mesh, but what I see with our customers: most of them are just starting to use it.
B: We have some customers running huge service meshes in production, but a lot of customers are just starting with it, and to start I really recommend first building a community of practice, to gain some knowledge together. The community of practice could be built from software architects, developers, AppOps people and Ops people, so you bring the knowledge of all these people together to make the best use of the service mesh, I think. Okay, so I exposed, or created, the gateway, and now let's get our route.
B
So
I'm
just
trying
to
find
the
route
for
our
istio
Ingress
Gateway
and
the
istio
Ingress
Gateway,
is
in
in
another
namespace
I,
decided
here
to
to
install
the
service
mesh
in
the
system
namespace
and
to
put
my
apps
in
the
app
namespace.
B: So here we have our call, which we have already seen before in Postman, but now running in Kubernetes: service A calls service B, which calls service C. Here we have service C in version two, with a counter of one. Let's call it again, and here you can see version one on another host; this is classical round robin from Kubernetes.
B: What our sidecar can do, and this is the main reason a lot of customers adopt it, is encryption. So we have mTLS encryption now, and our Istio control plane takes care of the certificate rotation.
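Enforcing the mTLS encryption mentioned here is typically done with an Istio PeerAuthentication policy; this is a minimal sketch assuming a hypothetical apps namespace, not the demo's actual configuration.

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: apps   # hypothetical application namespace
spec:
  mtls:
    mode: STRICT    # sidecars accept only mutually authenticated TLS traffic
```

The sidecars then encrypt service-to-service traffic transparently, and the control plane rotates the workload certificates, which is what the speakers emphasize next.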
A: One of the key characteristics and capabilities of a service mesh is the fact that you don't rely on what the dev teams are going to use in terms of libraries or certificate management; everything is handled by the operations team.
B
Yeah
and-
and
this
is
really
a
huge
topic
so
as
I
said,
a
lot
of
customers
start
with
a
service
mesh
just
for
secure
service
calls
for
the
encryption
stuff
right
and
this
is
really
can
really
solve,
but
the
biggest
outages
I
have
seen
in
my
career
as
a
software
developer,
I
think
nine
out
of
ten
times.
It
was
because
of
some
certificate
was
not
rotated.
B
One
of
the
big
pain
points
is
the
certificate
rotation,
stuff
and
yeah
with
a
service
mesh.
You
have
a
lot
it
takes
care
of
this
and
can
really
help
make
your
life
easier
and
help
avoiding
outages.
Because
of
that,
you.
B: So they have already achieved something, because they have more observability and they have secured connections, but they have a lot more possibilities, and I will start with a simple example: a canary release. The name "canary release", in case you're not familiar with it, comes from the coal mines, where you had a canary bird, and the bird is always singing; if it dies, then you know: okay, we have some problem with the air here, some pollution, and we have to get out as fast as possible. A canary release is the same procedure: you put a small amount of the new version of your application into production, this is your canary, and you want to see if the new version will survive, so you send only a very small percentage of traffic to the new version. Then you check your observability features to see if the new version works well, and if it survives, you have a lot of possibilities.
B: You can distribute the traffic by percentage, but you can also distribute it, for example, based on a header value. So maybe you say: okay, I want to test my new version in production, but not with all my real customers, only with a test group; that could be internal customers or a specific user group.
B
So
you
can
just
add
a
header
and
say:
okay,
based
on
that
header,
we
are
routing
to
the
new
version
and
then
over
time
you
check
your
observability,
you
get
feedback,
and
then
you
increase
the
traffic
until
you
finally
send
or
100
traffic
to
the
new
version.
There's
a
canary
at
least
I
highly
recommend
it
it's
not
easy
for
customers
or
for
big
companies
to
implement
it,
because
it's
not
only
the
technology,
it's
also.
Okay.
How
can
we
deal
with
two
different
versions
in
production?
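Header-based canary routing of the kind described could look roughly like this sketch; the header name x-test-group, its value, and the subset names are hypothetical illustrations.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: service-c
spec:
  hosts:
  - service-c
  http:
  - match:
    - headers:
        x-test-group:          # hypothetical header set for the test group
          exact: "internal"
    route:
    - destination:
        host: service-c
        subset: v2             # testers are routed to the canary
  - route:
    - destination:
        host: service-c
        subset: v1             # everyone else stays on the stable version
```

The first matching rule wins, so only requests carrying the header reach v2; all other traffic falls through to the default route.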
B: Maybe we have some regulations here; in Germany, from BaFin: is it legal to have different versions in production, can we do that? But the real advantage of canary releases is that you avoid the big bang release. Normally we do big bang releases, maybe with a blue-green deployment, and if something is wrong, the problem affects 100% of the users.
A: Let's say it avoids, or should avoid, the roller coaster after you put more than just one microservice into production. But even with just one, you sometimes can't see the side effects of putting a new microservice into a highly distributed architecture. So it should avoid that, right? Yeah.
B: And yeah, I forgot who said it, but it was great: if you do a big bang rollout, all you can get is a big bang. And this is very often the truth. These big bang rollouts are a high-risk approach, and a canary release can reduce the risk of a rollout, and you will finally be able to roll out new software versions daily, or every...
B: So as we have seen, we have our service C in two versions, version one and version two. They are basically the same, but they just respond with v1 or v2 when you make a call. And here we inform our service mesh: okay, for service C we have two subsets, v1 and v2, and the service mesh knows, depending on the label of the deployment, whether these pods are version one or version two. You can see: this is the deployment, and our destination rule decides, depending on this version label, what is service C version one and what is service C version two.
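A destination rule with the two subsets described might look like this sketch; the service name and label values are assumed from the demo's narration.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: service-c
spec:
  host: service-c
  subsets:
  - name: v1
    labels:
      version: v1   # matches the pod label of the v1 deployment
  - name: v2
    labels:
      version: v2   # matches the pod label of the v2 deployment
```

The subsets only name the targets; which traffic actually reaches v1 or v2 is decided separately in a virtual service.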
B: Okay, and now let's have a look at our virtual services. Let's start here: in the destination you can see destination host service C with subset v1; this is exactly what we configured in our destination rule one slide before. I start the canary release by sending 100% of the traffic to version one, and then I gradually send more and more traffic to version two.
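The weighted canary step described here, for example a 90/10 split, could be expressed like this sketch (names and weights assumed for illustration):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: service-c
spec:
  hosts:
  - service-c
  http:
  - route:
    - destination:
        host: service-c
        subset: v1
      weight: 90     # stable version keeps most of the traffic
    - destination:
        host: service-c
        subset: v2
      weight: 10     # the canary share, increased gradually
```

Shifting traffic is then just a matter of changing the two weights and re-applying the resource.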
B: In a high-traffic production environment you would normally send maybe one percent, or less than one percent, of the traffic to the new version. And if you are familiar with canary releases and have some experience, you can fully automate it. For example, there is Flagger: you can configure Flagger to do this weight distribution automatically, based on metrics that the services expose. Flagger observes the metrics, and if everything seems healthy, it gradually adjusts the traffic for you automatically.
B: Let's see how it works now. So if you remember, this was the first virtual service we saw, sending 100% of the traffic to version one, and you can see here in real time: all service calls are going to service C in version one. And now we can say: okay, now we want to do our canary release and send 10 percent to service C in version two.
B: Okay, we just let it run, and now we will have a look at our observability features. I think this is a good place to show you something. Here you can see: this is just the standard OpenShift topology of our services; you have service A, B, C v1 and C v2. Now let's move forward: we are here in our Grafana. Let's go to the Istio service dashboard and have a look at our client workloads. Here you can see your observability.
B
We
have
here
some
incoming
requests.
We
have.
We
have
100
success
rate
and
so
on.
So
you
have
some
some
really
nice
metrics
here,
because
our
sidecars
expose
metrics,
for
example,
traffic
information.
B: ...and in Jaeger we can find traces. Here you can see we have a lot of those traces, and you can see the whole service call with the duration of every single call, so in case you have some delays, you can figure out with distributed tracing which service is maybe slow. Again, this is done from the Istio ingress gateway; you have to make one small change in your service.
B
I
will
show
you
later
how
it
works,
but
most
of
the
functionality
is
again
done
from
the
sidecar
proxy,
and
maybe
the
most
interesting
is
here
our
key
Ali,
and
here
let's
show
the
traffic
distribution.
B: Kiali gives a nice overview of your service calls. This only works if you have traffic: if you deploy your app but you don't have any traffic, Kiali is not able to create this graph, because it is built from real traffic. And you can see here: the traffic distribution was configured as 90/10, and the service mesh tries to more or less match that traffic distribution.
B: Yeah, and of course, for our very simple example with three services and one service in two versions, you could also figure out what's going on in your applications without these tools. But if you have a huge landscape of microservices with a lot of requests between them, Kiali is really a great help to find out what's going on in your distributed services, and basically you get it for free: you just install your service mesh and you have Kiali in place, and you can see how all your apps are communicating. Cool.
B
So
now
we
have
sending
50,
you
can
see
it's
not
around
Robin,
so
service
measures
try
to
do
to
solve
this
traffic
distribution
in
a
mathematical
way
and
not
just
doing
a
simple
round
robin
and
you
have
additional
features
here.
You
could
also
do
a
combine
the
the
routing
based
on
metrics
like
latency,
so
maybe
you
have
two
data
centers
and
you
have
Services
deployed
and
one
and
the
other
server
data
center.
B: Yeah, this is one traffic-shaping capability. As I said, doing a canary release by percentage is nothing a level-4 load balancer could not do, but you have way more possibilities: for example, you can route based on header values, and this is level 7 in the network layer, only possible if you can look inside the HTTP or HTTPS traffic.
B: Okay, that was canary releases, and now I want to show an example of how to bring more resiliency into our distributed services, with a circuit breaker and a retry; these two normally work in combination. What is a circuit breaker? Assume that for our service B we have two pods, and one pod is not responding.
B: What happens if you continue to send traffic to this pod? This has two effects. First, we can hope the unresponsive pod will recover at some point; maybe it's just a Java application whose thread pool is at its limit, and if it works through all the pending requests, the thread pool drains and it can handle new requests. But if you just keep sending traffic to this pod, it cannot recover. And the next problem: service A is waiting for answers from service B, and its thread pool fills up at some point as well. Or maybe the service is responsive but returns error 500s; then all these errors are propagated through the network until finally the customer gets an error 500. Not great. A circuit breaker kicks in here: if it sees a pod is unresponsive or unhealthy, it opens the circuit for a specific time.
B
We
start
by
running
it
by
replacing
our
virtual
service
here
just
to
because
we
have
some.
We
have
a
virtual
service
here
in
place
from
our
Canary,
at
least
from
former
project.
I
just
want
it
here
to
clean
so
that
we
have
a
situation
without
Canary
at
least,
and
then
we
can
see.
We
have
here,
a
new
destination
rule
and
the
circuit
breakers
are
configured
in
the
destination
rule
we
have
here.
A
connection
pool
and
the
most
interacting
interesting
thing
is
here:
the
outlier
detection.
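A destination rule with outlier detection as described, ejecting a failing pod for the 10-second base ejection time, might be sketched like this; the connection-pool values and error thresholds are illustrative assumptions, not the demo's exact settings.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: service-b
spec:
  host: service-b
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutive5xxErrors: 1    # eject a pod after one 500 response
      interval: 1s               # how often hosts are scanned
      baseEjectionTime: 10s      # try the ejected pod again after 10 s
```

While a pod is ejected, the sidecar routes around it, giving the unhealthy pod a chance to recover, which is exactly the behavior walked through in the demo.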
B: Here we have configured our circuit breaker to check for error 500s, and we have a base ejection time of 10 seconds. That means that after the circuit breaker goes from closed to open, it will try again after the base ejection time and see whether the pod has recovered, so it can send traffic to the crashed pod again. And here is our retry policy; we can combine the two and say: okay, if we get an error 500, we will not propagate the error to the end user.
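The retry policy described, retrying server errors instead of propagating them, could look roughly like this; the attempt count and per-try timeout are illustrative assumptions.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: service-b
spec:
  hosts:
  - service-b
  http:
  - route:
    - destination:
        host: service-b
    retries:
      attempts: 3          # try up to 3 times before giving up
      perTryTimeout: 2s
      retryOn: 5xx         # retry on server errors instead of propagating them
```

Combined with outlier detection in the destination rule, a failed attempt against the crashed pod is retried and typically lands on the healthy pod.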
B: It's not magic: I have Argo CD running, deploying all that stuff, and I think this way you can follow along. I also have a repository with all these apps on GitHub; you can find it on my GitHub account and try it out yourself, everything is there. So, okay, now, as you can see, what I did: I scaled down.
B: I make a network tunnel, sending requests to localhost:8080 to one of these two pods, so I can make a call to localhost:8080, and then I call my crash endpoint that we saw in the beginning. You can see: one of our two services is now in crash mode, and this is a situation we want to avoid, because now 50% of the requests are propagated with an error to the customer; of course we want to avoid that, and we want to avoid cascading failures.
A: Also because this could cause bottlenecks or network failures and so forth, yeah.
B: And as you can see now, our circuit breaker is in place, with the base ejection time configured at 10 seconds, so some errors are still propagated to the end user, each time after the base ejection time.
B: We have a way better situation now: our crashed server can maybe recover, and only a small percentage of errors is propagated to the end user, not 50% of the traffic. And now we can say: okay, I want a 100 percent success rate, and I bring my retry rule into place.
B: Now you can see what happens: the circuit breaker is still in place, so after the 10-second base ejection time a request can reach the crashed pod again. If that call fails with an error 500, the retry policy kicks in and makes another call; this is routed to the healthy pod, and no errors are propagated to the end user. And finally, we can assume our crashed pod is healthy again.
B: Let's repair it: crash mode is deactivated. We repaired it, I think, nine seconds into the base ejection time; after the base ejection time the circuit breaker checked whether the pod is healthy again, it is, and now we have everything as before: the crashed pod has recovered and gets traffic again. Cool. So this is the end of the demo. As I said, we just scratched the surface; the possibilities are huge.
B
One
warning
you
can
make
some
errors
here.
So,
for
example,
if
you
say
okay,
this
is
great.
I
I
found
out
there's
a
retry
policy.
We
can
make
a
retry
and
customers
complain
that
this
service
here
in
front
always
gets
errors
and
you
decide.
Okay,
I
will
just
place
a
retry
policy
here
in
front.
What
will
happen?
You
have
a
retry
storm
in
your
cluster
yeah.
So
this
is
why
I
said
say:
I
really
recommend
starts
more
build.
A
team
experiment
together
gain
some
knowledge.
B: A retry policy helps you by not propagating errors to the end user, but if you have a deep call hierarchy, it's better to place the retry policy at the deepest point of the call chain, not in the front service, to avoid retry storms. This is just a warning: you have a lot of possibilities now, but if you use this stuff in the wrong way, it can lead to bigger problems than you had before.
B: Istio in Action is a great book; it covers a lot of the stuff you have seen today and way more, so if you want to have a deep dive, it's a good choice, and Istio in Action also covers running a service mesh in production.
A: Wow, the hour flew by; it's already 11.
A: Yeah, definitely. We had one question from Solomon: which service mesh are you using? I replied: the one based on Istio, and I shared a link where you can download the book with some more detail about the capabilities of Service Mesh on OpenShift, how you can try it and install it, and so forth. So definitely thanks a lot for being here, Nick. It was very simple but so effective; explaining service mesh can sometimes be very challenging.
A: And for all our attendees: let's meet on the 8th of March; the topic will probably be IBM Quantum Computing, so stay tuned, because it is going to be a very, very interesting topic. Thanks again, have a great day, and see you soon.