From YouTube: Incremental adoption of microservices with an application gateway by Christian Posta at JBCNConf'19
Description
An application gateway is a piece of infrastructure that helps existing applications incrementally adopt new architectures like microservices and serverless. It is not as single-purposed as an API gateway, not as complicated as a full service mesh, and it provides immediate value. In this talk, we'll explore this emerging pattern and how to leverage an application gateway to get value out of your existing architecture while moving to microservices and serverless. This application gateway uses technologies like Envoy Proxy, GraphQL, and HTTP/2 to help solve some of these problems.
http://www.jbcnconf.com/2019/infoTalk.html?id=5c3e5dbb38da16698cf41b28
I left the definition of an application gateway a little bit vague, because what we're going to talk about is at a bit of a higher level: how we adopt microservices, how we might migrate or modernize from monoliths to a services-style architecture, and how we reduce the risk of doing that. To do that, there are a couple of really important patterns that we want to follow, and some of these technologies will come into play. So thank you again to the JBCNConf organizers for having me again; this is, I think, my third year here. My name is Christian, and I now work at a company called Solo. How many people have heard of Solo? Just kind of curious, by raise of hands... one person. Okay. All of you will have heard of it after the end of this talk. Solo is a startup that focuses on helping people be successful with microservices at a high level, and on service mesh and application networking tools at a more specific level, and I'll talk a little bit about this.
I came from Red Hat. I worked at Red Hat for about seven years, where I was the chief architect of application development, or cloud-native application development: so basically Kubernetes on up, any of the patterns around the new microservices. I wrote a book with Rafael from Red Hat on microservices for Java developers, and wrote the first book on Istio. Currently I'm writing Istio in Action for Manning, which is in early access preview. I've contributed to a bunch of open source projects. I came from a Java background, contributing to things like Apache ActiveMQ and Apache Camel, stayed interested in messaging systems and distributed systems like Apache Kafka, and then moved more into the Docker and Kubernetes space. For the last year and a half or two years I've really been focused on Istio.
So, let's get going. I'm going to set a couple of definitions for the words that I might use through this talk. There are these possibilities of outcomes, and some of them have negative impact, and we want to reduce the possibility of that. When I'm talking about a monolith, I'm not talking about just any application that deploys all of its components together. I'm talking specifically about an older application: it's been around for many, many years and, more importantly, has seen many different developers work on it, each new developer going in to add a new feature to it.
So it's a big, stop-the-world kind of event to deploy this thing, because it's complicated. When I talk about microservices, I'm talking about how we evolve from what we had in the past, where things are a little bit harder to move and harder to change because of that longevity, because of that erosion in the architecture, not just because all the things are deployed together. But how do we do that?
Because the whole point of making changes in software is to get stuff out in front of customers: get things out, get their feedback, and determine whether or not it provides the value that you thought it did. A lot of times, just because the business has a requirement and we implement it and we deploy it doesn't mean we actually get the impact that we want. So we want that fast iteration; we want to move fast.
We want to get things out in front of customers and measure whether or not that provides the impact that we're looking for. Now, I show this slide a lot; I showed it last year, and it has continued to remain the same. It's from an older report, but the trend is still true.
I said last year that we're adopting Kubernetes, we're adopting containers, we're adopting cloud platforms in general, we're adopting things like CI/CD and automated testing and so forth, and we're doing that for the purpose of going faster. We want to deploy things out to production faster. And you can look at these KPIs from the State of DevOps report.
This is from the State of DevOps report, I think the 2017 report. The State of DevOps report measures the business impact of high-performing teams, and also defines what high-performing teams are: those that go fast and do it safely. If we look at the first two KPIs on this list, the number of deployments and the lead time for change, the window between the high-performing teams and the low-performing teams is shrinking.
We're able to get to production quicker, we're able to move through our environments quicker and get changes from our developers' laptops into an environment where we can actually determine impact faster. But look at those last two metrics, where this trend is continuing. The mean time to recovery, one of our safety metrics: if we introduce a failure into production, how quickly can we diagnose what it is and how quickly can we fix it?
And the change failure rate: how many failures are we introducing per number of deployments that we're doing? If we're going really fast and we're introducing lots of failures and problems, then this is not good. It doesn't matter if you're going fast if you're just ruining everything. So as we look at evolving and modernizing our systems, microservices help us go faster, but we need to be able to do that safely.
One of the areas that we will explore, though it's not unique to just this, is going from a monolithic-style architecture to microservices. There are lots of different problems and forces and struggles in being able to do that. Just curious, by show of hands: how many people are on a journey today where you're trying to modernize from an existing monolithic-style architecture to a more cloud-friendly, cloud-native architecture or microservices?
So that's a decent half of the room. How many people are deploying into a cloud platform like Kubernetes or a public cloud, feel like you're doing a services architecture, and are going fast? How many people are in that camp? Oh, awesome, great. Okay. So one of the big parts of lowering the risk of modernizing, or lowering the risk of making changes to a highly complex system in general, is being able to control the network, to control the blast radius of the changes that you make.
In the past, when we made changes, we would make changes to our code every three months. I worked at a big bank; every three months we would deploy our code over the weekend. Everybody would deploy at the same time, and when we said things were live, all the customers, everybody at the bank, could see it.
So what we have in some instances is our monolith, and the traffic is going to our monolith. The monolith might have a UI, which might be written in Angular or React or whatever. Now, if we start to add new services, maybe we want to use the strangler pattern that Martin Fowler talks about, a fairly popular pattern, and add new functionality as services outside of the monolith. So in this case we're adding a new service, service A, and we're going to need to somehow update the monolith to now call out to this new service.
So we add this application gateway, or this mediator, in the path, and now we can do things like observe how many requests are coming through the system, observe how many failures we're seeing, and what the latency is for these requests. And we can do things like traffic shifting: route traffic to a new service, keeping some on the monolith and sending some to the new service where it makes sense, and the client doesn't see any of this happening. We're abstracting the client from these changes that happen in the system.
To give you a taste of some of these patterns that we can apply if we have this application gateway, this point of control in the network: we can do things like shadow traffic. In the shadow traffic pattern, requests are going to the monolith, and we've introduced a new service, service A, a new capability. Let's say it's a recommendation engine that we want to introduce to our retail site, and we want to try it out before we expose this recommendation engine to all of our customers and potentially take down the site if things don't go well.
What we can do is take copies of those requests and send them off to our new service, and this copy of the request is out-of-band, out of the request path. So everything is still going to the monolith; the user still sees exactly what they would have seen before. But now we actually get to test and get feedback, with real live production requests, on our new service. And of course, we need a way to capture metrics and logs about how the new service is behaving.
So the request goes through as-is, a copy is made, it's shadowed, it's sent off to service A, and it's out of the request path. If service A fails, it's ignored; if service A returns something, it's ignored. But you can actually get a real feel for how your service will behave in production, with a limited blast radius, using this application gateway pattern.
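As a concrete sketch, in Envoy (which comes up later in the talk) this kind of shadowing is expressed as a route-level mirror policy. The cluster names below are illustrative, not from the talk:

```yaml
# Illustrative Envoy route configuration for traffic shadowing.
# Live responses come from "monolith"; "service-a" receives a
# fire-and-forget copy of each request, out of the request path.
route_config:
  virtual_hosts:
  - name: retail-site
    domains: ["*"]
    routes:
    - match: { prefix: "/" }
      route:
        cluster: monolith
        request_mirror_policies:
        - cluster: service-a
          runtime_fraction:
            default_value: { numerator: 100, denominator: HUNDRED }
```

Because the mirrored call is out-of-band, a failure or slow response from `service-a` never reaches the user, which is exactly the limited blast radius described above.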
Another approach is to take a small percentage of the traffic. So let's say you get past the shadow traffic stage and you want to take one percent of your real live production traffic and send it to this new recommendation service. You expose it in the UI (you might use feature flags or something for that), but on the back end, one percent of the traffic will end up going to our new service. And again, if this thing fails, you're watching through metrics and telemetry and logs and all this stuff, keeping an eye on its responses. If this starts to perform badly or look bad, then you roll it back; you take it out.
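In Envoy terms, that percentage split can be written as weighted clusters on a route. A minimal sketch, again with illustrative cluster names:

```yaml
# Illustrative Envoy weighted routing: 1% to the canary, 99% to the monolith.
routes:
- match: { prefix: "/recommendations" }
  route:
    weighted_clusters:
      clusters:
      - name: monolith
        weight: 99
      - name: recommendation-service
        weight: 1
```

Rolling the change back is just setting the canary's weight back to zero, without touching either application.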
Another option, very similar to these, is to take a percentage or a shadow of your traffic and run it through a system that acts as a control against the new service. What I mean is, it'll actually call the old implementation and the new implementation and do a diff, and you can compare both in terms of latency, how long things are taking, as well as the correctness of the data that's returned. You can examine that in real time without affecting your customer traffic.
Equally important are the customers talking to these systems. As the systems start changing, you might be adding new services like we talked about, or you might be adding new protocols: maybe you want to use gRPC, maybe you want to use GraphQL, maybe you want to use RSocket or some new RPC way of implementing services. Your users still care about the API. Your users still care that they have stability when they're talking to your applications.
Now, there's a whole slew of technology emerging in this space to solve these types of problems. Like I said at the beginning, I'm working on a book on Istio. How many people have heard of Istio? Just curious, just raise your hands. How many people have heard of service mesh? Same people, okay. So the service mesh pattern is a way of implementing these points of control, these network controls, at each application instance.
Basically, we deploy something that looks like an application gateway next to each application instance. If you're Java developers, that's the instance of the JVM; if you're on Node.js, that's an instance of a Node.js process. And when we have these points of control at each one of these application instances, we can get things like metrics collection, which, as I showed, is very important to being able to do this type of routing and progressive delivery. We can get very deep metrics collection from these gateways.
These proxies run outside of the application, so it doesn't matter if it's Java or Node.js or Go or Python or whatever: we're collecting the exact same metrics and we're implementing the exact same behavior. These application gateways (in this case I'll talk about a specific one called Envoy) can manage traffic routing, weighted routing, authentication, rate limiting, policy enforcement, and resilience features like timeouts, retries, and circuit breakers: all very powerful network control features and pieces of functionality that every application is going to need when built on a cloud platform.
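To make a few of those resilience features concrete, here is a sketch of how they look in Envoy configuration. The cluster name and thresholds are illustrative, not from the talk:

```yaml
# Illustrative Envoy snippets: per-route timeout and retries,
# plus per-cluster circuit-breaking thresholds.
routes:
- match: { prefix: "/" }
  route:
    cluster: service-a
    timeout: 2s              # fail the request if it takes longer
    retry_policy:
      retry_on: "5xx"        # retry only on server errors
      num_retries: 3
clusters:
- name: service-a
  circuit_breakers:
    thresholds:
    - max_connections: 100
      max_pending_requests: 50
```

Because these settings live in the proxy, every service gets the same behavior regardless of the language it's written in.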
Last week at KubeCon in Barcelona, the company that I work for, Solo, announced with Microsoft this new spec for service mesh. It basically provides an API for being able to do things like traffic control, metrics collection, security, and policy enforcement: all these things that are very important to lowering the risk of making changes to your system, but done in a service-mesh-abstract way, an implementation-agnostic way. And this is actually something that we at Solo have been working on for the last nine months.
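That spec is the Service Mesh Interface (SMI). Its traffic-control piece is a TrafficSplit resource; a minimal sketch, with illustrative service names (exact field shapes vary across spec versions):

```yaml
# Illustrative SMI TrafficSplit: shift 1% of traffic to a canary,
# regardless of which mesh implements the split underneath.
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: recommendations-rollout
spec:
  service: recommendations        # the root service clients address
  backends:
  - service: recommendations-v1   # current implementation
    weight: 99
  - service: recommendations-v2   # canary
    weight: 1
```

The point of the abstraction is that tooling can drive this one resource and let Istio, Linkerd, or another mesh do the actual routing.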
It's always nice to see the validation from a big cloud vendor like Microsoft, but we've been working on this for the last nine months on a project called SuperGloo, at supergloo.solo.io. It's a mesh federation and management system that's built on top of a service mesh interface, or service mesh API: the service mesh capabilities of routing, metrics collection, security, and policy enforcement.
These types of things are nice APIs that we can call, but something needs to call them. If we want to automate this ability to do traffic shifting and traffic routing, and build this de-risking of our deployments and architecture changes into our build pipelines, into our automation pipelines, we need something to drive this, something to call into these interfaces. And we can do that with extensions to the service mesh.
One of the extensions that I'll talk about today is Flagger, from Weaveworks. They've built an engine that automates canary analysis. So when you do this traffic shifting, 1%, 5%, 10%, watching the metrics, setting thresholds, that kind of stuff, we can do it in an automated way.
Now, the service mesh is very focused on service-to-service communication inside a cluster. So we have all of these different points of control within the applications within a cluster. A service mesh can span clusters, but that's typically done through a gateway, and even if you don't span clusters, if you want traffic to come into your cluster, into your mesh, you need some ingress mechanism, some ingress gateway. But like I said a little bit earlier, we need to be focused on network control as well as API stability.
So regardless of what's running in our cluster, we need stability. Let's look at how we might want to build this. These patterns aren't new. I worked at Zappos.com in 2012; Amazon was doing this kind of stuff back then, and probably long before I was there. So these patterns are not new, and you may be implementing some of them yourself, but I'm going to show you some technology that's been purpose-built for cloud platforms and dynamic environments, for things like Kubernetes, or running outside of Kubernetes in other workload-orchestration platforms. These open-source projects have emerged to solve exactly this type of problem.
So the first one I'll talk about, because it's foundational: if we talk about API gateways, next-generation API gateways, we're going to talk about Envoy. If we're going to talk about service mesh, we're likely going to talk about Envoy. So let's take a second to see what Envoy is. How many people have heard of Envoy, just curious, out of the crowd?
Envoy is a service proxy, or an application gateway, that originated at Lyft, the ride-sharing company. They built Envoy specifically for the purpose of going from their monolithic environment to a microservices-style environment, as well as getting visibility: getting metrics, getting telemetry, getting observability from the network, and building automated resilience, things like circuit breakers, timeouts, retries, and client-side load balancing, into the applications. The creator of Envoy, Matt Klein, spent his previous career at Amazon, at Twitter, and so on, and they built a lot of this infrastructure in the JVM. What he found was that for each hop in an infrastructure network, at super-high load and high throughput, there was this level of unpredictability, high tail latencies, that they weren't able to work around. And what he found was that writing a purpose-built piece of infrastructure like this in C++ helped bring some determinism to that infrastructure. So Envoy is a network filter at its core. It shuttles bytes and packets around, but it also has the ability to take plug-ins, so it can understand application-level protocols.
Now we can do this outside of the applications and get consistent behavior across any framework or any implementation of your service. Envoy does things like traffic shadowing, which I mentioned, and very fine-grained traffic routing, and it collects tons and tons of telemetry. You can filter it down, but it's better to have more and filter it down than to not have enough.
So Envoy plays the role of this application gateway: collecting metrics, allowing you to build your services architecture in whatever protocols and whatever style you want in the back end. But it doesn't necessarily focus on the API. We get this network control, but it's not very focused on that stable API that you might expose to customers. And just to point out: I've been a little bit vague, I haven't clearly defined exactly how this is deployed.
These are logical architectures that we're looking at right now, not physical deployments, because you could conceivably put your Envoy proxy, with API capabilities, right next to the monolith, deployed right next to the monolith, so that your applications, your services, your clients feel like they're talking to the monolith. But actually you have this point of control, and it's able to do this sophisticated routing without changing anything in the monolith.
So let's take a step sideways, I guess, and look at how we leverage Envoy: take all the good parts of Envoy, but then also get this ability to decouple your API and keep a stable API. Because the API is important regardless of where you're running your new services: that could be containers, it could be VMs, it could be functions as a service or these new types of compute from the public clouds. The API gateway pattern is the important part here.
Your application could be a bunch of different functions; it could be components written in gRPC and components written in HTTP or REST. How you bring some sense to this, and how you compose an API and decouple your API from what the rest of that system looks like, is where the API gateway pattern comes into play. And at Solo we have an open source project called Gloo, spelled G-L-O-O, that's built on Envoy and provides these API gateway capabilities.
So it provides you a way to do a clean separation of your API for your users. Gloo is different from Envoy in that it adds the capability to understand how to route to a specific function, and I'll go into a little bit more detail about what I mean by that, because when we're talking about APIs, we don't just care about a host and port.
You might have a service running on a particular host and port, and that's fine, but what about the specific paths for an API that you might have there? Or, if it's gRPC, what about the specific function calls that you can make? We need to be able to understand those, and Gloo can: it can understand those, route to those, and transform messages to match the shape of the APIs in the back end, while keeping a stable, decoupled API on the front end.
It does all the things that you would expect from an API gateway: authentication, OAuth flows, TLS (we can do TLS routing with SNI), rate limiting, caching. And when you deploy Gloo into Kubernetes, it's very Kubernetes-friendly: all the configuration, all of the management of Gloo, is done using Kubernetes CRDs. So if you're familiar with Kubernetes, it's just another Kubernetes-style configuration file that you deploy.
You don't have to manage any more databases to do this, because Kubernetes does all that stuff for you. So again, Gloo composes functions. When I say function, I mean a name that represents some capability, parameters that take some shape, and a response: they do some sort of activity and return a response. Now, these parameters could be objects of some type, and they have some kind of shape.
So when Gloo is taking an incoming request for a particular API, it needs to talk to the back end. Let's say we expose a REST API, because that's typically what we're going to expose, especially when we don't know who the consumers are, or we're exposing it for a web endpoint and so on. But what if we want to do gRPC on the other side? Envoy and Gloo can do that protocol transformation, both at the request level and for the protocol mediation, part of it with vanilla Envoy. Like I said, very powerful.
Okay, so here is our Gloo dashboard. Now, everything I'm going to show you in this demo is through the UI but, like I said, all the configuration for a Gloo proxy is done through Kubernetes CRDs. And while it's deployed on Kubernetes, it's not tied to Kubernetes; we have plugins for Consul, for using files on the file system, and so on, but it's just a lot easier on Kubernetes. So I'm going to show you through the UI. You can also use the CLI, and you can also use the YAML files directly.
The UI is a little bit more convenient for a demo. So what we're going to do here: we see we have our application deployed; if we refresh, everything looks good; Find Owners, we get some data. This is our application that we now want to extend, and we want to use the power of an application gateway as a point of control to be able to route to maybe new implementations of our services. So we have Veterinarians, which shows the name and the specialty.
If we click on Contact, there's nothing there (let me zoom in): it is not implemented yet, it's an error. So if we come over here to the Gloo catalog, we can see all of our services that Gloo has automatically discovered. Another component of Gloo can go out into your system, into your Kubernetes, into your Consul, into your Terraform resources, into your EC2 resources and so on, and discover hosts and ports, just like most discovery systems can do. But it also goes an additional level.
It'll say, for each one of these hosts and ports: I'm going to go look. Are there gRPC endpoints here? Are there REST endpoints here? Do we find a Swagger spec or gRPC reflection? Or, if we're in Google Cloud or Amazon: are there Lambdas, are there cloud functions, and so on? So here we do see some REST services, we see a REST-or-gRPC service, and then we see some that are regular applications running on Kubernetes.
So if we click on our default petclinic service here: what we want to do is add new functionality to Veterinarians, but we don't want to do that in the existing application, in the existing Java application. Maybe we want to do a new Spring Boot app. Josh Long showed a great demo yesterday of reactive Spring using RSocket, and like he said, maybe you want to do your new service in that. So with our Gloo gateway here...
What we can do is add a new route and say that any traffic going to /vets (if you can see that) we're going to route to our new service, petclinic-vets. So we've implemented this new microservice outside of our existing application, and we'll create this new route and we'll reorder it.
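In Gloo, a route like this is just another CRD that the UI writes for you. A rough sketch, where the apiVersion, field names, and upstream names vary by Gloo release and are illustrative here:

```yaml
# Illustrative Gloo VirtualService: /vets goes to the new service,
# everything else still goes to the monolith.
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: petclinic
spec:
  virtualHost:
    domains: ["*"]
    routes:
    - matchers:
      - prefix: /vets
      routeAction:
        single:
          upstream:
            name: default-petclinic-vets-8080   # discovered upstream for the new service
            namespace: gloo-system
    - matchers:
      - prefix: /
      routeAction:
        single:
          upstream:
            name: default-petclinic-8080        # the existing monolith
            namespace: gloo-system
```

Route order matters, which is why the demo reorders the new route above the catch-all.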
If we come back to our application and refresh the Veterinarians tab, we see now that our traffic is being shuttled off to our vets service, which has reimplemented this vets functionality and added new capabilities. So we haven't changed our existing application; we've kind of forked the traffic off and re-implemented that in a different service. Now we can do the same thing with Contact, but instead of calling services running in Kubernetes, what we can do is call an AWS Lambda for the contact form.
Gloo sees multiple versions of our Lambda; in this case we'll pick the latest version, and then I'm also going to enable response transformations. That's another thing I mentioned pretty quickly: Gloo can do automatic request and response transformations inside the proxy itself. So in this case, the Lambda is going to return us JSON, but since we're trying to display this on an HTML page, we want to grab a particular component of the JSON. So we'll enable the transformation here for the Lambda and we'll hit submit.
Let's reorder this one. Now, if we click on Contact, cross fingers, this goes out and calls an AWS Lambda function, and it returns with this functionality. And all of this is on top of a proxy like Gloo, a proxy like Envoy, that does things like authentication, authorization, rate limiting, caching, and request routing.
Things like Gloo try to operationalize Envoy, adding the capabilities of network control starting at the edge, where you can get a lot of value immediately, and it's familiar enough to your operations folks that they'll be able to get it. Now, what I'm going to show you next is how we take this idea of reducing the risk of making changes, implementing a canary release system and automating that, and do it by driving the API of an application gateway.
So in this case it'll be Gloo; it could be a full service mesh, it could be something else. But we're going to look at doing canary analysis and release with the Flagger project that I mentioned, from Weaveworks, which will be driving the traffic shifting and deployments for our application here. Flagger supports things like Istio and App Mesh directly, but it also supports SMI, the service mesh interface, and SuperGloo is kind of the reference implementation of that.
This is going to be a live demo, but you'll notice, just like last year, I have a script that automatically types for me, because I'm a terrible typist in front of a hundred people. Now, this very well could fail, so cross your fingers for me. What we're looking at here is: we see we have no applications deployed yet in our cluster. Is that big enough? Can you see that back there? All right.
So we don't have any applications deployed right now, but what we're going to do is deploy a very simple podinfo app. This application just exposes an HTTP endpoint, and when you call it, it returns the version and some details about the runtime and the pod itself. We're going to install a load generator as well, because when we start doing this canary analysis we need metrics; we need to see how this thing is behaving. So we'll use Helm to do that.
Real quick, and now, if we get our pods, we should see them start to come up. Now, if we call the podinfo service directly using curl, this is the type of information that we see. It's a very simple application, and you can extend it for your own use cases, but it's very simple: it returns the version, the hostname, some of these other things.
Let's go back up and we'll go through it. Basically, what it's saying is: if you detect any changes to our podinfo deployment, don't just automatically make them available to everyone. We're going to slowly and progressively control how we roll this out, both through doing the deployment (Flagger will automate the Kubernetes deployment) and also through controlling the traffic, in this case through Gloo.
So what we're going to do is, every ten seconds we'll check stats and see how things are going, and we will roll out up to 25% of the traffic to our new canary. So if we make a change and a new application gets deployed, we will slowly and incrementally build up to about 25% of traffic to our new canary release. At that point (and you can change this, obviously; it's all configuration, but for this particular demo) at 25% we're going to make our decision.
Do we go fully live, or do we roll back? We'll step by 5% at a time, and we will watch metrics, in this case in Prometheus, things like the request success rate and the request latency. These are all configurable; you can change these, and you can watch whatever metrics and whatever KPIs you want to determine good or bad and whether or not to continue, but in this case we're just using some of the defaults.
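The canary definition being described maps onto Flagger's Canary custom resource. A sketch using the numbers from the demo; the exact apiVersion and field names vary between Flagger releases, and the port and thresholds here are illustrative:

```yaml
# Illustrative Flagger Canary: 10s checks, 5% steps, decide at 25%.
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo            # watch this deployment for new revisions
  service:
    port: 9898
  analysis:
    interval: 10s            # check metrics every ten seconds
    maxWeight: 25            # promote or roll back at 25% of traffic
    stepWeight: 5            # shift 5% at a time
    metrics:
    - name: request-success-rate
      threshold: 99          # minimum % of successful requests
      interval: 1m
    - name: request-duration
      threshold: 500         # maximum request duration in ms
      interval: 1m
```

If either metric breaches its threshold during the rollout, Flagger shifts the weight back and the change never reaches the full user base.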
So let's add that, let's apply this. We just created this new canary deployment, so now Flagger, which is running in this cluster, is going to see that we created it, register it in its system, and start the canary process. Actually, in this case it's just going to register it until we make a change; we haven't made a change yet, so we'll give this a second to come to a stable state.
There we go: we are initialized. And if we look at the bottom pane, what we see is our client. It looks like it's not moving, but it is moving; it's just returning the same thing over and over and over. At the bottom we're calling our service, and we can see not much is changing; we're on the stable 1.4.0 version in this case. But things are changing: if we look up here, we can see all of the artifacts that Flagger created. Flagger created a podinfo-primary deployment.
That's what it's going to use to control the primary. If we look at the services, it'll show that it created a service for the primary and a service for the canary inside of Kubernetes. And if you're not super familiar with Kubernetes, the concepts of service and deployment hopefully are abstract enough that it makes sense.
If we look at our upstream group (upstream group is a Gloo term; this is how we control the routing between different services), we see 100% of the traffic is going to primary, which is what we see at the bottom. Now, let's change the podinfo deployment; let's change it to use a Docker image version of 1.4.1.
If we look at some of the events, we see that a new revision was detected, so that's good. Now, if we keep an eye on the canaries, we can watch the weight column: it's going to start incrementing the weight that it gives to the canary. So once that traffic starts (it always does this automatically, and it's using Gloo in this case), once it starts to shift the traffic, on the bottom we should see version 1.4.1 start to show up.
It's only 5 percent of the traffic right now; we're going to shift the traffic up to 25 percent, and at that point we'll go all the way to 100. We're at 10% now, and we do start seeing 1.4.1 show up in the results here. And again, it's looking at Prometheus, it's looking at the request success rate and the request duration, so it's looking at latency, and as long as it doesn't exceed a certain threshold, it's going to continue to roll out.
Now, this is pretty powerful. You can connect Flagger into your CI/CD system and use it. Like I said: start off with the edge, start off with ingress, gain some of these capabilities, understand what proxy and what proxy technology you're going to use, and then roll out to more of the capabilities that a service mesh might provide. So we see that it's reached 25%.
And I am slowly running out of time. Check this out; check out Service Mesh Hub at servicemeshhub.io. If there are any questions: these little plushies were so hot at KubeCon, everybody wanted one of these plushies; I think there's a secondary market for them right now. I have three of them, and if you ask questions, I can hand them out.
So in that case, each cluster would have its own API gateway, but you would use higher-level traffic management, L4 traffic management, to get traffic to that cluster. And then the workflow in this case, for keeping things kind of in sync and behaving similarly, especially if you have identical clusters, is going to be how you version and how you roll out those YAML files that you use to manage the configuration for the proxies.
So the question is: what if you have multiple sources of truth for Prometheus, for being able to do rollouts? Typically you would have a Prometheus per cluster, but Prometheus can also federate itself, so you can build a single pane of glass for Prometheus across multiple clusters as well. Any other questions?
You heard the question; he has a microphone. So, all of the configuration is YAML files, right, and the intended use of a system like this, including Kubernetes, including Istio, including others, is to build this into a GitOps-style workflow, where all of the changes are versioned. I did the changes up there and they were very dynamic; that's not what you'd do in production.
So the question was: could you control the deployment strategy through the application? This is an area where we're really interested: I'm really interested in defining configuration that is more suitable for developers, to be able to say, "I want the traffic routing to be this," or "I want the rollout for this particular application to be this." And for that API, that configuration, we're going to announce something in, I think, three weeks that will help solve these types of questions. Well, I'm out of time. I appreciate it.