From YouTube: Envoy Overview and Envoy Powered API Gateway - Gloo
B
Sure, sure, please, okay guys, thank you everyone for joining in. It seems there is some issue with the tool today for people joining, so we are having some issues there, but that's okay. We are also live-streaming on YouTube, for which I have given the link, so I think we should be good. Thank you for joining in, guys. So, let's get going.
This is the seventh meetup of the Kubernetes and Cloud Native online meetup, which we run from Bangalore.

In this meetup we are going to first cover Envoy, which I will cover for around an hour, and then we'll have Christian Posta, who is going to cover the Envoy-powered API gateway, Gloo.
Yeah, so that's me. A quick intro about myself: I am the founder of CloudYuga, and we provide training and consulting in the areas of Kubernetes, Docker and so on. I also authored the official Kubernetes course for CNCF, which has been taken by more than 100k participants. I'm also a CNCF ambassador and CKA certified as well.

Before starting CloudYuga I worked at Red Hat for a long time, where I played different roles, and I wrote a book on Docker in 2015, which is about five years back now. Before running this meetup, I also ran the Docker meetup group in Bangalore for some time.

Okay, so the agenda for my session today: we will look at an overview of proxies, then see how communication happens between microservices and Envoy, what topologies you have for Envoy, some of the configurations for Envoy, and then we'll do, I think, six or seven demos which I have planned. That would be the interesting part, I believe. Let's get going.
So what is a proxy in general? If you look at the term "proxy", it means the authority to represent someone else. That's the general meaning.

What you see on the screen is, for example, two hosts that have to communicate. Host one and host two can communicate without knowing each other if there's a proxy in between: host one communicates with the proxy, the proxy takes the connection on to host two, and then vice versa. The host just sends a request and somehow gets a response back.
There are two kinds of proxies. One is the open proxy, which has two types. The first is the anonymous proxy, in which the server does not know who the originating client is. Say there is a server on the internet, and a client that wants to access the website. If it uses an anonymous proxy in between, the proxy takes the connection, hands it to the server on the internet, and the communication happens, but the server does not know the IP of the client, because of the proxy in between. Then we have the transparent proxy, in which the proxy discloses the IP of the client as well.
Then we have the reverse proxy. In the previous case we tried to hide the client IP; in this case we do not tell who the server is. The client communicates via the proxy, and the proxy returns the reply from any one of the servers running behind it, so any of them can respond back. That's a reverse proxy: here we are hiding the details of the servers, which is why we call it a reverse proxy.

What are the use cases of a reverse proxy? For example, we can use it for load balancing, SSL termination, security, caching, compression and so on. For all those purposes we can use a reverse proxy, and it is very widely used: in a typical internet setup or inside companies, this is a very common thing we do these days. Okay, so what kind of software can you use as a proxy?
Some of them you can host yourself, like NGINX, HAProxy and Envoy; these are something you can configure on your own. Then you can also take it as a service, since some companies provide proxies as a service. For example, if you are in a country where some sites are banned, but you can access a proxy server, then the proxy can access the respective website on your behalf and take care of that for you.
That's the software point of view. Now, why are proxies important in today's microservices world? Let's try to cover that part. Generally we assume that if I am a service, for example a front end, and I want to communicate with the back end or any of the services behind the scenes, then we assume service B might be just one hop away: I can just make a call to service B and I would reach it. But in the real world this might not be true, because service B can be in a different cluster, a different region, or a different cloud. It can be anywhere, while service A just wants to communicate with service B.
That's the assumption we make, but things come into play in between. There might also be multiple instances of service B, so which one do you connect to, and how do you load-balance between them? These are features to take care of in today's microservices scenario. Apart from this, you also need to handle things like timeouts, retries, authentication, circuit breaking, observability, and the other protocols which you might have in today's world.

So how do you handle all of these things? Say I am going to communicate with service B, and service B fails the first time; I will try a couple of times. That's called retries. I also want to implement timeouts, saying that I need to get the result within a specific number of seconds. How would I get all of that as well?
All these things need to be implemented in today's microservices world, so how can we handle them? One way is a language-specific implementation. For example, you could say: I am writing all of my code in Java, and for Java I would use the tools listed here, mostly from Netflix. They built these tools to provide the expected functionality we are looking for here: timeouts, retries and so on. You can use this language-specific stuff and achieve these things, and it has happened before: Java has had all of this, and we could do it this way. But this does not fit the current scenarios, because we write our code in different languages, we have different platforms to deploy the application on, and so on.
So this might not fly in today's world. What we can do instead is have something which is independent of language and platform. I want to have service A and service B, but I want them to communicate in a way that is transparent to them. Service A says it wants to communicate with service B, but in between we implement some kind of proxy, or a layer, which gives me all the features I mentioned: timeouts, circuit breaking and so on. If those features are in place in this proxy, I can transparently put it in front of any service I have, and that's what Envoy is.

If you talk about it in terms of Kubernetes, you might know about sidecar containers: I would deploy a pod for my primary app, and along with the primary app I deploy a sidecar proxy, and that proxy implements the features without needing to know anything about what the service does.
So basically you deploy any simple app, but you add the sidecar, and that sidecar takes care of providing all the features, like circuit breaking, timeouts and all that stuff. That's what Envoy is, and Envoy is being used very extensively in service meshes these days. So let's look at some of the features of Envoy, and then we'll dive deep into it.
It has an out-of-process architecture, which means your app does not need to worry about anything at all. All the features come built into Envoy; you just attach it as a sidecar and that's it. Your app need not change anything at all; whatever app you have, it just runs as-is.
We can have different filters at the L3/L4 layers, which I'll talk about in detail, and then we also have L7 filters, using which I can identify the network packets, saying this packet is TCP, or this one is for MySQL, and so on. Based on that, I can detect what kind of packet or traffic I am getting, apply filters on it, and then continue from there.
It has first-class HTTP/1.1 and HTTP/2 support, layer-7 routing, gRPC support, and protocol-specific support for things like MongoDB, MySQL and DynamoDB as well. I'll give you a demo of MySQL today, which should clarify any doubts about this.
For service discovery with Envoy, we can basically attach whatever we have. For example, if we are on Kubernetes, Envoy can connect to Kubernetes and get all the service endpoints via the API, and so on. There are different ways you can do discovery with Envoy, which I'll talk about.
B
You
can
configure
this
thing
as
a
front
proxy
or
an
edge
proxy.
So,
for
example,
if
you
have
a
network
and
the
traffic
is
coming
from
there,
you
can
have
and
we're
sitting
on
top
of
that
network
and
all
the
traffic
goes
in
from
there.
A
B
There are different topologies in which Envoy can be configured. One simple option is to have it as ingress and egress: any traffic coming to your application comes via Envoy, and similarly any traffic going out of the application goes via Envoy, so Envoy can do both ingress and egress. Then you can do load balancing with Envoy: if you have multiple replicas of your application, Envoy can load-balance between them.
B
You
can
configure
ny
for
for
your
english
and
egress.
So
as
I
discussed
right,
if
you
have
your
network
and
you
can
kind
of
put
ny
on
top
of
that,
so
all
the
traffic
can
come
via
envoy
and
then
similarly,
you
can
have
an
on
the
egress
rule.
So
all
the
traffic
would
route
via
the
the
ny
as
well.
Depending
on
your
configuration,
you
can
have
a
setup
in
a
hybrid
mode
as
well,
where
you
can
have,
of
course
ingress
egress.
B
Then
you
can
have
a
load
balancing
also
happening
with
envoy
and
then
within
a
given
service.
You
can
also
deploy
an
envoy
which
can
have
internal
communication
between
the
services
as
well,
so
you
can
configure
ny
and
different
topologies,
whatever
what
we
are
seeing
here
and
then
you
can
have
multi-year
setup
setup
as
well,
where
you
kind
of
get
the
traffic
from
one
of
the
invoice
ingress
and
then
you
can
have
multiple
invoice
in
place.
B
B
Okay, the architecture of Envoy looks something like this. When you make any connection to your Envoy, a worker thread is assigned to it, and that worker thread handles that entire connection from in to out. Then we have something called a listener. Any traffic comes from a downstream; the downstream is nothing but your user, for example your browser, or any other service which wants to come and communicate with your app via Envoy. That becomes the downstream.
From there you get into your Envoy, and you have these listeners. A listener is nothing but a port number or socket through which you get into Envoy, and attached to a listener we have filter chains; you can have multiple filter chains. For example, you may want to first detect MySQL traffic (are there any MySQL packets here?), and then you may want to look at the TCP stuff in it, so you can configure these as different filters. Once those filters have been passed, you send your traffic to clusters. A cluster is nothing but a logical collection of one or more endpoints.
B
Those
are
called
as
upstream
so
upstream
is
something
which
is
your
and
thing
so,
for
example,
if
you're
running
a
pod
or
a
container
that
becomes
upstream,
so
your
connection
come
from
downstream,
which
is
your
client
and
then
whatever
is
your
server
of
serving
entity
that
becomes
upstream
for
you
right.
So
I
kind
of
go
detail
in
each
one
of
them
in
a
minute.
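As a rough sketch of how these four terms show up in an Envoy configuration file (using the v3 API; names such as "my_listener" and "upstream-host" are illustrative placeholders, not from the talk):

```yaml
# Minimal skeleton of a static Envoy (v3) bootstrap config.
static_resources:
  listeners:
  - name: my_listener            # where downstream connections arrive
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:               # filters applied to each connection
    - filters: []                # e.g. tcp_proxy or http_connection_manager
  clusters:
  - name: my_cluster             # logical collection of upstream hosts
    connect_timeout: 1s
    type: STRICT_DNS
    load_assignment:
      cluster_name: my_cluster
      endpoints:                 # the actual upstream addresses
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: upstream-host, port_value: 80 }
```

Downstream traffic enters at the listener, passes through the filter chain, and is handed to a cluster, which load-balances across its endpoints (the upstream).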
Okay, so let's talk about listeners now. Listeners are nothing but the endpoints you connect to. For example, if you configure an Envoy, you can have multiple listeners: one can listen for HTTP only, another can listen for TCP. The listener creates the connection and passes the request to the filter chains, which I'll talk about in a minute.
From the listeners, each listener has a chain of filters. A filter is nothing but a way to look at the packets: you can see what's in them and decide what to do with them. For example, you can say: if I am getting an HTTP packet with some host header, I want to change it. Those are the kinds of filters you would write here. You look at the network packet, identify what kind of packet it is, and then do something with it.
B
So,
for
example,
here
on
the
left
that
you
see
there's
a
filter
chain
here
in
this,
I
have
two
filters
here:
one
filter
is
for
mysql
another
filter
for
tcp,
so
I
would
have
like
one
filter
here
for
my
sequels
I'll
look
at
for
mysql,
stuff
and
I'll.
Tell
you
what
to
be
done
with
that
mysql
packets.
If
there
are
any
right
and
I'll,
see
you
for
that,
tcp
thing.
What
would
I?
B
What
should
I
do,
the
tcp
stuff
also,
so
I
can
write
these
multiple
filters
one
after
another,
and
eventually
from
that
I
would
go
to
clusters
just
about
clusters
now.
So
clusters
are
nothing
but
a
collection
of
similar
upstream
host.
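A filter chain like the one just described (a MySQL inspection filter followed by a TCP proxy that hands the bytes to a cluster) could be sketched roughly like this in Envoy v3 YAML; the stat prefixes and the cluster name "db" are made up for illustration:

```yaml
filter_chains:
- filters:
  - name: envoy.filters.network.mysql_proxy   # inspect MySQL traffic and emit stats
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.network.mysql_proxy.v3.MySQLProxy
      stat_prefix: mysql
  - name: envoy.filters.network.tcp_proxy     # then forward the raw bytes upstream
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
      stat_prefix: mysql_tcp
      cluster: db                             # a cluster defined elsewhere in the config
```

The filters run in order: the MySQL filter observes the protocol, and the terminal TCP proxy filter decides where the connection ultimately goes.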
B
So,
for
example,
you
can
think
of
a
kubernetes
service
can
be
a
cluster
kind
of
stuff
right.
So
a
cluster
would
be
like
that
so
like
here.
What
we
are
saying,
I
would
come
to
the
listener
here.
I
would
go
over
my
filter
chains
here
then.
I
would
hit
two
clusters
here
now
here.
If
you
look
at
the
cluster
is
like
a
block
so,
for
example,
suppose
you're
on
kubernetes
and
you
deployed
a
wordpress
app
and
there
is
a
front
in
the
back
end
front.
B
End
is
with
your
wordpress
application,
and
the
backend
is
from
mysql
right,
so
one
cluster
I
would
have
of
type
block
which
would
connect
to
my
block
service.
I
have
one
more
cluster
called
mysql
cluster
will
talk
to
my
mysql
endpoint
here,
so
you
kind
of
come
in
go
by
the
listener,
then
filter
states,
and
then
you
will
kind
of
come
to
your
clusters.
B
So
clusters
are
independent,
independent
of
listeners,
so
I
can
have
clusters
of
different
types
and
from
my
filter
chain.
The
traffic
can
go
to
any
one
of
these
clusters
and
after
that
I
would
go
to
end
points
eventually
that
what
is
my
end
point
towards
the
end,
where
I
will
connect
so
end
point
is
where
exactly
kind
of
go
and
hit
like
in
this
case.
The
end
point
is
the
wordpress
and
the
dbm.
Now suppose this is deployed on Kubernetes, so we have service discovery within Kubernetes itself. Envoy can connect to Kubernetes, find out what services I have there, and, depending on my configuration, communicate with them. So here I am going to connect to my WordPress or DB service, and eventually on the back end I would have the pods, as I typically do. For Envoy, the endpoint is the service it communicates with: my traffic comes to the listener, I go over the filter chains, then based on the filter chains I go to the specific clusters, and from the cluster I get to the respective endpoint. This might not be very clear yet.
B
Let's
look
at
some
demos
and
they'll
become
much
more
clear
here,
okay,
so
so,
for
now
we
talked
about
a
few
terminologies
here.
Listeners,
filter
chains,
clusters
and
end
points
there
right
now.
How
do
we
configure
our
ny
for
doing
that
thing
right?
I
can
kind
of
either
always
build
this
kind
of
entire
stuff
manually.
Where
I
start
with
listener,
I
would
define
my
filter
chain.
I
would
define
my
clusters
and
from
there
I'll
go
to
my
end
points
there.
So
I
can
do
all
of
this
thing.
B
Configuration
manually
right,
that's
one
or
I
could
kind
of
do
it
dynamically,
that's
what
we
call
as
xds,
so
as
you
would
know
that
we
use
ny
heavily
with
istio
and
I'm
not
going
to
write
any
one
of
this
configuration
file
which
I'll
show
you
in
a
minute
manually
right.
So
there
should
be
some
way
by
which
I
would
configure
these
things
dynamically.
B
So
for
that
purpose,
ny
gives
you
this
xds
endpoints
and
we
have
different
discovery
services
for
that
listener,
route,
cluster
and
endpoint,
and
basically
I
would
kind
of
make
the
calls
of
these
discovery
services
and
then
I
would
kind
of
build
my
ny
flow
dynamically.
There.
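A dynamically configured Envoy replaces most of static_resources with pointers to a management server. A hedged sketch of what such a bootstrap might look like (the control-plane address and cluster name are assumptions, not from the talk):

```yaml
dynamic_resources:
  # Listener (LDS) and cluster (CDS) definitions are fetched over gRPC
  # from a management server instead of being written by hand.
  lds_config:
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
      - envoy_grpc: { cluster_name: xds_cluster }
  cds_config:
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
      - envoy_grpc: { cluster_name: xds_cluster }
static_resources:
  clusters:
  - name: xds_cluster              # bootstrap cluster: where the control plane lives
    type: STRICT_DNS
    connect_timeout: 1s
    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        explicit_http_config:
          http2_protocol_options: {}   # xDS uses gRPC, so HTTP/2 is required
    load_assignment:
      cluster_name: xds_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: control-plane, port_value: 18000 }
```

This is how control planes such as Istio's drive Envoy: only the address of the management server is static, and everything else is streamed in.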
For example, if you resolve a URL like google.com and it gives you, say, multiple IP addresses back, how would you take those as endpoints for your service? Those kinds of things are defined here. You can always use the discovery services, of course, and you can also write custom clusters if you want custom logic for your cluster creation. By cluster, I mean how I define the logical collection of my endpoints.
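The google.com example maps onto Envoy's cluster discovery types. A STRICT_DNS cluster, for instance, treats every IP that DNS returns as an endpoint; a sketch (the cluster name and port are illustrative):

```yaml
clusters:
- name: google_backend
  # STRICT_DNS: every address returned for google.com becomes an endpoint
  # and traffic is load-balanced across all of them. LOGICAL_DNS would
  # instead use only the first returned address for each new connection.
  type: STRICT_DNS
  connect_timeout: 1s
  lb_policy: ROUND_ROBIN
  load_assignment:
    cluster_name: google_backend
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: google.com, port_value: 443 }
```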
Now there are a few questions coming up on the screen. Sure, I'll take them in a minute; or maybe let's take them now. Okay, the questions are being handled there, that's good. Then let's continue.
Okay, so if you put everything together, this is how it looks. It may look complex now, because we're just getting started, but if you pay attention, I talked about a few terms; let's try to place them here.

Then there are read filters: you read through whatever filters are there, and from there you go to your HTTP connection manager. Actually, one thing I missed; let me talk about it briefly. Of course we have these L3/L4 filters.
B
We
also
have
one
specific
filter
called
http
connection
manager,
because
http
is
very
important
part
of
our
day-to-day
life
right,
so
they
have
specifically
built
an
http
based
filter
which
can
which
further
has
http
only
specific
filters.
So
you
have
an
http
creation
manager
which
has
a
htt
specific
filter.
Also
so
I'll
talk
about
that
in
a
minute.
B
So
that's
what
this
x7
hdb7
l7
filter
here
and
from
there
I
will
kind
of
go
to
the
read
filter
of
http
7
or
for
l7.
Here
then,
I
would
kind
of
go
to
my
backend
service
on
the
backend
service.
My
traffic
would
come
back.
I
would
go
to
the
right
filters,
then
come
back
here
and
then
again
I'll
come
back
to
the
right
filters
of
l4
layer
and
come
back.
B
Okay!
So
now
we
look
at
different
demos,
so
let's
kind
of
see
that
in
place
now
this
would
be
most
interesting
section.
I
believe
so.
Let
me
just
get
this
screen
right
here.
B
On
the
screen,
this
is
something
that's
a
tool
what
we
have
built
for
our
training
purpose,
but
you
can
completely
ignore
that
what's
happening
here
is
whatever
that
stuff.
I'm
going
to
do
it
here.
That
is
going
to
run
on
my
docker
desktop.
So that's all there. I'm going to go over these demos, and with them I will connect to the Docker Desktop running on my workstation and do the labs there. Okay, so let's look at the first demo. We want to build a simple TCP proxy, in which I make a connection to my Envoy. So let me show you the config.
Here, first of all, I am building this statically, so I am going to put static resources here, because I am not building it dynamically. Now I have a listener, and in this listener I am saying: bind to any IP address of the host, on port 8080. So I have defined a listener, and then I would define a filter chain. So let me define the filter chain.
B
So
then
I'm
saying
I
have
this
filter
chain
and
I'm
putting
a
tc
proxy
as
my
filter
chain.
So
I'm
going
to
kind
of
look
at
all
of
my
tcp
packets
coming
in
on
to
my
environment
right
and
then
what
I
define.
So
what
we
have
done
is
we
have
defined
the
listener.
B
We
have
defined
the
filter,
in
which
I
am
saying.
I
want
to
look
for
all
the
tc
packets.
Then,
if
you
look
at
here
we
are
defining
the
web.
So
I
am
saying
look
at
the
the
web
cluster,
so
I'm
creating
a
web
cluster
here.
So
we
are
saying
any
packet
which
is
coming
via
this
filter
has
to
go
to
the
cluster
called
web.
There is only one cluster of mine here, called web, and for it I have defined my endpoints. We are saying: if my traffic is coming to this web cluster, route it to two of my endpoints, going either to the endpoint called service1 or to the endpoint called service2. So look at what's happening: my traffic comes into my Envoy, which listens on port 8080; from there the TCP filter applies, and I go to the web cluster. That cluster has two endpoints, so whenever traffic comes from my downstream, it will switch between service1 and service2.
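Put together, the demo configuration being described would look something like this; a sketch, where the hostnames service1/service2 and the listener port follow the description above:

```yaml
static_resources:
  listeners:
  - name: tcp_listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }  # any host IP, port 8080
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: ingress_tcp
          cluster: web                    # all TCP traffic goes to the web cluster
  clusters:
  - name: web
    connect_timeout: 1s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN                # alternate between the two endpoints
    load_assignment:
      cluster_name: web
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: service1, port_value: 80 }
        - endpoint:
            address:
              socket_address: { address: service2, port_value: 80 }
```

With round-robin load balancing, successive connections from the downstream land alternately on service1 and service2, which is the switching behaviour the demo shows.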
Okay, so it's building my Envoy configuration there, and then let me just bring it up; let me just do it quickly. Okay, now what I need to do is go and access these URLs from my workstation, so let me come to my workstation here.

Okay, I think I'll just maybe give it a minute before I can look it all over.

Okay, fine. So this is what I wanted to show you with our TCP proxy here.
Okay, let me give it a minute. So in this demo, what I wanted to show is that you hit the downstream, you go to a TCP filter, from there you hit this web cluster, and my traffic switches between the two services. I have configured an NGINX for each, so my connection would switch between those two services. That's the first thing I wanted to show.

The second example I want to show is with the help of an HTTP filter. Again we have the same two services, but instead of only hitting the specific URL, I would hit /service/1 or /service/2, and based on that I will go to the right endpoints of mine.
Yes, let's go over the HTTP one. I'll go to the configs, and look at this Envoy configuration. What we're discussing here is the HTTP filter, using which I want to hit my URL plus /service/1 or /service/2. What we are saying in this case is: I come into my Envoy via the listener, and from there I look at the match here.
What happens is: you have this route configuration, in which, if your prefix is /service/1, you go to the cluster called service1; similarly, if your match is /service/2, you go to the cluster service2. That's how it works: depending on the suffix of the URL, you go to the right cluster, and in the cluster I have the different endpoints, which are going to be my service1 and service2.
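The prefix matching being described lives inside the HTTP connection manager's route configuration; roughly like this, where the paths and cluster names follow the demo and the rest is a sketch:

```yaml
- name: envoy.filters.network.http_connection_manager
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
    stat_prefix: ingress_http
    route_config:
      name: local_route
      virtual_hosts:
      - name: backend
        domains: ["*"]
        routes:
        - match: { prefix: "/service/1" }   # the URL suffix picks the cluster
          route: { cluster: service1 }
        - match: { prefix: "/service/2" }
          route: { cluster: service2 }
    http_filters:
    - name: envoy.filters.http.router       # terminal filter that forwards the request
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```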
So if I come here and hit localhost on 8080, I am going to get this page, and if I hit it again, maybe because of caching I won't see the change; but if I open an incognito window and hit localhost:8080, I will get the other page. Depending on which one you hit, you get that output. At least one of our demos works; let me bring it down.
B
Yeah
we
have
some
time
left,
hopefully
we'll
finish
all
the
demos
here.
Okay,
I
think
it's
not
going
to
work
out.
Let's
continue
further.
The
other
example
which
I
had
with
the
help
of
front
front
proxy
here.
So
what
we're
trying
to
configure
is.
I
want
to
kind
of
get
my
connection
in
from
the
external
world.
Why
are
the
envoy
and
then
I
would
have
envoy
within
my
application
as
well,
like
you
have
a
site
car
which
contains
your
ny
as
well.
So
what
happens
think
about
this
particular?
B
Is
your
entry
point
for
your
network,
so
my
traffic
would
come
in
to
this
ny
here
then
I
would
have
the
filter
chains
for
that
for
http.
Then
I
would
kind
of
go
to
my
app,
which
contains
another
envoy
and
from
there
I
will
go
to
my
flask
application
there.
So
that's
how
the
traffic
is
going
to
look
like.
So
let
me
show
that
thing
for
that,
as
well.
B
B
B
Okay. Then, if I have scaled one of my services with Envoy, what would happen? In the previous case, if I hit /service/1 I get the output from this particular Flask application, and if I hit port 8080 with /service/2, I get served from here. These are my two different endpoints for these applications.

Now what I'm saying is: I want to scale just my service1, so now my service1 would have three endpoints of its own. If I hit the URL /service/1, I would get a reply from any one of these. That is what I want to showcase here: you can try it out with front-proxy scaling.
B
Which
I
want
to
kind
of
show
is
with
the
help
of
tracing
so,
as
you
know
that
you
can
have
the
tools
like
jager
and
some
kind
of
stuff
there
using
which
you
can
trace
your
entire
workflow
right.
So,
for
example,
if
you
have
a
microsoft
based
application
and
your
traffic
come
into
your
cluster
and
then
you
want
to
kind
of
track
where
the
packets
are
or
where
my
request
is
going.
B
So
there
are
things
tools
like
jager
and
zipkin,
which
can
help
us
trace
that,
but
they
need
some
input
data
right,
how
they
will
kind
of
build
the
graphs
for
you.
So
for
that
purpose
you
would
kind
of
again
use
ny
for
that
ny
has
this
jager
filters
or
tracing
filters
inbuilt
and
with
those
filters
as
the
packet
comes
in,
you
can
put
a
kind
of
stamp
on
that
and
that's
term
you
can
kind
of
walk
across
watch
across
the
network
and
see
what's
happening
there.
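Enabling that stamping in Envoy amounts to adding a tracing provider to the HTTP connection manager. A sketch using the Zipkin-compatible endpoint that Jaeger exposes (the cluster name "jaeger" is an assumption for illustration):

```yaml
# Inside the http_connection_manager's typed_config:
tracing:
  provider:
    name: envoy.tracers.zipkin
    typed_config:
      "@type": type.googleapis.com/envoy.config.trace.v3.ZipkinConfig
      collector_cluster: jaeger               # a cluster pointing at the collector
      collector_endpoint: "/api/v2/spans"     # Jaeger accepts Zipkin v2 spans here
      collector_endpoint_version: HTTP_JSON
```

Envoy then generates and propagates trace headers on each request, which is the input data the tracing backends need to build their graphs.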
Let's take the front proxy first. I mentioned there are two kinds of proxies; one sits at the front, so this is the front proxy, if you can picture it. From there, if my traffic is going to /service/1, I go to service1; if my traffic is coming to this port and IP on /service/2, I go to that service. That's how the traffic is going to come, and from there, as I told you, I am going to go to one more Envoy.
This is my cluster for my service1, and if I look at it from there, I go to my service1 container on port 8000. And if I look at the service Envoy, there I have one more Envoy configuration, which says: take any traffic coming on port 8000 to localhost on port 8080.
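So the sidecar Envoy's own config is essentially a small forwarding rule; sketched here with the ports taken from the description (names and the TCP-proxy choice are illustrative):

```yaml
static_resources:
  listeners:
  - name: service_ingress
    address:
      socket_address: { address: 0.0.0.0, port_value: 8000 }  # traffic from the front proxy
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: local
          cluster: local_service
  clusters:
  - name: local_service
    connect_timeout: 0.25s
    type: STATIC                   # the app shares the sidecar's network namespace
    load_assignment:
      cluster_name: local_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 8080 }  # the local app
```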
That's the flow of my calls. I wanted to give you all the demos, but somehow it's not possible today; maybe I'll record and share them. Okay, the last example I want to give you, without taking too much time: let me see if I can get it running for you.
Okay, so what's happening in this case is that I have deployed a WordPress application, and for that WordPress application I'm definitely doing the HTTP part, but I also want to show the MySQL filter, and how the MySQL filter can work with our application. If you look at what's happening, I have deployed three containers: a WordPress container, a DB container, and an Envoy container.
One listener is on port 8080; the other listener is on some other port, I think 3307, so I'll send my database traffic there, and from there I send the traffic to my DB. What is the benefit of this? I'm going to collect all of my MySQL metrics as well, and then I can see how my MySQL is behaving. I can have filters like this for MongoDB and so on. So I hope this example comes up.
B
I
really
don't
know
what's
happening
here
today:
okay
I'll,
try
to
see
if,
towards
the
end,
I
can
show
the
demo
for
that,
but
that's
how
I
kind
of
show
the
demo
for
you
for
this
thing
and
you've
also
configured
a
grafana
and
prometheus
for
that.
So
I
can
kind
of
really
show
the
plot
for
you
that
if
I
kind
of
make
a
query
to
my
wordpress,
I
get
the
grafana
dashboard
for
that.
B
So
I'll
try
to
cover
that,
maybe
towards
the
end
of
this
after
the
christian
session,
if
possible,
I
can,
I
could
get
it
running.
Are
there
any
questions
for
me
if
they're
not
being
answered
there.
B
Okay,
thank
you,
but
there
is
it
is
any
questions
left
for
you
guys
just
put
them
in
the
chat.
Thank
christian
for
helping
me
out
there
I'll
try
to
kind
of
see
if
I
can
show
the
demo
towards
the
end
of
your
session.
B
So
I
think
you
can
take
over
now
christian
and
then
let
me
I'll
try
to
show
the
demo
towards
the
end.
A
Okay, thank you. Thank you.

Yeah, you can see behind me, I'm prepared for the worst. All right. First of all, thank you all for having me at this meetup. I'm excited to talk about these technologies and Envoy, and the presentation that you had there about Envoy was perfect. You know, it gave people an idea of what it is, what it takes to configure it, what some of its capabilities are, and so forth.
A
So
my
name
is
christian.
I've
I
work
at
a
company
called
solo.io
right
now
been
here
for
a
year
and
a
half
or
a
little
over
a
year
and
a
half
before
that.
I
worked
at
red
hat
before
that
I
worked
in
an
open
source
integration,
company
called
fusesource
I've
been
and
before
that
at
enterprises
and
so
forth.
A
I
did
spend
some
time
at
zappos.com,
where
I
got
to
work
hands-on
and
up
close
with
amazon
and
and
and
zappos
at
the
time
right
and
got
to
see
what
devops
and
microservices
and
all
this
stuff
was
before
it
became
all
this.
You
know
trend
in
our
industry.
This
was
back
in
2012
and
from
there
you
know,
I
built
a
lot
of
understanding
of
how
this
stuff
works
at
a
high
at
a
high
web
scale,
company
and
some
of
the
maturity
around
their
automation
and
the
processing
and
so
forth.
A
So
I
you
know,
through
my
work,
at
red
hat,
working
on
early
days
of
kubernetes,
very,
very
early
days
of
the
istio
project
and
so
forth.
You
know
bringing
this
type
of
infrastructure
both
into
open
source
and
to
enterprise
and
to
companies
to
help
adopt
that,
and
so,
when
I
left
red
hat,
I
wanted
to
go
to
a
company
that
focused
exactly
on
this
problem,
connecting
applications
together
in
a
reliable
way
in
cloud
environment.
And
so
that's
that's.
What
we're
doing
at
solo.
A
I've written some books, as you can see, on microservices and Istio. I'm currently writing the Istio in Action book for Manning. I do encourage you — because there were some questions in here, and just general uncertainty or confusion in the ecosystem around service mesh, API gateways, and Envoy.
A
Some of the comparisons between the two and so forth — I've done my best to try to share some clarifying information. So if you go to my blog: I've been writing about this stuff since the very beginning of 2017, specifically about Envoy and this technology.
A
So, like I said, check out some of these blogs about Envoy — comparing these things to API gateways and ESBs, and actually putting them to work in a microservices environment, reducing risk, and so forth. Take a look at that, and reach out to me on Twitter.
A
I also share videos and blogs and updates on my book and so forth on Twitter. But I'm happy to be here talking to you. Like I said, I work at Solo.io, whose mission is to build this application networking infrastructure that allows services to talk to each other when deployed in cloud environments. If you just take a look at Kubernetes, for example: Kubernetes has, you know, kind of won the container orchestration wars, but what is it? It's a foundational layer for deploying containers.
A
Once you put applications into Kubernetes, these applications need to do things. They need to talk with each other; they need to implement some level of reliability and security and so forth at the application layer, and that's where technologies like Envoy start to come in and fill the gap. So it's a layer on top of Kubernetes, and Envoy by itself, as we saw, is very powerful — but Envoy itself isn't the final solution.
A
The goal is ultimately to build your platform in such a way that you can deploy applications regardless of what language they're written in.
A
So Envoy is this little piece that can help build this application networking abstraction. Now, what is this application networking abstraction? One of the important pieces is that it needs to be decentralized. We've seen application networking functionality in the past in the form of, let's say, messaging-based systems — message brokers. With message brokers you could get a decoupling of the caller and the server.
A
So you can get that transparency — it's kind of like service discovery, because you don't know exactly where the server is. You get things like reliable (to an extent) delivery, retries, and load balancing: you connect multiple consumers to a queue and you can get load balancing and so forth. But it's highly centralized. Same thing we saw with the ESBs that were often built on top of messaging brokers, or with API gateways. We said: let's build APIs and expose them in the API economy.
A
But what people ended up really doing is taking the API gateway and putting it in the middle of their application architecture — because it can do security, because it can do some routing and self-service and so forth. They put it right in the middle of their application architecture, and everything went through this API management solution. So what we're trying to move toward when we move to cloud is decentralized processes — GitOps.
A
These types of workflows used to be run by highly centralized teams: if you want to make a change, you have to file a ticket and so forth. So the problems are not all that new, but the implementation — how we build this in a highly decentralized, highly ephemeral cloud infrastructure, where nodes could be coming and going and pods are coming up and scaling and so forth — the implementation needs to be decentralized.
A
It also needs to be amenable to decentralized processes — the people, how we organize our people, and how we use this technology and so forth. So, coming back: Envoy provides the kernel, or the foundational component, for building an architecture that ends up solving those problems, and that's what we're going to look at today — specifically, some of the things that we're doing at Solo.
A
So why Envoy? I saw some questions in the chat there: why Envoy? And the answer is pretty straightforward, actually. The number one thing is the community. The community has come around and coalesced around Envoy. The innovation in layer-7 application networking technology is happening in the Envoy community, and it's happening with input from massive companies like Google, Apple, and Verizon.
A
You can see Amazon, and you can see some of the companies on the right-hand side of the slide. The innovation that we saw at Google is being poured into Envoy, along with the innovation at Netflix and eBay and so forth. That's the central place where people are coming together to push on this, and we'll look at some of the fruit of that innovation in a second.
A
This was touched on in the previous talk, but it's a foundational piece: being able to extend Envoy, with a community that will help you do that, is incredibly important, because we want to support all these different technologies and all these different use cases — with extension points, APIs, and a modern code base that enables that extension.
A
So, for example, an example of this innovation and extension in Envoy that's happening right now is around WebAssembly. WebAssembly, which kind of grew up in the browser, is a way of dynamically injecting your custom code into a target endpoint — either the browser or, in this case, Envoy. So we can extend Envoy by writing WebAssembly, and with WebAssembly you can write in almost any language: it compiles down into WebAssembly, and then you can deploy that into Envoy.
A
Some people in telco have these highly custom protocols, or you need a certain logger for your access logs, and so extending Envoy has been a foundational principle from the beginning. You used to have to do that by actually extending the C++ code, but now you can just do it with WebAssembly.
A
The last thing, which was definitely touched upon in the previous talk, is that Envoy was built to be dynamic — to change, and to be driven by an API, not by file configurations and hot reloads and potentially dropped connections, the way things worked 20 years ago when we didn't have this dynamism. Envoy was built from the ground up, as we saw, with the different xDS services: Envoy talks to an API and gets the information about what services it can route to and what the routing configs are.
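To make "driven by an API" concrete, an Envoy bootstrap can point listener and cluster discovery (LDS/CDS) at a management server instead of a static file. The sketch below is illustrative only — the control-plane address is a placeholder, and exact field names vary across Envoy API versions:

```shell
cat > bootstrap.yaml <<'EOF'
# Minimal dynamic bootstrap sketch: listeners and clusters come from an xDS server.
node:
  id: edge-proxy
  cluster: edge
dynamic_resources:
  lds_config:
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
        - envoy_grpc: { cluster_name: xds_cluster }
  cds_config:
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
        - envoy_grpc: { cluster_name: xds_cluster }
static_resources:
  clusters:
    # The one thing configured statically: how to reach the control plane
    # (address and port here are placeholders).
    - name: xds_cluster
      type: STRICT_DNS
      connect_timeout: 1s
      load_assignment:
        cluster_name: xds_cluster
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address: { address: control-plane.example, port_value: 18000 }
EOF
envoy -c bootstrap.yaml
```

Everything else — routes, upstream clusters, policies — can then change at runtime without restarting the proxy.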
A
Envoy also learns the security policies and so forth, and all of it can change dynamically. Now we also see some of these other things — very deep telemetry collection and so forth. Using those principles as the foundation for Envoy, we can build pretty powerful networking technology. So let's take a look at what that might look like. Sorry, I'm going to bounce around a little bit, because I want—
A
I want to tell a story naturally, and I'll use the slides to help, but I want to speak freely here. So what are some of these connectivity challenges, and how would Envoy fit into this picture at all? This is core to our ethos and our competency here at Solo: our background is in Envoy — deploying, running, and managing Envoy in production at scale.
A
Some of these networking challenges we talked about earlier: a service talking to another service is rarely that simple. You're rarely talking directly to that service; you're usually talking over the network, through multiple hops, different boundaries, firewalls, and so forth, and these networks are not reliable. That's not new information, but it means these applications have to solve for things like timeouts, retries, circuit breaking, and so forth.
A
You also need a way to allow traffic into that deployment, that architecture, without it being haphazard and just coming from all over the place, and without being unable to enforce good practices around security and so forth. You might not trust the callers that are calling into the system.
A
And you might want to expose your services a little bit differently than how they're actually implemented, and so forth. So you need things like transformation; you need things like authentication, authorization, policy enforcement, and so forth. Some of these APIs you might want to expose, catalog, and make searchable — so self-service signup, documentation, and automation around this is important. Then you take that problem and realize: there's not really just one boundary. We might have multiple Kubernetes clusters, or be running on premises, or running in a public cloud, or both. We've talked to lots of large companies that are actually running in multiple clouds.
A
So this is not a case of "let's just solve the problem once and we have it solved" — there are multiple areas. Now, how do we expand that and stay consistent when we configure? How do we federate the different services that are running in this, so we can treat it as a single large architecture? It starts with Envoy. Before I get into that part and Gloo, I want to come to this slide. Envoy is incredibly versatile and can be run in a few different topologies.
A
For example, a group of services that all live and work together might share an Envoy when they talk out to the network or when requests come into them. And then, lastly — you might recognize this pattern from the hype around service mesh — Envoy can be run right next to the service itself and perform duties on behalf of the application.
A
So it's effectively part of the application: traffic goes through it when it's leaving the application, and traffic goes through it when it's coming into the application. You can think of it as an interceptor of the network traffic for that particular application. So Envoy is incredibly versatile, but in the scenario I just presented, you start to ask: well, how do I get traffic into my clusters?
A
That's where Envoy fits as an access proxy, or an edge proxy, or an API gateway — whatever term you want to use — but you need to solve the challenge of traffic coming into your service deployment, your service footprint. At Solo, we built an open source project and an enterprise product around that particular use case, that particular role. We call it Gloo, and I'll talk about Gloo in a second. And then we built a developer portal on top of this.
A
So with this Envoy-based infrastructure you can have a developer portal and so forth. We're also helping drive the community — we're working very closely with Google to drive the community in the direction of WebAssembly and WebAssembly adoption, so we'll talk a little bit about that. But, like I said, once you start to deploy across multiple different footprints — whether that's on-premises, in Kubernetes, in AWS, or any cloud —
A
what you want is a decentralized implementation, but you do need a level of centralization in terms of configuration management and policy authoring. You need to be able to solve for the problem of federation, and we'll talk a little bit about that as we go. Now, this problem doesn't just stop at the edge.
A
This problem also exists between applications. If you scope the boundary down from a group of applications to a single application, we still have some of these same problems. They're not exactly the same, but there's some similarity and overlap, and that's where you start to push Envoy down into the services themselves — that's where the service mesh concept starts to take hold and starts to fit. But again, we run into the same problem across multiple footprints, trying to federate across these different deployments.
A
That's a difficult challenge. Again, these are all things that Solo is solving; this is what we're building our products to do, this is our core competency, this is what we are focused on. And ultimately, across this type of hybrid, highly distributed environment, we need to be secure.
This is how Envoy fits in. Gloo is part of that story, but what you're ultimately trying to build is a solution for application connectivity in a hybrid world — decentralized, and built on state-of-the-art technology. Envoy was built exactly to solve this problem. Envoy by itself can't do it all, but it is core to the frameworks that enable it.
A
Now let's switch gears, go back to Envoy itself, and look at solving the problem of getting traffic into your service platform, or service estate — whether that's Kubernetes or anything running on cloud infrastructure. That's where Gloo, our API gateway, fits into the picture. I'll probably go a little quick, because I want to show demos. So what is Gloo? At a high level, it's an open source project.
A
It's built on Envoy and provides API gateway functionality. Now, what is API gateway functionality, and why is that different from just Envoy, or NGINX, or whatever? I think the most important thing to understand about the API gateway is that it is a role played at the edge of your boundary.
A
First of all, there's a level of trust that is different between what's outside of the boundary and what's inside the boundary, and we need a component to come to terms with that difference. You might have APIs that you expose — like I said, they might be implemented in gRPC and you want to expose them as REST, or they might be REST inside but a kind of older, maybe crummier version, and you want to expose something newer, or a little bit nicer, to your clients and users.
A
So there's this decoupling between what's running outside the gateway and what's running inside the service estate, and that's where the abstraction part of an API gateway comes into play. An API gateway is an abstraction, first and foremost, between the outside of the boundary and the inside of the boundary.
A
An API gateway is probably more like a door with somebody standing there guarding it, saying: you don't need to know what's actually inside. Now, to build an API gateway on top of Envoy — Envoy is a great proxy, and Envoy would be great as an ingress controller, which is more like what Contour is.
A
I think somebody asked what the difference is between all these different things. Contour is sort of an ingress built on Envoy — fairly simple, it allows you to drive the config of Envoy. But I'm talking about an API gateway. An API gateway needs things like transformation abilities; Envoy doesn't have that. An API gateway needs very strong security capabilities.
A
Things like web application firewalling, being able to plug in Open Policy Agent, OIDC- and OAuth-type challenges, and so forth, out of the box — Envoy doesn't have that. So you have to build that into, or on top of, Envoy to get it to the level of an API gateway. That's what Gloo is.
A
I just gave a couple of examples — there are many more — but if you're wondering what the difference is between Gloo and Envoy: Gloo is built to be that abstraction, not just a plain-jane ingress controller. It can play the role of a plain-jane ingress controller if that's what you want it to do, but it's a little bit more powerful than that, and most enterprises that deploy these applications need that.
A
As we saw in the previous talk, the Envoy proxy needs to be configured by something — it needs to be driven by something. In earlier slides I showed that Envoy is driven by a dynamic API, and something needs to provide that capability. So Gloo is partly an addition to the proxy, but it is also the control plane itself.
A
The control plane simplifies the API of using Envoy. The API that we expose in Gloo — let's say you're running on Kubernetes — is based on CRDs, so it's a declarative format that you might be familiar with. With Gloo, what we're saying is: my use case is to solve the problem at the edge, and I don't really care about listeners and transport sockets and all this stuff that Envoy cares about. What I care about is: what are my APIs?
A
How can I expose them? How can I secure them? How can I build transformations, and so forth? That's the API that I care about. So Gloo provides that decoupling — the API that you use, that you write your configurations in. Gloo then takes that, translates it into Envoy configuration, and pushes that configuration out to Envoy.
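For a feel of that simplified API, here is a sketch of the kind of resource Gloo accepts. The resource names and the upstream reference are assumptions modeled on the defaults Gloo's discovery creates; check the Gloo docs for the exact schema on your version:

```shell
kubectl apply -f - <<'EOF'
# A VirtualService sketch: route a path prefix on the gateway to one upstream.
# No Envoy listeners, clusters, or transport sockets in sight.
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: default
  namespace: gloo-system
spec:
  virtualHost:
    domains:
      - '*'
    routes:
      - matchers:
          - prefix: /sample-route-1
        routeAction:
          single:
            upstream:
              name: default-petstore-8080   # placeholder upstream name
              namespace: gloo-system
EOF
```

The control plane turns this into the corresponding Envoy route and cluster configuration and serves it over xDS.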
A
Now, there are a bunch of other things, but I might just jump into a demo. I saw some questions in the chat about comparing to Contour and Ambassador and so forth.
A
Like I said, Contour is a fairly simple ingress controller built on Envoy. Ambassador is something that would probably be more comparable to Gloo, in the API gateway space, but there is a gigantic difference in the way Ambassador is built and the way Gloo is built, and it comes down to the control plane. This is a very fundamental, very important part of building an architecture like this.
A
With Gloo, I pointed out that the Envoy proxy — as we call it in this ecosystem — is the data plane. This is where the traffic is going: traffic comes in, Envoy applies the security, applies the policies, does all that work, and routes it to the upstream clusters and backing services and so forth. The control plane is out of the request path, in a separate process.
A
The control plane is doing things like discovery — service discovery. Gloo can watch your Kubernetes cluster and understand the services that are running in Kubernetes. It can watch your Consul service registry; it can watch Amazon EC2, AWS Lambda, and so forth. So it can build up this understanding of what services are available to be routed to in Gloo.
A
We also have the translation engine — the component that does the translation for the proxy. It watches the Kubernetes API if you're running in Kube (you don't have to run Gloo in Kube, by the way), sees the configurations that you created, translates them, and pushes that configuration down to Envoy. So there's a very obvious and significant difference in responsibility between what the proxy is doing and what the control plane is doing in Gloo.
A
We've built this and separated out these components so that you run, scale, and think about the data plane separately and differently from how you run, scale, secure, and think about the control plane. It's a very important concept, because when you're running at scale, especially in a production environment, security, scalability, flexibility, and availability are highly important — and that's how we built Gloo. That isn't the case with, let's say, Ambassador.
A
You look inside that one container and you've got about six or seven individual processes running. One of them is Envoy, and the other six or seven are the control plane part. So if you run Ambassador, you're running the data plane and the control plane in the same application unit. Now, once you start to scale out Ambassador — let's say you have a lot of traffic coming in at the edge —
A
you start to scale it out, and when you rev up the number of replicas, you're not getting the benefits of scaling that you're hoping for. You scale up the number of proxies, ideally to deal with more load, but you're also scaling out the number of control plane components, which you may not need. And the control plane — just like in Gloo — needs to talk to the Kubernetes API. So when you start to scale to meet the load at the edge for your application traffic —
A
you also inadvertently scale the control plane, and the load it puts on the Kubernetes API or any other resources that the control plane is responsible for. This is a very highly coupled scaling dimension, which is not ideal. Even less ideal is that the control plane, like I said, needs to be secured differently than the data plane. The data plane should be locked down with no privileges — nothing.
A
But that container needs access to the Kubernetes API, so at the edge of your system you're giving the ingress full read/write capacity to the Kubernetes API. If that gets compromised, that's a severe security risk. With Gloo, and the way we've built it, you don't suffer from those same drawbacks. We can scale them independently; the gateway is locked down; there's no read/write going on in the proxy — you don't even have to mount the Kubernetes service account token at all, if you don't want to.
A
Then you secure the control plane and scale the control plane components differently. The traffic coming into the proxy at the edge might be heavily CPU bound — doing TLS termination, routing, transformations, all this stuff.
A
The control plane is probably going to be doing some CPU-heavy calculations too, but it will probably be very memory intensive. So you have very different resource usage, and potentially contention, that doesn't make sense to co-locate into a single binary. So there are differences. I don't want to spend too much more time on this, but there are definitely differences between these projects —
A
even if they're both using Envoy. Like I said, Envoy is the kernel, the component inside; the rest of the stuff around it makes a big difference, and that becomes very important. So I'll leave it at that. Let's take a quick look, then: Gloo is, like I said, an open source API gateway built on Envoy proxy. Not all Envoy-based proxies are built the same, but let's take a look at what that looks like, if we come over here.
A
One of the 1.4 releases is probably what you want, and from there you can go to the docs. We have fairly comprehensive docs for setting it up and running it as a gateway, which is the primary way to run Gloo — as an API gateway. You could run it as a simple ingress controller, like I mentioned earlier, or you could run it in Knative as a replacement for Istio.
A
We were the first API gateway to replace Istio in a Knative environment, for scenarios where you may not need a full service mesh and so forth. Gloo Federation is the federation management plane that I discussed. So the docs are pretty good.
A
So you go and download it — and then what do you do? You can install it with Helm; there are instructions in the setup guide for installing with Helm. You can also install it with a CLI, a little helper CLI, and I'll show you that — there's a lot of good value and convenience in this CLI. The CLI is called glooctl. We're actually running a slightly older version — don't mind that. We have a Kubernetes cluster here, very plain-jane, running on GKE.
A
What we're going to do here is `glooctl install gateway`. What that is going to do is install the Envoy proxy, as well as the control plane and some very basic default configuration. We're very security-conscious here at Solo: the basic default configuration doesn't expose any ports and doesn't do anything until you explicitly tell it to, so that you don't end up in a situation where you've accidentally opened up ports that have exposures or vulnerabilities and so forth. So now we've installed Gloo.
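The install flow just described might look like the following. The installer URL and the exact pod names are assumptions taken from the Gloo setup docs of that era, so verify against the current docs:

```shell
# Install the glooctl CLI (the docs also describe Homebrew and direct downloads).
curl -sL https://run.solo.io/gloo/install | sh
export PATH="$HOME/.gloo/bin:$PATH"

# Install the gateway flavor of Gloo into the current kubectl context.
glooctl install gateway

# Verify the control plane and the Envoy proxy came up in gloo-system.
kubectl get pods -n gloo-system
```

On a healthy install you would typically see pods such as `gloo` (the control plane), `gateway-proxy` (Envoy), and `discovery`.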
A
These other two components are supporting the control plane: they do discovery of different services, as well as some transformations of the API, and I can talk about that in a second. Now, if we just quickly come over here, I'm going to use this tool called k9s — an awesome tool, highly recommend it. It's kind of like a console for your Kubernetes clusters. We're going to explore things a little bit more in k9s here.
A
So if we come over here to the gateway proxy and exec into it, and check our process listing with `ps aux`, we see that we have a very simple init process, and we have Envoy. Envoy is configured using a bootstrap configuration, so the gateway-proxy container here is Envoy.
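That exec-and-inspect step can be reproduced roughly like this (the deployment and namespace names assume the default gateway install):

```shell
# Exec into the gateway proxy and list its processes: expect a tiny init
# process plus envoy launched with a bootstrap config file.
kubectl exec -it -n gloo-system deploy/gateway-proxy -- ps aux
```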
A
Okay, so we want to see the proxy URL — where does the proxy URL live? It lives here; on GKE, that ends up translating exactly into the external IP. So we have some convenience commands here. Actually, I'll show you all the commands for the proxy: we can get the URL, which is what we just did, and we can get all the stats.
A
Envoy exposes a tremendous number of stats, and what this command will do is go to the proxy and get you all of them.
A
The next thing I'll show, just real quick, is the log. We can get the logging directly; we can turn debug on and get the logging from the proxy, to see exactly what's going on, to help troubleshoot and debug. If you're familiar with Envoy, it's not always that easy to get this information — maybe you're using Istio or Consul or whatever — but with Gloo it's fairly straightforward. The last thing I'll show you about working with the proxy here is—
A
the dump command, which will show the actual running Envoy configuration. We saw some of this in the previous talk. This is the actual Envoy config that's being run, and you can check that out live, so that if there is some difference, if something's not working right, you can go to Envoy itself and say: give me your config, I want to see exactly what's happening with the proxy. With Gloo we can also get our upstreams.
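The proxy conveniences walked through above — URL, stats, logs, and the config dump — correspond to glooctl subcommands. The names here follow the 1.x CLI docs; double-check with `glooctl proxy --help`:

```shell
glooctl proxy url     # external address of the Envoy proxy service
glooctl proxy stats   # dump Envoy's statistics
glooctl proxy logs    # stream the proxy's logs (debug logging can be enabled)
glooctl proxy dump    # the live Envoy configuration actually being served
```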
A
Upstreams are the services that we've automatically discovered. We can see there's a handful of defaults. What I'm going to do here is a `kubectl apply`.
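A sketch of that step: deploy a sample service and watch discovery create an Upstream for it. The Pet Store manifest URL is an assumption modeled on the Gloo getting-started guide; any Deployment plus Service would do:

```shell
# Deploy the sample Pet Store app (manifest URL is illustrative).
kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo/master/example/petstore/petstore.yaml

# Discovery should pick the new Service up automatically and create an Upstream.
glooctl get upstreams
```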
A
It discovered the actual functions running at a particular endpoint, so it has this level of service discovery: we can find the host and port, but when it finds those, it can go a little deeper and say, "oh, this is a REST service — we see a Swagger or OpenAPI spec contract here," or "this is a gRPC service — we can introspect via gRPC reflection." So we can build more awareness and more metadata about the services that we discovered. Now, using Gloo, like I said, all the configuration is CRDs.
A
What we can do here — I guess I didn't have the YAML prepped handy — is add a route to Gloo with a path prefix and just say: send it to this upstream. What that should do is create a new object for us.
A
So now, if I do a `kubectl get virtualservice` in the gloo-system namespace, we should see this default one, and we can see it as YAML. This is how we would typically declare it and include it as part of our CI/CD processes and so forth. So now I have that declared.
A
If I get the URL and curl that API, we should see that indeed we can reach our service. It's really fairly simple, but we just exposed an API on the API gateway — using Envoy underneath.
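Putting the route and the curl together, the flow above looks roughly like this. The upstream name follows Gloo's default `namespace-service-port` discovery naming, which is an assumption here:

```shell
# Route an exact path on the gateway to the discovered petstore upstream,
# rewriting to the path the backend actually serves.
glooctl add route \
  --path-exact /sample-route-1 \
  --dest-name default-petstore-8080 \
  --prefix-rewrite /api/pets

# Inspect the VirtualService that was created, then call the API via the proxy.
kubectl get virtualservice -n gloo-system default -o yaml
curl "$(glooctl proxy url)/sample-route-1"
```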
A
Now, if we look at a slightly more sophisticated demo here — I've refreshed, and everything should be running — we're looking at the UI of Gloo Enterprise. This gives a user interface for the Gloo system, and it shows the Envoy health. This is very important, because there's a lot of nuance and a lot of battle scars built into this nice little green dot. It looks so simple, but when Envoy is given configuration, it can apply—
A
the configuration in all kinds of different orders. Envoy is built to be eventually consistent, so when you apply a configuration to Envoy, you're not 100% sure that you won't have errors — you're not even 100% sure that Envoy won't reject it.
A
So we track those NACKs — when Envoy rejects the configuration — and we track when things are inconsistent, and we're able to show that to you in a more user-friendly dashboard. When you're deploying and you have a bunch of different Envoys running, it's really important to understand what state you're in, especially from a security standpoint: if you apply a security policy and you think it's all good, and it's not, that's a problem.
A
So you need to know that. We can see our list of virtual services, which I poked around and showed you previously, and from the console we can show our different upstreams. If we look at the Pet Store one — I don't have the Pet Store running this time, but it would show you the automatic service discovery and the function discovery there. We also have a dev portal that exposes the APIs.
A
Why is it not working? Oh well — a live demo, no problem. So in this particular scenario we have this example application — let's say it's our Java application — and we want to extend it with microservices, maybe written in Golang, maybe written as Lambdas on AWS.
A
If we click on veterinarians, we see the names and their specialties, but we might want to add a city to this. We want to extend the functionality — strangle the monolith using a microservice. We're going to do some basic routing here with Gloo: we'll create a route and say that this goes to our vets service, which is written in Golang, so that whenever we call /vets, instead of going to the monolith, the traffic is routed to the new microservice. And I want to expose this new functionality.
A
We add that — you can hopefully see the route there — and in our application, if I click on contact now... cross your fingers for me... sometimes there's a time delay; it takes time for the Lambda to spin up. If we refresh, we see that we now have our implementation built on Lambda.
A
We want to secure the demo, we want to secure the gateway, and we want to do that with OIDC. In this case we're going to use Dex as a provider. Here's just some basic configuration for the OIDC system. We're going to create that, and then what we want to do is configure our virtual service.
A
So remember, I said the virtual service is all YAML. I created the earlier one via the CLI because I didn't have the YAML prepared, but we could just do kubectl apply on this virtual service. Here we're specifying that we want to use the external auth system, and the OIDC component of external auth, to challenge. So we can't just go to our application, or our API in this case, without some kind of identity challenge.
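The wiring being described might look roughly like the following: an AuthConfig that points at Dex, referenced from the virtual host. This is a hedged sketch, not the demo's actual file; every name, URL, port, and secret reference here is a hypothetical placeholder:

```yaml
# Hedged sketch of OIDC external auth with Dex as the provider.
apiVersion: enterprise.gloo.solo.io/v1
kind: AuthConfig
metadata:
  name: oidc-dex
  namespace: gloo-system
spec:
  configs:
  - oauth2:
      oidcAuthorizationCode:
        appUrl: http://localhost:8080        # where users land after login
        callbackPath: /callback
        clientId: petclinic                   # placeholder client registered in Dex
        clientSecretRef:
          name: oidc-client-secret
          namespace: gloo-system
        issuerUrl: http://dex.gloo-system.svc.cluster.local:32000
---
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: default
  namespace: gloo-system
spec:
  virtualHost:
    domains: ["*"]
    options:
      extauth:
        configRef:          # every route on this host now requires the OIDC challenge
          name: oidc-dex
          namespace: gloo-system
    routes: []              # routes as configured before
```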
A
So now, if we go back and try: let's go under this user, let's just go back to localhost:80... refresh. Oh, let's just blame that on the cache. 8080... there we go. So now it's challenging us, right, to log in. If I add this one (I think it was "password") and log in, now we get to our application as we would have expected. We can click around and do the same things as before.
A
What if we want to write more fine-grained policies? This is where something like an API gateway, or an access proxy, especially in a zero-trust networking environment, becomes pretty important. What if we want to use something like Open Policy Agent to specify policy restrictions and so forth? Maybe, for a very simple case: for an admin we can allow anything, but for user@example.com we're not going to allow you to go to /owners on this API. And we can get very, very fine-grained with this policy enforcement.
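The admin-versus-user rule just described could be sketched as a Rego policy held in a ConfigMap and referenced from the external-auth chain. This is an illustrative sketch only: the package name, module wiring, and especially the shape of the `input` document are hypothetical simplifications, not the demo's actual policy:

```yaml
# Hedged sketch: OPA policy allowing admin everything, denying user@example.com /owners.
apiVersion: v1
kind: ConfigMap
metadata:
  name: allow-policy
  namespace: gloo-system
data:
  policy.rego: |
    package test

    default allow = false

    # admins may do anything (input layout is a hypothetical simplification)
    allow {
      input.state.user == "admin@example.com"
    }

    # regular users may do anything except /owners
    allow {
      input.state.user == "user@example.com"
      not startswith(input.http_request.path, "/owners")
    }
---
apiVersion: enterprise.gloo.solo.io/v1
kind: AuthConfig
metadata:
  name: oidc-and-opa
  namespace: gloo-system
spec:
  configs:
  # first step: the OIDC challenge configured earlier (elided here)
  - opaAuth:
      modules:
      - name: allow-policy      # the ConfigMap above
        namespace: gloo-system
      query: "data.test.allow == true"
```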
A
I mean, that's the difference of the API gateway. That's the difference between something like Gloo, and Contour or, I would say, the Istio ingress gateway, which is also just a vanilla Envoy: you can do these more powerful access-proxy-type edge things and solve these edge problems a little better. Now, on the Gloo documentation I left this pop-up down here: on August 13th I will be doing a webinar on federation. So how do you take multiple of these Gloo deployments, potentially across multiple clusters?
A
How can you enable things like failover between the proxies in a smart way? I'll be talking about that. I am prepared to demo that right now, but I'm running out of time and I want to leave some time.
A
The last thing I'll point out: again, the Gloo docs have a lot of good information about some of these patterns for deploying proxies, either as API gateways or as domain-boundary gateways, and all the way down into service mesh. Gloo plugs in and replaces the Istio ingress gateway, so it plugs in very nicely with the service mesh, and it all aligns with the way that we think about it, like I said.
A
If we get this model right, we have federation for the edge, federation for the service meshes, and a developer portal built on top of it. Some of the service mesh federation work is in an open source project here called Service Mesh Hub, so definitely go take a look at that. And, depending on your schedule and so forth, I'd love to come back and do a presentation on service mesh and Istio.
A
You know, continue the story here and show demos about that. But with that I'll pause: I'll either solicit questions, or I know you had some things to finish up.
B
There's a question on the chat: is there any upper limit for RPS and TPS with the open source Gloo API gateway?
A
Is there... oh, is there a limit? So, performance-wise, they're the same proxy. Your TPS, or latency, or any of the performance variables will always depend on your use case, right. What is the size of the messages?
A
What are the authentication and rate-limiting policies that you've enforced? Those can vary quite wildly. But I can tell you with certainty, we've used Gloo... so, from an enterprise-versus-community perspective, there's nothing different in the proxy from a performance perspective.
A
They used one-fifth of that infrastructure to get ten times better performance than they saw with those legacy API gateways. So Envoy is very performant; you know, we are a very performance- and security-conscious company. But to answer the question: between enterprise and open source, the proxy is the same.
B
Thanks. There's one more question on the chat: Gloo and Istio are both built on Envoy, but Gloo is an API gateway and Istio is qualified as a service mesh. Are there any major feature or functional differences between the two?
A
Yes. Okay, so first of all, that's a correct assessment: Gloo is an edge gateway, Istio is a service mesh. They are not the same thing; they happen to use Envoy. Like I said, Envoy is versatile and can be used in very different frameworks, and that's what we're seeing here. Because they both use Envoy, there are some similarities, right: there's Envoy, the same core functionality. But if I come over here, maybe this slide will make it a little bit more clear.
A
There are roles, like I said, that you solve with the API gateway. I started getting at this, right: the API gateway is an abstraction. It's not just a pass-through ingress; it's an abstraction. We abstract away the details and we expose the API a certain way.
A
Both have telemetry collection; both have the ability to do similar RBAC, policy enforcement, and so forth, TLS. But at the edge there are things that you need to solve for, like OIDC, web application firewalling, message transformation, rate limiting, these types of things that are not in Istio. And you'll see this: if you take Istio today, say Istio 1.6, and you deploy it and you deploy the ingress gateway, you have awesome ingress capability, right, but you don't have OIDC.
A
You could cobble it together; there are some experimental projects out there. You could use the EnvoyFilter CRD, which is alpha by the way, and you could probably make it work, but it's not built in; it wasn't designed for that. Istio is not doing that. Web application firewalling is something that we've built into Gloo, running in the proxies, based on ModSecurity.
A
It's not in Istio. Transformation is not in Istio. Rate limiting, again, you could cobble something together with some of the existing plugins. Things around security, like HMAC validation, this kind of stuff: not there. So these are things that you need to solve at the edge that you typically won't have in a service mesh. Now, that doesn't mean one is better than the other. What it means is they both work harmoniously to solve the problems of connecting applications in this cloud environment, just in different roles.
B
Hybrid: connecting your local infra with the cloud. So I think on the edge you can connect, use Gloo, maybe?
A
I think we have... do I have that announcement? I don't have that announcement, but we do have a partnership with Google. You can run Gloo on top of GKE, on-prem, Anthos, in the public cloud, anywhere. I would say that they're solving a slightly different problem; they're not federating at the API gateway level.
A
I don't believe they're federating at the service mesh level. They might start to do that, but I think we're quite a bit ahead of them with Service Mesh Hub, so I'm not exactly sure. But from an API gateway level, and the stuff that we talked about today: no, I don't believe it overlaps, but Gloo can be used on Anthos.
B
Okay, thank you. There are some questions on the YouTube live as well; let me just check there. One question is: if we want to add our own routing logic, how is that possible?
A
Add your own routing logic? How exactly? Can you clarify? Because if we come... if we go here and we look at...
B
Host, path... basically saying that, in addition to the host-based routing, I want to add my own routing policy. That's the question there. I don't know the answer, I mean.
A
I'd invite the person who asked the question to come and look at our docs. We can do a tremendous number of different routing policies. We can change the message, we can add headers, remove headers, we can give direct responses and rewrites and all kinds of stuff.
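A few of the options just listed can be sketched in a single route. This is a hedged illustration, not a recommended config; the domain, header names, and upstream are hypothetical placeholders:

```yaml
# Hedged sketch: header/method matching, prefix rewrite, and header manipulation
# on one Gloo route.
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: routing-demo
  namespace: gloo-system
spec:
  virtualHost:
    domains: ["api.example.com"]
    routes:
    - matchers:
      - prefix: /legacy
        headers:
        - name: x-canary      # only requests carrying this header match
          value: "true"
        methods: ["GET"]
      options:
        prefixRewrite: /v2    # rewrite /legacy/... to /v2/... before forwarding
        headerManipulation:
          requestHeadersToAdd:
          - header:
              key: x-routed-by
              value: gloo
          responseHeadersToRemove: ["server"]
      routeAction:
        single:
          upstream:
            name: default-vets-8080   # placeholder upstream
            namespace: gloo-system
```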
A
But if there's a specific use case that you're asking about, then, like I said, definitely reach out to me, either on Twitter, or my email is my first name, christian, at solo.io. Reach out to me that way and I can help with that. Yeah.
A
So, geolocation: that's a very good question. That is something that we do in our federation policy. So with the federation, basically, what happens... oh, crap, I actually have to drop; I have a customer meeting right now. But please reach out to me; I will be happy to come back and address these questions. Geolocation, locality awareness, load balancing: all of these things we address in Gloo Federation. In the docs here, I believe there was a Gloo Federation section.
A
Maybe in here: Gloo Federation. You can take a look at that; we just announced it a couple of weeks ago.
B
Thanks for your time here, joining us for this meetup. The demos are really cool; I really liked them, and hopefully the team also liked them. I'll actually share the demo of WordPress, which is working now, so people can stay if they want to. But I'll let you leave, Christian, if you have to; I'll see you around, and thanks again for joining in.
B
Let me share my screen. I'll just go over the demos which I had, and then, if you want, you can see that one. Okay, let me just quickly show this demo here; there are just a couple of them which I want to share. So let's go over the presentation which I was having towards the end of my session. I wanted to show this: it's a very nice demo which I did not find anywhere.
B
I kind of built it from scratch. So what we are saying is: I want to deploy a WordPress app, and in the WordPress app I would like to have my analytics, like how many connections I'm having and that kind of stuff. Typically we deploy the WordPress app and just get it running, but, for example, I want to get the metrics of my MySQL, like how many connections I am having on MySQL.
B
What is the number of queries I'm doing per second, and so on. So how can we get that working here? What I have done is, let me go back and show that. Here I have this Docker Compose file, which I'll just quickly open. In this we have configured an Envoy which would sit at the entry of our environment.
B
Behind this Envoy we have a database, which is MySQL, and our WordPress; then we have Prometheus for collecting the metrics from Envoy, and then Grafana to visualize them. Now, what I have done is I have configured this envoy.yaml file. In this file I am defining two listeners, as I was showing you there. One is for the HTTP traffic, which is for my blog; another is for MySQL. So those are my two listeners here.
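The two listeners being described could be sketched roughly as below. This is not the demo's actual file: the cluster names, hostnames, and ports are placeholders, and the interesting part is the second listener, where the MySQL proxy filter is chained in front of tcp_proxy so Envoy can decode the protocol and emit query stats while still just forwarding bytes:

```yaml
# Hedged sketch: one HTTP listener for the blog, one TCP listener for MySQL.
static_resources:
  listeners:
  - name: http_listener                  # blog traffic on :8080
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: blog
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: blog_cluster }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  - name: mysql_listener                 # database traffic on :3306
    address:
      socket_address: { address: 0.0.0.0, port_value: 3306 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.mysql_proxy   # decodes the protocol, emits stats
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.mysql_proxy.v3.MySQLProxy
          stat_prefix: mysql
      - name: envoy.filters.network.tcp_proxy     # actually forwards the bytes
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: mysql_tcp
          cluster: db_cluster
  clusters:
  - name: blog_cluster
    connect_timeout: 1s
    load_assignment:
      cluster_name: blog_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: wordpress, port_value: 80 }
  - name: db_cluster
    connect_timeout: 1s
    load_assignment:
      cluster_name: db_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: db, port_value: 3306 }
```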
B
This one is for my MySQL; this one is for my WordPress app. So from the WordPress side, when my traffic comes in on port 8080, it will go and hit my blog cluster, which takes me to the WordPress service here. And from WordPress, if I look at the Compose file, what I'm doing is: from my WordPress I want to communicate to my database, right? So I'm saying here: don't go to the database directly. If you look at this line, I'm not pointing at the database service (db); I'm pointing at the Envoy MySQL listener instead. So the traffic comes back to the listener, and from there it goes to the database; that's how it is flowing. So let me quickly show that. Okay, it's working there.
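The Compose wiring being described can be sketched like this. Image tags, service names, and credentials here are placeholders, not the demo's actual file; the key line is `WORDPRESS_DB_HOST`, which points at Envoy rather than at the db service:

```yaml
# Hedged sketch of the docker-compose setup: WordPress reaches MySQL via Envoy.
version: "3"
services:
  envoy:
    image: envoyproxy/envoy:v1.15-latest
    volumes:
      - ./envoy.yaml:/etc/envoy/envoy.yaml
    ports:
      - "8080:8080"    # blog listener
      - "9901:9901"    # Envoy admin endpoint, scraped for stats
  wordpress:
    image: wordpress
    environment:
      WORDPRESS_DB_HOST: envoy:3306   # via the Envoy MySQL listener, not db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: example  # placeholder credential
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example    # placeholder credential
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
```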
B
Let me access my WordPress here; I'll bring up a particular presentation here. Yeah, so as you can see, this is my localhost.
B
Just a minute. This is my localhost, serving on port 8080, and from there I'll try to install my WordPress.
B
Finish... come on... yeah. So, as you can see, it's installed, and I can now log into my WordPress here with this admin user. Okay, now, along with that, I've also installed a Prometheus here, right, so I will now go to Prometheus and look at the stats there. I'll go to localhost on the Prometheus port, 9090.
B
This is where I'm going to get all the metrics of Envoy and MySQL and so on. So if you look, these metrics should be coming here... yeah, those are coming. Let me search for mysql. You can see, okay, very quickly, like this one: it gives me how many queries have been processed. So it's like 1059 queries have been executed so far.
B
If I log back in here, you will see that I'm going to get the new data as well, from just having used MySQL. So what we've seen here is a demo of the MySQL filter. As you can see, we are not doing anything special: you just put Envoy in place and it takes care of things for you. Similarly, you can look at the other demos as well.
B
That was going to take some time, and I don't want to hold you guys back. So what I'll do is record this entire thing fresh and then send it across to you guys; that's something which I'll do in the coming days. Do try it out on your own. Hope this is useful.
B
I know these are somewhat advanced topics, but they are really what we need in production these days, right. So hopefully the session was useful for you guys. I'll see you soon in future meetups. Okay, thank you guys, I'll see you soon then. Bye.