From YouTube: Service Mesh in the Real World video series - Ep 2
Description
This is the high-resolution version of the Istio livestream event co-hosted by Google Cloud and Solo.io.
This educational video series walks you through various application networking challenges, how we have traditionally solved them, and the service mesh networking concepts that solve them for microservices, shown through live demonstration.
Learn more about the technologies used in the video:
Istio https://istio.io/
Google Cloud https://cloud.google.com/
Service Mesh Hub http://servicemeshhub.io
Solo.io https://www.solo.io/
B: We are live. Welcome, everybody, to our second installment of the Service Mesh in the Real World video series, featuring Sandeep Parikh and Christian Posta. This time we're going to talk about multi-tenant ingress using Istio. This is live streamed and it is recorded. As these gentlemen are speaking and going through the different examples and demos, if you have questions, please pop them into the Q&A section on the YouTube live stream and we'll get to them at the end of the session. All right, over to you.
A: Thanks, Betty; thanks, Jase. Hey folks, welcome to the second edition of Service Mesh in the Real World. Today we want to talk about how to do multi-tenant ingress using Istio. My name is Sandeep Parikh, working at Google Cloud; find me on Twitter at @crcsmnky. I've also got Christian Posta with me as well, @christianposta on Twitter, from Solo.io. Why don't we get started.
A: Here's what we're going to cover today: we're going to lay out a little bit of the challenge that we're trying to solve for, an example deployment that we can work off of along with the example problem we're trying to solve, then talk about some of the ecosystem solutions that exist, and we'll close with what's new in the latest release of Istio. We'll have some time for questions at the end. So with that, let's start with the challenge that we're trying to solve.
A: What we're hoping to do here is figure out how to run multi-tenant ingress deployments within a Kubernetes cluster. What do people usually think of when they think of multi-tenant ingress? Oftentimes you've got large-scale Kubernetes deployments that are multi-tenant, meaning lots of individual workloads, services, and applications, served by many, many teams. It's easy to see how, over time, those clusters can become very, very large, and each of those teams may need their own dedicated approach to ingress.
A: They may need to serve different domain names, or they may just want a clear and distinct separation from the ingress points other teams may be using. Ultimately, in a large multi-tenant cluster, individual applications or workloads may need their own ingress approach.
A: First and foremost, we see it as a way to isolate individual teams and logical workloads. A workload may be more than just an individual pod or service: think of multiple applications being served out of a large cluster, where each application is itself composed of individual pods and services. The isolation approach can start at something like the namespace. If you have different teams working out of different namespaces, then over time, based on some of those requirements, those teams may want their own ingress point.
A: You may also want to do things like serving applications out of different domains. If you serve applications out of different domains, you may have particular requirements around things like SSL support, so that could be one aspect of it. And then, for different ingress types, you may want different kinds of services being served, like APIs versus end-user-facing services; because those may scale and operate differently, you may want them to use different infrastructure, again a different ingress altogether.
A: So let's talk about a deployment where this might make sense. What you typically have in a Kubernetes environment is, again, multiple applications, and each of those applications could be powered by multiple services, but you have a single ingress point. Typically what happens is you spin up a Kubernetes ingress resource, or another ingress mechanism, to get traffic from the outside back to those applications. This is what we have today; it's a pretty common, standard approach.
A: What we want to get to is a place where each of these individual applications can have its own ingress mechanism altogether, one that is logically and physically separated. These may run in individual namespaces: each of those ingresses could run in a different namespace, or at least be controlled from a particular namespace, and again be configured very specifically for those application needs. I've mentioned some of the top requirements we see already.
A: But it's good to recap. We want this multi-ingress approach to support the idea of structuring our cluster so that we can isolate teams and isolate logical workloads; typically in Kubernetes you do this via namespaces. We want to be able to support different types of ingress approaches, like the example I made earlier of an API use case versus an end-user-facing service. And we want to make sure that each ingress approach we choose in this multi-ingress view has support for SSL certificates, or HTTPS.
A: That's a pretty common need for the most part these days. Then there are the features we want to make sure we've got. This is really important, because it ties into how platform-native this technology is going to feel. We certainly want to have platform load balancer support, whether that's the Kubernetes platform's own load balancer support or any of the public cloud providers or private cloud providers.
A: So we want to make sure we've got load balancer support there, so that the ingress gateway object inside of Kubernetes can actually be connected to an external load balancer. We obviously want to have SSL certificate support. And we want it to function as much as possible like a Kubernetes-native service: that means it should have a stable API approach, and it should follow Kubernetes best practices around running infrastructure.
A: Now, in an ideal world, we'd also like to have additional traffic management mechanisms that give us the ability to go beyond just a single entry point into the cluster, into a little bit more fine-grained traffic control as we're routing to different parts of our applications or different backends. Another possible option would be having API gateway support.
A: So let's talk through some of the available ecosystem solutions that are on the market today. First, the one we always see on a pretty regular basis is Kubernetes ingress. It's obviously more capable than using the Service spec type LoadBalancer: the spec type LoadBalancer typically, on most public cloud providers, will spin up an L4 load balancer, whereas Kubernetes ingress is meant for L7 load balancing and gives you a little bit more granularity around L7 paths, routing, etc. Kubernetes ingress typically supports SSL certificates, which is good.
A: The challenge with Kubernetes ingress is that it's really only part of the problem. If you write a Kubernetes ingress resource, that manifest actually has to be handed to an ingress controller, which does the actual physical service-based routing, fan-out, etc. So you need two parts to this problem: the Kubernetes ingress resource and an ingress controller. When it comes to ingress controllers, there are quite a few options; there are a lot of them, and we're not going to go through all of them.
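For reference, the resource half of that two-part model looks roughly like the following sketch, written against the current networking.k8s.io/v1 schema (at the time of this talk it would have been extensions/v1beta1). The host, secret, and service names are illustrative, and an ingress controller still has to be installed to act on it:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: hello-world
      namespace: team-a                 # each team could keep its own ingress resources in its namespace
    spec:
      tls:
      - hosts:
        - hello.example.com
        secretName: hello-example-tls   # TLS cert and key stored as a Kubernetes secret
      rules:
      - host: hello.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-world-v1    # backend Service to route to
                port:
                  number: 8080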
A: We'll talk about a couple of them, but the point here is that there are a lot of options in this space, and each of these options is going to offer a number of different features and functionality. Specifically, they will target what the ingress resource supports, but they may also have additional features.
A: One of the first ones we'll talk about really quickly here is Traefik. Traefik has a ton of great features: it's very tiny, it's very fast, and it's written in Go. I think it lends itself to being a very, very quick and powerful solution. It actually takes a particularly interesting approach here: they support the Kubernetes ingress resource, but they also provide an IngressRoute CRD, which means they take some of the basics that the ingress resource provides and go a little bit further into more functionality.
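For flavor, a Traefik IngressRoute from the 2.x line of that CRD might look roughly like this; the entry point, match rule, and names are illustrative, not from the talk:

    apiVersion: traefik.containo.us/v1alpha1
    kind: IngressRoute
    metadata:
      name: hello-world
      namespace: team-a
    spec:
      entryPoints:
      - websecure                      # a TLS entry point defined in Traefik's static configuration
      routes:
      - match: Host(`hello.example.com`) && PathPrefix(`/api`)   # richer matching than plain Ingress
        kind: Rule
        services:
        - name: hello-world-v1
          port: 8080
      tls:
        secretName: hello-example-tls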
A
More
flexibility,
more
capability
with
a
specific
CD
called
ingress
route,
but
at
the
end
of
the
day,
this
is
really
gonna
serve
our
ingress
only
case
right.
It
does
lack
a
couple
of
things
that
would
be
nice
to
have
particularly
around
some
of
the
more
I
think
particulars
with
mechanisms
once
you
get
into
the
cluster.
Some
of
the
you
know,
potential
of
mutual
TLS
integrations.
It
can
be
used
as
an
API
gaming,
but
you're
you're,
you're
kind
of
you
may
be
pushing
the
limits
of
what
traffic
can
really
offer
at
that
point.
A: So it's really just a pure ingress approach. NGINX is another popular option that we see pretty regularly; it's probably one of the most popular ingress controllers that Kubernetes deployments use. As you'd expect with NGINX, it's got a ton of great features, and it's very much a tried-and-true technology, with many, many examples and many questions and answers on Stack Overflow. But again, we're left with the original challenge that Traefik had, which is that it's an ingress-only approach.
A: It's going to help get traffic into your cluster, from a load balancer through its ingress controller, back to your backends, into your applications or your workloads. But it's not going to do all of the traffic management features that you may want to have, and that we may already have access to internally with something like Istio. So this is a good solution, again, for getting traffic into your cluster.
C: The issue is that Kubernetes ingress is an option, but it doesn't come with more powerful capabilities like TCP support, advanced traffic routing, percentage-based traffic splits, and so forth. That's where the Istio ingress gateway comes into play. In Istio, the ingress gateway is driven by the Gateway resource and the VirtualService resources that you can also use inside the mesh. It's basically an Envoy proxy that is deployed as a pod and exposed through load balancers, or otherwise exposed outside the Kubernetes cluster.
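As a minimal sketch of that pairing (the host and service names here are illustrative, not from the talk), a Gateway opens a port on the ingress gateway and a VirtualService binds routing to it:

    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      name: hello-world-gateway
    spec:
      selector:
        istio: ingressgateway        # select Istio's default ingress gateway pods
      servers:
      - port:
          number: 80
          name: http
          protocol: HTTP
        hosts:
        - "hello.example.com"
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: hello-world
    spec:
      hosts:
      - "hello.example.com"
      gateways:
      - hello-world-gateway          # attach these routes to the Gateway above
      http:
      - route:
        - destination:
            host: hello-world-v1     # in-mesh Kubernetes service
            port:
              number: 8080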
C: Like I said, it brings more power: it brings the capabilities of Envoy to the edge, and it ties in natively with Istio's mutual TLS support, distributed tracing, durability features, and so forth. With the Istio ingress gateway, by default you get a deployment of one single proxy, one gateway, that has a horizontal pod autoscaler and can be scaled horizontally. But you might want to have multiple ingress points, as Sandeep outlined; those are the various forces, like isolation, SLAs, and so forth.
C: You can define multiple gateways and use those to drive traffic into the mesh, and that is possible with Istio. Like I said, there's one that comes out of the box, but you can use the Helm resource generation templates to create additional gateways. You'll have to edit some of the details and change them to align with the particular gateway names, labels, and parameters that you want to give it.
C: When you go down that path, there are a handful of sections in the config that you'll need to at least consider changing to get things up and running. You'll want your own service account, and you'll want the standard Kubernetes Deployment and Service resources that define your new ingress gateway.
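With the Istio 1.3-era Helm charts, that could look roughly like the following values overlay. This is a hypothetical sketch (the gateway name, namespace, labels, and ports are made up, and the exact keys vary by chart version), but each extra entry under gateways renders its own ServiceAccount, Deployment, and Service:

    # values-team-a-gateway.yaml (illustrative Helm values overlay)
    gateways:
      team-a-ingressgateway:             # a second, team-scoped ingress gateway
        enabled: true
        namespace: team-a
        labels:
          app: team-a-ingressgateway
          istio: team-a-ingressgateway   # the selector Gateway resources would reference
        type: LoadBalancer               # wire it to a platform load balancer
        ports:
        - port: 80
          name: http2
        - port: 443
          name: https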
C: Now, at Solo, we are working with our customers to bring the world of APIs to their clusters, to Kubernetes, and to have that work natively with Istio as well. Just as the Istio ingress proxy is great at getting traffic into the cluster, and augments the Kubernetes ingress resource with more powerful features, the same is true of what Gloo does alongside the Istio ingress gateway.
C: With Gloo we can layer in more API-focused features: things like message-level transformation, Swagger support, gRPC reflection support, more advanced end-user security flows like OAuth and OpenID Connect, and edge-based security like a web application firewall. So you don't have to rely, if you don't want to, on specific cloud providers or more expensive proprietary vendors and so forth. Gloo is our edge proxy, and it is also built on Envoy.
C: It inherits all of the same traffic routing capabilities that you would expect out of Envoy, including traffic splitting and traffic shadowing and so forth, and it runs natively inside Kubernetes. Gloo is also driven through CRDs, and if you don't want to run in Kubernetes, you can run it outside as well, on VMs or bare metal or wherever you'd like. So Gloo complements a service mesh, specifically Istio, because it extends the capabilities you get at the edge out of the box.
C: Now let me show you a quick demo of using the ingress gateway along with some other helper components that make it easier to deploy secure gateways: for example, using cert-manager to automate getting certificates. We can use Istio's SDS functionality: instead of manually mounting in volumes with secrets, we can wire up the gateway to talk directly to an SDS service and have the SDS service serve the certificates, without any extra mounting of things on the disk, as well as exposing HTTPS. So let's take a look at that demo.
C: Real quick, the first thing we'll show is that we're going to use cert-manager to define the certificates that we use in our cluster. This will automate, in this case, talking with Let's Encrypt. If we just take a look at the settings here: it talks with Let's Encrypt, automatically gets our certificate, creates the secret, and feeds that into Istio's SDS. So in this case we're defining the certificate using cert-manager.
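Such a Certificate looks roughly like this sketch, written against the current cert-manager.io/v1 schema (at the time of the demo the API group was certmanager.k8s.io/v1alpha1); the issuer, host, and secret names are illustrative:

    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: hello-world
      namespace: istio-system           # the secret must live where the ingress gateway can read it
    spec:
      secretName: hello-example-tls     # cert-manager writes the signed cert and key into this secret
      dnsNames:
      - hello.example.com
      issuerRef:
        name: letsencrypt-prod          # a ClusterIssuer configured for Let's Encrypt (ACME)
        kind: ClusterIssuer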
C: If we look at the secret itself, we can see cert-manager automatically created the secrets and put them into Kubernetes for us, and then Istio's Citadel and SDS implementations will take over, consuming these and serving them to our gateway. So if we look at the istio-system namespace, we can see that we have our ingress gateway here. If we look at the Gateway resources, we have a Gateway resource here called hello-world, and in this case we're listening for a particular host.
C: We're exposing it over HTTPS, and the most interesting part here is that we're using the certificate that was created by cert-manager and is being served over SDS: the ingress gateway is talking over a Unix domain socket to the SDS agent and getting its certificates that way. The last thing we'll look at is the VirtualService, which then does the routing from the gateway. So the gateway will be listening on HTTPS, terminating the TLS there, and then routing to our backend service, in this case hello-world.
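The HTTPS server block on that Gateway would be roughly the following sketch; the key detail is the credentialName, which must match the secret cert-manager created so the gateway can fetch it over SDS instead of through a volume mount:

    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      name: hello-world
      namespace: istio-system
    spec:
      selector:
        istio: ingressgateway
      servers:
      - port:
          number: 443
          name: https
          protocol: HTTPS
        hosts:
        - "hello.example.com"
        tls:
          mode: SIMPLE                       # terminate TLS at the gateway
          credentialName: hello-example-tls  # served via SDS, no volume mounts required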
C: If we look here, we see we have hello-world v1 and hello-world v2, and we'll be routing to hello-world v1. Now I'll show you what the curl command looks like. If you take a quick look, we're calling the service, but since I'm using curl, we're going to match on a particular domain name, and we're using the xip.io DNS resolver here.
C: So we kind of have to do a little bit of munging to make the gateway realize what DNS name we're actually using, and we have to add a little bit of magic here with the --resolve part of the equation. But that's okay; if you have real DNS certificates and a real DNS provider, then you don't have to jump through these hoops.
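The shape of that command is roughly the following hypothetical reconstruction; the hostname and IP are illustrative:

    # Point the requested hostname at the ingress gateway's external IP without
    # real DNS, then call the service over HTTPS.
    INGRESS_IP=203.0.113.10    # external IP of the istio-ingressgateway Service (illustrative)
    curl -v --resolve hello.example.com:443:$INGRESS_IP https://hello.example.com/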
C: So if we do a curl on this, we're going to notice something not so ideal.
C: If we look closely here, we see that the certificates are being presented, but our local computer is not trusting the certificate fully. That just happens to be because there's an intermediate chain certificate that is not included in the server-side certificate, and that is an issue I had setting up the certificate through cert-manager. It's something that we can fix; I just didn't have enough time before this demo.
C: But if we momentarily, and you never want to do this, just skip that intermediate chain verification, we can see that it does succeed and we do see our response from version 1 of our service. So that's using Istio's ingress gateway, combined with some automation around certificate management, and exposing HTTPS to the outside world. For the last demo that I'm going to show here, let's say that works great.
C: We have some clients that use that, but we need a more capable API gateway and more sophisticated security at the edge of our cluster, at the edge of our mesh. So we might turn to something like Solo's Gloo edge gateway. The Gloo edge gateway has a nice UI for creating new APIs, for configuring routing rules, and so forth.
C: Let's do that demo. This should add a new routing rule for our service that routes from the edge to hello v2, so we can treat this as a multi-tenant ingress solution. Here we see that it's pending; in a second we'll be able to call it. Let's come over here, and we see that the health of our API gateway has improved, because we're now exposing a proper route.
C: Now, if we do glooctl proxy url and take the URL it gives us, we can call our v2 service, and you can see that hello v2 works there. But the important thing to note here is that Gloo's gateway is actually tied in directly with Istio's SDS service. One way we can show that is by looking at the gateway deployment here in Kubernetes: we can see that we're actually mounting the SDS Unix domain socket into the Gloo proxy, which is basically Envoy.
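That call looks roughly like this; glooctl is Gloo's CLI, and the route path below is illustrative:

    # "glooctl proxy url" prints the externally reachable address of
    # Gloo's gateway proxy; then we call the route configured in the UI.
    GLOO_URL=$(glooctl proxy url)
    curl $GLOO_URL/hello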
C: And then, if we look at how we're talking with the hello-world v2 service, we can also see that we're pulling the configuration directly from SDS, and not from the mounted mutual TLS tokens that you usually see with Istio. So, coming back here, let's wrap up the demo part, at least. What we did was implement a multi-tenant cluster using various capabilities from Istio and its native gateways.
A: Awesome, thanks Christian. So let's talk about what's new in the latest release of Istio. Istio 1.3 came out around September 12, so just about a month ago, roughly. It had 52 major improvements covering about 600-plus commits, and over the course of Istio's lifespan we've seen 400-plus contributors coming from 300 companies.
A: The reason we like to highlight this every time is because we want to remind everyone in the community that there's a long tail of contribution coming from a lot of different areas. It's not really driven by just one or two big entities; there's a lot of community contribution, and to be honest, we would welcome additional community input. The community meeting notes and working groups are all up on GitHub, out in the open, and we would love to have even more folks attend the community meetings and share their experience.
A: Get on the Slack and participate, because there's a lot of work to do: there are always documentation updates, and there's always room for new blog posts; we'd love to have more of those. There's a lot of work out there that we want to share, and we'd love to get folks involved. The link to the full release notes is there, and we'll post the slides; that link should be clickable if you want to dig into it. But I did want to highlight a few things that we thought were really important.
A: So, Istio's key themes for 1.3: if you remember, 1.2 was really about a lot of internal machinery, improving a lot of the internal processes to make releases more predictable on a quality basis and more predictable on a timing basis. Now that that work is done with 1.2, with 1.3 we turned back outward and asked: how do we improve the user experience for new users adopting Istio from day zero?
A: How do we improve the user experience for debugging challenges that do come up with Istio? With configuration there's still a large API surface, so a lot can happen, and we want to make sure that we give you all the tools you need to debug those problems. And then we wanted to focus on how we support more applications without having to update configuration, or without additional configuration altogether. Those were the three key themes for Istio 1.3.
A: Some of the highlights are particularly around the UX and reducing configuration for applications. One of the biggest ones is that the container port is no longer required in your deployment spec. In the past, we required you not only to have a particular port naming convention, so that we could figure out protocols and go from there, but we also required you to specifically declare your container port. We have removed that requirement altogether; this was a change that had been introduced a few releases ago and is now a GA feature.
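For context, the port-naming convention he mentions looks like this illustrative Service; before Istio 1.3, the matching port also had to be declared as a containerPort in the Deployment's pod spec:

    apiVersion: v1
    kind: Service
    metadata:
      name: hello-world
    spec:
      selector:
        app: hello-world
      ports:
      - name: http-web      # the "http-" prefix is how Istio learns the protocol
        port: 8080
        targetPort: 8080    # pre-1.3, this also had to appear as a containerPort
                            # in the pod template; that declaration is now optional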
A: We've also made it possible to more easily customize the generated Envoy config that we produce for use in the Envoy sidecar proxies. And then there's a host of interesting experimental features, which kind of set us up for the future, and I want to talk through those really quickly as well. The first one is mixer-less telemetry. What that means is that we've now got a setup where you can configure the sidecar proxy to bypass Mixer altogether for telemetry and report telemetry information directly.
A: That's tracing, logging, and monitoring data going to your backend infrastructure for those services directly from the proxy. It no longer goes through Mixer, which is this kind of single point of in-and-out that has led to potential performance issues in challenging configurations, things like that.
A: At super high scale, instead of having to tweak Mixer for those high-performance scenarios, we wanted to start down this path of removing Mixer from that path altogether. That's a really big feature, and we'd love to get more eyes on it and have more folks test that approach out. We've also added intelligent protocol detection. If you remember, I mentioned earlier that we require you to name your ports; it's a key requirement for pods and services to become part of the service mesh.
A: We're actually starting work now on whether we can intelligently detect the type of connection you're using, so that we don't have to have you name the protocol yourself. Instead of naming your ports http or http-something-or-other, we should be able to detect that it's HTTP traffic. There's some support in there today for a subset of the total protocols that Istio supports, and more is certainly coming, but again, this is another feature where we'd love to get more eyes on it.
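In other words, with protocol detection enabled, a port entry like this illustrative fragment of the Service above could drop the protocol prefix and still be handled as HTTP:

    ports:
    - name: web             # no "http-" prefix; the proxy sniffs the protocol instead
      port: 8080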
A: Another very popular enhancement, I think, was driven not only by users but also by the engineering team building Istio itself: they wanted to move to an operator-based install that was a little bit more in line with how they saw the system growing and maturing over time, and that would drive a better experience around things like upgrades, etc. The operator-based install, again, is experimental, but it's available today, and it's actually executed by the next one, which is istioctl experimental.
A: Specifically, if you run istioctl x, or istioctl experimental (x is just an alias for it), you'll see all the available commands that are in there, and there have actually been quite a few added in Istio 1.3. I want to focus on two in particular, which are pretty awesome. The first one is analyze.
A: This is one of the first releases where we have istioctl x analyze, and what it does is actually analyze your YAML files, and not just for API adherence or API conformance; we were already able to do that with the istioctl validate command. With the analyze command, we're actually able to tell you what's going to happen, what these YAML files are going to do to your cluster.
A: We can also point it at a live cluster: run istioctl x analyze against the current kubeconfig you've got configured, and it will go and analyze the running live cluster and give you a readout of any configuration problems you might be having, where individual manifests or individual API objects may be configured correctly, but putting them together causes some kind of conflict. And then there's the third part of that, which I think is the best part.
A: You can actually now simulate the effect of applying the YAML that you've got. So you can combine the first two, analyzing YAML files and analyzing the live cluster, and see a simulated impact: take my running cluster, simulate the application of these one or two YAML files, and show me what's going to happen to the cluster; tell me if it's going to break things.
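Those three uses look roughly like the following sketch; flag spellings have shifted between releases, so check istioctl x analyze --help:

    # 1. Analyze local YAML files, beyond plain schema validation.
    istioctl experimental analyze my-virtualservice.yaml

    # 2. Analyze the live cluster reachable through the current kubeconfig.
    istioctl x analyze --use-kube

    # 3. Combine both: simulate applying the files on top of the live cluster.
    istioctl x analyze --use-kube my-virtualservice.yaml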
A: So this is a really powerful feature, and this is just the beginning. This is a bunch of work that came out of a separate working group that spun up and was very open and interested in feedback; they took a lot of good approaches and applied them here to this analyze functionality. So this is a growing area, and we're hoping to see a lot more work come out of it, but the point being, we can do a lot more to get out in front of potential configuration issues.
A: The second one goes back to my previous comment around the operator. The istioctl experimental manifest command is the basis for the operator-based install, and it lets you generate all the manifests you need to apply to your Kubernetes cluster to deploy Istio, or you can actually generate and apply them right inline. But it also gives you the ability to diff against multiple different installation profiles or manifests.
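Roughly, under 1.3's experimental tree that looks like the following sketch; the diff paths are illustrative, and subcommand spellings may differ by release:

    # Render everything the installer would apply, or render and apply in one step.
    istioctl x manifest generate > istio.yaml
    istioctl x manifest apply

    # Compare two rendered manifest sets, e.g. different installation profiles.
    istioctl x manifest diff manifests-default/ manifests-sds/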
A
So
if
you
really
wanted
to
dig
into
the
differences
between
maybe
the
default
installation
and
the
SDS
configuration,
you
could
do
that
you
can
also
figure
out
how
to
dive
into
migrations
from
things
like
migrating
from
the
helm
approach
to
the
operator
based
approach.
So
there's
a
bunch
of
different
options
here
that
are
really
powerful
to
again
change.
A: Again, this is another area that we think is going to grow quite a bit, and we would love to get more eyes on it for testing. Before we close, we want to talk about what's coming up next. We've been running on a schedule of about three to four weeks after individual Istio releases.
A: We have the Service Mesh in the Real World topics, and for the next one we're targeting a security-centric example: we're going to dive a little bit deeper and do authentication and authorization with Istio. The timing for Istio 1.4 is somewhere around late Q4; I think right now they're targeting just before Thanksgiving, so about mid-November. So we may have the webinar either early December or earlier, depending on interest.
B: And I was actually going to say we have security lined up as kind of the next one, because we were talking about traffic leaving the cluster and traffic coming into the cluster, and how we can secure that as the next step. But if folks have specific questions or use cases that they're interested in exploring more, please do post them on Twitter and tag us, or even here in the chat window.
B: That's input we want to take, to make sure that we create demos and scenarios that you may be thinking about, or that you are looking to build and just want to see how Christian and Sandeep would put together. So with that, any questions out there? There was one that came in through a different channel asking about HTTPS and how to set that up, if you guys want to go into that a little bit.
C: So in our demo, both the Istio ingress gateway as well as the Gloo edge gateway support HTTPS, and both can be automated with cert-manager to provide those certificates; I kind of tried to show how that works in combination with cert-manager. A couple of things to keep in mind: whatever the Istio ingress gateway, or Envoy at its core, can do, basically both Istio and Gloo can do.
C: The second thing is that we could also do mutual TLS at the edge here with Gloo or the ingress gateway, and the settings for that are fairly similar, in that you've got to supply the server-side certificate as well as the client-side trust. So both approaches support it, yes.
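On the Istio side, that edge mutual TLS setting is roughly this change to the Gateway's server entry (a sketch; the referenced secret would also need to carry the CA certificate used to verify clients):

    servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      hosts:
      - "hello.example.com"
      tls:
        mode: MUTUAL                       # require and verify client certificates
        credentialName: hello-example-tls  # must also include a ca.crt entry for client verification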
A: The demo link that we've got here will go through how to set up, first, the out-of-the-box Istio ingress gateway, and it has some instructions on how to create a second ingress gateway, an example one that you can use. What we'll be adding to that repo over time is more of the steps to set up cert-manager and HTTPS. And cert-manager is a component that is actually also included in the Istio release.