Description
SoloCon 2022:
Modern apps, multi-cloud: build it with Google Anthos and Gloo Mesh
Speakers:
Richard Seroter
Director of Outbound Product Management - Dev tools, serverless, containers, Google
Christian Posta
VP, Global Field CTO, Solo.io
Session Abstract:
If you're looking to build modern applications and span multiple clouds, you're going to need an advanced Kubernetes platform and an enhanced Istio service mesh. Join this session to learn how leading organizations are leveraging Anthos and Gloo Mesh together to make complex environments easy for developers and operators.
Track:
Service Mesh and Application Networking
Christian:
All right, thank you all for joining our session here at SoloCon. We'll be talking about modern applications, and how Google Anthos and Solo Gloo Mesh come together to complement each other and provide a powerful platform for running these modern applications.
Richard:
Yeah, you know, we've just been Twitter friends for most of our time, and so we finally get to hang out in person.

So you'll see me doing demo threads and things like that, just because this is an awesome time to be doing app development and I like to show off what we've built. All right, so I'll jump in, Christian. I'll go through a few things on the Anthos side: what's the point of all this hybrid and multi-cloud effort? Then I want to hear from you about how Gloo comes together with this; it tells a pretty cool story together. You're going to show off some tech, which is always great, and then we'll send people off with their free puppy or whatever the conference gift is.

So I do like to start, when we think about all this platform tech, with this: you and I, Christian, geek out over tech and enjoy this stuff, but it's still all in service of outcomes, right? We want to build better software and do better business. Look at the State of DevOps report: the last version we did was last year, and we're kicking off the 2022 edition right now.

When you look at it, teams that are good at software do two things really well: they ship more often, and they keep their systems online more. Everything we're doing when we build platform tech should be somewhat in service of that, helping you ship more often, reducing complexity, making life a little easier, but not sacrificing stability and resilience. And at the same time, this data is starting to show that this isn't nice-to-have, resume-driven development stuff.

So whatever we're doing with Kubernetes and service meshes, none of it matters if you're not achieving these things, although together, I think, we do help this a lot. Forrester just came out with data in their recent public cloud container wave, and look, we're really working hard here at Google to have a premium container experience to enable some of those outcomes. We believe containers are the modern unit of compute at this point: VMs are too big, functions are sometimes too small.

So when we think of a container, it's really an ideal unit of compute today. Google has made investments in this from the beginning, and so we work hard at asking: how can I help you ship a little faster and have some really stable runtimes? That manifested itself in being the top-rated hyperscaler in terms of a complete container platform, and we're happy about that.

Some of this, of course: look, if I were waving the magic wand, you wouldn't use anybody but Google Cloud, because you make great choices. But that's not the world; I get it. You use other stuff, you're on premises, you're in other clouds. And Google from the start has been really good about investing in a lot of open source tech and other platforms to make multi-cloud a little better. Is it great? I don't know. Can we make it better? Of course we can.

So when I look at how we're investing in multi-cloud: first, we're investing in the open source frameworks. Front-end stuff like Angular and Flutter, or even writing in Go; back-end stuff like Istio and Kubernetes and Knative and TensorFlow and all kinds of great stuff. So if I want those great outcomes, I'm probably using some containers, and I'm also probably using open tech.

That's a huge part of this investment, but then we also want to make sure you're getting those outcomes, and getting that speed, through good platform tech, not just "here's open source, rock on, figure it out." We want to make sure we're giving you some of these stacks, and nobody's doing what we're doing here. You can use BigQuery on Amazon and Azure today; that's mind-blowing. I can do GKE that way with Anthos, I can run Cloud Build in a hybrid way, and Cloud Deploy; I can connect hybrid workloads.

I can do things with Apigee everywhere. So we're working hard to not just throw you some great open source stuff, but also to make this into products and platforms, and then, finally, the practices. Those numbers I showed you at the beginning, those DORA numbers: that stuff you can do anywhere. It doesn't have to be in Google Cloud. We make it fun, but you can do that stuff anywhere. Same with SLSA, right?

The secure software supply chain standard: yeah, we're going to work really hard to make it awesome here, but you can do that anywhere, and that's awesome. So practices, platforms, underlying tech: I think all of those come into play in the multi-cloud stuff. I guess I'll pause for just a second, Christian. When you think of the multi-cloud stuff, this can either be a complete dumpster fire, or you can do it intentionally, and then it's not too bad, maybe even accelerating the business in these areas.
Christian:
I know, these are absolutely great foundations. Practices are something that kind of gets overlooked. People get very excited about the technology, and there's a lot of great technology I'm going to talk about today, but I'll reinforce that the cloud native practices are extremely important. Like you said, you can do them anywhere; just make sure you do them, and that you do them right.
Richard:
Right. Frankly, I tell people that if you're trying to future-proof, you're going to do three things: bet on containers, infrastructure automation, and continuous delivery. Gosh, if you figure out those practices in general, not to mention just good resilience engineering, I don't care what cloud you use, you're probably going to be in great shape. So part of this for us manifests itself with this approach.

If I look at how Google is trying to do stuff: we're doing open APIs all over the place, but proprietary, kind of Googly implementations, which is not shocking, right? I think the industry has said, arguably, that the API is more important than the implementation nowadays. You see, we just launched our Google managed service for Prometheus. Awesome: it's a drop-in replacement for Prometheus, it's amazing, but what makes it really amazing isn't necessarily that interface.

It's the fact that it runs on Monarch, this underlying system at Google that we use ourselves, which takes in a ridiculous amount of telemetry. It's a proprietary, just amazing system for us, but we put a Prometheus head on it, and that's really powerful. And again, we'll support full Kubernetes; we just have none of the ops, because we've automated the heck out of Kubernetes to make it really easy to use.

So I love the fact that we're, as much as anybody, embracing open interfaces, so you have some portability and you can integrate with your best stuff; Gloo is a perfect example. I don't want some weird proprietary stack that I can't bring best-of-breed technology into. What kind of stack is that? I want open APIs so I can integrate best of breed at my edge, at third parties, in the cloud. So that's the bet we're making there. Now GKE, as I mentioned, is a real,

I think, premium representation of this, from security to scale to functionality, all these sorts of things. It's unbelievable. I mean, I enjoy using it, and I don't love writing YAML, which should be enough of a testament that even I will enjoy it, because look, we've automated the heck out of it. Nobody's offering a fully automated Kubernetes like we are with Autopilot.

Literally, we do the scaling, provisioning, and patching. We get paged; our SREs get paged if something goes wrong. So we've made a huge investment in fully automated Kubernetes, and that's great, right? It's amazing foundational tech. Going back to those original goals of shipping more often and being more stable, I need rock-solid platforms, and nobody's going to do Kubernetes like the company that built it. But of course there's the hard part, and Christian,

you and I have been in companies that helped build these stacks: most people don't start at that base layer, or they start there but they don't end there. I'm going to add different layers. I'm adding application experiences, configuration subsystems, provisioning automation, service meshes, and that's all important. That's not bad stuff; that's amazing stuff that turns it into an app platform.

Awesome: a lot of people can take that CNCF diagram and turn it into a stack, right? It's amazing what you can do now without having to buy a bunch of commercial software; you don't have to. But what I see is that when people do that in one place, cool. This gets really tough, or tougher, when I'm trying to maintain a consistent sort of fleet across different infrastructure. Is everything going to be compatible? Does the version of Istio you use here work with this version of Kubernetes? Does this compose with that?

How are you getting logs from here? And so, all of a sudden, you end up with a 100-plus-person platform team, because your custom platform, which was built to prevent lock-in, is now the ultimate lock-in, because nobody has what you have. So it's an interesting world: we use some of the open source stacks to build these kinds of useful platforms, but they can be a beast. And so again, I guess I'll pause for a second: do you see that, or am I making that up?
Christian:
Absolutely, oh, absolutely. And it's not like organizations don't want to just say, "Here, give me a platform that works," but they have a lot of unique constraints, assumptions, and backward compatibility issues, where they do need to kind of put these pieces together, and then the question just becomes: how do you simplify it, right? That's what they're looking for.
Richard:
That's right. You and I have worked in places where it was often about turnkey platforms, and I love integrated, opinionated stacks. I mean, I like that; I think that's great. For many, though, you hit that ceiling kind of quick on what other things can run there, and all of a sudden I do want to swap in my choice for this or that. So I think the future is a little more around composability.

You know, purposeful integration, but composability. So the nutshell for Anthos is saying: all right, we've had this amazing set of open source tech, and we're trying to make it into more of a platform experience, without bogging ourselves down with a very heavyweight, very complex platform. Simplicity, to your point, things like that. And so, when we look at what Anthos is: look, it's a way to have a distributed application platform that's backed by the cloud.

I want to put this application platform and these application services on-prem, in other clouds, and on Google Cloud. I want to manage it from the cloud, so I'm not stuck with just a software-driven stack in every place; I can support this from the cloud. So, at the base, it starts with GKE.

I can run GKE in Google Cloud, of course, but I can also now run GKE on a bare metal node at the edge. Give me a box with two vCPUs and four gigs of RAM sitting at a retail edge, and I can run a full GKE cluster there. I can install it on Amazon, I can install it on Azure, or I can even attach your existing cluster sitting in EKS or AKS. It could be a Rancher cluster on-prem; it doesn't matter. So it starts by saying, all right:

I've got to start with the fundamental unit of compute, something that can run these containers. Terrific. Then I start thinking about, well, how do I manage some of that? What Anthos can bring is some of the fleet management, because all these clusters are connected through a backplane to Google Cloud. It's kind of a thin connection, and a cluster can operate without it, but it's there for basic operations, and so I can manage them and see them.

You know, we're making updates now to make it really easy to even provision and upgrade, all through the cloud, wherever they are. Then you get some application services: a Cloud Run experience for application developers to deploy software, full integration with Cloud Ops. Maybe you say, "I don't feel like dumping a bunch of logs into my on-prem stuff anymore; how about we just store that in your remarkably scalable log system?" Cool, that just works; you don't have to worry about it. Full configuration management.

So if I have 10 clusters, or 10,000 clusters, how can I make them all look consistently the same: same configurations, same policies applied, same security policies? Really powerful stuff, and still with a common identity scheme. Again, whether I'm running in Amazon or Google Cloud or in vSphere, it doesn't matter: I can have a single way I do federated identity, so my developers go fast.

Across all these environments, it doesn't matter where it runs: a single API for my devs, and I can have maybe a better ops story and a consistency story, because I don't have a bunch of snowflake environments everywhere. I can have a consistent fleet, even though it might be distributed across environments. So that's what we're trying to solve: take the best of open source to build a more composable stack that can run everywhere, but backed by the cloud.
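The fleet-wide configuration management Richard describes is, in Anthos, delivered by Config Sync continuously pulling declarative config from Git onto every registered cluster. A minimal sketch (the repository URL and directory layout below are hypothetical placeholders):

```yaml
# RootSync tells Config Sync (part of Anthos Config Management) to
# reconcile this cluster against a Git directory. Applying the same
# RootSync across a fleet keeps every cluster converged on one source
# of truth.
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    repo: https://github.com/example-org/fleet-config  # placeholder repo
    branch: main
    dir: clusters/all      # shared config applied to every cluster
    auth: none             # public repo; use a Secret for private repos
```

Per-cluster differences live in separate directories or are handled with selectors, so the Git tree, not a human, is the record of how clusters are supposed to differ.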
Christian:
Right, and so that's where we jump in. You have this powerful foundation for running your applications, especially at large scale, and where we come in is: once you've deployed the applications,

how do we manage the policy around that from a more centralized place? So it's similar in some respects to what Anthos is doing, but we're focusing on the networking layer, the application networking layer. We're investing in bringing together a lot of these powerful open source projects that do things like build the layer-7 proxy, like Envoy; or bring the control plane, like Istio; or optimize the networking path at a little bit lower layer, like eBPF. And then we're trying to expose additional application protocols and make it simpler to pull data out of the services that are eventually running, with things like GraphQL.

So I'm going to go a level lower and explain a little bit more about what it looks like to manage the application network on top of Anthos, or wherever you've deployed your applications. Once you get started, you start off small, right? You don't usually start off by deploying 10,000 clusters.
Christian:
You'll start small. You'll maybe bring something like Istio in, and you start to expose some of this capability and functionality out to your teams. And, you know, I've seen people do Helm and all these various scripts, maybe plug it into CI/CD, or give the developers free access to Istio VirtualServices and so on. What we're doing here at Solo is asking: how do we make that easier?

We know that in enterprises there are lots of assumptions, and lots of, you know, weird ways that you would never have expected, at least as an open source developer, that people would be using your software. What we try to do is abstract those enterprise details away and give them first-class support for things like multi-tenancy, things like Istio lifecycle upgrades, and doing this without taking down your applications, because there are some thorny things that Istio does at times that get in the way there.
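For reference, the "free access to Istio VirtualServices" pattern Christian mentions looks like the canonical Bookinfo routing rule below. This is the raw Istio API that teams end up hand-editing, and the surface that Gloo Mesh's abstractions wrap:

```yaml
# Plain Istio traffic routing for the Bookinfo reviews service:
# requests from user "jason" go to reviews v2, everyone else to v1.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1
```

Handing every team write access to resources like this is workable on one cluster; the coordination cost is what grows with the fleet.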
Christian:
So we've tried to automate away a lot of those best practices, the things we've learned in the community and from our customers, and build this layer of abstraction to make it easier to interact with Istio, and to do that in a, you know, multi-org, highly tenant-driven environment. But we don't want to go all the way to what the traditional enterprise apps did, where they built these big, clunky UIs and proprietary configuration formats and all this stuff.

We want to stay true to the configuration-as-data approach and build workflows into Git. So everything we've built here at Solo, like our Gloo Mesh management plane for managing Istio, is driven by declarative configuration and can be dropped into a Git-based workflow, or a policy engine like Anthos has, and you don't get tied down to those finicky, hard-to-use, hard-to-automate APIs.

The next thing we bring to the table is: how do you get traffic into these clusters? And how do you do that, again, following a very cloud native and GitOps approach?
Christian:
We have a gateway as part of that, and we can use the same APIs for driving the behavior of so-called north-south directional traffic as well as east-west directional traffic, including security and rate limiting and all of these things.
Christian:
Now, it gets more interesting once you expand out to more than one cluster, and a lot of our customers are running this way for high availability and failover, and even tighter controls over tenancy and so on. So we've built a developer portal that allows you to self-service and get access to your APIs, and to drive the policies about how things should be secured and how routing should happen, whether that's at the edge or in the east-west direction, using the developer portal and building self-service on top of a communications platform like this.

The dev portal is driven by custom resources; there's no extra operational database, or all the proprietary formats and UIs that you'd see in older, legacy tech. You can drive the developer portal, the traffic routing, and the security policies directly through your GitOps engine. And then, getting to multi-cluster, this extends out, right? So now you get a consistent multi-cluster API for managing your traffic policies, your access policies, and your extension policies.

Maybe you want to deploy WebAssembly extensions into your data plane across multiple clusters. We focused on having a single pane of glass and a single point of entry for end users, to simplify the operational and end-user experience of managing these, because this can get kind of tricky, right? It's not just about placing the exact same configs on all the clusters.
Christian:
What we've seen in our users and our customers is that some apps are deployed on some clusters, other apps are deployed on other clusters, and then maybe in a public cloud, or in a different cloud, you have a different set of apps. But those apps are communicating with each other, and there are compliance and regulatory policies about how they're allowed to communicate with each other.

When things fail over, they have to behave a certain way. And so, how the configurations and the policies differ between clusters and environments, that has to be kept somewhere, right? We don't want that in human heads, with people making changes to YAML independently of what's really supposed to be happening in the other clusters. So the management plane helps simplify and codify a lot of these policies, which then get implemented on the various clusters where they belong, and again, this can be

stretched out: maybe you're on-prem, maybe you're in Google Cloud, maybe for some reason you have to go to another cloud. This is a way to effectively enforce multi-tenancy, bring along consistent API gateway technology, and drive the policies and behaviors consistently.
Christian:
You might even have different versions, like you said, Richard, of Kubernetes or Istio, and you need to kind of unify and manage these all together. And so that's a big part of it: as you scale, as you get to hundreds, thousands, tens of thousands of clusters, this problem becomes extremely difficult. Are these the types of things that you're also seeing when people are deploying out to lots and lots of clusters, potentially on multiple clouds or on-prem, and so on?
Richard:
Absolutely, yeah. Especially since, as you say, not all clusters are going to be identical in their workloads; some are used for dev, test, and prod. I was talking to a company this week that does dev on one cloud and production in Google Cloud, and they still have to promote consistency between those, in some of the stack and traffic. That's coming up more and more as people start to take advantage of things in different places. So something like this is killer; that's super helpful.
Christian:
Right, yeah, exactly. And again, I want to stress this, because it's something I've seen over the course of my career: people adopt new technology, they're so excited, the vendor says it does a million different things, and then they put it in place and realize they've created a new silo, and all of the processes and stuff they've created around it create more bottlenecks.

And so I want to emphasize again that this plugs into what we believe is the right way to build automation and to build self-service. We hear from our customers all the time that GitOps-based, configuration-as-data workflows significantly improve the developer experience. So no matter how complicated it gets, and you might be using the edge gateways and Istio and eBPF and all this stuff,

you don't have to worry about all that. Gloo Mesh is managing and simplifying all of it to improve your security posture and the power of your application networking.
Christian:
Now, this is a little bit of a busy diagram, but what I've tried to do is distill some of the architectural patterns we've seen, where people are deploying this application networking technology, service mesh, this kind of stuff, across multiple clusters, potentially multiple clouds, or on-premises and public cloud, and there are DMZs, there are virtual machines.

There are, you know, all of these different, maybe, lines of business and organizations, all this stuff. But what we've seen is the power that you can get over the network like this when you push the policies down to the level of the application; and operationally, you can simplify this if you're using a lot of the same technologies regardless of where you're deploying.
Christian:
And then, like I said, that's where we come in: we support all that stuff, and we build the tooling to simplify the operational experience around it. Gloo Mesh Enterprise is the product that we sell and deploy at our customers. It comes with Istio support, multi-tenancy APIs, extensibility with WebAssembly, and a native API gateway built on Envoy proxy, and these end up being the foundations for very powerful control over application networking.

Like I said, developer experience is one of the biggest things we see our customers get very excited about once they've actually proven this out. Getting consistency of operations is huge: it reduces manual mistakes, configuration errors, and, ultimately, outages. And you can set up multi-cluster for high availability and dynamic failover.
Christian:
So you don't have to go in there and say, "All right, I want this percentage of traffic to go here and that percentage to go there." You can just set up locality rules or priority rules, this type of stuff, and plug it in with your cloud CAs and your PKI to get

a little bit more of a zero-trust foundation in your communication. And, of course, we support VMs and, you know, hybrid, diverse types of workloads. So, enough talking: how about we jump right in, and I show you Anthos and Gloo Mesh working together to provide some of these benefits?
Christian:
If we click on the workloads, we can see all of the workloads that are running across these clusters. You can see Istio is deployed, and some parts of the Bookinfo demo are also deployed, which we'll go into in more detail in a moment. We can see that some of these endpoints are exposed publicly as gateways, and we'll also talk a little bit more about what those gateways do. From the Anthos dashboard, you can also see what configuration policies are set up for these clusters or groups of clusters.
Christian:
For example, if we take a look at the policies set up for one of these clusters, we can see that we have things set up where certain users and groups of users can deploy apps and configure their apps a certain way. In this case we're limiting resource usage, or restricting to just certain IPs, and these policies get enforced on the cluster by Gatekeeper, providing a level of consistency across these clusters.
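The resource-usage restriction shown in the demo can be expressed as a Gatekeeper constraint. A sketch using the community gatekeeper-library's K8sContainerLimits template (the ConstraintTemplate itself has to be installed on the cluster first, and the limit values below are illustrative):

```yaml
# Rejects any Pod whose containers declare CPU/memory limits above
# these ceilings. Assumes the K8sContainerLimits ConstraintTemplate
# from the open-source gatekeeper-library is already installed.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sContainerLimits
metadata:
  name: container-limits
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
  parameters:
    cpu: "500m"    # illustrative ceiling
    memory: "1Gi"  # illustrative ceiling
```

Because constraints are ordinary Kubernetes resources, Anthos Config Management can sync the same constraint to every cluster in the fleet, which is how the dashboard view stays consistent.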
Christian:
We can even log into the clusters here that might be running in a different cloud, for example, the Azure cluster. So if we come over here and get the credentials to be able to log in, we can `kubectl get nodes` and see that we have access to the Kubernetes cluster, the Anthos cluster running in Azure.
Christian:
So once you have your platform set up and your clusters are running, you need to deploy applications, and you need to control the traffic and the security policies on the wire between the services, and that's where the service mesh part comes in. We saw that Istio is deployed; let's take a little bit closer look at what the demo environment looks like. We have three different clusters.
Christian:
One of those clusters runs the management plane, which is responsible for providing an API on top of the multi-cluster environment and for actually orchestrating and reconciling configuration across this environment so that it's consistent. And what I mean by consistent isn't that it's the same on every cluster, but that, given the context, it is correct.
Christian:
Let's take a look at an example. In cluster one we have the Bookinfo demo: we have the product page deployed in one namespace and the rest of it deployed in a separate namespace, but you can see we've overlaid an additional construct, or API, on top of this called the workspace. In Gloo Mesh, a workspace allows you to group namespaces together for a specific team across multiple clusters.
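A workspace of the kind described here might look like the following. The field names follow the general shape of the Gloo Mesh 2.x API, but treat the exact schema, and the cluster and namespace names, as assumptions to verify against the Gloo Mesh docs:

```yaml
# Groups the Bookinfo team's namespaces, on whichever clusters they
# live, into one tenancy and configuration boundary. Created on the
# management cluster.
apiVersion: admin.gloo.solo.io/v2
kind: Workspace
metadata:
  name: bookinfo
  namespace: gloo-mesh            # management plane's admin namespace
spec:
  workloadClusters:
  - name: cluster-1               # assumed cluster names
    namespaces:
    - name: bookinfo-frontends    # product page
    - name: bookinfo-backends     # reviews, ratings, details
  - name: cluster-2
    namespaces:
    - name: bookinfo-backends
```

Policies and routes the team writes are then scoped to this workspace, rather than to raw namespaces on individual clusters.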
Christian:
So in this case we have the Bookinfo workspace. It has a couple of namespaces in it, and the Bookinfo team then has the ability to go in and configure routing, security policies, that kind of stuff, for their applications. We also have another workspace called the gateways workspace. In this workspace we have deployed the Gloo Mesh Gateway, which is built on top of Istio's ingress gateway. This adds rate limiting and external auth capabilities for OIDC,

that kind of stuff, and it also has request transformation and web application firewalling built into the proxy itself. So the gateway team can configure and expose, you know, route, the ports and protocols on the gateway and delegate that to the other teams, like the Bookinfo team, which can then import the gateway workspace and export resources like route tables and so on to it. Let's take a closer look at what we have.
Christian:
So we have the Gloo Mesh dashboard here, and we have our two workspaces, Bookinfo and gateways. If we click on Bookinfo, we can see that we have a few resources involved here, and some policies that have been set. If we click on "see more details," we see we are exporting to the gateways workspace, so we can control the traffic routing to our application from the configuration that we set, which gets exposed on the gateway.
Christian:
We can also see that we have the two clusters, and we see the flow of traffic between the various workloads in those clusters. Here's the Bookinfo example itself, pretty familiar. If we refresh, we see that we're only on cluster one; we should only see black stars or no stars, right? So now, let's say we want to specify rate limiting across both of the clusters for anyone calling the product page of this Bookinfo service.
Christian:
For us to do that, let's actually see if we can call the service in a way that would trigger rate limiting; we don't see any rate limiting at the moment. All of our configuration in Gloo Mesh is specified as declarative configuration.
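The rate limit about to be applied is itself just another declarative resource checked into Git. The sketch below follows the general shape of a Gloo Mesh rate-limit policy attached to the product page route, but the API group, kind, and field names are assumptions to verify against the Gloo Mesh documentation:

```yaml
# Illustrative only: attaches a rate limit to routes labeled
# route: product-page. The referenced server config (where the
# actual 1-per-minute descriptor would live) is hypothetical.
apiVersion: trafficcontrol.policy.gloo.solo.io/v2
kind: RateLimitPolicy
metadata:
  name: product-page-limit
  namespace: bookinfo-frontends
spec:
  applyToRoutes:
  - route:
      labels:
        route: product-page       # a labeled route in a RouteTable
  config:
    serverConfig:                 # assumed reference to a
      name: rl-server-config     # RateLimitServerConfig defining
      namespace: bookinfo-frontends  # the per-minute descriptor
```

The point is less the exact schema than that "turn on rate limiting" is a file in a repo, which is what makes the GitOps step that follows possible.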
Christian:
So we could come over here; let's actually go to the demo and jump into the
workflow. Argo CD is what we're using here in this example, so we're going to synchronize our new configuration, which will enforce rate limiting. We'll give that a second; it looks all good. And now, when we come back and call the product page and just grab the headers, we see a 200, and then we see that we've been rate limited, at more than one request per minute.
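The Argo CD side of that workflow is an Application pointing at the Git directory that holds the mesh policies; syncing the app is what pushed the rate-limit config live. A sketch (the repository URL, paths, and names are placeholders):

```yaml
# Argo CD continuously reconciles the cluster against the policies/
# directory, so enabling a policy is a Git commit plus a sync.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gloo-mesh-policies
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/mesh-config  # placeholder
    targetRevision: main
    path: policies
  destination:
    server: https://kubernetes.default.svc
    namespace: bookinfo-frontends
  syncPolicy:
    automated:
      prune: true     # remove resources deleted from Git
      selfHeal: true  # revert out-of-band edits
```

With `automated` sync enabled, the manual "Sync" click in the demo would not even be needed; the commit alone would roll the policy out.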
Christian:
So let's take a look at that. Let's scale the product page down to zero, and let's come back and refresh our service here. So, refresh: we see we're still getting traffic. Although we've scaled things down, we're still getting traffic, and if I refresh enough times, we see that we actually get the red stars, because we failed over to cluster two, the second cluster, right? And that's where we have reviews v1, v2, and v3.
Christian:
So you can see that Gloo Mesh layers on top of and around Istio, and on any of the clusters on which it's deployed, it provides a more simplified API for specifying things like a virtual gateway, which may span multiple clusters (it becomes a decentralized API gateway), or a virtual destination, which can use locality or priority and fail over between clusters.
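The failover behavior in the demo is what a virtual destination provides: one stable hostname backed by the reviews workloads on every cluster in the workspace, preferring local endpoints and failing over when they disappear. A sketch in the shape of the Gloo Mesh 2.x API (verify the exact fields against the docs; the hostname is an assumed internal name):

```yaml
# One mesh-wide hostname for reviews; traffic prefers in-cluster
# endpoints and fails over to other clusters when they are unhealthy.
apiVersion: networking.gloo.solo.io/v2
kind: VirtualDestination
metadata:
  name: reviews
  namespace: bookinfo-backends
spec:
  hosts:
  - reviews.mesh.internal   # stable host that clients call
  services:
  - labels:
      app: reviews          # selects reviews on all workspace clusters
  ports:
  - number: 9080
    protocol: HTTP
```

Clients call `reviews.mesh.internal` regardless of which cluster is actually serving, which is why scaling the local deployment to zero simply shifted traffic instead of breaking it.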
Christian:
Things like workspaces group configurations and security boundaries for scaling out in an enterprise, and they give you a consistent feel regardless of whether the traffic is coming in the north-south or the east-west direction. With Gloo Mesh and our Gloo application networking platform, there is no distinction between those directions.
Richard:
That was awesome. You got that all working, first of all, which is always great. But then, yeah, I think again we see the power of having clusters in different places that might be serving different roles, with your connectivity sort of bridging that gap. Because, even as you showed in your architecture, sometimes each of those locations may be somewhat independent.
Richard:
Maybe the app doesn't actually span clouds, but you might have certain things that are homed in one place that you call back to securely, or what have you. So it's cool to see that you can do that fairly straightforwardly, and with some pretty mature tech, so awesome job. A couple of next steps from our side: look, GKE is easy to use. It's part of our free tier, at least for the GKE fee; there's some fee for the compute, of course, but you can jump in here on Autopilot.

Look, Autopilot's amazing, because you only pay per pod. You don't pay for the cluster, so spin up an Autopilot cluster and only pay when you have pods deployed and consuming it. So give GKE a try; it's an awesome way to use Google Cloud. At the same time, you might start looking at how to use something like Anthos to deploy on-prem, to vSphere or bare metal. Maybe I don't have to keep separate stacks between my on-premises microservices application platform and my public cloud.
Christian:
I'm looking forward to that, yeah. And definitely come check out what we're doing with service mesh, with eBPF, GraphQL, these types of things. We at Solo are very big fans of Google Cloud and GKE; we use it all the time, and so do our customers.
Christian:
We do a lot in open source, so go check out our open source projects; Gloo Edge and Gloo Mesh are both founded on open source projects. And, you know, there's no better place to hear about the successes of running service mesh at scale than from our customers, so definitely go check out our customers page.

So, I want to thank you all for joining our session. We've put our contact information up here; please reach out if you have questions afterward. Richard, this has been a pleasure, you know, doing this session with you. I look forward to future sessions as well. Appreciate it.