From YouTube: 001 Welcome to SoloCon Day 1 Keynote
A
Although those trends brought a lot of benefit to their adopters, they also introduced some new challenges, specifically challenges around connectivity: connectivity on the edge (how do you expose your application to the outside world while still enforcing security?), inter-app connectivity (how do you connect the applications in your organization?), and intra-app connectivity.
A
In 2018 we announced Gloo Edge, an API gateway built on top of Envoy that runs natively on Kubernetes. With Gloo Edge we were focusing on solving connectivity on the edge as well as inter-app connectivity. In late 2018 we announced Gloo Mesh. Gloo Mesh is a management plane that helps you streamline your adoption and operation of the Istio service mesh, from a single cluster to multi-cluster, all the way to multi-cloud. With Gloo Mesh we're focusing on solving the intra-app connectivity problem.
A
So at the end of 2018 we had the two building blocks that basically solved all the connectivity problems, but we didn't stop there. We learned from our customers how important it is to tune Istio for your own specific needs. So in 2018 we announced Gloo Extensions. With Gloo Extensions you can build, share, and deploy WebAssembly modules on your Istio service mesh to customize it and extend its functionality.
A
Now, for customers with an advanced cloud-native environment, it is important to educate their developers and help them consume the available APIs. So in 2020 we announced Gloo Portal, the only developer portal for Istio. With Gloo Portal you can catalog and expose the APIs running in Istio for your developers, partners, and community members. Those are the components of the Solo connectivity platform.
A
It is in Solo's DNA to consume cutting-edge technology, so we built the platform on Envoy Proxy, the Istio service mesh, and WebAssembly, with, of course, everything running natively on Kubernetes. And we were busy over the last two years: we consistently delivered new features and enhanced the scalability of our platform.
A
Efficiency and velocity are Solo's key differentiators; we believe in doing. At SoloCon alone we are going to make over five new announcements, so stay tuned. We couldn't have done it without partnering closely with our customers and community members. Today Solo helps thousands of users run Gloo and Istio in production, from small startups to household names and Fortune 500 companies. We have over three thousand active members in our Slack open source community, and we really hope that you will join us too.
A
A few of our customers were kind enough to let us use their logos, but there are actually many more people using Solo technology in production today. Most likely you are leveraging Solo technology when you use your credit card, talk on your phone, check your bank account, or go to your favorite restaurant. We are really honored that you are trusting us with your business. Some of our customers and community members will share their stories at SoloCon; I encourage you to check them out. This is an exciting time for Solo.
A
During the last year we showed quarter-over-quarter exponential growth in users, customers, and community members. We have strong demand for the Gloo Mesh product and, after a very successful beta, we are busy deploying it in our customers' production environments. We are proud to be a leader and an active member of the Istio and Envoy communities, and we are committed to continuing to contribute upstream.
A
Over the last three months we more than doubled our employee headcount and welcomed strong leaders from the cloud and open source communities to our team. Based on the current demand, we will have to double it again after this first SoloCon. So thank you so much for being part of our journey and our growing community. Next, I want to introduce our Global Field CTO, Christian Posta.
B
Thank you, Idit, for that overview of what we've been doing here at Solo. Now I'm going to take a look at some of the problems that we're solving, the approach that we take in solving them, and the products that we've built to facilitate that. My name is Christian Posta and I'm the Field CTO here at Solo. I've been here for over two years now, but I've been helping organizations build and scale distributed systems and microservice-style architectures for quite some time.
B
I've written books on this topic, and I'm extremely excited that for the last four years I've been in the service mesh ecosystem, that we've built a team here at Solo that lives and breathes these technologies and solutions, and that, working with our customers, we see positive influence and positive impact from what these technologies are able to bring. The context that we're looking at is organizations modernizing their applications.
B
These applications need to communicate with each other, and more so now, because now there are more of these services, and communicating over the network, as we've known for a while, is challenging. There are no safety guarantees, there are no security guarantees, there are no timing guarantees, and when a service tries to communicate with another service, it needs to know where that service lives.
B
You want to do things like canary releases or dark launches, and the infrastructure to support this type of thing doesn't exist or has been very ad hoc until recently. That has shown up in various surveys: when you look at them closely, they show that people are able to move faster and deliver code to production faster, but their safety metrics are not improving. In fact, they're going the opposite way.
B
The previous and past approaches, you know, were built for infrastructure that was more static: VMs, physical machines, and so on, and in some cases they were built to support API externalization and consumption and so on. But what we actually saw is that people started to use these systems to solve the traffic problem, or the observability problem, or the security problem, at least to some degree.
B
So, things like Apigee, things like Layer7 or DataPower or Kong.
B
In that same generation of technology, for that underlying context or underlying infrastructure, we see these things being more centralized, and traffic, if service A tries to talk to service B, is going to have to go up through this centralized system, which also means you need centralized processes to lock this thing down and to make changes to it.
B
Now, in a cloud-native, highly ephemeral, highly distributed environment, this type of approach does not lend itself well to solving some of these problems that we're talking about. So that brings us to technology that was purposely and specifically built to live in this type of environment, and that is Envoy Proxy. Envoy was built specifically to bring observability to applications as they communicate with each other over the network, to be able to control the traffic, and to improve their security posture.
B
Envoy is a project that came out of Lyft and is part of the CNCF, one of the first few top-level graduated projects. It has a very vibrant and diverse community, and it is where the innovation in this cloud-native proxy space is happening. At Solo we recognized this very early on, and we have been building our solutions on top of Envoy from the beginning.
B
Now we have products in the API gateway space, which is what we started with. We've branched out into the service mesh space, and we've been very focused on WebAssembly for the last year and a half, or actually longer, to be able to extend this type of infrastructure. And then, connecting the applications and securing them is all good and well for infrastructure folks, but developers want to find and start using the APIs that live in this infrastructure.
B
A developer portal that lives on top of this is extremely important. So let's take a look at what the products do and how they work. We started, like I said, with an edge gateway. This is the simplest of the frameworks that we've built and that people can adopt without having to boil the ocean to get it working.
B
It's a familiar part of the architecture: how do we get traffic into a system using something like an ingress gateway or API gateway? These are familiar concepts. Our gateway is a little bit different, though. First of all, it's built on Envoy and has been from the very beginning; it has been running in production in organizations for three years.
B
We deeply understand it. Second, we've built it in such a way that it is intended to be deployed in a decentralized manner. In this case we're looking at it as a Kubernetes ingress, but if you start to deploy it across multiple clusters, you don't have any centralized bottlenecks or centralized gateways, and each of the clusters can live independently and federate with the others.
B
We've also built it so that you can extend the capabilities of Envoy with WebAssembly. We actually have customers in production with WebAssembly right now, so this is not just some futuristic-looking thing; it's a now thing. And in general it was built to be Kubernetes and cloud native, so there are no extra databases that you need to spin up and no extra high-availability machinery.
B
All the other junk that you need with some of the older technology isn't there. It's extremely easy to install and operate, and it is intended to run across multiple clusters and be federated. Security is a big set of features that we focus on at the edge, because at the edge you are dealing with unknown users.
B
Think of things like rate limiting and request transformation, and improving the trust with some of these unknown users using things like OIDC or OPA or API keys and so on. And then, lastly, it's built for both external and internal API use cases.
B
We see a lot of that internal API use case, where customers have deployed across multiple clusters and, as traffic goes from one cluster to another, or maybe even crosses other boundaries, we're enforcing security policies, enforcing usage policies, and collecting telemetry.
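To make the edge-security features above concrete, here is a minimal sketch of a route in Gloo Edge's VirtualService API that combines routing with external auth and a simple rate limit. This is purely illustrative: the domain, the AuthConfig name, the Upstream name, and the exact option fields are assumptions and vary by Gloo Edge version.

# Hypothetical Gloo Edge VirtualService combining routing with edge security.
# Domain, AuthConfig, and Upstream names are placeholders.
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: public-api
  namespace: gloo-system
spec:
  virtualHost:
    domains:
    - api.example.com
    options:
      extauth:                   # delegate authentication (for example OIDC) to an AuthConfig
        configRef:
          name: oidc-auth
          namespace: gloo-system
      ratelimitBasic:            # coarse per-minute limit for anonymous callers
        anonymousLimits:
          requestsPerUnit: 100
          unit: MINUTE
    routes:
    - matchers:
      - prefix: /reviews
      routeAction:
        single:
          upstream:              # Upstream discovered from the reviews Kubernetes service
            name: default-reviews-9080
            namespace: gloo-system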
B
We built a federation and automation plane to be able to manage this across multiple clusters and simplify the operational day-two aspects of doing something like this. And this pattern can take itself to its logical conclusion, which is across different clusters, different lines of business, potentially even different clouds.
B
You start to build sort of a mesh of edges, where you have all of these different control points and enforcement points for different usage policies, security policies, access policies, these types of things, and you're not restricted to any particular type of infrastructure on which to run this. I've mentioned this is cloud native or Kubernetes native, but it can run outside of Kubernetes as well.
B
An actual service mesh: that's where Gloo Mesh, Gloo Mesh Enterprise, starts to come into the picture. Gloo Mesh Enterprise is a service mesh; it is Istio, upstream Istio, supported for enterprise usage. We can go a little bit deeper, even across multiple clusters.
B
We've built a management plane to federate Istio across multiple clusters, and again across different boundaries, with things like access policies, traffic control policies, federated trust boundaries, and service discovery. These are all extremely powerful features once you start to connect applications to each other. So we've seen the edge, where traffic comes in potentially untrusted, and now we're looking at traffic in the east-west direction, where services are communicating and talking with each other, scaled out across multiple clusters, multiple clouds, potentially even multiple versions: different versions of Istio, different versions of Kubernetes.
B
So Gloo Mesh Enterprise is supported Istio, with upstream long-term support for N minus three, so roughly the last year's worth of Istio releases, with security patches and backporting and so on, and with FIPS-compliant builds for those running FedRAMP and DoD type workloads. And, like I said, we manage this across multiple clusters. We reduce the configuration toil that you have to worry about: Istio's APIs can be kind of verbose and low level, so we've built a simplified API that abstracts away those pieces and is multi-cluster aware.
B
Insofar as it is multi-cluster aware, we're able to do telemetry gathering and log aggregation, and start to build dashboards, graph-based dashboards, on top of the applications that are running across these multiple clusters, so you can see exactly where things are going wrong when they do, which cluster they're having issues in, and so on.
B
And lastly, again, we take security very seriously here at Solo, both at the edge and between our applications with the service mesh, so we're able to get mutual TLS and identity verification end to end between the services, even if they're bouncing from cluster to cluster.
B
We simplify the operations of an architecture like this and enable your application developers to focus on the parts that they need to focus on, which is building the services, not questions like: where do these services live? How do I do timeouts, retries, and circuit breaking consistently across languages and frameworks, and so on? What we've built here, and what you can leverage, is a smart network that's able to do routing, able to do security, and able to elicit the golden metrics from the network to make your system smarter and easier to reason about.
B
And we do that, like I said, in a highly federated approach. Now, stay tuned for day two tomorrow, where we will announce more things around the developer portal. A modern application platform that facilitates bringing new changes to the system, new deployments, and reduced lead time is all good, but you can't have that without also balancing the safety aspects around reducing change failures.
B
Change failure rates, or the blast radius of those changes, and reducing the mean time to recovery: if things start to go wrong, and they will, can you quickly identify where and remediate or fix them in a timely fashion? As well as availability: you don't want cascading failures; you want to be able to route around failures and to be dynamic in the routing, depending on what's happening in the system at any particular moment.
B
It's about successfully deploying and operating things like a service mesh, securing traffic at the edge, and then extending those using WebAssembly where it makes sense, to be backward compatible with some of your existing workloads that might be running in VMs or running somewhere that requires different security hooks, telemetry extensions, and so on. And then, lastly, it's about being able to find these services, see their documentation, test them, and self-service sign up for them; the developer portal is a very important piece of this puzzle.
B
Everything that we're doing here at Solo is based on declarative configuration, so it fits very nicely with a GitOps workflow. Then you slowly start to introduce things like the east-west service mesh, like Istio, and you don't want to do that alone; we have the expertise here at Solo to help you do that successfully and to give you enterprise support for Istio. And once you've gotten Istio into place and you've solved some of the edge gateway problems, you're going to start to spill out into more clusters, more lines of business.
B
You're going to have more extensions that you want to bring to the system, and you'll want to start to build approval workflows and define how teams work with each other.
B
And then, lastly, that's where we start to get the benefits of our APIs: being able to move quickly and simplifying the developer's life and the operator's life. This is where I am especially excited to see and announce what we're doing next, and for that I'm going to hand off to Joe Kelly, who manages and leads the Gloo Mesh team at Solo. Joe, this is all yours; continue the story from here.
F
Thanks, Christian. Hi everyone, I'm Joe Kelly, the engineering lead on Gloo Mesh at Solo.io. Gloo Mesh is a management plane for service meshes that provides a single pane of glass for managing meshes across compute environments and mesh implementations. When you register a Kubernetes cluster to Gloo Mesh, it automatically discovers all your meshes and mesh-injected services. From there, you can define a virtual mesh to expose your services to one another however you see fit.
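As an illustration of the kind of configuration being described here, this is a minimal sketch of a VirtualMesh that groups two discovered Istio installations under a shared, generated root of trust. The mesh names, namespaces, and exact field names are assumptions based on the Gloo Mesh v1 networking API of that era and should be checked against the docs for your release.

# Hypothetical VirtualMesh grouping two discovered Istio control planes
# under a single generated root CA. All names are placeholders.
apiVersion: networking.mesh.gloo.solo.io/v1
kind: VirtualMesh
metadata:
  name: my-virtual-mesh
  namespace: gloo-mesh
spec:
  mtlsConfig:
    autoRestartPods: true        # restart workloads so they pick up the new root certificate
    shared:
      rootCertificateAuthority:
        generated: {}            # let the management plane generate the shared root of trust
  federation: {}                 # default cross-cluster service federation
  meshes:                        # Istio installations discovered on each registered cluster
  - name: istiod-istio-system-cluster-1
    namespace: gloo-mesh
  - name: istiod-istio-system-cluster-2
    namespace: gloo-mesh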
F
Members of the virtual mesh are configured under a common root of trust, enabling secure communication over mTLS across clusters and traditional service mesh boundaries. This empowers you to reason about your distributed services as logical sources and destinations for network traffic, regardless of where they're physically running. Gloo Mesh users can leverage our source and destination semantics to author explicit access policies that specify which workloads can access which services across all clusters in your environment. This helps ensure that your security posture remains intact.
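A hedged sketch of what such an access policy might look like, assuming the Gloo Mesh v1 AccessPolicy API; the service account, labels, and cluster name are hypothetical placeholders.

# Hypothetical AccessPolicy: only the productpage workload may call reviews,
# wherever it runs in the virtual mesh. All names are placeholders.
apiVersion: networking.mesh.gloo.solo.io/v1
kind: AccessPolicy
metadata:
  name: productpage-to-reviews
  namespace: gloo-mesh
spec:
  sourceSelector:
  - kubeServiceAccountRefs:
      serviceAccounts:
      - name: bookinfo-productpage
        namespace: bookinfo
        clusterName: cluster-1
  destinationSelector:
  - kubeServiceMatcher:
      labels:
        app: reviews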
F
As the footprint of the mesh expands, the same source and destination semantics can be used to apply traffic policies such as retries, timeouts, fault injection, and more to the various edges of your service graph. Gloo Mesh traffic policies can be applied globally or at the service level, allowing you to harden, test, or otherwise configure your network's behavior in as fine or as coarse a manner as needed.
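For example, a traffic policy adding a timeout and retries to one edge of the service graph might look roughly like the sketch below. It assumes the Gloo Mesh v1 TrafficPolicy API; the service names, cluster name, and values are hypothetical.

# Hypothetical TrafficPolicy: a request timeout and retries for traffic
# from the bookinfo namespace to the reviews service on cluster-1.
apiVersion: networking.mesh.gloo.solo.io/v1
kind: TrafficPolicy
metadata:
  name: reviews-retries
  namespace: gloo-mesh
spec:
  sourceSelector:
  - kubeWorkloadMatcher:
      namespaces:
      - bookinfo
  destinationSelector:
  - kubeServiceRefs:
      services:
      - name: reviews
        namespace: bookinfo
        clusterName: cluster-1
  policy:
    requestTimeout: 2s
    retries:
      attempts: 3
      perTryTimeout: 500ms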
F
Access to all Gloo Mesh APIs is granted via our expressive role-based API. Administrators can define personas with specific permissions to configure individual services or classes of services and policies at many levels, all the way from global access to any policy down to restricted permission to adjust just a single knob on a single service. This goes a step beyond traditional Kubernetes RBAC and puts guardrails in place to ensure that each user can only affect traffic that they have been explicitly allowed to configure, whether as mesh administrators, service developers, or whatever other roles make sense in the context of your organization.
F
This is only the beginning of the functionality that Gloo Mesh brings to service mesh and beyond. There's plenty more to cover about how Gloo Mesh enables you to observe your services and extend the functionality of the service mesh. For that, I'd like to welcome some additional team members to share what they've been working on. First, let's go to Scott to describe the architecture of Gloo Mesh and how it securely manages multi-cluster mesh configuration.
C
So what do I mean by multi-cluster I/O? Essentially, when you have a scenario where you're managing multiple clusters, any kind of configuration and any kind of interaction with an API server from a centralized location means you need a way to do that I/O: to insert objects into etcd and read them back out. This is already implemented in several projects today, and they all take essentially the same approach.
C
What they do is they take the data plane cluster and register it, which means they create a service account on that cluster. Kubernetes provides a service account token that corresponds to the service account and the RBAC associated with it. Then what we do is take that token, convert it into a Kubernetes REST configuration, and supply it to the control plane cluster, where the controller that manages the data plane cluster is running.
C
This is also what is being done today by Gloo Mesh in our pre-1.0 versions, but there are some drawbacks to this approach. One is that it provides no way to bound the number of clients talking to the remote API server: in a high-availability scenario, where you have multiple control plane clusters, you will wind up with multiple watches on these remote data planes, which can quickly become a scaling problem, especially if you're watching something like pods or endpoints.
C
This also poses a huge security problem. It's something we were told by our customers; for some of them it would be completely unacceptable: the idea of taking a secure Kubernetes credential outside of the cluster where it was created and shipping it into another cluster.
C
Basically, we do not want to ship those credentials across secure boundaries. And the last piece, which is particularly painful for a number of users, is that you cannot drive this cluster registration flow via Helm, because it requires that we first create a service account in the data plane cluster, wait for Kubernetes to provision a token, and then extract that token out, and Helm provides no mechanism for doing that today. That means we cannot independently register, or independently set up, the control plane cluster and the data plane cluster.
C
So, looking at all of these drawbacks and limitations, we've decided to go with a new approach that is going to be rolled out in our 1.0 GA release of Gloo Mesh, and this is something we call relay. The way it works is that it uses an agent-server architecture to establish communication between two Go processes that live in the data plane and control plane clusters.
C
That way, only the relay agent is ever directly responsible for interacting with its local API server, and we are able to come up with our own API: the gRPC stream allows us to control specifically what features and what resources we're allowed to watch and read from a remote cluster, without exposing the whole Kubernetes API server.
C
So let's discuss some of the advantages of this model. One is that Kubernetes credentials never leave the cluster; we no longer have to securely distribute access to any API server to something outside of the cluster. Additionally, the agent can also filter out any unnecessary data. Let's say you're watching the whole set of pods, but you only need to know about the pods that changed: we can now send deltas, which are just the individual resources that changed, or even subfields of those, across this gRPC stream, without needing to utilize the entire Kubernetes watch API.
C
We also have better support for scaling, because we can now scale up the number of control plane clusters without needing to scale the number of watches on the remote API server. And finally, this is really nice because it's an entirely Helm-able setup, rather than one that needs to create a service account token that we then extract into a remote cluster.
C
I would like to spend a little bit of time talking about the handshake that happens in order to establish identity. We're happy to take questions in the chat to explain this in further detail, but I don't want to take up too much time. Just let it be said that the server acts as a certificate authority and signs client certificates on behalf of the agent, so that we can then trust and verify that the agent actually has the identity it says it has.
C
I'm going to pass it back to Joe. Thank you so much for your time; we're really excited to share more about our architecture and designs.
E
Hey Joe, thanks for the introduction. Gloo Mesh comes packaged with a set of observability features that allow the user to get painlessly up and running with a complete observability suite, and I can go over that architecture briefly right now. As we see in this diagram, the top three boxes represent managed clusters; each of them has a set of Envoy proxies as well as the Gloo Mesh agent.
E
The bottom box represents the management cluster, where we've installed the management plane, represented by the Gloo Mesh server. Gloo Mesh leverages Istio to configure the fleet of Envoy proxies to treat the Gloo Mesh agent as a sink for metrics and access logs. This means that any time an Envoy proxy emits a metric or an access log, it will get sent to the Gloo Mesh agent, which in turn forwards it to the Gloo Mesh server.
E
That's a great question. There are two parts to this problem: one is aggregating the metrics in a single place so that we can allow for system-wide, multi-cluster queries, and the other is taking this aggregated data source and wiring it up to a user interface. Gloo Mesh will expose integration points for both of these problems. So if a user already has a sophisticated Thanos federation setup and they want to use that, they can just plug it in as the aggregated data source and then leverage Gloo Mesh for its UI.
E
Alternatively, if a user already has a UI that they're comfortable with, like Datadog or Grafana, they can treat Gloo Mesh as the source of truth for aggregated metrics and plug that right into their UI.
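As a rough illustration of that bring-your-own-UI integration, a Grafana datasource provisioning file could point Grafana at the Prometheus-compatible metrics endpoint served from the management cluster. The service name and port below are assumptions, not the actual endpoint.

# Hypothetical Grafana datasource provisioning file. The URL (service name
# and port) is a placeholder assumption for the aggregated metrics endpoint.
apiVersion: 1
datasources:
- name: gloo-mesh-metrics
  type: prometheus
  access: proxy
  url: http://gloo-mesh-metrics.gloo-mesh.svc.cluster.local:9090
  isDefault: false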
E
Here's a sneak preview of the Gloo Mesh observability UI that we will be showcasing in the demo portion of the observability talk. As you can see, something appears to be wrong with the connection between the product page and the reviews service; we will dig into this in the demo. I hope everyone can make it. Thank you.
F
Thanks, Harvey. If you're interested in hearing more about the nitty-gritty details of how Gloo Mesh collects and presents telemetry data, check out Scott and Harvey's session on observability and Istio. Next, let's hear from Eitan about how Gloo Mesh can make your environment resilient to regional outages and service downtime. Thanks.
G
Similarly, if the user service were to go down in cluster 2, traffic would seamlessly be routed over to the user service in cluster 1, region 1. Now, as I alluded to earlier, this can already be done in Istio, but let's discuss just what you would need to get that done. Well, you would need two virtual services, one per cluster.
G
You would need three destination rules, at least one per cluster. You would need one service entry, one gateway, and one Envoy filter. So right off the bat, that's already eight CRDs, and that's just for this single service, and that's not counting all of the configuration required just to get multi-cluster Istio working, which Gloo Mesh handles for you automatically.
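For a sense of what just one of those eight resources looks like, here is a minimal sketch of an Istio DestinationRule that enables outlier detection and locality failover. The host, region names, and values are hypothetical placeholders, and this is only one piece of the configuration described above.

# One of the eight Istio resources: a DestinationRule with outlier
# detection and locality failover. Names and values are placeholders.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: user-failover
  namespace: users
spec:
  host: user.users.svc.cluster.local
  trafficPolicy:
    outlierDetection:            # eject unhealthy endpoints so failover can take effect
      consecutive5xxErrors: 2
      interval: 5s
      baseEjectionTime: 30s
    loadBalancer:
      localityLbSetting:
        enabled: true
        failover:                # prefer region-1, fail over to region-2
        - from: region-1
          to: region-2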
G
So now let's talk about how this would work in Gloo Mesh. Well, it would require one virtual destination and one traffic policy. That's two CRDs compared with eight! So now that we've talked about just how simple using the virtual destination API can be in comparison to the Istio API, let's actually take a look at an example of how to use it, and what it would look like to accomplish the failover that we mentioned in the previous slide, specifically with the globally available user service.
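The talk walks through the actual configuration on the slide; as a hedged stand-in here, a VirtualDestination for a globally routable user service might look roughly like the sketch below. It assumes the Gloo Mesh Enterprise API of that release, and the hostname, port, selectors, and virtual mesh reference are hypothetical.

# Hypothetical VirtualDestination: a global hostname for the user service
# that routes to the closest healthy instance and fails over automatically.
apiVersion: networking.enterprise.mesh.gloo.solo.io/v1beta1
kind: VirtualDestination
metadata:
  name: user-global
  namespace: gloo-mesh
spec:
  hostname: user.global          # new global hostname clients can call
  port:
    number: 8080
    protocol: http
  localized:
    destinationSelectors:        # back the hostname with every matching user service
    - kubeServiceMatcher:
        labels:
          app: user
    outlierDetection:            # eject failing endpoints so traffic fails over
      consecutiveErrors: 2
      baseEjectionTime: 30s
  virtualMesh:
    name: my-virtual-mesh
    namespace: gloo-mesh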
G
Lastly, there is our virtual mesh: that is the virtual mesh in which this virtual destination is exposed. The magic of this API is that it allows you to group a bunch of services across different clusters, localities, and anywhere around the world into one logical group, such that traffic will be routed to the most proximal service at any given time, and if any of those services were to go down, it will immediately route to the next available option, with just one CRD. Thanks, Joe, back to you.
D
Our goal with Gloo Mesh is to fully enable WebAssembly usage and provide a Docker-like experience for service mesh extension development. With Gloo Mesh we handle the Wasm experience end to end: from the developer bootstrapping the code, to building the Wasm filter, to binary distribution to a registry, and finally to deployment to your service mesh workload's sidecar. We have built a developer workflow into our meshctl command.
D
meshctl, the Gloo Mesh command line, can help you get started with WebAssembly. With meshctl you can initialize a new filter from various templates for different programming languages, build the filter code into an OCI Wasm image, push the OCI image to a Docker-like registry, and deploy the image from the registry to a specific workload in your service mesh. Back to you.
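That final deploy step can also be expressed declaratively. The following is a rough, hypothetical sketch of a Gloo Mesh Enterprise WasmDeployment; the API group and version, the field names, the image reference, and the workload selector are all assumptions for illustration and should be checked against the docs for the release you run.

# Hypothetical WasmDeployment: attach a Wasm filter image from a registry
# to the sidecars of a selected workload. All names and fields are
# assumptions for illustration only.
apiVersion: networking.enterprise.mesh.gloo.solo.io/v1beta1
kind: WasmDeployment
metadata:
  name: add-header-filter
  namespace: bookinfo
spec:
  filters:
  - wasmImageSource:
      wasmImageTag: webassemblyhub.io/example-org/add-header:v0.1
  workloadSelector:
  - kubeWorkloadMatcher:
      namespaces:
      - bookinfo
      labels:
        app: reviews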
D
A great question, Joe. As WebAssembly in the mesh data plane is a recent innovation, we have been lending our expertise and helping our customers get the most out of it. Let me share some use cases we have seen as examples of how WebAssembly is used to customize the service mesh for specific business needs, things like custom authorization schemes.
D
Up until now, to implement custom authorization you would have to use a mechanism called external auth. This entails creating and maintaining an authorization server, and it involves a network call from the data plane to the authorization server to make an authorization decision for each and every request.
D
This introduces latency and reliability concerns. In many auth schemes, like HMAC signature verification, for example, WebAssembly can be used to avoid the extra network call and perform authorization directly in the data plane. The next example is custom routing based on request properties, for example modifying the headers based on a specific value in the body of a request.
F
Thanks! We're all looking forward to going deeper on this subject at your session on extending Istio and Gloo Mesh with WebAssembly later today. That's all on Gloo Mesh for now. To hear more about our vision for the product, as well as upcoming features, tune into the Gloo Mesh roadmap session with me and Christian. With that, I'll send it back to Idit to wrap up.
A
Thanks so much, y'all; it's very impressive, the work that you and the team are doing, and thank you for joining the first SoloCon keynote. We have a great lineup of talks for you today, and I really hope you can stay. Before I let you go, there is one more thing. The motivation behind building Gloo Mesh was to make Istio so easy and consumable that it just works, and we keep asking ourselves:
A
Can we make it even simpler? We think we can, with a cloud delivery model. With the move to the cloud, our customers enjoy the experience of consuming their services as managed services, and they asked us to make Gloo Mesh and Istio available as a managed service too. Today I'm so excited to announce Gloo Cloud, the first and only fully managed Istio service mesh for any cloud. With Gloo Cloud you can enjoy the powerful features of Istio and the extensive features of Gloo Mesh without needing to manage them yourself.
A
We are very excited to have you try it, and we look forward to getting your constructive feedback. If you want to learn more about it, I encourage you to check out the Gloo Mesh roadmap session by Christian Posta and Joe Kelly tomorrow. Thanks very much for joining us at SoloCon. Next we will have Christian Posta's live session on how to select a service mesh; I really hope you can join. I'm hoping to see you all in person at the next SoloCon in 2022. In the meantime, stay positive and test negative.