From YouTube: Day 1 Keynote and Welcome
Description
For more great content, visit https://solocon.io
SoloCon 2022:
Day 1 Keynote and Welcome
Speakers:
Idit Levine
Founder and CEO, Solo.io
Brian Gracely
VP Product Strategy, Solo.io
Abstract:
Join Solo.io's CEO and Founder, Idit Levine, as she kicks off SoloCon 2022 with Docker Founder Solomon Hykes, Istio Co-creator Louis Ryan, and VP Product Strategy Brian Gracely, with exciting news, updates, and a warm welcome.
B
And I'm Brian Gracely, Vice President of Product Strategy. You know, Idit, over the last year it's been challenging in many places for many of our customers, but we've also seen them overcome those challenges. They've been building new, modern applications, and they've been doing it at a pace that's faster than ever. What does that mean?
B
Yeah, and one thing we like to point out is that while we've got happy customers, and we love happy customers, Solo has been very unique in that we've found, through a lot of different industry data and surveys, that probably 30 percent of companies who've been looking at meshes have either looked to an alternative mesh or couldn't get it working. Solo is really out there defining what a great mesh looks like, but also making it simple and making sure you can bring Istio and Envoy technology into your environment and do it successfully.
A
Yes, I mean, over the last year we've been working so closely with our customers and learned so much about their environments and their requirements. And honestly, we got to a point where, with all that feedback, we felt it wasn't going to be good enough to keep improving incrementally. We needed to really think about how we could make it a perfect fit for each organization, and that's where Gloo Mesh 2.0 comes in.
A
The idea with Gloo Mesh 2.0 is that there are two concepts that are extremely important. Number one is the API: what we did there is basically come up with the best API to make it simpler to consume, but more than that, it's very important. If you remember, in the past we had a service mesh and an API gateway. We now have them in the same platform, all of them running on Istio, and it was very important to us that the API be the same.
A
So basically, it doesn't feel like you're using two different products, because it's not two different products, and honestly there is a lot in common between them. So we came up with an API that gives you the ability to reuse everything that you're doing with the mesh and at the edge. That's the first piece. It was very, very important to us that it be easy to use, and I think we made it very, very accessible that way.
A
The second one, which is very important, is the workspace. The idea is that, with everything our customers are doing with this infrastructure, in the end they want to do one thing: carve out a piece of that infrastructure and delegate it to an application team. That's all they want, and then they also want to decide how much those application teams can do.
A
So,
that's,
basically,
the
concept
that
we're
announcing,
very,
very
introducing
and
really
excited
about
it
is
workspace,
is
the
ability
to
basically
curve
part
of
your
infrastructure
and
delegate
to
a
specific
team
with
our
work
and
permission.
So
that's
the
two
pieces
that
is
joining
into
the
product.
There
is
way
more
innovation
there,
but
I
will
start
with
this.
B
Yeah, and we're going to talk about that all week. You know, we've heard over and over from our customers that architecture is one of the biggest things that helps them differentiate. The product helps them understand: I'm buying a product today that solves my problems, but it's designed to take on new technologies that come along, whether that's Wasm or GraphQL or other things still to come.
B
And one of the ways that we do that is through the Solo Academy, and you're going to get to experience that hands-on on day three. But let me give you a little bit of an overview of Solo Academy. We realized that we couldn't just give you technology. We needed to make sure that you could understand the technology, get to use the technology, and be certified, so that when you go back to your jobs and your environments, you're going to be expertly prepared to be successful.
B
Now, we've seen over 8,000 people come to the Solo Academies. We believe it's the number one academy for learning about service mesh and API gateways. Over 1,500 people have now been certified on either Istio or Envoy, and soon GraphQL and eBPF, and you're going to see all of that on day three in our hands-on workshops. So we hope you take advantage of it, and we hope you get out there and get certified, because it helps your career and it helps you be successful in your jobs.
B
Obviously, if you're attending SoloCon, you know about the power of service mesh and you know about the momentum that's happening in the industry, but we think it's important for the new people who are here to understand: how did we get here, and why is this important? We thought it was important to look back and put some context around the momentum that we're seeing with service mesh.
C
Hey everyone, it's Solomon. I hope you're all having a good time and that you're all happy and safe wherever you are. I'm very happy to be part of this, thank you for inviting me, Solo team. Honestly, my parents heard that I would be speaking at a conference called SoloCon and, of course, they said, wow, our son is so famous that there's a conference actually named after him. And of course I said yes, that's exactly what's going on, so nobody tell them.
C
So I thought I would tell you a little story about my experience in a way that, hopefully, is relevant to your experience today. I'm sort of an old-timer of cloud native, although I don't feel that old. I did see the last 10 years unfold like it was nothing, and I know some of you in the audience have too, and it's just amazing how much this whole space has grown. So I was thinking about what I could share with you all that would be of interest and useful.
C
A lot of things happened in our industry, and that part of the story is well known: containers did have a big impact, and you know they were just a building block, but an important building block for building even more things. What I thought I'd share is a little bit of the context around the process of launching Docker: why we launched it, how we planned to launch it, what else we planned to launch, and what actually ended up happening.
C
It was a complete platform for your application in the cloud. It did hosting, it had a container runtime, which was sort of this hidden thing at the time, it had a distributed service mesh, it had load balancers, it had a build system, it had data persistence, and all of that was built on top of AWS, with plans to go multi-cloud. At the time our competitors were services like Heroku and Google App Engine, and our innovation was that we magically supported all these different language stacks and all these different databases and components, when our competitors were very tied to one stack, right?
C
This
was
before
heroku
evolved
to
support
multiple
languages
too,
and
then,
after
them
gradually
every
pass.
We
did
that
first
because
we
had
containers
right,
so
our
plan
was
okay.
Well,
this
is
a
difficult
business
turns
out
building
one
platform
that
does
everything
perfectly
for
everybody
is
not
easy.
C
It's certainly not easy for a very large company, and it's even worse for a small startup like us. There were what, 15 of us at the time? This is 2012, 2013. And so we thought:
C
Okay, let's break it up. Let's break it up into the individual Lego bricks, let's open source each brick, and let's switch from building a car to giving away the blueprint for the engine. And then, if we're lucky, we'll build, you know, an engine-related business. Let's turn our competitors into customers. So that was the plan. It took a while to get the money part right, as many of you know, but it definitely succeeded in terms of adoption. But here's the thing to notice.
C
I did not say, let's just open source the container runtime. What we said was: let's break it up and let's open source everything. The container runtime, the builder service, the service mesh, the networking layer, and there are a few others, but I'll focus on the service mesh, because I know this is something that you all are interested in. There's a lot of fun innovation and just great engineering work going on in that area. We basically had cloud native in a little box. It was not a great box, not a scalable box.
C
That ends up putting a lot of stress on the traditional load balancer we were using. It was NGINX at the time, which was very sturdy, very reliable, no issues there, high-quality software, and there were other options. The problem was the very dynamic and elastic nature of the workloads.
C
That was a very unusual dimension of scale. You know, the problem was not that we had too much traffic for NGINX to handle. The problem was that the map of backends that our cluster of NGINX proxies had to forward traffic to was changing too rapidly. We had many, many thousands of endpoints, and then tens of thousands of endpoints, and it changed every time any user of our platform would deploy, scale up, scale down, or any other automated change would happen on the container cluster.
C
Through a special API we regenerated the whole NGINX config, and the problem with that is it became very, very large. It sort of added this little layer of automation on top of a server that was not designed to be automated in that way, and you know, it kind of worked, but we had all sorts of issues. So then we re-engineered that and developed our own service mesh, and we called it Hipache.
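For illustration, the regenerate-and-reload pattern described here looks roughly like the following minimal sketch (hypothetical paths and service names, not the original platform code): every change to the endpoint map rewrites the whole NGINX config and asks the server to reload it.

```python
import subprocess
from pathlib import Path

def render_upstreams(backend_map):
    """Render one NGINX upstream block per service from a service -> endpoints map."""
    blocks = []
    for service, endpoints in backend_map.items():
        servers = "\n".join(f"    server {ep};" for ep in endpoints)
        blocks.append(f"upstream {service} {{\n{servers}\n}}")
    return "\n\n".join(blocks)

def apply_config(backend_map, path="/etc/nginx/conf.d/upstreams.conf"):
    """Write the regenerated config and ask NGINX to reload it."""
    Path(path).write_text(render_upstreams(backend_map))
    subprocess.run(["nginx", "-s", "reload"], check=True)

# Every deploy / scale-up / scale-down event regenerates the entire file.
apply_config({
    "app-web": ["10.0.0.11:8080", "10.0.0.12:8080"],
    "app-api": ["10.0.1.21:9090"],
})
```

With tens of thousands of endpoints changing constantly, the regenerated file becomes huge and the reloads never stop, which is exactly the automation-on-top-of-a-static-server problem being described.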
C
It's what we'd call now a service mesh, and it was a very experimental thing to do at the time, and I think that one's actually open source. But anyway, the funny part is that our strategy that led to Docker was: let's split up all the pieces of our platform and then we'll launch one per week, and that will give us a steady cadence of interesting projects to launch. The first one we launched was Docker, the container runtime, and then we never launched anything else.
C
You know, it took us years just to keep up with the interest in that one, so we never fulfilled the complete plan and vision of launching all of these projects. But that's okay, because you all did it instead: you continued in that direction and built so much more, at such a high level of quality and innovation, that it's hard to keep up with everything that's going on. And the Solo community is right in the middle of all that. It's the same spirit of having fun with technology, solving important problems, and trending towards the best engineering solution together. I thought that was just really cool, and I'm glad I get to be a little bit of a part of it by speaking here.
D
Hi folks, my name is Louis Ryan, I'm one of the founders of Istio, and I'm excited to be here to talk to you today. I want to thank the folks at Solo for having me on and letting me talk to you about the value of service meshes and what's been going on in the Istio space.
D
So
let's
dive
right
in
you
know
it's
often
useful
to
talk
about
why
service
meshes
are
important
like.
Why
do
they
even
exist?
You
know
when
we
founded
the
store
project,
we
had
a
nifty
soundbite
that
we'd
like
to
use
which
was
kind
of
secure,
connect
and
observe
traffic
between
all
your
services.
D
You
know
a
pretty
reasonable
question
right
after
that
was
well.
Can
I
secure
connect
and
observe
traffic
between
my
services
using
you
know
the
tools
that
I
already
have,
and
you
know.
Certainly
there
are
some
very
capable
tools
out
there
that
people
use.
You
know
lots
of
application
development
frameworks
that
people
have
been
using
for
a
while.
You
know
whether
it's
spring
or
j2e,
or
express
and
node.
You
know
you
can
definitely
do
some
of
these
things.
D
But
the
truth
is,
you
know,
as
is
usually
the
case,
much
much
more
complicated
than
that,
while
you
can
do
those
things
in
you
know
one
framework.
D
The
truth
is
that
in
most
enterprises
there
is
a
large
and
diverse
developer
community,
using
lots
of
frameworks
all
the
time.
Those
frameworks
evolve,
they
change
they
get
forgotten,
and
you
know
eventually
some
of
them
become
legacy,
and
the
one
thing
we
all
know
as
developers
is
that
maintaining
lots
of
code
bases
consistently
is
very,
very
expensive
and
you
know
sometimes
impossible.
D
You could go and do API management using a framework in lots and lots of applications, but eventually that really starts to fall apart, right, when you need consistency for these kinds of cross-cutting capabilities. And it's the same with service meshes.
D
So
if
you
look
at
the
reasons
why
people
adopt
service
measures,
you
know,
particularly
in
the
kind
of
you
know,
traffic
control
or
observability
space-
it's
you
know,
they've
been
trying
to
maintain
consistency
for
years
and
you
know
struggle
and
it
kind
of
becomes
a
pain
threshold
decision.
D
They've
been
slowly
suffering
like
you
know
the
frog
boiling
and
eventually
they
do
hit
a
threshold
moment
and
they
decide
that
you
know
it's
just
not
really
viable
for
them
to
keep
doing
what
they're
doing.
D
It's
also
fair
to
say
that
this
this
issue
is-
or
you
know,
these
rationales
aren't
exclusive
to
kind
of
traditional
enterprise.
I.T
with
you
know,
large
amounts
of
legacy
systems
that
they
have
to
build
and
have
been
building
and
maintaining
for
long
periods
of
time.
We
also
see
the
same
kind
of
value
propositions
play
out
in
you
know
what
would
be
considered.
You
know
modern
and
dynamic
engineering
organizations
throughout
the
industry.
D
I
know
google,
you
know
where
I
work
uses
sidecar
like
technology
very
extensively
in
production,
for
the
same
reasons
you
know
envoy
was
born
out
of
you
know
the
the
needs
that
lift
that's
why
matt
was
super
motivated
to
get
going,
and
I've
talked
to
lots
of
companies
that
you
know,
use
istio
or
have
built
their
own
in-house
service
mesh
solutions
to
solve
for
basically
the
same
set
of
problems
right.
So
it's
an
operational
efficiency
concern
and
people
want
to
be
fast
and
agile.
D
They
want
mtls,
they
want
zero
trust
capabilities
and
if
you
look
at
the
cost
of
you
know
maintaining
observability
or
traffic
management
in
applications
and
application
frameworks,
which
is
high,
it's
much
higher
to
do
this,
for
you
know
really
strong
security,
stuff.
D
And
so
people
really
are
looking
for
off-the-shelf
solutions
in
the
space.
Kelsey
heiter
recently
did
a
great
talk
on
what
he
called
zero
trust
the
hard
way-
and
you
know
I
really
recommend
people
go,
dig
that
up
and
watch
it
where
kelsey
talks
about
okay,
you
want
to
do
zero,
trust
yourself.
D
You
might
have
seen
the
proclamation
from
the
white
house
recently
about
zero
trust
becoming
a
mandated
best
practice
for
software
development
in
the
government
at
mit,
and
this
is
obviously
driving
a
lot
of
interest
and,
as
you
might
imagine,
in
this
space,
no
shortage
of
noise
in
the
industry
about
what
is
and
is
not
really
as
not
something
I
want
to
get
into
today.
D
You
know,
but
you
know
I'm
proud
of
the
fact
that
istio
was
really
the
first
kind
of
open
source
solution
to
put
a
lot
of
the
fundamentals
of
xero
trust
and
capabilities
into
an
effective
solution
that
works
in
the
enterprise
and-
and
it
still
is
actually,
in
my
opinion,
the
most
capable
and
complete
open
source
project
in
this
space.
Now,
there's
still
lots
to
be
done
here,
issues
by
no
means
sitting
still,
but
you
know
I
think
we've
done
a
lot
to
move
the
industry
forward
in
this
world.
D
Okay,
so
that's
that's
really
enough
about
the.
Why
you
know.
Let's
talk
a
little
bit
about
you
know
what
we've
been
up
to
in
istio
and
how
the
projects
evolved.
You
know
she
was
born
with
a
pretty
ambitious
agenda
and
a
lot
of
capability.
D
You
know
you
know
over
the
years
we've
refined
these
capabilities
and
our
architecture
and
everything
else
to
kind
of
find
a
good
market
fit,
and
I
think
we've
done
that
you
know,
particularly
over
the
last
maybe
two
years.
A
lot
of
work
has
gone
into
that
you
know
simplifying
the
solution,
refining,
the
apis
really
listening
to
our
users
and
making
sure
that
we
have
a
solution
that
works
for
a
large
market
segment.
D
One
area
where
that's
been
particularly
focused
has
been
to
put
a
lot
of
investment
into
day
two
operations,
particularly
over
the
last
18
months.
Right
it
was
the
core
tenant
of
our
2021
roadmap,
and
lots
of
work
has
gone
into
that
before,
and
lots
of
work
in
2022
is
going
to
continue
to
go
into
that,
and
that
made
it
really
a
lot
simpler.
For
both.
D
You
know,
individual
operators
installing
and
maintaining
istio-
and
that
has
been
really
important,
but
also
we
want
to
recognize
that
there
are,
you
know,
a
lot
of
vendors
in
this
space
and
they
are
maintaining
istio
solutions
for
their
users
and
we
wanted
to
provide
them
with
the
tools
to
effectively
maintain
those
solutions
on
behalf
of
their
customers.
D
You
see
a
lot
of
that
has
been
now
leveraged
in
by
different
vendors
in
the
space
you
know
just
you
know
I
work
at
google
we've
leveraged.
You
know
these
improvements
to
enable
us
to
run
upgrades
for
huge
numbers
of
customers
running
htm
matches
in
production
with
almost
no
customer
touchpoint,
and
we
do
this
every
quarter.
D
So,
like
literally
running
upgrades
for
thousands
and
thousands
of
hto
clusters,
every
quarter
with
no
user
involvement,
so
you
know
I'm,
you
know
really
proud
of
the
success
and
the
efforts
of
the
team
in
this
space.
It's
almost
istio's
fifth
birthday.
Actually
I
think
2017
in
may
was
when
we
came
out-
and
you
know
it
remains
the
largest
and
most
diverse
project
in
the
ecosystem.
D
We
have
the
most
developers,
we
have
the
most
diversity,
the
most
contributions
from
the
the
largest
number
of
distinct
places,
and
so
we
have
a
really
active
and
vibrant
development
community.
D
It
was
probably
fair
to
say
that
a
lot
has
happened
in
these
five
years.
It's
probably
a
gross
understatement
and
we've
all
been
adopting
to
a
lot
over
this
period.
But
you
know
istio
has
really
matured
over
this
time.
We
have
a
clear
vision
for
the
product
and
also
we
take
our
responsibilities.
D
You
know
to
serve.
You
know
the
enterprise
market
very,
very
seriously,
right
we
ship
regular
releases,
we
ship
patches-
and
you
know
we
take
our
cve
and
maintenance
responsibilities
extremely
seriously,
and
so
you
know
that
was
one
of
the
most
important
things
we
felt.
We
could
do
so
that
people
felt
that
they
could
really
rely
on
istio
right
in
these
kind
of
important
use
cases
for
the
companies.
D
I think there's a lot that we can do to really improve the lives of the people performing administrative roles in this space, whether it's network admins, security admins, or the operators of Istio itself. There's lots and lots that we can do, and so I'm really excited about the prospects for the project and the roadmap that we have. I think we have a great community.
D
While
we
are
a
relatively
mature
project
at
this
point,
I
think
you
will
see
you
know
still
a
an
important
stream
of
innovation
coming
out
of
the
project
over
the
next
five
years,
and
so
I'm
you
know,
I'm
really
excited
by
that.
So
at
this
point
you
know
I'd
like
to
hand
off-
and
you
know
just
want
to
thank
solo
again
for
inviting
me
here
and
give
me
the
opportunity
to
talk
to
you
for
a
few
minutes
about
what's
going
on
in
the
steel
world.
E
All right, thanks guys. So now I'm going to talk a little bit more about what we do here at Solo. We focus on application networking. If you go to our website, you'll see things about Istio and Envoy and so on, but we're not a gateway company and we're not a service mesh company; we're an application networking company. We use those technologies to build our solutions, but we focus on how services connect to each other, how we secure those services, and how we control the traffic and routing between them.
E
So
my
name
is
christian
posta,
I'm
the
global
field
cto
here
at
solo,
I've
been
working
on
istio
and
envoy-based
technologies
for
quite
some
time.
Over
five
years
now
and
I've
written
some
books
on
this
topic,
istio
in
action,
the
the
latest
book
will
be
coming
out
in
the
next
few
days.
E
I
also
have
my
contact
information
up
on
the
screen
and
you
can
feel
free
to
reach
out
to
me
anytime
offline,
and
you
know
catch
up
on
anything,
the
things
that
I've
said
here
or
what's
going
on
at
solo,
so
we
focus
on,
like
I
said,
application
networking
and
I'm
going
to
define
that
here
in
a
second.
E
But
the
way
we
do,
that
is
we
abstract
away
the
complexities
and
the
details
of
the
underlying
networks
and,
if
you're
deploying
on
kubernetes
or
on
vms
and
public
clouds
and
data
centers,
you
know
you
have
different
networking
technologies
to
make
these
things
work.
We
try
to
push
things
as
close
to
the
application
as
possible,
so
we
have
the
most
control
and
and
the
most
fine
grain
scope
for
how
we,
how
we
configure
traffic
routing
and
security,
so
application
networking.
What
is
that?
E
Well,
it
basically
comes
down
to
services
when
you
deploy
them
wherever
you
deploy
them,
they
need
to
communicate
with
each
other.
They
need
to
call
other
apis,
they
might
be
calling
databases
or
message
queues
or
whatever,
but
they're
making
requests
over
the
network
and
when
they
talk
over
the
network,
things
become
unreliable
right
just
because
the
network
is
not
reliable,
so
they
need
to
do
things
like
implement
timeouts
and
implement
retries
and
circuit
breaking,
and
they
also
need
to
do
things
like
capture
telemetry,
about
what's
happening
on
the
network.
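As a rough sketch of what that burden looks like when every application carries it itself (a generic illustration only, not Solo's or Istio's implementation; the thresholds and the use of the `requests` library are arbitrary), a client ends up wrapping every call in timeout, retry, and circuit-breaking logic:

```python
import time
import requests

class CircuitOpenError(RuntimeError):
    """Raised while recent failures have tripped the breaker."""

class ResilientClient:
    def __init__(self, max_retries=3, timeout_s=2.0, failure_threshold=5, reset_after_s=30.0):
        self.max_retries = max_retries
        self.timeout_s = timeout_s
        self.failure_threshold = failure_threshold
        self.reset_after_s = reset_after_s
        self.failures = 0
        self.opened_at = None

    def get(self, url):
        # Circuit breaking: refuse calls while the breaker is open.
        if self.opened_at and time.monotonic() - self.opened_at < self.reset_after_s:
            raise CircuitOpenError(f"circuit open for {url}")
        self.opened_at = None

        last_error = None
        for attempt in range(self.max_retries):
            try:
                # Timeout: never wait on the network forever.
                resp = requests.get(url, timeout=self.timeout_s)
                resp.raise_for_status()
                self.failures = 0
                return resp
            except requests.RequestException as exc:
                last_error = exc
                self.failures += 1
                if self.failures >= self.failure_threshold:
                    self.opened_at = time.monotonic()  # trip the breaker
                    break
                time.sleep(0.1 * (2 ** attempt))  # back off between retries
        raise last_error
```

A service mesh moves exactly this kind of logic out of each code base and into a proxy next to the application, where it can be configured consistently.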
E
These
are
all
things
that
the
applications
need
to
do
in
these
more
more
cloud
native
and
highly
ephemeral
environments,
and
these
are
not
optional
things,
because
if
they
don't,
if
the
applications
don't
take
care
of
these
non-functional
things,
then
they
set
themselves
up
for
massive
failures
and
outages
and
and
these
types
of
things
so
going
to
microservices
and
cloud
is
not
all
panacea.
You
still
have
to
take
into
account
the
distributed
systems,
nature
of
of
these
applications
and
solve
these
problems
head
on.
E
Let
me
give
you
a
few
examples,
so,
when
services
communicate
with
each
other,
let's
say
a
service,
a
talking
to
service
b,
we
need
to
implement
a
policy
like.
Maybe
they
can
only
call
service,
they
can
only
call
service
b,
three
times
a
minute
or
they,
if
calls
fail,
that
we
need
to
retry,
but
we
can
only
retry
a
certain
number
of
times
during
during
a
particular
period
or
things
like
service.
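The "three times a minute" style of policy is conceptually just a counter over a time window. A toy sketch (purely illustrative; in a mesh this enforcement lives in the proxy, not in application code):

```python
import time

class RateLimiter:
    """Allow at most `max_calls` within a sliding `period_s` window."""
    def __init__(self, max_calls=3, period_s=60.0):
        self.max_calls = max_calls
        self.period_s = period_s
        self.calls = []

    def allow(self):
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        self.calls = [t for t in self.calls if now - t < self.period_s]
        if len(self.calls) >= self.max_calls:
            return False
        self.calls.append(now)
        return True

# "Service A may call service B three times a minute."
limiter = RateLimiter(max_calls=3, period_s=60.0)
for i in range(5):
    print(f"call {i}:", "allowed" if limiter.allow() else "rate limited")
```

The point of a platform approach is that policies like this are declared once and applied uniformly, rather than re-implemented per service.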
E
So if you've been in these enterprise environments and you've seen how we've solved these problems, especially for internal communication use cases, it looks something like this: you have a central team managing a central capability, maybe API management. Traffic comes into the API management system and then gets forwarded off to other services. Those services are probably communicating over load balancers, and then, when traffic goes back to maybe another service, it passes through more load balancers and then through the centralized system again. Now, there are drawbacks to this.
E
Some of this is offloaded to API management systems, and like I said, there are a lot of challenges to doing this. First, it's expensive: the middleware used to run it and the hardware used to load balance, especially as you add more services, start to proliferate and become very expensive. And when service A needs to talk to service B, it has to go through a load balancer, then the API management system, and then another load balancer.
E
You
know
that
introduces
a
lot
of
extra
hops,
more
points
and
more
more
more
steps
and
more
areas
where
the
call
can
fail,
for
whatever
reason
it
makes
it
hard
to
trace
that
those
calls.
In
that
point,
a
lot
of
times
it's
using
outdated
technology.
E
So
a
more
modern
approach,
kind
of
balances
out
these
two
different
extremes,
one
writing
it
directly
into
the
application
and
two
offloading
it
completely
to
some
centralized
system.
The
modern
approach
takes
proxies
and
you
know
networking
application
networking
technology
and
brings
it
close
to
the
applications.
E
This goes from the configuration engine, which, like I said, is driven by declarative configuration, so think Kubernetes resources, to plugging in with security systems and rate limiting, and we've even built our own filters into the Envoy proxy itself.
E
The
mesh
so
glue
mesh
is
our
enterprise
service
mesh,
and
this
provides
the
multi-cluster
and
federated
management
plane.
This
also,
you
know
this.
This
works
for
the
east-west
direction
of
the
traffic,
so
this
so
the
service
to
service
communication,
as
well
as
at
the
edge
where
you
have
untrusted
traffic
coming
into
the
system.
So
now
you
have
a
unified
api
for
that
that
handles
multiple
clusters
handles
vms
on-prem,
public
cloud,
whatever
that
manages
the
traffic
at
the
edge,
as
well
as
in
service
to
service.
E
So
you
don't
have
these
bespoke
apis
that
well,
this
thing
handles
north
south.
This
thing
handles
east
west,
completely
different,
diverging
different
types
of
technologies,
and
so
on.
It's
consistent,
it's
portable
and
it's
transparent
to
the
end
users
so
glue
mesh
enterprise
is
based
on
istio,
so
you
know
I'll
show
in
a
second,
we
we
have
a
long-term
support
for
istio
we've
built
apis
to
make
istio
multi-tenant.
E
We've
built
apis
to
make
it
multi-cluster
aware
and
simplify
how
you
actually
operate
istio
across
multiple
clusters,
we've
tied
in
the
api
gateway
natively-
and
this
is
not
just
something
that
we've
imagined
we've
been
running
this
at
scale
with
our
customers
for
the
last
three
years
and
battle
hardened
tier
zero
type
scenarios,
financial,
retail
insurance,
health,
all
of
the
industries
around
the
world.
E
You
know
we
have
extensibility
with
a
web
assembly,
some
consistent
security
policies
that
span
these
different,
these
environments
to
provide
the
foundation
for
a
zero
trust
environment.
E
From
a
you
know,
tenancy
standpoint,
we
can
support
istio
in
production,
fips
based
distributions
arm
based
distributions
all
the
way
back
cv
patching
all
this
stuff,
all
the
way
back
to
n
minus
four
for
long
term
support
and,
lastly,
everything
we've
built
is
built
around
the
premise
premise
of
declarative
configurations
and
plugging
directly
into
your
git
based
workflows.
We
don't
have
all
these
extra
databases
and
fancy
rewrite
uis
that
you're
locked
into
it's.
It's
all
based
on
get
ops
and
the
automations
that
can
be
built
around
that
now.
E
I've
said
a
lot
about
the
technology
about
its
power
about
kind
of
the
architectures,
but
it's
best
that
you
hear
from
our
customers.
We
have
hundreds
of
customers
that
are
running
this
in
production.
Like
I
said
at
scale,
you
know
tier
zero
and
it's
best
to
hear
from
them.
We
have
a
handful
of
our
customers
that
are
speaking
at
solocon
today.
So
I
highly
highly
encourage
that
you
go
check
out
their
talks
and
yeah.
F
GraphQL is soaring in popularity today, and for good reason. It's a revolutionary change in the API landscape that puts API clients and front-end application developers in control of the data they consume. Many customers take their first steps with GraphQL with a simple monolithic deployment, where they take an existing REST API and wrap it with GraphQL.
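That first step can be as small as putting a thin GraphQL schema in front of a single REST endpoint. A minimal sketch using the graphene library (the endpoint, type, and fields are hypothetical, not a Solo example):

```python
import graphene
import requests

REST_BASE = "https://api.example.com"  # hypothetical existing REST API

class User(graphene.ObjectType):
    id = graphene.ID()
    name = graphene.String()
    email = graphene.String()

class Query(graphene.ObjectType):
    user = graphene.Field(User, id=graphene.ID(required=True))

    def resolve_user(root, info, id):
        # The resolver simply proxies to the existing REST API.
        data = requests.get(f"{REST_BASE}/users/{id}", timeout=2).json()
        return User(id=data["id"], name=data["name"], email=data["email"])

schema = graphene.Schema(query=Query)

# Clients ask only for the fields they need.
result = schema.execute('{ user(id: "42") { name email } }')
print(result.data)
```

Clients get field-level control over the data they consume, while the resolver keeps delegating to the REST API underneath.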
F
The eventual conclusion of this architecture is federating that graph across all environments, not just the customer's local cluster, but on-premises and cloud-based deployments, partner APIs, and SaaS APIs, unifying all of those together in a uniform application backplane that's secure, reliable, and scalable.
F
To address this need, we see many customers deploying an architecture where they stand up a GraphQL server to integrate multiple upstream APIs into a GraphQL API, and then front that GraphQL server with a proxy to provide the application networking policies they need. This architecture is functional, but it has several important drawbacks.
F
First,
it's
another
application
development
task,
a
responsibility
that
falls
between
the
front-end
application,
development
teams
and
the
back-end
developers.
Second,
it's
another
application
deployment
to
build,
deploy
and
manage.
Third,
there
can
be
an
overlap
of
responsibilities
between
the
graphql
server
and
the
proxy
for
things
like
authentication,
authorization,
caching
rate
limiting
and
data
protection.
At
solo,
we
see
a
unique
opportunity
to
help
our
customers
by
simplifying
and
optimizing
access
to
their
backend
data
by
making
graphql
a
native
feature
of
envoy
itself.
F
This
capability
will
allow
our
customers
to
use
envoy
itself
as
a
graphql
server
and
completely
eliminates
the
need
for
a
separate
application
deployment
in
your
graphql
architecture.
Supporting
graphql
directly
in
envoy
significantly
reduces
the
operational
overhead
and
complexity
of
managing
a
separate
application
deployment.
It
also
optimizes
network
performance
by
providing
native
integration
with
envoy
and
eliminating
an
extra
network
hop
to
the
graphql
server.
It
allows
platform
and
appdev
teams
to
use
declarative
configuration
from
their
graphql
apis
all
the
way
down
to
their
platform
resources.
F
It
also
allows
you
to
leverage
the
native
filtering
capabilities
envoy
to
handle
rate
limiting
caching
and
other
application
networking
policies
instead
of
integrating
with
third-party
libraries
in
your
applications
on
your
own.
Finally,
this
instantly
enables
graphql
in
any
architecture
where
you
have
the
proxy
scaling
from
the
simplest
monolithic
apis
to
the
most
complex
multi-cloud,
federated
graphs,
all
righty
and
brian.
Back
to
you.
A
Yeah, so honestly, we've been working on this for a while, over a year. Basically, eBPF in general is the ability to extend your operating system; it's kind of like a plug-in for your operating system. We really think that we can use that technology to actually supercharge the Istio service mesh, and in order to do that, we were working on it internally at the company for over a year. While doing it we had some challenges, right? It's not a simple technology.
A
You
still
need
to
understand
the
operator
system
really
well,
and
you
need
to
understand
how
to
develop
it
and
how
to
share
it,
and
here
we
actually
got
inspired
a
lot
by
solomon
hike
and
decided
internally
to
build
a
tool
called
bumblebee
and
what
bumblebee
is
doing.
It's
basically
helping
you
to
build
it
and
share
it
as
an
oci
image.
So
basically
it's
like
docker
for
your
bpf.
A
That's
helped
us
a
lot
to
accelerate
our
development
inside
the
company.
Now
we
were
so
excited
about
the
technology
and
honestly,
it
enabled
us
to
go
so
much
faster
that
we
decided
to
actually
give
it
back
to
the
phone
to
the
community.
So
we
open
source
bumblebee
it's
an
open
source
project.
It's
getting
a
lot
of
attention.
A
Please
go
and
check
it
out
if
you
are
interested
in
learning
more
about
ebpf
but,
as
I
said
all
of
this
done,
because
we
wanted
to
enable
our
product
so
to
learn
more
a
little
bit
about
the
work
that
we
were
doing
putting
ebpf
in
our
product.
I
want
to
introduce
you
yuvalko
javi,
our
chief
architect,.
G
Thanks
edit
ebpf
is
an
amazing
technology
that
has
been
gaining
more
usage
in
recent
years.
I
think
that
in
2022
we
will
see
increased
adoption
of
ebpf
as
major
linux.
Distributions
are
starting
to
support
btf
a
kernel
feature
that
enables
ebpf
portability
in
solo.
We
explored
to
see
how
ebpf
can
help
our
customers.
G
This
is
what
we
have
in
store
service,
mesh
acceleration
site
cars
proxies,
add
latency
to
your
application.
We
plan
to
use
ebbf
to
accelerate
the
throughput
and
latency
of
your
service
mesh
by
short-circuiting
the
network
stack
between
the
sidecar
and
the
application
observability
showing
you
extensive,
kubernetes,
aware
networking
metrics
for
your
workloads,
whether
they
have
a
sidecar
or
not.
We
really
believe
in
the
crawl
walk
run
model,
and
with
this
feature
you
can
have
advanced
observability,
even
if
you're,
just
starting
your
mesh
journey.
G
We
plan
to
allow
you
to
leverage
bumblebee
our
open
source,
ebpf
observability
tool
to
allow
you
to
have
meaningful
low
overhead
kubernetes
aware
custom
metrics,
so
you
can
create
metrics
that
make
sense
for
your
business
and
deploy
them
effortlessly
using
the
kubernetes
primitives
that
you're
already
familiar
with
and
finally
security.
We
want
to
leverage
the
power
of
evpf
to
enable
layer,
3
and
layer
4
security
policies,
regardless
of
the
networking
layer
you
currently
use.
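As a taste of the kind of low-overhead, kernel-side metric collection eBPF makes possible (a generic sketch built on the BCC toolkit, not BumbleBee or Solo's implementation; the probe and map names are only illustrative):

```python
# Requires the BCC toolkit (https://github.com/iovisor/bcc) and root privileges.
from bcc import BPF
import time

# Kernel-side program: count outbound TCP connection attempts per process.
prog = r"""
#include <uapi/linux/ptrace.h>

BPF_HASH(connects, u32, u64);

int trace_connect(struct pt_regs *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    u64 *count = connects.lookup(&pid);
    if (count) {
        (*count)++;
    } else {
        u64 one = 1;
        connects.update(&pid, &one);
    }
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event="tcp_v4_connect", fn_name="trace_connect")

print("Counting TCP connects per PID for 10 seconds...")
time.sleep(10)

# Counters accumulate in a kernel map; user space reads them out once,
# so nothing is added to the per-request path.
for pid, count in sorted(b["connects"].items(), key=lambda kv: kv[1].value, reverse=True):
    print(f"pid={pid.value} connects={count.value}")
```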
B
You know, I hope you're excited, as you can see Idit is, about what's going on. We have a great week planned for you. On day one and day two we've got three tracks, because we know you're interested in lots of different technologies; you're going to have the ability to engage with our product teams and our technology teams, look at roadmaps, see what's going on, and learn about the technology. And on day three we're doing hands-on workshops, so it's a great opportunity for you to learn, to interact with the community, and to get hands-on with the technology.
B
Now
we're
really
excited
that
soloq
is
launching
today,
but
we're
also
excited
today's
international
women's
day
and
for
us
we're
super
well
represented
by
being
our
founder
and
cto,
both
a
technologist
and
a
business
leader.
We
hope
you
have
a
great
time
this
week,
we're
so
excited
to
interact
with
you.