From YouTube: CNCF SIG Network 2020-05-21
A
Gentlemen, Lily, how are you? Hey, good, hey. I didn't... I just saw the annual review pop up, but I didn't get a chance to look through it.
B
Oh, feedback would be super welcome there. It's always nice to go back over the year and realize just how much we've accomplished. It's the natural human thing: you focus on the problems at hand, and that's very productive most of the time, but it means you're just focused on "okay, what is it that I have to get done that I haven't gotten to yet?" You lose track of all the stuff you have done. Yeah, yeah, you lose the perspective.
A
As a side note, as we wait for people to join, I'll say that I consider, much like a PowerPoint presentation, where it's easily possible to overuse animation and have it be distracting, that the same is, I think, true of one of the calls I was on yesterday, where another Docker Captain had Top Gun playing as a video in the background of their Zoom. And we didn't want the meeting to end, because we wanted to, you know, finish the movie, but I think it was overdone.

A
That's the point, I think. But anyway, yeah.
A
So, hopefully, for the folks representing Kuma today... yeah, there it is, okay. Nikolai beat me to it before I could even... all right, representing, very good, cool. Fair enough. We are a couple minutes after; I anticipate we'll have a few other folks joining. I invited a few folks that might be interested in today's topics.

A
A couple of housekeeping items. One is that this is a CNCF SIG Network call, and so we meet about twice a month. As such, the meetings are recorded and publicly posted, so use swear words as you will on that. Hopefully everyone has a link to today's meeting minutes; I'll post those in the chat.
A
In general, for those of us that are either new or haven't been here before, I'll go ahead and share those minutes, but I'll also say a couple of other housekeeping things. I've generally been able to keep a cadence of about two topics a meeting. I think we could squeeze more into the meeting time, although I don't know that we as a SIG could necessarily digest more, in some respects.

A
I think today we're fortunate that Ambassador, which is coming up for review soon, won't present today but next time we meet. I anticipate that Meshery will also be presented, probably the next time that we meet, so we'll have a full agenda for the fourth. Getting into the agenda, then, and off of housekeeping items.
A
Last time we met, one of the topics was the discussion around the formation of the Service Mesh Performance working group. The presentation that we covered last time is available, and this is kind of an open call for those that might be on this call, or others that you might know of that would be interested in that working group, to jump on in. We'll be identifying a routine meeting time soon.
C
Would you consider presenting this at the SMI calls? I think that next week there is a call for the SMI group.

A
Yeah, that's a good thing to mention; there's folks that are clearly interested in service meshes there, so they might be. Yeah. Next:
A
Chaos Mesh was presented a little over a month ago, I think, here for consideration into the sandbox, and so their review is in flight.

A
We've spent a healthy amount of time with that whole team, actually. Not everyone, but most of the maintainers. And so my hope is that we'll have this review done this week for Chaos Mesh.

A
That's a timely review, in that we've also spent a lot of time with Litmus Chaos. Even though they're not being reviewed in this SIG, that community has just been spending a lot of time over here as well, in a very good way. There's a lot of chaotic things happening.
A
And then, just as we welcome in... well, Nikolai has been here for some time, but some others are on the call for the first time. So, Marco, good to have you. And then, if you haven't put your name down, feel free to record your attendance. Other than that, we've got a fair bit of time on today's agenda for Kuma, and I'm going to stop sharing so that you guys can introduce us to the project and tell us what it's all about.
E
So I'm presenting Kuma today as part of the donation proposal, as a CNCF sandbox project, that I started a month ago. This is issue 437 on the CNCF repository.

E
You know, when we look at the industry and we look at the CNCF landscape today, there is no control plane, built on top of Envoy, that supports Envoy as a data plane proxy, that's open, that's vendor-neutral, that the rest of the community can use from CNCF. There is another project that is graduated or incubated, Linkerd; it's not built on top of Envoy.
E
Envoy really is being adopted in the industry as the data plane of the future, and today, if somebody wants to leverage Envoy, they either build their own control plane or they go use another control plane that supports it. And, like I said, today in the CNCF landscape there is no vendor-neutral, donated control plane that the community can go to in order to do that.

E
So we built Kuma, and we want to donate Kuma as a sandbox project with this goal in mind: giving everybody the best control plane out there built on top of Envoy, and also integrating with the rest of the CNCF landscape, in order to allow everybody to simply implement mesh across pretty much any architecture.
E
So what is Kuma? Kuma is a service mesh built in Golang on top of Envoy, released on September 10, 2019 by Kong. We are at about 1,500 stars on GitHub. It's an Apache License 2.0 project. Kuma is a very simple project: it supports Envoy, it implements the xDS APIs of Envoy, and it supports native Envoy out of the box.

E
It is one executable, so developers or architects who want to create a service mesh will start this one executable, that is, the Kuma control plane, and then they can go ahead and start the data planes; the data planes will connect with Kuma, and from there on Kuma is going to be controlling those data planes. It's written in Golang, it provides a native Envoy integration, and it has really been built with extensibility in mind.
E
Kuma came out of Kong with the learnings that we have captured from the users and the customers that Kong is working with, in pretty much every industry: technology, the financial industry, healthcare, and so on and so forth. It was clear from the service mesh landscape, back when Kuma was released, that there were some missing aspects that were not taken care of by any other service mesh out there. So, first of all, for many organizations service mesh is a journey.
E
Organizations are working towards transitioning existing workloads to Kubernetes. So there is this transformation to Kubernetes that's happening, and at the same time, as they do that and as they increase the service connectivity among their services, they want to be able to take control of that connectivity, right, abstract it away from the application teams, so they can manage it from a central place.

E
Kuma has been built to support this transition. So Kuma does not support just Kubernetes; Kuma is a project that supports any architecture and any platform. It can run on virtual machines and it can support virtual-machine-based workloads, as well as Kubernetes-based workloads. That's why we call it universal.
E
It can support organizations as they're making this transition to Kubernetes and to containers, and in a way it simplifies the transition as well. If we can abstract away connectivity from the brownfield applications that we want to transform into Kubernetes applications, we're reducing the scope of that transition in the first place, because we don't have to manage that connectivity anymore. And so, once we do that effectively, Kuma enables an easier migration to Kubernetes.
E
So we wanted, first and foremost, to create something that was easy to use, something that was lightweight, something that was extensible, something that didn't have many moving parts, so it would be easy to deploy, and something that would provide out-of-the-box hooks to allow users to utilize the project in different ways: via Kubernetes CRDs, obviously, to change the state on Kubernetes-based deployments, as well as providing an HTTP API out of the box.
E
Kuma has been built to be simple; it has been built to be scalable. When working with different kinds of users, what we have learned is that, especially in a large organization, service mesh doesn't just happen out of the box. Like I said, it's a transition, and different teams are going to be moving to service mesh at different times, depending on their specific goals and roadmaps at the time.
E
And so one very common way to support the entire organization, all the different lines of business and all the different teams that are building these apps, is to create different meshes, so that each team can have a mesh for their own application. Each mesh comes with its own dynamic certificate authority, but, effectively, these different teams do not have to coordinate together within the same mesh in order to adopt mesh now, because this is a single control plane that can start as many meshes as we want.
E
Kuma is multi-tenant: in this case, from one control plane, we can provision as many meshes as we want. We can create this connectivity abstraction layer that can cross boundaries, can cross platforms, can cross even Kubernetes namespaces, and then it's up to the architect, or to whoever is managing Kuma, to determine if they want to merge different meshes together, if it makes sense, or if they want to keep them separate. There are users in the financial industry, you know, that are using Kuma.
E
They really like the idea that, if they want to, they have this additional layer of separation between different teams and different applications. Some others, instead, prefer to have one very large mesh, and everything is deployed within that one mesh. The point is that Kuma supports both. So users who want to support Kubernetes, they can do that. Who want to support virtual machines as part of that transition to Kubernetes, they can do that. They want to create one mesh? They can do that. They want to create multiple meshes?
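For reference, creating an additional mesh in Kuma is a single resource; a minimal sketch on Kubernetes (the mesh name is illustrative, and the exact field names follow the Kuma docs of this era):

```yaml
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: payments   # illustrative: e.g. one mesh per team or line of business
```

Applying another Mesh resource against the same control plane provisions another isolated mesh, each with its own certificate authority.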
E
They can still do that. In this case, Kuma is a very pragmatic service mesh. And, by the way, with Kuma as a project itself: you know, Kong is not a cloud vendor. The company that built Kuma did not have any agenda other than trying to build the best control plane for service mesh out there. We didn't build this to transition anybody to the cloud; we're not doing this because there is some other hidden agenda in Kong. Kong is the Switzerland of connectivity.
E
If you wish: we work closely with all the clouds, all the platforms, so we don't have any agenda other than creating the best product out there, and really, you can see this passion and this goal within Kuma and the way Kuma has been built. So it's easy to use, it's simple, we provide policies out of the box, and we're going to be releasing documentation right now.

E
You know, Kuma is a nine-month-old project, but we want to create documentation to make it even easier to create new policies on top of Kuma, perhaps by supporting WebAssembly. It's horizontally scalable; there is only one moving part. It's multi-tenant: we can create as many meshes as we want. It runs on Kubernetes, runs on virtual machines, runs across pretty much everything. It's easy to use.
E
It provides native CRDs that allow anybody to follow Kubernetes best practices if they want to configure their mesh. Policies can do pretty much anything: policies can be doing traffic control, fault injection, traffic routing, mutual TLS. Mutual TLS comes in different forms, different flavors: Kuma supports a built-in certificate authority, so we auto-generate out of the box the certificates and the keys for the CA root backend, and then we auto-generate a certificate for each data plane proxy.
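As a sketch, enabling the built-in CA is a flag on the Mesh resource (field names follow the Kuma 0.4/0.5-era docs and may differ in later versions):

```yaml
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabled: true
    ca:
      builtin: {}   # Kuma auto-generates the root cert/key and a cert per data plane proxy
```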
E
They can also provide their own root certificate and key, and we built this entire system in such a way that it can be extended. So, by default, the certificates that we create are SPIFFE-compatible, but because this backend system is extensible, as part of the roadmap we would like to support, for example, SPIRE as one of the CA backends that users can use.
E
It provides a CLI that simplifies retrieving the state of the resources stored in Kuma. It provides an HTTP API that can be hooked into with pretty much anything, and the GUI itself is actually built on top of the HTTP API, so anything that the CLI can do, the API can do, and anything the GUI can do can also be done.

E
You know, it can also be automated by hooking into the HTTP API. We do not allow changing the state of the resources through it if Kuma is running on Kubernetes, because we want to use Kubernetes to do that. But if Kuma runs on virtual machines, then the API, and the CLI, can also be used to change and apply state changes on top of Kuma.
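The CLI usage being described can be sketched roughly as follows (commands follow the kumactl docs of this era; the policy file name is illustrative):

```sh
# Read-only on any platform: retrieve the state of the resources.
kumactl get meshes
kumactl inspect dataplanes

# Universal (VM) mode only: apply a state change through the HTTP API.
# On Kubernetes, the equivalent CRD would be applied with kubectl instead.
kumactl apply -f traffic-route.yaml
```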
E
Like I said, Kuma abstracts away Envoy, actually. So if we want to start using Kuma, we don't need to have any prior experience with Envoy. Under the hood there's going to be an Envoy proxy running, but we bundle Envoy into this kuma-dp process that automatically does the initial bootstrap configuration provisioning.

E
So if somebody wants to use Kuma, under the hood they're going to be using Envoy, but if they don't look into kuma-dp, they wouldn't know. This makes Envoy easier to deploy and easier to configure, because they don't need to know what Envoy is or how to configure it in the first place. But if they want to go ahead and configure the low-level Envoy configuration, they can still do that.
E
We provide a proxy template policy where the user can effectively configure the Envoy configuration for different services via Kuma. So Kuma, first and foremost, is a control plane for Envoy, but then Kuma also provides policies that abstract away the most common features that Envoy provides into easy-to-use policies for the user.
E
It supports CNI, of course, because we want to be able to deploy Kuma on OpenShift 3.x and 4.x. It supports Kubernetes. We have about nine different distributions that we generate for the community. We do also, by the way, work with the community: Kuma is an open-governance project, with bi-weekly community calls where we gather feedback and collect input from the community. We also have a Slack channel, and a development channel, that the community can contribute to. Yeah.
A
A question, if I might. It might just be semantics, or the way that I think of a distribution, but the different distributions you called out to support those different platforms: what are the primary differences between the distributions?
E
We make it easier. So, basically, Kuma on Kubernetes would use the Kuma Docker container, obviously, in order to be able to provision everything. But if you would like to run this on, let's say, Red Hat... I'm sorry, Debian... you would then start the Golang project without necessarily having to use a container. So the best way to look into this: if I go on the website, kuma.io, we can click on the installation page.
E
We see that on Kubernetes... so, first and foremost, on Kubernetes we provide an automatic installer that would install Kuma. And then, if you decided to use, for example, Amazon Linux, we would provide an automatic installer that would automatically detect the underlying operating system and install the kuma-cp binary executable on top of that virtual machine. So the difference is that, with Kubernetes, we just abstract away the entire installation process.
E
Installing Kuma is as easy as running kumactl install control-plane, and then this would generate a YAML file that we can apply on Kubernetes, whereas if you're running this somewhere else, you would get an executable that you then have to run. (Got it, okay.) But the project and the binary are the same.
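Roughly, the two installation paths look like this (commands as documented at the time):

```sh
# Kubernetes: kumactl emits the control plane resources as YAML to apply.
kumactl install control-plane | kubectl apply -f -

# Anywhere else (e.g. a VM): the same binary, run directly.
kuma-cp run
```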
E
So, although we make it easier to install Kuma across these different distributions, it's not like there are different flavors of Kuma itself; it's always the same flavor. So, basically, every time we release a new version, we generate installation instructions for the new version across all of these different distributions, but the binary, whether you look into the Kubernetes container or into the Amazon Linux one, is always the same. (That's nice. Oh, very good. Yeah.)
A
I think the reason I was asking is that those two concepts, flavors and distributions, are often conflated, and so, yeah: was it in fact a different capability? But yeah, it's about installation and kind of fitting into the environment.
E
Yeah, with OpenShift, for example, we suggest using CNI, right. So, basically, effectively, each one of these distributions...

E
You know, fetches tracing logs from all the service-to-service traffic that we are generating. Because Kuma wants to make this very simple for the user, we also implement some shortcuts that allow us to install Prometheus and Grafana, in order to automatically, out of the box, capture those metrics.
E
We also implement a helper for installing Jaeger, so that we can collect tracing. You know, we want to really make the entire service mesh experience extremely easy, in order to be able to cater to a larger audience when it comes to adopting service mesh. Simplicity is a feature.
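The observability shortcuts mentioned here are kumactl helpers; roughly (command names follow the Kuma docs of this era and may vary by version):

```sh
# Install Prometheus and Grafana, preconfigured to scrape mesh metrics.
kumactl install metrics | kubectl apply -f -

# Install Jaeger for collecting traces.
kumactl install tracing | kubectl apply -f -
```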
E
This is a simple example of a Kubernetes policy. So if I go back on the website, by the way, and I go on Policies, we can see that there are some policies there (you know, we add new policies all the time), and some of these policies, for example traffic route, can be implemented on Kubernetes by using this very simple configuration.
E
So in this example, for every request made by the backend service to the redis service, some of that traffic goes to the redis service 5.0, and some other part of the traffic goes to the redis service 6.0. Now, the cool thing is: we can tag our services with any arbitrary tag, which means that we can assign a cloud region to a tag. We can implement routing across different clouds, across different cloud regions; we can do cross-data-center routing. We can do all sorts of things with these tags.
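The example being described, splitting backend-to-redis traffic between two versions by tag, looks roughly like this as a Kubernetes CRD (weights and tag values are illustrative; the exact schema follows the Kuma docs of this era):

```yaml
apiVersion: kuma.io/v1alpha1
kind: TrafficRoute
mesh: default
metadata:
  name: redis-rollout
spec:
  sources:
    - match:
        service: backend
  destinations:
    - match:
        service: redis
  conf:
    - weight: 90
      destination:
        service: redis
        version: "5.0"   # "version" is an arbitrary tag; a cloud region would work the same way
    - weight: 10
      destination:
        service: redis
        version: "6.0"
```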
E
Tags allow us, regardless of the underlying complexity of our workloads and where they're running, to apply policies in a very flexible way across the entire mesh. They're very powerful. And if we're running in universal mode, because we want to, you know, implement a service mesh on top of systems other than Kubernetes, or because perhaps we want to integrate this with an existing mesh running on Kubernetes...
E
We do provide Kuma running as an individual executable. Of course, on Kubernetes we can leverage the underlying Kubernetes API and etcd to store all of our configuration, but on virtual machines we cannot make that assumption; therefore, we have support for Postgres.

E
So if somebody wants to run Kuma on virtual machine workloads, they can use Postgres as the underlying storage for their configuration. And the policies are very simple.
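A sketch of pointing the control plane at Postgres in universal mode (keys follow the kuma-cp configuration docs of this era; host and credentials are illustrative):

```yaml
# kuma-cp configuration file (universal mode)
store:
  type: postgres
  postgres:
    host: 127.0.0.1
    port: 5432
    user: kuma       # illustrative credentials
    password: kuma
    dbName: kuma
```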
E
As a matter of fact, one of the goals here is to make sure that we make it easier for teams that are not familiar with Kubernetes to get up and running with service mesh, with something that is very similar to Kubernetes, but not quite there yet; it exposes them to the same concepts. And then, as you can see, this is the universal policy example that does the same thing as I've demonstrated before with Kubernetes; it's quite simple and quite similar.
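The universal flavor of the same traffic-route policy is the same shape minus the Kubernetes wrapping; a sketch (applied with kumactl apply -f; values illustrative, schema per the docs of this era):

```yaml
type: TrafficRoute
name: redis-rollout
mesh: default
sources:
  - match:
      service: backend
destinations:
  - match:
      service: redis
conf:
  - weight: 90
    destination:
      service: redis
      version: "5.0"
  - weight: 10
    destination:
      service: redis
      version: "6.0"
```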
E
We have released an open governance, which I believe makes Kuma the only control plane built on top of Envoy with open governance right now, in the market, in the entire community landscape. We set up bi-weekly community calls (the next one is next week) where we discuss with the community; we keep a list of agenda topics that we usually discuss, you know: roadmap, or particular spikes, or architectural conversations, and so on. And this is the first Envoy-based control plane for service mesh that is being donated to CNCF.
E
As far as I know, the velocity of the project is quite high; we're making releases pretty much every month, or almost every month. We want to keep iterating on introducing more Kuma policies, to make Envoy easier to use. We want to be exposing the Envoy features via native Kuma policies.
E
One of the most requested features right now is SMI integration, so this is something that I would like to explore, perhaps with the SMI team, the Microsoft team and the maintainers of SMI. Then introducing support for WebAssembly, which, as you know, Envoy already supports, as well as pluggable mTLS backends. We've built the GUI, wizards to get up and running with the GUI, and so on and so forth. Today, Kuma is being used by a series of users across the board.
E
Kuma is being used by, for example, enterprise companies like Sabre, financial institutions in New York, government agencies, and financial companies in Europe and in the UK as well. I also know there are many projects led by ThoughtWorks and by companies like Wipro around Kuma, and those folks are active in our Slack channel and community channels as well.
E
So this is a presentation to donate Kuma, which is the first Envoy-based control plane to be donated to the community, to advance the adoption of service mesh and Envoy. Kuma supports and enables those cloud native technologies and best practices. We want to enable the adoption of Kubernetes across the board, and we want to be able to make that transition as easy as possible for those users or organizations that are transitioning to Kubernetes but are not there yet. It's a journey.
E
Like I said, we built Kuma with no other agenda, and no other plan, than building the best control plane for service mesh out there. I mean, if we look at the features and the roadmap, it's entirely focused on advancing service mesh, on advancing the adoption of Envoy, and also on advancing the adoption of the integrations that we natively provide with Kuma. That includes Prometheus; that includes Jaeger.

E
We integrate with CNI when it comes to OpenShift, and perhaps there are some other ones that we're integrating with as well when it comes to our policies. Kuma has been originally designed by Kong, and Kong is a neutral player in, as you know, the very opinionated cloud landscape. So, like I said, this can be entirely used on any platform, any architecture, with no cloud dependency at all whatsoever.
E
It can be deployed in pretty much any use case, and there is nothing that would prevent any user from deploying a service mesh that's not opinionated, using Kuma, in their own environment.

E
If you want to learn more, we provide the official website at kuma.io; the repositories kuma, kuma-gui, and kuma-website; there is a Slack chat that we provide as well; and the Twitter handle @KumaMesh is where we announce our releases. So we just announced 0.5.
E
It was quite exciting, with more than 30 features that the community has been building, and we're planning to release a 0.5.1 release sometime this month. We're going to be discussing this in the community call, as well as planning the next major one to ship sometime in June or July.
A
Nice, oh, very good. Thank you for this, Marco; this is nice. I was just trying to pull up the pull request on some of this, because some of these questions' answers might be in there... oh, yep, and it is. So current maintainers are yourself, Ilya, Jakub.

A
All three of you are at Kong, or of Kong, so to speak. Okay.
E
Yeah. So we have some other users that have been contributing quite actively to the project (one of them, for example, comes from ThoughtWorks), and those users are in the process; they could be potential candidates to become additional maintainers of the project. We have introduced open governance, which creates a guideline to become a maintainer, something like three or four weeks ago. So open governance is new for Kuma, but we did that in preparation for this donation to sandbox.
A
Here's a question... well, here's one you've maybe fielded in the past. So, Envoy: you know, a great choice for a proxy underneath this control plane, Kuma. Given the experience that you guys, that the maintainers, have with NGINX, curious as to, you know, why not NGINX?
E
For the same reason: because we have lots of experience with NGINX, we also know the limitations of the product, and we also know what was the intended use of NGINX and what is the intended use of Envoy. Envoy really provides a nicer dynamic API, which, as you know, NGINX does not provide, to be able to dynamically change the state of how the proxy is going to be managing those network requests, those service requests, in such a way that NGINX cannot provide without some very messy reloads of the entire process.

E
Also, I am a big fan of Envoy. As a matter of fact, one of the sponsors for the Kuma sandbox donation is Matt Klein. You know, I'm a big fan of Envoy; we have in the past also contributed upstream to Envoy, and we plan to keep doing so when it comes to Kuma. Quite frankly, we want to use the best tool for the job, and Envoy has lots of momentum right now.
A
Very good. Questions from others on the call?

A
By the way, Marco, I think I said it before: great presentation. This is good. You know, cool. Great project, too.

E
Thank you.
A
Awesome, yeah. I love the simplicity motto, and it's pretty pervasive throughout the various things you highlighted in the project. Which is, yeah: networking is hard enough, and so simplifying is always needed.
E
Yeah, the old one... the old logo was a little bit problematic. The old logo was created very quickly, prior to releasing the project back in September, and it was very problematic in different ways. But, basically, the new one is more recognizable, and it's smoother and it's nicer, and I think it's a better foundation for the long term.
E
Don't tell me they are... I didn't come up with the logo. Thankfully, we have, you know, somebody else who contributed the design of the logo, somebody who's more creative than I am. But this new logo was also created in preparation for the donation to CNCF. We did not want... so, to me, it's very important to clarify this point: I don't want to leverage this donation to CNCF as something that would make the old logo, and Kong Inc., the company, more recognizable in the marketplace.
B
A point I'll make about the logo, and please just think of this as a take-it-or-leave-it suggestion: my experience has been that the CNCF creative services team is unbelievably good. I mean, you literally see the result right behind my head. So this is something you do or don't have to do, but they're just really, really good. That said, apparently it's particularly stressful for them when you take feedback from 30 or 40 community members at the same time, so you might want to rate-limit that.
F
...wrangle, like, the various community feedback pieces around logos as we get there. So please continue.
E
Yeah. As part of the donation, you know, I had a chat with Chris; Chris sent me all the documentation and materials. Obviously we're going to be transferring the trademarks, the IP, and all of that. Everything that we have to transfer is then going to be part of the ownership of CNCF, including all of these efforts and so on and so forth. So, at that point, CNCF can do pretty much what they want with it.
A
Marco, that was one of the questions that I had kind of dangling in the back of my mind, but you've spoken, I think, in part to it. And that is, I guess, maybe phrased a little bit differently: just the association to Kong, and maybe the business model around Kuma, or, you know, how does it become a self-sustaining thing?
E
Yeah. So, you know, we provide, like many, many other organizations out there, we provide support if enterprise customers want to deploy Kuma, want to run Kuma. And I don't exclude, in the future, also having a cloud version of Kuma.

E
So if somebody wants to run Kuma by themselves, the control plane, for example, they can do that. If they would rather have a simpler way to deploy the project, then perhaps they can go to Kong or, quite frankly, to anybody who decides to create a cloud version of Kuma; it can be Kong, it can be something else at that point. But we want to, first and foremost, enhance and integrate... you know, our core business is the gateway.
E
It's not Kuma. So we want to integrate our gateway project and product into a service mesh that's open, that's neutral, that's easy to use, that's agnostic, and that's why we have created Kuma in the first place, and that's why we want to release it. We, Kong Inc., the company, don't run a service mesh business; we run an API management business. That is our business. So Kuma is something that, from a business standpoint, provides a vendor-neutral integration to our gateway, but not the service mesh per se.
E
Nice, yeah, thanks. Okay...

E
Because I wanted to add: just because this is not our business, that's also one of the reasons why there is no other agenda with Kuma that you can see in the product, or by using the product, other than creating a service mesh out there that anybody can use. And so it's simple, it's agnostic, it's multi-tenant: you know, all things that other service meshes out there do not have. As well, we're making some significant effort to make sure that we can support more complex network topologies when it comes to deploying Kuma across, you know, larger and more complex use cases, in such a way that existing service meshes out there, including some of the most popular ones, cannot support. And so we're very excited about the roadmap and the things that we're doing, and we believe that, you know, we...
A
Marco, one of the things that you'd mentioned a couple of times is your users and the feedback that you're tuning into, and that it's helping drive part of the roadmap for the project. I guess there's kind of two questions in there. One being... you were alluding to, and kind of speaking to, on one of the last slides, part of the intended roadmap, and I thought I'd ask: is that public as well?
E
Yeah. So we are transitioning everything to GitHub, including the GitHub feature that's like Trello (you know, it's called Projects). So obviously we're transitioning everything there, and if you go on the GitHub, everything is public; everything is available on the repository. As well, we discuss roadmap items two times a month in the bi-weekly community calls. There is also a Slack channel on Kuma, called "development," where folks can suggest or talk about roadmap items that they want to work on.
E
So everything is quite open and it's out there. Nice, nice, very good. I see the.
A
A
E
A
You guys are doing well, yeah, in many different respects: moving right along briskly, or just, you know, looking good, healthy, a healthy shape to the project, I guess is what I would say. The other part of the question that I didn't ask was about users, and you guys, you know, the project, tuning into their feedback. Curious, you mentioned ThoughtWorks or a couple of others, any...
A
I guess the question I'm trying to ask is: what are some of those things that you learned from your Kuma users, and are there any references to adopters today?
A
By the way, Marco, some of the questions that I'm tossing toward you and toward the other maintainers are not necessarily strict criteria for the sandbox; these are just, you know, general questions.
E
Yeah, but those are very good questions. I mean, the idea that we have, you know, of Kuma in CNCF is not of a project that stops at sandbox, but one that keeps growing from there. So, as in my slides, you can imagine: I presented my slides and, you know, obviously there is a quote from TELUS, which is the largest telecommunications company in Canada. We are also working on a community use case with an enterprise ticketing company.
E
That's been using Kuma to fundamentally transform how they're transitioning to microservices and Kubernetes, and we are actually going to be releasing that community case study on the website. For some of these case studies we have to ask for approval: we can draft them out, and then we have to ask their legal team for approval if they want us to publish them.
E
If not, we are going to just publish them in an anonymized form. But basically, we're going to be creating more of these case studies so that other users can learn how they are using Kuma. And fundamentally, you know, one of the biggest pieces of feedback that we hear is: fine, we want to create a service mesh, but service mesh is a pattern, it's not an implementation. So why couldn't we implement a service mesh that can span across the entire org?
E
If you wanted to do so, right? And that's a very valid question. It doesn't have to be Kubernetes-only; as a matter of fact, there's lots of value in making a service mesh that's as big as it can be, across different kinds of businesses or different teams within the organization. And as we do that, one of the other questions is: how do we do that without having to create a hundred clusters of service mesh, but perhaps having everything in one place?
E
If you use other service mesh control planes, you have to create a new cluster for each service mesh that you want to support, if you want to support different teams or different applications with different service meshes at different times. Whereas with Kuma, we captured that feedback, and in Kuma we implemented multi-tenancy. So we have this concept of a Mesh object.
E
You can create as many meshes as you want, and each one of them has its own certificate authority backend that the user can configure. Effectively, they're being provisioned different data plane proxy certificates, which, for all intents and purposes, makes them become different meshes. And the policies that apply (traffic permission, traffic routing, traffic logging, traffic tracing, you name it), all of those policies can also be applied on a per-mesh basis. So it's, it's.
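The multi-tenancy model described here can be sketched in Kuma's universal-mode YAML. This is a hedged illustration, not a definitive schema: the exact mTLS backend layout and the service tag name have changed across Kuma releases, so check the documentation for your version.

```yaml
# Two isolated meshes, each with its own built-in certificate
# authority backend. Data plane proxies in `team-a` receive
# certificates from a different CA than proxies in `team-b`,
# which is what makes them effectively separate meshes, even
# though one control plane serves both.
type: Mesh
name: team-a
mtls:
  enabledBackend: ca-1
  backends:
    - name: ca-1
      type: builtin
---
type: Mesh
name: team-b
mtls:
  enabledBackend: ca-1
  backends:
    - name: ca-1
      type: builtin
---
# Policies are scoped per mesh: this traffic permission only
# governs services inside `team-a`. The service names are
# hypothetical placeholders.
type: TrafficPermission
name: web-to-backend
mesh: team-a
sources:
  - match:
      kuma.io/service: web
destinations:
  - match:
      kuma.io/service: backend
```

Applying resources like these with `kumactl apply -f` against a single control plane yields multiple isolated meshes, matching the one-cluster, many-meshes model being described.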
B
It's difficult to overstate how important what you just said is, because what so many people have missed (not in all kinds of spaces, but this is one) is the fact that there is not going to be a single point of administrative control. That's silly. What you're going to have is a whole bunch of points of administrative control that are going to need to be able to operate independently of one another.
E
Yeah, and this decision was made quite early in the journey of Kuma, because we didn't have to learn it: we knew that that's exactly what they wanted, because they were telling us. You know, if you're building a service mesh, build it in such a way that the operational cost of supporting the entire org is O(1), not O(n), where n equals the teams or the meshes or the applications that we want to support.
E
Therefore, we believe that with Kuma we made something that's extremely easy to operate and that can scale quite well across pretty much the entire organization in a very clean way. Plus, you know, because everything is in one place (at the end of the day, all of this is being provided by one cluster of Kuma, one control plane), it also provides this bird's-eye view of how many meshes are running in the organization, and is it time to merge some of them?
E
What I'm very excited about in all of this is that it is all being powered by Envoy, which is just amazing. So, if anything, by allowing people to create these meshes, and by simplifying how all of this is done, we advance the adoption of Envoy across pretty much every single team, every single line of business. And that's why I'm very excited to work more closely with the Envoy community to build the things that we need.
E
I really believe that Envoy is a pretty sweet technology and that, you know, it is going to be the data plane of the future.
E
You know, when we look at the data planes, and we look at anything in life, we ask ourselves: is the best time of this thing ahead of us or in the past? And when I look at Envoy, the answer to that question is that the best times are yet to come, right? And so I'm very excited about Envoy and what the Envoy community has been building. And so, with Kuma and Envoy, I want to energize this cooperation to promote a better adoption of Envoy across the board.
E
Plus, I have to say, Kuma comes with no drama attached. This is a no-drama project, and I know that in our industry we have enough of that already. Like I said, this is vendor-neutral; there's no other agenda other than building the best service mesh control plane out there. It supports Envoy, and it comes with no drama.
E
There is no strategic plan other than creating the best product. Product obsession is one of the requirements for any maintainer in Kuma: demonstrate that product obsession over time, right, with continuous improvements and contributions. So, if there's no drama...
B
E
I'm promising a very collaborative environment that's not driven by anything other than: let's do something that's nice and that the entire world can use. Very product-oriented, product-centric.
E
It doesn't hook into any greater plan. I mean, that's really what it is: we're a bunch of people doing something that we like doing, working on something that we like working on, and we find pride in the adoption that Envoy has within the community and the user base, and, you know, we want to keep it that way.
E
A
Yup, yup. That kind of thing is hard to quantify, although it can be extraordinarily apparent in various communities, some of those that were mentioned on this call, actually. Fair enough. Maybe last call for comments from others while I tie off here. I guess let me pause and say: any other comments from folks?
A
A
Well, Nikolay, Marco, Nick, Nikolay, I guess I'm just used to seeing the NSM logo behind you, but I'll adapt.
A
So that's great. Thanks a lot for the presentation today, guys. We will reach out and spend a bit more time; we'll do a review, and hopefully, akin to the cadence that we delivered for SMI and for Chaos Mesh, also come out with a review of Kuma. And yeah, I guess I had said it before, but what a good-looking project you guys have got thus far. I'm particularly pumped personally about it. So.
E
Yes, thank you, Lee, and thank you, everybody. And if anybody wants to join the next community call, Nikolay, I believe that is next Wednesday, right?
C
Yep, it is. Is it 8 a.m. Pacific?
E
Yeah. You can get more details around that if you go to kuma.io/community. If you do that, there is a field where, you know, you enter your email, and it will create an event, a Google Calendar event, that you can then save, of course. And then I hope to see you there.
A
All right, very good. That was the last item on today's agenda, so we'll convene our wrap-up a little bit early. Thank you, guys. I'll reach out; we'll have some more conversations. This is good.