
From YouTube: Kuma Community Call - March 31, 2021
Description
Kuma hosts official monthly community calls where users and contributors can discuss any topic and demonstrate use cases. Interested? You can register for the next Community Call: https://bit.ly/3A46EdD
A
Oh, okay. Actually, it's not closing, it's just in May. Okay, so apparently I got the wrong information. Still, I think this is one of the optimistic events that is planned to be live in the US. Who knows, we'll see?
A
Okay, in any case, if someone is interested, even in a joint talk, that would be even better; we can chat and work on it. Other than that, is anyone here...?
A
It's always somewhat complex to get started, okay. I'm posting the link in the chat. Please add your names there. The meeting is recorded, and I don't have a particular agenda for today. Actually, we were thinking of doing a release tomorrow, but then we decided that the first of April is not a great day to do a release, so we postponed it to next week. Jacob is laughing.
A
B
Yeah, I've had that PR open for a while for the MADS proposal. I'm not sure if it's just not a priority, or there's been a lot of other stuff going on, but that's the only thing kind of blocking me right now. I've implemented most of what's proposed there.
B
No worries. But then, should I just open up a PR for the implementation that I have so far? I think it's really maybe a lack of understanding of the xDS protocol that I need the most feedback on, and whether it's actually feasible.
A
Yeah, create a draft so that we can see it. Create the PR, and then I guess people will chime in there.
B
Yeah, I've fully implemented this on the Prometheus side. So I think, as soon as we can get it into the Kuma main branch and verify it works there, then I can open up a PR on the Prometheus side.
B
It wasn't too complex to do, I don't think. It's just exposing a new HTTP server, so it's just yet another port. I don't know if that's the issue or not, but...
B
C
B
Okay, that's really cool. I'll check out how to do that, thanks.
C
Okay, let me know if you need some pointers to where in the code we do this.
B
Yeah, if you have any pointers that would be great, and then I can try to open up a draft PR for what I have sometime in the next week or so.
A
Okay, so I see this moving. I see that Charlie is on the call.
A
Good. Maybe if he comes we can chat about this as well. Okay, so I don't have anything else on the agenda for today, so I suggest that we do kind of an open Q&A session. I know that usually the community has some questions, so let's see if there are any questions or topics that people want to suggest for discussion.
D
You know, right now we're going to be doing a minor release that will include a few improvements and a few fixes; some of the bugs that have been reported will be fixed. But I guess that the next big thing for this project is to introduce real L7 support when it comes to routing, and, you know, our policies and traffic permissions, starting with HTTP. This has been a very requested feature by many users, and so I just want to give some visibility into this L7 support.
D
It's probably going to be the next big new feature that we're going to be shipping, which means we can route requests per HTTP path, per header, per HTTP method, and things like that as well.
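For context, here is a rough sketch of the kind of HTTP-aware routing policy being described, expressed as a Kuma TrafficRoute. The feature had not shipped at the time of this call, so the field names and the service tags below are illustrative assumptions rather than the final API:

```yaml
# Illustrative sketch only: route GET requests under /api that carry a
# canary header to a v2 backend, and everything else to v1.
# Field names and kuma.io/service tags are assumptions, not a shipped API.
type: TrafficRoute
mesh: default
name: api-l7-routing
sources:
  - match:
      kuma.io/service: frontend_default_svc_8080
destinations:
  - match:
      kuma.io/service: backend_default_svc_8080
conf:
  http:
    - match:
        method: GET
        path:
          prefix: /api
        headers:
          x-canary:
            exact: "true"
      destination:
        kuma.io/service: backend_default_svc_8080
        version: v2
  destination:
    kuma.io/service: backend_default_svc_8080
    version: v1
```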
D
As you know, we're going to be starting to collect proposals soon, speaking proposals from the community for KubeCon, and we do have a person, Caitlin, who is going to be helping with collecting any proposal that you want to submit about your usage of Kuma in your organization. So, if anybody is interested in submitting something, you can reach out to me on the community Slack channel and I'll make sure to put you in touch with Caitlin, who can then help with submitting the proposal. That's pretty much it.
B
This is for which KubeCon? The one that Nikolai brought up earlier?
E
D
A
I think the largest one was the last online one, but okay. Yeah, the European one is the first week of May, so it's at the point where people are already recording their talks.
D
B
So I've implemented most of it. I was just waiting for feedback on the proposal that I've had open in Kuma for a while now, but I think we just crossed paths. So I'm going to open up a draft PR sometime, hopefully in the next week, to get some feedback, and then the plan should be to first integrate it into Kuma, and then I'll open up that PR in Prometheus, which I've already implemented as well.
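For readers following along, the work being discussed is Kuma-based service discovery for Prometheus (the MADS proposal mentioned above). A minimal sketch of the kind of scrape configuration this enables; the kuma_sd_configs block, the meta label, and the control-plane address and port are assumptions based on the proposal, not something that had shipped at the time of the call:

```yaml
# prometheus.yml (sketch): discover scrape targets from the Kuma
# control plane's monitoring assignment (MADS) endpoint.
# The server address and port are placeholders for a typical
# in-cluster deployment.
scrape_configs:
  - job_name: kuma-dataplanes
    kuma_sd_configs:
      - server: http://kuma-control-plane.kuma-system.svc:5676
    relabel_configs:
      # Carry the Kuma service name over as a Prometheus label.
      - source_labels: [__meta_kuma_service]
        target_label: service
```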
D
Okay, yeah, very well, yeah! Thank you, thank you, Austin. So, did anybody see the proposal that Austin submitted, Jacob?
B
Yeah, through implementing it I found some holes, so I've updated it from time to time.
C
Yeah, I think we kind of agreed on some call that it's probably best to just start implementing this, because from me it was essentially a thumbs up, but not officially on GitHub, I guess. Yeah.
B
No worries, I've been doing other things as well, so no harm, no foul. I'll open up a draft PR soon.
B
For a multi-zone setup, how does the global control plane high availability work? Is the global control plane able to be deployed in many different regions? I just have no experience with it and I'm just starting to look through the docs.
C
Yeah, the global control plane is designed to be deployed in kind of one zone, because it is essentially a management plane.
C
But what you need to keep in mind is what happens if the global control plane is down. What happens is that you cannot modify the policies, like TrafficPermission, TrafficRoute and so on, but you can still do the critical thing, which is to spin up a new data plane or to get rid of a data plane, because the data plane is managed within the zone and it's synced to global. So even if global is down, you can still manage the local data planes in the zone.
B
E
C
B
Okay, has there been any thought about how to remove that requirement to share a persistence layer?
A
Yeah, because essentially we would have to invent a mechanism, or actually borrow something and adopt it, to sync the databases, because the global is essentially your central store for all the policies. So if a zone disappears...
A
...and then misses some updates, when it comes back, all the policies that are applied on the global will be pushed there and the actual state will be brought back in sync. But if you want to have multiple globals, then you would need a way to somehow sync their storage, the resources, between them. Yeah.
A
On some of the calls I have been asked about the possibility of having a super global, a global of the globals: if you consider a huge deployment with, let's say, many, many zones, you might want to somehow split it and fragment it into smaller chunks.
A
People are even thinking of this. As a matter of fact, in the very, very first designs we were thinking about this, but then we thought that it is probably overkill at this early stage, where we are implementing the very first multi-zone, so we decided that a single global would be enough at this point. Why am I saying this? Because if you have this super global, it can be the single place that actually, you know, syncs things between the globals.
E
A
Well, I mean, you can actually use the same database in terms of the physical instance, but then each global would use its own, I don't know how it's called, its own namespace. I mean, they each see their own tables; they don't share the same tables.
E
A
Unless, of course, you want to have replicas of the same one. So if we are talking universal, and I assume that is also the case here, when you have multiple remote control planes for high availability, they need to share the same schema so that they have the same view of the resources.
E
A
Okay, but this is basically the definition of a zone: the zone is determined by the set of control planes that share the same view of the persistent storage.
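One concrete way to read that "same view of the persistent storage" point: on universal, every control-plane replica in a zone points at the same database in its kuma-cp configuration. A minimal sketch with placeholder connection details; the exact key names can vary by Kuma version:

```yaml
# kuma-cp store configuration (sketch). Every replica of this zone's
# control plane uses the same Postgres database, which is what makes
# them part of the same zone. All connection values are placeholders.
store:
  type: postgres
  postgres:
    host: postgres.zone-1.internal
    port: 5432
    user: kuma
    password: kuma
    dbName: kuma-zone-1
```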
E
Got
it
so?
Is
there
a
document
wherein
it
is
mentioned
as
in
how
are
the
the
crds
or
how
are
they
mapping
into
the
global
one
versus
the
remote
one,
as
in
what
is
getting
created?
Where
do
we
need
to
know
as
an
as
an
end
user
or
as
a
platform
guy
as
in
do
we
need
to
know
that.
C
Yeah, that's a good point, we don't have this in the docs. But the only thing which is managed by the remote is the Dataplane definition. The Dataplane definition is passed to kuma-dp on universal, and on Kubernetes it is just created by the remote control plane, not by the user. So technically, from the user's perspective...
E
...apart from the secrets, right. Actually, I have another question, if I can ask. Of course, okay. So this is pretty interesting: there is a token concept that you have, right, and the token is actually created by the control plane, and the input to the control plane is the mesh name and the data plane name, and there are multiple combinations of them. So let's say we basically allow people to use the mesh name and the data plane name as the input to create the token, okay.
E
Now
that
that
token
is
gotten
created
now,
let's
say
there
is
another
guy
who
is
running
the
control
pin
separately,
and
there
is
absolutely
no
linkage
between
my
control
plane
to
this.
Let's
say
a
rogue
control
plane.
He
passes
the
same
input
same
mesh
name
and
same
data
plane
link.
Will
he
actually
end
up
creating
the
same
token
or
will
the
token
differ?
E
They are signed, yeah, they are signed. But how is it signed? As in, there has to be a difference. Let's say there are two control planes, and you pass the same data plane name and the same mesh name to both control planes. You should ideally be able to create different tokens.
C
No, wait, wait, wait! Okay, okay! So one thing: the dataplane token is a JWT token, which is signed by a signing key, and with a multi-zone deployment the signing key is generated on the global and synced to the remotes. So there is always the same signing key for a given mesh, even with a multi-zone deployment.
E
So would that mean that if the global control plane dies... or let's say there are two replicas of the same deployment, right? One is something that is blessed by the enterprise, okay, and then there's somebody who's trying to hack into the system, and he also basically brings up the same kuma-cp.
E
Now he gives it the same input: he gives the data plane name and the mesh name for the token.
E
He does not connect, it's a completely separate deployment. So, okay, essentially the key question is: how do you generate the signing key? How do you generate it, and how do you make sure that if, let's say, the global control plane dies and comes back again, it will have the same key?
C
So you can see that with every instance of the control plane, the same input will result in a different token, because the signing key is different every time. Okay, okay.
E
Yeah, yeah, that makes a lot of sense. This is what I wanted to know: how do you generate the signing key? It looks like we don't have control over that, so it is generated randomly and then it is basically managed by you. And this is a super critical thing, actually: if somebody takes it or leaks it, I think they'll be able to generate tokens if they know the algorithm. And thanks a lot for that update on the HTTP one.
E
Actually, it's something that we are really looking forward to.
A
C
E
At this point, no, but we are looking to basically expand it out, and we are debating among ourselves whether we should have a standalone deployment or whether we should move to a global and remote one. So those discussions are going on and I think we'll choose one.
A
E
Right, yeah, so it's about choices, right? Let's say on GKE we don't choose to use Kuma but instead, let's say, we choose to use Istio, and then we have another deployment of Kuma. But there are workloads which span both Kubernetes and, let's say, a VM workload, and the VM workload is managed by Kuma and the Kubernetes workload is managed by Istio.
A
On the Istio side, you do an external service, I don't know how it's called there. Yes, something like that, yeah. And then you expose it on a gateway, the universal one, so that you can consume it from the Istio side. I mean, you won't be able to have a single policy management plane; you have to do this separately, but...
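Related to this, on the Kuma side an endpoint that lives outside the mesh, for example a service that is only reachable through the other mesh's gateway, can be modeled as an ExternalService so that the Kuma-managed workloads can address it by a service tag. A minimal sketch, with hypothetical names and addresses:

```yaml
# Sketch: represent an out-of-mesh endpoint (e.g. exposed via another
# mesh's gateway) as a Kuma ExternalService. The name, tags and address
# are hypothetical placeholders.
type: ExternalService
mesh: default
name: orders-via-gateway
tags:
  kuma.io/service: orders-via-gateway
  kuma.io/protocol: http
networking:
  address: istio-gateway.example.internal:80
```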
E
Correct. So what you're basically saying is that you would convert the east-west communication into north-south communication by terminating it on the gateway.
E
B
E
A
You actually want to have the certificates on the gateway only, and then from the gateway the traffic will go inside the mesh, which will manage its own certificates. So I guess that's what you want. The other thing is, I don't know if you can do... okay, but that's the enterprise thing I was referring to.