
From YouTube: Kuma Community Call - September 14, 2022
Description
In this call, we discussed the following:
- Kuma Birthday – 3 years 🎉
- Kong Summit https://konghq.com/conferences/kong-summit
- New policy designs:
  - Traffic Trace https://github.com/kumahq/kuma/pull/4938 [IN REVIEW]
  - Traffic Log https://github.com/kumahq/kuma/pull/4908
  - Mesh Traffic Permission https://github.com/kumahq/kuma/pull/4666
- Kuma + NSM integration (https://docs.google.com/presentation/d/12aiunkKqPLOe1R0o_QUdkuZM4LPGhN0g9AwJ1vuFpOo/edit?usp=sharing)
A
Hello everyone, welcome to the Kuma community call. Please add your name to the attendee list, and also feel free to suggest any topics for today's call.
A
Today we will see the demo of Kuma and Network Service Mesh, and we'll have an opportunity to ask a lot of questions to the Network Service Mesh community. But before that, I'd like to take some time and congratulate everyone, because last week it was Kuma's birthday, three years, so congratulations everyone!
A
... a lot of mesh-related topics, so feel free to check the link. I also wanted to mention that over the last few weeks we came up with three designs for new policies. Two of them are already merged, for Mesh Traffic Permission and for Traffic Log, and Traffic Trace is still in review. So you have an opportunity to make an impact on how Traffic Trace will look in the future, so feel free to add your comments.
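(For context on the policy designs mentioned above: they all share the new targetRef-based shape. Purely as an illustrative sketch, with the field names and the ALLOW value assumed rather than copied from the linked design PRs, a Mesh Traffic Permission applied on Kubernetes might look roughly like this:)

```sh
# Illustrative sketch only: not the exact schema from the linked design PRs.
kubectl apply -f - <<EOF
apiVersion: kuma.io/v1alpha1
kind: MeshTrafficPermission
metadata:
  name: allow-frontend-to-backend
  namespace: kuma-system
spec:
  targetRef:              # which data plane proxies the policy configures
    kind: MeshService
    name: backend
  from:
    - targetRef:          # which clients the rule below applies to
        kind: MeshService
        name: frontend
      default:
        action: ALLOW     # assumed enum value; see the design PR for the real one
EOF
```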
A
Yeah, and I think this is pretty much it from me, and I can stop sharing, probably.
B
C
So we thought about use cases that you can cover with NSM, and one of them is connecting VMs and Kuma Kubernetes clusters via NSM.
C
So we also support bare-metal machines that use these data path providers, like OvS, SR-IOV, VPP and the kernel.
C
Another use case that you can create is multiple virtual machines and bare-metal servers, so you can connect not only virtual machines: you can connect any kind of bare-metal server that I described just before to the NSM network and use the communication through NSM to ping each other, for example.
C
So our motivation was to run Kuma over NSM vL3; we were just interested in this scenario, whether it would work or not, and it worked. Another one is Kuma use cases with raw machines, so that it will work on different network configurations, and another one is to add to Kuma the possibility to work with NSM clusters. As for the configuration itself that I will show you, here's the diagram slide.
C
So basically, on the first cluster we have the NSM vL3 endpoint pod with the Kuma control plane, and our network service client pod with the workload, which will be a Redis database and, of course, the NSC and the Kuma sidecar. And the second cluster will have a pod with the demo application.
C
So, basically, a frontend page that users can view, and the server that sends requests to the Redis database. And in general we can accomplish this configuration even when we use a virtual machine instead of workload one, so basically cluster two is just a representation, an easier demo setup, and you can swap the workload.
C
So first, workload one makes a local call to workload two via the sidecar, then the request goes from the Kuma sidecar to vL3, and after that, based on the configuration, it goes to the Kuma sidecar on the workload-two pod, and then it just goes to the workload-two service itself.
C
Yeah, so let's switch to the demo itself, I will show you. Currently we have two clusters with NSM pre-set up, and then we just have to apply the configuration for vL3.
C
Yes, so the network service endpoint is running. After that we should install kumactl, but I already have it, so I will skip this step, and then we have to create a certificate.
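(The exact manifests live in the linked examples rather than in this recording; purely as a rough sketch of the kind of steps being described here, with file names, contexts and hostnames as placeholders:)

```sh
# Cluster 1: apply the NSM vL3 network service configuration
# (placeholder file name, not the real example path)
kubectl --context cluster-1 apply -f vl3-network-service.yaml

# Install kumactl (skipped in the demo, since it was already installed)
curl -L https://kuma.io/installer.sh | sh -
export PATH="$PATH:$PWD/kuma-<version>/bin"   # adjust to the extracted version

# Generate a TLS certificate for the control plane
kumactl generate tls-certificate --type=server --hostname=<control-plane-host>
```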
C
D
C
So basically, even though it is on the second cluster, it thinks that it has a connection to the first cluster locally, so cluster two has no other way of connecting to cluster one except the NSM connection.
C
Yes, so basically that's it about the practical side of the demo. Here are the links to the examples; we have already merged them into our repository, and I have created a pull request into the Kuma repository too, just to show that we have the integration. So we thought about future plans, and currently we want to implement the integration for Kuma, not for Kuma universal, which was actually used in this demo, and to integrate VMs and bare-metal devices with NSM.
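(Since the demo ran Kuma in universal mode, attaching a VM or bare-metal workload over the NSM vL3 network comes down to registering a Dataplane resource whose address is the workload's vL3 address and running kuma-dp next to the workload. A generic universal-mode sketch, with the addresses, port and service names made up rather than taken from the demo:)

```sh
# Dataplane definition for a workload reachable at its vL3 address (made-up values)
cat > dataplane.yaml <<EOF
type: Dataplane
mesh: default
name: redis-vm
networking:
  address: 172.16.1.2          # the workload's NSM vL3 address
  inbound:
    - port: 6379
      tags:
        kuma.io/service: redis
EOF

# Run the Kuma sidecar next to the workload, pointing at the control plane
# over the same flat vL3 network
kuma-dp run \
  --cp-address=https://<kuma-cp-vl3-address>:5678 \
  --dataplane-file=dataplane.yaml \
  --dataplane-token-file=/path/to/dataplane-token
```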
C
You can comment on this presentation.
B
Thank you very much, that was quite interesting. Could you explain a little bit more your point about the future plans? Because you talk about Kuma and Kuma universal, that you, you know, implement the integration for Kuma and not Kuma universal.
B
I'm not really sure I understand this point.
C
So give me a second. So currently we have this configuration for...
B
C
Yeah, so we want to just reduce the amount of work and use default Kuma to work with the VM clusters, VM machines.
C
Currently, it can be done if we use the universal setup, so it will run on the...
C
D
Yeah, I just want to say that that was basically my question, like how did we get the sidecar in a cluster without the control plane, but now I know. Maybe you can share, about this future plan: let's say we want to integrate this with Kuma with sidecar injection. So maybe the question is: how do you see it done in other meshes? Are you planning to point every Kubernetes cluster to one cluster with the control plane, or how do you see this?
E
So one thing to realize is that with Network Service Mesh we're not as cluster-bound as people typically are, because we actually have the ability to say that I have a workload that I need to plug into a particular Kuma instance that's running in some cluster that's got an entirely different service mesh setup going on; that's doable.
E
We also have the ability to federate trust domains across organizations, so that if I am, say, a large manufacturer who has part suppliers that I want to be able to give access to my service mesh for particular purposes, I don't have to control their clusters in order for them to have workloads that I can cryptographically authenticate and allow to plug in. So you can work across organizations, extranets, that kind of stuff, multi-cloud, so yeah.
E
D
Yeah, I see that, but at the same time you need to receive the configuration for the sidecar, right? Assuming that we are still talking about workloads being part of a mesh, the sidecar needs to connect to the control plane to receive configuration and to receive certificates for mTLS and so on. Right, so yeah, just wondering if you have maybe thought about this one.
E
Well, the nice thing is that you have that underlying flat L3 connectivity to the control plane; that is the nice piece about this, in that your sidecar absolutely has the connectivity to the control plane. I think you're sort of just asking a little bit more about the discovery process: how does the control plane figure out how to phone home to the workload, or the workload figure out how to phone home to the control plane? Is that more of what you're asking, or how the control plane discovers the workload?
D
Yeah, I mean, technically we could go both ways. Like with our multi-cluster model plus this smart networking: every cluster has its CP and those sidecars connect to the CP in the cluster, but they would not require ingress or egress, because we have a flat network, right? So I see this as one option, and the second option is to have a control plane in only one cluster, and every sidecar, every Envoy, connects to this one control plane wherever it is deployed, right? So.
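(For reference, the first option described here, a zone control plane per cluster plus a global one syncing them, is roughly what kumactl already sets up for multi-zone; a minimal sketch, with the cluster contexts, zone name and global address as placeholders:)

```sh
# Global control plane in one cluster
kumactl install control-plane --mode=global | kubectl --context global apply -f -

# Zone control plane in each workload cluster; local sidecars connect to it,
# and it syncs with the global CP over KDS
kumactl install control-plane \
  --mode=zone \
  --zone=zone-1 \
  --kds-global-address=grpcs://<global-cp-address>:5685 \
  | kubectl --context zone-1 apply -f -
```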
E
D
It is true, yeah. I mean, it is very nice; at the same time, "control plane" is maybe not a good word in this scenario, because users are not really interacting with the control plane in the specific zone.
D
So it's more of an agent, let's say, right? And on the other hand, to counter your argument, this one is technically more scalable and can be more stable if the sidecars are connecting only to an agent in the specific cluster, instead of going, you know, over another region or cluster or whatever, especially since the xDS config can be quite big with increasing size of your mesh, so yeah. But this architecture of putting only one control plane somewhere is also very, very tempting, interesting, yeah.
E
So think of it as being able to separate two previously commingled problems, right? Because one set of problems is how do I get all the pieces to talk to each other; the other set of problems is how do I scalably deploy what are effectively caches of my configuration for the endpoints to interact with. In the traditional model they're commingled, and here you have the option to separate them in the way that makes the most sense to you.
D
Right, yeah, yeah, yeah. I can see how this kind of opens up alternative architectures compared to what we have right now at this moment.
D
One more question: so we had this sidecar next to the workload. Can it be deployed as a DaemonSet, or is it always a sidecar?
E
So the NSM sidecar is a control plane sidecar, in the sense that all it's doing is monitoring the virtual wire that gives you the L3 connection, to make sure that it's alive and to respond with appropriate healing behavior if, in fact, something goes wrong. The actual traffic doesn't ever get touched by that sidecar; that's the Network Service Mesh sidecar running in the pod. The traffic literally, actually goes through a DaemonSet, a daemonset-ed thing that does the cross-connect between the kernel interface and whatever the tunnels are.
E
That piece is all on the DaemonSet side, so you don't... I think part of what you're getting at is how bad is my scaling problem from adding more sidecar overhead, and the sidecar for Network Service Mesh is literally just something sitting there waiting to be updated every relatively infrequent period of time, with somebody saying: we're still here, we're still doing the thing we're supposed to be doing, this is what we think the state is, does it match what you think the state is? Okay, great, we're good! You know, so it's very lightweight.
D
E
B
I guess one question is: what was the motivation to, like, integrate with Kuma? Was it just to, like, have all the service meshes tested out with Network Service Mesh, or was there like an actual production push behind it?
C
Personally, for me it seems more interesting, and, I don't know, for some kind of closer...
B
E
Service meshes tend to want to take over entire clusters at a time, and that's almost always bad architecture. And so, if you have a situation where you have someone who is, you know... I see this all the time in orgs: org one of the company has decided they want to go with Kuma, org two has decided they want to go with, you know, Istio. Now what do you do? Well, with Network Service Mesh...
E
...the two can connect, right, you know, the Istio-running org, and so it allows for a certain amount of distribution and coexistence. And we also have ambitions of potentially, at some point, being able to pull the sidecars out into sidecar pods, when you're talking about these sorts of Kuma sidecars. And if we can do that for the sidecars, the L7 sidecars for service meshes, you then have the possibility of an individual workload connecting to more than one service mesh, depending on who it's talking to, and that also broadens the space of innovation that people can play in, in a very attractive way. And, of course, it also gives the ability to lifecycle the sidecars independently of the workloads; I'm sure you've heard of the situations where people are not happy about having to reboot...
E
You know, restart their workloads, because they need to rev the versions of their sidecars for security reasons or other reasons. So, more modularity and flexibility in the system, and, of course, that story gets to be more interesting the more different kinds of service meshes are in play as a proof point.
B
Okay, one more question from me: is there a point in thinking of, for example, Kuma's CP, the control plane, if I remember correctly, being able to drive some of these, like the NSM part?
E
So I'll give you the generic answer, then I'll turn some questions back at you. The generic answer is that Network Service Mesh is intensely flexible about what it does; the more that we can get help from the higher layers, sort of hinting downward, that's very helpful. So it would be very doable, the way Network Service Mesh is structured, if Kuma wanted to be a participant.
E
If the Kuma control plane wanted to be a participant, it very well could, and productively, but we also designed it so it doesn't have to be, right, because we want to be able to support people running things on top. I'm curious, though, because it sounds like you have some interesting ideas floating around in your head, if you have particular ideas to express.
E
But I mean, there's a lot to be said about having a single pane of glass from L3 all the way up to L7, particularly when you're talking about a multi-cloud environment, yeah, and that's a very attractive approach to the world.
A
Okay, I think we're out of time. So, oh, that was interesting, thank you. Everyone have a good day, and see you next time.