From YouTube: Kuma Community Call - March 8, 2023
Description
Agenda:
- Kuma patch releases https://github.com/kumahq/kuma/releases
- KDS redesign
- Q&A
- MSM integration: performance testing
https://docs.google.com/presentation/d/1yG6xgbwbxFGE4YgU8NBGu3RrfDcZyXu4lCccOImiQwY/edit#slide=id.gc6fa3c898_0_0
B: Oh yeah, hello everyone. Could I just share my screen?
A: Please, yeah.
B: Okay, so hello everyone. I'm Daniel from Xored, working on the multi-service-mesh project. This is a project about integrating NSM with other service meshes, and one of these meshes is Kuma. We have talked about our integration earlier.
B: So if you don't know what this integration is, you can check out the recording of the community call from September 14 in the playlist. Here's a link; maybe you can copy it or retype it. You can also check out our integration example in the Kuma repository. I'll give you a short reminder of what this is. NSM is a service mesh that is based on Level 3 connectivity, so IP and such, and it is compatible with other service meshes. And in the Kuma example:
B: We are using a virtual L3 network to provide connectivity with other clusters and virtual machines, without using an ingress or anything like it. I have a small diagram here. For example, you can have two clusters, both of which have NSM components; on one cluster we deploy the control plane, and we connect another cluster to it.
B
So
in
theory
this
should
improve
performance,
but
we
didn't
have
any
numbers
to
compare
So.
Currently,
we
are
working
on
actually
running
the
tests.
B: We are using the Kuma control plane in universal mode, because we don't have a spare cluster or virtual machine. In our integration setup we are using basically the same setup as what is merged into the Kuma repository, but in both cases using different applications: we're using nginx as the server which we are testing against, and we are using Fortio, which is a tool for load testing HTTP connections.
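For reference, a Fortio run matching the two scenarios discussed later in the call (a fixed 100 QPS with one connection, and an unlimited-QPS "how fast can it go" run) might look roughly like this; the target URL and duration are illustrative, not the exact setup from the call:

```shell
# Illustrative Fortio invocations, not the speakers' exact commands.
# 100 queries per second over a single connection for one minute:
fortio load -qps 100 -c 1 -t 60s http://nginx.example.local/

# Unlimited target QPS (-qps -1 means "as fast as possible"):
fortio load -qps -1 -c 1 -t 60s http://nginx.example.local/
```

Fortio reports latency percentiles (p50, p99, max) per run, which is what the comparison below is based on.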
B: Our solution is quite a bit faster, but before I go through the results, I would like to remind you that these are virtual clusters with near-zero network latency, so the results aren't really representative of real performance; this is why I qualified the results in the title. Basically, I have two examples here. The first is a load of 100 queries per second with just one thread, and here we see that the mean 50th-percentile latency is about 12 times lower with our integration.
B: I also have another example here with an infinite target of queries per second, so just how fast it can go, and here we see a similar picture. I see that the maximum latency with Kuma is the same, and I actually think it's not related to Kuma; it's something in the Linux network stack performance, or maybe Kubernetes issues.
B: I expect that these results will get much less dramatic as soon as we move to production clusters, where there will be network latency, and maybe there will be differences in the Linux network stack, but we see that there is some potential for performance gains. Currently we are working on setting up testing using production-grade clusters, like from Google, AWS, Azure.
B: Thank you for listening. I would very much like to hear some feedback and thoughts: whether it's useful, whether it fits, whether you have a good setup in mind, or maybe I have missed something. Thank you.
A: Thank you. Now, anyone who has questions?
A: Yeah, I have a question. When you checked Kuma multi-zone, did you use egress, or was it just ingress?
A: Yeah, and do you have any interpretation or explanation of these results? Why do you think NSM is quicker?
B: Yeah, NSM actually uses VPP, which stands for Vector Packet Processing, and which is actually even faster than the native Linux stack because it processes packets in batches. So yeah, I think this should be the main cause of such improvements.
B: Yeah, I'd like to add some questions of my own, since I have the opportunity, about improving our integration. The first question is:
B: Could you think of some real-life use cases that could be covered by our integration if we improve it somehow, maybe add something to it?
B: Yeah, or maybe there are some issues with the integration that we could solve. Well, I guess this is not a very good way to ask the question, because you probably need to look at the code of the integration to answer it, but maybe you have something in mind.
A
Yeah,
to
be
honest,
I
like
to
understand
more
how
like
the
path
of
the
request
like
what's
what's
happening,.
B: Yeah, I can discuss it a bit. I don't think I have any good pictures with me that I can just show, but I'll talk about it a bit. On each node we have a forwarder, which runs VPP, and when you start a pod which connects to NSM, the forwarder establishes a connection between itself and the pod.
B
And
so,
if
we
have
a
virtual
Network,
then
everything
in
this
network
is
connected
to
forward
s
in
on
respective
nodes,
and
in
this
veteransi
we
have
a
virtual
switch
which
oh,
which
is
also
based
on
VPP,
which
creates
which
manages
searching
between
ports.
B: The forwarders probably connect to each other with some mechanism which is also specified in the settings. So when you want to reach a service in a different pod in the same virtual L3 network, the packet goes to the forwarder on the same node, then this forwarder sends it to the forwarder on the destination node, and then that one sends it to the proper pod.
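The request path just described can be summarized as a small sketch (made-up names for illustration, not NSM code): pod to the VPP forwarder on its own node, across the virtual L3 network to the forwarder on the destination node, then to the destination pod.

```python
# Hypothetical model of the packet path described above; not NSM code.
def request_path(src_pod: str, src_node: str,
                 dst_pod: str, dst_node: str) -> list[str]:
    """Return the sequence of hops a packet takes in the described setup."""
    hops = [src_pod, f"forwarder@{src_node}"]
    if src_node != dst_node:
        # Cross-node traffic goes through the VPP-based virtual switch
        # to the forwarder on the destination node.
        hops += ["virtual-switch", f"forwarder@{dst_node}"]
    hops.append(dst_pod)
    return hops

print(request_path("client", "node-1", "nginx", "node-2"))
# ['client', 'forwarder@node-1', 'virtual-switch', 'forwarder@node-2', 'nginx']
```

For pods on the same node, the path collapses to pod, local forwarder, pod.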
C: I guess that's a good example of the advantage of NSM, but I don't know if you have any other ones, maybe.
B: You can also use Open Policy Agent security configuration, I believe it's called, to manage who can connect to where. So basically, what NSM does is manage VPP and handle all the management stuff, to make it easy to use. I think it's kind of like that.
A: Yeah, it would be interesting to see the numbers from real clusters, when you have clusters in different availability zones, for example.
A
In
that
case,
probably
the
difference
will
be
that
dramatic,
so
yeah
yeah
expect.
A: Maybe we should move to the next topic then, unless there are more questions from the community. Yeah, okay, that makes sense. Maybe then, let's move on.
A: Yeah, I also didn't mention that since the last community call we published a few releases, I guess five patch releases, mostly to update Envoy, and we had some minor issues with disabling IPv6 for the proxy.
A: Alright, okay, with that, yeah, have a good day, see you next time. Bye.