
From YouTube: Kuma Community Call - October 28, 2020
Description
Kuma hosts official monthly community calls where users and contributors can discuss any topic and demonstrate use cases. Interested? You can register for the next Community Call: https://bit.ly/3A46EdD
A
So I don't see Kevin here, so I'm not sure if someone can help me with the Kong Summit videos, but in any case this is the last time we mention it, so I will remove it from the list. The videos are out, but I guess you need a registration in order to get them.
A
So if someone is interested in what happened, I can find out where they are. There is actually Open Source Summit Europe going on now. Yesterday I had a brief, like, pre-recorded 10-minute intro session about it.
A
Nothing new to the audience here, but if you are there for KubeCon, there's going to be a big booth, okay, a big virtual one, but for all four days we are going to be present almost the whole time. I mean, like, with me, I mean, someone from the team. So I don't know.
A
I guess that there will be some announcements on Twitter, you know, social media, etc., but essentially, if someone is interested, we'll be there, we'll be ready to answer questions and to open up for other discussions if needed.
A
There's also some other events going on, but yeah, this year, with everything virtual, it's probably a bit strange.
A
I'll also be speaking at a local conference here in Bulgaria next week, but I guess I'll probably have to do it in our local language, so it's more towards the local communities here. Good. So, on the agenda quickly: I see that... Austin, probably you added this one, thanks for this.
C
Yeah, so there were two things here. The first thing is OpenShift. When you use a DeploymentConfig, OpenShift creates an extra pod called the deployer pod, and the problem is that Kuma requires a Service for every pod that we are injecting kuma-dp into, and this deployer pod did not have a Service object. Therefore, you could not deploy Kuma on OpenShift when using a DeploymentConfig.
C
Of course it worked with Deployments, with regular Kubernetes Deployments, so we added functionality so that you can now define the set of labels that Kuma will detect and not inject into pods carrying those labels, and by default it covers the labels of the deployer pod in OpenShift.
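For reference, a sketch of what this looks like in the kuma-cp configuration; the exact key names and the default OpenShift labels here are written from memory and may differ between Kuma versions, so verify against the configuration reference for your release:

```yaml
# kuma-cp configuration sketch (key names assumed, not verified):
# pods carrying any of these labels are skipped by the sidecar injector.
runtime:
  kubernetes:
    injector:
      exceptions:
        labels:
          # defaults covering OpenShift's generated pods
          openshift.io/build.name: "*"
          openshift.io/deployer-pod-for.name: "*"
```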
C
So that was one thing. The second thing: we experienced some problems with RDS, which is the Route Discovery Service, and we decided to switch, for now, to a static configuration of routes on the listener. Because, to be honest, we are using one route per listener, so, like, we didn't see any benefit for now in using RDS, and therefore we switched to a more reliable way of distributing routes.
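To illustrate the difference, here is a minimal sketch of the two ways an Envoy HTTP connection manager can receive its routes; the `outbound:backend` and `backend` names are made up for illustration:

```yaml
# Option 1: routes delivered dynamically via RDS
rds:
  route_config_name: outbound:backend
  config_source:
    ads: {}

# Option 2: routes inlined statically in the listener's filter config
# (the approach described above: one route per listener, no RDS round trip)
route_config:
  name: outbound:backend
  virtual_hosts:
    - name: backend
      domains: ["*"]
      routes:
        - match: { prefix: "/" }
          route: { cluster: backend }
```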
A
I don't know if we can switch subjects now. Probably before we go to the performance optimizations, and, you know, I'm going to ask you for help here to explain a little bit what's going on, what we did with the performance optimizations. But before that, it's probably worth mentioning, I know that I have mentioned this probably two, three months ago, this whole story with deploying Kuma. So, okay, we pushed hard, Austin is here.
A
Then people were asking about operators, so we are again in this cycle of, let's say: Kuma is accessible everywhere, but yet you need some tweaks here and there. And of course, there's also Universal, for which we have a big technical debt to actually make the user experience on Universal...
A
More, how to say... we should have a better user experience, that's probably what I'm trying to say. I mean, there's still this transparent proxy mode to be enabled there. I know that people have been using Kuma on Universal, you know, laying out their own scripts, maybe Ansible or, you know, using systemd to schedule the services there, etc. I mean, there's no uniform way there.
A
So I just wanted to bring that forward, and I think... do we have an issue about the operator? Like, there are so many options out there that it's hard to kind of be able to cover each and every, let's say... okay, I won't call them corner cases, but let's say options to deploy and to use Kuma out there, and to cover them in a sustainable and sane way.
A
I mean, there's no point in having an operator or something if it's not part of our CI, and our CI is already, you know, more than 30 minutes. So if we want to have this, it will explode more and more and more, so I don't know where we will end up. I just wanted to bring this forward, because it reminded me of some thoughts I had when we discussed OpenShift and the problems we had there.
A
So the statement there was like: if you want to deploy this at scale, like to have multiple zones, multiple Kuma-enabled clusters, you would eventually need an operator to be able to manage them all. Actually, I'm not familiar with operators; like, I roughly understand and know what it is, but I don't know the details there. But, for example, we have hit some, let's say, limitations with Helm, like: if you want to delete a Helm chart, it doesn't delete the CRDs by default. I don't know.
A
I mean, like, it's already version three; you would expect that these things... I mean, like, if I deploy some chart and all its dependencies, et cetera, et cetera, I would expect that it would be able to maintain the resources related to it. Why do I say that these are not covered in the deletion? Because we had real use cases: we were talking to people that were deploying Kuma, and then they end up in some strange situation. They're like, okay...
A
Can I clean up my cluster and then start from scratch? And then there's something hanging somewhere, or some resource sitting here and there, especially with the CNI; we had these back-and-forths. I think it's better now, but it's a bit frustrating when you put so much effort into trying to make the project do all these nice things, and then, in the end, this step of installation and uninstallation is like: okay, yeah. So, yeah.
A
But okay, I mean, I guess that it's an ongoing discussion, what the community wants and needs. And if I were to choose, I would definitely prioritize a better Universal experience over having an operator, but yeah, for some people this might also be, you know, a showstopper. I don't know.
D
Do you have any projects that currently do this really well, from a non-Kubernetes standpoint?
D
Yeah, so can we take anything from... I don't know what they've done, or what other projects have done?
A
...things. So, but then there are so many, let's say, marketplaces; like, all the clouds have their own, you know, ideas of how they ship. So again, if you want to support all these options in the same way, this means at least a very, very serious investment in the CI, or, I don't know, nightly builds, some checks that go for hours, something like this, yeah. Which is, I don't know, a significant investment.
A
If you ask me... Okay, we are already like 15 minutes into the call, so I don't want to lose much time there, unless there is interest to continue this discussion, but...
C
Yeah, you asked about the issue. I think there is an issue on GitHub about the operator. It's one of the oldest issues. Let's see.
A
Yeah, okay. So lately he has been spending a lot of his time on this, and there were also some internal discussions, Jakub was involved also, in a way, regarding improving the performance of the control plane, right.
A
This is where the effort was focused, and it is because there was some effort going on about trying to figure out what the limitations of the control plane are: like, what type of instance you can have in the public cloud, with what, you know, capacity, virtual cores, memory, and what number of data planes you can handle.
A
That's roughly the test scenario, and they started from these numbers and tried to improve them, like, to be able to handle many more, let's say, data planes with the same resources. So we had some questions, I don't remember, maybe in Slack somewhere, there was a question about what the limitations are, what you can handle, et cetera. So I guess that, yeah, it's probably worth it.
E
Oh yes, there are two parts to this performance optimization. The first one is on Universal, and it's already in the 1.0 release candidate. I believe it's already merged and you can check it.
E
But I was optimizing the memory usage and latency that we have. So I can create a limited number of data planes, and I was trying to optimize memory and latency for that number of data planes; I didn't try to increase it, yeah. So the second part, as I mentioned, is already a PR and in the review process. I can speak more about the details, like what exactly...
A
No, I mean, like, if you can quickly tell us what were, let's say, the top-level problems that you found? Like: we were having too many calls to the Kubernetes API server, for example, and things like that.
E
Oh yes, yes. Now we again start using the caching client for Kubernetes. So before, we relied on our own cache, but now, alongside our own cache, we also enabled the caching Kubernetes client, because together they work kind of much better than with the plain client. And we made some adjustments in terms of this, because it always uses deep copies: the caching client always gives you a copy, a deep copy of a resource, and this is not really suitable for us, since we already have a kind of synchronization on this resource, so we don't need to make a deep copy in every goroutine.
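The trade-off described here can be sketched in a few lines of Go. This is an illustrative toy, not Kuma's actual code: the `Store` and `Resource` types are made up, and the real controller-runtime cache has a richer API. The point is the same, though: a deep-copying read allocates on every call, while a shared read is free but requires the caller to provide its own synchronization and treat the object as read-only.

```go
package main

import (
	"fmt"
	"sync"
)

// Resource stands in for an object held in an informer-style cache.
type Resource struct {
	Name string
	Spec []byte
}

// Store is a minimal read-through cache guarded by a RWMutex.
type Store struct {
	mu    sync.RWMutex
	items map[string]*Resource
}

func NewStore() *Store { return &Store{items: map[string]*Resource{}} }

func (s *Store) Put(r *Resource) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.items[r.Name] = r
}

// GetCopy returns a deep copy: safe to mutate, but allocates on every read.
func (s *Store) GetCopy(name string) *Resource {
	s.mu.RLock()
	defer s.mu.RUnlock()
	r := s.items[name]
	cp := *r
	cp.Spec = append([]byte(nil), r.Spec...)
	return &cp
}

// GetShared returns the cached object itself: no allocation, but callers
// must treat it as read-only (access is already serialized elsewhere).
func (s *Store) GetShared(name string) *Resource {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.items[name]
}

func main() {
	s := NewStore()
	s.Put(&Resource{Name: "dp-1", Spec: []byte("inbound: 8080")})

	a, b := s.GetShared("dp-1"), s.GetShared("dp-1")
	fmt.Println("shared reads alias one object:", a == b)

	c := s.GetCopy("dp-1")
	fmt.Println("copy is a distinct object:", c != a)
}
```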
E
That's why we want to start a dialogue with controller-runtime. I believe that's how the project is named, or not; I don't remember.
A
Do you have any safe numbers? Like, if I deploy Kuma with a single control plane on Kubernetes, how many data planes is it safe to assume that I would be able to handle? I'm not assuming here, like, some extreme cases with a lot of services, although, yeah...
E
Yeah, okay. Personally, I used 50 data planes, but I made a hundred and three hundred services for them, so, kind of, I created 100 services which are implemented by these 50 data planes. So you have a really big density of outbounds and inbounds: one data plane has 300 outbounds and inbounds. So I was optimizing really big data plane resources, but I didn't try to run more, if you're interested. But that was about Universal.
C
I think this machine had like eight cores or so, but if we don't change anything in the mesh, we were using, like, what, half of one core, even less, I don't remember.
A
So usually the spike is... and we know that this probably needs to be outlined here: usually the spike is when you start making changes, like introducing new services, or deleting them, etc., when there is a need for reconciliation. That is where... So, if you have an environment where you dynamically spin instances up and down, spot instances maybe, scaling up and down, that could create some, okay, higher level of CPU usage specifically.
F
So I have a suggestion, actually. If you look at Istio's performance and scalability page, you should be able to see the performance summary for, like, Istio 1.7.3, which is the latest, and they clearly mention how much vCPU they use, how much memory they use, how many sidecars they are injecting, and how many meshes. So something like that would be really great if we can have it for Kuma.
C
If you don't change anything, right, like, you know, it's easier to maintain this, so, like, there are so many different variables. But that's why there is a project called Meshery that is trying to kind of do one universal test for all of them, for all the meshes, which I still find a little bit questionable, let's say, yeah, yeah. But overall I agree that we should have something similar in the docs, yeah.
F
Because even if you would have tested in that kind of constrained environment, right... I completely understand and acknowledge the fact that, yeah, if there are more services which are changing, data is changing, then your reconciliation loop will take a lot of time. I think your reconciliation loop is one second, right? Every one second, you reconcile.
F
I was having a question on that, also. So, let's say, if there are no changes, right, then why do you need to reconcile again and again? Let's say there are no events: you're running on, like, a Universal environment, and there are no data plane changes, there are no new configurations coming in.
E
Why... the question is why, on Universal, we reconcile if there are no changes, right? Yeah, yeah. Because on Universal we are using Postgres by default as a storage for our data, and right now we don't have any event-based connection to Postgres, to kind of take events and react only when they happen.
C
Okay, overall there are two models for this, right? So one model is to rely on the events from the database, right, and this is kind of easy on Kubernetes, because Kubernetes has this system where you can have a controller and watch the resources; it's a little bit harder on Postgres to have this model. And also, okay...
C
So the recent optimization that Ilya introduced is that when this reconciliation loop is executing, we first check whether there are any changes in the resources, and if there are no changes in the resources, like data planes, traffic permissions, meshes and so on, we don't even try to generate the Envoy config. So this is a kind of soft, event-based approach to this problem.
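The "check before generating" idea above can be sketched as a fingerprint comparison; this is a simplified illustration in Go, not Kuma's actual implementation, and the resource map keys (`dataplane/dp-1` and so on) are made up:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// hashResources produces a stable fingerprint over all mesh resources
// (data planes, traffic permissions, ...), keyed by name.
func hashResources(resources map[string]string) [32]byte {
	keys := make([]string, 0, len(resources))
	for k := range resources {
		keys = append(keys, k)
	}
	sort.Strings(keys) // map iteration order is random; sort for stability
	h := sha256.New()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte{0})
		h.Write([]byte(resources[k]))
		h.Write([]byte{0})
	}
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

// reconcile regenerates the proxy config only when the fingerprint changed,
// mirroring the "soft event-based" check: an unchanged mesh costs one hash.
func reconcile(resources map[string]string, last *[32]byte) (regenerated bool) {
	current := hashResources(resources)
	if *last == current {
		return false // nothing changed since the previous tick: skip generation
	}
	*last = current
	// ... generate and push Envoy config here ...
	return true
}

func main() {
	var last [32]byte
	mesh := map[string]string{"dataplane/dp-1": "inbound :8080"}

	fmt.Println(reconcile(mesh, &last)) // first tick regenerates
	fmt.Println(reconcile(mesh, &last)) // no changes: skipped
	mesh["trafficpermission/allow-all"] = "allow *"
	fmt.Println(reconcile(mesh, &last)) // change detected: regenerates
}
```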
D
Yeah, so I've been chatting with the Prometheus developers list, which is linked at the bottom, and this main guy, Julien: he initially was the one who was responding to those issues we opened up in Prometheus, and he had a little bit of apprehension about adding new Kuma-specific service discovery mechanisms.
D
So I went back and forth with Jakub, and he helped me understand why DNS wasn't a good option, so I proposed a more generic HTTP option, and, in response to that, I think Julien is now more on board with integrating Envoy's xDS protocol, which would be something we could definitely take advantage of, and probably better than a new HTTP-based one. But I'm not familiar with the xDS protocol at all in practice, so if anyone could jump in on this thread, I'm sure it'd help out.
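For context on the generic HTTP option being discussed: the idea is that Prometheus periodically polls a discovery endpoint that returns target groups as JSON, roughly in the shape below. The address and label values here are invented for illustration:

```json
[
  {
    "targets": ["10.0.0.12:5670"],
    "labels": {
      "service": "backend",
      "mesh": "default"
    }
  }
]
```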
B
Oh, no, not really, not for this community call. There is... so, actually, yeah, we actually have an open CFP for the upcoming one. So back in May we did, like, a Camp Cloud Native with Microsoft and Mirantis, as an online kind of conference, and we're kind of doing a round two in December, so the CFPs are open. I'll share that link out and put it in the document for folks. But yeah, it's going to be a co-organized event again with those two organizations. So, okay.