From YouTube: Kuma Community Call - August 4, 2021
Description
Kuma hosts official monthly community calls where users and contributors can discuss any topic and demonstrate use cases. Interested? You can register for the next Community Call: https://bit.ly/3A46EdD
A
We'll try to cover all the topics today and, at the end, we'll have a Q&A session as usual. So, okay, let's start. Last week we released Kuma 1.2.3; here we can find the link to the blog post. It's not a big release, just some minor fixes that improve the stability of the product, and you can find more details in the linked blog post. Not many things in this release, so I think I can hand it over to Austin to show us a demo for Postgres... sorry, for Prometheus.
D
Yeah, thanks. Let me just make sure it works, but the bones are there, so I guess I'll share this.
D
So really, what we're doing is natively discovering scrape targets from a monitoring assignment server, right? Currently, the way to integrate with Prometheus is to run a sidecar that writes to a JSON file inside the Prometheus container, and Prometheus then reloads it through the file discovery mechanism.
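For reference, a minimal sketch of that file-based setup; the job name and file path are illustrative assumptions rather than the exact values from the demo:

```yaml
scrape_configs:
  - job_name: kuma-dataplanes
    file_sd_configs:
      - files:
          # JSON file the sidecar keeps up to date inside the Prometheus container
          - /var/run/kuma-prometheus-sd/kuma.file_sd.json
```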
D
This has a few problems. One is that it requires you to actually run that sidecar alongside Prometheus itself, which, if your teams are split, if the people running Prometheus aren't the same ones running Kuma, can pose a challenge. It's also not as performant, and it has some serious bugs. So what we've done is add a native discovery mechanism to Prometheus, and in this demo, here's what I've got.
D
Let
me
know
so:
we
have
that
running
alongside
just
the
akuma
one.
One,
two
three
control
plane
and
the
prometheus
I
have
running
is
2.29
and
that's
the
first
rc
that
was
cut
lost
friday.
D
So
all
you
need
to
do
to
configure
prometheus
now
is
in
your
standard
script.
Configs.
You
just
specify
this
new
kuma
sd
configs
and
the
server
right.
So
this
is
just
pointing
to
the
kuma
control
plane
running
in
this
kubernetes
cluster
and
the
5676
is
the
monitoring
assignments,
server.
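As a rough sketch, that scrape config would look something like the following; the job name and the control-plane service address are assumptions, while kuma_sd_configs and its server field are the native mechanism being demoed:

```yaml
scrape_configs:
  - job_name: kuma-dataplanes
    kuma_sd_configs:
      # Kuma control plane address; 5676 is the monitoring assignments server port
      - server: http://kuma-control-plane.kuma-system.svc:5676
```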
D
All
right
cool
these
are
the
logs
from
prometheus,
so
it's
currently
just
pulling
the
the
kubernetes
or
the
the
kuma
control
plane
for
new
discovery
targets
right.
So
if
we
delete
the
demo
app
we'll
see
that
prometheus
is
updated
and
drops
back
down
to
just
the
the
non-demo
app
services
right.
So
as
the
kuma
data
planes
are
updated,.
D
...back to only having one, we see that Prometheus is now updated to only one group, one discovery target. But where it gets fun is that this is literally the only thing you need to do, and then you get all your Envoy metrics natively exposed to Prometheus.
D
So this Kuma data planes view is only showing one right now, which I think is odd. I can't tell if it's a Prometheus bug or on us, but I'm debugging that. Basically, you see that it gets the address of the data plane and then a bunch of metadata about it. There are top-level tags, our labels for the actual data plane we're scraping, the metrics path, the scheme, and then all the Kuma tags get built in as well, like the protocol.
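To make that metadata usable, the discovered Kuma attributes can be promoted into regular Prometheus labels with relabel_configs; the __meta_kuma_* label names below follow the usual Prometheus convention but should be treated as assumptions:

```yaml
scrape_configs:
  - job_name: kuma-dataplanes
    kuma_sd_configs:
      - server: http://kuma-control-plane.kuma-system.svc:5676
    relabel_configs:
      # Promote discovered Kuma metadata into ordinary Prometheus labels
      - source_labels: [__meta_kuma_service]
        target_label: service
      - source_labels: [__meta_kuma_dataplane]
        target_label: dataplane
```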
D
The component... oh, so these are the Kubernetes tags, so this should all get bundled into the Prometheus server's scraping. Why this is important in the context of Prometheus is that currently, in order to do authorization and authentication, you have to build that into the Prometheus config file.
D
So
if
you're
scraping
services
across
many
teams
or
clusters,
if
you're
doing
that,
you
have
to
centralize
your
credentials
and
with
kuma,
obviously
you
can
use
mtls
to
control
it
those
identities
and
who
can
talk
to
you
on
a
more
fine
grain
level.
D
That's
also
what
got
me
to
thinking
about
authorizing
on
a
http
level,
on
l7
from
the
the
traffic
permission
policy,
it
would
be
very
nice.
It
would
be
a
very
nice
extension
to
this
to
say,
prometheus
is
only
allowed
to
scrape
the
the
given
metrics
path
that
it
sees.
C
That's pretty cool, though. So, Austin, you said that there might be a bug. What bug are you talking about?
D
So this is why I wasn't trying to publish that blog post yesterday. I wanted to, but I didn't have time to test the latest changes end to end. The bug is this: you see how the scraping looks like it's only persisting one target? It should have all five data planes here, so it looks like it's only choosing one, I believe.
D
That's
it's
clearly
finding
all
of
them
back
from
the
the
service
from
the
monitoring
assignment
service,
but
it's
not
persisting
them
back
to
prometheus
targets,
which
is
a
problem.
C
So
but
but
it
seems
like
we
still
have
time,
because
this
is
for
an
upcoming
release
of
prometheus.
So
we
have
time
between
now
and
then
to
fix
this
problem.
D
Yes, this is now what I will be doing this afternoon, because I got this demo done right before the meeting.
D
Yeah, let's take a look. So they cut release candidate zero on Friday, right?
D
Yeah, I hope it eases people into Prometheus and Kuma.
C
Yeah, I believe that once this fully ships in Prometheus, we can also significantly simplify the documentation on what needs to happen today to connect Prometheus with Kuma.
C
While people think of questions, in the meanwhile we can give some visibility on the release cycle. We would like to make another release in one or two weeks; we have to determine exactly when. There have been a few improvements merged that we obviously want to make available to everybody else.
E
Hey, I'm Carl Hayworth from American Airlines, and we're doing a bunch of automation with Kubernetes, setting up new clusters and incorporating a service mesh into them. At this point we've got it to where, with Kuma, we can deploy a new Kubernetes cluster in AKS and get it added to the full Kuma environment, utilizing private routing on the private network while also having public-facing websites as well. That's my preference, and that's what we use here.
E
As far as the documentation, I felt like maybe it was a little on the light side for when things went wrong, because I know I reached out in the Slack channel for support. But I did really like the Slack community; I found lots of help through there.
E
Another area we implemented that seemed maybe slightly under-documented was locality-aware routing. It took a while to figure out exactly where to apply the location and region tags, I believe they're the service tags, and after lots of digging through the Slack community I was able to figure out exactly where to apply those. But that would be about the only thing I've noticed going through all this and setting everything up.
E
Utilizing
the
helm
install
made
it
easier
to
be
able
to.
I
think
I
added
an
annotation
to
specify
that
I
wanted
to
use
the
internal
network
versus
the
azure
public
network,
because
our
cluster
is
hooked
up
to
both
networks,
so
that
could
potentially
be
helpful
to
point
out
in
the
documentation
too.
If
there's
other
users,
that
would
be
interested
in
specifying
exactly
where
kuma's
routing
through
because
before
kuma
was
trying
to
route
through
the
public
internet,
and
we
were
working
to
set
up
the
azure
aks
nsg
rules
to
lock
things
down
and,
of
course,
puma.
E
On
the
public
internet
was
kind
of
a
problem,
so
once
we
figured
out
how
to
edit
to
the
annotations
properly,
which
I'm
happy
to
share
our
configuration
for
that,
but
once
we
found
the
annotation,
I
want
to
say
it
was
a
mix
between
slack
and
just
googling,
but
we
were
able
to
achieve
that
as
well.
So,
but
overall,
I
think
we've
been
pretty
happy
with
what
we've
seen
and
what
we've
implemented.
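The annotation in question is most likely the standard AKS internal load balancer annotation on a Kubernetes Service; where exactly it is set in a Kuma deployment (for example on an ingress or control-plane Service) is an assumption here, and only the annotation key itself is the documented Azure one:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kuma-ingress   # illustrative name, not necessarily what the chart creates
  annotations:
    # Keep the Azure load balancer on the internal (private) network
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
    - port: 10001      # illustrative port
```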
E
I know we compared a bunch of the other service meshes that are available, and being American Airlines, we've got lots of teams using different solutions. But we're the team that's looking to automate Kubernetes for everyone, and we introduced Kuma at the company. One of our directors, Jason Walker, previously from Cargill, I think he'd reached out for support from you guys and worked with you as well, so he helped.
E
Yeah, he recently... well, I guess he's been with us just over a year now, so he's bringing in some new concepts, modernizing things, and pushing us forward. And seeing some of the other service mesh implementations at American versus what Kuma's been able to provide, the ease of installing and adding new zones to the mesh, plus the flexibility of incorporating VMs too, those are all big benefits.
C
Very
well
so
carl.
We
are
aware
that
the
documentation
needs
improvements
and,
as
a
matter
of
fact,
we
are
about
to
start
a
major
effort
to
essentially
reorganize
the
entire
docks.
This
is
going
to
be
making
it
easy
easier
for
everybody
to
get
up
and
running
with
kuma.
You
know
some
of
these
things.
Some
of
these
features
were
added
along
the
way,
but
the
main
documentation
foundation
was
the
one
you
know
prior
to
releasing
all
these
improvements.
E
I figured that, but you guys are awesome on Slack, helping us work through some of the issues, and then finding information through the community too. So, good resources there.
E
There
was
one
thing
that
I
ran
into
and
I
didn't
figure
out
exactly
what
it
was
with
the
regular
kuma
install.
I
didn't
see
a
way
to
properly
version
which
install
we're
using
and,
of
course,
we've
switched
over
to
the
helm
chart,
which
is
much
better
and
everything,
but
originally
when
we
were
going
through
it
it.
E
I
think
that
we
had
installed
kuma
and
then
it
upticked
a
version,
and
then
we
installed
it
on
the
next
cluster
and
it
seemed
like-
and
maybe
I
didn't,
because
I
wasn't
anticipating
that
upgrade
really
just
happening
on
the
fly
like
that.
It
seemed
like
with
the
upgrade.
Maybe
the
side
cars
didn't
properly
restart
on
the
upgrade,
but
I'm
not
100
sure
on
that.
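Two follow-ups to that anecdote, sketched under assumptions (the release name, namespace, and version are placeholders): pinning the chart version so a second cluster doesn't silently pick up a newer release, and re-rolling workloads after an upgrade so the sidecars are re-injected at the new version:

```sh
# Pin the chart version at install time so installs don't drift between clusters.
helm repo add kuma https://kumahq.github.io/charts
helm repo update
helm install kuma kuma/kuma \
  --namespace kuma-system --create-namespace \
  --version <pinned-chart-version>

# After a control-plane upgrade, restart meshed workloads so new sidecars get injected.
kubectl rollout restart deployment --namespace <app-namespace>
```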
E
1.2.3? Okay, so I think it was going from 1.2.1, maybe, to 1.2.3. I think it was when we jumped; it had somehow jumped two versions before we had switched over to the Helm install.
B
Yeah, in the latest release we added TCP keepalive to the connection between the DP and the CP, because we saw an issue with the Azure load balancer. So actually, that might have been the problem for you. I'm interested in whether you can upgrade your deployment to 1.2.3 and see if this happens again or not.
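A minimal sketch of what that upgrade could look like with the Helm chart Carl mentioned using; the release name, namespace, and the chart version that ships Kuma 1.2.3 are assumptions:

```sh
helm repo update
# Upgrade the existing release to the chart version that ships Kuma 1.2.3.
helm upgrade kuma kuma/kuma --namespace kuma-system \
  --version <chart-version-for-1.2.3>
```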
E
Okay, yeah, I'll check that out this afternoon. Thanks for that info.
A
So
yeah,
I
think
that's
it
for
today
everyone
have
a
great
day
and
see
you
in
two
weeks
or
in
community
stock.
D
The Prometheus maintainer opened a PR in Kuma for the code owners, but we still have this CLA, which is not the CNCF CLA. How do we disable that?
D
He's funny. Yeah, so I don't know how we do that.