From YouTube: Kuma Community Call - April 13, 2022
Description
In this call, we will go over the following:
- Release 1.6 https://kuma.io/blog/2022/kuma-1-6-0/
- E2E improvements
- Kuma T-shirts
- Transparent proxy rewrite
A
So hello everyone, today we have at least four items on the agenda. First of all, this week we released Kuma 1.6.
A
You can check out the blog post; some notable features are described there, and I can briefly talk you through them. We added a preview of support for the Kubernetes Gateway API, but Gateway API support is still an experimental feature. Also, we improved the inspect API that we introduced previously: now it supports builtin Gateway resources, so you can check which policies were matched for which Gateway resource. We also improved zone egress a lot. We added support for standalone mode, because for some reason before that it was multi-zone only.
A
We added locality-aware routing for external services, and we finally added support for fault injection and rate limiting on the egress. What's next? We did a massive rewrite of the transparent proxy; I think we will speak about this later in this call. There were also many improvements to the Helm charts, like exposing the control plane with an Ingress, and the security context as well. So a lot of changes, a great release.
B
The problem with those tests is that they are slow, because you need all those Kubernetes clusters to deploy Kuma, multiple things to connect, apps to deploy, so the whole setup takes quite some time, and running the full suite sequentially takes probably around two hours. That's why we parallelize this on CI across multiple VMs, and we are trying to speed those tests up, because that's the feedback loop when we develop a new feature.
B
Obviously there is huge value in improving this, so our strategy is to have three sets of environments: one for universal standalone, one for standalone Kubernetes, and a third one for the multi-zone deployment. The strategy is to deploy Kuma once and then run tests on different meshes. This way, we don't need to deploy and delete Kuma all the time.
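The deploy-once strategy described above can be sketched roughly as follows. This is a hypothetical illustration, not Kuma's actual test code (the real suite uses Ginkgo and Kuma's e2e framework); names like `deployOnce` and `runOnMesh` are invented for this sketch.

```go
package main

import "fmt"

// Hypothetical sketch of the "deploy Kuma once, test many meshes" strategy.
// The cluster is a plain struct here so the idea stays self-contained.

type cluster struct {
	deployed bool
	meshes   []string
}

// deployOnce stands in for the expensive step: starting the Kubernetes or
// universal machines and installing the control plane. It runs a single
// time per suite instead of once per test case.
func deployOnce() *cluster {
	return &cluster{deployed: true}
}

// runOnMesh creates a fresh mesh inside the already-running deployment,
// runs the test body against it, and removes the mesh afterwards. Only the
// cheap mesh-level resources are recreated between test cases.
func (c *cluster) runOnMesh(mesh string, test func(mesh string) error) error {
	c.meshes = append(c.meshes, mesh) // create mesh
	defer func() { c.meshes = c.meshes[:len(c.meshes)-1] }() // clean it up
	return test(mesh)
}

func main() {
	c := deployOnce() // expensive setup happens exactly once
	for _, m := range []string{"mesh-traffic-permission", "mesh-retry"} {
		if err := c.runOnMesh(m, func(mesh string) error {
			fmt.Println("testing on", mesh)
			return nil
		}); err != nil {
			panic(err)
		}
	}
}
```

The key property is that the `deployOnce` cost is amortized over all test cases, which is also why cleaning up between cases (mentioned below in the call) becomes mandatory.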
B
It also improves the stability of the product, because we are effectively testing mesh isolation: if we by any chance run into some bugs, then that's an actual bug in the product. So there is that advantage as well. I'm working on preparing the environments for this; we have Kubernetes standalone merged and universal pending. That's in progress, and I hope we can speed our tests up, maybe by a factor of 10, maybe 100. Let's see.
B
It depends on the tests. With some of the tests we were, well, kind of lazy: we had, for example, multiple It blocks and we were recreating the environment on every test case. So obviously, if we rewrite a test for the new setup, we need to really take care of cleaning up the environment between those Its, which is something we should have been doing from day one. So I expect that with many of the tests...
B
There will be some extra work with cleaning up, but I migrated a couple of them and it was pretty straightforward.
C
I have one question: do you think this would make it possible to run end-to-end tests on other environments? At the moment we run on, like, kind and k3d. I've thought in the past that maybe we could run at least a subset of our end-to-end test suite, like we used to do a while back, on, you know, Elastic Kubernetes Service and that kind of thing. How complicated do you think it would be to do that?
C
I mean, we'd have to see, but if the tests are a bit cleaner now, as in they don't require a new environment each time, maybe it's more feasible to just always have the Kubernetes infra running with Kuma on it, and just be happy about paying a few hundred dollars a month. It's much better.
D
From my experience when I wrote the tests for EKS and for AKS, one of the biggest problems and the biggest point of failure was that spawning was unpredictable: everything could break, we'd hit limits, or something was wrong, or the APIs were timing out, etc. So like 90% of the time when it was failing, it was because of the unpredictable state of spawning.
D
And then it means we have to maintain it, and we already have a lot of work with the CI and the infrastructure.
C
Cool, next point is Kuma t-shirts. For everyone that is not aware of it: we are giving away Kuma t-shirts to anyone that has done any kind of contribution to Kuma. It could be, you know, filing some good issues or having PRs merged.
C
There is a link in the CONTRIBUTING.md to the form you need to fill in to receive your t-shirt, so don't hesitate to get your t-shirt. It's free clothes.
D
...these rules, and the third one was to write proper black box tests, which do not test what rules are generated but whether, for example, our transparent proxy is redirecting all incoming traffic to one port. If we install this transparent proxy, we want to know whether the packets are really going where they should go. That turned out to be a non-trivial thing, as it appears. I was actually focusing currently on these two things: the black box tests and the iptables engine.
D
Right, and my main goals with this iptables engine, because we decided to call it an engine, since in the future we will add additional engines, were: first, to simplify it. Currently it's overly complicated, with a lot of overhead from features which we are not using, which are not even supported by our model.
D
The second thing was to make it easy for people to debug, and that's the tricky part, because there's not actually a lot we could do there without a lot of effort. But currently I at least made the rule generation use longer names, with descriptions for some of the rules, the more geeky ones, explaining what they are doing and why. The third thing was to write the rule generator in a way where, when you look at the actual function calls, you can understand the rule which will be generated. And the fourth actual thing, which is a benefit of that approach...
D
...was that we now have more of a safety mechanism in the form of types. Before, we were just generating the string with the iptables rules without actually validating whether a given one was valid. Now we have this kind of validation, nothing very complicated, but at least at the level of the types which can be used inside the generated rules.
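As a rough illustration of the typed-builder idea described above (the names and the port number below are invented for this sketch, not Kuma's actual generator API): instead of concatenating strings, each fragment of a rule is a typed value, so an out-of-range port or a misspelled chain simply cannot be expressed, and the function call reads like the rule it produces.

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical sketch of a typed iptables rule builder. The idea: the
// call site reads like the rule, and the types constrain the values.

type Chain string
type Protocol string

const (
	Prerouting Chain    = "PREROUTING"
	Output     Chain    = "OUTPUT"
	TCP        Protocol = "tcp"
)

type Rule struct {
	chain   Chain
	proto   Protocol
	toPort  uint16 // uint16 makes an out-of-range port unrepresentable
	comment string
}

// Redirect builds a rule that redirects traffic on a chain to a local port.
func Redirect(chain Chain, proto Protocol, toPort uint16, comment string) Rule {
	return Rule{chain: chain, proto: proto, toPort: toPort, comment: comment}
}

// Build renders the rule as an iptables command-line fragment, attaching
// the human-readable description via the comment match.
func (r Rule) Build() string {
	parts := []string{
		"-A", string(r.chain),
		"-p", string(r.proto),
		"-j", "REDIRECT",
		"--to-ports", fmt.Sprintf("%d", r.toPort),
	}
	if r.comment != "" {
		parts = append(parts, "-m", "comment", "--comment", fmt.Sprintf("%q", r.comment))
	}
	return strings.Join(parts, " ")
}

func main() {
	// Reading the call tells you the rule: redirect inbound TCP to port
	// 15006 (an example value, not Kuma's actual redirect port).
	r := Redirect(Prerouting, TCP, 15006, "redirect inbound traffic")
	fmt.Println(r.Build())
}
```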
D
The next part, which was trickier, was this black box test, because the main point I wanted to satisfy was to not use external tooling, so to test it using only Go. That's tricky, because to really isolate an environment and test whether the packets go from one place to the other the way they should, I had to create a separate network namespace. But in Go, creating a separate namespace means that...
D
...the thread which makes the call to create this namespace goes directly into this namespace. What we want to do is actually put something there, like a TCP server, inside that namespace, then call from outside of the namespace: send some packets and see, first, whether the packets arrive at the server, and second, whether the underlying socket has the original destination address set.
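The shape of that black box test, using only the Go standard library, might look like the sketch below. This is not Kuma's actual test code: the real test places the server inside a separate network namespace with the transparent proxy installed, while here everything stays on loopback (no root required) so only the structure is visible.

```go
package main

import (
	"fmt"
	"net"
)

// roundTrip starts a TCP server (the process "inside" the isolated
// environment), dials it from the "outside", sends msg, and returns what
// the server actually received.
func roundTrip(msg string) (string, error) {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return "", err
	}
	defer ln.Close()

	received := make(chan string, 1)
	go func() {
		conn, err := ln.Accept()
		if err != nil {
			received <- ""
			return
		}
		defer conn.Close()
		buf := make([]byte, 64)
		n, _ := conn.Read(buf)
		// In the real test the server side would additionally inspect the
		// socket's original destination (the SO_ORIGINAL_DST check) to
		// verify that the transparent-proxy redirect really happened.
		received <- string(buf[:n])
	}()

	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		return "", err
	}
	conn.Write([]byte(msg))
	conn.Close()
	return <-received, nil
}

func main() {
	got, err := roundTrip("ping")
	if err != nil {
		panic(err)
	}
	fmt.Println("server received:", got)
}
```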
D
So that was very tricky, because network namespaces are thread-bound, and Go normally doesn't allow you to play with threads at all.
D
You can lock them, though, so I spent some time figuring this out, and what I succeeded in doing today is to have tests for the inbound part of our transparent proxy. Tomorrow I plan to finish the outbound part, and what will be left is to test whether the DNS redirection works. After that, what will be great is that it doesn't matter whether it is iptables...
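The locking escape hatch mentioned above is `runtime.LockOSThread`, which pins a goroutine to its OS thread so that per-thread state like a network namespace cannot leak to other goroutines. A minimal sketch of the pattern, with the actual namespace syscalls (`unshare`/`setns`, which need CAP_NET_ADMIN) deliberately left out:

```go
package main

import (
	"fmt"
	"runtime"
)

// Network namespaces apply per OS thread, while the Go scheduler freely
// moves goroutines between threads. runtime.LockOSThread pins the calling
// goroutine to its current thread so namespace changes stay contained.

func inIsolatedNamespace(work func()) {
	done := make(chan struct{})
	go func() {
		// Pin this goroutine to its OS thread.
		runtime.LockOSThread()
		// Deliberately NOT calling UnlockOSThread: exiting a locked
		// goroutine without unlocking makes the runtime discard the
		// (now namespace-tainted) thread instead of reusing it.

		// The real code would unshare/setns into a fresh network
		// namespace here (elided in this sketch), then run the body:
		work()
		close(done)
	}()
	<-done
}

func main() {
	inIsolatedNamespace(func() {
		fmt.Println("running on a locked OS thread")
	})
}
```

Running the body in its own goroutine and never unlocking is a common way to make sure the polluted thread dies with the test instead of returning to the scheduler's pool.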
D
If we add an additional engine, for example, I don't know, I don't want to say it, but I will say it: an eBPF engine, it can be run inside these same tests, because this was designed as a test framework. So you can take parts of your regular tests and use this to run your transparent proxy installation inside the test and then see whether it's actually working. So that was tricky, but we are making a lot of progress in that area.
D
It doesn't matter; the only limitation listed is the version of the kernel. I don't remember exactly, it's 3-point-something and above, which I think every recent version satisfies.
D
Actually, iptables has been deprecated for I think five years, and nftables is actually the next step. On top of that, iptables is really just the user-space client for the netlink set of kernel APIs. In 99% of cases right now, all distributions come with a layer which translates iptables to nftables. If at some point there is no translation layer and only nftables, then we would have to write an additional engine, but this isn't really problematic, because nftables is actually much better and there is better tooling for working with it.
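One practical consequence of that translation layer: since iptables 1.8, the binary reports its backend in `iptables --version` output, e.g. `iptables v1.8.7 (nf_tables)` for the nftables-backed variant versus `(legacy)` for the classic one. A hedged sketch of how an engine might detect this (the exec call itself is left out; the function only parses the version string):

```go
package main

import (
	"fmt"
	"strings"
)

// Since iptables 1.8 the binary reports its backend in --version output:
// "(nf_tables)" for the nftables translation layer, "(legacy)" for the
// classic backend. Older versions print neither marker. Obtaining the
// string would be exec.Command("iptables", "--version"); this sketch
// only parses it.

type Backend int

const (
	BackendUnknown Backend = iota
	BackendLegacy
	BackendNftables
)

func detectBackend(versionOutput string) Backend {
	switch {
	case strings.Contains(versionOutput, "nf_tables"):
		return BackendNftables
	case strings.Contains(versionOutput, "legacy"):
		return BackendLegacy
	default:
		return BackendUnknown
	}
}

func main() {
	fmt.Println(detectBackend("iptables v1.8.7 (nf_tables)") == BackendNftables)
	fmt.Println(detectBackend("iptables v1.6.1") == BackendUnknown)
}
```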
D
By default there is an extension which allows you to add rules that send a log for every packet to the syslog, and I was kind of thinking about adding that right away in the initial version. There's nothing like that yet, but I have the rules written down; I would just have to add logic to the generators to generate the rules for this. But yes, it is possible to do it.
E
No, you're right, yeah. We've probably already spent too long on this, but right now we bundle the Helm chart, or the contents of the Helm chart, the YAML for the Helm chart, in kumactl, and use that for our installer.
E
That means, and I know we've gone to aligning the version of the software with the version of the Helm chart, which I think is great, but I think it would be really cool if we were able to override that for, like, a bug-fix version of the Helm chart that we might want to release on a more frequent cadence than the software release. Right now that coupling means it's not possible. It would all just give us a bit more flexibility.
E
So
like
I'm,
not
anticipating,
like
I'm,
not
suggesting
changing
the
ux
of
how
this
works
I'm
just
saying
we
should
pull
the
helm,
chart
or
use
the
local
Helm
chart
that
is
already
pulled
to
install
or
we
should
provide,
and
we
should
provide
an
override
if
you
want
to
use
a
different
version
of
The
Help
jar.
E
That's exactly what I'm asking with this point, right? Is the code in kumactl bundled in such a way that it's impossible to pull out? Couldn't we just pull the Helm chart and use the external one, like run a template with our own values and then just use the Helm chart, or do we have to? Is it super intrinsically bundled and impossible to pull out?
F
So then you'd have to do something like... there's going to be a mismatch no matter what you do, I feel, because if you have kumactl 1.6.1 and you then realize there is a fix you want, you don't want kumactl 1.6.1 to install 1.6.2, even though that's the highest version of the Helm chart; you want it to install the most fixed version of 1.6.1. I think what Elia said is the...
C
Otherwise, and I'm stretching here, could we imagine kumactl, instead of bundling the Helm chart inline, downloading the latest Helm chart and...
E
That's literally exactly what I'm suggesting with this item, sorry John, yeah! No, no, thank you, you're obviously saying it in a different way to me, but that's basically what I'm saying: pull out the logic that bundles it, just pull it ourselves, and provide an override for which version you pull, I think.
C
In
which
case
would
need
like
a
a
sort
of
mapping
between
like
Kuma
versions
and
Helm
charts,
exactly
that's
what
you
say:
yeah,
which
is
maybe
possible
yeah.
It
might
be
possible.
I
mean
in
the
docks,
there's
a
thing
like
that,
because
we
generate
docs
and-
and
we
want
the
references
in
the
docs
to
be
correct
version
wise.
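A sketch of what such a version-to-chart mapping could look like. To be clear, this is hypothetical: kumactl does not ship this today, and the version numbers in the table are invented for illustration.

```go
package main

import "fmt"

// Hypothetical sketch of decoupling kumactl's app version from the Helm
// chart version: a small compatibility table plus a user override.

// chartFor returns the Helm chart version to install for a given Kuma
// version; an explicit override (e.g. from a CLI flag) wins over the table.
func chartFor(kumaVersion, override string) string {
	if override != "" {
		return override
	}
	// Invented example data: 1.6.1 got a chart-only bug-fix release.
	table := map[string]string{
		"1.6.0": "2.9.0",
		"1.6.1": "2.9.1",
	}
	if chart, ok := table[kumaVersion]; ok {
		return chart
	}
	// Fall back to the current behaviour: chart pinned to the app version.
	return kumaVersion
}

func main() {
	fmt.Println(chartFor("1.6.1", ""))      // table lookup
	fmt.Println(chartFor("1.6.1", "2.9.2")) // user override wins
	fmt.Println(chartFor("1.7.0", ""))      // unknown: pin to app version
}
```

This is the property discussed in the call: chart-only fixes can ship on their own cadence, while a plain `kumactl install` keeps today's pinned behaviour for unknown versions.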
E
I should say, I really have no preference on the implementation whatsoever. I'm just keen because recently, you know, a lot of fixes have come up in the Helm chart which are just UX improvements that don't really change the software, so being able to publish Helm releases at a different cadence from the software releases would just be super useful, I think. But the implementation I don't mind at all.
E
This one is me as well, but I think most folks are familiar with it. There have been a lot of cases recently where end users have needed to customize the sidecar container definition, for a variety of reasons actually. Just a few off the top of my head: one of them is security contexts, although we fixed that in another way... actually no, we haven't.
E
We do need to do security context. Also the ability to mount custom volumes into the sidecar container, and, one that came up recently, the need to add a preStop hook to drain Envoy prior to the app shutting down. Now, that last one is probably significant enough that we could add dedicated UX around it, but if we had a break-glass way of customizing the sidecar, it would be...
E
...something you could work around today. So I don't know, I haven't really thought a lot about the implementation here. In terms of having a working sidecar, there aren't really that many requirements. Like, should we even allow end users to customize the image they run? I've definitely had cases where I've done a real quick hotfix in the image, and it would be super useful to be able to modify the image as well.
F
Oh yeah, I mean, we're just starting to look at the design for this, but basically what me and Charlie were talking about literally right before this call is kind of along the lines of what you were describing. I have like a minute left here, but I'll be getting more of your feedback and other people's feedback on the design in the next...
F
...feedback from the people who are going to be, you know, affected by it, yeah.
C
Because I also dug into the tickets with Marco earlier today, and we found another one where someone wanted to specify their resource limits and requests for a specific pod. For this specific use case it would probably be a deployment, not a pod, but I think these are all valid use cases in some way. So yeah, probably having the container spec and then matchers on it, saying like: this deployment uses...