From YouTube: Kubernetes SIG network meeting 2019-09-19
A: Wow, this was assigned to me. Interesting. Yeah, all right. I did look at this last time; I guess there's been no response, even after one more ping. Yep, yeah. Who is this?
H: Can everyone see this? Yep? Cool, all right. So as part of this performance testing, I wanted to make sure that kube-proxy actually performed well with EndpointSlices. The end result was performance improvements both with and without EndpointSlices. Generally what I did is: I'd spin up a cluster with kubetest, I'd modify the kube-proxy manifest on at least one node to enable profiling, and I'd use port-forward and go tool pprof to profile kube-proxy.
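For illustration, here is a minimal Go sketch of the profiling setup described above: exposing Go's built-in pprof HTTP endpoints, which is similar in spirit to what enabling profiling on kube-proxy does. The port numbers and profile window in the comments are illustrative, not the exact values from the talk's setup.

```go
// Minimal sketch: a Go service exposing pprof endpoints for profiling.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
)

func main() {
	// Once this is listening, a CPU profile could be captured with, e.g.:
	//   kubectl port-forward <kube-proxy-pod> 10249:10249
	//   go tool pprof http://localhost:10249/debug/pprof/profile?seconds=900
	// (port and 15-minute window here are illustrative assumptions)
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```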
H: And Minhan actually has this really cool bunch of helper scripts for Kubernetes, kube-script, which makes it super easy to push a custom build of kube-proxy to a specific node. So I used that to push a few custom builds to different nodes, and those builds ended up being: original 1.16, 1.16 with slices enabled, new, and new with slices enabled, with "new" in both cases being the performance improvements I got to. The final results were gathered on a 150-node cluster over a 15-minute window, scaling from zero to ten thousand endpoints.
H: So we found three bottlenecks in this whole process, and I just want to cover what they were. The first two of them were very much me and EndpointSlices. For consistency when comparing values, I was trying to sort endpoints just so that they would always be in the same order when comparing them, and I was comparing them by endpoint IP. That IP function call was not just returning a string, which I thought it was; it was parsing a string out of an IP and port combination using net utils, which was very slow.
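To show why that was slow, here is a hedged Go sketch (not the actual kube-proxy code): parsing the IP out of each "ip:port" string inside the sort comparator repeats the parse on every comparison, while comparing the raw strings is enough when all that's needed is a consistent order.

```go
package main

import (
	"fmt"
	"net"
	"sort"
)

type endpoint struct{ hostPort string } // e.g. "10.0.0.1:8080"

// sortSlow re-parses both strings on every comparison: expensive.
func sortSlow(eps []endpoint) {
	sort.Slice(eps, func(i, j int) bool {
		hi, _, _ := net.SplitHostPort(eps[i].hostPort)
		hj, _, _ := net.SplitHostPort(eps[j].hostPort)
		return hi < hj
	})
}

// sortFast compares the raw strings; the order is not numeric, but it is
// consistent, which is all the comparison described above needed.
func sortFast(eps []endpoint) {
	sort.Slice(eps, func(i, j int) bool { return eps[i].hostPort < eps[j].hostPort })
}

func main() {
	eps := []endpoint{{"10.0.0.2:80"}, {"10.0.0.1:80"}}
	sortFast(eps)
	fmt.Println(eps)
}
```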
H: Bottleneck number two: the EndpointSlice cache's endpoints map. For those familiar with kube-proxy, there's this data structure called EndpointsMap, and it makes it really straightforward for proxies like iptables or IPVS to translate that into actual proxy rules. We needed to build that; it was fairly easy to convert from Endpoints, but a bit more complicated from EndpointSlices, and I had been following the same workflow that Endpoints had, which was trying to convert every single time a new Endpoint or EndpointSlice came in. And that was very slow.
H: It ended up taking around 45% of total kube-proxy CPU time after that previous fix. So the solution was to only compute EndpointsMap when the proxy sync actually requires it. The proxy sync, where iptables actually gets written to, runs pretty rarely, every few seconds, whereas EndpointSlice updates can be very frequent. So just changing when that computation happened made another huge difference. And then the final change here was to change when we ran detect stale connections.
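A hedged sketch of the shape of that fix (the names and types are illustrative, not the actual kube-proxy code): record updates cheaply as they arrive, and only rebuild the derived map when the much rarer sync loop runs.

```go
package main

import (
	"fmt"
	"sync"
)

// tracker records EndpointSlice updates cheaply and rebuilds the derived
// endpoints map lazily, at sync time, instead of on every update.
type tracker struct {
	mu      sync.Mutex
	pending map[string][]string // service key -> endpoint addresses
	dirty   bool
	applied map[string][]string // what proxies translate into rules
}

// Update is called on every EndpointSlice event; it does no expensive work.
func (t *tracker) Update(key string, addrs []string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.pending[key] = addrs
	t.dirty = true
}

// Sync runs every few seconds; the rebuild happens at most once per sync
// rather than once per update.
func (t *tracker) Sync() map[string][]string {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.dirty {
		t.applied = make(map[string][]string, len(t.pending))
		for k, v := range t.pending {
			t.applied[k] = v
		}
		t.dirty = false
	}
	return t.applied
}

func main() {
	t := &tracker{pending: map[string][]string{}}
	t.Update("default/web", []string{"10.0.0.1:80", "10.0.0.2:80"})
	fmt.Println(t.Sync())
}
```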
H: So if you had nothing but TCP ports in your workload, it ended up taking a huge percentage of time that it didn't need to. In this case it was taking eighty-one percent of total kube-proxy CPU time, regardless of EndpointSlices being enabled, and this is after the previous fixes were already in place.
H: Exactly, yeah. But the most expensive part is figuring out what those stale connections are, yeah.
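A hedged sketch of the TCP-only fast path being described (illustrative types, not the real kube-proxy code): stale-connection detection only matters for UDP conntrack entries, so when a workload has no UDP ports the expensive scan can be skipped entirely.

```go
package main

import "fmt"

type servicePort struct {
	name     string
	protocol string // "TCP" or "UDP"
}

// detectStaleConnections returns the UDP ports that need conntrack cleanup.
// For a TCP-only workload it returns immediately, skipping the scan.
func detectStaleConnections(ports []servicePort) []string {
	var udp []string
	for _, p := range ports {
		if p.protocol == "UDP" {
			udp = append(udp, p.name)
		}
	}
	if len(udp) == 0 {
		return nil // fast path: nothing can be stale for TCP-only services
	}
	// ...the expensive conntrack inspection would only run here, for UDP...
	return udp
}

func main() {
	fmt.Println(detectStaleConnections([]servicePort{{"http", "TCP"}}))
}
```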
H: So with those results, again highlighting: this is watching a cluster scale from zero to ten thousand endpoints on a 150-node Kubernetes cluster. I have four different versions of kube-proxy on different nodes, those nodes have nothing else on them other than the very basic elements you would expect on a Kubernetes node, and this is over a 15-minute pprof profiling window.

H: So the first thing is kube-proxy memory utilization. It really didn't change that much. There's a bit of a change here: if you can see this green line, that represents the newest version of EndpointSlice with some performance improvements. But these changes are small enough that I can't say for sure it's actually a change and not just noise; potentially some improvement here.
H: Okay, and then the next thing, which was very dramatic, was kube-proxy CPU utilization. The numbers along the bottom represent the number of minutes into the test, and you can see that my initial EndpointSlice implementation was very CPU intensive and not very efficient. That's the yellow line, which takes several minutes longer and significantly more CPU time to complete. The dark navy blue line is the original Endpoints implementation in 1.16, the light blue line is the new Endpoints implementation with these improvements, and the green line is now the most efficient implementation, which is EndpointSlices. And if it's easier to visualize, pprof actually gives you the total CPU time used throughout this process, and this is what that looks like. So according to pprof, you see this huge change in total CPU time from EndpointSlice 1.16 down to EndpointSlice new, almost comical. And if it's easier to visualize with numbers, those are the seconds of CPU time over that 15-minute window used by each implementation.
H: Looking at it, they weren't as obvious, that's what I'll say. The biggest things it was spending time on were map reads; it's just really expensive. I hope there are optimizations there, because that's still taking a huge amount of time if you have UDP connections, but nothing really obviously stuck out.
H: So what you notice, if you go back to some of these charts (this doesn't really explain it very well), is that I didn't really notice huge differences until you got into the thousands of endpoints. So for your average user this is not going to be that noticeable, but when you do get into those thousands of endpoints, like this test's ten thousand, or fifteen thousand, or especially fifty thousand, it's absurd. So, yeah.
H: That'd be cool. All right, so EndpointSlice is going beta in 1.17, and as part of that we added some graduation criteria. I'm sure some of you saw the KEP PR that's been active for a bit and got merged sometime this week; here's what we committed to for graduating to beta in 1.17. The first one was already in there: kube-proxy switching to consume the EndpointSlice API, and that's already done in alpha.
H: Certainly we want to make sure that e2e tests cover both Endpoints and EndpointSlices. We get a lot of that for free, but we want to make sure that when we turn on EndpointSlices, all the existing tests will cover that. We also want to make sure they cover Endpoints, so we have full backwards compatibility and we know that both continue to work for the foreseeable future.
H: Then there are three additions, and they're all really related: they're related to the idea that we want EndpointSlices to be useful for more than just our own EndpointSlice controller and kube-proxy use cases. We want other things to be able to manage these, write to these, and use them for whatever is useful for them. So this label name is very much still up for debate, and certainly comment on the PR, but endpointslice.kubernetes.io/managed-by is the current proposal.
H
That
would
basically
say
this
is
the
controller
or
entity
that
is
responsible
for
this
endpoint
slice,
then
we're
adding
a
fqdn
address,
type
to
just
support,
more
uses
of
end
point
slice
and
finally,
a
requested
feature.
Add
support
for
optional
app
protocol
field
on
endpoint
port
as
I
understand
it.
I
think
this
has
been
talked
about
in
other
API
is
before,
and
we're
trying
to
make
endpoints
as
possible,
and
it
felt
like
a
good
place.
So
any
questions
on
any
of
those.
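As a hedged illustration of the managed-by idea (the label key was still under debate at this meeting, and the controller name below is hypothetical), a custom controller might label the slices it owns like this:

```go
package main

import "fmt"

// Proposed label key discussed above; the final name was still under debate
// at the time of this meeting.
const managedByLabel = "endpointslice.kubernetes.io/managed-by"

func main() {
	// Labels a custom controller might set on an EndpointSlice it owns.
	// "example-controller" is a hypothetical manager name.
	labels := map[string]string{
		"kubernetes.io/service-name": "my-service", // links the slice to its Service
		managedByLabel:               "example-controller",
	}
	fmt.Println(labels)
}
```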
C: I think that discussion is in the PR review from during the alpha, so yes, I agree. Now it has a problem of fragmentation. Let's say you scale up your service and then you scale down significantly: because we don't do active rebalancing, meaning we don't move one endpoint from one EndpointSlice to another, you can end up with one endpoint in a whole slice, and that slice basically ends up living on.
C: What we plan to address that with is instrumenting this exact behavior: basically, how many EndpointSlices we would want in the ideal case versus the actual number of EndpointSlices we have. We then expose that as metrics, see how bad it is and what the impact is, and then evaluate later on what algorithm to move forward with.
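A minimal Go sketch of the utilization idea being proposed here, showing only the packing arithmetic (not any actual implementation): compare the number of slices a service actually has against the minimum it would need if perfectly packed.

```go
package main

import "fmt"

// desiredSlices is the minimum number of slices needed to hold n endpoints
// when each slice can hold at most maxPerSlice of them.
func desiredSlices(n, maxPerSlice int) int {
	if n == 0 {
		return 0
	}
	return (n + maxPerSlice - 1) / maxPerSlice // ceiling division
}

func main() {
	// After a large scale-down: 120 endpoints spread across 7 slices, even
	// though 2 slices (at 100 endpoints each) would suffice if repacked.
	endpoints, actualSlices, maxPerSlice := 120, 7, 100
	desired := desiredSlices(endpoints, maxPerSlice)
	fmt.Printf("slice utilization: %d/%d = %.0f%%\n",
		desired, actualSlices, 100*float64(desired)/float64(actualSlices))
}
```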
I: That sounds fair to me. So you want to do the slice utilization metric? Yes? All right. And I'm assuming that, worst case scenario, if I'm running this and I end up with thousands of EndpointSlices at one percent utilization, I can just let a script delete all the EndpointSlices, and at least the new ones should be getting compacted, because that's the default behavior. Yes? All right.
G: Yeah, so I have a PR open for that, and there's been some discussion going on. Tim and Andrew have reviewed it and added some comments, basically on the design. The way I initially implemented it is to have the node CIDR mask sizes as comma-separated values, similar to the cluster CIDRs, with a one-to-one mapping between them.
G: I think what Tim was suggesting on the PR is that we have separate flags for IPv4 and IPv6, and the user would be able to set those, and we keep them and the existing node CIDR mask size flag mutually exclusive. So if it's a single-stack cluster, they would just use the one flag, but if it's dual-stack, they would use the IPv4/IPv6 flags.
G: I mean, right now from the PR it does look like a lot of folks are on board with keeping them mutually exclusive: if you set the other flag and you're dual-stack, then it errors out. That's the direction I'm looking to go in, and I don't see any objections on this call. So.
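A hypothetical Go sketch of the mutually exclusive flag handling being discussed; the flag names are illustrative stand-ins, not the final kube-controller-manager flags.

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	single := flag.Int("node-cidr-mask-size", 0, "mask size for single-stack clusters")
	v4 := flag.Int("node-cidr-mask-size-ipv4", 0, "IPv4 mask size for dual-stack clusters")
	v6 := flag.Int("node-cidr-mask-size-ipv6", 0, "IPv6 mask size for dual-stack clusters")
	flag.Parse()

	// The generic flag and the per-family flags are mutually exclusive:
	// rather than guessing what the user meant, error out.
	if *single != 0 && (*v4 != 0 || *v6 != 0) {
		fmt.Fprintln(os.Stderr, "error: --node-cidr-mask-size cannot be combined with the per-family flags")
		os.Exit(1)
	}
	fmt.Println("flag validation passed")
}
```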
A: Okay, in that case I guess carry on with what you're doing in the PR. It doesn't sound like there are any strong opinions here. Yeah.