From YouTube: Kubernetes SIG Network 2019-04-04
Description
SIG Network meeting for April 4th, 2019
And as a reminder for folks: after you volunteer, if you figure out that something is a legitimate bug, then remove the triage/unresolved label; otherwise we'll revisit it next week. So there's a bunch of bugs in our queue today that are going to be revisits from last time. So if you're assigned and you think it's bogus, close it, or ask someone to close it if you don't have the access; or remove the label, or ask someone to remove it; and if it's still in progress, it's okay to leave the label.
All right, volunteer someone to just do a quick follow-up and see what's going on. Sure. Thank you, Valerie.
The v6 port. It's probably sitting on the v6 port even though they don't want it to, but you can send this one to me, I think.
Okay, all right, awesome, we're at time and we made it through. What did we open up? They'd opened, like, 5600 and you ended with, like, 48 bugs? Cool, nice work, people. Sorry, just in terms of stats we pulled up this morning: we had a net decrease of 10 from last week, I think, as a starting point, and that included somewhere in the range of six to eight new untriaged bugs since two weeks ago.
So the issue we have here is that kube-proxy has an optimization which rewrites traffic that's headed towards the load balancer from within the cluster to go straight to the backing service. But per the recommendation we implement proxy protocol on that load balancer, and if the traffic's redirected without ever hitting the load balancer, the proxy protocol header is never added.
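For context, the header the load balancer would normally prepend is the PROXY protocol v1 text line. This is a minimal sketch of that format (the addresses and ports are made up), showing exactly what the backend never sees when kube-proxy short-circuits the traffic:

```python
def proxy_v1_header(src_ip, dst_ip, src_port, dst_port, family="TCP4"):
    """Build a PROXY protocol v1 header (the human-readable variant).

    A load balancer speaking PROXY protocol prepends this line to the
    TCP stream so the backend can recover the original client address.
    When kube-proxy rewrites in-cluster traffic straight to the backing
    pods, this line is never prepended, which is the problem above.
    """
    return f"PROXY {family} {src_ip} {dst_ip} {src_port} {dst_port}\r\n"

# Example: the header the load balancer would have added
hdr = proxy_v1_header("10.0.0.5", "192.0.2.10", 51234, 443)
# hdr == "PROXY TCP4 10.0.0.5 192.0.2.10 51234 443\r\n"
```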
It was similar. The request was originally created by an Alibaba Cloud user: they have TLS termination at the load balancer, and we do as well as an option, plus proxy protocol, so that's the concern there, yeah.
So we're trying to figure out what we'd like to propose as a PR to address this, but I want to kind of find the right solution rather than churn on it.
And I guess I should also say there are kind of two ideas, two different directions we were considering taking. The first is just adding a flag to kube-proxy which would disable that optimization entirely, obviously defaulting to the current behavior. The second is changing the load balancer status object, or the service itself, to have an internal traffic policy option that would default to the current behavior but allow a different behavior that lets that traffic hairpin all the way out to the load balancer. Mm-hmm.
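The second direction can be sketched as a routing decision keyed on a per-service policy. Everything here is illustrative, not an agreed-on API: the field name and values are hypothetical stand-ins for whatever the eventual proposal would define.

```python
# Hypothetical sketch of the proposed internal-traffic-policy choice.
# "Hairpin" here means in-cluster traffic to the LB IP is sent out to
# the real load balancer instead of being short-circuited by kube-proxy.

def route_in_cluster_traffic(dst_is_lb_ip, internal_traffic_policy="ShortCircuit"):
    """Decide where kube-proxy sends in-cluster traffic.

    internal_traffic_policy is a made-up field name for illustration:
      - "ShortCircuit": current behavior, rewrite straight to backends
      - "Hairpin": send to the load balancer so PROXY protocol is added
    """
    if not dst_is_lb_ip:
        return "direct"            # not LB-bound, nothing special
    if internal_traffic_policy == "Hairpin":
        return "load-balancer"     # opt-in: preserve LB features
    return "backend-pods"          # default: today's optimization
```

A per-service field like this keeps the default unchanged for existing users, while the alternative (a global kube-proxy flag) would disable the optimization for every service on the node at once.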
Last time, I think we saw this was coming from an on-prem user, and I think the hack was to inject iptables rules to basically force the destination: for instance, if the destination falls into the load balancer IPs, then don't track it, so it just goes right out. I think that was the hack, as I remember.
Sure, so we've had a series of issues that came from how inherently buggy and error-prone the cleanup process is, because each mode has some intersections in how the rules work. We decided, kind of at the last meeting, to just remove the cross-mode auto cleanup. So now, if you want to switch kube-proxy between different modes, you have to explicitly run the cleanup as a single run, or restart your node.
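The change described above can be modeled as a toy simulation. Rule names and the dict of per-mode state are illustrative; real kube-proxy manipulates iptables/IPVS state on the node, not an in-memory structure:

```python
# Toy model of the kube-proxy mode-switch cleanup change.
node_rules = {"iptables": set(), "ipvs": set()}

def start_proxy(mode, auto_cross_mode_cleanup=False):
    """Install rules for `mode`.

    The old behavior also tried to clean up every *other* mode's rules
    on startup, which proved buggy because the modes' rules intersect.
    The new behavior only touches the chosen mode.
    """
    if auto_cross_mode_cleanup:        # removed behavior
        for other in node_rules:
            if other != mode:
                node_rules[other].clear()
    node_rules[mode].add(f"{mode}-service-rules")

def cleanup(mode):
    """The explicit one-shot cleanup operators now run themselves
    (or they can restart the node instead)."""
    node_rules[mode].clear()

# Switching modes without an explicit cleanup now leaves stale rules:
start_proxy("iptables")
start_proxy("ipvs")        # old iptables rules are still present here
cleanup("iptables")        # so the operator must clean up explicitly
```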
I guess one thing worth mentioning for this is that in the first release we're not actually supporting finalizers; we're just going to add the code to handle finalizers, so that if a user rolls back, they don't have services stuck with finalizers that no controller is handling. So it's probably going to be a multi-release feature that we're going to add incrementally.
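The rollback concern can be sketched concretely: a deleted object with a finalizer stays in a terminating state until something removes the finalizer, so the removal code has to ship one release before anything starts setting finalizers. This is a toy sketch; the sync-loop shape and the specific finalizer string are illustrative:

```python
# Toy sketch of finalizer rollback safety for Services.

def delete_service(svc):
    """Mark a service deleted; it only really goes away once its
    finalizer list is empty. Returns True if fully removed."""
    svc["deleting"] = True
    return not svc["finalizers"]

def old_release_sync(svc):
    """Release N sync loop: knows how to *remove* the load-balancer
    finalizer on teardown even though this release never adds one.
    Without this code, a rollback from release N+1 would leave
    deleted services stuck forever."""
    if svc["deleting"]:
        svc["finalizers"].discard("service.kubernetes.io/load-balancer-cleanup")
    return not (svc["deleting"] and svc["finalizers"])

# A service created on release N+1 (which set the finalizer), then
# deleted after rolling back to release N:
svc = {"finalizers": {"service.kubernetes.io/load-balancer-cleanup"},
       "deleting": False}
delete_service(svc)        # not yet removed: finalizer still present
old_release_sync(svc)      # rollback-safe: finalizer gets removed
```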
So I just want to check what the status of it is. As I understand it, it's only got implemented in a couple of plugins, so we want to know the future plan. And also, it's done via a pod annotation, so if we update the annotation, it's not picked up by the plugin. So we want to know, I mean.
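The update problem being raised can be modeled as follows: a plugin that reads the annotation once at pod setup and never re-reads it will not react to later annotation changes. This is a toy model; the class, method names, and annotation key are all hypothetical:

```python
# Toy model: a plugin that consults a pod annotation only at ADD time.

class Plugin:
    def __init__(self):
        self.configured = {}          # pod name -> value captured at ADD

    def add(self, pod):
        # The annotation is read exactly once, when the pod is set up.
        self.configured[pod["name"]] = pod["annotations"].get("example.com/setting")

    def effective_setting(self, pod):
        # Later edits to pod["annotations"] are never re-read.
        return self.configured[pod["name"]]

plugin = Plugin()
pod = {"name": "web", "annotations": {"example.com/setting": "1M"}}
plugin.add(pod)
pod["annotations"]["example.com/setting"] = "10M"   # updated after setup
# effective_setting still reports the value captured at ADD time: "1M"
```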
As the SIG, we need to decide if we think that it's a real feature that we can really support, and to do that I think we need to do a little bit of thinking about how it would look as a real feature, and what it means for non-container or non-namespace-based environments, and what it means for other plugin implementers, those sorts of things, I guess.
One more quick topic: we actually had a meeting last week to talk about our SIG Network maintainer track session. It's an 80-minute session in Barcelona, split in two halves: one is the introductory session and the other is the deep dive session. We started an outline for this, and we will hopefully refine the outline and start turning it into slides in the next couple of weeks.
We've proposed four topics: one, Ingress v1 and the path to GA; two, the DNS node cache; three, service topology; and four, IPv6 and dual stack. I'm not sure we'll get to all four topics inside the 40-minute half, but we're gonna start writing up slides for that. So anybody who wants to help is welcome to help; otherwise we'll just chip away at it.