From YouTube: Kubernetes SIG Network Bi-Weekly Meeting for 20220414
A: All right, sounds like we're recording. This is the SIG Network meeting for April 14th, 2022. Why don't we start off with some issue triage, if anyone's got that lined up.
B: Anybody see that? Yep, all right. I'm having some video glitch on the way in, but that's fine as long as you can see me. Cool. So I appreciate everybody who ran through this afternoon, or this morning, and pinged a bunch of these issues. I have, what, six left that we can talk about.
B: So let's start with the most recent: pods failed with status, IP address reused on new pods. So the TL;DR: it sounds like they have somehow got two pods running that have the same IP address, and they think there's some correlation to the fact that one or more of the containers in the pod are in some non-normal state.
B: But I asked for some clarity on that. Cal is also in there because, if that's true, that's disastrously bad and we will need to get to the heart of it very, very quickly. So, Keller, do you want me to assign it to you, or do you want me to assign it to myself?
C: Sure, sure, but I don't see it the way you see it. But yeah, sure, we obviously need clarity. My understanding is that the old pod is done, dead, and the IP moved to the new one; and that will happen if you're at the very end of the CIDR space: if you only have two or three addresses remaining, an IP will quickly get reused. So anyway, let's ask. Yes, assign it to me and I'll take care of it. Thank you.
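The near-exhausted-CIDR scenario described here can be sketched with a toy sequential allocator (illustrative only; real IPAM plugins such as host-local differ in detail):

```python
import ipaddress

def allocate(cidr, in_use):
    """Hand out the first host IP in `cidr` not currently in use.

    A toy sequential IPAM allocator: when the CIDR is nearly exhausted,
    an IP released by a deleted pod is handed out again almost at once.
    """
    for ip in ipaddress.ip_network(cidr).hosts():
        if str(ip) not in in_use:
            return str(ip)
    raise RuntimeError("CIDR exhausted")

# A /29 pod range has only 6 host addresses. With 5 already in use,
# the address freed by a deleted pod is the next one reused.
in_use = {"10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4", "10.0.0.5"}
first = allocate("10.0.0.0/29", in_use)   # "10.0.0.6", the last free IP
in_use.add(first)
in_use.discard("10.0.0.3")                # old pod deleted, IP released
reused = allocate("10.0.0.0/29", in_use)  # "10.0.0.3" comes right back
```

The point is only that rapid reuse near exhaustion is expected behavior, not itself a bug; the bug would be two live pods holding the same IP at once.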
B: They say their CNI... no, they didn't. It's on GKE, but they didn't say which of the myriad options they have used. But, Cal, this is the sentence that made me question my sanity.
B: Yes, if the container... so it's unclear whether they're saying these are container statuses or pod statuses, because Terminated as a pod phase should be terminal, and should not be, you know, come-back-from-able. But if the pod is still up and running and serving requests... anyway, we'll get some clarity on it. It's not clear what's going on here. I think it's more likely that we have, like, a stale endpoint somewhere.
C: I've seen this before, because I've broken too many things in my programming life. An etcd restore can do that, if the CNI depends on etcd: if you're doing the restore, like it's a total failure and you're restoring, and there's just a small drift, enough for the CNI to think: oh, I have this one free.
C: It's just... I have seen this twice before. Once because the CNI was running as a pod and didn't keep any state on the node, so every time the pod restarted the CNI thought: oh, all of these are free; so it started allocating, started assigning them. And the other time I've seen it recently was when the CNI just decided: yeah, okay, I restored; and then there is a delta between what's actual and what's running. So...
B: I went through and checked today: we do filter out endpoints that aren't running, so once a pod has reached one of these completed, terminal states, it shouldn't show up as an endpoint anymore. Not saying that's guaranteed to work, but it's at least designed to work that way. They clearly have, like, a broken cluster in some way, right?
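A minimal Python sketch of the behavior described here (made-up pod records, not the real endpoints-controller code): only Running pods count as endpoint candidates, and any IP shared by two live pod records is the red-flag condition from the bug report:

```python
from collections import Counter

def endpoint_ips(pods):
    """Keep only Running pods as endpoint candidates; pods in a
    terminal phase (e.g. Succeeded/Failed) are filtered out."""
    return [p["ip"] for p in pods if p["phase"] == "Running" and p.get("ip")]

def duplicate_ips(pods):
    """Report any IP assigned to more than one pod record at once
    (this should never happen in a healthy cluster)."""
    counts = Counter(p["ip"] for p in pods if p.get("ip"))
    return sorted(ip for ip, n in counts.items() if n > 1)

pods = [
    {"name": "web-1", "ip": "10.0.0.7", "phase": "Running"},
    {"name": "job-1", "ip": "10.0.0.9", "phase": "Succeeded"},  # filtered out
    {"name": "web-2", "ip": "10.0.0.9", "phase": "Running"},    # reused job-1's IP
]
live = endpoint_ips(pods)   # ["10.0.0.7", "10.0.0.9"]
dups = duplicate_ips(pods)  # ["10.0.0.9"]
```

With this filtering, the completed pod's IP never reaches the endpoints list; the worrying case is the reuse showing up across two simultaneously live records.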
B: Yeah, we'll get some more information, we'll find out, but this was the most alarming bug that I ran into today.
B: Next up is the issue of kube-proxy being a very large container image, and apparently, since it downloads onto every node in the world, it's a significant fraction of our total download bandwidth from GCR, and we're trying to cost-optimize a little bit there. So there's some discussion about: can we make a much smaller base image for kube-proxy?
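A back-of-the-envelope way to see why image size matters here; the image sizes and pull counts below are made up for illustration, not real GCR figures:

```python
def monthly_egress_gb(image_mb, pulls_per_month):
    """Registry egress in GB for one image: size times pull count."""
    return image_mb * pulls_per_month / 1024

# If a hypothetical 100 MB kube-proxy image is pulled 10 million times
# a month, shrinking it to 25 MB cuts that egress by the same 75%.
big = monthly_egress_gb(100, 10_000_000)
small = monthly_egress_gb(25, 10_000_000)
savings = 1 - small / big  # 0.75
```

Because every node pulls the image, any reduction in the base image multiplies across the whole fleet.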
B: If anybody wants to take this and take a look at it, it would be appreciated. We don't need to... we can confirm the triage; it is in fact an issue. But the question then is: does anybody here want to jump on this one?
B: I looked at, like, Ben's haproxy script. It actually looks pretty straightforward if you follow the script, so it might not be that difficult to just reproduce what he's done with haproxy.
B: Okay, and we need a shell because of the wonderful nft-versus-iptables-classic thing. All right, so that's assigned to Ricardo, great. This one: Jay, you filed it. Do you want me to assign it to you? Do you want me to assign it to somebody else? What do you want to do with this?
E: ...knows I'm trying to get Shravan to... like, I think Shravan said you all are going to take over the entire rewrite of the Windows kernel-space kube-proxy, but he's out this week. But yeah, definitely, I'll talk to James about it next week at SIG Windows. Yeah, I think this is a good getting-started issue, if anybody wants it. I don't think it's hard; I think we can just remove those deprecated calls. So I'll...
B: Okay, next: agnhost something-something headers. It's assigned to SIG Net and, well, unless somebody from SIG Testing claims it... I don't think we actually... I don't think I've ever done anything in agnhost. Does anybody here feel ownership over it?
B: They'll spend more time finding where to put the line of code than writing the line of code. Okay, this is an older one that we haven't really gotten responses to, about module checks. I guess we have some logic that checks whether kernel modules are installed or not, and we could be smarter about it.
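One way to "be smarter", sketched in Python (not the actual kube-proxy logic; the file contents are passed in as strings so the check is easy to test): a module counts as present if it is either loaded, per /proc/modules, or compiled into the kernel, per modules.builtin, and checking only the former gives false negatives:

```python
def module_available(name, proc_modules, modules_builtin):
    """True if a kernel module is loaded OR built into the kernel.

    proc_modules: contents of /proc/modules (loaded modules).
    modules_builtin: contents of modules.builtin (compiled-in modules,
    listed as paths ending in .ko).
    """
    loaded = {line.split()[0]
              for line in proc_modules.splitlines() if line.strip()}
    builtin = set()
    for line in modules_builtin.splitlines():
        base = line.strip().rsplit("/", 1)[-1]
        if base.endswith(".ko"):
            builtin.add(base[:-3])
    return name in loaded or name in builtin

proc_modules = "ip_tables 32768 0 - Live 0x0000000000000000\n"
builtin_file = "kernel/net/netfilter/nf_conntrack.ko\n"
module_available("ip_tables", proc_modules, builtin_file)     # True: loaded
module_available("nf_conntrack", proc_modules, builtin_file)  # True: built in
module_available("sctp", proc_modules, builtin_file)          # False
```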
B: Maybe I'll give it... all right. So this is a real issue. We don't need to assign it to anybody, since we know it's for real, unless somebody wants to take it, in which case you are free to jump on it. Hint, hint.
B: And the last one is another old one: bringing back the SCTP conntrack discussion. It's still unresolved, so I brought it back. I don't know if we want to spend the entire meeting today talking about this again, but at some point we do need to figure out what the right answer is.
J: Yeah, I believe so, and this guy, he's an Ericsson guy, and we are using user-space SCTP, and he's arguing from that point of view.
B: Okay, so then: what are we doing today?
J: I believe it's treated as UDP already today. Okay, but then, yeah, this issue is about having the opportunity to terminate in the same way as TCP does, for SCTP, and then we shouldn't clean up the conntrack entries. But I believe the debate ended up with: that's the best way. It's not a very good way; it's the best way.
E: Thanks for mentioning it. The meeting has been off for a while; I guess the calendar was off, the doc was off, the whole thing was messed up, so it's all fixed. If folks want to come, we'll be there tomorrow. It's at 7:30 a.m. now, California time, because the only people that come are in Europe or on the East Coast, and we really wanted the India folks to come, because Vivek and Hanuman have been really helping a lot. So we're early in the morning again.
E: That's it. I also had a question about... I didn't have time to dig up the code, but we have a weird thing where, in the endpoints... in KPNG, or in kube-proxy kernel space, we have a thing where we plumb an interface that you can pass through.
E: Yeah, I haven't, I didn't. I could very easily just spend some time to write something up, if...
B: So, I mean, Jay, you remind me of something I wanted to ask, so maybe I'll hijack your topic. Dan and I (Dan more than me) have almost totally rewritten the iptables path over the last month and a half. Are you guys keeping in sync with that? Like, can we start taking steps to not have to resync every time we do this? Is it time? They're really good changes.
E: Oh, cool. But it would be a shame to copy and...
B: Also, the test cases are, like, way, way better now. Okay, yeah: or what they will be once that PR merges. We missed the boat on that one, but I guess the bigger question is: when do we start sharing code?
E: I'm already sharing your code with myself, so I think we're all... yeah. Like, I'm doing the same thing for the Windows kernel proxy, right? I'm just cutting it up and re-transmogrifying it. So whenever... I mean, how should we do it? How do we do this? I don't know how to do this organizationally, right? We're kind of in two different worlds.
B: Well, I mean, we tell people not to vendor k/k, right? So it would be bad for us to say: we made a library, you should go take it. We could either extract it into a separate repo and then vendor it back into k/k and into KPNG, or we could extract it into KPNG, make that the true home of it, and then vendor that back into k/k. Like, that feels like a commitment. Are we ready to take this relationship to the next level?
E: My goal was to get all of the tests passing, and then I figured once it's all passing, y'all will tell me whether you want to do that or not. So I feel like we're close: iptables is 100% passing; it works for IPv4, it works for IPv6. IPVS is basically the same, with the exception of a couple of conntrack tests. Windows user space works, and Windows kernel space is the last one, and I was thinking once Windows kernel space is working... and I think I'm pretty close.
B: Sure, sure, but I guess: are we all on the same page with respect to the eventual destination here? Which is: kube-proxy logic gets moved out of k/k and into KPNG, or KPNG gets moved into k/k, one or the other.
B: We should probably make that serious and start talking about, like, what is the plan in terms of moving code, and in which direction? How are we going to retain the e2e coverage, or expand the e2e coverage, and what are the criteria for graduation? So, Jay, do you want to bring that back to the KPNG group and say: hey, maybe it's time to get serious?
B: Not in a rush, yeah, and we're not in a rush, but like, 1.24 is coming to a conclusion and 1.25 will open soon. If we're going to do something like this, we'd better do it at the beginning of a cycle, not at the end. So, yeah.
E: I got you. Okay, so everybody here is kind of on board with the spirit of doing that. So, you know, because KEPs are a lot of work, I want to make sure: should we do the KEP now and sort of assume that things are where they're at, or should I maybe do another demo or something at the next SIG Network and show everybody the code and everything?
B: Do it, let's do that. Why don't you plan it, like, either two weeks from now or four weeks from now, something like that. Four weeks from now many of us will be in Spain, so not that week. Okay, so either two weeks or six weeks: come back with a demo. We can actually, like, trawl through the code as a group and decide, like: hey...
B: ...are we ready to start moving this? And what do you think? I think we are, honestly. You know, there was discussion with the SIG Arch folks, like, kubectl wants to move out of k/k, right? Or is moving, or has moved; I don't even know what the state of it is, but they had reasons for that. I don't know what all their reasons were. I have, in the back of my mind, this little voice.
J: Sorry, go ahead, Lars. Yeah, could we not forget that one? For me, at least, the important property of KPNG is that it should be available as a library for anybody to implement a proxy. That's right, that's...
B: Yes, for sure. I'm pretty sure at the end of this we have a library repo and a final-binary repo, or something like that, for sure, because we don't want to leave those folks out. But I do think that, for the foreseeable future, we are going to continue to build kube-proxy, which has a few built-in binary modes.
E: Yeah, so for what it's worth, as of now, we did that, actually; Lars was the person that asked for this. So we did that: everything in KPNG is all Go modules now. You can import it; you can import the thing that does the server as a standalone thing, and the way it works now is that each one of the backends has its own Go module, and those Go modules just import the server as, like, an external dependency. Okay, so we've...
E: I think we've done a really good job with that, and then in terms of making it... and the reason is because Lars needed that for his thing.
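The per-backend layout described above might look roughly like this go.mod for a single backend; the module paths and versions are hypothetical, not the real KPNG names:

```
// Hypothetical go.mod for one KPNG backend (paths and versions are
// illustrative only). The backend is its own Go module and pulls the
// proxy server core in as an external dependency.
module example.com/kpng-backend-iptables

go 1.18

require example.com/kpng/server v0.1.0
```

Each backend repeating this pattern is what lets third parties import the server library without vendoring the whole tree.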
E: Yeah, we'll... yeah, we'll definitely do it. We had a demo a while back, but I don't think it was recorded, because I was going to, like, show it to someone if they wanted to learn. And then I think it was during that blip when SIG videos were not being recorded, for like a month and a half or something. But it's time for another one, so yeah, definitely.
A: I was next up, but this is pretty quick. I sent an email just shortly before this: we are once again a bit late on our annual report, the second-ever annual report for SIG Network, so I've started a draft doc in Google Docs that I'm going to be trying to fill out. I would like some help on that, just to make sure that we're thorough.
A: So it's on the mailing list; it's also in the minutes document.
A: ...a good outcome from this. So, I mean, yeah: primarily there's a lot of kind of tedious stuff that I can do, that I wouldn't worry about, but mostly things like the subprojects section, where we need to talk about which subprojects are active and which have become inactive, and working groups; plus talking about all the stuff we've accomplished over the last year, in the initiatives and KEPs sections.
A: So yeah, especially if you've been involved in one of those and you can just quickly throw something in, that would help a lot. And then I'll put up a PR with the contents of this next week and invite everybody to review it, make sure we didn't miss anything, and squeeze that in.
C: All right, yeah, my turn indeed, yep. So today I was spending some time looking at issues, and then I looked at PRs, and I found this massive multi-cluster CIDR thing. To be honest, I had a mix of feelings: first "oh", and then it's like: oh my god, this is scary.
C: So please spend some time looking at it. It's a massive change, it's super massive and it needs some organization, but I think it's in a state where not everything is done, so it's a good state to go jump in and say: okay, this looks okay, this looks bad, let me change this, some comments here.
C: Some comments there. And it's big, and it's impactful as well: if this thing goes the wrong way it's impactful, because it has reliability implications and it will affect our ability to build on top of it, like stuff around changing CIDRs later on, if we ever decide to do that. That's really what I wanted to say.
B: Adding on to that: it came in very late in the last cycle, and Antonio found a lot of things that he was really not super happy with, so it missed the boat. We tried to just get the API part in, but we didn't cross the t's and dot the i's properly.
B: I wanted it alpha in this release. I mean, I don't care, right? If it takes forever, it takes forever; but we do have kind of a reputation for taking forever, and I'd like to not live up to that if we can avoid it.
C: There are some really open, unanswered questions on the KEP and on the code. I think, as usual, the code made it clear that the KEP had gaps, because that's what always happens when you write the code. So yeah, here's the thing: I'll go through it today or tomorrow, and I'll send an email out after I'm done, with the key points I've found, and I invite everybody to look at it. Really, it's a little bit like... it needs help, it needs...
C: It's going to need a lot of eyes and a lot of pushing, like, helping push it forward, to get there, if we want alpha next release. All right. And before I yield the floor: I am away for three months starting mid-June, in Australia. So yes, I'm really looking forward to that. So thank you. All right.
F: Absolutely. So I was looking, and unless I missed it, which is absolutely possible, I don't think we actually have a SIG Network in-person session at KubeCon EU. But that's fine, because we do have some SIG Network members, including the gentleman who still has his hand up right there, and probably Lockheed and myself and some other folks, who are going to be on the IPv6 panel, which should be lively.
F: So there will be at least some.
F: And if we have anything else we want to produce before KubeCon: I am not doing what Cal is doing and taking an excellent amount of time off, but I am going to be out of pocket the week before KubeCon, so I will not be at the SIG Network meeting the week before KubeCon. So if anyone is looking for any last-minute anything: find me in all of the other places, but not at this meeting the week before KubeCon.
B: I just added an item to the agenda. I will also be out the week of the meeting just before KubeCon, and so we should figure out whether we're going to have that meeting or not, or if we just want to focus on Spain. We should coordinate a get-together or something while we're there, for anybody who's going to be there.
B: While Cal brought up the topic of large reviews that need to get done: it wasn't locked to the release, so it got the back-burner treatment, but the admin network policy folks, Andrew and crew, have some PRs that they are asking for review on. So anybody who's got review bandwidth and is interested in that, please take a look, and let's not let it sit until the last month of the cycle again.
L: Yeah, we've actually been getting some really good reviews, so thanks to the folks who already have. In addition to that, something that has been helpful is that Dan started coming to our Monday afternoon meetings, and it's basically just an hour of, like, let's tackle some of the biggest issues we still have. So if folks have time, please come through to that meeting; it's almost more efficient than dealing with a bunch of comments on the PR. So yeah: thanks for the reviews, and looking forward to getting it moving forward.