From YouTube: Kubernetes SIG Network Bi-Weekly Meeting for 20220623
A: June 23rd, 2022. Yeah, 2022; I've got 2023 on my mind. Let's start it off with some triage.
C: Sorry I couldn't unmute while I was trying to find the right window to share at the same time. Again, I appreciate everybody who went through triage yesterday and this morning; I flagged a few for looking at today, if we have a few minutes. I'd also love to run through the open KEP issues, and I know we have a rather long agenda today. If we have time, I'd also love to look at the KEP dashboard, because today is KEP freeze, so it's the last chance for people to scream.
C: "Oh my god, you forgot my PR," or something, if we get there. So, let's run through triage. There's an issue filed yesterday talking about topology hints and consideration of taints. They lay out a fairly complicated situation where they expected one set of behaviors and they got a different set of behaviors.
C: And if I understand it correctly, it comes down to: Services don't know anything about taints. We use the number of cores available in a topology region to decide whether or not to do topology hints, based on whether all the clients in one region would swamp all the servers in that region; if it's massively imbalanced, then we just try not to do topology. And this person seems to be saying: well, but all my clients are going to come from this set of nodes that are tainted this particular way.
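The proportional heuristic described here can be sketched roughly as follows. This is a simplified, illustrative model of the kind of check discussed, not the actual EndpointSlice controller code; the function name and object shapes are made up for illustration.

```python
def topology_hints_feasible(cores_by_zone, num_endpoints):
    """Decide whether zone-aware hints are safe to publish.

    Approximates the heuristic described above: endpoints are
    apportioned to zones by their share of allocatable CPU (a proxy
    for where client traffic originates). If any zone would receive
    less than one endpoint, its clients would swamp the servers
    there, so hints are withheld and traffic falls back to
    cluster-wide distribution.
    """
    total_cores = sum(cores_by_zone.values())
    if total_cores == 0 or num_endpoints < len(cores_by_zone):
        return False
    for zone, cores in cores_by_zone.items():
        expected = num_endpoints * cores / total_cores
        if expected < 1:
            # massively imbalanced: don't do topology for this service
            return False
    return True
```

Note that taints are nowhere in this calculation, which is exactly the user's complaint: allocatable CPU on tainted nodes still counts toward a zone's share even though no relevant clients can run there.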
C: So you should only consider those in the Service. And the question then is: are we going to consider taints in Services? My instinct is no, but I think it's an opportunity to look at our heuristics.
C: So, Rob, I don't know if you're here... oh, you're already assigned, cool. I don't know what we want to do with this. I think it's a case of thinking about it and seeing if there's something we can do to give this user a better experience, but I'm not sure, as presented, that it's solvable the way they want it to be solved.
C: Right, yes, we'd end up keeping more metadata. Like suppose, for example, we put an "only consider nodes with taints equals...", or I guess tolerations, on the Service. Basically, yeah.
C: So I thought it was an interesting topic worth bringing up today. Next... oh, Casey's already on it. Yeah, you can just leave that with me. Is it assigned to you? Yeah, yep, cool, close. This one, they're reporting as an issue:
C: The idea is that rapidly changing pod readiness can cause a denial of service on all the other nodes. And I explained a little bit here that that's sort of the design; I mean, they recognize that. Oh, there's an update. One thing I noticed as I was looking at this: we have this bounded-frequency runner for iptables sync.
C: As far as I can tell, we're setting the default to something like one second, which is probably not cool; we should probably set it to something on the order of five or ten. It looks like in IPVS it's set to a higher default, but IPVS doesn't use iptables for most of the data plane, right?
C: Right, and maybe the answer here is actually to make that more dynamic: if the last update was very large, we increase the time, or something. What we want to avoid is continuously running iptables-restore, where as soon as one finishes we start the next one, because as they get bigger they take longer. So there's an opportunity again here; maybe we can do better. I have to read the updated comment.
C: Yeah, or, I suggested, maybe we have a rate limit by namespace. I don't really like that, but it's something to think about. I feel like I can triage this one as accepted; it is an issue.
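A minimal sketch of the kind of bounded-frequency runner being discussed, including the adaptive twist suggested above (a slow sync widens the window before the next one). This is an illustration of the idea only, not kube-proxy's actual BoundedFrequencyRunner; the class and its parameters are hypothetical.

```python
import time

class BoundedFrequencyRunner:
    """Coalesces sync requests so an expensive task (think: a full
    iptables-restore) runs at most once per interval.

    Adaptive: if the last run took a long time, the interval grows,
    so we never spend all our time inside the restore itself.
    """

    def __init__(self, fn, min_interval=5.0, backoff_factor=2.0):
        self.fn = fn
        self.min_interval = min_interval
        self.backoff_factor = backoff_factor
        self._last_run = float("-inf")
        self._interval = min_interval

    def run(self, now=None):
        """Attempt a sync; returns True if fn ran, False if coalesced.

        `now` can be injected for testing; defaults to a monotonic clock.
        """
        if now is None:
            now = time.monotonic()
        if now - self._last_run < self._interval:
            return False  # too soon since the last sync: coalesce
        t0 = time.monotonic()
        self.fn()
        elapsed = time.monotonic() - t0
        # a slow sync widens the window before the next one
        self._interval = max(self.min_interval,
                             elapsed * self.backoff_factor)
        self._last_run = now
        return True
```

With a one-second minimum interval (the default being criticized above), a flapping pod can trigger a full restore every second on every node; raising the floor, or letting it grow with restore duration, bounds that cost.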
E: Right, so I have a PR to greatly minimize our iptables restores, and I just pasted it. That's probably in your queue somewhere.
C: Probably. To be fair, the only thing I've looked at for the last two weeks is KEPs; anything that was not in the enhancements repo did not get much attention from me at all. "And to be fair, I'm kind of spamming you with PRs." Hey, I'm totally cool with that.
C: Okay, so we'll leave this one as accepted; I'll circle back and read the update and look at your PR, or your PRs, once we get through KEP freeze. This one is a user using the older echo server. I didn't reload any of these this morning, sorry.
F: It doesn't support arm64; it seems to be something really old. I can take a look into that, but I don't think it's too high a priority anyway. If you want to assign it to me: it's a really old nginx image, and if we are not using echoserver anymore, I can just go ahead and say, hey, don't use it.
H: Okay, for what it's worth, we've hit the same problem. We're using the echo server somewhere, and there's a new version of it that does work on arm64. If someone can just drop the link to this issue in the notes, I can follow up on that. That would be awesome, thank you.
C: Okay, this issue I filed as I was reviewing a KEP: I realized that the way we do the *-Class defaults, we could handle it better. This was in the context of StorageClass, but I filed the issue so that we can come back and look at it. Specifically, I think IngressClass can follow the same pattern, if somebody wants to look at it; I tagged it good-first-issue. Maybe we want to fork off a separate one for IngressClass and RuntimeClass.
C: I think those are the only three: storage, ingress, and runtime. So I thought it was worth bringing up here for context, if anybody wants to jump on an issue. There's already a PR posted, I think... yeah, there's already a PR posted against storage, so we can even follow the same pattern and just make sure that it works the way we think.
C: That's... well, I mean, that's the goal, right? And Jan pointed that out at the bottom: maybe we should actually throw a metric or something that says, hey, you have two defaults, so that people could alert on it, because it's not really a normal situation. But, you know, is it really a bad situation? Is it something we're going to, like, prevent?
C: I don't think so. I think it's just that you've sort of misconfigured the system, and it's doing something "reasonable". Right now it does something I think unreasonable, which is that you just can't create any storage volume claims, right? You can't create any Ingresses while there are two defaults, and I feel like that's pretty hostile; you should just pick one.
C: Based on what? I mean, creation time, I guess, is as good a proxy as anything. Literally, it doesn't matter, right? Anything random is acceptable, so anything better than random is also acceptable. Okay, I mean, for fun we should just literally pick a random number and pick one. And, you know, don't configure...
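The "just pick one" idea could look something like this: a hypothetical tie-breaker over default-annotated classes, using creation time as the proxy suggested above. The function and the dict shapes are illustrative, not the real admission code; only the annotation key is the actual one used for default StorageClasses.

```python
from datetime import datetime, timezone

DEFAULT_ANNOTATION = "storageclass.kubernetes.io/is-default-class"

def pick_default_class(classes):
    """Choose a single default among possibly-many annotated defaults.

    Mirrors the tie-break discussed above: rather than rejecting
    object creation while two defaults exist, deterministically pick
    one. Newest creation time wins here, but as the discussion notes,
    any stable rule better than random would do.
    """
    defaults = [c for c in classes
                if c.get("annotations", {}).get(DEFAULT_ANNOTATION) == "true"]
    if not defaults:
        return None  # no default configured: leave the field unset
    return max(defaults, key=lambda c: c["creationTimestamp"])
```

The same pattern would apply to IngressClass and RuntimeClass, each with its own default annotation.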
C: Okay, so anybody who wants to look at this, feel free to look at it; it seems like a fun way to get some PR points. The other ones are all older ones here anyway, so we can go back and revisit them; there's only three of them. I will yield the floor, and if we have time at the end, I would love to run through the list of open KEP issues in enhancements that are tagged sig/network. There's only 23 of them.
A: Cool, let's see how quick we can get through these. I think it was Bridget and James next.
G: Yes, actually. So I got some answers this morning, but I wanted to verify that I understood the answers.
G: The reporter of the issue is saying the traffic is going from the Windows node out to the load balancer and then being distributed again. And I think, if I understand correctly, if the traffic is coming from a node and going to the load balancer IP address, it should just get distributed to the nodes, or the endpoints, that are listed inside the Service.
C: That might make sense there, yeah. So we spent a good amount of time a couple of months ago debating the semantics of internal versus external traffic: is it really based on the client, or is it based on the endpoint that you're talking to? And we arrived at a model where whether you're considered internal or external traffic is based more on the endpoint you're talking to. Dan Winship,
C: Stop me if I mischaracterize. I went and pulled up the kube-proxy code that we had negotiated at the end of all this, and it specifically has a path that says:
C: If you're accessing a load balancer IP from internal, from this node or from a pod on this node, then we just send you to the cluster endpoints, because that emulates it as if you had gone out to the load balancer, the load balancer had picked a healthy node, and that node has endpoints.
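That short-circuit path can be modeled as a small decision function. The names and shapes below are purely illustrative, not the real kube-proxy rule generation, but they capture the semantics being agreed on.

```python
def route_service_traffic(dst_ip, src_is_local, lb_ips, cluster_endpoints):
    """Decide where kube-proxy-style rules would steer a packet.

    Sketch of the short-circuit described above: traffic from this
    node, or a pod on it, addressed to a load-balancer IP is sent
    straight to the cluster-wide endpoints, emulating the hop out to
    the balancer (which would pick a healthy node) and back.
    """
    if dst_ip in lb_ips and src_is_local:
        # internal client hitting the LB IP: skip the external hop
        return ("cluster-endpoints", cluster_endpoints)
    if dst_ip in lb_ips:
        # genuinely external traffic: let the real balancer choose
        return ("external-path", None)
    return ("normal-routing", None)
```

Whether Windows implements the same short-circuit in rules or actually hairpins through the balancer is, per the discussion, an implementation choice, as long as a valid endpoint is always picked.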
C: And we do that on Linux because we know that some of the cloud providers create local IP assignments for the load balancer IPs. So if you don't do it in iptables, it actually gets routed back to localhost anyway, and then it fails; actually, it does worse than fail, it does bizarre things. My point here being: I don't think comparing "literally, Linux does this many routes and Windows does that many routes, and they're not the same" is the point. No, it's much more about the semantics.
G
Okay,
I
think
I
I
think
we
are
not
getting
the
right
semantics
yet,
but
so
it
because
if
I'm
understanding
correctly,
if
it's,
if
you're
on
the
node
and
you're
sending
to
the
load
balancer,
then
it
should
just
get
sent
to
the
endpoints
directly
and
not
go
all
the
way
back
out
to
load
balancer.
It.
C: It should pick a valid endpoint; it shouldn't fail just because this node doesn't have endpoints. Whether you actually send it up and out to the load balancer and then back down, or whether you internally emulate that, is a decision that the Windows implementation can make. Okay, Dan, does that sound fair?
G: "It is up to you." Okay. And so it doesn't really affect... like, we don't need to... so this is behaving the way it is supposed to behave. If we wanted to, we could add another rule that would not go to the load balancer and would distribute it to the endpoints; at that point it's an optimization, right? Okay, and the customer seems to think that they would like that optimization. Should we move forward with that, or...?
A: Thanks, guys. Bridget and Craig, I think you're next.
B: Yes, thank you so much, and thank you for letting us kind of jump this one up; we wanted to make sure to talk about it. The TL;DR is that in yesterday's SMI community call we were talking about the intersection between, and collaboration between, and all of the congruence of, Gateway API and SMI, and Craig, who has joined us here, had a wonderful idea, which is: hey, there is a structure for this. Just like WG Policy, this could be WG Mesh under SIG Network. And I thought, Craig,
B: you are right, we should go talk to SIG Network about that. And Keith just joined, I see, and I think Mike is on as well, and I just dropped a link in the chat. We just started drafting, you know, going through the template of what one is supposed to do to set up a working group, but I just wanted to kind of run it by SIG Network and see: does that sound relevant to people's interests? Craig, do you want to add any context or thoughts there?
J: Yeah, just for anyone who isn't familiar with the context from Bridget's intro already: the north-south use case is very well covered by the Gateway API, and now we wish to consider the east-west. So there is obvious convergence with all the implementing vendors on Gateway API for north-south, and now we're looking to establish the same for east-west; and so the SMI project has a number of the vendors looking at the specifics of making that move.
J: Istio's already got an implementation, but we're looking to bring that in line with whatever the community decides, and it feels like SIG Network would be the absolute best place to drive that forward.
J: So some of the specifics there look like they're weighing up... it's sort of an extension of the Gateway API. That's why, effectively, we figured a working group might be the right way to do it, because it's sort of a time-bound thing, effectively, to add more support to the Gateway API for a different use case. Once Gateway's gone beta for the ingress kind of use case, then hopefully there'll be a little bit more room on their side too.
I: So, Jeff Apple here; I haven't joined these meetings before, but I've been involved in the Gateway API work for the last year-plus, and I'm curious: why would you propose a separate working group, as opposed to just doing this within the existing framework of the Gateway API group?
B: Yeah, absolutely. Just because there's a bunch of interests in the service mesh world that don't fit neatly inside Gateway as it stands right now, but working groups are a time-bound way to bring everything together.
I: It sounds more like you're trying to hijack the Gateway API than to help expand it, to be honest. Maybe it's... I'm not trying to be harsh.
I: Well, you know, people have been working on the Gateway API for three years now, I think. There's an established set of parties that have driven that. I'm not saying that that group is necessarily...
I: opposed to what you're proposing, but I think it would be appropriate to bring it to that group and say: we're interested in expanding it, we'd like to be part of this and join as regular participants in that process, as opposed to doing it kind of from the outside, in a sense, and then coming in with, like, a fait accompli.
H: So, I think that's reasonable to me. I would be in favor of bringing this to the Gateway API working group, at the Gateway (whatever it is) meeting, and getting some kind of affirmative consent that the folks there are generally supportive of this idea before moving forward with the creation of a working group.
C: Yeah, just to be clear, I didn't interpret this as an attempt to disempower Gateway; more like a focus group: hey, if we wanted to tackle this use case, what are the sorts of particular requirements? I'm not part of this discussion at all, so this is just how I interpreted this conversation. And ultimately, the folks who are building Gateway own Gateway, right? I'm not going to force anything on them, and neither would this working group, yeah.
J: Jeff, on behalf of that group: would it be distracting if, all of a sudden, a bunch of new people turned up and were talking about a different use case while you're trying to nail down the first one? From my perspective, it seems simpler than saying, all right, we're going to take this group of existing people and put them on here.
I: Do we want to have kind of a subgroup that is focused on that, and does that, and kind of keeps us informed? Or do we want to make it just a standard part of our work stream as we're going forward, basically just another aspect of it that we take in the direction of... I mean, you know...
I: That would be my approach: instead of deciding how you're going to do it first and then going to the group, go to the group first and ask what makes the most sense for how we do this. And, I mean, Shane, you've been involved in it a long time too; your views may be different than mine.
L: The reason that this started outside the Gateway API to begin with is that we didn't want to distract, especially, you know, from some of the great work that that group is doing right now. We wanted to get people together who said, hey, couldn't this be used for this use case, and work alongside the Gateway API. And, going back to Mike's point earlier, I'm 100% in favor of polling: asking the greater Gateway API group if there's anybody who's opposed to this model, or if they have any other model to propose, and discussing it.
J: Yeah, just to round that point out quickly: this meeting was before the Gateway API meeting, so, in fairness, we could have done this in the other direction; it's more that this was the one that was next on the calendar. And if it is going to be a working group within the charter of the Kubernetes project, those are sponsored by SIGs, and there is already, I think, a working group for the Gateway API under the SIG, and so effectively this is the decision to form another one.
A: Sounds good. Ricardo, you had the next three items.
K: As some of you are probably aware, we've had several large CVEs this year, and we've noticed that we need to take some time, just like the Kubernetes project, and do some stabilization and some security work. So we are going to propose to the community, later today after this discussion, that we're going to stop accepting feature requests while putting together a project on our stabilization goals. We've got some of them outlined in the project, and we just wanted to bring this up to the group.
K: I know we've done it individually, so we just wanted to discuss it with the group as a whole.
F: We've pasted in the agenda a link to the email that we want to send; probably it's going to go out today.
F: We discussed it just before this meeting, during the ingress-nginx project meeting, and we've been discussing these things since KubeCon, I guess. So if you can just take a look into the document and also provide us some feedback on that, because our goal is actually to have some time to clean up the code and try to stabilize, instead of just continuing to accept features.
C: I think you are not alone in this problem; the project writ large might benefit from the same approach for a little while. You have my thumbs-up, I gave you that at KubeCon, and it seems well within your purview to hit the pause button.
F: Okay, so the second one: during KubeCon we discussed starting to work on moving kube-proxy to distroless (actually a bit before that). I got pinged about it because I did some other work on Docker image optimizations, and there are two goals here. From the SIG K8s Infra side,
F: the idea is to reduce the image size of kube-proxy because of costs. But also, Rob Scott pinged me two weeks ago, because I think there is some interest inside Google and other companies in doing that, because of the number of CVEs that we have in kube-proxy as a result of it being based on Debian and so on. So I have pasted the issue on the agenda as well.
F: Dan Winship has made a lot of reviews on that, and I appreciate that. So I basically pinged SIG Release and other folks just to make sure that we could take the distroless image and make it run in kind. Right now the image that's published is running fine; just the dependency-pinning check is not working, but everything else is running fine.
F: So I would like to ask you folks what you all expect as the next step, because I've not had much time to work on it and move forward with it. Right now that image is working. The iptables wrapper is still a shell script; we didn't move it to a Go program yet. Dan is making some reviews on that, and I need to follow up on that review.
F: But my goal was to measure it with the current iptables-wrapper shell script and see how it goes, whether it works, and then maybe in the next cycle we move the iptables shell script to a Go program. Right now, though, I would like to ask for some help with the follow-up on that, with SIG Release and SIG Testing and even SIG Network, so they can look into it and we can move forward with it.
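For context, the wrapper's core job is choosing which iptables backend a node is actually using. A hedged sketch of just that decision, modeled as a pure function over the rule counts the real wrapper script obtains by shelling out to `iptables-legacy-save` and `iptables-nft-save`; the function name and the fallback default here are assumptions for illustration, not the wrapper's exact behavior.

```python
def choose_iptables_mode(legacy_kube_rules, nft_kube_rules,
                         distro_default="nft"):
    """Pick the iptables backend a node is actually using.

    The wrapper counts Kubernetes-owned rules in each backend and
    points the plain `iptables` name at whichever backend is in use,
    so kube-proxy writes rules where the kernel will enforce them.
    """
    if legacy_kube_rules == 0 and nft_kube_rules == 0:
        # nothing written yet: fall back to the assumed distro default
        return distro_default
    return "legacy" if legacy_kube_rules >= nft_kube_rules else "nft"
```

Rewriting this logic in Go would remove the last shell dependency from a distroless kube-proxy image, which is the motivation discussed above.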
F: Maybe before version 1.25. So SIG K8s Infra may get some cost optimizations and so on. The item is in the agenda, but I want to hear from you what you expect from me as the next steps on that.
C: I mean, the number one thing I'm looking for is confidence that it works, that there's not some weirdo corner case. I don't know that I believe our e2es cover every weirdo corner case of executing all the various tools and so on.
C: Having tried to remove stuff from other images and then found corner cases later, I'm afraid of this effort, right? Like the need for a shell: I found out that half of the git commands are actually shell scripts. So if you come in with a plan, like "this is why we think we're confident", then that's what I'm looking for.
F: This is why I just tried to run it on kind. But I agree with you; Dan asked me, hey, have you tested iptables nft versus legacy? And I don't even think that we can test that scenario in our end-to-end tests, right? We don't have a test that has, let's say, an old Debian node versus a new Debian node and says, hey, this works in both scenarios, with IPVS and iptables and whatever.
F: So I can't say that I have full confidence on that. From my side, I think it would be great to have the iptables wrapper as a Go program as well, just removing all of the shell stuff and other things. But still, Dan raised something on my PR, which is: hey, you are just copying the binaries and the libraries, but are we missing some configuration or something like that? And then I keep changing it:
F: okay, I'm just going to copy everything but the man pages and documentation, because that's going to reduce the size of the image. But still, yeah, we may be missing something there, right? So I actually need some help to try to figure that out.
D: Yeah, the scripts work and the approach is clear, because these are the ones that we use in kind, and the technique is the one that Benjamin implemented in kind. So I will work with you. What we need is confidence: some way of running this for a long time in different scenarios and gaining confidence. But we definitely have to move to this thing.
E: I feel reasonably confident that kube-proxy does not invoke random binaries that we don't know about, because, you know, for all the ones that we do know about, we occasionally run into problems where people turned out to not have them installed, or had a broken one installed. So I feel like if there were other ones, we would have noticed.
Okay,
but
mine
is
probably
gonna
take
some
time,
so
the
third
one
I
was
discussing
yesterday
or
tuesday
with
team
about
network
policy
status.
F: He was on his KEP flow, approving or denying everything, and the port-range one got approved, yay. But then we've been discussing NetworkPolicy status: the KEP was implemented, but no CNI actually built anything on it, and I didn't dig into the Calico code, or other providers', to see how I could help them, because I didn't get time for that. And my point on that was: when we did the KEP review for that one, I remember (I guess it was Casey) somebody raised something like, hey,
F: why don't we just do this using a validating webhook, instead of adding status for, like, port range or something like that? And I actually raised that with Tim: probably we are not gaining traction on that KEP because each CNI, each network policy provider, actually wants to do their own approach on that. And then we started some more philosophical discussion about what we should do next, right? So what should we actually do?
F: Should we just accept a validating webhook, which is not everyone's favorite solution, or should we just say that all of the objects have status? So, Tim, do you want to bring some context on that? Because you even said something about SIG Architecture and creating some precedent on that.
C: Yeah, I mean, we have a situation that's relatively rare in the project, which is an API that is implemented exclusively by plugins, where we've now added an optional feature that users might use and that won't work, right? So, specifically, if I'm a user and I specify an endPort and it doesn't work, I'm going to be surprised, right? At least we've designed it so that it'll fail closed, but I'll be surprised and I'll sort of be angry: why didn't you tell me? And there's really only two ways to do that.
C: It's synchronous or asynchronous. And, like, I hate webhooks; I hate asking people to go write webhooks; I think they're really clunky. But it is the thing we have, right? So one path is: if you're a CNI provider, or a network policy provider, and you don't support endPort, well, then you get to go run a webhook that captures all NetworkPolicies and looks at them to say:
C: do you specify endPort? If you do, fail the transaction with an error that says "I don't support endPort". And that feels like a lot of work; will policy providers actually do this? I don't know. The other path is to do it asynchronously and let you create the resource, but then provide a status that says: hey, just so you know, I only partially implemented this, or I didn't implement it at all, and here's why.
C: And again, it seems like a bunch of work that policy providers have to do, but they're already watching NetworkPolicy resources, so they're already getting those events, so they already have the right place for it; adding a status block to their watch loop seems like it would be less work to me, right now.
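The two paths can be contrasted in a short sketch. These are illustrative shapes only, assumed for the example; neither the function names nor the condition fields are the real admission webhook API or the NetworkPolicyStatus design, though the condition dict loosely follows the usual Kubernetes condition convention.

```python
def validate_policy_sync(policy, supports_end_port):
    """Synchronous path: what a provider's validating webhook would do.

    Returns (allowed, message). Rejects at creation time if the
    policy uses a field this provider does not implement.
    Simplified: only ingress rules are scanned here.
    """
    for rule in policy.get("spec", {}).get("ingress", []):
        for port in rule.get("ports", []):
            if "endPort" in port and not supports_end_port:
                return False, "this provider does not support endPort"
    return True, ""

def policy_status_async(policy, supports_end_port):
    """Asynchronous path: a condition the provider's existing watch
    loop could attach to .status instead of rejecting the object."""
    ok, msg = validate_policy_sync(policy, supports_end_port)
    return {
        "type": "Accepted",
        "status": "True" if ok else "False",
        "reason": "Implemented" if ok else "PartiallyImplemented",
        "message": msg,
    }
```

The synchronous path fails closed and loudly at create time; the asynchronous path lets the object exist but surfaces the gap where anyone inspecting the resource can see it.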
C: What we are seeing is: well, nobody's taking us up on that. It's beta; we've got a bit of a chicken-and-egg thing. We know that everybody is busy, right, so nobody is going to do work that isn't really important, and if nobody's using this, then why is it important? It's not a GA feature, so a lot of customers aren't using the field; so, ergo, is it important? And so the question, I guess, comes back to...
A: From the Calico side, it's on our radar to implement, but it's pretty low down the priority list; we'd rather spend that time implementing endPort, for example, or whatever comes out next. I mean, there's maybe some value in saying, like, Calico 3.20 doesn't support whatever new field comes out in the new Kubernetes, but then it doesn't just fit in with the existing code, because you need to use, like, the raw client to get and learn about fields
A: that you don't know about in the Go client you've built in, right? So it's not impossible, but a little bit icky. I don't think that changes webhook-versus-status for us in terms of implementing.
E: So... but I feel like, yeah, the admission controller is a better feature than the status, although I worry about people trying to install random third-party software that uses some feature that they're... I mean, I guess, yeah, maybe you want it to fail in that case. Yeah, I don't know, but I would worry about it, like, breaking installs of third-party software that is trying to create NetworkPolicies.
E: No, no, I'm saying: we add the webhook and say, oh, you know, if you use endPort, then we reject your NetworkPolicy; and then somebody tries to install an operator that creates a NetworkPolicy with an endPort, and now it won't work. And I guess, I mean, arguably it wouldn't have worked even if we hadn't blocked it. So probably we do want
C: to block it. It's the age-old question of fail sync or fail async, yeah.
M: I know when I was going to look at implementing this in OVN-Kubernetes, and I talked to some of the folks over there, the first question they had was: well, does the kube upstream network policy test suite use that now? Like, can we make the tests not run a test for a network policy unless the status says accepted? And I was like, no, that's not what it's for. But I mean, that was the first thing they said, and after I said no, there was a lot less interest. Kind of something to think about.
C
But
this
being
a
strictly
optional
feature
right,
like
we're
not
going
to
make
it
part
of
conformance
at
least
not
in
the
foreseeable,
and
so
it's
not
going
to
be
in
the
upstream
tests
right
I
mean
I
guess
it
could
be
it's
just
in
the
optional
section,
so
it's
not
going
to
fail
either
way.
C: So I'm not hearing a strong feeling, and my default opinion at this point is: if we don't know that we need an API, we shouldn't put the API in. So we should maybe sit on this another cycle and, you know, maybe soft-ping some folks from other implementations and see if they have a strong feeling about it. Right? Like, if people are already implementing webhooks, then this is easy for them to add; if it's their first time adding a webhook, it's kind of a pain in the butt.
C: Yeah, okay, all right then. Let's take the decision at this point that we're not going to push that KEP forward. The endPort is going GA, so we will get more customer interest in it now that it's GA; it'll hit the release notes. Ricardo, make sure that when you send that PR you write a nice release note for it, and then we can deal with the aftermath of people starting to use it and it not working. Okay.
M: Just two really quick things. We're kind of getting to the end stages of the AdminNetworkPolicy API implementation, so this is like a final poke; Dan's given us a plus-one. So I'm looking for folks who have been involved in the past, maybe even if it was a while ago, to just go and give it a look; there's a couple of open questions. I know we've all been busy, but I would like to.
C: Thanks. I have your Slack ping marked as unread so I don't lose it, but yeah, the last two weeks have been just all KEPs; I've been completely absent from all of my other job responsibilities. So next week I'm going to shift my attention back to my day job, and then I'm going to shift back to all the PRs that are open against me.
C: Keep the pings; the pings are useful, because they help me make sure my queue is current. As things fall off the bottom of my mailbox, they're clearly less important, right?
M: Yeah, no worries, no worries. And then the next thing, really short: the network policy API subgroup meeting has fallen, is falling, on holidays two times in a row. We're bi-weekly; the last one was Juneteenth and the next one is July Fourth. So we're going to have an impromptu meeting this upcoming Monday, June 27th, I believe. We're going to be talking about multi-cluster network policy and trying to move that conversation out of here into a smaller group, just to start.
M: So please come to that if you're interested; same time, same meeting link. Just wanted to give a heads-up. That's all I have. Thank you, guys; thanks, everyone.
A: Thank you, and that brings us right up to time. Looks like we didn't make the KEP review.
C: Okay, let me make a plea to anybody who is interested in what this SIG is doing, KEP-wise: go to kubernetes/enhancements, go to issues, and look for the label sig/network. We have 23 open KEPs. These are the KEPs that are in progress; this doesn't include things that are GA but still have gates in the code base. Some of these are stale and are just going to age out; some of them are probably done.
C: I don't think... well, I didn't find any that are actually done, and some of them could use a little bit of an infusion of energy. So do take a look and help us drive them to completion. I'm really anxious about opening more and more and more KEPs when we already have 23 open. And we're not the worst SIG here, but it is a lot to wrap our heads around: the intersection of all of these non-GA features, especially around all the kube-proxy stuff.
B: Tim, do we need to block out time, whether in this meeting or elsewhere, to go through and do some KEP decisions, triage, et cetera?
C: We should. I should have put it on the agenda, but I thought there were so many things for today that I didn't; and because the KEP freeze was supposed to be last week and we got an extra grace period, I didn't figure it was that urgent for this week. But we really should go through it for next time, so I'll put it on the agenda for next time.