From YouTube: Kubernetes SIG Network Bi-Weekly Meeting for 20211111
A
And at the top of the agenda we have, let's see... I think we decided to delay the issue triage to the end, but Antonio wanted to talk about some important deadlines we have coming up.
B
Right, you added the links to the PR, but tomorrow is the last day for the cherry picks, so if you have something that needs to be reported, don't forget to cherry-pick it and ping approvers, so we can have it in. The other one is the deadline for code freeze, which is next Tuesday. So I don't know, I think that...
A
Me too? Oh, okay. I mean, yeah, I just put this list together, and they were ones that I saw while skimming that seemed to have recent activity and also looked like they were at some level of consensus, or needed something to push them a little bit further.
A
Either that, or if we have some better way of tracking PRs that we think need attention to get in by a certain date. It seems like a reasonable thing for a project board. It also seems a little late to put together a project board. Yes, it does. I was like...
A
But then I realized, no, it's only for KEPs. I think I've got them all loaded, so let me do the screen share. And for the rest of the group: if anybody on this call has PRs that they think should get attention, like Tim said, please ping people. You might as well put them in this list too, and then, you know, at least they're in a couple of places that we can look for them.
C
This is part of the IP mode load balancer stuff that was sort of abandoned by someone else. I thought I would take over their PR to see how complicated it was. The rest of it is not going to make it to 1.23, so I don't really care if this one makes 1.23 or not; it's dead code cleanup. Oh, I'm always a big fan of dead code cleanup. Is somebody assigned to this? That's not me. It's Anthony!
E
Patch hostnet pod status for changing notes. That's not important, and it's waiting on a KEP.
A
Okay, I'll take that one out of the list. Then we should probably throw milestones here.
G
1.23. That was this, and...
C
This is the one that was sort of abandoned, so this is not gonna make it. I'm not gonna have time to finish the whole thing, so... and...
C
I need to clarify a few things. So, okay, this one Andrew needs to come back and revisit it, if he does. This is not a huge deal; I'm not gonna label it.
C
It's good that we have a bunch that are not a huge deal. That means there's nothing that's super on fire. HNS load balancers or health checks?
C
Yeah, for that one in particular, if we don't get it in, it's not a huge deal; I wasn't going to do it anyway. He said, oh, let's just do it, and sent the PR. All right, let's check... I don't know this one. This...
K
I am following this one. This is something that me and Dan Winship have been discussing: moving some stuff outside of k/k to component-helpers and other places. I can see him laughing, because we have now two tools that work with, you know, utils network, utils network... but anyway, sorry, Kyle.
F
Yeah, no, I just discovered, particularly in controller utilities, that there is another tool somewhere else doing the same thing. Yes, yes, I remember that, yeah.
K
Yeah, but anyway, this is a cleanup item, so we want to move things outside of k/k; that will help to vendor the proxy stuff as well.
A
All right then, so I pasted a link at the top. That is a search for PRs that are labeled sig/network and milestone 1.23. I guess let's use that as our list of important PRs for the next week, and I'll remove the other links.
A
I love this, thank you. All right, last chance for people to call out PRs that were not on that list that they would like attention on. They can ping us on Slack too. Like, yes, last call.
J
I just noticed the next meeting is on U.S. Thanksgiving, so maybe people don't want to have a meeting, but if there are people who are going to have one, then cool. But I imagine a bunch of us in the U.S. will probably not be working that day, so I was looking into whether or not we should officially cancel it, and I was also looking at Thursday, December 23rd.
C
The rest of the world gets the benefit of our American holiday, and we all celebrate the end of the year.
C
Thank you, Casey. I'm up. So I wanted to just say briefly how important it is that everybody feels like they can voice their dissent. I've heard, sort of through the grapevine, that people don't always feel like they can weigh in, or they don't feel like they've got the clout or the importance, that they're not important enough to have an opinion, or they don't feel comfortable disagreeing. And I want to be really clear, on the record.
C
The only way this whole project operates is if people who disagree, disagree. If you see people making bad decisions, and I don't care if that's Kyle or Dan or me or anybody else making bad decisions (sorry, Kyle! I had to put you at the front of the list), whoever is here, if they're making a call that you disagree with, I really encourage you to speak up.
C
If you don't feel comfortable speaking up on a recorded session, then, you know, ping one of us on Slack or send us an email or something, please, because it's really important that we hear it. And that goes for, you know, people who are vendors, who are implementing these APIs: if we're doing something that you can't implement, or that presents a real problem, I really, really need to know about it. I really want to hear about it, and I want you to scream and jump up and down.
C
I'm not saying I will acquiesce to everybody's screams, but I want to hear them. So please don't be shy, and if you feel like you're going to be shy and you want to anonymize it, feel free to send it to me, or to one of us, or Bridget or somebody, and we will anonymize it appropriately for you and get the message across.
A
Yeah, thanks, Tim. You know it's not just him; I'm sure everybody feels that way, not to put words in everybody's mouth. But yeah, like we talked about last time, this is not a technology priesthood, and everybody's input is very worthwhile and we would love to hear it.
C
I wanted to touch on a fun little topic. As I was looking at that IP mode PR, I noticed a pattern, and when I went to go and expand on it, I realized how nefarious this sort of pattern is. So, once upon a time, there was service status load balancer ingress, which is a struct.
C
You know, several levels deep, and this struct has a field for an IP and a field for a hostname, and it basically says one of these will be set if you have a service load balancer. And then we had this KEP that said, hey, let's add multi-protocol load balancers, and in order to do that we said, well, we need a way to recognize an error. So let's add a field to that structure, which is an optional list of ports, where each port can describe an error.
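[For reference, the struct being described is roughly the following; this is a paraphrase of the types in k8s.io/api/core/v1, with JSON tags and most doc comments trimmed, not an exact copy.]

```go
// Paraphrased from k8s.io/api/core/v1; field tags elided.
package v1

type LoadBalancerIngress struct {
	// One of IP or Hostname is normally set for a service
	// load balancer.
	IP       string
	Hostname string
	// Ports was added for the multi-protocol load balancer KEP:
	// an optional list where each entry can carry an error for
	// that port. It is feature-gated for Service, but, as
	// discussed above, the field also became visible when the
	// same struct is reached via Ingress status.
	Ports []PortStatus
}

type PortStatus struct {
	Port     int32
	Protocol Protocol
	// Error, when set, describes why this port is not usable.
	Error *string
}

type Protocol string
```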
C
What we missed was that this same struct was used by Ingress to represent load balancer endpoints for Ingress. And so, while the PR added this field and properly gated it, so that it's behind an alpha gate and nobody could accidentally use it, that only applies when they come in through Service; when they come in through Ingress, this new field just exists. We added it accidentally to our API, and as I went to de-duplicate that structure, I realized:
C
Oh, I should just take out the ports field. Which, of course, all of our wonderful machinery (thanks, Jordan) blew up on me about, and said: this is an incompatible API change, you can't do this, dummy. And so I thought it was interesting enough to bring up here.
C
We have a choice. We can either eat it, and just live with this, and say, whoops, we accidentally expanded the Ingress API, didn't really mean to. It's not an unreasonable little change, but nobody implements it as far as I know, and so we could just go out and tell people: hey, you should start implementing this, you can put information about your ingress ports there. Or we could rip it out and say we never meant for this to be there anyway.
C
And yes, it's strictly speaking an API change, but it's a bug. Or we could undertake a bunch of work to seek out all the implementers and ask them: hey, is anybody actually filling this field in? Because if you're filling it in, then maybe we'll keep it, but if nobody's filling it in, then maybe we should just get rid of it. What I don't want to do is have implementations suddenly start exploding because they're specifying a field that we don't understand, or something, you know. So.
C
This is the fun of these sorts of API changes, and so I thought it was fun to bring it up here and see if anybody had any obvious thoughts on it, or, you know, just for the record.
H
Yeah, I think the safest and, you know, least disruptive thing to do is leave it in and call it a feature. Gateway API, like you pointed out already, has a concept of ports in status. It seems fair to say that this is really just a field in status, a new field in status that people can implement if they want to. It seems pretty low risk; it's something that is on its way to being a thing in Service.
H
You combine those things, and it seems like it's not worth the effort to try and remove it. We should just embrace it. That's my perspective, anyway.
C
Well, I mean, actually, I think if we ripped it out, it would just deserialize and disappear. It would be saved in their data, but when we deserialize internally, when we convert between versions, we would just lose those fields.
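[A minimal stand-in for the behavior described here, using plain encoding/json rather than the real apimachinery conversion: decoding stored data into a struct that no longer declares a field silently drops it, so the field disappears on the next write.]

```go
// Toy illustration only; not the real Kubernetes conversion code.
package main

import (
	"encoding/json"
	"fmt"
)

// Status type after a hypothetical revert: no Ports field anymore.
type LoadBalancerIngress struct {
	IP       string `json:"ip,omitempty"`
	Hostname string `json:"hostname,omitempty"`
}

func main() {
	// Data a client had already stored with the ports field set.
	stored := []byte(`{"ip":"10.0.0.1","ports":[{"port":80,"protocol":"TCP"}]}`)

	var ing LoadBalancerIngress
	_ = json.Unmarshal(stored, &ing) // "ports" has nowhere to land

	out, _ := json.Marshal(ing)
	fmt.Println(string(out)) // {"ip":"10.0.0.1"} — ports silently gone
}
```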
A
I was gonna vote with Rob as well, but I was holding my tongue to let other people weigh in.
C
We don't need to make it a vote here. I have a PR open; that was one of the ones we skimmed through really quickly. It isn't super critical for 1.23, but it does keep the ports field. I forget if I actually copied the validation over or not.
D
Right, all right, so here's where we're at. I'll start with a little demo. So Tim asked me, well, mentioned to us, that we should sort of show people what we've been up to, so I'll start this off, and then I can show you what it does later, just so you can use this stuff now. So, because everybody...
D
...on what kpng is. Okay, okay, so kpng was... it was a thing. Okay, so I'll give the whole history. A long time ago, Antonio mentioned in sig-network, he's like, I don't know who owns IPVS, what are we gonna do? That was... I looked up all the dates, so I think it was around March the 18th he said this. And it turns out, like, in... I think it's January...
D
I think this was 2020.
D
Mikhail actually wrote the first implementation of kpng, right. And kpng is like a modular kube-proxy, so it kind of decouples the whole structure of the code base, so that your backends are separate from the thing that talks to Kubernetes. So, for folks that haven't seen it before, it looks... I know there's a lot of junk on here, but, like, the idea here is that you've got this stuff that can pull in all your networking primitives, your API primitives, from anywhere, a file, whatever you want: a generic model of a network proxy, a network topology, right, that needs to be load-balanced. And then kpng is just a thing that has all these separate backends, and you can run the backends on their own or not. So it solves that sort of classic scalability problem of: I have a thousand watches, because I have a thousand kube-proxies, and I don't know what to do, it's bringing my API server down.
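[A hypothetical sketch of the decoupling described here; the names are illustrative and not kpng's actual API. One frontend maintains a generic model from a single watch (or a file), and each dataplane backend only consumes that model.]

```go
// Hypothetical sketch of the split described above; illustrative
// names only, not kpng's real API.
package proxy

// ServiceEndpoints is a source-agnostic view of one Service and the
// endpoints behind it, filled in by whatever frontend is in use
// (API-server watch, file, gRPC stream, ...).
type ServiceEndpoints struct {
	Namespace, Name string
	ClusterIPs      []string
	Ports           []PortMapping
	EndpointIPs     []string
}

type PortMapping struct {
	Protocol string
	Port     int32
	NodePort int32
}

// Backend is implemented once per dataplane (iptables, ipvs, nft,
// userspace, Windows, ...). The frontend hands it the desired state;
// the backend reconciles the node to match. One central watcher can
// fan the state out to many nodes, instead of every node's
// kube-proxy opening its own watches against the API server.
type Backend interface {
	Sync(state []ServiceEndpoints) error
}
```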
D
Yeah, exactly right. So it solves the code-level problem, and to me that's kind of the more interesting thing, because, you know, I think that kube-proxy is really hard to understand. And, going to what Kyle said last week and what Tim's kind of said this week, for people to be able to help more, making the code more understandable is kind of a cool path forward.
D
For those of us that don't know the kube-proxy code base inside out. So we've been working... so we started this kube-proxy working group, and the point of it was to be like, well, what do we do about all these things we want to fix? But it turned out that the first time we had a meeting, all anybody talked about was kpng. So, like...
D
But then a lot of those people didn't come to the next meeting, and then we started looking into the issues and the KEPs and everything, and some of them are over here, if you all are interested. But, like, you know: you can't hot-reload the proxy, or being able to pass a DSR argument to a Linux iptables kube-proxy.
D
I mean, the amount of weird things that we do right now, right, that we don't know how to fix, because there are too many things in our way, because of the overall sort of monolithic situation. You know, we've all talked about this. So anyways, we weren't really sure what to do, so we're like, well, wait, we could do whatever we want if we just work on kpng, because there's nobody reviewing it. So let's just go work on it.
D
So we're like, all right, let's see how much of this thing we can get working. And we got a lot of it working, right. So the first thing we did was we got kpng passing conformance on nft, right, which was the initial implementation that Mikhail did. Then, after that, Ricardo did this cool IPVS sort of thing: he, like, hacked up the implementation over the weekend, and then we merged that, and then we rewrote it.
D
Like, you know, this guy from Cisco came along and he rewrote it, and then he asked Ricardo not to kill him, because he rewrote Ricardo's IPVS implementation, and Ricardo was totally fine with that. So, and then, after that, we implemented iptables. The way we did iptables, because it's so complicated and so stable, we didn't really want to mess with it too much.
D
So me and this guy kind of hacked on that together, and we ported it over by literally taking all the stuff in-tree and just implementing it underneath this kpng API. And I think it's a really good example of how you can hide complexity, because we were able to just go, like, create a sync file, and you're able to sort of implement this declarative API. And I know some of you all know about the ServiceChanges and EndpointChanges data structures.
D
It's this thing deep inside of kube-proxy that sort of caches everything and has this internal model. So what we do is we just sort of facade that internal model through here, so that we didn't change iptables that much. But all the other ones are different implementations that are more native to kpng. Anyway, so, yeah, got that working. And then I promised Antonio...
D
We were hanging out after a TGIK episode, with me and Casey, where Casey came to it and we talked about Calico, and then we promised Antonio we'd have CI one day. So we built CI; so now we have CI, and we actually kind of fast-tracked that this week, because you all wanted to hear about it, so we fast-tracked getting it all polished, so that it looks okay. Anyway, so we've got CI now. I just did a first example of it, and so each one of these backends...
D
The cool thing about this is each one of the backends has its own CI, has its own GitHub Action. So, like, these backends can be vendored; they're totally modular, they're totally separated out from the rest of everything; they have their own Go modules, even, but they can still be compiled together, right. So you could still, you know, vendor them all in and make one big monolithic container, which is what we do in these CI jobs.
D
And if you wanted to try this out, you could run this kpng local-up script, and I just ran that for you, and you can see we've got kpng running here, and everything else; CoreDNS is happy, and we've got a regular CNI running, we've got Calico running in this case. And you could also, if you wanted to, clone it down; you could kind delete cluster...
D
What's the... kind, just delete it... you can delete this, and you can just run the CI if you wanted. If you wanted to, like, you could just run this, and this will do, like, a full end-to-end... well, it doesn't run the full conformance suite.
D
It just runs a couple of the tests, but we'll expand it to run the whole conformance suite and parallelize it. So, like, we've got CI now, and that's where we're at. And also, Lars did this really cool blog post, right, where he went off and showed people how to vendor kpng to make your own proxy, and then Dan Winship actually wrote a README article about how to build a proxy, and he committed that. So, yeah.
D
And I think we uploaded this to, like, Google Cloud somewhere; we put the initial container he made on a GCR bucket, because it was real hard to vendor back then. But we fixed all the vendoring a couple of weeks ago, me and Mikhail, and now it's really easy to vendor it again. Everything has its own Go module, so you can build your own proxies off of it. And what else have we got... so I kind of walked you through this diagram, people...
D
Anyone who wants to can interrupt or ask questions, whatever. Like, what's next? Windows: Amim is kind of hacking around on Windows. Rajas is working on porting the userspace implementations over, just so we have a userspace implementation, kind of just for the hell of it, for completeness, but there are some reasons why that's kind of useful.
D
I know... I think OpenShift used it at some point. On Windows, if you want to do something other than using HNS, you have to do some userspace proxying, because it messes with the kernel, and you can't extend the kernel using the kernel-space Windows proxy. And we don't have SNAT and affinity working, and I know Dan's doing a bunch of stuff to fix that, and Antonio and everybody. So we were kind of spying on that PR, and we're like, oh, we gotta copy that over and get it working.
D
So we've got to figure that out, and we're doing our best to track what's in-tree. And I think there were a couple of conformance tests... we haven't confirmed conformance on all the backends, right. So we think we got it all working on nft, but nft doesn't work well in kind because of kernel stuff; we want to make sure it passes on IPVS and iptables and everything else, yeah.
D
So this is all the scripts that I just showed you, and you can see it's running right now, and it's now running the e2es here. So we've got end-to-end testing in place, and yeah. So this is about how far... like, we started here in 2020; this is Mikhail's initial code drop. And I think the thing that's kind of cool is we've built a community around this problem, which has historically been something that only very few people were able to work on. But we still don't really know what we're doing, so, like...
D
If anybody wants to help us who's better at this stuff than we are, we might be able to make a lot more progress. But I also understand that everybody's busy, and people are keeping the lights on all over the place. So this is just... I'm not making a sales pitch; we're just going to keep hacking on it, and, if and when it looks like it's useful to y'all...
C
So what I'd love to see is, you know, maybe in the course of the run-up to 1.24, we figure out: what do we want to do with this? What's the goal? Like, is it to move it in-tree and replace the existing kube-proxy's setup code with this? Or do we deprecate the in-tree version and use this instead? What are the acceptance criteria?
C
How do we want to manage it in the long term? And, you know, we don't yet really have an existence proof of out-of-tree things being versioned in sync with the main release, but, you know, maybe this is a vehicle for that. So anyway, I'd like to see... 1.23 is the only thing on my brain right now, but after 1.23, what do we want to do with this? What's the next step? How do we make this real?
F
This is something, like... I'm not even thinking about that anymore, right? I don't think the question is "if", right; the problem is when and how. That's the thing. Like, one of the things you said is we don't have criteria: what's the criteria for allowing this to be accepted, not just by us, but by the wider community? That's...
B
I was thinking about that, and I think that, for me, the criteria is: during one release it runs in parallel, the same tests, the same jobs, and the same everything. After that release, you should be able to do an A/B switch, one or the other, and in the third release, to completely replace it.
B
So you cannot just put something in and let's see what happens, because you cannot say, ignore this, this is API machinery, or this is sig-whatever; this is Kubernetes, and a flake can come from anything. So if you want to start to replace kube-proxy with this, and you have somebody that wants to work on prow, I'm happy to guide someone to add jobs in prow; then we can check the stability of the jobs, the same way that we did with the network policies.
D
Yeah, I mean, we could, if that's what folks want. We could put that as, like, a goal, if that's where folks want to head, for sure. Right now, some of the stuff in kube-proxy just isn't tested at all, so it's kind of like we probably should be removing stuff from kube-proxy that's not used, and...
D
Yeah, and so, like, we hope to have more test coverage than what's in prow, is kind of what I'm saying, because as of now there's a lot of stuff that's not tested. So that's one thing I forgot to mention: we do have a new suite of tests. So y'all are familiar with these table tests, but, like, this is kind of funny, because we were just playing with this today, because we were looking at the userspace proxy and how weird the test results were for it.
D
But we have a new set of service proxy tests that Amim has been putting together that use the new e2e.test framework. Obviously we need to run the 200 or whatever sig-network and conformance and all the rest of the tests too, but I'm hoping we'll have way better test signal for service implementations, sort of out of the box.
C
For the sake of time, we have other things on the agenda. This is awesome. Let's carry forward the discussion about how to start thinking about a transition, a hypothetical future transition, and what we want to do to get there. Sure, cool. Thank you so much for giving us this; I'm excited about this. Cool.
L
Yeah, so Yang and I are here from the team that worked on it. About four to six weeks back, you may recall, we had shared the updated version of the KEP and asked for feedback on the priority approach versus the multiple-actions approach. The feedback from the group had been to focus on the priority approach, and we've updated the KEP since then. We would like to invite the community to review. Yang?
M
Yeah, sure, just one second.
L
And for continuity's sake, the main key points to keep in mind, the salient points, just as a kind of refresher: there would be these instances of ClusterNetworkPolicy, or we might rename it to something like AdminNetworkPolicy.
L
Each instance would have a priority, and, you know, different instances would be treated in order of priority. Within each instance there could also be multiple match-and-action rules, and within an instance, rules are ordered according to YAML listing order, right: whichever is first in the YAML is higher priority than what is next in the YAML. And across CNPs...
M
So I think the major thing we wanted to get an idea about from sig-network today is, first of all, there have been some discussions back and forth in the community, in our little group, in terms of the priority numbering, right. There are two sorts of priority numbering schemes. One is that a lower priority number means a higher priority, so something like priority zero means the policy is really, really high priority, and something like 900 means the policy is a little bit lower priority.
M
The benefit of this is that, you know, in some use cases this is more intuitive to people, and we can always have zero as the highest priority, so that when people have a really, really strong stance on what the policy should mean, they can put zero, as they know that this number will never be overridden by other policies. But in other cases people feel like, you know, we want a higher number to mean higher priority, and we can reserve something like zero to mean a default-priority case.
M
...where, you know, this policy is evaluated after Kubernetes network policies. To solve this in the KEP, what we propose now is that we make the priority field an int-or-string, just like the port number in the network policy, so that for baseline or default network policies we can put a special keyword in there and say, okay, this policy is "default" or this policy is "baseline". These are known keywords, right, and the numeric values will always be evaluated, you know, before the Kubernetes network policies, and...
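[A hedged sketch of what the int-or-string priority could look like; the type and field names here are illustrative, not the KEP's final API. intstr.IntOrString is the standard apimachinery type being compared to, the same one NetworkPolicy uses for ports.]

```go
// Illustrative sketch only; the KEP may choose different names.
package policy

import "k8s.io/apimachinery/pkg/util/intstr"

type ClusterNetworkPolicySpec struct {
	// Priority is either a numeric value (always evaluated before
	// Kubernetes NetworkPolicies; whether lower means higher
	// priority is still under discussion) or a reserved keyword
	// such as "baseline"/"default" (evaluated after
	// NetworkPolicies).
	Priority intstr.IntOrString `json:"priority"`
}
```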
M
Yeah, sure, let's see. So, yeah, this is the first thing that we wanted to get some feedback on: whether, you know, a lower number corresponding to a higher priority makes sense, or the other way around makes more sense for people. And in terms of priority there is another thing, right, that Sanji was just mentioning right now. We think, you know, because of possible conflicts...
M
We probably don't want two CNP instances to share the same priority number. That way we can, you know, avoid having two policy instances, one with an allow rule and one with a drop rule, but with the same priority number; it would be nondeterministic, and it would be really weird to have CNIs implement something like: if they have the same priority, then the deny always wins, or the allow always wins.
M
But, on the other hand, if we wanted to enforce something like "two CNPs cannot at any time have the same priority", then there's a burden on the admission controller to say: if there's already a CNP created at priority 50, you cannot create something else that's at 50.
M
Now, is this an okay thing for the admission controller, given that we'd sort of need to maintain a priority index on the policies, if that's the case?
L
Another area where we would invite feedback from the community... go ahead, Yang. But we have highlighted these, and we would request that, between now and the next meeting, you know, as many people as possible give us feedback, so that we can close this and move this to a, you know, standardized proposal. Go ahead, yeah, keep going, please.
M
The workload selector right now only has a pod selector inside of it. Basically, it's accompanied by a namespace selector, as in: select pods in certain namespaces. Now, in the future, I know there are also KEPs that are working on service account selectors, right, so in the future we may want to add a service account selector, or something along those lines, into the workload selector, so that, you know, in the ClusterNetworkPolicy we can say: select these namespaces, and then select the workloads, which can be selected by pod labels or service accounts.
M
The question we have now is that, you know, we don't know if the service account selectors and the pod selectors, or other selectors in the future, should be mutually exclusive; meaning, are there any use cases in the future where we might want to do something like, in the same workload selector, have both a pod selector and a service account selector, saying that we want to select workloads that have these pod labels and match some service accounts? We're not sure.
M
So, if that's the case, maybe we also have an alternative here. Instead of doing workloadSelector.podSelector, we can be a little bit more verbose and say workload selector, and provide a workload selector type, which is an enum. Right now it can only be pod selector; in the future it can be, you know, service account selector, pod selector plus service account selector, blah blah blah. It can be any combination, and we know that it will be very deterministic and explicit, so that we don't fall into any...
M
...you know, fail-open issue that we hit with network policies when we're trying to add new fields. But we're wondering, you know, if it's necessary to have this struct be this verbose, or if this is okay and in the future we can just add a service account selector, or any other selector, to this struct.
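[The two shapes being weighed could look roughly like this; all names are illustrative, not the KEP's API. Option A quietly grows new optional fields (the fail-open worry: older implementations silently ignore fields they don't know), option B makes the combination explicit via an enum.]

```go
// Hedged sketch of the two alternatives discussed above.
package policy

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// Option A: a plain struct that grows new optional selector fields
// over time; an implementation that predates a field will silently
// ignore it (the "fail open" problem).
type WorkloadSelectorA struct {
	PodSelector            *metav1.LabelSelector `json:"podSelector,omitempty"`
	ServiceAccountSelector *metav1.LabelSelector `json:"serviceAccountSelector,omitempty"` // future
}

// Option B: an explicit discriminated union; the enum tells the
// implementation exactly which combination is in use, so an unknown
// combination can be rejected instead of half-applied.
type WorkloadSelectorType string

const (
	PodSelectorType            WorkloadSelectorType = "PodSelector"
	ServiceAccountSelectorType WorkloadSelectorType = "ServiceAccountSelector" // future
)

type WorkloadSelectorB struct {
	Type                   WorkloadSelectorType  `json:"type"`
	PodSelector            *metav1.LabelSelector `json:"podSelector,omitempty"`
	ServiceAccountSelector *metav1.LabelSelector `json:"serviceAccountSelector,omitempty"`
}
```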
C
It's taking everything I have not to answer all these questions right here. These are great questions. The right place to answer them is in the KEP PRs.
L
Yeah, there are a number of points here, so please... it will require some amount of time to go through, you know, changes in the default implicit isolation logic, the conflict resolution logic. So we'd definitely appreciate input, especially from the CNI vendors: Calico, Cilium. Folks from Antrea have already sort of implicitly agreed, from Red Hat also. I'd cite this as being some sort of loose agreement, but more input is welcome. So anyway, the goal is by the next meeting.
A
Okay, thanks. Next up, Bridget has two items in three minutes.
J
Should be pretty quick; they should be pretty quick. First one is: I've been struggling with, do I even file an extension, because it's so ambiguous what kind of effects things could have on people. Then I looked at this webhook extension support thing, and I thought that kind of looks like it's going in in 1.23.
J
If I'm reading this right, maybe that would make it possible for people to not have problems: if their service doesn't support mixed protocols, it's no problem. And so I'm interested in people's thoughts on that, because I kind of don't want to file an extension just to make a bad experience for people at some clouds, even if other clouds do support it. I read through the comments; I don't want people to have an ambiguous experience that could lead to problems. And that's it.
C
I mean, my initial thought is: maybe the webhook is enough. It depends on people implementing it, right. So, in reality, I mean, yeah, it just punts back to the provider: right, you tell us what you don't support. Maybe it's enough. I need to go back and revisit the KEP. Maybe.
J
Yeah, okay, I'm gonna do more reading and see after that's more fully baked. Okay. And we have one minute left, and I have a dual-stack feature blog draft available for comment. I'm going to submit it... I'm going to submit the PR to send it to the docs team this time next week. If you have ideas and thoughts and comments, feel free to add them to that blog post, in that Google Doc, in the next week.
J
Oh, it's just the support for mixed protocols. If we turn it on and it only works for some cloud providers, then other cloud providers' users could have a bad experience.