From YouTube: Kubernetes SIG Network meeting for 20230706
A: I didn't hear the announcer, but it looks like it's recording. Hello everyone, and thank you for coming to the July 6th edition of the SIG Network meeting. As usual, this is under the Kubernetes code of conduct, which effectively boils down to: be nice to one another. So please do be nice to one another, and you can use the hand-raise feature in Zoom if you have questions or need to interject, so that we can keep things flowing.
A: But I think we'll have time for a couple more items, given what we have. All right, so let's get started. We'll go through triage. My basic plan is: we go through the unassigned triage items for now, then go through the agenda, and if we have time at the end, come back for the assigned triage items that are kind of stuck. If anybody has an objection to that, or wants to raise a specific triage item, please let me know. Hopefully you can see my screen.
A: All right, starting at the oldest unassigned triage issue: first, "default value for Service ipFamilyPolicy not right."
C: Yep, sorry. So the KEP does say that it will use PreferDualStack, and the implementation says it will use RequireDualStack. It doesn't actually matter, because it's going to set both families anyway in that one special case.
C: So there's two questions then. One is: is it worth changing it to match the KEP? I don't think so. At this point it's a feature that's out there; I don't think changing the semantics has any meaningful result. We could fix it up in the case that they make that modification. We have some other places where we do special-case changing of various things, like the service type, but in general I think that's just kind of gross, and this seems niche enough that my opinion is we should say this is just not worth fixing.
C: It's already out there as Require, and it wasn't clear to me that switching it to Prefer, while leaving the families as dual-stack, would actually be correct anyway. If we switch it to Prefer instead of Require, it's still going to publish both families in that special case of headless selectorless. Right now, if they add a selector, both families are still present, which isn't what would happen if they had just created that service from scratch, right?
C: It would give a slight boost in portability, I guess. You could say PreferDualStack, with the families in the order that you want, and then have that in your YAML. I'm not sure; I would actually have to go back and refresh on that whole thing.
C: Now that you asked this, Dan, I would be okay with exploring switching it to Prefer, if we're okay with Prefer allowing both families, but I don't know what the implication of that would be. I'd have to think really hard about that.
C: I mean, fortunately, we commented the crap out of that code, because we all figured it out once and we were like, let's write it down. So the code is, in my opinion, reasonably easy to follow, but the test cases are voluminous, and so we'd have to make the change and see about it. So let me update this one, since I was updating it before. I'll say this here: I don't have bandwidth in the near-term future to make this change. But if somebody wanted to explore it, it seems like it's at least worth trying, and if it doesn't work out, like if it's a huge amount of effort to do, I don't think we should do it. But if it was actually straightforward, then maybe okay.
C: I'll follow up on it. I'll assign myself and follow up on it. Cool.
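For reference, the special case under discussion, sketched minimally in Go using the k8s.io/api/core/v1 types (how the defaulting assigns RequireDualStack is paraphrased from the discussion above, not lifted from the apiserver code):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A headless, selectorless Service: ClusterIP is "None" and no
	// selector is set. Per the discussion, the defaulting code sets
	// ipFamilyPolicy to RequireDualStack (the KEP text said
	// PreferDualStack) and publishes both families anyway.
	policy := corev1.IPFamilyPolicyRequireDualStack
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "example"},
		Spec: corev1.ServiceSpec{
			ClusterIP:      corev1.ClusterIPNone,
			IPFamilyPolicy: &policy,
			IPFamilies: []corev1.IPFamily{
				corev1.IPv4Protocol,
				corev1.IPv6Protocol,
			},
		},
	}
	fmt.Println(svc.Name, *svc.Spec.IPFamilyPolicy, svc.Spec.IPFamilies)
}
```

Switching the default to PreferDualStack would change only the policy value here; as noted above, both families would still end up published for this shape of Service.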
A: Next one up is from Dan: meaning of sync_proxy_rules_iptables_total, given MinimizeIPTablesRestore. Dan, do you want to just go ahead and—
D: It's interesting. As we just said on the chat, maybe I had a second metric there. So part of the problem is that it is now very difficult to count what the original value would have been, because we're just not actually doing the work that would let us know that. And we could do it — I talked about that some — but.
C: I mean, what we will see is, you know, normally we'll see the incrementals, and so it'll be a small number, and then we'll hit one of those cases where you fall back on the full path, right, and you'll see this huge spike in your metric, and it'll go right back down, and people will wonder what the heck is going on with this metric. It becomes not a useful metric.
A: …is valuable this way and stuff like that, but Tim's perspective is also accurate: we don't want to just randomly change fiddly things for people. That's not a Kubernetes that we want to ship people, where things that they expected to work a certain way just change without notice, even if they're smaller.
C: Dan, I've lost track of exactly how hard it would be to calculate this. Is it like really, really hard, or just "what a pain in my butt" hard?
D: If we want to calculate the exact number, then we basically need to remove the short circuit from the syncProxyRules loop and have it go through the second half of the loop, but not actually write the rules — just count them. And that would be a little bit ugly, but it wouldn't be difficult.
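A rough sketch of the shape Dan describes — running the rule-generation pass purely to count, without writing — with hypothetical names (syncRules and countOnly are illustrative; this is not the actual kube-proxy code):

```go
package main

import (
	"github.com/prometheus/client_golang/prometheus"
)

// rulesTotal mirrors the idea behind sync_proxy_rules_iptables_total:
// a gauge for how many rules a full resync would have written.
var rulesTotal = prometheus.NewGauge(prometheus.GaugeOpts{
	Name: "sync_proxy_rules_iptables_total",
	Help: "Number of iptables rules a full resync would write.",
})

// syncRules walks every entry and either writes rules or, when the
// partial-sync short circuit would normally skip the work, only counts
// what a full sync would have produced (countOnly).
func syncRules(services []string, countOnly bool) {
	n := 0
	for range services {
		n++ // stand-in for emitting one iptables rule per entry
		if !countOnly {
			// write the rule via iptables-restore here
		}
	}
	rulesTotal.Set(float64(n))
}

func main() {
	prometheus.MustRegister(rulesTotal)
	syncRules([]string{"svc-a", "svc-b"}, true) // count-only pass
}
```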
H: Would the performance be the same if you added—
C: …worded, and there's value in both interpretations, so I would support adding whichever one we interpret this to be, and I would support adding the other one, because they're both sort of interesting. But you know, Dan, ultimately this is going to fall on you. If you say this is just too much of a pain in the ass to implement, I won't push too hard.
C: All right. If you take a look at it and you decide, oh, I was wrong, it really is that hard, then let's bring it back and have a discussion. But if it's really not super complicated, then I think the compatibility is better.
A: Okay, sorry, all right, next up! Thank you all for that great discussion. Thank you, Dan, who I think this falls on quite a bit. Moving on, we have "multiple concerns with network policies."
A: From last week. I didn't read this one; I saw some of the other ones, but I haven't seen this one yet. It's from the audit report: "policy object provides a flexible manner…"
D: This is just something that came out of the nftables KEP. There are some test cases that explicitly call iptables to break the network, to see how things cope with it, and we should add nftables support to those at some point. So I just filed this, hoping that somebody would pick it up someday.
A: "NodeLocal DNS not working with custom hosts." I haven't seen this one either. Actually: EKS 1.22; after installing NodeLocal DNS, in-cluster DNS and external DNS work normally, but custom hosts stored in a CoreDNS ConfigMap stopped working.
A: Roger that. Okay, so, anybody on the call: it sounds like this one is in a situation where it just needs somebody who has familiarity with EKS to actually take the time to go try to reproduce it slash understand it, so that we can figure out what the approach is here. Does anybody want to pick that up?
A: All right, what time are we at? Oh, almost 30 after. Might as well just hit this last one real quick, and then we'll go into the rest of our agenda. "Some NAT'd TCP conntrack entries remain UNREPLIED when pod is deleted, causing traffic loss." Let's see — has anybody taken a look at this one yet? Antonio?
B: I think that Lars and I went through this in another duplicate of this, and I think that it's because people are using MetalLB or this kind of thing to implement those load balancers. So they have externalTrafficPolicy, and traffic keeps being received on nodes that don't have pods, right, and then they have all these conntrack problems. But all right — Lars, did you volunteer to co-assign this with me? Because—
A: Okay, thank you, appreciate it. Cool, all right, so that covers everything that was unassigned. Let's go back to the agenda for today. We have a few items and we should get started on them, because we're at the halfway point. I think it's Cesari — you want—
A: Sorry — code freeze. I missed the code freeze, right? Yeah, sorry. So there's code freeze Wednesday, the 19th of July, so coming right up around the corner. The link is here, and there's the 1.28 cycle information and stuff like that. Just so people are aware. Did anybody else want to say anything else about the code freeze?
C: I know there's a lot, a lot, a lot of PRs open; I'm trying furiously to work through it. I am also on vacation in two weeks, so I'm here this week and next, and then the Monday after, and then I'm out. So that's my own deadline.
A: Ring the bells and such. Okay, moving on. Cesari, you want to talk about your service health item here?
H: Yeah, so if you can open this in a tab, maybe, and show that — thank you. So we're trying to add a header to the kube-proxy service health check that it returns for services with externalTrafficPolicy: Local, and as Dan pointed out, the information which we want to put in the header is already present in the body. So from our point of view: we have a lot of load balancers that require an HTTP header with the number of endpoints to correctly utilize this information for weighted load balancing. And it also seems easier to parse from the load balancer's point of view if it's in the header, rather than JSON-decoding on the load balancer side. And also, the header name — I have not seen that anywhere outside our company, but it looks generic enough that other providers could use it. So yeah, that's the proposal.
H: I think that the latter, the second part, is definitely it: there's the test, and then just this one header.
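A minimal sketch of the proposal's shape, assuming a generic header name and a JSON body like the one kube-proxy already serves on the health check port (both the header name and the field names here are assumptions for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// serviceHealth stands in for the JSON body the health check already
// serves; field names here are illustrative.
type serviceHealth struct {
	Service        string `json:"service"`
	LocalEndpoints int    `json:"localEndpoints"`
}

func main() {
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		health := serviceHealth{Service: "default/example", LocalEndpoints: 3}

		// The proposal: duplicate the endpoint count into a header so a
		// load balancer can read it without JSON-decoding the body.
		// The header name is an assumption, not an agreed-on name.
		w.Header().Set("X-Load-Balancing-Endpoint-Weight",
			fmt.Sprintf("%d", health.LocalEndpoints))

		if health.LocalEndpoints > 0 {
			w.WriteHeader(http.StatusOK)
		} else {
			w.WriteHeader(http.StatusServiceUnavailable)
		}
		json.NewEncoder(w).Encode(health)
	})
	http.ListenAndServe(":10256", nil)
}
```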
C: Okay, so I'm biased, so I'm looking for people to argue with me, and I'm happy to be argued with. This seems reasonable to me. I get the argument that parsing a header is easier than parsing an arbitrary JSON body. The name is ostensibly generic. If somebody came along and said, we want the same capability but we don't like the header name, I would not object to making it a configurable thing if we had to; I just wouldn't start with it as a flag, or even adding multiple headers.
C: If you needed slightly different formats or something — as long as they're ostensibly general, and it's not like, you know, X-Google-Something, I'm pretty much okay with it. It seems reasonable to me anyway. But that said, I know who my employer is, so somebody who's not a Googler, feel free to argue with me, please.
H: So it's not like there is a race between providers to provide such a header; we are the only ones that need it. It's generic enough, and we can replace it with a flag in the future if we want.
C: It also reminds me of the discussion about adding support for HEAD calls on probes. It seems plausible that some load balancer implementation may one day say, we only call HEAD on our probes or on our health checks, and so having it in the header also makes sense.
A: Okay, that sounds like — I'm sorry if I've been messing up your names. ("No, it's perfect pronunciation.") Okay, wonderful, yeah. So it sounds like we're good to continue moving forward, just tests and documentation, and I subscribed to this one as well, so I'll go back over and review it as you update it. Okay, awesome! Thank you.
B: Okay, so this is the one where we went through the big debate, and I also met with people at KubeCon in Amsterdam. There was not much interest from people outside of networking, and I reduced the scope to two approaches. Either we create a singleton object that defines the service and pod CIDRs there, and we can cross-validate in that object — but that has the downside that people need to deal with updates and can break their clusters without—
B: Okay, sure. Okay, so the problem statement is like this. Right now, clusters have a pod network that uses the node IPAM that is in the kube-controller-manager. That's a very basic IPAM that just splits one super subnet into small subnets and assigns them to the nodes, in the node's spec podCIDR.
B: There is a KEP to be able to expand these ranges dynamically, so people can add new nodes and add new subnets, and this way they can consume these subnets to allocate new nodes without having to restart the kube-controller-manager, and avoid disruption on the cluster. In parallel, we have the other KEP that was doing the same with services, so people can resize the service CIDR dynamically. When we realized that these two KEPs were going in parallel, Carl — I think it was Carl — flagged the problem of, oh—
B: The question right now is: how critical is this problem, and how do we solve it? Because API machinery doesn't allow you to cross-validate. If you create a ClusterCIDR, you cannot go into the API server and check, oh, what service CIDRs do I have, and see if it overlaps — that's not possible. The other alternative is to do some kind of validating admission webhook, but that was super complex and it was discarded in the first iteration. So the other two feasible options from the previous discussion are, okay:
B: Create one object that holds these configurations, so when somebody adds another new subnet, I can cross-validate against all the subnets in that object. And the other option is: I keep going with creating cluster CIDRs and service CIDRs, and I take the same approach that we are doing right now with services — if something is overlapping, I just flag it, log a warning, put it in the status or something like that, and that's how the user can be informed. Yes.
D: So, just thinking about this now: we could have some sort of controller spec-versus-status pattern, where you can create multiple network config objects with specs, and then some controller validates them and creates a status showing all of the ones that have been accepted. And so, if you create an invalid one, it just doesn't get accepted into the status.
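A hedged sketch of that spec-versus-status pattern; every type and field name below is hypothetical (no such object exists in the Kubernetes API today):

```go
package main

import (
	"fmt"
	"net"
)

// NetworkConfig is a hypothetical object: CIDR is the spec an operator
// writes; Accepted/Reason are the status a controller fills in.
type NetworkConfig struct {
	CIDR     string
	Accepted bool
	Reason   string
}

// cidrsOverlap reports whether two CIDRs share any addresses.
func cidrsOverlap(a, b string) bool {
	_, na, err1 := net.ParseCIDR(a)
	_, nb, err2 := net.ParseCIDR(b)
	if err1 != nil || err2 != nil {
		return false
	}
	return na.Contains(nb.IP) || nb.Contains(na.IP)
}

// reconcile plays the controller role: each config is accepted only if
// it does not overlap any previously accepted config; invalid ones
// simply never make it into the accepted set.
func reconcile(configs []*NetworkConfig) {
	var accepted []string
	for _, c := range configs {
		c.Accepted = true
		for _, prev := range accepted {
			if cidrsOverlap(c.CIDR, prev) {
				c.Accepted, c.Reason = false, "overlaps "+prev
				break
			}
		}
		if c.Accepted {
			accepted = append(accepted, c.CIDR)
		}
	}
}

func main() {
	cfgs := []*NetworkConfig{
		{CIDR: "10.0.0.0/16"},
		{CIDR: "10.0.1.0/24"}, // overlaps the first: stays unaccepted
		{CIDR: "192.168.0.0/24"},
	}
	reconcile(cfgs)
	for _, c := range cfgs {
		fmt.Println(c.CIDR, c.Accepted, c.Reason)
	}
}
```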
C: My big fear is that people will start abusing this, but my argument, which is I think noted in the doc, is: all the CIDRs within a cluster already have to exist in a larger context, and that larger context isn't represented anywhere in the cluster. There's nothing we can do that prevents you from choosing a pod CIDR or a service CIDR that tramples on your corporate network and causing yourself problems. There's just nothing.
B: This kind of thing, right? I mean, you can configure a network there, but nobody's guaranteeing that network is going to exist — that's no different than the magic multi-network. Okay, this is an old IPAM saying, okay, you have this super CIDR and the node is going to have this CIDR assigned, but there is no guarantee this node is going to have the network.
D: It's not the same. Like we talked about at KubeCon: for services we own everything, so we can provide a configuration object to say, I want additional service CIDRs. For the pod network, the network plugin owns that, and we shouldn't be providing an API to change the network; we should be providing an API to let the network plugin indicate how it has configured the network.
C: So, Dan, what you say is true in the strictest sense of what we have today, but in theory the service implementation could be an external range that is managed with real load balancers instead of virtual IPs. Nobody I know implements it that way, but they could, right? Or in fact, MetalLB uses real VIPs. So there could be network-level programming that happens outside of our kube-proxy purview. So it is a representation of configuration, and for kube-proxy it's also the source of truth.
E: I just think we want to point out what you just mentioned, Dan — like you said, kube-proxy has — I think today the problem with the service CIDR, at least, is that it's only configurable through command-line arguments, and basically there is no centralized config for that. But I think the proposal that Antonio shows solves this, right? It is then controlled through the API, as you said, and the question is whether we want to do that.
E: But then at least it's centralized. I think that's the key of this whole thing, right? We have an API that explicitly defines: this is the CIDR that we want to use, and then everyone else eventually switches to that, right? So that gives us — of course, it's not going to happen immediately, but then we have the object, and then we have kube-proxy, or the library, or whatnot, eventually read from that and do stuff based on that, right? That would be the final goal.
B: That's the point of why I want to move the discussion from "let's define how we are going to configure the networks" to "let's move the flag configuration to an API-driven configuration of these options," right? You configure the service CIDR and you configure the node IPAM CIDR flags, and this is why I want to focus the discussion there.
C: I think the problem statement at its core is: we're making it more difficult than it needs to be to change things that are reasonably changed. And we have done a poor job in the past at distinguishing the cluster provider — in Google's case, that would be the GKE team — versus the cluster operator, which is the customer who bought that cluster, in our use cases. And I think the CIDRs that you use for this is actually a cluster operator problem, not a cluster provider problem, and so we should have API for it, and I think—
C: If we went back to the pod CIDR KEP and added an except clause, like we have for network policy CIDR blocks, then we could model exactly what we have in flags. So the bootstrap would be trivial, and at that point it becomes the cluster operator's domain, which I think is the right trade-off here.
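The "except" shape being referenced does exist today in NetworkPolicy's ipBlock (k8s.io/api/networking/v1); reusing it for a pod CIDR API is only the suggestion above, not something implemented:

```go
package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
)

func main() {
	// NetworkPolicy's ipBlock already models "this CIDR, minus these
	// sub-ranges"; the suggestion is that a pod CIDR API could reuse
	// this shape, so today's flag-based bootstraps map onto it trivially.
	block := networkingv1.IPBlock{
		CIDR:   "10.0.0.0/8",
		Except: []string{"10.1.0.0/16", "10.2.0.0/16"},
	}
	fmt.Println(block.CIDR, block.Except)
}
```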
B: This is what it's targeting, but because it comes into this, and I—
C: I mean, it has always been the same. So what's changed recently — let me back up. There's two different issues that use all the same words, but in a slightly different order. One is: when I apply a service that has two ports that use the same number with different protocols—
C: It gets broken, and that's client-side apply, which uses the merge key, which is defined as port number — just port number — and it will silently merge them and pick one, right, which is awful. But it's not really fixable, because it's baked into clients and it's very difficult to change that merge key; at least there's no trivial solution to this. And so we've generally said, like, yeah, don't use apply for services where you're changing ports; apply and edit are going to break in this way.
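To make the failure mode concrete, a minimal sketch of the kind of Service spec that trips this: two ports differing only by protocol (the DNS port-53-over-TCP-and-UDP pattern), where client-side apply's merge key — the port number alone — cannot tell the entries apart:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Two ports that differ only by protocol. Client-side apply's
	// strategic merge key for this list is the port number alone, so
	// these two entries look identical to the merge and one can be
	// silently dropped.
	ports := []corev1.ServicePort{
		{Name: "dns-udp", Port: 53, Protocol: corev1.ProtocolUDP},
		{Name: "dns-tcp", Port: 53, Protocol: corev1.ProtocolTCP},
	}
	for _, p := range ports {
		fmt.Println(p.Name, p.Port, p.Protocol)
	}
}
```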
C: There's a second issue, which is that server-side apply has issues with — oh sorry, not service ports, but pod ports, which are defined, again, underspecified, in that their key is defined as port plus protocol. We thought we got it right this time, and it turns out it's actually port plus protocol plus host port, because some crazy people out there expose the same port on different host ports, and we tried to take it away once and people complained.
C: So it is now insufficient on the server side, and server-side apply doesn't silently merge them and pick one — which would also be bad, but at least consistently bad. It instead fails all server-side apply operations, even ones that don't touch those fields. And so, as controllers are converting to use server-side apply, they go and update the status of a pod, or worse—
C: That problem is the one that Antoine is working on right now. Antoine and Joe Betz are trying to figure out a good way to make that less impactful. The first plan, I think, is to only make it explode if you're touching the fields in question. There's about seven other objects that have similar problems, and it still explodes—
C: —if you end up touching the ports. And so now we're thinking about how we can actually make it come close to doing the right thing, or at least fail with a better error, or fall back on something other than server-side apply; different discussions. The client-side apply problem exists, and I still don't have an answer for it, unless we want to go hack into the server side of the patch logic, with the strategic merge patch, and handle it there, which Antoine really does not want to do.
C: Since everything is leaning towards server side now — like more and more things are adopting server-side apply — that was the priority to try to actually fix. Also, it's because we have more leverage there: because it is all done server side, we can find ways to handle these cases without having to update all the clients in the world.
C: It's super complicated, and fundamentally it comes down to, in my opinion, Service being one of the oldest APIs when patch support was added. So imagine a time before patch was supported: patch was added, strategic merge patch was added, and not a lot of thought was given to what the key was going to be, or what it would mean if we got that wrong. And so it happened — we got it wrong — and now we're just trying to live with the consequences of it.
B: Yeah, well, last topic, now that I have Alexander here, because he's the one who administers the load balancers.
B: With Casey, another CNI maintainer — because right now, the kubelet's NetworkReady comes from the container runtime, and the container runtime just checks that there is a file there and says, I'm ready. And I think that we can do better, so I'm asking to have some way for the container runtime to check the CNI with more active polling or something like that, and we are coming up with this proposal of a status. I—
B: Yeah, but that's the problem, right? Because once the file is there, the node is always ready. But the problem is that new plugins that do more stuff have a different life cycle, so you may upgrade only the CNI plugin, right, and then it goes down, but the network keeps being ready, and then pods keep being scheduled there, and there is no CNI or anything. So the—
B: —spinning around, and that's the problem that we want to solve. But what we don't want is: oh, you cannot create new pods, don't schedule more pods to this node because I'm not able to plug them in — but the existing pods are okay; I mean, they are plugged into the network, so the load balancers should be able to keep working. And I don't know if we have this.
I: Yeah, so what happens today is, well, the service proxy answers the load balancer's probe, and in the case of kube-proxy, we do watch the node, so we could react to whatever condition we would like to add in the future. So if the CRI or the kubelet is able to probe the CNI for this status and set a status condition on the Node object, then kube-proxy could pick that up and start failing the LB health check to drain traffic away.
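A hedged sketch of that flow: the NodeNetworkUnavailable condition type is real (k8s.io/api/core/v1), but wiring it into the proxy's load balancer health check as below is the idea being discussed, not current kube-proxy behavior:

```go
package main

import (
	"net/http"

	corev1 "k8s.io/api/core/v1"
)

// nodeNetworkBroken inspects the Node's conditions the way a service
// proxy could: NodeNetworkUnavailable exists today as a condition
// type; any future CNI-health condition would be checked the same way.
func nodeNetworkBroken(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeNetworkUnavailable && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	var watchedNode corev1.Node // stand-in for the node object the proxy already watches

	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		if nodeNetworkBroken(&watchedNode) {
			// Fail the LB probe so the load balancer drains traffic away.
			w.WriteHeader(http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	})
	http.ListenAndServe(":10256", nil)
}
```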
I: The load balancer today will — I mean, in the case of externalTrafficPolicy: Cluster, the load balancer will continue to send the node traffic while kube-proxy's readiness reports a status of okay, right? So, for the time being, there is no involvement with regards to the CNI plugin's state; it doesn't impact load balancers from—
B: No, I just wanted to raise this topic, because this is happening, and, I mean, as more people start to know the problem that we want to solve, all these things start to pop up, and I'm not sure, you know, how this is going to cascade.