From YouTube: SIG Network Meeting 20200903
Description
No description was provided for this meeting.
B: Okay, so let me see... post disabled screenshot.

B: This is a new one, so I guess we'll just start with this, because this is definitely new. You know what I'll do, I'll open this in a new window.

B: Let's do something kind of fancy today, we'll have a little bit of fun over here. Okay, with this one on this side and this one on this side. So what we got: we got a kube-dns master CNI CI benchmark failing, kube-dns benchmark failing for two weeks, sig-scalability benchmark "no module numpy". So is it a Python error? What?

E: Also want to remove sig network. I forget what the magic incantation is, but maybe it's "remove sig", space, "network".

F: I don't think other people are using it. I mean, this was added by athenabot, so... is scalability on here? They're on here. Okay.

B: So accordions flop, flapping around. I mean, that's likely not a SIG Network thing, that's an endpoints-dying thing. Let's see... oh, people are already working on this. Who is this? Do we see any of our people?

M: Yeah, you can assign it to me if it's not already assigned to me; there's a bunch of people looking at this. This is Rob Scott, and I'll just go on record as saying: if you ever want to point someone towards an issue that is well written, this is the one, yeah.
B: You know, it's like... and he works on Cluster API. This is off topic, but it's great. Like, sometimes I want to understand something and I'll just look for issues that he filed, that have his name on them, that are tangential, so I can learn about it. So these are great docs too, the ones that he writes. Yes.

B: Some environment that doesn't have a... he's not running it from... it's because it doesn't exec. What does it do? Or it doesn't SSH? There's a lot of weird stuff in that test, like...

B: Yeah, okay, so if anybody's looking for something to do... I know actually Michael is going to look at it. I think there's a couple of people over here that were interested in kind of diagramming what it does, figuring out what we want to do, what we don't want to do, what we should keep doing. But this is... okay, cool, so we don't need to do anything there. Let's go to...

B: Oh, this is a... oh yeah, I was gonna look at this, but then I figured it was a Google one. So then I think there's one of these that probably is... it's a GCE... I thought it would be easier to reproduce on GCE, because the test is failing, I assume, on GCE, because it's an external load balancer scale test. So I don't know, but I don't know if anybody's assigned.
N: Yeah, humans. There's a set of ultra-diligent folks who peruse all incoming issues and make their best guess as to which SIG it belongs to, and, bless their hearts, they mostly get it right.

F: Yeah, so we had talked about this back in December actually, but there are people, specifically people on my and Dan's team at Red Hat, and presumably other people, who would like to be coming to SIG Network meetings but don't want to stay up until midnight. When we talked about this in December, somebody said that there were other SIGs that were starting to experiment with doing alternating Europe-friendly and Asia-friendly time slots in alternating weeks or whatever, and we decided that we were going to see what was happening.

F: You know, wait for reports from them, see if it was working. I don't actually know which SIGs those were, so I don't know if it's working for them, but you know, this came up in one of our internal team meetings a month ago and people were like, hey, why haven't they moved SIG Network yet? So...
M: So, some anecdotal experience: for Service APIs we have been trying that. We have, at least from the perspective of Pacific time, a morning meeting one week, and then the other week we have an end-of-day meeting, both Pacific-time friendly, but at least usually that's enough to make it relatively friendly for Europe and relatively friendly for APAC, alternating meetings. I'd say that's worked reasonably well.

M: Unfortunately, it does mean that, you know, we're meeting every week, so it means that somebody can still make it to a meeting every other week. It may be tougher if it means that we're ruling out people from attending, you know, more.

M: Yeah, but I'd say it's generally worked well for Service APIs.

F: I mean, you know, I'm East Coast time, so I'm kind of in the middle, but I mean the two times of the Service APIs meeting, you know, look good to me.

N: I don't see any reason why we wouldn't move it, whether we split or whether we just pick an earlier time. Like, I don't know where the preponderance of would-be attendees is. If we make it APAC time, will we actually get APAC people? I know we have Europeans.

N: Confidence, right? I mean, it is what it is. As they say, all SIG meetings are trending in the same direction. There's no globally optimal time, but it seems like everybody ends up with the same conversation, so we're gonna end up at a morning time, and some of us will just have to pick which SIG meeting we're going to. Just please not Tuesday mornings.
E: I mean, it's important to point out that no time that we pick is going to be perfect for APAC. You know, even if we move it to like 8 a.m. Pacific, that's still, I don't know, 11 o'clock at night in Beijing. Yeah, it's going to be a commitment from anybody in APAC anyway.

N: You know, yeah, that's a good question. Actually, I think that's a great idea, Dan. Why don't we... can you write something quick to the email list, just so we can get a sense of the sort of body count in each major geo, continental body count?

A: Cool, thanks Dan for reminding us about that and for getting that email out in advance. Jay?

B: Next, yeah. Is there anything more important than mine? Mine is a config map thing that I was just confused about. I assume there might be some more important things, because I know Tim mentioned he might want to talk about what features we're blocking and whatnot, and I can bring that...
N: That brings us to Tim, okay. So, I had this idle thought as I was chasing down some other KEP and trawling my way through the feature gates file, and realizing there were all these feature gates that I'd seen. I was like, wait, aren't we done with that? Why isn't that done? Why didn't that get finished?

N: And I thought it was useful to discuss which feature gates we have that exist and which features ought to be finished, like, what is between us and the finish line. So I did get to do a little bit of prep work here, and it looks like there's only about a dozen of them actually, so maybe I sort of overestimated as I was scanning through, but there are a dozen, and of this dozen some of them are GA already.

N: My understanding is that we mark them GA and then eventually we remove them from the list, right, so the gate stops existing. So IPVS proxy mode is marked as GA; we should probably remove it. Service load balancer finalizer is marked as GA; that one may be younger than the IPVS proxy mode, so maybe we want to leave it for a couple more releases, but it might be worthwhile to go through and throw an annotation or a comment on there.
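For reference, the gates Tim is walking through live in Kubernetes' feature gates file (pkg/features/kube_features.go) as entries like the sketch below. This is illustrative rather than the literal file contents; the TODO comment is the kind of annotation being suggested for GA gates that are waiting to be deleted.

```go
// Sketch of feature-gate declarations in the style of
// pkg/features/kube_features.go; illustrative, not the literal file.
package features

import "k8s.io/component-base/featuregate"

const (
	// IPVS backend for kube-proxy (GA; the gate sticks around until it is removed).
	SupportIPVSProxyMode featuregate.Feature = "SupportIPVSProxyMode"
	// Finalizer protection for load-balancer Services (GA more recently).
	ServiceLoadBalancerFinalizer featuregate.Feature = "ServiceLoadBalancerFinalizer"
)

// GA entries stay in this map, locked on, for a few releases before the
// gate is deleted outright.
var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureSpec{
	SupportIVPSProxyModePlaceholder: {}, // placeholder removed below; see real entries
}

var gaFeatureGates = map[featuregate.Feature]featuregate.FeatureSpec{
	SupportIPVSProxyMode:         {Default: true, PreRelease: featuregate.GA}, // TODO: remove the gate entirely
	ServiceLoadBalancerFinalizer: {Default: true, PreRelease: featuregate.GA, LockToDefault: true},
}
```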
N: I would almost certainly bet not, because it's a really subtle feature that's going to require infrastructural fiddling, right, the cloud provider or whatever. My goal is not, I guess, to assign people right now, but to maybe trigger the people whose fingerprints are near these things to go back and look at them. SCTP support?

F: ...us from going GA, so we needed the tests, and the last PR merged like on Tuesday. Awesome. So the KEP says that we have to demonstrate that at least two network plugins can pass the SCTP test, which means that we need to, like, rebase and get the latest kube into, you know, OpenShift, so we can test it.

F: ...that does SCTP and you want to test things, read the KEP to get the details or talk to me.

F: Okay, but yeah.

F: Okay, so yeah, so that, I assume, hopefully can go to GA, possibly for 1.20, if we manage to get the testing all done.
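For readers who haven't followed the SCTP work: what the plugins need to pass tests for is simply the SCTP protocol value on Service and Pod ports. A minimal sketch, with an assumed name and port, looks like this; actually carrying SCTP traffic still depends on the cluster's network plugin.

```go
// Minimal sketch of what "SCTP support" exposes at the API level: an SCTP
// port on a Service. The name and port number here are illustrative.
package sctpexample

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func sctpService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "sctp-echo"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "sctp-echo"},
			Ports: []corev1.ServicePort{{
				Name:     "sctp",
				Protocol: corev1.ProtocolSCTP, // needs SCTP support in the network plugin to actually work
				Port:     5000,
			}},
		},
	}
}
```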
N: Also great. Dual stack, we know what the state of that is. EndpointSlice: so again, I did not rebase this to master, but EndpointSlice was beta, and EndpointSlice proxying... Rob, are those... those are GA now, right?

M: They are enabled by default; they're not GA. So EndpointSlice proxying is actually now separate for Windows, that is technically alpha, and EndpointSlice proxying for Linux is beta, hopefully not far from GA. I don't know... the EndpointSlice API, I think, is on pace to get to GA, hopefully in the next release cycle. Hopefully. Okay, unless anyone has ideas for what might be missing or what needs to be improved.

F: So EndpointSlice is, I believe, also the only beta API that we have to worry about, which is potentially subject to being killed, because it involves an actual API, yeah, with the new killing-beta-APIs rules. So yeah, if it's not going to make it to GA, then we need to worry.

N: So I think SIG Arch needs to have a rethink about deprecation of APIs, at least the timelines, but that's a separate discussion. Okay, so that's EndpointSlice and Windows EndpointSlice. Service topology is a red hot potato, though. You know, Rob and his intern gave us a nice demo two, three weeks ago; two, three cycles ago, rather.
M: Yeah, I am very, very open to feedback. We're working on a draft KEP right now, and we have a couple of different options that performed pretty well.

M: The big thing is that, as it exists right now, we have a way to evaluate just about any potential approach and give it a score based on a variety of different factors and millions of test cases. So yeah, anyways, that's going well, and I hope to have a KEP PR in the next couple of weeks.

N: Awesome. Then AppProtocol?

N: Awesome, and that's it, that's all of them. So part of the goal of doing triage... I guess we should encompass these things. Yeah, sorry about the kids in the background. Again, we should encompass these things in our triage process.

N: We had an interesting discussion; Dan and Dan and Casey and I had an interesting discussion with Laurie from the... sorry, from the ContribEx side, and we're trying to strategize how to more systemically do triage for all the things that are going on in our SIG.

N: I'm not gonna go into it now, because I didn't put that on the agenda yet, but I would love to see us clean up this list, and I'm going to go and have a conversation with other SIGs who have a lot more feature gates open than we do, and encourage people to start draining those down if they...
N: Interesting. Well, we can't throw a gate in front of them now, because they're enabled.

P: Yeah, I know. My suggestion was just to open an issue, as I was saying in the chat, so we can track those, and probably ask for the community's help on the ones that are already GA, like IPVS or anything else, and mark this as a good first issue; at least the GA ones, where we know they aren't in any part of the code, only in the kube features file and probably some README.

N: Yeah, that's a great point. I've got the list up here, so maybe I'll do that right after this meeting; I'll just open three or four issues.

A: Thank you, Tim, and that means the floor goes to Bridget.
G: Short and sweet: this is your moment. If you have been reviewing the dual-stack PR, awesome, thank you. If you've been waiting for exactly the right moment to make an impact there, this is your moment, because we'd really like to get that merged early in 1.20. Khaled, what kind of stuff do you anticipate, excitement-wise, that people are going to see when they look in that PR and go to review? And Tim?

N: I was just going to say: Khaled has done a wonderful job of isolating the changes into a series of reasonably self-contained commits. The API stuff, which is the most persnickety, is in the very first couple of commits, and the commits after that are much more approachable and could definitely use the many-eyeballs approach.

Q: That's well said, thank you. The only thing I want to raise is that we have an open question, really, about defaulting on read. The classic example I have is for a cluster that's being upgraded: the new fields need to have values, right, according to the IPs assigned, and the problem with that is that it's also according to how the cluster is configured. And we also allow dirty data, meaning if somebody has, for some odd reason, a service that has IPv6 on a cluster that's IPv4, we need to say, okay, this service is IPv6 family, even though the cluster is IPv4. We have been going back and forth with Jordan from API Machinery. We have a separate commit for a hook; all right, I will merge it into... this is all in the registry stuff, I'll merge it into the respective commit tomorrow, right. It works and it does everything; however, for some reason, inside the API server it just doesn't work, and I suspect this is because of the list.
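As a concrete reference for the "defaulting on read" question, the kind of classification being described is roughly the sketch below: deriving a Service's IP family from its already-assigned ClusterIP so that pre-existing "dirty" data still reads back with a truthful family. This is a minimal sketch with assumed names; the real logic in the dual-stack PR also has to consult how the cluster itself is configured.

```go
// Sketch of deriving an IP family from an assigned ClusterIP on read.
// Illustrative only; not the code from the PR under discussion.
package ipfamilysketch

import (
	"net"

	corev1 "k8s.io/api/core/v1"
)

// ipFamilyOf classifies an assigned ClusterIP, e.g. an IPv6 Service that
// somehow exists in an otherwise IPv4 cluster still reports IPv6.
func ipFamilyOf(clusterIP string) (corev1.IPFamily, bool) {
	ip := net.ParseIP(clusterIP)
	if ip == nil {
		// "None" (headless) or empty: nothing to infer from the IP itself.
		return "", false
	}
	if ip.To4() != nil {
		return corev1.IPv4Protocol, true
	}
	return corev1.IPv6Protocol, true
}
```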
Q: Is Lars here? Oh, I just want to say thank you to Lars, because Lars is the only one who discovered that we have a major problem upgrading clusters: the hook we thought would work didn't work, and Lars is the one that pointed it out, and he helped me test it. So thank you, Lars.

Q: So I am in a state of brainwash right now, so we can do that after I get back.

E: The 8th to the 18th, okay.

G: But that doesn't mean wait until the 18th to look at things. It means look at everything and get all your PR comments and line edits and everything in, in the next couple of weeks, because then we can hopefully get it all merged after he gets back from vacation.

N: And anybody who feels particularly adventuresome: the API-related commits. The biggest problem that we've run into over and over again is weird corner cases across upgrades, or pre-existing data that we didn't handle cleanly.

A: Okay, well, thanks everybody. Antonio?
I: And we were talking, during the KEP, about the probes, and I was discussing it with him. I'm of the opinion that the probes should be simply stacked the way that they are now, and he thinks that it's better to have, you know, probes per IP family, and... I don't know if there is a clear position on this, or if I should send an email to discuss, or...

F: So, to clarify, this is: if you have health checks in a pod and the pod is dual stack, do we health check the first pod IP? Do we health check both pod IPs? Do we health check the pod IP that has the same IP family as the node's first IP?

Q: Yeah, it goes also to the liveness and readiness probes, so all the family of probes that we have, right? Yeah, that's...

Q: But we need to start thinking about how we want to do health checks and how dual stack works... sorry, probing and dual-stack support. That's a better question.

I: My point is that having probes for dual stack, you know, you never know what you're going to probe. So if the app is only single stack, you will be probing something that doesn't exist, and what happens if it doesn't exist now and later starts and works, but the probe didn't notice it?

Q: If we keep, for the pod, the status IP, then we can keep going forever, because that's the primary IP, and that's assigned, and that will work in a single-stack world.
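Neither option is settled in the meeting. As a concrete reference point, here is a small sketch contrasting the two behaviours being debated: always probing the primary status pod IP versus picking an address per IP family from status.podIPs. Function names are illustrative; this is not the kubelet's actual code.

```go
// Sketch of the two probe-target policies under discussion.
package probesketch

import (
	"net"

	corev1 "k8s.io/api/core/v1"
)

// primaryProbeIP is the "keep it simply stacked" option: probe status.podIP.
func primaryProbeIP(pod *corev1.Pod) string {
	return pod.Status.PodIP
}

// probeIPForFamily is the "probes per IP family" option: find an address of
// the requested family in status.podIPs, which may not exist at all for a
// single-stack workload (the failure mode Antonio describes above).
func probeIPForFamily(pod *corev1.Pod, wantV6 bool) (string, bool) {
	for _, pip := range pod.Status.PodIPs {
		ip := net.ParseIP(pip.IP)
		if ip == nil {
			continue
		}
		if (ip.To4() == nil) == wantV6 {
			return pip.IP, true
		}
	}
	return "", false
}
```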
Q: I second Dan on that. It's a related topic.

F: So, the first thing in Antonio's bullet list there, the node addresses: currently nodes only have one IP, which means that host-network pods only have one IP, which means they can only back single-stack services. And so I filed a KEP about dealing with it, and the initial version of the KEP was much more expansive, trying to solve 20 problems at once, and it has sort of backed away from that now and is pretty much just about having dual-stack host-network pod IPs and figuring out the best way to get that without breaking anyone.

F: So people who haven't looked at that may want to look at it; people who have looked at it may want to look at it again; figure out what we should do.

A: Dan, do you want to throw a link to that one in the... I don't know...

Q: Just to add to the plate of things that will drag after dual stack: policies also will require some looking at, if we have to do policies. Right now, policies use, as far as I know, pod names and selectors and stuff like that, so they don't really care about IPs, but the enforcement of them: is it really looking at the IPs of the pods or the services? So that's another topic.

Q: Ingress, thankfully, is already designed with multiple IPs, so we're fine there, but policy comes on top of my list of things that we really need to worry about. I would venture to say it's probably a higher priority than the host IPs and the probes, because that's the security stuff, and all the big clouds, quote-unquote, trademark at the end, are using these things heavily. So we probably need to give policy a higher prio.
Q: Are they progressing? As in, the policies that you have on Calico are progressing and being grown and all of that stuff, and you support dual stack, so that way at least we can say, okay, dual stack is supported with policies on this kind of CNI and other CNIs are ramping up. Is this a statement we can make now?

F: We do have a problem that we have terrible e2e coverage of dual stack right now, and that's another good first bug, maybe; like, we need to get some people adding tests, you know, yeah.

Q: We just... the test matrix is overblown: single stack, dual stack, dual stack 4-6, dual stack 6-4, and then on a single-stack cluster, on a dual-stack 4-6 and a dual-stack 6-4, and then you have ExternalName, headless, headless with selectors, and then on top of all of that you have to think about Endpoints and EndpointSlices if you are doing connectivity and data path. So the test matrix is overblown. So I'm kind of hoping we just get over that hump and then we'll start looking back and think, okay...

Q: What do we need to do in order to simplify this? Because right now, any test that touches on Service, for example, is a huge test. It's by definition big, not because of anything other than the dictionary of tests that you need to have over the vectors of cluster configurations and the CNIs added.
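To make the combinatorics concrete, a sketch of why the matrix "blows up" is below: every service shape gets multiplied by every cluster configuration, before you even vary Endpoints versus EndpointSlices or data-path checks. The names and categories are illustrative, not the real e2e suite.

```go
// Sketch of the dual-stack test matrix explosion described above.
package e2esketch

type clusterConfig string
type serviceShape string

const (
	singleStackV4 clusterConfig = "ipv4"
	singleStackV6 clusterConfig = "ipv6"
	dualStack46   clusterConfig = "ipv4,ipv6"
	dualStack64   clusterConfig = "ipv6,ipv4"
)

const (
	clusterIPSvc  serviceShape = "ClusterIP"
	headlessSvc   serviceShape = "headless"
	headlessNoSel serviceShape = "headless-without-selector"
	externalName  serviceShape = "ExternalName"
)

// matrix enumerates one case per (cluster config, service shape) pair; the
// real suite additionally varies Endpoints vs EndpointSlices and data path.
func matrix() [][2]string {
	var cases [][2]string
	for _, cc := range []clusterConfig{singleStackV4, singleStackV6, dualStack46, dualStack64} {
		for _, ss := range []serviceShape{clusterIPSvc, headlessSvc, headlessNoSel, externalName} {
			cases = append(cases, [2]string{string(cc), string(ss)})
		}
	}
	return cases
}
```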
A: So we've got 10 minutes left and three more items. I think they might be kind of quick, but let's move on to those. Is H. Bagd on the call?

A: All right. Now I guess you've got two items left on the agenda. Rob, you're...

M: Next, yeah, cool. I just wanted to bring a PR to everyone's attention and make sure we're not missing something obvious. Mike found a really cool, fun endpoints controller bug that is surprising.

M: It hasn't surfaced yet. There's a fun thing where every time you save Endpoints we call repack subsets and it re-sorts everything, and then every time the endpoints controller syncs, it also calls repack subsets and compares the repacked-subsets version with whatever's stored to see if there's any difference. What's happening is those two repack-subsets calls are actually different functions, and in some cases they sort things differently.

M: So you actually end up getting into this vicious cycle of: the endpoints controller thinks it's sorting things one way, it saves it, and by the time it gets to the storage strategy it re-sorts and saves it a different way, all the way through. It's not pretty.

M: This has existed for four or five years now; it's not a new thing. But we're trying to understand if we actually need that sorting inside the storage strategy. It seems bizarre that when we save something we would then go ahead and sort before we store.
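For anyone who wants a mental model of the bug: the sketch below shows, in miniature, why two "canonicalize" passes with different orderings never settle. The controller writes one order, storage re-sorts it to another, so the next sync always sees a diff. This is an illustration of the mechanism only; the real code is in the endpoints repack/canonicalize helpers.

```go
// Sketch of the resync loop caused by two different canonical sort orders.
package repacksketch

import "sort"

// controllerSort: one plausible canonical form (ascending by address).
func controllerSort(addrs []string) []string {
	out := append([]string(nil), addrs...)
	sort.Strings(out)
	return out
}

// storageSort: a different canonical form (descending by address).
func storageSort(addrs []string) []string {
	out := append([]string(nil), addrs...)
	sort.Sort(sort.Reverse(sort.StringSlice(out)))
	return out
}

// equalAfterRoundTrip shows why the loop never settles: what the controller
// writes is re-sorted on storage, so the next comparison fails.
func equalAfterRoundTrip(addrs []string) bool {
	written := controllerSort(addrs)
	stored := storageSort(written)
	for i := range written {
		if written[i] != stored[i] {
			return false
		}
	}
	return true
}
```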
N: Are there really two functions? I remember writing that function, so blame me, and it's a beast of a function, if I recall, but...

N: Yeah, I've got the bug open, but I haven't dug deep on it, Rob. My feeling is, I think, in line with what you said: maybe, despite best intentions, sorting it internally is not a good idea.

N: I don't think we have any other places where we take user input and then mutate it in a significant way and then save it, right? Put another way: if a user writes in something pathological and they read it back and it's now canonical, isn't that surprising to an end user? I would suggest that maybe removing it is the right thing to do. Yeah.

M: Yeah, that's where everyone was leaning. We just wanted to bring it to a broader audience; it hadn't had that many networking people looking at it, so I wanted to make sure the whole community was aware of this, and we couldn't think of a compelling reason not to make this change, but yeah, that's helpful feedback. Thank you.
B: Yeah, that's why it was confusing to me, because it's like, well... so I guess Tim is... so Tim, you're, I think, the last person, you and Antonio. I think the broad issue, for folks that don't know, is that kube-proxy doesn't hot reload. The reason it doesn't hot reload is because of something in the notify wrapper thing that we rely on that, I guess, doesn't send the right signals out when the config map changes the file, and I think someone made a hotfix for it inside kube-proxy.

B: But then I think we started talking about how maybe it should be more generic, and then nobody did anything after we decided it would be more generic, or something like that. So the question is: do we care enough about it to make kube-proxy just do this, or do we want to wait for a more generic fix that would, you know, figure out what needs to be done with the notifier thing and all that?
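For context on the "right signals" problem: a minimal sketch of a file-watch based reload is below, assuming the config file comes from a ConfigMap volume. ConfigMap updates arrive via an atomic symlink swap rather than an in-place write, which is exactly the kind of corner case a naive watcher misses. This is illustrative only, not kube-proxy's actual code.

```go
// Sketch of watching a ConfigMap-mounted config file with fsnotify.
package reloadsketch

import (
	"log"
	"path/filepath"

	"github.com/fsnotify/fsnotify"
)

func watchConfig(path string, reload func()) error {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	defer w.Close()

	// Watch the directory, not the file: on a ConfigMap update the mounted
	// file is replaced through a swapped ..data symlink, so the file itself
	// may only produce a Remove/Rename (or nothing), never a plain Write.
	if err := w.Add(filepath.Dir(path)); err != nil {
		return err
	}

	for {
		select {
		case ev := <-w.Events:
			if ev.Op&(fsnotify.Write|fsnotify.Create|fsnotify.Remove|fsnotify.Rename) != 0 {
				log.Printf("config event %v; reloading", ev)
				reload()
			}
		case werr := <-w.Errors:
			log.Printf("watch error: %v", werr)
		}
	}
}
```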
I: I think that there are two things there. One is what Tim explained in the PR; I remember one guy from China was trying to change that. One is the way that the config map mounts the config file and all this, the problem that the kube-proxy process is not watching for the same things, or something like that. I mean, both are related, but they are different.

N: Yeah, I am trying to remember the details of what the issues were. I mean, fsnotify has several corner cases that have to be handled. The question I have here, I guess, is: how much do we really need this? Do we really want this? Config map updates are interesting and powerful; they're also really kind of scary, because they're not rate limited and they're not rollback-able in an obvious way, like, within GKE...

N: Even a low-impact change is a risk, and by doing it automatically through the config map, it basically is going to update on all of your nodes approximately immediately, and if you made a mistake, then you're going to bring them all down instantly, as opposed to doing it through a rolling update or something, where you'll bring down the first couple and you'll realize, oh shoot, they're not coming back, there must be a mistake. So I don't know.
N: Which is exactly the point where you can rolling-update this, and you'll be better off.

N: Well, this is the trick now: since we've done some things in the past, removing it becomes more difficult, right? So if we want to get rid of it, we'll need to think about a strategy for changing that, and clearly, since somebody tripped over it, they're trying to use it, and so we need to convince them that it's really not better.

N: Okay. I think there's general agreement: we should probably discourage people from doing this. We need to figure out if it's possible to end-of-life it over time.