From YouTube: SIG Network Bi-weekly meeting for 20220217 - Part 2
B
With a regular report, the network, put in the network, and a port, and in host, right, I'm going to find a way to test it and report back on this, and then from there we can make a decision on whether we want the PR and update the docs, or change the PR, or just go with the PR.
C
Yeah, I think, Antonio, I think you said it: this is mostly people who are playing with Raspberry Pis and stuff. I have a suspicion that that's true, and so I don't feel bad saying you really shouldn't be changing the node IP while things are running on it. Like, we haven't so far made a really strong statement about node identity, but I would say that the node's IP is a pretty important factor in it.
A
The thing is, I'm not against something, not diminishing the people. What I'm saying is: we have a stable case, which is the node IP, and everything works, and we have these new cases popping up, in essence, that we really don't know if they are useful. And if we make this change without enough data, we can... we can...
E
I mean, self-configured addresses, right? They change very, very fast. The way we had to solve it is that I put on the loopback, where I put the node address, and then I start having to route all the traffic over whatever interface address the NIC has, so they always route to the node, which will have a stable address. That's... it's not trivial.
E
Yeah, I mean, if you have IPv6 and you get your address... you build your address from... I mean, you get the prefix from the router and then you build your own address, and typically you change that address every 30 minutes so you will not be able to be tracked, right? So you add a new address, and after a while you remove the old addresses, right? So, so...
D
Want to watch him do it; I don't want to learn how to do it. Just a quick time check: is it possible for me to get the last 10 or so? Yeah? Let's give Rob his time? Yes, yes, awesome! All right, thank you. Rob, it's yours; you should be able to present if you want. Cool, thanks. Hopefully everyone can see this. The last time I tried to explain this...
D
It was really complicated, and I tried to add some very basic visuals, which will still be complicated. But there's a bug, in the endpoint... the EndpointSlice API, the controller, somewhere in between, and if you are unlucky enough, it can cause downtime. So I want to try to explain the scope of this bug, and then there are two options.
D
Both options we have for mitigation are already PRs, but I'd like to try and decide which is the least bad of those. So this is related to the transition between v1beta1 and v1 of the EndpointSlice API.
D
So, for those of you who remember the history of Kubernetes, that was between 1.20 and 1.21. As part of that transition, we really wanted to remove the topology field, which was this relatively unbounded map[string]string field. We almost got it right, but not quite. So anyway, in Kubernetes 1.20 we had one release to prepare for this, and as part of that we wrote to both fields, essentially. So the EndpointSlice controller, if possible, would write to both the new nodeName field and the old topology field, etc.
D
This is actually not the correct code, but anyway, it would write to both. The problem was that the new nodeName field we added was gated by an alpha feature gate. So unless someone was, you know, using all alpha features, it was generally meaningless. So the new nodeName field was not effectively written to until after we upgraded to 1.21, which is where we start running into issues.
D
Okay, so this is most likely to occur when the very first operation the EndpointSlice controller does after an upgrade to 1.21 is purely additive. So the thing that comes in is just a new endpoint: there are no changes, there's no readiness, there's nothing else. It's just, I'm adding a new endpoint to this service.
D
And then you can see that strategy code from before drops nodeName information from previous, unused endpoints. Okay, so most anyone is probably familiar with networking components that rely on nodeName; the most commonly used one is kube-proxy, for external traffic policy. And so I tried to explain the worst-case scenario I can think of for external traffic policy.
D
External traffic policy will only route to endpoints that kube-proxy considers local, and it considers an endpoint local when it has a nodeName. So what will happen is that only endpoints with the nodeName present will be routed to. So, worst case: a v1beta1 EndpointSlice with 99 endpoints has one endpoint added to it, and in that case you've gone from routing your traffic across 99 endpoints to one.
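The failure mode just described can be sketched with a toy version of the local-endpoint filter. The `Endpoint` type and `localEndpoints` helper here are invented for illustration, not kube-proxy's actual code; a nil node name models the field the buggy update drops.

```go
package main

import "fmt"

// Endpoint is a toy stand-in for a discovery endpoint: an address plus an
// optional node name (nil models the nodeName dropped by the bug).
type Endpoint struct {
	Addr     string
	NodeName *string
}

// localEndpoints mimics what a proxy honoring externalTrafficPolicy: Local
// does: keep only endpoints whose node name is present and matches this node.
func localEndpoints(eps []Endpoint, thisNode string) []Endpoint {
	var out []Endpoint
	for _, ep := range eps {
		if ep.NodeName != nil && *ep.NodeName == thisNode {
			out = append(out, ep)
		}
	}
	return out
}

func main() {
	n := "node-a"
	eps := []Endpoint{
		{Addr: "10.0.0.1", NodeName: nil}, // nodeName lost in the buggy update
		{Addr: "10.0.0.2", NodeName: nil}, // nodeName lost in the buggy update
		{Addr: "10.0.0.3", NodeName: &n},  // the newly added endpoint
	}
	fmt.Println(len(localEndpoints(eps, "node-a"))) // 1
}
```

With 99 endpoints stripped of their node name and one freshly added, every node's filter collapses routable traffic onto the single endpoint that still carries a nodeName.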
D
The good news is that on the very next EndpointSlice update everything is restored to normal, because it's a traditional update operation. So usually, in what I've observed, this occurs between an unready state and a ready state, so it can be pretty quick; it can be, you know, a few seconds. But that's not guaranteed: there are a variety of things that could slow that process down.
D
So be aware of that, and of course different components may respond to this differently and not recover as quickly as kube-proxy would in this case. There are a variety of other things that are tied into external traffic policy, so you can imagine, and other config, so I'm not trying to get into all the different things this could break. But this is one very clear one that I think affects everyone.
D
We actually got lucky. I don't have time to cover this, but we can get back into details in a little bit. I really want to focus, since we have like four minutes left, on two potential mitigations here. They're both very similar, but the first option is: we retain nodeName in that deprecated topology field.
D
The downside with this approach is that we live in this hybrid state for a while. We've gone from a place where you just can't populate the field at all in the v1 API to where you can kind of do a few things with it. The second option is we just update the EndpointSlice strategy to copy the nodeName from the deprecated place to the new nodeName field. This is likely better.
D
So that's like a whirlwind tour of this bug. We're hoping to get this patch included as soon as possible, but does anyone have a preference on mitigation option one or two, or questions on any of what I just explained?
D
I don't think it's a huge risk, but basically, if for some reason the value in the topology nodeName does not actually pass nodeName validation for the new field... because that's a big part of this problem: the value in the topology nodeName was not validated at the same level that the new nodeName field is, so we couldn't just do a direct conversion.
D
So what this proposes is: we take whatever was in topology and copy it over, as long as it passes validation; but if it doesn't pass validation, we just drop it. So there's a chance that there are some values in that field that don't pass validation, that we would drop, but that could maybe be useful. I think that's a real edge case, but it is not zero.
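Option two's copy-with-validation step might be sketched like this. The topology key is the real well-known hostname label, but the `nodeNameFromTopology` helper and its regexp are simplified stand-ins for the actual EndpointSlice strategy and node-name validation code.

```go
package main

import (
	"fmt"
	"regexp"
)

// hostnameLabel is the well-known topology key that carried the node name
// in v1beta1 EndpointSlices.
const hostnameLabel = "kubernetes.io/hostname"

// validNodeName is a simplified stand-in for Kubernetes' real node-name
// validation (roughly a DNS-1123 subdomain check).
var validNodeName = regexp.MustCompile(`^[a-z0-9]([-a-z0-9.]*[a-z0-9])?$`)

// nodeNameFromTopology copies the deprecated topology value into the new
// nodeName field when it would pass validation, and drops it otherwise,
// which is the behavior described for mitigation option two.
func nodeNameFromTopology(topology map[string]string) *string {
	v, ok := topology[hostnameLabel]
	if !ok || !validNodeName.MatchString(v) {
		return nil // absent or invalid: drop rather than store bad data
	}
	return &v
}

func main() {
	good := map[string]string{hostnameLabel: "node-a"}
	bad := map[string]string{hostnameLabel: "Not A Node!"}
	fmt.Println(nodeNameFromTopology(good) != nil) // true
	fmt.Println(nodeNameFromTopology(bad))         // <nil>
}
```

The edge case discussed next is exactly the second branch: a value that fails validation is discarded even if some consumer might have found it useful.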
C
Get all my stuff turned on. So it is possible that someone could have stored invalid data in that label value. The use cases we were talking about, where the hostname label is actually used for things like node-local routing, anything using that to compare to actual node names: the invalid data would never have been a match anyway, right? That was my first thought too. Like, we could just waive the validation there entirely, because if it's bogus it's not going to match a node name, so it won't be useful. Or we could say, well, it wasn't useful in the first place, so just throw it away. Yeah, so I think the risk there is really low; there's a question of whether we should just retain garbage or throw the garbage out.
C
We're dropping the deprecated topology map entirely. Like, if a v1 client makes changes to the object, we say: you're a v1 client, you understand v1 semantics, this deprecated field is not writable via v1. The partial-update case is where a client just appended and thought that all the existing stuff would be preserved.
D
Yeah, that's where I've been leaning too. I know we're at time, so I don't want to delay anything, but if you have ideas or feedback, I guess comment on one of these PRs, or there's an issue I linked in the agenda itself.
C
One thing I didn't see called out that does favor option two is the impact on clients. Clients consuming this API, reading from this API, likely do not expect objects that are a mix of old and new fields, and so the more we can do to spare them that weirdness, the better, I think.
D
Yeah, I think so. Right now, every client that I'm aware of unfortunately has to read from both fields, and it favors the old field. The problem with that is, with v1 we said: okay, well, you don't need to care about the old field after a certain point. And if there was still this possibility that we could populate the old field, you'd need to care about the old field indefinitely, or much longer anyway.
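The dual read that clients are stuck with might look roughly like this. It is a sketch, not any particular client's code; the `Endpoint` type and `effectiveNodeName` helper are invented, and the old-field-wins ordering reflects what the speaker says today's clients do.

```go
package main

import "fmt"

// hostnameLabel is the deprecated topology key that carried the node name.
const hostnameLabel = "kubernetes.io/hostname"

// Endpoint is a toy stand-in carrying both the deprecated topology map and
// the newer nodeName pointer, as an endpoint might during the transition.
type Endpoint struct {
	Topology map[string]string
	NodeName *string
}

// effectiveNodeName shows the fallback logic clients currently carry:
// prefer the old topology value, then the new field. As long as the old
// field can still be populated, this code can never be deleted.
func effectiveNodeName(ep Endpoint) string {
	if v, ok := ep.Topology[hostnameLabel]; ok {
		return v
	}
	if ep.NodeName != nil {
		return *ep.NodeName
	}
	return ""
}

func main() {
	n := "node-b"
	fmt.Println(effectiveNodeName(Endpoint{Topology: map[string]string{hostnameLabel: "node-a"}})) // node-a
	fmt.Println(effectiveNodeName(Endpoint{NodeName: &n}))                                         // node-b
}
```

Option two lets clients eventually delete the first branch, which is the point being made here about not resurrecting the old field.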
D
No, we just need some approvals on this. Actually, I think option two is from Jordan, so we probably don't need that many approvals; I would like other approvals and eyes.