From YouTube: Kubernetes SIG Network Bi-Weekly Meeting for 20220317
C
Yep, looks good. All right, so we have... we did a lot of filtering beforehand, so thank you. We have Dan Winship's treatise on external versus internal; it's actually a thrilling read, and I didn't get all the way through it in the prep time.
C
It's either weird and awkward, or consistent and sort of broken in this corner case. There doesn't seem to be a much better answer, so I'm leaving this one open. Actually, I'm going to remove the triage flag from it, because it is clear that it needs to be done.
A
But yes, they seem to be the same issue. Are they at least linked to each other in GitHub? There's a reference from one to the other. Okay, yeah.
C
Next was about graceful termination and TCP versus SCTP versus UDP, which I see is on the agenda for today. Yeah.
C
Okay, so we're going to discuss that in a few minutes, so I won't close this yet; I mean, I'm not going to close it. It's not really a bug report, and it's not really a proposal yet either, so we've got to figure out whether this is a bug report and/or what we're going to do with it. The last one that needed attention today was this one, which is a couple of months old, and it's still not clear to me what's going on here.
C
This is the case where it wasn't clear whether they're trying to do kubectl logs and not getting a connection all the way through, but only when they're using IPv6. There were some follow-up questions two weeks ago that the original poster actually did answer, and I didn't get back to it, so I guess this one's on me; I'll assign it to myself.
E
But there is another one that is reporting the same thing: that they are not able to exec into some... yeah, that one, this one, yes, but they only fail with one of the endpoints. I don't know; there was a lot of investigation there, and I was trying to reproduce it and it worked for me. I don't know, and it works for you... whoa.
C
I'll just stop sharing that and let the agenda carry the day.
A
Yeah, let's just continue with the agenda. So, Antonio: "SCTP, to flush or not to flush conntrack" is the first item on the agenda.
E
Okay, this is one of those things where the theory doesn't match the reality. So I asked one person who works with SCTP, and he commented on the issue, and he says that there are devices where the stack or something doesn't behave as expected, so it's better to flush the conntrack entries. And the last comment in the issue is from one person I was checking, and he's working in...
E
I don't know what the best thing is. The other alternative I was thinking of is to have a field in the Service spec to say graceful shutdown or not, and let the proxy choose the behavior based on that field, because we also had reports of people who want TCP to flush, to not shut down gracefully. And well, that's the current status.
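For context, a minimal sketch of what such a knob might look like. This is purely hypothetical: no such field exists in the Service API today, and the name and values here are illustrative only.

```go
// Hypothetical sketch only: this field is NOT part of the Kubernetes Service
// API. It illustrates the idea being floated above, letting the Service
// declare whether proxies should drain connections gracefully or flush them.
package v1sketch

type ConnectionDrainPolicy string

const (
	// Keep established connections until they close on their own.
	ConnectionDrainGraceful ConnectionDrainPolicy = "Graceful"
	// Flush conntrack state so clients fail fast and reconnect.
	ConnectionDrainFlush ConnectionDrainPolicy = "Flush"
)

// ServiceSpecSketch shows where such a field could live; a proxy
// implementation would read it and pick its cleanup behavior accordingly.
type ServiceSpecSketch struct {
	ConnectionDrainPolicy *ConnectionDrainPolicy `json:"connectionDrainPolicy,omitempty"`
}
```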
E
Yeah, but this is for the people who have two databases in an active-passive setup, and they want to move to the passive one just by switching the endpoint, you know. The active one goes to passive, but it's not shut down, so they want the endpoints to do the switchover.
J
If someone tries to hit it, it really depends on your connection tracker, because from one side you're going to come in with a five-tuple that doesn't include a SYN, and that's going to go in, and you're going to check whether you have a connection, I mean a five-tuple, that matches, right. And then it really depends on how you set it up: whether you send back a reset or it just gets dropped into a black hole.
E
Right, or something similar, yeah. But before we go to the edge cases: the thing is, we have three protocols. With one of them, TCP, we have people who wanted to do this flushing, and we have a newer protocol that seems somewhat obscure on the internet, where we have some assessment that it's better to flush.
C
My question... I mean, first of all, you know my bias is to not add API if we can avoid it, right. But it sounds like if we do this shutdown-from-the-middle for TCP, at least, it's not really going to be a shutdown: the client's not going to get a reset, they're just going to get a timeout; they're going to sit there for hours or days waiting for something to happen.
J
That seems useless to anybody. Absolutely, and if they send, it's going to go into a black hole; in the best case they will get a reset back. Okay, then! Well, it's unspecified, really, what happens, right? It depends completely on the implementation of the load balancer, so to speak.
D
This is what happens when you... and then we can figure out, okay, is the SCTP case doing the right thing or not, and then we can figure out if we can actually make it do the same thing as TCP. And if we can't, or if it doesn't work, well, then, you know, figure out if we need an exception. I feel like everybody's talking about the details of specific...
J
I seem to remember, and someone can probably correct me, but one of the problems is that people lock the source port when they set up SCTP connections, and that means that if there's no connection tracking and something happens, the timer, as it is now, based on the UDP one, can sit for 30 minutes. So that means they cannot reconnect; actually, for every packet they send into the connection tracker, they will just push out the point where they can reconnect.
E
Okay, but I'm just reading the comments. I mean, I'm here trying to reach a conclusion, because I think that we have two options today: keep the current behavior or change it.
C
So, just to echo maybe what Dan was saying: we've never really written down what we think the semantics should be when you delete a Service, with respect to open connections, right. Or, honestly, we've never really written it down for when an endpoint goes away either, right. And there's a question, I swear I saw it in one of the bugs this morning but I can't find it, of: are those actually the same semantic, or are they different things?
J
I think they're different, because when an endpoint goes away, you can still have active endpoints there. So, I mean, if you just look at the UDP case, where someone has... basically, someone has locked the source port, which I think we all agree is bad behavior, but it is permitted. And when you do that, and the other endpoint goes away, you cannot re-establish a session, because the connection tracker keeps remembering this, and for every packet it will update the counter and it will send the packet to the endpoint that doesn't exist anymore.
J
If you're unlucky, if you don't clean it up properly in the connection tracker... so we really need to make sure that everything gets cleaned up in the connection tracker; then it should at least be able to re-establish. And I think that's one of the key things to get this to work, hopefully, and to have that well defined and implemented everywhere.
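As a rough illustration of the kind of cleanup being discussed, here is a hedged Go sketch that shells out to the conntrack(8) CLI to delete UDP flow entries whose original destination was a removed endpoint IP. kube-proxy does something along these lines internally, but the error handling and exact flags here are only an approximation.

```go
// Sketch: clear stale UDP conntrack entries for a removed endpoint IP by
// invoking the conntrack(8) CLI. Approximate; real proxies wrap this with
// more careful error handling and also handle per-port and NAT cases.
package main

import (
	"fmt"
	"os/exec"
)

func clearUDPConntrackForIP(endpointIP string) error {
	// "conntrack -D --orig-dst <ip> -p udp" deletes matching flow entries.
	out, err := exec.Command("conntrack", "-D", "--orig-dst", endpointIP, "-p", "udp").CombinedOutput()
	if err != nil {
		// conntrack exits non-zero when no entries matched; a real
		// implementation inspects the output and treats that case as OK.
		return fmt.Errorf("conntrack -D failed: %v (output: %s)", err, out)
	}
	return nil
}

func main() {
	if err := clearUDPConntrackForIP("10.0.0.42"); err != nil {
		fmt.Println(err)
	}
}
```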
K
There's an example in this issue a bit further up, on TCP, where they actually used to set the endpoint not-ready when its thread pool has run out, and when they have threads again they set it back to ready and expect the endpoints to be re-established. But all existing connections should keep working, whether the endpoint is taken away or not.
K
It was quite an interesting use case. It will not work on IPVS, yes, but it will work in the iptables proxy mode.
L
What I'm trying to say is, the assumption that everyone is making is that people do this because they are trying to gracefully shut things down. But an argument can be made that people do it because they want to kill everything, either because they are under attack or something is majorly wrong, and they want to immediately block everything. So intent matters, all right?
L
I am trying to drive the discussion to a higher level. We have to sit down and discuss the intent, and make sure that once the intent is set, the implementation is easy to follow. But right now we're singularly looking at it from one point, which is: should we allow graceful termination or should we not.
J
So what you're saying is, on the Service, really: should we have a grace... I mean, delete is pretty brutal, but should you have an admin state or not, where you bring it down, I mean you lock it, and there would be a graceful period. Is that the type of behavior we should have? Yeah, I...
C
So, just for the sake of time, maybe the thing to do to move forward is actually to just write a little bit on what the clean-slate assumptions should have been and see if there's a way to drive towards those. I'm afraid that the things people might have naturally expected will be breaking changes, but we can look at those individually. Specifically, the edges I would be interested in here are: an endpoint becomes unready, an endpoint becomes terminating, an endpoint is removed entirely from the endpoint set.
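For reference, the first two of those states map roughly onto the EndpointSlice conditions (Ready, Serving, Terminating); removal is simply absence from the slice. A hedged Go sketch of how a consumer might classify an endpoint, assuming the discovery/v1 types and the documented convention that a nil Ready condition should be treated as ready:

```go
// Sketch: classify an EndpointSlice endpoint into the states discussed
// above, using the discovery/v1 conditions. "Removed entirely" is not a
// condition; it is the endpoint no longer appearing in any slice.
package sketch

import (
	discoveryv1 "k8s.io/api/discovery/v1"
)

type endpointState string

const (
	stateReady       endpointState = "Ready"
	stateUnready     endpointState = "Unready"
	stateTerminating endpointState = "Terminating"
)

func classify(ep discoveryv1.Endpoint) endpointState {
	terminating := ep.Conditions.Terminating != nil && *ep.Conditions.Terminating
	// Nil Ready is interpreted as ready, per the API documentation.
	ready := ep.Conditions.Ready == nil || *ep.Conditions.Ready
	switch {
	case terminating:
		return stateTerminating
	case !ready:
		return stateUnready
	default:
		return stateReady
	}
}
```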
J
In the issue, or in the link in the agenda? I don't know where to look. Oh.
C
I was going to capture it in the issue. I was just going to go straight to the end of #108523 and write out what I just said, more or less.
E
The next item is about kube-proxy. There is one issue reported by one person; the person wants to bind the healthz and metrics addresses to the node IP. The problem is that the flags are overridden by the config, so it picks the default addresses, and he com...
C
So I remember discussing this a very long time ago, before the component config effort even kicked off, before it died, and I remember making the same argument, but there were reasons, which I cannot recall now, why that represented a break for some cases. Maybe we're past that now; maybe we've had config files around for long enough that we can actually make this argument, because I agree, the semantics should be that flags override config.
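A hedged sketch of the precedence being argued for, assuming spf13/pflag: load the config file first, then overwrite only the fields whose flags the user explicitly set on the command line. The struct and wiring here are illustrative, not kube-proxy's actual code, though the two flag names are real kube-proxy flags.

```go
// Sketch of "flags override config" precedence: the config file provides
// defaults, and only explicitly-passed flags win over it.
package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

type proxyConfig struct {
	HealthzBindAddress string
	MetricsBindAddress string
}

func main() {
	fs := pflag.NewFlagSet("proxy-sketch", pflag.ExitOnError)
	healthz := fs.String("healthz-bind-address", "0.0.0.0:10256", "healthz bind address")
	metrics := fs.String("metrics-bind-address", "127.0.0.1:10249", "metrics bind address")
	_ = fs.Parse([]string{"--healthz-bind-address=192.0.2.10:10256"})

	// Pretend these values came from the --config file (e.g. a ConfigMap).
	cfg := proxyConfig{HealthzBindAddress: "127.0.0.1:10256", MetricsBindAddress: "127.0.0.1:10249"}

	// Flags the user actually passed win over the config file.
	if fs.Changed("healthz-bind-address") {
		cfg.HealthzBindAddress = *healthz
	}
	if fs.Changed("metrics-bind-address") {
		cfg.MetricsBindAddress = *metrics
	}
	fmt.Printf("%+v\n", cfg)
}
```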
E
The flags-versus-config thing... yeah, and another person tried: they put up a KEP, and Jordan complained because of kubelet, I don't know, he was sharing some experience that flags overriding config had been problematic, and that got stuck. But the thing here is that the hostname clearly has to be provided per node, and the point of this person, and I agree with him, is that the bind address also has to be per node.
E
I mean, it's not like you can configure an IP for every node in a configuration file, because this is usually used with ConfigMaps, so there is no way to have it configurable per node.
E
Yeah, that's right. So the problem is how a user can configure kube-proxy bind addresses per node, because if you look at the issue and you see his manifest, it's pretty clear what he's trying to do. That's why I bring this up, because I think this is a very legitimate ask.
C
The example of the hostname override: it's named "override", so it seems pretty clear, although it didn't originally mean "override the config file", but it still sounds like it was meant that way, you know. Do we need a net-new parameter here, a bind override, which we can then say isn't even in the config file, so that if you specify it as a flag it will trump the config file?
C
I would personally be okay adding that new parameter if we need to, or making the bind be a CIDR, an IP or a CIDR. If it's a CIDR, then use that. Does that satisfy the requirement?
C
Those addresses aren't specifically bound to local interfaces, though, right? I guess they...
A
What about external IPs? We've had some issues where administrators chose external IPs that were within the node's subnet. In this case, would there be a way to exclude that external IP from the list? It doesn't seem like it. We think that's silly, but they have done it, and somehow we don't prevent that. I mean, I guess...
A
I mean, it seems like this is the way forward: to at least figure out whether we do this already and, if not, at least be consistent with other components.
A
Okay, okay. Next one: Andrew, proxy terminating endpoints.
N
Yeah, so I wanted to talk about what we're going to do for proxy terminating endpoints for 1.24, which kind of overlaps with the first topic.
N
So when we merged the KEP, we kind of said we're going to apply the fallback policy to both external and internal traffic policy, including when the policy is Cluster, and I know some people kind of disagreed, or they weren't sure if that's the right approach. Basically, we said we'll write the code in the PR and then talk about it, so I have a PR open that implements the fallback policy for all the cases, really, and I wanted to open up discussion to see where we still stand.
D
So I've added a comment to that PR since the meeting started, summarizing the previous discussion, which I found in an earlier PR. But basically, my argument was that proxy terminating endpoints exists for Local for a very specific reason, to avert a certain race case, and that argument doesn't apply at all to the Cluster case. And then Tim's argument was that it's simpler and easier to document that way.
N
Yeah, the biggest convincing point for me, this is something Tim mentioned in another thread, is that it can't be worse than dropping the traffic. Even if it seems unlikely or not used as much, the worst-case scenario is still better than what we already had. So I tend to agree.
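A hedged sketch of the fallback idea under discussion: prefer ready endpoints, and only if none exist fall back to endpoints that are terminating but still serving. This illustrates the policy, not kube-proxy's actual implementation.

```go
// Sketch of the "fall back to terminating endpoints" policy: use ready
// endpoints when any exist; otherwise use endpoints that are still serving
// while terminating. Illustrative only.
package sketch

type endpointInfo struct {
	Address     string
	Ready       bool
	Serving     bool
	Terminating bool
}

func selectEndpoints(all []endpointInfo) []endpointInfo {
	var ready, servingTerminating []endpointInfo
	for _, ep := range all {
		switch {
		case ep.Ready:
			ready = append(ready, ep)
		case ep.Serving && ep.Terminating:
			servingTerminating = append(servingTerminating, ep)
		}
	}
	if len(ready) > 0 {
		return ready
	}
	// Fallback: routing to a draining endpoint beats dropping the traffic.
	return servingTerminating
}
```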
F
Awesome, thank you. So this one should be pretty quick. It's just a question of, hey, this has been there for a long time, and I pinged Andrew Sy Kim and said: is this something we can just... yeah, is this a fix for a bug, and is it unambiguous? And he said, remember we were discussing this last time, exactly what happens when node IPs change. And I said, darn, okay, I've got to bring this to SIG Network to make sure we aren't going to mess something up big time. And Andrew...
N
Yeah, my take on this was that the route controller change we make should be aligned with what we do for host-network pods: if a node IP changes, do we change the pod IP status of the host-network pod? And if we do, then I think the route controller should also accept node IP changes.
A
Is this a live change on the node? Like, does the node change IP while it's running, or does the node change IP after it reboots?
N
My understanding is it could be both. Like, if you do a reboot but the node isn't deleted, or it's recreated with the same name, then from the Kubernetes standpoint it's the same kubelet. So, oh yeah, then both.
A
Right, right, that's what I'm trying to get at. You know, to Andrew's question: if the node reboots, then kubelet should update the IP address in all host-network pods, right? Yeah, and that works today.
L
That's a good point, Andrew, because I didn't consider the route controller when I was thinking about these.
L
It doesn't happen in clouds, yes, right, but where it happens is on either bare metal or private clouds like ESX environments and so on. They tend to have that, and even the person who submitted the issues that triggered the host-network pod IP work, I asked point blank: where is that? Because I haven't seen it in a cloud. And it's like, yeah, it's bare metal.
J
I was going to say, typically, I mean, I work with bare metal, right, and we have another way to solve it. Typically this is because someone changes the network you're on, and your attachment point has been your node address, and it changes, and you have to change it. Our way of solving it is to always use a loopback-type address for the node address and then always route all the traffic.
J
So I don't care what address is used on the physical interfaces, and that way I don't have to change the node address. But typically it's that the network team says we're changing this network, be that management or anything, and that's what you have had as your node address, and you want to do this live, so you want to update it. That's the typical use case.
N
Go ahead, sorry, some echo. So, to Tim's point, in the typical case I guess you would just slap DNS on that, but the route controller doesn't support that; it needs an actual IP, which is where the specific problem comes up. Is there a way to, maybe, update the route controller to use the internal DNS name from node status? I don't know how feasible that is.
O
Cool, yeah, I think both of mine are pretty quick. For those of you following Gateway API: we released v1alpha2 last year, and now we're finally getting into that final stretch where we're feeling confident enough to go to beta. I think we've hit basically all of our release criteria. What we're proposing is a beta that includes Gateway, GatewayClass, and HTTPRoute. There are other resources in the API that we're planning to leave at alpha, because they don't have the same level of stability that these core resources have.
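For anyone who hasn't tried it, a minimal sketch of the core resources mentioned here, built with the v1alpha2 Go types; the class name, listener, and hostname are placeholder values, and a real route would also set parentRefs and rules.

```go
// Sketch: a minimal Gateway plus HTTPRoute using the v1alpha2 Go types.
// Values such as the gateway class name and hostname are placeholders.
package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	gatewayv1alpha2 "sigs.k8s.io/gateway-api/apis/v1alpha2"
)

var gw = gatewayv1alpha2.Gateway{
	ObjectMeta: metav1.ObjectMeta{Name: "example-gateway"},
	Spec: gatewayv1alpha2.GatewaySpec{
		GatewayClassName: "example-class", // provided by your implementation
		Listeners: []gatewayv1alpha2.Listener{{
			Name:     "http",
			Port:     80,
			Protocol: gatewayv1alpha2.HTTPProtocolType,
		}},
	},
}

var route = gatewayv1alpha2.HTTPRoute{
	ObjectMeta: metav1.ObjectMeta{Name: "example-route"},
	Spec: gatewayv1alpha2.HTTPRouteSpec{
		Hostnames: []gatewayv1alpha2.Hostname{"example.com"},
	},
}
```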
O
API reviewers, I know code freeze is coming up, so I recognize it's a busy time, but the API we're proposing to graduate to beta is nearly identical to the v1alpha2 API. We've had broad implementation at this point from lots of different projects, which is great, and so far feedback has been promising.
O
So just keep an eye out for that. We have weekly meetings on Mondays, so if you're interested in being a part of that, definitely come join us. As we know, APIs last for a really long time, and in Kubernetes, when we get to beta, those APIs last for a really long time. So if you can help us find any mistakes along the way before we get to beta, that would be great. Yeah, that's Gateway API.
O
Well said. And we've got an implementations page; I lost track, but I think we have somewhere between five and eight implementations of the API right now, so feel free to play around and tell us what you find. Yeah, the other item I had was just a follow-up; I think others had raised this issue, and that's updating our SIG Network meeting time. Someone had asked about it in the sig-network Slack yesterday or today.
O
I don't really have much context here, other than it sounded like we had been leaning towards 11 a.m. Pacific on Wednesdays, and then it kind of got stuck. But I don't remember.
C
I let it fall off my plate. Okay, yes, I think you were right. There was some question about whether it was better at, like, 9:00 a.m. or 11 a.m., but the truth is, given that we have folks here from the US, from South America, from Europe, and folks who would like to attend from Asia, there is no perfect time zone that's going to work for everybody. So we can either do what some of the other SIGs do and alternate, making it sort of clumsy for some people every other week and awkward for the rest of us every week, or just apologize to whatever our smallest constituency is.
C
Go to every other week, yeah, okay, I'll put it back on my plate. I forgot; I wrote a sticky note and stuck it on my screen, so now I won't forget it until it falls off again.
O
Yeah, so we tried that; we've tried lots of things with Gateway API. We did try the alternating meeting times, one week that was Europe-friendly and then the next week would be Asia-friendly. In the case of Gateway API, we ended up with a meeting time that is more APAC-friendly than Europe-friendly, just because the number of people attending our Europe-friendly time slot was a subset of who was attending the APAC-friendly slot.
C
I don't know, right. I'll try to revive that thread, although, honestly, maybe it'll be after code freeze.
N
Ton of time... just want to poke for review on the amp API PR. I guess code freeze is probably more important, because this isn't necessarily affected by code freeze, so I get it if some people are putting it off, but I just want to keep poking it so we can keep momentum there. Kcd gave a really good review, so thanks for that so far; otherwise we haven't had anything else. So yep, that's all I had, thank you.
N
Yeah, no, totally understandable, just want to keep it there so that it doesn't get lost.
E
I just... because Andy was saying that, I mean, it's two weeks until code freeze, and that means...
C
Yeah, I'm going to try not to let it all sit until the last second. So most of my time for the rest of next week and until code freeze will be spent on this; I've got about 40 PRs assigned to me, although some of those are long-term stale.
D
Yes, oh yeah, she's on vacation, but that will merge when she gets back, probably.
O
So I did make a mistake for topology, where I forgot to switch the default on in 1.23, so it went to beta and wasn't on by default; that's changing in 1.24. But otherwise I don't think anything else is changing for this cycle.
C
All right, I'll follow up on these issues after my last meeting.