From YouTube: Kubernetes SIG Network Bi-Weekly Meeting for 20220804
A: Everybody got the announcement? I didn't get the announcement... all right, cool. Welcome to the SIG Network regular meeting. This is Thursday, August 4th, and I'm your host, Tim. As usual, we are subject to the code of conduct, which basically boils down to: don't be jerks; be excellent to each other. That said, we have a fairly slim agenda today. So let's start with triage, which I have queued up. Let me go ahead and...
B: Share that... where did we put it?
A: Everybody got that? All right, so let's start with the newest first, which came in 12 hours ago, just missing the first round of triage that myself and others did yesterday afternoon. So, node port. I've not read this one, so let's read this quickly as a group, or, Antonio, you can give us a brief, since I know you read it.
C: Remember, previously we had a process in kube-proxy where we opened and held the port, they call it the socket. If you hold the socket, nobody else can take it on the host. So it seems that, since we no longer hold the socket, any process on the host can use this node port as a source port. What I think may happen is that some process, using this node port as a source port, sends a request, and something is happening with the iptables rules for the node port that is causing the connection to break.
A: At least on my machine, and I guess this is a question that maybe everybody can cross-check: when I look at the sysctl for the local port range, it's configured by default to be 32768 to 61000, so anything that is used by node ports shouldn't be covered by the ephemeral range. But I don't know how every other distro sets that up.
D: I was gonna say, that's basically right. When we were talking about getting rid of the code, we pointed out that the default value of the node port range and the default value of the ephemeral port range do not overlap, and as long as you don't change that, everything will work. But if you do change that, then you can possibly break things, yeah.
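A minimal sketch of the overlap check being discussed, assuming the default kube-apiserver --service-node-port-range of 30000-32767 and the Linux net.ipv4.ip_local_port_range sysctl; the helper names are illustrative, not actual Kubernetes code.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// portRange is an inclusive port interval.
type portRange struct{ lo, hi int }

func (a portRange) overlaps(b portRange) bool {
	return a.lo <= b.hi && b.lo <= a.hi
}

func main() {
	// Default --service-node-port-range for kube-apiserver.
	nodePorts := portRange{30000, 32767}

	// Read the host's ephemeral (local) port range, e.g. "32768\t60999".
	raw, err := os.ReadFile("/proc/sys/net/ipv4/ip_local_port_range")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var ephemeral portRange
	fmt.Sscanf(strings.TrimSpace(string(raw)), "%d %d", &ephemeral.lo, &ephemeral.hi)

	if nodePorts.overlaps(ephemeral) {
		fmt.Printf("WARNING: node ports %v overlap the ephemeral range %v\n", nodePorts, ephemeral)
	} else {
		fmt.Printf("OK: node ports %v are outside the ephemeral range %v\n", nodePorts, ephemeral)
	}
}
```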
C: What I understand is that he wants to say: I just don't want to work with the sysctls; I don't want to manage the sysctls myself.
A: Yeah, but he also said he uses a lot of node ports, so that makes me think maybe they changed the default node port range. Yeah.
C: Because the example is using 30000-something, so, okay. That said, to me: I want to play with this, and I will.
A: We should document that or something. I would like to say, I would love for us to hold those ports and not let people use them, right? That would be great, but the code to do it was so fraught that, you know, it was better to just get rid of it.
A: So here's what I'll do: I'll assign it to you, but I will also add a response myself. Actually, I'll...
A: Yeah, okay, all right, you got it. Let's dig into it and see if we can figure out a good answer here. Cool. I wish there was a better way, like BPF or something, to say: block this port, consume this port, but don't leave a process running on it, right? All right.
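Short of BPF, there is a kernel knob that approximates this: net.ipv4.ip_local_reserved_ports keeps the listed ports out of ephemeral allocation without anything having to hold them, while explicit binds still succeed. A minimal sketch, assuming a 30000-32767 node port range and enough privilege to write the sysctl; this is an illustration, not something kube-proxy actually does.

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Reserve the node port range so the kernel never hands these ports
	// out as ephemeral source ports; explicit bind() to them still works.
	const reserved = "30000-32767"
	err := os.WriteFile("/proc/sys/net/ipv4/ip_local_reserved_ports", []byte(reserved), 0o644)
	if err != nil {
		fmt.Fprintln(os.Stderr, "failed to reserve ports:", err)
		os.Exit(1)
	}
	fmt.Println("reserved local ports:", reserved)
}
```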
A: Next: for dual-stack deployments, node addresses show only one IP if the node IP parameter is passed.
A: Got it. Next: API server crashes with dual-stack CIDRs if the primary range does not match the default. So I responded a little bit to this one. What they showed me was they ended up with this: they configured their cluster with a service cluster IP range of v6 then v4, but they also added the advertise address as an IPv4 address.
A: So our API server helpfully created the kubernetes service, set it to single-stack, as it should be, set it to IPv6, because that is the first family, so it got allocated an IPv6 address, with no selector, and then we went and wrote an IPv4 endpoint, because that's what they told us to advertise. So this seems clearly wrong.
A: The question then, and my response leaned towards this, is: should we try to use the secondary range, the non-primary range, when we recognize that the advertise address is v4, in this particular case? Or should we just say: hey, whoa, you can't set an advertise address that isn't the same family as the primary service family?
C: Go ahead. The problem is that there are a lot of assumptions about the advertise address, and this happened in OpenShift. I think that every week we had a bug like "this cluster doesn't boot," people spinning around taking logs, until someone takes a look at the report: all right, check it, you have the service the other way around. So that's why we fail fast.
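A minimal sketch of the fail-fast option being weighed here, with a hypothetical validateAdvertiseAddress helper; this is not the actual kube-apiserver validation code.

```go
package main

import (
	"fmt"
	"net"
)

// validateAdvertiseAddress rejects configurations where the advertise
// address family does not match the primary service CIDR family, instead
// of letting the kubernetes Service get an IPv6 ClusterIP with an IPv4
// endpoint, as in the bug report.
func validateAdvertiseAddress(advertise net.IP, primaryServiceCIDR *net.IPNet) error {
	advertiseIsV4 := advertise.To4() != nil
	primaryIsV4 := primaryServiceCIDR.IP.To4() != nil
	if advertiseIsV4 != primaryIsV4 {
		return fmt.Errorf("advertise address %s is not in the same IP family as the primary service CIDR %s",
			advertise, primaryServiceCIDR)
	}
	return nil
}

func main() {
	_, primary, _ := net.ParseCIDR("fd00::/108") // v6-primary, as in the report
	advertise := net.ParseIP("192.0.2.10")       // v4 advertise address
	if err := validateAdvertiseAddress(advertise, primary); err != nil {
		fmt.Println("config rejected:", err)
	}
}
```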
A: No, that would probably work, yeah. So, I mean, my feeling in looking at this: if I was a user, I would expect it to try to use the non-primary range when setting up the API server service, the kubernetes default service, because I feel like advertise address is a more powerful config, and so it's worth, in that case, sort of being adaptable.
A: Okay, the two of you are assigned, if you think it's actionable. I forget who actually wrote that code, I think it was you, Antonio, but if you think it's actionable, we could just write a bug that says, hey, this is the place that we need to change, and mark it "help wanted." We have a lot of people who are looking for actionable bugs, if we just tell them where to go.
A: Okay, all right then! Well, then it's yours. If you confirm which one you think is the correct path to go, then... I'm happy with either path.
A: All right, well, let's take it to the bug. Let's move forward. Here we have a bug report centered on Kustomize that I don't really understand; the report is saying something about labels not working properly with network policy with Kustomize. They have, you know, a suspicious label, which is an integer value, so it's quoted, but I don't really understand.
A: What's not working. So I was responding a little bit with the poster. So... it's Dan Williams? Oh yeah, he just assigned it to you, Antonio. So I can assign it to myself; I'll take this one.
A: This one is someone complaining that there is no external DNS node address for them, and Dan Williams correctly cites that it's not a requirement that it be provided. So I asked for a little bit more information to see if I can help this person, but I think there's no bug here. Let me add myself.
A: And I don't know anything really about MongoDB, so I don't know what it is they're trying to do, but it sounds like the Mongo docs are making some assumptions that maybe they can't. And then the last one is this oldie about documenting the intentions for the source ranges discussion. I just left it open, just to say, you know, it's been a long issue, but it's still open.
A: I don't know how I stopped sharing; I must have accidentally hit the button. Okay, that's good to know. All right, so, agenda triage done. Bowei is number one on the agenda, but we have an issue first. Are you here?
E: Yep. You hear me? Yep. Yeah, sorry, are you presenting? I'm just not presenting.
E: So basically, the issue I filed was that there's a race condition where kube-proxy may start up, read a node, look at its pod CIDR, and cache that value, and then, unfortunately, sometimes the node actually gets deleted and recreated with a different pod CIDR out from under kube-proxy. What happens in this case is that, unfortunately, kube-proxy is also still running, and now everything's messed up. It has basically an inconsistent state.
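A minimal sketch of one way to detect the race being described: remember the pod CIDR seen at startup and exit if it ever changes, forcing a clean restart. The function and restart policy are hypothetical; real kube-proxy uses informers rather than polling.

```go
package nodewatch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/klog/v2"
)

// watchPodCIDR polls the node object and exits the process if the pod CIDR
// it started with ever changes, so a supervisor restarts us with fresh state
// instead of leaving a running proxy with stale rules.
func watchPodCIDR(ctx context.Context, client kubernetes.Interface, nodeName, startCIDR string) {
	wait.UntilWithContext(ctx, func(ctx context.Context) {
		node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
		if err != nil {
			return // the node may be briefly absent while being recreated; keep polling
		}
		if node.Spec.PodCIDR != startCIDR {
			klog.Fatalf("pod CIDR changed from %q to %q; exiting to resync", startCIDR, node.Spec.PodCIDR)
		}
	}, 30*time.Second)
}
```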
E: I think there was some discussion here, I think it was from Dan, about whether or not this is an actual bug versus, like, "don't do that with nodes." Unfortunately, that is the behavior of a certain unnamed major cloud provider.
C: This is not going to be a bug now, because it's mitigated once Alexander adds his code. This is going to be a bug if we demonstrate to them that this happens with the nodes, and...
A: Do we understand why a cloud provider is doing this? Yes, we do. Is it us? Are we doing this? Man, that seems like a bad idea too. Okay, you can fill me in on why we're doing it, if you want. On my long list of docs that I meant to write was something about node identity, and in particular it was...
A: Yeah, exactly, without even turning the node off, it sounds like.
E: Yeah, so it's actually slightly different. If anyone encounters this, it's because the node actually was rebooted, but there's a stale resource, and kube-proxy runs before kubelet is able to update it. So that's how the race happens, at least in our case, versus the kubelet is running and somehow it gets, like, redone.
D: Tim was saying, you know, the pod IP, or sorry, the node IPs... I remember there was this problem with OpenShift years ago, involving VMware, where people would delete a node and then bring it back, and it would get assigned a different IP that time, and in that case we were failing even when things did get restarted. Eventually we fixed that, but it was just, like, impossible (well, not impossible, but really annoying) to make it work if the components were still running.
A: Yeah, sorry, I had a little distraction. This was, somebody said, preemptible VMs, right? Yeah, okay, I remember having this argument with those folks.
A: Okay, so is the real fix here just to fix the node handlers, and then kube-proxy will just adjust? Even though this is not a great idea, it'll work.
A: I agree. Having looked at Dan's minimize-iptables-writes PR, I can imagine these two things intersecting badly.
A: You'd have to reevaluate everything that's held locally, actually.
E: That would be my recommendation. We do need to evaluate how much more surgical we can make that informer enablement, so that it doesn't watch all nodes; it would just watch, like, a single node or something.
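A minimal sketch of the more surgical informer being suggested: restrict the Node informer with a field selector so it lists and watches only kube-proxy's own node. The wiring uses standard client-go options; treat the overall shape as an illustration rather than the actual kube-proxy code.

```go
package nodewatch

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// newSingleNodeInformerFactory builds a factory whose Node informer lists
// and watches only the named node, instead of every node in the cluster.
func newSingleNodeInformerFactory(client kubernetes.Interface, nodeName string) informers.SharedInformerFactory {
	return informers.NewSharedInformerFactoryWithOptions(client, 30*time.Minute,
		informers.WithTweakListOptions(func(opts *metav1.ListOptions) {
			opts.FieldSelector = fields.OneTermEqualSelector("metadata.name", nodeName).String()
		}))
}

// watchOwnNode reacts only to changes of our own Node object.
func watchOwnNode(client kubernetes.Interface, nodeName string, stop <-chan struct{}) {
	factory := newSingleNodeInformerFactory(client, nodeName)
	factory.Core().V1().Nodes().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			// Compare pod CIDR / addresses here and resync (or exit) on change.
		},
	})
	factory.Start(stop)
}
```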
A: Okay, all right. Next on the agenda is exceptions that need review. So we hit code freeze this week. Long live code freeze. We have a couple that seem like they need exceptions, which I will try to look at today, but I'm gonna be honest: my day is slammed today, and I'm on a plane tomorrow at 8 A.M. to Hawaii, so goodbye. So I could use help reviewing these exceptions. I'm going to try desperately to look at them today.
B: This... didn't I approve this one? I don't know. This is a different one, one that I realized this morning when Antonio reviewed my follow-up kube-proxy change that I wanted to do for the HC. I realized that, in fact, we've introduced a regression, I think, which is specifically around the fact that now, whenever there's an unschedulable node, we will also remove it from the LB set. And Clayton raised a PR a couple of years back where he explicitly removed the schedulability predicate from having any impact on the LB set.
B: Okay, specifically because it created a production outage. And looking through his PR and his reasoning, I kind of realized that, yeah, it's completely normal, because schedulability just says that you can't place any new pods.
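A minimal sketch of the distinction being drawn, with a hypothetical eligibleForLB helper: membership in the load balancer backend set should key off node readiness, not spec.unschedulable, since cordoning only blocks placing new pods while existing pods keep serving traffic.

```go
package lbnodes

import corev1 "k8s.io/api/core/v1"

// eligibleForLB reports whether a node should stay in the load balancer
// backend set. spec.unschedulable is deliberately ignored: cordoning a
// node only prevents scheduling new pods, and the pods already on it can
// still serve traffic, so dropping it from the LB set caused outages.
func eligibleForLB(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}
```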
A: And then we have two from Survesh. Are you here? Yeah, yeah.
A: Okay, so I want to be careful that we don't set the bar super, super high for things that are completely alpha-gated. The point of alpha is to be able to try stuff out and make incremental progress. So I'll look at this; I'll mostly be looking for confidence in the gate, and that can let us move forward.
A: So I'll try to look at that today, but if I can't give it a super deep review, then I will approve, and I'll leave the LGTM to Antonio. Or, since you're in Europe and you're gonna go to sleep soon, I'll leave the final LGTM to either Antonio or Bowei.
A: You're a bad example. Okay, and then the last one is... oh, sorry, one question: is the implementation inclusive of the API PR, or are they separate PRs?
A: Okay, cool, I'll shift my energy to that. Okay, sorry, and then the last one is Dan's "minimize iptables-restore input." I haven't seen... I saw there are updates to it.
D: You had suggested doing a mini-KEP, which I did, and I filed that, and... oh, you did. But I was going to say, since all of the heavy lifting for this PR has already merged, if we're going to do a feature gate anyway, then could we just throw this in behind a feature gate and get it into 1.25?
A: Don't we need PRR review for adding...?
D: I mean, you know, it had the existing unit tests that we had added in the previous cycle, but for iptables-save we don't really test much at all. The iptables-save patch makes it so that we don't delete stale rules right away, but it shouldn't matter.
A: That's this one, right? Sorry, you're not seeing my screen: the "minimize input" one.
D: But basically, we had gotten to the point where the only reason we run iptables-save is so that we can get the list of stale rules that we need to delete or change, right? But iptables-save can be very slow if you have tons and tons of rules. So I changed it so that, if you're in a large cluster, we only do that cleanup of stale rules, like, once every sync period, instead of doing it on every single sync, and that saves us from having to run iptables-save every time.
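A minimal sketch of the scheduling idea being described, with hypothetical writeRules and cleanupStaleRules stand-ins; the real logic lives in kube-proxy's iptables proxier and is more involved.

```go
package proxier

import "time"

// syncLoopSketch illustrates the optimization: the cheap incremental
// iptables-restore runs on every sync, while the expensive iptables-save
// based stale-rule cleanup runs at most once per sync period.
func syncLoopSketch(syncPeriod time.Duration, syncs <-chan struct{}) {
	var lastCleanup time.Time
	for range syncs {
		writeRules() // incremental iptables-restore of the current state

		if time.Since(lastCleanup) >= syncPeriod {
			cleanupStaleRules() // the slow iptables-save + delete pass
			lastCleanup = time.Now()
		}
	}
}

// Hypothetical stand-ins for the real proxier operations.
func writeRules()        {}
func cleanupStaleRules() {}
```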
A: I see. I don't think I reviewed that one, but okay, it sounds plausible. But it also does sound scary. Okay, I'll look at this, but honestly, if I were CC, I would be saying no: too late to get a new KEP in, too much risk at this late point. Did you file an exception for this?
A: I want this in, I get it. In fact, I talked with some scalability folks this morning, and they were like, yeah, that sounds great. But I don't know if we should push this one right now.
D: It's not... the only reason I was arguing is, like I said, because the hard part of the PR already merged, or the hard part of the rewrite already merged, and...
A: We do. It's a sort of ambiguous signal if that number drops, right?
A: This is the PRR review, right? This is the stuff that we haven't done, because it was late in the cycle. I love the PR; I think it's really cleverly done, the way you moved all the things around, and then this just becomes a really easy change. But I think we should punt it to 1.26. Okay, okay, cool, so I won't focus on that one today, then, yeah.
C: But what happens with the previous one? Do you want to, Roberto?
A: I didn't look at it. How do you feel about it?
A: All right. I had thrown out the idea that we would look at the KEP dashboard, but maybe we'll see what else... Rob, why don't you go next.
B: Yeah, this is really quick. I just wanted to mention that we have the SIG Network deep dive and intro at KubeCon Detroit this time. It's been a while since we've done one of these. If anyone has content they want to make sure we cover, or wants to be more involved, just reach out to me or Bowei. We want to get more diverse perspectives on this, so if you want to be more involved, just let us know. Are you guys going to do the presentation?
A: Yeah, yeah, I'd love to see us cover all the cool stuff that we've been doing. Hopefully for 1.26 we can actually have a story for the topology and the terminating endpoints and all the KEPs that are sort of paused at the moment. I'd love to unpause those and talk about how they are about to make the world a better place, plus all the Gateway stuff, plus all the iptables minimization stuff. Like, we've got a lot of content to talk about.
A: That's on my screen right now. Okay, where to put the agenda... there you are. All right, so we've hit the code freeze, but we haven't hit the 1.25.0 release. Me personally, I like to take this little interstitial time to look at tech debt and things that we'd like to pay down that are neither new features nor new KEPs. So I wanted to just put it out here that when I get back from my vacation, I intend to just go hunting for some tech-debt, low-hanging-fruit stuff.
A: Stuff that is ugly, that we could start queuing up some PRs for, whether that's logging stuff, although, Dan, you know, I gotta hand it to you: kube-proxy is a much nicer code base than it was six months ago, a lot less low-hanging fruit. But just looking around for old issues that we know we want to fix but haven't been able to fix, those sorts of things. I would encourage everybody, if you've got a few, you know, free cycles and you want to take a breath.
A: I think getting some tech debt paid down in this break makes the next cycle easier to achieve, and we stop accumulating issues. There are plenty of SIG Network tagged issues in the repository. If anybody feels like, "I want to help, but I don't know what to do," you know, let me know, or I'm sure Antonio can help you find stuff, or Dan, or Dan, or Rob, or Bowei. We kind of all know where the bodies are buried. So, like, they can implement a mirroring controller that goes in the other direction, right, Rob?
B: Screen again... share a Chrome tab... where are you... can I search? No, I can't search? Oh, great. What's it called? Apps, of course.
A: That's weird. Okay, all right, as long as you can see it and you can hear me at the same time. All right, let's go backwards from alpha. So I updated it a few weeks ago, but I want to make sure that I got them all. Did anything that went GA get its gates removed? Did we remove the gates for namespaced IngressClass?
A
Is
already
in
this
is
already
in
the
gate,
removed,
column,
sorry,
I'm,
I'm
off
by
one
not
removed
all
right.
Dual
stack
support
is
for
next
cycle,
node
ports,
this
next
cycle
and
load
balancer
classes
next
cycle.
Okay,
cool
did
anything
beta,
go
ga.
No,
this
just
went
beta
right
and
internal
traffic
policy
did
not
mixed
protocol.
A: Yeah, when I hit the button here, it paused my share, I guess. So I don't know how to share and have a microphone at the same time, maybe. Yes, that is super weird: pause, audio, share.
A: All right, there's a separate... there's a pause button and a pause-audio button. Okay, don't use the Chrome web client.
E: Yeah, I don't... as far as I know, nothing happened. Okay.
B: I was just talking about that. What we need for that is a clearer definition of what it means, of what we need for GA. My hunch is that we might need to provide a different option for an algorithm, and probably make our tests a bit more thorough than they already are. I don't think it's going to be major changes, but we need to actually define that in the KEP somewhere.
A: Okay, so when the KEP cycle opens, let's make sure we focus on that. I'd love to get this beta column emptied out. All right, anything alpha that went beta? Network policy status did not, terminating endpoints did not, and I don't think I saw any PRs for these. These are all tagged for 1.26, so, okay. And did anything... sorry, go ahead.
A: All right, this did not make progress; this did not make progress. We've got to figure out what we're really doing there. Dual stack, that's paused.
A: Proxy terminating endpoints... oh, maybe it just didn't get added to the right column. Okay, port range... and, Ricardo, are you here?
A
I'm,
pretty
sure
I
saw
that
go
in
if
I
didn't
approve
it
myself.
Okay,
we
got
a
bunch
of
PRS
open
against
caps.
We'll
have
to
go.
Look
at
these
too.
A
Okay,
that's
the
kept
dashboard,
looking
pretty
good
as
the
cap
window
opens.
Hopefully
we
don't
have
a
whole
too
many
new
ones
and
we
can
spend
our
energy
trying
to
get
these
ones
moved
through
awesome
thanks
for
sticking
with
me,
while
I
figured
out
how
zoom's
new
share
works.
C: One... the main one... my filter is not... it's not even there now. Which one, the service one? No, it's okay, it's the other one.
C: But we have the other one, with the reserved IPs. So what's the deal now? You spend two releases in beta, right, before going GA?
A: For ones that really do need feedback, yes. So for that one you might choose two, but it's not a requirement. I mean, some are more obvious than others, right? In fact, I want to be more thoughtful about whether some of these even need to go to beta at all, or whether, clearly, they could go from alpha with a gate to GA with no gate, right?
A: The goal of alpha is to give the rollback ability, but some of them just don't need to be beta; like, maybe some of the kube-proxy cleanups don't need to go to beta.
B: A question, actually, about that, Tim: now that it's changed so that, by default, beta is not flagged on, instead of how it was flagged on by default in the past... oh.
A: Right then, we will see you all in two weeks' time, and if you've got time in the next two weeks, let's look at some tech-debt pay-down. We certainly have some debt, I know we do; it's buried in there somewhere. It's hard to find, I'm sure, but we'll find it.