From YouTube: 20211117 SIG Arch Prod Readiness
A: All right, hello everybody. This is the Kubernetes SIG Architecture Production Readiness subproject meeting for November 17th, 2021, and today we will hear from David Eads, who has done an analysis of our survey from earlier this year. David?
C: Sure. So we issued a survey back in Q2. It's taken a little while to collect the responses and then analyze them, but we can now compare those results. What you're seeing here on the left is the survey from 2021, tying it as closely as I can to the survey from 2020, so that we can look at what has changed over the course of the last year. We couldn't do that with the last survey results, and I'm not clever enough to put them on one page.
C: So, starting off, we can see the people who responded: we got a lot more respondents, which was very good, and the sort of people that responded are actually very similar.
C: The number of clusters that are under management actually went up. We have a lot more in the 11-to-100 range and in the 100-to-1,000 range than we did last year, and you can see the number who are managing relatively few clusters; we don't have as many of those anymore, which, you know, I think supports the arc that we're on. We saw huge growth in the number of nodes that are under management.
C: If you look at last year, you know, 10 to 100 is not a small number of nodes, but we have a lot more in the 100-to-1,000 and especially in the 1,000-to-10,000 buckets.
C: True, the nodes under management actually remained pretty stable, so yeah, you're right, Wojtek: the individual clusters are slightly larger.
C: You know, when you compare some of the buckets, there are not huge differences, and there's some variation across it. There are very few people in the more-than-10,000 category.
C: Last time we discovered that the sweet spot is between a hundred and ten thousand. When you get to more than ten thousand, you have people who have, like, entire teams devoted to running these things, so their problems are different and they have people who specialize in resolving them.
C: I wish I could tell you that my SQL skills were good enough to know. I think what happened there is that a piece of data got included in one of the buckets that had an empty value, so I have one of the values defined and not the other, and it resulted in this situation. But they're excluded from some of the views, so there are some views where we say, for the people that answered, what was the reason why, and I'll call those out as we get to them. So, like, for the case of doing a rollback, for instance, the percentages are based on people who responded.
C: Yeah, so I did look at this chart and tried to pull something out of it, and what I really took from it is that our version skews are roughly the same. I think the newest versions are almost identical in terms of their weightings.
C: A little bit, n...
E: I guess we're also releasing less frequently too, but I think that hadn't started by the time we released 1.21. So maybe we're just doing better.
C: Yeah, declare victory and move on. But I will make a note of that; I'll see if I can find a way to put it in the notes for the queries for the next survey, to try to convert it to n-minus form, or maybe I'll try to see if I can do it.
E: It's probably, like, you know, a statistical outlier, but seeing that more than half of the clusters with more than 10,000 nodes under management are on either 1.13 or 1.14, that's a little freaky.
C: So you can see there are very few respondents in that category. What we actually do, and I think it starts on the next chart, is when we start slicing the other data, we include all the data and then we have a view only taking from the middle buckets, because that's where the majority of respondents are.
A: Just a note, can somebody take notes? Because I only have one screen on my laptop; I don't know if anybody who has two monitors can do it.
A: Yeah, exactly, I think we should note these potential changes, like the n-minus-one, n-minus-two. It also might be helpful to do it as a cumulative thing: the percentage of clusters that are on, you know, n, n-minus-one or greater, n-minus-two or greater, or maybe even the inverse, right? Like, try to get an idea of what percentage advanced quickly in the past versus in the current times. That might be useful, as opposed to individual versions.
A: And I think that, from the point of view of trying to determine if we're making Kubernetes more reliable, which this is very much a proxy measure for: if people are more willing to upgrade, then we're probably doing a better job of making things more reliable from the start.
C: All right, so this is a chart of minor version rollbacks: did you roll back, did you roll back in stage or in prod, and if you rolled back, why did you do so? Okay, minor version rollbacks appear to have gotten more common in prod. In dev, I think it's actually good to have the minor version rollbacks; it means people are likely trying them, and there's a survey question about that later. In stage I could believe the same. Minor version rollbacks in prod...
C: So yes, we can figure that out. And actually, John, if you can find a way to port that data into our new one... if you can find a way, yeah, I think that'd be super useful, but otherwise I'm willing to start with just...
C: I guess it works out eventually. So yeah, it started out with not-so-good news. I mean, you look at it and the trend is consistent, the trend year over year, right: we changed and had more frequent rollbacks across any number of nodes and any number of clusters under management.
E: It's skewing more, excuse me, more towards the smaller clusters versus larger clusters, which is maybe a good thing, because presumably we don't necessarily want to see as many rollbacks at the over-a-thousand-node scale; that's going to be a lot more disruptive, like, you know, downgrading 10,000 kubelets versus downgrading 10 kubelets.
A: Well, anything for over ten thousand nodes and over a thousand clusters, I would doubt, has any statistical significance at all. Very small n.
C: Yeah, it actually is, because it's that small. But just to give some sense of the numbers you're looking at: at more than a thousand we had two respondents, versus, once you get down into this level, a dozen at the hundreds-to-a-thousand, and fifty, forty...
C: If we clip off the very top, then the numbers down here are, you know, likely to be reasonable.
C: There was one silver lining on this chart: the reason for the rollback. Previously, "because the component failed" was extremely likely to be called out as a reason, and it was much less common in this current year. The other one that stood out was "my cluster failed."
C
So
sorry,
this
is
56
compared
to
73.
Oh
okay,.
C: That's true, and this one stood out to me as a number that did not really inspire confidence: 11.5 percent. It's still, you know, a little bit better than last year.
A: The other thing we have to think about, not to be totally self-serving in this, but from the point of view of the PRR program: when did we institute actual PRR? It was like 1.19 was the first, the pilot, I think. Wasn't it? Yes, it was. So almost none of these actually... like, I guess, well.
C: That's why we collected the versions. So it turns out that a lot of the respondents are on 1.19; not all of them, but, you know, about a third. Okay.
C: There is, like, you know... this matters: the particular component failing, we're seeing it substantially less as the reason, and this number did drop, the number of failed clusters. We did improve; according to the survey, we did improve between last year and this year, just not a lot.
E: I think the real test is going to be when we run this next year, because that'll be after we've had, like, a full year of mandatory PRR, and I think we'll also see more people upgrading to versions that had PRR, because right now there's so much lag that the vast majority of the clusters on both of these surveys never had any features touched by PRR.
C: Okay, yep. So hopefully we'll get better overall, and this will be better in particular.
C: You know, we should make a note of that; I forgot to try to slice it on that. Okay, because, yes, we did ask what you changed once you rolled back, yeah.
C: So the patch upgrades actually looked a lot better. It's no surprise that patches are more stable than minors, right? I mean, you would just sort of expect that. They're also easier to roll back, though, so, you know, that was a little bit odd. Overall, rollbacks were fairly low on patch versions across the board. I think that most of this is reporting, and they're squished slightly differently.
C: When you go back and look at the reasons, we again see the components appear to be doing better overall, right: the component failure is less likely nowadays. But for the "cluster failed" percentage, I want to actually go back and read the notes and see if somebody included something related to that in their free-form response.
C: You recall the questions on disabling beta features: are you allowed to use them, do we disallow some beta, do we disallow some GA? The answers here look to my eye to be almost identical, and they basically support a hypothesis that people just use whatever the defaults are; it's too much trouble to change something off of the defaults. Yep.
C: ...is almost at GA, it does. And that chart... well, we don't allow disabling GA; I just forgot to remove that, so yeah. I think this basically tells us that whatever we choose as defaults is going to have a tremendous impact on what ends up being allowed in clusters. I actually want to come back to this after we present the rest, because I have some thoughts on what this means for how we need to approach turning that on.
C
So
alpha
enablement
continue.
Sorry,
the
pages
don't
line
up
perfectly
because
there
wasn't
anything
really
interesting
in
the
next
set
of
slices
so
jumping
to
alpha
enablement.
This
surprised
me
we're
still
in
a
state
where
people
turn
on
alpha
in
prod
and.
C
Well,
that
wasn't
that
wasn't
death,
so
don't
worry.
Development
clusters
are.
C: I'm surprised by the outcome. I don't know; I do know that some distributions actually make it difficult to turn on alpha features.
C: ...have been known to do it. But, you know, I guess some of that is the people responding to the surveys: would I expect them to even know that OpenShift turns those on for them? Maybe they notice that it's safe. Oh...
C: You were running the SRE...
C: So this one stood out. It seems to be persistent over time; it doesn't seem to be getting worse. It does seem to be getting more fairly distributed when you compare it: no matter how many nodes you end up having, or about how many clusters you end up having, they're starting to level out. If you look before, it was not evenly distributed, and it's starting to get more evenly distributed now.
C: So maybe we're leveling out. The troubleshooting methods one was pretty cool. You recall before, we were looking at it trying to figure out: why don't people use metrics? And it turns out people are now starting to use metrics; it's catching on, which is good. And I say that because of the smaller categories: you know, when you get to lots of clusters, everybody uses metrics, but when you only have a few, we're seeing more people using them more frequently now.
C: I think that's good, and, you know, if you're trying to tie it back to PRR, PRR is where we started requiring those. So there are things to watch now; it used to be the case that there might not even have been something to watch, right? There may not have even been a metric for exactly that. The usage of events becoming slightly less frequent surprised me; pink is more than a year, and a quarter is the turquoise color. The use of events becoming less frequent surprised me, but it doesn't...
E: I mean, I think that's a good thing, because events can, in theory, be lossy, and some people will even split their events into a separate etcd depending on load. So I don't think that's a terrible thing; I think that's actually probably a good thing from a scalability point of view. Like, if you need to be absolutely correct, events can be somewhat unreliable.
D: No, I think that, like, given that events have a one-hour TTL by default, you can pretty much only use them if you are debugging something that is happening now. So there are fewer cases for them, whereas things like metrics or logs are more universal, more universally useful, in my opinion. So I don't think it's necessarily a bad thing that...
C: Yeah. And then this is just slicing the page by number of clusters, but I think it tells a similar story: metrics became more popular this year; events, well...
C: So, knowing that, it's been pretty consistent: essentially 95 percent of our clusters use our default settings, which are all GA on and all beta on.
C: I've actually been wondering whether we want to consider trying to make our default one where only the GA APIs are on by default, and we require effort to turn on new beta APIs. I don't want to take away the beta APIs that already exist, because that would cause tremendous problems, but introducing new beta APIs...
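One way to gauge how big a change that would be for a given cluster is to list the beta API group versions it currently serves. A minimal sketch using the official Python client's discovery API (not something shown in the meeting):

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

# Walk the named API groups (apps, batch, policy, ...) and collect any
# served group/version whose version string marks it as beta.
beta_group_versions = [
    v.group_version
    for group in client.ApisApi().get_api_versions().groups
    for v in group.versions
    if "beta" in v.version
]
# The legacy core group only serves "v1", so it has no beta versions.

for gv in sorted(beta_group_versions):
    print(gv)
```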
A: It is, and it isn't, right? Because we've already just done this exercise in the last half a dozen releases, where we've moved a lot of things, long-standing beta things, into GA. So in that sense it's not as much of a... if you'd said that two years ago, I think that would have been, like, you know, a non-starter. But anyway, Wojtek had something to say.
D: I think, I wanted to clarify: are you talking, David, about beta APIs, or beta features also, or both?
C: Beta APIs are the first ones that stand out in my mind, in some ways, in most ways, because they are easier to exert control over; we already have all the mechanisms in place to do it. It is also the case that we know beta APIs are going to have a migration involved, right, because we enforce that. A beta API, from the API perspective, is still beta, and that is different than a beta feature that may not have a new API surface.
D: Exactly. So I fully agree, and for what it's worth, we are literally discussing that in GKE now. I think it's something that we want to do in GKE, so, like, not enable beta APIs by default.
D: So I'm personally definitely in favor of doing that for APIs; I'm not yet sure about features.
C: If there is general agreement that it's a thing worth pursuing, we could consider trying to write a KEP for the 1.24 sort of timeframe. Yep, yeah. There's some benefit to trying to agree to it before the next beta API gets added, because, like I said, I don't want to stop serving any of the beta APIs that are already served. Imagine the destruction that would happen if we stopped serving PDBs early, right?
C: So, okay, I will think about it. I mean, I don't know, Wojtek, if you had already... you said you were thinking about it in GKE already; have you already started writing something up?
D: We did not. I mean, okay, we initially wanted to do that for 1.23, even, in GKE.
D: We probably won't manage to do that in time for 1.23, though, so I didn't really have time to go to this group, or to open source in general, and propose it. So yeah, I don't have anything written, or anything really that I'm ready to share, but I'm definitely supportive.
C: Okay, I'll think about it. I'll see if I have time to write up some rough notes before our next meeting to talk about, to see if we have the same vision in our heads, and then try to figure out where to take it from there. Did anyone else... I stopped sharing, maybe early. Did anyone else have any questions about the survey, or anything else they wanted to dig into deeper?
A: It'll tell us more in this survey, and it'll tell us more in the next survey, if it's the same versions that keep getting rolled back survey after survey; I mean, obviously they age out eventually. And I think there are probably some other notes too. We should do a readout with the broader SIG Architecture, which we could do tomorrow, or, if you're not prepared to do that...
A: ...if we want to do further analysis and something a little more... like, you know, just like we went through: we took half an hour and I went through the whole thing here, but we might want to do a five-minute thing there and a ten-minute thing there, and sort of read out.
B: Let me look; I might actually be able to do it tomorrow. Okay.
C: Yeah, you're right. I would like to only go and present and face the firing squad once, so yeah, let's go ahead and plan that for two weeks out. I'm...
E: It looks like their next meeting that's not tomorrow will be December 2nd, so I just put that on my calendar to remind you.
C: There was one other thing that I wanted to make sure of before we sort of tucked the survey away and called it finished: I wanted to be sure that we felt like we had asked the right questions this time around, to review for next time. Technically, we can't, like, change the questions that we asked, but I was feeling pretty good, when I was looking through what we had collected, that these were the questions that I would want answered to decide whether this group was effective or not.
A: So before we do the next survey, obviously, we'll revisit that question, David. But I think... this was our second refinement of it, and you did the analysis, and that's where sometimes you start to realize: oh, we should ask this, we should ask that. Let's do the comparative analysis against the two.
A: I guess I'll say that first, but I think once we do the comparative analysis and we look at the versions... I mean, by comparative analysis I mean that if we put the 2020 and the 2021 data in the same BigQuery data store, then, you know, we can actually do direct comparisons where the questions are the same, instead of eyeballing it from one survey to the other.
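Once both years live in one table, a single query can line the answers up side by side. A rough sketch, assuming the google-cloud-bigquery client and a hypothetical `responses` table with `survey_year`, `question`, and `answer` columns (none of these names come from the meeting):

```python
from google.cloud import bigquery

client = bigquery.Client()

# Count answers per question for each survey year in one pass.
sql = """
SELECT
  question,
  answer,
  COUNTIF(survey_year = 2020) AS n_2020,
  COUNTIF(survey_year = 2021) AS n_2021
FROM `prr_surveys.responses`
GROUP BY question, answer
ORDER BY question, answer
"""

for row in client.query(sql).result():
    print(row.question, row.answer, row.n_2020, row.n_2021)
```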
A: So, you know, we might come up with some new questions, but I think that's, yeah, definitely something we'll want to do before we run it again in, what, maybe three or four months.
C: Yeah, and, you know, just like last time, we went through to refine the survey before we sent it out. I think this time we used just about all the fields, and with the results that I was looking at, I felt like I was able to tell a story: this is how people go about debugging problems, this is how PRR could have impacted that, does the survey support that hypothesis or not, and similar things for the rollbacks.
C: ...scheduling me. Oh, and hi to Sergey; I don't really recognize him.
F: Hello, yeah. I was a fly on the wall, kind of listening in on this meeting.
A: Welcome, yeah. Okay, so thank you. What is our... let's see, what else do we have? I think, I thought I threw a...
E: ...thing on the agenda. I just had a quick question that I thought would be reasonable to discuss with this group, about feature gate toggling. This came up with a feature that I was reviewing for Node this cycle. Specifically, the person who had implemented this feature had implemented it behind a feature flag in such a way that, if the feature flag was enabled or disabled, it would cause all of the containers on the node, whether or not they were using the feature, to restart. And I said that doesn't seem in line with our expectations for PRR; like, I would expect, as a cluster operator...
E: ...that if I enabled or disabled a feature, the only things that would possibly be affected by that would be the things using that feature. So as a reviewer I said, I think this is blocking; I don't think we can do it this way. Like, we need to be able to isolate out the changes, whether we're using the feature or not. And so the feature did not land this cycle.
E: As far as I'm aware, I mean, I haven't gone back, but that was blocking feedback where I left that. And I just wanted to, like, make sure that I am sharing the right information here, because I got a little bit of pushback that was something along the lines of, well, you know, when you're making an omelet, sometimes you've got to break a few eggs. And it's like, well...
A: If I'm going to try out this feature on this node, like, I don't want anything else scheduling on it, right? I need to be careful that I'm not breaking workloads if I'm playing with a new feature. But I mean, that seems like a reasonable assertion in my mind; I don't know if others have other thoughts.
C: There can be unexpected fan-out from setting these. Some node features, some that I've seen, appear to have an impact on the way that, say, volume mounts happen, and I think those can actually interact with all containers. Like, some things I've read have said: and then you reboot the node. And when that is a requirement for making whatever switch it is, I mean, sometimes that is what it is, right? I don't know. I doubt yours was that case, because you would have noticed that and understood, like, okay, to turn this on...
E: Well, specifically, it says: does it require a component to be restarted? And I got a little bit of pushback there, because our documentation says that the way you set a feature gate is with a command-line flag toggle, and so, assuming you're doing it that way, it's not possible to set a feature gate without restarting the component, because you can't change its command line while it's running. Now, the tests set feature gates on and off in components.
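That restart requirement falls out of how gates are wired: a component parses its `--feature-gates` flag once at startup, and the resulting map is fixed for the life of the process. A minimal sketch of that pattern in Python (not Kubernetes source; the real implementation lives in Go):

```python
import argparse

def parse_feature_gates(value: str) -> dict:
    """Parse a --feature-gates string like 'FeatureA=true,FeatureB=false'."""
    gates = {}
    for pair in filter(None, value.split(",")):
        name, _, setting = pair.partition("=")
        gates[name.strip()] = setting.strip().lower() == "true"
    return gates

parser = argparse.ArgumentParser()
parser.add_argument("--feature-gates", default="",
                    help="comma-separated Name=bool pairs")
# Demo argv; a real component reads its actual command line here.
args = parser.parse_args(["--feature-gates=FeatureA=true,FeatureB=false"])

# Parsed once; changing the flag means restarting the process.
GATES = parse_feature_gates(args.feature_gates)

if GATES.get("FeatureA", False):  # use sites check the gate, default off
    print("FeatureA code path enabled")
```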
E: Cool, okay, that's great feedback. I just wanted to confirm both of those things, because I know that we previously... like, the PRR survey says, you know: can you disable this, sort of, without restarting? But it's not that you don't restart the particular component in order to enable or disable it; it's that you don't have to restart the whole machine.
A: The whole machine, or, for example, if you change it on the API server, it's not going to require you to restart the kubelet, right? Like, between components, that kind of thing. So it sounds like maybe a little tweak to the guidance might be in order there, to make that more explicit.
E: Yeah, yeah. And that documentation came into existence in response to me asking a question as to whether or not it existed, and the answer was no. So it sounds to me like, potentially, there is a follow-up action here to further document expectations, possibly in that document, of the behavior for a feature flag. For example: we expect that if you toggle a feature flag, it won't affect things that are not using that feature, typically.
A: Okay, thank you so much. I will try, before I leave this week, to get the data exported from my private BigQuery that I couldn't figure out how to share, and then import it into the one we have now for the project, for the 2020 survey, and then we can hopefully soon go through and do that little bit of comparison and have access to that data too, and...
C: If you can just add those counts, I'll get the data right for the 2021 one, so that when we build the 2022 one, it'll be there. Okay, awesome.
A: Okay, well, thank you all, and have a great Thanksgiving, those of you in the US; the rest of you, you know, I guess just enjoy working on...