From YouTube: Kubernetes SIG Node 20211006
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A: Hello, everyone, and welcome to today's edition of the SIG Node CI subproject meeting. Today is October 6, 2021, and the first item on our agenda is the kubetest2 migration plan.
B: Awesome, hi everyone. I am Amit (my GitHub handle is amwat); I work mostly with the SIG Testing folks, primarily on kubetest2 and kind. I'm here to talk about the kubetest2 migration plans and what they mean for SIG Node and node testing in general.
B: [kubetest] has all the testing-related information about how to bring up clusters and then how to run tests on them. If any of you have been following test-infra closely, all of the tools I mentioned in these layers are deprecated: they don't have any ongoing feature work, and they're mostly in maintenance mode, where we only fix critical bugs that would affect all of CI. The reason is that kubetest itself is a whole bunch of deployers bundled together, and it originally evolved from literal bash scripts, so it has become unmaintainable. That's why we developed kubetest2, which is intended to be the replacement for kubetest and is designed to be more extensible.
B: One of the ways it is extensible is that we've decoupled the deployment part from the testing part, so you can plug and play which cluster you want to bring up: it may be GCE, it may be GKE, it may be kind, or something like that. We've also decoupled what we are testing: it could be the Ginkgo tester or the node e2e framework. Most of the details I mention here are in the KEP linked in the meeting notes.
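For reference, a minimal sketch of that decoupling, assuming kubetest2's GCE and kind deployers and its Ginkgo tester (exact flag names may differ by kubetest2 version):

    # deployer (gce) and tester (ginkgo) are chosen independently
    kubetest2 gce --up --down \
      --test=ginkgo \
      -- --focus-regex='\[Conformance\]'

    # same tester, different deployer
    kubetest2 kind --up --down \
      --test=ginkgo \
      -- --focus-regex='\[Conformance\]'

Everything after the bare "--" is passed to the tester rather than the deployer, which is what keeps the two halves pluggable.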
B: We would be leaving it to the individual SIGs to migrate whenever they have the bandwidth to do so. What we are mainly aiming for here is, firstly, just awareness about this project and what the future looks like: basically, any features that you might want to add to kubetest, or test-infra in general, would now go into kubetest2 instead. The most critical part that I think SIG Node would be most interested in is the node tester.
B: Previously, the way kubetest used to invoke the node tester was, I think, a hard-coded reference to run_remote, the binary which invokes the node e2e framework. In kubetest2 we've tried to make the Makefile the source of truth. I'm sure every one of you knows that the node e2e tests have a Makefile target, make test-e2e-node or something along those lines, which has all sorts of configuration that lets you determine the specific setup you want to run the tests with. We are planning on using that as the source of truth: kubetest2 itself will just call the Makefile with the parameters you pass to it. That way, any new additions go into the Makefile, and dealing with test-infra is mostly only needed when you encounter bugs. So yeah, that's most of the overview.
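For reference, the Makefile target in question is invoked roughly like this (a sketch assuming the standard kubernetes/kubernetes node e2e target; variable names may vary by release). kubetest2's node tester forwards its parameters to this kind of call:

    # run the node e2e suite against remote VMs, configured entirely
    # through Makefile variables
    make test-e2e-node REMOTE=true \
      FOCUS='\[NodeConformance\]' \
      SKIP='\[Flaky\]|\[Serial\]'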
B
I
don't
know
if
anyone
has
any
specific
questions
and
be
happy
to
answer
that,
but
yeah
there's
we've
started
to
migrate.
Some
of
the
pre-submit
kubernetes
blocking
jobs
that
I
mentioned
previously
so
pool
kubernetes,
node
e
and
poll
kubernetes
node
e2e
container
d
are
two
of
the
jobs
that
we
are
currently
in
the
process
of
migrating
and
so
right
now
they
won't
be
the
they
won't
be
the
source
of
truth.
B
The
existing
jobs
would
still
be
the
source
of
truth,
but
we
are
starting
to
add
them
as
optional
jobs
that
you
can
run
and
then
eventually,
we'll
make
the
switch.
Where,
once
once,
we
make
sure
that
both
both
the
tests
have
the
same
greenness,
we
will
make
the
switch
where
we
start
using
the
cube
test,
two
job
as
the
source
of
truth
in
pre-submits
and
so
yeah.
That's
most
of
the
overview
of
quick
test
2.
How
node
testing
would
look
like
in
the
future.
B: We definitely want SIG Node to be on board and to have input on this. If anyone has any questions, I can take them now, or you can also open issues.
C: That seems mostly reasonable to me. The only question I really have is whether SIG Node are the only people who use the e2e node test runner; I'm not sure if the way that we run things like the cAdvisor integration tests is exposed via the hack test-e2e-node script and the Makefile.
B: Yeah, in that case I would say we can definitely add more features to the node tester. The Makefile is just one of the ways we run the most common node e2e framework configuration in CI; if there are some tests that we run in a different way, like the cAdvisor tests, we can add more features to the kubetest2 node tester.
B: In terms of timeline, we've scoped it in the KEP, as I mentioned, to the kubernetes/kubernetes presubmits and the release-blocking jobs, and we are looking at 1.24 to make the switch, or at least have the jobs enabled. Even if we don't make the source-of-truth change by then, we would at least have the jobs running optionally in the background to make sure there's no discrepancy between the two sets of jobs.
So that's mainly the timeline, at least in terms of the KEP. Obviously, in terms of completely migrating all the jobs, I don't have a strict timeline there.
It's mainly going to depend on how much bandwidth each SIG has to start the migration. We will definitely have a guide where we publish the most common ways kubetest is being used today and what the migration will look like, so that anyone who is interested can start migrating.
D: Okay, yeah, I wanted to revisit this one. What we found is that one of the jobs started failing recently; I think the presubmit jobs kept working and the periodics started failing, and we found that this was caused by a bad image published by Ubuntu. Since we're using "latest", our job definition has the LTS image family instead of a specific version, so we pick up whatever the latest Ubuntu image is. We were discussing whether we want to pin an image or just use the LTS family, and I think the agreement at the time was that specifying the image family eliminates so much toil that we should just do that; at the time we didn't see many failures when a new image appeared. But I think we may want to reconsider that and start pinning images.
I don't know what that involves from a test-infra perspective: is there some automation that would help us pin images and update them periodically?
I'm curious if anybody knows. One decision we could make is to keep the image family definition and, next time this happens, just pin temporarily and unpin when a new version appears; or we need to investigate whether we can pin permanently, how the jobs would be updated, and who would be updating them.
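For reference, the trade-off being discussed, sketched with gcloud (assuming GCE images; the exact job config keys differ):

    # an image *family* always resolves to the newest image in it,
    # so family-based jobs pick up a bad image automatically:
    gcloud compute images describe-from-family ubuntu-2004-lts \
      --project ubuntu-os-cloud --format='value(name)'
    # prints something like ubuntu-2004-focal-v20211006, which changes
    # whenever Ubuntu publishes a new image

    # pinning means putting that concrete image name into the job
    # config instead, and relying on automation to bump it periodically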
A: We don't have to do that by hand; we would probably write a bot that PRs the new image in, and then a human just has to review it.
D: Yeah, we already have that. One of the problems I see is that it's a PR to test-infra, not to kubernetes, so you cannot test the PR against the affected jobs. I don't know how this bot works; if it does test every job, then that definitely solves the problem.
C: There's also the point that, regardless of which one we choose, we need to make it consistent across our PR jobs and periodics, because having them differ just means we don't see things like this.
G: Yeah, my two cents: a while ago we pinned images without any automation, and the result was that things got stale pretty fast and nothing got updated. So I think we should either continue using the image family and maybe just keep an eye on the images, or have some automation in place that keeps updating them. If we just pin the image, things are going to go stale again, because it's too much manual effort to go in and update the images everywhere.
D: Does anybody want to investigate? We can revert to the image family for now and postpone this problem, or, if anybody wants to investigate, go dig into that.
D: Yeah, I'll take care of that.
C: Yeah, I assume this is a broader problem outside of just node, so filing it in a place where other people might see it and go "oh, we need this too" is probably helpful.
D: Yeah, one of the problems with this automation is that somebody with very wide permissions needs to approve and merge the PR. Typically, I think Aaron was doing it most of the time. So it is important for us to get the right kind of automation in place; otherwise we may be lacking permissions and may delay updates of images.
D: Okay, I'll write it down in the issue and we can take care of it. By the way, Mike, do you know if the Ubuntu image still has this problem, or can we revert already?
F: I believe they haven't fixed it yet, so we should probably wait. As soon as I get some more updates on that, I'll let you know.
D: I wrote this document about SIG Node test cleanup. The deeper you dig into the investigation of how tags were historically used and what they mean, the more you understand how tangled it all is. There is a small TL;DR of the proposed solution, and some explanation below it of why this is the proposed solution.
D: The Feature tag has a confusing definition, because it indicates two different situations: a test won't work on a specific installation because, for instance, it covers an alpha feature, or a test won't work on specific environments because it needs special configuration, like an AppArmor config file or a RuntimeClass being set up. Both of these situations are covered by the Feature tag, and it's very confusing.
I think this needs to be cleaned up. Feature should indicate whether the functionality works on all environments: if it doesn't, let's mark it as a Feature, so people can say "I don't want to test this feature on my environment because it simply doesn't work". And when we need to indicate that a feature requires special configuration for CI, I suggest we introduce a special tag for that.
We already have a special tag introduced in one of the documents, NodeSpecialFeature, which has exactly this meaning: the test requires a special configuration and environment to run. But it wasn't used consistently, and it wasn't officially documented. So I suggest making it a high-level, testing-wide Special tag; clearly this needs to be double-checked with SIG Testing to see if they agree. Then NodeFeature and Feature would be semantically the same.
The problem is that today our CI jobs mostly query by NodeFeature for focus, because all our container runtimes support all the NodeFeatures. If support were more fragmented, say containerd and CRI-O supported very different sets of NodeFeatures, then we would be unlikely to query by all NodeFeatures any longer; it wouldn't make sense, because you would want to test only the specific features that your runtime supports.
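For reference, a sketch of how such jobs select tests by tag today (the regexes are illustrative, not the exact job definitions):

    # focus on everything gated on a NodeFeature, skipping
    # cluster-level Features and known-flaky tests
    make test-e2e-node REMOTE=true \
      FOCUS='\[NodeFeature:.+\]' \
      SKIP='\[Feature:.+\]|\[Flaky\]'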
D: Fortunately, or unfortunately, that's not the case today. Since the semantics are the same, I suggest merging NodeFeature and Feature. If the situation changes in the future, for example container runtime support becomes very fragmented, then we can change the definition and stop querying by NodeFeature or Feature in that sense.
C: Oh sorry, Danielle, you go first. Do we know if other container runtimes that aren't officially supported by SIG Node rely on the node tests?
A: I don't think we know one way or the other. My question is: Feature already means something to the rest of the tests, and there's SIG Testing guidance on it. It usually means that something is an alpha feature and needs to be turned on; but here we're talking about introducing a feature gate tag.
D: The Feature tag was originally proposed by Aaron because it has the same semantics, but then we discovered that, while the semantics are the same, the tags are applied differently, just because of the nature of how we used them. I ran some analysis on the test cases we have today, and I think it's safe to merge them now. If we adjust the semantics a little bit and keep them consistent over time, then it wouldn't be a problem.
C: I have another question, mostly about the Special tag. A large portion of the node test suite makes a lot of assumptions about the host it's running on, and I don't necessarily know how we would apply that. Are you talking about things like a special GPU, or special low-memory setups, or something similar to that?
D: Yeah, I understand. Ideally every test would declare its requirements, what it needs from the environment, but it may be too expensive to require that right now. So Special in this sense means some extra configuration that was additionally added to the node, like the test handler: a RuntimeClass handler called test-handler was added to the environment being tested.
It exists across all the environments we currently support in open source, kind and GCE, but that change to the environment was made intentionally to enable this test, so it deserves the Special flag. If it's just the specification of the VMs we run on, and we know all our tests run on those VMs, I think that's fine; we have the Performance tag to distinguish more powerful VMs from less powerful ones.
A: Are we even using NodeSpecialFeature? Obviously there are tests that have it defined, but are we using it in test-infra anywhere? This might all be much ado about nothing if we're not actually using them. I haven't had a chance to look, but this is something we could verify pretty quickly with the spreadsheet.
D: What I see is RuntimeClass. I think RuntimeClass is supposed to work everywhere, and the problem is that we mark the RuntimeClass tests with Feature to indicate that they need a special environment. I don't think it has to be that way: RuntimeClass should be conformant, but then we need some way to indicate that this test needs a special environment.
A: I think we would want to deal with each one differently. Maybe the issue is that we do want to be able to find all of the special things, but I don't know that this tag is actually very helpful right now for that organization.

C: So there are some tests that currently get skipped by the nature of the test itself, things like GPU and device tests, that might warrant some kind of tagging to save on CI cost, as opposed to...
D: So how do we change the RuntimeClass tests? RuntimeClass is supposed to be conformant, or NodeConformance rather; if it's conformant, how do we indicate that it needs this special environment?
A: The only five things on your list that are tagged NodeSpecialFeature are huge pages, supposedly, at least according to the filter-graph thing. I think there are maybe a few more, like the benchmarks.
D: No, I mean all our jobs have the test handler pre-configured; it's fine. It's special in the sense that if you run in some other environment, you need to configure the test handler yourself; if you're running on our CI, it's all pre-configured out of the box for everything, because it's cheap.
D: That's fair, but what would be the ideal way to handle the RuntimeClass tests? Do we want to mark them somehow?
C: We also don't mark that the eviction tests need very specific memory setups, or that a bunch of the test suite breaks if you allow the kubelet to run with swap. There are so many cases like this that we've never worried about solving before, and solving them without actually going and auditing all of the tests isn't really going to solve the problem, like in the case of...
A: Okay, so for most of these things, I think it's going to be rare that you would need special hardware but not a special configuration. So either you need a special configuration, or you need both a special configuration and special hardware.
D: And we already had many of those cases with pre-existing tags like Performance, Disruptive, and LinuxOnly. There is another proposed tag for Windows containerd as opposed to Windows Docker. So LinuxOnly is...
A: ...a conformance tag, so it has a specific meaning for conformance. I think it's good to standardize on those things as long as they're consistent.
D: Yeah, it's already standardized. I see that, but I wanted to introduce the Special tag as an extra: if none of these apply, just mark the test as Special so people pay attention, and maybe in the test definition we can explain what special setup is required. The alternative is to have a unique tag for every situation like that, so the RuntimeClass tests would be marked with something like a RuntimeClassRequired tag, and then you can query by that.
A: Or, if it's specifically that we need special hardware for the tests, we could mark them as Hardware:whatever, and then any time we have a test-infra config with that hardware, we pull in those tests. Mostly I'm just noting that we're already in this situation where we have lots and lots of these tags, nobody seems to know what they mean, they weren't properly documented, and I think people are using them inconsistently. So even if everybody within this smaller group knows what they mean, a SIG Node approver might not realize that a test has a hardware requirement and should have been tagged accordingly; they go ahead and approve it, and then we end up with drift anyway. So I think it behooves us to try to make the surface area as small as possible; otherwise we're just going to end up in this situation again, I think.
D: Yeah, okay, that's fine. We can start without Special; we don't have that many cases anyway.
C: Thank you. I think generally, as part of reviewing test changes, we should start being a bit stricter about asking people to document their expectations of how a test will run, mostly because right now you basically have to reverse engineer every test that fails: okay, what does this need? What aren't we giving it, or is the feature broken?
A: Previously, I think some reviewers were a little bit lax about this and would just merge those tests because they covered an alpha feature or whatever, and then as soon as you tried to turn the tests on, they were all failing; they had never worked. I'd like to avoid that sort of thing as much as possible.
We've been doing better about this as a SIG in recent releases, but I think another good requirement to have would be: if you add a new end-to-end test, it has to run in that PR to demonstrate that it works. If you need special hardware and whatnot, that means you have to go and set up your job in advance.
D: So yeah, the next thing is NodeConformance. I thought I could avoid the NodeConformance problem when talking about Feature and NodeFeature. There were a few documents about NodeConformance originally, but I think the main point of NodeConformance was to indicate that this functionality works everywhere. It's supposed to be working: if you want to test your runtime, run NodeConformance at least, and everything else is optional.
NodeConformance is kind of the opposite of Feature or NodeFeature, but today we don't apply it consistently: some test cases we have carry neither NodeFeature nor NodeConformance, which makes them very hard to query.
This matters when you design a job: we have jobs that run NodeConformance, and we have jobs that run all the features, but we don't have a job that runs everything that carries neither tag. So I suggest we start applying NodeConformance a little more often. One controversial thing about NodeConformance is whether to apply it to beta features, because beta features are designed to work everywhere and to be enabled out of the box.
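For reference, a sketch of the coverage gap being described, in terms of illustrative focus/skip regexes:

    # conformance-style job: only tests expected to work everywhere
    FOCUS='\[NodeConformance\]'

    # feature job: only tests gated on a NodeFeature
    FOCUS='\[NodeFeature:.+\]'

    # a test tagged with *neither* matches neither focus above, so no
    # job runs it unless some job instead skips by both tags:
    SKIP='\[NodeConformance\]|\[NodeFeature:.+\]'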
A: Yeah. We definitely got feedback from the conformance subproject in SIG Architecture that they don't like us using the conformance name, because conformance has a specific meaning and this confuses it: NodeConformance means something different from what they use conformance to mean.
I agree that we should document what we mean by it. We may also need to change the name.
D: Okay. Then I want to have a FeatureGate tag that is orthogonal to Feature or NodeConformance, because of what we discovered recently. For example, ExecProbeTimeout: on some environments the ExecProbeTimeout feature gate is set to false because we are still migrating customers, and that causes all sorts of issues. It's really hard to filter those tests out, because the feature is already GA, so the tests are already NodeConformance tests, yet they are still protected by a feature gate.
So having the ability to filter tests out by feature gate would be very useful. I also noticed that we never test Kubernetes with all the feature gates disabled, which may uncover problems where something depends on a beta feature and nothing works without that feature; that is not ideal. I think the FeatureGate tag would help in this situation as well.
D
We
can
just
run
all
the
tests
with
filter
out
by
feature
gate
and
run
all
the
other
tests
that
doesn't
require
any
feature
gates
and
make
sure
that
it's
still
working
so
yeah.
So
I
suggest
that
we
introduce
this
feature
gate
flag.
I
don't
know
whether
like
do
you
have
any
opinions.
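For reference, a sketch of the proposed tag; the tag is hypothetical at this point and the names below are illustrative:

    # a GA test still guarded by the ExecProbeTimeout gate might be named:
    #   "... respects exec probe timeout [FeatureGate:ExecProbeTimeout]"

    # an environment running with that gate off could then skip it:
    SKIP='\[FeatureGate:ExecProbeTimeout\]'

    # and an "all gates disabled" job could skip every gated test:
    SKIP='\[FeatureGate:.+\]'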
D: We never use NodeBetaFeature, and we have NodeAlphaFeature applied inconsistently.
A: Basically, people do not graduate them. I think Miranda put together an issue where we saw that we have a bunch of things that are GA but still had Feature tags on them, which shouldn't have been the case. So yeah, we definitely need to take Feature tags off of things when they graduate; I think the typical usage is that Feature means alpha feature.
D
It's
very
confusing
for
people
like
white
yeah,
so
feature
gate
may
solve
this
problem,
and
I
mean
we
can
make
it
strongly
typed.
We
can
just
have
a
helper
method
that
use
feature
gate
actual
tag
to
do
it.
The
actual
tests.
E
D
D
A
D
Yeah,
I
will
keep
updating
it.
I
mean
it's.
I
have
a
small
tool
that
generates
csv
file
anyway.
Okay,
mike
do
you
wanna
take
next
one.
F: This is about Hacktoberfest, an annual event organized by DigitalOcean, and the idea...
A: I sort of preempted you. I'm surprised that I haven't been able to find an issue, because I know there's been a lot of discussion; I might have to go hunt through Slack. We have explicitly opted out and asked them multiple times: do not feature us, do not talk about our project. Because what happens is we get...
A
We
get
just
this
wave
of
really
low
quality
contributions
like
people
changing,
like
you
know
a
word
and
a
readme
kind
of
thing,
like
just
one
word
and
like
it's
wrong,
and
we
just
get
like.
I
think,
we've
previously
gotten
like
hundreds
of
those
pr's,
and
it
takes
so
much
work
for
the
maintainers
to
go
and
close
them
all
so
previously,
like
project
wide
we've
kind
of
agreed.
A
We
want
nothing
to
do
with
hacktoberfest
like
it
would
be
great
if
it
got
us
high
quality
contributions,
but
generally
it
hasn't,
and
I
think,
there's
also
like
kind
of
more.
We
would
have
to
do
some
work
as
a
project
in
order
to
be
able
to
like
sort
of,
I
guess,
get
in
the
hacktoberfest
spirit
and
like
have
some
issues
like
you
know,
labeled
for
like
yeah.
This
would
be
a
reasonable
thing,
but
like
not
to
completely
suck
the
wind
out
of
your
sails.
A
We're
not
really
doing
anything
as
a
sig
right
now
to
like
systemically,
go
and
try
to
like
mark
things.
That's
help
wanted
or
like
reasonable
first
issues,
and
that
kind
of
thing
we
do
a
little
bit
of
like
bug
triage.
But
then
we
don't
necessarily
go
back
and
say
like
hey.
This
is
this
seems
like
a
good
thing
for
a
beginner
to
look
into
like
we're,
trying
to
now
do
that
as
issues
are
coming
in.
F: Yeah. I recently had an interaction with one contributor who made a pull request and then requested a hacktoberfest label, and I believe the contribution was significant enough.
A: Yeah, so we do not participate, and there is no hacktoberfest label. If you want more context, it's in the github-management channel; I'll put this in the Zoom chat as opposed to the agenda. Here's a thread where someone asks whether we have explicitly opted out. We hope that they won't pester us again, but we really want nothing to do with this.
F: Yeah, I understand them; I've also seen these kinds of collaborations that are really worthless. But just to make sure: I can actually reject the request from this contributor?
A: Searching for hacktoberfest on the Kubernetes Slack, it looks like a bunch of random projects that aren't necessarily Kubernetes core do participate in Hacktoberfest, but they say Kubernetes itself does not.
F: Yeah, I'll request some more formal communication and then I'll just decline the request. Thanks.
D: Thank you for bringing it up. I think we only have a few minutes left for looking at the dashboard.
A: This one just got triaged today; I took a look at it. I think this is an API review thing; it's not really necessarily a CI issue.
D: Okay, so we probably need to remove this cAdvisor push job; David replied here that it's not needed. Does anybody want to take a stab at it? It's just removing a job; it should be easy.
D: Yeah, we discussed it at the previous SIG Node meeting and nobody wanted to take it. I don't know if anybody wants to take a look.
D: Yeah, it was a temporary spike, not caused by any PRs or anything, and it went back down by itself. So I think we just need to close it.
D: Okay then, and we're at time. Anything else today? Next time we'll prioritize bug triage, since we didn't have much time this time.