From YouTube: 20211007 SIG Arch Community Meeting
A
All right, hello, everybody! This is the Kubernetes SIG Architecture community meeting for October 7th, 2021. Why don't we get started? I think we have a pretty short agenda, so let me share that.
A
Okay, got it. Yes, Zoom automatically stopped my audio when I brought up the agenda. Got it, okay, thank you, that was confusing. All right, Sergey, why don't you take it from here? I think that's all I said.
B
Okay, thank you. I wanted to talk about the SIG Node test tags cleanup. We've been looking at the SIG Node tests and trying to understand the history of how tags were applied, and how we can tell people to apply them going forward. That investigation led me into writing this document, and the more you dig into it, the more edge cases and interesting findings you come across. So, I don't know.
B
Do you want me to share my screen, or do you just want to open the document? What would be the best way?
A
Why don't you go ahead? I'll stop sharing, and I'll give you permission to share.
B
So yeah, we started this whole effort from trying to understand how the Feature and NodeFeature tags are being applied to tests and how they degraded over time in some test cases. I started looking into that, and then I realized that on some tests the tags are applied improperly. Sometimes Feature indicates that the test runs on a special environment, and sometimes it means that the test doesn't need a special environment but wouldn't run on other environments.
B
On certain environments there is no easy way to tell the suite not to run tests related to a feature gate, so we applied all sorts of different hacks. Most commonly, we just add a skip at the beginning of the test when the feature gate is not enabled. But skip is a very strange decision, because if you intend something as a feature but you have a skip, then you may not actually test it and still get a green result. So it was also a questionable solution.
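The skip-at-the-top pattern described here can be sketched roughly like this (a minimal Python sketch for brevity; the helper name is hypothetical, and the real tests are Go/Ginkgo):

```python
# Minimal sketch of the "skip when the feature gate is off" hack described
# above. Names are hypothetical; the real tests use Go/Ginkgo helpers.

def should_skip(enabled_gates, required_gate):
    """Return True when the test should skip because its gate is disabled."""
    return not enabled_gates.get(required_gate, False)

# With ExecProbeTimeout disabled, the test skips silently and the suite
# still reports green, which is why the pattern is questionable.
print(should_skip({"ExecProbeTimeout": False}, "ExecProbeTimeout"))  # True
print(should_skip({"ExecProbeTimeout": True}, "ExecProbeTimeout"))   # False
```

The point of the sketch: a disabled gate produces a skip, not a failure, so nothing distinguishes "feature verified" from "feature never exercised".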
B
We wanted to take a look at that, and there are other problems, so I created this proposal out of a few items, and we already discussed it with the SIG Node CI subgroup. The first item is Feature semantics. Today we have the Feature tag defined in the sig-testing part of the community repository, and the Feature tag today indicates that the feature may not work on all distros and/or environments.
A
Yep, sorry, can you pause for a second before we drill into the specific recommendations here, and go back up to the problems? Yep, okay. I guess what I'm wondering about is what's motivating changing this. I believe that it's all a mess, because that's what happens over time; like you said, things degrade.
A
Are
we
trying
to
accomplish
some
goal
with
us,
create
some
certain
test
jobs
that
we're
not
able
to
create?
Are
we
trying
to?
I
see
that
there's
one
that
says
we
want
to
disable
all
beta
feature
gates.
I
think
we
do
that
for
conformance
test,
but
probably
not
for
which
is
only
a
small
fraction
of
the
tests,
so
that
makes
sense
like
what
are
we
going
to
so
there's
two.
A
I
don't
want
it
to
be
a
theoretical
exercise
of
like
we
solve
these
problems
that
that
that
are
there
in
theory,
but
we
actually
aren't
don't
need
to
solve
those
problems,
because
I
want
to
make
sure
people
are
spending
their
time
on
something.
You
know
that's
the
most
useful
for
the
project.
So,
can
you
explain
to
me
a
little
bit
more
like
what
we're
going
to
do
with
the
thing
once
it's
solved.
B
Okay. First of all, we don't have clear guidance on how to apply tags when we develop features, so we see people apply them in very inconsistent ways, and it always causes a lot of comments on PRs: "I have an alpha feature, which tags do I need to apply?" There is no clear guidance, so we see all sorts of NodeFeature and Feature combinations.
B
So the first item is having guidance at all, saying this is what each tag is about; we didn't have a document, and now we have it documented. Second, we see problems with multiple tests. For instance, the RuntimeClass test requires test-handler to be registered as a runtime class in the environment, so it has a special requirement, but we don't mark it as Feature because all our open source CIs have this runtime class pre-installed. On other environments it may not be pre-installed.
B
So
now
we
introduce
a
very
cryptic
workaround
when
we
check
that
specific
cloud
provider
is
specified
when
you
run
a
test.
So
if
it's
just
ease
and
run
a
test,
otherwise
keep
a
test,
and
it's
like
just
not
very
good
solution
for
everybody.
So
we
we
need
to
mark
this
test
special,
somehow
same
with
exact,
prop
timeout,
similar
situation,
with
the
exact
prop
timeout.
B
We
introduced
exact,
prop
timeout,
feature
gate,
and
we
said
that
if
you
have
it
set
and
everything
will
be
working,
it's
true
by
default,
so
you
don't
mark
this
anyway,
like
test
depending
on
feature
gate
anyway,
special,
so
everybody
who
disables
this
feature
gate
stumble
across
failed
tests
and
they
don't
know
how
to
disable
this
file
test,
except
like
querying
by
specific
tests,
and
they
need
to
like
go
very
deep
into
understanding
that
this
test
needs
to
be
disabled
because
a
feature
gate
is
not
not
true
in
their
environment.
A
Yeah, let me just echo that back and see if I understand correctly. First of all, during feature development the current situation is confusing: people don't know what to tag their tests with, or why. We want to clear that up so that we avoid a lot of noise during feature development and get it right. Two, we want to make sure that we've thought about how to apply these tags and the goals of applying them.
A
It sounds like there's another set of tests that have some dependencies on setup that may not be available in every cluster we want to run the tests against, so we want to tag that set of tests with something to indicate that setup is needed. Whether that's one vanilla tag or not, well, okay, I saw some discussion later in the document about that. And then.
B
Yes, that's pretty complete. One more thing I wanted to highlight: sometimes today we mark something as Feature because it requires a special environment, but then we don't run this test in some conformance, node conformance, or release-blocking test jobs, even though we may want to run it there, because it's something that needs to work everywhere and we want it to be stable and enabled.
C
There's a mismatch in the open source CI configurations that combines with this tagging in really hidden ways, and it often means we're not running tests that someone wrote expecting them to run. They write a test, it's part of a feature, so they put a Feature tag on it because they've seen that tag. Then the presubmit passes, it's green, it merges, and they're happy and think they're doing a good job. And it's not even running at all.
A
Yes, that name is super misleading, and I didn't see a proposal to change it, although it should probably change along with the semantics. But, okay.
A
I think there's a point there: whatever names we're using, we should make sure the names are right, because as much as we like to think people read the documentation about how to tag their tests, the names are really evocative, and people will remember things based on the names.
B
Sorry, I concentrated on the solutions so much that I forgot the problem statement needs to be a little bit clearer. My original audience was a little more familiar with all those problems we've been discussing over and over again, so sorry for that. Anyway, first, I suggest we change the semantics in the description of Feature, saying that Feature is something that doesn't necessarily work on all distros and environments, rather than what we say today, that it only marks something that requires special configuration in CI. It's very confusing; I will explain.
B
So
today
we
mark
test
as
feature
like
in
the
communication.
We
stand.
That
feature
will
be
marked
as
something
special
setup
and
ci
required.
I
think
it's
a
little
bit
misleading.
We
want
to
make
sure
that
feature
indicates
functionalities
that
either
working
or
not
working,
depending
on
like
whether
you're
running
specific
disks
through
a
specific
environment.
So
if
you
have
a
swap
enabled
on
a
node,
then
it
will
be,
we
can
think
of
it
as
a
feature,
for
instance,
and
then
this
feature
needs
to
be
working.
A
I don't want to introduce another thing here, but if I were in your shoes I would consider re-evaluating that word "feature" because, like Jordan said, the natural... oh sorry, Dims, I can't see the hands, you should just blurt out; that's what I did, yeah, and I've even started doing that. So let me just finish my thought: if I were sitting in your shoes, I would think about that.
A
That name is super misleading, and exactly what Jordan just said happens: people think "oh, this tags it as associated with my feature, I'll just tag it," and they don't realize that means it's not going to run unless somebody asks for it to run. I would almost rename that to something like "Requires" or something like that: this test requires some specific setup, requires swap to be enabled on the node.
D
Yeah, I'll go first and then Daniel can go next. Is that okay, Daniel? Okay. So one of the problems here is: when something goes GA, we don't expect a Feature tag to be present at that time, right? Take the runtime class example: if it's a Feature, will the tag always exist, even when the thing has gone GA? Because even something that has gone GA might not work on certain environments. So that's a problem there. Daniel?
E
So the feature flag case is usually fairly simple: while the feature is behind a flag, tagging the test as needing this flag is helpful for everyone trying to run said test, especially as we remove things that would let you enable it in your tests, like removing dynamic kubelet config. But there are also a bunch of tests that require something a little bit different, which is hard to map back to something that simple.
E
Usually
that
doesn't
you
know,
require
special
hardware,
but
like
it
does
require
special
system
configuration
there's
a
few
of
these
cases
right
now,
aside
from
the
feature
case,
but
there
aren't
enough
of
them
to
have
a
like
good,
consistent
design
that
we
know
is
gonna
like
make
sense
for
people,
and
so
the
takeaway
from
the
like
ci
subgroup
meeting
was
we
solve
the
most
common
case
we
have,
which
is
something
needs
a
feature
flag,
and
then
we
look
at
this
again
in
a
few
months
to
figure
out
if
there
are
other
special
cases
that
we
keep
and
then
try
and
map
those
back
to
a
common
paradigm,
which
is
why,
like
the
special
feature,
tag
for
example,
is
going
to
go
away,
and
so
it's
mostly
a
case
of
dealing
with
the
common
case.
C
Yeah, I think I agree with what you were saying. There's some chat as well talking about feature enablement versus particular configuration.
C
The main point I wanted to make was: we can catch a lot of these things with automation, in terms of tests being tagged with a feature gate and pairing that to the lifecycle of the feature gate. So, Dims, you were talking about once a feature graduates; we have a process for that. We lock feature gates on once they reach GA.
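The automation being suggested could pair tags with gate lifecycle along these lines (a Python sketch; the gate names and stages below are illustrative, not the real list):

```python
# Sketch of pairing a test's tagged feature gates with the gate lifecycle,
# flagging tests whose gate has already graduated and locked to GA.
# Gate names and stages here are illustrative only.

gate_lifecycle = {
    "ExecProbeTimeout": ("GA", True),   # (stage, locked to default)
    "NodeSwap": ("Alpha", False),
}

def stale_feature_tags(tagged_gates):
    """Gates still tagged on a test even though they have locked to GA."""
    return [g for g in tagged_gates
            if gate_lifecycle.get(g, ("Unknown", False))[1]]

print(stale_feature_tags(["ExecProbeTimeout", "NodeSwap"]))  # ['ExecProbeTimeout']
```

A check like this, run in CI, would surface tests whose tags drifted out of sync with the gate's actual state.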
C
Automation could say: oh, you graduated the feature gate, but your test is still being filtered. So it seems like the two dimensions for features, at least, are: what are the feature gates that have to be on for this test to be meaningful, and what is their default state? Right now those are sort of conflated in a single tag. The presence of a Feature tag today is, I guess, only supposed to be used if the feature isn't on by default. So then, once we promote it to beta and want the test to run, we remove the Feature tag from the test, which kills everybody's ability to filter out that test if they have to turn off the gate.
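The filtering behavior being described, where dropping the Feature tag removes everyone's ability to skip the test, can be illustrated with a small Python model of ginkgo-style skip/focus regexes (the test names below are made up):

```python
import re

# Simplified model of ginkgo-style selection: suites pick tests by running
# skip/focus regexes over test names, which carry bracketed tags.
tests = [
    "[sig-node] Probing container [NodeConformance] restarts on failure",
    "[sig-node] Swap [Feature:NodeSwap] tolerates swap on the node",
]

def select(names, skip=None, focus=None):
    out = []
    for name in names:
        if focus and not re.search(focus, name):
            continue
        if skip and re.search(skip, name):
            continue
        out.append(name)
    return out

# Default CI skips anything tagged [Feature:...]; once that tag is removed
# at beta, the test can no longer be filtered out this way.
print(select(tests, skip=r"\[Feature:[^]]+\]"))
```

With the tag gone, an environment that disables the gate has no generic regex left to exclude the test; it has to list individual test names.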
C
Yeah, so I like the idea of identifying what the gates or feature names are, and identifying, I don't even know if alpha/beta is the right thing, but the default state, because sometimes we'll have something that goes to beta but isn't on by default yet. And really what I think we want is for a reasonable default config to just be "run".
D
Do we separate how we discover those things and find out what the correct defaults should be for that feature, separately from whether the feature flag is on or off, through the test definitions?
B
And Derek, you have your hand up; can I reply to this before you comment? Yeah, sure? Okay. And yeah, James, I think that's going to the next item. Originally I proposed to have a special tag to indicate that a special environment is required, but at the CI subgroup yesterday we discussed that catch-all Special tag. We already have special tags like LinuxOnly, or Serial, Slow, Disruptive and performance ones, and a catch-all Special tag may look okay in theory, but in practice we will likely need somewhat more detailed special tags. So we suggest having unique tags for every single configuration we require: a GPU tag will be there, and a runtime class tag may be there, but it will be a unique tag per configuration that we require, and we document it somehow in the comments for the test, so the next person investigating it will know what to expect.
D
About that: Performance, Disruptive, Slow, Serial, LinuxOnly are attributes of either the environment we're running in or what kind of tests we want, so adding to that is even worse; it's not going to help us. If that bag of things expands more, it's going to be more confusing, I feel.
F
So I just want to state what I think is obvious, but sometimes it's useful to make sure we all agree. Historically, we started out with some GPU features built into Kubernetes, and that was an error. So I think we just need to make sure we all agree that the Kubernetes project itself only needs to test the contracts that it makes as a project, but not features or layering that consumers of those contracts then leverage; those are theirs to test.
F
So, for example, I would assume that no GPU feature testing is ever done in k/k; only device plugin mock plugin testing is done in k/k. And where we have some debt, where something may or may not be present in existing test suites, it's a call to action to remove those. We had GPU-specific metrics in the kubelet at one point that we worked with NVIDIA to remove, or are in the process of removing.
F
But I just want to make sure we all kind of agree that some of these cases are similar. With runtime classes, for example, that was meant to abstract away runtimes like Kata or maybe gVisor. I would never expect the Kata tests to be present in k/k, and I just want to make sure we all agree with that, in the same way that k/k shouldn't be exercising CSI driver plugins that are not sourced from the Kubernetes community itself.
D
I agree with you, Derek. There's one additional twist: we should let the Kata folks be able to run our stuff where it exercises their special things as well, right?
F
Like NVIDIA: I would expect the NVIDIA GPU operator to write its own testing, or the vendor providing the device to write their own testing, and they don't need to run a full k/k suite. They just need to run the suite that verifies their intersection with the device plugin API.
D
But Derek, that's their issue, right? If, for NVIDIA, we allow them to switch from the mock version that we have to the version that they have, they can exercise the full suite of tests that we have. They do that right now; many, many of the people in the ecosystem rely on our tests, and then they'll have a way to run their stuff with our tests.
F
Yeah, but even within node conformance, we don't have any tests that validate the behavior behind a runtime class abstraction, right? I'm aware of many runtime class configurations in the world that either loosen or tighten how a container is containerized.
F
So I guess, yeah, I wouldn't expect us to have anything more unique than "I asked for a device from a mock plugin, I got said device." Where we struggle a little bit right now is handling exotic architectures or topologies where things intersect, so NUMA, multi-NUMA.
F
I
guess
arm
those
types
of
challenges
that
come
up,
but
at
least
for
gpus.
I
just
want
to
be
clear,
like
I
don't
think
if
there's
anything
gpu
around
it,
it
should
be
a
thing
that
we
work
to
finish
removing
because
we
had
started
a
long
path
to
to
do
that
at
least
and
personally,
I
would
put
a
hold
on
any
tests
that
tried
to
exercise
something
unique
behind
a
runtime
class,
because
the
whole
point
is
to
keep
it
opaque
and
so
anyway,
that's
just
all.
I
want
to
get
out.
E
But it's also useful to have a sense of whether something is going to break a more realistic use case, especially if there are a couple of things we can do that cover a large range of feature usage.
F
Yeah, so I know at Red Hat we've run some machinery to test things like NUMA alignment with particular device topologies, but we can't keep up with the scale.
F
I
don't
have
a
clear
path
on
why
not
what
I
would
do
is
at
least
I
try
to
go
back
to
where,
at
least
at
red
hat,
we
have
capacity
to
try
to
go
and
improve
those
mock
plug-ins,
where
in
some
cases,
it's
difficult
to
write
that
mock
apps
in
a
concrete
device.
So
I'm
sure
other
vendors
or
community
representatives
can
share
that
same
challenge.
E
Yeah, figuring out the right boundary there is hard. I don't think, at least short term, that a clean hard boundary is necessarily best, but that falls into the range of "it's complicated".
C
If that's the case, then maybe we should consider removing the test, or reworking the test to be able to run against a mock. If it's a special configuration where it's not an extension point, it's just "this is an optional thing and you can configure a cluster this way," that seems more reasonable: like the presence of a runtime class, or turning on service account token signing, or some optional thing you can configure.
B
Okay, makes sense, and I think we can handle special configurations through tags and comments on the tests going forward. Another thing we have is NodeFeature. NodeFeature was originally designed as the opposite of node conformance, so everything that is not node conformance needs to be NodeFeature, indicating that it's not running on every single environment or every single distro.
B
The problem today is that we have two separate tags, NodeFeature and Feature, and they're semantically similar. The catch is that Feature was indicating special environments, and we mostly filter Feature out, while NodeFeature was indicating special functionality, and we query by NodeFeature. So it's opposite use with the same meaning, and this opposite use is caused by the fact that most node features work fine on the environments we run our CI on. We just query by NodeFeature and don't even filter out any special features.
B
Today,
merging
them
will
be
writing
semantically,
and
I
run
some
analysis.
There
is
not
much
intersection,
so
you
won't
lose.
I
mean
with
the
cleanup
that
needs
to
be
happen.
We
will.
We
will
get
the
right
point
and
then,
if
you
start
keeping
start
applying
them
the
way
I
we
changed.
The
magic
in
number
one
so
feature
is
no
longer
indicating
special
environment.
Then
it
will
be
exactly
the
same
semantic
and
we
can
slowly
moved
into
like
right
usage
of
notification
feature
and
it
will
be
the
same
so
same
same
tag.
B
Okay, and then node conformance. Node conformance is a set of functionalities that can be run everywhere and that confirm the node is operating the way it should operate given the environment. Node conformance tests may still have special tags, like special requirements where a special environment is needed, but if everything is configured right in the environment, those tests need to pass every time.
D
Hang on, Sergey, I'm still stuck on number three.
D
Renaming NodeFeature to Feature: right now, as far as I can see, NodeFeature is present in the SIGDescribe or whatever in the test, and it is present in the test definitions as a regex switching it on or off, right?
D
So when you say rename NodeFeature to Feature, does it also imply that we will add it to the list of features we have, the alpha/beta things that are listed in there? There is a special Go file where we have the list of feature gates and whether they're alpha, beta, or GA, and whether they're locked or unlocked. So does it mean we have to add to this list? For example, there's a NodeFeature for the GKE environment, right, or our garbage collection one, so will that end up in the list of features?
B
No,
this
is
orthogonal,
so
number
five
introduces
feature
gate,
so
I
think
we
need
to
distinguish
features
at
this
permanent
indication
that
this
thing
requires
like
not
working
everywhere
and
feature
gate
indicating
the
functionality
is
being
developed.
B
So
I
think
feature
gate
will
be
something
that
you're
talking
about
that
go
through
alphabet
and
duplicated
stages,
and
we
can
make
it
even
stronger
typed.
You
can
try
to
have
a
helper
method
that
will
require
you
to
pass
a
feature
gate
as
a
parameter.
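The strongly-typed helper idea could look something like this (a Python sketch with hypothetical names; the real helper would live in the Go e2e framework):

```python
# Hypothetical sketch of a describe-style helper that forces the author to
# name the feature gate, so the dependency always appears in the test name.

def feature_gated_describe(name, feature_gate):
    if not feature_gate:
        raise ValueError("a feature gate must be passed explicitly")
    return f"{name} [FeatureGate:{feature_gate}]"

print(feature_gated_describe("exec probe honors timeout", "ExecProbeTimeout"))
# exec probe honors timeout [FeatureGate:ExecProbeTimeout]
```

Because the gate is a required parameter, a test author can't forget to record the dependency the way they can with a free-form tag.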
B
Something like that. Feature, in contrast, will be more permanent: once you apply Feature or node conformance, it will stick forever. You don't need to remove it any longer, unless the feature graduates to node conformance once it's applicable everywhere.
D
So we'll have Feature, which is there in the test definitions and test tags, and we'll have FeatureGate, which is going to be there in the Go file and can be toggled on or off from, say, the kubelet command line or the API server command line.
B
I think we can do it for all Feature tests. NodeFeature is mostly used for node, but NodeFeature is also sometimes used by SIG Storage, so I will go talk to SIG Storage when the change is needed.
C
Right, I'm strongly in favor of resolving the Feature confusion for all the e2e tests. The document seemed scoped to node stuff, but.
D
Yeah, please go ahead. You skipped four and went to five; now you can go back to four.
B
Yeah, it's really hard to understand three and four without five. But coming back to four, node conformance: the proposal is to document it, and the problem we discussed yesterday is the naming, meaning that node conformance has the word "conformance" in it, and that may be super confusing.
B
We entertained some possibilities for a name in the node CI subgroup, and we didn't come up with a very good solution; some examples are "capabilities" and "behaviors." But we need something that says: this is functionality that works on every distro. Conformance is the right word here, but it's kind of taken, and it conflicts with how conformance works in the certification program.
D
Yeah, the problem here is: what is the list of tests that should run when we run node conformance? That's the usual problem, right? When somebody like containerd or CRI-O runs something that they think is node conformance.
D
Did all the tests run or not? That's the usual problem: you can't easily tell whether all the tests that were supposed to run actually ran. It's the same problem we have in conformance, and there we tackled it by adding a YAML file which has a consolidated list of jobs, sorry, tests that need to run. So I have a feeling that this will strengthen the need to do something like that here also, I think.
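The consolidated-list approach mentioned here boils down to diffing an expected list against what actually ran (a Python sketch; the test names are illustrative, not real entries from the conformance YAML):

```python
# Sketch of checking a run against a consolidated list of required tests,
# as conformance does with its YAML file. Test names are illustrative.

expected = {
    "[sig-node] Pods should get a host IP [NodeConformance]",
    "[sig-node] Probing container should be restarted [NodeConformance]",
}

def missing_tests(expected, ran):
    """Tests that were supposed to run but are absent from the results."""
    return sorted(expected - set(ran))

ran = ["[sig-node] Pods should get a host IP [NodeConformance]"]
print(missing_tests(expected, ran))
```

A non-empty result means the suite silently dropped required tests, which is exactly the failure mode being described.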
B
I
see
yeah,
I
mean
it
basically
goes
down
to
conformance
being
having
very
similar
meanings
of
many
toolings
that
we
have,
for
conformance,
may
be
applicable
for
not
conformance
going
forward
yeah
one
problem
with
not
conformances.
If
environment
doesn't
support
something
then
node
conformance
is
not
supposed
to
run
like
this.
That's
not
supposed
to
run,
and
it
leads
me
to
this
discussion
about
feature
gate
and
beta.
So
once
feature
gate
goes
to
beta.
D
Hang on, just before you go there. There is one more use case that people use node conformance for, and I just want to make sure you realize or understand that: people who are developing custom CRIs, or Virtual Kubelet, or other alternate implementations.
D
They end up running node conformance as well, so let's make sure we cover their use case too, so they get some sense of "okay, the kubelet equivalent that I'm writing is as good as what is there in the community."
B
Okay, yeah, that's a good point. Today, for instance, all runtime class tests are marked as Feature, while, per what we discussed today, some of those tests need to be marked as node conformance, because it's supposed to work: you're supposed to be able to put a pod into this runtime class, even if the behavior inside is undefined. So some runtime class tests may be marked node conformance going forward, and it's a good thing that runtimes will be able to tell that this functionality is actually working. So yeah, thank you.
B
It's a good comment. So what I wanted to discuss is node conformance and beta features. When a feature is in alpha, okay, when the functionality is in alpha ("feature" is an overloaded term, unfortunately), we'll apply the FeatureGate and Alpha tags.
B
We
also
will
need
to
apply
feature
tag
because
alpha
everything
is
an
alpha
requires
special
environment.
So
feature
is
needed
there,
but
then,
when
it
goes
to
beta,
we
need
to
remove
feature
tag
and
one
once
we
remove
feature
attack.
We
need
to
mark
this
job.
As
like
I
mean
we
remove
feature
because
it's
not
alpha
any
longer,
but
the
question
is:
do
we
add
not
conformance
there
or
we
don't
add
any
tags,
while
it's
in
beta,
so
beta
indicate
that
it
can
be
turned
off,
so
it
may
not
work
on
all
environments.
B
In other words: if the feature is applicable everywhere, mark beta features as node conformance; or, in the other case, when a beta feature is not applicable everywhere, keep the Feature tag, and the Feature tag will indicate that not all environments or distros will need to execute this set of tests. So yeah, this is one of the questionable points about node conformance.
C
Not knowing a lot about node conformance specifically, it seems really weird that before a feature graduates to GA, we would remove the things that identify the test as being related to that feature. I think this goes back to the two dimensions I talked about earlier: what's the feature, and what's the default state. And maybe there's a third one, which is: do we expect this to be applicable to all environments? But if the feature hasn't graduated, removing the identification that the test is related to that feature seems really strange.
C
Maybe it's just the name "Feature"; in my mind it's really easy to conflate with feature gate. Maybe there's a different term or a better word, so it might be worth getting feedback on what people think it means, and if eight out of ten people think the same thing I thought, you might want to pick a different name.
B
Yeah, this is a confusion of terms, right? So it goes back to the earlier comment that Feature might be renamed, as a bigger scope of change, right?
D
Okay, so let's stick to just what you have on screen right now, because this itself is a lot. One way to think about this is: what is the implication for all the other SIGs who are not currently paying attention? When there is a prefix with "node," they say, "oh, this is SIG Node stuff, so I'm not going to worry about it," right? Whether it is NodeFeature or things like that, node conformance, they just say, "okay, this is SIG Node stuff."
D
I think we need to parse it out into two different steps, so to say.
B
Okay, yeah, extending it to all SIGs is definitely great, and what you're also saying is that node conformance is very node-specific, so other SIGs may not pay attention. Yeah, another reason to have a better name.
C
Right, and automation to catch this. If we're saying e2e tests need to be tagged in a certain way, then we'll definitely want that: we can go fix up existing e2e tests, but it'll immediately drift unless something is enforcing that e2e tests have to have these tags, that they have to map to these features, or that they have to say "I don't depend on any features."
C
The declared enablement has to map to the actual enablement; something has to enforce that those things match, or they'll get forgotten or copied and pasted, and we'll be back where we are today.
D
So the only other thing I can think of is: what is the value of these changes, the first bucket we were talking about, to the rest of the SIGs, and what do they have to change in their normal cadence of how they do work? Take SIG Storage, for example: what is the value they get out of this, other than having to go figure out all these things, like "what do I change?"
D
"What do I flag as a Feature?", hoping the feature gate remains the same as it is now. They have to go around looking at their test suite; they have to fix up what they have today.
D
I hope they don't have to do that. And then they have to scan through their jobs and make some modifications there. I don't know the implications, what this will turn into in terms of work that needs to be done by the different SIGs.
B
So if you start a job like that, it will run most of the tests that don't indicate they depend on a feature gate. Then we can catch many situations like the ones I found: we'll catch tests that depend on feature gates but don't indicate it in their description. It's another way to catch this kind of problem, and I think it's important to start testing with GA features only, given the push for no permanent betas.
C
Beta APIs and features turned off, so that's great, and that has actually caught many, many things where we were accidentally taking a hard dependency on a beta thing, right? It sounds like you want the same type of thing for node conformance. Yeah, I'm plus one on that; that makes a lot of sense.
B
Okay, so that is the TL;DR of all the changes, and I will summarize the feedback.
D
And I do want to poke at one more thing before we go, since we have five minutes left. I have a feeling that we need something else, which is a list of key-value pairs that can be picked up by the test harness, which we can set either in the CI job definition or bake into the environment where it is running, and which can be looked up and checked against. I don't even know what to call it, but it's instead of confusing and conflating the feature and the feature gate with whatever is present in the environment.
D
Do you see what I'm trying to say? It's like looking at an environment variable to say, "hey, yes, I should be able to use this feature gate, because I know that environment variable is on," right? That's one way to think about it.
So if we can have some kind of configuration about what is available in this environment, which the feature gate logic can make use of, I think that might be a better way to look at it.
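The key-value environment idea could look something like this (a Python sketch; the variable name and capability strings are made up, and the real mechanism is still undecided):

```python
import os

# Sketch of an environment declaring its capabilities, with the harness
# surfacing a mismatch as a loud failure instead of a silent skip.
# TEST_ENV_CAPABILITIES is a hypothetical variable name.

def require_capability(required):
    raw = os.environ.get("TEST_ENV_CAPABILITIES", "")
    available = set(filter(None, raw.split(",")))
    if required not in available:
        raise RuntimeError(
            f"test requires {required!r}, environment provides {sorted(available)}"
        )

os.environ["TEST_ENV_CAPABILITIES"] = "runtime-class,swap"
require_capability("swap")   # passes quietly
# require_capability("gpu")  # would raise instead of skipping
```

The difference from the skip hack earlier in the meeting is that a missing capability produces a visible error, which matches the "surface the problem instead of skipping" point.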
D
Right, and then we can say: in this CI job this feature gate is on and we're trying to test this specific feature, but the environment says that whatever I need is not available. So it's easier to surface a problem of that sort, instead of skipping, which is what we do today. You see what I'm saying?
D
Right, so again, we need to separate it into two things. One is to come up with a proposal, a KEP, where we enumerate the changes that flow across all the SIGs: the environment configuration parameters, the feature gates you're talking about here, and any implications like "the feature gate is at beta" and not switching things off. That's one bucket, which is applicable to everybody, right?
D
How it applies to existing node conformance tests would be like a second KEP. It depends on how SIG Node wants to deal with it, whether you want to just start with a Google Doc and not worry about a KEP; I think I'm open there. But we definitely need that first bucket of "here are the changes we are proposing that will be needed across all SIGs," and then we debate it with everybody else.
B
Okay. I wonder how many changes we can make without a KEP, the ones which are node-local, I mean node-specific; and yeah, for everything else, okay, I will start a KEP.
D
Yeah, there are some things that you can still do right now, like running node conformance with all the flags switched off. You can still do that with whatever we have right now.
A
Okay, okay! Well, we are out of time here, so unless there are any quick comments, I think we should wrap it up.