From YouTube: Kubernetes SIG Node 20200824
A
Okay, this is the SIG Node CI subgroup meeting. It's Monday, August 24th. Hello, everybody. So, let's get into the topics. The first three topics were suggested by Morgan. We can either try to parse them out now, or start with the next topic and then come back to them.
A
I would suggest we start with the later topics and then get back to those. Okay, the next one is mine. I've been looking into this; should I share my screen, or would you rather follow along in the document? Which is better? Any opinions?
A
Okay, and I will make it fit the screen. So, let's get into this topic. I've been looking into some feature flags. There are pull requests in the Kubernetes repository to remove feature flags, and one of them removes the feature flag for the startup probe. As part of that PR, the end-to-end tests were moved into different files.
A
I was looking into what that is, and I realized that we have this test area called node-kubelet-serial, and there is also a node-kubelet-serial-alpha. The difference between them is that the regular serial tab doesn't include any alpha features, while this one includes alpha features only, and it has the startup probe feature flag set to true. So presumably it needs to run the startup probe end-to-end tests, but it turns out that it doesn't run anything. The entire alpha area doesn't run any tests; it's empty.
A
So I was looking into it. Historically, when the startup probe moved from alpha to beta, it was removed from these tests, so they stopped running. But then I looked further.
A
I looked into another test. For instance, we have this CPU manager serial test, and for some reason it is not reflected on the chart at all. So let me open TestGrid.
A
And we look at the alpha tab. Sorry, the Zoom windows are covering part of the screen for me, so it's a little bit hard to navigate.
A
Yeah, this one is completely empty. So I'm just curious whether anybody knows the history of this alpha tab, and whether we need to make sure the tests are actually running as part of it.
B
I can share some general comments about the CPU manager. We definitely want that to run. If I look under the Working Group Resource Management tab, it is there. I was looking at the job just now to see if it actually ran any tests. Okay, the test is running. I suspect it's just that the CPU manager used to be alpha; I think it's actually beta now, and it really should probably go to GA. I think that's one of the SIG Node items in the document that Derek sent out, and I did see some people sign up as interested in helping get it from beta to GA.
B
So I suspect what it really boils down to is that the job just needs to be updated, at least to go from alpha to beta. So maybe we just need an issue to track getting the job updated.
C
David here. I just want to say, about the CPU manager: I remember it was under the serial alpha tab before, but I sent a PR to remove the alpha tag from it, because it wasn't an alpha feature anymore. I think the reason that tab isn't showing anything is that there are no more jobs tagged alpha that are also serial tests. The last one that was there was the CPU manager, but it has its own section under the working group tab, as you mentioned.
A
Okay, do we need this serial alpha tab at all, since it doesn't run any tests? It just starts up the test environment and shuts it down. Should we just remove it?
B
It might be useful if there are some new features coming along that would be alpha. I don't know; maybe David, you have some input on that?
D
A quick comment on the previous one. If you go back to TestGrid, click anywhere to open up a Spyglass page, and read through the logs, click on the link that says it skipped four thousand and something lines, or on "show all hidden lines."
D
There is going to be some text in blue that will appear somewhere around line 300-and-something.
D
Yeah, so the blue is usually where Ginkgo says: okay, we have this test, but we're going to skip it for some reason. Can you keep scrolling down, please?
D
There should be something that says "CPU Manager [Serial]" and so on; it's going to match exactly the same string that you have in the tab.
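The log-reading step being described can also be reproduced from the command line. A minimal sketch, assuming a downloaded Spyglass build log; the file name and the log contents here are invented for illustration, though `[SKIPPING]` matches the marker Ginkgo v1 prints for skipped specs:

```shell
# Write a tiny sample log (invented contents), then locate the Ginkgo
# skip markers; the full spec string appears right next to each marker.
cat > build-log.txt <<'EOF'
S [SKIPPING] in Spec Setup (BeforeEach) [0.001 seconds]
[sig-node] CPU Manager [Serial] [Feature:CPUManager]
  requires the CPUManager feature gate; skipping
EOF
grep -A2 'SKIPPING' build-log.txt
```

The spec string printed after the marker is the same one shown in the TestGrid tab, which makes it easy to confirm whether a given test ran or was skipped.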
A
Yeah, it seems that, as David said, he removed this tag, so maybe my source tree is a little bit outdated. I need to double-check that.
D
So maybe this test is another one of those that just has a messed-up configuration. For additional context: for node e2e, there are two pieces of configuration. One of them is the prow job, which is the thing that Sergey was showing a little while ago.
D
Actually, every single node suite that we test has at least three pieces of configuration. The prow job is the one that you have on screen. Could you open up another tab on test-infra, please, just to show the other one?
D
Okay, jobs. If you compare the prow job configurations, there's one that sets node-args with an image-config file, and for node serial alpha it specifies that it is using the file in this repo called image-config-serial.yaml. And these files, if they don't specify a machine size or anything of the sort...
D
You
might
not
be
running
because
we
are
not
using
the
the
proper,
the
proper
vm
for
it,
which
could
be
another
possibility.
A
Yeah, I can create an issue to investigate. What bothered me is that the only feature gate here is StartupProbe=true, and I would expect, for the CPU manager for example, that the CPUManager feature gate also needs to be enabled.
A
Okay, so I would expect the alpha tests to enable a bunch of feature gates, and I was surprised to see only the startup probe here. So I will create an issue to investigate, and we can discuss it inside the issue. But if there are no serial alpha tests any longer, we can just remove this area for now and restore it when needed.
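To make the two configuration pieces being discussed concrete, here is a rough sketch of how a node e2e prow job wires in an image-config file and feature gates. All names, paths, and the machine type below are illustrative assumptions, not the actual job definitions; the point is that both the feature gates the suite needs (StartupProbe and CPUManager, in the case discussed) and the VM size have to be spelled out explicitly.

```yaml
# Hypothetical prow job fragment (illustrative only, not the real config):
# the node e2e runner is pointed at an image-config file and given the
# feature gates the alpha suite is expected to exercise.
args:
  - --node-args=--image-config-file=jobs/e2e_node/image-config-serial.yaml  # path illustrative
  - --test_args=--feature-gates=StartupProbe=true,CPUManager=true
---
# Hypothetical image-config-serial.yaml: if no machine type is specified,
# the job may land on a default VM that cannot exercise the test
# (for example, CPU manager tests that need enough cores).
images:
  ubuntu:
    image: some-ubuntu-node-image      # illustrative
    project: some-gcp-image-project    # illustrative
    machine: n1-standard-8
```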
A
Okay, and I also wanted to ask, and I think it's somehow related to Morgan's question: is there any knowledge of which tests we run and which we don't? Do we have any statistics, or any tools that would validate it? For example, that some test in the Kubernetes repo is never run as part of TestGrid.
D
There is an issue open in the project board for documentation, and Amin actually made an Excel sheet of all the Ginkgo tags that we have available. I can post it in here. But overall, I think it would be useful if anyone is interested in building something that could answer this kind of question.
A
Okay. Do you want to go next? Okay.
D
Yes. The next two items I just wanted to put in here to propagate knowledge. This week the release team is actually going to start pulling all the levers for releasing 1.19.
D
And
this
is
a
this
is
useful
for
now,
and
it's
going
to
be
just
for
a
useful
in
the
future.
D
All
the
the
schedule
for
every
single
release
is
it's
always
going
to
be
up
in
the
in
a
similar
place,
as
the
link
that
I
posted
in
the
notes
under
the
secret
list,
repo
in
releases
and
then
the
specific
release-
and
I
guess
the
tldr
for
this
team
is,
if
you
well
you're,
working
on
some
tests
or
something
and
if
you
see
something,
that's
wrong
and
you
think
that
is
going
to
affect
people
using
kubernetes.
D
Please
thank
everyone
and
just
just
to
make
it
just
to
make
sure
that
the
signal
gets
back
to
the
release
team
in
case.
That's
something
something:
release
blocking
is
actually
hiding
in
somewhere
in
the
logs
or
a
test
grid,
and
I
guess
quick
stop.
Since
I
have
the
next
bullet
item.
Does
anyone
have
any
comments,
questions
suggestions.
A
So
in
testing
for
pretty
soon
we'll
switch
to
119
conformance
right,
so
we
don't
need
to
the
only
test
you
run
from
119
is
conformance
test.
It's
not
like
whole
test
grid.
D
So, for example, all the other jobs that don't have a specific release are most likely running against the master branch, and if any failure starts showing up in one of those jobs, that is also one of those things where it would be good to collaborate with the release team and just say: hey, this suddenly broke yesterday.
D
Jobs like sig-node-kubelet, for example, are an alias for an e2e job. What I've seen most people do is state explicitly when a job is conformance, for example sig-node-kubelet-conformance-1.19. Other than that, and this is completely an opinion: if we have any jobs that are running a conformance test suite and it isn't obvious, I think we should probably rename them or put some explanation in.
D
That applies in the 1.19 branch, and also to anything that doesn't have a specific release, because if it doesn't have a specific release it is very likely running against the master branch, and the master branch is frozen, so it is still roughly in sync with the 1.18 release, or with the upcoming 1.19 release.
D
If
not
because
I
hear
silence
the
last,
but
the
last
item
that
I
added
to
the
list,
it
was
just
another
invitation
for
people
to
come
and
take
a
look
at
the
project
board.
D
I
think
I
think
a
reasonable
pace
to
not
over
do
not
over
task
individuals
and
to
not
realize
to
not
rely
way
too
much
on
heroes,
because
there
are
harmful
heroes
throughout
kubernetes
that
do
that.
Do
a
lot
of
that
do
a
lot
of
work.
I
think
it
will
be
reasonable
to
every
now
and
then
and
for
people
who
are
interested
in
and
thank
you
doing
my
doing
more
work
here.
Just
go
through
the
issues.
D
If
you
see
anything
that
you
find
interesting,
please
ping
people
in
there
ask
them
questions
and
the
only
other
thing
that
I
that
I
think
I
cannot
do.
This
is
to
a
way
to
always
ask
to
always
ask
questions
to
make
sure
that
you
understand
what
is
going
on
in
that
issue,
because
if
you
don't
understand
it,
there
is
a
very
good
chance
that
someone
else
doesn't
understand
it
and-
and
in
that
case,
just
explaining
something
out
a
writing
documentation
or
anything
anything
that
gives
us
an
explanation.
A
Okay, thank you for reminding everybody about the issues. I would suggest we go back to the first items. Morgan is still not here, but some of those items are very interesting. So the first is: kubelet-master and kubelet-conformance seem like duplicates.
A
I
think
they
are
to
the
extent
duplicates
anybody
knows
the
history
or
it
just
I
mean.
Do
we
just
need
to
investigate.
D
It
I
think
we
also
may
need
to
investigate
this
one.
It
sounds
really.
It
just
feels
wrong.
The
fact
that
they
confirm
a
the
conformance
and
in
the
normal
master
job
are
exactly
the
same,
because
that
kind
of
implies
that
every
single
thing
every
single
feature
that
they
get,
that
the
cubelet
has.
It
is
a
ga
and
conformance
ready,
which
is,
I
don't
know
it
just
fails.
It
just
feels
weird,
like
it's
weight.
It's
like
it's
way
too
nice
to
be
true.
D
Not
always
so
so,
depending
technically
people
can
set
up
their
jobs
in
whatever
way.
Whatever
way
they
want
is
so
if
you
go
through
previous
branches,
you
don't
only
have
conformance,
and
it
is
good
to
not
always
have
conformance,
because
conformance
are
the
features
away
from
alpha.
Beta
stable
have
been
stable
for
a
while,
and
they
are
completely
and
they
are
not
in
the
test,
for
them
are
not
flaky
and
like
the
set
of
jobs
that
meet
all
those
requirements
are
relatively.
A
Okay, and a related question: we are still trying to investigate all the blocking tests, like the SIG Node blocking tests. I wonder, do we need to repeat all the conformance tests there, or is the conformance tab a similar but separate tab that we just want to look at separately?
A
So
remember
this
tab
that
we
created,
I
mean
we
haven't
created,
is
still
in
pr,
but
basically
we
want
to
put
we
want
to
answer
the
question
from
last
meeting
saying.
A
If
a
person
is
not
a
part
of
signal,
can
I
ask
quickly
whether
signal
does
okay
or
not
so
we're
creating
this
sig
blocking
tab
dashboard
and
we
want
to
understand
like
we
want
to
answer.
The
questions
is
whether
the
signal
is
okay
or
not,
and
we
want
to
understand
how
we
can
add
tests
into
this
dashboard,
so
we're
definitely
adding
all
the
release
blocking
tests
there.
Now
there
are
different
other
different
tests.
A
We need to understand those. One, for instance, is the topology manager, and it was marked as high in the document that we started with, that Excel spreadsheet.
A
I
think
it
was
marked
high
because
it
was
very
important
for
for
victor,
specifically
and
red
hat,
because
it
was
like
very
important
feature,
but
I
mean
in
general,
it's
it's
a
great
feature
and
we
need
to
like
if
it's
failing,
we
need
to
make
sure
that
it's
like
signal
is
not
healthy
right.
So
we
need
to
notify
everybody,
so
I
would
assume
we
need
to
put
it
into
signal
blocking.
B
Yeah, when I did the spreadsheet (I think we had mentioned this before), I marked it that way because it was important to me. But typically, that topology manager test, that whole feature, would I think be used by customers more in the telco environment, on bare metal, and probably not by the vast majority of Kubernetes users in general. That's just my take on it. So when I think about, hey,
B
Is
this
important?
I
think
it's
important,
probably
for
a
much
smaller
subset
of
customers,
and
I
don't
know
that
it
would.
B
You
know
that
spreadsheet
was
that
priority
or
importance
thing
was
our
first
take
at
it,
and
maybe
it
would
be
good
to
review
that
just
to
say
hey
what
does
the
community
in
general
think
because
that
was,
I
think,
just
a
subset
of
folks-
and
I
know
I've
marked
some
on
there
and
that's
probably
a
bunch
of
blanks
on
there
too.
D
A
kind
of
a
process
for
a
for
a
for
no
for
all
the
blocking
jobs.
I
think
everything
that
we
that
we
work
on.
We
sorry
yeah.
We
are
seeing
signal.
It
should
eventually
land
on
not
blocking,
because
you
know
if
a
feature
is
important
to
one
user
or
another.
Is
that
that's
a
that's
always
probably
going
to
happen?
But
if
we
are,
you
know,
you
know
we're
maintaining
this
piece
of
the
project
and
everything
in
the
in
the
perfect
world.
D
Every
everything
should
work
and
if
something
is
not
working,
you
know,
let's
assume,
that
we
have
10
milli,
10,
000
users,
and
only
five
people
actually
use
the
topology
manager
if
the
topology
manager
is
not
actually
working
for
those
by
five
people.
That
is.
That
is
something
very
really
important,
which
way
that
we
should
take
out
that
we
should
take
a
look
at
independent
of
the
number
of
users.
A
You suggest putting something in blocking based on the number of maintainers of the test? So if a test has enough maintainers who vouch for it to be blocking, then it becomes blocking, and we review this maintenance commitment regularly. Is that it?
D
I
think
there
has
to
be
some
commitment
to
actually
maintain
things
so
that
we
don't
end
up
with
a
job
that
has
been
failing
for
three
years
and
nobody,
no
one
seems
to
is
no
one
seems
to
mind,
but
we
also
should
aim
to
have
every
single
end
to
end
a.
I
think
we
should.
I
think
we
should
have
a
similar
into
the
conformance
test
like
every
single
intune
test
that
is
assigned.
D
It
is
the
same
with
the
goal
in
mind
that
at
some
point
is
going
to
end
up
in
the
conform
in
the
conformance
list
and
every
single
a.
I
think
I
think
it
should
be
similar
to
this,
like
every
single
thing
that
we
do,
we
should
aim
to
at
some
point
be
blocking,
because,
because
I
feel
like
that
kind
of
signals,
people
that
we
are
working
on
this
area
of
the
project,
we
are
actively
maintaining
it
and
if
something
breaks,
that
means
something
something
if
the
job
breaks.
A
The original proposal was to include tests in the blocking set based on a maintenance commitment. So if you, Victor, say you are up for watching this test, then we put it in the blocking set, everybody renews that commitment periodically, and we remove a test from blocking if it no longer has a critical mass of maintainers.
B
So
that's
saying
a
blocking
test
is
only
can
be
blocking
or
a
test
can
only
be
blocking
if
we
have
people
maintaining
it.
Otherwise,
it's
not.
A
Yeah. I think maintenance suggests importance, right? I don't know exactly how to word it, but it's kind of a neat idea: when people care about a test, that means it's important, and if nobody cares about it, then maybe it's less important. So let's say we have the dynamic config end-to-end tests.
B
Yeah, I'm not sure if I could buy into that. I mean, it's just...
B
Do you know what the criteria was for that in the past? Anyone?
D
From my understanding, and like everything this is a little bit subjective, what becomes merge-blocking and release-blocking is just whatever the SIGs think is important, under some kind of model. I'm going to keep using the topology manager as an example, because I know that it is a feature...
D
...that warrants some level of blocking. For example, if the feature was in alpha, then it might still be good to have it marked release-blocking, because if those tests break, that means we introduced a regression or something into Kubernetes, and we need to fix it.
D
What ultimately becomes blocking is somewhat subjective, but in a way we should be aiming to make everything that we do blocking, because otherwise it kind of sends the signal: we are working on this enhancement, but it could be that the enhancement doesn't really work.
B
I don't know that I can provide any more insight into what should be blocking or not at this point.
D
The
only
other
comment
that
I
can
add,
since
we
mentioned
merge,
blocking
and
release
blocking,
is
if
we,
if
we
label
something
as
blocking
in
in
our
section
of
this
grid,
then
it
is
kind
of
only
like
a
hope
that
we
are
working
towards
in
order
to
actually
make
something
release
blocking.
D
We
also
need
to
put
in
the
secret
list
as
test
grid
and
in
order
to
make
something
merge
blocking,
we
need
to
put
it
with
a
bunch
of
merch
blocking
jobs,
so
they
so
thing
is
so
technically
like
right
now,
right
now
what
we
are
doing,
if,
right
now
for
a
lot
of
jobs,
I
think
they
ended
up
in
the
blocking
section,
but
this
is
kind
of
a
I
I
think
right
now.
This
is
more
aspirational
and
it's
a
and
it
kind
of
sends
a
signal
within
the
within
the
the
this
area
of
the
project.
D
A
Makes sense. Okay, so I think we're still circling around this issue: how to filter all these 106 test cases, and how to group them into what we really want to watch right now,
A
What
you
want
to
watch
periodically
and
we
what
we
don't
really
care
about
that
much
so
you're
saying
like
there
is
no
like
what
we
want
to
care
about
that
much
category,
but
we
don't
have
a
clear
signal
or
like
agreement,
how
we
can
mark
something
like
a
series
blocking
something
that
will
be
like
the
source
of
truths
like
what
like
whether
signal
is
healthy
or
not.
B
For example, the conformance tests, I think, are just doing some basic checks where, if they fail, it's bad news. And some of the other stuff, even CPU manager and topology manager, is really optional and not on by default. So if we use those as examples, does it make sense to have something like that blocking? I don't know.
B
I
think,
when
I
think
about
how
what
I
go
about
or
one
way
to
go
about
determining
that
is
again
to
have
a
really
good
understanding
of
what
the
tests
are
doing
and
what
part
of
that
functionality
is
I'd
say
must
work,
and
I
guess
you
could
make
a
point.
Well,
if
it's
in
there
it
should
work
and
it
it's
not,
then
make
it
blocking
them.
B
That
would
be
one
extreme
yeah,
I'm
not
sure
other
than
doing
a
a
review
of
all
the
tests
and
which
would
probably
take
a
few
folks
to
do
that
and
get
their
input
and
sort
of
decide
from
there.
There
may
be
other
ways
too,
that's
just
sort
of
what.
B
Yeah, for example, those tests are pretty much skipped unless they run on a multi-NUMA system. So, for example, when we run them upstream, those tests are not really doing anything.
B
We don't have a multi-NUMA system there, and at one point we were looking to say: hey, upstream, can we get a system with multi-NUMA nodes, spin up pods, and verify the alignment? We have downstream tests that are doing that on bare metal, and we do pay attention to those, and if they fail, we push fixes upstream. So that's maybe a unique example.
A
Okay, and I'm not trying to pick on you; just another example: RuntimeClass. We probably have some tests for RuntimeClass, and not everybody is using RuntimeClass, so do we need to make those blocking or not? Sorry, my windows are misbehaving.
D
So
sorry,
sorry,
just
to
just
to
finish
my
proposal,
because
I
I
think
that's
it.
I
think
that's
a
good
example,
so
what
I
was
trying
to
get
at
up,
for
example,
with
runtime,
let's
say
that
even
though
three
people
might
be
using
it,
we
worked
on,
we
worked
on
it,
we
as
a
collective
signal.
We
are
telling
people
hey,
you
can
use
this.
D
So
if
we,
if
we
are
telling
people
that
hey
you
can
use
this
even
if
it's
not
conformance
ga
or
something
we
should
treat
a
test
as
a
measure
of
stability
of
the
code
that
we
wrote
and
in
that
case,
to
actually
make
my
proposal
actionable.
I
think
it
will
be
the
every
single
test
that
is
may
actively
be
maintained,
that
it
is
being
green
and
it
doesn't
flake
too
often
yeah.
I
think
that
would
be
a
good
candidate
to
add
to
the
to
blocking.
D
If
it
is,
if
it
fails
every
now
and
then-
and
that
is
frequent,
then
we
should
either
fix
it
or
remove
it.
If
it's
failing
is
if
it's
content
constantly
failing
same
thing,
we
shall
fix
it
or
remove
it.
It
would
essentially
would
essentially
like
every
single
job
should
hopefully
end
in
test
blocking
and
if
they
are,
if
they
have
been
green
for
a
while,
and
they
have
people
working
on
them,
then
they
are
good
to
be
added.
D
Yes, and just taking out that part: I would put aside how many people actually use a feature, because I think that gets us into a weird place. I don't know what the correct word for it is, but I don't think taking into account how many people actually use a feature, which is something that was mentioned, would be the way to go.
A
Okay, I think it's a good proposal; I really like it. Okay, so we have two more topics and about ten minutes left. One of them is a link to some document from the past; I think it's a proposal on how to rearrange the tests.
A
I would suggest we take this as homework for next week: read it carefully and try to discuss it next time. Since it was only added to the agenda on Friday, I would suggest putting it on the agenda for next week. Any objections? Does anybody know this document by heart and can talk about it?
A
Okay, and the only area left is the Docker and dockershim tests versus the containerd and CRI tests. The question, I think, is whether we have parity in how well we test containerd versus Docker and dockershim. That is how I understood the question, and I think it's a very important topic: to start testing containerd as widely as possible, on as many tests as possible.
B
I am not familiar with it. Maybe, again, we can reach out to Morgan; it looks like Morgan probably put some thought into it.
A
Yeah
put
in
some
conformance
tests
like
test
the
20
dates
probes,
like
cloud
lives
in
this
prop,
it
would
be
really
nice
to
make
sure
that
we
run
it
for
both
for
continuously
and
docker
shim
and,
like
they
at
least
have
a
equal
priority.
A
So
maybe
maybe
morgan
can
speak
about
it
next
time
or
we
can
investigate
it
as
a
as
a
part
of
an
issue
talking
about
contin
container
d
test
error
like
in
general,
okay,
so
we
only
have
nine
minutes
left
any
other
topics
people
want
to
discuss.
A
Okay, so we have a few action items to discuss and investigate. I will create issues for them, or make sure that issues exist, and let's meet next week; maybe we'll have more information then. At least we have some agreement on how to treat SIG Node blocking, and I really like that we have it.
A
Okay, so if there are no more topics, let's finish a little bit earlier than planned, and happy next week. Happy new week, I guess.