From YouTube: Kubernetes SIG Node 20201207
Description
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
A
Hello, it's December 7th, almost the end of the year, and this is the SIG Node CI group meeting, so welcome, everybody. We have many agenda items today, so let's get started.
B
Sure. I was reading briefly about this, but I think it fits well with what we are trying to achieve here. So the first milestone is specific. I think you have read it and have better context and better opinions about it, but I'll just bring it up here.
A
Okay, so yeah, I've read it. There are many efforts; this effort specifically was created in response to problems we had, I think two releases back, when release-blocking jobs were flaky and crashing, all sorts of problems, and it delayed the release significantly. So this CI group effort was created out of that, and some people also started thinking about how to improve tooling and how to improve...
A
I would say not responsibility exactly, but how to make people more responsible for tests and how to increase visibility into failing critical infrastructure tests. So yeah, I think Wojtek started this effort of creating a reliability working group across all of Kubernetes. We are working on signals specifically, and he's suggesting that we not only help create tools that inform our decisions on what is failing and what we need to pay attention to, but that we also enforce this work by blocking PR merges for a specific SIG.
A
So the idea is that all tests will be marked for a specific SIG, and whenever tests of that SIG start failing, all PRs from that SIG will be blocked. That is ultimately the idea, and there is more in the document about how to make it happen and what additional tooling is needed. So if you're interested, please take a look and read it. I mean, it's a novel idea for me.
A
I see how this enforcement can backfire, especially with a smaller contributor, like a first-time contributor, or contributors who just want to work on their specific feature, because they are like end users who don't want to understand all the problems that Kubernetes has. At the same time, it's a good engineering practice to know that if something's broken, you don't break it more, you fix it first.
A
So I would say that right now all our PRs would be blocked, because right now one of the critical tests is failing, and before that, I think months ago, we had a couple of weeks of investigating a broken critical test. So quite often tests will be broken and we will be in bad shape. At the same time, it will mean that more people will start looking into these test failures, and we may get into better shape faster.
C
I think that would be good, because I think what happens is: how do you even know the CI runs exist, for the most part? Especially if you're, like you said, a single-issue contributor kind of thing, you're definitely not going to come back after your PR merges and go, oh wait, the CI seems to be broken irrespective of my PR. That's my point.
B
For the jobs as well, this is one part of the process that Josh set up, and I think this is kind of working. I don't know.
A
Yeah, it's working in the sense that we know about these failures. It doesn't work in the sense that we may not care about them for a week or maybe even months.
A
Okay, yeah, I think that's the problem that this proposal is trying to address.
A
I haven't been in Kubernetes long enough to understand how critical it will be for the community. It will definitely be great for explaining what you have been doing and why you are spending so much time on it: because all the tests need to be clean. And yeah, that may be a good thing. What do people think about the release process and this merge block? Maybe Jorge, you have some opinion.
A
My feeling is that, at least in the last release, we had so many features going hot and being nursed in the last week or less before code freeze. And specifically in this release, I was investigating a critical test failure (I did it very slowly, because I had other work), but that critical test would have blocked all the merges in SIG Node, even though it wasn't SIG Node related.
A
It was from SIG Storage, but that would have been a problem, and then the release would have missed so many features, because all the features were running very late. Jorge, do you have any opinion about how to front-load this improvement work, or maybe how to deal with it if this kind of issue appears?
A
No, I'm not... I mean, in this specific example a different SIG introduced the bug, but I can easily see how flakiness can be introduced by one of the PRs in SIG Node, and that PR would block all the improvements that we planned for the release but that cannot be merged, because people start merging for a release very late, like a week before code freeze.
A
It's more like, I mean, imagine code freeze is Friday and today is Monday, and many people were planning to merge their PRs and clean up. Let's say we have 10 enhancements running hot for this release, so 10 big PRs need to be merged, and this is the situation of the 1.20 release.
A
We had so many PRs being merged in the last week, and now one of these PRs, or maybe even a PR before that, introduced some flakiness in the tests, and we are investigating this flakiness for maybe three days. So after three days we only have one day left before code freeze, and we still have all these features to be merged.
A
So I'm just afraid that it will call for a lot of heroic effort to unblock all those features to be merged, and we will be in that situation every release.
D
Yeah, okay, now I think I understand. And this is not so much an answer as a related thought that came to mind while listening to the conversation. I think a lot of the contributions that we get, and I don't know if they are the bulk of it or just a lot of them...
D
Okay, some of the contributions that I see pop up every now and then are from people who are interested in a feature. You know: I use Kubernetes, I want to do X with it, and I'm going to make a PR for it. And, as you all already mentioned, it is a little bit too complicated to...
D
...to try to onboard new contributors. You know, if you want to fix something, not only do you have to figure out how to do it, but now you have to figure out how CI works; you have to learn the whole stack. Prow is amazing, but it's not the simplest thing to wrap your head around, and Prow jobs, I know, are a monster.
D
That's something that we can strive for: to make things simpler, nicer, better. And you know, everything can always be improved in one way or another, especially if we have a lot more feedback, which is the case. But I think it might also be a people issue, which I guess the reliability proposal tries to get at a little bit more. If you want to contribute to Kubernetes, it's amazing and we welcome it.
D
But I guess to some extent we also need people who are willing to commit to not only a one-time contribution, but who are willing to contribute a little bit more of their time, and we need to figure out how to make that possible, because we cannot expect everyone to become an open source contributor; people have jobs, families, different life situations. But we need to, I guess we need to, and I don't know if this is possible...
D
We also need you to be available, at least for some period of time, to take care of that feature in case it breaks or in case it needs something else. And if you want to go away, because of time constraints or whatever, we also need you to at least leave some notes, something that we can use to help train someone else. At least something like that. And I know this is a very complicated wish list.
D
But if we had more things of this sort, I think that a lot of the things that we are constantly firefighting would be a little bit simpler, because people come and go, but a lot of the knowledge is lost, and then we are trying to figure out what they were thinking and how to get things back to the original state. Which might be a little bit related to, you know...
D
We have to do some heroic act the week of code freeze, and then that's going to take us four days, and then we have to magically come back to our own work and actually figure out what needs to happen.
A
Yeah, that makes sense. I just don't know; this effort that we are running right now seems to be working well, in the sense that we are constantly making small progress and we exchange knowledge. So I feel that we have enough people with the knowledge to contribute, but I also see that the progress we're making is quite slow, so I think this proposal is the other extreme.
A
I don't know whether there is an in-the-middle proposal of sorts. Maybe if some of the tooling gets developed, or maybe after the initial firefighting, we can get into a stable state, a steady state. But I don't know.
D
One thing that comes to mind is that, at least Kubernetes-wise, I think it might be good to have an official onboarding, not just an ad hoc "hey, let's talk about something," but something like, you know, if you went to a company and started working on something: there's a solid onboarding that people go through, and at least in that onboarding you get introduced to a lot of the things you will get to do.
D
Something hands-on, with a couple of things we can do well. And right now a lot of things in Kubernetes definitely need some really huge improvements, and those are things that we need to do right now. But you know, it is very possible that a lot of the people in this meeting are going to take a vacation, go on leave or something, and not be available, and it's like...
D
And I guess in my head that is a proposal for reaching the middle state. Because what I see in a lot of places is that if you want to become a maintainer, you really have to struggle through it, and if you want to struggle through it, you need to be able to commit huge amounts of time, which for a lot of people might not be desirable or possible.
A
Okay, does anybody else have any opinion on this proposal, like what can be changed, or what other in-the-middle options there are? I like this in-the-middle proposal to just educate people better; I think this would be great. And I mean, when you know a lot about end-to-end tests, you don't feel threatened anymore, because you already have some tool belt. I mean, you still constantly hit some other issues.
A
Like, recently I struggled to configure a RuntimeClass for my kind cluster, but anyway, if you don't need those kinds of edge cases, then it's fine.
A
Okay, then let's go to the next item. Jorge, do you want to talk about it? Yeah, go ahead.
D
Just throwing ideas out there and describing the problem a little bit more. But overall, whenever we are running some end-to-end tests, whenever we are running a test, we have a Makefile and a bunch of bash in k/k, in the hack directory. That bash just fills in some default values, processes flags, and then it calls an actual Go program.
D
The Go program that it calls, in our case, and I think most of the time in our case, is the one that lives under test/e2e_node, which is what I'm referring to as the node test suite. That test suite has a remote runner and a local runner which, thanks to, I mean, this has been documented better now, but the remote runner essentially spins up a VM on GCP, SSHes the artifacts over, lets them run, and we just execute an end-to-end test.
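(For reference, a minimal sketch of that entry point, assuming the FOCUS, SKIP, and REMOTE variables commonly used with the test-e2e-node Makefile rule; verify the exact names against hack/make-rules/test-e2e-node.sh before relying on them.)

```bash
# Sketch of invoking the node e2e suite from a kubernetes/kubernetes checkout.
# Variable names are assumptions; check hack/make-rules/test-e2e-node.sh.

# Local run: builds the kubelet and test binary and runs them on this machine.
make test-e2e-node FOCUS="MirrorPod" SKIP="\[Flaky\]"

# Remote run: the remote runner provisions a GCE VM, copies the artifacts over
# SSH, and executes the suite there (GCP project/zone setup is required).
make test-e2e-node REMOTE=true FOCUS="MirrorPod"
```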
D
The thing is that the test suite is essentially the one in charge of configuring the test environment: you know, passing the configuration, figuring out how to run the kubelet, figuring out how to run your tests, any dependencies that you need, it's going to figure them out for you. And a lot of the information that it needs in order to figure out how to do things is also present in the cluster e2e framework.
D
The suite just hacks its way around the cluster e2e framework, and I think that if we looked into how we pass information (flags, runtime variables, anything that the test environment needs) into the node test suite, and we developed a proper API for it, something cleaner, something more standardized, that would be an amazing contribution, and it would make running and understanding...
D
...node tests a little bit simpler. So again, the document is just stating the problem statement that I just made, in some other ways; it gives a couple of examples. But overall I'm putting it out there for anyone who's interested in joining this collaboration, maybe at some point during this week or in the near future.
D
Yes, and if we can do anything to help with the node tests that are not part of the node test suite, so everything that is under test/e2e in kubernetes and not under test/e2e_node, if we do anything for those, that's going to be a plus. But I think that I don't have anything else.
C
Yeah, no, I can see that there are definitely some touchy-feely bits between the sort of general e2e framework and the node framework that make it complicated, and then the various runners that, I'm going to say, populate the GCP info so that everything works.
A
Count me in as well; I'm really interested to understand how it works and how we can make it better. Okay, any other comments on this topic?
D
I guess the last thing that I can say is I'll send something on Slack and on the mailing list, and then we can figure out whether we want to have another meeting, or just do it async and catch up, or something.
A
Yeah, definitely, thank you for driving it. Next one: I wanted to talk a little bit about dockershim. There was a lot of noise last week about dockershim, or rather the dockershim deprecation, and the community published amazing blog posts and an FAQ, so I think the misunderstanding that the release...
A
...caused was fixed. Now, I want to understand: first, is there an easy way somebody can suggest that will make sure we know which tests are running with Docker and which tests are running with any other runtime, so that we will know whether dockershim test coverage is, I mean...
A
I know it's already degrading. For the benchmark tests, for example, we recently removed all the benchmark tests for dockershim in favor of containerd, just because dockershim is planned to be deprecated, so we didn't want to maintain them for longer.
A
So I want to make sure that core functionality is still being tested for dockershim for a period of time while we support it. I mean, we made this promise to users that dockershim will keep working for at least a couple of releases, and I want to make sure that we hold to our promise, even though we are not paying as much attention to dockershim as before.
C
I mean, do you want a job that just runs the tests with... I mean, yes, it's difficult to know which runtime is explicitly enabled. Certainly I think we should. I think it was a mistake not to explicitly choose the runtime originally, when we did the original thing, and we're sort of implicitly switching it out by doing one thing or another. So yeah, it would be great to have an explicit flag.
C
That says, you know, containerd or Docker or, what's the other one, CRI-O, whatever. That would be good. As far as knowing when it's running, I think a lot of the stuff sort of defaults to containerd now, and possibly we should have a separate job with a testgrid tab for dockershim, if we really want to explicitly know about it. And then, as far as comparing the two, I don't know, you just run both of them and look at the results.
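(For reference, one hedged sketch of making the runtime explicit per job: pass the runtime selection to the node e2e suite instead of relying on whatever the image defaults to. The TEST_ARGS plumbing and flag names below are assumptions and should be checked against the node e2e and containerd docs, not read as the actual job definitions.)

```bash
# Hypothetical pair of invocations, one per runtime, so each job states
# explicitly which runtime the tests exercise. Flag names are assumptions.

# dockershim: the kubelet's built-in Docker integration (the 1.20-era default).
make test-e2e-node REMOTE=true \
  TEST_ARGS='--container-runtime=docker'

# containerd through the CRI remote runtime endpoint.
make test-e2e-node REMOTE=true \
  TEST_ARGS='--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock'
```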
A
So do you think it would be best to just add this flag now, or retroactively, or at least validate it? Because I think, if a test is marked as Docker, I mean, that test should be running on both, right? So we cannot run it only on one.
C
Yeah, I just don't think we know, and I think if we go in and start setting stuff, we're going to find that a lot of stuff that we think is running on one runtime is running on a different runtime, and we're going to find that Docker is kind of always running in the background, and that it's Docker starting containerd rather than us starting it. I think it's going to be an interesting kind of thing.
C
Happy to do that, but I don't want to be the only one pushing on these things and having somebody go, oh, everything's broken now.
F
So with this move away from dockershim, is there any risk of, say, we start moving tests off of dockershim and then we end up breaking things without knowing it, because we're losing test coverage?
C
The only tests that I'm aware of are tests that are explicitly for dockershim. There shouldn't be any CRI tests that... like, if we're switching from dockershim to CRI, the CRI tests should still run, so the coverage for CRI versus dockershim should be the same. If anything, there should be a bunch of dockershim-specific tests to ensure that the dockershim version is doing the right thing. Does that make sense?
C
There's probably a concern where we should identify that, like, the dockershim test does this and the CRI test does that: are they doing the same thing? We should probably look at that while we're in here, but beyond that, you know, we're trying not to add more dockershim tests, or more exceptions for them, while we get rid of it.
A
Yeah, in 1.20 we added this timeout on exec probes, and we found very late, I think even after code freeze, that it didn't work very well with dockershim, so we found some regression there. And the reason we found this regression so late is that we expected the pre-submits to run on both Docker and containerd, but apparently they only run on containerd. So, well, yeah.
C
Yeah, I mean, that was a recent change, that containerd is the default now, so we weren't running that. Right, and this is what I'm saying, back to that earlier topic, where what we're running in the PRs doesn't necessarily match the CI, because the CI takes longer, and so you have to come back, sort of, to: what does the CI say?
C
Even if the PR passes, this is sort of a delayed what's-going-on feedback loop, which kind of stinks, but what are we going to do?
A
I have the same feeling about it, but I was thinking maybe it's the same situation that we need to learn from.
D
I think there is an old scalability test that actually specifies something that uses kubenet, but I don't think that matters a lot. But SIG Network is actually actively working on creating those, so I guess if there's anything fun and interesting, we can just sync up with them.
A
Well, okay, so somebody already commented on these failures. I want to bring attention to the fact that we have these failures, and somebody added a comment that the test was recently moved. So is there an issue, and is somebody already interested in it?
C
Yeah, this is the one test. Okay, first of all, I don't know why the tests come out looking really weird when you look at them in testgrid. But what caused this, well, at least what I think is the failure that we're looking at, yeah, this one: what happened is they moved it from, where is it, I don't see it on your page, from the directory it was in to conformance, because somebody said this test runs well.
C
It should be part of the conformance suite, et cetera, et cetera. And so they moved it, and it worked fine where it was before, and now it fails every time. So this is one of those: what's the difference between these jobs, that it works in some environment, and then we put it in the conformance environment and it fails? And this is also, again, back to: well, it passed when it went through the PR process and then it fails in CI.
D
On my end, I either check email notifications to see if anyone said, hey, I'm working on this and here's the issue; other than that, I just search for the name of the job in the kubernetes repo, and if I see something that is obvious and recent, I look at it. Otherwise I just create a new issue and either hope for someone to say, hey, this is a duplicate of this thing that I've been investigating, or accept that that will not happen.
A
Okay, so we have an issue number for that. So now, containerd issues. There was a change, maybe two months back, from Derek from the containerd team, and after he changed this YAML file, the path that it was changed to doesn't exist in the test environment any longer. We asked if he wants to work on it, and apparently he isn't working on it. So I was wondering: do we need to revert it, or do we want to... are there any takers for this issue to investigate?
C
This is just a PR that I don't really understand how to deal with, the milestone thing. Who do I talk to? There's a message at the end of the thing.
C
It's not merging because it needs to be on a milestone, but there's no information on how I get it into a milestone, or, you know, I don't really understand the process, I guess.
A
Okay, so right now it's code freeze, and we're at a very late stage of 1.20, so nothing will be merged unless it's release-blocking, which I don't think it is. And I think they will be opening the merge gate sometime this week, maybe. I don't know.
A
Yeah, at least, no, it's almost done and should be open.
C
This breaks capabilities for containerd, and probably for Docker too.
D
And so I'm going to try and answer both of your questions, okay, but I'm going to start backwards. The thing with adding something to the milestone, and I think this might be related to what I said earlier about having a proper onboarding for new people, is that if you look at the agenda, I added a link, and to be a milestone maintainer you essentially just need to PR yourself into that...
D
...into that group. That's going to allow you to add things to and remove things from the milestone, and it is essentially, I guess it should be, an analog to being an approver or a reviewer for something. So if the SIG that you're working with knows that you are dependable, that you know what you're doing, then you should PR yourself in there, and you'll have the power to manage milestones.
D
The thing with the test is, unless I'm looking at the calendar wrong, which I very well may be, I think the Kubernetes release is set for tomorrow or...
D
Yeah, so at this point I don't think it's a possibility, unless it is something that definitely breaks users. So if it isn't, like, the end of the world, then we can wait for the patch release, what is it, 1.20.1, which usually comes one or two weeks after the release.
D
It doesn't matter very much; after the release, the master branch is going to open up again and the milestone requirement is going to go away. But in the future, whenever we are getting close to the beginning of code freeze, if you want to get something in, either PR yourself into the group or ping anyone in that group. And anyone who wants to be in that group, or who thinks they should be in that group, please feel free to send a PR.
D
And I guess the only thing is that whenever you're thinking about adding something into a milestone, please be very cognizant of the release team deadlines, because otherwise you might be breaking things for them.
C
Yeah, no, I understand. I just need to learn more about the release process.
A
In fact, right now we have a situation where somebody applied the 1.20 milestone and the change was not actually approved, so the 1.20 branch and master are now not matching, which is not supposed to be the case. But yeah, as I said, you need to be very aware of the schedule.