From YouTube: Kubernetes 1.12 Release Team Meeting 20180731
B
Well, it is three past, so I am going to get started. Welcome, everybody. I think from the attendance it looks like Stephen is the only person who attended yesterday. Maybe, although, actually, Stephen, I know I heard your voice, but you aren't listed in the Google Doc, and I was only on the phone, I wasn't actually in the Zoom properly.
B
Also, if you go to the main page for the release, there's a link there at the top of the page, and, so, I just pasted it, there's also a short link to the release info for 1.12, and then at the top of that page is a set of links. So it's got the short link to that which has the calendar; it's got the meeting minutes link. The meeting minutes link also has the Zoom URL in it. So that's sort of an easy jumping-off point for all the links there.
B
That's, it's just: hey, write down what you're intending to aim to have done in the future for this release. With that exercise we start to find out who is paying attention and listening to our nagging and who isn't, maybe, and who puts things off until the last minute and who doesn't. So we've gotten a whole bunch of new features defined, coming in just over the last three or four days. So that's good, we're finding out more things that SIGs are interested in, but then again, sort of last minute. That may be slightly worrying.
B
It means we have a little extra work to do, to read through and understand what's going on and get a sense of what's gonna be happening over the coming weeks. The other big thing today is that we will be attempting to make an alpha release. The alpha release happens from the master branch; it's not from a 1.12-specific branch yet, that won't happen until beta. So, really, today:
B
this is just about exercising the mechanics of cutting a release, and it's gonna be interesting to see, because we have some new mechanics there, both in terms of the people and the code, the automation and mechanism for making a release. So the question that we always have at one of these release milestones, and alpha is really the easiest, I would say: should we make the release? And, for alpha...
B
It looks like we do actually have a couple of bugs, but for the most part they're the failing tests, so things are pretty quiet, and this mirrors the 1.11 release. For me this was kind of worrying in the 1.11 release, and I'll try to explain why. We used to require that a pull request had an issue opened.
B
From a developer perspective this feels sort of like overhead: hey, somebody's mentioned a bug, I understand what's going on, I just want to fix it, you just want to make the pull request, get the code in, and kind of be done and move on with things. But from a release team perspective, we want to understand what the bug is and have a little more visibility into cause and effect. Does the pull request really feel like it fixed the bug? Does it imply risk in other areas?
B
Things like that, because we're looking at a little bit bigger picture. So, a simple pull request that just says something really minimal, in a few words, about fixing a bug in a particular area, maybe something with storage or volumes, and maybe the person who reported the issue notices the PR and says, yeah, it works for me now, but we don't see a whole lot of detail on that.
B
So I get that there aren't as many issues now, but that kind of raises my concern level and makes me want to be watching a little more closely, to understand the state of bugs. Right now we're in a phase where there's active development happening and we might have some destabilization: the tests start failing a little more, and we have to circle back through the test path, through CI signal, to get things corrected. But we have a looser feedback loop, I feel, just because we don't see the details of the issues. We'll see how that goes over the coming weeks. It ended up being okay in 1.11, and I guess that was largely due to the work that the CI signal folks did, just to really push on always having a clean test signal. So we hope that that's the case again, but we'll need to watch out for it this cycle.
C
Sure, Tim. So it's kind of an exciting process, because, you know, everything's happening a lot faster than it used to. Based on the nagging that we've done over the past few days, we've seen the features from the initial email that I sent out jump from 27 up to 54 features, which is kind of a lot higher than I've seen in the last few release cycles.
C
What we'll see over the next, I want to say, two to three weeks, maybe a little more, is maybe thirty to forty percent of those features get cut, so stay tuned for that. And then, even closer to the end of the cycle, we'll see those features where, you know, people didn't have time, or things were deferred or missed; you'll see maybe even fifteen or twenty percent more of those features drop out. So I expect to probably be looking at, like, twenty-seven features or something like that for the actual cycle.
C
So, yeah, I've sent out pings this morning: one to the k-dev group, as well as release leads, the release team, and SIG leads. I've also kind of run through all of the issues that had no milestones, or issues that were marked with the 1.12 milestone but were not officially marked as 'tracked: yes', which essentially means I haven't vetted it and said, okay, this does look good for the release, or you've provided the appropriate information. So, yeah.
C
Those are features that have been classically non-responsive, so I'm not sure what to expect there, but we'll check it out. One interesting addition that I've done, and I need to clean it up a little bit: I basically piggybacked off of the sec-ping script that Jesse wrote to create all of the issues for the security contacts across all the repos.
C
I did something similar for the features repo: essentially, you can pop in a few labels, and labels to exclude as well, and then it will go through and basically annoy people on the issues matching those parameters. I want to clean that up a little bit; I'll post that in the chat as well, if you guys want to take a look. Cool, I have...
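The script itself isn't shown in the recording, so purely as an illustrative sketch of the idea: a label-filtered issue pinger against the GitHub REST API might look something like the Go below. The repo, label names, and comment text are placeholders, not the real script's parameters, and a real run would need an authenticated client.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// issue is the minimal slice of the GitHub issues API response we need.
type issue struct {
	Number int `json:"number"`
	Labels []struct {
		Name string `json:"name"`
	} `json:"labels"`
}

func main() {
	// Placeholder parameters: a label to match, a label to exclude,
	// and the reminder text.
	const repo = "kubernetes/features"
	const exclude = "tracked/yes"

	// List open issues carrying the included label.
	resp, err := http.Get("https://api.github.com/repos/" + repo + "/issues?labels=stage/alpha&state=open")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var issues []issue
	if err := json.NewDecoder(resp.Body).Decode(&issues); err != nil {
		panic(err)
	}

	for _, is := range issues {
		skip := false
		for _, l := range is.Labels {
			if l.Name == exclude {
				skip = true
			}
		}
		if skip {
			continue
		}
		// "Annoy people on the issues": leave a reminder comment on each match.
		// A real run needs an Authorization header with a GitHub token.
		body, _ := json.Marshal(map[string]string{
			"body": "Friendly ping: please update this issue for the 1.12 milestone.",
		})
		url := fmt.Sprintf("https://api.github.com/repos/%s/issues/%d/comments", repo, is.Number)
		http.Post(url, "application/json", bytes.NewReader(body))
	}
}
```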
C
Yes, yes. So if something comes in as net new, we're kind of expecting it to be in an alpha state; sometimes things come in as net-new beta, but yeah. The whole reason for adding that stage status is so that people understand that it's moving into the next phase; 'was' is just whatever it was before.
C
One additional thing that I wanted to point out: we have SIG PM hopefully today, as well as the SIG Release meeting, again hopefully today, where we have a few agenda items around the features enhancement proposals. So this is basically helping to redefine what an enhancement looks like, following the KEP process. That's a proposal that Chase has been working on. And, now that it's up on the agenda, I can actually say it: I've been working on a proposal as well, around feature triage via automation.
C
Something like that, right? So, having an additional set of labels around that, and maybe even building some bots to help move issues through those different label statuses, very similar to what the milestone bot looks like right now. I am moving around, but I will post links for, one, that script, and the proposals that I'm talking about, as well as the SIG Release and PM meeting notes, so you guys can take a look.
C
So, at this point in the release cycle, if the feature is submitted and not vetted by the end of the day, it will require an exception. If it's something that we've discussed, and, I know, some of the details are a little fuzzy, and Tim has ultimate veto power here, or decision-making power here. But, essentially, at this point, features that are not submitted by the end of today need to also include an exception request. So if you check out k-dev, it has links to the exception request as well.
C
So I would say, now that you've submitted it, it's fine; just from a cursory glance it looks good. I would say try to get a proposal in soon; it's more about having the feature in there for tracking than having the proposal immediately. You'll notice that a lot of the features on the features tracking spreadsheet do not have proposals, which is bad, but it's kind of the state of the game right now, so yeah.
B
KEPs, er, KEPs are still in, I don't know if we're calling them alpha or beta, I guess probably sort of beta, but KEPs are desirable but not required. And the line, I mean, I'm curious to see the SIG PM meeting today: the line of KEP versus feature, like what rises to the level of needing which one. It'll be good to have some additional clarity on that.
C
So, yeah, to give you the general idea: the general intention is that the KEP is supposed to be the source of truth. The feature issue is tracking; it should be tracking a KEP, right? The KEP is the explanation of what's happening; the feature issue is the tracking umbrella for what's happening across all repos. So all the information should be fed back into that feature issue, hopefully. The problem around this kind of process right now is that, one, like you said, it's alpha-beta-ish, you know; the adoption criteria for KEPs are not defined per se, right? So when Chase's proposal comes out, or, it's out, when I actually link it so that you can see it, there are some definitions of criteria for, you know, actually pulling a KEP in, and what SIGs should be working on to make their KEPs accepted and properly tracked.
B
All right, thank you. It's good to see the improvements. I really like how much improvement on documentation is happening, whether it's the stuff around features and SIG PM, or just the release team documentation improvements over the last weeks. This feels really good from a software engineering process perspective: we're moving past some kind of early, semi-ad-hoc days, writing down a whole lot more, formalizing, and exercising based on the documented process. So this is really good improvement.
C
An additional call-out to documentation: I totally agree, I'm pretty excited about it. One thing that I would say for new contributors, new release team members, new-ish contributors: if you see something that you don't understand, or you think the process is not quite what it should be, call it out. Feel free to contribute and have someone review it, because so much of the improvement that Tim has been talking about over the last few weeks or month or so has been people saying, "I don't understand this."
B
I can't concur with that enough. That is one of the best things you can do as a shadow on the release team, or a newcomer to any community, really. You have less built-in biases and assumptions and intuition for how to do things, and you're much more likely to follow the document to a tee and stumble where somebody else read it differently, based on their knowledge. And by not having that knowledge yet, you notice errors, and you can then correct them in the documentation, and that's a huge improvement for the people who follow you.
B
So, looking, I don't see Mohammad having joined us today. So, I've already linked, er, no, I didn't link it.
B
The meeting minutes has a link to CI signal, but I think what I want to do is just go ahead and do a walkthrough of Testgrid and kind of show how I go about looking at things. And I'm actually excited to go look at the video from yesterday, to see what Mohammad showed, because I expect to learn something from it. This is one of those tools where it's confusing, and, even more than the tool itself, the tool is sort of a framework that enables a workflow.
B
So every individual who approaches Testgrid is liable to navigate through it slightly differently, and if we had two people looking at Testgrid who have different backgrounds and knowledge, they're gonna key into different things, because this is partly about bug triage, in a way: you're looking at something that failed and wanting to figure out what's wrong. Now, the most simplistic thing is to say: well, okay, this particular test failed, and we have SIG such-and-such owns that test.
B
So, your very first thing, you start out hunting a little bit in this, and then there's a set of different things here. We've got tests that run on different targets. We don't have a 1.12 one yet, because we haven't created the 1.12 branch, but there's prior ones as well, because the community provides support for older releases. There was a proposal yesterday that we should be removing the CI release-1.8 ones, and, this goes back to the documentation discussion, we actually don't have a defined point for that: the day 1.11 finishes, or two weeks or four weeks in? Historically it's varied quite a bit, so I think Aaron Crickenberger maybe has a PR, or else it's on the agenda for the SIG Release meeting later today, to discuss what we want to do, but probably something like four weeks. And part of the release team process will be, I'm guessing, that probably the test-infra people will drop those older tests. So, lacking a 1.12 one...
B
So there's a header area here that lists some sort of buckets, I guess, of areas, and these are the same things that were the tiles on the front page: master blocking, and kubectl skew, master upgrade, 1.11, 1.10, 1.9, 1.8. Those are all the same things that were in the dropdown here. So once you get into the details, you can still do that top-level navigation if you want to. So that's sort of this top, oh, and can you all see my mouse as I wiggle it? Okay.
B
So this top little bit of a heading area is that top-level navigation. Down below that, since I'm in release-master-blocking, I've got a set of information about release-master-blocking tests. There's a summary, and you can scroll through and get a bunch of information; I'll get into a little more detail later. But then the individual test buckets here, if they're having issues, instead of just being a blue link, they're red, to catch your eye a little bit. So: build, integration, bazel build.
B
Those are the same ones that you have here in a little more detail: build, integration, bazel build. Then, the first thing you'll probably see here: build, passing, all passing this week. Integration wasn't highlighted as red at the top, but it looks slightly different here: it's not a green checkmark. It says integration is flaky: 36 of 55,000 tests, or 0.1%, and 18 of 165 runs, or nearly 11%, have failed in the past week.
B
So we have this configurable threshold. Things that are all passing, that's obviously easy, but what constitutes failure? You don't necessarily know, when you have flaky test cases or flaky infrastructure, what is a failure until you've seen some number of them go by. So there are some thresholds that allow us to say, well, we've started having some failures, and those start getting reported as flaky tests, and at some point it crosses whatever the configured failure threshold is, and we go ahead and declare that the thing is a failure. And these actually look relatively good.
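The thresholds themselves are Testgrid configuration and aren't stated in the meeting; as a minimal sketch of the passing/flaky/failing classification being described, with an invented cutoff value:

```go
package main

import "fmt"

// status classifies a job the way the dashboard summary does: all green
// is passing, a low failure rate is flaky, and anything past the
// configured cutoff is failing. The 0.5 cutoff is invented for
// illustration; the real value lives in the Testgrid configuration.
func status(failedRuns, totalRuns int, failureThreshold float64) string {
	if failedRuns == 0 {
		return "PASSING"
	}
	rate := float64(failedRuns) / float64(totalRuns)
	if rate < failureThreshold {
		return "FLAKY"
	}
	return "FAILING"
}

func main() {
	// 18 of 165 runs is the integration figure quoted above: ~11%, flaky.
	fmt.Println(status(18, 165, 0.5))
	fmt.Println(status(0, 165, 0.5))
	fmt.Println(status(120, 165, 0.5))
}
```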
B
But this is a good one to tunnel into, so I go ahead and click on that, and this is what the actual detailed results are. Here you still have the same headers for navigation, but you're getting detailed information on what's going on in the GCI GKE bucket. The left column is a whole bunch of individual tests; on the very left of that you have a SIG that's associated with the test. So, as you're doing triage, trying to understand who to talk to,
B
that is a useful way to understand who to reach out to first, and then you've got, hopefully, a descriptive name of what the test is. But if you look here, actually, most of what's here looks green, and to me that at first said: well, like 53%? That's worse than tossing a coin. But this actually looks mostly green. So at the very bottom you have a horizontal scroll, and here you start to get a sense:
B
okay, these tests were pretty consistently failing, so we may be biasing away from failed, through flaky, towards good, especially on these two that are highlighted from SIG Instrumentation: they were clearly consistently failing earlier in the week and appear to have cleaned up now. Some of these towards the top,
B
those are kind of what flaky classically looks like: green, red, green, green, red, green, green, red. That's the kind of flapping that we typically think of from the "flaky" term, and, looking at those, they're flaking at a relatively low rate, maybe one or two in ten, a lower percentage. So now I may be feeling a little sense of relief, like, okay, this test isn't horrible. Now, the GCI GKE bucket, those are Google-specific.
B
We have people who are focused on making sure that those go well, maybe more than some of the other corners of all of the broader tests. But often, when something is failing, because there's so much here and we're not the subject matter experts in everything, your first question is: is this test good?
B
Another nifty thing that I like to do: so, having tunneled back into the details, looking at this first one here, a sig-storage PersistentVolumes-local test, two pods mounting a local volume. It failed briefly this morning. If you hover over a green cell that's just before the failure in the time series, and drag your mouse over to the red, up pops
B
a little thing that says "search for changes", and when you click that you come up with a new page, and it actually shows you what changed in the kubernetes/kubernetes repo across that, and here it's telling us nothing changed. And we can actually see that, I think, at the top of the detail grid: you have some things that are committed, and the top line in it, you can mouse over it, is part of the SHA from k/k; the lower line are test-infrastructure commits.
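For reference, the "search for changes" view boils down to the commit range between two adjacent runs; a tiny sketch of building the equivalent GitHub compare URL, with placeholder SHAs:

```go
package main

import "fmt"

func main() {
	// Commits recorded for the last green run and the first red run.
	// These SHAs are placeholders, not the ones from the meeting.
	lastGreen := "abcdef12345"
	firstRed := "98765fedcba"
	fmt.Printf("https://github.com/kubernetes/kubernetes/compare/%s...%s\n", lastGreen, firstRed)
}
```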
B
So, since looking at the difference here showed that there was nothing changed, these are both on the same commit in k/k, a10688257e. This could be a change in test infrastructure, but looking at the columns there, you can see the test infrastructure was also at the same commit, but it changed just after. So this may be a case where test-infra noticed something was flaky and they fixed it just after, and the problem went away. Or it could just be a straight-up flake. So, to get at that next level: if you click on just the red cell for the test, you tunnel into the detailed results for that actual test failure, and you have the specific job number that was the run, but you also do get a little bit of aggregated info. So two tests failed out of 498 in this bucket, and, I think, remember, this may be because they had just gone from green to red.
B
I think when you have multiple failures there's another section here that shows a little more history. Oh yeah, well, you could do it here: "view test history on Testgrid". You can get at just that particular test, and there, actually, what you do is just rapidly scroll through time. You can see that maybe this one is just kind of flaky, and in the view settings you can go into a super-compact view and see the history in a little more detail without quite so much scrolling.
B
So this test just looks like it's probably historically flaky, and looking at it now, actually, I'm on a slightly wider view: this particular one that I'm looking at is something related to NFS. Maybe there's just some network flakiness, or some slowness, or something timing out. But we do want to get to a better place here, and Aaron was really stressing this yesterday.
B
At this point in the assessment you could say: well, okay, this is a flaky test, maybe I'll check in tomorrow and see, like, has it continued to be red. But looking through the history, as we've just done, you can kind of tell it's a flaky test. We should have an issue open for this, to understand better what's going on, and we want to get to where we don't have any of these. And this is actually something really positive, I think, that happened in 1.11. Up until 1.11,
B
we had typically a lot of destabilization that happened around feature implementation. So these flakes were like a second-order problem: we really had to focus on just fixing things when they broke, catching them relatively quickly, getting people focused on breakage, fixing breakage. But now that we mostly are keeping a clean CI signal, and we're getting quick turnaround when breakage happens, we need to clean up this remaining fuzz on the edge of the testing and work to make these tests more reliable, more stable.
B
So opening issues is a first part of that, and then working to build this culture of actually fixing them. Aaron's on the steering committee and he's also now at Google, so I think we have people in the right places to continue the nudging on that culturally, but I think, also, from the release team perspective, that's something that we can really help to bolster.
B
That's actually interesting. I have an idea of what might be going on here, and that makes me want to go look at the test. You get some references in here to what the test name is and what the test code is, so right here up at the top we have what this is, and I'm actually gonna go look at that later today, to see how they are managing their loopback devices. The losetup command, I think, is a Linux command.
B
Otherwise, when you're using losetup, you usually need to wrap it in an overly complex loop, with added logic that figures out how to find a non-busy resource. But if you do that, your test becomes more reliable. This looks like a classic flaky test that could be made better.
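The test code in question isn't shown on screen; as a hedged illustration of the pattern being described, a helper that attaches a backing file to the first free loop device and retries on a busy race might look like this in Go. The function name and retry policy are invented; `losetup -f --show` is the real flag that asks the kernel for the first unused device.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// attachLoop attaches backingFile to the first free loop device.
// "losetup -f --show FILE" asks the kernel to pick the first unused
// device, so the caller doesn't guess /dev/loopN itself; we still retry,
// because another process can grab the device between the kernel
// choosing it and the attach completing.
func attachLoop(backingFile string) (string, error) {
	var lastErr error
	for attempt := 0; attempt < 5; attempt++ {
		out, err := exec.Command("losetup", "-f", "--show", backingFile).CombinedOutput()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		lastErr = fmt.Errorf("losetup attempt %d: %v: %s", attempt, err, out)
		time.Sleep(100 * time.Millisecond)
	}
	return "", lastErr
}

func main() {
	dev, err := attachLoop("/tmp/backing.img")
	if err != nil {
		fmt.Println("could not attach loop device:", err)
		return
	}
	fmt.Println("attached", dev)
	// Detach when done so the device is free for the next user.
	exec.Command("losetup", "-d", dev).Run()
}
```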
B
So, briefly, I'm going to bounce over to the release-master-upgrade dashboard, this one. So this, again, to kind of go from the top: on Testgrid, I've hit the sig-release tile and release-master-upgrade. Compared to the other one that we started on, where it was mostly green, right here we mostly have fails and flakes. Upgrades are typically the domain of SIG Cluster Lifecycle, and, depending on the type of test, we have tests that are handled through kubeadm and not through kubeadm. So the first-pass triage here usually is figuring out:
do we reach out to kubeadm, or who else in Cluster Lifecycle. But, as I showed in the other one, here one of the first things in the individual test case reference is the name. So here we have sig-api-machinery being highlighted, so that's probably who needs some talking-to there, to see what's up. There are some older failures in sig-storage here; yeah, those look like they resolved a couple of weeks ago.
B
So probably the current issue, having started a number of days ago and been consistently red, is that we have a problem with this admission webhook test. I'm pretty sure on each of these, already, yeah: these are ones that Mohammad has opened issues on and is reaching out to the SIGs. But I wanted to just kind of walk through that from a workflow perspective, show some of the little mouse hovers and things, like making the size compact.
B
To
show
an
example
of
that
now,
here
we
go
so
between
these
two
commits
something
did
actually
change.
So
you
get
a
github
query.
That's
searching
between
two
commits
IDs
and
we
have
a
couple
of
things
here:
a
timeout
for
slow
math
on
arm,
and
that
is
it
this
particular
test
failure
that
I
was
looking
at
there.
This
one
I
can
I
guess
that
has
nothing
to
do
with
arm
and
the
other
thing
here
that
I
see
on
this
one.
B
this one says serial. Serial tests run in order, and that means they often run slowly, which means, if there's something flaky, statistically it's more likely to crop up. Yes. So, clicking on the detail, you can see this particular test has an elapsed time of almost 14 hours. So, across 14 hours,
B
if something went wrong somewhere in the infrastructure, something was excessively slow, or we had something that didn't handle a timeout well. And, yeah, here: "waiting for terminating namespaces to be deleted timed out". So something was slow, and our timeout was short enough that we bumped into it; maybe if this had run slightly longer, it would have succeeded. This is also a common pattern.
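The e2e tests lean heavily on a poll-until-timeout pattern, which is where this failure mode comes from; a rough sketch using the apimachinery wait helper (the condition function here is a stand-in, not the actual test code):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// namespacesGone is a stand-in for the real check, which would ask the
// API server whether any namespaces are still in a Terminating state.
func namespacesGone() (bool, error) {
	// ...query the API server here...
	return false, nil
}

func main() {
	// Poll every 2 seconds for up to 60 seconds. If the cluster is merely
	// slow rather than broken, a too-short timeout here turns slowness into
	// a test failure, which is exactly the flake pattern described above.
	err := wait.Poll(2*time.Second, 60*time.Second, namespacesGone)
	if err != nil {
		fmt.Println("waiting for terminating namespaces to be deleted timed out:", err)
	}
}
```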
B
Aaron said that he was gonna talk to somebody that he works with yesterday with respect to some of that, so I'll circle back with him. And, see, initially I was just sort of thinking to throw something simple in the developer guide, but this would make sense to have on the formal website, especially once you get in, like, having screenshots would make it much more navigable. Just plain words, like what I was doing there with the mouse: it was very much a visual process, and just doing that in words, without screenshots, yeah...