From YouTube: Kubernetes SIG Release 20201006
A
Hello, hello everyone. Today is October 6th, and this is one of the SIG Release bi-weekly meetings. This meeting is recorded and will be available later on the internet, so please be mindful of what you say and do, please be sure to adhere to the Kubernetes code of conduct, and in general just be awesome people.
A
We don't have a super packed agenda, so we're going to take the opportunity to go over some roadmapping items as well as walking the project board. But before we get started, are there any new folks on the call that want to say hi?
A
Okay, all right. If you are here and you have not had an opportunity to drop your name on the agenda, please do so now. And as Lauri mentioned in the chat, if anyone is interested in being the note taker for the meeting, the illustrious honor can be all yours if you just put your name under.
A
All righty. So first up — I've been signed out of the Google Doc, of course — but first up we're going to do some updates on krel and anago. Sascha, you want to take that away?
B
Yeah, so there's just one pull request left open to finish the push-build integration in anago. We can reuse most of the source code from my previous migration from the push-build script to the krel push-build command, and after that we can probably move forward on integrating krel as an API in kubetest2. I'm thinking about some follow-ups, something like making the push-build script a self-contained script, so that we can move everything out of releaselib.sh and have a better chance to refactor.
B
Yeah, I mean, we still have the whole build process, for example — everything which is around building the artifacts — but all in all I think it shouldn't be that much. I can take a look.
A
Cool. This has been some really, really phenomenal work — seeing push-build done, seeing the integration work start with the anago subcommands. Hat tip to everyone in Release Engineering that's been working on that; it's been really awesome to see the PRs fly by.
A
I
was
staring
at
the
inaugural
push
one.
I
will
finish
doing
a
review
later,
but
it's
looking
pretty
good
and
you
did
testing
already
for
this
right.
B
And
one
thing
to
mention:
we
had
some,
we
published
some
md5
and
char
ones
for
all
the
releases
and
there
was
a
switch
where
we
said
that
we
only
stick
to
256
and
512
for
releases
from
kubernetes.one.
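For context, a minimal sketch of the consumer-side verification that switch affects — checking a downloaded artifact against a SHA-512 sidecar instead of MD5/SHA-1. The artifact and sidecar file names below are illustrative assumptions, not paths from the meeting.

```go
package main

import (
	"crypto/sha512"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
	"strings"
)

// verifySHA512 compares the SHA-512 digest of a local file against the hex
// digest stored in its ".sha512" sidecar file.
func verifySHA512(artifact, sidecar string) error {
	f, err := os.Open(artifact)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha512.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))

	raw, err := os.ReadFile(sidecar)
	if err != nil {
		return err
	}
	// Sidecar files may contain "<digest>" or "<digest>  <filename>".
	fields := strings.Fields(strings.TrimSpace(string(raw)))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file %q", sidecar)
	}
	if got != fields[0] {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, fields[0])
	}
	return nil
}

func main() {
	// Illustrative paths only.
	if err := verifySHA512("kubernetes.tar.gz", "kubernetes.tar.gz.sha512"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("sha512 verified")
}
```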
C
I expect, though, that that will break people, won't it? When you start doing that, be watching for the "all of my stuff broke" messages on Slack. We'll just have to remind people that it has been messaged for a while — it went through a deprecation cycle — and with a little luck most people have updated their code. I'm gonna guess, at the point that they got broken before, they were like "oh crap, we gotta fix this for the future," and then subsequently, if not, because that's...
A
Yeah, and I would say also consider any outputs that would be presented on the "download Kubernetes" page — I think there's an open PR for the checksums to be added to the site. But yeah, I would say dependents-wise we had a few breakages; maybe it was kops or minikube or something fairly large, right as we did it, so I think we discovered those early. But yeah.
A
Let's
just
make
sure
that
we
message
that
once
it's
complete,
I
think
I
think,
once
the
push
build
stuff
gets
wrapped
up.
We
can.
We
can
maybe
send
a
note
to
the
community
on
that.
Oh
yeah.
D
It sounds like there's an action item there to send out a message. I was waiting for Tim because he was taking notes, but maybe an action item to communicate to the community.
A
Okay,
all
right!
Next
up,
we
are
doing
the
ci
signal
draft
roadmap.
Laura
you
want
to
talk
a
little
bit
about
that.
D
So I can share my screen and just go over the highlights, but then I think others with actual practical experience should take over and talk about to-dos. Essentially the question is around boundaries — who is responsible for what. We have a long-term goal to get rid of the release team, you know, so that this release team doesn't exist and all of its functions are being handled in SIGs themselves. But it's going to take a while for us to get there, and we want to be mindful of activities that pull responsibility back into our domain.
D
So yesterday some of us, the leads, were chatting, and the idea for the boundary for SIG Release is this.
D
It's that we mind blocking and informing jobs for all in-support release branches. But currently we're on the hook for jobs that owning SIGs don't take care of, and there are historical reasons for that, which I touch upon further below — one of them being that we don't currently have, like, a stick to incentivize SIGs to actually treat these alerts as they would if, say, the alerts were in the workplace and they were on call. And that's actually the kind of behavior that we would like to see: when SIGs get an alert, they would treat it as an on-call page and go and take a look and handle their own alerts.
D
That's not happening right now because of these historical reasons and the lack of that stick. So one question or action item might be that we change the SIG Release charter to actually create…
D
We're supporting SIG Testing in what seems like their responsibility, which is actually to create a CI signal culture, to drive this responsiveness to alerts that we want to see. So I've listed two acceptance criteria for that: one, that the SIGs would treat TestGrid alerts like pages and their boards would be green; and two, that SIGs were responding to those alerts with their own actions — which I guess is redundant, I'll get rid of that; it's a simplification.
D
So
then
we
have
yeah
get
rid
of
stuff.
So
then,
just
like
a
very
quick
review
of
action
items
to
get
us
farther
along
because
this
is
going
to
be
like
we
don't
want
to
look
at
the
whole
pile
of
work.
Changing
a
culture
writing
docs
all
of
this
stuff
at
once.
Otherwise
it's
overwhelming.
So
the
idea
here
is
breaking
it
down
into
smaller
chunks.
We
actually
can
achieve.
D
…that SIGs would need to also have at their disposal to be able to handle their own CI signal problems themselves, and this might require some alignment with SIG Testing, so I just put that there as an option. And then we have a number of items related to — what I notice is basically there are two documentation items and one scalability test item that we've been discussing.
D
So
what
is
the
order
of
those
items
in
terms
of
priority
and
do
we
need
to
do
any
of
those
items
to
actually
unblock
our
own
goal?
D
Which
right
right
now
at
the
moment,
is
to
get
us
back
to
here
where
we
are
minding
blocking
and
informing
jobs
for
all
in
support
release
branches,
but
starting
to
step
away
from
doing
other
sig's
work,
and
then
here
is
where
I
have
listed
deciding
whether
a
subproject
is
necessary
because
we,
ideally
we
want
the
work
to
be
done,
and
we
want
the
work
to
be
done
by
the
boundaries
that
we've
established
at
the
top,
knowing
what
we
do
and
sticking
to
that
and
consulting
and
helping
other.
D
You
know
suggesting
to
help
those
other
sigs
do
their
part
and
then
here's
the
charter
change.
This
number
is
arbitrary.
It
might
come
earlier,
it's
just.
How
long
might
it?
How
much
effort
would
be
required
to
actually
change
the
charter?
That
would
be
what
we
would
have
to
identify,
and
then
here
finally,
is
the
culture
shift
for
sig
testing
to
drive
and
sig
release
being
consultants
in
that
effort.
D
So
then
I've
provided
a
detailed
view.
There
is
an
issue,
so
the
longest
of
these
items
is
the
documentation
part.
There
are
two
items
by
rob
and
jorge
that
deal
with
this,
and
what
I
did
last
week
was
break
those
items
down
into
a
list.
You
know
just
here's
the
items
that
they've
provided
for
documentation
needs
five
videos.
D
So
our
goal
here
would
not
be
to
document
all
of
these
things
exhaustively,
but
what
needs
to
be
documented
in
order
for
a
ci
signal
shadow
to
actually
do
the
job
and
then,
if
that's
also
enough
to
satisfy
what
the
sigs
need
to
know
great.
But
then
again,
what
would
be
those
gaps
between?
Let
us
say
a
signal
shadow
would
need
and
then
what
these
things
might
need.
My
bow
would
be
to
prioritize
what
this
group
needs
like.
D
Here are the three GitHub issues on the subject that we should prioritize. So this one, documenting the reason for inclusion of release-blocking jobs: from the text and from some notes from different meetings I've pulled together these action items and questions. These would be action items that are sub-items of the main GitHub issue, and then these are questions that would influence the direction of that action item.
D
This
is
another
one
ad
mention
of
work
on
progress.
Tests
are
really
sparking
jobs
and
criteria.
D
This is the subproject action item, which we talked about already, and then scalability tests for beta releases. It's not clear to me, at least, what we actually need to do, but here are some action-item-like things: one is working with the working group on scalability; another is that we have someone who's willing to get involved here — mm4tt — looking for some direction on how to help.
D
And
then
this
was
maybe
an
action
item.
D
Really
updating
the
release
charter-
I
mentioned
that
already
and
then
supporting
sig
testing
as
consultants.
This
has
no
details
because
we
don't
know
what
that
would
look
like
say
you
get
all
the
documentation
done.
D
You'll,
learn
things
along
the
way
and
then
at
that
time,
or
as
you
approach
that
time
evaluating
what
would
be
required
from
this
group
to
be
helpful
consultants
in
that
shift,
and
then
just
for
reference.
I
put
the
ci
signal
role,
responsibilities
items
so
that
we
can
use
this
as
like
a
benchmark.
D
This
is
what
is
currently
expected
from
the
role,
so
we
should
think
about
anything
that
we
do
should
reflect
these
items
unless
we
decide
that
we
wanted
to
change
those
items
that
leave
up
to
you.
But
basically
here
is
what
the
role
is
responsible
for
and
so
anything
that
we
would
do
should
fit
there.
Anything
that's
broader
than
that
we
may
want
to
reconsider.
D
So
I
think
the
main
question
here
is
like
what
would
actually
be.
I
don't
know
if
this
group
wants
to
take
this
on
right
now,
but
what
I
had
in
mind
is
like
from
these
needs.
You
know
making
videos
and
then
test
grid
what
needs
to
be
documented
there.
What
questions
would
need
to
be
answered
for
a
brand
new
signal
shadow,
for
example,
triage
again.
What
would
a
cia
signal
shadow
need
to
know
to
to
do
the
job
and
all
these
other
things
like?
D
Could
we
brain
dump
parts
of
this
process
that
together
would
make
a
happy
path
toward
actually
being
able
to
do
the
job?
We
would
start.
We
can
start
skeletal
and
then
build
contacts
as
we
go,
but
I
think
a
starting
point
is
needed,
and
that
was
the
intent
of
the
github
issue.
As
far
as
I
can
see
and
understand,
it.
F
Yeah, it sounds like to me — and I'm kind of repeating stuff that Jorge has said, so please feel free to jump in — but it sounds like there's a two-pronged approach. One of them is documentation that mostly caters to CI signal team members on SIG Release: right, like, how do you…
F
How
do
you
judge
the
healthiness
of
these
boards
and
how
do
you,
you
know,
get
people
to
respond
to
things
and
that
sort
of
thing
and
that's
kind
of
where
the
videos
come
in
my
opinion
and
then
there's
kind
of
the
other
side,
which
is
how
do
we
make
these
boards
healthy,
more
right?
And
that
is
more
of
just
kind
of
like
a
consumption
aspect
of
cigarette
release
right.
F
So
so
I
don't
think
that,
and
I
think
this
is
probably
why
the
sub
project
thing
went
back
and
forth
so
much
between
sig
testing
and
single
lease,
because
it's
not
really
like
sig
release's
job
to
make
sure
the
boards
are
healthy
in
a
sense
like
it
is
to
ensure
that
before
releases
happen
and
that
sort
of
thing,
but
in
terms
of
the
underlying
you
know,
quality
of
the
code
and
that
sort
of
thing,
that's
kind
of
just
like
a
bonus
when
ci
signal
jumps
in
there
and
works
on
those
things
and
typically
folks
have
been
around
ci
signal
will
actually,
you
know,
fix
tests
and
that
sort
of
thing,
but
ideally
right
like
that.
F
That
doesn't
have
to
happen
because
the
person
who
implemented
the
tests
are
are
constantly.
You
know
ready
to
respond
to
it.
So,
in
regards
to
how
that
affects
the
actual
creation
of
a
sub
project,
it
seems
like
most
of
the
stuff.
F
That's
related
to
sig
release
are
things
which
fall
within
like
the
sig
release,
lead
kind
of
role
on
the
release
team
and
the
other
aspects
of
the
sub
project
would
tackle,
potentially
fall
outside
of
the
responsibilities
of
sig
release,
which
I
think
definitely
brings
in
the
question
whether
a
ci
signal
subproject
should
exist
under
sig
release,
but
that's
kind
of
just
my
two
cents
and
interpretation
and
summation
of
others
as
well.
So
definitely
welcome
feedback
or
push
back
on
that.
A
Yeah,
so
the
the
conversation
yesterday
between
the
leads
actually
got
interesting
because
we
were
like
wait.
Do
we
need
this?
The
because
you
know
I,
you
know
the
first
one.
One
of
the
things
I
asked
was
when
you
receive
a
test
grid
alert.
What
do
you
do
and
sometimes
the
answer
is
nothing
right,
but
I
think
it
depends
on.
A
I
think
it
depends
on
who
you
are
in
kind
of
the
racy
chart
of
of
test
grid
alerts
right
and
judging
judging
judging
jobs
on
on
kubernetes
kubernetes
overall
right
so
for
the
most
part
for
a
blocking
forming
jobs.
Sig
release
is
informed
and
may
be
consulted
occasionally,
but
they're
not
responsible,
right
and
the
way
we've
kind
of
built.
This
up
is
is
that
we
our
premium
cat
herders
right.
A
So
we
spend
a
lot
of
time
wrangling
people
getting
getting
a
better
understanding
of
how
these
jobs
work
and,
as
you
said,
dan,
as
you
get
deeper
and
deeper
into
ci
signal
life,
you
may
not.
You
may
have
the
context
to
effectively
fix
the
job
yourself
right
as
opposed
to
putting
the
right
people
in
the
right
rooms
at
the
right
time
to
to
get
the
job
fixed.
So
the
you
know
the
question
the
question
became
like
well.
A
How
would
you
handle
this
if
you
were
dealing
with
this
at
a
job
right
and
if
I
was
receiving
alert
at
a
job,
and
I
was
on
the
ops
team,
sre
team?
What
have
you
my
job
would
be
to
go
fix
that
alert
right
and
dig
into
it
right.
A
So we were kind of talking through this, and I was like, this doesn't feel like our thing for a lot of these components. You know, for me, first and foremost, along that subproject formation issue, I'm concerned with the blocking and informing jobs, right? What is our fitness to release a new version of Kubernetes? And for the most part, like, our responsibility…
A
It
doesn't
end
there,
because
there
are
things
that
we
have
additionally
picked
up
across
the
years
when
it
comes
to
responding
to
these
jobs,
but
like
that
should
be
our
concern.
Right.
Are
these
jobs?
Are
these
jobs
in
a
reasonable
state
to
feel
good
about
releasing
kubernetes
right?
If
they
are
not,
I
think
the
responsibility
starts
to
shift
to
say.
A
Sig
sub
projects
component
leads
to
to
discuss
those
jobs
right,
so
maybe
maybe
a
good
first
step
is
like
what
to
do
when
you
get
a
test
test,
grid
alert
right
and
just
thinking
through
that
process.
A
The
kind
of
interesting,
interesting
pr
we
received,
maybe
a
week
or
two
ago,
from
from
someone
submitting
conformance
for
for
sig
architecture
right,
I
think
it
was
some
api,
snoop
or
something
and
the
the
job
was
submitted
as
it
was
submitted,
one
under
the
sig
release,
subdirectory
right
and
then
two.
As
you
know,
it
was
mentioned
in
the
comments
that
the
job
should
be
considered
to
to
be
blocking
right,
and
so
you
know
so.
A
I
immediately
go
like
this
is
an
education
thing
right,
because
there's
a
process
or
a
set
of
criteria,
we
expect
for
jobs
that
would
be
blocking
for
sig
release
right
and
then
to
you.
That's
there's
an
a
question
of
ownership,
because
that
was
a
job
that
is,
you
know,
fits
better
in
in
sega
architecture
and
as
a
conformance
sub-project
job
that
was
submitted
as
a
sig
release.
A
Job
right,
so,
I
think,
there's
a
there's
an
opportunity
for
education
overall
with
like
job
submission
and
job
maintenance
right
just
because
we
are
listed
as
one
of
the
informed
parties
of
a
job.
It
doesn't
necessarily
mean
we
should
be
this
first
party
active
in
bringing
that
job
back
to
green
right,
so
yeah,
just
brain
dumping.
F
I don't want the subproject, like, you know, stalling as a — like a good excuse — I mean, I've also been very busy, but like, as a good excuse to be like, "oh, I can't really move on this stuff," or maybe even subconsciously doing that, you know. We have action items; let's just do them, and not worry about this organizational thing that may or may not need to happen. So just calling myself out there, absolutely.
G
If
you
have
a
project,
you
know
you
have
like
a
large
program
of
work.
You
have
an
end
goal
in
sight
and
I
kind
of
see
that
we
have
a
couple
of
bits
and
bobs
to
do
to
support
current
ci
signal
team
work,
but
but
but
as
but
as
I
go
through
this
release,
I
I
have.
I
have
some
out
there
feedback
in
terms
of
taking
the
word
ci
out
of
it,
because
we
have
no
responsibility
for
ci.
G
We
don't
operate
ci
so
when
ci
is
pulled
out
from
under
us,
then
people
have
come
to
me
and
actually
people
from
the
from
the
team
that
submitted
that
job
and
looked
at
me,
sadly
going
oh
and
the
cr
ci
signal
in
119.
That
was
like
and
I
kind
of
went.
We
just
report
on
the
flaky
tests.
G
I
want
to
move
away
from
using
the
term
flaky
test
and
I'm
increasingly
using
the
term
a
test
that
produces
a
non-deterministic
result
and
because
it's
inflammatory,
when
you
tell
a
test
writer,
that
their
test
is
flaky.
So
you
have
to
be
careful
there.
You
know,
and
particularly
if
the
root
cause
has
nothing
to
do
with
the
test.
So
and
that's.
H
I
didn't
quick
small
comment
just
to
just
to
get
back
into
the
feeling
of
the
amazing
and
beautiful
dog
that
laurie
put
together
for
us.
I
think.
I
G
H
I
think,
at
the
end
of
the
day,
the
big
thing
to
focus
is
exactly
it's
exactly.
H
As Stephen said, if we were in a company, and if everyone here got paid to work on Kubernetes — and we probably would — we would probably already have created some type of operations/SRE team. If TestGrid actually had alerts, there would be a nice trail of actions and conversations, and, more than anything, instead of just, you know, working on a feature and pushing the KEP forward, there would be a lot more requirements for proper ownership.
H
You
know
I
don't
I
don't
only
have
a.
I
don't
only
have
a
cap,
but
I
also
have
a
a
some
a
room
book,
some
some
something
that
is
going
to
tell
you
what
this
were
created,
what
they're
actually
testing
and
it's
going
to
help
it's
that's
going
to
help
us
whenever
something
breaks,
to
figure
out
exactly
how
to
get
a
ride
on
track,
and
I
think
I
think
I
think
to
that
end.
The
big
question,
the
big
question
for
this
is
forget:
it
forget
about
a
forget
about
releasing
testing
who's
gonna.
H
We have the test framework, which is, you know — the tools that we use are going to inform how we do the job. And the first thing that I could see is: imagine, again, that we were all paid to work on Kubernetes, and we were coming brand new to this, you know, switching from a feature team to SRE or operations…
H
It
would
be
really
nice
to
have
another
some
sort
of
onboarding
that
wasn't
just
hey.
This
is
broken.
Please
fix
it
and
enter
the
end
to
that,
and
just
just
focusing
and
you're
just
focusing
on.
H
On
solidifying,
all
the
tribal
knowledge
that
we
had
that
we
have
that
we
were
lucky
that
we
were
lucky
to
gather
from
from
the
front
of
a
from
the
released
from
the
release
team
and
the
like.
It
just
solidify
all
that
and
cannot
be
a
user
group
for
a
a
user
group
for
ca
for
sick
testing.
D
So what it sounds like — if we're gonna use the metaphor of a runbook, right — is that if we have this ops team, what they need is a runbook, and you've already created the outline for that with these tools that are involved in the process. So yeah.
A
So I think a good start for the skeleton is the CI signal team's handbook, right — pulling that out and seeing what works generally. There's a part of the responsibility which kind of starts to shift from essentially bubbling these alerts up and outwards to actually looking at them, right. So pulling out the salient bits of the CI signal handbook is a really good start.
D
Right
so
I
have
that
here,
like
I've
already
I've
also
listed
whatever
documentation.
I
could
find
that
supports
these
different
headings
so
that
you
know
we
can
just
refer
to
them.
We
know
it
exists
already
so
like
with
test
grid
going
to
the
handbook,
seeing
what's
there
and
then
maybe
taking
one
of
the
shadows
or
some
of
them
and
saying
like
what
are
you,
what
is
mystifying
to
you?
D
What
do
we
need
to
clarify
or
what
is
missing
and
then
doing
the
triage?
I
mean
if
we
again,
if
we,
if
we
do
all
of
this
at
one
time,
it's
going
to
be
huge.
So
what
I
would
suggest
is
like
breaking
apart
this
list,
picking
off
the
first
two
topics
that
make
sense
right,
maybe
the
most
important
tools.
Yes,.
A
So
I
would
say
in
in
terms
of
that,
the
it
really
does
go
back
to
like
what
do
you
do
when
you
get
a
test
grid
alert
right
answer
that
first
right,
we
should
be
able
to
answer
that
first
and
anyone
should
be
able
to
answer
that
question
if
they
own
jobs
anywhere
right.
The
the
second
piece
would
probably
be
understanding.
A
Ci
right,
prow
is
specific
to
kubernetes.
There
are
things
that
are
happening,
and
there
are
things
that
are
happening
behind
the
scene
that
like
when
it
works.
You
don't
have
to
care
about
it
right.
You
don't
need
to
know
about
tide.
You
don't
need
to
know
about
any
of
the
plug-ins
that
are
attached
to
a
repo.
A
It
just
does
its
thing
right
when
it's
not
going
well
right,
when
you
know
when
bosco's
pools
or
full
or
different
things
are
happening
or
tide
is
stuck
because
one
of
the
nodes
did
a
thing
right
like
there
are
all
these
random
scenarios
that
you
can
enter,
and
maybe
those
are
more
advanced
scenarios.
But
I
do
think
that,
like
someone
who
is
approaching
just
the
same
way,
they
might
understand
a
github
action
right
that
they've
configured
for
a
repo
that
is
non-kubernetes.
H
The
only
other
thing
that
I'd
like
to
add
is
if
anybody
on
the
call
is
interested
and
not
currently
working
on
any
of
this,
and
please
feel
free
to
reach
out
to
dan
rob
myself
and
other
than
that.
I
guess
it
will
also
be
good
to
commit
to
definitely
to
do
this
in
the
open
as
much
as
possible
and
jump
in
in
conversations
with
with
sick
testing,
because
alt
a
ultimately
is
like
we
want
to.
H
We,
we
have
all
these,
I'm
just.
Let's
call
it
knowledge
and
we
have
all
this
a.
We
have
all
this
knowledge
and
we
want
to
make
sure
that
he
gets
out
that
way
that
he
gets
out
into
the
community
and
actually,
instead
of
just
having
one
sub
theme
of
the
release
team
focusing
on
all
of
this
is,
like
also
have
a
seek
storage.
We
have
some
visibility
on
this.
You
know
when
people
are
working
on
this,
so
they
can
see
that
we
publish
this
new
thing
on
documentation,
etc.
D
So
you
can
take
this
existing
dock
or
you
know
we
can
create
a
section
for
it
here
and
then
you
just
the
three
of
you
who
are
the
most
knowledgeable.
I
mean
I
think
I
don't
know
if
they're
we
know
for
sure,
rob
dan
and
jorge
dump,
dump
your
brains
into
that
and
make
an
outline
and
you'll
start
creating
the
documentation
that
way,
because
you'll
you'll
be
shaping
what
to
do.
When
you
get
a
test
screen
alert
and
then
we
can
pull
in
more
more
feedback.
D
I
mean
that
can
be
open,
but
if
the
three
of
you
don't
what
we
have
to
do,
that's
a
great
starting
point
that
gets
us
on
our
way:
you're,
basically
creating
the
documentation.
That
way,
that's
what
we
need
cool
so
and
I'm
here
to
help
you
do
that.
I
think
right
now,
if
we,
if
we
set
ourselves
an
estimate
and
I'm
very
flexible
on
that,
but
like
how
how
big
of
a
job
are
we
talking
here
is
this:
it's.
D
That's
great
to
hear
easy
jobs
are
the
best.
So
once
we
have
that,
maybe
we
have
something
by
the
next
meeting.
I
don't
want
to
create
pressure
right,
but
it
sounds.
G
Like
that's,
actually
reasonable,
yeah
like
yeah
until
I
can
click
a
button
to
give
a
ci
signal
status,
I
I
I
won't
be
doing
anything
else.
J
On that topic, I just want to throw one thing in there: as a CI signal shadow, one of the hardest things to learn so far has just been a lot of the "why" and the "what". We talked about, like, the master-blocking branch and the board and all that, and there's actually no description of what that actually is, and why, and who owns it, and who creates that — so, as we're doing that documentation…
D
That's
that's
our
target
user
here,
because
we
we
know
that
the
sigs
don't
know
that
either
and
for
us
to
be
able
to
onboard
people
quickly,
yeah
so
use.
You
have
you're
an
expert
eddie
because
you
have
the
fresh
eyes
of
a
newcomer
here.
So
if
you're
willing
to
get
involved
here,
that
would
be
extremely
valuable
because
you're,
a
great
editor
to
tell
us
like
this,
doesn't
make
any
sense
to
me.
A
I popped the release-blocking jobs explanation into the chat — take a look at that and let us know if there are things you'd like to see there.
A
We'll get to that point. So I think we can have parallel tracks: one, right now, is fitting the need for a CI signal shadow — we have identified a need, that there's missing information in the handbook, so let's fix that, and we can fix that faster than anything else — as well as doing all of the broader release-plus-testing kumbaya work.
G
Grid
alert,
let's
add
to
that,
actually
that
that,
in
in
in
documenting
what
a
ci
signal
shadow
does
to
bear
in
mind
that
that
there
are,
there
is
a
role
there
and
the
role
is
somebody
who
is
seeking
out
a
flaky
test
and
gathering
the
evidence
to
hand
it
off
to
somebody
who
is
going
to
fix
it
and
and
really
that,
in
terms
of
that
for
today,
for
now,
as
a
ci
signal
team
exists,
we
describe
the
steps
that
need
to
be
taken
from
from
alert
and
down
to
report,
but
but
really
anyone
who
has
an
interest
in
flaky
tests
and
should
know
how
to
work
through
these
steps
and
the
key
people
who
should
be
interested
in
this
are
the
test
owners
and
the
test
writers
and
the
the
the-
and
I
think
I've
said
this
before
I'd
like
to
see
ci
signal
as
consultants
walking
through
the
ci
playground
or
taking
you
to
where
you
need
to
go
to
to
get
your
tough
job
done
and,
and
we
basically
set
the
table
for
them.
G
But
you
know
they
have
to.
They
have
to
get
stuck
into
the
meal
themselves.
But
anyway,
analogies.
I'm
getting
sick
of
them
myself,
so.
A
So yes — but yeah, in terms of action items, we want to do this: what happens when you hit TestGrid, what happens when you get an alert. It sounds like, Rob, you've got some stuff that you're…
A
On
so
in
the
meantime
jorge
can
you
take
making
the
outline
as
your
action?
Yes
awesome
and
let's
all
check
in
in
two
weeks
next
sig
release
meeting.
D
Yeah, exactly. So the question is: how do we handle a TestGrid alert? That's an outline topic within the doc that's already present there, with the rest of those GitHub issues around it as existing outlines — everything coming from those GitHub issues has been outlined and it's all in one place. So we're breaking a part of that off to focus on it, because if we look at the whole outline, we don't get anything done; we look at one item, and it's this urgent key question from that.
G
Can you see my screen? You can — okay, one, two, that's good. There's nothing earth-shattering here. What I've done is to write a report that, at the moment, is having a chat with TestGrid: it's going to the TestGrid summary, which is this here, and it's pulling down the JSON that produces this page. In the report, the key things to note here…
G
Is
I'm
collecting
I'm
noting
when
the
data
was
collected
so
out
in
the
future
that
may
be
useful
for
flake,
analytics
or
analytics
and
flakes
down
down
the
line
I
separate
out
and
the
job
owner.
So
basically,
I
picked
the
first
in
square
brackets
tag
that
has
a
that
references,
a
sig,
and
I
pull
that
out
as
the
job
owner
if
for
a
test
name
that
or
test
step
or
phase
that
doesn't
have
a
sig
associated.
G
I
just
put
in
a
piece
of
placeholder
text
to
say
that
the
job
owner
owns
this
test,
and
so
I
can
produce
the
csvs
for
blocking
and
informing
jobs,
and
I'm
really
bad
at
this,
and
so
anyone
who's
better
at
this
than
I
would
do
possibly
a
better
job.
But
that's
just
a
simple,
quick
me
going:
how
do
pivot
tables
work
effort
to
doing
a
pivot
table
on
that
data?
So
it's
yeah.
It's
not
something,
I'm
very
good
at
now.
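A rough sketch of that kind of summary pull — not the actual report code — assuming the public endpoint at `https://testgrid.k8s.io/<dashboard>/summary`. The `overall_status` field name and the dashboard used below are assumptions based on that JSON, not details taken from the meeting.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"time"
)

// tabSummary holds the handful of fields we care about from the TestGrid
// summary JSON; the field name is an assumption about the public endpoint.
type tabSummary struct {
	OverallStatus string `json:"overall_status"`
}

func main() {
	dashboard := "sig-release-master-blocking" // illustrative dashboard name
	url := fmt.Sprintf("https://testgrid.k8s.io/%s/summary", dashboard)

	client := &http.Client{Timeout: 30 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// The summary endpoint returns a map of tab (job) name -> summary object.
	tabs := map[string]tabSummary{}
	if err := json.NewDecoder(resp.Body).Decode(&tabs); err != nil {
		log.Fatal(err)
	}

	// Note when the data was collected, as the report described above does.
	fmt.Printf("collected at %s\n", time.Now().UTC().Format(time.RFC3339))
	for name, tab := range tabs {
		if tab.OverallStatus != "PASSING" {
			fmt.Printf("%s: %s\n", name, tab.OverallStatus)
		}
	}
}
```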
G
The
the
thing
that
I
want
to
say
about
this
report
is
that
at
present
I'm
generating
csv.
I
don't
see
this
as
being
the
final
report
output,
so
I'm
I'm
kind
of
eating
my
own
dog
food
here.
I
don't
see
this
as
a
final
solution,
but
in
terms
of
notable
parts
of
this
report,
the
two
things
are
that
I'm
collecting
the
time
at
which
this
was
collected.
G
I'm
splitting
out
the
the
owner
and
the
next
step
that
I'm
going
to
work
on
for
the
next
few
days
is
to
go
to
the
project
board
and
look
for
issues
and
do
a
search
for
issues
logged
against
specific
jobs
and
specific
tests
and
put
those
beside
each
row
in
this
in
this
report.
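A sketch of one way that cross-referencing step could work, using GitHub's public issue-search API. The repository, query shape, and job name below are illustrative assumptions, not details from the meeting.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"net/url"
	"time"
)

// searchResult mirrors the small slice of the GitHub search API response we use.
type searchResult struct {
	TotalCount int `json:"total_count"`
	Items      []struct {
		Number  int    `json:"number"`
		Title   string `json:"title"`
		HTMLURL string `json:"html_url"`
	} `json:"items"`
}

// openIssuesMentioning searches open kubernetes/kubernetes issues whose title or
// body mentions the given job or test name. Unauthenticated requests are rate
// limited, so a real tool would send an auth token.
func openIssuesMentioning(jobOrTest string) (*searchResult, error) {
	q := fmt.Sprintf(`repo:kubernetes/kubernetes is:issue is:open in:title,body "%s"`, jobOrTest)
	u := "https://api.github.com/search/issues?q=" + url.QueryEscape(q)

	client := &http.Client{Timeout: 30 * time.Second}
	resp, err := client.Get(u)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var out searchResult
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	return &out, nil
}

func main() {
	// Illustrative job name only.
	res, err := openIssuesMentioning("ci-kubernetes-e2e-gci-gce")
	if err != nil {
		log.Fatal(err)
	}
	for _, it := range res.Items {
		fmt.Printf("#%d %s (%s)\n", it.Number, it.Title, it.HTMLURL)
	}
}
```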
G
So
there's
the
two
benefits
that
will
exist
for
a
ci
signal
lead
will
be
one
click.
A
button
get
a
status
update
after
a
minute.
After
about
a
minute,
the
then
the
other
possible
use
case
in
terms
of
day-to-day
ops
for
a
ci
signal
team
is
that
you
can
go
to
a
flaky
test
and
refer
to
a
report
that
says.
G
Oh,
we
already
have
an
issue
logged
for
this,
so
I
have
a
few
ideas
of
what
we
can
do
and
myself,
jorge
and
and
and
dan
have
been
talking
about
where
this
could
go
right
now.
This
is
just
a
go
program
that
runs
in
a
go
runtime
and
that's
it
there
and
that's
the
main
function
for
it.
Basically,
I'm
actually
in
the
middle
of
of
of
adding
a
change
to
search
for
logged
issues,
but
I
just
didn't
initialize
the
class
there.
G
I
am
just
supplying
the
board
that
I'm
interested
in
and
I
give
it
a
connect
to
that
time,
and
then
I
build
up
a
summary
url,
where
I'm
going
to
lift
that
data
and
and
then
there's
some
reporting
fields
for
logging.
I
collect
the
status
of
the
tab
group,
which
is
all
of
the
the
jobs
which
that
summary
page
on
test
grid
and
then
for
each
for
each
job,
marked
as
flaky.
I'm
collecting
flaky
tests
right
now
and
then
the
output
is
separate
out
from
here.
G
I
just
walk
the
data
structures
that
I
build
up,
so
I
just
want
to
say
that
that
that
I
am
doing
this
as
a
feature
branch
on
my
fork
of
test
infra-
and
I
see
the
only
place
where
I
can
put
this
as
in
test
chamber-
is
the
experiments
directory
in
text
infra
for
stuff.
G
That's
just
experimental
and
unsupported
by
testing
for-
and
I
may
or
may
not
submit
this
as
a
pr
that,
with
ongoing
discussion
and
usage
of
this
report,
we
may
find
that
there's
things
that
I'm
doing
here,
that
really
should
be
done
in
test
grid.
G
And
if
I
look
at
some
of
the
data
structures
that
I'm
lifting
from
there
are
unused
fields
that
I'm
getting
back
from
testgrid
that
has
field
names
that
I
wish
had
data,
but
I'm
going
off
and
getting
that
data
because
testgrid
isn't
giving
it
to
me,
and
you
know
that's
because
I
yeah,
I
don't
know
why.
That
is.
A
Yeah, so given that this is right now a SIG Release tool, I would put this in k/release. We also have a library for some bits of the release engineering work that maps to TestGrid, so you may find some useful bits in there, and it's something that you can build on together with the Release Engineering team. The reason I suggest k/release is because we version our repo — that's one of the big bits; you are going to run into some fun with tools…
A
That
are
in
production,
use
that
have
not
graduated
out
so
versioning.
A
Thing
for
for
me,
especially
if
you're
going
to
be
using
this
tool
more
intently
in
the
future
cycles,
so
I'd
land
it
there,
especially
like
you're.
I
think
you're
gonna
find
some
bits
that
we've
already
written
on
the
on
the
release
engineering
side
that
you
can
play
around
with
two.
G
Well,
yes,
there's
so
for
run
times
for
now
for
prototyping
and
development,
it's
just
a
go
program.
My
my
thoughts
are
containerizes
whatever
I
do
next,
and
so
so
I
was
thinking
of
using
m
tim's
goal
line,
build
sort
of
set
up.
I
don't
know
if
you
know
that
tim
hawkins
one
there,
then
the
the
two
runtimes
that
I
was
thinking
of
doing
well
see
I
could
probably
cater
for
multiple
old
times.
So,
although
I'm
producing
csv
now,
I
might
leave
that
in
there
as
an
ad
hoc
option.
G
I
might
also
leave
jason
in
as
a
to
run
ad
hoc
tools
locally
and
but
the
other
two
runtimes
I
was
considering
was
a
pro
instance
somewhere
doesn't
have
to
be
that
the
ktc
i1
could
be
the
cncf
one
and
then
the
other
thing
was
to
just
go
dash,
dash
server
and
run
up
a
port
and
then
and
so
on,
running.
G
It's a good first start, yeah. And that's the way I've written it — to collect data and then output data. So although, you know, there's an architectural sort of line there at 272-ish, that could be made into proper components, where an output formatter is taking that data and then doing the needful, kind of thing.
A
Look
at,
I
would
say,
take
a
look
at
package
test
grid,
testgrid.go
yeah
and
see
if
there's
any
useful
bits
in
there
for.
A
Yeah, so I mean, the cool part is, if it grows into something that we decide is no longer really a release-y thing, at least it'll be versioned and it'll be in a state that we can probably…
G
Yeah — but to be honest, not just throwing it over the wall at somebody: that has influenced how I've written it, because I want to deliver something that's, you know, not too intimidating and is reasonably maintainable, and that kind of thing. But anyway, any questions? If not, I'll hand back over. A simple one first: how long does it take to run, to collect? It takes about — under a minute. Cool, yeah.
G
Unfortunately,
one
of
the
things
I
was
thinking
of
doing
was
parallelizing
it,
but
I
have
a
rake
of
maps
in
here
and
I
don't
think
golang
and
and
and
associative
arrays
are
eminently
parallelizable.
I
don't
think.
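For reference, a minimal sketch of the usual pattern for that: fan the per-dashboard fetches out to goroutines and guard the shared map with a mutex, since Go maps are not safe for concurrent writes. The fetchSummary stub and dashboard names here are placeholders, not the report's actual code.

```go
package main

import (
	"fmt"
	"sync"
)

// fetchSummary stands in for the HTTP call to a dashboard's summary endpoint.
func fetchSummary(dashboard string) (string, error) {
	// ... a real version would GET https://testgrid.k8s.io/<dashboard>/summary ...
	return "FLAKY", nil
}

func main() {
	dashboards := []string{
		"sig-release-master-blocking",  // illustrative
		"sig-release-master-informing", // illustrative
	}

	var (
		mu      sync.Mutex
		wg      sync.WaitGroup
		results = map[string]string{}
	)

	for _, d := range dashboards {
		wg.Add(1)
		go func(d string) {
			defer wg.Done()
			status, err := fetchSummary(d)
			if err != nil {
				return // a real tool would collect errors too
			}
			// Serialize writes to the shared map behind the mutex.
			mu.Lock()
			results[d] = status
			mu.Unlock()
		}(d)
	}
	wg.Wait()

	for d, s := range results {
		fmt.Printf("%s: %s\n", d, s)
	}
}
```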
A
You could dig around the utils package as well, in k/release. I think we've all collectively tried to do a bunch of different things on related bits of data and have eventually consolidated a lot of the work within various packages, so you may find some useful bits and be able to deduplicate a bunch of code. There's also some parallelization stuff hanging around k/release that we can dig into.
C
And then one other question — we're basically out of time, so maybe we could discuss more on Slack — but I'm curious, comparing and contrasting versus the test-infra folks' triage tool. What I can say is, I sense — well, not necessarily the difference, but the unique additive: you've got some specific thing that you're optimizing for, much different than what they've created their sieve for. But maybe we could — it's the end of our time.
G
One
last
analogy
is
test
grid
is
the
satellite
view
in
google
maps
triage
is
the
street
level
view,
and-
and
this
is
a
drone
halfway
between
the
two
zooms.
A
So with that, we are out of time. Thank you all for hanging out with us this week. If you are on one of our other calls, we will see you then; if not, see you in two weeks. Later, see you.