From YouTube: k8s 1.16 - Week 8 - Release Team Meeting 20190821
Description
Release details: http://bit.ly/k8s116
A
Hello and welcome to the first midweek burndown for the 1.16 release. Today is Wednesday, August 21st; we are in week 8 of the release. It's wonderful to see all your faces before we get into this packed agenda. I would like to remind everybody that all conversations here (and I see a lovely smile on your face there; you're laughing at me) are subject to the Kubernetes Code of Conduct. It's lovely to see that smile.

A
It made me happy, thank you. The Code of Conduct basically states that we should treat each other excellently and not be jerks, as I've heard it eloquently put. So let's get into this packed agenda for this week. If you're following along from home, we're in week 8, August 21st. The links have been added to the chat, and I have taken note that the burndown has the wrong link in it, so I will get that fixed ASAP and you will all get a hundred emails each, I swear.
A
Okay, so let's get into it. If I could have a note-taker, that is always really pleasant, so I don't have to run the agenda and take notes. So if you would love the honor of having your name in the note-taker space and taking notes, I would be much obliged and very thankful. Also, attendees: please place your name in the attendees section of today's meeting minutes and agenda. Without further talking from me, let's get into it. Kendrick with an enhancements update; take it away, Kendrick.
B
What's going on, y'all. Right now we're looking good, everything's green: thirty-nine enhancements tracking, and one dropped this past week (API unions moved to the 1.17 milestone). Starting Monday is when we're going to start bugging the crap out of people to make sure that we see all the k/k PRs that are going in, so we have an actual tracking mechanism going into code freeze. Other than that, we're good.
C
First off, hello everyone. In the world of CI signal I don't really have anything interesting to tell you all, and everything seems to be working, at least at this point. Hopefully it's going to stay like that for a long, long time. Just a couple of things: the CI signal report was sent out after the release team meeting last Monday, so if you want to look at it, you can get the details there.
C
A couple more things are going on within the community, especially efforts cleaning up CI jobs and trying to organize everything within the release-blocking jobs; check it out. Other than that, one big thing: we finally have the 1.16-blocking and 1.16-informing dashboards. You can see the links to them within my update. 1.16-blocking and informing are the same as master-blocking and informing, and everything so far seems to be okay, nothing too terribly wrong.
C
There
are
a
couple:
a
hit
are
a
couple
flakes
every
every
now
and
then
from
some
age,
usually,
six
storage,
a
test
pills
for
some
reason,
but
nothing
persistent
other
a
other
than
that
they
they
there
is
a
CA.
There
is
a
CI
job
from
OpenStack.
They
confirm
a
series
of
conformance
a
conformance
test
running
on
OpenStack
I
got
in
touch
with
someone
from
OpenStack,
but
I
didn't
think
I
didn't
get
too
many
details.
I
just
got
the
I
just
got.
D
I was going to try to hack on that before this meeting, but a few more important PRs jumped in the way. So just a note on those jobs: they will be important eventually, but they are not important today, because we didn't have them before. All right, so this is just kind of working out what the scripts do.
C
Man, this is something that I've been meaning to look up for a while. I'm going to give you the worst possible answer that anyone can give and just say that it's been like this since I've been on CI signal. I will definitely make it a priority to give you a proper answer by the next meeting.
D
Meeting
so
one
note
there,
what
like
one
thing
that
I
think
that
we
need
to
tackle
as
a
cig
overall
is
I
kind
of
don't
like
the
way
that
jobs
end
up
on
our
boards
right,
it's
possible
to
annotate
it
essentially
now
like
they've
made
it
so
that
we
can,
you
can
create
a
job
control.
Your
job
add
it
to
appropriate
boards
via
annotations
in
in
the
configs
for
and
the
job
configs
intestine.
D
For
right,
the
the
problem
that
I
see
there
is
someone
can
arbitrarily
add
jobs
to
any
SIG's
boards,
without
intervention
from
the
sig
right,
I
think
for
the
master,
informing
the
master,
Baca
or
the
star
informing
and
blocking.
That
is
really
important,
that
we
have
some
oversight,
some
sort
of
audit
when
jobs
do
and
don't
get
added
right.
D
Additionally, there is another annotation, I think called fork-per-release or something like that. When a job has that annotation, there is a config forker that runs when the branches are cut (or rather, a config forker that you need to run when the branches are cut) that will generate new jobs for the relevant dashboards. So those are the jobs that land in, like, the 1.16 dashboards.

D
Yeah, in test-infra: the kubernetes config jobs, the kubernetes release jobs, I think, and then the 1.16 ones. Right, so between those two things, jobs can either arbitrarily be added to our boards or not be properly forked. It's something that we need to look into with test-infra, but yeah, one of those two is probably the reason. Yeah.
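[Editor's note: a minimal sketch, in Python, of what a config forker along these lines might do. The fork-per-release and testgrid-dashboards annotation names come from the discussion above; the config shape and renaming scheme here are assumptions, not the actual test-infra implementation.]

```python
import copy

# Hypothetical job configs; real test-infra Prow configs differ in shape.
jobs = [
    {
        "name": "ci-kubernetes-e2e-gce",
        "annotations": {
            "fork-per-release": "true",
            "testgrid-dashboards": "sig-release-master-blocking",
        },
    },
    {"name": "ci-kubernetes-build", "annotations": {}},
]

def fork_jobs(jobs, version):
    """Clone every job annotated fork-per-release for a newly cut branch."""
    forked = []
    for job in jobs:
        if job["annotations"].get("fork-per-release") != "true":
            continue
        new = copy.deepcopy(job)
        # Suffix the job name and retarget the dashboard at the new release.
        new["name"] = f"{job['name']}-{version.replace('.', '-')}"
        boards = new["annotations"]["testgrid-dashboards"]
        new["annotations"]["testgrid-dashboards"] = boards.replace("master", version)
        forked.append(new)
    return forked

for job in fork_jobs(jobs, "1.16"):
    print(job["name"], "->", job["annotations"]["testgrid-dashboards"])
    # ci-kubernetes-e2e-gce-1-16 -> sig-release-1.16-blocking
```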
C
Yeah, also, just to tack onto that. Originally, when I first came into the release team, my understanding of the way the jobs worked was, I assume, you know, the 1.16-all, the 1.15-all dashboards, etc. I believe the consensus was that people just dump whatever jobs they want into them.
C
Eventually,
they
want
to
eventually
learn
into
master
in
forming
within
a
within
the
within
the
old
and
within
they
all
dashboards.
And
if
we
see
that
those,
if
we
the
released
him,
see
that
those
jobs
are
actually
behaving
there
and
they're
being
wrong,
they
meet
the
release,
criteria
or
release
blocking
criteria.
Then
we
play,
then
we
bump
them
up
for
they
for
the
six
put
a
good.
That
was
a
process.
He
was
it
an
intended
for
us
in
the
release
team
and
the
problem
is
a
as
far
as
I
know.
C
That
has
never
happened
for
any
of
a
for
a
for
any
a
for
any
job.
That
is
actually
that
is
living
a
that
is
living
within
the
all
dashboards.
Essentially,
it's
just
a
black
hole
were
for
some
reason:
some
jobs
live
in
there
and
no
one
really.
No
one
really
takes
care
of
them,
a
normal,
a
and
no
one.
Besides,
the
patch
managers,
as
far
as
I
know,
actually
actually
look
in
actually
look
into
them.
C
Then
there
has
been
more
recently
there's
been
a
more
active
workflow
from
the
community
were
six
is
six
actively
work,
a
work
and
work
on
CI
jobs,
they
proposed
him
and
they
and
they
you
know
they
open
pay.
They
open
PRS
in
a
Sulha
step
in
this
is
they
released.
Him
is
tag
on
the
PR.
If
everything
looks
good
and
we
had
that
job
into
into
one
of
our
dashboards
and,
for
example,
that
was
they
wait
all
day.
Cuban
minion
kinder
jobs
were
actually
adding
into
master
informing
a
as
Friday
they.
C
Never
they
never
actually
went
through.
They
dumped
them
into
all
they.
If
they
behave,
then
we,
then
we
bump
them
into
master
informing
or
must
a
master
working.
So
it's
okay,
so
just
to
tag
that
I
guess.
we also need to establish a policy, communicate it with the SIGs, and, you know, keep everyone actively looking at it, because jobs are just kind of a thing that works.

C
Okay, it works now, but one release after this everyone is going to forget that we did this. I'm not saying that this always happens, but for a lot of jobs it kind of happens: we work on this, we fix it, and hopefully somebody else will be able to look at it in the future, but, you know, we just leave it aside.
D
If anyone is interested in that stuff, please take a look and review it. Once that merges, what I'd like to see is an issue template for proposing a job, or a suite of jobs, to release-blocking or release-informing, or *-blocking or *-informing. The idea being that we establish an audit log.
D
We have the conversation in one place, great. Because all that you've mentioned, like when the release team gets tagged, it's if they get tagged, right? It assumes that the SIG is well behaved, or cares that we care about these jobs. So you're right, we definitely need to establish a policy around that, and I think we can do that once we land Josh's doc.
E
It was just that those did not get copied over to informing or something, if there was status on them, because they are in, for example, release-1.15-informing. The other one, which I just have to file an issue about, is that I don't understand why the reboot test only runs against master, but that would be a separate issue.
C
Absolutely, and I'll get back to you with an answer on that. I think the simplest answer is, you know, people just designed their jobs and created the configuration to run against master, and we never created a configuration for all the other branches. But I'll double-check and give you a full answer. Yeah.
A
One other thing: I see that Aaron Crickenberger is doing some work to break apart long-running pieces in master-blocking, the serial job, which seems to be something that runs for whatever n number of very long hours. I don't know if you saw yesterday, but I think he demoted the HPA jobs from master-blocking, and he's still working towards getting them out of serial.
G
Hey everyone, here are our stats for today. We have 76 total issues triaged, including non-flakes and some other types, which is four more than we had on Monday. Regarding PRs open over a month: we were at 49 on Monday. The total number of issues, including flakes, is 89. No other special updates for now; we are going to start pinging everything on Friday.
G
Yes, we have started pinging, but there are some problems: when you ping an issue that was created by someone who is not a maintainer or some such, you may not get a response in time. For example, I think we had a much better response when we started pinging PRs, and we managed to bring that number down much faster, like in a day or two, but this is not the case for issues. We are going to see how it goes until the next meeting.
H
Hi everyone. For today we have about 28 enhancements whose docs are good to go, but we are waiting on eleven. I've communicated with the shadows, who are also tracking enhancements and reaching out on Slack. I told them also to reach out to the enhancement owners and get a status update on those eleven, and I think we should be able to get them in before the placeholder deadline. So hopefully, by end of tomorrow or early Friday, we should be great, yeah.
I
Good morning/afternoon/evening, everyone, and happy Wednesday. I have a little bit beefier of an update than usual for you. We have been reaching out to a couple of the SIGs; we're still having some issues getting responses from some of them, and I've listed them in the agenda. I'm going to be working with Caitlin, and I might need some help from the release leads in terms of reaching out to those groups.
I
We are reaching out this week to get some blog drafts set up as well; we're really looking for teams to have those in by late next week. I'm also going to be starting on the 1.16 blog templates and running those by various teams, so we can get the ball rolling on that. And then tomorrow at 9:00 Pacific time I did call a meeting just to review items of importance for 1.16; I just kind of want to open that up to everyone and make sure we're on the right path.
D
You know, also, I would say reach out to Paris or Jorge, who do the SIG chairs and technical leads "need to know" email. A lot of people may bury an email that gets sent to a group, but that email gets sent to the individual email addresses of all the chairs and tech leads. So maybe, if you need an email avenue, that's the right way.
D
So Jung is actually not going to be on the call; I'll give the update. Basically, we've got a beta coming soon, or rather we just did a beta; the next beta is going to be September 4th, and that's beta 2. Nikhita will be handling some of the branch fast-forward tasks. Basically, branch fast-forward can only be done by people who are part of the release managers group. The people who are part of that group are the patch release team...
D
...the branch managers, including myself and Nikhita, and Klaus, I believe, who has access for publishing-bot reasons and other stuff that I may not be aware of. Anyhow, we require someone who has that access to do it. So, given that Nikhita is a release manager associate, we want to give her some opportunity to exercise that muscle. I believe Hannes will also be occasionally doing that: if you were part of the release engineering meeting (I'm forgetting meetings now; I think it was the release engineering meeting earlier this week), Hannes did a demo of a pipeline using Concourse, leveraging branch fast-forward as well as the release notes tooling that we have today. Pretty interesting stuff. So to test that tool, he wants to occasionally do branch fast-forwards, and that's totally cool. Nothing super exciting from us, but any questions, let me know.
D
Oh, and Josh and Stephen have no updates. Sorry, there was one more interesting thing from the release engineering side of the house. We just had a meeting in wg-k8s-infra to talk about, or rather Brandon Philips did a demo of, argot, which is essentially a tool that allows you to gather packages, currently from GitHub, and validate them against a certificate transparency log, which is pretty cool. This is, you know, if you've seen the recent news about the RubyGems repository being compromised.
D
Part of that has to do with how we validate our supply chain: once releases get to a certain point, how do we validate the fact that they are released by the people that said they released them, and that nothing has been edited since they released them? So the demo was pretty cool, and we're considering using that for Kubernetes. One of the first steps to doing that (I'm linking it in Zoom) is...
D
...allowing us to publish SHA-256 and SHA-512 sums to GitHub, so the argot tool can actually pick those up. I have a PR in flight right now, basically done, that refactors some of the release tooling to do that. So if you want to check that out... I should put these in the doc and not just Zoom; I'll do that in a second. But yeah, those are the issues and the PR, if you want to take a look at them.
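[Editor's note: for illustration, the checksum side of that work boils down to something like the following Python sketch; the artifact layout and the SHA256SUMS/SHA512SUMS file names are assumptions, not the actual release tooling.]

```python
import hashlib
from pathlib import Path

def write_sums(artifact_dir: str) -> None:
    """Write SHA256SUMS/SHA512SUMS files covering every artifact, in the
    `<digest>  <filename>` format that sha256sum/sha512sum understand."""
    artifacts = sorted(Path(artifact_dir).glob("*.tar.gz"))  # assumed layout
    for algo in ("sha256", "sha512"):
        lines = [
            f"{hashlib.new(algo, a.read_bytes()).hexdigest()}  {a.name}"
            for a in artifacts
        ]
        # These sum files are what would be attached to the GitHub release,
        # so a verifier such as the argot tool could pick them up.
        Path(artifact_dir, f"{algo.upper()}SUMS").write_text("\n".join(lines) + "\n")
```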
A
Thanks, Stephen. Emeritus leads update: no updates, that's still the case? Give me a thumbs up if that's still the case. Okay, we can continue on to the release lead update. Given that we're in burndown, I'm going to assign a color to the release, because I think that's relevant given all the information coming in from all the teams, and I'm going to go with yellow. My main concern from what I'm hearing, just to call it out, is probably bug triage: making sure that we can adequately get those issues addressed.
A
But
happy
to
hear
mark
has
got
a
plan,
for
that.
Just
seems
like
a
high
number
in
a
very
short
amount
of
time,
but
we
will
keep
an
eye
on
that
and
obviously
Taylor
I'd
like
to
get
the
comms
team
in
a
good
position.
So
they
have
adequate
information
from
all
those
things
so
I'm
going
to
call
it
yellow
but
happy
to
CCI
signal
in
a
green
state
and
enhancements
tracking.
Well,
as
for
updates
to
milestones
on
Friday
tonday
mentioned,
there
is
the
PR
deadline
to
have
your
placeholder
Docs
PR
in
he
is
looking
for.
A
I
also
wanted
to
mention
the
next
burndown,
so
we're
on
an
additional
two
meetings
a
week
at
the
moment.
So,
in
addition
to
the
one
that
we
have
on
Monday,
we
have
one
today,
which
is
this
one
now
and
one
on
Friday
at
9:00
a.m.
and
that
will
repeat
for
the
next
two
weeks,
I
sent
this
out
on
the
mailing
list,
and
then
we
will
go
to
one
every
day,
so
there'll
be
a
Tuesday
and
a
Thursday
included
as
well.
A
Yes, yes, okay. So I'm going to take a look at the in-progress issues. First, we have backporting fixes and defining exceptions. Let me pop the hood on this one and take a look at the latest here. So that's with Jim; I know Jim picked that up a week ago. No updates. Is Jim on the call? I'll ping Jim on this issue.
D
So we had mentioned this, I think on the Monday call, or maybe a different call: this one is probably good to close soon. It's kind of one of those definition-of-done things, where we've done quite a bit for the cycle already, but we don't want it to stay open forever. So: chunk down whatever is remaining to do, break that out into separate issues, and then close this one out.
K
Sorry about that, that's weird. I see "release notes" and I'm thinking... you know. Yeah.
D
Yeah, that's on the EAs. I think this one got juggled around a few times, but it should be squarely in Josh's and my court now. I think part of it is we should review not just the survey feedback, but also the exit interviews and all the stuff from both the shadows and the leads from last cycle, and turn that into an even better survey for 1.17.
D
I don't know what we want to do here; this one is kind of a "what is the value" one, right? Like, where would we put this? We collate the exceptions right now, but who is it useful to, and where would we put it? Those would be the questions before doing anything here. I think this can stay on the backlog. Tim, I don't know, is that stated in here? Yeah, but it was like, we could write a tool to do something; I don't really know what it is yet.
C
Yeah, you can put that in my in-progress, please. This one is essentially a result of misinformation and miscommunication between a lot of people in the community. A couple of releases ago there were a lot of issues with a lot of people not knowing how to add a new issue or a PR to a given milestone, and that issue just has some comments on it.
I
Sorry, I just wanted to let y'all know that I went through and kicked the super slow tests out of the serial job. I don't know if George already reported on that. I posted something in the sig-release channel saying: hey, now that we have this dashboard that shows the health of the release-blocking jobs and how well they adhere to the release-blocking criteria, we should act on it. So I gently poked SIG Autoscaling to get rid of their HPA tests out of the serial job, and SIG Node; they're going to find other ways to bring those back onto the board. But the serial job has been passing since we did that, and it's gone down from going over its 500-minute timeout to 300 minutes. This is something I kind of hope that this team takes on going forward.

I
I took care of the super low-hanging fruit, but I still kind of think that the serial job runs way too infrequently and takes way too long, and the question needs to be asked: why do these tests matter? What is it that they're exercising that makes them release-blocking?
I
So your best mechanism for enforcement right now would be the set of tests that run against the configs. There are a couple of tests spread out, as you'd notice from the branch manager handbook: there are tests that enforce Prow-job-specific conventions, and then there are tests that also enforce testgrid-specific conventions. The testgrid convention job is the one that enforces things like "release-blocking jobs have to have alerts set up" or "there have to be descriptions", so you could probably enforce that these jobs have to, I don't know, have a "belongs to" something or whatever. Alternatively, you could maybe write something like: any job that uses this particular testgrid annotation has to live in a source path that matches this regular expression, so that you could gate based on OWNERS paths. Yeah.
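[Editor's note: a hypothetical sketch of the kind of convention test described here, assuming a simplified view of loaded job configs; the annotation key and the path rule are illustrative, not test-infra's real ones.]

```python
import re

# Hypothetical (source_path, job) pairs as a config test might load them.
JOBS = [
    ("config/jobs/kubernetes/sig-release/release-blocking.yaml",
     {"name": "ci-kubernetes-e2e-gce",
      "annotations": {"testgrid-dashboards": "sig-release-master-blocking"}}),
    ("config/jobs/kubernetes/sig-foo/foo.yaml",
     {"name": "ci-foo",
      "annotations": {"testgrid-dashboards": "sig-release-master-blocking"}}),
]

ALLOWED_PATH = re.compile(r"^config/jobs/kubernetes/sig-release/")

def test_blocking_jobs_live_under_sig_release():
    """Gate membership of blocking boards on where the job config lives,
    which in turn is gated by that directory's OWNERS file."""
    violations = [
        job["name"]
        for path, job in JOBS
        if "blocking" in job["annotations"].get("testgrid-dashboards", "")
        and not ALLOWED_PATH.match(path)
    ]
    assert not violations, f"jobs on blocking boards outside sig-release: {violations}"
```

Run under pytest, the test above would fail on `ci-foo`, which is exactly the arbitrary-addition case being discussed.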
I
So that's my suggestion for how you can implement that, if that is a convention you wish to enforce. Thank you, thank you. Okay, I'll let y'all go now, sorry. It just wasn't clear to me whether folks had noticed that that happened, but that's going to be the end of me chasing after people for that particular thing.
A
Yeah, I was just going to ask Aaron, actually. First, I think it's worth just stating aloud: thank you for all the hard work you've done, Aaron, to clean up serial. We do appreciate it, and as you know, that job had been failing for several weeks. Did this come to your attention via Velodrome? I was on the community call and I watched your Velodrome updates last week. Was it pretty much looking at the average of all the job runs and saying these are the three that stick out like sore thumbs?
I
Yeah, yeah. This is the dashboard I was talking about. I feel like, and I forget which release I did it in, I laid out the release-blocking criteria: we expect that any job that's release-blocking should adhere to these criteria. And then nobody implemented anything to actually measure this, and I don't know that anybody actually acted on this policy, other than some battling back and forth to kick the scalability jobs over to informing.
I
I don't know that we ever, like, measured this, and policy is cool, but if it's not acted upon I don't really see the point of it. It's kind of unclear to me whether CI signal has ever... Again, it's maybe a lot of toil for humans to notice this and measure this. So this is me trying to wrap that up and make sure that we have the tools in place to actually hold our jobs to the criteria we said they should be measured against.
I
I don't think we'll ever get to the point where a bot can automatically open up a PR to take a job out if it's demoted, although that is the way that some CI systems at some large companies, who shall remain nameless, work: I think they quarantine tests or jobs that are misbehaving, and I think it'd be cool if we got to that point. This is, I am hoping, a tool that allows the humans on this team to make those judgments about whether or not we should quarantine things.
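[Editor's note: a toy sketch of the kind of quarantine judgment call described here, assuming a newest-first list of run results; the threshold and the result format are made up for illustration.]

```python
def should_quarantine(results, flake_threshold=0.3, min_runs=10):
    """Flag a job for human review when its recent flake-plus-failure
    rate crosses a threshold. `results` is newest-first, e.g.
    ["green", "flaky", "failing"]."""
    recent = results[:min_runs]
    if len(recent) < min_runs:
        return False  # not enough signal yet
    bad = sum(1 for r in recent if r != "green")
    return bad / len(recent) >= flake_threshold

runs = ["flaky", "green", "failing", "green", "flaky",
        "green", "green", "flaky", "green", "green"]
print(should_quarantine(runs))  # True: 4 of the last 10 runs were not green
```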
I
It was unclear to me how much the previous dashboard, which focused on a bunch of random jobs or a bunch of random repos, was being used by people, and I just kind of continued that. I guess the one release-blocking criterion that is not represented on here, that I haven't yet figured out...
I
...I know that came out of, tribally, what I have seen the release team do in times past; you all tell me if you still do it. When we're getting ready for the release to go out the door, to give us comfort and to feel like we have appropriately soaked the release, we kind of wait until we see three runs of all the jobs, and if they're all green, we're happy.
I
If not all of them are green, then we kind of try to read the tea leaves based on how frequently the jobs have been failing or flaking in the past, what the open issues are, and what the CI signal spreadsheet looks like. So yeah, that's where the three-commits thing came from, but I don't think anybody measures it, which is why flake rates are being displayed right now as opposed to commits in a row. So, if somebody wants to help measure commits in a row...
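[Editor's note: a back-of-the-napkin version of the "three green runs in a row" soak check described above, assuming a newest-first list of run results rather than testgrid's actual data format.]

```python
def soaked(runs, required=3):
    """Given job results ordered newest-first, report whether the
    newest `required` runs were all green."""
    streak = 0
    for status in runs:
        if status != "green":
            break
        streak += 1
    return streak >= required

# The soak criterion: every job needs three green runs in a row.
jobs = {
    "ci-kubernetes-e2e-gce": ["green", "green", "green", "failing"],
    "ci-kubernetes-serial": ["green", "flaky", "green"],
}
not_soaked = [name for name, runs in jobs.items() if not soaked(runs)]
print("jobs holding up the release:", not_soaked)  # ['ci-kubernetes-serial']
```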