From YouTube: Kubernetes SIG Release 20200922
A: Hello, hello, everyone. Today is September 22nd. This is the SIG Release bi-weekly meeting. This is a meeting that is recorded and will be available on the internet, so please be mindful of what you say and do. Please be sure to adhere to the Kubernetes code of conduct and, in general, just be awesome people.
A: So we've got two main agenda items today. For the first of which, I see a bunch of the Working Group LTS folks here, so welcome, good, good seeing you all. The first of which is going to be a discussion on the Working Group LTS turndown. So I guess, Tim, you want to take it away?
B: So out of that came a KEP, last year I guess it was, and then by the beginning of this year that was ratified, in terms of declaring that we're changing from a nine-month support term for a given release to a twelve-month support term. And those are both a little bit fuzzy, because we never quite did nine; it was nine plus a little, and we would continue to do twelve plus a little, because you want just a little bit of overlap on things.
B: Maybe you have some late-breaking changes, stragglers decide to upgrade and discover the upgrade path is broken. So that was our primary focus over the last while. But additionally, we had some other things that we had talked about doing, and not just talked about, did. Jordan's here; he drove, spearheaded, a lot of work around defining where we had gaps in supportability and working to rectify those. I think at this point that kind of relates to ongoing work out of SIG Architecture, SIG Release, and SIG Testing: the CI Signal project.
B: That's coming up: the way we handle enhancements and talk about graduation criteria across alpha, beta, stable, and even now the new, excuse me, upcoming Working Group Reliability. So there are similar forums that exist and feel like logical places for the other things that we've talked about, for the most part, to happen.
B: The main remaining thing that we had talked about, possibly within the working group, but initially we kind of split: we realized we had two conversations, what is the support term and what is the release cadence. If we're releasing something once a year, it probably makes sense to support it for at least a year, or you've run out of support before the next release comes, right? But what we've done for quite some time now is we've had four releases per year.
B: The nine months thing was just a pragmatic approach, in a lot of ways, to say that you have three releases that are under support, and you can sort of be stepping along through those and still have supportability within that. Many people operationalize things on the calendar year or the business year, so there was a desire for that, and we focused on seeing what it would take to get there. The conversation about what is a good release...
B: If you want to chime in any more on any of this, or that part especially: there just wasn't clear consensus, and it wasn't clear that we would drive an action that would close to some proposal on that. So that kind of leaves us in a place where, without some additional new push to accomplish that, we've done what we see us getting done, and it makes sense, probably, to go ahead and close down the working group in the way that they're intended: we've time-boxed, we focused on some things, we got them done.
C: Well, I guess from my point of view, as one of the other chairs: yeah, I think, to me, I thought that a lot of the value was in the conversations we had, not only about what LTS meant, but about all of the stuff that we ended up finding that we needed to fix.
C: As you said, Tim, Jordan's done heaps and heaps of amazing work on sorting most of that out. But yeah, I think that we did have a lot of discussion about both the release cadence and a whole bunch of other stuff, like install processes and being able to jump to modern versions and all this sort of stuff, and I think that what we ended up talking about has been pretty well disseminated across the community since we talked about it with the release folks.
C: Here, I think the distribution of opinion was definitely bimodal. There were definitely a few people who felt very strongly that, you know, we should be releasing like once a month, dev branches, with a stable release cut on some other time frame, like how the Linux ones come, or something like that. And there were other people who were like, yeah, we should totally have, you know, a three-year actual long-term support release. Yeah.
C: Personally, I think that that is 100% not viable until all of the stuff that is in the process of being pulled out of the core is out of the core, but anyway, that's a discussion for another time. I completely agree with Tim that, I mean, that would need to be a really big push, and if there's no one willing to really push hard and get a whole bunch of people to talk about that, then it's not going to happen.
C: It's very difficult to get all the people to talk and to get some consensus on that sort of thing, because people do feel really strongly about it.
D: Dependencies. So one of the things that we got out of the work in WG LTS was that we discussed, you know, the idea of the sort of, whatever it is, Debian/Ubuntu long-term-release type approach of having, like, one supported release every two years or whatever, and decided that it just was not viable given where the project was at, and that could change in the future.
E: Since we're furiously editing the agenda, adding notes: just to clarify, the previous support policy was always stated in terms of releases, and because we had a regular release cadence of three months, or four releases a year, that worked out to about nine months of support. But we always stated our support policy in terms of releases: the most recent three minor versions were supported. So, because 1.19 and 1.20 ended up being four-month releases (or 1.19 might have actually been slightly longer):
E: That effectively gives 1.18, 1.17, and 1.16 one-year support windows. So we didn't originally intend for that to happen, but because 1.19 and 1.20 slowed down, the release dates for those meant that the preceding three releases ended up getting a year of support.
E: If the release cadence picks back up to three months, four releases a year, then that means we will maintain an extra release. If it stays at four months, three releases a year, then our one-year policy and our three-release policy will continue to coexist. But framing the support in terms of time is way easier for people who are consuming those releases to plan around, instead of saying three releases, where I don't actually know how long that's going to be.
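(A minimal sketch of the support-window arithmetic being described here, assuming a fixed cadence; the function and numbers are illustrative, not an official policy calculator:)

```python
# Illustrative only: how a "most recent N minor releases" support policy
# translates into months of support under different release cadences.
def support_window_months(supported_releases: int, months_per_release: int) -> int:
    # A minor release falls out of support once N newer minors have shipped.
    return supported_releases * months_per_release

print(support_window_months(3, 3))  # 3-month cadence (4 releases/year) -> 9 months
print(support_window_months(3, 4))  # 4-month cadence (3 releases/year) -> 12 months
```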
A: So, for those interested, we are planning on having a discussion around release cadence overall. I think that this was one of the overall leftover things that LTS wanted to chat about, and I put out a very scientific Twitter poll, so at some point I'm going to write up some details on that and we'll have an official discussion, but I don't want to get into that today.
A: Right, scientific Twitter. I gave the people two options, which was three or four releases, given the elongation of 1.19.
A: The question became, well, are we doing three releases a year now? And we've gotten that question from, like, multiple forums at this point, so I was like, let's go to the internet. And a lot of people voted for three. Last I checked, 700 people had voted on this poll, this extremely scientific poll, but we got lots of good feedback there too, like: should it be two releases a year?
A: Should it be one release a year? So, lots of things to dig into, and I think that, you know, some of the conversations that, like, Tim, Jordan, and I have had about Go release cycles, right, and how we navigate that, are going to come into play big time.
A: I think that we're still going to be bounded by our external dependencies, and we need to carefully consider that before making any decisions, but I think for now we continue at pace and have the discussion with 1.20 moving.
A: Okay. Do we feel, given us as a stakeholder of Working Group LTS, and with the Working Group LTS leads on the call, are we comfortable saying that we're going to start turning down LTS?
A: We did it, awesome. So I think one more action item; our one remaining action item that we have is the Working Group LTS report to steering. Do we know the status on that? Are they happy for us to call it quits and let that be the status?
C: Yeah, it will be. I gotta admit, I'm not gonna miss waking up at 1:30 in the morning once every couple weeks.
A: Okay, so next up on the agenda: the 1.20 milestone restriction removal. So this has been... Jordan, do you want to set us up, actually, since you wrote the proposal?
E: Sure. So, because of the long freeze in 1.19 and the CI issues that we were working through, 1.20 accumulated a ton of PRs that were pending, approved, and LGTMed; I think it was like 120 or something. So we reopened master for merges for 1.20 in a controlled way, and it took about maybe a week and a half or two weeks to drain that backlog, and that was done with different label queries and different milestones and things like that.
E: And so the question is: should that restriction be left in place, or should we fall back to what we've done in previous releases, where for most of the release any pull request that has appropriate review and approval merges regardless of milestone, and then only at the end of the release, once we get close to code freeze, does the milestone restriction get added back in? So that's the issue on the table.
A: So I had been sending out, you know, every few days, updates around the status. Basically, the way that we drained the queue was: we focused on fixes for failing tests, moved into documentation (PRs tagged kind/documentation or kind/cleanup), then moved into low-risk bug fixes. So this was kind of like shuffling PRs between various phased milestones. And then, finally, Jordan had a fix for some unit testing stuff, right, the Bazel trigger for flakes, that merged.
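(For illustration, the label-and-milestone queries used for the drain might look something like the following, a hedged sketch using PyGithub; the exact queries, phase order, and labels are paraphrased from the discussion above, not quoted from the release team's tooling:)

```python
# Hypothetical sketch: count merge-ready kubernetes/kubernetes PRs per kind/*
# label, in roughly the phase order described above.
from github import Github  # pip install PyGithub

gh = Github("<token>")  # assumes a GitHub personal access token

phases = [
    "kind/failing-test",   # fixes for failing tests first
    "kind/documentation",  # then docs...
    "kind/cleanup",        # ...and cleanup
    "kind/bug",            # then bug fixes
    "kind/feature",        # finally features,
    "kind/deprecation",    # deprecations,
    "kind/api-change",     # and API changes
]

for kind in phases:
    query = (f"repo:kubernetes/kubernetes is:pr is:open "
             f"label:lgtm label:approved label:{kind} milestone:v1.20")
    print(kind, gh.search_issues(query).totalCount)
```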
A: After the low-risk bug fixes, we took care of the rest of the bug fixes, and then finally tossed the kind/feature and kind/deprecation PRs, and there might have been one or two kind/api-change PRs, into the queue. Since we did that, all of the PRs that were targeted and merge-ready at the time that we set the restrictions have merged.
A: So, like Jordan said, the question is: when do we remove this? Previously, I was in favor of maintaining this throughout the milestone. What ends up happening is that it increased burden for every SIG and, you know, kind of increased contributor friction for anyone who is pushing a PR to kubernetes/kubernetes.
A: I don't have strong opinions at this point. I think there's some value in bolstering our reviewers and approvers, as well as the milestone maintainers, and treating this as something that needs to be actively triaged throughout SIGs, but I'm not sure that the increased friction is worth it. And hearing no succinct reasons about why either way is good or bad, I opened a PR to remove the milestone restriction.
E: So it's less obvious that that factors into whether features or bugs get merged, and so, if we leave the restriction in place, I anticipate that pushing more people into the milestone maintainers list, and if we already are unsure...
E: So my feeling is: we've strengthened CI quality and strictness; we're in better shape now than we have been in previous releases.
A: Reasonable, yeah. I think a question that arose from this, to your point: I went on, I wouldn't call it a rant, but I had one during, I think, the last edition of the SIG leads and chairs meeting. This points to a triage problem, right? I think that some of the work that the release team does... every now and again I mention it:
A: I think that eventually the release team goes away, right? In some long, long future, the release team goes away, because we've automated enough of these processes and we've strengthened...
A: You know, we've strengthened the various components that we care about throughout the release, to the point where we don't need cat herders for individual areas. And I think that by having functions like CI Signal and Bug Triage, we actually mask some of the triage that should be happening by other SIGs.
A: So, on an increase of the milestone maintainers group, you know, one of the questions that could be asked is: are we sure that milestone maintainers are doing what they're supposed to currently, and do they know that there is some definition of what a milestone maintainer is responsible for today? If not, what's the point of the group, right? So, discuss, discuss.
G: Save him, yeah. Can you hear? And so, my... oh, this is a real noob reaction, but: what's a milestone maintainer?
A: So the milestone maintainers group is a GitHub team, maintained by SIG Release, that gives you access to use the /milestone command. Initially, a while ago (I think it's a decent while ago), the milestone maintainers group also had write access over kubernetes/kubernetes.
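(For context, /milestone is a Prow command issued as a comment on a PR or issue; for example, a member of the team can set the v1.20 milestone with:)

```
/milestone v1.20
```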
A: The current milestone maintainers group not only gives you access to using the /milestone commands, it also gives you write access to kubernetes/enhancements. So people maintaining features, enhancement tracking issues within kubernetes/enhancements, would be able, if they're part of that group, to actually edit the descriptions and keep them up to date.
A: So the GitHub team itself, overall: because there's kind of shared ownership across kubernetes/kubernetes (and, you know, some functions in kubernetes/kubernetes as well as kubernetes/enhancements), SIG Release maintains the group, but the individual SIGs designate milestone maintainers for their components. So there is a combination: if you look at the milestone maintainers group in k/org, it's in k/org under config/kubernetes/sig-release/teams.yaml, and within there, yeah, you'll be able to see the milestone maintainers group, and within that team:
A: There are comments for each of the SIGs. So it'll be a username and then a comment to say that they are a milestone maintainer for a specific SIG.
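(A hedged sketch of what such an entry in that peribolos-managed teams.yaml can look like; the structure follows the kubernetes/org config format, but the usernames and comments here are made up, not the real file contents:)

```yaml
teams:
  milestone-maintainers:
    description: Contributors allowed to use the /milestone command
    privacy: closed
    members:
      - alice   # SIG Node
      - bob     # SIG Network
```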
G: Yeah, yeah. Today I learned that I can filter dashboards based on SIG, and yeah, I shared a link with a SIG of the CI Signal workflow dashboard filtered on their SIG, and I tell you what, the reaction was interesting.
A: So it was, I believe, initially in k/community and some kind of disused docs; right now it's in k/sig-release, under release-team, and there's a section for milestone maintainers, which (I see some giggles) is not exactly discoverable.
D: Yes, if you wear the leopard, yes. Because, I mean, the thing is that if you're talking about milestone maintainers having some sort of a new role, then, you know, that would require an education campaign, I mean.
D: What they're used to, their concept of the milestone maintainer, is: this is the person who gatekeeps the milestone during code freeze. And if, you know, we wanted to change that definition, it would require promulgating a change, including getting feedback from SIG leads from all over. So, one thing, hold on, one thing I wanted to bring in here that I think is relevant:
D: I've been messing around with DevStats, with Lori et cetera, and one of the things I think is actually relevant any time we're talking about gatekeeping is, you know, the number of PRs we're merging is actually...
H: Yeah, I'm just going to share a link to some data I've been collecting across SIGs. Jordan's helped me a little bit here, and some of the other folks as well, but it's just an inventory of which SIGs are doing what types of processes that would lead to higher PR velocity, so I'll drop that in now.
A: So yeah, this kind of hops back into the point of: I think that some of the work that we do masks some problems, and one of the problems is triage. I just decided to do a git blame, and this is not new information; I committed this two years ago, for milestone maintainers.
E: And so having a process that, when we were in code freeze, really shrank the number of people who were adding pull requests to merge, I think, made sense.
E: Philosophically, that's what the Reliability working group is trying to achieve. So right now, if a post-submit test starts failing, the only thing that gates is cutting a release. And so, if the component or team responsible for the introduction of that failure or that flake is not aware of it, or is not prioritizing fixing it, they can go months during the dev cycle with no real impact, and it just piles up things that block the release at the end of the cycle.
G: So, and I'm not sure if I'm speaking to your point, but when I asked the question at the start, what I wanted to get at was that, in the interaction where I displayed a vast set of flaky tests that are mainly in an observing state on the CI Signal board, it would have been a little bit of a shock to that SIG member. And I got that little bit of shock expressed in the reply, which was courteous and useful.
G: What I wanna say about this is that, in terms of splitting up the work amongst CI Signal shadows: what we did in the last cycle was we said, okay, right, let's divvy up the jobs amongst the team. But I'd like to suggest that, instead, the CI Signal team be a supportive part of the job of triage, of managing the sometimes vast number of jobs that are flaking for a specific underlying reason, and that we almost act as PM-slash-drivers to assist SIGs; that we work in a collaborative way, where I take one of my shadows and say, right, you're looking after SIG Scalability, these are your flaky tests, help them update these tests, interact with them. Because, on the one hand, they're working hard to do the work to fix underlying issues, and on the other hand I'm going, well, you're just going to say to me: I have a load of things.
G: I have a lot of reporting to do back to the release team, so I could turn around and say to them, update your cases on this board, or I could say, okay, right, let's take a shadow and let's have them work through this and be, almost, a mini program manager for working through, say, a raft of flaking tests. Because you want people doing the work, but you kind of need to see the work happening in order to report it back and up.
G: But there's a part of me that wants to say, well, update your tests here and update the reports here, and I think the pushback will be, well, we're working hard and it's just waiting, let's sort it out, but we're working our own issues, you know.
G: Is that, is that what you're...? I just kind of feel, and these are feelings now: I feel like when I show up and go, hi, I'm Rob, I'm from CI Signal, it's kind of like, who?, you know. And then, on the pushback that I sometimes get, it's just an interesting piece of pushback. I just want to change the conversation so that I'm arriving at it in a supporting role rather than a combative one, you know?
E: Yeah, so this is definitely something we've been engaging SIG leads on in the leads and chairs calls: talking about these tools and these dashboards and these queries, and communicating that triage of these things, visibility into these things, and addressing these things needs to be part of your regular SIG processes.
G: Test failures, yeah. Like, I mean, it could be that it might be useful for me to sit in some of those meetings and just do a little bit of advertising and promotion of the work that we do and how we can help, and pitch it that way, I think.
H: Yeah, so when I was doing the data collection for the spreadsheets that you see, I had some conversations about this, and it's quite clear that some SIGs definitely need to figure out their onboarding process. Like, they know it, but they need to onboard new contributors, and CI Signal would be one way to do that without expecting the new contributor to have all of the experience of somebody who's been in this project for five or six years.
G: There's a lot of this: if the work was super simple, it would just get done, and I think the challenge is, the work is not super simple in terms of the effort required in order to deflake a test. And one of the things that you need, going in, to deflake a test is deep knowledge of the functionality around which the test is operating; so that's one chunk of knowledge that you need.
G: But then, in addition to that, and this is where I see an external team like CI Signal eventually helping: you also need to have a lot of knowledge of how CI works on Kubernetes. And really the challenging part is not so much looking at Prow job configurations or anything like that, or job configurations, or even test configuration. It's more, well:
G: What happened when the test ran? And I think that's probably where the challenge lies, and where the most distasteful part of deflaking a test is. Now, again, education is the answer; education and training is the answer, I think, there. But you have to arrive, I really feel you have to arrive, with deep knowledge of the thing being tested.
A: Yeah, and on that, I think that it's almost unfair for the CI Signal team, at times, to have to work through and report on those issues. You know, some of what we've noticed over the cycles is that a lot of this becomes reporting, you know, reporting and reactive work, as opposed to proactive work, because we do not have the context that someone who works day to day in a SIG will.
G: Yeah, that's why, you know, that's why I'm going full tilt on automating the reporting, because that's the first thing I want to get rid of to do this work; I want that done. So yeah, I need help on that. There's an issue I've launched, and I want to get that done, but we can discuss that another time. But yeah, I've put out a call for help.
G: I would see it as, potentially, ideally, in a way: there's a part of me that thinks, well, this should be the Montessori school for K8s contributors, because you get in here, you find out what you're... actually, a Montessori is a good description of it, because you play in this CI playground and you get to interact with all of the SIGs and all the different parts of Kubernetes, and you kind of go: oh, do you like dinosaurs?
G: Well, SIG Dinosaur, off you go, you know. So it is the place to play, and that's what I'm saying to shadows: in terms of people, in terms of technology, you know, in terms of what you want to learn, this is a good place to figure it out. But I agree with Stephen that there's so much work to be done in the space; I know I'm certainly getting lost in this, and I'm happy to stay in it for quite some time.
G: I see it almost as an 18-to-36-month journey to get this, you know, going really sweetly.
A: Love the Montessori school of Kubernetes. That's really how I see the release team, right? We have hands in multiple SIGs and component areas, and we often see people go off and lead those areas afterwards. So it is kind of a nice ingress point for the community overall, but, very much so, it is getting thrown into the fire sometimes, right? Figuring out documentation, figuring out processes, working through, you know, working through areas where people might be overburdened.
D: One of the issues that people keep sort of touching on but not expressing here, one of the reasons why, you know, a wide variety of SIGs don't pay attention to e2e and other post-submit test failures, is because there's a delay, and there's often no direct connection between the code they merged and the test failure.
D: You know, the fact that you say, okay, well, the SIG should have somebody on active duty for figuring out, you know, what broke, etc. Having been CI Signal for the release team: figuring out which merge broke the upgrade e2e test is a really complicated process, and I can completely understand somebody who's stacked with work putting it off.
F: I'll throw in real quick, I'm Eddie, by the way, CI Signal shadow. I've worked mostly on SIG CLI up until this point, but last cycle Dan popped into the SIG CLI channel and was like, hey, I need someone to take a look at this flaky test, it's failing. And I saw it, I'm like, okay, and then, like, six hours later, he's like, hey:
F: I still need someone to take a look at this. So I volunteered to do it, and popped in, and was greeted with my first look at TestGrid, and was very confused, right? So, as someone new who wanted to jump in and contribute there, that tool is very intimidating. That is not an intuitive tool at all for someone who's not familiar with it.
F: So basically, what I'm saying is: does the user experience need to be improved there for developers? Like, how many people are opening that page, bouncing, and not contributing, because they just get lost in what they're looking at, you know? Is that something, and obviously the SIG is super tight on resources, but is that something that the CNCF can provide funding for, right, if we wanted to have someone build out a proper tool or something?
A: So I think, you know, part of that goes back to education, right? And, you know, TestGrid is an open source tool now. It's a question of: can you get the information that you need from, maybe, the TestGrid README page, and does the Kubernetes documentation, say in test-infra, correlate to how you would do triage at this point in time?
A: Can we easily answer that question, right? And, you know, the new and shiny is sometimes cool too, especially as it relates to improving developer experience. But at the same time, we're already talking about an area that may be underfunded in terms of overall effort, right, and considering who rolls that tool out, who necessarily develops that tool.
A: So I think, you know: education on the tools that we have today, and deciding to see if we can identify gaps in those tools. Like, if there truly are gaps in the tool, are there gaps in documentation, or gaps in training?
A: I think that we have the ability, not necessarily easily, but, you know, we have the ability to train, and we have the ability to document, and we have a wealth of people who have that knowledge, right? Given the time, right; the inclination is, of course, always there, we're an open community, happy to help as often as we're able, but sometimes it's time, right?
A: So I think that, you know, we often target the things that... I know, you know, when I'm dealing with things on my plate, often it's like, where's the next fire, right? And you're jumping for the fire first. If the release process breaks down, you know, Tim or I will drop what we're doing, or the release managers will drop what we're doing, and try to sort that out.
A: First, so often, you know, people are running from fire to fire; and this goes for all approvers, who are also, you know, test reviewers and deflakers and reviewers, approvers, KEP writers, program managers for their SIGs, kind of the omnibus role. Yeah, sometimes you're jumping from fire to fire, right? So I think, yeah, I'll shut up. Let's go.
H: Tim, just, I'll just do a quick: I'm wondering if we think that improving the UX of TestGrid is something that we want to look into and educate on. So I guess what you're saying, what I was asking about, was: we're making some documentation and other efforts to improve the materials to use TestGrid and do CI Signal. So maybe someone like Eddie, who's...
A: I have been swayed. I think that, you know, it is not necessarily fair to people who may not have the information that they need to act accordingly; prior to this re-education, I don't think we should move from the status quo until we figure that out.
E: Yeah, I do think that in the 1.20 time frame we want to continue pushing on SIGs to pay attention to their CI signal, and continue making that easier for them with queries and dashboards and things. And I would like to see progress on the Reliability working group, which SIG Release is one of the driving things on, to automate some of that SIG-scoped signal feedback.
G: I'll just quickly add to that: if, when you interact with TestGrid, you see something and you think you have a good suggestion, if you make that suggestion, it will be acted on, if it's reasonable and easily implementable. So I've given feedback to TestGrid developers before, and, you know, I've always been grateful and happy with that.
A: But TestGrid is not maintained by us, so we should make sure that we funnel that feedback towards SIG Testing and the maintainers of...
A: All right, so I see that we have five minutes left. It sounds like I'm releasing the hold on this PR as the meeting ends. And we have a few more topics; I don't know if we want to try to grab one that is quick. Lori, you've got a few?
H: Minor, real quick; they're just really providing info to this group, so they're aware of things going on. One thing is that we're clearing out a lot of old items by a combination of walking the project board in the release team meetings and also doing a threading effort in Slack. So one item at a time goes into Slack: hey, what's going on here? People talk about it, then we have an action, either to close or move forward.
H: So that's going pretty well, and that includes retro items from docs, so retro items that didn't even see the light of GitHub-issue day, and GitHub issues from past retros, plus a variety of things. So thanks to everyone for contributing, and everyone's welcome to provide their feedback; they go in either the SIG Release channel or the release engineering / release management channel for now. Oh, a few enhancements too. And then the next item is release engineering.
H: Release engineering has a ways-of-working agreement in progress. This is just, basically, you can click on it and look at your leisure, but basically: how we like to run meetings, discuss decisions that we need to make, the mechanics of how release engineering can operate, so everybody is aware of how the efforts, how the ceremonies and other actions work, and has a say in influencing the direction. So it's a work in progress, and again, if you're in release engineering and active there, please take a look and add your feedback.
A: Lori, let's scooch back for the triage party stuff.
H: The last item in question is walking the project board. So, is it still a goal for this meeting?
H: We have not been successful in doing much project board walking, so maybe we just thread it for now, but I just want to keep it on everybody's radar screen that walking that board in a meeting is helpful for delegation discussion, pairing up people who are experienced with newcomers; there's all sorts of good reasons to walk the board in a meeting setting. So.
A: I'm also going to link the SIG Release Triage Party one in here. We've recently updated some of the configuration, the overall version, and, hold on, dude, and Arno added some new queries for bug triage, so check that out. Yeah, we'll try to go into that in more detail for the next meeting. Yes.