From YouTube: Kubernetes SIG Release - 2019-09-04
A: All right, so I think we've got critical mass, and, let's see, Josh is here. Okay, so hello everyone, welcome to the September 4th special edition of the SIG Release meeting. This is a meeting that is recorded and available on the Internet, so please be mindful of what you say. Please be sure to adhere to the Kubernetes code of conduct and, in general, just be nice to everyone. All right, so Josh is going to kick it off with a discussion around blocking and informing jobs, the criteria, and general discussion. So Josh, you've got the floor.
C: The criteria that we set around blocking - that a job must not be flaky - is not a criterion that you can actually check by looking at any of our various boards or UIs, so we need to come up with a different flakiness criterion that is supported by our infrastructure, so that you can pretty much instantly see which jobs are not passing the flakiness criterion and therefore ought to be removed from blocking, or fixed, depending.
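(A hedged aside, not from the meeting: one way to make a flakiness criterion instantly checkable is to compute a failure rate over a job's recent runs and compare it to a threshold. The sketch below assumes the run history is already available, say scraped from testgrid, and the 10% bar is a made-up number, not a criterion SIG Release adopted.)

```go
// Minimal sketch: treat the failure rate over a recent window as a proxy for
// flakiness and check it against a hypothetical threshold.
package main

import "fmt"

// failureRate returns the fraction of runs in the window that did not pass.
func failureRate(runsPassed []bool) float64 {
	if len(runsPassed) == 0 {
		return 0
	}
	failed := 0
	for _, ok := range runsPassed {
		if !ok {
			failed++
		}
	}
	return float64(failed) / float64(len(runsPassed))
}

func main() {
	// Hypothetical pass/fail history for one job, most recent runs last.
	history := []bool{true, true, false, true, true, true, true, true, true, false}
	const threshold = 0.10 // illustrative "must not be flaky" bar, not an agreed value
	rate := failureRate(history)
	fmt.Printf("failure rate %.0f%%, passes the criterion: %v\n", rate*100, rate <= threshold)
}
```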
C: The second issue is: we don't currently have any kind of real process, either written or even traditional, for how new jobs get added to blocking or informing, or what the criteria there are. Given that this has come up in the 1.16 release with the new Windows jobs, it's actually kind of important for us to figure that out: do you file a PR? If so, where do you file the PR? Does it have to come up in a meeting?
C: George actually had a proposal that would be substantially different from what we've done ad hoc in the past. There are a lot of reasons why it's appealing, but it would require working with the SIGs. The idea is that each SIG would actually have a set of blocking jobs on their own boards, and the release team would then cherry-pick things onto the sig-release boards based on them already being marked as critical on the individual SIG boards.
C
C
C
So-So,
you
know
if
you
haven't
already,
please
take
a
look
over
his
proposal.
I
think
there's
a
lot
to
like
about
it,
particularly
because
one
of
the
big
problems
that
the
CI
signal
team
has
had
is
getting
responsiveness
from
the
cigs
and
if
they
effectively
retain
ownership
of
their
own
blocking
jobs,
maybe
there
would
be
a
little
bit
more
responsive
when
those
start
failing
without
needing
to
be
pestered
by
the
CI
signal
team
in
order
to
pay
attention
so.
A: I think, you know, there are maybe a few things that I may not articulate well, yeah.
A: ...that the release team is responsible for the maintenance of that job, which I don't think is true, right. So one thing I want to make sure we do is minimize the burden for the CI signal team; I think that's part of the goal of this. The other is making sure that the SIG is actually responsible for the jobs that they should be responsible for.
A
So
one
of
the
thoughts
that
I
had
was
to
bring
if
we,
if
we
kind
of
restructured
the
way
that
we
had
blocking
and
forming
jobs
within
our
test
grid,
config
right
in
forming
jobs,
can
you
can
arbitrarily
put
an
in
forming
what
I
was
thinking?
Is
you
can
arbitrarily
put
something
as
long
as
it's
discussed,
with
sig
release
into
master
informant
right,
assuming
it
meets
some
set
of
criteria
that
we
have
partially
defined
the
into
moving
into
blocking
I
think
it
should
be
stricter.
I
think
it
also
should
be
under
sig
release
control
right.
A: So, file an issue against kubernetes/sig-release for a blocking job, and that lets you see - the same way we do for, like, KEPs or issues - whether you meet those criteria before submitting the issue, or at least gives you a set of checkboxes to go through when you submit that issue. That way, one, it's visible to people who are looking at kubernetes/sig-release, and it can be visible to the release team.
A: We can at least have the discussion and have an audit log of that in one place, and then from there SIG Release, or that SIG, can go about forklifting it into a set of, say, test-infra config jobs - kubernetes-release-blocking or some such. That way it's under our OWNERS file and we can control whether or not the job gets in. One of the big problems that I have right now -
A: We also need some enforcement on the test-infra side so a job can't arbitrarily get onto blocking just by a SIG popping the annotation on, right. So a big problem that I have right now is the annotations that are configured - it's an overall improvement on the way we configure the jobs, but it also means that by applying an annotation to the Kubernetes job config, we then have no visibility into the way that jobs land on our boards.
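(A hedged illustration of the mechanism being described, not from the meeting: in the prow job configs, a testgrid-dashboards annotation is what places a job on a dashboard, so one way to regain visibility is a small audit over the parsed configs. Job names, dashboard names, and the exact annotation handling below are assumptions for illustration.)

```go
// Minimal sketch: list which jobs would land on a sig-release-* board purely
// because of a testgrid-dashboards annotation in their config.
package main

import (
	"fmt"
	"strings"
)

// jobAnnotations mimics the annotations block of a prow job config.
type jobAnnotations map[string]string

// dashboardsFor splits the comma-separated testgrid-dashboards annotation.
func dashboardsFor(ann jobAnnotations) []string {
	raw, ok := ann["testgrid-dashboards"]
	if !ok {
		return nil
	}
	var boards []string
	for _, b := range strings.Split(raw, ",") {
		boards = append(boards, strings.TrimSpace(b))
	}
	return boards
}

func main() {
	// Made-up jobs standing in for parsed config.
	jobs := map[string]jobAnnotations{
		"ci-example-e2e":  {"testgrid-dashboards": "sig-release-master-informing, sig-example"},
		"ci-example-unit": {"testgrid-dashboards": "sig-example"},
	}
	for name, ann := range jobs {
		for _, board := range dashboardsFor(ann) {
			if strings.HasPrefix(board, "sig-release-") {
				fmt.Printf("%s lands on %s\n", name, board)
			}
		}
	}
}
```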
A: I would go a step even further and say that this is something between the CI signal team and the SIG overall, because these jobs survive the release boundary, right. So, like the SIG CLI thing: I think I saw it at some point and then forgot about it, then saw the PR pop up again and I was like, oh wait, why is that? So I think having some continuity around that, and so, like -
A: Like, I don't hate it. Part of the reason for that is, I think, that CI signal does quite a bit of work already in terms of reading the tea leaves on testgrid, and they have to do so within our boards. If we're adding SIG blocking boards, what are we saying, right? Does that mean that the CI signal team, as well as the SIG, are responsible for watching all of those boards now? I think -
E: I just said blocking and informing to give it a name, just to name it something, okay. But the overall idea - and something that I've seen a couple of SIGs do - is, you know, they have jobs, for example the CLI job that you were mentioning, and those jobs have important information; they are essentially what we have in the release-blocking jobs. They are jobs that they monitor and that they really do fix.
E: So the bigger thing, the other thing I wanted to communicate in the issue, is that we just want SIGs to tell us: these are the jobs that we absolutely care about, that we are absolutely maintaining - if something breaks, we will fix it as soon as possible - and these are other jobs that are a work in progress that we're still figuring out.
E: Or it's something that is not really that important to the rest of the Kubernetes community. That way we can get a back-and-forth between the release team, CI signal, and the owning SIGs of: okay, we have these boards in this - again, I don't know what to call it, but let's say sig-cli-blocking.
E: We have these jobs here, and we would like to propose them to SIG Release - to master-informing or master-blocking, that type of thing. But the big idea is that I think it would be really useful if we just had some signal of: these are the jobs that we absolutely care about, and that the rest of the community should care about, and these other jobs are a work in progress - they need fixing and we don't have enough people power, etc.
A: So my thought process is that we already do have that signal, and the signal is the master-informing and master-blocking boards, as well as the release-blocking and release-informing boards, right. If jobs are not on those boards, they're not important.
A: - blocking and release-informing - so that's, yeah, that's sugar on top of the testgrid configs, right. So what I'm saying is, what SIG Release should care about is whether or not it's on our boards. If it's not on our boards, then we shouldn't care. If a SIG has not proposed it for our boards, then maybe that's the problem.
A: Maybe that's a lack of communication, right. But adding a job to two separate boards is a matter of switching the annotation, so from the SIG side they can design it however they want to; from our side, what we're saying is we need it to be on our board for us to have visibility into it. Yeah.
E: Definitely, but my big idea is just for the release team and for the SIGs to have the same perspective on how to look at their jobs. In terms of adding the job, yes, it's just an annotation; it's not really a big change that I'm proposing. The idea is rather that we want this to work the same way that the release team looks at the dashboards.
A: I don't want to suggest process for other SIGs, right. I think whatever we do should remove burden from us while not imposing on another SIG. I don't think it's unreasonable to ask, if it's something that you want us to care about, that you put it in our place; outside of that, what I'm worried about is having too much of a heavy hand toward other SIGs.
F: I was not paying attention till about now, but the point you made about reducing your burden without creating unnecessary additional burden is so fundamental to everything we've done. Like, we should get a t-shirt that says that, because I think that's the process goal for most things - automation over process, yeah.
E: A really huge plus one for that, and I guess the motivation for why I was proposing that workflow is that I don't think we are essentially telling SIGs what to do. SIGs already have their dashboards; we're just really asking them to work with us.
E: Don't just let things run stale in testgrid. If you actually go look through it, a lot of the dashboards are completely broken, and, you know, a lot of SIGs don't really look at their own dashboards; they just rely on everything that's on sig-release and don't really have a "let's work together, communicate back and forth" attitude.
A: So that is leaning into a different issue, right, which is: if you have a board, you should be looking at your board and maintaining your jobs. What we want to prevent is stale, weird, unmaintained jobs from landing on our boards, so I think that's a different issue from whether or not a SIG maintains their own jobs, but -
E: I'm just saying, in the case where we already have a job that shows up in sig-release and, say, something changes - if they're looking at the board, then issues sort of surface faster, we have a lot more people across the org working on fixes, and the change is just going to propagate back to us, and we get a good -
A: CI signal - yes, you are supposed to make sure that, basically, we're good to release based on the tea leaves that you read from Velodrome and testgrid. However, a lot of those situations have been reactive: you've been reacting to jobs that have not been configured to alert SIGs, and that's not fair to your team. If a SIG is responsible for their job, they should be responsible for their job.
E: And we should maybe add that to the criteria. By the way, just an FYI on the sig-cli job that you mentioned: the reason it has only the release team as an alert - that is a mistake that we will fix. Yeah, that's just a mistake; I'll fix it.
A: Yeah, I know, I had it on my list to come back and add that alert, but I didn't want to block getting signal on our board by not approving that PR, so yeah, very much the same thing essentially from our end. Okay, so let's round the wagons. Josh, let's get back to what you were saying.
C: Because that's going to make it pretty straightforward and will also give the CI signal team actionable stuff, right: when a test job that, say, wasn't on the flaky jobs board shows up on the flaky jobs board, they just send a link to the owning SIG - I mean, or automation does, and the only thing left is to read the messages from the automation. Okay, so - mm-hmm.
C: And then the remaining issue is more of a release team documentation issue; I don't think we actually need a lot of discussion there. One of the things that we put in there is the difference between blocking and informing, which is that with blocking, at least theoretically - at least someday, when we attain our aspirational blocking board -
C: if anything is red in blocking, we don't release. That's not the case with informing: with informing, CI signal and the release team can make a decision to ignore the failure, and Aaron, I think, pointed out that we need to have some criteria in there by which they make that decision, and some process.
A: So let me provide some context here. The next topic is: what would need to be true for us to remove the blockade on k/release? I created a blockade on k/release which covers basically anything that is potentially release-tooling stuff, which is most of the stuff in the repo, honestly: anything that touches a release tool like anago, branch fast-forward, or gcbmgr; anything in the GCB directory for stage or release; anything in the way we configure our images; or the build directory.
A: Really, a good portion of k/release gets blockaded, right. So, for people who aren't familiar with what the blockade is: it's a prow plugin that allows you to configure a regex to essentially block PRs from merging if they touch a directory or file that matches that regex. This came up because we did some stuff - we did quite a bit of stuff.
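(A hedged sketch of the mechanism just described, not the actual blockade plugin source: a PR is held if any changed file matches one of the configured path regexes. The patterns below only loosely resemble the k/release blockade described above.)

```go
// Minimal sketch: decide whether a PR's changed files hit a blocked path.
package main

import (
	"fmt"
	"regexp"
)

func blocked(changedFiles []string, blockPatterns []*regexp.Regexp) bool {
	for _, f := range changedFiles {
		for _, re := range blockPatterns {
			if re.MatchString(f) {
				return true
			}
		}
	}
	return false
}

func main() {
	// Illustrative patterns, not the real blockade config.
	patterns := []*regexp.Regexp{
		regexp.MustCompile(`^anago`),
		regexp.MustCompile(`^gcb/`),
		regexp.MustCompile(`^push-build\.sh$`),
	}
	pr := []string{"push-build.sh", "docs/README.md"}
	fmt.Println("blocked:", blocked(pr, patterns)) // blocked: true
}
```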
A: We were working through some shellcheck updates, trying to clean up the repo, and this led to several bad things happening, which basically took down Kubernetes for a bit. I think one of the major things that we touched was the push-build script, which is in the top-level directory, and if push-build doesn't work, it can't push. Essentially, push-build is a script that kubernetes/kubernetes calls. There are a couple of periodic jobs that run it -
A: Maybe three; now it's down to two, right: ci-kubernetes-build and ci-kubernetes-build-fast. Those two jobs basically push staging versions of Kubernetes up to GCS buckets, and when those jobs run, they also push version marker files.
A: Those files are used in so many places across test-infra that when that job breaks, or if those files don't get pushed correctly, or if something was wrong with that job and the version marker file got pushed anyway, it will take down all of those jobs. So that's essentially what happened, around the 1.12.10 time frame, I think, right as 1.12 was going out of support.
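(Another hedged illustration rather than the real test-infra code: downstream jobs read a small version marker object out of a GCS bucket and test whatever build it names, which is why a stale or wrong marker fans out into many failures. The bucket and object names below are placeholders.)

```go
// Minimal sketch: fetch a version marker file over HTTPS and print the build
// a downstream job would test against.
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func latestCIVersion(bucket, object string) (string, error) {
	url := fmt.Sprintf("https://storage.googleapis.com/%s/%s", bucket, object)
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("fetch %s: %s", url, resp.Status)
	}
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(body)), nil
}

func main() {
	// Placeholder names; a wrong or stale marker here is what breaks the
	// downstream jobs described above.
	version, err := latestCIVersion("example-release-bucket", "ci/latest.txt")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("would test against:", version)
}
```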
A: Bad things, good things - stuff that we discovered, including, if you ever wanted to know how these jobs are built or run, there's an explanation here which probably should turn into test-infra documentation. So one of the big things I wanted to do in that state was prevent that from happening again while we were trying to understand the way this stuff worked. That's what the blockade was for, right. The blockade is basically saying only people with - this is a blockade that cannot be overridden.
A
So
if
you're
not
familiar
with
the
overrides
override,
is
a
plug-in
in
prowl
as
well.
That
allows
you
to
to
essentially
cancel
required
contexts
for
for
for
PRS
right,
so
that's
say
a
job
is
failing
and
you
don't
care
if
the
job
is
failing,
if
you're
a
repo
admin
and
override
is
configured
on
your
repo,
you
can
slash
override
the
name
of
the
test.
Don't
do
this
if
you
like,
don't
in
general,
don't
do
this,
so
this
is
something
that
cannot
be
controlled
via
override.
A: It is something that can only be controlled if you're a repo admin, essentially, and you can change labels on the repo: by removing that do-not-merge/blocked-paths label, you essentially pull the blockade and you'll be allowed to merge PRs. So essentially, what I was saying is: I want SIG chairs to review all of the PRs going into anything that could potentially break Kubernetes. That blockade is still on, and Hannes is asking what it takes for us to remove that blockade, right?
A: The reason for this was that we wanted to lock in a known good state of the kubernetes/release repo. That way, if we were trying out changes - essentially, if we can't make safe changes to this repo, if we can't iterate over this repo, we're not improving - and this is a repo that has classically not been touched, so I want to move to a place where we are touching it, and I think we are in that place.
A: If we have those jobs targeted to a tag of kubernetes/release, that means that if we find a good tag, we can configure jobs to use that tag, and then we can move forward on master. I think we have different ideas now, in that we would instead partially tie the life cycle of kubernetes/release to kubernetes/kubernetes - so, creating release branches for kubernetes/release and wiring them into the scripts that we already have.
A
So
when
you
do
something
like
configure,
do
you
config,
config,
Forker
stuff,
within
test
infra
or
or
do
a
branch
fast
forward,
say,
for
example,
maybe
it
does
it
for
both
repos
right?
So
that
way
we
stay
in
sync
across
the
release
cycle
and
we
cut
releases
on
we
cut
releases
for,
but
you
know
this
is
a.
This
is
a
larger
discussion
and
it's
not
something
I
want
to
try
to
wedge
into
this
meeting,
because
I
have
ideas
and
they're,
not
all
written
down.
A: I want everyone to have appropriate context before we move into that. So maybe for the Monday SIG Release meeting we can chat about what to do with tagging and branching; I think that would be a good plan. There are also interesting conversations on Twitter that I would like to bring forward regarding stability releases - that could be an interesting conversation; I think it comes up
A: every few - well, it basically comes up every year. So, short version: I think that we are in a much safer place and we can remove that blockade, as I think we understand the repo a lot better than we did at the time we did what we did, and understand all the intermingled pieces. So I will do that: I will remove the blockade and close that out, yep.
A: Do you want to -
G: Yeah, sorry, I joined a little late; I was on a working group call and lost track of time. So, summary, briefly: about a year ago-ish there started being discussion of a possible working group to look at our overall support stance on the project - the lifecycle, release cadence, all of those sorts of things. The working group was time-limited; we were aiming to try to get together a set of proposals by this fall, and that's kind of happening. There's a draft KEP underway.
G: The main thing that's in discussion right now is to extend the support life cycle by one release. Currently we support three releases going backwards, or about nine months. Specifically, what that would mean right now is that 1.13 is 12 days away from going out of support. Just to do a quick screen share, because I think data is useful on this:
G: This is a copy of the CNCF spreadsheet, because I wanted to chop out some stuff, focus on a bit of the information, and colorize it a bit. The CNCF keeps a list of who submitted conformance results on which versions, and I'm just looking at our supported versions in hosted platforms - so somebody that's taking our Kubernetes releases and running them as a service on behalf of others.
G: The first is just the processing of patches and getting the cherry-picks into the branches, and monitoring the CI to make sure we didn't destabilize things. At this point, the vast majority of cherry-picks going onto release branches go onto all branches, and I believe that if we were doing this right now there wouldn't be much of a change: right now we look at a cherry-pick from a meta perspective and it lands on the three branches, and the work to review it and do that for four versus three isn't much. So -
G
That's
the
the
processing,
the
incoming
stuff
onto
the
branch,
the
output
of
the
patch
release
team
is
another
potential
cost.
Building
the
the
building
perspective
in
terms
of
the
binaries
I.
Don't
see
this
being
much
cost
again,
it's
fairly
straightforward.
We
kick
stem
stuff
off
and
GCB
in
the
cloud.
It
happens.
The
bigger
concern
that
I
have
is
around
packaging
right
now,
since
we
depend
on
a
Googler
for
that
portion.
I
know
we're
we're
working
to
get
away
from
that.
That
I
think
could
be
a
bit
more
of
a
bottleneck.
A: We need to make packaging a focus for 1.17, right, and I think it's been said, and it's been in our heads, and we've had DMs about it, but it needs to be a focus for 1.17. In terms of what we have in place right now with GCB projects: we now have a staging test project and a release test-prod project for GCB, so we have the playgrounds that we need to vet this stuff. We need to focus on repo design for both apt and yum.
A
We
need
to
focus
on
a
tool
that
allows
us
to
easily
do
that.
I
recently
added
some
updates
to
to
the
docker
files
that
do
the
RPM
and
deb
builds.
So
now
you
can-
and
they
deb
builder,
since
it
was
already
a
go
tool-
is
now
a
module.
So
you
could
do
a
go,
get
deb
builder
and
you
would
have
the
things
that
you
need
to
to
create
Deb's
the.
A
So
we
should
focus
on
getting
that
stuff
done
so
that
we
can
vet
out
what
it
looks
like
to
have
a
repo
that
is
not
that
is
not
behind
the
Google
curtain.
So
to
speak
right.
There
are
also
discussions
about.
There
are
also
discussions
about
like
well.
How
do
we
sign
it
right
and
the
Google,
the
GCB
HSM
seems
like
a
reasonable
path
forward.
We
just
need
to
poke
at
it.
So.
G: On that front, I think we're on a reasonable path to managing the incremental cost and actually making most of it go away. This is a draft KEP, and the hope would be to discuss it, socialize it with SIGs, get the details ironed out, get it towards maybe a final discussion at KubeCon in November and hopefully into an implementable state, and then there would be those details. But I think, from a SIG Release perspective -
A: Yeah, I would hope to see the KEP really close to an implementable state before KubeCon. That way, you know - I know it's later on the agenda, but as we're talking about the contributor summit sessions, which we have ideas for but haven't submitted proposals for yet - Working Group LTS has submitted a proposal for the contributor summit specifically to discuss this KEP, right. So this needs to be a release engineering contributor summit session where we essentially just have a war room; I think we haven't had the time to sit down and really go through the board.
G: The working group is also focused heavily on looking at what APIs need to get to what stability level, and on when the code base is actually ready to be supportable in a longer-duration way, because there are also implications here for how users upgrade and how they migrate between versions of Kubernetes, with data that's behind the scenes at different levels and needs to be mutated across those upgrades. So there's a lot more there that they're worried about. Oh yeah.
A
This
is
I
mean
this.
This
is
also
a
change
in
in
our
general
policy,
about
releases
and
and
about
the
tools
that
we
use
the
different
API
components,
but
that
is
a
policy
change
right.
That
is
a
policy
change
that
partially
happens
on
the
architecture.
Side
partially
happens
on
the
release
side
and
it's
not
totally
a
an
LT
out
like.
Ultimately,
it's
not
totally
an
LTS
concern.
Sure.
A
I'm
not
just
release
engineering
but
but
I.
Think
that
is
a
component
support,
ability,
concern
and
not
like
yes,
there's
a
separate
upgrade
concern,
but
we're
not
just
talking
about
upgrades
we're
talking
about
the
support
of
of
a
specific
version
right
and
yeah.
I
think
that
I
think
that,
like
whether
I
can,
whether
I
can
skip
versions
or
whether
I
can
are
whether
like
there
will
be
compatibility
between
and
an
N
minus
2?
U
and
minus
3
and
minus
X
versions
and
whether
or
not
the
version
is
supported
or
2
different
concerns
right.
G: We have deliberately separated those out. So - we're starting to run short on time, and there are a couple of other topics I want to briefly make y'all aware of and try to get a little bit of feedback on. The next one is around keeping CI running longer. That's going to come at a cost. I've never heard specifics from Google, and the hope had been, I think, when this working group started, that Working Group K8s Infra would be further along, with more moved over to where it's actively tracked and managed through the community.
G: But it's unclear what the actual dollar cost would be of running the additional CI. That's something for us to be aware of and probably try to suss out, and we'll need to get more of that from SIG Testing. But I wanted to ask here too, because I know in various conversations in SIG Release people have said it's no big deal to keep it running; that's part of why we have kept it running instead of cutting it off immediately when we hit that nine-month period.
A: Yeah, so I know that we generate billing reports for the k8s-infra stuff at least, right; do we do the same for the Google stuff, or is that just kind of managed somewhere in space? I've never seen it for the existing stuff. Okay, so I think the only way we're going to get an answer on this is if people are willing to give us numbers, or if we're able to mock out some of this on CNCF infra.
A: I think that's fair. I would say CVEs, and whatever our definition of a critical bug fix is, are fair game for that window. What I've seen, at least from some of the patch release announcements and discussions, is that what we could probably use is a finer-tuned definition of how we essentially grade fixes for vertical SIGs - maybe even more specifically, like, cloud providers, right. I think a lot of things have come up where the differentiation between what is a bug fix -
G: I think that's going to become less of an issue anyway. The trajectory of this KEP landing kind of next year, and things shifting to a different time scale, also corresponds with the in-tree cloud provider deprecation. So as we free up the cloud providers to run on their own cadence, we won't have that much of a problem there anymore either, hopefully.
G: Exactly, this stuff is marching roughly in the same direction going forward. The last big bucket of stuff is around third-party dependencies. From a SIG Release perspective, just in the last couple of days we've been talking about the new golang release, but there's always a new golang release, whether it's one for a CVE, a patch release, or the twice-yearly majors; and we also have other third-party dependencies beyond just Go. So we have this problem today where this stuff is not managed super tightly, and our dependencies' life cycles don't line up with our life cycle.
A: So one of the things that we've recently worked on is the build dependencies YAML, right; that moves us into a place where we better understand version bumps for external dependencies. It doesn't get us all of the way there, I think, in terms of things like Go.
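(A hedged sketch of the "dependencies YAML" idea: declare the expected version of an external dependency and the files that should pin it, then flag mismatches. The struct, file path, and version below are illustrative assumptions, not the actual kubernetes/kubernetes tooling.)

```go
// Minimal sketch: verify that files which are supposed to pin a dependency's
// version actually mention the declared version.
package main

import (
	"fmt"
	"os"
	"strings"
)

type dependency struct {
	Name    string
	Version string
	Files   []string // files expected to reference this version
}

func check(dep dependency) []string {
	var problems []string
	for _, f := range dep.Files {
		data, err := os.ReadFile(f)
		if err != nil {
			problems = append(problems, fmt.Sprintf("%s: %v", dep.Name, err))
			continue
		}
		if !strings.Contains(string(data), dep.Version) {
			problems = append(problems, fmt.Sprintf("%s: %s does not mention %s", dep.Name, f, dep.Version))
		}
	}
	return problems
}

func main() {
	// Hypothetical entry; path and version are placeholders.
	deps := []dependency{
		{Name: "golang", Version: "1.12.9", Files: []string{"build/build-image/cross/VERSION"}},
	}
	for _, d := range deps {
		for _, p := range check(d) {
			fmt.Println(p)
		}
	}
}
```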
A: We also need to understand who all touches these dependency bumps, because essentially there's a team of people who are aware that Go has been bumped - liggitt as part of that, and Christoph, and, you know, Ben, and dims, and a bunch of others who kind of come out of the woodwork when -
G: All of these other couple dozen dependencies, whether it's etcd or golang or a whole bunch of others - this is a big area of complexity where our risk management is quite weak as we increment the versions in use. Some of them hardly even get test exposure consistently or formally until we've committed the change. I know.
A: Like, personally, the CRI and CNI stuff that I try to bump - it just gets held there, because I don't think we understand the entirety of the problem or what needs to be fixed. Who needs to push an image at what date? And, you know, this test won't work until someone pushes the image over here. We need to understand the stakeholders and the process for each of those releases - you know, whether it's something that we own; like, we don't own CNI anymore.
A: We kind of do, but we kind of don't - it's no longer in kubernetes/kubernetes, right. CRI is a bit of a different story: the cri-tools live within kubernetes-sigs, so that's something that we technically all have ownership of as a project. But there's a bunch of other stuff, like etcd, where we have a close partnership with the people who work on etcd - there are also contributors in Kubernetes - but they have their own release cycles. So we need to identify the stakeholders.
A: We need to enumerate all of the external dependencies as much as we can. We need to have a process that literally anyone can follow to bump these things and to vet them, and that will involve bringing all the stakeholders in to describe what their release process is and the things that we need to look at.
A: You know, just earlier today or yesterday, I found out that there's a Go 1.13 performance regression, and it's a performance regression that has been open for two months that we as a release team - and really SIG Release overall - didn't necessarily know about, right. So how do we also control the flow of information about things like that? That's something that SIG Scalability was aware of but we weren't. We had a discussion about it yesterday, and it shouldn't have been yesterday; it should have been -
G: We're going to go around to each of the SIGs - the primary stakeholder groups - so this is kind of the first one, with SIG Release, and talk to them, get their feedback, see how far off we are, and based on that refine it a bit. I think at that point we declare a date by which we want final feedback, ahead of KubeCon.
A: Okay, sounds great.
G: It's been a very controversial topic, let's put it that way, so we don't want to declare an arbitrarily near date before we've gotten to a few more people, because people don't read the KEP until you push them to, or until you declare the date and then they're like, why did you - this is totally wrong, you can't -
A: So we have a few additional agenda topics there, primarily talking about the SIG Release KubeCon events: the SIG Release intro, the deep dive, and the contributor summit. We still need to talk about the intros. The intro will be a walkthrough of release team activities, shadow mentoring, and so on and so forth, held by Gwen and Lackey, and the deep dive will be myself and Hannes on release engineering.
A: The contributor summit stuff is TBD. I think what we can do is maybe shoot something out to the mailing list - and, yeah, we won't have time to do that before the date - so yeah, send something to the mailing list, and we can discuss there and discuss on the channel and try to get something together.
A
On
us,
close
yeah,
so
the
I
thought
the
deadline
for
the
kubernetes
forum
and
soul
and
Sydney
was
the
sixth
and
this
deadline
might
be
the
11th
anyway.
We
should
verify
it
and
and
and
finish
it.
Thank
you.
Everyone
for
attending
this
was
a
lot
of
fun.
I
think
we
talked
about
some
really
important
issues:
important
process
improvements
to
move
forward
so
not
same
bat-time
same
bat-channel,
but
next
week
we'll
have
the
cig
release
meeting
now
at
its
new
time
on
Mondays,
so
the
event
should
already
be
in
your
calendar.
Everything
should
be
up-to-date.