From YouTube: Kubernetes 1.12 Release Burndown Meeting 20180921
B: As folks are joining... I think we have the key set of folks that I definitely know I need to talk to a little bit here, so I'm going to get started. We have a half hour and a couple of important things to talk about. It is Friday, September 21st. I'm Tim Pepper, your 1.12 release lead. We are almost there. After the meeting I will be uploading this video to YouTube, which is a way of saying it's recorded, so please behave in accordance with our community guidelines, as I know you will. Okay.
B: So we are probably going to cut another RC today, but we need to discuss that a bit. Monday should be our last branch fast-forward, and we will thaw the branch at that point, get a little bit of final soak, and release Thursday. I think we're on a reasonable trajectory for that to be the case. So, to get into some details: we don't have Cole here today, so I'm going to have to chase these down.
B: The publishing bot appears to have broken today, so that will need to be resolved next week. And then our first major issue: we built debs and RPMs for the RC, which amazingly has never been done before, and this is a problem. We've got multiple things where the way we do the final release and the way all the other builds are done differ slightly, so publishing those has highlighted a few issues that we're working through today.
B: This got highlighted because we actually have, I guess, kubeadm... I'm sorry, not kubeadm, kubernetes-anywhere... testing that tripped on this, and there's a patch that's gone in for kubernetes-anywhere, so that test should go green. But this remains a problem, because those packages are available to the world, and anybody who has automation that just regularly tries to pick up the latest packages could hit it.
E: The instructions from 1.11 onwards explicitly say to hold, but that doesn't mean people do it, so there is at least a clause in case someone comes with pitchforks. So there is documentation that says you don't just randomly update, and to specifically set hold on these repositories. But it shouldn't be in stable, for sure.
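The hold being described corresponds to ordinary package-manager pinning. A minimal sketch, assuming the stock kubelet/kubeadm/kubectl packages from the project's apt and yum repositories:

```shell
# Debian/Ubuntu: pin the Kubernetes packages so a routine
# `apt-get upgrade` does not silently pull a new release.
sudo apt-mark hold kubelet kubeadm kubectl

# Verify the hold is in place.
apt-mark showhold

# Later, when you actually intend to upgrade:
sudo apt-mark unhold kubelet kubeadm kubectl

# RHEL/CentOS equivalent, using the versionlock plugin.
sudo yum install -y yum-plugin-versionlock
sudo yum versionlock kubelet kubeadm kubectl
```

This mirrors the guidance in the kubeadm install documentation from that era; the exact package set to hold depends on what your automation installs.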
B: I really think this is good, and we need to push to do this sort of stuff all the way through, earlier in the cycle, so that we have one release process where we turn the crank, and that is the thing we do for release and publish. I mean, maybe not for the alphas, since those are a weird one, but every single beta and RC is the same as an official release.
B: We've got another issue open in kubernetes/release to work on this. The RPMs and debs are maybe a little bit awkward, because those are still done by Googlers internally. But if we're going to decouple that, we may as well decouple it in a way that solves this problem as well, so I'll be looking at that as the next month or so goes by.
D: No real magic. There's a small delta between the specs that are checked into kubernetes/kubernetes and the ones that are in kubernetes/release. There's not a huge difference between building the debs yourself and having a Googler build the debs; it's just that where they land is ACL-protected.
B: We can work through that in the issue in kubernetes/release and figure it out. I think the short-term issue, as things get decoupled for that particular point in the build, we could resolve quicker. But there is definitely a need for a larger KEP on what the overall, properly architected release engineering process looks like: capturing the stakeholders and the artifacts, and managing their lifecycles.
F: It's got to be at least the infra parts, finding a home for it in the Google infrastructure and the CNCF infrastructure. Yes, that part needs to be in that group, but then SIG Release itself should spell out what exactly they will need, so we can turn around and go make that happen in the other group.
B: Okay, so remaining issues, then, in terms of where we stand relative to release readiness. There are a couple of things that have just flown by around kubeadm and upgrade, and the previously mentioned kubernetes-anywhere change, so I think things are looking good there. I'm not aware of anything else that would be pending. So we literally have one remaining issue that I'm looking at right now: issue 68899, and then there's also the related 68857. This was a DaemonSet upgrade issue.
B: Now, this may or may not be a broader problem. There's still a bunch of investigation going on, and I need to poll people again later this morning to see what the outcome is looking like. But this is our old friend, the node-name race condition, which has cropped up multiple times now. Depending on how you look at it, this could be a minor issue: it's just the cluster directory, the deprecated cluster-up stuff. But it depends on the analysis and on who actually uses those scripts.

If it's only the tests that we run for GCE in the release process, okay, fine: something is accidentally getting run on master that shouldn't be at times, not a big deal. I probably wouldn't even block on that. But we've never quite understood who uses those scripts, or maybe who has been influenced by them, and whether there are other layers of problems potentially out there. So I need a little more guidance on that one from the SIGs.
B: And it won't at all, based on the diagnosis so far and what we understand, but that doesn't mean somebody else might not have an issue. That said, one of the things here went in in 1.6 and has been documented, and the ecosystem seems to understand this need. So people were kind of surprised, like, how did this actually get through? And we don't quite understand why we weren't just failing all over the place, and a bunch of GCE tests are having flakes.
B: It sounds like folks are working on those, and it's definitely not a release blocker for us. But I want us to have the culture that, even though it's not release blocking, and it's sort of this other thing, GKE, it's really important, because so many of our tests are run in variations there, and we as a release team kind of end up depending on the signal from them. When we have a bunch of separate things, they kind of overlap to give us a sense of the signal, but they're all flaky.
H: I feel like a parent who's been called to the principal's office because their kid is misbehaving. Alright, so I do realize the importance of the CI signal. I'm thinking I'll make a proposal internally to try to figure out how we can tighten it, like maybe running the OSS mainline tests before making changes internally, or whatever makes sense from a process point of view. But the kinds of changes we make are on the hosting side and are necessary for us to maintain the product.
H: But we just need to ensure that it's not affecting you. The change that happened last week was essentially changing the permissions around the production-to-test boundary, and that was a kind of widely affecting change, so it was resolved faster. But I can only say: yeah, we do realize the importance of this, and we'll try to figure out a process that reins in the impact for everyone.
B: Something like an additional layer of acceptance testing there could be useful to us, if you were willing to take that on. And also, I don't want you to feel like you're being called to the principal's office here. I really appreciate all you've done for us this last week; I know you've put in some late nights as well. So, the other thing I think will be useful:
B: It's still going to be complex. If we change something in master and only a portion of those providers' results feed back in saying there's a problem, well, we'll have to triage to figure out whether it was us or them, whether they need to change to fix something or we need to change, or what. Ideally we're never breaking them, if we're doing things properly from a stability standpoint, but that is also hard. So this is going to be a work in progress.
A: It also creates a personal burden on individuals. So either we have to reduce that burden somehow, or we have to make it official, like: this is the GKE test liaison role. Maybe not an official release team role, but somebody to talk to who will then have it as part of their duties. Not like, "oh wow, it's 10:00 p.m. and I'm just here and somebody's got to do it," but someone who can do that on their on-the-job time, to really follow up on those things. And also some sort of template for the report back, because there's probably good reasons why this is hidden, but follow-through and transparency, as much as possible, just helps so much. Yeah.
A: That actually ties into one of the last issues, or questions, that I have. I have issue number 68260, and it was one of the issues that was fixed together with another issue, but I actually haven't heard back on whether the original content of that issue was fixed. I can follow up with the SIG, that's fine, but I don't understand the last comment on that issue, and I just want to make sure that I'm not closing something that hasn't been fixed.
B: All right then, where's my agenda doc? Okay. So the golang change had also gone in, and we haven't seen anything unexpected there. I still want to watch that space a little bit, just so scalability has a little more time to have their tests run to completion for a period. So I'm kind of leaving that on the agenda until Monday, just to keep watching in case anything crops up there. I think that's it for issues and PRs. Then, Zach, anything we need to do for docs?
F: Exactly. I had one question here: I saw a note on the CoreDNS ones that we reverted, saying the material is still useful, even if all we did was revert a PR that was GKE-related in a specific script. So we are not really reverting the fact that, you know, CoreDNS is GA; all we are, you know, switching back is the GCE ones. So, do you have any thoughts on that? I think...
F: [CoreDNS] is the default in kubeadm, so it will be for kubeadm users; it is GA there. So all we are saying in the release notes is that if you have more than a thousand nodes, then there are things that you need to take care of, and we reverted a patch which was specific to GKE, which had nothing to do with the actual CoreDNS note.
B: I just pulled all of them up as well, so I will drop a link into the agenda. I think there are three commits in it. If you could glance at them and give them a read-through, and I guess Steven as well, just to make sure that we think the wording is correct. Okay.
B: Okay, that probably transitions nicely over to release notes. Nick?
J: Yeah, so we are in good shape. We finally have what, for us, is sort of final. It's not final; it's the final version of the document that we will now turn into the final document. So we will begin our last copy editing, and for any major themes that aren't in there, we will make a call on whether there seem to be any, and if there aren't, then we'll just pull that section out.
B: I'm inclined to say it's fine; it's isolated to their code as well. They're the subject matter experts, and that's normal for code freeze: fixing what their SIG believes is a critical, urgent bug. So yeah, there have been a couple of things. I haven't brought up each and every single thing that's come by, but there have been a few things merging. Hopefully we're not seeing any more of those, and on Monday and Tuesday we'll need to be watching to see who's done what over the weekend.
B: So one question, then: we talked about cutting an RC today, but is there value in doing that if we're not going to build the RPMs and debs as well? And I don't think we should build them, given what we understand right now.
E: Actually, that's the whole purpose of publishing it. I mean, we need to get to a point where, if it was pushed to unstable, that would be a useful thing for consumers, because we don't want people like me being the only ones testing this stuff, because I have a totally hijacked, weird build that no one else should use, right? Ideally, we have the same artifacts that we were going to publish in one week, but published to an unstable repository that people can use to actually test a cluster before 1.12 is cut, right?
D: But that does not work for RPMs, so rather than having two processes, I think it's best to... we can put RPMs and debs for RCs someplace, but there's just no good reason to have a Googler involved, using the build process that puts them in the repositories where everything else goes.
B: This is a bigger problem than we're going to solve this week. Yeah. Today, are we going to get useful information from that? If we build RPMs, given what we know about how they get published today, that's problematic, so I don't think we should build them today. In which case, are we going to get any additional useful information by doing an RC2 today? Okay.
F: And we had a problem, and we found it and we fixed it, so we need to build an RC. I'm not saying that it should be published exactly where RC1 was published, but whatever we build as debs and RPMs should be built from the same repository that we're picking up the specs from, so people don't have to use the binaries or build from the specs in the kubernetes/kubernetes repository.
D: So I guess what I'm trying to say is that, at least the way I see the debs and RPMs, it's a packaging difference. There's not a material difference between using the static binaries and using the deb or RPM; it's the difference between a zip file and a tar.gz. I get that, I get that, but...

[crosstalk]
B: So, do we build an RC today, and RPMs and debs, and if so, where do we publish them? One thing that I'm not sure about from the automation perspective here is: is it phased like some of the other things, where we have separate build and publish phases? Can we control where these end up, or is this going to have to be a completely separate path, where some of us just build the packages ourselves and test them ourselves without having done the official build stuff?
D: I mean, no. If you find a place, there's no reason for me to build the RPMs, I guess, is what I'm saying, right? Because the way that I build and publish them, they go to the official location. There's no easy way for me to change that, because that's how this tooling is designed, and it's internal tooling that's designed to do just this particular thing. But I'm very sensitive to the user issue of having an easy process for kicking the tires with a pre-release version of Kubernetes.
D: I just don't think that the deb/RPM process is the right one to realize the actual value that we're trying to provide to the users. So I would love to talk about that in a SIG Release meeting or on the mailing list. But I can imagine a much easier process that can be more or less entirely community-driven, that doesn't involve debs and RPMs but still provides the same user value of being able to quickly spin up a cluster because everything's already installed for you, Tim.
B: Even if that's from a hack script temporarily: hand me the RPMs, and I want to compare them, diff them against an RPM that I built. If we truly believe that it doesn't matter where they're built, let's prove that they're the same, because I really doubt we have reproducible builds to that extent right now.
D: They're not reproducible at the moment, due to some other challenges. Though, the only possible way that they would be different is if and when the specs that are in kubernetes/kubernetes and kubernetes/release diverge. That is the only source of difference between me or another Googler building them and someone outside of Google building them. Okay, so those files also should never have changed, and we should probably just remove the ones in kubernetes/release too and make that not possible.
B: We would have done a fast-forward anyway, but yeah, okay. So we'll do a fast-forward probably early this afternoon, because some things have merged this morning and I want to see their CI. We'll do a fast-forward this afternoon and do an RC, but no RPMs and debs, and we will regroup on Monday, and hopefully nothing has changed.
B: Thank you all for the passionate discussion on this topic. I think this is an area where we all believe we need to improve, and I really appreciate that; it shows we will continue to get better in subsequent releases. It's an evolution, right? So we have what we have, but we're going to continue to try to do better. Thank you all. Sorry we've gone over quite a bit; we'll regroup on Monday.