From YouTube: k8s 1.16 - Week 9 - Release Team Meeting 20190830
Description
Release details: http://bit.ly/k8s116
A
Okay, here we go. All right: good morning, good afternoon, or good evening, hello, and welcome to the burndown. It is week 9, August 30th, 2019, and we are in the 1.16 burndown. Big milestones include code freeze, so I think a lot of the discussion today is going to be about that and the ripple effects that it causes, so I'm excited to get into it. Let me remind you that I am recording this video and it will be posted to YouTube later today, so please keep that in mind when you go about your conversations.
A
All conversations are covered by the Kubernetes code of conduct, which states: please be excellent to each other. In the interest of time, let's get straight into it. Please be sure to add your name to the attendees list if you're in attendance. I will grab the lazy link while we are here and throw it in the chat so that you can do that and add your associated role; pop yourself down to week 9, August 30th, and catch up there. I will pass it over to Kendrick with an enhancements update.
B
Hey everybody. Could somebody put my name down on the attendee list? I am driving right now, so I can't add it in there myself. This morning I did update all of the issues in the tracking spreadsheet, and right now we should be looking pretty good. There were six or seven that were taken off.
B
A few of them, I know, you'd sent me a private message about that I haven't gotten to yet regarding yours, but there are a few that had open PRs that weren't merged yet, and those were moved to have the milestone removed. There were two that we could easily have removed from the milestone. There's one more where I'm waiting to hear a response back, because the PR has been merged for the main part of the code, but there are tests still waiting to be merged, and there's nothing in the merge queue.
B
Everything else is either on track, has had all of its PRs merged, or its PRs are sitting in the Tide pool just awaiting the merge. I know there are probably still some at this point waiting to be merged, and hopefully by the end of today that will be cleared up and we should be good to go and green on the sheet after today.
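Some context on what "sitting in the Tide pool" means here: Prow's Tide component only pulls a PR into a merge pool once it carries the labels the merge automation requires. The following is a minimal sketch, not part of the release tooling; it assumes the usual kubernetes/kubernetes labels (lgtm, approved, do-not-merge/hold), uses a placeholder PR number, and simply lists a PR's labels from the public GitHub API.

```go
// Minimal sketch: list a PR's labels via the GitHub API and report whether it
// carries the labels Tide usually requires before adding it to a merge pool.
// The PR number below is a placeholder, not one discussed in the meeting.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	const pr = 12345 // placeholder PR number
	url := fmt.Sprintf("https://api.github.com/repos/kubernetes/kubernetes/issues/%d/labels", pr)

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var labels []struct {
		Name string `json:"name"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&labels); err != nil {
		panic(err)
	}

	have := map[string]bool{}
	for _, l := range labels {
		have[l.Name] = true
	}
	// lgtm + approved, with no hold, is the usual shape of a merge-ready PR.
	fmt.Printf("lgtm=%v approved=%v hold=%v\n",
		have["lgtm"], have["approved"], have["do-not-merge/hold"])
}
```

Tide's actual merge criteria live in the Prow configuration, so treat this as a rough readiness check rather than the authoritative rule.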
A
Thanks for the update, Kendrick. Just a couple of things; I might just take a pass. I think some of those ones, like the ones that don't have PRs in the merge pool, I commented on: it was just a case of bad links, and we'd moved some things around with SIG Network, so I've just updated you with the latest PRs that we were tracking, and they have all merged, so I think we're okay. The ones that you had stated, I had commented on a while back, and we actually deferred them with SIG Network.
A
So I just never got around to updating you. I've cleaned that all up in the comments, and we should be good to go for that one specifically. I think the same thing might apply for the new Endpoint API, but I will just take a pass at the four that are dangling there, and I'll go and ping those people directly this morning, nudge them, and make sure we take another pass at getting them in and the deferred ones removed.
C
We had a feature gate update for a feature we're graduating to beta, and it's been in the Tide pool since yesterday; that was 82110. It's currently under the pending column, so I think if we just wait, that's going to go through okay. And then there was a bug fix for one of the other features we had in there, 82133, but all of those had all the approvals and were passing tests yesterday. So I think it's just a waiting game at this point, right? Yeah.
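As background on what "graduating a feature gate to beta" amounts to in the code base: the change in a PR like the one above is typically a one-line edit to the feature's FeatureSpec, flipping its pre-release stage from alpha to beta and its default from off to on. The gate name in this sketch is hypothetical; it only illustrates the shape of such a definition using the k8s.io/component-base/featuregate library.

```go
// Hypothetical feature gate shown at its post-graduation (beta) stage.
package main

import (
	"fmt"

	"k8s.io/component-base/featuregate"
)

const MyHypotheticalFeature featuregate.Feature = "MyHypotheticalFeature"

func main() {
	gates := featuregate.NewFeatureGate()
	// Before graduation the spec would read {Default: false, PreRelease: featuregate.Alpha};
	// graduating to beta usually means the feature is now on by default.
	if err := gates.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		MyHypotheticalFeature: {Default: true, PreRelease: featuregate.Beta},
	}); err != nil {
		panic(err)
	}
	fmt.Println("enabled by default:", gates.Enabled(MyHypotheticalFeature))
}
```

In kubernetes/kubernetes itself the real entries live in pkg/features/kube_features.go and register against the shared default gate rather than a fresh one.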
C
And the other one: I don't know if we have Aaron or anybody from conformance on here, but if you look at the conformance working group project board, there are a number of things on milestone 1.16 that still don't have LGTM or approve. A couple of them are ones that we had given some feedback on from SIG Windows and requested changes, and I don't know what the status of those is, so I wanted to mention that.
A
I mean, we should take a pause and see if we can get them in, the same as this RuntimeClass scheduling one that doesn't have its conformance test checked either. So let me just take those two links; the one link that you put in there, I'm going to add it to the follow-up items.
D
So first off, happy Friday, everyone, and let's get to it. A friendly reminder, like always: every single update, the CI signal report is essentially a snapshot of our project board. If you want to stay up to date on any issues or anything that pops up, check out the project board; the CI signal report has multiple links in it if you want to read up on things that have been happening lately. A new version will be available to you probably next week, after code freeze.
D
So, last time I checked, which was ten seconds ago, we have nine PRs in the Tide pools. That's the to-do for today, and if you want to stay up to date every day on everything that's on the table, just click the nice link that I included with my update; it's now up to date. That's my small pulse anyway. Any questions on that?
D
On CI signal: at this point in time it looks like there's something misconfigured with the API server control plane for the cluster, but I don't really have the full story; if anybody wants to dig into it, we welcome any help. Other than that, we have Tide clicking along and jobs passing; apart from the flaky ones, right now there are no persisting failures, so that's good on the master-informing side.
D
We have a couple of failures, and those failures come from the SIG Scalability jobs, so these ones are going to be a little bit tougher to debug and fix, just because the jobs only run once per day. The first one is a deletion error in the scale performance job; that job is new, and the first failure showed up yesterday with a second failure today.
D
We need to get in contact with SIG Scalability to get more insight into that failure. For the other scalability job that's currently failing, the scale correctness one, the thing that's causing it to fail is actually a SIG Network test, and that one is also new, so we need to investigate. On master-informing, the conformance OpenStack job is still failing; for a little bit more context:
D
That is still being investigated, and besides that failure, it seems the job is also failing to upload the results from the run to the place where the dashboard can actually read and parse them, and that's why no recent runs are showing up, despite the job reportedly being clean. On a positive note, the deb and rpm jobs have been fixed.
D
Let's see. For 1.16-blocking, as I mentioned, the ones in 1.16-blocking have the same failure as master-blocking; the other failure there is on the job that tests alpha features. For 1.16-informing, the big thing is actually that the kubeadm maintainers just added a kubeadm test for the 1.16 branch.
D
So these jobs are similar to what's running in master-informing, so we get a little bit more signal. Other than that, at the bottom of my update you can see some statistics for some of the more flaky jobs in master-blocking.
D
The couple of jobs at the top are the two most flaky ones that we had, so we will investigate and see whether we can fix them, or whether we will actually move them to master-informing and out of the release-blocking criteria, or, you know, we'll see, and we'll keep you all up to date. With that, that's all of my update, and I'll open the floor for any questions, comments, or concerns that any of you might have.
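For anyone who wants to pull the same status this report summarizes without eyeballing the dashboards: testgrid exposes a per-dashboard summary document. The sketch below is an illustration under assumptions; the dashboard name is the sig-release-master-blocking board discussed above, and the overall_status field is taken from testgrid's public summary JSON, whose exact schema should be verified rather than taken from this note.

```go
// Rough sketch: fetch a testgrid dashboard's summary and print any tab that
// is not currently passing. Schema assumptions are noted in the lead-in.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	resp, err := http.Get("https://testgrid.k8s.io/sig-release-master-blocking/summary")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The summary is keyed by tab (job) name.
	var tabs map[string]struct {
		OverallStatus string `json:"overall_status"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tabs); err != nil {
		panic(err)
	}
	for name, tab := range tabs {
		if tab.OverallStatus != "PASSING" {
			fmt.Printf("%-60s %s\n", name, tab.OverallStatus)
		}
	}
}
```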
D
As such, we haven't been in the habit of checking up on them, but if the jobs are actually maintained, if you or someone else are essentially looking at them, fixing them, and using them, then the thing that we would recommend is to actually promote them to either the informing or the blocking dashboard. But other than that, I don't really have any other details on any of the jobs that are in there, I think.
E
The forking of the version-specific jobs is actually kind of a mess right now. There are, I think, four different issues open in test-infra about problems in that respect, and Catherine and similar folks are working on the code that does that, because it became pretty clear with the forking of 1.16 that what we thought it was doing was not what it was actually doing. This is not the only test that's in a different place in 1.16 than it is in the other releases.
A
So I think the action for now, Patrick, is that if you want it in 1.16-informing, you're going to have to PR it yourself, which is what we had to do for every job that's in 1.16-informing currently; it has had to be a manual addition, because of the way the fork and the config generation work together, it just doesn't work. Yep.
D
The other really important one is a warning on master-informing, and that's the first bullet point that I have in the failing section, 82182. The only one that we still need to create an issue for is the scale correctness one, the one where the SIG Network test is failing, and we'll do that as soon as possible.
A
The other thing is, I'm not sure if you caught it, it was either in release management or testing ops, but basically the test infra had some pretty flaky nodes earlier this week, which was causing a lot of flakes. So I'm not sure if the flake stats looked a little bit off this week, but I did notice that some of the things you'd mentioned were flaking a lot more earlier in the week.
D
So yeah, I guess, to wrap up: we're just going to keep on reporting statistics on flakiness, and hopefully after, you know, a couple of weeks, months possibly, we can get a better view of whether this is just a temporary failure, because these jobs went from okay to super flaky, or whether it's a thing that has just always been there and we never actually noticed, because we never measured.
F
Everyone, welcome to code freeze. Regarding our status, we can say that we're finally green, which is a good thing. To start with some numbers: we have 36 issues open in the whole system, all told, but this time we don't have any critical-urgent issues, and we have some important-soon issues; because we're at code freeze, unless they become urgent, we will most likely kick them from the milestone. On PRs, we have 41 of them, but only 17 have the lgtm and approved labels.
F
Now, this query is different from the one that George uses for CI signal, because this one counts the PRs that are in the batch that will be merged soon; but looking at the Tide status, there are nine that are waiting in the queue to get through the retest and all that stuff.
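The counts above (41 open PRs in the milestone, 17 with lgtm and approved, nine in the Tide batch) come from label and milestone queries against GitHub. Here is a small sketch of one way to reproduce that kind of count; the query string is an illustrative assumption, not necessarily the exact query the bug triage team uses.

```go
// Sketch: count open kubernetes/kubernetes PRs in the v1.16 milestone that
// already carry both lgtm and approved, via the GitHub search API.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	q := `repo:kubernetes/kubernetes is:pr is:open milestone:v1.16 label:lgtm label:approved`
	resp, err := http.Get("https://api.github.com/search/issues?q=" + url.QueryEscape(q))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var result struct {
		TotalCount int `json:"total_count"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		panic(err)
	}
	fmt.Println("open PRs with lgtm+approved in the milestone:", result.TotalCount)
}
```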
F
On critical PRs, we are at zero, which is a great thing. The only thing I am a little concerned about is that I have noticed we have PRs that have lgtm and approve but are also failing, and we have two of them. One of them is 81892, and this is to update, I think, etcd for kubeadm; I commented on it, but I have yet to see a response so far, so I don't really know what to do about that one. And there is another one.
F
It is about moving sub-packages around; it is 82024, and I forget which SIG it was. They are going to fix the comments on the PR, but I'm not sure what we should do about them: should they be allowed to go in, or do they require some exception or something like that? Do we have any guidelines on what we should do in such a case?
A
Yep. So the first one is kind of clean-up, and the second one is kind of clean-up and kind of feature. In the case that it's clean-up, I don't even know that it needs to land as a feature; I don't even know if bumping etcd a dot release is a feature. It looks more like a clean-up thing, if only because it's a patch release of 3.3 and they're moving up five dot releases, yeah.
H
Hello, everyone. For docs, we don't really have any new update. We are trying to meet the September deadline, which is next Tuesday. We are getting responses, but we haven't gotten a lot; the number of docs ready for review is still low. I've reached out, my shadows and I myself are reaching out to people, and I'm seeing some action. We need to get this done, so hopefully, you know, over the weekend, before Tuesday, everything is in and we should be great.
J
Morning, afternoon, evening, everyone; happy Friday. Today I don't have too many news updates for you either. Still, I've switched us to yellow from green, just because we're waiting on some of the blog post drafts to come in. I totally expect that by Tuesday we're going to be back to green. With that, it's just code freeze and just going through that slog that we're going through right now. Again, congratulations on that and on getting through it. Really, yep.
A
Okay, fantastic, thank you. Okay, and with beta 2 coming up next Wednesday, will you be taking care of that? Yeah? Okay, all right, excellent. Emeritus lead: over to Josh.
A
Excellent, thanks Josh. My update: I'm calling this red based on CI signal. I think, you know, it would be my hope that we can get CI signal cleaned up by next Wednesday, enough to have master-blocking with at least no failing jobs there, even if we can't get to the scalability jobs. So I'd like to at least do enough today to give people the opportunity to fix that over the weekend or on Monday.
A
Anything else needs an exception, and the exception process is documented. So if people are knocking on your door asking why something isn't getting merged, please point them at me or Kendrick, and we will have to get an exception put through and determine it then. The docs deadline was already spoken about; that means if you have an enhancement, you need to have your docs ready for review on Monday.
A
We will be talking about things that need to go forward to the next release, and I know we have retros and other things, but Steven will update us on all that. Next Wednesday, 9/4, will be used as a SIG Release meeting rather than a burndown, because of the public holiday on Monday, and he didn't want to shuffle everything around given the short timeframes. If there are any problems, please let me know. I still have the Tuesdays and Thursdays set to 9 o'clock.
A
I haven't heard from anybody that that's a big burden on them, but I'm open to changing them. We have four of them, so if we need to get them changed, or even have a one-off at a different time, please ping me and we will see what we can do. If that's it, I'm going to park it at red at this point, and that brings us to open discussion: anything we need to discuss?
A
Okay, I think from a lead perspective, our marching orders are really just to help get through enhancements today and make sure that everything is cleaned up with Kendrick's list. So that's pinging the few people that Kendrick called out, and also CI signal: seeing if we can't raise some people going into the weekend, making sure that at least somebody is committed to taking a look at that and following it through before we get back on next Tuesday. Anything else? Otherwise, excellent; code freeze is a great milestone for 1.16.
A
You know, we're only three weeks out; we're going into September, and September 16th is the targeted release date, so we're two and a bit weeks out from that now, which is really exciting. Hopefully, once we get over this hump of getting all the things sorted out post code freeze, we will be in great shape. Thank you, and have a lovely weekend. There's a long weekend in the US, so we're all excited for that; but if you're not in the US, have a great weekend too, and we will see you Tuesday.