From YouTube: Kubernetes 1.13 Release Team Meeting 20181107
A: Okay, so I'm pressing the record button. Hello everyone, welcome to our first burndown for 1.13. First of all, my bad for thinking it was next week, but this cycle is a lot shorter than I thought it would be, so it is this week. For those joining in, please enter your attendance in the meeting minutes. I just started recording this meeting and it will go public on YouTube afterwards.
A: So I'll just go ahead and talk about the main blocking things, and Josh, you can add your comments on each of those. As of yesterday, we had two main issues that were blocking beta. One comes under the bucket of scalability, specifically the GCE 100-node job and then the large-scale GCE performance run. We were specifically tracking these because we had the Go update: the Go 1.11.2 update merged yesterday morning, and we wanted to see if the scalability jobs held.
A: We did see some increase in the test run time itself; it was hovering about 40 to 43 minutes, which was kind of 5% more than what we normally saw, so that was kind of concerning us, mainly because we didn't have the large-scale run come by, which runs just once a day, and that job missed the Go update. So we still don't have a clear signal from that job, so we wanted the scalability folks to take a look at it.
A: So that's bucket number one. Number two was all the autoscaling, the horizontal pod autoscaler issues that we had been seeing for quite some time; we wanted to figure out why they were not turning green despite the fixes landing in master. So those are the two things we were tracking. First up is the scalability issue for the GCE 100 job. Josh?
A: Could you say something, Josh? Yeah? Okay.
B:

C: However, there's a lot of ways for that to just be another delay. Yes, because if we get a nice green signal and it doesn't take substantially longer to complete than the other runs, then we know we're good, but there's a lot of ways, right: it flakes sometimes, so we could get a flake, or it could pass but, for example, take longer to complete. Yes, and those are both things that we would then need to wait for Wojtek to interpret for us, which means delaying another day.
C: So I guess my attitude is this: you know, as much as we care about the beta release, this is a beta and not the final release. So I would say let's just go ahead and release it; there will be another beta. Yeah, and particularly because the perspective from the scalability side is that we are seeing some performance issues, but it's not actually from the Go upgrade, it's from a change to the scheduler a couple weeks ago.
A: Completely agree with your assessment; that was kind of the pros and cons I put in the meeting notes as well. Mainly, the biggest fear for me is more breaking changes coming into master. I know storage is waiting to merge two big e2e changes for their CSI out-of-tree work; they're integrating both the in-tree and out-of-tree tests to share the same framework, and I'm just worried that if we extend beta longer, we're going to have more breaking changes in master when they merge.
C: Yeah, yeah. The other thing I'll honestly say I'm worried about, and you saw this in the CI report, is we have a lot of non-blocking test fails that are open, and I don't feel like it would take that much for some of them to become blocking test fails. A lot of them are non-blocking because the contributors involved have assessed that these are actually problems with the test and not problems with the code. Well, they can change that opinion. And then, of course, the other problem is with all these non-blocking test fails.
A: I know there are a lot of those in the non-blocking list, especially the taint-nodes ones and all those that are also related to adding coverage for the new features. So yeah, the scheduling one, the taint-node scheduling one, which claims to be a test-infra upgrade setup issue, but we'll need to follow up on that one, yeah.
C:

A: Cool. So, okay, then we are not waiting for the large-scale run for the beta cut, but we will follow up closely both on this and on the scheduler issue, once we open the issue on what's causing this. Okay. And also, yeah, to your point there about the perf dashboard: even if we wait, we are not clear what signals we are looking for there, so I know you opened an issue, Josh, to get more clarity on how to read the dashboard itself.
A: I know you've asked and waited for that, yeah. Okay, all right. The next bucket is autoscaling. Luckily, this morning I got a complete green on the upgrades dashboard for this, the upgrade job for this, but from the comments it looks like we need to cherry-pick that test fix into 1.11 and 1.12, because the same test is passing when both master and cluster were upgraded to 1.13 and we run the 1.13 branch tests.
A: It's only failing when the master is upgraded and the cluster is not, and then we run the older tests. So pinging Joachim and Maciej (and I'm sorry for pronouncing the names wrong), both of them verified that this is a test issue and the fix needs to be cherry-picked back, so I am leaning towards this not being a blocker. Josh, let me know what you think.
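(For context: mechanically, the backport being discussed is a cherry-pick of the fix commit onto the older release branches. Below is a minimal sketch of that mechanic using plain git driven from Python; the commit hash and branch names are hypothetical placeholders rather than values from this meeting, and in practice the Kubernetes project wraps this in its own cherry-pick tooling and pull-request review rather than pushing directly.)

# Sketch only: backporting a test fix to older release branches.
# FIX_COMMIT and the branch names below are hypothetical placeholders.
import subprocess

FIX_COMMIT = "abc1234"                               # fix commit on master (placeholder)
RELEASE_BRANCHES = ["release-1.12", "release-1.11"]  # assumed backport targets

def git(*args: str) -> None:
    """Run a git command, echoing it and failing loudly on error."""
    print("+ git " + " ".join(args))
    subprocess.run(["git", *args], check=True)

for branch in RELEASE_BRANCHES:
    # Start each backport from the up-to-date upstream release branch.
    git("fetch", "upstream", branch)
    git("checkout", "-B", f"automated-cherry-pick-{branch}", f"upstream/{branch}")
    # -x records the original commit hash in the new commit message.
    git("cherry-pick", "-x", FIX_COMMIT)
    # The branch would then be pushed and opened as a pull request against
    # the release branch for the patch release manager to review.
    git("push", "origin", "HEAD")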
C:

A: Cool. And they also seemed surprised that we run all tests against, I mean, when we upgrade the master and not the cluster, we run the older tests, which is pretty much what upgrade tests are meant to do; we kind of run three different flavors. So one of my questions is: what do we generally do if the changes in a current release change the feature in such a way that the 1.13 tests need to be different from the 1.11 tests? What happens in that case?
C: I mean, the real issue is that, right, somebody upgrading a production cluster is going to have to do a gradual rollout (mm-hmm), and we need to make sure that that cluster doesn't stop functioning in the middle of the rollout, and this is our way to do it. It may not be the best way, but nobody's come up with anything better.
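(For context, the version-skew combinations being described can be summarized roughly as follows. This is an illustrative sketch only; the labels are placeholders rather than the actual CI job definitions, and the observed statuses simply restate what was said above.)

# Illustrative only: the upgrade/skew combinations discussed above.
from dataclasses import dataclass

@dataclass
class SkewRun:
    control_plane: str  # apiserver / control-plane version after the upgrade step
    nodes: str          # node (kubelet) / cluster version
    test_suite: str     # release branch the e2e tests are taken from
    observed: str       # status as reported in this meeting

SKEW_RUNS = [
    # Whole cluster upgraded and the new branch's tests: reported green this morning.
    SkewRun(control_plane="1.13", nodes="1.13", test_suite="1.13", observed="passing"),
    # Control plane upgraded first (mid-rollout), older nodes, older tests:
    # red until the test fix is cherry-picked back to the older branch.
    SkewRun(control_plane="1.13", nodes="older", test_suite="older", observed="failing"),
]

for run in SKEW_RUNS:
    print(run)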
A: I'll reply to this comment, but yeah, I'll be watching this cherry-pick closely, and then hopefully it turns green when that goes in. Cool. So that brings us to the beta go/no-go, so we're saying go today.
A: After the meeting, yeah. All right, so once that's done, we have a couple of other things lined up, but does anybody have any other objections or questions on cutting the beta branch today, after this meeting?
D: We can't really test them out quite yet, but it's just a split from master as such, and everything should work in theory. Okay.
A: So next, let me put that down, just to be on the safer side. Cool. And the next thing coming up is slush on Friday. I think all the issue owners and enhancement owners have been frequently pinged asking for status, but yeah, on Friday we will be enabling the slush milestone requirements, around 5 p.m. PST.
E:

B:

E:

A: Yeah, the draft is up if anybody wants to take a look at it before Monday, and we'll target Monday to send it to the list, so that we can catch folks a little bit before code freeze, whoever's here and not in Shanghai. So next up was enhancements. Kendrick is out, but Claire, I know you made all the updates. Do you want to talk to it quickly?
B:

D:

B: Issues with low activity, which I categorized as no activity from the issue owner in eight or nine-ish days; a couple that have had more than ten days of just silence; and then issues that looked to me to be complete, including docs, but folks feel free to sanity-check that in case I missed something. The general sense of where we're at is:
B: It looks like a lot of the open issues have PRs for either testing or docs, but code is complete, and there are a handful of issues that still have open PRs, but the owners for those issues have made statements that they feel confident they can still make the 11/15 code freeze date. I put updated status for all the issues in the enhancements tracking sheet and linked them there. So that's as of today.
B: All of those updates.

A: Thank you. Would you, in a general sense, give a status of yellow at this point? That's what I'm leaning towards, but yeah, let me know what your sense is. I think yellow seems fair, yeah, mainly because of the whole CSI thing that's going on; it's a lot of different PRs and, I mean, different enhancement issues tied to that. So that's the main reason. I'm not sure if you agree. And just one request: when you guys follow up on the enhancements...
C: The whatchamacallit, the spreadsheet, mm-hmm. I think we may actually have to replace that with a manually maintained spreadsheet; there are problems with the Google Docs auto-updating thing, and at this point I'm spending more time troubleshooting it with Nikko than I am actually using it. So I'll make that decision before I leave. Okay.
C: I'm still keeping it in yellow, because we are resolving those major blocker issues, but we have a lot of non-blocking test fails and, especially, a lot of flakes. So, for example, even if we don't have new problems, it is entirely possible and even likely that we will have a bunch of flakes that happen to strike at the same time. And this was in the 1.11 release: this is exactly what happened the day before we were supposed to release.
C: In 1.11, the thing is, at the time there was no way to know for sure that those were flakes, and so you end up with a release delay. So I think we were pretty much set up to have something like that happen again. And, you know, the problem is that a lot of that is technical debt; it's a lot of tests that have been sort of incrementally updated for two years and never refactored.
A: Makes sense. Even the HPA ones, they'd been failing on and off and suddenly became a beta blocker; then it stopped looking like a flake and started looking more like a failure. So, okay, I will bring this up a little bit more clearly in the community meeting tomorrow, just to say that we need all these flakes and all the non-blocking issues addressed ASAP, as much as possible, and yeah, we will see how things go from there.
A: Next up, Nikko. I know he is out. Nicola, are you here by any chance? Okay, so he wasn't able to make the meeting today. Nothing much from the bug triage side, although, maybe starting next week, I would like to see more of the numbers, to see the numbers going up and down, as to how many are important-soon or critical-urgent.
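(For context, counts like these are typically pulled by filtering the open milestone issues by priority label. The snippet below is a minimal sketch, assuming the public GitHub search API and the usual kubernetes/kubernetes label and milestone names; it is not the team's actual tooling.)

# Sketch only: count open v1.13-milestone issues per priority label.
import json
import urllib.parse
import urllib.request

PRIORITY_LABELS = ["priority/critical-urgent", "priority/important-soon"]

def count_open_issues(label: str) -> int:
    query = f'repo:kubernetes/kubernetes is:issue is:open milestone:v1.13 label:"{label}"'
    url = "https://api.github.com/search/issues?q=" + urllib.parse.quote(query)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["total_count"]

for label in PRIORITY_LABELS:
    print(f"{label}: {count_open_issues(label)} open issues")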
F:

A: Okay, she's not there, but I don't think there's much, considering we did the feature scrub the other day, and then she was going to come up with a draft. The only last thing I had, before I open it up to any more questions, is the 1.14 release team drafting; it's already here. I know all of us are still busy with 1.13 (we are just getting into the weeds of 1.13), but as per the release lead tasks, this is the week to get the ball rolling on staffing the 1.14 team. So, as per tradition, I will work with Stefan to get an issue opened. The ask is for current leads and shadows to think about who amongst the shadows might be well prepared for taking the role up again, or also to have that conversation just to see: if nobody's ready, then what do we do? How do we fill the role? So yeah, once we have the issue open, we will just start crowdsourcing for nominations at this point, along the way.
A: If there are any docs that need to be updated, and I know there were at least a couple in bug triage that could do with some updates, shadows or leads, please do so this time, so that once we start drafting the new release team, folks can take a look at those docs and see if they want to, you know, sign up to be part of the release team itself.
A: Cool, we will do this again on Friday. I know a bunch of folks might be away because of KubeCon, but hopefully we shouldn't have huge surprises, and the main agenda would be code slush. I think code slush will still happen, because it's just adding one requirement to Tide; other than that, it's just more of a semantic deadline. But yeah, we will meet on Friday with the latest status on that.