From YouTube: Kubernetes 1.13 Release Team Meeting 20181128
A: All right, that's Josh, hi. Everybody, welcome to our 1.13 burndown for today, 11/28. There's the link to the minutes in the chat; please add your attendance. And the usual disclaimer: we are being recorded, and this will go onto YouTube shortly afterward. With that, let's get started. Is my agenda okay? So, release timeline: I'm still keeping it at chartreuse, though it seems to tip a little more toward green as of this morning, mainly because CI looks green for the first time, or at least only the second time, in the entire release.
So things are looking good. We also triaged a whole bunch of flakes out of 1.13 yesterday; we can discuss more in the CI section. And we also cut our RC1 yesterday, which is great. So with that, the release is still trending toward green.
CI and fix work: yeah, so the main thing that we were tracking from yesterday was the fix for the race condition. The fix for most of the issue went in yesterday. There's still one known edge case, but as per the comments in the PR, we decided to handle it outside of 1.13, and there will be a mention about it in the release notes. So if anybody is curious about it, there's a link to the PR in the meeting minutes. And also, thanks to SIG Storage:
they also got the corresponding end-to-end tests in, so both the tests and the fix are in right now, and the manual tests apparently passed as well, so I think we are good on that issue.

Code freeze is still slated to lift at end of day today, 11/28, unless the one outstanding scheduler bug we will discuss changes that (that one is on hold), or unless CI signal later in the day makes us take a different call. But as of now, we are on track to lift code freeze.
Okay, I see Bobby there. Hey Bobby, so the link to the issue is right there. There was a critical bug opened yesterday with respect to sorting the scheduler queue so that we avoid starvation of pods, because pods that cannot be scheduled are ahead in the queue. It is a late-landing fix in the scheduler, and from what we've seen, scheduler fixes generally tend to have an impact on the scalability and performance runs.
B: Yeah. Can you hear me well? Right, okay. So this existed in 1.11 and 1.12 as well; it was discovered a couple of days ago, maybe three, by some of the folks in Warsaw, Google employees in Warsaw. The problem actually is twofold. One is the case where there is a high-priority pod in the cluster that cannot be scheduled, and the cluster is large. When the cluster is large, the issue is that there are a lot of events causing the scheduler to retry scheduling unschedulable pods, and when there is a high-priority pod, it gets precedence over other pending pods. So that high-priority pod can go very, very frequently to the head of the queue and block the scheduling of other pods. This is not too bad, because a lot of clusters, where people don't even use pod priority, won't have this issue much.
I'd say maybe clusters with 50 nodes or smaller won't be affected much by this issue, and in fact, as you know, this has been out since 1.11 and it hasn't caused a lot of trouble so far. But the other thing to note is that 1.11, in the case of GKE for example, is just being rolled out and hasn't been tested with the very large customers yet, so we cannot really rely on that.
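A minimal sketch of the starvation pattern described above, assuming a toy priority-ordered queue; the pod type, the queue, and the event loop here are illustrative simplifications, not the actual kube-scheduler code:

```go
// Toy model of scheduling-queue starvation: a single high-priority pod that
// can never be scheduled is retried on every cluster event and always lands
// back at the head of the queue, so lower-priority pods never get a turn.
package main

import (
	"container/heap"
	"fmt"
)

type pod struct {
	name        string
	priority    int
	schedulable bool
}

// podQueue is a max-heap on priority: the highest-priority pod pops first.
type podQueue []*pod

func (q podQueue) Len() int            { return len(q) }
func (q podQueue) Less(i, j int) bool  { return q[i].priority > q[j].priority }
func (q podQueue) Swap(i, j int)       { q[i], q[j] = q[j], q[i] }
func (q *podQueue) Push(x interface{}) { *q = append(*q, x.(*pod)) }
func (q *podQueue) Pop() interface{} {
	old := *q
	p := old[len(old)-1]
	*q = old[:len(old)-1]
	return p
}

func main() {
	q := &podQueue{
		{name: "high-prio-unschedulable", priority: 1000, schedulable: false},
		{name: "normal-pod-a", priority: 10, schedulable: true},
		{name: "normal-pod-b", priority: 10, schedulable: true},
	}
	heap.Init(q)

	// Each cluster event triggers another scheduling attempt. The
	// unschedulable high-priority pod is requeued every time and, being
	// highest priority, immediately returns to the head, so the normal
	// pods are never popped: that is the starvation being described.
	for event := 0; event < 5; event++ {
		p := heap.Pop(q).(*pod)
		if !p.schedulable {
			fmt.Printf("event %d: retried %s, still unschedulable, requeued\n", event, p.name)
			heap.Push(q, p)
			continue
		}
		fmt.Printf("event %d: scheduled %s\n", event, p.name)
	}
}
```

The fix being discussed changes how the real scheduling queue orders and retries unschedulable pods; the sketch only reproduces the failure mode at toy scale.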
A: I understand the criticality, but given that this is super late, and the next scale tests for us to even get signal on this would only be on 12/1, because the large-scale runs only happen once a week, that puts us super close to the release date. So my preferred course of action would be, of course, to cherry-pick this into 1.11 and 1.12, but kind of wait for the 1.13.1 patch release, which, Jim and Alexandra, if either of you are around, we could...
D: The short version is that most drivers will simply update to support CSI 1.0, and for them, what we have today, where they would be required to go into the new plugin directory, works fine and works the way we want it to work. Existing drivers that support CSI 0.3 will be able to stay deployed where they were and continue to work, and that works fine too. But in the case where a super-conscientious vendor updated their driver to support both 0.3 and 1.0,
the directory restrictions we have in place would require them to move to the new directory, which is not actually what we want. We would want to let this hyper-conscientious vendor, who made their plugins super compatible, deploy in either directory. Saad and I just talked through this, and we would like to fix that issue.
The fix is tiny, it is a one-line change. And yes, the upgrade tests... well, the upgrade tests are using our sample driver. We don't even provide a driver that implements both CSI API versions. It's not expected that many vendors will do this at all; it's much more common that they will just switch to the new API version. It's actually quite hard to make a driver that supports both, so...
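A minimal sketch of the directory rule being described, assuming hypothetical paths and a hypothetical helper function; this is not the actual kubelet plugin-registration code, only an illustration of the intended behavior:

```go
// Sketch of the behavior discussed above: single-version drivers keep their
// current registration directory, while a driver that supports both CSI 0.3
// and CSI 1.0 should be accepted from either directory rather than being
// forced to move. Paths and the helper are illustrative assumptions.
package main

import "fmt"

const (
	oldPluginDir = "/var/lib/kubelet/plugins"          // assumed pre-1.0 location
	newPluginDir = "/var/lib/kubelet/plugins_registry" // assumed CSI 1.0 location
)

// allowedDirs is a hypothetical helper, not a real kubelet API.
func allowedDirs(supportsV03, supportsV10 bool) []string {
	switch {
	case supportsV03 && supportsV10:
		// The one-line behavioral change: do not force a dual-version
		// driver into the new directory.
		return []string{oldPluginDir, newPluginDir}
	case supportsV10:
		return []string{newPluginDir}
	default:
		return []string{oldPluginDir}
	}
}

func main() {
	fmt.Println("v0.3 only:     ", allowedDirs(true, false))
	fmt.Println("v1.0 only:     ", allowedDirs(false, true))
	fmt.Println("v0.3 and v1.0: ", allowedDirs(true, true))
}
```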
A: I'm a little afraid of the Tide queue blowing up, this fix going at the end of the queue, and then us having to wait too long for everything else. So I would also be willing to push the code freeze lift to either later today or tomorrow, until we can get some signal, but get this in first. Yeah.
I'll keep an eye out on that, and we can tweak the code freeze lift accordingly. That said, cool, next it's up to you, which brings us to the Tide queue question. So I was looking at that particular Tide queue target, but it doesn't show anything. Is it because it only tracks 1.13 at this point?
E: It's because there is nothing that Tide is handling right now. That page shows PRs that are currently in the pool, currently ready to merge and being retested, essentially. So if Tide is completely idle, nothing will show up there. I linked the other Velodrome page that has historic results and shows graphs; in the specific graph that I linked you'll see that it is very spiky, which is why that other page is empty, because typically, when we get something into the Tide pool, it clears pretty quickly. Okay.
F: The sort of key ones by area: there's a whole set of flakes that belong to storage, and apparently a bunch of those also belong to node. They're in the middle of investigating them, because a lot of them are apparently runtime issues. Some of those should theoretically have been resolved by PRs that merged yesterday and last night, but we won't know for, you know, 12 to 16 hours or so whether or not that's actually the case, and since they're all flakes, we really won't know for several days. The other ones fall into a few buckets.
Then there are a few miscellaneous ones, like the network affinity flake, where I'm still waiting for SIG Network to say whether or not something is newly broken for 1.13. And the other thing that's just a sort of, you know, worry for releasing on time is the GCE provisioning flake. It has never really been investigated, let alone resolved, and as a result it tends to crop up every two to three weeks, and suddenly we'll see all of our GCE upgrade/downgrade tests fail.
Okay, yeah, and we're clear that it is a test problem, as in it doesn't reveal anything wrong with Kubernetes itself. But the problem is that, if it happens to strike three days before the release, a whole bunch of tests will go red and we can't see anything other than this particular problem, because the tests all quit after, like, 20 minutes. Okay.
As far as I can tell, it has at one time or another hit all of the upgrade and downgrade tests, and it has definitely hit some of the parallel ones as well. I think it has also hit some of the scalability tests. So I think the only dependency is that the test job needs to involve a lot of pods, and when that's the case, this can strike.
This has been a problem for as long as we have history. Like I said, it's just one of those list-of-worries items: hey, this could strike on Monday and then we'd have to postpone, because even if it hit only four or five tests at once, we wouldn't be able to see whether or not there were any problems being masked by this kind of failure. Mm-hmm.
A: I'm just hoping that if we lift code freeze and kind of branch 1.13 out, we'll at least have four to five days of clean CI there without anything merging, so even if it hits on Monday, we can check the signal from the previous days to see if it's a test issue or not. There you go. Oh yeah, thanks.
F: You know, as soon as we lift code freeze, it's my plan to stop tracking sig-release-master-blocking, yeah, because we're gonna get a whole queue of things that have been pending for a week and are not necessarily tested in combination with each other. So I expect sig-release-master-blocking could go extremely red, although that didn't happen last cycle.
That's about it for this cycle. You know, SIG Storage has been working hard on resolving a bunch of their mostly long-standing flakes. Like I said in the comments, the goal here is not to address any specific flake, but simply to reduce the number of them so that we don't have them striking on every run (yep), which it looks like they've done, but it's going to be hard to tell for a couple of days whether that's been successful. Yeah.
C: We should hopefully be resolving them. We did see the test flake this morning, and both Saad and I took a look at two different jobs where it flaked, and we concluded that the kubelet is doing the right thing and behaving correctly, but for some reason the test isn't able to detect it. So at this point it looks like there might be a problem in the test itself that we can look into fixing. And then, regarding all the other flakes we had:
I took a look at a lot of the flakes, and I think each of those flakes ended up being a different issue. I opened up issues for all of them, and we can look into fixing them, but I think, going forward for 1.14, we'll probably try to figure out the most frequently failing jobs and tackle them in order of frequency. Okay.
G: Sounds good, yeah. Thanks once again for all your effort here.
I: Those are all in the issue there now, so you're going to cover those. And then there are three SIGs that haven't been responsive about their major themes, and that's Autoscaling, GCP, and SIG Node. I reached out to them in channel; we'll probably just poll them if nothing comes back. A bunch of SIGs haven't really had major themes, okay.
It seems like SIG Autoscaling mostly works in another repository, and they did put in a release note, like, yesterday, for bumping the version of the dependent artifact. SIG GCP doesn't really have that many Kubernetes release-note or major-theme items, but SIG Node had a few, so we'll see. And then, once all that's done, really, the plan is just to copy the doc into k/k and the CHANGELOG file, and I was hoping to do that tomorrow. So ideally I need some tags, I mean the labels, to get things in.
A: Okay. Oh, I'm gonna get kicked out. Okay, that sounds good. Just one thing: fine, I'll follow up with you offline, Mike, about this. And just one quick question: I'm planning to move the release meeting to 9:30 tomorrow to accommodate the community meeting. Can the leads make it, is that fine by everybody? Okay, I'll move it. Thanks a lot, guys, have a good rest of your day today.