From YouTube: k8s 1.15 Burndown 20190611
Description
See https://github.com/kubernetes/sig-release/tree/master/releases/release-1.15 for more information
D
As that issue is related to GCE, it's in the realm of the cluster, in the realm of the Google side of SIG Cluster Lifecycle, and those tests have been known to be constant flakes in past releases. So then we are left with the last one for me, which has been an ongoing issue for the last couple of weeks: SIG Scalability. The only update is an update from SIG Scalability on their tests; they are working through them.
So still a little more work needs to be done there, and we need to get a better idea of whether this test failure actually tells us something about the state of Kubernetes, or more about SIG Scalability's testing framework. Just so you know, the revert to 1.3.1 was merged, or almost merged, I think late yesterday. So now we're just waiting for it.
We're just waiting for more of the jobs; as I said, we must wait for master-blocking and master-informing to actually pick up that commit and run it. There was one run, one of the GCE new-master jobs, which actually passed, so possibly the revert did fix things. But again, we need to wait more for it; this one is going to take a while.
They are not run very frequently, and they take an entire day to run, so let's wait and see. Other than that, 78851 was something opened by the people working on CoreDNS; that one is the PR that's proposed to allow a custom CoreDNS version for the kubeadm way of bootstrapping a cluster, but last we checked they decided not to block 1.15 on it. So hopefully the revert is good enough to just fix most of them.
As for the master-blocking and master-informing failures over the weekend, the big reason why I am going for yellow today is that the test actually provides signal on whether upgrading Kubernetes with CoreDNS 1.3.1 actually works. It actually worked in the kubeadm upgrade tests, the kinder-based ones, which are automated; they were working until today.
Or rather, until this morning, when they started failing. So I'm going to ping SIG Cluster Lifecycle and see if we can get a quick fix, or if this actually tells us something in Kubernetes is wrong, or shows something wrong with the test setup. Besides that, master-informing also suffers from a couple of flakes related to the reboot test. And that's it for today's update on CI signal; does anyone have any questions, comments, concerns?
B
It sounds good. So essentially, CoreDNS is not a major concern, as most production environments just have it separately, and it's not one-to-one linked to the release itself. So it should be okay; it's mostly about the CI failures and the scalability issue, which takes a bit of time, but they're on it and it should be resolved soonish, I think, hopefully. You know, so on the...
A
Cluster lifecycle side, with kubeadm: it sounds like they've triaged it. Lubomir just mentioned it in the chat, but my understanding is they've triaged that to a tooling flake on their side; I think he'd mentioned to me earlier today a download timeout that was set too fast. So hopefully that is truly a flake. On the CoreDNS thing with scalability: I see that we've got the commit for the revert on the master branch. Do we know whether scalability has started a test run that includes that revert?
D
Last I checked, which wasn't very recently, I don't think there's been a run started for most of the jobs, which is also something that I wanted to ask: is that something that we can do manually, or do we just wait for the next one to happen?
A
It's definitely something they could do manually. At this point it's probably end of day for them, so we'd need to ask quickly, probably, and give them a specific ask, like: can you start a fresh test as of this commit, or find out when their next automated test would be starting with that commit. And then the other...
A
Like, there's not one running currently? Yeah, okay, and that's kind of why I asked: if the next one starts tomorrow, it would be a shame to just be sitting idle in the meantime. I don't know for sure where that is, and that's where we'd need that input. And then the other thing on that front is that once we know they've started it, we should do a fast-forward, so that we have two distinct bits of signal from master, and then we start as soon as possible.
G
So I'm not sure what my color is at the moment; I'm inclined to think it's yellow. The CoreDNS issues have been moved to 1.16: the PRs and the issues associated with CoreDNS we bumped into the 1.16 milestone, based on a conversation I was having, and that was the major blocking issue from a bug triage perspective.
But there are some new flaky-test issues opened by George, and so I think we want to see the results of these finishing up before saying that we're ready for code thaw tonight. There's also a new PR that was added yesterday by Jordan; it looks like this one's getting resolved. I'm not sure, Jordan, is this blocking or not? It looks like it's on track to be merged anyway. So yeah, I would consider this blocking. Okay, cool.
E
Workload controllers do spurious rollouts on upgrade: a new field underneath the pod spec API object gets defaulted. This was partially introduced in 1.12 and then made worse in 1.14, and we just got a report of this. The reason we didn't catch it was because our upgrade tests didn't exercise every pod, or every field under the pod.
So if you had added a pod that used a certain subtree of the object, this new default, when you upgraded the API server, would make a new field pop into existence, and all the workload controllers would say: oh, the pod changed, I must need to roll out new stuff. So this is resolving the issue all the way back to 1.12, but it's blocking in 1.15, because every upgrade starting in 1.12 (1.12 to 1.13, on up to 1.14 to 1.15) would encounter this issue.
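A minimal sketch of the mechanism described above, with made-up type and field names rather than the real API types: after an upgrade, API-server defaulting materializes a field that was absent in the stored pod template, so a controller's deep-equality check reports a change and triggers a rollout.

```go
package main

import (
	"fmt"
	"reflect"
)

// PodSpec is a stand-in for the real pod spec; NewField is hypothetical,
// representing whichever field the upgraded API server started defaulting.
type PodSpec struct {
	Image    string
	NewField *bool
}

// applyDefaults mimics API-server defaulting after the upgrade: a nil
// pointer field silently gets a concrete default on read.
func applyDefaults(s *PodSpec) {
	if s.NewField == nil {
		v := false
		s.NewField = &v
	}
}

func main() {
	// The pod template a workload controller recorded before the upgrade.
	stored := PodSpec{Image: "nginx:1.15"}

	// The same template re-read through the upgraded API server.
	current := stored
	applyDefaults(&current)

	// The "did the template change?" check now fails, so the controller
	// rolls out new pods even though the user changed nothing.
	fmt.Println("templates equal:", reflect.DeepEqual(stored, current)) // false
}
```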
E
So we need to fix it in all of the releases. Wow, that sucks. Yes, last night, yeah. And part of the fix is also updating our upgrade test fixtures so that they would have detected this. Cool. And I opened a follow-up issue to do additional checking on container restart counts and making sure the actual pod instances stay the same when we upgrade just the API server. So there's some follow-up work to make our upgrade tests more robust that we'll do early in 1.16.
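A sketch of the kind of extra check that follow-up issue calls for, under the assumption (mine, not stated in the meeting) that the test records pod UIDs and container restart counts before and after a control-plane-only upgrade; the real e2e code would read these from the API server, and all names here are illustrative.

```go
package main

import "fmt"

// podSnapshot holds the identity and restart data the check compares;
// the field choice is illustrative, not the actual test fixture.
type podSnapshot struct {
	UID          string
	RestartCount int32
}

// survivedUpgrade reports whether a pod came through untouched: the same
// instance (same UID) with no container restarts.
func survivedUpgrade(before, after podSnapshot) bool {
	return before.UID == after.UID &&
		before.RestartCount == after.RestartCount
}

func main() {
	before := podSnapshot{UID: "9f2c-0001", RestartCount: 0}
	after := podSnapshot{UID: "9f2c-0001", RestartCount: 0}
	fmt.Println("pod survived upgrade:", survivedUpgrade(before, after))
}
```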
H
DeWall, any updates on this front? Hey, so everything is green now. On the PR side, we had 28 PRs seven days ago and four PRs merged in the last day, but everything seems green. The issues I filed yesterday for splitting out responsibilities to different roles need updates from the owners.
J
So right now we're working on formatting the major themes. We've gone through the external dependencies, and those have been combed through and cleaned up; we'll do one more pass on those closer to the actual release, but most of the work's been done. On the release notes site, relnotes.k8s.io, a lot of that work is nearing completion, and we should be able to link it in the release notes for 1.15.
K
Everyone, today we are green. We had started on our blogs; all that work is kind of ongoing. This week we were able to sync up with the CNCF team, yesterday, to kind of get the inside baseball, the extra steps that we need to perform from them, and the people that we can work with. We have feedback from API Machinery and, I'd say, Storage on some of their blog posts and different definitions. We're just going to keep on continuing to work with them this weekend, fleshing out those blogs. No blockers right now.
B
Congrats. And next is release team updates: they are pretty much the same as yesterday's, with one pending PR, which I suppose is Jordan's PR that's there, and next, I assume we stay yellow for today until the PR is merged. The very big deadline is on Thursday, June 13th, and the target release date is next week Monday. The retro is on the 20th of June, Thursday next week. Successors are to be nominated soonish, and handbooks are to be updated soonish by the leads, before the end of the release and the gap time until the next one starts.
B
Possibly. My own opinion is that it's complicated for CoreDNS; like, I think the suggested version is 1.5.0 anyway. However, it currently isn't provided from the release perspective, because of the configuration changes. So there could be a release note saying the recommended version is 1.5.0; however, it's not provided by default, and neither is an upgrade path from kubeadm. And possibly, yeah, it should be tested on CI before moving forward with the re-bump, correct?
A
I agree with that portion of it, but I think the issue Joshua is raising is that they're bisecting an unknown issue. If we move forward ahead of having that triaged, it may be something else that is more complicated to patch. Once master thaws, the master and cherry-pick branches are going to diverge rapidly, and if we have a new surprise of an issue in scalability's triage there, then it could be a release-delaying event to figure out how to pull that back to the branch.
D
As far as I know... let me see if I can find it really quick. So far, yes: we know that for the last couple of releases they have been having troubles with that sort of scalability test, and the empirical data I can give is that, in the past, releases have been blocked on those tests.
L
Yeah, Nick, sorry, I was muted. I just wanted to mention to DeWall: he'd asked me to comment on the dissolving of the test infra role in 1.16. It's issue 631; I've commented on that. I'm okay with dissolving the test infra role at the end of 1.15; I left some conditions for the dissolution, so...