From YouTube: k8s 1.15 Burndown 20190613
Description
See https://github.com/kubernetes/sig-release/tree/master/releases/release-1.15 for more information
So for today's CI signal: on the must-have blocking dashboard there are a couple of things, but nothing that looks too serious. One important item is CoreDNS: at least the last time I checked, there were two jobs that were marked as failing, and they were failing due to a couple of tests. Those failures don't seem to be flakes, so we'll just continue observing and see if they actually resolve at some point; hopefully with more time on the dashboard they will.
The 1.15-blocking dashboard is going to start clearing out. Flakes again, but no persistent failures that we found, and that leaves us with master-informing, which has been in a funny spot during the release. Right now, last time I checked, six jobs were adding some noise to the signal.
So really the big issue that we have in master-informing right now is the one from SIG Scalability, which was due to klog. There was a lot of conversation yesterday, but the TL;DR is that everything was reverted to the state we started this release cycle in. So the klog version is back to what 1.14 is using.
What Kubernetes 1.14 is using, rather than the latest. The latest version was trying to solve a problem where, when you want everything to go into a single file, you log everything to that file, and that introduced a performance regression. It showed up in a scalability test, which affects clusters that are around 5,000 nodes large, so a bunch of PRs got reverted, and I don't think this should be anything that we should be blocked by.
Actually, it directly affects something important. What klog was doing there ties into the effort to move the container images for the master components onto a distroless container image: instead of using Debian, and having to touch the image every time there was a CVE or something of the sort, they were trying to go to distroless for these images, reducing the surface area of the artifacts that we actually produce from Kubernetes.
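As a rough illustration of the distroless approach described above (the image name, binary, and build stage here are hypothetical, not the actual Kubernetes image build):

```dockerfile
# Hypothetical sketch: packaging a control-plane binary on a distroless base
# instead of Debian, shrinking the surface that needs rebuilds on every CVE.
FROM golang:1.12 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /kube-component ./cmd/kube-component

# gcr.io/distroless/static ships no shell and no package manager.
FROM gcr.io/distroless/static
COPY --from=build /kube-component /kube-component
ENTRYPOINT ["/kube-component"]
```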
So some work is going to have to be done there to actually get klog to work without causing performance issues. There was an issue created in the repo this morning about it. Other than that, flakes. On master-informing, nothing looks too serious right now, and that leaves us with the latest issue.
C
The
Jordana
ticket,
by
brought
up
to
us
this
morning
and
based
on
CI
Ted
shops,
are
not
running
all
unit
tests
need
to
catch
up
more
on
that,
but
I
think
it
mainly
affects
is
six
here
is
six
here
like
a
cannon
which
child
it
aim?
If
shortened
is
one
of
mine,
please
correct
me,
but
she
mentioned,
and
this
shouldn't
be
anything
visual
in.
We actually did some more investigation; sorry for kind of raising the alarm early on this. There were two tests specifically that were failing, and we tracked them down. One of them: the test itself decided to skip if it couldn't locate Go source, which is the case in Bazel environments. The other: the test passes on Linux and fails on non-Linux environments, but it's a test-only issue. So there is one PR that's open to fix the CLI bug, but it doesn't need to block anything; it's a small PR that can be cherry-picked.
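The cherry-pick flow mentioned above can be sketched with a toy repository (the file and commit names are illustrative; the real Kubernetes process goes through a cherry-pick PR against the release branch rather than a direct push):

```shell
# Toy illustration of backporting a single fix commit onto a release branch.
set -e
rm -rf /tmp/cherry-demo && mkdir -p /tmp/cherry-demo && cd /tmp/cherry-demo
git init -q
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"
git branch release-1.15                # release branch forks from here

echo "fixed" > cli.txt                 # the small CLI fix lands on the dev branch
git add cli.txt
git -c user.email=demo@example.com -c user.name=demo \
    commit -q -m "fix test-only CLI bug"
FIX_SHA=$(git rev-parse HEAD)

git checkout -q release-1.15           # switch to the release branch
git -c user.email=demo@example.com -c user.name=demo \
    cherry-pick "$FIX_SHA" >/dev/null  # apply just that one commit
cat cli.txt                            # the fix now exists on release-1.15
```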
I had a question about master-informing. It looks like the upgrade and downgrade tests are failing. upgrade-master-parallel is looking good; that went green once we merged the kubectl test fix, but it looks like upgrade-cluster and downgrade-cluster are pretty much solid red.
C
Everything
I
was
failing
up
to
until
610
it
was
a.
It
was
negated
to
the
core
business
issue.
They
latest
one
of
the
a
one
of
the
runs
from
yesterday
was
actually
it
was
actually
green.
So
that's
good
news
in
the
latest
and
the
latest
Ron
actually
had
a
couple
days
bail.
It
seems
that
he
was
to
tool
I'm
out
because
no
Jerry
I
don't
see
any
actual
into
it.
This
hadn't
taken
anything
wrong
with
it
and
master
in
Hungary
cluster.
C
Hey, let me look at it. I think the latest one failed because of, again, the CoreDNS issue. There's a DNS configMap nameserver test failure, and that is related to the bad configuration with CoreDNS. These tests always take a long time to actually clear out, because they don't run as frequently as we would want them to, and one run can take an entire day sometimes.
So we're looking pretty good from a bug triage perspective. We still have the flaky test issues open, but George covered those. Let's see, I was just looking at the list; there's one open PR, and I don't think it's blocking.
Hi everyone. Things are looking pretty good from a testing perspective. We have, as of the last few minutes, three PRs merging since yesterday: the addition of the upstream option in the CoreDNS ConfigMap, which fixed an issue that we knew about from CI signal; the revert of the klog version; and, just a few minutes ago, a PR to fix the flag test.
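For context, the upstream option lives in the kubernetes plugin stanza of the Corefile that kubeadm stores in the coredns ConfigMap. A sketch of roughly what that stanza looked like around this time (the exact values here are illustrative, not the merged PR's contents):

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        upstream
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
}
```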
All right, I think we can then move on to my update. I will say I'm feeling yellow, just because we still haven't entered code thaw and I would like to see some greener signals on Testgrid, but I'm very hopeful that we will be entering code thaw today and that the failures will slowly clear up.
We don't have Cheryl on; hopefully she'll be able to join tomorrow, but I'll make sure that she knows all the things that need to be done on release day. If I remember from last cycle, there was a doc that Erin and Hannes had. If Cheryl does not have it, I want to make sure she has access to it; I can check with her offline on that.