From YouTube: Kubernetes 1.14 - Week 8 - Burndown 2019-02-27

B: So now that we're in burndown, I anticipate we probably want to trim our meeting notes template a little bit. I'll place the link to that in the chat here if you need handy access to it. But I think burndown generally means: let's start paying a lot more attention to the numbers, see if they're headed in the right direction, and if they're not headed in the right direction, what are the particular issues or pull requests that you specifically are having problems with.
B: So with that, the only issue I have a problem with right now is that I really need some fun, inventive memes to talk about code freeze. So if anybody has any ideas or suggestions, I will welcome any and all takers. My hope was to have a bunch of them to flash up in front of the community during tomorrow's community meeting update. If you leave it up to me, you're just going to get a bunch of boring "brace yourself" memes. So that's pretty much all that I have concretely. Claire?
E: Sounds good, hello. A quick run through the numbers as of five minutes ago: we're looking at three master-blocking jobs, five master-upgrade jobs, and three 1.14-blocking jobs failing at the moment. There's good news and bad news. The good news is that most of them, the vast majority of them, are fairly new, and we have had consistently quite good activity and quite good triaging and ownership from the SIGs to go ahead and troubleshoot them.
E: So it's made it quite hard to actually point somebody to it and enable them to fix it. And just a couple of heads-ups: I've started using the critical-urgent priority label for things that, one, have been failing for a while, and two, are failing across different dashboards, or are giving us red for different sets of suites. And also, as you may have seen in the CI signal report today, there is now a board of issues that you can look at. It's the project board that we experimented with. I'll...
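
[Note: the critical-urgent priority mentioned here corresponds to the priority/critical-urgent label in the Kubernetes repos, which is normally applied with Prow comment commands on the tracking issue. A minimal sketch; the issue being commented on is hypothetical:

    /priority critical-urgent
    /kind failing-test

Only the label names are fixed; which issue gets them is up to the CI signal lead.]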
E: I think so. I'm going to just check before the call... On the last one, it seems that there's a plan for a solution, and the last activity was about an hour ago, so it doesn't feel... I'd be very worried if it was stuck somewhere with nothing happening around it. The main reason why it looks like more is just because it's replicated in many different dashboards.
B: My gut tells me that historically the upgrade jobs have always been failing at this point in the release cycle? I can't remember. But the fact that almost all of them are failing is not super great, so I agree. In terms of priority, I'd be focusing on getting the release-master-blocking dashboard back to green, and [unclear] is definitely the worst offender there.
B: It looks like something caused the alpha-features job to recently start failing, so maybe that's a really easy bisect. Yeah, anything I can do to help out there, let me know. I feel like it's time to start chasing after people. My plan is to show this during the community meeting and make sure that, for everything that's big and bright and red and could be blocking us during code freeze, I know that somebody is actually working on it. Yeah.
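
[Note: the "easy bisect" idea amounts to walking the commit range between the job's last green run and its first red run. A minimal sketch with git; the SHA and the test invocation are placeholders:

    git bisect start
    git bisect bad HEAD                 # first known-failing state
    git bisect good <last-green-sha>    # last commit where the job was green
    # git checks out a midpoint commit; run the failing suite there, then mark it:
    #   git bisect good    (if it passes)
    #   git bisect bad     (if it fails)
    git bisect reset                    # return to the original branch when done
]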
F: So this week we're really starting our work in earnest. Before, I've talked about the fact that we kind of had a bit of a lull compared to the other groups. So Niko has begun triaging some of the bugs — hence, you know, the name of our group — and has given the shadows some examples of what he's looking for and what we can do, and we have a spreadsheet that we're using to go through the triage.
B: Okay, just to reiterate what we talked about at the release team meeting on Monday: my gut tells me we don't actually have the 1.14 milestone on all of the issues that we should have that milestone on, and this is the time when it's important to err on the side of more issues having that milestone than not. The CI signal issues are definitely super obvious and should always be tagged as 1.14.
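
[Note: the milestone is likewise set with a Prow comment command on each issue, typically restricted to milestone maintainers such as the release team; for example:

    /milestone v1.14
    # and, if triage later decides the issue does not block the release:
    /milestone clear
]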
G: Hey, so I just had one big announcement: Gubernator is now officially being replaced by Spyglass after the lazy consensus period. It's still there — there's a link inside Spyglass if you want to use Gubernator for some reason — but now all the links will go to Spyglass instead. I also happened to query the number of PRs merged over the last two days, since Monday, and it was 82, so that's pretty good.
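
[Note: a merged-PR count like that can be reproduced with a GitHub search query; a sketch, assuming "Monday" here is 2019-02-25:

    repo:kubernetes/kubernetes is:pr is:merged merged:>=2019-02-25
]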
B: Yeah, I didn't have time to show it yesterday and I don't have it prepped today, but for what it's worth, I'm starting to use devstats to take a look at the relative rate of PR merges for this release cycle compared to the release cycle a year ago. It's kind of unfair to compare a Q1 release to a Q4 release because of the holiday slump in Q4, and then it's kind of unfair to compare it to a Q3 release because we're coming up from a slump as well.
B: But I would anticipate a large spike of activity in the next week, based on what I've seen before. I'll try and have that ready to show on Friday. And Spyglass is awesome — please, if you use it and you find something weird, open issues and give feedback in SIG Testing. We're super appreciative of that.
A: Hi, I'm here. I dropped a message in the chat because I'm in transit, so connectivity might be limited, but we don't actually have many updates since Monday. Jeff presented their release notes — or, like, the UI — to the release team, well, to SIG Release, at the SIG Release meeting yesterday. But that's pretty much it.
B: I think that's okay. Personally, my opinion, based on everything that Jeff has been talking about: I feel like he's gotten some broad agreement from folks that we should try and find a different way of doing things. And so if there is a contrary or different way of doing things that's going to land, then changes to that document are welcome. I'll probably look to see — I'll probably ask that we have an update on this on Friday.
C: Well, we cut beta.1 yesterday with one of my shadows. Not much to say here, it went smooth. One thing we did differently this time around: we added some screenshots of failing tests — great stuff — just before we were cutting, and my ask is to just check on that and let me know if it's useful. I have some ideas of how this can be automated in the release process, basically.
C: There was some issue from Lubomir from SIG Cluster Lifecycle. I don't understand the full picture, but it seems like some Bazel job did not publish stuff where it should publish stuff. As far as I understand, that's not a problem with the release tooling, but I will try to catch up with Lubomir. He did not respond yet in Slack, but I think I will just check with him what the issue really was. It seems to be resolved by a PR from him, but yeah, I'll check on that.
B: Wow, this has been like one of the fastest burndowns ever. I hope they're always this fast. Though, for what it's worth, what I have seen historically is that we start to dive into the details on some of these specific issues — so I appreciate that we're starting to talk about them, and I trust that you're going to go poke people, but I'll start asking more and more probing questions if it seems like the same problem keeps coming up and keeps not quite being resolved.
B: Two things came up on Monday that I've had some discussion about since then. One was: are we getting bumped to golang 1.12 or not? I talked about this on Monday and I also talked about this at the SIG Release meeting, and the consensus appears to be that we're not going to do it. The main reason being: I look to SIG Scalability for large changes like golang or the etcd version or other stuff that could potentially impact performance, and the 5,000-node scalability jobs take a significant amount of time to run.
B: So it's important to do these sorts of changes when everything is green and clean and quiet, and I feel like our window for doing that has effectively just passed. Had this happened a week ago, maybe a little more, it would have been more doable. It's also kicked off an ongoing discussion of what to do about the version of golang used to build earlier Kubernetes releases, which isn't so much of a concern to this team, but just to let you know: Kubernetes 1.11 and 1.12 are now built using versions of Go that are no longer supported.
B: This feels like a security risk, because what happens if there's a CVE relevant to golang that comes out and we need to address it? At that point it seems like we're going to have to find a way to build our supported versions of Kubernetes using something that is still supported. So there's that. The other thing was a discussion around kubeadm: the existing set of jobs in release-master-blocking that exercise kubeadm are based on a subproject called kubernetes-anywhere, and basically the maintainer of kubernetes-anywhere and the maintainers of kubeadm...
B: It has already caught a number of kubeadm bugs in the past, and so I'll be working to sort of demote the kubernetes-anywhere-based job to release-informing and add the kind job to release-informing, to give us some amount of time to make sure that, for hysteresis purposes, we've actually gathered enough data that everything looks good, and then we can promote it to blocking next week before we enter code freeze. This is also reminding me —
B: The changes we're making to master probably also need to be ported to the other release branches, so I'll make sure we have an issue open on that and possibly help out in shuffling around jobs. I know, Maria, you opened — I think — an issue about adding the kops AWS job back to release-master-blocking, and I want to work to resolve that today.