From YouTube: Kubernetes 1.10 Release Burndown Meeting 20180314
Description
A: Hello, everybody. This is Wednesday, March 14th, 2018, and this is the Kubernetes 1.10 release burndown meeting. I am Jaice Singer DuMars, your host and release team leader. The agenda we'll be following along with is available, after the fact as well, at bit.ly/k8s110-burndown, and you can follow all the notes there.
A: We're going to go through the timeline real quick. Based on decisions yesterday, things have moved around some, so this week we're really working on issue triage, cleanup, and resolution, trying to deal with some of the longer-term issues that are tricky and also have lots of different moving parts. There are some staffing things happening: Shyam is doing a handoff to somebody; it was going to be [name unclear].
A: That person is sick and not able to be part of the effort right now, so Shyam is working on finding somebody. Medi is going to be on vacation, so 1.9 branch duties are being handed off to Joe Betz, and there may be some other shifting of people as we move forward. But that's the big stuff right now. Timeline for next week: again, there was a shift, so the end of code freeze is actually scheduled for Monday. That involves all the bits that we talked about.
A: We covered those prior, so I won't go into detail there, but essentially sometime Monday we'll lift code freeze, and I'll notify the kubernetes-dev mailing list of that. I put week 13 in the timeline here because essentially I want to have the two-week view, and week 13 will now have the release on Monday, assuming that everything goes as planned.
A: That will also involve the release of 1.11.0-alpha.1, because everybody's actually working against alpha.0 right now, plus the meetings that we'll have around the media stuff, and we'll do the handoff to the 1.10 patch manager and all that good stuff. Also in week 13 will be the retro, and that will be in place of the normal community meeting, or maybe it'll be an abridged community meeting followed by the retro, depending on what that looks like. So, any questions on the schedule?
A: All right, moving forward: metrics. Metrics are trending up slightly, which is not necessarily good. Critical issues right now are at 16; that's up by 2 since yesterday. Now, we have a pretty good handle on what these are, but something I'm noticing is that they're starting to spread out a little bit in terms of effort and the people working on them and such, so I'll have Josh go through a more detailed issue rundown here in a bit, but this is just the general stuff. Other types of tests and other things are kind of about the same.
A: The open PRs: there's a whole bunch here that needs some attention. I put the release-blocking label on one PR, and obviously the release-blocking label is not really used by anything, but I just wanted a quick way to identify this one PR, which is the kubeadm test stuff that depends on the 1.10 branch, and we'll get that merged before 1.10 goes. What else? We've got critical PRs; I called out some things here.
A: There's one that's in progress for the autoscaler env vars to kube-env; that's in progress and looks like it's being worked. There's the fix for the issue with a race condition during population, which is #61071; there's a failing test there. It looks like it has something to do with the pods not getting deleted, or the namespace not actually being deleted because there's a pod in it, or something. So if anybody wants to look at the test for that and has more insight than I have, that would be fantastic.
A
Important
soon,
there's
a
few
things
that
need
LG
TMS
one
was
ready
to
merge.
So
important
students
are
looking
good
there's
a
couple
of
long-term
PRS.
If
you're
following
along
the
agenda,
one
is
ready
to
merge,
it
probably
will
merge
today
and
then
the
other
one
needs
an
LG
TM.
So
I
will
post
a
link
to
that
one.
If
anybody
wants
to
see
if
they
can
run
over
LG
TMI.
B: On Jake: he said he might take up this bisection, but I will only be able to confirm that by tomorrow. I'll try to add him to our Slack group. Till then I will be doing the bisection; like, till tomorrow I will be doing it, and I'll try to finish probably one or two more runs. So yeah, this is still, I'd say, up in the air for now, but hopefully I'll be able to give an update by tomorrow. Or, if Wojtek is back, that probably is the ideal situation.
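The bisection workflow described above amounts to a binary search over the suspect commit range. A minimal sketch of the idea, assuming a monotonic history (all good, then all bad) and a hypothetical `is_good` check standing in for an actual scalability test run:

```python
def bisect_first_bad(commits, is_good):
    """Binary-search for the first commit where is_good() flips to False.

    Assumes commits[0] is good, commits[-1] is bad, and the history is
    monotonic (good ... good, bad ... bad), just like `git bisect`.
    """
    lo, hi = 0, len(commits) - 1  # lo is known-good, hi is known-bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_good(commits[mid]):
            lo = mid  # regression is after mid
        else:
            hi = mid  # regression is at or before mid
    return commits[hi]

# Toy history: the regression lands at commit "d".
history = ["a", "b", "c", "d", "e", "f"]
good = {"a", "b", "c"}
print(bisect_first_bad(history, lambda c: c in good))  # -> d
```

Each probe halves the remaining range, which is why finishing "one or two more" runs can narrow a multi-day window substantially.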
B: And regarding the other scalability regression, #60500 — that's the one about the increased memory usage of the API server and controller manager — today I did quite some debugging along with someone else from the instrumentation team, Daniel, and it's more or less all known now, like the issue with the API server and its increased memory usage.
B
That
is
already
understood,
like
I
told
earlier,
about
audit
logging
thing
and
I
send
the
PR
and
the
PR
is
already
in
that
PR,
basically
bumps
the
threshold
for
our
scalability
test
and
yes,
so
that
part
is
resolved
and
the
other
part
about
controller
manager.
I
have
identified
the
PR
and
be
more
or
less
know
the
issue
already
of
why
this
increase
is
in
controller
manager,
so
maybe
I
can
just
quickly
explain
it.
B
So
there
is
just
newly
added
component
called
fluent
AG
CP
scalar,
which
basically
sets
the
resource
requests
and
limits
for
the
flu
indi
demon
set
based
on
some
policy,
and
what
happened
was
there?
Was
this
PR
which
actually
changed
the
scalar
rook
to
look
at
the
right
flu
nd
demon
set,
which
it
was
not
doing
earlier,
so
it
actually
started
behaving
as
intended
after
this
fear,
which
is
when
we
started
seeing
this
problem.
So
what
happens
is
there
seems
to
be?
B
This
is
not
completely
understood
yet,
but
like
at
a
high
level
from
the
debugging
we
did.
What
we
identified
was
the
scalar
is
trying
to
patch
these
pass.
The
resource
requests
in
the
demon
set
object
continuously
and
simultaneously
if
the
add-on
manager
is
also
trying
to
patch,
probably
because
it's
trying
to
reconcile
the
the
the
object
with
the
manifest
the
add-on
has
in
which
the
resource
requests
is
not
set.
So
there
are
these
patches
continuously,
which
is
basically
making
the
demon
set
controller.
B
Do
a
lot
of
work,
so
so
so
the
effect
that
you
are
seeing
is
so
the
increase
in
the
condo
manager
memory
usage
is
because
of
the
demon
set
controller
which
is
basically
in
charge
of
the
fluently
demon
set.
And
what
is
happening
is
a
lot
of
these
flu
and
easy
DCP
pods
are
getting
are
getting
deleted,
they're
getting
deleted
and
recreated
continuously,
and
this
is
evident
from
the
logs
like
from
before
and
after
this
PR.
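The fight described above can be modeled as two reconcile loops with incompatible desired states: each one's write re-triggers the other, and every write means more work for the DaemonSet controller. A toy sketch of that dynamic (the component names, field, and values are illustrative, not the real manifests):

```python
def simulate(rounds):
    """Model two controllers reconciling one DaemonSet field toward
    different desired states: a scaler sets resource requests from its
    policy, while an add-on manager reconciles back to a manifest in
    which no requests are set. Every observed change triggers a patch."""
    spec = {"requests": None}
    writes = 0  # each write is churn the DaemonSet controller must absorb
    for _ in range(rounds):
        if spec["requests"] != "100m":   # scaler's reconcile step
            spec["requests"] = "100m"
            writes += 1
        if spec["requests"] is not None:  # add-on manager's reconcile step
            spec["requests"] = None
            writes += 1
    return writes

print(simulate(10))  # every round produces two conflicting writes -> 20
```

In the real system the fix is to make the two components agree on who owns the field; the toy loop only stops because `rounds` is bounded.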
B: Even if the resource requests of the object haven't been changed, currently it is trying to do that continuously, and because of this we're basically seeing this unexpected behavior. So yeah, I probably gave a too-verbose explanation of this, but we at least understand where the problem lies, or at least part of it, and hopefully this should be fixed soon.
B
We
fixed
the
like.
We
understood
why
the
increases
in
memory
is
seen
for
both
API
server
and
controller
manager
for
API
server,
like
I,
even
sent
the
PR,
which
basically
bumps
these
ratios
for
controller
manager,
that
that
means
to
be
fixed
soon
by
Daniel
and
I.
Think
we
should
file
a
separate
bar,
because
this
sounds
like
a
correctness
issue
as
well,
and
that
should
be
a
release.
Blocker.
B: Oh yes, I just talked to Daniel, and I think he is right now trying to summarize our findings in another comment in that bug, so I'll probably be able to ping you once it's done. So yeah, we should probably move that to a separate issue, because we now know that it's not really — I mean, it's related to performance, but it's an understood issue with the instrumentation code.
C: So we do still have one feature issue, which is a little troublesome. Because of a glitch with how we track issues via spreadsheets, Tim and I have not been nagging the feature owner — that's the "schedule DaemonSet pods" feature. We have not been nagging that feature owner the way that we did every other feature owner during the early stages of code freeze.
C: So my general thinking on this is: I'm waiting to hear back from Klaus and sig-scheduling, who seem to have been doing all the work on this, and unless what they say is "oh yes, we're almost finished, we'll get this in by tomorrow," the answer is to bump it, because there are still two technical PRs and the docs PR open on that feature, and we are, you know, past feature freeze and into code freeze, etc.
C: And then a couple of things — I'm just trying to get an idea of the status of this, because, yes, there's still work going on on it, but the work appears to be targeted at completion at some point a couple of weeks from now, which is obviously not going to work for 1.10. But I have not heard back; where I've posted is actually the email thread in sig-scheduling.
C: You just heard from Shyam about all the performance things; now the upgrade/downgrade things. One of the reasons that we increased the number of issues is that Jordan reopened some of the upgrade/downgrade bugs that were closed, because tests are failing again, and he's got several PRs open to basically not run certain tests that are expected to fail.
C: But I think we have two issues there. One is that we don't run the upgrade tests until code freeze, and the second issue is this business about needing to patch the previous versions' upgrade tests in order for the upgrade tests to pass. We need to come up with some methodology that doesn't involve that, because it's a lot of extra work, and extra work means extra delays.
G: To clarify: the upgrade test from the previous release to master is always running; it's just that nobody is looking at it until the release starts. And normally the upgrade itself will be failing, I think, the entire time until the release starts, and then we see: hey, it's not upgrading anything — it's not even upgrading the master or the cluster.
G: So the first step is to chase down why it's not upgrading, which might take a while; then, after a few days, we fix the actual upgrade issue, and then we see another ton of actual test failures after the upgrade. It would help if we could have a dedicated person for each release, even before the release starts, just —
B: So, on a side note, I actually wrote a design proposal to introduce some formal scalability processes, to basically avoid these situations where regressions get in in the first place, and, if they indeed do get in, how we get them fixed as soon as possible. Let me actually share a link here. This is something we definitely need to think about, because one of the problems with scalability we are seeing right now is that the scalability effort itself —
B
Isn't
scaling
really
well
and
if
there
are
not
many
people
who
are
actually
actively
involved
or,
like
the
let's
say,
the
expertise
to
be
able
to?
There
are
probably
a
few
people
who
have
the
expertise
to
do
this,
but
busy
with
a
bunch
of
like
they're,
probably
busy,
with
a
bunch
of
other
things
as
well,
that
the
not
able
to
find
time
for
this
foot
in
general.
Yes,
we
definitely
want
to
avoid
this
situation
that
we
don't
get
past
all
these
regressions
in
the
last
one.
A
No
that's
and
shall
I
understand
that
you've
jumped
into
this
sort
of
the
last
minute
and
I
appreciate
that.
So,
if
there's
any
kind
of
air
cover
we
can
provide-
or
if
there's
somebody
at
Google
that
you
want
me
to
just
kind
of
touch
base
with
and
kind
of,
let
them
know
that
you've
been
doing
this
I'm
happy
to
do
that.
I'm.
That
goes
for
anybody
who's
working
on
this
release.
If
you
have
somebody
that
you're
you
know,
there's
your
line,
manager
or
somebody.
G: Mm-hmm. Then the one other update is: I'm working on a kubemark presubmit image fix to make it not try to pull a new image after 24 hours. This is not breaking master, but it's breaking the 1.7 and 1.8 branches, which are using a different Go version and cannot compile, so this fix going in is to unblock those two branches.
B
First,
actually,
with
regard
to
this
Q
mark,
like
one
of
the
things
like
one
of
the
ideas
I
had
in
my
mind,
was
maybe
we
are
fine
with
disabling
these
chests
disabling
the
cube
artists
against
those
release
branches.
Given
all
this
I
mean
I
agree
that
it's
not
the
most
ideal
solution
here
and
if
you
are
able
to
get
get
your
fix
working,
then
that's
actually
great.
We
probably
will
be.
G: So what I did is, I nuked the kubekins image last night, so that it's not going to pull it, at least today, for 24 hours. And I also added a kubemark presubmit canary job — which, that's actually a good call, yeah — which will use my image. I'll make sure that job goes green, and then I will fix the actual release job to use the new kubekins. Okay.
G
Cool
for
grease
week,
yeah
I,
think
no
updates.
There
I
think
we
are
still
going
to
pour
back
port.
Some
fixes
back
to
you
online
so
that,
for
at
least
for
this
release,
upgrade
test
can
get
better
signals,
but
maybe
we
need
to
discuss
further
like
in
future
releases.
We
want
you
like
mark
certain
tests
to
be
it's
fine
to
fail.
You
upgrade,
or
we
have
a
list
of
skipping
past
being
upgrade
test.
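One way to realize the "list of tests to skip" idea is the skip-regex approach e2e jobs commonly use: join the known-bad patterns into one regex and filter test names against it. A minimal sketch; the patterns and test names below are made up purely for illustration:

```python
import re

# Hypothetical skip list: tests known to fail across version-skew upgrades.
SKIP_DURING_UPGRADE = [
    r"\[Feature:.*\]",        # feature-gated tests absent from the old release
    r"should recreate pods",  # example name fragment, purely illustrative
]
skip_re = re.compile("|".join(SKIP_DURING_UPGRADE))

def runnable(tests):
    """Return the tests that should still run during an upgrade job."""
    return [t for t in tests if not skip_re.search(t)]

tests = [
    "[sig-apps] Deployment should recreate pods on rollback",
    "[sig-storage] PVC protection [Feature:StorageProtection]",
    "[sig-network] DNS should resolve cluster services",
]
print(runnable(tests))  # only the DNS test survives the skip list
```

Keeping the skip list as data, rather than deleting test cases, makes it easy to review what is being silenced and to shrink the list release by release.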
F: I put some stuff in the notes. I should have known not to be too gleeful about things: we're getting a few odd PRs against the wrong docs branch, which, we understand, is confusing. Fortunately, we don't have a flood of other docs PRs this week, knock on wood, which means it's easy to catch the problems. Related to the PVC protection downgrade issue, I started pulling at the end of a tangled sort of ball of yarn there, I think, since I —
F
Put
that
note
in
the
docks,
I'm
gonna
leave
it
there,
but
I
think
I've
got
the
right
people
identified,
they've,
weighed
in
on
the
downgrade
issue
Pienaar,
but
there's
more
stuff
in
the
docks
that
needs
to
get
cleaned
up.
What
we
are
really
going
to
explain
to
people
what's
going
on
there,
it's
not
really
a
risk
but
meeting
some
help
with
it
and
it's
my
understanding
that
we're
simply
at
the
document
stage
of
things.
That's
issues
not
going
anywhere
for
now
right,
I'm,
just
confirming
what
I've
been
hearing
anyway.
F
I'm
sorry,
I
didn't
fully
understand
the
question.
Sorry,
up
to
this
point,
I've
been
tracking
docks
issues
and
feature
tracking
spreadsheet.
It's
starting
to
look
like
I'd,
better
track
a
few
other
things
like
that
downgrade
issue.
Yes,
because.
F: And this is something that we kind of started making up as we went along in the 1.8 release, so I've seen it through three releases now, and we're basically doing the same thing here for 1.10 that we did for 1.8, for a different issue. And yeah, I'm thinking Nick and I are going to brainstorm a bunch about things we can do.
F: Exactly, and that's what it's for, so yeah, I guess we need to reopen it and get this in. I mean, maybe that's a better place to put my questions too; I don't know, I'll see how the conversation shapes up. This isn't a blocker. You know, worst-case scenario, we fix up the related documentation — not the known issue specifically, but the related stuff that's confusing — and we can fix that up after the release.
A
Okay,
thanks
for
reopening
that
scent,
I'll
I'll
put
that
in
my
list
of
daily
things,
I
look
at
just
to
make
sure
it
isn't
closed.
F
Knew
he
was
in
the
webinar
and
I
couldn't
remember
exactly
what
the
scheduling
was
like
I'll
make
sure
to
circle
with
Nick
about
that
too.
In
case
there's
stuff
that
we
both
need
to
touch.
A
Okay,
great,
thank
you
so
much,
and
he
basically
said
in
his
update
that
the
the
user
are
the
release,
notes
are
in
progress,
things
are
moving
along
nicely.
People
are
contributing
where
they
need
to
tell
about
it.
There'll
be
somewhere
at
it's
done
today
and
if
anybody
else
wants
to
peruse
those
I'm
gonna
spend
some
time
combing
over
those
myself
just
to
see
what's
in
there.
H: I can comment on that. I don't know what it looked like in previous releases, but we put out the call yesterday for "hey, we need clarification on the following things," because you don't want us making it up and trying to assume what you meant on whatever issues were not completely clear. And over the last 24 hours we've had about a 70% turnout of people responding and providing their clarifications, so that seems pretty good to me — but, again, I don't know what it looked like in previous releases.
A: Okay, let's see what's next. Does anybody else have anything that they want to add? I think marketing is basically looking good at this point.
A: All right, let's keep on keeping on; we're in the homestretch. We just need to keep on this and keep tracking all these things. Stay tuned in the release channel on Slack, and for any updates for the group that happen interstitially, please put them in that Slack channel, and we'll stay on top of these things. Thank you so much, everybody, have a good rest of your day, and I'll see you tomorrow at the same time.