From YouTube: Kubernetes 1.19 Release Team Meeting 20200729
A
There we go, get your head in the clouds. Thank you, Bob. We are recording. Welcome to the 1.19 release team meeting. I am Taylor Dolezal, and today is July 29th, 2020. Let's dive right into it today: enhancements update.
B
Hi everyone, happy Wednesday. Enhancements is green right now. We are tracking 34 enhancements, with nine being new enhancements graduating to alpha, 15 enhancements graduating to beta, and 10 enhancements graduating to stable, and that's all on the enhancements sheet.
A
Awesome, thank you. Any questions for enhancements?
A
All right, CI signal.
C
Hey folks, I'm still filling out some of the 1.19 stuff, but it's pretty much the same as master's. We're still doing fast forwards, but we are red right now, which is kind of to be expected. If you weren't aware, we introduced resource limits on all the jobs running on community infra, which we expected would cause some issues, so we've been ironing those out and adjusting some of them.
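For context on what those limits look like: prow jobs declare Kubernetes resource requests and limits on their test containers. A minimal sketch of the kind of stanza being tuned here, with the job name, image, and numbers all illustrative rather than taken from the real kubernetes/test-infra configuration:

```yaml
# Illustrative prow periodic fragment; names and numbers are assumptions.
periodics:
  - name: ci-example-verify
    interval: 1h
    spec:
      containers:
        - image: gcr.io/k8s-testimages/kubekins-e2e:latest
          command: ["runner.sh", "make", "verify"]
          resources:
            requests:   # what the scheduler reserves on a node
              cpu: "4"
              memory: 24Gi
            limits:     # hard cap; exceeding memory gets the pod OOM-killed
              cpu: "4"
              memory: 24Gi
```

Setting the numbers below a job's true peak risks OOM kills at the limit, while setting them at the peak of the spikiest subtest can make the job hard to schedule, which is exactly the tension in the next few updates.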
C
The issue linked is a full write-up of what is happening there, but essentially we could bump the memory request back up to something very high, like 42 gigs, I think, is what it was passing at. Essentially, typecheck is spiking. Typecheck was rewritten towards the end of June, and I did some investigating into the metrics before and after that was merged, and it nearly doubled the memory consumption.
C
So
there's
kind
of
like
a
couple
of
different
ways.
We
could
go
there.
We
could
separate
that
out
into
its
own
job
and
kind
of
address,
that
separately
to
reduce
the
loss
of
signal,
or
we
could
revert
that
back
or
we
could
try
and
fix
it
really
quickly,
but
not
exactly
sure
what
all
that
would
take.
But
you
can
read
that
issue
if
you
want
kind
of
like
a
full
breakdown,
including
graphs,
and
that
sort
of
thing
the
other
one
is
conformance
ga
only
and
that
was
having
pod
scheduling
timeouts.
C
This job has a request that's basically for all of the CPU on a node, so if that's not available, then it's going to have pod scheduling timeouts, as you would guess. However, that's not something that was adjusted recently; it was adjusted maybe a week or so ago, and we weren't seeing pod scheduling timeouts before, so I'm guessing it's likely that other jobs are taking up more resources, and that's thus resulting in this one having trouble scheduling.
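A quick way to confirm that theory is to look at the scheduler's events on the pending pods. A sketch, with the namespace and pod names assumed (prow test pods commonly run in a dedicated namespace such as test-pods):

```sh
# List pods stuck in Pending in the (assumed) prow test namespace.
kubectl get pods -n test-pods --field-selector=status.phase=Pending

# The events on a pending pod show why it cannot be placed, e.g.
# "0/12 nodes are available: 12 Insufficient cpu."
kubectl describe pod <pending-pod-name> -n test-pods
```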
C
That being said, it did successfully get scheduled this morning, and it's running right now, so we will keep that updated as we observe it. There were some other flakes related to a duplicate argument and a recent change to Kubernetes' build.py, so that was affecting the builds, and they've recovered pretty well, although build-master had a couple of flakes this morning related to a symlink that had to do with the pod log. I'm not exactly sure what's happening there, but I'm going to follow up with some folks in SIG Testing on that. Other things are just some other flakes.
C
The
cluster
api
gcp
job,
the
issue
that
we've
been
experiencing
there,
that's
causing
it
to
fail,
was
fixed,
but
now
we're
seeing
another
issue.
So
it's
not
running
at
all.
Right
now
the
capg
team
said
that
they
would
take
a
look
at
that
today,
in
slack
so
we'll
track
that
and
yeah
that's
pretty
much
it
I'm
going
to
continue
updating
the
1.19
section
below
but,
as
I
said
it
pretty
much
tracks
right
with
master
right
now,
but
yeah.
C
We are dealing with expected turbulence here, so hopefully we'll bounce back to looking green like we were on Monday.
A
Calm, cool, and collected, just like a pilot. Thank you. Thanks, dude. I did see one question from Jorge about the gigs, if you could satiate our curiosity on the gigs.
C
Yeah, so it spikes to just under 46 gigs on typecheck, and before it was around 24. So actually, when Aaron introduced these resource limits, he was basically just being super generous, and he put it at 46.
C
Then I looked at the metrics, and it was aggregating on an average per five minutes, or something like that, so it was smoothing that out a little bit, since that is a spike on just one subtest. So I recommended we bump it down, which then caused it to have memory errors. Now I think we're at like 24 gigs, which means that it runs everything and then typecheck fails. That's better than nothing getting run, which was the issue before.
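The smoothing pitfall described here is worth spelling out: averaging a memory series over a window hides a short spike that a max would catch. A sketch in PromQL, assuming the standard cAdvisor working-set metric and an illustrative pod label:

```promql
# avg_over_time smooths a short spike, so it under-reports the true peak:
avg_over_time(container_memory_working_set_bytes{pod=~"typecheck-.*"}[5m])

# max_over_time keeps the peak, which is what the request has to cover:
max_over_time(container_memory_working_set_bytes{pod=~"typecheck-.*"}[5m])
```

Sizing the request off the averaged series is what made a lower number look safe even though the one subtest peaks near 46 gigs.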
C
But obviously that is a lot of memory, and we don't want to be doing that, so we'll have to see how to address it.
A
All right, thank you. Dan, bug triage.
D
Yeah, our numbers overall are slightly down, but the number of open PRs is up. I haven't dug into the list, but I assume people are trying to close the open issues and are working towards that. Otherwise, the numbers for the week ahead are about what we'd expect. Obviously, we need to close some of these out, but most of the obvious issues that should get pushed have gotten pushed so far.
E
Hi everyone, hope everyone's doing well. We are green; the 1.19 branch is healthy. Like last week, I tried to generate the APIs and ran into an issue where I couldn't pull the references for the 1.19 branch from the Kubernetes code base, but the docs handbook says that they'll update the field to 1.19 in the future release. So I saw some errors, and I'm going to sync with Jim Angel today or tomorrow.
E
I saw a post that it's going to be his last day at GM today, so I'm guessing he's going to be busy. If he's busy, I will sync up with someone else from the SIG Docs team. He has been helping me out with all these branch issues and knowledge transfer, so I'm going to check in with them first, and I will keep you all posted on that. Any questions?
F
Hi everyone. We're yellow right now because we haven't yet generated the notes for rc.3. That's running right now, and they should be up in the next few hours.
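For reference, the notes come out of the release-notes tool in kubernetes/release, which walks the commits between two revisions and collects the release-note blocks from merged PRs. A rough sketch of an invocation; the exact flag names are from memory and may differ between versions of the tool:

```sh
# Requires a GitHub token to look up merged PRs; all values are placeholders.
export GITHUB_TOKEN=<token>
release-notes \
  --start-sha=<sha of v1.19.0-rc.2> \
  --end-sha=<sha of v1.19.0-rc.3> \
  --output=release-notes-draft.md
```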
A
Awesome, thank you so much, Adolfo. Any questions for release notes?
A
Wonderful. Let's go over to comms. I didn't see Max online, but I didn't know if someone else was on for comms.
A
All right, no worries. I think we're green there; I didn't see anything that would move us otherwise, so keep on keeping on, comms. Release branch management, with Carlos.
G
I was having some issues unmuting myself. Okay, today we released 1.19.0-rc.3, and currently we are facing an issue where it looks like, in the image they made for the kube-apiserver, the git commit in the container binary is not matching the one we released.
G
It looks like it's getting the binary from the mock stage and not from the actual official stage. There is an issue linked in the meeting notes where we are tracking that, and Sasha did some investigations, and Adolfo did some investigations as well; it looks like we are getting it from the mock stage and not from the official stage. We need some more time to debug and see what's going on, yeah.
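One way to see the mismatch is to compare the version metadata baked into the published image against the commit the release tag points at. A sketch; the image name and tag are illustrative, and --version=raw is assumed to print the full version struct, including GitCommit, as the component-base version flag normally does:

```sh
# Print the version info embedded in the published kube-apiserver binary;
# the raw form includes the GitCommit it was built from.
docker run --rm k8s.gcr.io/kube-apiserver:v1.19.0-rc.3 \
  kube-apiserver --version=raw

# Compare against the commit the release tag actually points at,
# in a local kubernetes/kubernetes checkout:
git -C kubernetes rev-parse v1.19.0-rc.3^{commit}
```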
A
Awesome, thank you very much, Carlos, and congrats on all your certifications. You keep LinkedIn fun to scroll through, so thank you very much for that. Thank you.
A
Sweet. Tim is not able to join us today, but thank you for being an awesome emeritus advisor, Tim. My update: I don't have too much for you, just that you all are doing a fantastic job. I don't mean to sound like a broken record, but seriously, that is absolutely the case.
A
There's
a
lot
to
get
cleaned
up
and
pushed
through
here
in
the
final
weeks
of
this
release.
Next
week
we
start
our
burn
down
meetings
daily,
so
we'll
just
have
a
little
bit
more
hoping
to
have
a
little
more
signal
on
that
hoping
to
be
available
to
help
you
out
in
getting
things
closed
out.
We
are
going
on
that
two-week
break
after
that,
starting
on
the
10th
and
then
getting
back
to
business
on
the
24th.
I
believe
august
6th.
A
On August 6th, I believe, we have the cherry-pick deadline and test freeze, and yeah, that is it. SIG Scalability: there is no update listed; however, I assume that's very much due to the QoS updates, you know, just those memory limits and requests with testing. I expect it to be in pretty much the same state as CI signal, so I'll reach out to that group and see if we can't get anything added on that front. Open discussion: we do have our retro link, so please make sure to add anything there.
A
If
you
have
not
already-
and
I
am-
and
I
do
see,
lori
typing
out
a
note-
prioritization
sessions
results
are
moving
forward.
So
thank
you
very
much.
Lori.
Is
there
anything
that
you
want
to
talk
to
on
that
front
at
all.
H
Yeah, I mentioned it yesterday at the SIG Release meeting and thought I would share it here as well: the results of the breakout sessions that many of us did together a little while back are bearing fruit, and there are some actions and activities in flight at the moment. There was a discussion the other day between Nabarun, Tim, and me about asynchronous teams, and David McKay, a former release team member, has expressed interest in helping us with that, because he actually filed a GitHub issue expressing the need for more asynchronous teams.
H
So I have a lot of notes that I basically need to summarize into some sort of structured proposal, then share with Nabarun and Tim, and then we'll share it with all of you. I just want to make sure my comments are accurately representing what they said. The other thing is that there has been a lot of talk about the CI Signal subproject and just policies around CI.
H
As you all know, I've been spending the day compiling all of the notes from all of the sources where they exist: agenda items, GitHub issues, documents. There's a lot. There's a meeting later today where many of us will get together and, I guess, look through all of those notes, but I don't know how we're going to do that, because there are like 15 pages right now.
H
So if you have any time, especially some of you who are really specialized in this topic, it would be great if you could help look at that document. I can put the link in here, but basically: how can we consolidate some of these items? Because there's no way we can get through all of that in a one-hour meeting.
H
I guess what I would suggest is an outline of nested items: what are the high-level objectives, then what are the sub-items we would do in support of those objectives, and then what are the detailed items we would pursue in honor of those sub-items. You know, going from big to smaller.
A
Gotcha, gotcha, cool. Any questions for Lori on those efforts, or anything else going on with the program management for SIG Release?
A
And thank you so much for doing that, Lori. Chiseling like Michelangelo on the David.
A
Thank you, Lori. Moving on into open discussion: does anyone have anything they'd like to talk about or discuss?
A
All right, well, everyone, I wish you all a wonderful Wednesday. I will see you again on Friday. Have a fantastic couple of days, and yeah, keep being awesome. Thank you very much, everybody. I'm going.