From YouTube: Kubernetes 1.19 Release Team Meeting 20200731
A
Welcome, everyone. This is another rendition of the 1.19 release team meeting. It is July, not June, 31st. Thank you. Thank you, Lori. Let's dive straight into it, starting with enhancements.
B
Sorry, happy Friday, everyone. Enhancements is green right now. We have nine enhancements coming in new as alpha, 15 enhancements graduating to beta, and ten enhancements graduating to stable, and it's the same as before, so yay.
A
Thank you. Any questions for enhancements?
C
Hey folks, sorry, my Zoom just dropped out on me, but it looks like I rejoined at the right time. Anyway, speaking of things not working, we are very red right now. If you've noticed, we have seven failing tests on master-blocking. They are all due to the same issue, though, so not quite as much cause for concern.
C
Basically, as you can see from some of the issues listed in the notable events, we switched all of the master-blocking jobs over to using fast builds, and they're having an issue pulling the latest version. The URL exists, but it's getting a 404, and it's not reporting what URL it's hitting. The way we're addressing that is by adding a log message that will tell us what the error message is. However, we have to wait.
C
That's in kubetest, so we have to wait for the auto-bump job every day. Troubleshooting this is kind of painful, because you basically have a 24-hour window to get everything in that you want and then see if you got it right. So hopefully we'll be able to see that today and correct what was happening there.
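To make the failure mode concrete, here is a minimal sketch of the kind of change being described: fetch a "latest build" marker and, on a non-200 response, log the URL that was actually requested so a bare 404 in the job output is debuggable. The URL, function name, and structure are illustrative assumptions, not the actual kubetest code.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

// fetchLatestVersion downloads a latest-build marker file. On a non-200
// response it logs which URL was hit, which is the missing piece in the
// original failure output.
func fetchLatestVersion(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", fmt.Errorf("fetching %s: %w", url, err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		// Include the URL in the log and the error, not just the status code.
		log.Printf("GET %s returned %s", url, resp.Status)
		return "", fmt.Errorf("unexpected status %d for %s", resp.StatusCode, url)
	}

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}

func main() {
	v, err := fetchLatestVersion("https://example.com/ci/latest-fast.txt") // placeholder URL
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("latest version:", v)
}
```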
C
One thing to note is that we can still kind of use the 1.19-blocking board as signal, because we're still doing fast-forwards every day, so it's essentially the same code, and those jobs are not using fast builds, so we're still feeling pretty good there. We also had the scalability team go ahead and revert to not using fast builds, and their job instantly turned green again.
C
So while it's not great to have that much red, we're in an okay place. A couple of other updates: on Spyglass, if you look at a job failure now, it should inform you whether it was due to an infrastructure issue or due to the actual test failing. Previously it would just say that the test failed, even if it was a pod scheduling timeout. So that'll be a little bit nicer, a little bit easier for identifying issues.
C
The only other thing to note: verify-master, and of course 1.19 as well, is looking a lot more green. Since we introduced resource constraints, as I mentioned on Wednesday, that job was using quite a lot of memory, so we reduced the parallelism for the platforms that it was type-checking at the same time. It's been a little more green, but it still just barely hits the memory limit every so often, so we bumped that down again this morning.
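As a rough illustration of the knob being turned here, the sketch below bounds how many per-platform type checks run at once with a simple semaphore; lower concurrency means lower peak memory at the cost of a longer run. The platform list, limit value, and worker body are placeholders, not the real Kubernetes verify scripts.

```go
package main

import (
	"fmt"
	"sync"
)

// typecheck stands in for the memory-hungry per-platform work.
func typecheck(platform string) {
	fmt.Println("type checking", platform)
}

func main() {
	platforms := []string{"linux/amd64", "linux/arm64", "windows/amd64", "darwin/amd64"}

	const maxParallel = 2 // lowered to stay under the job's memory limit
	sem := make(chan struct{}, maxParallel)

	var wg sync.WaitGroup
	for _, p := range platforms {
		wg.Add(1)
		go func(platform string) {
			defer wg.Done()
			sem <- struct{}{}        // acquire a slot
			defer func() { <-sem }() // release it when done
			typecheck(platform)
		}(p)
	}
	wg.Wait()
}
```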
C
That should merge, and things should be pretty good there moving forward. Other than that, it's mostly just tuning some of the resource requirements for the other jobs that are having pod scheduling timeouts, particularly the kind jobs, so you'll see a little more flakiness than usual until we get all of those straightened out. In regards to conformance, the CAPG issue was identified: they basically had an issue with using Python 2 to add the Ansible package, so they just needed Python 3 present in the bootstrap image. Once again, that has to wait for the auto-bump each day, so it's a little bit slower moving there. Outside of that, the Windows jobs are still failing, but that's related to the version markers, and the OpenStack conformance job is still failing to dump the cluster logs, but that's just at the end of a run. While we'd like to fix that, we have no visibility into their CI, and they don't really either, so we'll get that done as soon as possible.
A
Awesome, thank you so much, Dan. I was talking with Bob and Jeremy last night, and I think that we're looking good on the PRs and the issues. I don't think there are any open critical PRs for 1.19, so really fantastic there, and then just some issues, like you said, flaky tests and things of that nature. But awesome. I expect a little bit of turbulence as well with the QoS items, Go 1.15, and the new QoS limits and everything else. So, okay.
C
For sure. If folks are interested in that, there are also a few umbrella issues in test-infra related to the QoS stuff that I can add links to here. That spans a little more than just the release-blocking jobs; that's where we're starting, but it also involves presubmits, and as you've probably noticed, those are flaking quite a bit. So if you're interested in that discussion, there are a number of issues there opened by Aaron, and there are a lot of folks having those discussions across release and testing right now.
C
Laura, you might actually have some things to say about that, if you'd like.
D
That's now been dumped into... or not the whole thing, but the main points have been dumped into a GitHub issue, and the first three priority issues, or policies, now have their own GitHub issues as well. 18530 is the guaranteed QoS pods issue, and last night it looks like Aaron and Tim both added a bunch of the different jobs.
D
So there are some checklists in that issue, and I'll put it in the chat too in case anybody's having trouble finding it, but yeah, you can see that there are a lot of different jobs to address. So, a couple of things, just for the sake of making sure everything related to this policy is covered: this was identified as the priority policy that folks would work on first.
D
I'm just trying to keep all of the information centralized and pointing back to one place, so we're able to find all of the information and all the issues that are being generated.
D
So I left a message in the Slack channel to the effect of: can we turn this 18530 into the landing page, of sorts, for all the subsequent sub-issues that might be created, and just make sure that we link to them? I'm assuming no one's going to object to that; it's already been happening, but as more people get involved, we should make sure that we have a source of truth for everything related to this policy work.
D
A
D
A
E
Yeah, I mean the numbers are going up, but we had a look at the open issues on the board, and some of them are release CI signal related and others are under control, so I'm not really worried about them, and the numbers are not too crazy. So, all good — awesome.
A
Thank you, Jamlica. Any questions for bug triage?
A
F
So, happy Friday, hope everyone's doing well. We are green. I did a mock run of the docs reference generation on my local machine, and Karen Bradshaw from SIG Docs was kind enough to help me tweak the Python script. One more thing she fixed for me was reference.yaml, to clone the correct release branch and everything, and I got it working this morning. For this I'm using release candidate 3, rc.3, to generate the references.
F
So I don't know how it worked in previous releases, and I just want to throw it out there: I think we need to capture in the role handbook that we will be using this particular version of the release candidate. Is there any knowledge about that that I need? I have reached back out to her asking about that.
A
Thank you very much. Moving on to release notes.
G
A
Notes — all right, let's move on to comms with Max.
H
Hey everyone, hope you're doing great so far. I actually moved it to yellow for the release blog. It still looks good; we're just missing one major feature which needs to be described, but I think this is not the biggest issue. I'm also looking forward to getting feedback from CNCF about the ecosystem stuff going on, but I assume they're a little bit under pressure with KubeCon currently. I just gave them a ping to please not forget us for the feature blogs.
H
We have five blogs whose authors actually told us they will be delivered, but none of them have been started yet, so we will reach out to them and remind them. Of course, we know that they also have a lot to do and potentially also need to present stuff at KubeCon, but we'll also gently remind them that they shouldn't forget about the rest of us.
A
Awesome, sounds good to me. Thank you, Max. I do know that they're reshuffling a few roles over at CNCF, and I think the director of marketing role is still being decided on, if I'm not mistaken, so there might be a little bit of latency due to that as well, but I'm more than happy to help ping that team if you'd like.
A
J
Hello. We don't have any big news. We still have this open issue regarding the wrong git commit in the kube-apiserver. This is only affecting the container image so far. I need to get an update with Sasha, maybe today or tomorrow morning, and I think after we find the issue and the fix, maybe an rc.4 is needed to validate that as well.
J
A
Cool. Thank you, Carlos. Any questions?
A
Awesome, thank you. Tim is not on the line, so I'm gonna go ahead and jump into milestone updates. Nothing much has changed here. I sent out invites for Tuesday and Thursday; I initially put Tuesday on the wrong calendar and switched it over to the Kubernetes calendar, so I've gotten a lot of fun bounce-back emails.
A
If you have not received the Tuesday or Thursday meeting invites, please let me know and I can get you all fixed up on that front. I will not be able to make the Monday or Tuesday calls, but I'll be on the Wednesday one; Jeremy is gonna be jumping in for those, thank you very much, Jeremy. The milestones, again, haven't changed: August 6th is the cherry-pick deadline and test freeze, end-of-day Pacific deadlines.
A
I don't see a note from SIG Scalability, but we'll make an educated guess that, with the CI signal being red and the QoS items going on, that'll still be a little bit in flux for the time being. I did also see that Steven sent in a note kind of asking about this; I think it was back in 1.15 that we started getting more involved with the SIG Scalability group, trying to figure out ways that we could help one another out and make things a little bit more smooth sailing, so to speak, with some of the items there. So I saw that that kind of got revived this week, just trying to figure out some ways we can continue that relationship and make things look better overall.
A
Absolutely. I think it's always... it's always DNS — it's always communication issues when we run into problems like this, most of the time. I'll probably put something in on the retro about that. I know that in talking about deadlines and timelines, that has been a big theme this release, as well as, you know, SIG-to-SIG communications and adjusting network policy there a little bit. So that'll be... it'll be fun to talk through.
D
I
D
I don't think there's too much we can do that can force an immediate outcome. This is more of a long-term thing that we need to engage on, sort of like what we did with scheduling — I'm sorry, scalability — and the scalability liaison. We can also turf this discussion to, you know, something else, but that's just sort of my gut feeling on this, at least at this point in time.
I
Yeah, short-term, I think, it would probably be: if there's anything that's blocking and we're not getting good engagement, try to, you know, pop over into their Slack and go to their meetings.
D
Yeah, that's been said by others here too, so it seems like that's the approach on people's minds.
A
Absolutely. And if there is anyone that needs, you know, help, or again an invite to this meeting or anything like that — I know we invited Jordan and a couple of others from SIG Architecture when we needed help on those things — so, the more the merrier when it comes to swarming and attacking an issue. Okay, so moving into open discussion: CI policy progress, which I think is slightly related. Do you want to talk to that one a little bit, Lori?
A
My thought on that one was: 1.19 is the main focus of this group, and then once 1.19 gets released, the group will change hands for 1.20. So I think that that's kind of the scope and overall focus, and so where it applies to 1.19, that's something we might be able to help out on collectively, but anything future-facing or any longer-term goals would be difficult to get this group's involvement in or sign-on for.
D
A
Typically we'll ask for those to be associated with milestones, and then that way we can track whether something is just the focus of 1.19, or 1.20, or future. Jace had one in for Kubernetes 2.0, so I definitely recommend you check that out as a fun easter egg in the repo.
D
So everything that the SIG Testing group is putting out right now is not planned as a must-do right now; it's for the future, building things up so that 1.20 goes better. Is that correct? Because there's been a lot of detail coming out over the past couple of days, so I just wanted to make sure. Have you actually been able to take a look at some of the issues?
A
Yeah, I feel like we've got those tracked. I think the limits do affect us immediately with 1.19, but those we should be able to tune and tweak on that front.
D
A
If I'm not mistaken, I think that's the 18551 issue — or it might have been a closed pull request — where it was saying that the intent was to get limits and requests on tests so that they can be scheduled appropriately across that testing cluster.
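For context on what "guaranteed QoS pods" means for these test jobs, here is a minimal sketch, assuming a single-container pod: when every container's CPU and memory requests exactly equal its limits, the kubelet assigns the pod the Guaranteed QoS class, which is what makes scheduling and eviction behavior predictable in a shared test cluster. The container name, image, and quantities below are illustrative, not taken from the real job configs.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// qosIsGuaranteed reports whether a container's CPU and memory requests
// exactly match its limits — the condition (across all containers) for a
// pod to receive the Guaranteed QoS class.
func qosIsGuaranteed(c corev1.Container) bool {
	for _, res := range []corev1.ResourceName{corev1.ResourceCPU, corev1.ResourceMemory} {
		req, reqOK := c.Resources.Requests[res]
		lim, limOK := c.Resources.Limits[res]
		if !reqOK || !limOK || req.Cmp(lim) != 0 {
			return false
		}
	}
	return true
}

func main() {
	// Hypothetical test-job container: requests == limits for CPU and memory.
	c := corev1.Container{
		Name:  "e2e-test",                                      // illustrative name
		Image: "gcr.io/k8s-testimages/kubekins-e2e:latest",     // illustrative image
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("4"),
				corev1.ResourceMemory: resource.MustParse("8Gi"),
			},
			Limits: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("4"),
				corev1.ResourceMemory: resource.MustParse("8Gi"),
			},
		},
	}
	fmt.Println("guaranteed QoS:", qosIsGuaranteed(c)) // prints: guaranteed QoS: true
}
```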
D
Okay, because Aaron might have done it yesterday. So, like I mentioned, there's the umbrella issue with the three policies; there's a bundle of three that are the more urgent priorities. Those are listed in that issue, and he says they should be done in order. So that means the guaranteed pod QoS comes first, release-blocking jobs running in a dedicated cluster comes second, and merge-blocking jobs running in a dedicated cluster comes third.
A
I think that's the case. Would you feel good with that, Dan and Jeremy? I know that there was that CI meeting earlier this week, but I was not able to make that one.
C
Yeah, I agree that a lot of it is just, you know, what's affecting 1.19 is all that's in scope for this team. That being said, I think a lot of folks here are probably involved in other aspects of it. I'm not sure exactly how separated those two things can be, right — how separate making things good for 1.19 can be from satisfying some of these issues.
C
I do think it's possible we'll get to a point where it's like: all right, we're at a good place to be able to cut a release. But I definitely heard sentiment in some of the meetings we had this week that we're willing to push things out to get into a good place before we release, so if that's the case, then it could bring kind of everything into the scope of 1.19.
C
If that makes sense. So I think we'll just have to monitor how the progress is being made there, and, you know, if there are things that are going to block the release, then they immediately come into scope for us.
D
C
Kind of, I guess. I mean, in my opinion, we wouldn't have to get all of the resource limits and such in place before we release 1.19. That being said, it's already pretty far along for the blocking and informing jobs, so I'm not super concerned about that; that's kind of phase one. I definitely do not think we need to have all of the presubmits in a super good place before we release.
C
However, I know that there were some folks who were feeling like we shouldn't be opening up new feature work until our CI is reliable, which includes blocking presubmits, so, you know, a case could be made for that. However, I don't really see us blocking a release if the blocking and informing boards are good to go and we're getting good signal there. I could be overruled on that, though.
D
Okay, so I guess the summary of this is that, for now, all of the issues and the work contained in those issues that SIG Testing has created based on the CI policy suggestions is for post-1.19, leading up to 1.20, unless we find that we have things breaking in such a way that we actually need to push that work up, in which case we would postpone the release.
A
Absolutely, yeah. I think as we hear more, we'll know more, and then we can make those changes on that front. I think that was a little bit of the sentiment this week: trying to figure out what criteria we need to look at to know whether we're going to push forward or hold our current pattern.
A
We just test our resilience: it's a wonderful grab bag of events — "wonderful" in air quotes, for the sarcasm. Any other questions on CI policy progress?
A
Excellent. Thank you very much, everyone. We mentioned the retro a little bit today; if you have any issues, please take some time and fill out the retro. And with that, any final comments or questions?
C
D
A
Well, thank you very much, everyone. I wish you all a happy Friday and a wonderful weekend. I won't be here on Monday; I'll catch y'all again on Wednesday, but I'll be around on Slack, so please at me if you need anything. I wish y'all a wonderful...