From YouTube: k8s 1.16 - Week 10 - Release Team Meeting 20190905
Description
Release details: http://bit.ly/k8s116
A: Sorry, okay, we're recording now. Hello everyone. This is an informal meeting of SIG Release, SIG Scalability, and the CI signal and release team leads for 1.16. This meeting will be recorded and available on the internet for perusal later, so please be mindful of what you say, please adhere to the Kubernetes code of conduct, and in general just be excellent to each other. All right, so I will give Marco a chance to kick it off.
B: So yeah, just to kick it off: we just wanted to know if you have any comments or any useful information on the failing jobs on master-informing. Actually, one of them is flaking, but could you tell us if there's anything that we need to fix, anything that we have to hold off on?
C: We have the correctness and the performance jobs. The correctness one has been fixed; in the meantime we are not aware of any issues there, and the last run is green, so I believe it should stay green. The performance one is flaky, but we are not aware of any issue that is really blocking right now; those are problems more with the tests than with Kubernetes itself. We don't believe that there is a real regression there. Okay.
D: So when I look at the issue on k/k, 82182, I'm just looking at the agenda: you have referenced this "network programming latency regressed in 5k node clusters," which links out to issue 2364, which I can also just throw in the chat. Can we comment on that? Is this the issue that you believe is non-release-blocking for 1.16 and that's going to take a little bit more work?
C: The network programming latency thing is something that is definitely not blocking. It's not even an official SLO that we have; we are working to make it an official SLO. We started measuring it, but this can also be a problem with how we measure stuff. We are aware of some issues with how we measure stuff, so it shouldn't be blocking the release. Okay.
D: Can I comment that on this issue? Because specifically it is linked off the failing scale performance testing, and is that why this is red? My only concern is that if we leave it there and it keeps flaking or going red for this regression, then when we're looking at the test results something else pops up that you would say, oh, that is release blocking, and we're not actually going to see it, because we're like, well, you know, there's...
D: I think my only real takeaway from this call is: just be on alert. What we're trying to do is preemptively break the chain of events that's happened for the last two releases, where you rock up at the eleventh hour and say, oops, release blocked. So just be aware: 9/16 is the tentative release date for 1.16, so keep that in mind as we go through the next two weeks. The only concern I have is that, I think it's the correctness job that is flaky, right?
D: The interesting thing, I think, that's more ongoing, and we can discuss it now or later on this call, is: can we actually get these onto the release-blocking board? If the criterion is stability, let's figure out how to do that, because it's just a mental burden for people to have to come back and say, hey, we need to go talk to the SIG Scalability team two weeks out from the release, and are we really going to be blocked 11 days out?
D: It's just a cesspit of things that are unloved, or loved and needing to be loved, and it's kind of like: that information is in your minds and my mind and George's, but it's very hard for us to carry that flag as we roll off the release team. So I'm wondering how we can make this more visible, so that you don't have to jump on this call every release and say, hello, remember us, are you going to block on anything? Yeah?
B: Related to that, the one big ask that I want to propose is this: there should always be an open issue whenever something on master-informing fails, and it would be really useful if anyone from SIG Scalability could explicitly just say: okay, we know that this failure is happening, it's not really blocking, or, it is really blocking.
B: If you are aware of something that is going on, please link to any known issues or PRs that you have open. That would really make it a lot simpler on the CI signal and release teams. So we label... sorry.
B: I didn't know that that was even a thing, but it's usually the standard labeling techniques: anything that is labeled critical-urgent and has sig/scalability on it. Any comments that you all have, even something really simple like "we were looking at it, it's not really blocking," would be really useful. The other thing I think we've been pushing is: if we don't see a signal...
C: That's definitely reasonable, to require an action from us. My ask for you would be to file a separate issue for every failure, because, to put it in other words, it's very rare that the test is failing for the same reason multiple times. If it was red, then green, then red, it's very rare that those two failures are for the same reason, so a single issue would end up merging multiple distinct failures into one.
A: Cool. So what I'm wondering on my end is: SIG Release and the release team have this general idea of blocking and informing jobs. What we're wondering is: is there a canonical definition somewhere for scalability, a list of criteria to say, we will block on these jobs under these criteria?
E: ...cherry picks, which are hopefully much lower risk from a scalability standpoint. So, rather than that, whether you can just build this into the regular release process, so we don't have to think about it necessarily. Obviously you can still block the release at the end of the process, but here we've got some time set aside specifically to look at the scalability runs, to ensure that scalability is green before we open things up, and people can stop thinking about this confidence question, or think about it less.
E: I mean, we could also dupe the jobs, right? We could just request them. No? Well, I mean, not easily, but if it's a problem of money, we can get behind money, right? It's not like the CNCF is broke; there's a process for requesting additional resources. If it seems like it would produce a naturally better relationship between SIG Release and SIG Scalability and improve the quality of the product, it seems like a reasonable thing to do if it's just a question of money.
A: ...on other branches. So I think we can take that as an action item. One of the things that Caleb brought up, which I put later in the agenda, is resourcing, right? Are there places where people who are not from Google can contribute to SIG Scalability? In addition, there's also the idea of: how do we make sure that this feedback loop is even tighter? Because it's clear that this is one of the things that will get us every release.
A: Right, so is it that we have a SIG Scalability advisor that lives on the release team for a cycle? Like: you are our dedicated resource from the SIG for this release, we expect you to do X, some level of criteria, you will respond within this many days or this many hours for issues, that kind of stuff.
A: A role on the release team, where that person is there to... or the release could be seen as an opportunity to maybe further resource SIG Scalability, right? If there was a scalability role, or if the CI signal role was extended into scalability. I'm just throwing out ideas here, but it's clear that there is a gap. I mean, we also get bitten by the time zone shift, right, where a majority of us are in US Eastern or US time zones generally, or even...
E: I think if we add a performance engineering role to the release team itself, then naturally they should be attending there. I do think it is a fairly specialized skill set, at least that's been my experience working with other performance engineers, so I can see the need for adding a formal role there. That's just my opinion.
A: Cool, cool. All right, so the next one I had on the list was Go testing, or rather Go versioning. This comes up every cycle, and depending on the cycle we may be on the threshold of moving into a place where our least recent release branch will be out of Go support.
A: At least what I was a little dismayed about was the fact that that issue was opened June 28th and SIG Release essentially had no awareness of it, right? So we were trying to make a decision towards the end of the cycle, and SIG Scalability has been all over this issue. This is something that we need to have visibility on. Yeah, I think... sorry.
A: Right, and this will be specifically the SIG Release chairs as well as release engineering, because ultimately, for any external dependency, we're responsible for making sure it gets bumped and for how it affects the release. I think the trick here is also that...
C: I was hoping, given that we already knew about it like two months before the release, I was really hoping that Go wouldn't be released with the regression, or, to put it in other words, that when Go is released we would know where we are with Kubernetes and be fine with respect to it. But because of the vacation period, with many people being off both on the Go side and our side, this isn't what happened.
C: That's for sure, but we met with the Go team just today and we're actively working on that performance issue. We also have plans to add continuous integration tests for new releases of Go, so that anytime a new version of Go is about to be released, or before that, we run performance tests checking how it affects Kubernetes. And this should be reasonably easy; we still need to figure out some things here, but yeah, we definitely want to do that, and hopefully we'll tackle it in the next quarter or so.
A: Yeah. And not to downplay the difficulty of any of it...
A: So I think, in terms of most of the external dependencies that we have, there's not too much friction in bumping them and getting signal, all that stuff. But again, Go is the biggest one, and the sooner we have information about that, the sooner we can make quick decisions, like: we know this is not going in the release, and anyone who opens an issue about it, we can block on that and reference it.
A: Okay, all right. So one more, the last topic, is meeting cadence. The first time we met, we kind of said: this is semi-critical, let's sit down and talk about what we can improve for the next cycle. I think that went great, and we had immediate action items out of it, and some of that has been going smoothly. Of course, there are always opportunities for improvement.
A: You know, cleaning up some of what our testgrid looks like, getting clear signal from you about what your test criteria look like, when and whether you should be blocking merges. But I think this kind of interaction is good. So what should the meeting cadence be? How do we make sure that this feedback loop happens?
A: That would end up just being one more meeting for the cycle, right? So we could meet at the beginning, or I guess after we set the release team for the cycle, to at least give them an opportunity to introduce you to the new release lead and to the CI signal team, since you'll be working most closely with them, and then from there do once a month after that? That sounds good, yeah.
C: I mean, it's pretty close to what I was thinking, two or three times a cycle. Yeah, I think it's reasonable, so I second that. We can probably improve it over time; if it appears that there's no agenda and we don't need it, we can change it, but I think for 1.17, for example, it makes sense. Okay, yeah, perfect, perfect.