From YouTube: Kubernetes Community Meeting for 20220317
Description
Kubernetes Community Meeting for 20220317
Topics discussed:
1. Release Updates
2. Release Cadence Survey
3. Interacting with Bots survey
4. Contributor summit @ KubeCon
Find the link to the doc containing the meeting agenda & discussions at https://bit.ly/k8scommunity
If you have any questions about the meeting, you can reach out to the Contributor Experience SIG in Slack under #sig-contribex.
You can find more about sig-contribex at https://github.com/kubernetes/community/tree/master/sig-contributor-experience
A: Hello, everyone. Welcome to the March Kubernetes community meeting. Thank you all for joining, or for watching if you're watching this on the recording, which hopefully more of you are. I'm Josh Berkus, I'm hosting today's meeting, Allison is our notetaker, and Laura is our usher. You too can be a volunteer for the glamorous community meeting that we have every month.
A: If you are interested in this, please contact us in SIG Contributor Experience, either through the mailing list or through Slack, and we will be happy to set you up helping out with one of the future community meetings. Now, the usual housekeeping. First of all, this is an official meeting of the Kubernetes project. As such, we are subject to the Kubernetes code of conduct, so please be respectful towards each other.
A: Oh right, yes, we posted a link in the chat for a read-only version of the notes. So that's there. We have a number of topics for this meeting. All of those topics were gathered from the community over the last several weeks as things that we felt were of general community interest.
A: However, any and all of you, as contributors to Kubernetes, are entitled and in fact encouraged to put things on the agenda for the community meeting when they are things that the community should broadly pay attention to. Again, please either just drop something on the agenda document, which is open year-round, or contact SIG Contributor Experience, or use our form, and if you could drop a link to that form in there.
A: So with that, we're going to go ahead and get started with the meeting. We've got some announcements; one of them I'm going to be real quick on. The release cadence survey is open. That link is in... that link, no, that is not the right link. That link is in the notes and will soon be posted to chat, and we will be talking more about that later in this meeting. There is, by the way, a second survey. This is not a Kubernetes community survey; this is a survey being done by a student at Florida University.
A: Hey there, so Spurty, who's a student at Florida University doing her graduate study in NCS, wants to find out how we interact with bots, such as the Prow bot, and has put together a survey with the help of some of the people in the community.
A: So take a look at that link in the notes and fill it out if you have time.
A: Thank you. And then finally, we are planning to have a contributor summit at KubeCon Europe. So, Pouya?
C: Are you here? Yeah, yes, I am here. I'll make it short. There will be a KubeCon contributor summit in Europe, in Valencia. It's going to be on the Monday, the 16th, so pre the co-located events. We're currently aiming at kicking off the organization on Monday. There is roughly a plan to make it unconference style, single track plus hallway track, and volunteers are still welcome. If you want to volunteer, just join the org meeting on Monday.
C: The link is in the notes document, which I'm going to post in the chat in a bit, or just join the summit staff channel. But only join the summit staff channel if you want to volunteer. If you just want to attend the contributor summit, there's also a contributor summit channel, where we're going to post registration information and other kinds of things.
A: Okay, thank you. And with announcements done, it's time to move into release updates, and to give us release updates we should have one of our release leads. Yeah, Susie here? Yep. Hey Susie, go ahead. Hi.
D: Yeah, hi everyone. This is Cece from the release team. I'm going to be the release lead shadow for this cycle, and I'm going to give the updates for the release team. Currently we are at week 10, and the release is currently still scheduled for the 19th of April, which is in week 15. I will just give some major milestones for everyone to be aware of.
D: First, we are going to start to take exceptions starting March 21st, and we are going to have one more burndown meeting added starting next week, also the 21st of March. We will have daily burndown meetings starting the 28th of April (sorry, March), and we will also have our first release retrospective meeting scheduled for next week. Currently we have 66 enhancements tracked, and we are roughly two weeks from code freeze, which is on Wednesday the 30th of March.
D: So for all the tracked enhancements, please be aware that you have to get all the PRs merged before code freeze for the enhancement to be considered complete. And for all the enhancements which need docs, please be aware that you have to open the placeholder PRs before the 31st of March, and the doc deadline for PRs to be ready for review is the 5th of April. Our docs team will make sure that all the docs are merged one week before the main release. We are also starting to collect the major themes candidates, so if you have any suggestions, please don't hesitate to reach out, and the same goes for any potential release blog content. We are easy to reach in the SIG Release channel, so please reach out there.
D: If you have any questions or concerns, reach out. For the CI signal update, we currently still have one failing test. We will be syncing with the CI signal folks to make sure it gets addressed before the release. For the patch releases, we are currently waiting on the upcoming release cut, and the previous release cuts are mostly ready. Yeah, that's basically the update from us. Thank you.
E: Both of these are a bit of a long-standing open failure and flakes, so they have a bit of a history too, but we'll be coming around and making sure that that's resolved before Tuesday. And yeah, that's it.
A: Okay, well, thank you. I actually forgot to solicit anybody from the patch team, from release engineering. Do we have anybody online here?
A: Okay, I won't say anything other than we should have gotten patch releases yesterday; just look at the dev list for that. Okay, well, thank you. That said, let's go into our first KEP item here, which is the release cadence update. This is a KEP from a year ago, and we've really reached the next stage in that. So, Jeremy.
F: But if you take a look at the support windows, the number of releases that happen per year, and how hard it is for some people to upgrade, we started the discussion about whether maybe we should move to a permanent three-release cadence, or try it for a while and see what it looks like, sort of a natural experiment, as Jordan just said in chat. As part of that we created a KEP, and the KEP is linked here. You're welcome to go read it, but at a really high level:
F: If you have not filled it out yet, please do so. Your feedback is very, very valuable and will help us determine whether this is the pattern we continue, or whether we need to make any further tweaks to the release cadence or the schedules themselves. I think a really interesting thing that came out of this was that the answers break down in a pretty clear pattern.
F: People strongly prefer the three-release cadence so far. There is a smaller number of people that prefer, but do not strongly prefer, the three-release cadence, and then a smaller number that has no preference. The four-release cadence preference seems to be pretty low amongst the respondents so far. So again, this is still in progress, and we're still collecting results.
F: Another interesting thing is that there are a ton of people that are still running very old versions of Kubernetes, whether that's through choice (maybe they don't want to upgrade) or they just can't, because of the difficulty that can come with doing major upgrades like that. If you look at the data in the third chart that's in the notes: we see a lot of people are running 1.21, but there's also a lot of people running 1.20, which is now in maintenance mode, and there's a surprising number that are still running 1.19, 1.18, or the bucket of 1.17 or older. We didn't really go into any more granularity there, but that's a lot of folks that are running clusters that are no longer getting patches, no longer getting updates, and again, that may be because it's difficult to keep up with the upgrade cadence.
F: We don't have the direct drill-down here for that, but looking at the data, it's pretty clear so far that the three-release cadence is working okay for folks. From the release team's perspective, we think it's been beneficial; the change in general has been beneficial. It's been a little bit easier to staff the release teams.
F: The predictability of it makes it a little easier to know "I'm going to be able to commit to this time, because I know that it's going to be pretty much these dates." Inside the KEP, we have some high-level guidance about what it's going to look like for the rest of 2022.
F: Those dates are notional, so they may shift a little bit depending on things that happen, but you get a little bit better understanding of what things are going to look like. Obviously, the downside is that with fewer releases, the cadence, or the velocity, of getting new features out falls a little bit. So that's an area where we end up needing to do some balancing between the needs of distributors and the needs of users.
F: With that, I'll open it up for any questions. Again, I think the major takeaway here is: if you have not done the survey, please, please do so. We would very much like to have the feedback, and it will help us make decisions for the community going forward.
A: Yep, so if you have any comment on release cadence, go ahead and raise your hand.
F: There's a question in the chat. It's not really related to this, but I'll answer it since we're here and kind of paused: any idea when shadows will be hired for the upcoming releases, given the predictable schedule? Generally, what happens for every release is that at the end of the cycle we send out a survey, a Google form basically, to collect applicants. That'll generally be in the last two, three, maybe four weeks of the release, and you can kind of back that up from the release date.
F: So if you look at the release date for 1.23, or sorry 1.24, as Cece mentioned, that is supposed to be April 19th, so probably end of March, which is coming up, or beginning of April, you'll start to see some of the comms go out for that. So definitely stay tuned to kubernetes-dev if you're not there already. Yeah, thanks Tim for dropping the link to the release timeline in the chat as well.
G: I mean, not much, to be honest. At least regarding the time spent approving cherry-picks and reviews, it hasn't been much of a difference.
A: Yeah, another thing: someone did a report on 1.23 for the number of exception requests, and it did not go up markedly with the change in release cadence.
A: Yeah, part of it was, we were trying to decide (it was in a thread about deciding how long code freeze should be), and one of the release team helpfully posted a bunch of stats for the last four releases or so. I mean, there was a small increase, as you'd expect with a slightly longer one, but there wasn't a doubling or anything like that, which would indicate a problem.
F: Yeah, I think, on that line of thinking, the number of enhancements that has landed in those releases has generally, I think, been a little higher with the smaller cadence, like this time around.
F: We have 66 things that are being tracked, and that runs the gamut of brand-new alpha things, things that are graduating to beta, things that are graduating to stable, and things that are deprecating as well. That number is higher than I think we saw in the times prior to, say, 1.19, but it's been fairly consistent across the releases since. 1.19 was a little different as well, because, while it was elongated, we didn't keep the enhancements collection period open for longer; it was kind of coming off of the previous normal.
F: You know, normal cycles from before. And we were wondering whether the 1.20 release, coming right after that, was going to be a big-bang thing, where folks had a lot of queued-up things that they wanted to ship, and whether that would be unusual. It turns out it was bigger, but it didn't seem out of line with the ones that came after 1.20.
A: I think that would be interesting. I have a feeling that it's going up, but for reasons unrelated to the release cadence, and for reasons having a lot to do with security holes.
A: Okay, any further comments about release cadence? No? We'll move on to the next item. Okay, so the next item, this is David's: there's going to be a change in how we handle beta APIs in the future, something that folks should know about. David?
J: Turning them on by default makes it worse, because they're available in nearly all the clusters. If I were really going to sum it up, I would say that Kubernetes is at a point in its life cycle where having production clusters be our beta testers by default is no longer a reasonable choice for us. So in 1.24, we have started all new beta APIs in a disabled-by-default state.
H: I can do that shortly. That'd be great. Alpha APIs have no forward-compatibility guarantees, so they can be dropped in a single release; they can have incompatible changes made to them in a single release. There's no guarantee that you'll be able to upgrade a cluster that you've created alpha API objects on.
H: For a beta, though, the fact that you can upgrade a cluster that you created beta things in is the biggest difference, by far.
A: Okay, do we have other discussion about beta APIs being disabled by default?
A: Yeah, and for people who are watching this on the recording, that is the place to add your comments and suggestions, and offers to help, if any. I was just wondering if there's any discussion here in the meeting on how the change will affect folks.
J: A question, yeah. So APIs that are already on by default will remain on by default; turning them off by default would break people. So this is just about new APIs. It's even slightly narrower than that: if there's a new version of a beta API that was already enabled by default, we will keep that on by default, but net-new beta APIs will be disabled by default.
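Since net-new beta APIs will ship disabled, a cluster operator who wants one has to opt in on the API server (the kube-apiserver `--runtime-config=<group>/<version>=true` flag is the usual switch for serving a specific API version), and tools that previously assumed betas were on everywhere may want to probe before using them. Below is a minimal, illustrative sketch of such a probe using client-go's discovery client; it is not part of the change discussed here, and the group/version shown is just an example.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig (assumes ~/.kube/config exists).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Illustrative group/version only; substitute whichever beta API you depend on.
	gv := "flowcontrol.apiserver.k8s.io/v1beta2"
	if _, err := dc.ServerResourcesForGroupVersion(gv); err != nil {
		fmt.Printf("%s is not served on this cluster: %v\n", gv, err)
		return
	}
	fmt.Printf("%s is served\n", gv)
}
```

A controller doing a check like this at startup can degrade gracefully on clusters where the administrator has left the beta API off.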
A: Okay, so no other hands up. I will say as a follow-up that this sounds like good material for a Kubernetes blog post as well. And, Puerco, are you able to enable your mic and give us an update on patch releases?
G: The releases hit a small hiccup, because we are slowly rolling in some of the changes we are implementing to sign the Kubernetes images and artifacts, but the release managers team was able to fix them and recover the releases, so they should be working as expected right now.
A: Okay. So now we have our third topic, which is actually kind of a continuation of what was a fairly lively discussion last month: WG Reliability has come forward with a suggestion around improving reliability in Kubernetes. Since reliability, like security, is everybody's job, improving reliability in Kubernetes needs to pretty much involve everybody in the project, and as such we're trying to provide multiple opportunities for contributors across the project to offer suggestions, ideas, and thoughts on ways that we could make improving reliability a continuous process. I will say, for my part, I'm not sure that reliability is necessarily decreasing, but it certainly is not continuously improving in any measurable way. Even if you're only looking at test flakiness, test flakiness is not better now than it was the last time I was on the release team, two years ago. And of course there are potentially lots of, and hopefully are lots of, other measures of reliability.
A: There's a KEP, and I'll actually link the KEP from the notes, and there's also the kubernetes-dev mailing list, where I've started a thread to talk about what we could do to improve the reliability of Kubernetes, and to keep improving.
A: I'd be really interested to hear what people think about our state of reliability and what we can do about it. Go ahead and either raise your hand or, if you don't want to speak out loud, type it into chat. We'll start with Danielle, who says: we've had a lot of scary escaped bugs in the kubelet over the last few releases that were all down to lack of coverage, sadly.
A: Okay, Laura.
E: Hi. I've only been a CI signal shadow for this one release, so I don't have a lot of long-term observation of this, but I do feel that, at least compared to the CI signal handbook, interpreting what is a flake or not, or expecting the wall of green on the master-blocking board as it's written in the handbook, it's not really like that right now. So there's a lot more qualitative analysis going on, and obviously that's a little risky, or hard to train people to do. That's one observation from me as a newbie.
A: James asks, and this was in response to the kubelet bugs comment: scary escaped as in scary bugs that escaped the release process, or scary bugs that are container escapes? By the way, Danielle and James, you're welcome to discuss this out loud if you want to.
L: Just as general feedback right now: in SIG Node, I'm one of the subproject leads for our CI signal team, and we have our own team where what we're doing every week is triaging all sorts of test flakes or any issues with test signal; we're looking at all of our Testgrid dashboards, etc. We definitely, as Danielle said, have lacking coverage. I'm also quite concerned right now that, for example, in the previous release, we spent almost all of last year making this gargantuan effort to get the serial tests fully green and passing for node, and then they immediately go and get broken again.
L: It seems like there's not really any incentive right now. We have these teams who are doing a lot of work to try to make sure that tests are passing, and a lot of that is behind the scenes. One thing that Sergey, my co-lead, and I did is we wanted to give people a little bit more reward for this sort of thing: we wrote a blog post saying, hey, look at all these awesome people doing awesome work for this node CI team. But fundamentally that doesn't create an incentive for the people who are going and breaking these things to not break them in the first place. We're catching a lot of stuff on post-submit rather than on pre-submit, and those people don't come in. You know, they make a contribution and then they're gone. They might be a part-time contributor or something like that, or they've landed their particular feature and they add all these end-to-end tests, and then over time the end-to-end tests aren't maintained and they start flaking or failing, and those people are now gone. Who does it fall on to make those tests start passing again?
L: Oh hey, Sergey. So I think that we as a project really need some kind of incentive such that tests get fixed, and it's not just falling on special teams of people who end up being the ones saying, yeah, we want the tests to work, so we're going to fix them, and who end up fixing other people's test failures. I'm not sure how we can achieve this, but I think that there needs to be some sort of incentive, because otherwise I think this is going to get swept under the rug. We've got these really hard-working people doing a lot of stuff to try to make the tests work, to make the release pass, to fix these bugs, and that's not necessarily getting surfaced to the whole project or bringing contributors to where they need to be.
M: Yeah, I actually just wanted to follow up on what Elana just said. I've done release stuff and enhancements things, and so I get a broad view of who's working on what, and it's a lot of the same people doing a lot of the work. So when maybe you have a special team, and the special team is filled with the same people who are doing a lot of the work in other SIGs... I mean, it seems like the fundamental issue is that there just aren't enough active contributors in the project. Reliability is really important, but, as Elana's pointing out, you get people who drive by and maybe they put something in, and then who's going to maintain it? There need to be more maintainers, but I also don't necessarily know that it is visible to all of the end users and companies that contribute how many people, how much help, is needed.
I: Yeah, what Kristen just said resonates with me so well, because the reality is that it's really hard to be a part-time contributor to Kubernetes, and there aren't a lot of new full-time contributors to Kubernetes being hired and just entering the community. We need to make sure, and I talked about this yesterday with Ben and Arno, we need to find better on-ramps for people, because we've had plenty of people show up to different SIG meetings saying, hey, we're ready to contribute, and we just don't have anything for them to take on and work on, as a part-time thing that doesn't involve trust or a bunch of other domain knowledge. So yeah, that totally resonates with me a ton; we need to figure that out as a project.
I: But what I actually wanted to say was that, in terms of auditing all of our tests, we actually need to, and we have planned to, migrate all of our pipelines and Prow jobs to our community-owned cluster as part of the initiative from SIG K8s Infra, and this may be coming sometime this year or next year. That'll be a great time for everyone to actually audit all of their tests and figure out what's old and needs to be updated, or doesn't need to run. So...
A: Thank you. Okay, now Sergey.
N: Okay, thank you, yeah. I plus-one most of the things said before. I also want to bring up this reliability group document that was created about jailing SIGs. I don't really like the jailing-SIGs aspect of it, but the document listed a lot of improvements: how to track issues, how to raise visibility into failures, and how to simplify maintainers' lives right now. It's kind of a pain managing Testgrid, GitHub issues, and tracking the progress, and there is some tooling for triage, like the triage tool, that can be used, but overall it's not well integrated, and it's really hard to get visibility into that. So we're running this meeting and we have people who understand all the aspects, like where to look, but there is no easy way for a newcomer to see what's going on and how to troubleshoot something that has been red for a long time. Those kinds of things. So tooling is definitely one of the things that will help our project overall get more reliable, by exposing the issues and making them really clear.
A: Okay, Paris, did you want to make any remarks?
B: Sure. I do hear a lot of the pain points here about lack of reviewers and approvers. I hear all the time that people put in test PRs and then the reviewer bandwidth is just so low, and that's quite unfortunate. I think this is something that we are working on at the steering level; I'm also working on this at the governing board level.
B: But I don't know what I don't know, and we don't know what we don't know. So if SIGs are not communicative about what they need as far as approvers, and where, then we just can't be psychic. We're also not going to do mentoring efforts for SIGs that don't have any onboarding program (excuse me, onboarding programs), because it just doesn't make sense. And having people come to SIG meetings and saying "stick around with us, take issues" is no longer the onboarding program we think it is. I think that we need to do something more drastic with our asks to end users and folks, because we just keep saying, oh, we need help, we need help, but no one actually says where, when, or how. I would love to work with folks on this: we have a humongous 17-page sustainability doc of our needs right now that we need to turn into asks, but only about six SIGs have weighed in on those 17 pages, so we definitely need more.
F: Yeah, one thing I was thinking about, sort of around the pull request that Wojtek had opened, is whether or not we're considering the health of an area when we're looking at introducing a new feature. We've got a few different signals: we've got test flakiness; test coverage can be found out, though I don't know if we're tracking or publishing that automatically; and then, as a trailing signal:
F: We actually need to get the current state better tested to make sure we're not regressing anything. But I don't think we have anything formal; that's not mentioned anywhere that I know of. So I'd like to see people, like maintainers, considering that when they're looking at a new feature. Even if the feature proposal may be fine, if they know, oh, we just had three or four regressions in this area and we haven't actually gone back and closed those with better test coverage...
F: I'm not saying we won't have to do it, it's just going to be hard. It is hard. I think sometimes we make the mistake of optimizing for contributors and forgetting about users, and we don't want to go 100% the other direction either, only users, forget about contributors, because that obviously hurts everybody in the end as well. But we have to have some feedback loop that is prioritizing what we actually ship to users, and so we have to consider that.
M: I actually think that's an interesting point that was just made, and I'm thinking of it like: what do you need to get a feature into Kubernetes? You fill out a KEP, right? And maybe there's something there, not a requirement; I'm not saying there needs to be a requirement that X reaches such a level before your feature goes in, but it could be valuable to have something that asks the author, or authors, to collaborate with the SIG leads anyway for their KEP, and maybe talk about what the test coverage is like in that area of the code. Maybe there's something there where it's not creating a rule, but it's adding information, and just by having to have some of those conversations, people can start thinking about it more, as opposed to a hard-and-fast rule of "you can't merge this feature," whatever. Even just having, at some point in that process, something where somebody has to actually think about it and not forget about it, and say, well, are there end-to-end tests in this area already? Because we ask "do you have test coverage," right, but is there existing end-to-end coverage, is there whatever, I don't know. It seems like there's something to explore there in how we can at least start thinking about highlighting things that are important to us in the KEP, or in some other manner, when we start thinking about features going into a release.
A: Yeah, just checking if Danielle wants to talk on mic, otherwise I'll read out her comments.
K: Yeah, sure. So part of the issue here is that new features in very old parts of the code base, from people with no project history, are generally speaking pretty scary, especially when we already don't have solid test coverage, or necessarily even institutional knowledge of how and why that code was written the way it was. Especially now that, for a lot of, say, node, for example, the people who worked on node in the beginning don't work on it anymore, or have very little bandwidth to do reviews there. And a lot of new features tend to get alpha-landed before they have anything in the way of competent e2e tests, with a promise that tests will come later, and then they don't happen, or they don't happen to a level that is necessarily good enough. And it's hard to review when you're not also seeing what code changed, so we kind of need to push that much earlier in the process.
L: I can maybe give an example of that, Danny. Recently, in the node CI subproject:
L: There was a feature that landed, lock contention for the kubelet, and some folks wanted to also add it to the kubelet configuration, because I think it was only a command-line flag. We said, well, there's no test coverage for this, and it took like six months to get a working test in for a job. And a lot of that was not because it actually took six months of work to get the working test in for a job, with the appropriate test, and then get the test green;
L: it was reviewer bandwidth; it was not a high-priority feature, that kind of thing. So that's a big example of that. I think for new features in SIG Node, at least, we've done a pretty good job of making sure that, no, this has to have alpha tests, they have to actually run somewhere, they have to pass, before we will merge this code. But we still have lots and lots of older code for which that's not true, and going back and trying to find all the places where that's the case and then fixing them...
A: Yeah, there were some comments, from Chris I believe, a moment ago about enforcing a minimum threshold.
L: Some of the unit test coverage that does exist is very difficult to expand or maintain; a lot of the code wasn't written to be unit tested, and so a lot of the code ends up getting tested in the end-to-end tests, which are very expensive to run as well, right? You've got to spin up a full cluster, you've got to wait for a while, etc. And then we don't want people to take forever waiting for their PR to merge.
L: So a lot of those tests are never going to get run in a pre-submit; we wait until a post-submit job runs, and then we see whether that test passes or not. And that can be a big problem, because code can land and then it turns out the test had never even passed, but the test wasn't running in the pre-submit, so nobody had any idea. That sort of thing is very common.
L: I think it's great that we're talking about this at the community meeting, because I'm not sure how many people have a lot of awareness about this sort of thing. I'd wager that the average contributor isn't sticking their nose inside Testgrid, or poking around in there, or seeing how it is that we actually run our tests on a periodic basis or on a pre-submit basis.
L: It's very complex, so I get that, but I think that we need some sort of better process for this, because it's really hard to have a minimum threshold. At least for me as a reviewer, I want things to preferably be unit tested; if they can't be unit tested, then I want an end-to-end test. Anything that is a new API has to have an end-to-end test that hopefully could eventually be promoted to a conformance test, and so on and so forth.
L: But we don't really have a project-wide policy for any of this stuff. We don't have code coverage as part of our linting. We don't have any requirements in terms of code coverage for code that lands, and so on and so forth.
K: But this isn't SIG Venting, so...
L: So, Paris asked a question in the chat: could we have new-feature folks provide contact details, so we could reach out to them for maintenance and tell them that they will indeed get pinged in 12 months? I have definitely pinged authors of tests. The problem is, if they don't work on Kube anymore, there's no way to get them to come back and work on that thing, or they've moved on to a different area of Kube, or they're working more on internal things at their full-time job. That can be very challenging.
L: There's also, as somebody mentioned earlier (I can't remember whom), maintainer continuity, right? There were a lot of people who worked on, for example, the kubelet at the start of Kubernetes, and most of those people aren't active in the project anymore. They've got all this great knowledge that was locked up in their heads and hasn't really been transferred to anybody, so who knows what their plans were for test coverage? We don't know; we can't go back.
L: We can't go back in time and talk to these people; we can't bring them out from wherever they might be now. I don't think that there's any reasonable way to do that consistently across the board.
L: We might be able to do it in some cases, but thinking systemically, I don't think it's feasible. Things that could be feasible, in terms of policy solutions (and this is not something that I've thoroughly thought through, just some possible options, spitballing right now): we don't have any requirements for new code coming into the code base to have test coverage.
L: We could, I think, relatively straightforwardly, since our test suite has coverage capability, add a linter that says: if your PR causes test coverage to drop, it doesn't get merged. That's a pretty straightforward thing, right? If you've added new code and test coverage goes down, that means you're not testing your new code; don't do that.
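To make that linter idea concrete, here is a rough, hypothetical sketch (not an existing Kubernetes tool or project policy; the baseline value and file names are made up) of a presubmit-style gate that fails when total Go statement coverage drops below a recorded baseline:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strconv"
	"strings"
)

// totalCoverage runs `go test` with a coverage profile for the given packages
// and returns the total statement coverage reported by `go tool cover -func`.
func totalCoverage(pkgs string) (float64, error) {
	if err := exec.Command("go", "test", "-coverprofile=cover.out", pkgs).Run(); err != nil {
		return 0, fmt.Errorf("tests failed: %w", err)
	}
	out, err := exec.Command("go", "tool", "cover", "-func=cover.out").Output()
	if err != nil {
		return 0, err
	}
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	fields := strings.Fields(lines[len(lines)-1]) // last line, e.g. "total: (statements) 71.3%"
	return strconv.ParseFloat(strings.TrimSuffix(fields[len(fields)-1], "%"), 64)
}

func main() {
	baseline := 71.3 // hypothetical number recorded from the main branch
	got, err := totalCoverage("./...")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if got < baseline {
		fmt.Fprintf(os.Stderr, "coverage dropped: %.1f%% < baseline %.1f%%\n", got, baseline)
		os.Exit(1)
	}
	fmt.Printf("coverage ok: %.1f%%\n", got)
}
```

In a real setup the baseline would be refreshed from the main branch by CI rather than hard-coded, which is exactly the kind of project-wide policy the speaker notes does not exist today.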
L: So that might be an option, but I'm sure that there are other things. I would really like to see Kubernetes, over time, move away from "everything is integration tested" and more towards "everything is unit tested." The unit tests can run faster than the integration tests.
L: In theory. The unit tests do take a while to run right now, but not as long as the full suite of integration tests, and the unit tests are all on pre-submit, whereas that is not true for a lot of these post-submit tests. People don't even realize they're submitting a test that never works, and somebody goes back years later and it's like, oh, this stuff doesn't work, what do we do about that?
A: Yeah, we've had a lot of chatter in chat, and by the way, I will be saving the chat and publishing it with the meeting notes, because there have been a lot of good comments on there. I just wanted to give, in our last two minutes here, a brief chance to Mod, Laura Santamaria, Danielle and/or Eddie, Alison, and Garima (and pardon me if I mispronounced that last one), if you have any further comments that you want to get on the audio.
I: Yeah, I can say something real quick. I think Testgrid and Prow confuse and scare people away, and that just is the reality of it, right? It's a complex system that's hard to read for a new person, and I imagine most people that are adding stuff or fixing things, when they get flakes, have no idea what to look for in the flake; they just /retest and move on.
I: I think we need to convince Jordan to maybe record some more sessions of debugging flakes, and yeah, I'm happy to help with that too. But I really think we need more resources for folks to just understand, and maybe the Prow bot that tells you that your jobs failed could link to better resources, or "hey, track this down." I know there's a lot we can do there, but I've heard that that's a big pain point for people.
A: Okay, well, thanks. I do have one comment. Before Kubernetes, I worked on the PostgreSQL project, which is 27 years old now, I believe, and let me say that the general problem we're talking about here is a problem that literally nobody has solved.
A: I mean particularly the problem of: contributors come in, they contribute a feature, and then they change jobs and careers and vocations, or die in a helicopter crash in one particularly memorable incident, and then it's thrown on a fairly small group of maintainers to maintain their code in perpetuity thereafter. That's not a problem that I think anybody in open source has yet solved. So the reason why it feels like a hard problem is that it is a hard problem, with a capital H. I'd ask people to continue this discussion, because I don't think we're going to solve this in one release or one KEP or one specific measure. Again, you can discuss it in any of the places: on the existing KEP from WG Reliability, in ContribEx or Testing or Release chat, on the dev mailing list, or in future community meetings.
A: If you want to be MC for one of these meetings, or notetaker, or usher, please contact ContribEx. In addition, if you have something that you believe needs to be discussed at a community meeting, please contact us and make sure that it gets on the agenda. And with that, I want to thank my usher, Laura, and my notetaker, Allison, for helping out with this particular meeting, and I'll see you all in the various Kubernetes forums.