From YouTube: Kubernetes SIG Testing - 2020-03-24
A: Josh Berkus is going to talk to us a bit about the proposal from the LTS working group. Jeff is going to talk to us about Boskos as a product. Paris is going to talk to us a little bit about the results of the contributor experience survey and what that means to us as a SIG, and Alvaro will talk about in-repo config for Prow. So with that, I'd like to hand it off to Harshal.
C: The point I want to talk about is cgroup v2. So cgroup v2 support is coming in Kubernetes; the PR is about to get merged at the moment, we have the window open for 1.19, and runc cgroup v2 support was merged just, I think, last week. CRI-O already supports cgroup v2; they used to use a different runtime called crun, which they wrote. So, having said all that, we thought, you know, Kubernetes already runs Prow test cases for containerd as presubmits as well as periodics; if you go to the job config under kubernetes/sig-node, there is a containerd job that already runs a bunch of cases against various versions of Kubernetes. So I came to this meeting mainly to understand whether the community would be OK with adding CRI-O test cases as well. And the cgroup v2 aspect is: since that stack already has cgroup v2 support enabled, we could enable Kubernetes to test cgroup v2 on that platform.

C: Yep, so these already exist as containerd test cases; these would be CRI-O test cases. Ideally we would like to have test cases for presubmit as well, but since this will be the first time we run these test cases, I think that if we start running them as periodics, it will give us more confidence about their stability. I don't expect any issue from a runtime point of view just because you changed from containerd to CRI-O; I don't expect any changes, but nevertheless that's the part I was thinking about.
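A minimal sketch of the kind of check such a periodic job can make before exercising the runtime, to confirm the node really is on cgroup v2. The detection relies on the standard unified-hierarchy convention (a cgroup.controllers file at the root of /sys/fs/cgroup); the helper itself is illustrative and not part of any existing test suite.

```go
package main

import (
	"fmt"
	"os"
)

// onCgroupV2 reports whether the host exposes the unified cgroup v2 hierarchy
// at the conventional mount point. On a pure cgroup v2 host, /sys/fs/cgroup is
// the unified mount and has a cgroup.controllers file at its root.
func onCgroupV2() bool {
	_, err := os.Stat("/sys/fs/cgroup/cgroup.controllers")
	return err == nil
}

func main() {
	if onCgroupV2() {
		fmt.Println("cgroup v2 (unified hierarchy) detected")
	} else {
		fmt.Println("cgroup v1 (legacy or hybrid hierarchy)")
	}
}
```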
A: As long as they are related to the Kubernetes project, it sounds like they are in this case, but I think the next piece would be that sig-node would be willing to support these tests, so that if they fail, they get brought back to green. So, to me, I feel like if you haven't discussed this with sig-node already, I would go there; but I don't think we have any, like, opinions on what image you use to stand up your cluster or what runtimes you use.
F: The support KEP, yep, so the link is there in the notes. Yeah, we've been working on this KEP for a while; this is from the LTS working group. After doing a huge survey and talking to a lot of individual users and a lot of people, we discovered that what people really wanted out of LTS was slightly longer patch support, and by slightly longer we mean about four months longer. That was not the majority opinion, but it was the plurality opinion, the one move we could make that helped the largest number of users.

F: However, as you can imagine, extending our period of patch support means also extending our period of test coverage, as well as increasing the number of skew tests to cover more versions. So a lot of the impact of this change, if we do it, is going to affect SIG Testing and test-infra, and as such a lot of people have commented on it from a testing perspective.
A: Okay, so I glanced over it, but I haven't had a chance to look over everybody's comments line by line. I appreciate you showing up here so I could ask some silly questions. The first one is: from SIG Testing's perspective, I'm not sure how this impacts us. I recognize it would result in the creation of more jobs, but I feel like all of the jobs that are necessary to support the different versions of Kubernetes are, these days, more or less under the purview of the release team, right? And we've kind of automated away the administrivia of maintaining yet another set of jobs for yet another branch. I feel like most of the burden is on the release team there, and as long as they're fine with it, that seems great.

A: The question I have is maybe not necessarily under SIG Testing's purview, but it seems like the longest tentpole is expanding version skew. I feel like expanding version skew has a long-lasting impact on SIGs like sig-node and SIG API Machinery and SIG CLI. Has that been discussed by those SIGs? Are they okay with that?
A: ...the n version of Kubernetes, so 1.18 is, like, out the door soon-ish, today-ish, so the test confirms that kubectl 1.17, yeah, right, and it already supports kubectl 1.16, which should also be able to talk to that, if I remember correctly. So you're talking about expanding it such that kubectl 1.15 would also be able to talk to the API server, yeah.

A: But they won't be able to, because the kubectl supported version skew right now is one version, yeah, and this thing is expanding it to two, yeah. And it's not clear to me whether either of those scenarios is currently covered by tests. Lubomir, maybe you can correct me here; I know you've done some upgrade work with kinder, but it's not clear to me whether or not you are doing skew testing to confirm that, like, the 1.18 version of Kubernetes works with 1.17 nodes.
D: So we are not particularly skewing kubectl, although the kubectl skew is minus one, zero, plus one, which is one version in the past, one version in the future, and the current version. For the kubelet skew, we don't have one in kubeadm testing; the supported version of the kubelet there is the current one, and also one version in the past supports the current API server. Sorry.

D: To say this quickly: a 1.17 kubelet can work with the 1.17 API server and also with an API server that is 1.18. We don't have tests for that; maybe in the future, it's just limited bandwidth we have on the kubeadm side to implement these extra tests. I think they might be sub-tests for the [unclear] out there, but I'm not sure; maybe the release team or maybe sig-node owns those, great.
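A small illustration of the skew bookkeeping being discussed. The windows below (kubectl within one minor of the API server; kubelet at the API server's minor or one behind, which is what the kubeadm testing described above covers) are parameters of the sketch, not an authoritative statement of the Kubernetes support policy, and the function names are made up for illustration.

```go
package main

import "fmt"

// Skew windows as described in the conversation; adjust these to whatever
// policy window is actually under discussion.
const (
	kubectlSkew      = 1 // kubectl may be one minor older or newer than the API server
	maxKubeletBehind = 1 // kubelet covered at the API server's minor or one behind
)

// kubectlSupported reports whether a kubectl minor version is within
// +/- kubectlSkew of the API server minor version.
func kubectlSupported(apiServerMinor, kubectlMinor int) bool {
	d := apiServerMinor - kubectlMinor
	return d >= -kubectlSkew && d <= kubectlSkew
}

// kubeletSupported reports whether a kubelet minor version is covered against
// an API server minor version: never newer, and at most maxKubeletBehind older.
func kubeletSupported(apiServerMinor, kubeletMinor int) bool {
	d := apiServerMinor - kubeletMinor
	return d >= 0 && d <= maxKubeletBehind
}

func main() {
	// With a 1.18 API server: kubectl 1.17 is in the window, 1.15 is not;
	// a 1.17 kubelet is covered, a 1.19 kubelet is not.
	fmt.Println(kubectlSupported(18, 17), kubectlSupported(18, 15))
	fmt.Println(kubeletSupported(18, 17), kubeletSupported(18, 19))
}
```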
A: And some of my concern is that increasing the kubelet skew probably introduces far more complexity, and the state of skew coverage today is pretty low. I already feel like there are a lot of combinations you want to test, and I feel like it could get prohibitively large if you were to expand version skew further than you have. But I'm happy to take the rest of my comments to the KEP. I just wasn't sure how much that aspect had been run by the other SIGs, because from SIG Testing's perspective, we don't write those tests, we don't own those tests, and we're not going to add or troubleshoot those tests or whatever. I'm just concerned about the complexity that this introduces and about who would be willing to stand up to support that, yeah.
A: How about we take that offline, yeah? Okay, what he said: so you pull a version from gcloud that dispatches over the kubectl versions, which is, I think, something that somebody already suggested on the KEP to improve the ability of kubectl to talk to different versions. That seems like the easy thing, which again is why I go back to: I don't know about kubelet version skew, that seems dicey. Yes, the dispatcher is open source, and somebody definitely liked that in a comment on the KEP.
D: And the versions of kubekins, do you store them as a history? You know, once a version of Kubernetes goes out of support... my question, I guess, is: should we keep one more version of kubekins as maintainable?

A: Seems fair. I honestly don't know whether we babysit kubekins or the release team babysits kubekins; I feel like I've seen a fair amount of collaboration there. And, yeah, like Ben says, I don't feel like kubekins is a huge additional maintenance burden or a source of additional complexity.
G: The extra jobs are probably actually negligible, especially if we follow our current branching scheme, where for the older branches we run all the periodics far less often, right? Like, currently our n-minus-two jobs run once every six or 12 hours for the most part; I think adding a bunch more six- or 12-hour jobs will not be noticeable, right.
J: Yeah, I think the only other thing, which I think is mentioned somewhere in the KEP, and I think I'm going to comment about this, is just the security windows for the dependencies, for example Go. We already sort of are often right at the limit of a Go release by the time we get to the end of a branch's life; I don't guarantee... we're going to have to update Go on a branch at some point, and that's probably fine, but it's something to think about.
F: And within the KEP we say specifically... I mean, we've got a couple of people on the LTS working group who have experience doing LTSes for, say, Linux distributions, and from our experience, any sort of uniform policy for how to deal with those does not actually succeed. What we're going to end up doing is handling, you know, failure of upstream support on a case-by-case basis. To the extent that we can influence the Golang team to offer more patch support, that would be awesome; I'm not necessarily expecting that to work, though.
J: Boskos is kind of a Prow component that runs, usually, in any build cluster alongside Prow jobs, and basically it manages resources for you, so you can share resources among different jobs. So, for example, on the open-source Kubernetes CI we largely use Boskos for tracking GCP projects, and there's basically a cycle: we have all of these free GCP projects, a job can request a project, it gets assigned to the job, the job can do whatever it wants with it, and then it returns the project.

J: It's then marked as dirty, and there are janitors inside Boskos that go and clean up the project and return it to a free state so the next job can run. So it basically allows us to share resources in a meaningful way, so you don't have to create a new... the old way used to be creating a new project for every single job, and that was really painful, and not sharing resources is also problematic. So, again, the system was written over time; it's gotten more and more features.
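A minimal sketch of the acquire, use, release-as-dirty, clean, free cycle just described, with a hypothetical in-memory pool standing in for the Boskos server; the type and method names are invented for illustration and are not the actual Boskos client API.

```go
package main

import (
	"errors"
	"fmt"
)

// pool is a stand-in for a Boskos-like resource server: every resource is just
// a named string with a lifecycle state (free -> busy -> dirty -> free).
type pool struct {
	state map[string]string // resource name -> "free" | "busy" | "dirty"
}

// Acquire hands out any free resource and marks it busy.
func (p *pool) Acquire() (string, error) {
	for name, s := range p.state {
		if s == "free" {
			p.state[name] = "busy"
			return name, nil
		}
	}
	return "", errors.New("no free resources")
}

// Release returns a resource in the given state: jobs release as "dirty",
// and a janitor releases as "free" once cleanup is done.
func (p *pool) Release(name, state string) { p.state[name] = state }

func main() {
	p := &pool{state: map[string]string{"gcp-project-1": "free", "gcp-project-2": "free"}}

	// A job leases a project, runs its test, then hands it back dirty.
	proj, _ := p.Acquire()
	fmt.Println("job got", proj)
	p.Release(proj, "dirty")

	// A janitor later cleans it up and returns it to the free pool.
	p.Release(proj, "free")
}
```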
J: You know, there's also a system called Mason, which allows you to take one of these static resources, as we call them, and then generate other resources. So, for example, I think Istio worked on one which will pre-create a GKE cluster for you; if you're testing Istio, you don't necessarily want to wait to create a Kubernetes cluster, you want to immediately begin your test with a pre-created Kubernetes GKE cluster. So Boskos has a system that will allow you to do that. So we kind of have these features.

J: I know OpenShift, I think, is using Boskos for some resource tracking, and then we are doing this with a number of different projects inside Google, and so we're kind of running into some issues where basically different teams are running Boskos and it's not super well supported. It was, you know, written as a pretty simple system, but it's kind of grown in features, and it has some technical debt, basically, to work with, and I know, like, Alvaro has worked on...
J: ...fixing some of this technical debt, as has Steve, and I'm trying to tackle some of this as well. But basically, in addition to fixing this technical debt, I want to have a better model of how people actually use Boskos, because right now it's kind of just: well, go find an existing deployment and maybe run it, and at some point maybe upgrade it and figure out what has changed, and, you know, your resources get... there's really not a very good story for how you should actually use this. It's very much: if you already use Boskos, if you wrote Boskos, you know how to use it; if you're on, you know, the Kubernetes EngProd team at Google, maybe you can use it; but if you're anyone else, it's very challenging, you know, it's not necessarily super easy to use.
J: So basically I wrote up this document, looking at some of the challenges people have faced and trying to figure out how we can actually make this a product, where we can, you know, improve it, but also make it more of a self-service product that people can use. So that's kind of the overall background of this, and I've listed a number of different... you know, I think I have four different categories, which I've broken down into ease of deployment, improved observability, painless upgrades, and kind of a method for support escalation. That last one is more targeted towards the Google internal teams, but it can probably apply generally: if something does go wrong with one of the Boskos components, who do you talk to? You know, what do we expect people to be able to debug themselves versus what should be escalated?
J: I will come back to this comment in just a second; if someone wants to interject immediately with a question, please do so, otherwise I'll come back. I see some questions in chat; we'll come back to those in a second, I was going to run through this doc otherwise. So, you know, basically trying to have some basic recommendations for how you should set up Boskos, and some basic recommendations around monitoring and, potentially, alerting that you might want to do, because right now, like I said, there's a great deployment for the open-source Kubernetes CI, but everyone else is kind of left in the dark to do whatever they want. We also want to improve the error messages, and the logging right now is not super helpful; it's really hard to tell what this system is doing, and it's hard to tell when something goes wrong.
J: Well, we've discovered it before when somebody complained their tests were failing, and whoops, we don't know what actually happened. So we want to improve a lot of that, and just have better metrics to be able to understand, you know, is cleanup getting slower? Are the janitors failing a lot more often? Things like that, so we can actually identify something.
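A small sketch of the kind of instrumentation being described, using the Prometheus Go client; the metric names and the cleanup function are made up for illustration and are not existing Boskos metrics.

```go
package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// Hypothetical metrics answering the questions above: is cleanup getting
// slower, and are janitors failing more often?
var (
	cleanupDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
		Name: "janitor_cleanup_duration_seconds",
		Help: "Time taken to clean a dirty resource.",
	})
	cleanupFailures = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "janitor_cleanup_failures_total",
		Help: "Number of cleanup attempts that failed.",
	})
)

func init() {
	prometheus.MustRegister(cleanupDuration, cleanupFailures)
}

// cleanResource is a placeholder for whatever project or account cleanup a
// janitor performs; only the instrumentation pattern matters here.
func cleanResource(name string) error { return nil }

// cleanWithMetrics wraps a cleanup attempt with timing and failure counting.
func cleanWithMetrics(name string) {
	start := time.Now()
	if err := cleanResource(name); err != nil {
		cleanupFailures.Inc()
	}
	cleanupDuration.Observe(time.Since(start).Seconds())
}

func main() {
	cleanWithMetrics("gcp-project-1")
}
```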
J: So one of the things I was actually suggesting here was a few things. One is potentially having explicitly tagged releases, so rather than just grabbing the version that was built today, you can say grab version whatever; whatever the version number is that we release, like, every two weeks or every month or something. And so we know this has been tested, we know this actually works, you can upgrade to this version, we guarantee it's actually going to work. And then, you know, if you're building Mason, you know exactly what version of the code to use. Part of what I suggest is that we may actually want to move Boskos out of test-infra, so rather than having everything in there, it has its own little repository, so that if you want to work on Boskos, or improve things, or see how you use it, it's effectively one self-contained...
J: ...repository, and you don't have to care about everything else that's happening in test-infra, which kind of mixes everything together. So that's a potential idea, and it also allows us to control our release story. I think it makes a lot of things easier, where it's very clear: if you need to do some part of the process, here is everything you need, here are the deployments, here is the documentation, and it's not all mixed in with everything else making up test-infra, which has a bunch of things: it has Prow, it has a lot of the Kubernetes CI configuration, it's kind of just a mix of a bunch of things. So, basically, trying to improve the release process a little bit, actually have a release process, which we don't have right now, and have some actual testing on the explicit version, to make sure it's not just "toss it up in Prow, hope it works, and see what happens". So try to improve that a little bit, and then the last piece here was just this idea of support escalation.
J: Basically, ideally we have all of these other components in place where you have good monitoring and a resilient system. Hopefully you can first see, like, if you're running out of resources, maybe it's your custom resources having issues, or something else; if it's actually something that's wrong in the Boskos system itself, then escalate to the people who are working on Boskos. But try to have it so that, basically, if we have N deployments, it can't just be me.

J: That's kind of all I have. Just scrolling back in here: Daniel asks whether there is a way to run Boskos with one project in the pool, and whether it works with hyperscalers other than Google as well. So, I mean, yeah, you can, I guess; your pool can be, you know, you could have one single resource. Hyperscalers other than Google: I don't know what you mean by "work with" hyperscalers other than Google; basically, Boskos is super agnostic about whatever the actual resource is.
J: You know, really, at its core, currently, the resource is basically just a string to it, and you can have janitors. We have a janitor that works on GCP projects, we have a janitor that works on AWS accounts, and you can have janitors that work on other resources as well. Basically, you just have it create a... and that's something we might, you know, improve the workflow around a little bit, or actually have a better framework for building a janitor, because right now it feels a little hacky, not great. But basically you could just create another controller, run it against your Boskos deployment, and it would check out dirty resources of that type and go and clean up whatever needs to be cleaned. So I could have some hypothetical resource, you know, that I wanted to use, an Azure account or some other resource or something, and you can have it go and continuously look for that stuff and then return it for you. So it's pretty agnostic. And, you know, part of this is basically making it well documented and having these frameworks, so that if you want to do this, it's really, really easy to just plug in your custom scripts, your custom implementation, plug it in, and then you have that thing working, so yeah.
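A minimal sketch of the janitor pattern just described: a loop that pulls dirty resources of one type, cleans them with a pluggable function, and returns them to the free pool. The pool type here is a hypothetical in-memory stand-in for whatever Boskos-style client you would actually wire in, and the resource names are made up.

```go
package main

import (
	"errors"
	"fmt"
)

// fakePool is a hypothetical stand-in for a Boskos-style client: it hands out
// dirty resources of one type and lets the janitor mark them free again.
type fakePool struct{ dirty []string }

// acquireDirty pops the next dirty resource, if any.
func (p *fakePool) acquireDirty() (string, bool) {
	if len(p.dirty) == 0 {
		return "", false
	}
	name := p.dirty[0]
	p.dirty = p.dirty[1:]
	return name, true
}

// markFree returns a cleaned resource to the free pool.
func (p *fakePool) markFree(name string) { fmt.Println("freed:", name) }

// runJanitor drains dirty resources, cleans each one with the pluggable clean
// function, and frees it on success; in a real system a failed resource would
// stay dirty and be retried on a later pass.
func runJanitor(p *fakePool, clean func(string) error) {
	for {
		name, ok := p.acquireDirty()
		if !ok {
			return // a real janitor would sleep and poll again instead
		}
		if err := clean(name); err != nil {
			fmt.Println("cleanup failed:", name, err)
			continue
		}
		p.markFree(name)
	}
}

func main() {
	pool := &fakePool{dirty: []string{"azure-account-1", "azure-account-2"}}
	runJanitor(pool, func(name string) error {
		if name == "azure-account-2" {
			return errors.New("simulated transient API error")
		}
		return nil // cleanup succeeded
	})
}
```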
J: So we have it in one state; basically we want to make that really easy. Any other thoughts or comments on this? Again, I'll actually email out the doc as well, and we can continue the discussion in comments there. So far, everyone I've talked to about this has generally seemed to be in favor, positive, so assuming there's nothing else, we'll continue the discussion, and then I'll probably turn this into more concrete examples, or concrete actions, as well.
A: Yeah, what Daniel said: I really like the idea of actual releases, that sounds wonderful. A silly question that maybe isn't 100% related to this: I'm looking at creating a Prow build cluster over in the CNCF Google project and eventually scheduling a bunch of Kubernetes jobs over there. My understanding is that I will also need to stand up a Boskos instance over there, so it manages the CNCF projects, right? Okay, yeah.
J: That's correct, and that's one of the reasons we're seeing this explosion of Boskos deployments: currently, generally, Boskos lives inside the build cluster. So you have one main Prow control plane and a bunch of build clusters, with Boskos, that service it, and basically it was just one deployment per build cluster, and so that's why it starts to become this management challenge.

H: ...can do that. So I know that what you're saying is true today for, I guess, Kubernetes, but in OpenShift we actually have one Boskos instance that is used by multiple build clusters. We added, I think, TLS support for that in October or something, but this works, and it is also a possible approach, so it doesn't have to be one per cluster.
J: Yeah, that's true, I don't think... yeah, I guess... actually, can you clarify: is it just that you have, like, login access, and that's just to Boskos itself, or is it actually at the resource level, where you can control which users or accounts have access to specific resources inside Boskos? No.
H: With that, and to discuss this, I opened the issue that's linked in the doc, and I already left some comments in there, and I would really like everyone, especially the Google folks that will end up having to be okay with this, to comment and express any concerns you have, so I can start with the Google doc and we can probably see if the concerns can be addressed.
G: ...instance, rather, in particular. I have two concerns. One of them is that we are heavily dependent on having all of the configuration centralized so we can batch-edit it, which we do fairly frequently, and the other one is the trust issues that you mentioned, in particular the fact that you would no longer need to actually have your job committed to get it to run. That said, I am mostly concerned about the management issues this causes for us, rather than the trust ones, I think. Great.
H: Yes, but there are also a lot of jobs that do not use this image and use something like just cloning and executing the tests or something, sure. I mean, it's not supposed to be the best choice for everyone in every situation, but I think there are some very good use cases for it, and that's what I would like to not be generally disabled, right.
L: Hey everyone, data and charts here. So we did do a contributor experience survey; I'm not sure if folks took that, but just to give context, we released it late last year into early this year, so technically it's like a 2019/2020 survey, but who really looks at dates anyway? It was really to get a capture of the pulse of the community for the programs and processes and some projects that we have in Contributor Experience, which, as everybody in this room knows, sometimes overlaps with SIG Testing and things like that. So I know Christoph was intentional about putting some specific wording and language in there around testing and infrastructure, so I wanted to show you some of those questions specifically, and also to show you where this stuff is stored and talk to you about the data itself, so that you can be self-sufficient and get what you need out of it, since, you know, we only have about 10 minutes or so and I don't want to go through all the data.
L: You're going to see there are so many charts in the public bucket that we have, so there's just not enough time to go through each one, and many of them aren't necessarily applicable to this group either; for instance, questions about the community meeting or mentoring, things like that. So, all right, I'm going to share my screen.
L: And see if this works, all right. Does everybody see SurveyMonkey, or charts? All right, cool. So some quick things about the survey itself: the survey was conducted on SurveyMonkey. We did that so we can get a global audience, and it is very heavy-duty as far as the data it comes back with and the analysis that it allows, so we can do pretty much whatever we need. This is also a plea to you all as SIG Testing: if you want to do surveys throughout the year for your crew, or do more testing-related and infrastructure-related questions on the contributor experience survey, like specific Prow questions, for instance, we're game for it. So anyway, I'm going to breeze through just a few of the questions on here; we had 25, so again, just not enough time, but this one just shows you who is taking the survey.
L: We had 230-ish respondents, and this just gives you a little bit about the personas of the people that took it. So, clearly, there are definitely a lot of folks who weren't yet members, but then there's about, I don't know, 50 percent or so that are in some kind of membership, reviewer, approver, or sub-project owner category, or another category that they self-identify as. And then this is a positive one that I wanted to share as well: are you interested in advancing to the next level of the contributor ladder? So many people said yes, which is heartwarming, because that means, yes, they want to build trust with you all. And then obviously there are the no's, which I did do some digging on, and a lot of those no's were from people who are already sub-project owners, so I can't really begrudge that: if you're already at the high end, you can't really go that much higher.
L: So anyway, that's just a little bit of background on the people that have taken this, and then here is a good one that I wanted to show y'all. We can also play with the different chart types here, but this is "please rate any challenges you've had with the contribution process", by steps. We can't see the whole words here, because they are cut off in the chart, but if it's five, that means they're saying it's the most challenging part, which is the orange; so the bigger the orange bar, the more challenging, and the green is "not challenging at all", so the thicker the green bar, the more people don't think it's a problem whatsoever. And the thing farthest to the right there is GitHub tools and processes, for the not-challenging piece, and then the most challenging, the thickest, would be, big surprise: debugging test failures.
L: We actually did go in and dig (I see some chats going up); we did actually dig into this, and I'm going to show you some Jupyter notebook stuff in a second, but we did dig into this part of the data for sure, as you'll see in the images, to see who was saying this, what personas, how many years of experience, and things along those lines, and, shocker, it was across the board. Reviewers had a slight edge of annoyance with it, but it was by, you know, a pretty slight sliver, so not necessarily totally noteworthy there. So again, I'll give you this data that y'all can dig into yourselves, but let me go and get through the other questions really quickly. And then here you can see what the other selections were, too, by the way; so, like, we had one that said "our CI", labels, crafted customized automation.
L: Maybe we can do better next time and say "Prow", or, you know, call the words out and things like that. And then this one is "do you agree with the following statements?", where one is strongly disagree and five is strongly agree, and the statements are down here (sorry, SurveyMonkey is super, super weird and heavy). One is: I understand enough about how Kubernetes CI works to be able to diagnose my own PR failures. The next statement is: when something is broken in my PR, I can read the comments from CI and understand why. The next: the number of test failures unrelated to my PR severely impacts my ability or desire to contribute. And then the last: there are too many notifications to be helpful when I open a PR. And this is saying that the most strongly agreed statement is "I understand enough about how Kubernetes CI works to be able to diagnose my own PR failures", but, as y'all can see in the weighted averages, it's like by hairs.
L: And then here is "what areas of Kubernetes do you contribute to?". Again, this is sort of a demographics question, and testing infrastructure is pretty high: fifty or sixty-ish people said that they contribute to it, which is good. And then another comment about this: compared to the 2018 survey, people actually are committing less code inside of core, and by core, you know, we mean the kubernetes/kubernetes repository, which shows that some of the efforts that are going on are being reflected through this contributor experience survey as well.

L: There are a ton of things that people suggested in here that are related to testing, and this is the freeform question of "what can we do to make things better for you?". I will get you all of those freeform answers; you all will see them as soon as we release the post and this data specifically, so you can see that for yourselves. And then I think that might have been it for y'all.
L: And then let me show you the rest of the stuff that I'm talking about. So this is what we're getting together right now for CNCF: what we're going to do is a blog post with pretty charts and graphics, and their folks are going to take it from here, and what I mean by "from here" is this: these are the images that were produced from a Jupyter notebook. We hired a firm to do this; we did it so that there was no bias. None of us touched the data or made any suggestions to the data or told any stories that we wanted to tell through the data; this is the data, and someone else did this. It's actually an Apache Software Foundation shop, and they're super awesome folks who already knew, you know, open-source landscapes and things like that, so they actually had a lot of fun doing this. But these are all of the charts that were produced from some of the digging, and again, these are public.
L: So if you have anything that you want to comment about the survey itself and want to see it get better for 2020, or, hey, you have additional testing questions that you think you want to add, etc., you can pile them on here for us, so that we can be useful to you next year. And then, again, the charts themselves are inside of the contributors@kubernetes Google Drive, so feel free to grab those there. I did share them with the chairs today, but they will be shared with the broader audience, including kubernetes-dev, I'd say within the next 24 hours; I'm just buttoning up and patching up a lot of this stuff now. So, any questions? I'm going to stop sharing my screen so I can look at chat for one sec.
L: All right, nope... oh yeah, Lubomir: "how do you..." [unclear] ..."not challenging at all". Any questions or comments about the survey data? The one question that we accidentally left off was probably a testing question; that was left off in error, and ironically 10 people did not pick up on it. It was the "how do you consume..."
L: "What areas do you contribute to?" was the same. I'm going to say, just from random knowledge in my brain, I think it was something like 15 out of 24 questions that were the same, I'm pretty sure it was something around there, including, here we go, the "please rate how challenging" question, which was kept directly verbatim so that we could do a multi-year analysis. So that's in there, and I think you're going to see what I see, which is not much changed: still a lot of the same problems, and a lot more new folks took it this time.
L: And if anybody has any... you can see those survey questions for yourselves; they are already checked into the repo. So if anybody does have any deeper analysis that they want to see, one, I can pretty much guarantee that it was done in that repo in our charts, so I can just hand you a chart; but you'll also have the Jupyter notebooks, which are in the middle of being checked into the community repo. So if you go to the community repo right now and look at the pull requests, you'll see one from Brian.