From YouTube: Kubernetes SIG Contributor Experience 20170614
B: Thank you, George. Alright, good morning, everybody. If you could please add your names to the attending section of the agenda. Okay, great. Is Myles Borins on the call yet? Hi, good morning. Are you ready for your demo, your part?
A: In a couple, okay. Let's see: are we, are we recording?
B: Thank you so much, Garrett, for taking notes. Okay, now handing it off to Myles. Okay, everybody: this is Myles from the Node.js community, and we thought that it might be useful and valuable to talk to Myles about his experience with governance in the Node.js community. And so our first question for you is: what is the decision-making process with respect to subcomponents of the project?
C: So let me see; cool, let me get a piece of paper so I can write things down. Are you interested more in technical decision-making, or in larger decisions, like how we implement the code of conduct, how we implement the rules, or how we make decisions about money?
C: So the Foundation has a board of directors, and that's kind of either the bottom or the top, depending on how you think about it. We then have a technical steering committee.
C: Also kind of directly reporting to the board is another committee that we spun up recently called the CommComm, the Community Committee. The Community Committee is responsible for... well, they're still kind of figuring it out, but a lot of it has to do with community-related work. So, you know, the board has money and is mostly focused on what's going on with the money.
C
So
if
we
want
to
take
money
and
invested
into
community
efforts
that
stuff
community
committee
is
responsible
for
now
from
a
technical
standpoint,
the
core
Technical
Committee
has
a
number
of
chartered
working
groups
that
have
spun
up
under
it,
which
have
kind
of
oversight
over
a
particular
parts
of
the
code
base,
in
particular
parts
of
our
infrastructure.
So
that
includes
like
a
builds
working
group
and
the
builds
working
group
is
responsible
for
our
CI
infrastructure
and
our
build
infrastructure.
C: We have a Security working group, which is responsible for handling CVEs and any sort of security reporting. Any embargoed security information is held by the security group, and the CTC also has access to the same information. We're in the midst of chartering a working group for release, which includes our LTS backporting process as well as our release process for the various versions. Streams, which is a fairly convoluted part of our code base, has its own working group. And you end up with something that kind of looks like this.
C: As far as an org chart is concerned, one of the most important parts of this is that the board has no control over the technical direction of the project. That is completely within the realm of the CTC. Even the TSC, to a certain extent... I think the TSC would be able to call decisions of the CTC into question, but I believe, based on our governance model, that the CTC's decisions are the CTC's decisions.
C: Now, the model has not been perfect, and we're in the process of trying to figure that out. As it goes right now: any change that gets submitted to core, someone submits a pull request. For that pull request we use what's known as the consensus-seeking model. We have about 80 collaborators with commit bits, and about 20 of those collaborators are part of the Core Technical Committee. Once a pull request is submitted, any collaborator can review it.
C: If one collaborator says that something shouldn't land, then we try to reach consensus within the pull request itself, and that consensus could be for it to land or for it not to land. If we're not able to reach consensus within the pull request itself, it can be escalated to the Core Technical Committee to review, and then the Core Technical Committee will try to reach consensus. If consensus can't be reached, there will be a vote. Generally, we try not to vote as much as possible.
C: Recently, we decided to delay a release in order to use a newer version of V8, because it had a new compiler toolchain; that went directly to a vote, and it was a somewhat convoluted process. But I can count on one hand the number of things that have been directly pushed towards a vote to begin with, and that was mostly just because we knew they were going to be contentious. For the most part we try to reach consensus in the issue.
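The escalation ladder Myles describes can be sketched as a small decision function. This is a simplified illustration of the consensus-seeking model, not actual Node.js tooling; the function and argument names are invented for this example:

```python
def resolve_pull_request(objections, pr_consensus, ctc_consensus, vote_result):
    """Decide how a PR is resolved under a consensus-seeking model.

    objections:    list of collaborator objections raised on the PR
    pr_consensus:  "land"/"close" if consensus formed in the PR thread, else None
    ctc_consensus: "land"/"close" if the CTC reached consensus, else None
    vote_result:   outcome of a CTC vote, consulted only as a last resort
    """
    if not objections:
        return "land"          # no objections: any collaborator may land it
    if pr_consensus is not None:
        return pr_consensus    # consensus reached within the PR itself
    if ctc_consensus is not None:
        return ctc_consensus   # escalated to the CTC, which reached consensus
    return vote_result         # rare: the CTC votes
```

Voting being the bottom rung is deliberate: the point of consensus-seeking is that most decisions never reach it.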
C: What we've done, kind of in response to that, is build a pretty broad testing infrastructure, which includes a pretty extensive test suite. I think at this point we're up to almost 90 percent code coverage on the repo itself, and we test on every single platform that we release for. Wherever we run CI, we run CI on every single PR. If you don't run CI on a PR and it lands, then you'll get some sort of passive-aggressive message asking:
C: "Why didn't you run CI?" But we also have a smoke-testing suite, which I did a lot of work on, called Canary in the Goldmine. With Canary in the Goldmine we can take any arbitrary SHA in the tree, or a branch, build Node with that version, run the test suites of the top 80 to 90 modules in the ecosystem, and then review the results of those tests.
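The real tool behind this is citgm (canary-in-the-goldmine); the core loop described, building a candidate Node and running the ecosystem's own test suites against it, can be sketched like this. The module list and the runner callable are stand-ins, not the citgm API:

```python
def smoke_test(modules, run_suite):
    """Run each ecosystem module's test suite against a candidate build.

    modules:   iterable of package names (e.g. the top 80-90 npm modules)
    run_suite: callable(name) -> bool, wrapping something like
               `npm install <name> && npm test` under the candidate Node
    Returns (passed, failed); failures are reviewed by humans, since a
    module's own flakiness can produce false alarms.
    """
    passed, failed = [], []
    for name in modules:
        (passed if run_suite(name) else failed).append(name)
    return passed, failed
```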
C: What do I mean by a "more liberal landing policy"? I mean in the sense that a commit can land on master in 48 hours with one reviewer. Because not everyone has to review everything, and because we're a little bit more liberal about landing pull requests, we have a more expensive testing process to catch any potential problems.
C: We also have, for all of our releases, generally for LTS releases, two-week RCs during that process. Actually, I should step back and talk about our LTS process, if you're interested, because that's a little bit more intensive. Everything that lands on master we cut, on a weekly or bi-weekly basis, into what we call our Current branch, which is the tip of tree: so, right now, version 8.
C: Sometimes that will push things longer. If there's anything that is controversial, it may actually take longer to land in an LTS release. Things that are perf-related generally take a lot longer because they can have unexpected side effects, so we let them bake a little longer.
C: Then, once a quarter, we do a minor release on our LTS branches. Generally most things that are minors don't land, but we review all the minors. We find things like, for example, the bunch of introspection pull requests that were minors that we landed on our 6.11 release, which went out last month. That was just more introspection into V8, stuff that APMs can use to offer better products, but that isn't really going to affect the bottom line for anyone.
C: [...running] the code base without this flag. And so this kind of cascading waterfall release means that, well, things may get through; they may even land in a release and break in a release on our Current line. But our LTS release line very, very rarely has regressions. I think I can count on one hand the number of regressions that we've had in our version 6 release line in its lifetime.
C: Well, we're in the process of figuring that out right now. At a recent collaborator summit we had a conversation around our current governance structure and some of its failures, so I think this will complement what we just discussed really well. The current governance structure has a lot to do with meritocracy, for lack of a better term, which I see as one of its sins. You come in, you show up to the project to lend a hand, you do a bunch of pull requests.
C: People recognize you, and you get kind of tapped on the shoulder to get a commit bit. Now, I'm not saying that that should change, but so goes the rest of the process: now that you're a collaborator, if you do a whole bunch of work, and people recognize that work and see that your insight into the project is important, you get tapped on the shoulder and nominated to the CTC. Then, over time, the TSC rarely but sometimes taps people on the shoulder and nominates them; to the TSC, for example, I was recently nominated.
C: This was after, you know, working on the project for about two years. It took me about two months or so to get nominated to be a collaborator, about another six to eight months to get nominated to the CTC, and about another year and a half to get nominated to the TSC. Now, the TSC is a group of about twelve people right now, most of whom are kind of artifacts from... I'm not going to get too deep into the Node Foundation here.
C: It sounded like: there was the joint Node project, and there was the fork; then they recombined, and most of the governance structure from the fork just kind of became the TSC, along with most of the leaders from before. So there are a handful of people there that were more like engineers, rather than technical-governance leaders, and they weren't really interested in the governance work, which is part of the reason why the CTC was broken out.
C: That way the TSC could focus on governance work and the CTC could focus on technical work. But then a lot of people just stuck around, because, well, why would you want to give away power that you have? And so right now we find ourselves in a place where what the TSC does is not incredibly clear, and there isn't really much motivation for people to move on from either the CTC or the TSC.
C: There are so many comments that if you weren't involved from the first minute, you need to literally take a day off work to catch up, and that's not really sustainable. I proposed a governance model that was essentially a TSC made up of chairs rather than individuals, of voting chairs, where each chartered working group would have one voting chair on the TSC. A working group could send as many people as they wanted to a TSC meeting, and the idea was to reach consensus.
C: In a way, they aren't right now: core can act unilaterally, and this is actually a lot of the pushback that I'm getting on this new model. People say, "We don't want people telling core what to do." So we're still kind of figuring that out. But the biggest thing that I'm trying to push towards in our new governance structure, and one thing that, at least from my experience, I would advise to all of you for a governance structure, is... there are kind of two key ideas.
C: One is that I want to scale working groups and projects, and not people. So at the top we should be moving towards efforts, because when you have individuals at the top, that responsibility is really hard and it's hard to cycle people in and out, whereas if you have a project at the top, people can cycle in and out of the project much more easily. The other is a focus on responsibilities rather than power. So this new model that we're talking about is not about a power structure; it's not about a hierarchy.
C: It's about responsibilities for parts of the codebase. What I would really love to see is that if someone from the community sees that core did something that was wrong, and they disagree, they should be able to go to a working group, reach consensus in that working group to bring something to the TSC, and hopefully create consensus in the TSC; but if not, they should know that there's actually a path forward to holding people responsible. This creates a path where someone who is not involved in the project can implement change...
C: ...in a matter of weeks rather than years. That's kind of the direction that I'm trying to move things in right now. As to your question of how we make these changes: it's not entirely clear. It's really, really hard, because there are so many stakeholders and there's just so much to talk about. So it seems like our best bet right now is actually maybe using something like Google Docs, rather than GitHub, to use a living-document format.
C: That way we can kind of clear things up as we're moving forward. But, to be honest, I think we're going to get the most work done in person, most likely. We're still trying to figure out the best way to just have these conversations, because getting all the stakeholders this affects into the room is really hard. Yeah.
C: I would say nothing but what I make... I'd like to try to burn everything to the ground a little bit right now. But realistically, I'm trying to think of all the things that I think are important, which would be a reason why you would make something immutable, and in my opinion those would be things like moderation or the code of conduct.
C: Those would be important to be immutable. Maybe some of our infrastructure; I'm not sure if that's the best example, but some of our infrastructure around release, like our release process, is fairly immutable. Like, we've made... actually, sorry, I'm arguing with myself in my head as I'm talking right now, which isn't the most helpful.
C: So our release process is still kind of moving around, in the sense that we're defining it. But I would imagine that once our release process is locked down, as far as when you can expect RCs and when you can expect releases to come out, that is something I would like to see be immutable, because I would like people to be able to consistently rely on it. But that's not exactly our governance, so maybe I'm not following the question exactly the way you want.
B: No, that's totally fine. So, you mentioned this earlier, but how do you identify merit within the community?
C: Generally, to me, a more important thing than technical prowess is trust. We can have extremely technical people, but if I don't trust them to handle PRs appropriately, or if I don't trust them with a commit bit on master, then it doesn't matter how capable they are. So I would almost edge towards replacing the word "merit" with "trust". Okay.
C
So
I
mean
behavior.
Behavior
is
a
huge
one,
especially
the
way
in
which
people
interact
with
other
individuals.
So
are
they
acting
with
empathy?
Are
they
acting
with
foresight?
Are
there
are
they
being
thoughtful
in
their
actions?
That's
a
huge
once
I
mean
it
is
like
how
people
are
carrying
themselves
and,
if
they're
being
thoughtful
in
the
real
image
for
interacting
other
other
stuff
around
like
like.
C: So we have kind of different layers. For a regular collaborator to just get a commit bit: we've given commit bits to people who have helped with docs, we've given commit bits to people who are helping with responding to issues and reviewing pull requests, and we've given commit bits to people who have submitted tons of PRs. So it's not exactly tied to any one thing. But I would say... is everyone on this call a Google employee, or no? No? Okay.
C: So, no: Google has this idea in their perf structure of consistently repeating what you're doing. It's not just showing good behavior; it's sustained behavior. That's a huge thing. It's not just doing something that's impressive, but sustained good behavior: sustained doing of the same things that are viewed as valuable, whatever they may be. And that could be technical...
C: ...that could be support, that could be documentation. That's kind of the bar we have to just get a commit bit and become a collaborator. And we've found that, over time... we've given a commit bit to over a hundred people now, and in the last, I guess it's almost three years now, we've only had one person where, at one point, we were like: "Oh no, what have we done? I can't believe we gave this person a commit bit."
C: "They are a problem." And actually, within a couple weeks of working closely with that person, they became one of our more productive individuals. So we've had some stress around it, but we've not really seen any problems. Now, for the Core Technical Committee, it tends to be people who are showing consistent value for parts of the code base, or parts of the project, that we don't necessarily have coverage of yet.
C: So we recently nominated someone for their work on streams, and someone for their knowledge of the V8 virtual machine, and I was nominated for my work on LTS and the release process. We try to keep the Core Technical Committee broad, so that when decisions are made, all the various aspects of what a decision may affect have a stakeholder there. The TSC is not really so clear, for lack of a better term.
C: So I want to say... I mean, the high-level thing that I would say would be malleability: being able to recognize things that are working and not working, and being able to be flexible around that, is a huge one. But another thing that I think would be really important, and it's something that we're trying to work on right now, is communication. As you break down all the problems into these smaller groups, how do you make sure that everyone within the project is aware of what's going on?
C: About two and a half years, I think. Okay, but it's been kind of changing throughout that time. The TSC spun out the CTC probably close to two years ago, but we have a structure for chartering new working groups, so in that time new working groups have been spun up and working groups have been spun down.
D: I guess I asked in the context of: I have a difficult time envisioning what concrete problems will arise with certain governance models, so it's useful for me to go back and look through historical material for context. So it seems like maybe the GitHub repo for the TSC would be a good place to dig through for the most history.
C: That's fair, yeah, that seems reasonable. Also feel free... I'll drop my email address in here; I just put my email address in the chat. If you have other questions, or if you just want to find 30 minutes to dig through stuff, or even if you want to come and audit some meetings, just let me know. Oh, that's another thing I should mention: all of our working-group meetings are publicly broadcast on YouTube, and there's a public calendar for when they all will be.
A: This was brought up, I don't know, two or three meetings ago, and then I had a separate document with another group's first contributions as well, and we posted them to the list. What we haven't done is look at it and see if there was low-hanging fruit, or tasks that we could break out of it. But I did give those URLs to the docs team yesterday, who are very interested in the feedback, specifically the bits about stuff being spread out over multiple repositories, and things like that.
D: So you're just going to have to trust my hazy memory. The fun thing is, I'm not sure that we walked away from our discussion at the dev summit with concrete action items. The issue here was: there are projects such as mungegithub, Prow, and Velodrome, which all seem to talk to GitHub. They all have different ways of interacting with GitHub, and they all have different features.
D: For example, mungegithub has caching built in, because it is most likely to hit the GitHub API rate limit, since it polls instead of having pushes sent to it. Velodrome, because it's interested in transforming a lot of data and storing it into InfluxDB, has architected a plugin-based transformation architecture. And Prow seems to be very minimal, trying to be architected to very clearly isolate between package boundaries. I think all three of these are great qualities.
D: Unfortunately, you know, they're in three separate code bases, and some constants, like the bot names and such, are copied between all of them. It would be great if we could share all of this in a single code base; whether that's realistic or not, I'm not sure. So I think a compromise in the short term is: we would collectively like to say that mungegithub should go away.
D: We think Prow's architecture allows for periodic execution of things, for things like repo maintenance: making sure all of the labels are consistent across repos, making sure that teams are consistent. I'm making stuff up now, but TL;DR: I think we'd like to obviate the question of whether or not we should merge all the GitHub code into one place by saying that all of this functionality should go into Prow. At the moment, whether or not Velodrome is a project that continues to exist is still an ongoing discussion.
D: I think I've heard from folks like Brian Grant that Sappho is awesome and is the way forward, but it seems like we have to rely on them to add new graphs and such, whereas Velodrome is a code base that we own, and it's open source, and if we wanted to add new metrics or new graphs to it, that's within our purview; and so there's some development going on there. So I think...
E: That being said, there's been a ton of investment into it, and things like this to make use of what's already running there: tools for reading the OWNERS files from our repo structure, all that functionality built in. Moving it will be a significant effort. I'm not opposed to it, but I think we need to develop a pretty strong plan and break it up into bite-sized chunks for somebody to do, because I don't think we can just ask someone to duplicate the functionality into Prow. I would love it if you could share the Prow doc.
D: It's almost as if he knew that the 1.8 items for contributor experience were next up on the agenda. I'll just dovetail that by saying we had SIG Testing discuss 1.8, our hopes and dreams for 1.8 and beyond in 2017, during our last meeting; I will post a link to that as well. Our goal is to have a follow-up discussion about that next week at SIG Testing, and then try to get folks to actually commit to things.
D: Two weeks from yesterday. So we'd actually have issues created and tagged with the appropriate milestone before v1.7 gets cut and is sent out the door. And I think that Prow's future and roadmap, and mungegithub's future and roadmap, were very much a part of that discussion. So perhaps you and I can catch up offline, and I will provide the appropriate links to all of that to make sure folks can take a look at it, if we want to chat about that later. Sounds good, Aaron.
E: It'll just take a little bit. I think in the past I had some ideas about things that were important to contributor experience; I talked about themes, and I relied on people to sort of review ideas. But I was thinking: we're a pretty new group, I think we've been around for less than a year now, and we could start brainstorming a little as a group and turn ideas into more actionable items.
E: So we could start trying to nail down some of the themes and things we'd like to focus on, from things like the new-contributor experience to whether we want to do more on the Velodrome and metrics side. I know getting that kind of visibility, in the press and into the project, is really important, and I don't think the end goal is to have them continue...
E: ...adding graphs. I think right now they're just helping get the integration up and running, but they'd like to be able, they want to be able, to pass it off to us, so that we can use their builder tool and perhaps add graphs ourselves without having to write as much code as you would for Velodrome. So yeah, like I was saying: things like additional metrics, things like the new-contributor experience.
E: If we want to do things along the lines of PR workflow, I think it's within our purview to review the policies set forth by the release team. I mean, the release team, I think, is capable of setting them, but we could start, you know, coming up with measures of success: are we actually achieving our goals in terms of visibility into the project? Lots of things like that. I'm open to brainstorming our own things to do.
E: [...] but internally now, as a project manager, I'm starting to think about the next quarter. So this was just a very, very pre-flight, very early start to thinking about the work that we want to do. Sure.
D: Yes. When I say labels, I mean the labels that are used on issues and PRs. There are over a hundred of them in the main kubernetes repo. Many of those probably should go away, and a subset of those labels should be propagated to all repos in their place. To do this, we already have code in mungegithub to accomplish it, and there's an open pull request to have this added to Prow.
D: Another thing that would be really cool is if we could have the bot enforce labels, to the point where it deletes labels that are not in its list of well-known labels, so that humans can't go through and arbitrarily add labels through the UI. I don't have it in front of me, but there's one issue that hilariously has priority/critical-urgent, a priority P0 label, a queue-blocker label, and all sorts of other stuff: just add as many labels as you can and see...
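The enforcement idea reduces to comparing an issue's labels against an allowlist. A minimal sketch follows; this is illustrative only, not mungegithub or Prow code, and in a real bot the deletions would become GitHub API calls:

```python
def enforce_labels(issue_labels, well_known):
    """Split an issue's labels into those to keep and those the bot
    should delete because they are not on the well-known list."""
    keep = [label for label in issue_labels if label in well_known]
    delete = [label for label in issue_labels if label not in well_known]
    return keep, delete
```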
D: ...if that makes it go faster. So I think actually documenting what all of these labels are would also be helpful there. Another hope and dream might be: there are some labels that are used only for PRs, and there are other labels that are used only for issues, so perhaps if we prefixed the labels that are used on PRs as such, it would be clear what you're filtering for. There are other labels that are used across both; for example, the sig labels are used on both issues and PRs right now.
D: Labels used across both. I'm not sure if it's necessarily this group's purview or the community's at large, but we have over a thousand issues right now that have the needs-sig label applied to them, and I and some other folks have been going through on a daily basis, triaging a couple of issues each morning, to try to bring that list down. If anybody here is knowledgeable enough, perhaps using the triage doc as an example: can we appropriately assign these to the right SIGs and get them labeled appropriately?
A: Looking at the 1.7 document, one of the goals is to verify that all the bot commands are of the same format and that they have nice links that explain what they do. Is there any follow-on work there that can make that nicer, or are the bots basically good enough for now?
A: I'll just write down here: improve automation across the project to promote overall velocity. So this is stuff like pinging inactive reviewers and reassigning PRs so PRs merge faster, and re-enabling machine learning to automatically triage issue labels. Is that something we're just going to churn through, Aaron?
D: It's a scary thing, yeah. Here's a concrete goal this SIG can take for 1.8. We need to define, and this kind of ties into what I was chatting about in the chat: one of the things that came out of the Leadership Summit is that one of the very first items the steering committee is going to address for the kubernetes project is to help each SIG concretely define what it owns as part of its charter. Honestly, this means every SIG...
D: ...is going to own all the concrete subparts of the project, like repos and labels. For some it's really obvious: for example, sig/scalability should go to the scalability SIG. But how about area/performance? Or what about area/gke? There's actually not a SIG that's in charge of GKE right now, right? Those are things that will need to be sorted out.
D: So a good question for this SIG is: what repos does this SIG want to own? What code does the SIG own? What documents does it own? What labels is the SIG in charge of triaging, and so on? So just coming up with that charter should be a v1.8 thing. That's a good one. I believe the steering committee is aiming for a vote by the beginning of August, so I think that lines up roughly: by the completion of v1.8 we should have a charter. Okay.
D: I've got the rest of the things on the agenda, and I've run even further over time, so I apologize for running over here. Briefly: GitHub nested teams came up as this cool new thing on the mailing list. I can respond on the issue that's linked there, or on the mailing list, but as I said, in fact, yesterday: I think it goes against the purpose of our teams.
D: Although we have a ton of teams, a ton of GitHub teams, the purpose of them was to make sure that oversubscribed individuals could subscribe to just the design-proposals team, for example, and then they wouldn't get notified for all of the test failures and builds and flakes that happen. If we were to hierarchically organize things and somebody mentioned sig-foo, it would hit all the teams underneath it, which would defeat that. Oh, okay, yeah.
D: Having said all of that... oh crap, we're being recorded; they'll get this recorded. My pet peeve is that I'm not actually sure how effective all of those teams are. I don't really have the time to do it at the moment, but a little interesting side project of mine, if somebody wants to take it on, is: how often have all the different teams actually been used? I know that not all of the SIGs have all of the teams.
D: I don't know how much traffic each individual team is getting, and, you know, how effectively somebody from that team magically shows up on a PR when they get notified. I can speak from personal experience for SIG Testing: although we have sig-testing-proposals and sig-testing-test-failures (what does that even mean?), we mostly get pinged on the sig-testing-misc team. That's pretty much...
D: ...anybody from outside generally just uses that team to notify us. So there may even be a question of whether those teams are worth it. But anyway, TL;DR: I'm not sure. GitHub nested teams sounds really cool, but I don't think it's of much use to us right now; we can have further discussion on that issue. Next thing, and maybe this is something we could do for 1.8 or beyond: the sigs.yaml file that has been added to kubernetes/community. I've been a little cranky about this.
D: ...on the SIG channel, and for my attitude I apologize, but I think my main complaint was: it was put in place, and then a change was merged that overrode the content of a lot of SIGs' READMEs without giving anybody a heads-up. There was no notification formally sent out that said: this is when it's going to happen; if you have custom content you don't want overwritten, please add it here. So I think...
D: ...if this thing was the thing that was driving a change, we want to make sure that we broadcast those sorts of changes ahead of time. As a result of this, SIG Release never noticed that they should add their info into sigs.yaml, so theirs really hasn't been touched yet. I have a pull request I'm working on to try to add their info to sigs.yaml and get their README generated, and to undo what was blown away for SIG Testing. There's an issue open by SIG Storage...
D: ...to do this. So maybe every SIG might want to just double-check and make sure that custom content wasn't blown away. There is a problem: I couldn't really find a document that describes, okay, now, in this wonderful new future where we have a sigs.yaml file, what is the process by which a SIG updates its README? I have to know to go look in the generator subdirectory of the kubernetes/community repo, which doesn't seem like the right place, and I'm not sure what the right place should be...
D: But if this is the way forward, we ought to document that this is the way forward, because I think right now a human being can still go manually update their README on their own, and it won't get overwritten... I'm not sure which way... I'm not sure. Which brings me to the next point: I see there's a travis.yml file in the repo, but I'm not sure if things are set up right now...
D: ...specifically for things like running unit tests on the doc generator and validating the sigs.yaml file, and so on and so forth. Because I do think having one canonical place, to keep things DRY, to hold all the info for SIGs, would be fantastic for automation. Which brings me to my next point: we have some naming-consistency issues. Maybe some of you have noticed this.
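What the generator does with sigs.yaml can be sketched roughly as follows. The field names here are illustrative; the real schema and templates live in the kubernetes/community generator directory:

```python
def render_sig_table(sigs):
    """Render a markdown table of SIGs from parsed sigs.yaml-like data,
    the kind of content the README generator would regenerate."""
    lines = ["| Name | Label | Leads |", "|------|-------|-------|"]
    for sig in sigs:
        lines.append("| %s | %s | %s |" % (
            sig["name"], sig["label"], ", ".join(sig.get("leads", []))))
    return "\n".join(lines)
```

Regenerating from one canonical data file is exactly what makes out-of-band README edits get blown away, hence the complaint about broadcasting such changes ahead of time.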
D: A great concrete example: we can't have our Slack channel named sig-contributor-experience, because that's longer than the 21-character channel-name limit in Slack. However, the directory is named sig-contributor-experience in the community repo, and the label that's used to triage issues on GitHub in the main kubernetes repos is just contributor-experience. So we either need to normalize everything back to something like sig-contribx, or we need some kind of lookup table, and maybe sigs.yaml could play the role of lookup table for normalizing this stuff.
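The lookup-table idea amounts to one record per SIG carrying its name on every surface, so tooling looks names up instead of deriving one name from another. The entries below are illustrative, not the actual sigs.yaml schema:

```python
# One record per SIG; the value for each "surface" is looked up, never derived.
SIG_NAMES = {
    "contributor-experience": {
        "directory": "sig-contributor-experience",
        "label": "contributor-experience",
        "slack": "sig-contribx",  # Slack caps channel names at 21 characters
    },
}

def name_for(sig, surface):
    """Canonical name of a SIG on a given surface (directory/label/slack)."""
    return SIG_NAMES[sig][surface]
```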
D: There are a number of other areas where we have normalization issues. I opened an issue in the community repo, and I'd like to just encourage folks: if you notice places where the directory name doesn't line up with the label name, which doesn't line up with the Google Group name, which doesn't line up with the Slack channel name, please add it to that issue, or link it to that issue, that sort of thing. And that's all I've got.
B: Thank you, Aaron. I definitely relate to the naming consistency; it's like always getting something in GitHub and asking, what is this? Yeah, so I definitely think those are all things that we need to address. Thank you for stopping by. All right, we're over time, so we're going to call it a day. I'll see everyone in two weeks.