From YouTube: Kubernetes Community Meeting 20160211
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
Summary -- Pangaea demo, SIG reports from Cluster Ops and #AWS, release automation and documentation team introductions, a 1.2 update, and 1.3 planning.
A: Click — this is a public and recorded meeting of the Kubernetes community. Today is February 11th, 2016, and these videos are posted on the Kubernetes YouTube channel, as will the notes be posted on the kubernetes.io blog. So let's jump in with our demo. This is a toolset that came through, and we had a little bit of internal discussion going, "wow, this is super nifty, we want to see more." So that's how we came to invite Pangaea to give us a demo, and we thought you guys might be interested in it.
B: Am I on? Like that? Perfect. So, OK: Pangaea started as point-and-shoot for Kubernetes. The idea is to just get Kubernetes on — to configure a bunch of things and have Kubernetes running whatever you want. Just to go into the motivation a little bit: we've been working on a product — forscot.io — which I guess you guys can check out.
The underlying requirement, if we wanted to make this a reality, was to have Kubernetes deployable everywhere, right? So Kubernetes should not depend on what infrastructure you're using, especially on local development machines, because I would like to test my entire microservice application as a whole, and not just images one by one. So out of this effort emerged a sort of collection of scripts, really, which is what became Pangaea, and the idea is that it should be the easiest way to get Kubernetes running for application developers — not for anybody who's...
...you know, working on Kubernetes itself, but for the final application developers. So application developers will expect to have easy ways to configure things between development and production; they would expect to have things like file mounts; and they will certainly expect the entire application to behave in the same way on a local machine and on different infrastructures. So, to be specific, in development mode —
The demo that I will take you through is going to be creating a simple nginx container that serves static HTML — as in every single demo, I think — and the idea will be to make some live modifications in development mode, using file mounts for that, and then build that image and deploy the pod to GCE as well. All the steps we are going over are available in the workflow docs on the GitHub repo too, so you can use those as the reference. Let's try taking it over. Hi.
C: Okay, so yeah, what I'm going to be demoing is, as we discussed: we'll build a sample application. We will have an nginx container serving some static HTML. So, how we use Pangaea — OK, by the way, all of these steps are well explained in the workflow docs — I just start off by creating a demo project.
And here we run the cluster bring-up. What this does is: it downloads a CoreOS image, sets up a VirtualBox machine with that, installs the Kubernetes binaries into it, and starts up a single-node Kubernetes cluster. It is going to take some time, with the images and everything involved in the download, so I'm just going to skip into a folder which I already set up.
So it's already running, and I have already added an nginx app here. As I show you, I have nginx — it just has a Dockerfile. If you look at it, it just says FROM nginx and copies the HTML directory from the current directory in. And one more thing we have implemented: since you can have different configurations for the Kubernetes files — the rc.yaml files — we have made a templating thing.
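The Dockerfile described here would look roughly like the sketch below. The copy destination is an assumption based on nginx's default document root, not something taken from the demo itself:

```dockerfile
# Serve the local html/ directory with the stock nginx image.
FROM nginx
# Destination is nginx's default docroot (an assumed path here).
COPY html /usr/share/nginx/html
```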
We use Jinja for templating, so that we can use different configurations for production as well as in development. So we are using mount paths, so that the working HTML directory is directly served from the host itself. And as for how we deploy it — I have set up all the files here, so let's go over it once more.
What we are doing is: we have made another tool, called kube-t, to template the rc.yaml files. In these configurations it reads a file of templates; it says which files have to be templated — the rc.yaml has to be templated — and that the environment is development. Now we have the rc.yaml files generated — an nginx rc.yaml — so we will be using this file to start the replication controller.
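A Jinja-templated replication controller along these lines might look like the following sketch. The file name, labels, paths, and the `environment` variable are illustrative assumptions, not taken from Pangaea's actual repo; in development the HTML directory is mounted from the host, matching the mount-path behavior described above:

```yaml
# nginx-rc.yaml (Jinja template) -- hypothetical sketch
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
{% if environment == "development" %}
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        hostPath:
          path: /pangaea/nginx/html   # host directory; assumed path
{% endif %}
```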
C: So if I change my index.html — I will just modify it this time — we can see, there, it has been changed. So that's the local setup: rapid local development is possible by mounting files. Now the same interface can be used to deploy it to GCE also. All we need to do is set provider equal to gce, as well as give your GCE project ID and instance name, and there are two more preliminary steps in the README, which create the persistent disk and an external static IP address.
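The provider switch described here can be pictured as pseudocode like the following; the variable names and the bring-up step are hypothetical, so check Pangaea's workflow docs for the real interface:

```shell
# Hypothetical sketch -- not Pangaea's actual variable names.
PROVIDER=gce             # local VirtualBox setup is the default
GCE_PROJECT=my-project   # your GCE project ID
GCE_INSTANCE=pangaea-1   # instance name to create
# ...then run the same bring-up command used locally.
```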
This gives us additional features, so that we can bring down the VM, destroy the VM, and create it again without any change in your underlying disk or external IP. We have one more provider right now as well: we have made a provider for Azure, so it is also highly extensible. And here, gce/upload.sh and the init step — we do that in the initial setup, because we need to set up the SSL certificates and send them over to the VM.
B: Hello — ah, now we hear you. All right, thank you, Fred. So what we've noticed is: when we were writing applications and setting them up on Kubernetes, and multiple people were collaborating and whatnot, every developer sort of needs their own setup, right? And in this microservices thing that you have — where, you know, somebody's on the database and somebody else has some other microservice — there are three or four developers and everybody's working on the same thing.
You need to have a big project that sort of contains the controller files — the Kubernetes controller files — that have references to all the microservices. I need to run them locally, so that I can iterate on them as I want, and then, once I have the code set up and running locally, I would like to push it to staging, which can be a Kubernetes cluster, and then eventually push it to prod.
Another important thing here is that when I, as a developer, am deploying onto the cloud or whatever, I want things like load balancers to be automatically created, with a static IP and whatnot. So there's a little bit of scripting we've done where we just included all of these things that need to be done by developers all the time — like creating the load balancer, as well as opening up the firewall ports — because otherwise you waste time creating the load balancer.
So this project is just maybe a few weeks old — for the record, maybe two months old — and we now have to start doing the multi-node setup as well, with CoreOS and whatnot. One of the problems that we faced with CoreOS was that VirtualBox does not support shared-folder mounting on CoreOS, so we had to use NFS, and that does become slow on local developer machines. That is one of the challenges that we've encountered; I think that's more a CoreOS-on-VirtualBox problem.
Another problem is that on the single-node setup, the average CPU consumption seems to be twenty percent all the time, and the various agents are consuming it. We're not sure if this is normal or expected behavior, or if this is just because we were putting everything on a single node — and I believe this setup is supposed to be fairly lean. So those are certain things we're grappling with; if anybody has any insight on whether we should be worried about it at all, that would be nice.
A: ...and I'll send out an announcement email. What I'm trying to do first is collect efforts and, you know, sort of do an inventory. So if you're interested — you're not committing to a lifelong ball and chain with Cluster Ops — I'm just trying to find people who have interest. And also, if you're aware of cluster-ops efforts, I'm trying to build an inventory up, so we can make sure we represent people.
H: Indeed. I don't know if Mackenzie's here, but Mackenzie and I are kicking off a SIG AWS for people — that one is the special interest group for Amazon Web Services: deploying Kubernetes on AWS, and adding support for all the various pieces, or equivalents to GCE services, on AWS. That will be SIG AWS. There's a Slack channel that's already seen some activity, and we have a mailing list that has not yet seen any activity but shortly will.
A: Fantastic. I've also been seeing more artifacts that say Azure, so at some point we will have to do a SIG Azure as well — fun stuff ahead. Alright, so I promised a bunch of introductions of people from the Google team who are focused on a lot of the things that you, as a community, ask about. So I'm going to introduce David McMahon first, who is working on release automation here, which is slightly different from some of the questions that have been coming up about release shepherding, so I'll —
F: Can everybody hear me okay? Yes — hi, I'm David McMahon. I was recruited onto the team to help out with release automation; I started on the team at the beginning of the year. I've helped previous groups here automate their release processes, and that's what I wanted to do here as well. The release process right now is heavily documented, but not so much automated.
Without deep-diving here — I haven't developed a whole lot of this yet — at a high level, the kinds of things I want to do are: have a clean separation between release and build (right now it's a little muddy), and generally make it more software-driven versus documented, because documentation can be interpreted and modified on the fly, and we don't want releases to be interpreted and modified on the fly.
This is especially true when multiple people are actually doing the releases: you can end up with different, inconsistent releases at that point. So the best way to do a release, obviously, is to take the human element as much out of the equation as possible — so: policy, enforcement, process.
So I haven't actually checked anything in yet, but you'll be seeing it coming through, because this is going to be highly transparent, both internally here at Google and, obviously, outside to the community. So you'll see exactly how releases are done through software versus docs, which can be —
One of the other things that was asked — Sarah pointed this out as well — was what it is that I'm doing versus some of the concerns that were raised in terms of shepherding. The actual shepherding and controlling of those releases I really want in the hands of the stakeholders: that's the principals, the leads on the team internally and externally, as well as TPMs and others.
E: I'm here, if you guys can hear me — yes? Yeah. So basically, I will still continue to sort of, hopefully, drive and push the project on the whole towards dates and towards releases, and what Dave is working on is much more of the technical infrastructure and background for how code is actually built, packaged, released, etc. So that's sort of the difference between our roles. That said, obviously we work very closely and have the same goals.
I: Cool, sounds good. I would love to see you at the next testing meeting.
A: Alright, let's do the next introduction. So next up is documentation: the Kubernetes website redesign proposal, and, more specifically, an introduction of John Mulhausen, who has joined the team here at Google and is working on cleaning up — making easier and more transparent — the docs and website and such. I'll let him go on at great length, because he does it so very well. John?
J: Right, so I just joined the team on January fourth, and right now I'm working with a creative agency called The United Creations, who are great, on a complete redesign and rebuild of the website. What we're trying to do, as far as the documentation workflow goes, is get out of the business of having a lot of munging scripts generate content.
What we're aiming to do is handle the things the munging scripts were doing with native Jekyll functionality, and what that'll mean is we can keep doc editing very light, and staging very light — in other words, we can get pretty close to WYSIWYG. So my goal, for those who are familiar with GitHub Pages technology, is that you should be able to fork our site, and it'll have the docs and the site and all the assets for the site in that fork.
It would stage at a URL up in the cloud, and you would see it, and this enables you to make changes live: as soon as you hit save on something, either up on github.com or on your machine, you'll be able to see the results of those changes right away — no scripting necessary, no having to install Go. It basically just makes it possible for everybody to contribute in a very lightweight way.
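With plain GitHub Pages, any Markdown file with YAML front matter is rendered by Jekyll with no extra tooling; a doc page in such a fork could be as small as this sketch (the layout name and title are illustrative assumptions, not the site's actual values):

```yaml
# docs/example.md -- front matter only; everything after the second
# "---" is ordinary Markdown rendered by Jekyll / GitHub Pages.
---
layout: docwithnav   # hypothetical layout name
title: Example doc page
---
```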
If you wanted to do all your doc editing up on github.com, you wouldn't even have to download Jekyll — or download any software at all. It's just as simple as hitting some buttons on a website, and, yeah, I'm trying to get us there. The new website also looks really nice; it's a little more polished.
We're also trying to work towards exciting things and announcements at KubeCon, which I'm working with Sarah on. So, yeah, that's it for me, really. I guess, if anyone was worried about quality — which the munging scripts were kind of enforcing through automatic means — we are going to have pre-submit checks that run on your fork.
That's still going to be where the scripting happens, but as far as authoring goes, I want that to be really light. I think everybody should be able to fork the site and just make some edits. It's just Markdown, after all, so it should be as easy as editing a text file from now on, when we're done. So that's me.
E: Yeah, so I want to talk mainly about 1.3, but it's worth checking in on 1.2 as well. So, 1.2 — the big news-ish is that it was feature-complete as of a day or two ago. That does not mean we are a hundred percent done with every feature.
A: Let me interrupt you to also mention the test flake surge — the testing-flakes surge. The surge at this point is over, but the goal is not to do these surges regularly; the idea has always been to do this once and then maintain it. So, as one of the changes in the last few weeks, at least internally at Google, we are making flaky tests your first priority — it's a P0.
...and help with testing if you want to. And then the last pitch here is: if you want to get your pull requests through faster, come help fix tests — we get to say that too — because our flaky tests are down by seventy-five percent at this point, and we are continuing to push on that. So that was just an update on the test surge and the path to 1.2. So, TJ, if you want to jump on to 1.3 — or if people have questions about 1.2, maybe first.
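The surge's working definition of a flake — the same code sometimes passing and sometimes failing — can be sketched in a few lines. The two stand-in "tests" below are illustrative, not real Kubernetes tests:

```python
import random

def run_repeatedly(test, runs=20, seed=0):
    """Run `test` several times and report pass/fail counts.

    A deterministic test passes always or fails always; mixed
    results flag it as flaky.
    """
    rng = random.Random(seed)
    results = [test(rng) for _ in range(runs)]
    passed = sum(results)
    return {"runs": runs, "passed": passed, "flaky": 0 < passed < runs}

# Stand-in tests: one deterministic, one that fails roughly 30% of runs.
def stable_test(rng):
    return True

def flaky_test(rng):
    return rng.random() > 0.3

report_stable = run_repeatedly(stable_test)
report_flaky = run_repeatedly(flaky_test)
```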
E: Yeah — so, as was commented, the milestones that we have in GitHub for v1.2 and v1.2-candidate — those are actual milestone tags in GitHub — had essentially been left for dead and bit-rotted during 1.2, but there's been a decent amount of work to triage what's in there in the last week or so, and those milestones are now much closer to representing the work we think we have left for 1.2.
So, as you find more serious bugs that you think need fixing, we'll work to get them into those milestones, drive those down to zero, and then release 1.2.
I: Hi, this is Aaron Crickenberger. Just to put some numbers to that: I think two weeks ago, when I started asking about the state of that milestone, the issue count was ballpark 150 to 160, and sixty-six percent of the issues in total were still open. We're now down to fifty-eight percent of them open, but the total issue count has doubled over the past two weeks, because it seems like the full scope of work has been more accurately represented.
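As a sanity check on those ballpark numbers, the arithmetic works out like this (figures are the approximations quoted in the meeting, not exact issue counts):

```python
# Rough arithmetic behind the milestone numbers quoted above.
before_total = 155                 # "ballpark like 150, 160"
before_open = round(before_total * 0.66)   # 66% open two weeks ago

after_total = before_total * 2     # "total issue count has doubled"
after_open = round(after_total * 0.58)     # 58% open now

# The open *percentage* dropped, yet the absolute number of open
# issues grew as the full scope of work surfaced.
```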
E: So the original proposed plan was to close as much as possible in the next two weeks — week and a half — then to do the branch, and then just finish the final trail of bugs in the branched 1.2 and release. That would be three and a half weeks away. I do agree that looks relatively aggressive, given where we are today. I don't think anyone wants to ship a 1.2 that's buggy or not complete, so there's nothing magical about the originally proposed release date.
I mean, I think it would be the opposite: if there are still bugs in the 1.2 milestone, and they are serious enough that we know they need to be in there — they're not the kind of thing we feel comfortable shipping with, or moving to 1.3 — then I think we will delay until we fix those things. As you say, with the numbers, we're sort of just getting the first accurate count of things to do this week, and, you know, it's software.
My first observation was: if I look back, we were doing a lot of the planning for 1.2 during the end of the 1.1 milestone, sort of at this same point, and, as you said, there seem to be a lot of people really heads-down on finishing up the milestone. So my first proposal is to punt on 1.3 planning until 1.2 is fully released, and I'd love to hear feedback on that.
J: We sent out a survey for the backlog features for 1.1 — I don't know if anyone plans to do that for 1.3 — but we can start that process already, I think, for what big features people are looking for. We have a list of things that we are trying desperately to get into 1.2, and assuming those things get in, it should at least be clear what is left out there that couldn't make it — for 1.3, I think.
E: Totally — Sarah and I were talking just face to face the other day about how we do this 1.3 thing and improve from 1.2, and yeah, like you say, Brian, there are a whole bunch of features that everyone has off the top of their head, that they can probably name, that are very likely to be in 1.3. So we already have the start of an idea.
I think where we struggled for 1.2 was defining the things that absolutely must be in there versus the things that we'd love to have but that maybe shouldn't block the release of a minor version. So what I was thinking is: once 1.2 is out and we feel good about that release, we have people go off for a week or two and — individually, or within their companies, or however they want to plan — think about probably two sets of things.
One would be the set of features that you think you'd like in 1.3 and that you can contribute to — that you or your teammates will be working on. And I think there probably is another list, which is a wish list: a set of features that you would love to see but that you don't think you'll be committing to work on. Yeah.
J: ...that we want to land in the release, or into head. So, you know, it would help us tremendously to understand what things other people are trying to get done, and in what time frame — so 1.3 might be the time frame, sure, but I know that some people have release deadlines that don't properly line up with Kubernetes's. So if your deadline is different from 1.3 — either earlier or later — and you still need to get it into head, that would also be very useful for us to know. Right, so, ideally —
A: This will end up generating, functionally, a per-contributor or per-contributor-group backlog list of their own: these are the features we want, these are the timings we think we can make, this is what we're committing to. And then there's sort of the "we think this all needs to happen, but we need to figure out who's doing it" list, which needs to be prioritized by Google and the community collectively. And I think that also leads to the next point — I may have seen a preview of this; TJ, that's yours, I'll let you go.
E: Yeah — I'm imagining that, you know, after the release we take some time, put some thought into that, bring those lists together, and then maybe have a community meeting discussion specifically around those two sets of lists: the things that people will contribute to, and the things that people want but maybe don't have time to contribute to. And then, I think —
The last thing is, like you said, Brian, with the release, and whether or not things can be in head or be in the release, or land before or after: we have chosen this sort of three-month release cycle, and that was, you know, some early feedback from the community between 1.0 and 1.1 — there's nothing that keeps us wedded to that.
So I don't have the answer yet, but I'd like to improve that. I think there could be a world where we get much better at test, we get much more rigor into the way we build Kubernetes, and we get to the point where we say head is mostly stable — and so a release is just two weeks away at any given point, and we don't have to have these sort of cutoff dates that might be pretty far in the future. I think we'd probably like to get there, but I'm open to feedback and discussion. Yeah.
A: That also leads to an interesting point about more rigor around new features. So even as we have these new lists of what you, the community, will commit to doing, we want to make the process of getting a feature in — and discussing it in pull requests — as smooth as possible, with things that look like very lightweight, user-focused PRDs: what are you trying —
What problem are you trying to solve, from a user perspective? And then, if the discussion starts to get longer than "hey, why don't you change this thing" — "oh, then it looks awesome" — or something, you know, relatively lightweight within the whole of the discussion about the PRD, then we need to look at having design review for the more complicated features. Just so we can, as a group, come together and decide the right way to implement something, on these features.
I think things in 1.2 then got mired in discussions, because someone submits something a little bit more fully baked, and, you know, other groups or the leads here would have done it very differently — so we want to head those off by having the conversations earlier. So, to that end, within 1.3, I would love to, as TJ described, get those lists from people, synthesize them, discuss them as a community, and then start moving toward a lightweight PRD process as well.
G: So this is Rob Hirschfeld — I'll throw my camera on. This is a thorny problem; I know from watching OpenStack for a long time — they've really struggled with this. I think you're right to try to do something up front and make sure that that's actually an authoritative source, because doing it after the fact — putting it back in after the fact — is really tricky.
One thing — and I don't have a good answer, but I want to throw some thoughts out there, if that helps — is that if you segment it too much, to make the lists manageable, then you're going to end up with, you know, a lot of cross-project or cross-component dependencies. And if you don't do that, then people can get lost in what those priorities are. So I see nodding.
A: That actually leads to one other point I have, which is: we are still trying to pull together a contributor summit. I have been dogging the VP level here for my budget, so as soon as I get budget, I will be able to make a couple of date proposals. My preference would be roughly the first week of April, but we shall see if we can make that happen.
I think that's an open question; we're happy to work through it and keep coming back to the community. If that doesn't feel like enough conversation or enough engagement, I'm open to trying to adjust it. I think it goes a little bit faster if we try to come up with an idea and put out a straw man, and you guys go "yes, and" or "no" — but preferably "yes, and". Yeah.
G: From that perspective, this is one of the things I argued for in OpenStack, and it became very hard, because somebody would ask for a feature but not know which manager to talk to in order to get developers to help with that feature. So transparency on the relationship between the proposers of a feature and the people doing the work would, I think, go a long way.
A: That's a really good idea, and it actually makes more concrete some of the work that I've been trying to figure out, which is: how do we grow community members into leads of some stature for different components, possibly, or different areas — basically growing the next set of leaders, as Google moves to be more "a large company contributor" as opposed to "the large company".
J: So as people develop expertise in parts of the code — and we do need to foster people in building that expertise — we need to add them to the automatic PR assigner; we need to make sure that people know they can be assigned code reviews for that, and the distinctions that go with it. Yes, yes.
I: I would love to see a concrete action item out of this meeting: a documented process on how one gets promoted into reviewer status — you know, what the process is to even get added to the kubernetes org, which is a pain point for some of the SIGs who use team membership as a way to notify the rest of the SIG of issues of interest. Yeah.
Sort of, well — you know who on your team has contributed to a PR, and whether they sort of knew that it was hygienic, that sort of thing. I mean, I appreciate the motivation behind that — that there should be a process of vetting and whatnot — but it just needs to be documented and in the open, so we can start to use this a little bit better. Yeah.
J: ...considerations, yeah — we just haven't had time to think about it; we've just been so overloaded. I just have to say, sort of qualitatively and somewhat quantitatively: you know, if someone has had a hundred PRs merged, that's a pretty good indication that they're pretty committed to the project and pretty familiar with it, especially if they've worked in a specific area for most of that. Then, you know, I have no trouble adding them to the org, for example, because then I can assign issues and code reviews to them. So there's both.
J: That's just absolutely untrue, Brian, right? There are plenty of people who review PRs and can put an LGTM comment on, and then they can ask a lead on the project to add the label — and we can automate that, and we can do other things. Or we can figure out how to control who can click the merge-PR button even if they have write access; that's something we're looking at doing, and that would be easier. But we can automate the application of the label based on an approved set of reviewers, for instance.
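The label automation being discussed can be sketched as a small check: did an approved reviewer leave an LGTM? The reviewer list and comment format here are illustrative assumptions; the real Kubernetes automation was more involved:

```python
# Hedged sketch of "automate applying the label": scan comments for
# an LGTM from an approved reviewer.

APPROVED_REVIEWERS = {"alice", "bob"}  # hypothetical reviewer list

def should_apply_lgtm(comments, approved=APPROVED_REVIEWERS):
    """Return True if any approved reviewer commented 'lgtm'.

    `comments` is a list of (author, body) pairs.
    """
    return any(
        author in approved and body.strip().lower() == "lgtm"
        for author, body in comments
    )

comments = [
    ("carol", "lgtm"),   # not an approved reviewer: ignored
    ("alice", "LGTM"),   # approved reviewer: counts
]
```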
A: We are also working with GitHub to get around these limitations, so that's another thing. We have no minutes left — I was going to say, I had a whole minute, but we have no minutes left and are about to lose our room. So I will see most of you next week, I hope, and we will continue this discussion and others. We can also continue this discussion on the mailing list or in Slack, so I will see you all there.