From YouTube: Kubernetes Community Meeting 20160623
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
Demo of UpUp; 1.3 release update; 1.3 release events (June 27)
A
Good morning, this is the Kubernetes community meeting. Today is June twenty-third, and we have a whole lot of fun ahead of us. It may be a slightly short meeting, so if you have things you want to discuss that aren't in the agenda, please feel free to either add them to the agenda or send me a note if you want to, and we will hit notices at the end. And we will go straight now to Justin. So tell us all about the new kube-up — and there is also a new Kubernetes Cluster Lifecycle special interest group that I have seen getting busy in the last two or three days, so if you know more about that, you can share.
B
Kube-up has gone into maintenance mode, so we're not going to be adding a lot of new features to kube-up; if there are any bugs, we'll obviously fix them, but instead we're going to be looking at revamping the installation process, hopefully making it all a lot simpler. My effort in this regard is called upup. We're not calling it kube-up v2, because that would imply there's only one, but my effort is called upup; Mike Danese has an effort of his own as well, which is great.
B
This is all happening in the GitHub kube-deploy repo, and hopefully we'll basically consolidate a lot of the installation programs or systems that are out there into that repo, and then work to make them all more reliable and simpler. So I want to show you a demo of upup. I'm gonna kick right off with a demo; it's early in the morning.
B
I will need a little bit of excitement, so we're going to create a cluster called demo9.awsdata.com, and we are going to create it in three zones in the EU. So it's going to be an HA cluster. And what am I going to do? cloudup — well, basically, I'm gonna run it in dry-run mode. So what this is gonna do is:
B
Here we are. I just ran cloudup in dry-run mode, asking it to basically pretend to create a cluster in these three zones, and it's printing all the resources that it would have created. This is the equivalent of what happens in kube-up — you just can't necessarily see it right now. So it creates a bunch of secrets, a bunch of EBS volumes; it uploads your ssh key.
B
So it's a little faster than kube-up, mostly because it doesn't actually wait for everything to happen. And it found — it finds an IAM bug, which is interesting, but it's going to recover from that; it does retries automatically. All right, so it has now created all the resources. The cluster itself is probably not fully up, because it has to boot, but everything happens asynchronously; there's no need for a sort of management process that watches it all.
B
The state is stored currently in S3, soon to be S3 or GCS. So here you can see the config file that we've created; we've actually uploaded all the SSL keys and secrets into that bucket, but we've basically got a simple YAML config file that says, you know, we want to run on AWS, there's that cluster name, we're running with t2.larges in these zones, and we've assigned some CIDRs to the subnets. So it's going to go and create that. The cool thing is then, of course, that we can change that and reconfigure our cluster, which I know will make some people on AWS very happy.
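For illustration only, here is a minimal Go sketch of the kind of cluster spec that YAML config file describes. The struct and field names are assumptions invented for this sketch, not upup's actual schema; the values mirror the demo.

    package main

    import "fmt"

    // ClusterSpec is a hypothetical, simplified stand-in for the kind of
    // configuration the YAML file holds; field names are invented for
    // illustration, not upup's actual schema.
    type ClusterSpec struct {
        Cloud       string   // e.g. "aws"
        ClusterName string   // DNS name of the cluster
        MachineType string   // e.g. "t2.large"
        Zones       []string // one subnet per zone, each with its own CIDR
        SubnetCIDRs []string
    }

    func main() {
        spec := ClusterSpec{
            Cloud:       "aws",
            ClusterName: "demo9.awsdata.com",
            MachineType: "t2.large",
            Zones:       []string{"eu-west-1a", "eu-west-1b", "eu-west-1c"},
            SubnetCIDRs: []string{"172.20.1.0/24", "172.20.2.0/24", "172.20.3.0/24"},
        }
        fmt.Printf("%+v\n", spec)
    }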
B
I wanted to quickly explain the architecture, because otherwise it's a little bit complicated to understand exactly what's going on. So what we have is the CLI tool, which we just interacted with, and it uses a simple DSL, which is really basically just YAML and Go templates — which is, you know, not the best, but is fairly ubiquitous. Maybe we can look at doing some better things later on.
B
That's what we have right now. Let me quickly show you one of those templates. So here's AWS: the autoscaling group for a master. Here you can see that in each zone we create an autoscaling group for each master, because we're running an HA master; so we want the autoscaling groups, and that's how we do it. So it's basically YAML with Go templates. It's acceptable, I think.
B
But then underneath that is a sort of reliable task engine. It takes the individual tasks that we've configured and synchronizes that declared state with the actual state. So when we're running in direct-to-cloud mode, it will actually go and create an autoscaling group, for example, when you say "autoscaling group with this name". If we're running in dry-run mode, it won't actually make the changes; it will just say: well, what would we change? And that gives us the output of everything that's different, which is nice both to preview what will happen, and is also very useful for diagnostics.
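A minimal sketch of that declared-versus-actual synchronization loop, with every name and shape invented for illustration (the real engine models many resource types and talks to real cloud APIs):

    package main

    import "fmt"

    // Task is one declared piece of infrastructure, e.g. an autoscaling
    // group; Spec is a simplified stand-in for its declared configuration.
    type Task struct {
        Name string
        Spec string
    }

    // actualState simulates what already exists in the cloud.
    var actualState = map[string]string{
        "asg/master-eu-west-1a": "t2.large",
    }

    // sync compares declared state with actual state. In dry-run mode it
    // only reports the differences; in direct-to-cloud mode it applies
    // them (simulated here by mutating the map).
    func sync(declared []Task, dryRun bool) {
        for _, t := range declared {
            actual, exists := actualState[t.Name]
            if exists && actual == t.Spec {
                continue // already in sync, nothing to do
            }
            if dryRun {
                fmt.Printf("would change %s: %q -> %q\n", t.Name, actual, t.Spec)
                continue
            }
            fmt.Printf("changing %s: %q -> %q\n", t.Name, actual, t.Spec)
            actualState[t.Name] = t.Spec
        }
    }

    func main() {
        declared := []Task{
            {Name: "asg/master-eu-west-1a", Spec: "t2.medium"},
            {Name: "asg/nodes", Spec: "t2.large"},
        }
        sync(declared, true) // dry run: print the diff, touch nothing
    }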
B
So, you know, if you have a broken cluster, you can run it in dry-run mode and see what is not right, what needs to change. And then the third box, over to the right-hand side here, is Terraform. So, you know, why didn't we just use Terraform here — why didn't I just use Terraform here? Well, the reason is, you know:
B
Terraform is great, but then it excludes people that want to use CloudFormation. Without cloudup, we would have to maintain Terraform and CloudFormation and Ansible and Salt and, you know, all the other things that come along — Google Deployment Manager, Azure's deployment manager. Instead, the idea is that we maintain a single cloudup manifest, keep it all working, and then we just output to Terraform. So I'm going to quickly show you outputting to Terraform — I've got it over here, but that's all right.
B
So this is us saying target Terraform, and we are checking — and we've started. We've put the Terraform output into out/terraform. So if I look at out, you can see I have a Terraform file, and I also have a bunch of supporting data files, and this is Terraform in all its glory. It's a fairly long Terraform file, but that's the way it is. And so now you can check that into your git repo, apply it, and do all those nice things.
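A sketch of the multiple-targets idea, with the interface and names invented for illustration: the same declared resources can either be applied directly against the cloud or rendered into a Terraform-style file.

    package main

    import "fmt"

    // Target abstracts where declared resources end up: applied directly
    // via cloud APIs, or rendered into another tool's format.
    type Target interface {
        Render(name, spec string)
    }

    // DirectTarget stands in for direct-to-cloud mode; a real one would
    // call the cloud APIs.
    type DirectTarget struct{}

    func (DirectTarget) Render(name, spec string) {
        fmt.Printf("creating %s (%s) via cloud API\n", name, spec)
    }

    // TerraformTarget writes a Terraform-flavored snippet instead of
    // touching the cloud; the syntax here is only suggestive.
    type TerraformTarget struct{}

    func (TerraformTarget) Render(name, spec string) {
        fmt.Printf("resource %q { instance_type = %q }\n", name, spec)
    }

    func main() {
        resources := map[string]string{"aws_autoscaling_group.nodes": "t2.large"}
        for _, target := range []Target{DirectTarget{}, TerraformTarget{}} {
            for name, spec := range resources {
                target.Render(name, spec)
            }
        }
    }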
B
There are other commands: add-ons; deleting a cluster; exporting a cluster (that's for upgrades); kubeconfig generation, to manage your kubeconfig; doing a rolling update of clusters when you change the configuration; managing secrets; and operating the cluster. So we just generated a kubeconfig file, which means, if we are lucky, the cluster we started — yay — is up: three masters (those are the SchedulingDisabled ones, one of which has only been up for 27 seconds) in an HA etcd cluster with DNS all set up, and we have two nodes. They should all be t2.larges.
B
Which is what I think I asked for. Let's see if we can see that — there they are, t2.large, and we should also be able to see that they're spread across the zones: 1a, 1b, 1c. So that's all good. And then, of course, we can do fun things like reconfigure our cluster. So we'll do a dry-run reconfiguration, changing the node size to t2.medium, and it's going to tell us all the things that are going to change, which is not a lot: we're just changing the launch configuration to t2.medium.
B
We could do that for real — let's do that, it won't take long. So we actually apply that for real. Because, at least on AWS, even when you apply a change to an autoscaling group, it doesn't actually make the change, there's another tool you then have to run, which is called rolling-update. The rolling-update is currently fairly primitive, but it will basically tell you: oh look, your nodes need an update. And if you had specified yes, it would have done a rolling update right now. The updates are primitive; they will get better — I'm thinking of node eviction, things like that. We can do updates, for example, of a version: to upup, an update is just another change, just like that one. So again, in a dry run, it will tell us that if we were to update from 1.2.4 to 1.3.0, in order to do that it would have to change the launch configurations and we'd do a rolling update.
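A rough sketch of that rolling-update decision, with all names invented for illustration: compare each instance's launch configuration against the desired one and flag the stale instances for replacement (a real version would drain and evict pods first).

    package main

    import "fmt"

    // Instance is a simplified view of one node in an autoscaling group.
    type Instance struct {
        ID           string
        LaunchConfig string
    }

    // rollingUpdate reports every instance whose launch configuration no
    // longer matches the desired one; with apply set, it would replace
    // them one at a time (simulated here with a print).
    func rollingUpdate(instances []Instance, desired string, apply bool) {
        for _, inst := range instances {
            if inst.LaunchConfig == desired {
                continue // already up to date
            }
            fmt.Printf("node %s needs update (%s -> %s)\n", inst.ID, inst.LaunchConfig, desired)
            if apply {
                fmt.Printf("terminating %s; the ASG relaunches it with %s\n", inst.ID, desired)
            }
        }
    }

    func main() {
        nodes := []Instance{
            {ID: "i-aaa", LaunchConfig: "nodes-t2.large"},
            {ID: "i-bbb", LaunchConfig: "nodes-t2.medium"},
        }
        rollingUpdate(nodes, "nodes-t2.medium", false) // dry run first
    }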
B
Everything will be great — coming very, very, very soon. Updating 1.1 and 1.2 clusters into the upup family will be these commands; they're actually live, but they are very lightly tested, so: coming really soon, but don't try it just yet. Finally, add-on management. This is sort of part of the general philosophy of our cluster lifecycle management: right now, add-ons are done by the installer, and we would like to get as much as possible out of the installer.
B
Here is one way we could do that: basically create a command in upup which will do that. Here's a list of all the add-ons that we're running — we're running DNS and the namespace add-on — but we can add the dashboard, and we can add standalone monitoring, as simple as that. Right now we're sourcing from a file, but, you know, we could source from anywhere.
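A toy sketch of that add-on idea, with the list contents invented for illustration: compare the desired add-on list, sourced from a file (or S3, or anywhere), against what is installed, and apply only the difference.

    package main

    import "fmt"

    // desired is what the add-ons file asks for; the names here echo the
    // demo but are otherwise invented.
    var desired = []string{"dns", "namespaces", "dashboard", "monitoring-standalone"}

    // installed is what the cluster already runs.
    var installed = map[string]bool{"dns": true, "namespaces": true}

    func main() {
        for _, addon := range desired {
            if installed[addon] {
                fmt.Printf("add-on %s already installed\n", addon)
                continue
            }
            // A real manager would fetch and apply the add-on's manifest here.
            fmt.Printf("installing add-on %s\n", addon)
        }
    }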
There is a problem right now with multiple masters and the way it's configured.
B
Basically, the add-ons live in a file on the master, and we probably ought to move that to S3, but that's the sort of thing we're going to be figuring out in SIG Cluster Lifecycle. But actually, in this case we got lucky, so I should have kept my mouth shut: we do actually have all the add-ons listed, so I obviously stayed on the same master the whole time. And if we look at the pods in all the namespaces, we should see the dashboard somewhere — which I added.
B
Right now, the networking configuration is the standard one that comes with kube-up, so this is the sort of built-in VPC advanced networking configuration on AWS; and on GCE it uses, you know, just the same thing, where you basically configure CIDRs, allocating a CIDR for each host. That works fine between zones; it doesn't work between regions — yeah, it doesn't work between regions. But we could, you know, install other networking options, like Flannel or Calico or Weave, or whoever else I'm forgetting — and I apologize.
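A toy illustration of that per-host CIDR scheme, with invented numbers: each node gets its own slice of the cluster CIDR, which routes fine between zones in one region.

    package main

    import "fmt"

    // Carve one /24 per node out of a 10.244.0.0/16 cluster CIDR, the way
    // the built-in networking allocates a pod range to each host. The
    // addresses are invented; the real allocator lives in Kubernetes itself.
    func main() {
        nodes := []string{"node-a", "node-b", "node-c"}
        for i, node := range nodes {
            fmt.Printf("%s -> pod CIDR 10.244.%d.0/24\n", node, i)
        }
    }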
B
Contributions would be very, very welcome. It is all happening in Kubernetes on GitHub, in the kube-deploy repo — that's where the code is happening. Basically, my understanding is we're trying to get all the installation tools to live in that repo, and then, through SIG Cluster Lifecycle, we will try to make them all work.
B
The motivation in building it was twofold. One was to give kube-up an ongoing path, so that people that were using kube-up can basically not be stuck when they want to run HA or something like that. You know, I've done a lot of work on the AWS kube-up; we need to keep the people that are using it — give them a way forward. So there's that. The other thing was, by rewriting it in Go — there's a lot of code behind this —
B
It means that — well, I found it easier to reason about what the installer is doing, so hopefully it will aid the effort we're doing in SIG Cluster Lifecycle to rationalize the installation process, which will benefit everyone. And finally, I didn't want to just use, let's say, Terraform, or just use CloudFormation, because one of the things I saw from trying to help people was that a lot of people were only willing to use a particular tool, and so, if I chose one tool, it would exclude everyone else.
B
So that's the killer feature that means I can't use Terraform, for example: the ability to output in other formats — Terraform, maybe CloudFormation in the future, maybe Ansible. But I think, long-term, the goal is to make sure that we don't have to output in other formats, because it's so trivial — like, it would be great if it was just a one-liner in each configuration system. I don't know if we'll ever get there; we'll certainly get closer.
B
You are welcome to extend it with add-ons, definitely — that'll be great. I think, in general, the goal is to get as much as possible out of the installer. So if you can run as a DaemonSet, or literally as an add-on that you can drop in — like I did the dashboard (you can't see me anymore), like I did the dashboard — then that means that it will work
B
you know, with any installer, and you won't have to be part of the installer, so that's great. But it doesn't work for everyone. It doesn't necessarily work, for example, for networking yet, although obviously we'd love to have it work for networking. So for that sort of thing, yes, any contributions are welcome; but if we can find a way to do it without needing the installer's support, then that's definitely good for everyone, I think.
B
A concrete example: I think it is hard to add HA support to kube-up, and I don't particularly want to — it's hard to do it on AWS — so I don't think we're gonna do that; I'm not planning on doing that, although I know some people might want it. So let's say that's probably not gonna happen. But, for example, if Amazon launches another region tomorrow and we have to add support to kube-up for that, we will do that. Okay.
A
Pretty good boundary set for that. There are also some other very specific questions; as you go through the chat and catch up, I suspect you'll be able to answer them.
B
Bare metal is trickier. There's nothing particularly stopping you using the nodeup side, but it's certainly been focused on the cloud side. The concepts — the engines, the state engine, the task engine — could work on Azure or bare metal or whatever it is, but right now there's certainly no bare-metal support per se. I have thought about how to do it, but I haven't got there yet. Okay.
A
Cool, thank you very much, and thanks, of course, for digging into the whole cluster lifecycle and how to make this easier and better. If people want to join SIG Cluster Lifecycle, you can find it on Slack and in a Google Group, both of which are going to be in the notes from this meeting. Is there a recurring meeting of any sort for it yet, or ever?
E
So we need to organize information so people can understand where we are and what's going on, because there are definitely multiple efforts in the space, and we need to create a list of areas with specific work items where people can help. But definitely there's lots to do, so definitely we would welcome the help.
F
I kicked the hornet's nest yesterday by opening an issue in the features repo. You know, this is great work; I do worry that there's just a ton of efforts here, there's a ton of confusion. I feel like we need to find a way to actually split this problem into smaller problems and layer things, and so that's some of the stuff that I'd love to talk about in the — you know, I'm not a huge fan of the Cluster Lifecycle name, but whatever.
F
I think, wherever we're going to discuss this — I think that's the right thing to talk about here as a community. I'm going to say something controversial: I think we need to find a way to reduce the number of these tools and shrink them in scope, because I think we're not doing ourselves any favors
F
by having, you know, ten different things in the kube-deploy repo. I think it just confuses the heck out of everybody. And, like, kube-up sucks — I wrote most of it, and I am sorry, it totally sucks — but I think the fact that there was one way initially actually helped streamline the process. So, you know, hopefully that's some of the stuff that we can do as we move forward.
A
It certainly would be good to rationalize it some and make it more clear. But generally, I think this is what we're all looking to do, and it's why there's been such a proliferation: we're trying to make it easier to onboard and to start, and people all have their own opinions about that. But I agree that, having gone from one to as many as there are now, we need to sort of pare back down to a few that cover the likely use cases.
F
Let's back up a little bit. I think nobody's talking about removing flexibility here; I think what we're talking about is making the easy path easy for the common scenarios — we want to make that as streamlined as possible. For instance, one of the things that people have been batting around is making the installation and management of etcd built-in in some way, whether it's built into the binary, or whether it's automatically launched and managed for you. That is not a very ops-friendly way to do things, and, you know, anybody who's doing things
F
at scale is going to want to run and manage etcd separately. But for the people who, you know, want to run three commands on a set of machines — you know, my Raspberry Pis — and get some crap up and running, right, we need to reduce the sort of manual install instructions for the easy case to something that's tractable. Because right now you look at that manual install page and it's just daunting; I mean, people run away in fear there. Yeah — and scaffolding around it will help.
F
Actually, I mean — Brian, I respectfully disagree. What Docker announced was a single way to launch clusters that was built into the tool. It wasn't one of many choices; there was the official way to get this stuff up and running. Adding yet another thing that drives Kubernetes from the outside in a new way just creates more confusion, instead of actually helping to guide people in the right direction. That's my take, but I think that the problem is —
F
So I think it goes further than just a shiny demo. What they did is they chose sane defaults, with a way to actually change those defaults, and those defaults will work for a large percentage of their users. Now, we can argue about whether those defaults are right or not; we can argue whether they have the right places to change those defaults — like, I don't think that there are any affordances for actually having a separate store; I think you have to use their built-in store.
J
I'd say I a hundred percent agree with you, Joe. There's no question we need a one-line thing that looks just like kube-up, because kube-up is actually the right experience: it's "kube-up blah" and it runs — it's just super fragile, and there's a bunch of things we want to do. That said, they said themselves it scales to 64 nodes; it doesn't go higher. In their Azure demo on stage they needed a separate control plane. There's a whole bunch of things that they are not saying.
A
This will continue in SIG Cluster Lifecycle, and it's going to be a fun and exciting special interest group to join and participate in. But I think we're all in agreement that there needs to be an easy way to get people started with Kubernetes, as well as lots of configurability, extensibility, and options for people who need to do things differently, as their use cases are different, because that is one of the big benefits of Kubernetes.
L
Most of the items are currently in progress, and you may also find a progress section for the new features in the tracking dashboard, where you can also see the previous versions. Right now most of the features are already merged, and for most of the features we are waiting on the docs. So from this point of view the situation is fine, and right now our engineers are working on fixing the issues that we currently have with the release, so we expect the release on schedule.
L
The release — yeah, yes. So possibly we may speak about that right now, or possibly we may move it to the next community meeting, after we start the next iteration with 1.4. As we discussed at the previous community meeting, I have moved all the items from the classic wiki page to the repo, with subdirectories and classic markdown files. You may find the actual status at the link that is also posted in the meeting topics.
L
Okay, and another question: what format would be better for the markdown files? Should we consolidate all the features in a single markdown file, like we have right now (I have prepared it that way), or do the features have to be split into separate markdown files in a single repo, like every feature having its own web page, its own markdown file?
M
I'll say what my vision was for the features repo. It was, first of all, that people would only initiate things there when they had built consensus within a SIG. So, for example, number 11 had not really built consensus, and so it pulled in a lot of discussion. My vision of the features repo is it having only crisp status updates, so that people could follow it without being bombarded with plus-ones and "no, let's do it this way". So that's one hope I had for it.
M
You know, an easily discoverable location for people who want to track the overall progress of Kubernetes from a very high level. And I imagined that design docs did not belong in there, and that the proposals and design docs happen in the main repo or somewhere else — and I could say a little more about that. But that was my vision. I'm certainly open to hearing other ideas, but I wanted to lay that out there.
F
With respect to number 11: yeah, I splatted that out there mostly as a way to have a meeting point to start the discussions, because there were so many discussions across so many different mediums. You know, I talked to Dave and he's like, okay, put something there. I just wanted to, you know, essentially have a standard that everybody could start driving towards, to actually start some of the discussions — definitely a meeting point.
F
Now, the problem I have with the proposal stuff and the proposal doc directory is that there's this tendency for proposals to sit in PRs with all sorts of deltas and never actually get merged, never actually get rolled up into something that's readable. I still don't have a single document that describes what the hell PetSet is — we still don't have that. I asked Brian; he pointed me to three issues. You have to read through these things; there's one doc, and I'm like, is this thing current?
F
And it's like: no, all the feedback hasn't actually been merged in yet, right? If we're really going to drive this process, we have to view merging these things and keeping that stuff up to date as a key component of communicating what the hell's going on. I agree it opens the way with having this stuff sitting in proposals, but we have to drive to actually keep those things up to date while things are going on — that's a great example here.
M
I want the people who are leaders of the features repo to be concerned not with PetSet in particular, but with all proposals getting completed, and the feature-tracking issues are a way for us to say: these folks did a good job of bringing their proposals in to finalized design docs, and these folks didn't — let's give them a hard time. I don't see it as a place where people who care about a specific proposal will go to talk about that proposal.
F
And proposals are separable from the features repo, Eric, so I'm not — this isn't a dig at the features repo. I'm just saying that we have to view design docs as an outward communication thing that has to be kept up to date. We can't force people to read 300-comment PRs to actually figure out what's going on with a feature.
F
If we are overloaded, then we should do less and we should do it right. I don't disagree — we can't be half-assed in this stuff and, like, you know, just start throwing code around; I mean, we have to be able to communicate this stuff. I'll stop ranting. So anyway, PetSet is gonna be alright.
J
To be clear — and I want to be clear — let's not have this discussion here. We have the features repo. Joe did the right thing: he opened up an issue in the features repo. The issue is not the design issue; it is not a design doc; it is nothing. The first line of the issue tracker says: build a proposal. So, to be clear, this is an iterative thing; we're working on it. This is the first time through; that's the next step.
A
So all of this is important discussion, with perhaps less hyperbole and specific focus on PetSet or other issues, because they do differ in the way they are executed sometimes. But tying this up and improving continually is important, so I think we're moving forward on a better way to do feature tracking. And Igor, thank you — I know you and Mike have spent a bunch of time trying to, after the fact, or as the bug fixes are happening toward 1.3, pull together a coherent vision of what is going to go into 1.3, and that's fantastic. Thank you. So I'm going to jump back to the 1.3 release and talk about what we're doing to promote it and how you all in the community might be able to help. Clearly, we have a lot of work to do to make the experience better
for users and developers of Kubernetes; that's really a big focus of 1.4. So clearly it's something we need to spend time on. On to the 1.3 release events. Monday morning — as was pointed out, we're going to cut the release on Friday afternoon, that's the intent — we are going to hit publish: the blog posts will go out Monday morning. And then Monday evening at DevNation there's a keynote where Dave Aronchick, as well as Ashesh and Matt from OpenShift engineering at Red Hat, are going to be giving a keynote. And then Tuesday morning there is a deep dive at DevNation about what is in the 1.3 features. We are looking for people — customers or users of Kubernetes 1.3 — to be part of that Tuesday morning talk, if that works. So, if you are interested and want to spend the weekend helping pull together some slides and make that presentation helpful, with a user focus and a user-distilled view —
A
So if you make that sort of effort and publish something, please feel free to share it back with me and Bob, and we can make sure that it is also amplified and retweeted from the Kubernetes Twitter account, and we can link to it from, you know, other social media and such — that is super helpful. We would love more noise about 1.3, of course. Does anybody have questions about the 1.3 release events, or about everything that's happening around the release? Yeah?
N
I understand; it's just that I was looking for the last submit-queue status message that you posted to the mailing list, Daniel, since that seems to be the only way I can consistently keep up with what's been happening there. And at the moment — maybe I'm just missing it — I don't see a published schedule that says when we're going to go back to 1.4 and then bring up P0, P1. I'm sure this isn't the case; I'm just a little confused right now.
A
Daniel mentioned the burndown meeting, which actually is a mostly open meeting if you're interested in participating; it's not closed. We aren't inviting everyone on this list because it's just another meeting, and it's happening a lot. So if you're interested, please reach out to me or to the release team, and we will make sure that you get added to that. Now, this may apply more toward the 1.4 release burndown meetings, because we built that burndown off of this point here. If you're interested, please let us know; it's not trying to be unfriendly.
A
Okay, then I will talk about the last two notices, which are about the 1.3 release, oddly enough. So I scheduled, and did invite everyone in this meeting to, the 1.3 post-mortem. We talked about this a couple of weeks ago, and we said we wanted to carve out a specific time outside of this meeting to do a formal post-mortem. Jason DuMars has offered to be the facilitator on that; he's done a bunch of post-mortems in the past, and this is scheduled for Friday at ten.
A
I also caught him up earlier this week with the last two community meeting videos that have our last two post-mortems and the notes from those, so I'm looking forward to seeing how he changes what we've done in the past. Thank you, Jason. So that's the 1.3 post-mortem, and then the 1.3 contributor happy hour — well, contributor, user, one-dot-three Kubernetes happy hour.
A
Let's call it that. It is being hosted by CoreOS in San Francisco, around the DevNation summit, and I put in a link to RSVP if you're interested in participating. If I'm remembering correctly, it is Tuesday night next week. Yes — so that will be happening in San Francisco, so those of you who are local, please feel free to join us, and we will have a social. I talked about the fix-it that we're doing inside Google; TJ talked about this a little bit last week as well.
A
Next week is the Google technical infrastructure fix-it week, which is happening around GKE and Kubernetes. The idea of the fix-it is to try to burn down a bunch of bugs and a bunch of little things that have been bothering everyone technically for a while, and to get the community involved. We are working to pull together a couple of onboarding events for new community contributors. I still don't have specifics on those, but my hope is that I will be able to send that out today or tomorrow. And we do have the post-mortem.
A
After all that: we now have Kubernetes patches — like, actual embroidered patches. So for anyone who submits a pull request next week, I'm going to have someone who is starting to help me on community next week find you, reach out to you, and start mailing you all patches. To say it specifically: we will do it for next week's fix-it — sending out patches for pull requests.