From YouTube: Kubernetes Community Meeting 20151119
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
This week included demos of updating the control plane with the Deployment object (Mike Danese) and OpenContrail on GCE (Pedro Marques from Juniper), as well as a 1.1 release postmortem discussion and an event report after KubeCon!
B
There's a calendar invite that went to everyone who was on the old calendar. The invite has been added to a Kubernetes community video chat group, which is linked at the top of the Kubernetes meeting document. If you join that group, you will always be invited and will always see the most up-to-date calendar items. We were starting to get into a crazy list of people that was hard to manage.
B
Well, welcome, welcome. Anyway, we have topics. Since we've got people gathering, I'm going to go through the one specific topic that's super short: next week is the Tectonic conference, and the first CNCF (Cloud Native Computing Foundation) meeting is also on the third. Are there enough people who want to have a meeting next week, or is everybody... sorry, two weeks. Next week is Thanksgiving; in two weeks is the CNCF meeting on the third.
B
So it sounds like a large number of people will most likely not be available. I will bump content from the third until the tenth, and then we can pick up again on the tenth. Awesome, all right. So that was my easy thing. I have two possible demos for today. First is Mike Danese on... yeah, awesome. Mike, do you want to go ahead and do your control plane demo? Yeah.
G
Is that working? Good. Yes, okay.
G
Right. Today I'm giving a demonstration of bootstrapping a cluster with the controller manager and scheduler running on the cluster, and then I will show an upgrade of the control plane using a Deployment object. If you are unfamiliar with the Deployment object, it's a new API object; it's a controller of replication controllers, and it manages rolling updates of replication controllers. All right, and feel free to stop me or ask any questions during this. So right now I have a cluster deployed on GCE.
G
It's just a normal cluster built from head, stood up, except it doesn't have a controller manager or scheduler running right now, so it's actually a pretty useless cluster. So what I'm going to do is start a controller manager. I have a controller manager packaged in a pod; here's the pod template.
G
Watching my pods, that bootstrapping controller manager is already running, so it should be good to go. Now what I want to do is create some Deployments for the controller manager and the scheduler. You can think of these Deployments like replication controllers; the actual API object is very similar to a replication controller. I have one for the scheduler as well, so I'm going to create both of these.
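For readers following along, here is a minimal sketch of the kind of Deployment manifest being described. The names, labels, and image are illustrative assumptions, not the exact manifests used in the demo, and the apiVersion reflects the experimental API group Deployments lived in around this time.

```yaml
# Hypothetical Deployment for the controller manager, per the description above.
# Created with something like: kubectl create -f controller-manager-deployment.yaml
apiVersion: extensions/v1beta1        # Deployments were still experimental here
kind: Deployment
metadata:
  name: kube-controller-manager
spec:
  replicas: 3
  template:
    metadata:
      labels:
        component: kube-controller-manager
    spec:
      containers:
      - name: kube-controller-manager
        image: example.com/kube-controller-manager:v1   # placeholder image and tag
```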
G
So now we have three replicas, but if I go back and look over here, I see that the controller manager created the pods but they're pending, and that's because there's nobody yet to schedule them, nobody to start scheduling pods and writing node names. So I actually have to write one more node name.
So I'm going to do the same thing that I did with the controller manager: I'm going to manually bootstrap a scheduler by manually scheduling it, sending a patch to write the node name.
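As a rough illustration of that manual step, this is the shape of a patch that writes the node name onto the pending scheduler pod. The pod and node names are placeholders, and the exact command used in the demo is not shown in the recording.

```yaml
# Hypothetical patch body: assign the pending scheduler pod to a node directly,
# the same way the bootstrap controller manager pod was scheduled by hand.
# Applied with something like:
#   kubectl patch pod <scheduler-pod-name> -p '{"spec": {"nodeName": "<node-name>"}}'
spec:
  nodeName: kubernetes-minion-1   # placeholder node name
```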
G
We may just be suffering from the same one. Yep. Well, I think my demo is just about done, but I'll explain to you what I was trying to do. So I pushed a bad revision of the scheduler earlier this morning. I should have checked that, but I had another meeting to go to this morning. What I was going to do was update the Deployment object by running a ...
G
So this allows, during a rolling update, once the leader is deleted (because it has an older version than the version that the deployment is rolling towards), a new controller manager with version v2 to pick up from where the older-version deployment controller left off and finish the deployment. So it works pretty nicely and seamlessly. Sorry that I could not demo that for you guys today.
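A minimal sketch of the update being described: bumping the container image in the controller manager's Deployment (for example by patching it or re-applying the manifest) kicks off the rolling update, and the replacement controller manager at the new version continues the rollout from wherever the old one left off. Names and tags are illustrative, not from the demo.

```yaml
# Hypothetical change to the Deployment spec that would drive the rolling update:
# only the image tag moves from v1 to v2.
spec:
  template:
    spec:
      containers:
      - name: kube-controller-manager
        image: example.com/kube-controller-manager:v2   # was :v1 before the update
```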
G
You know, as a next step out of that, it's a little trickier, but we have some ideas on how to do it. Definitely a little bit trickier, yeah.
D
Oh yeah, the trickiest bit is getting the kubelets or the kube-proxy to find the API server, and the API server to find etcd. So ...
F
Here's one thing that's always bothered me a little bit in HA scenarios: we've said quite a bit that, hey, it doesn't matter if the API server reboots, because everything else just coasts, right? So a little bit of downtime with the API server is all fine. But as we move towards doing more and more name resolution and sort of discovery in the API server, with things like the endpoints API, that does become critical path.
D
So we have a couple of things in the works that I think will look at that. I mean, I do agree that the more logic you put in one component, the more likely it is that that component at some point deploys a bad update, or otherwise gets into a persistently bad state.
D
So we're actually working on federating API servers right now, so we can split out logic into multiple API servers. I think the endpoints logic would be a prime candidate for moving out. We're also thinking about moving the DNS logic down to the nodes, into kube-proxy or something like that, and eliminating the additional instance of etcd that's required for that, also eliminating the serialization of endpoints themselves to etcd, and then potentially constructing that data as a view over cached pods and the representation of external services.
B
Awesome. Other questions before we go on, so we can get to the Contrail demo? All right, thanks, Mike. Oh good, you did stop sharing your screen; that lets me make my next request.
B
Pedro Marques is with us most weeks, so next is the Contrail demo from Pedro at Juniper. Pedro, are you about on here? Yes.
A
Okay, so before actually going into the actual demo, I'd like to share a slide deck. Can you see the deck? Good, thank you. So essentially, what we've done is this: OpenContrail is going to provide some of the niceties of overlay networking and managing overlay networks, as well as providing multi-tenancy and tenant isolation. From the existing OpenStack world we are trying to, basically, bring those niceties that are already there as part of an OpenStack deployment into Kubernetes, which basically enables orgs to have isolation.
A
So when applications get deployed in pods, they basically have private IPs, and the pods basically cannot talk to each other unless there is an intent from the developer to do so, and that intent is basically expressed via annotations, by labels that we actually apply. The good thing about OpenContrail is basically that there is a community.
A
The model that we actually followed in integrating with Kubernetes is replacing kube-proxy and then providing the vRouter as the functionality to basically, you know, serve networking for the pods. The value prop for OpenContrail is basically that it's a distributed router. It can serve not just the network for workloads that are containerized, but can also provide networking for existing workloads that are either running on virtual machines in OpenStack or, you know, plain vanilla KVM-based VMs.
A
You know, cloud VMs, cloud meaning basically just using the virtualization tools and standing up the VMs, as well as bare metal. Some of the customers that we actually work with have workloads that cannot be virtualized, so with Contrail we can basically provide networking seamlessly across all of these.
A
The other thing is basically multi-tenancy, full isolation, and fault tolerance, so basically having any kind of fault be, you know, contained. If at all there is a compromise of a particular pod, then the failure is basically contained to that pod itself. So, going to the implementation: what we've done is we've basically developed a kube-network-manager that actually runs on the master and talks to the Kubernetes API server, and then southbound ...
A
It basically communicates and updates all the objects that are required for control. Then on the node we have a kubelet plugin, and the kubelet plugin is what interacts with the vRouter. The vRouter basically has two aspects: one is the kernel module, and there is the user-space module. So the key components, you know, how does it actually tie into Kubernetes? Basically, in Contrail we have virtual networks and a network policy, and these get exposed as labels; so 'name', basically, is ...
A
If you look at this, the one that is more interesting is, you know, a pod definition. When we define a pod, in the metadata labels we basically have a 'name', which actually relates to the virtual network, and 'uses', which basically says that this pod needs to use some of the services; so it actually exposes which service class it uses, and then you can basically have that policy tie-in between two pods. So essentially two pods cannot communicate at all without the 'uses' clause.
A
This is an example of the guestbook application that I'm going to actually run in a bit. It has the name Redis, and then the front end, which is the guestbook app, basically has 'uses', and it uses this Redis. So without that 'uses', the guestbook will not be able to communicate with the Redis cluster.
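A sketch of what the guestbook front-end pod metadata could look like under this scheme, based on the 'name' and 'uses' labels Pedro describes; the exact manifests from the demo are not shown, so treat the field values as illustrative.

```yaml
# Hypothetical front-end pod: "name" places it on its own virtual network, and
# "uses" grants it access to the redis network; without "uses: redis" the
# guestbook could not reach the Redis cluster.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    name: guestbook   # virtual network this pod belongs to
    uses: redis       # virtual network/service this pod is allowed to reach
spec:
  containers:
  - name: guestbook
    image: example.com/guestbook:latest   # placeholder image
```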
So standing it up is basically very simple; it's as simple as setting export NETWORK_PROVIDER=opencontrail and doing kube-up. The Salt changes for getting all the Contrail components in are already in place; there's a PR which is actually pending, so hopefully, you know, Tim can look at it, it's in Tim's queue. And now let's actually run the demo. So what I've done is, just this morning, some time back, I already stood up this cluster, which is done. So if you see here, I just actually ran kube-up with the NETWORK_PROVIDER environment variable actually set, as you know, to opencontrail.
A
So this would essentially stand up the cluster. I already have it running, so I don't need to, basically, you know, stand it up. In this deployment, what I have is the default three nodes and one master, and in addition I basically have a gateway node, and all of these are basically, again, running as containers. If I do kubectl get pods ...
A
So if you see here, these are all the Contrail components, which are basically providing control functions as well as, you know, updating the vRouter kernel module. They are essentially running as containers. So these are the three nodes, and there's one OpenContrail gateway, which is actually a separate VM. One thing to mention also is that Kubernetes basically has a container-VM image which is provided, so this is actually running on Debian 7.9.
A
So we have three such nodes and then the gateway node, and the reason for the gateway node is basically to provide north-south communication. Any traffic that is coming into the pods needs to actually come via the gateway node, so it's like a gateway for receiving external traffic coming into the pods.
A
You know, it gets deployed; the guestbook as a service actually gets deployed, and the reason is basically that this exposes it for the external traffic to actually come in. So now, if you basically ping this, this is essentially coming via the gateway node, and we actually pinned it to the gateway node. Let me now show you.
A
Okay, so let me now show you. Can you see this now? Basically, you have the guestbook up and... okay, sorry, so what I need to do is basically create the tunnel again. I have the tunnel here. Okay, so if you see here now, basically you have this guestbook app running and it has a connection, so you could actually see one, and this is basically the way that it gets routed.
A
Yeah, so the whole deployment actually is just so simple and easy to deploy, and, you know, once it is stood up, the beauty of it is basically that you cannot have pods talk to each other unless there is an intent by the developer to do so. When you're developing an application, the developer basically has this 'name' and 'uses' that can be provided in the pod definition to allow secure access, so to say.
L
That's... and so, can I create, can I do a network across a namespace, or is that just explicitly forbidden no matter what? Yes.
K
At the moment we have two things that are implemented. The default is that when you use a 'name' label, you get a network inside your namespace. We also have a different annotation that creates global networks. For instance, the DNS pod is in the kube-system namespace, so to connect the DNS pod with everybody, we have a different configuration that just says that the kube-system network, where the DNS service is, is available to everybody.
L
That makes sense. The only other sort of comment that I would have is that, in general, I think we have a preference for people using domain names as roots on these kinds of, you know, magic labels that they add to systems. So it would be great to see the name and network labels turn into, you know, juniper.com/name or opencontrail.org/name or something like that, just so that we don't have collisions, right?
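Rendering Tim's suggestion concretely, the idea is to namespace the label keys with a domain the project controls so they cannot collide with users' own labels. These key names are hypothetical examples drawn from the discussion, not an implemented convention.

```yaml
# Hypothetical domain-prefixed versions of the labels shown earlier.
metadata:
  labels:
    opencontrail.org/name: guestbook
    opencontrail.org/uses: redis
```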
A
So the follow-on is, basically: we have OpenContrail blogs that actually talk about how to stand this up. At KubeCon we had a hands-on workshop, and we are also planning a meetup to basically encourage people to just try this out and give us feedback on what features they would like to see, especially around, you know, multi-tenancy, making it fault-tolerant, having zones define AZs if at all they would like to, which probably the scheduler is already doing, but yeah.
A
We have an ask-contrail alias, basically, that anybody can actually send questions to, and there are these blogs that we have published. So, you know, comments are basically welcome, and we'll actually, you know, respond to the comments, as well as, you know, answer any questions at all, if necessary. Perfect.
B
On the plus side, just to keep with timing, we actually don't have a SIG update today, because the home for HA is still being discussed, so we will have that update in December. So let's jump right on to David Aronchick talking about the 1.1 release postmortem, and thank you to those of you who answered my Google Form request for community feedback on what happened and how the 1.1 release went, and what you'd like to see from the 1.2 planning. So, David, hi.
E
Hi all, David Aronchick; I run product management for GKE and Kubernetes. Thanks.
E
I just... I don't know how many of you I've met, but, you know, it can't hurt to introduce myself again. So the top-line thing is: I'm a huge believer in what's called kaizen, which is continuous improvement, and the idea that around every release, everything that goes out the door, you empower the entire team to provide feedback and help explain ways that they see opportunities to get better. Integral to that is the idea of these postmortems, kind of an evaluation after you do a significant release, and, you know, I just want to go through what we saw so far. This is the first one, and in the spirit of eating your own dog food, and kaizen, you know, this process can absolutely change.
E
So you tell me what works and what doesn't as far as reviewing this stuff, and we'll figure out, you know, how we can make it better. That said, as part of the postmortem Sarah sent out, as well as, you know, just talking with folks internally, we found kind of a bunch of things that both people liked and went well, and things that potentially could go better, in this particular case.
E
You know, some of the things that we saw went really well were, you know, we released pretty close to on time, as well as getting it out the door quickly. There was a little bit of a slip, but that's often to be expected; I don't know how much better we could do there. The team was incredibly quick to respond to changes. For example, we introduced the concept of the beta bar and things like that fairly late in the process, but folks were ready to snap to it.
E
It didn't feel like we, you know, let something out the door with a lot of compromises, which is a really big thing for us. We do want to be the scheduler, the orchestrator, that is known for being the best, the highest quality, and things like that, and not letting, you know, things sneak out the door when they're not well tested.
E
And finally, you know, I don't know how many of you were able to go to KubeCon, but there was a real sense of excitement around the stuff that we did release: autoscaling, the huge benefits in scale generally, you know, just a whole litany of things that we were able to get out the door that people are really excited about, and I think that excitement is, you know, carrying forward even further. On the negative side, there wasn't a lot that people mentioned, but probably the one that had a thread through most things was the idea of a little bit better forward-looking planning.
E
It still felt like 1.1 was a lot of things that came together kind of piecemeal: you know, we really want this and we're going to get it in, but it didn't have kind of a unifying theme behind it, like, okay, we're going to make a big investment in scalability, or autoscaling, or ease of setup, or things like that. And so that is something that, in product, we're going to take very, very seriously; myself and Sarah have already discussed it a few times.
E
You know, presenting a kind of roadmap, both for the 1.2 planning, which we expect to come in Q1 of next year, as well as potentially even looking further than that, for 1.3 planning, which would be in the second quarter of next year. So we are very serious about that, and, you know, this is not something that's supposed to come from Google or Red Hat or any particular organization; this is a community effort. We want to make sure everyone feels very positively about that. So we will be sharing something in the not-too-distant future; it depends on whether or not the post-Thanksgiving meetup or community meeting happens, but it would be in the first one that we have post-Thanksgiving.
E
Great. So then the second thing is around publicity. You know, again, we had a lot of great features and we did get good coverage, but I think we can do better.
E
Part of that is that the CNCF, which owns Kubernetes, is still kind of finding its feet, and we're going to work very closely with them to make sure that we highlight all the hard work that folks are doing, and that can be in things like the community, or in Kubernetes proper, or in the many add-ons, as we just saw with OpenContrail. You know, those kinds of things; we'd love to highlight those, and the companies contributing to that.
E
So we're going to have a more formal plan around that. It's not as much under our control, unfortunately; it really is a CNCF thing. We are very serious about that, and so we're going to work with them as they find their feet, to work with them very closely. And then, finally, again, this is kind of a process thing, but it is something that we'd like to tackle, which is really having a little bit better checklist for anyone looking to take on something as they bring it to production.
E
So if someone says, well, you know, I want to write a new load balancer or a new scheduler or something like that, we should have a way to say: okay, here are the steps you're going to go through to bring it to production. You know, by the time it reaches beta it has to look like this; by the time it needs to reach production, it needs to look like this; you should, you know, sign the CLA.
E
You should have your folks, you know, read the following documentation, whatever it might be, so that if you want to contribute to us, it's extremely clear how to do that. And then, once you meet that bar, you can be a contributing member of the community, and we'd love to highlight it. So again, that's something on us, to really highlight and make it very clear, you know, when you want to contribute, what's involved, as cleanly as we can. So those are some of the things that we heard.
E
My email address is just my last name, you can see it there in Zoom, at Google. I'd love to hear more, and especially, again, with the idea of kaizen and continuous improvement, don't just wait for the next milestone to tell me how we can do better; please let us know now. But you will see some things addressing these, both positives and negatives, you know, within a matter of weeks; if it weren't for Thanksgiving, it would be this next week.
F
Just quickly: I think, you know, through the 1.1 release process, being on the outside, not being at Google, things looked pretty opaque. Stuff like all the discussion around alpha, beta, experimental, what have you, that stuff was all Google-only. I'd love to see, you know, as we're winding down to a release, regular updates in this meeting, and any big decisions that are going to affect the release be communicated widely. It felt like a lot of that stuff was happening in, you know, Google conference rooms and Google hallways. Yeah.
E
To take these in turn for a second: so, one is a better roadmap, so we're going to do that. Like I said, we're going to have a roadmap published of the entire community's goals around 1.2 at the next meetup, or at the next community meeting. So: these are the big things we're working on; you want to add something of your own, great, we'll show you how to do that.
E
You want to complain about something? Again, as a community, we're trying to reach out to as many folks as we can prior to changing priorities and things like that; that's absolutely the time to do it, and we're going to do that sooner. The second half of the thing that we're trying to fix with the roadmap is the way that a lot of these decisions, you know, not just decisions but points of conversation, need to happen, and Sarah will help with that, and already has helped enormously with that. But things exactly like, hey, if we're going to have an alpha bar, let's make sure that the alpha bar is public, and this is how it looks to be at the alpha bar, and this is all going to be in public. So if you don't like it, do a pull request to the committee, you know, comment on it, and we'll tweak based on that. You know, again, this is on us.
B
And I'll add the third half, after I've just been pushed under the bus here. The third half of this is also that one of the things I'm going to be tasked with is getting this community, this core developer, core contributor community (and I mean contributor as more than just code, but the people who are core to this group), together physically twice during 2016. I've stuck my stake in the sand, and we will be having those conversations and getting those dates out to you as soon as we can.
B
Oops. Cool, thank you, David. Does anybody else have any more commentary? We got a little bit more in the chat, which is a lot of definite plus-ones to more transparency. We also got 'it was better than 1.0', so, progress, a little bit. And translating this to GitHub milestones is something that I'm also going to be working on with David, so we'll try and get that sorted as quickly as possible, so that there's more transparency in it.
B
Any other commentary, other than Joe's meta point about an agenda for the contributor meetings? Not yet; give me... this is my third week at Google in this position, so give me another couple of weeks and I will throw something out, most likely at the first community meeting of 2016. We'll get you guys dates as soon as possible.
J
One comment about the issues around communication; this is Karl Isenberg from Mesosphere, by the way. It kind of reminded me of issues that we sometimes have on distributed teams. We have a very distributed team here, and oftentimes, if we have hallway or meeting conversations locally in San Francisco, the Germany guys and the New York guys don't get any of that information. So it takes a lot of Slack conversation, and the more Slack conversation there is, the harder it is to keep up with.
J
Someone usually ends up having to, like, summarize that. Some of the ways that we do that: we have a Slack channel that's for sort of important information rather than discussion, so we have discussion in one channel and announcements in another channel. That way people can keep up to date on the day-to-day without having to read all of the conversation that's going on.
J
For, like, big decisions, I agree that the dev email list is the way to go. There's sort of a lower barrier of entry to the Slack channel, so you can just have a conversation and then summarize the conversation in a sentence or two without having to give a ton of context, and it allows people to sort of read a summary of what's going on historically. Obviously, if that gets, you know... it has the same problem; if there's too much conversation in it, then it becomes useless.