From YouTube: Kubernetes Community Meeting 20151210
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
This week's meeting included a demo of Kubernetes + Sysdig, reports from the Scaling, Config, Scheduling and Storage SIGs, as well as a promise that SIG Testing is coming! There was also an update on the formation of the CNCF and a discussion about the 1.2 release + 1.3 planning.
A: We have a Sysdig demo from Chris and Gianluca, and then our demo from Rob Hirschfeld at RackN got bumped to next week. Although it is the only item on next week's agenda at the moment, so we will have to make a decision at the end of this call as to whether or not we're going to have a meeting next week. So basically I'm looking for volunteers that might have things they want to talk about next week, so please update the meeting topics.

A: If you have something, then we can make a decision whether or not we're going to take a rest-of-the-year hiatus. From this meeting, we have SIG reports, and a conversation about trying to find a home for HA. Is that a new SIG? It sounds like it's not the Scaling SIG, so do we need to form a new SIG around HA? I believe that Bob and Joe are going to have at least opinions about it.
A: Well, actually, a release update for 1.2 from David Aronchick, and then a little bit of discussion around 1.3 planning: what would we like to do better on 1.3 planning, how can we do it better, and how do we get people who are interested in that involved at this point? So why don't we go ahead and jump in with the Sysdig team? Are you good to go?
C: ...with the dark art of Kubernetes monitoring.

B: Perfect. So hi everyone, and thanks for attending this quick presentation; it will be about monitoring and troubleshooting Kubernetes. My name is Gianluca, I'm a software engineer at Sysdig, and I'm a core developer of the open source system troubleshooting tool sysdig. So, as I said, today I'm going to quickly talk about best practices and use cases for monitoring and troubleshooting Kubernetes. I want to start with this little demo environment that I have here, which is essentially a Kubernetes cluster.
B: So this is, overall, a fairly complicated infrastructure, and the problem with an infrastructure like this is that it's essentially a big troubleshooting nightmare, because when it comes to troubleshooting something like that, you will have all your components scheduled across your cluster, and they will come and go: they will die, they will get spun up again, and they will talk to each other using all sorts of virtual networking layers. So it becomes a big challenge to essentially see what's going on, and in the Kubernetes ecosystem there are...
B: And so these questions are not always easy to answer, and the best way to answer them is essentially doing some heavy troubleshooting with the good old system troubleshooting tools that we have had around for essentially decades. So you need to pick the area of the system you're interested in troubleshooting, and you need to apply them. Those tools are fairly advanced, and they can...
B: They can get essentially a long way in pointing out what you are looking for, and this is all awesome, but the problem is that these tools are rarely optimized for containers. So if you've ever used those in the context of containers, you know that if you run top inside your OS, you just see a flat list of processes. You don't see the containers where these processes are, let alone the Kubernetes constructs above them.
B: Sysdig captures system events, where system events are mainly system calls, and it allows you to filter them and run scripts on them; it can be thought of essentially as a tcpdump that doesn't work with network packets, but works with all the system calls in your system. It's of course completely open source and, more important for the scope of today's presentation, it has native support for Kubernetes, which has been around for the past couple of releases. And before jumping into the action, for the people who have not heard about sysdig, let's quickly see how it works.
B: It captures all the system calls that are going on in the system, by all the processes, and delivers them to the sysdig user-space application, which can essentially live in a separate container, in this case. And so the core idea of sysdig is that we are able to see inside the other containers from outside the other containers. And so, how does this look in practice? If I drop here to a terminal... I'm not seeing your faces anymore?
B: So if you cannot see this, please say something. In its simplest invocation, essentially, sysdig, as I said, is a system call sniffer that will show every system call that's going on in the system, so this is not very useful or effective. So the first thing that one can do is filter the system events.
B: If I use sysdig -l, I get access to all the dozens of filters that I can use to limit the scope of the system events that get printed on screen. So I can filter, for example, on file attributes, on process attributes and, most important for today, I can filter, for example, on Kubernetes attributes: I can filter on name, ID and label for pods, replication controllers, services and namespaces, and this, in the context of Kubernetes, can be quite useful.
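(For reference, the command being shown is roughly the following; a sketch, since exact field names can vary across sysdig versions:)

    # list every filter field sysdig understands, including the
    # Kubernetes (k8s.*) fields mentioned above
    sysdig -l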
B: For example, if I now run sysdig and I choose as a filter the Kubernetes pod label name equal to mysql, this will now show me on screen just the events done by a specific pod that has the label name mysql. And in fact here I can see a much more limited output, where I see the pod name, and I can see, of course, the typical name given to a Kubernetes pod managed by a replication controller.
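(A sketch of that invocation, assuming the k8s.* filter-field family of the open source tool; depending on the setup, sysdig may also need to be pointed at the API server so it can fetch the Kubernetes metadata:)

    # show only events generated by pods labeled name=mysql
    sudo sysdig k8s.pod.label.name=mysql
    # if Kubernetes metadata isn't picked up automatically, point sysdig
    # at the API server with -k, e.g.:
    sudo sysdig -k http://127.0.0.1:8080 k8s.pod.label.name=mysql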
B: I can see the process name, I can see the specific system call, and then I can see, in this case, the ID of the container that is running inside this pod. And so, overall, this is very useful and, depending on what you're looking to troubleshoot, this can be very, very effective. But sometimes you might want something that is a bit more user-friendly, and so the other option that we have with sysdig is essentially delivering this list of system events to our Lua scripting engine; the scripts are called chisels.
B: And if we do this... for example, I can run with -c, and if I press Tab I get access to all the dozens of chisels that we support by default. These are simple scripts that anyone can write. For example, let's use topfiles_bytes, and now we will get an output that is much more user-friendly: this will show me the I/O of the system broken down by individual file name and by individual container name.
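(Roughly, the chisel workflow being demonstrated; a sketch:)

    # list the chisels that ship with sysdig
    sysdig -cl
    # run one: aggregate file I/O by file name (and by container,
    # when containers are present)
    sudo sysdig -c topfiles_bytes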
B: I can see that I have two namespaces, named dev and prod, on top of the default one, and so chisels are very effective for doing essentially a top-down approach to troubleshooting, where you start with chisels and then you drill down into the system events. There's another way to interact with sysdig that our users like a lot, which is csysdig. Csysdig is essentially our curses-based interface for sysdig, and in its standard invocation...
B: ...it produces an output that is similar to top, of course, where you see all the processes in the system with all the system resources that they are consuming. But notice immediately how I have this pretty handy column here called Container, and so this will not show me just a flat list of processes: this will now show me all the containers where these processes are running, and I can filter on them because, again, for csysdig, containers and Kubernetes are first-class citizens.
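(The invocation is simply the following; F2 opens the views panel and Enter drills down, as described next:)

    # launch the curses UI; like top, but container- and Kubernetes-aware
    sudo csysdig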
B: So we immediately surface them in the UI. Now, I don't particularly like these container names, because they have been chosen essentially by Kubernetes, so they contain some random IDs or something. So let's get a more user-friendly view: if I press F2, I can go into the views panel, where I can essentially apply different views of my system depending on what I'm looking to troubleshoot. So here we are talking about...
B: ...where I'm running my application, and I can, for example, select one of them, for example prod, and by pressing Enter I do what we call a drill down, and so we are brought to a view where we see the individual pods. Like you can read here in the top left corner, I can see the individual pods that are running in this namespace, along with the resources, name, namespace and set of labels. So this is very useful, and I can keep doing this drill-down thing, which is incredibly effective.
B: So I can see the data that has been exchanged on the actual network connection and, going back to a comparison with the traditional troubleshooting tools, this is a much faster approach than going, for example, with tcpdump and discovering which network interface corresponds to the network namespace of the pod where Redis is running, then sniffing there and then taking a look with Wireshark at those packets. So if you're using containers, sysdig is very useful.
B: We can apply essentially different views. For example, now that I'm inside the namespace, I can instead go to the replication controllers, for instance, and I can see all my replication controllers in this specific namespace, and if I drill down into one, for example the WordPress one, I'm brought to the replicas of this specific replication controller and I can interact with them...
B: ...faster. And there are essentially tons of views that are worth exploring. For example, one that I also like a lot is Spy Users, which will show me all the processes that are being executed in my specific namespace, which in this case are a bunch of curls that you can see: I'm essentially curling all my services periodically, using the SkyDNS extension, to keep the infrastructure up and running so that I have some numbers to show you.
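(Spy Users is also available as a plain chisel, outside the curses UI; a sketch:)

    # print every command executed on the system, by user and container
    sudo sysdig -c spy_users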
B: And, of course, all this is not only limited to the Kubernetes application itself: I can also use it to troubleshoot Kubernetes, the core components of Kubernetes. For example, if I go to Processes and I filter by etcd, which of course is a critical component of the Kubernetes setup, I can see essentially the resources that etcd is consuming, and if I press F5 I can see the individual requests that etcd is serving, which in this case are simple requests done by SkyDNS.
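(A sketch of scoping the UI to etcd; csysdig accepts the same filters as sysdig on the command line:)

    # open csysdig showing only etcd activity
    sudo csysdig proc.name=etcd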
B: So if I see that etcd is overloaded, I can take a quick look at the etcd node to essentially see if everything is as I would expect. And so, overall, this is pretty much what I wanted to show you, and it concludes my quick presentation. If you're still with me, essentially you'll find yourself wondering: what happens if I'm in a multi-node environment? Because obviously you might not want to SSH into all the hundreds of nodes of your cluster...
D: So hopefully this is coming through now. Hi everyone, my name is Chris, I run product here at Sysdig, and we're running low on time, so I'm going to keep this really quick. But I wanted to give you just a little bit of a taste of what we're able to do when you take this technology and apply it across a cluster of nodes, as opposed to just a single node.
D: ...Sysdig Cloud, that's really built from the ground up to be container-native, and on top of that we've added the comprehensive Kubernetes support that you saw in the open source tool. So what you see here is now a cluster of nodes. It's very similar to the environment Gianluca was looking at: in this case, we've got a master machine and three EC2 instances that are serving as the shared pool of resources for the Kubernetes cluster. What I can do now is start to dig down in here, and I...
D: ...can get all the way down to the container level on each one of these machines, and then I can click into any container, or at any level of this hierarchy, and start to get dashboards and configure alerts on the metrics that are being collected here. I want to show you one other view that I think is pretty cool.
D: I can see the network traffic and dependencies shared between them, but where this gets really cool is that I can go inside the machine, and now I see this scattering of containers that have been scheduled on this machine and how they're interacting with each other. Now, similar to the scattering that Gianluca was showing before, this isn't really useful if I'm trying to get an application perspective of my containers. So what we've done is add the Kubernetes context into Sysdig Cloud.
D: So what I'll do now is switch to a semantic map, which is going to use the logical hierarchy of the Kubernetes entities to investigate the system, while abstracting away the underlying physical architecture. So now I'm looking at namespaces in this cluster. If I zoom in on the production namespace, now it's suddenly really clear what's going on: I can see all the different applications and microservices that are running, and I can see the dependencies between them. In this case, I'm...
D: But, really importantly, what this lets me do is monitor the performance of my services at the service level, despite the fact that these containers are scheduled arbitrarily across the servers underneath. And what that really enables me to do is cool things like this, where I can create a dashboard for an application, in this case Cassandra, and despite the fact that my Cassandra containers are scattered across a variety of EC2 nodes, I now have this holistic overview of all the metrics that I really care about in this application.
D: Now, there are dozens of applications that we support right out of the box. All you do is just deploy your container in there; we're going to automatically discover all the services that are running and start pulling metrics from them. And that's another unique thing about Sysdig Cloud that's not actually in the open source tool: these application plugins that we apply automatically, to start pulling actual application-level data from whatever you're running.
A: Anybody want to unmute, having a burning question? All right, well, we will go ahead on to the next things. As I said, we will have the RackN demo next week, provided we decide we want to have a meeting. So the next topic on the agenda was the Scaling SIG, for one, and the home for HA, because HA doesn't fit in the Scaling SIG. So, Joe or Bob, do you guys want to field that?
E: Tim St. Clair has been, I'll say, leading it; I don't wanna sound too pompous about it, but he's been leading the HA subcommittee. So, unless I'm behind here, I don't think there's been an effort to actually create an HA SIG specifically, but I think a lot of us would like to see that happen.
A: We certainly need to find out if Tim is happy to continue leading it, and promote it out to a full-blown SIG, because I know he pays attention to a lot of the SIGs. You may not answer us today; that's fine, not putting you on the spot. Right, we need that, and then we need to identify the right person from inside Google to participate with that.
E: So that would be great. I think the second community thing that I wanted to point out is that there's a new Scheduler SIG, and the Scalability group to date has spent a lot of time on scheduler-based issues, and so we're working diligently to figure out how we can, you know, collaborate with the Scheduler SIG in whatever way is most helpful. I think we're still working on that.
E: Super, thank you. Would you like me to just give, like, a super quick update as to what's been going on in scalability land? Do we have a minute? Yeah, if we have a couple of minutes. So I just wanted to try to hit some highlights, and certainly hit up Joe or me, or hop onto the Slack channel, and ask if you need some more detail.
E: I think there have been some of these SIG conversations... I won't say peripherally involved, but they're sort of tangents that have kicked off inside the Scalability group, as someone has picked up some thread and run with it. So one is that I think we're getting close on a storage abstraction that would let us swap something else in for etcd, and one of the things we're really interested in is an in-memory... well, an in-memory plugin of some kind, so that we can use that for performance analysis.
E: My thing of interest is: there's a PR, I think it's 14916, which is a way to take all of the configuration parameters for an entire Kubernetes cluster and dump them, and the link to the scalability topic is that, as we figure out how to publish performance results, we want to make sure that we have a great way to explain exactly what the configuration was that generated that performance. So that's one that we're super interested in.
E: We hit issues on journald, I think, with the CoreOS folks, so we're kind of swapping over to a rotating-log kind of setup, old school. But I think the CoreOS and Red Hat folks that I talked to last week were sounding pretty motivated to work with the journald contributors to see, in the coming months, if there are not some architectural improvements that can be made for performance. So I think... let me just check, let me just check my notes here.
G: I think that's good. I think the only thing to mention is that the storage abstraction stuff is done, but the memory back end is still a glimmer in at least somebody's eye; I don't know, we might still need to find an owner for that. Great. So again, thanks for doing that stuff; that was a ton of work, but I think it's going to pay dividends in a lot of places. Again...
A: Thanks. So if anybody has any questions for the Scalability SIG, you might of course find them on Slack. And then my next question is... we're running a couple of minutes ahead of time, so are there any other SIGs that wanted to do a report out today, something that's been going on? Because I know there's been a lot of SIG activity in the last two weeks.
F: I don't know if Jack's on, but I can give a quick SIG Config update. We've met twice at this point. We are, you know, hastily bike-shedding as fast as we can.
F: I think that we've actually made a bunch of progress by discussing. We did some demos the first time and had some discussions about things on the list. I think that what we're going to try and do is take some of the work that various people have done and try and coalesce it into a single concrete proposal, and actually to sort of fork the discussion in two: to discuss sort of the stages of templating and the stages of config as one, and then, as a separate topic, the topic of sort of client-side...
F: ...versus server-side, because those are kind of the two big contentious points, I think, and I think it makes sense to discuss them as separate entities, because I think we can achieve them just by working through them. And so I'm going to work with some of the Red Hat folks, and other people are going to work, to kind of come up with a cohesive proposal and get that in in time for the next meeting, so that anybody who's interested can see the sort of coalesced community proposal around these topics sometime soon.
A: Cool, thank you, Community... or, sorry, Configuration SIG. Then clearly Jack and Brendan have been working on those, and you can, of course, find them or participate: Slack, mailing lists, all the usual things. We're trying to standardize all of that, and if there are some of you who want to participate more actively, please do join up. The latest work seems to be happening in those smaller groups, which is great. Are there any other SIGs that wanted to do a quick report out?
H: Tim St. Clair took some notes on the meeting, for people who are interested, and I didn't review them yet; I was going to attach some issue numbers to that, and I will do that soon. But yeah, so we're starting up the Scheduling SIG meetings, and there's information on the wiki, as with all the other SIGs, on how to join the mailing list for the SIG. So I think that's about it.
C: Mister Mark? I'll talk about the Storage SIG. We met twice. The conversation so far has been focused around flex volume and dynamic provisioning and, with regards to those, where the plugins live and how we can provide extensibility, how we make all of it extensible and so on, and a lot of that has kicked off with flex volume. That's really about it. 18333 is the issue where all these things are being hashed out in the larger sense, and it deals with how the controllers are going to be run, whether we have one or many.
I: This is Carl. I've been meaning to start up a Testing SIG, but I think it's not going to happen till January. We have some stuff to discuss there in terms of conformance testing: whether we're going to extract the tests to separate repositories, how we're going to version them, and how we're going to test versions of conformance versus master conformance. Some of that's been going on in PRs, sort of asynchronously, but we probably need to start talking about it. So if you're interested in discussing that, make sure you're on the mailing list and we'll try to organize there.
A: So there are a couple of ways, and it depends on what you mean. Generally, what people have been doing is asking the mailing list, by proposing a couple of dates and then seeing where time in the group coalesces. There is a tool called Doodle, where you can set a bunch of possible times in it and then have the group go vote, basically, and you can see that; I've had really good luck with that. But I have a spreadsheet that is mostly up to date...
A: But of course, with people moving things around with the holidays, it's not always quite accurate. It has all of the different SIGs, and when the expected scheduled time is and when the next time is; I need to go update it again this week, and it is going to end up out on GitHub, not as a spreadsheet but as a document on GitHub about all the different SIGs, and then, once that's all propagated, each of the SIGs can... or, once that's all out there...
A: So the Cloud Native Computing Foundation update: last week at Tectonic Summit, as Bob mentioned, was the first governing board meeting and general technical conversation around the Cloud Native Computing Foundation. Now, for any of you who don't remember, the Cloud Native Computing Foundation was announced last June or last July, and that is where we expect the Kubernetes project is going to live. Now, that leads me to the first act of the governing board, other than setting up an agreement as to what it meant to have a Cloud Native Computing Foundation.
A: The first thing that the governing board is doing is trying to seat a Technical Oversight Committee, and that's going to be nine people. Craig describes it as the Supreme Court justices of tech, who will try to define clearly what we mean by cloud native, and what components need to be included in, or can choose to be included in, cloud native applications, and what best practices are, decreasing confusion. This is not anointing winners.
A: We think that would actually only increase confusion, but instead giving good taxonomy, good language and good leadership around what it really means to be cloud native. So the nominations for this Technical Oversight Committee are open right now, and anyone who is a member of the Cloud Native Computing Foundation can nominate people for the Technical Oversight Committee.
A: The idea, though, is to focus on inventorying the whole space of cloud native, decreasing confusion, and defining what it means to be a cloud native application and what it means to be a cloud native project, which is why I said we expect Kubernetes to end up there. But the Technical Oversight Committee is actually going to be the group that decides which projects that have been offered to the foundation are accepted or not. The other thing that is getting started right now is the end-user technical advisory board.
A: This is where we're trying to hear from the users what pain points they're having; it's more about the users, as opposed to the development side, of the cloud native infrastructure. So that is the very short version of that. Bob, you were in the meeting in the afternoon; did I miss anything exciting?
K: You know, as we talked about a couple of weeks ago, our goal is to make the 1.2 and future planning much more in the public, and, you know, right...
K: We don't even really have the tools necessary to accomplish that today. So what we've done so far is basically looked at many of the major contributions that are currently in the 1.2 and 1.2-candidate camps, collected them into kind of a single document, which I will be sharing in just a second, and said: well, you know, broad strokes, this is what looks like blockers.
L: It's really small for some of us.

K: How about if I do this? Is this better?
K: Better? Okay. So, as I was saying, the goal here is not to cover every work item that we have projected. Our goal is to highlight the things that we're fairly confident, even now... you know, we're not saying that this is a lock right now, but that we're fairly confident are blockers, and then those that are nice-to-haves. You know, again, I stress again that there will be, guaranteed, additional work that comes in, and I guarantee that some of these will not make it.
K: This is, you know, us trying to be responsive to what works in the community, so I won't go through these all in detail right now. But maybe the blockers, the things that we want to hang our hat on for 1.2, basically center around three big areas. The first is around performance and scale; you can see some items here that are associated with that.
K: The second is around ease of use, including our new GUI and some other things that the system could do on your behalf. And then the third is around developer velocity and kind of paying down technical debt, and you can see some of the items there. I also have a couple of smaller, miscellaneous items that I wasn't sure where they could go; they were fairly small, but the teams that identified them did identify them as blockers.
K: Then we have a few more that are in the nice-to-have category, and I didn't associate these with themes, but, you know, these are the kind of things that we expect to be picked up over time, if there's the available space. And again, this is a community; we're wary about dictating, like, this must come in or this must not come in. If you want to work on something that is, like, incredibly low priority, you...
K: ...not in some smoke-filled room; we want to really do all this stuff in public. And so around 1.3 we're going to do something similar: propose a set of themes, a small set of themes that we, as the community, can get behind, decide which themes are the best ones, and then have folks fill in space behind them, to see what feels like it's valuable. So, with that, you know, I'll leave it open for discussion: does this feel natural to the community? Are there things that we can do to improve this? You name it. Oh...
K: Well, so, I guess, you know, we're having a fair number of discussions about that right now. We would argue... I would certainly argue that something like this, a Google Doc, is probably not the best way to share and get comments back and forth, to have a, you know, deep discussion about things, to keep things in sync with GitHub. We're...
K: Obviously, all the activity... what I am suggesting is that we want to identify a tool that can solve all those problems, where all this stuff actually does happen in the community, where there are concepts like voting and threaded discussions and, you know, integration with GitHub, and tracking assignees and all that kind of stuff.
A: There's a lot of discussion right now about possible process changes, and making this way more transparent and way more collaborative. For those of you who don't know, I joined Google just about six weeks ago, and one of my core tasks is to make sure that Google is not... that Google is not perceived as guiding, pushing, the owner of Kubernetes. So I get to spend my days in meetings saying so.
A: I have meetings scheduled just to go through what the existing processes are that we have, and how we can make them different, more transparent, better. The one that has come up a number of times in my conversations has been sort of the request for comments on a feature. Right now they're going into GitHub issues; that's a huge pain. They're not exactly pull requests, even; they're not exactly handled well, or as well as they could be.
A: The conversations feel opaque, so we're going to look at how we might be able to make this better, come back to the community and say: does this fit? Lots and lots of iteration around process. So 1.3 isn't going to be perfect, but hopefully it will be different from 1.2 and moving in the right direction. So feedback on all of this is really helpful, and not just feedback like "we want this to be more transparent"; a "hey, I've used a tool like this that might be really helpful" would also be very useful.
A: One of the big processes that I would like to figure out in the next couple of weeks or months is also how we onboard people who want to help contribute. So, for example, how do we make this nice and shiny and easy, to bring new people in to help us and get things moving forward? We've got lots and lots of work there, because there do certainly seem to be feelings that, if you aren't already part of the team that is contributing, it's really hard to get started.
K: We do, actually; I will. So, without question, it's Q1. I don't know if we've shared that publicly and, again, this is something that the community decides, it's not us, but we kind of did some mental math here internally. It is certainly Q1; it is certainly three months post the 1.1 release. The question is how much beyond 1.1-plus-three-months it really is, and we will likely, as part of this, make a proposal to the community and see what everyone thinks.
O: Yeah, I think, to add on to that: one of the things we agreed to at a community meetup, probably about a year ago, was that we wanted about three or four months between minor point releases. So, in trying to schedule what we should look at for 1.2, the goal was: what can we do to keep that schedule of three to four months between minor releases?
A: Another thing to tie into this is that we are looking to do, again, something more collaborative: bring all of the contributors, and that's a broad description of contributors, together in a way that is different and able to have sort of unconference-y conversations. That will also be in the first quarter, but I expect, and hope, it will be shortly after the 1.3... er, 1.2 release.
P: One suggestion I have for the 1.2 release: if, for some future community hangout, somebody could describe what the release process is and where they are in that, I think that would be super helpful. I know there are some issues and PRs ongoing to sort of modify this, but again, just figuring out what it is today and what the plan is for 1.2 would be super helpful.