From YouTube: Kubernetes Community Meeting - Birthday Edition 20160721
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
Demo Kubernetes + Snap PoC; Why does Redhat contribute; SIG-Windows; SIG-Node; SIG-API-Machinery; WG-contribex
A: So, welcome to the Kubernetes community meeting, July 21st. Happy birthday, Kubernetes! This is the birthday edition: it is Kubernetes' first birthday, which is super exciting. We also have our normal agenda of a demo, and then notices and updates from special interest groups that we don't often hear from. This week, SIG-Windows is going to give us an update, we're going to get an update from SIG-Node, and then David is going to give us an update from SIG-API-Machinery. Then there are a couple of other quick things. As I mentioned, it's Kubernetes' birthday, and we have twenty-odd Kubernetes birthday parties happening around the world; I put those in the notes as well. So let's get started with Nicholas Weaver from Intel. He asked us for a little bit of a longer spot for a demo, so we'll see a little longer demo today. So, Nicholas, do you want to introduce yourself and Snap?
B: Sure. Okay, can everybody hear me? Hi, my name is Nicholas Weaver; I work for Intel. Some of you may have heard of us, with your computers. I run a team that works on orchestration, scheduling, telemetry, and emerging technology for cloud, a software-focused group, and last year we built a piece of software called Snap. To boil it down to the simplest form:
B: it's a Golang-based telemetry collection tool intended to get data out of hardware and software and make it easily consumable by cloud. One of the things we had talked about with different people at Google, in the Kubernetes community, and with some customers of ours was that we should look into doing more work with Snap around Kubernetes. So out of those discussions we decided to go do a PoC we call Kubesnap, and I'm going to share my screen once I figure out how to do it. How do you share your screen, again? Oh.
B: I am in management, so you've got to forgive me. Okay, can you guys see my screen?
B: (Yes.) Sweet, all right. So I've got a really, really tiny deck here that I'm going to roll through. Basically, you guys have this thing called Google out there, and we have really good READMEs and documentation online, so I'm not going to try to sell you on what Snap is or what it does.
B: I'm going to get straight to the demo of what we built and why that might be valuable, and then we can follow up with some Q&A; also, if you'd like to look online or ping me offline, we can show you more information. Basically, Snap is a telemetry framework with a really strong focus on automation and operationality. The idea is that you're able to do almost everything dynamically: add plugins, upgrade plugins, cluster resources, do distributed workflows across multiple nodes.
B: The purpose is really just to get data out of systems in a really easy-to-consume manner. And from a selfish perspective, we at Intel have a lot of really, really smart people who are great at things like power and thermal. We make hard drives, obviously we make network cards, we make computer chips, we make memory, and so we have lots of really smart people
B: who can get really great data out of those things, but historically we haven't necessarily made it easy to collect it all into one place or get access to it. So Snap is our way of having an open-source tool where we will continue to expose lots of the stuff we do, and hopefully drive really cool use cases for consumers, users, and stack builders. Our architecture has a really simple three-component model. We're not the ones who invented plugins, and we're not the ones who invented data processing pipelines, but we use collection as a plugin model.
B: Collectors are anything we would get data out of. We have a model in the middle called processing, where you can do manipulation or encryption, any way you want to manipulate the telemetry inline as it's flowing. And then we have publishing plugins, which are the ability to sink that data into a wide variety of systems: from a file, to (in this example) Heapster, to InfluxDB, to Cassandra, or even online services like SignalFx and others. So for this demo, or this PoC, what did we do?
B: Basically, we replaced cAdvisor completely with Snap. So today I want to show that there is no cAdvisor. We extended Heapster to have a Snap data source, so Heapster can directly talk to Snap and pull data from it. snapd, the daemon that is the service for Snap, is deployed as a DaemonSet. This is fully deployed, and the snapd nodes are configured and clustered as part of a tribe, without going into huge detail.
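(For context, running snapd on every node is a standard DaemonSet pattern; here is a minimal sketch. The image name and the idea of a Snap source flag for the extended Heapster are assumptions, since the PoC's actual manifests live in the kubesnap repo.)

```bash
# Run snapd on every node as a DaemonSet (1.3-era API group).
cat <<'EOF' | kubectl create -f -
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: snapd
spec:
  template:
    metadata:
      labels:
        app: snapd
    spec:
      containers:
      - name: snapd
        image: intelsdi/snap:latest   # image name is hypothetical
        ports:
        - containerPort: 8181         # snapd's REST API
EOF
# Heapster would then be pointed at the Snap endpoints instead of cAdvisor,
# e.g. via a hypothetical --source=snap:... flag in the extended Heapster.
```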
B: We built this entire PoC on top of GCE using e2e testing, so I'm going to fire it up using the normal e2e test scripts on GCE. What we did is we basically have a repo called kubesnap, which we have already open-sourced and which I'll point to at the end of the PowerPoint. Following the README, cloning it down, and running the provisioning tools will actually provision the e2e testing, deploy Snap as a DaemonSet, wire it up to Heapster, and wire up everything you see. By the way, I'm using a video because we compressed it down so I could keep the time nice and short for Sarah, but everything I'm doing in this entire video and demo is something you can actually go to the repo, download, run yourself, and experiment with.
B: We wanted to make sure anything we could do can be proven. So I'm going to fast-forward just a little bit for time's sake, but basically we go through the whole e2e deploy, which most of you are probably very familiar with; we configure Snap, we run the tests, and everything passes. The big thing about that is that in this Kubesnap PoC, Kubernetes itself, with Snap replacing cAdvisor, has no idea that cAdvisor isn't there.
B: Everything works the way it normally does, as I'll show you in the next part. So next, based on this setup (I'll show you the cluster info first), we actually go and deploy a PHP application on top of Apache as a pod, which I do right now, as you can see. Then we take the HPA, the autoscaling settings, and we actually set it to autoscale.
B: At fifty percent: give it a second here, and it will autoscale up to three nodes, or rather three pods. That's pretty cool. Then we go up top and actually stop the load, and we watch the autoscaler scale the service back down. This is all running on top of the Kubesnap setup, so there is no cAdvisor in place: Snap, Heapster, and so on.
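(For reference, the autoscaling flow described matches the standard HPA walkthrough of that era; the 50% target and three-pod ceiling are from the demo, while the names and image come from the walkthrough rather than the video.)

```bash
# Deploy the PHP/Apache app and expose it.
kubectl run php-apache --image=gcr.io/google_containers/hpa-example \
  --requests=cpu=200m --expose --port=80

# Autoscale at 50% CPU, up to three pods, as in the demo.
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=3

# Watch the autoscaler scale up under load and back down when load stops.
kubectl get hpa --watch
```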
B: Then we have a little video here; I'll fast-forward to where we go into Grafana. It's not the exact same run, but the same setup, sped up a little bit for the sake of a demo. You basically have a graph of our original application, the PHP application pod, running up top, with the ingress and egress network metrics from Snap through Heapster at the bottom, and you can watch live as the load spikes while we generate load. Then you can watch the autoscaler automatically turn on an additional pod, and it distributes the load across them. As you watch, a third pod will come up, and it will spread the load across that third pod. A little further on, we actually kill the load: you'll watch the load go back down, and you'll watch it stop the two extra pods.
B: So, looking at all the pods and namespaces, once again you can see Snap itself is running on the master and on three of the minions at the bottom. What I want to demonstrate here is that Snap itself is actually easy to manage when it's deployed like this. So, quickly, I just port-forward each of those three minion nodes to local ports so I can access them: 601, 602, and 603. Those are each of the snap daemons.
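(A sketch of that management step, assuming hypothetical pod names and that the DaemonSet pods expose snapd's REST API on its default port 8181; the demo used three local ports like 601 through 603, unprivileged ones are used here.)

```bash
# Forward each snapd pod to a distinct local port (pod names hypothetical).
kubectl port-forward snapd-minion-1 8601:8181 &
kubectl port-forward snapd-minion-2 8602:8181 &
kubectl port-forward snapd-minion-3 8603:8181 &

# Each daemon can now be managed over its REST API, e.g. list loaded plugins.
curl -s http://localhost:8601/v1/plugins
```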
B: So Snap itself has a REST API on it, so you can actually manage it, and when it's clustered, the operations for plugins and tasks, a lot of different things, are actually handled as a cluster: you change one thing, and it replicates across the rest of the cluster, so it makes it a little easier to manage things. In this case, we're showing you that for each one of the snapd nodes, we have three plugins loaded.
B: I mentioned before that there are three types of plugins: collector, processor, publisher. In this case, we have a Docker plugin at version 8, from which we're getting most of the metrics you saw show up in the previous two examples. We have our Heapster publisher, which is how we're providing the Heapster API interface that Heapster gets its data from, and, just for the heck of it, we have a file publisher as well.
B: So if we wanted to write any of this data directly out to a local file, we could. What we're going to show you next is the metric list. When you load plugins, a metric catalog is dynamically generated with all the values you could collect, and in this case I'll show you that, mostly thanks to the Docker plugin, we have quite a lot of different metrics. You'll also notice there's an asterisk in the middle for places where there are dynamically populated values, like the container ID and such.
B: You can actually drop in wildcards and do query selections against the namespace, so you can do things like what I'll show you in a second in our task manifest: just choosing everything that's inside the Docker plugin. One thing to pay attention to is the column on the right that says eight, because that's version 8 providing that metric namespace specifically. A task, which you see listed right here off our third node, is what collects data.
B: So what we're going to do is just export the details of this task, which is already running. This is the task that was running and populating the data Heapster was collecting. All tasks can be exported as a manifest, which can be YAML or JSON, so we're just exporting the manifest out. If you look at the detail right here, there are a couple of things to look at.
B: First of all, up here at the beginning of the tree, we're collecting metrics, and you'll notice that our metrics selection, which is /intel/docker/*, is basically saying "everything after that". That's why we're getting all the metrics. So if we were to add new types of metrics, for example by upgrading a plugin, we automatically start collecting the new ones without having to touch this file.
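(A rough sketch of the kind of task manifest being described, reconstructed from Snap's documented task format rather than from the video; the schedule and publisher config values are assumptions, while the /intel/docker/* wildcard is the one from the talk.)

```bash
cat <<'EOF' > docker-task.yaml
version: 1
schedule:
  type: simple
  interval: 1s
workflow:
  collect:
    metrics:
      /intel/docker/*: {}        # wildcard: everything under the Docker plugin
    publish:
      - plugin_name: heapster    # exposes an endpoint Heapster collects from
        config:
          stats_depth: 10        # how much stats history to keep (assumed key)
EOF
snapctl task create -t docker-task.yaml
```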
B: Notice in the next section that the publish key is where we're actually calling out to the Heapster plugin and providing an endpoint that Heapster can collect against, including things like what stats span or depth we would like. I'm skipping forward just a little bit for time's sake. Okay, so what we're going to do now
B: is actually load a new version of the plugin. We're running version 8 right now, but I'm going to load version 9, which has the exact same metrics as version 8 plus a bunch of new ones that we added, based on new versions of Docker and maybe new cgroups underneath. So we load version 9 into snap daemon number three, on 603. We've only loaded this plugin into one of the three minions, and then we do a plugin list on that same one.
B: You'll see that you now have an extra version of the Docker plugin: version 9 is now loaded. At the same time, we go ahead and print out the values on the other two, on 602 and 601, and you'll notice they both also now have version 9 loaded as well. This goes back to our tribe feature, which allows them to follow each other for certain updates: plugin updates, creating new tasks, those kinds of things.
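(Roughly, with snapctl, Snap's CLI at the time, the upgrade sequence looks like this; the plugin binary name is illustrative, and the point is that tribe replicates step one to the other daemons.)

```bash
# Load the v9 Docker collector on daemon three only.
snapctl --url http://localhost:8603 plugin load snap-plugin-collector-docker-v9

# Tribe propagates the update: the other daemons list version 9 too.
snapctl --url http://localhost:8601 plugin list
snapctl --url http://localhost:8602 plugin list
```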
B: So this matters if you had a thousand minions and you wanted to upgrade plugins across them or add new values. We also restart our task here, although you don't actually have to start or stop the task to load things. But the big thing to point out is right here: the metric list. You'll notice that for one of the Docker-based branches there's now more than one plugin version that supports it.
B: You'll see an eight and a nine. If we scroll up, you'll notice there are actually some new namespaces, like stats limits and usage limits, and up a little higher I believe there are some cool networking ones as well that were added in version 9. So, a couple of cool things about this: Snap allows you to dynamically add new metrics or data or plugins on the fly, without having to restart anything.
B: Also, in our task's collection we weren't pinning to a version, which means the task that was running will automatically start using a newer version if one is available, and because our queries are dynamic, it will automatically pick up the new metrics as well. So our mechanism allows you to add new or custom metrics on the fly and automatically have them flow through and populate. Because of the length of the demo here, for time's sake, we can't go through dynamically creating graphs and all those other things.
B: But the point is that our goal with Snap was to make this highly extensible, so you can have lots and lots of different plugins and lots of values and data. Going from the demo (and I think that was the last big piece) back to the slides for a second, let me walk you through a couple of things, and then we'll switch to Q&A and I'll give everybody some time back. Repo-wise, Snap itself is under intelsdi-x/snap; you'll find it there.
B: There's a lot of information there if you're curious about how it works, how it runs, or anything else, and we have contacts at the bottom if you want to ping anyone with questions. Snap also has a pretty massive plugin catalog, and it's getting bigger; it's linked off our Snap repo. I've lost count, but we have everything from Apache through OpenStack stuff to MySQL databases.
B: These are mostly just collectors and publishers, but we can sink into almost every open-source back end right now, and there's a lot of really cool data you can go and collect. We also have about seven or eight plugins that are just about to be open-sourced as well, which will get added to this list. And finally, there's the intelsdi-x/kubesnap repo. It's open source, it's out there, and it has a lot more detail than what I showed in the video.
B: If you go to the kubesnap repo, we worked very hard so that you can actually follow along on GCE and do everything I did yourself. All the instructions are there; you can clone it down. It's wired to the e2e testing, which, because of some recent bug fixes, is a lot more stable, which is awesome; you guys ended up fixing stuff that we were running into. Be sure to follow along with it and experiment on your own with anything you'd like to do.
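(The follow-along he describes starts from the repo; the exact provisioning entry point is in the repo's README, so the comment below is only indicative.)

```bash
git clone https://github.com/intelsdi-x/kubesnap
cd kubesnap
# Then follow the README: provision a GCE cluster via the e2e scripts,
# deploy Snap as a DaemonSet, and wire it up to Heapster.
```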
B: Okay, so with that, the last couple of slides, then thanks and Q&A. Our next steps, if you're curious what we're planning on doing: we'd like to submit a PR to Heapster for Snap as an optional data source, and we'd also like to add a PR, for documentation purposes, for Snap as an alternative to cAdvisor. The way we look at it is that we do a lot of things very differently from cAdvisor. To be quite honest,
B: we've got 20 people on this, because Snap has wide use outside of just Kubernetes, and our community is already bigger than our own team on it. So we're looking to make Snap an optional, opt-in selection if somebody wants to use it where we provide value. We're not going to kill off cAdvisor; if there are things in cAdvisor we can help with, we'll do that. We're not planning on trying to uncouple cAdvisor in any way.
B: We'd rather make this an option for people who may be interested in using it instead. But we need to do due diligence on a couple of things. One is that we have to make sure Heapster can support it and that we support that part of Heapster, so we'll keep that working and running, make sure people are always fixing bugs on it, and have documentation on the ways to actually choose to opt in to Snap with Kubernetes.
B: We'll maintain support on that as well. Also, right now Snap is considered beta, but we have a 1.0 plan that we're about to announce. We will have wider information as far as Snap support, what that means, how bugs are handled, and things like that for the community, so expect that announcement pretty shortly. I'll leave this slide up right here for feedback, and then, I believe...
C: So, Derek here. I am confused on why cAdvisor is replaced. Is it possible that cAdvisor could just get the features you need, rather than replacing it?

B: There's a pretty massive architectural difference between cAdvisor and Snap, so we would basically have to turn cAdvisor into Snap.
B: You can make the argument that Snap could be overkill versus cAdvisor, but we bring a lot of different value, and we had to make some really tough architectural choices that took a lot of work. We spent probably six months on plugin APIs and managing all the data and the dynamic pieces; that's a bunch of stuff you'd have to redo in cAdvisor. So if cAdvisor works for your purposes and you don't really need all the cool features Snap may bring you, then you don't have to use Snap, and we're cool with that.
C: To elaborate on my question, I guess one thing I was trying to tease out is that there's a lot of cool stuff in there, and as we look at the roadmap, a lot of the core features we add into Kubernetes end up driving requirements into cAdvisor, and I just worry about how we keep these things in sync across replacements.
C: I just wonder how people track parity on that. And one thing I was wondering: I thought the long-term plan was probably that the kubelet is going to embed a minimal cAdvisor, and then, if you want richer stats collection, you'd run something like Kubesnap as a DaemonSet pod. I'm just curious whether you've looked at that option as a means of deploying; it looks very cool.
B: So, to be really clear, Kubesnap itself is basically a repo that lets you automate the process of putting Snap in so you can play with it. Our long-term goal is to have support instructions, or the option, for installing Snap instead of cAdvisor. From an Intel perspective, we're extremely committed to Kubernetes; on the call I've got Connor, Balaji, Nick, and others on my team who are working on the Kubernetes oversubscription and QoS stuff. We want to contribute lots of value.
B: So if there are requirements for Snap to align with in order to support pieces of Kubernetes, where we have to commit to supporting APIs or aligning, we're totally open to that, because we're very committed to being valuable to the Kubernetes community. There's a lot of stuff we bring, like multi-tenancy capability, that I think is valuable.
E: The question here: since Snap is a superset of what cAdvisor does, and cAdvisor is actually kind of expensive, wouldn't it be better to only have Snap running and not have cAdvisor running? I thought there was already work on allowing something other than cAdvisor to provide the needed metrics.
B: I think this one is for me. To be really clear, we did this demo just to inspire this conversation, and we definitely are willing to go through it in detail and explore more if that's something people want. The thing I worry about is that I wouldn't want you to depend on us unless we're committed, we support it, and it's very clear what you need us to do with Snap. But I would love to explore that more with SIG-Node, and also talk about anything ugly about Snap that we need to fix. Yeah.
A: Happy to. Okay, so up next is Clayton Coleman, who is going to talk, on Kubernetes' birthday today, about why Red Hat jumped in and finds it strategically important to contribute to Kubernetes. So, Clayton, are you there?

F: I am. Can you hear me?

A: Yes, we can.
F: Great, thanks Sarah. For those who don't know me, my name is Clayton Coleman. I'm a contributor to Kubernetes, and I'm also one of the lead engineers on OpenShift. I've been involved in Kubernetes from the very beginning, really early on, and I'll try to keep this short; I could probably talk for 35 years about this topic, even though it's only been two years since we started.
F: I'll just use 30 seconds of it. I worked on OpenShift, a platform as a service, for a long time, and we looked around and said that platform as a service doesn't really solve the problems people care about it solving. Platform as a service helps administration teams run applications, and it helps developers stay focused, but there's a lot of flexibility
F: you lose with that. So when Kubernetes was released, it was very important to us to say: we think there's a paradigm shift coming, and I think everybody else has probably seen it by this point as well. Developers, deployers, and real applications need more flexibility than what the first round of platform as a service provided, and Kubernetes was right at that low level of abstraction that said: you can run containers, but you also need these concepts that sit on top of containers to make them useful at scale.
F: It's not necessarily about the tools; it's about the patterns. So instead of modeling load balancers, we model services, and all of the characteristics of a service flow out of that. That initially appealed to us. We've been very involved from the beginning in trying to help make Kubernetes succeed as a platform for people to build stuff on top of, and that's not just Kubernetes itself; that's all of the people in the community, everybody on this meeting who has contributed code or built projects on top of Kubernetes.
F: It's the next operating-system level: you move off the single machine to the cloud. So for all of the technical reasons, but also for the fact that there's an incredibly bright and talented group of folks working on Kubernetes, everybody who has contributed from the very beginning and everybody who has gotten involved in the last year, it's been extremely exciting to work with everybody in the community, all 830 of you.
F: Kubernetes is really interesting because a lot of the use cases we've had from the very beginning have been driven by people like Sam at Box and the folks at Samsung, focused on actually solving very, very hard problems at scale in a way that is also beneficial to everybody at the smallest scale. The folks from CoreOS have been involved as well; etcd is a key contribution, a distributed, simple database that we can use to back Kubernetes.
F: A lot of people originally said that's never going to work, but this community made that bet and made that bet pan out, and the folks from CoreOS have been fantastic at supporting it as Kubernetes has grown: the performance requirements, the use cases. More than anything else, seeing this community come together and put in place real technical change to solve problems has been extremely exciting.
F: So for all those reasons and everything else, we believe in the mission of Kubernetes. My call to action for everybody on Kubernetes' first birthday is the email Brian sent out to the kubernetes-dev list; it really lays out some of the most important things we need to deal with. The stuff that Sarah and others have been working on is: how do we scale this community? How do we go to that next level? Can we make Kubernetes even more extensible?
F: Can we bring in folks like Intel when there are technologies that can be drop-in replacements, and easily fit those into the platform so that everybody can benefit? I am very excited by that. I urge everyone to read Brian's email, and I urge everybody to get involved in the SIGs, to help make the next year of Kubernetes as exciting as the first.
A: Thanks, Clayton, that is awesome. All right, I'm going to point people to you individually to keep us moving. One of the big requests I've had recently is: can we make this community meeting more technical? So I've started requesting that all the SIGs report out on some sort of cadence, and this week we actually have SIGs we haven't heard from in a while. So let's start with SIG-Windows, which some of you may not even have known existed. Can you tell us what's going on in SIG-Windows?
G: Sure, hi everybody. So SIG-Windows basically started with a technical investigation, and we followed it with a minimal viable PoC, given the limitations of the supported capabilities for Windows containers. As part of the PoC, we decided to initially have the Kubernetes control plane, that is, the API server and etcd, on Linux, and have the kubelet running on Windows on the host.
G: As part of the PoC, we added support for the Windows container runtime in the kubelet, and we started exploring how the pod would be architected. Windows doesn't have a network namespace, and Docker for Windows basically doesn't support sharing the network stack with other containers, so the initial PoC concentrated on a one-pod-equals-one-container construct.
H: Absolutely, I think that covers it. I apologize if you guys cannot hear me clearly; I don't have a great network connection.
H: Like he mentioned, we landed on L2 bridge mode as the networking mode we plan to use for Kubernetes. One of the requirements of L2 bridge mode is that if you want to bridge the connection across multiple container hosts, so that one pod that's part of an application or service can talk to another pod that landed on a different container host, then Microsoft requires that you have an identical networking configuration on both of those nodes. That means everything has to be the same:
H: the gateways, the IP address ranges, the DNS settings, everything. That opened up a little security risk, because once you do that, any container on any host can talk to any other container, either on the same host or on any other host. So essentially you get no networking isolation for the pod or for any other construct.
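(For illustration: on Windows Server 2016, an l2bridge Docker network is created per host, and the constraint described means repeating the same values on every host. The subnet and gateway below are placeholders.)

```bash
# Must be identical on every Windows container host for cross-host bridging.
docker network create -d l2bridge \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  k8s-net
```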
H: Once we discovered that, we started working with Microsoft and escalated that request. Essentially, we don't necessarily have an answer yet from Microsoft, but they're actively interested in supporting us, and they're actively working on a couple of different solutions that could potentially provide relief in this area. The idea is to create a private network that you can subscribe containers to, where the private network can span a single host or multiple hosts. One of the difficulties with that, obviously, is that Microsoft is in the latest stages of shipping Windows Server 2016.
H: Their ability to execute and deliver this feature on time is limited by schedule pressure, so we don't have a confirmed commitment from Microsoft that they're going to make this happen. We'll probably have a better update in the next three or four weeks, but right now we are still waiting for them to architect a solution, so that we can have a community discussion around the architecture and make sure it meets our needs. They say that call would probably be about two weeks from today.
H: As a concurrent path, we're also looking at Cloudbase, one of our community members in SIG-Windows; they have contributed heavily to the Open vSwitch implementation. With Open vSwitch and OVN, they have a solution around overlay networks that basically allows you to create an overlay connection that can span multiple hosts and subscribe containers to it.
H: Now, OVN has not been tested on Windows, and its implementation has been heavily tested only on Linux, so they are gearing up to try an implementation and a prototype of that solution; we'll probably have more details on that in the next few weeks as well. But we have been in active talks with Guru from the Open vSwitch team and the Cloudbase folks, and we're trying all the different solutions.
H: One more thing: when I called into the SIG meeting maybe a month and a half ago, I promised you guys that we were going to sit down and talk about the pod architecture. We still owe you that sit-down. The reason we haven't scheduled it yet is that we haven't cleared out all our networking issues; we keep hitting one issue after another on networking. Once we clear that up and understand what our networking options will be for Windows Server containers, we'll schedule it.
J: Sure, I'll just give a quick overview of the things we've been working on towards 1.4, because I think those are the interesting topics we've got right now, and I'll also quickly talk about a couple of things that hit 1.3. If you don't know, we have rkt container support, sorry, rkt runtime support, where you can swap the kubelet to use rkt as your container runtime, and I think that's really cool.
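(The runtime swap is a kubelet configuration change; roughly, with the 1.3-era flags, it looked like the following, though the exact flag set varied by setup.)

```bash
# Run the rkt API service, then point the kubelet at rkt instead of Docker.
rkt api-service &
kubelet --container-runtime=rkt \
        --rkt-api-endpoint=localhost:15441 \
        --rkt-path=/usr/bin/rkt
```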
J: That work is being driven by the Hyper folks in the form of the client-server integration proposal, and also by the Google folks in the form of the Container Runtime Interface, which will hopefully make this all more maintainable. We've also been working on making the node more reliable in terms of things like disk usage: we've got work towards managing images better when you're running low on disk space, deleting images when you're running low on disk space, and potentially evicting pods, which is something Derek's proposal covers.
J: He has been working on that, which is awesome. There's also more reliability in terms of isolation between things, by having pod-level cgroups, where best-effort pods, guaranteed pods, and so on have a little more isolation between them and a little better control from the kubelet, by putting them under a cgroup that the kubelet creates and controls. There's also been some work on AppArmor and a few other cool things like that.
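(The pod-level cgroups idea, very roughly: the kubelet creates a cgroup per pod, nested under QoS-tier parents, instead of relying only on per-container cgroups. The names below follow the proposal of that era and are illustrative, not the final implementation.)

```bash
# Illustrative hierarchy on a node:
#   kubepods/                 kubelet-managed root cgroup
#   kubepods/pod<uid>/        a Guaranteed pod's own cgroup
#   kubepods/besteffort/...   BestEffort pods grouped under a QoS parent
find /sys/fs/cgroup/cpu -maxdepth 2 -path '*kubepods*'
```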
J: I think those are the high-level topics. I'm happy to talk in more detail about any of them, or, if I missed any, I'm sure Derek can talk about them. We also have weekly meeting notes, which I just emailed out to the SIG yesterday; I believe they link to them, as well as the summary of our last meeting. I encourage you to check them out, or ask questions now as well.
C: I would say that was a good summary. I guess the only thing I'm not sure I heard: there has been some discussion around getting a generic OCI runtime implementation into the kubelet. I don't believe that proposal has emerged yet, but I encourage folks to review it and give their input as well.
A: On the whole idea of having this be a place for escalation of any cross-community, cross-special-interest-group needs: is there anything that you collectively need from the rest of the group, or anything you want people to weigh in on as far as architectural things, or should people just come join you if they want to help?
C: No, I think we've had some good discussions on that topic with SIG-Scheduling, at least with David Oppenheimer. But yes, we would encourage anybody to come, and for folks who are looking to become new contributors, I know one thing everybody would appreciate is anything that can be done to make the kubelet code more readable or more understandable, whether that's writing documentation, godoc, or anything like that. It would probably be well appreciated, and it's a good place for folks to jump in and learn the code base.

A: Awesome.
K: On API machinery: last time we spoke about three objectives for 1.4. We want to take the garbage collection controller, known as server-side reaping, and get it to the point where we can turn it on by default for at least one controller. There are a couple of issues we're dealing with; there's an umbrella issue tracking it in the SIG-API-Machinery agenda notes, if you're interested in precisely what's getting fixed.
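(Server-side garbage collection hinges on ownerReferences: a dependent object records its owner in metadata, and the controller reaps dependents once the owner is deleted. A quick way to see those links; the object name is hypothetical.)

```bash
# ReplicaSets created by a Deployment carry an ownerReference back to it.
kubectl get rs my-deployment-1234567890 -o yaml | grep -A 6 ownerReferences
```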
K: The next thing we want to do is generate more complete Swagger, to the point where someone might be able to run a generator against it. That's something many people have asked to be able to do, so that they can take the Swagger for the server and generate their Python or Java or Ruby client code. We're going to be working towards that, and one person is dedicated to making that work. The last major thing of external interest is splitting out the Go client.
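(The client-generation workflow being enabled would look roughly like this; /swaggerapi was where the 1.x API server served its spec, and swagger-codegen is one stock generator, so the exact invocation is indicative.)

```bash
# Grab the API server's Swagger spec through a local proxy.
kubectl proxy --port=8001 &
curl -s http://localhost:8001/swaggerapi/api/v1 > k8s-v1-swagger.json

# Generate, e.g., a Python client from it.
swagger-codegen generate -i k8s-v1-swagger.json -l python -o ./k8s-python-client
```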
K: There are a couple of significant issues to work through regarding how we would actually make use of that for migration purposes, but our goal is to split it out and then, in turn, try to depend upon it and build at least one thing on top of it inside of Kubernetes. Those are the major external-facing pieces. There are some internal pieces regarding storage and some technicalities about how you serialize things into etcd. If somebody wants to go into those, I can, but otherwise I think those are the major areas of interest.
K: I can do one better: we have a pull request that is starting to split that out, if you want to take a look at the one I just sent; there are some additional comments on it. It's big, and it's going to be a multi-stage process for us to get it in. First we have to get the dependencies right, then we have to break it out, and then we have to godep it back in, or vendor it back in.
D: This isn't specific to the SIG, but again on the Go client issue: I saw something in the meeting notes about the feature cutoff for 1.4, and I'm sort of confused about the scope of feature work. Is something like splitting out this Go client the sort of thing that needs to be covered by a feature issue, or is it grandfathered in because work is already ongoing?
D: I mean, the heuristic I heard this morning during SIG-Scale was: if you're going to write a blog post about it, if you're going to go and brag about it, then you probably need a feature issue for it. And I'm not trying to slow this down or throw process at this, because this is super awesome. We've been talking about splitting up repos, we've said that the API and the client are a great place to start, and this is actually happening.
A: This is also something we're going to learn from and adjust and tweak over the 1.4 release. As Brendan said, we don't know if it's right; let's start here and see what happens, and we'll find out whether these are the right metrics or not and get a better feel for it over this release. All right, we had two other things to touch on. First, I'm going to give the world's fastest update on my elders proposal from last time, which is that, as these things happen, it's going to take longer than I had hoped.
A: So I don't have a shortlist to present to you this week, but I am going to update the issue that everyone has offered comments on, because we had a very broad discussion on there saying we really need to understand this better. So I'm going to say we apply something that looks like a product model to it and ask: what would you want to escalate to an elder?
A: Let's figure out what our use cases might be for this. I'll update the issue with that today, and then we can get people putting in the different things they think they might want. They can be hypotheticals, or they can be specific reference cases, along with what sort of action they would want out of it: not the specific resolution, like "pick my way", but rather "I would like a resolution".
A: Or "I would like someone to weigh in", that kind of thing. So we'll do this from a product perspective, and I'll update the issue today. That's the world's fastest update on elders, unless people have questions. Yes: "if two SIGs disagree on a feature, an elder is needed." Awesome, that can go into the issue.
A: The next thing we want to talk about is the idea of a working group around the contributor experience. I've used the phrase "working group" once before, when we were talking about how SIG-Cluster-Lifecycle and SIG-Cluster-Ops work together and whether they are two independent SIGs or not. They stayed two independent SIGs for the moment, but sort of sub-SIGs, which is even more confusing, and they're working together in a way that is good for everyone now.
M: We're establishing a standard way for SIGs to meet minimum requirements: at least two leads, an official meeting time, recording of notes, and so on. So I think the default request, and since I'm requesting it I'm happy to help write it up, is to formalize a proposal for the conduct of working groups as well, so that everyone is able to know who's working on them, who to contact, and what all the working groups are. I think we'll be in much better shape then.
A: In the very shortest form: is there a reason we can't generalize what we have set up for special interest groups to cover all Kubernetes groups, working groups and special interest groups alike? Because I think most everything that is required of SIGs would also be required of a working group.
I: So we have a number of people who have started working on issues, as Brendan mentioned in an email. CoreOS has picked up work on the OWNERS feature; thank you very much for that, it's something we desperately need. GitHub, as we have found, is really designed for small projects and small teams. They might disagree with that, but I've looked at a number of other repositories across GitHub, even among popular projects, and, on the plus side, we have achieved a velocity that's higher than almost any other project you can think of.
I: It's higher than Rails, it's higher than Django, it's higher than Docker, even on a single-repo basis. But Docker has achieved a rate of 300 PRs merged per week, and one of the ways they did that is by fragmenting the project into multiple repositories. That's something we're eventually going to have to do, and something we need to work towards.
I: The client is one example of that. Google, I don't know if everyone knows this, has a ginormous monolithic repository, and that's the approach we brought to this project, and it clearly doesn't scale. It's going to take some time to break up the project into multiple repositories, so in the meantime we need to add the extra bits of tooling that GitHub lacks, to make our current repository actually work.
I: We've had a number of people starting work in this area; I think we have seven or eight people now working on different things, so we formed the working group as a means of coordinating amongst them. Right now it's mostly Googlers plus CoreOS, but we would definitely like help from the community, and there's no shortage of work.
I: I created a wiki page in the community repo listing a bunch of the ideas we've been discussing for a long time, with the things that are currently in progress in bold, things like OWNERS. We're working on moving to the CNCF CLA, which is holding up a few people; that's mostly hung up on automation at this point. We need to finish the work Daniel started last quarter; we still have problems with a lot of rebases due to conflicts in generated code and docs.
I: So there's a long list of issues, and we'd really like your help. Please sign up for the Slack channel and the mailing list if you're interested in getting involved, and just reach out to us. I don't have a stack-ranked, prioritized list on the wiki page right now, but if you ask, I'd be more than happy to pick something for you to work on.
A: All right, and we are out of time, apologies. We can pick this up again next time, or people can join the mailing list as we form this group and get it formalized, with all of the things that we require of SIGs, as well as getting the idea of the working group formalized. So thank you all, happy, happy birthday to Kubernetes, and thank you for your contributions over the last year, or however long you've been participating.
Over
the
last
year
or,
however
long
you've
been
participating
because
that
there
Cooper
Nittis
is
not
a
just
a
pile
of
code,
it's
a
whole
bunch
of
people,
pushing
a
vision,
and
that's
really
important
to
remember,
especially
when
we
are
when
we
are
challenged
by
our
own
successes.
So
thank
you
very
much
for
all
of
your
help
over
the
last
year
and
let's
have
another
and
awesome
and
even
better
you're,
going
ahead.
See
you
all
next
week.
Thank
you.
Thank.