From YouTube: Kubernetes Community Meeting 20160804
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
Demo of Bootkube & self-hosted Kubernetes; Future of PetSet; SIG Cluster Lifecycle; SIG Scheduling; 1.4 Feature Tracking Update; 1.3 Community Award - Daniel Smith
A: So with that, I start our August the eighth — I think... no, August the fourth — Kubernetes community meeting, and we have a lovely, full, exciting agenda ahead of us. Our first topic is going to be a demo from Aaron Levy at CoreOS about Bootkube and self-hosted Kubernetes. So, Aaron, can you introduce yourself? Yeah.
B: Yes — you've got Aaron. Okay, cool. So I'm just going to quickly go over some of the background, just to give an idea of what self-hosted actually is. Just in general, what it means is that Kubernetes is managing its own core components — they're self-hosted; that's what we're talking about. What this actually means is that all the cluster components — things like the API server, the scheduler, the controller manager — are actually managed as higher-level API objects.
B: So things like DaemonSets or Deployments — any of the actual cluster components we essentially want to model as Kubernetes objects. The end goal of this is that our host requirements end up essentially being that you have a kubelet and a container runtime, and then everything else is stacked on top of that. So, just a general look at what a self-hosted cluster might actually look like, the way that we've been deploying it today.
B: If you were to take a look at, you know, the Deployments in your cluster, you might see something like a controller manager, a scheduler, a DNS add-on; and if you were to look at the DaemonSets, you would have components like the kubelet, the kube-proxy and the kube-apiserver. In this case, this is showing a three-node cluster where, for the DaemonSets for the kubelet and the kube-proxy, there are no node selectors — we want these to be running on all nodes in our cluster — and so we have three copies, plus the API server.
B: Essentially, that's just using normal Kubernetes tools to do this upgrade, by saying "kubectl apply these updates to the cluster", and in the background, because the scheduler is an actual Deployment object, what's happening is that inside of Kubernetes the deployment controller is managing the rolling update of that component. So it's, you know, killing an old pod and then replacing it with a new one, going across your cluster, managing that upgrade process.
B: Another reason is you get to use your typical cluster introspection tools for debugging and such — they're already built into Kubernetes — to inspect the actual Kubernetes components themselves. Or maybe something like secret rotation, which can be a little bit difficult otherwise: let's say that you wanted to change the TLS assets for your API server. What that might look like in self-hosted clusters is that you create a new secret that contains your new assets.
B: What might it be like to upgrade a self-hosted cluster? Essentially, we should be able to get it down to more or less these kind of five commands: you know, update the control plane first, and then update the kubelets and kube-proxies after that. We're not quite to this point yet — there are some more nuances here — but we're not that far away. So one of the things I wanted to do was...
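The "five commands" flow described here might look roughly like the following. This is a sketch only — the exact commands were not shown in the demo, and the resource names and image tag are assumptions:

```shell
# Control plane first: roll the API server (a DaemonSet in this layout),
# then the scheduler and controller manager (Deployments roll themselves).
kubectl --namespace=kube-system set image daemonset/kube-apiserver \
    apiserver=quay.io/coreos/hyperkube:v1.3.4
kubectl --namespace=kube-system set image deployment/kube-scheduler \
    scheduler=quay.io/coreos/hyperkube:v1.3.4
kubectl --namespace=kube-system set image deployment/kube-controller-manager \
    controller-manager=quay.io/coreos/hyperkube:v1.3.4
# Then the node components: the kube-proxy and kubelet DaemonSets.
kubectl --namespace=kube-system set image daemonset/kube-proxy \
    kube-proxy=quay.io/coreos/hyperkube:v1.3.4
kubectl --namespace=kube-system set image daemonset/kubelet \
    kubelet=quay.io/coreos/hyperkube:v1.3.4
```

Note that, as the demo goes on to show, the Deployment-backed components roll automatically, while the DaemonSet-backed ones need their pods kicked by hand in this era of Kubernetes.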
B: Part of the meeting stuff is in the way — all right. So if we take a look at those things I just talked about: here are our Deployments — controller manager, scheduler, DNS — and if we take a look at the DaemonSets, the API server, the kube-proxy and the kubelet. So if we want to update the cluster, we want to do the control plane first, and we want to do the API server first. So let's just take a look at what version we're running right now.
B: And so what I'm doing here — this is just the image field of our pod spec inside of the DaemonSet — is to take it and say: well, we want the API server to actually be running 1.3.4 instead. Now, when I save this, nothing's actually going to happen, as there's no rolling-update mechanism for DaemonSets; the DaemonSet controller will replace this pod if it doesn't exist, but it's not going to roll it for us. So we take a look at the pods, and we can see this is the API server.
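The edit being described amounts to changing only the container image in the DaemonSet's pod template. A sketch of what that looks like (object and image names are illustrative):

```shell
# Open the DaemonSet in an editor...
kubectl --namespace=kube-system edit daemonset kube-apiserver
# ...and in the pod template, bump the container image, e.g.:
#     image: quay.io/coreos/hyperkube:v1.3.3   ->   ...:v1.3.4
# Saving only updates the DaemonSet object. Existing pods keep running
# until something deletes them, because DaemonSets at this point have no
# rolling-update machinery; the controller only (re)creates missing pods.
```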
B: Essentially, what it is is it's acting as a temporary control plane, just long enough that it can be replaced by a self-hosted control plane, because we kind of start in this chicken-and-egg situation where you need an API server to inject manifests, but there is no API server running. So how do you rectify that? Bootkube itself is essentially a single binary that's running an API server, a scheduler and a controller manager, and then it runs long enough...
B: ...that you can inject manifests into it. It watches and waits until it sees that a replacement API server, controller manager and scheduler are running, and then it just dies, and you don't need it anymore. I'm not going to be able to go into a whole lot of detail about it, but take a look at the GitHub for it — documentation, builds, reference and so on.
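Kicking the tires on Bootkube might look roughly like this. The repository path and subcommands reflect the project as it stood around this time and may well have changed since — treat this as an outline, not exact instructions:

```shell
# Fetch the project (path as of mid-2016).
git clone https://github.com/coreos/bootkube && cd bootkube

# Render manifests and TLS assets for the self-hosted control plane,
# then run the temporary bootstrap control plane. Bootkube exits on its
# own once the self-hosted replacements it injected are up and healthy.
bootkube render --asset-dir=cluster-assets
bootkube start  --asset-dir=cluster-assets
```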
B: Take that kind of thing, launch local clusters and, you know, kick the tires. Or — we have a Google Doc that actually goes into a lot more detail about the design of each of the self-hosted components and how it all works together, so you can take a look at that as well. Let's check on the API server — it should be back. Okay, so we can see that that suffix has changed: the API server got rolled; it's only 33 seconds old. So if we check the version now, it has updated itself.
B: So, what happens when I save this? In the background, the deployment controller is going to see that it's been modified, and then it's going to do a rolling update of those components for me — and it may even do it fast enough that I don't see it. If we look at the pods — okay, we can see that the scheduler is now ContainerCreating, because the deployment controller has seen: okay, well, now the manifest doesn't match what the actual deployment is; we need to roll those pods. So we've just done our scheduler.
B: All right, so that's been updated — three seconds old, pretty quick. So now our control plane is updated; we have a 1.3.4 control plane. Now we want to do the nodes themselves. So again, both the kube-proxy and the kubelet are DaemonSets, so we have to do our manual kind of rolling update. The first thing I want to do is edit the actual DaemonSet — do the kube-proxy first.
B: First — oh, actually, before I update the kubelet, let's take a look and verify the version they're actually running. So if we get nodes — yeah, we can see in the node object that the kubelet is updating the status of the node with the versions of the kube-proxy and the kubelet running locally on it. So we can see this node in particular is running 1.3.0. So we've updated the DaemonSet, but we still need to kick those pods to see the update roll over.
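"Kicking those pods" can be done by deleting them one node at a time; the DaemonSet controller recreates each one from the updated template. A sketch, assuming an `app=kube-proxy` label (the actual labels in the demo weren't shown):

```shell
# Roll a DaemonSet by hand: delete each pod, let the controller
# recreate it from the new template, one at a time.
for pod in $(kubectl --namespace=kube-system get pods -l app=kube-proxy \
             -o jsonpath='{.items[*].metadata.name}'); do
  kubectl --namespace=kube-system delete pod "$pod"
  # In a careful rollout you would wait for the replacement to report
  # Ready here before deleting the next one.
done
```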
B: I'll do this one first, and these can take a little bit longer to restart, because the pods actually have to terminate.
B: And what we should see now is that we have two API servers running, two controller managers and two schedulers — so within, I don't know, a minute, we were able to expand our cluster and essentially make it HA. So this is kind of the reason that I'm pretty excited about the self-hosted path in general: Kubernetes is really good at managing pieces of software; it's really good at doing these kinds of things.
B: The turtle at the very bottom is a kubelet that runs on the host just long enough to be replaced by a self-hosted kubelet, and in that design doc I mentioned — I think it's linked as well — we go into how that actually works. But again, it's vanilla Kubernetes: it essentially coordinates around a lock file. And so this is just, you know, how you can deploy on Kubernetes today already, and this...
B: I mean, that's part of the reason that you want to have rolling updates of these components: so that if you do screw it up, you're able to roll it back. You know, if you're being a safe operator, you're not going to go out and blow away every API server you have at the same time with bad images.
B: So I actually think that this gives you a lot of power in that regard, where, you know, you're going to do these kinds of canary deployments and do it slowly: okay, well, that one's rolled out. And then the next step of this is that, if I can do this manually, by hand, without much difficulty, it makes automating this process not that difficult either: a controller loop could sit there and do the same thing and say, well, I'm going to bring up an API server that's new. Did it come up? Is it functioning properly?
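The controller loop being described could be approximated in a few lines of shell — purely illustrative; a real implementation would be a controller watching the API, and the pod name here is made up for the sketch:

```shell
# Roll one copy of the API server, then gate on its health endpoint
# before touching the next copy.
kubectl --namespace=kube-system delete pod kube-apiserver-node1

# Did the replacement come up? Is it functioning properly?
until curl -ks https://127.0.0.1:443/healthz | grep -q ok; do
  sleep 2
done
# Only now proceed to the next API server, then the kubelets, node by node.
```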
B: Okay, I'm going to go on to the next one. Once the API servers are done, it can go on to the kubelets, and it can roll each node: is this node coming back up safely? If it's not, stop the update process. I think that this lends itself to being a pretty safe perspective from an operator's point of view, because Kubernetes is really good at these types of things. Awesome.
D: Hey, what's up — hey — good, good — oh, 'sup, guys. So I just thought I'd give a quick update to people who are interested in participating in PetSet development. On the whole, I don't have slides or a demo; I'm just gonna talk, so you get to look at my face and ask questions if you want. OK, so I'm going to try and keep this brief, because the breadth and scope of this particular topic has a way of filling up the available time. But to start off with: where we are.
D: The end-to-end test is a pretty basic sanity check: we just write some keys, bounce the entire cluster, read the keys — you know, do some more checks like that. So all this is is really sort of basic; it just proves that we can, in fact, deploy most stateful applications for the happy path on Kubernetes. There's still quite a bit of work to be done, but that's where we're at. Where we're going from now: you know, there's a lot of work, like I said, to be done.
D: Examples of such things are rolling update, and scale up and down. There have actually been reports that quickly scaling a PetSet up and down causes some weird issues, which, you know, I'm totally willing to believe, because I didn't design it to be quickly scaled up and down; at the very least, it'll be much, much slower than a ReplicaSet. So if you try to, say, create a hundred replicas and come back down to ten or something, there probably are issues — but these are more operational.
D: Another thing that's probably in this bucket is recycle policy on dynamically provisioned volumes, which we currently don't have, at least for those created for PetSet. So these are just things that, you know, would be nice — that would make using PetSet easier. We know we need to do it; obviously it needs to be consistent with the rest of the kubectl verbs, and the work here is just that we need to improve it.
D: The second big bucket is networking. There are a bunch of open questions in networking, and a lot of us kind of know how to solve them; I'm punting on committing to anything right now. The open questions are along the lines of: we need static IPs for pets; we need public IPs for pets — public identities in some way; or, certain pets absolutely cannot tolerate DNS, apparently, because they don't respect TTLs — even if you set a short TTL, they just ignore it.
D: So, by and large, I'd found that a lot of the databases have evolved to accepting hostnames, and PetSet today gives you hostnames, and the ones that I prototyped worked out well for the failure modes that I prototyped. Again, it might totally be the case that we need IPs to successfully deploy all databases to production, and this is kind of on the table.
D: There are a couple of ideas that have been proposed. Probably the simplest and most basic one, which you can do right now, is set up an HTTP proxy and pass a cookie header saying "I want to talk to pet zero", "I want to talk to pet one", and so on, and the proxy will route you. A probably more complicated version is, you know, declare a service for each pet and then expose that service through NodePort or type LoadBalancer. If we wanted to offload some of this work into PetSet, we could do it.
D: Now, PetSets already today require a governing service, which is responsible for the network identity of all the pets. Currently that governing service is a headless service; we could make that governing service a different type, such as type NodePort, and there are some challenges in that which we need to figure out — such as allocating a node port per endpoint, or on the pet — but these will get worked out in time. And what I'm really looking for is hard use cases that say: we must have IPs.
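The governing service in question is a headless Service (`clusterIP: None`) that gives each pet a stable DNS hostname. A minimal sketch — the names, port and labels are illustrative, not from the talk:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  clusterIP: None        # headless: DNS records only, no virtual IP
  selector:
    app: mydb
  ports:
  - port: 5432
EOF
# Each pet then gets a stable hostname of the form
#   mydb-0.mydb.default.svc.cluster.local, mydb-1.mydb..., etc.
# The idea floated above is to allow this service to be a different type
# (e.g. NodePort) so pets could also get externally reachable endpoints.
```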
D: There's a gap today in PetSet that, I believe — like, a couple of people have submitted very good prototypes, most notably cockroach and etcd; I think Jan from Red Hat support just recently submitted one for etcd — and the thing that stood out to me was that storage prevented them from actually deploying something that we would call more or less production-ready. Right: we don't have a local storage option.
D: That hits these two worst. First, you have the very high-end applications that want low-latency access to SSDs and don't want to go over a network; and then you have the really low end — or not the low end, but the dev-and-test, prototype-type things — where you're tearing down your cluster because you're iterating on design, and you don't want to spend money and you don't want to, you know, set up a Gluster cluster. Both of these would fit well with what we've till now described as data gravity, and it's...
D: ...it's the ability to attach sort-of-local storage to persistent volumes and have pods use that across restarts, such that they always get scheduled to the same node. Is there a design doc on that? Yes — it's a proposal, not a doc. The proposal — I don't remember the number; I forgot to look last night. If you look at proposals by Prashanth — well, proposals assigned to Tim Hockin would probably be an easier, but larger, list — it's essentially about local storage, and I think it would enable the two use cases I mentioned.
D: There are other ways we could tackle the storage problem. Like, I think I spoke to Clayton about getting the Gluster setup streamlined into Kubernetes, because today it's a little harder than it should be to set up either Ceph or Gluster — or, honestly, I haven't tried the rest, but I probably should — and that's another viable option, right, that solves one of the two troubles. The problem is sort of twofold: one is local storage, the other is network storage.
D: You could expose local storage over a network, and that wouldn't solve — potentially wouldn't solve — the latency problem, but it would solve the local storage problem, and I think there are things like Heketi that allow you to expose Gluster as, you know, a sort of RESTful interface that essentially tells you where to put the bricks — when I read about it, it's essentially like the GFS master. I'm yet to prototype something that works with vanilla Kubernetes, but that's another way to solve the storage problem.
D: I guess the exit criteria for the storage problem are: it should be easily usable without necessarily spending money, and really simple to set up without adding another major failure point. You know, like, if we did set up Gluster and it just happened to flake — which maybe it will or will not; I've read different, mixed reports — then that would not let us exit this criteria, right.
D: Having discussed storage: the next bucket is just difficult distributed-system edge cases, and this is something that we've been going back and forth on, and it's really hard to draw a line about where exactly the boundary of what PetSet should and shouldn't do lies. There are certain patterns that we see people trying to solve repeatedly by themselves on Kubernetes, and I think we should try and make it easier by distilling some of these patterns into the system.
D: Maybe there were some documentation issues there, but, by and large, I think I just need people who are not me or Clayton — or someone who is very familiar with the design — to try and prototype their database of choice. If you're interested in doing this, look up all the GitHub issues labeled area/stateful-apps; there are already a bunch of issues filed saying "simple database X on PetSet". Some of them have been fixed and some of them haven't.
D: If you're interested in an existing issue that's filed there and you want to try it out, I would suggest just commenting on the issue saying "hey, is anyone working on this? If not, I'm gonna grab it" — grab it, prototype it, come and ask me questions, and we can take it from there. What I'm really looking for is: "I tried doing this with what we have today, and I needed to write a bunch of code to solve this race condition", right?
D
I,
perhaps
I
need
you
to
write
like
something
that
watches
the
Canaries
API
and
proactively.
Take
some
action
or
something
like
that:
I,
don't
know
what
it
is,
but
essentially,
if
you
could
come
up
with
that
feedback,
we
could
distill
that
back
up
into
the
system.
D: So that's the distributed-systems-subtleties bucket, and the final bucket is, essentially: we need to support frameworks that use databases, such as Kafka and Spark. And this is going to be an additional challenge, because whenever we try to layer a system on top of another system that we're developing, edge conditions and bugs happen. This happened with the replication controller, for example, and Deployments: the replication controller was stable — or at least so we thought — until Deployments tickled it in all sorts of weird ways.
F: So we've been doing a lot of work with PetSets with two database systems that you haven't mentioned: both Postgres and Scylla — the Scylla work is kind of a follow-up to a bunch of the Cassandra work we were doing. I don't think we necessarily have feedback, at least not yet, along the lines of "feature X is broken and we need something else", but I do think we would like to see those databases — and, I assume, others — added into the e2e, and we wanted to share our work.
D: Yeah — so the e2e tests are not actually in contrib; they're using images that are in kubernetes/ — I think test-images, or something. You should be able to open up the test e2e PetSet file and figure out what images it's using. I mean, the sort-of-documentation stuff is in contrib, yes, but the actual YAML files that are part of the e2e are not in contrib.
D: I think, you know, it's sort of open how we would test this in a distributed fashion so that everyone can use it, but, like I said, the e2e tests that are there are pretty much running it from head today.
D: The only reason why you wouldn't do that is if you have some sort of, like, bare-metal cluster or a component that you want to test — you can't test that with our system; you'd have to federate testing in some way. But if you just want to test config that runs on top of a Kubernetes cluster, that's the place to do it.

F: I see.
F: I'd — like, I'd actually be interested in tossing out a potentially crazy idea — Aaron, shout me down if I'm being foolish. What if we actually checked your work into the charts repository for Helm and actually made this a first-class, you know, easy-to-install application there? I'm not clear what end-to-end testing is done there, particularly against head — certainly we should investigate it — but the fact is that you folks are out building these amazing applications. We know people want Postgres; we know people want whatever the other database you mentioned was.
J: I want to raise one point of concern: we're still at the point where, for all of these, you have a non-zero chance of this eating all your data and going back to the start. So I am at least somewhat concerned about anything with PetSets — or anything that we've created on PetSets — being advocated to end users in any way until we're comfortable with them, which is six months down the road. Yeah.
I: We have a — we have a soak test that runs a cluster for a week. For things like this, we should really have tests that start one up at the beginning of the run and, like, verify that it at least survives the week. That's, like, not sufficient, but it is at least one sanity check. Yeah.
J: No offense to Prashanth or anybody else who's worked on PetSet so far, but there is a very big gap between "we've made these work" — the stuff we've done — and "this is a supported configuration". Like, I think that's the challenge: we need some time on PetSets, and we want to ensure that as many people and as much infrastructure as possible is brought to bear to soak it, but we also need people like the Postgres authors...
J: ...who are committed to Postgres, going "yeah, I can follow this", and, like, deep reviews of the consistency of these systems, right? These kinds of things are the things that Jepsen tests are designed to go flush out — to find and prove that we actually didn't do anything correctly. We haven't even begun to start that work yet. So...
F: To be clear — just to answer Joe's question — I wasn't implying that the end-to-end tests would, you know, enforce Helm usage. It was more that, within the Helm repo, they would build their own tests and do nightly runs, or whatever made sense for them, and the testing would be decoupled. Okay — and I'll be quick, since we're eating into the meeting. All right.
K: We're talking about PetSet and we're talking about Helm, which are related, but — to make sure we're clear — they're different in our minds. I see PetSet as a thing that helps you make a pre-container app run well on Kubernetes. Bob mentioned projects that are next-generation, like ScyllaDB, which is the next generation of Cassandra, and MariaDB, which is the next generation of MySQL — it'd be great...
K: Actually, Bob didn't mention MariaDB, but someone did. It'd be great if we reached out to those applications and made sure they're cloud-native, so they don't need PetSet. It's always preferable to use a Deployment and Services and have the thing be Kubernetes-native and scale fast than to use a PetSet. PetSet is an adapter; it's not an end goal for applications on Kubernetes. And then, in terms of testing — great point by Brian about wanting end-to-end testing — one thing I'd like to see is for Helm charts to...
K: We talked about this in a recent meeting: when they install the server, to also say "here's how to configure a client", and even maybe to have, like, a little minimal ping from that client. Then we can have an e2e test which is not a Kubernetes core e2e test but is more like a Helm e2e test, which installs the thing on a recent-release Kubernetes cluster and has the client for that server — like a MySQL client, or a CLI ping — make sure it's up. That's all.
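The "Helm e2e with a client ping" idea sketched above might look something like this. Chart name, release name and flags are illustrative assumptions, not anything shown in the meeting:

```shell
# Install the chart on a recent-release cluster...
helm install stable/mysql

# ...then, once the server pod is Running, do a minimal ping from a
# throwaway client pod using the matching client binary.
kubectl run mysql-ping --rm -i --restart=Never --image=mysql:5.6 -- \
    mysql -h mysql -u root -e 'SELECT 1'
```

The point of the design is that the test exercises the chart the way a user would: server installed by Helm, liveness verified by the protocol's own client rather than by Kubernetes-level probes alone.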
L: Okay, cool. So I'm just going to give a short — hopefully less than ten minutes — update on what's going on in SIG Cluster Lifecycle. The primary goal of SIG Cluster Lifecycle is to make Kubernetes itself really easy to install for ninety percent of users; we're assuming those are going to want to kick the tires or go into production.
L: I've divided this into the following user stories — so in the next few slides there are going to be a few user stories that we're targeting; I've divided them up into phase 1 and phase 2. Broadly speaking, phase 1 is stuff that we hope we might achieve, in terms of one or more user stories, in time for 1.4; phase 2 is longer-term stuff for these user experiences that could be more production-ready — the type of thing for 1.5. And, to be clear, we're only targeting an alpha release of what follows in time for 1.4. Just before I go into those user stories...
L: The reason for that is that there are already many different ways of provisioning servers, and we don't want to get into the game of being opinionated about that — I mean, the users will have their own preferences. But once we've made two and three — bootstrapping plus discovery, and then add-ons — much, much easier, it will make the job of tools that want to do the provisioning as well much easier. It'll also become self-evident to users who want to automate it with their own choice of Chef or Puppet or whatever...
L: ...once they can see, CLI-wise, a new, easy way of installing Kubernetes. So I'm just going to blast through these different user stories. The main one — almost the most important one — is the initial installation experience. So: as a potential Kubernetes user, I can install Kubernetes on a handful of computers by typing two commands into each of those computers, and the process, as I said, is so simple and so obvious that I can easily automate it. That's the phase-one target, so we're going to try and get that into 1.4.
L: The next one is controls: obviously, once you've provisioned a cluster, you actually need to get some credentials that allow you to control it with kubectl. And also, as part of phase 1, being able to install cluster add-ons using kubectl apply — and that includes networking add-ons. So, in parallel, we're working on making sure that, for example, Weave Net can be deployed as an add-on just like that.
L: A couple more user stories. Being able to add a node is actually kind of easy if you can satisfy the installation story, because if you've added a computer to the cluster early on, you can add a computer to the cluster later. Upgrades we are currently thinking about punting on, because we're going to be labeling this feature alpha and we need to get the software written in the next two weeks — we are going to punt upgrades out to probably 1.5. Final user stories, or requirements:
L: HA — for that, phase-one HA, we are punting out to phase 2. So, "if one of the computers fails, the cluster carries on working" — that's basically multi-master, and that's going to be a little bit too complex to land in the next two weeks. So, on to the good stuff: this is what the user experience that we're proposing actually looks like. These are the two commands the user would have to type — and the first one...
L: The first thing to point out is just what's in the orange cloud at the top right here. One of the assumptions is that the user has installed Kubernetes from an operating-system package, and what that means is that it allows us to smooth over problems like needing to install systemd unit files, for example, or upstart scripts on operating systems that don't have systemd unit files, because that can all just be hidden away in the operating-system-specific packaging.
L: So we're assuming that the user has already typed something like "apt-get install kubernetes" — and Mike Danese is working on packaging for debs and RPMs, so this is going to be possible. So — this is the simplest example of installing a Kubernetes cluster: the user types "kube init master" on one machine, and they're given a secret token back, and then, out-of-band — like, maybe copy and paste that token over SSH — they paste it into "kube join node" on however many machines.
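The proposed two-command flow, as described. The command names were still in flux at the time (this work later shipped as kubeadm), so treat the spelling as a sketch:

```shell
# On the master machine: bootstrap the control plane.
# Prints a secret bootstrap token for joining nodes.
kube init master

# On each node: paste in the token out-of-band (e.g. over ssh)
# along with the master's address.
kube join node --token=<token> <master-address>
```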
L: Just one note on discovery. In order to implement this user experience that we all want, we are currently exploring a secure, gossip-based solution for bootstrapping, versus a public discovery service. This is still in sort of an exploration mode, where we're trying to figure out whether this is possible — we think it is; we want to prove it with a prototype. And the reason for that is that, as David said in the last SIG, users don't want to rely on a file where possible, and users don't want to run and operate additional services.
L: So I and Mike Danese are working on a proposal for this discovery that we're going to share as soon as it's looking sensible. Everything else that I've talked about is a work in progress; please come and get involved in SIG Cluster Lifecycle, on the Slack — there are some links in this presentation, and the presentation is linked from the minutes. And that's it — any questions?
A: Apologies for cutting you short on that, but we are tight, and I am going to now interrupt our regularly scheduled programming for a one-minute announcement. In our retrospective of 1.3, there was a particular name that came up over and over, and people have commented across the community about the extraordinary work done by Daniel Smith in 1.3 to keep us moving forward and get us through and out to 1.3.
A: So I took it upon myself to make the first Kubernetes community award and present it, for the 1.3 release cycle, to Daniel Smith, for being the nut that held it all together. And this — for those of you who can't see it — is actually a seven-sided lug nut that we went out and found, in bright blue, of course. And so this goes to Daniel, with our thanks. Congratulations, Daniel — hey.
A: You're most welcome, Daniel — a blue, seven-sided lug nut. Yes — John, there — sorry — yes, Joe: that was actually the brainchild of Eric Tune, who happened to find it, because why wouldn't you have a Google search set up for hexagons? So thank you, Daniel, for joining us, and thank you for all the work in 1.3.
H: I'll keep this short and sweet. We had not met in the SIG Scheduling group for several weeks; we recently had a couple of items that were fairly high priority, so we decided to meet, and we also have some upcoming events soon that we wanted to give a PSA for. One of the agenda items we talked about this week was regarding Intel's proposal for opaque integer resources — there's a link in the notes there.
H: This crosses several SIGs, so you're going to see aspects of it cross over from SIG Node as well into SIG Scheduling, but the primary topic that we were covering was how we were going to do the matching, and on what data. We kind of resolved that there would probably be an annotation on the node resource information, and there is still debate about whether you put the resource request into the pod spec or into yet another annotation on the spec — so that was still a TBD item.
H: The next agenda item that we've recently been talking about was what is going to land for the 1.4 cycle. The biggest thing landing in 1.4 is going to be pod and node affinity and anti-affinity, and they should be enabled by default now. There have been some performance optimizations that Wojciech has made, and, for those who are interested, feedback is welcome — so try it out, see how it works.
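At this stage, pod affinity/anti-affinity was expressed through an alpha annotation on the pod. A sketch of spreading replicas across nodes — labels, names and the exact JSON shape are illustrative of the alpha API, so check the 1.4 docs before relying on it:

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: web-0
  labels:
    app: web
  annotations:
    # Alpha annotation: keep this pod off any node already running
    # a pod labeled app=web.
    scheduler.alpha.kubernetes.io/affinity: >
      {"podAntiAffinity":
        {"requiredDuringSchedulingIgnoredDuringExecution": [{
          "labelSelector": {"matchLabels": {"app": "web"}},
          "topologyKey": "kubernetes.io/hostname"}]}}
spec:
  containers:
  - name: web
    image: nginx
EOF
```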
H: The last thing that we wanted to call out this cycle: we wanted to get a PSA out to a larger audience that next week, during our SIG Scheduling meeting, the CoreOS folks have kindly offered to show a demo from one of their interns, who has rewritten Firmament — a flow-based scheduling algorithm — in Go, on top of Kubernetes. So, for those who are interested, it should be a cool demo.
A: At [time] Pacific — thank you. See, we have to get that public service announcement out there too: every time we mention a SIG, say its time and/or Slack channel. Awesome — thank you for the info. Do you know if SIG Scheduling is going to try to be more consistent about meeting, or is this going to be ad hoc still, plus the mailing list and Slack channel?
A: I meant — there we go — more community engagement, which might lead to more consistent meetings. Okay, fantastic — awesome. Thank you both for leading SIG Scheduling and updating us on all of this. I'm going to do the same thing with questions: I'm going to send you off to the sig-scheduling mailing list and the SIG Scheduling Slack channel if you have questions for Tim and David. Ihor, do you have a three-minute variant of updates on 1.4 features, and maybe we can give a deeper dive next week if that's warranted?
C: Features were submitted to the kubernetes/features repo and marked with the 1.4 milestone. There's a single spreadsheet — a [feature-tracking] spreadsheet — that we have prepared, like with the previous release, and this spreadsheet is much more lightweight and is supposed to feel much more automated. So I'm going to track all the features that are expected to be required for the release, give them status, and update them straight from the feature [issues].
C: There are features there — that doesn't mean all of these features will be added in the actual release, but it means that they are expected to be added one day and that people are working on them. At the same time, I'd like to ask people who own the features to update them with the actual data.
C: If you see feature issues that have not been updated for a long time, please update them. And also, may I remind you that we are expecting to enter code freeze for 1.4 in two weeks — so if you are actively coding some stuff for Kubernetes and expect it to land, this is the time to get it in before the next milestone. And that's all from my side.
A: Excellent — thank you, Ihor. And if people want a broader overview of the features that are being worked on, they are now bubbling up through SIGs; if there are specific features that you haven't heard about, please ping me and let me know. We have two whole minutes — any notices? Anything anybody else wants to mention?
E: Real quick: as we look towards 1.4 — this came up in the scale meeting this morning — are we going to, like, have 1.4 gigabytes for the download, and maybe 1.5 will be 1.5? I know there's an issue on this; I think everybody knows it's a problem — it seems to keep growing. Is anybody actively working on it? I don't — I don't think so, but I just wanted to raise it. Did anybody pick that up?
G: David McMahon — he would love to have help, if people are interested. There is a roadmap document with a ton of stuff, and he's done a ton of it himself. It hasn't gotten much attention, you know, but if more people are willing to dedicate time to help, then we could at least create a little group or something to facilitate collaboration. For the moment, just reach out to David McMahon, and...
G: What I heard just—