From YouTube: Kubernetes Community Meeting 20160901
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
Demo: etcd cluster external controller; SIG Instrumentation; 1.4 release update; K8s market position and vision; OWNERS update
B
Thank you. Can you see my screen, the slides? Sure, yep, let's get started. So, I'm Hongchao Deng from CoreOS. I work mostly on Kubernetes scalability, and today I'm going to talk about an etcd controller that we wrote to manage etcd clusters on top of Kubernetes. We're going to split this talk into four phases.
B
First of all: why did we build an etcd controller? We need a better and easier way to manage and deploy etcd clusters in a cloud environment. Another question is: why Kubernetes, why build it on top of Kubernetes? Because using Kubernetes enables us to build our control logic easily; Kubernetes handles all the low-level details for us and lets us move much faster. At the same time, by building the etcd controller we can share our experience and lessons from building stateful applications like etcd on top of Kubernetes, since etcd itself is pretty stable.
B
So here's how it works. Generally, we built the etcd controller using a third-party resource. Initially we create a ThirdPartyResource in Kubernetes, and our controller keeps watching for any new objects created under it. Creating a new etcd cluster is then just creating a new object, and here we are creating a three-member etcd cluster.
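For concreteness, here is a minimal sketch of that flow using the 2016-era ThirdPartyResource API; the group, kind, and spec fields below are illustrative stand-ins, not names confirmed by the talk:

```sh
# Register the third-party resource type once (names are illustrative).
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: ThirdPartyResource
metadata:
  name: etcd-cluster.coreos.com    # yields kind "EtcdCluster" in group coreos.com
description: "A managed etcd cluster"
versions:
- name: v1
EOF

# Asking for a three-member cluster is then just creating one object of that
# kind; the controller watches for these and does the rest.
cat <<EOF | kubectl create -f -
apiVersion: coreos.com/v1
kind: EtcdCluster
metadata:
  name: etcd-cluster-demo
spec:
  size: 3
EOF
```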
B
Our controller watches that event, and then it uses the Kubernetes API to take resources and create the processes, that is, the pods, in the Kubernetes cluster. We can use either vanilla pods or the PetSet API to manage those, to make use of the creation machinery in Kubernetes. The process goes like this: initially a seed member is created, and then a reconciling loop keeps resizing the cluster until it reaches the desired state, the state requested in the spec we saw before. We keep adding more and more members until we reach the desired state, which here is a three-member cluster.
B
And the controller doesn't stop there. It keeps reconciling, because there can be partial failures; here I'm going to talk about the loss of one member. The process could crash; all kinds of failures can happen. If one member fails, our controller will find that some member, some pod, is unhealthy, and it reconciles it using both APIs: it uses the etcd API to remove the member, uses the Kubernetes API to create another pod, another process, and then uses the etcd API to add the member back into the existing cluster.
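The member-replacement dance the controller automates here is the same one an operator would do by hand with etcdctl; a rough sketch, with member IDs, pod names, and endpoints invented for illustration:

```sh
# Drop the dead member from the cluster's own membership first...
etcdctl --endpoints http://etcd-0001:2379 member remove 91bc3c398fb3c146

# ...then, after the controller has created a replacement pod via the
# Kubernetes API, register that pod as a new member.
etcdctl --endpoints http://etcd-0001:2379 \
  member add etcd-0004 http://etcd-0004:2380
```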
B
For example, we could lose two or more members at the same time, and at that point the controller is going to recover the etcd cluster from backup. We have another backup pod that periodically takes a snapshot of the cluster, and we save that into more stable storage, like a persistent volume or S3. Then we recover the cluster from the snapshot.
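That backup pod's job is essentially the etcd v3 snapshot flow; a hedged sketch, with the endpoint, paths, and bucket name all hypothetical:

```sh
# Periodically snapshot the cluster through any healthy member...
ETCDCTL_API=3 etcdctl --endpoints http://etcd-0001:2379 \
  snapshot save /backup/etcd-$(date +%s).db

# ...and copy the snapshots off to durable storage such as S3.
aws s3 cp /backup/ s3://my-etcd-backups/ --recursive
```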
B
And,
of
course,
we
we
are,
we
keep
building
it,
there's
more
features
that
we
want
to
develop.
According
to
our
production
reform,
experience
of
scd,
for
example,
we
we
won't,
we
have,
we
want
to
implement
features
of
resize,
in
which
we
can
dynamically
resize
a
cluster
from
three
members
to
five
members
and
maybe
from
five
months
back
to
three
three
members,
and
we
also
security
feature
which
we
want
to
have
tls
enable
cluster
and
we
in
slow
feature.
We
want
to
guarantee
the
service
of
sap
being
good.
B
We
want
to
guarantee
that
the
service
of
sdd
being
good.
We
want
to
spread
the
std
process
across
different
nodes
and
we
want
to
like
preserve
the
nodes
or,
and
we
also
we
might
use
a
node
selector
to
to
put
std
process
on
ssd
notes.
Also,
that
would
give
sd
class
a
better
service
and
of
course,
we
are
also
building
ui
so
that
people
can
watch
the
cluster
and
manage
those
things
easily.
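The SSD placement is ordinary Kubernetes scheduling; a minimal sketch of how it could look, with the node and label names being the conventional documentation example rather than anything from the talk:

```sh
# Label the machines that have local SSDs once...
kubectl label node node-3 disktype=ssd

# ...then pin etcd members to them with a nodeSelector in the pod spec:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: etcd-demo-0000
spec:
  nodeSelector:
    disktype: ssd          # matches the label applied above
  containers:
  - name: etcd
    image: quay.io/coreos/etcd:v3.0.6
EOF
```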
B
So next I'm going to give a demo. Okay, so, is there any question on the features so far? It does look like there's a quick...
C
Quick question. By the way, this looks awesome; I can't wait for the demo. I'm sure there'll be more questions. Actually, I'm gonna cheat: I'm gonna ask two questions. One, perhaps towards the end: could you discuss bootstrapping into a Kubernetes cluster and how that might work? But the thing I really wanted to ask at this point was: have you looked at using Helm, or do you have an opinion on using Helm, for the installation and management here?
B
Yes, that would be great.
E
So, all right, on the Helm thing: is it just about Helm packaging up the third-party resource and the controller? We'd still have to have the controller, right?
F
Okay, the model. So the model that I've always thought of is that the controller itself should be responsible for creating the third-party resource, and in that context the Helm chart would simply be responsible for creating the pod that contains the controller, or the ReplicaSet that contains the controller, and then the controller in turn is responsible for talking to the API to create the resource that it wants to manage.
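Under that model, the only object the chart needs to render is the controller's own workload; a hedged sketch of what that single manifest could look like, with the image name hypothetical:

```sh
# The chart ships just this; the controller creates its own TPR at startup.
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: etcd-controller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: etcd-controller
    spec:
      containers:
      - name: controller
        image: example.com/etcd-controller:latest   # hypothetical image
EOF
```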
G
I had another question: the behavior you described for the etcd controller sounds the same as the behavior of the replication controller. I was wondering why you built a new controller.
B
Four minutes? Okay, let me go really quick. So I'm gonna log into a remote machine and...
B
So now let's take a look. Now I'm going to create the controller; it's just a pod that we deploy on top of the Kubernetes cluster. And we see it's creating one pod, and then it should keep creating more pods until it reaches the desired state.
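What the demo shows at this point is just the pod list converging; the equivalent from a terminal would be something like:

```sh
# Watch the controller bring up one member at a time toward the desired size.
kubectl get pods --watch
```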
B
And
finally,
we
reached
to
like
three
members
and
and
now
and
now
we
can
do
some
resizing
by
changing
the
spat
from
like
the
size
three
to
a
size.
Five,
but
unfortunately,
like
the
the
cups
here
apply,
doesn't
work
for
foot
property
resource
right
now
it
has
a
bug.
So
the
only
thing
I
can
do
is
I
need
to
log
into
the
master
machine
and
do
curl
update
there.
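The workaround amounts to a raw PUT against the TPR object on the apiserver; a rough sketch, using the illustrative names from earlier and the apiserver's local insecure port:

```sh
# Fetch the cluster object, bump spec.size from 3 to 5, and PUT it back.
curl -s http://127.0.0.1:8080/apis/coreos.com/v1/namespaces/default/etcdclusters/etcd-cluster-demo > cluster.json
sed -i 's/"size": 3/"size": 5/' cluster.json   # crude edit, fine for a demo
curl -s -X PUT -H 'Content-Type: application/json' --data @cluster.json \
  http://127.0.0.1:8080/apis/coreos.com/v1/namespaces/default/etcdclusters/etcd-cluster-demo
```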
B
Okay, so if you can see here, it's actually going to PUT an update on the cluster resource, on the etcd cluster object that we just created. Now let's do the request... ready.
B
Oh, oh, I see. Yeah, this one sucks. All right, now we see that the size has become five, and again it's creating more and more pods. And now let's change it again, to resize back down to three; it should work again. And yeah, we killed one pod there, and we see it created the index-five pod. Now we can also go and look at the membership inside the etcd cluster, which we can do with etcdctl.
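Checking membership from outside is just running etcdctl inside one of the member pods, along these lines (pod name invented):

```sh
# Ask one member for the cluster's own view of its membership.
kubectl exec etcd-cluster-demo-0002 -- etcdctl member list
```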
B
Right, we saw it has three members, two, five, and four, which matches the pods we have here. So this was a little hurried, due to the limit of time. We're also going to be at KubeCon in November; if you have more interest, we can talk more at that time.
B
And we're also gonna open-source it, once we implement the initial set of features and finish the documentation. Okay, thank you.
A
Awesome, thank you. All right, so up next we have the report about the first meeting of SIG Instrumentation, and what SIG Instrumentation is going to be working on. Fabian, are you... I think I saw that you're connected.
H
Yeah, basically. We are, in SIG Instrumentation now, just sort of showing around how we can do monitoring. We had the first presentation about that today, and we have already scheduled further ones for the next two weeks. So if you're interested in how you can monitor your cluster and the things flying on top of it, and just interested in different approaches...
H
Yeah, the SIG meeting is a good place to start. We also thought about, okay, where can we start making the monitoring story better? One of the points we want to address really soon is getting more metrics on the cluster state in general. So: how many pods are running? How many restarts does a certain container have? Basically, all these details you currently have to fetch from the API and extract manually, and we want to make them accessible really easily for everyone. So we sort of want to revive the kube-state-metrics component, which was actually scheduled for retirement; but, yeah, I hope we can avoid that and have something there really soon.
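For reference, kube-state-metrics exposes exactly this kind of cluster-state data as Prometheus metrics; a sketch of poking at it by hand, with the service name and port assumed rather than taken from the talk:

```sh
# Scrape the kube-state-metrics endpoint and pull out the pod-level series
# (exact metric names vary by version; phases and restart counts are among them).
curl -s http://kube-state-metrics:8080/metrics | grep '^kube_pod_'
```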
A
Right. Well, and retired doesn't necessarily mean gone forever; it means unsupported, but still archived for the moment. But if you folks are working on it, then it seems a thing that we need to pay attention to. Anybody have questions about what SIG Instrumentation has kicked off to do, and where to find them?
I
Yes, I'd love to. So, yeah, I'm just going to talk about the stuff that's been going on for the past week and then give an update on where we're at generally.
I
So one thing that we discovered is that we are going to need to refine our exception policy for feature-freeze and give clearer guidance in this area. Right now there just aren't enough historical cases for us to point to to set expectations, so we need a way of doing that.
I
I know that not everyone gets all the information that I send, because I receive a lot of questions; people come to me not knowing stuff I've sent to the list. And I expect to have to send time-sensitive announcements. Like, we haven't nailed down when we're going to open up the main master branch for 1.5 work, and I need to communicate that and make sure it's clear to everyone, so that we're not in a situation where only half the people know. So I might start putting really bold subjects, like an all-caps RELEASE ANNOUNCEMENT or that sort of thing, and that's my attempt to make sure everyone is notified.
I
Test flakes haven't been getting assigned consistently to the folks who are the correct owners. Sometimes, as we mentioned, people may be on vacation or something, or for whatever reason are not going to triage the test flakes.
I
Test priority is also overloaded. We have this P0 thing, and we use P0 to mean 'this is happening a lot and slowing down development', but it also means 'this is really important to the release and we need to have it fixed'. We need to separate those two decisions: 'this is slowing down development, and that's why it's important' versus 'this is potentially really severe and we need to have it fixed before we release'. Just a quick update, and you can do this search as well: we have 37 open P0 flakes and 43 open P1 flakes. In the past we've said that we need to close all the P0 and P1 flakes to cut the release.
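The search in question is a plain GitHub issue filter; something along these lines, using the label names the kubernetes/kubernetes repo used at the time:

```sh
# Paste into GitHub's issue search for kubernetes/kubernetes:
#   is:issue is:open label:kind/flake label:priority/P0
# Or count them via the search API:
curl -s 'https://api.github.com/search/issues?q=repo:kubernetes/kubernetes+is:issue+is:open+label:kind/flake+label:priority/P0' \
  | grep total_count
```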
I
I plan to cut a release branch today. We can still fast-forward it; this is not opening up the main master to general, non-1.4 work. This is just to get a dry run through and make sure that all the piping is in place and all the tests are in place, so that we can cut a release without any surprises on the branch. If everything looks good and stable, possibly we're gonna open up master tomorrow for pre-1.5 work.
I
This release I've done something that we haven't done in previous releases; we're trying it out. I've received a number of PRs that are big refactoring PRs, which are subject to rebase issues, and for one reason or another the authors really want to get them in before the pipeline is generally opened up, but they're not really critical for 1.4. They just need to get in as soon as possible. So I created a separate milestone that, on a case-by-case basis, I've been adding stuff to, to try to strike a good compromise in this area.
So I plan on opening up to that first, getting all that stuff merged, and then possibly opening up generally on Friday, depending on where we're at with flake issues and release-branch testing, and just generally how I'm feeling about the stability of the release. So, I don't know; maybe for the future, at least, we can figure that out. Automated upgrade tests are still broken; we're working on this. Manual upgrade tests are gonna be started once the beta is cut; we're putting together a team now of folks who are going to work through this. This is something we want to automate. It's really important that we automate it; it's a lot of pain every release.
I
I think one suggestion I'm gonna have is that a lot of this stuff that we're doing for the release can actually be done at the beginning of the release, and we kind of leave it till the end. So I'm going to put together a list of suggestions, but one of them would be, for the next release: if we just do everything we can up front, we're going to be in a better state. And I'm going to try to send daily updates on kubernetes-dev with the status. I can't promise I'm gonna get this done every day, but I'm gonna try to just say what's changed that day, how things have improved, and anything new with discoveries that is worth something.
I
Thanks, Eric, for your message. Richard, yeah, this is something we're trying out as a compromise, basically to relieve pressure for stuff that I'm receiving a lot of pressure to get in. And I don't think she joined, but we can take it up on Slack or wherever. Again, this is something we're trying, and if you want to put together a forum to discuss it, I'd be happy to attend that.
A
Awesome. I suggested the mailing list for things that are more decisions; I mean, there's plenty of discussion, but make sure that if we come to some sort of decision, we can reference where we had the discussion and where we came to the decision, so that we don't continually relitigate. So I think it may be worth some conversation, but I agree that we have to try a bunch of stuff to see what sticks for this community. And your work continues to be awesome in all of this.
A
There continue to be burndown meetings, if you want to attend those as well. All right, on to the next thing, then. So: Aparna. I'm going to turn on the video so we can see Aparna. Okay, there's Aparna. Aparna is going to give us market positioning and a product vision for Kubernetes, for discussion.
J
Thanks. So we put together a summary of what the landscape is like in terms of the market for containers and container orchestration: a very rough sense of market sizing, a proposal for how we might segment the user base, a little bit of comparison to the alternatives to Kubernetes that customers have and what their pros and cons are from a customer's point of view, and then, lastly, it goes into a little bit of what we should work on in Kubernetes to target the different segments, and a rough sense of priority. This is the starting point for discussion with this group, obviously, but also a starting point for developing a longer roadmap for the Kubernetes effort, and I hope to do that along with the PM group, you know, over there.
A
I was gonna say: can other people see the slides? Because I am so many levels of inception into this particular video presentation that I'm not sure what I can see. You can see it? Okay, so then we can see it from here. Okay, yeah. So we've got customer segments and alternatives, project vision... what slide do you want to be on? Yeah, this is...
J
Good, perfect. So that's the overall agenda for today. We're not going to talk about the roadmap, because we partly don't develop the roadmap, so it would be an incomplete discussion; but I hope that in some amount of time we will take this work and develop it into a roadmap, and we'll be able to come back to this team and discuss that. So I really welcome feedback, and I think with that we can go to the next slide. So, just to tee up what we're working on and why: I think a lot of us are excited about the work that we do here in this community.
J
I believe that this is really the next transformation in compute, and a bunch of us have been comparing it to other transformations in compute. Every eight years, it seems, there's a major change; these are a list of some of them, and it seems that containers and microservices are the next one. And why do we believe this? There's a bunch of data, obviously, in terms of adoption, but also this was something that I was able to pull together...
J
...just looking at mindshare. You can sort of see that virtualization, that's the yellow line here, which is, you know, the VMware ESX standard, kind of went through this peak, and now it's on a decline. And it maps, at least in my experience: 2006, when it took off; 2010, when it sort of peaked; and then 2014, when OpenStack started to take off. That's really when virtualization, and this is hardware virtualization, seems to have started declining. And then containers, which is the red line, represented by Docker, combined with the green line...
J
...that really seems to be the next revolution: the combination of containers and public cloud. And again, it's roughly an eight-year cycle, which is probably due to hardware refresh and software updates; that's kind of how long it takes. So I was trying to project this forward into how much time we have as a community to make this change happen, and I think we're at the beginning stages, in the first couple of years, and we have another five to six years; but it's really important to catch that initial phase. And then I was also trying, for the team, to size the market.
J
So if it's an eight-year trend and we've got the next three or four years until it peaks: how big will container adoption and container orchestration be in the next four years? This is a very rough estimate, and it would be difficult, actually, for me to go into how I calculated it, but roughly, based on multiple different analyst reports, 45 to 60 percent of customers are expected to containerize their workloads in production by 2020, which is four years from now, and that's assuming a thirty percent CAGR; it could be higher in public clouds. This is an overall figure. So if you take the market size for public cloud, I think it's at 40 billion...
J
...you could say that roughly half of that could be containers in public clouds and container orchestration, which is a 20-billion-dollar market in the next four years, which is very substantial. I was also trying to figure out: what is the desired benefit? What is the reason that people want to use containers and container orchestrators at this time? There are many different benefits; within Google, actually, it's resource utilization. I think that was one of the main drivers. But this survey from Docker, which they presented, I think, at DockerCon...
J
...I think it's relatively accurate, just based on other surveys as well: the primary benefit that customers are seeing now, and targeting now, is speeding up the software development cycle and having better standardization, and actually a lot of the reduced-opex or reduced-capex stuff is down at the bottom. And so I think that container orchestration has several waves still to play out. In the initial wave it's very much about improving developer productivity. In the kind of second wave, I think people will start to realize the portability benefits and will start to use federation and multiple clouds and containers for that. And then I think there's going to be a phase after that which will involve true resource efficiency, and that's where a lot of scheduling sophistication will come in. So that's all the high-level stuff that I have. Then I kind of go into: how should we be thinking about this, in terms of who the customers are and what the segments are?
J
So I actually used some customer examples to illustrate the segments, at least one way of looking at the customer segments. I'm positing that one segment is the segment of smaller companies, or startups, that actually start in the cloud. They start in a public cloud, and they come to containers-as-a-service from different perspectives. One came from AWS EC2, and they were a VM-based customer, and some of the reasons why they came to GKE in particular were developer productivity.
J
So they had less than ten percent utilization, and overall their deployment time was measured in hours. When they came to containers-as-a-service, they actually rewrote their app. There's a small picture at the bottom, which is probably very hard to see, which shows the re-architecture that they did: they rewrote it with only one level of load balancing, and they were able to get a real developer-productivity benefit, so that it's less than 10 minutes to fully roll into production.
J
And they can also roll back very quickly. This is a company whose main business is a public-facing website, and so being able to deploy 30 to 50 times a day is really pretty important to their business. They also found that it was better and easier to troubleshoot, because they had fewer levels of load balancing, and, lastly, it was less expensive and easier to use. So I think that's definitely one kind, and that's not just one customer.
J
There are many such customers that come from a world of using VMs to containers-as-a-service, and they immediately find these productivity and, actually, cost benefits. The second segment that I'm positing is at-scale modern apps, and the two examples I'm using here are Uber and Box. Uber is obviously not a Kubernetes customer today; they're a Mesos shop. Box is a publicly known Kubernetes user. And typically, in these kinds of environments, there is already an existing footprint.
J
So in the case of Uber, and they've documented this extensively, they used to be an AWS shop. They had a monolithic architecture; they moved to Postgres and... sorry, they moved to Thrift and Schemaless, they built their own version of a MySQL-based database, and they started using Mesos, and I think until last year they had only just started adopting containers. So this is a very different kind of customer versus the first segment, which is pure cloud-native.
J
This segment comes from, you know, kind of a legacy environment, but at the same time they are a very at-scale application. Some of the things that they noted as benefits are, again, developer productivity, but they also found that they had better isolation by using containers. I won't spend that much time on this; I'll go to the next segment. I think the next segment is distinct from the other two: these are kind of more legacy enterprise customers. There is a variety of these customers; I think we all kind of know a lot. A lot of banks are in this situation. They have mostly legacy applications, some of them developed in-house.
J
Partly in the cloud and partly, you know, in a private cloud. And the requirements that I hear most often from these types of customers are around improving governance, maybe utilizing existing hardware, and having much more tie-in to policy and self-service, whatever the internal framework is that they have for policy and self-service. So, again, kind of a meaningfully different segment.
J
So if we go to the next slide here, I've just tried to summarize the three segments, and, again, just a summary of what the requirements and goals of each segment are. So, the startup or emerging segment: again, they are starting in the cloud; they may be coming from a pure VM environment or from a pure PaaS environment; they're new to containers. I think the main requirements are, you know, they are looking for lower cost and ease of use and, of course, deployment speed.
J
The modern at-scale segment: they're coming from, potentially, a colo or owned data center with a bare-metal environment, and they may be a SaaS provider. Their requirements are pretty hefty. A lot of them, like Uber, for example, are looking for geo-locality and global expansion, and they are looking very much for federation and a hybrid type of footprint, where some, potentially the majority, is on-premise, and some is in different types of clouds. They often, particularly when it comes up relative to Mesos, are looking for thousands of nodes of scale, at least proof points of that, and increasingly they're looking for multi-workload efficiency: being able to run batch and web and streaming workloads in parallel on the same infrastructure and gain high efficiency, which is not, I don't think, a well-solved problem. Many of them, since they're SaaS vendors, are also looking for multi-tenancy, and often they are very much into the ability to customize.
J
The legacy segment: the add-ons to those apps are typically what they're bringing, and are interested in Docker and Kubernetes for, and I think their requirements, as I mentioned, are policy, governance, security, and a mix of some of the things that the modern folks are looking for as well, in terms of hybrid. So I'll go to the next slide. So what should our strategy be? If we actually believe that there are these three distinct segments, and they have distinct needs, does it make sense for us to target everything? I think...
J
...but how do we prioritize? And I'm positing here that for the first segment, the startup segment, we actually don't need any special features, and that we should focus on the middle segment; and that if we focus on the middle segment, we have the greatest impact, because the startups will grow up and, you know, they will become these modern at-scale apps, so we will cover those. The legacy add-on folks we won't cover entirely, but perhaps, you know, we can leverage and build up a community of service providers who can support the legacy on-prem ecosystem, particularly as it relates to supporting legacy hardware, or just providing consulting and professional-services support. To the extent that the legacy add-on segment requires federation and hybrid cloud, those needs, of course, we would be covering as we focus on the middle segment. I'm not going to spend too much time on it.
J
Okay, so then, you know, I'll kind of dig into that a little bit more: what are the features and what are the things that we can focus on to enable that? So the vision here, I mean, not rocket science, is just to make it easy for everyone to use distributed applications, everywhere. I think that's been the vision of Kubernetes from the start. That's the reason why the founders founded this project: to make this technology and this set of ideas available. And I think this whole community has been working towards this objective. So I then go into each of those areas of importance. It should be easy to use, and what specifically do we need to do to make it easy to use? There's already a lot of work going on in the community with 1.4, and I think we plan to continue that in 1.5; we will have a planning session for 1.5 next week. But specifically here: it's very important for users to get a gradual introduction to Kubernetes concepts.
J
I think when you come from the world of virtual machines, you know, you see Docker and you think: okay, well, I want to use a container, but I still want to have control over the node, the virtual machine; that's the thing that I know, and my policies are all based on that. And it's really a step-function change for customers when they have to think: okay, I shouldn't think about the node; I'm just going to think about pods and services. These are all new concepts, and in many cases customers are not able to grasp that, or it's too difficult a change for them to make. So I do think that we need a more gradual introduction to Kubernetes concepts, and this is maybe more of a documentation and tutorial type of challenge. Another thing is that it should be really quick to start up common applications, because at the end of the day developers are interested in the application and not so much in the infrastructure and all the coolness...
J
...that's in the infrastructure. The coolness gets us excited, but not so much the users. And I think this is some of the work that we're doing with the workloads and with the Helm folks. Also, our UI right now is fairly poor, and it's not equivalent to the CLI; the CLI itself needs some work. So this is all stuff that, by the way, is coming up; at least, I'm going to be proposing it for 1.5. I'll skip some of this in favor of some of the other slides. So, easy to adopt: we don't have great examples of support for batch and stateful workloads, and stateful workloads are still alpha, which is really difficult for a lot of people to bet on. So those are some of the major things.
J
Also, one request that comes up very often is dependency management. So: I want to start my containers in a certain order; I want my database to be initialized, and then I want my app. And customers want to be able to capture that, and also, you know, to have the dependencies considered during upgrades, and it's not clear that Kubernetes has... I mean, we do have a solution for it.
J
I think this one is really around hybrid, and around, you know, support in both AWS and on private clouds. And I think one of the things that comes up most often there in customer conversations is: now that I've moved to microservices, I really need better visibility, from a monitoring and logging perspective, across clusters, for troubleshooting, and that's something that customers still don't really have. So I think, you know, that's also something that we should target for...
A
...a future release. I'm going to cut you off here. There's lots of awesome stuff in here, including a whole pile of next steps, and I'm going to recommend that for discussion. I promised discussion in the agenda; we're not going to get there, because we've got one more item to cover in the next eight minutes. But for discussion: the Kubernetes PM group is going to be covering this and turning it into more roadmap work. So if people have thoughts or commentary, you can share them with the Kubernetes PM group. And do you know when the next meeting is? Yeah, it's not next Monday, but the Monday after that.
J
Okay, so there are only two more slides; I encourage you to read them on your own. I mean, the one before this just talks about upgrades and HA as being the other two very important pieces, and I think, other than that, I'm actually done.
A
Awesome. All right, so we will jump back to the next item on the agenda, which was to talk about maintainers and approvers and owners. I believe that Brendan and Brandon started a discussion, and we wanted to make that a little bit more broad. So Brandon had proposed... Brandon's been working on the OWNERS work with CoreOS and his team, and Brendan had some questions. So, Brendan, are you still about?
F
I linked it; the net of it was to add 260 people as reviewers, several of whom I know are only very tangentially involved in the Kubernetes process. An example would be Rick Buskins, who did some of the Spinnaker work. And so I just kind of wanted to put out to everybody: I think the reviewers are great...
F
I guess I'm a little bit leery of opting in people who may only have tangential connections, who are gonna suddenly get pinged on GitHub to review code and that sort of thing.
F
Yeah, I mean, there's already some degree of filtering, and I would suggest, actually, that we try to get, you know... I suspect that there are upwards of 100 people who make weekly contributions, or at least bi-weekly contributions, and that seems like a great number to start with, for me, and then have some easy way for people to opt in.
F
You know, either via a form or, I don't know what, but some easy way for people to opt in, if we want to. But let's just make sure that, if we're going to opt in a bunch of people, we're pretty sure that they are, like... I don't mind shanghaiing people; my belief is that if you're contributing on a weekly or bi-weekly basis, you probably should be reviewing some code too, just out of a sense of community.
F
Well, I was hoping that what we would do is we'd say something like, you know, two commits in the last four weeks, and then we would target the code areas that this person has contributed to, looking back further, right? So, like, judge activity versus contributions separately. Yeah, and...
F
Yeah, basically. And Brandon, you know, has code that does this, and so I think it's just a question of extending that code a little bit to cover this stuff. And I should say thank you very much to the CoreOS folks for pushing this forward; it's a very important improvement. And so, yeah, I think maybe the right thing there is just: anybody who's interested in sort of how we select this, or has ideas...
F
...let's go to that issue and we'll hash it out there. And then we should also, as part of that, make it super easy for people who are interested, who maybe don't have that history but do want to start reviewing, to opt in in some way.
A
I think at this point our conversation and position has been: please review, even if you aren't familiar; that doesn't mean that you're approving anything, but it is helpful. We haven't found enough opt-in that way, which is why we were targeting a broader opt-out policy by pinging 200 people. So I think there's, I hope, not lots of discussion, but I do think we have to come to an answer on this pretty quickly, and push owners, approvers, and reviewers out as quickly as possible.
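For readers following along, the artifact under discussion is the per-directory OWNERS file. A minimal sketch in the format the project later converged on (the exact keys were still in flux at the time of this meeting, and the path and usernames here are placeholders):

```yaml
# pkg/some/dir/OWNERS (hypothetical path and names)
reviewers:      # suggested automatically for code review in this directory
- alice
- bob
approvers:      # may approve changes here for merge
- carol
```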
F
I agree. I mean, I think it's perfectly fine if we as a community want to, as part of this... I mean, maybe this is the right answer, and I can do a proposal PR for this. But if we wanted to set a policy, it's basically like, you know: if you commit more than two PRs a month, you're gonna have to review something, and we're gonna put you on this list no matter what, and that's sort of a community good-citizenship thing. Please.
F
Yeah, I mean, my only concern would be: I don't want to opt in, like, just as an example, someone who came through and did a bunch of OpenStack code, right? Like, if that was eight months ago, I don't necessarily want to opt them into being reviewers. I mean, they're a drive-by, and that's kind of crappy at some level, but we also shouldn't expect that they should be coming back if they've been inactive for eight months, right?
L
So I guess I'm wondering: what's the worst that could happen if we have a reviewer that, you know, doesn't commit to it, and we have this kind of pinging system set up? It rotates.
A
I suspect that this conversation is going to continue on the pull request that Brendan put up there, which is 31752.
C
Just an incredibly quick thing, and I know we're already negative on time: please do talk to your SIG leads to help identify features and scope them out. It'd be great if you could bring those, with SIG approval, to next week. Yep.