From YouTube: Kubernetes Community Meeting 20170615
Description
We have PUBLIC and RECORDED weekly video meetings every Thursday at 10am US Pacific Time.
https://docs.google.com/document/d/1VQDIAB0OqiSjIHI8AWMvSdceWhnz56jNpZrLs6o7NJY
Demo - Local storage; Release 1.7, 1.6.5; SIG CLI, SIG PM, SIG Cluster Ops; Announcements
B
Hello everyone, welcome to the community meeting for June 15th; assuming you're in a time zone where it's June 15 for you, that is. I'm Justin, and I'm helping run the meeting today. We have a few things on the list: a demo for local storage, some release updates, SIG updates from SIG CLI and SIG PM, and a few announcements as well. And Jase, thank you very much for taking notes; it's the most helpful thing in the world. And Michelle, I think you said you're here for the local storage demo? Yep.
C
Cool, so let's see, alright: I am Michelle, I'm part of the storage SIG, and I'm going to demo a feature that we've been working on for this release, local persistent storage.
C
Some use cases where this is useful are things like distributed data stores and file systems, such as Cassandra and Gluster, and also large caching use cases, where the cache you are using is so large that it can't fit into memory and you have to put it onto some fast local disk. Now, using local storage ties you to the availability of that node and its storage, so for that reason local storage is not suitable for all use cases, and we are mostly targeting these specific use cases here.
C
Even then, there are some benefits in cloud environments as well. Most of the major cloud providers offer a local SSD feature, where you can access a specific SSD that is physically attached to the host that the VM is running on. So those are some of the motivations for this feature. Now, before this, the main mechanism for using local storage was the hostPath volume, and there are a lot of issues with hostPath volumes and the way they work.
C
One issue is portability: if your pod has to be rescheduled for some reason, or if you want to move your application across different clusters and different environments, those paths can be different. So with the old mechanism you have to maintain different pod specs depending on which environment you're in, and depending on the environment you also have to know which node names and which paths to use. So it's a pretty painful mechanism for accessing local storage.
C
The other issue with hostPath is accounting: because every pod is specifying a path, you have to coordinate between all the different applications that use hostPath to make sure they're not landing on the same node or using the same paths, so avoiding path collisions is also a challenge. And then the last issue is security, where your pod can specify any path on the node: it can point at system data, or at some other user's data.
C
It has the potential to modify and corrupt that data too. So those were a lot of issues with hostPath volumes, and we wanted to fix them with this new feature. The way we're going to fix those issues with local persistent volumes is basically to use the persistent volume abstraction. So for the first issue, portability: persistent volumes solve that by separating the details of the node and the underlying storage from the pod's consumption.
C
That way, you can take your pod spec and move it around different clusters, and it's portable. The second issue we're solving is accounting: with persistent volumes there's a one-to-one mapping between the persistent volume claim and the persistent volume. With that, we can better control how many users are actually using a local volume, and we don't have to worry about path collisions. In addition, because persistent volumes are first-class API objects, they have a managed lifecycle: create and delete.
C
So you know for sure you can control when you want to delete the volume, clean up the data, recreate it, and expose it back into the cluster. And then the third issue, security: persistent volumes solve that because you need administrator privileges to create them. So the administrator is in control of defining which paths on which nodes are available, and it's something users cannot specify.
C
So that's the solution we're going for in 1.7, as an alpha feature. What we've basically provided in 1.7 at this point is a new volume type called the local volume. You can only use it as part of a persistent volume, so you cannot specify a local volume directly in a pod spec. The second part we added is to make the default scheduler aware of local volumes. And the third piece is an external static provisioner.
C
What it basically does is run as a daemon set on every node, and you launch it with some configurable parameters: you configure some directories to say, this is where I'm going to mount all my local volumes. Then this static provisioner looks through the directory that you gave it, finds all the volumes under that directory, and manages all the volumes there.
C
It creates the persistent volumes, then watches the persistent volume objects, and when they get released it goes ahead and cleans up and destroys those persistent volumes and then creates them again. So the job of this external provisioner is to simplify the cluster administrator's management of the volume lifecycle, so they don't have to manually go in and delete, clean up, and recreate those volumes.
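As a rough illustration of the setup just described, such a provisioner daemon set could be wired up along these lines; the image name, environment variable, and discovery path are assumptions for the sketch, not the actual component from the demo:

```yaml
# Hypothetical provisioner DaemonSet: every node runs one copy, and the
# host directory holding the local volume mount points is handed to it.
apiVersion: extensions/v1beta1   # DaemonSet API group in the 1.7 era
kind: DaemonSet
metadata:
  name: local-volume-provisioner
spec:
  template:
    metadata:
      labels:
        app: local-volume-provisioner
    spec:
      containers:
      - name: provisioner
        image: example.com/local-volume-provisioner:latest  # assumed image
        env:
        - name: MY_NODE_NAME          # so created PVs can point back at this node
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        volumeMounts:
        - name: discovery
          mountPath: /mnt/disks       # directory scanned for volumes
      volumes:
      - name: discovery
        hostPath:
          path: /mnt/disks            # assumed discovery directory on the host
```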
C
So with that, I'm going to go into the demo. I have a very simple application here: a stateful set, where every instance of the stateful set is just writing to its own local volume. It writes a timestamp and a sort of counter of how many times that application has been restarted. And then I have a separate deployment.
C
That is a reader pod, and it directly attaches to one of the local volumes that a stateful set instance is writing to and just reads its contents. The main thing to demonstrate here is that these pods, even if you kill them and they restart, will always get scheduled back to the correct node, the one the volume is on. So let me switch over to the demo. Okay, can you guys see this? Is the font big enough? Yeah, alright!
C
So, okay, if we look at my pods, you'll see I have my provisioner daemon set running, and it's running on every node.
C
Okay, so if we look at the YAML of this persistent volume, there are a few things to note. The first is that it was created by the local volume provisioner running on that specific node, and it has this new node affinity annotation for volumes. This is basically the same node affinity struct that the normal node affinity feature uses; we're just using the same type and applying it to volumes instead.
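For reference, a local persistent volume in the 1.7 alpha looked roughly like the sketch below; the name, path, and capacity are made up, and the alpha node-affinity annotation shown here was later replaced by a first-class field:

```yaml
# Illustrative local PersistentVolume as the alpha provisioner might
# create it: a node affinity annotation pins it to one node.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example            # hypothetical name
  annotations:
    volume.alpha.kubernetes.io/node-affinity: |
      {
        "requiredDuringSchedulingIgnoredDuringExecution": {
          "nodeSelectorTerms": [
            {
              "matchExpressions": [
                {
                  "key": "kubernetes.io/hostname",
                  "operator": "In",
                  "values": ["my-node-1"]
                }
              ]
            }
          ]
        }
      }
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1           # hypothetical mount point on the node
```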
C
We don't need to specify anything about which nodes to run on or which paths on those nodes to use. Everything is encapsulated in the persistent volume, and in the persistent volume claim you don't need to specify any of those details, only the storage class that you want to access. So that is really cool: this is completely portable. You can take this and move it to any cluster, and as long as it has a storage class called local-storage, it will work.
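A claim along those lines might look like this minimal sketch; note that the only storage-specific detail is the class name (the claim name and size are made up):

```yaml
# Hypothetical claim: nothing about nodes or paths, just a storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 100Gi
```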
So, let's launch this stateful set.
C
So here we see the stateful set is starting the instances one by one, in order, and it's creating a persistent volume claim for each of them. We'll see, as each claim gets created, it immediately gets bound to one of the available local volumes.
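In a stateful set, those per-replica claims come from a volumeClaimTemplates stanza, something like this hypothetical snippet:

```yaml
# Hypothetical volumeClaimTemplates for the demo StatefulSet: each
# replica gets its own PVC, which binds to an available local volume.
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes:
    - ReadWriteOnce
    storageClassName: local-storage
    resources:
      requests:
        storage: 100Gi
```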
If we just look at the summary here, we see three persistent volume claims, one for each of the replicas, and each one got bound to one of these specific volumes. And now let's look at the pods themselves.
C
Yes. And here are the stateful set pods, and they each got scheduled; they should all be scheduled to the node that their local volume is on. So let's verify that. Let's just verify this one: we see instance zero got scheduled to this node, and it's bound to this persistent volume. So let's see what node that persistent volume is on.
C
So if we look at my reader pod, it's another very simple bash script. All it's doing is reading a file, the file that the stateful set is writing to, and here under persistent volume claim I have specified the PVC of one of the stateful set instances. So I'm just going to attach to that same volume and read what that instance is writing.
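A hypothetical pod along the lines of the reader described here; the claim name, file path, and loop are assumptions for illustration:

```yaml
# Sketch of a reader pod: mount one of the stateful set's PVCs and
# periodically print the file the writer is appending to.
apiVersion: v1
kind: Pod
metadata:
  name: reader
spec:
  containers:
  - name: reader
    image: busybox
    command:
    - sh
    - -c
    - while true; do cat /data/out.txt; sleep 5; done  # assumed file name
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-vol-0  # hypothetical PVC of one stateful set instance
```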
C
Yeah, I'll have all of that. Okay, so, alright: the pod has terminated and it got restarted, and it restarted on the correct node. And now we see here in the reader that the stateful set is writing again, saying this is the second invocation of my stateful set; its counter is now at two. So that's all I have for the demo. Now I have a few more slides, just two more slides. Because this is a 1.7 alpha feature, there are still some limitations and not everything is working completely.
C
There are two big issues right now that might trip you up. The first main issue is that persistent volume binding still happens before pod scheduling. This is what happens today with all volumes, and we haven't changed anything here, but the problem is a bit exacerbated when you use local volumes. The main problem is that volume binding does not consider any of the pod's other resource or scheduling requirements.
C
So it might be possible that you bind to a volume on a node that doesn't have enough CPU or other resources that the pod needs, or, if you specified any pod affinity or anything like that, this doesn't really handle that at all. Another side effect is that, at the moment, you cannot have multiple local volumes in a single pod spec. Most likely what will happen is it will try to round-robin, and I think you'll end up getting local volumes on two different nodes, and then your pod is completely unschedulable.
C
Okay, I'll skip that. Anyway, here are all the links. There's a user guide for how to use the PV, how to launch the provisioner, how you specify the PVC, and all of that. We have two GitHub issues here: the first is a tracker for the implementation work that we're doing, with all the line items, and there you'll see our roadmap of future features and the bugs that we need help fixing. And then we have our design proposal here as well.
C
Basically, again, this is all being done by the storage SIG. So if you're interested in helping out with this feature, or you want to use it and need some help, or you want to provide any feedback, just come to one of our SIG meetings or send an email to the sig-storage mailing list. We will all be there, we're all watching, and we'll be able to respond. Thank you.
B
That looks super useful for a lot of things. Could you also add the slides to the Google Doc, so people can get to them if they need to? Yep, I will do that. Alright, thank you very much. Thank you.
E
Thank you. So, for the 1.7 release: this morning I took a quick snapshot of the status of things. We still have some release-blocking issues open, including flaky tests, and there are items still pending review.
E
On those blocking issues: we have a number of pending PRs approved for 1.7, and there are also cherry-picks proposed for 1.7 that are still pending, either waiting for approval, or, if rejected, they will move to 1.8. We also talked with the code freeze leads; the release team had its meeting yesterday and we reviewed the data.
E
Merges are actually taking a longer time because of everything queued up in the submit queue, so we don't have much time in hand, but so far the signal looks okay. We had planned to make the next release cut yesterday, but due to some really serious problems we are still working on fixes.
E
The remaining problems include a slow-path performance issue and failing GKE tests. We reworked the pending PRs, and there are just a couple of failing serial tests left. All of those fixes will be cherry-picked forward into the 1.7 release branch after we cut it.
E
So, Jeff filtered out the issues that look like they should be blockers, and those are the outstanding ones. I just reviewed them and applied the approved-for-milestone label, and we'll try to figure out which of the bugs should be fixed for this release. Some of the failures are really due to CI and test infrastructure health, and the test-infra team is working on solving that problem. The last thing I want to mention is documentation.
E
This release's documentation is a priority area. Yesterday was supposed to be the deadline for having all the documentation PRs open, even if they're not finished yet. If your feature is in 1.7 and requires documentation, you should open the PR now, even if it's incomplete, and then finish it quickly. The next deadline after that is June 21st.
E
Another thing outside the release itself that I want to mention: we now have so many different feature repositories to keep track of. For example, we have the kubectl repo and the ingress repo, and it's hard for us to track all of those repos for the release. So, to help us speed up the whole review process: if you have a feature in a separate repo, please file a tracking issue in the main repo and assign it to the release team. Thanks.
G
I had one quick question related to release 1.7 specifically: there are about 30 issues in the milestone that don't have an approved-for-milestone label. I think we talked a week or two ago about how that label was required for issues to stay in the milestone. When are we planning on kicking those out?
E
We are actually constantly kicking things out, so there may still be some that we missed. Also, for some of them we need the SIG leads: we talked today, and they understand that they either need to apply the approved-for-milestone label or the issue is not for this milestone. So we're allowing those for now; some pieces just aren't finalized yet.
F
Sharing my screen, is that okay? Can everyone see? So this is the feature tracking spreadsheet that SIG PM maintains. We use this to make sure that everything that needs to be documented for the release is documented, that release notes are written for those features, and that the blog post that goes out covers them. As you can see, there are currently 19 alpha features, 7 beta features, and 4 stable features.
F
So typically we will be highlighting the stable and beta features more extensively, and then some select alpha features. I just wanted to note, because I think it's important in this meeting to highlight what's coming up in 1.7: sometimes what happens is that the various SIGs don't notice that there are dependencies, and when some of these features come through, a different SIG can be impacted. So please take note of some of the things that are happening. For example, obviously, we have the demos from the storage SIG by Michelle.
F
Going direct to beta: stateful set upgrades and daemon set upgrades are going to beta. Limiting node access is a sig-auth feature, and auth features sometimes cause trouble for others, for example cluster lifecycle, so I just wanted to highlight that limiting node access is going to be a beta feature coming out of sig-auth. And then the other alpha feature that we're likely to highlight is encrypting secrets in etcd.
F
There are many, many more exciting alpha features as well, but again, they're alpha features, so they'll be emphasized less. That was the primary update related to the release. There are a couple more topics that we wanted to cover from SIG PM. Someone had a question about features going directly to beta; this is something that sig-apps, if we have a sig-apps representative, can cover.
J
So, as I understood it, the policy wasn't that everything had to go alpha, beta, and then stable, because that would mean every feature takes nine months. I mean, I'm not saying that I'm the authority here; I'm just saying that, as I was told a while back, if there was belief in the SIG, and maybe even wider than the SIG through the API review process, that you were okay to start with beta, that might be allowed. I know some small features have done that in the past.
H
The good news: the 1.6.5 release went out. The bad news is that there's a known issue that can bite you while upgrading to either 1.6.4 or 1.6.5 if you're using certain Google Cloud load balancer features. I just found out about it this morning, so I'm still getting information on how exactly to communicate it, but I will be sending out a broader notice of what to do.
K
This is a new alpha feature that is going to be partially released in 1.7: we are adding some kubectl plugin support in 1.7. So what's the idea here? Basically, kubectl plugins will allow anyone to write subcommands under kubectl. So if you would like something like, I don't know, "kubectl whereis my-kubeconfig", you could do that and publish your plugin. Plugins can be written in any language, not necessarily Go, and you can have a plugin that has access to the API and authentication, all of that.
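As a sketch of the 1.7 alpha mechanism: a plugin was described by a small descriptor file under the user's kubectl plugins directory and invoked through the kubectl plugin subcommand. Treat the exact fields and paths as illustrative rather than definitive:

```yaml
# Hypothetical ~/.kube/plugins/hello/plugin.yaml: a plugin that just
# shells out to echo. It would be run as "kubectl plugin hello".
name: hello
shortDesc: "Prints a greeting"
command: "echo Hello from a kubectl plugin"
```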
K
So, we've been working on plugins for quite some time already, basically the entire 1.7 cycle and even prior to that, and in 1.7 we are going to release a kind of alpha version of plugins. Probably the first major feature that is going to use the plugins framework is the service catalog. We are working on a number of commands for the service catalog, and all of them are going to be published as kubectl plugins.
K
After 1.7: we already have some pull requests and some work being done, both for 1.7 and beyond, and we are hoping that in 1.8 we'll have some kind of stable support for plugins in kubectl, with everything you would expect for writing plugins: full API support, support for flags and arguments, and everything else you would expect. Some documentation is also coming.
K
It's still an alpha feature in 1.7, but we are going to publish at least some docs in the GitHub repo, because there are a lot of details: where exactly your plugins go on your hard drive, how kubectl discovers the plugins that are installed, and how you actually install and publish them. So we are also working on some documentation for plugins.
L
Right, I'm going to go through this really quickly. So, briefly: we're migrating to OpenAPI. This has a couple of benefits: the caching is going to be a lot better, and, at a high level, you can pass arbitrary data from the types to the client through comments and tags on structs and fields, which enables a lot of interesting things. On declarative configuration management, we're improving apply quarter after quarter. Checking through issues quickly, things that we fixed: ordering of list elements is now retained.
L
So if you do an edit on a list, it keeps the ordering. Unions with defaulted fields weren't handled well before; we have support for this now, but we need to go update individual types. We have a proposal for identifying elements in a list by multiple fields, there was the bug with, I believe, ports getting merged correctly, and there's a proposal for replacing the tags on structs.
L
One thing that we do need to be careful with is: if we change something like the merge key for a field to fix an issue, are we breaking API compatibility, because someone sending the old request doesn't work anymore? So that's actually a challenge that we need to figure out how to address. Alright.
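To make the merge-key concern concrete, here is an illustrative strategic-merge-patch fragment; containers in a pod are matched by the merge key "name", so changing that key would change how previously written patches apply:

```yaml
# Hypothetical patch for apply/strategic merge: the list element is
# identified by its merge key (name), so only the matching container's
# image is updated instead of the whole list being replaced.
spec:
  template:
    spec:
      containers:
      - name: app          # merge key: selects the existing element
        image: nginx:1.13  # field updated on that element
```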
Repo split: this is kind of a big issue, and there's not enough time to go into details here, but roughly there are at least four sub-areas. One is just breaking out the dependencies for kubectl.
L
That's not 200 actual references; that's 200 packages, each of which may have dozens of references. I've listed out in the notes the way we're triaging that and the strategies there. There's also testing: how do we test the components individually outside the repo? We rely on e2e tests incredibly heavily now, so we have to change the strategy there. That's going to have general benefits; it's something we need to do anyway, but it becomes necessary when we do a repo split. Issue management: Joe Beda actually pointed this out to me, which was really helpful.
L
Issues in the split repos aren't tracked with the main repo, so during a release we need to be more vigilant and create umbrella issues around those. And then there's the issue of, once we split it out, how we maintain consistency between the main repo and the split repo, making sure the vendored dependencies are at the same version in both.
L
How do you make sure the kubernetes dependency is up to date, and so on? Alright, and then one challenge I think we should just call out is sustaining engineering: design reviews, PR reviews, that sort of stuff. We're getting a lot; we do get bug fixes and feature improvements, which is really great, and we can see where our focus is, but it does take away from our focus on apply and the extension areas in the repo. People ask us to work
L
on new commands and to do reviews on feature enhancements, and so we're trying to figure out how we can both have enough bandwidth to focus on the main initiatives and still not block the community. Things we've discussed are leveraging the plugin commands and saying no new top-level commands, at least for now, or trading reviews for sustaining-engineering PRs: we have those 200 packages we need to break up, so for feature enhancements we're asking that we also see a PR that helps us attack those bigger issues. These are just ideas we're kicking around.
F
So we had the SIG PM meeting, and one of the things that's working well in the SIG PM meeting is that we are having demos of the features that are launching in the upcoming release. We had SIG Storage demo several features in the meeting we had last week, and in the next meeting we will have sig-auth and sig-apps demo the features that are coming. I would encourage folks to attend, or watch the video of the SIG PM meeting, because not all features can be demoed in this community meeting, and the SIG PM meeting is a good way to understand what's coming out in the release. That was one update. The second update is that we do have some questions about the alpha-to-beta-to-stable process: what are the process and the criteria that should be used for features to move from one stage to the other? We would like to write down the promotion policy and recommend updates to it.
F
So, I think another person from the SIG and I will be working on that topic, and we also welcome others who are interested in it. And then the last update that we wanted to give is that, coming out of the Leadership Summit two weeks ago, I think it was, there is a proposal to work more on code and repository cleanup and stability, and so we would like to shape the roadmap accordingly, and we'd like to seek a decision on 1.8, on whether that will be a stability release.
A
A question, sorry. So I attended my first SIG PM meeting and it felt sort of like it was a part two to this meeting. Have we talked about maybe coupling them more? Because I felt that a lot of the demos and stuff were things that, if I hadn't known about the meeting, I would have missed out on. So what's appropriate to be in SIG PM versus the community meeting? I don't know if I'm the only one who feels that way. Yeah.
F
Well, our thinking was that there are too many features in the release, and most of them are not ready to be demoed until towards the end of the release, so it can't all be packed into this meeting. Maybe some of the more notable alpha features can be part of the community meeting, but certainly for the stable and beta features, we wanted them to be addressed in SIG PM, so that we can provide visibility and do the documentation appropriately.
B
Cluster Ops just added some stuff, but in the sake of time, since it was added late, I'm just going to say that they want your operations stories, so get on their Slack channel or email list and send them some updates. I'm sure maybe next week we can have some more time if they want to give more updates, because there are some announcements we want to get to as well.
G
GKE issues don't really have an owning SIG right now. There's a SIG Azure, there's a SIG AWS, there's a SIG OpenStack; maybe it's time for a SIG GCP, or a working group for Google's platforms, whatever. I'd rather not have the lack of a formation be a roadblock, though. Effectively, to triage these issues, I've been labeling them with an area/platform/gce or area/platform/gke label, so that there's one query you can look at for all those issues, but they still need an owner.
B
Just to wrap up, the last announcements: there was the Leadership Summit. The summary was posted in the Google Doc, and there's also a GitHub link for adding notes, if you have notes and would like to gather them all in one place, as well as a feedback form, a survey that you can go through and fill out with your feedback for the event if you were there. There are also initial proposals for governance; feedback on those is due today, so do that if you have feedback. The links are in the Google Doc as well.
B
And then the last thing on there is some refinement on community membership. There is a document describing the process for different levels of membership inside the community, and you can follow along there. I just submitted my email this morning and caused a stir, so I don't know that the process is quite ready, but there's a work-in-progress document that people are working on.
D
This is a thank-you to Philips for helping break up the governance doc and the contributor ladder doc and simplify the contributor ladder. We have more ways for people to enter and engage with the project, and we have mailing lists where you can make these requests, or just do what Justin did this morning and cause a stir. So we'll figure it out on the fly today; we don't have clear bounds yet, and we'll figure out who is able to make those additions.
D
On the meeting-notes repository: for those of you who did volunteer for that, that is super awesome; please get your docs in, because we want the GitHub community directory to be the actual record for this. And then, for those of you who did take notes but haven't identified who is going to transfer the notes to GitHub, this is going to fall back to you to find a volunteer. So please work on that, or you'll be getting nagged by myself and Jennifer, the note-taker from on-site.