From YouTube: Kubernetes Community Meeting 20181025
Description
We have a PUBLIC and RECORDED weekly meeting every Thursday at 5pm UTC.
See this page for more information: https://github.com/kubernetes/community/blob/master/events/community-meeting.md
A: All right, welcome everybody. It is October 25th, 2018, and this is the weekly Kubernetes community meeting. Everything that you are saying is live-streamed to the Internet and will be recorded for YouTube in public, so please be cognizant of that. Today, a small change: because of conflicting meetings, we're actually going to have the release team go first, then we're going to do a Cluster API AWS provider demo with Chuck, and then Aaron's got some contributor tips. We have two SIG updates today, one from SIG Docs and the other from SIG Storage.
B: …to this release means we need to go through a formal exception process; the link to it is there. Considering this is a short, stability-focused release, we might not be able to honor any exceptions at this point, again considering we just have two weeks to code slush and three weeks to code freeze. But if there is any urgent need, please follow the exception process. And while we get into the coding of things, please also think about tests, doc PRs, and any release notes that are relevant for your features.
B: As for CI signal, we had a fresh report go out yesterday. Thanks to the Cluster Lifecycle and Apps folks for fixing a couple of long-standing issues; upgrade jobs are (very nearly) no longer timing out on us. Besides this, there are a few other flakes and failures that we are actively tracking for 1.13, and there's a quick list of test failures that are beta-blocking. The good news is, there are just a few of them that are showing up in multiple jobs.
B: So ideally, if we fix those, we should get more green runs. Upgrade tests continue to be a little bit flaky, so we are actively following up on de-flaking those and seeing which are the legitimate failure signals among them. So that's it for 1.13. We have a 1.12 patch planned for, yes, later today, and there's a link to all the PRs there.
C: Right, thanks, George. Before I jump into the demo, I wanted to talk a little bit about Cluster API. Cluster API is a Kubernetes SIG-sponsored project, and its goal is to provide a Kubernetes-like experience for managing clusters, that is, for managing Kubernetes clusters. So that means you'll see things like kubectl create cluster and kubectl delete cluster, and of course the cluster object has many attributes that you can change.
C: So you can get the cluster to your specification. But clusters have to run somewhere, and that's outside the scope of the Cluster API project; Cluster API is only interested in providing the objects that give you a Kubernetes-like experience. That's where the provider implementations come in. I'm doing a demo of the Cluster API Provider AWS, and that name should make a little bit more sense now: it's the AWS implementation of Cluster API. And so with that, I will share my screen.
C: Okay. I wanted to run through some of the tools and requirements that we provide with the Cluster API Provider AWS project. There are a couple of requirements here. The scope of the project focuses on provisioning infrastructure, or pre-provisioning infrastructure, and also on installing Kubernetes onto the provisioned infrastructure. But in the AWS world you need a few more things than that.
C: So one of the requirements we have is the IAM roles and users and groups and things like that, and to that end we provide a tool, because there are quite a number of roles and users that we need. We provide a tool called, and this is a great name, clusterawsadm. I'm still not sure how to pronounce that one; I was going with "cluster awesome", but I don't know, you can make up your own pronunciation. It will help you create all of the IAM roles that you need.
C: So there's a command, clusterawsadm alpha bootstrap create-stack, and what this does is generate a CloudFormation template, upload it to your account, and create a bunch of things for you. You'll see here that I've created all of these different resources that are required to run the provider. It went very quickly because I'd already done it.
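For reference, the bootstrap step looks roughly like this (a sketch, not a verbatim capture of the demo; the subcommand follows that era's clusterawsadm and may have changed since):

```sh
# Generates a CloudFormation template, uploads it to the AWS account, and
# creates the IAM users, roles, and groups the provider needs:
clusterawsadm alpha bootstrap create-stack
```

Running it a second time finishes quickly because CloudFormation sees the stack already exists.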
C: The second requirement is that you need an existing SSH key pair, and I've already created a key pair in the region I'm going to be using, so we don't have to worry about that. The third requirement is an existing Kubernetes cluster: Cluster API uses Kubernetes as a deployment platform, which means we need a Kubernetes cluster to get more Kubernetes clusters.
C: One thing is that it needs to be a relatively recent version of Kubernetes. You can see I'm using version 1.12.1, but I think 1.11 with the latest patch version will work as well. I'm using minikube for this, but it doesn't have to be minikube; it can be any existing Kubernetes cluster. And then the last thing we need are the manifests. We actually need to generate the CRDs that describe a cluster and describe machines, so we can run a command, make manifests.
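The prerequisites described above boil down to a couple of commands (a sketch; the version and make target are as stated in the talk, everything else is illustrative):

```sh
# Any relatively recent existing cluster works as the bootstrap cluster;
# minikube is just one option.
minikube start --kubernetes-version v1.12.1

# From the cluster-api-provider-aws checkout: generate the cluster and
# machine CRD manifests.
make manifests
```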
C: That will create a couple of CRDs for us, using some tooling from kubebuilder. Oh, and the other thing I forgot to note is that I installed the latest binary releases from the provider AWS project before I started with this, and that is something you need. This is all written down in our docs and the getting-started guide in the repository; I will link that in the notes if it's not already there. And then there's one last command that we're going to run to actually create the cluster: it's the clusterctl workflow.
C: So what that looks like: clusterctl is everything you need to get started with Cluster API. One of the weird things about clusterctl is that it has to be packaged with each provider, so I've just named this one clusterctl-aws, so that I know this clusterctl is working with the AWS provider. And it's got a lot of parameters. So, clusterctl create cluster: this is a long one, so I'm going to copy and paste it. We're saying that we have an existing bootstrap cluster, and I pass it a kubeconfig.
C: Then I tell it which provider I'm using, that's provider AWS (apologies, one line break), and then I pass it the list of machines that I want to create; that lives in the generated YAML, which is coming from the make manifests command we ran earlier. Then I pass it the cluster YAML, which defines the cluster we're going to be creating, and finally I pass it the provider components.
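Put together, the create command looks something like this (an approximation of the demo's invocation; flag spellings follow that era's clusterctl and are not guaranteed verbatim):

```sh
# clusterctl packaged with the AWS provider, renamed locally to clusterctl-aws:
clusterctl-aws create cluster \
  --existing-bootstrap-cluster-kubeconfig ~/.kube/config \
  --provider aws \
  -c cluster.yaml \               # the generated cluster definition
  -m machines.yaml \              # the generated machine definitions
  -p provider-components.yaml     # controllers for the custom resources
```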
C: So what this will do is create a bootstrap cluster; but we already have a bootstrap cluster, so really we're reusing an existing one. These logs are a little bit legacy, but this is creating the controllers for the custom resources that we've defined, and then it provisions the target cluster. That means it's going out to AWS and creating all the necessary infrastructure needed to set up a Kubernetes cluster, and then it creates a control plane node. And once that's all finished, it pivots.
C: It does this thing where it moves the Cluster API components from the bootstrap cluster to the new cluster that you've provisioned, so that they live on the managed cluster. So it's managing itself, which is kind of cool, and then it finishes. Right now there's one little issue we have where it ends on a failure, but that's not super important, because all the infrastructure is still there, and I can show you: the output of this command is a kubeconfig file in the local directory.
C: So if I use that kubeconfig with kubectl and get the pods in the kube-system namespace, you can see that we've got a whole bunch of pods running, and we have one control plane node. There's still a next step of getting worker nodes to join, but that's an issue that's being actively worked on right now. So that is where the project is at. If there are any questions, I would love to take some. Yep.
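That inspection step is just plain kubectl pointed at the generated file (the file name here is assumed):

```sh
# The create command writes a kubeconfig for the new cluster into the local
# directory; use it to list the control plane's system pods:
KUBECONFIG=./kubeconfig kubectl get pods --namespace kube-system
```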
E: …committee, whatever you want to call it. Today we're going to talk about /hold. /hold is this wonderful command; all of these are linked in the meeting notes. It's a command that is supplied by Prow, and it lets you apply a label to a pull request that says: please hold this PR and do not actually merge it. You can also see a description for the label in our GitHub labels documentation, and because the GitHub API supports label descriptions,
E: if you were to hover over the label, you could also see it there; it just says "do not merge, because somebody has issued a /hold command". I skipped by it real quick, I guess, when I was showing the command's documentation, but anybody can apply a hold and anybody can remove a hold. You don't have to be a member of the Kubernetes community in order to do either of those things. The idea is that it's sort of intended to be a lightweight social thing.
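Concretely, hold is driven by two comments on the pull request, typed as plain GitHub comments rather than in a shell:

```sh
/hold          # Prow applies the do-not-merge/hold label; the PR won't auto-merge
/hold cancel   # removes the label so merge automation can proceed
```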
E
Here's
me
hovering
over
a
whole
label
in
the
github
search
to
show
the
description
we
kind
of
have
these
label
descriptions
for
most
of
our
github
labels.
So
why
would
I
want
to
use
hold?
First,
let's
talk
about
some
good
ideas
and
bad
ideas
when
it
comes
to
using
hold
a
good
idea
and
I
actually
have
to
go
back
to
the
talks
for
this
I
guess
a
good
idea.
E: would be to explain why you are putting a hold on a pull request. So here's an example of a contributor putting a hold on a pull request, saying: actually, I'm not sure this should merge, because we're still seeing flakes; and this is a pull request about promoting something that should no longer be flaky. So it's not meeting our criteria: hold up, I don't think we should do this. Another good example would be contributors saying: hey, hold up, this PR seems kind of large.
E: We believe that it's up to the author to decide when that last final step actually happens, so that's what's going on here. Now, a bad idea: although I said anybody can remove a hold, no matter who you are, we're probably going to frown on people who just randomly go around to any one of the PRs that show up in this search and start removing holds.
E: You know, the idea here is: don't be a jerk. It probably doesn't make sense for you to jump into some random PR and remove a hold if you haven't been involved in that PR in any way, shape, or form, or have no context for what's going on in there. But we do want to make sure that this is a really lightweight thing. You know, it's the emergency stop, it's the fire handle, whatever; but it's also really lightweight to remove. And then, like I said, why might you want to do this?
E
So
you
know
hang
on
either
of
you
or
think
the
PR
needs
more
discussion
or
hang
on
I.
The
author
want
this
hell
because
I
have
some
specific
people
I
want
to
hear
from
before
I
let
the
PR
merge
or
hang
on
I
the
author
or
the
reviewer
really
think.
It's
important
that
the
author
have
final
say
on
when
this
merges.
We
found
this
label
to
be
incredibly
helpful
and
useful
for
overcoming
some
of
the
tricky
or
corner
cases
when
it
comes
to
you
have
a
code
review
process
that
you
want.
E
You
want
to
use
awesome
automation
to
have
things
merged
when
you're
good
and
ready
for
them
to
merge,
but
we
don't
have
a
whole
bunch
of
time
to
code
up
this
Byzantine
structure
around
the
proven,
LG
TM
and
now,
let's
add
some
more
labels
and
all
that
stuff.
So
generally,
we
have
found
that
acting
as
a
human
being
talking
to
people
and
using
the
hold
and
hold
cancel
commands
has
really
enabled
a
lot
of
excellent
organic
code
review
workflows
to
pop
out
of
the
wild
that
has
been
this
week's
contributor
tip.
F: [inaudible question]

E: We can blacklist that person from the org. There are people responsible for frontline moderation; if people are posting inappropriate comments or harassing people, they can be prevented from contributing in any way to any of the repos in the org. And if we do find that this command is getting abused, you know, we're going to have to tighten it up a bit. But yeah.
D: I mean, I thought you were saying just now that you didn't have to be an org member to use hold?

E: You don't, but we can still prevent people from interacting with the repo, and you do have to be signed into GitHub to interact with it. So if somebody was really determined and wanted to create a whole bunch of troll accounts and then do this, we would probably end up having to lock this down. But thus far the community has been filled with a lot of not-jerks.
A: SIG chairs: if you look at the notes, we've added a link to a list of recommended topics that you can cover as part of your SIG status update, and we also added a template, in case you want to look snazzy and things like that. At the top of that document is the schedule for when your SIG is supposed to do its status update at this meeting, so please be cognizant of that; then when a host comes to chase you down, hopefully it does not surprise you.
C
Well,
let's
take
it
as
it
lies,
so
my
name
is
at
core:
listen:
I
am
contributor
to
sig
beard.
I
am
also
a
co-chair
for
sig
Docs
with
me
today
is
my
co-chair
Jennifer
Rondo
and
here's
what
we
did
last
cycle.
We
release
the
1.12
Docs
and
many
things
goes
to
Zack
Arnold,
who
is
one
of
our
newer
approvers
and
a
brand
new
Doc's
Meister
for
being
the
Meister
for
the
1.12
process.
I
did
not
link
to
them
here,
but
Zack
ended
up,
creating
a
couple
of
really
useful
tools
for
the
docs.
C
Another
thing
that
we
have
done
in
the
past
recent
cycle
is
that
we
have
updated
our
internationalization
workflows
for
kubernetes
dot
IO.
If
you
go
to
kubernetes,
not
IO,
you
can
see
now
that
there
is
a
language
selector
that
is
fully
populated
with
the
language
options
currently
available.
That's
not
to
say
that
all
of
that
content
is
there
and
fully
localized,
but
the
infrastructure
supporting
full
localization
exists.
C: We have a great tool for that now, thanks to Prow, and that tool is language labels. In OWNERS files we have added automatic designations for language, so that when content in a certain content subfolder is modified (take the subfolder for Chinese, for example; Chinese has the two-letter code zh), Prow automatically applies a label, language/zh, to the issue or PR. That makes it possible to filter by language, and for reviewers to meaningfully review work in their own language without the other languages getting in the way.
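A minimal sketch of what drives this, assuming Prow's standard OWNERS label support (the actual kubernetes/website entries may list more fields, such as reviewers and approvers):

```sh
# An OWNERS file in the Chinese content subfolder; Prow's label plugin applies
# the listed labels to any PR touching files under content/zh/:
cat <<'EOF' > content/zh/OWNERS
labels:
- language/zh
EOF
```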
C
Let's
see,
we've
also
updated
our
localization
guidelines
and
many
thanks,
especially
to
the
Korean
translation
projects
team
for
the
substantial
improvements
that
they
offered
to
not
only
how
they
do
localization,
but
how
all
teams
do
localization
so
Jimmy
and
Claudia
Qing
from
the
Korean
translation
team.
Again,
a
special
shout
out
other
projects.
Recently
we
have
better
automation
now
for
generating
API
reference
Docs.
C
What
that
looks
like
in
particulars
that
were
no
longer
relying
on
a
container,
that's
no
longer
being
maintained
and
we've
gotten
rid
of
about
the
API
Docs
used
to
be
about
45%
blank
space.
That's
no
longer
the
case.
A
lot
of
this
work
goes
to
the
credit,
for
this
goes
to
shaming
tang.
One
of
our
maintained,
errs
from
IBM
in
China,
is
its
awesome
work,
so
the
the
process
for
generating
API
reference
stocks
and
the
output
of
them
is
much
better.
Now.
C: What that looks like in particular is better onboarding docs through graphical models, from Andrew Chen at Google and Dominik Tornow at SAP. Dominik has been given to us as a dedicated resource for six months to help add models explaining Kubernetes architecture to the documentation, and there is a link in this presentation; I don't know if folks can see this well enough.
C
Let's
see
other
things
that
are
coming
up,
Tim
Fogarty
is
the
Dux,
my
sister
from
1.13
he's
already
off
to
a
great
start.
We
have
two
upcoming
doc
sprints
one
in
Shanghai,
where
we're
going
to
focus
specifically
on
localization
workflows.
I
expect
that
the
Chinese
localization
team
will
have
a
lot
of
questions
about
how
to
do
how
to
do
localization
with
a
workflow.
C
That's
been
reconsolidated
indicate
website,
so
we're
dedicating
our
time
to
making
sure
that
the
localization
team
there
has
a
good
face
time
and
as
as
much
resources,
we
can
give
them
for
learning
and
getting
comfortable
with
that.
Workflow
topic
for
Seattle
is
TBD,
I
think
we're
all
just
very
much
in
Shanghai
mode
right
now,
looking
farther
forward
in
2019
I.
Think
one
of
the
questions
that
we're
looking
at
is
how
to
ensure
that
our
content
remains
fresh
in
student,
instituting
some
sort
of
automatic
staleness.
Let's
make
sure
the
content
regularly
receives
review
or
snails
out.
C: Sub-project status: for localization in particular, SIG Docs had three different sub-projects, each one dedicated to a specific language. Those have all been consolidated back into k/website; all of those language-specific repositories and sub-project ownerships have been archived and folded back into k/website, so that is once again an integrated workflow within the k/website repo. Related working group status: we now have a working group for SIG Docs tooling.
C
We
found
the
tooling
was
taking
up
a
lot
of
our
time
and
attention
and
that
we
weren't
focusing
on
content
nearly
enough.
So
we
have
decided
to
put
together
a
working
group,
that's
led
by
Luke
Perkins
and
to
shunt
conversations
about
tooling
and
architecture
into
the
working
group
and
let
them
sort
it
out.
It's
pretty
ad-hoc
at
the
moment,
but
it's
working
well,
let's
see
how
you
can
contribute.
C
So
if
you
are
interested
in
contributing
to
kubernetes
documentation,
there
is
always
a
need
for
technical
review
to
ensure
the
concepts
and
tasks
remain
accurate
and
up-to-date
tutorials
as
well.
Any
time
that
you
want
to
offer
a
technical
review
of
documentation.
Please
be
welcome
and
the
best
way
to
do
that
is
to
open
a
pull
request.
You
can
also
take
a
look
at
open
issues
and
pick
an
issue,
that's
relevant
to
your
interests
and
abilities.
The
list
is,
if
you
visit
the
repo
the
the
issues
link
is
is
unexceptional.
C
If
you
see
something
incorrect
and
the
documentation,
please
feel
free
to
open
a
PR.
Anyone
can
open
a
PR.
You
do
not
have
to
be
a
member
of
kubernetes
org
to
open
a
pull
request
so
feel
free
to
make
that
fix
and
just
as
a
general
rule,
pull
requests
tend
to
get
more
attention
than
issues.
We
would
love
to
be
able
to
have
enough
folks
and
contributors
to
make
sure
that
issues
got
as
much
love
as
pull
requests,
but
in
terms
of
what
we
can
review
easily
and
devote
our
attention
to
pull
requests.
C: The three co-chairs of SIG Docs right now are Andrew Chen (chenopis on GitHub and on Slack), Jennifer Rondeau (bradamant3 on GitHub, jrondeau on Slack), and myself (zacharysarah). Our home page is here; you can see more in the community repository on the SIG Docs-specific page. We've got a link to our Slack channel and a link to our email list.
H: All right, there we go; try it now. Can everyone see my screen? Yeah? All right, so we'll go ahead and get started. I am Saad Ali, one of the SIG leads for SIG Storage. Today I wanted to give you an update on what we've been working on for the last quarter, what our plans are for this quarter, and how you can get involved. Last quarter there were three big projects that we worked on.
H
One
was
topology
aware
volume
scheduling,
so
the
idea
here
is
that
we
want
to
make
the
kubernetes
scheduler
smarter
and
more
aware
of
what's
going
on
with
storage,
when
it
makes
a
decision
for
where
to
schedule
your
workload
previously.
The
way
that
this
worked
with
a
some
of
the
entry
volume
plugins
was
it
was
a
hard-coded
hack
for
a
couple
of
the
cloud
providers.
So
the
thing
concepts
like
zone
and
region
were
hard-coded
into
the
scheduler
for
AWS
and
GCE.
H
It
wasn't
generally
applicable
beyond
those
two
cloud
providers,
but
the
concept
of
topology
is
more
generally
applicable.
You
know
if
you're
running
in
an
on-premise
environment,
you
could
have
something
like
racks
as
your
topology
and
there's
arbitrary
topologies
that
could
exist
in
any
type
of
environment.
What
we
wanted
to
do
was
move
towards
a
model
where
your
storage
system
could
express
what
the
accessible
topology
it
has
and
expose
that
information
to
the
scheduler
so
that
the
scheduler
can
make
intelligent
decisions.
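From the user side, that model surfaces roughly as a StorageClass like this (a sketch; the provisioner, zone label, and values are illustrative for the 1.12 era):

```sh
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware
provisioner: kubernetes.io/gce-pd
# Delay volume binding until a pod using the claim is scheduled, so the
# scheduler can pick the node and the volume's zone together:
volumeBindingMode: WaitForFirstConsumer
# Restrict provisioning to topology domains the storage system can serve:
allowedTopologies:
- matchLabelExpressions:
  - key: failure-domain.beta.kubernetes.io/zone
    values:
    - us-central1-a
    - us-central1-b
EOF
```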
H
Based
on
that,
we
started
work
on
that
in
1.10,
on
the
kubernetes
side
and
on
the
CSI
spec
side
and
in
1.12
we
ended
up
bridging
that
with
work
that
went
into
the
side,
car
containers
between
kubernetes
and
CSI,
so
that
now
CSI
drivers
have
full
capability
of
using
volume.
Topology
aware
scheduling,
we're
planning
on
moving
that
feature
to
beta
this
quarter.
Another
item
we
were
working
on
for
a
very
long
time
was
snapshot
and
restore
functionality.
H
Specifically
one
was:
do
we
really
want
this
functionality
to
be
part
of
the
kubernetes
api
and
the
question
that
we
asked
ourselves
was:
what
benefit
does
it
give
to
the
end-user?
What
we
realized
was
that
a
lot
of,
for
example,
database
administrators,
would
like
the
ability
to
be
able
to
create
a
snapshot
before
they
do
some
sort
of
destructive
operation
and
before
this
functionality
existed,
they
had
to
go
around
kubernetes
to
be
able
to
do
that.
H
So
we
realized
it
was
important,
but
the
challenge
is
that
creating
a
snapshot
is
very
much
a
declarative
operation
and
kubernetes
is
a
sorry,
is
an
imperative
operation
and
kubernetes
is
a
declarative,
API,
so
figuring
out.
How
that
would
mesh
took
us
some
time
in
1.12
we
have
an
alpha
release
that
we're
fairly
happy
with
and
seems
to
be
well-received.
Please
take
a
look
at
that.
We
posted
a
blog
on
kubernetes
I/o
detailing
how
all
of
that
works.
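The alpha API described in that blog post looks roughly like this (class and claim names are placeholders; the API group graduated beyond v1alpha1 in later releases):

```sh
kubectl apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: before-migration
spec:
  snapshotClassName: default-snapclass   # assumes a VolumeSnapshotClass exists
  source:
    kind: PersistentVolumeClaim
    name: db-data                        # the PVC to snapshot
EOF
```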
H
In
addition
to
that
in
112
we
also
did
a
lot
of
work,
preparing
CSI
to
go
to
GA
and
stable
in
q4
on.
There
are
three
big
functionalities
of
CSI.
One
is
remote,
persistent
volumes.
That's
the
primary
use
case
for
us
this
you
can
think
of
as
NFS
or
you
know,
GCE
persistent
disks
or
Amazon
EBS
volumes
volumes
that
are
independent
of
the
local
machine.
They
exist
a
remote
and
they
persist
beyond
the
life
cycle
of
any
given
pod.
That's
the
primary
use
case
for
CSI.
H
Beyond
that
use
case.
We
also
wanted
to
support
ephemeral,
local
volume.
So
if
you
look
at
the
existing
volumes
that
we
have
entry,
that
would
be
something
like
the
empty
durval
um--,
where
the
scratch
space,
basically
taking
from
the
local
host
machine,
used
temporarily
for
the
pod
and
deleted
a
long
term.
We
want
to
be
able
to
support
that
kind
of
volume
as
well
and
beyond
that.
We
also
want
to
be
able
to
support
block
volumes.
H
So
while
the
the
core
feature
of
promote
persistent
volumes
is
going
to
be
moving
to
GA
this
quarter,
what
we
wanted
to
do
last
quarter
was
make
sure
that
we
have
plans
for
these
other
features
and
are
moving
towards
getting
support
for
these
other
features
in
a
way
that
we
don't
end
up
breaking
the
core
functionality.
So
now
that
we've
gotten
to
that
point,
we're
going
to
go
ahead
and
move
that
core
to
GA
and
continue
to
work
on
these
other
features.
H
In
addition
to
si
si
moving
to
GA,
we
need
to
start
thinking
about
how
to
move
the
current
entry
volume
plugins
to
si
si.
The
large
motivation
here
is
the
cloud
provider
extraction
project.
We
want
to
get
the
cloud
provider
code
out
of
kubernetes
kubernetes
and
the
volume
plug-ins
for
cloud
providers
are
a
large
part
of
that.
The
work
here
that
we
did
last
quarter
was
to
come
up
with
a
long-term
design
for
how
we're
going
to
do
this.
The
success
metric
here
is
a
little
bit
funny.
H
Ideally
end
users
should
not
notice
that
it
happened,
but
once
it
happens,
it
would
be
silent
to
the
end
user.
They
would
continue
to
use
the
same
API
as
they're
familiar
with
kubernetes
under
the
cover
reroutes
to
see
si
so
last
quarter
we
came
up
with
a
design
for
that.
In
addition,
we're
also
coming
up
with
reusable
libraries
for
ice
cozy
and
NFS.
H
So,
if
you're
writing
si
si
drivers
that
use
any
of
those
protocols
instead
of
having
to
rewrite
in
a
complete
driver
with
mount
code,
you'd
be
able
to
start
with
the
library
and
build
only
the
pieces
that
are
custom
for
your
storage
system.
We
also
worked
on
coming
up
with
a
conformance
test
plan.
Kubernetes
is
focusing
a
lot
more
on
conformance
and
the
test
that
we
had
for
storage
were
basically
non-existent
before
one
point.
H: In 1.12 we focused on coming up with a list of tests that we wanted to add to conformance and getting approval on that. So, moving forward this quarter, the big goal is to get CSI to GA; this is the core functionality of CSI, which is remote persistent volumes. So far we're on track for that. We have some dependency on the node team with the kubelet device registration mechanism, but we're working with them to get that unblocked.
H
We're
also
continuing
to
drive
non
core
CSI
feature.
So,
as
I
mentioned,
ephemeral
volumes
is
a
feature
that
we
want
to
support
and
we
want
to
move
the
features
for
that
to
beta
this
quarter.
We're
also
going
to
be
moving
CSI
to
apology,
support
to
beta
this
quarter
and
we're
gonna
work
on
figuring
out
how
to
do
automating
automatic
installation
of
the
CR
DS
that
these
features
depend
on.
H
For
those
of
you
are
not
aware,
there's
a
big
push
to
move
away
from
adding
additional
built-in
api's
into
kubernetes,
even
for
features
that
core
kubernetes
feature.
Functionality
depends
on
we're
moving
to
a
model
where
those
api's
are
going
to
be
CR
DS.
This
introduces
some
challenges,
including
how
are
those
CR
DS,
that
cor
kubernetes
components
depend
on?
How
are
they
going
to
be
installed?
H
And
you
know
traditionally
with
a
built
in
api
types,
you
don't
have
to
worry
about
the
api
type
disappearing
and
reappearing,
whereas
with
a
CR
d,
you
know
a
user
could
accidentally
delete
a
cor
CRV
that
a
core
component
depends
on
so
there's
a
lot
of
work
beyond
cig
storage,
to
try
and
figure
out
how
that
process
should
work.
Big
cluster
lifecycle
and
cig
API
machinery
are
helping
us
with
that.
For
the
short
term,
we
have
we're
going
to
use
the
add-on
manager
for
kubernetes
as
a
workaround
and
unblock
storage
a
longer
term.
H: Longer term, we would get the code ready and begin to test, and then it's going to be a multi-quarter effort to actually get to a point where the default Kubernetes installation will automatically use CSI drivers rather than the in-tree drivers, and eventually we can deprecate the in-tree code. Of course, we can't deprecate the API, but we can have the functionality move to CSI. We're also moving in-tree block volume support to beta; this feature has languished in alpha for multiple quarters, and there's a big push to get it to beta this quarter.
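For context, raw block support surfaces to users as a volumeMode on the claim (a sketch; the size and names are illustrative):

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block        # request a raw device instead of a mounted filesystem
  resources:
    requests:
      storage: 10Gi
EOF
```

A pod then consumes it via spec.containers[].volumeDevices rather than volumeMounts.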
H
Beyond
that,
these
I
scuzzy
NFS
and
fibre
channel
libraries
I,
was
talking
about
we're
finding
new
homes
you
repos
for
those
and
continuing
to
work
on
them.
We're
also
extracting
the
mount
library
that
exists
inside
kubernetes
kubernetes
used
by
a
lot
of
the
common
volume
code
into
a
separate
repo,
so
that
CSI
drivers
can
also
leverage
that
library.
A
lot
of
that
code
is
shared
and
we're
continuing
to
drive
on
conformance
testing.
So
we
came
up
with
a
plan
last
quarter
in
this
quarter,
we're
starting
to
implement
a
lot
of
the
tests
for
conformance.
H
So
that's
a
summary
of
what
the
sig
has
been
working
on.
If
you
are
interested
in
getting
involved,
we
hold
bi-weekly
meetings.
We
had
one
this
morning
at
9:00
a.m.
we'll
have
another
one
in
two
weeks:
go
to
that
link
to
find
out
more
and
feel
free
to
add
anything
to
the
agenda.
At
any
point,
we
have
a
slack
channel
and
a
mailing
list.
H
If
you
have
any
questions,
we
tend
to
be
pretty
responsive
there
and
we're
going
to
have
a
pretty
large
presence
at
Q
consti
at'll,
there's
going
to
be
a
sig
storage
intro
that
I
believe
I'll
be
leading,
if
not
Brad
childs
from
Red
Hat
may
lead
that
it's
just
if
you're
unfamiliar
with
what
the
sig
does
we'll
introduce
that
and
how
you
can
get
involved.
There
is
going
to
be
a
container
native
storage
day
which
is
planned
for
the
day
before
the
conference,
which
will
be
a
number
of
sessions
related
to
cloud
native
storage.
H
There
is
also
a
CNC
F
storage
workgroup
that
has
been
working
on
creating
a
white
paper
defining
what
storage
looks
like
in
the
CNC
f
landscape.
They
had
an
earlier
attempt
at
this
about
a
year
ago,
which
resulted
in
a
lot
of
controversy
because
it
was
focused
on
what
is
CNC
at
what
storage
systems
are
CNC
AF.
What
storage
systems
are
not
and
kind
of
making
recommendations
which,
given
the
diversity
of
all
the
different
types
of
storage
systems
out,
there
will
ended
up
being
very
controversial
in
this
iteration.
H
They
have
an
explicit
and
non
goal
to
not
have
any
recommendations
for
storage
or
try
to
define
what
is
or
is
not
CN,
CF
or
cloud
native,
and
it's
more
about
here's,
the
the
landscape
of
what
storage
looks
like
today.
So
if
you're
interested
in
that
at
all
the
CNCs
storage
workgroup
is
going
to
be
presenting
that
at
Q
Khan,
we
also
have
a
number
of
storage
talks
from
folks
in
the
storage
sig.
Please
feel
free
to
attend
those.
That
is
all
any
questions.
I.
G: I have a question about the CRD auto-install, and about using the, I believe, deprecated add-on manager to unblock SIG Storage. My main concern would be around upgrade, what happens if it goes wrong, and downgrade of clusters that are going through that lifecycle. So I would encourage you to think through that very carefully.
H
The
functionality
that
CSI
is
depending
on
for
the
CR
DS
is
currently
alpha.
Moving
to
beta
this
quarter,
the
functionality
that
CSI
has
that
movie
2
GA
doesn't
depend
on
CR,
nice
and
the
criteria
that
we
have
to
move
the
functionality
from
beta
to
GA.
One
of
the
criteria
will
more
robust
CRV
installation
mechanism
and
ensure
that
we
handle
the
race
conditions.
A: All right, thanks, Saad. I'm interested in comments in the chat: both SIGs gave an update with this new format, with the new slides, so please let us know if you like it, plus one or minus one. I really rather enjoyed the summary that both SIGs gave there, so I think that's a plus. Okay, moving on to the announcements. Two-week warning: the Meet Our Contributors session is going to be November 7th at 2:30 p.m. and 8:00 p.m., UTC time. The 2:30 p.m. UTC session is the steering committee AMA; AMA stands for "ask me anything". You basically show up and ask the steering committee all sorts of questions. Those sessions are pretty popular on YouTube, so I recommend that you attend. The second session is with contributor mentors; you can ask them anything as well. You can always find Paris at #meet-our-contributors on Slack, and there's a link there to the YouTube playlist
A
If
you
want
to
see
any
of
the
past
sessions,
final
call
from
me
for
the
CN
CF
Awards,
the
nominations
are
still
open.
Follow
the
link
and
those
awards
are
the
ones
that
are
presented
during
cube
con
I,
the
cube,
the
kubernetes
contributor
summit
details
for
Shanghai,
the
contributor
Sokol
social
has
been
scheduled.
It's
going
to
be
from
6
to
8
November
13th
at
the
Convention
Center.
The
event
will
feature
a
panel
Chinese
contributors
for
kubernetes,
discussing
obstacles
and
opportunities,
Josh,
anything
to
add
about
cube
con
Shanghai.
D: We are basically full, up to, like, the fire code. So, for those of you that are attending: if it ends up that you can't attend for some reason, please let us know in the contributor summit Slack channel, or just reach out to myself or Paris or Bob and we'll sort it, because we have a wait list of people that really want to come, and unfortunately we're running into the physical limitations of the space. So please help us out if you can there.
A
As
always,
if
you
have
any
questions
about
that,
please
feel
free
to
reach
out
to
us
a
lot
of
shout
outs
this
week.
I'm
gonna
try
to
get
through
them
as
fast
as
I
can
so
it
Rock
would
like
to
shout
out
to
Alexis
MP
G
fee
and
mr.
Bobby
tables
for
helping
me
with
all
things
open,
cube,
Builder
workshop.
Thank
you.
So
much
iishe
we
like
to
shout
out
to
Nick
open
for
automating
the
issues
in
PR
spreadsheet,
for
bug
triage
and
see
a
signal
for
window
13.
A
Thanks
for
staying
on
top
of
this
and
accommodating
the
feature
requests.
Josh
burkas
would
like
to
thank
justice,
Santa
Barbara
for
splitting
on
our
long
time
running
upgrade
tests
so
that
they
actually
complete
Aaron
would
like
to
thank
Ben
the
elder
for
creating
a
PR
the
deletes
over
three
million
lines
of
code,
everybody
clap
touching
over
sixty
five
hundred
files.
Aaron
would
also
like
to
shout
out
to
XD
for
setting
things
up,
so
we
can
use
shorter
URLs
for
test
grade
kate,
cio
and
goober
nader.
A
Do
Noah
Abraham's
would
like
to
shout
out
to
ideal
hack
for
translating
a
huge
pile
of
slides
in
preparation
for
the
new
contributor
workshop
in
Shanghai,
Kenny
Coleman
I.
Think
I
got
that
right,
shout
out
to
a
iishe
spiff
x,
@c,
Laurence,
Guinevere
and
a
Mook
mi
hope
I
get
that
right
for
their
help
on
getting
all
the
K
sized
features,
issues
in
great
spot,
where
everything
is
not
being
tracked
to
a
PR
in
kubernetes
kubernetes
and
getting
the
freeze
over
the
finish
line.
A
At
the
same
time,
it's
no
easy
feat
and
Nikita
would
like
to
shout
out
to
Lucas
for
being
extremely
responsive
for
feature,
requests
for
dev
stats
and
implementing
them
and
fixing
bugs
really
quickly
I
echo
that
as
well
Lucas
great
job
and
lastly,
the
Stack
Overflow
top
users
for
the
month
are
rico:
praveen's,
Revati,
EJ,
EJ's,
Khan,
sorry
about
that
one
Ryan
Dawson
Samhain
one
one.
Three
eight
Vaughn
see
Michael
Hassan,
Blas,
David,
Mays,
Ignacio,
Milan
and
Constantine
boostin.
A: As always, we highly encourage contributors to check out the kubernetes tag on Stack Overflow to help users out. And with that: any other topics or issues, or do we break ten minutes early? Opening up the floor. Anything? One thing Jeffrey would like to add: the community meeting on November 22nd, I believe, falls on Thanksgiving week in the US. I know in the past we have canceled meetings for US holidays, but this year we're thinking that we don't want to be so US-centric.
A
So
we're
looking
for
you
every
month
to
figure
out
whether
we
can
have
a
meeting
during
Thanksgiving
u.s.,
so
Europeans
or
people
in
Asia
or
something
if
you
want
to
work
with
us
to
kind
of
put
together
an
agenda
to
have
a
meeting.
So
we
can
unblock
from
having
this
whole
thing
that
have
to
happen
in
the
u.s..
Please
see
us
in
sig
contributor
experience
and
we
can
work
something
out.