From YouTube: Kubernetes SIG Storage 20190425
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 25 April 2019
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.vwcmzkqqxthy
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
Chat Log: None
A: All right, today is April 25, 2019. This is the meeting of the Kubernetes Storage Special Interest Group. Today on the agenda we have a lot of items, so we'll try to move through them quickly. First up, we're going to go through the Q2 2019 planning for the 1.15 release of Kubernetes. Important dates to keep in mind: next Tuesday, April 30th, is the enhancement freeze, so please keep that in mind. We have a couple of columns in this spreadsheet that we're using to keep track of this. The enhancement issue is here: if yours is red, you need to open an enhancement issue, otherwise your feature is going to get blocked. And then the second thing is, if you have an enhancement issue, also make sure you have a KEP. If you don't, then the feature will get blocked.
B: I can go ahead. So we've made some good progress. The current things in progress are pretty much locking down the in-tree volume design, so we have some additional API reviews that might need to happen, because we realized certain fields from PVs are missing. For inline volumes, there's been a lot of discussion between him and Tan Chang around how...
B: There's also work happening around the attach limits, so a KEP is in progress there and the design is still happening. And finally, there's the test infra work that David is doing, which keeps track of metrics to make sure that the flow is going through the migrated code and not through the in-tree code when operations are executed with the flags enabled for migration.
C: I can give an update, yeah. Vlad's been making a lot of good progress on this. I believe most of the issues should have been fixed now, so he's been able to test out, I think, both drivers: the host path driver, and also Kevin had an image populator driver, and I think both of them were able to work. So the next steps he has are to work on the e2e testing and also address some of the minor review comments.
E: I've been addressing all the comments that are there on the KEP. I think I have addressed all the comments right now, so we had a call last week with Bobby. He was mostly happy with it. He wants to defer to SIG Storage for the general approach first, and then he will LGTM it. So that's it, okay.
A: Cool, all right. Thank you very much for that. That's one of the first items done. The next item is refactoring the kubelet device plugin to a reconciliation model. This PR is still in progress. I took another look at it last night; it looked pretty close to done. It needed a rebase, and there were a couple of nits and one open question I had. Other than that, it looks mostly done, so this is in good condition.
E: So Sadaq and I started a Google Doc offline to share some of the ideas. He proposed some ideas before, but I haven't had a chance to look into it last week, actually; I've just been busy with other stuff. But since it's a design-only item for 1.15, I will look into it by Monday next week, hopefully, and see which of the ideas we can use to solve the offline resizing part.
A: Well, the next item is the pluggable e2e test framework, which we already got an update on; thank you, Michelle. The next item is CSI in-tree read-only handling. This is an important item that we do not have an owner for. I know some people have been mailing on, I think, the SIG Storage mailing list asking about things that are related to this.
B: ...containers, so I spoke a bit with Peter Hornyack, who helps out with various things for GCP in SIG Windows, and he was able to show me some pointers, so with that I'm unblocked and, yes, making progress. I don't think this will be in KEP form in 1.15, so, like, you know, I'll prototype a little more to kind of...
A: Moving on from the planning session, we have a design review today. I'm gonna hand it over to Erin and her team to walk through object bucket dynamic provisioning. So, for context, this SIG has mostly been focused on block and file, and, you know, Erin and her team are pointing out that the benefits we get by using PV/PVC to dynamically provision volumes, and the portability for workloads that that provides, are pretty valuable, and they want to be able to apply those patterns to object storage.
J: So, as Saad mentioned, you know, in the past we have really been purely focused on file and block, and looking at application portability as our primary motivator: if you're using the Kubernetes Federation v2 APIs, having a construct similar to the PV/PVC annotations, or resources rather, would be useful. We talked to the SIG, probably six months ago, and to various people and vendors, about how they handled object.
J: Many of us took the route of using the Service Catalog to expose that, but from a usability standpoint alone, and an administrative standpoint, it becomes kind of a secondary set of storage that we're administering and trying to keep track of outside of our storage. And then, when it's tied to a workload, it becomes even more complicated figuring out how to move that. So what we're suggesting in this design is some new CRDs, called ObjectBucket and ObjectBucketClaim.
J: The names are intentionally familiar, so they can be used as an admin with the same look and feel. The difference with object, and probably the primary difference, is that you have to create a user within that data store, as well as the data store itself. So, you know, given the differences between binding and user creation, we created the CRDs to be separate instead of trying to shove them into the PV/PVC framework. And are you sharing anything? No, I'm not; I'm gonna let John and Jeff. Okay.
K: Hey John, go ahead. Just because I know some people on the call have heard about us using controller-runtime: we've since switched to doing our own controller. But John, you can do just a high-level overview of the Kubernetes aspects of the design, and then we've got a fast, quick demo. I know there are other things on the agenda beyond this, yeah.
L: Yeah, that sounds good. So we're trying to enable object store providers; we're trying to give them the ability to write their own provisioners, much like the SIG Storage lib-external-provisioner library does. The key component here is that they should only have to call a very simple API and define just a handful of interfaces that will be used by this library to provision and deprovision buckets, or to grant access to what we call brownfield buckets: preexisting buckets that weren't generated dynamically. So it internalizes its own controller.
L: The idea is that you should only have to define a Provision and a Delete method at a minimum. Then you call the new-provisioner function: you pass your provisioner, with these methods defined, to that function, and then call Run, and it should go from there. We've written a simple AWS S3 provisioner as a way to exemplify this. So let me go ahead and run the demo here, unless, Jeff, you'd like to add anything before I do.
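
For readers following along, here is a minimal Go sketch of the shape John describes: the vendor defines Provision and Delete, hands them to the library, and calls Run. Every name below (BucketInfo, Provisioner, NewController) is an illustrative assumption, not the library's actual API; the real interfaces are in the repo linked from the agenda.

```go
package bucketlib

import "log"

// BucketInfo stands in for the connection details the library would
// publish back to the user as a ConfigMap and a Secret.
type BucketInfo struct {
	Host, Name, Region   string
	AccessKey, SecretKey string
}

// Provisioner is the handful of methods a vendor must define.
type Provisioner interface {
	// Provision creates the bucket plus a narrowly scoped user/key.
	Provision(claimName string) (*BucketInfo, error)
	// Delete removes the bucket and cleans up generated credentials.
	Delete(claimName string) error
}

// Controller stands in for the library's internalized controller
// that watches ObjectBucketClaims.
type Controller struct {
	name string
	p    Provisioner
}

// NewController wires a vendor Provisioner into the library.
func NewController(provisionerName string, p Provisioner) *Controller {
	return &Controller{name: provisionerName, p: p}
}

// Run blocks, reconciling OBCs whose storage class names this provisioner.
func (c *Controller) Run(stopCh <-chan struct{}) {
	log.Printf("%s: watching ObjectBucketClaims", c.name)
	<-stopCh // watch/reconcile loop elided
}
```

A vendor binary would construct its implementation, call NewController with its provisioner name, and invoke Run, analogous to how volume provisioners use sig-storage-lib-external-provisioner.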
K: I would just, to anticipate, because there have been some comments made on our design about this: it seems like our focus is the control plane and not the data plane, and that's a true statement. The data plane is difficult to solve, because there's Amazon S3 as the de facto standard for buckets and the objects within the buckets, but there isn't a POSIX equivalent to that, or any IEEE-sanctioned standard.
K: So we are focused on the control plane: using familiar resources for a Kubernetes administrator, separation of concerns between developers and admins, utilizing storage classes, and so forth. These things are familiar, and John will show you those in the demo. The links in the agenda point to the library, to our design document, and to the simple S3 provisioner, so if there's interest, you can click those links and get more information.
L: Okay, thanks Jeff. So, like Jeff mentioned, we're using storage classes here; we decided not to write our own CRD for that, to aid in the familiarity of the design. Some things we've written specifically for the AWS provisioner: we have parameter lists here that specify a secret name and namespace. This is the admin secret, and when a request comes in, that is, when an OBC is created, it specifies a storage class, and the AWS provisioner will use the secret defined by that storage class to create the bucket.
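
As a rough illustration of the admin side he's describing, here is how such a StorageClass might look, expressed with the standard Go API types. The provisioner name, parameter keys, and values are assumptions for illustration, not the demo's actual manifest.

```go
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical class an admin would create for the S3 provisioner.
	sc := storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "s3-buckets"},
		Provisioner: "example.com/aws-s3", // must match the running provisioner
		Parameters: map[string]string{
			// Where the provisioner finds the admin credentials.
			"secretName":      "s3-admin-creds",
			"secretNamespace": "s3-provisioner",
			"region":          "us-west-1",
		},
	}
	fmt.Println(sc.Name, sc.Provisioner)
}
```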
L: It will also create a key, with very limited permissions scoped only to the bucket, that is provided back to the user. So in this way we're able to define, say, separate credentials for separate provisioners, or separate roles within the same user. And just to give you a look at what an OBC looks like: they're relatively simplistic. We've added the ability here to generate bucket names, the name of the actual bucket in AWS. You can also define your own name, but we thought it would be prudent to avoid name collisions.
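
To make the generated-name idea concrete, here is a sketch of what such claim types could look like in Go. The field names are guesses for illustration; the authoritative schema is in the design document linked from the agenda.

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ObjectBucketClaimSpec sketches the user-facing request.
type ObjectBucketClaimSpec struct {
	// StorageClassName selects the provisioner and its admin credentials.
	StorageClassName string `json:"storageClassName"`
	// BucketName requests an exact bucket name; the user owns collisions.
	BucketName string `json:"bucketName,omitempty"`
	// GenerateBucketName is used as a prefix for a randomized suffix,
	// the collision-avoidance behavior described above.
	GenerateBucketName string `json:"generateBucketName,omitempty"`
}

// ObjectBucketClaim is the namespaced claim, analogous to a PVC.
type ObjectBucketClaim struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec ObjectBucketClaimSpec `json:"spec,omitempty"`
}
```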
L: This gives people the ability to avoid name collisions, and then, of course, you define your storage class here as well. So in the lower section I have the running log output of the provisioner; it's running on my host machine right now, not in a pod. On the right-hand side I'm showing any existing pods, OBCs, OBs, secrets, and config maps. So what happens when a user wants to request a bucket? It's a simple kubectl create of an OBC, and the provisioner will detect it.
L: The OBC is picked up; the library will know the name of the provisioner and will do a check beforehand, and pretty quickly here you see we've already got a "successfully reconciled" log line. The provisioner has detected the OBC, validated that it is for the provisioner it is being used with, and made the call out to Provision, which was the provisioner-defined method, giving us back a config map and a secret in our namespace. And those look like this now:
L: Connection data that is commonly expected for connecting it to pods. So we have a bucket host here, we have a bucket name, the port, the region, and if I were to do a native S3 listing, you'll see that I am not faking it: this bucket was created. And then, conversely, when you want to delete, it's the same process as you would use with a PVC: so, kubectl delete obc. The provisioner detects the delete, validates that it matches its provisioner, and deletes the bucket, the user, and the key. So all the AWS resources that were generated are cleaned up after the fact, and the key can't be reused. And that essentially concludes the demo. Right now I'm happy to take questions, I think.
K: It's important to point out, just for people on the call here, that this is not an operator. This bucket provisioner is not an operator; it's a library. It's imported by each provisioner, and the provisioners have the knowledge of what it takes to create buckets in their object store.
K: The library does not. The library is the control plane that knows about secrets, config maps, CRDs, etc., and orchestrates calls to provisioners that have, like I said, the object-store-specific knowledge. So it's very similar in concept to external storage provisioning in Kubernetes, and even the separation is similar to CSI, where, you know, the details of the CSI driver are not important to Kubernetes. So we're following along with that philosophy, but applying it to buckets.
K: The idea is that you do not need to have an agent; in our case, for this S3 provisioner, you wouldn't need an S3 library on every target node that your provisioner might land on. It's self-contained within the config map and the secret that are generated. So, Michelle, you know, the pod just consumes the ConfigMap and Secret like it would any other one. You don't have to agree on the key field names, because in the pod spec, as you know, you can map them to any name.
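
A small Go sketch of that last point, using the core API types: the app picks its own env var name and maps it onto whatever key the provisioner wrote into the ConfigMap. The ConfigMap name and key below are placeholders, not the demo's actual output.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	app := corev1.Container{
		Name:  "app",
		Image: "example/app:latest",
		Env: []corev1.EnvVar{{
			// The app's preferred variable name...
			Name: "S3_ENDPOINT",
			ValueFrom: &corev1.EnvVarSource{
				// ...mapped to the key the provisioner generated.
				ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
					LocalObjectReference: corev1.LocalObjectReference{Name: "my-bucket"},
					Key:                  "BUCKET_HOST",
				},
			},
		}},
	}
	fmt.Println(app.Env[0].Name)
}
```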
L: Correct, yeah. One thing we're not trying to solve with this, and I think we need to say it explicitly, is similar to a problem the Service Catalog was not trying to solve: it's too difficult right now to differentiate between object vendors. So what we're trying to do is standardize the way in which they're provisioned, but not abstract the actual vendor from the user, because ultimately they will have to know who they're talking to.
K: However, if you have an application that works with a config map now, and then underneath the covers your storage administrator decides we're not going to use S3, we're going to use something else as our object store, and the storage class is changed to reflect that, your same app can run now with a different object store without being changed. Okay.
C: Yeah, that was kind of what I was asking: if you change out the CRDs, are these environment variables that it's outputting standardized, so that if you change out the provisioner underneath, like in a different environment, will the application still be able to run with the same environment variables?
K: Yes, except there's the possibility that the app used some, you know, S3 extension, sort of a less-used feature of the S3 API, and the new object store doesn't implement that. Well then, the answer's no, right? I mean, that's the issue with the data plane not being guaranteed. But in cases where portability matters in your app design, you use, you know, sort of a lowest common denominator of API features, right.
J: The CNCF is there to foster many projects, to provide a rich landscape of choices, right, held up to a certain standard of cloud nativeness. So even if the CNCF did accept various object projects into it, it wouldn't be naming one as the standard over the other. So I just wanna make sure that's abundantly clear.
L: Say HDFS with a different API: the library would support that too. The one caveat to that is that our environment variable names are skewed towards S3. You don't see it here, because it's omitted if it's empty, but the object bucket... not to get too into the weeds here, but...
A: I think it's a good question. I really like the direction that this is going. I think we want to get to a point where we can abstract away object storage just like we have done with block and file. That portability, of letting me write my application once and being able to run it anywhere, has been extremely valuable, so if we can replicate that for object stores, that would be awesome. Right now, with what's proposed, I think there are two aspects to it.
A: One is dynamic provisioning of an arbitrary resource, an arbitrary bucket, within a cluster, and then the second part is making your workload portable across different environments. The first part of that, dynamic provisioning, this nails pretty well: it follows the same model that we have for block and file, and it can do that now.
J: And from a Red Hat perspective, my team is actually intimately involved in exactly that. Like, we have, you know, examples where we can dynamically provision, you know, an object bucket with a storage class, move it to a different cluster, or, you know, leverage the same means for what we want to do, and we're doing it all on, you know, the Kubernetes Federation v2 APIs. So, I mean, I think that that is our motivating factor. Our motivating factor is usability, consistency, and application portability.
A: And so, ideally, what I want is that, as an application developer, I don't really have to think about whether I'm gonna run on-prem, or on AWS, or Google Cloud, or some weird funky thing; you know, what object store am I going to be using? Do I have to rewrite my application with a specific client? Like, forget that: I want to be able to write it once and have it work anywhere. If we can get to that point, this will be a huge benefit.
K: It also, I mean, to echo what Saad's saying: it's really about defining the path to that, and that probably requires some API mapper or some other technology that knows about all the object APIs out there. But so, who could we talk to to advance that? It sounds like what you're saying is the control path is reasonably good in this design, but the data path isn't there.
M: I think, honestly, everyone will have to code to it, because there's nothing that's really in place. But there are a couple of implementations, like I mentioned, jclouds, and Spring Cloud, that are out there, and there's also one in Python that's been working with service-broker-type stuff. So if we can, you know, manage to converge on one of those, or some aggregate of them, then I think it would all work relatively well. Okay.
A: So that is the intention. The problem is that there is new functionality being introduced with every new Kubernetes version, and we're trying to find a way to express that and say: hey, if you're gonna be using this version of the sidecar and you're expecting volume resizing to work, you're, you know, gonna have to use at least Kubernetes 1.14; volume resizing doesn't exist before that for CSI. And I think there's a better way we can share that information, to say, you know...
F: Like, I'm imagining, like, a year in the future, after we've gone through a few more Kubernetes versions, we want to have one version of the plugin that's compatible all the way back to 1.13. I realize going past 1.13 is not possible because of CSI, but we don't want to have a situation where we need different versions of the plugin depending on your Kubernetes version. We wanted to say...
C: So it's a little tricky, because the sidecar not only has to deal with Kubernetes compatibility, but also with CSI version compatibility at the same time. And we can't possibly test every single permutation of Kubernetes and all the features going through alpha, beta, and GA.
F: A vendor will be testing all the versions that we care to support, to be sure that everything actually works. But I just don't want to have a situation where, you know, after a few more versions in the future, we realize, oh, there's something that's broken, and now we have to ship two different things depending on what version you have. We want to be able to, you know, through some combination of disabling and enabling features, and locking down some common core that works across all the versions, just ship one thing. Yeah.
C: I think that's gonna be hard to test and enforce, but sure. So, for example, in 1.14 we moved topology to beta, and then in our external-provisioner we had the topology feature, and what I did there was have it disabled by default, so that if you are upgrading your sidecar from a, you know, 1.13 cluster and you want to upgrade to the new one...
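
As a rough sketch of that disabled-by-default pattern, here is how a sidecar could gate a feature with Kubernetes' component-base feature-gate helper; the gate name and wiring are illustrative, not the external-provisioner's actual code.

```go
package main

import (
	"fmt"

	"k8s.io/component-base/featuregate"
)

// Topology is an illustrative sidecar feature gate.
const Topology featuregate.Feature = "Topology"

func main() {
	gates := featuregate.NewFeatureGate()
	// Off by default, so the same sidecar build stays safe on a 1.13 cluster.
	_ = gates.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		Topology: {Default: false, PreRelease: featuregate.Beta},
	})
	// A deployment on a 1.14+ cluster opts in explicitly, e.g. via a
	// --feature-gates=Topology=true flag parsed into this call.
	_ = gates.Set("Topology=true")
	fmt.Println("Topology enabled:", gates.Enabled(Topology))
}
```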