From YouTube: Kubernetes SIG Storage 20200213
Description
Meeting of Kubernetes Storage Special Interest Group (SIG) - 13 February 2020
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.j1jkv5ux1k2w
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Michelle Au (Google)
Updates are that we have got the upstream changes merged for the node-driver-registrar, as well as the kubelet changes which were required to make it work. The disk API PRs have been merged, and a few more PRs are in the process of getting merged. That's the current status.

Awesome.
This is Christian. I circled back to that starting this week, and I was able to update the open PR with the API. I'm going through and updating all the references to it. There have been a couple of issues with dropping the volume lifecycle mode when doing that, but I'm working through them.
Hi. Last week I started looking into it. I talked to John Griffith to get a bit of the history. Basically, it started out as, you know, PVC and snapshot transfer, then turned into a generic object transfer, but we decided it's probably best to focus back on PVC and snapshot transfer. So I started a KEP and I started doing some prototyping. I was going to make the cut, but I just had one question from my research: thus far, it seems like this may be something that we can implement with just an external controller and some CRDs. So I was wondering if it really needs a KEP, or whether we should maybe just have this discussion in the KEP, but it seems like it. Maybe I'm missing some education or something, but it seems like something we may be able to do.
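The "external controller plus CRDs" idea mentioned above can be sketched roughly as follows. This is only an illustration of the pattern under discussion; the type names, fields, and phases below are invented for this sketch and are not from any actual KEP or API.

```go
package main

import "fmt"

// PVCTransfer is a hypothetical CRD shape for moving a PVC between
// namespaces. None of these names come from a real Kubernetes API.
type PVCTransfer struct {
	SourceNamespace string
	SourcePVC       string
	TargetNamespace string
	Phase           string // "Pending" -> "Accepted" -> "Completed"
}

// reconcile sketches one pass of an external controller: once the target
// side has accepted the transfer, the controller would rebind the PV to a
// new PVC in the target namespace and mark the transfer complete.
func reconcile(t *PVCTransfer, targetAccepted bool) {
	switch t.Phase {
	case "Pending":
		if targetAccepted {
			t.Phase = "Accepted"
		}
	case "Accepted":
		// A real controller would patch the PV's claimRef here.
		t.Phase = "Completed"
	}
}

func main() {
	tr := &PVCTransfer{SourceNamespace: "team-a", SourcePVC: "data",
		TargetNamespace: "team-b", Phase: "Pending"}
	reconcile(tr, true)
	reconcile(tr, true)
	fmt.Println(tr.Phase)
}
```

The point of the sketch is that nothing here requires changes to core Kubernetes: the state machine lives entirely in CRD status fields driven by an out-of-tree controller, which is why the question of whether a KEP is needed comes up.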
Next up, we have the CSI generic NFS and iSCSI drivers. I had sent an email out to the list asking for help on these; otherwise we would go ahead and deprecate the repos. I've been able to find a couple of volunteers to help work on both the NFS and iSCSI drivers, so I think, in terms of deprecation, we can hold off on those two drivers for now and continue to work on them.
There have been PRs out to help update the repos and get the CI infrastructure and such set up, so that's making progress. As for the Fibre Channel and flexadapter drivers, I did not get as many responses there. I got one person for Fibre Channel, but I would really like to have at least two people to continue maintaining the repo, and there were no responses for flexadapter. So I think those two are still currently going to be deprecated.
Next up is the deprecation of the kubernetes-incubator external-storage repo. This is where a lot of the external provisioners have been living for a while. I haven't had a chance to sync up with Matt on this, but so far I have not heard anything arguing against deprecating the repo. So as far as I know, we're still on track to archive it soon.
Okay, and the other thing is that each of the in-tree cloud providers is continuing to work on improving their CSI migration implementations. Last quarter, GCE and AWS went beta, and I'm not sure if any of the other cloud providers are going to target beta this release. I have seen some activity from OpenStack in terms of enabling the testing, and also I think some activity around Azure, at least working towards the Windows implementations, so that they can support both Windows and Linux migration.
Michelle, we are still working on that. We've started collaborating with Saad, Andrew, and Joe Chen to implement a more CSI-like design, and as such the KEP is now kind of stale. We're working on an offline draft of it with them, and once we've got the entire thing rewritten, we're going to push it up as one large PR rather than piecemeal, in a bunch of small iterative changes.

Okay.
So I do have a project from about a year ago; if you're interested, I can show you what that is. It's based on an older version of that KEP, but I can show you how it works. Basically, you run the hooks manually: you create a hook to freeze, then you take the snapshots, and then you create another hook to unfreeze.
So the whole idea is that CSI drivers in general are containerized, and they need to perform privileged operations from within the containers, but the Windows world still doesn't support privileged containers yet. Because of that, we need some other way to solve this issue, wherein CSI drivers can still perform the privileged operations while remaining inside a container. That is where the CSI proxy project comes into the picture: there will be this proxy process just sitting outside, on the node.
The drivers would be able to contact this proxy process, run the elevated commands via the proxy process, and perform the regular CSI operations. Some examples are operations like partitioning the disk, formatting it, and so on; those are the operations which happen in this model. So the work started with a KEP.
I put a link to the KEP here in the presentation. We have a GitHub repo where most of the activity is happening, and there's a good talk from KubeCon, for which I put the link here as well, from Jean and Deep, going into a lot of the details and context of how this came about. Let's move on to the general architecture.
So here, as you can see, the CSI proxy is going to be a process running on the node, just like the kubelet, and it gets commands over a Unix socket on Windows from the storage-vendor plugins. The storage-vendor plugins get contacted by the kubelet, as well as by the CSI node-driver-registrar, like normal, just as on Linux. The same things happen, but when the node plugin is about to do one of these privileged operations, it proxies it off to the CSI proxy process, which performs it.
The following are the API groups that we implement in the CSI proxy. There are four main groups as of now: Disk, Volume, SMB, and Filesystem. Operations like partitioning the disk, rescanning the host storage, formatting, creating a link between two paths, and so on are initiated from the plugin; it ships these calls to the CSI proxy, and the proxy performs them on its behalf. There are operations for SMB as well.
This is for cases where we have file-based storage drivers which need to mount SMB storage from a remote service; in that case, these SMB calls would be made available too. So let's move on to a sample flow. Here we are just taking two phases as an example, but pretty much all the phases which require node-related operations would involve the CSI proxy. For example, in NodeStageVolume:
Typically, operations like updating the host storage cache, listing the disks, partitioning them, bringing up the volumes, and formatting them all happen as normal, driven by the node driver plugin, but these operations get shipped to the CSI proxy, and the proxy performs them on its behalf. Similarly, in NodePublishVolume:
The global mount is connected to a specific local directory, which will be used by the pod to access the contents. That call is also shipped to the CSI proxy, and it is performed by creating a soft link on the Windows side of things.
Now we have the demo, but before that: does anyone have questions about what we went through just now?
A question: is this a short-term plan, where the plan is that eventually there will be privileged containers and this will go away? Or is this actually something that we like and would want to preserve, even if Windows eventually has privileged containers someday?
It is definitely the case that if Windows comes up with privileged containers, we could easily switch to the privileged-container form. But as of now, the plans are not very clear; it's not yet laid out what the timelines are, so for the foreseeable future I think we'll have to use the CSI proxy approach.
The reason I'm asking is because this design could be applied in reverse, back to Linux, and it would solve a number of problems. So if we like this model, I wonder if we should investigate something like a Linux storage proxy of some form that would provide the same facilities on behalf of drivers.
There's just a lot of things that the node plugins need superpowers for, and if you could take the superpower out of the node plugin and put it into a centralized, community-reviewed, community-owned thing, so that the node plugins themselves didn't need to be privileged, that would be a step forward security-wise; it would probably reduce some bugs and maybe improve compatibility across distributions.
One thought about that is the distribution of the process itself. Right now, when we have it in container form, it's a much more straightforward thing, but when you have this additional process, the corresponding distros or providers need to do the deployment in a different form. So that was one thing which came to mind quickly when you mentioned it, but I think it needs a wider discussion.

Sure.
Okay, let me switch to a separate player just to make sure the font size is good. So basically, this is a demo in which this driver is operating with a deployment. We have a PVC and a PV which have been previously created, and we have a deployment where we have specifically made sure that it goes to the Windows nodes only; that is, node selectors have been set so that it runs only on Windows.
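The scheduling constraint used in the demo relies on the well-known `kubernetes.io/os` node label; a node selector is a simple equality match over node labels. The sketch below shows that matching rule directly in Go rather than as a manifest; the node label maps are made up for the example, though `kubernetes.io/os: windows` is the real label value used to steer pods onto Windows nodes.

```go
package main

import "fmt"

// matchesSelector applies the nodeSelector rule: every key/value pair in
// the selector must be present with the same value in the node's labels.
func matchesSelector(nodeLabels, selector map[string]string) bool {
	for k, v := range selector {
		if nodeLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	selector := map[string]string{"kubernetes.io/os": "windows"}
	windowsNode := map[string]string{"kubernetes.io/os": "windows"}
	linuxNode := map[string]string{"kubernetes.io/os": "linux"}
	fmt.Println(matchesSelector(windowsNode, selector), matchesSelector(linuxNode, selector))
}
```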
Some of the transitions, for example from the ContainerCreating state to the Running state, go very quickly in this video, but that's not the real case; it just takes some time. So, to ensure that we can finish the demo quickly, I have edited out some of the waiting time, but effectively what happened is that we applied the deployment and the pod went from ContainerCreating to Running.
So, a little bit of current status and future plans. We have an end-to-end setup with Azure Disk working right now via a work-in-progress PR. There were changes required in the kubelet CSI utility portions, as well as changes required for the node-driver-registrar; those are currently merged. The PRs on CSI proxy are in process: some of them have merged, some of them are under review. A few discussions are still going on around the filesystem API.
Specifically, for when directories and files are created, we wanted to restrict the paths where these creations can happen; there are some discussions around that which we are having. But the whole idea is that we currently have the Azure Disk work-in-progress PR, and once the CSI proxy PRs all get merged, it's a straightforward thing to get this merged, because it's already working, and there's some more testing around it which we are doing. We have also started working on GCE PD.
The idea is that we will have an alpha release of CSI proxy, with Azure Disk at least working fine, in the 1.18 timeframe. As for where to go from here: we have the GitHub project, and we have the Slack channel where people can come and interact more. We have weekly meetings every Friday at 3 p.m., and the meeting notes can be found here. We're always looking for contributors.
The way in which we are going to work, Michelle, is that we are going to start working on Azure File as soon as the SMB piece gets merged. If it gets merged, we'll work it out to see if we can do it in 1.18, but it's not very clear that everything will work end-to-end, because currently we are just focusing on the disk. But we do have an intent to try and finish it off by 1.18.
All right, any other questions? Going once, twice, sold. All right, thank you so much. I think this is an extremely valuable project for the large portion of the world that still uses Windows, so I'm looking forward to this landing in alpha. Thank you.

Next item is an update on SIG governance. As many of you know, my co-lead Brad Childs passed away late last year, and since then it's kind of been me running the group. That is not sustainable, for obvious reasons.
So a few weeks ago, I had put out a call asking for volunteers for SIG leadership, and today I wanted to make some announcements. Specifically, instead of having two chairs running the group, we're going to follow the model of other SIGs, which have broken the role into two separate roles.
We have two volunteers, Jan and Michelle, who pretty much have already been acting in this role, so I'm very happy to announce that they're going to be our tech leads for SIG Storage. And for the SIG Storage chair, Xing volunteered, and I think she would be an excellent person to help run this group and keep us all on task. So please join me in thanking and congratulating them on their new roles.
Next steps: I am going to put together PRs in the community repos to update the leadership officially, and I'm going to send out messages to SIG Architecture and the steering committee to give them a heads-up that this is happening. Then I will be working with Jan, Michelle, and Xing to onboard them. If you have any questions, concerns, or objections, feel free to voice them now, or, if you're not comfortable, always feel free to email me or any of the new leadership members, and we'd be happy to talk through it.
Hey, I'm Jerry. This is concerning AuriStor, which is an AFS (Andrew File System)-like file system. Later this month, on the 24th and 25th, is the USENIX Vault conference on Linux storage and file systems. There's a track on Kubernetes; there are three talks: one that I'm giving on Kubernetes storage, one on object storage, and one on Lustre. There will also be David Howells from Red Hat, who is a Linux kernel storage and security developer.
He's done a lot of very important, interesting Linux core work on file systems, like FS-Cache. My company is sponsoring a hackathon, fully sponsored and open, which is not part of the conference itself; the work he did also includes the kernel AFS driver. David Howells will be there if anybody needs to, or would want to, talk to him about what he's doing with respect to containers and storage.
Oh, thank you. And the last item is KubeCon EU, coming up at the end of March, beginning of April. There will be a SIG Storage intro, a number of talks, and a Data Protection Working Group session. If you are aware of anything else that's going to be happening, or you want to organize something, feel free to add it to this agenda and you can talk about it.