From YouTube: Kubernetes SIG Storage 20170427
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 27 April 2017
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.e7i2dcbdyx9l
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
Chat Log:
09:15:55 From npramod to SIG Storage (Privately) : The priorities for Dynamic update and data replication need to be reversed
09:25:19 From mohamed : this is when the flexvolume implements GetVolumeName
09:33:00 From mohamed : Shay wrote just one method until now
A: So, good morning. This is the bi-weekly meeting of the Kubernetes Storage Special Interest Group; today is April 27, 2017. As a reminder, this meeting is public and recorded. Today on the agenda, we're going to go over status updates of what folks are working on, then Luis Pabón has a demo of Quartermaster, and it doesn't look like there is too much else on the agenda. If you have anything that you want to add, please feel free to add it to the agenda.
A: So, Matt De Lio, our Storage SIG PM, wasn't able to be here today, but he wanted to remind us that the deadline for having features in the features repository for 1.7 is coming up. It was supposed to be yesterday, but that deadline has been moved to Monday, May 5th, so we still have an opportunity to add items.
A
We
have
apparently
new
instructions
from
the
PM
group
that
features
that
go
into
the
futures.
Repository
are
not
strictly
new
features.
They
can
also
be
user
facing
bugs
anything
that
impacts.
The
user
should
be
tracked
as
a
feature
request
in
the
future
Rico.
So
for
some
of
these
issues
we
should
consider
opening
opening
a
feature
in
the
future
repo
for
this
AWS
EBS
support.
Since
we're
creating
testing
I.
Don't
we
need
a
feature
bug.
A: Next up is out-of-tree volume plugins. I've been working on this with James from Mesos, and we are planning on having another meeting of the cluster orchestrators next week, and after that we're hoping to open it up again, basically the second draft, to a wider community for input, so keep an eye out for that.
E: Yeah, so I have a few PRs out already, and a number of people have taken a look and given some comments, so I'm working on applying those changes. The major things I have out right now are the API changes and then the plugin changes; once I get those in, I can start working on some of the basic scheduler changes and the provisioner changes as well.

A: Cool.
F: Well, I thought I'd be done. I've got one PR out there that is related to the NFS wedge issue. I am trying to write some e2e tests to replicate the environment and show that kubelet doesn't get wedged and new pods can still be created, but that PR is kind of stalled; I'm not getting reviews on it. I'm also very flexible about adding more test cases, you know, additional test cases. But anyway, any kind of attention to get that PR moving forward would help for the NFS wedge issue, and that's one.
A
On
it
feel
free
to
add
the
PR
to
the
agenda
as
something
that
needs
attention
engage
someone
to
take
a
look
at
it.
Something
else
that
I
wanted
to
point
out
was
that
Eric
beta
has
been
sending
out
messages
to
the
storage
sake
about
flaky
tests,
and
we
need
to
make
sure
that
we're
paying
attention
to
these
and
we
have
folks
taking
a
look
at
them.
I
think
a
few
people
Michele
especially
has
been
helping
out
a
lot.
If
we
could
get
more
people
helping
out
with
these,
it
would
be
great
yeah.
A: I think Matt picked up one that was huge; I've got to keep an eye on it. I think Erick is sending these out regularly now. He's pinging the SIGs directly and saying these are the issues that are popping up and blocking or failing frequently, and instead of maybe focusing on new tests, we should try to double down on fixing these existing flakes if we can.
C: So we just had a meeting this Tuesday about snapshots, and we are trying to kind of make a decision and finalize the alpha version of the API design for snapshots. I think Tomas just updated the document, so I will review it and also put something into the document, and I think by the end of the week we should have a clear idea about this API.
A: Okay, so we'll take a look at that, and hopefully this feature, or this bug fix, is close to getting resolved. There was another PR that I was reviewing, I believe related to this: somebody was basically adding logic so that when a controller restarts and a node is discovered for the first time, it goes out and fetches all the pods for that node, and does that for every single node on discovery, and I was wondering if that was required.
A: Okay, improving containerization of mounts. This is something that we are beginning to realize is much, much more important for this particular release than we initially thought. We really don't want to get into the business of having everybody, you know, continue to bake the bits for each volume plugin into the image.
A
Especially
I
think
for
a
lot
of
OS
images,
it's
kind
of
painful
to
make
sure
that
all
the
all
the
bids
that
are
required
for
any
particular
stories
provider.
All
the
entries
are
present,
so
we're
I'm
glad
Yan
is
working
on
this
yan.
Do
you
think
this
is
still
you're
still
going
to
be
able
to
code
some
sort
of
solution
for
this
41.7.
A: Here's what I was thinking: maybe we can punt the work that you have to 1.8, so you could focus on design in 1.7 and then implementation of alpha in 1.8. And Jing had an interesting proposal for a stopgap. So your solution was supposed to be a stopgap until we have CSI, and in between your solution and CSI, Jing is proposing a hack on top of her current solution, which would allow people to basically provide a container and then have her code extract that container into the chroot that she already can create.
A: What would be blocking in Red Hat? Chroot? Okay, you can't set up a chroot, okay. We can follow up offline and see if anyone has anything. Okay, anything against improving this solution? If not, Jing will work on it for this quarter.
Q: I think we had to solve a similar problem at CoreOS in order to ship runtime tools for rkt, okay? So maybe we can help comment on that proposal there.
A: Thanks a lot for working on this, Jing. Yeah, the next item is fixing broken volume reconstruction. This was something that cropped up recently: in 1.6, there was a big refactor of the Flex volume plugin shipped, and folks started noticing an issue, in particular with volumes or plugins that implement the attach interface. After a volume is attached, after a volume is mounted, within about five minutes the volume becomes unmounted. So this is a major issue.
A
And
so
the
fix
for
this
is
adding
a
basically
saving
more
information,
along
with
with
the
mount
adding
in
a
metadata
file
next
to
the
mount
directory,
so
that
when
reconstruction
logic
comes
a
it's
going
to
read
the
metadata
and
be
able
to
recover
the
full
volume
spec,
and
then
it
can
do
the
right
thing
and
will
leave
the
existing
logic
as
therefore
backwards
compatibility
and
as
a
fallback
and
potentially
disable
that
existing
logic
for
certain
volume.
Plugins
like
Flex,
where
it
could
misbehave
Chakri
mentioned.
There's
a
couple.
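The fix described here, persisting a small metadata file next to the mount directory so that reconstruction can recover the full volume spec after a kubelet restart, might be sketched roughly as below. This is a minimal illustration only: the file name `vol_data.json` and the spec fields are assumptions for the sketch, not the actual kubelet implementation.

```python
import json
import os
import tempfile

# Hypothetical subset of what the kubelet would need to rebuild a volume
# spec after a restart; field names here are illustrative.

def save_volume_metadata(mount_dir, spec):
    """Write the volume spec as JSON next to the mount directory at mount time."""
    path = os.path.join(os.path.dirname(mount_dir), "vol_data.json")
    with open(path, "w") as f:
        json.dump(spec, f)

def reconstruct_volume(mount_dir):
    """On restart, read the saved spec back instead of guessing it
    from the mount path alone (the part that breaks for Flex)."""
    path = os.path.join(os.path.dirname(mount_dir), "vol_data.json")
    with open(path) as f:
        return json.load(f)

if __name__ == "__main__":
    base = tempfile.mkdtemp()
    mount_dir = os.path.join(base, "mount")
    os.makedirs(mount_dir)
    spec = {"pluginName": "flexvolume", "volumeName": "vol-1", "readOnly": False}
    save_volume_metadata(mount_dir, spec)
    print(reconstruct_volume(mount_dir) == spec)  # → True
```

The point of the design is that the metadata round-trips losslessly, so the fallback path-parsing logic only has to run for mounts created before the fix.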
L: So I was looking at it, and right now only very few plugins implement that attach interface; the ones which implement that interface are fine, but the basic reconstruction logic is broken in all the plugins which don't implement the attach interface, which is almost all of them, including Gluster. So if the plugins are going to support an attach interface going forward, we need to fix this, and for the Flex volume in the current state.
A: It looks like the next steps are: there's going to be a temporary fix we're going to do for Flex, and maybe a couple of other volume plugins, patched in 1.6, a fix exclusively for 1.6, and that will go out with the next 1.6 patch release; and then for 1.7 we're going to work on an overhaul of this and add in the metadata for reconstruction. Okay, yep, cool. And just to put everything in perspective, we are at the end of April right now.
A: [Inaudible] yep, yep, cool. Sorry, thanks, Jerry! Okay, thank you. Next up: the StorageOS volume plugin. Simon, do you want to give an update on this? It looks like you added this item here? Yes.
S: [Inaudible] hidden here; sorry, I was not at the beginning of the call. I just want to give an update. So, the NFS wedge issue too: I've got to work on it, but it'll be done in time, hopefully for 1.7. And the cloud provider storage metrics: both AWS and GCE; GCE was already merged, and, as you know, Bowei has opened up a couple of PRs to fix some of the metrics, and the AWS one will appear as well. So it should be done within, like, the next week.

A: Awesome.
D: So, this project that we started, Quartermaster, is based on the operator pattern from CoreOS, and it has a set of operators that wait for events; on a certain event, it acts on it and calls the appropriate implementation for the storage system that wants to be deployed onto Kubernetes. So, unlike some other operators that are specific to a particular application, Quartermaster is more of a framework that allows any type of storage system to be deployed.
D: As long as it has driver support, okay? So that's just the big overview. My goal here in proposing it to the SIG is to see if we can expand the number of drivers that Quartermaster has, and to build a better relationship between Kubernetes' ability to deploy storage easily onto Kubernetes and then to use and consume it.
D: We've built with this model for the past twenty or thirty-something years, where we have tape or disks or something attached over an IO transport to compute; and for the last, you know, many years it was mainframes and operating systems and then virtual machines. I just want to give you a heads up that this presentation was the presentation I gave at Vault, so I'll be skipping through some of these items. This slide: one of the things is that we have this one view that we all know and love as storage people.
D: One of the things is that we are looking to change it a little bit. Now that we have Kubernetes, we have a few more features, and Kubernetes is really the operating system of our data center. So, instead of viewing storage as a separate entity, separate from Kubernetes itself, we can view it as part of the entire Kubernetes cluster. Okay: we have nodes with disks, we have nodes without disks, and we could have Kubernetes manage not just the storage systems but many of the cluster types; we can have clusters with GlusterFS.
D: It could be that, as we move forward, we could use the features of persistent local storage to be able to assign disks to these storage systems. For today, we create this storage cluster third-party resource and submit it to Kubernetes; that creates an event, and Quartermaster reads the storage cluster object. One cluster has many nodes; that's what the definition of the cluster is. So it reads the number of nodes inside the cluster and it creates a storage node third-party resource for each one of those.
D: No, no. So there's two ways to use Quartermaster today. One is that you put your driver inside Quartermaster, right, and then the Quartermaster container that gets deployed has all those drivers inside of it, just like the GlusterFS driver, for example. But you don't have to do it that way: you could create your own driver and just pull Quartermaster as a library into yours, and just run Quartermaster in your own container, right?
D
You
still
benefit
from
the
same
API
and
the
same
same
third-party
resources
and
such
so.
You
don't
have
to
be
part
of
the
quartermaster
container.
You
could.
You
could
have
your
own,
what
you
you
can
just
pull
the
quartermaster
as
a
library
into
yours.
Ok,
so
so,
then
quantum
master
picks
those
nodes
and
submits
them
back
into
quarantine,
kubernetes,
which
then
creates
another
event
for
it
to
look
at
each
one
of
those
notes
that
were
submitted
and
then
initialize
those
nodes
with
the
appropriate
message
that
are
needed
for
that
popular
driver.
Okay.
D: The very first thing it's going to do is check the number of nodes, and then what we're going to do here is check whether Quartermaster is running in the system. So we're actually going to deploy it and then we're going to check that it's running. So it's that simple to deploy it.
D: So, let's go here; let's make this a little faster. So there it is, running, and then what we're going to do next is look at a storage cluster third-party resource. This third-party resource right here defines three nodes used to deploy GlusterFS, and, as you can see here, the spec has a type; the type is used by Quartermaster to determine which driver to use, and it has a set of devices in it.
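The StorageCluster object being described, a `type` that selects the driver plus a list of nodes with devices, can be pictured roughly like this. The `apiVersion`, field names, and sample values below are guesses reconstructed from the talk, not the actual Quartermaster schema; the sketch just shows Quartermaster's fan-out of one cluster object into per-node objects.

```python
import json

# Illustrative shape of the storage cluster third-party resource from the
# demo; all names here are assumptions for the sketch.
SAMPLE = """
{
  "kind": "StorageCluster",
  "spec": {
    "type": "glusterfs",
    "nodes": [
      {"name": "node-1", "devices": ["/dev/sdb", "/dev/sdc"]},
      {"name": "node-2", "devices": ["/dev/sdb"]},
      {"name": "node-3", "devices": ["/dev/sdb"]}
    ],
    "glusterfs": {"cluster": "demo"}
  }
}
"""

# The "type" field selects which driver implementation gets called.
DRIVERS = {"glusterfs": "deploy Heketi, then GlusterFS across the nodes"}

def pick_driver(cluster):
    return DRIVERS[cluster["spec"]["type"]]

def storage_nodes(cluster):
    """Quartermaster reads the cluster object and emits one storage-node
    object per node defined in the spec."""
    return [{"kind": "StorageNode", "name": n["name"], "devices": n["devices"]}
            for n in cluster["spec"]["nodes"]]

if __name__ == "__main__":
    cluster = json.loads(SAMPLE)
    print(pick_driver(cluster))
    print(len(storage_nodes(cluster)))  # → 3, one StorageNode per node
```

The per-driver section (`glusterfs` here) is the free-form block the speaker mentions next, which each driver can interpret however it needs.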
D: Then it has a section here which the driver definitions can use to specify information as needed specifically for that cluster. For example, if one driver wants something specific, or GlusterFS wants something specific, you can pass that information right here, and the driver will be able to absorb that and use that information accordingly. So here we have that first cluster; what's going to happen here is that we're just going to deploy it. When we deploy it, we can see from the top; we have a watch on the top.
D: The very first thing that Quartermaster, that the driver, actually does is deploy Heketi, which is a volume manager for GlusterFS. Once that is ready, it goes ahead and deploys GlusterFS across the nodes that were defined in the storage cluster third-party resource. So, as you can see here, here's GlusterFS coming up now, and soon it will be fully up and ready to run.
D: Let's say, for example, as an administrator I want a 10-node GlusterFS cluster; I could just execute that and run that on my cluster.

G: I see, so you could have, it's kind of hyper-converged, essentially?

D: Yeah, in a way it's called hyper-converged. I mean, you could have apps running on the same nodes that are running the storage backend. You may or may not want to; it depends on the CPU availability, the memory availability.
D: Yeah, there's nothing stopping you from doing that; it definitely can be done. Oh, thank you. So, as we continue here, the entire cluster is now ready, and, as you can see here, we have storage cluster information we can get back. All the information about the cluster is saved in Kubernetes itself, and that's really key, because it's not just about deployment; it's also about management of your storage cluster, where now you can get status. For example:
D: If you have a UI that manages your entire Kubernetes cluster and you want a section in it to view your storage clusters, you can now just execute the commands, which are just regular Kubernetes API calls. Not only that, but by supporting update, for example, if you wanted to add more disks or wanted to add more nodes, you just change the storage cluster third-party resource, and then Quartermaster will get an event and act upon that, though we're going to need a driver for it.
D: That is how most of the API controllers work in Kubernetes, not just operators. If you look at the code, for example, they get an event and it goes into, like, a channel or a queue, where it gets reconciled, so it gets worked on. The operator pattern is just that same logic, but outside the API server. Yep.
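The event-queue-reconcile pattern just described can be sketched in a few lines. This is a deliberately simplified, in-memory illustration: a real controller or operator watches the API server and uses a rate-limited work queue, but the control-loop shape is the same.

```python
import queue

def reconcile(key, desired, observed):
    """Bring the observed state for one object key in line with desired state."""
    if key in desired:
        observed[key] = desired[key]   # create or update
    else:
        observed.pop(key, None)        # object deleted: clean up

def run(events, desired, observed):
    """Events only carry the object key; the reconcile step re-reads desired
    state, which is what makes the pattern level-triggered and restart-safe."""
    q = queue.Queue()
    for key in events:
        q.put(key)
    while not q.empty():
        reconcile(q.get(), desired, observed)

if __name__ == "__main__":
    desired = {"storagecluster/demo": {"nodes": 3}}
    observed = {}
    run(["storagecluster/demo"], desired, observed)
    print(observed)  # → {'storagecluster/demo': {'nodes': 3}}
```

Note that duplicate events are harmless here: reconciling twice converges to the same state, which is why controllers key the queue by object rather than by event.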
D: So here, now that we have the storage cluster, we have a single storage cluster, and we have the storage nodes that were created automatically, and now what we're going to do is a demo. The demo is going to be a simple demo of an nginx container running with three busybox containers, all reading and writing to the same GlusterFS volume.
D: Okay, so here we have a persistent volume claim; we're going to use dynamic provisioning to create a volume from GlusterFS, with a default of 34 gigs. Why not 32? It might have to be a little bit different. And then, after that, we have a service just for nginx.
D: We have the pod for nginx that's going to mount that persistent volume claim on the HTML directory, and then we have the replication controller that's going to do a replica count of three of busybox, and all it's going to do is just output the date and the name of the host onto a file called index.html, which is going to get picked up by nginx. So here we have that; we're going to create this. There it goes; it creates. It sets up the containers to run; it also sets up the dynamic provisioning from GlusterFS; it creates the volume. Now we can go ahead and look at the application. We're going to look at the NodePort here to be able to access it, to be able to talk to the nginx application. So here there's NodePort 32003.
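What each busybox replica is doing in the demo, appending the date and its hostname to the shared index.html, is roughly equivalent to the following. In the demo it would be a shell loop inside busybox writing to the GlusterFS mount; this sketch writes to a local temp file instead, purely to show the shared-file behavior.

```python
import datetime
import os
import socket
import tempfile

def append_stamp(path):
    """Append the current date and this host's name to the shared file,
    like each busybox replica does against the mounted volume."""
    with open(path, "a") as f:
        f.write(f"{datetime.datetime.now().isoformat()} {socket.gethostname()}\n")

if __name__ == "__main__":
    path = os.path.join(tempfile.mkdtemp(), "index.html")
    for _ in range(3):      # stand-in for the three replicas each writing once
        append_stamp(path)
    with open(path) as f:
        print(len(f.readlines()))  # → 3
```

Because all three writers append to the same backing file, nginx serving that file shows interleaved lines from every replica, which is exactly what the demo displays next.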
D: So now we can see in it that we are getting information: here is one of the busyboxes writing to it, here is another one, and another one, all writing to the same GlusterFS volume. And the cool thing about this is that it all was deployed using Kubernetes constructs, right? Everything was deployed using a third-party resource, we got status using a third-party resource, we then deployed using dynamic provisioning, and then we consumed that storage in the end. Okay, so let me lastly bring up the GitHub page; let me fix my screen again.
D: So we need a champion, and then, in the long run, we would like it to be in incubator, so that it can be an integral part of Kubernetes deployment, so that we can bring storage closer to users, to be able to easily deploy storage onto Kubernetes systems. And then the key part two is, so, there's a question I had one time when I was talking to the Rook
D: guys. You know, Rook.io is starting to work on their own operator, and they asked why we need Quartermaster if they have their own, and I explained that it's really about standardization of the API, because if not, everybody's going to have their own model of doing things. At the same time, Rook has had to learn how to do an operator and create events and watches and so on.
A: I've got a couple of questions: one is regarding implementation, and the second question is kind of about general direction. So the first one is: how are you getting access to the storage underneath? Is it just [inaudible]? So when you deploy a storage system like a GlusterFS container, how does it get access to the underlying storage on the node machine?
D: Great question. So, doing my research for my presentation at Vault, I talked to many of the vendors that deploy storage onto Kubernetes, and every single one of them, without fail, deploys in privileged mode, and they do this so that they can get access to /dev. So /dev is pretty much passed on to the container, and they're saying: here you go, you're on your own, in control. So in the GlusterFS example, right there, we're passing in the /dev devices; the storage cluster definition in the third-party resource has a devices array.
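The privileged-mode-plus-/dev approach being described corresponds to a pod spec with `securityContext.privileged: true` and a `hostPath` volume for `/dev`. Since the examples in this writeup are in Python, here is a sketch that builds such a manifest as a dict; the `securityContext`, `hostPath`, and `volumeMounts` fields are real Kubernetes pod API fields, but the image name and overall manifest are illustrative, not the exact manifests those vendors ship.

```python
import json

def storage_pod_manifest(image):
    """Sketch of a pod that runs a storage server in privileged mode with the
    node's /dev bind-mounted in, as the vendors described above do."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "storage-server"},
        "spec": {
            "containers": [{
                "name": "server",
                "image": image,
                # Privileged so the storage daemon can manage block devices.
                "securityContext": {"privileged": True},
                "volumeMounts": [{"name": "dev", "mountPath": "/dev"}],
            }],
            # hostPath hands the node's /dev tree to the container.
            "volumes": [{"name": "dev", "hostPath": {"path": "/dev"}}],
        },
    }

if __name__ == "__main__":
    manifest = storage_pod_manifest("gluster/gluster-centos")
    print(json.dumps(manifest["spec"]["containers"][0]["securityContext"]))
```

This is also why the question matters: privileged mode sidesteps the scheduler's knowledge of devices, which is the gap the raw-device and resource-management discussions later in the meeting touch on.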
D: I agree, and I've wondered about it myself; this is just a great segue, thank you. The two environments are a little bit different. This is mainly for on-premise, because you're looking to provide storage back to applications; but in cloud environments, where you have EBS or Google's block storage or other things like that, you may or may not want to use Quartermaster; there may not be a need, for example.
D
If
if
you
want
to
create
a
file
system,
plus
FS
is
great
for
it,
but
I
you
may
not
want
to
create,
for
example,
a
block
storage
system
in
the
cloud,
but
you're
right.
There
may
be
an
issue,
for
example,
to
deploy
in
in
Amazon
container
servers
or
in
Google
container
servers
to
be
able
to
force
it
to
do
that.
Right
now,
there's
I
haven't
seen
one
store
system
that
does
not
use
privileged
mode
and.
A: Okay, so my second question was around where this would fit in, and I'm thinking this is a mechanism by which to deploy an application onto Kubernetes. In this case, the application happens to be very special-cased: it's a storage system. But I wonder, and this is going back to something that Aaron mentioned earlier, whether there is some existing project that this could be kind of rolled into, maybe Helm, maybe something else, where Quartermaster can be added in, in addition to supporting the regular set of applications that they support.
G: Or ask the Service Catalog SIG. I'd like to see, I'm kind of working with Luis on ways of making storage more catalog-like. I mean, my basis is, of course, around CNS and Gluster, but you know, I think we're on the same page. I think I would like to see it not only be restricted to on-prem; you know, I think this has to be solved for the cloud as well. So, I mean.
Q: There's also the resource management working group, which is a combination of SIG Node and other interested parties, for scheduling more workloads where you have more hardware locality requirements, so multiple-GPU support, where you need access to raw devices in /dev. So, like, a special case of a block device versus, like, a GPU. Okay.
D: So, good question. Right now, nothing, but the GlusterFS driver is going to support the ability, when running GlusterFS, to add more nodes to a cluster, and to remove a node when, you know, something happens to it, or replace it, or add more disks to those nodes; so it needs to be GlusterFS. For NFS, right now it's just NFS-Ganesha, and adding another node would just be another Ganesha server. I've seen it, but it's more of a demo for NFS; we'd need a real one. There is a GlusterFS implementation, so.
A: I kind of see these as two separate parts of the coin, right? Like, what you're working on with Quartermaster, the problem that it started out trying to solve, is the deployment of the server-side components necessary to deploy a new storage system, whereas most of what we work on here in the Storage SIG assumes that the server-side bits are already up and running somehow, someway, and we focus mostly on the consumption of that storage system.
A
And,
oh
that's
why
it
sounds
like
what
the
next
steps
for
you
should
be
are
to
go
to
the
cig
service
catalog,
because
they
focus
mostly
on
how
to
deploy
applications
on
top
of
kubernetes,
and
it
sounds
like
this
would
be
a
perfect
fit
there
I'm,
not
sure
they
focus
too
much
on
how
to
deploy
storage
applications.
It
looks
like
you've
got
a
pretty
solid
solution
here
and
I
wonder
if
you
could
roll
that
in
with
one
of
the
existing
projects,
if
we
could
I
think
that
would
be
a
perfect
home
for
it.
U: I didn't add that, but I have a few pieces of information, just about the open source days that Kubernetes is participating in. There are a few discussions about storage there, and during some of our main event keynotes we're also going to be highlighting some integrations with Kubernetes storage too. So those are just a few notes, and I'm available if anyone's there and they want to chat about anything.

A: Nice.