From YouTube: Kubernetes SIG Storage 20200716
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 16 July 2020
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.e7uos64tuaie
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
A: All right, welcome everyone. Today is July 16, 2020. This is the meeting of the Kubernetes Storage Special Interest Group. As a reminder, this meeting is public, recorded, and posted on YouTube. Today on the agenda we're going to go over the 1.19 planning spreadsheet; we've already had code freeze for this release. After that, we'll come back to the remaining items on the agenda. If there's anything you want to discuss, feel free to add it to the agenda and we'll get to it after the planning session. So, jumping directly into the planning session: the first item is CSI online/offline volume expansion. Hemanth, are you on?
B: I think he had a couple of PRs that didn't make it into 1.19, and he will continue in 1.20.

E: Yeah, I see updates and PRs going on in the repo, so I think they are trying to target a beta release in a month or so.
C: Yeah, so we continue with the bug fixing and adding tests. I think there's one e2e test fix that got merged, and we cut a 2.2 RC1 release. We're waiting for a change to move the API to a separate package; once that one is in, we can cut a formal 2.2 release.
F: So now in 1.20 we are going to make some changes, and I think it will remain alpha in 1.20, because the plan is to add a field to PVC status to track the last fsGroup that was used, rather than the old plan of checking the top-level directory for the fsGroup. This will make sure we can default to a behavior that works for all users, and it's faster. I think Jan will give another update, but SELinux would not require a change, so we now have a plan to solve this in a better way.
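For reference, a minimal YAML sketch of the shape being described. Only "a field in PVC status to track the last fsGroup" is stated in the meeting; the field name below is hypothetical.

```yaml
# Hypothetical illustration only: a PVC whose status records the last
# fsGroup applied to the volume, letting kubelet skip the recursive
# permission change when the pod's fsGroup has not changed.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
status:
  phase: Bound
  capacity:
    storage: 10Gi
  lastAppliedFSGroup: "2000"   # assumed field name, not an actual API field
```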
A: So yeah, sounds like good progress.
F: So yeah, we made a lot of progress. I'll try to schedule a call next week to go over what progress we made, but two changes didn't get in. One was the recovery from expansion failure, which is blocked on API approvers; it's still a bit up in the air whether the new field, allocated resources, should go to status or spec.
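For reference, a YAML sketch of the recovery idea: the expansion request exceeded what the backend could provide, and a recorded allocated size would let the user lower the request again. The field name allocatedResources, and its placement in status, reflect one side of the open status-versus-spec question, so treat this as illustrative only.

```yaml
# Illustrative PVC mid-way through a failed expansion (not a final API).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  resources:
    requests:
      storage: 100Gi      # the expansion that the backend could not satisfy
status:
  phase: Bound
  capacity:
    storage: 10Gi         # the size the volume actually has
  allocatedResources:     # proposed field; status vs. spec is the open question
    storage: 100Gi
```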
F: I hope to get clarity on this from Jordan and Tim. And then for the ReadWriteMany case I made a PR, but it needs a little bit of work to get it merged; that's not a blocking thing, though. But we did fix a lot of long-standing issues, so, cool.

F: Yes, and I was wondering, and this is again something we'll discuss in a follow-up call: except for recovery from resize failure, which will take its own time to go through the alpha and beta phases, is it okay if we consider moving the rest of the resize feature to GA, and fix all of that in 1.20? That's an action item I want to consider for a follow-up call next week.
A: Okay, so we'll mark that as started and we'll go ahead and move that to 1.20 as well. Next item is file permission handling for Windows.

A: I don't think this got any... yeah, this never got started. We'll go ahead and move it and see if we have anybody to take ownership of it for 1.20. The next item is file permission handling in projected service accounts; that's complete. Next item is CSI in-tree read-only handling.
A: So we'll carry that over to 1.20. Next item is issues related to assuming volumes are mount points, and Michelle was assigned to this. Let's see, Michelle, would you happen to have any updates here?
E: Yeah, so there are actually quite a few different PRs fixing various issues, and some of them merged and some of them didn't. I think specifically Andy's PR we decided to hold until 1.20, because it has the potential to break some drivers. So instead, I think we just need to... I think we already sent a deprecation notice, maybe in 1.18, but I think we can send another.
A: So docs in progress, and external-provisioner changes pending, to go in once the core changes are merged. Do the external-provisioner changes look like they're on track to complete by code freeze, or by the release date?
C: Yeah, no update yet, because I think we were trying to figure out the first part of the group design first, how to support the group snapshots. So we do have this in that KEP, and I also have some draft in a Google doc, but have not added that yet.

C: Yeah, so we had a design meeting, and I think I added an item at the end of the meeting agenda just to say that we are planning to drop the support for immutable volume groups from the KEP, and to see if there are any objections.
A: Okay. Next item is CSI out-of-tree: moving the driver manifests, that was completed. Moving the iSCSI driver: the last status was that we needed to follow up with the devs who build the release images. Any updates on that from anyone?
A: All right, next item is moving the GlusterFS provisioner out of the deprecated repo. The last status was that the initial PR was merged but it's not complete yet. Any updates on this from anyone?
A: In progress but not yet complete, so we'll carry that over to 1.20. For the NFS provisioner, Karen reached out to me offline. He says that, thanks to Jan, the NFS provisioner has merged, as has the NFS client; he still has a few to-do items to work on, and the team has been following up on the closure of the repo.

A: The current plan is to get the tasks completed by the end of this month, so I'm going to go ahead and leave it open, and we can carry it over to 1.20 if we need to; if not, we'll go ahead and mark it as complete as soon as it's done. This status applies to both the NFS provisioner as well as the NFS client provisioner.
A: Next item is deprecation of the external-storage repo. This is going to depend on the above items being completed; NFS is in progress.

A: Once those items are completed, we can go ahead and deprecate that repo. And again, this is a call-out to anyone on this call who uses anything from that repo, the external-storage repo: the plan is to deprecate that repo and archive it. If there's anything in there that you use or that you care about, please make sure that there is an effort to get it moved to an official Kubernetes repo.
A: Next item is volume snapshot namespace transfer. Yeah, no updates, unfortunately; hopefully I'll have better luck in 1.20. Okay, no worries.
C: Yeah, so the controller and the agent side changes are merged, and more driver changes merged. We are now working on adding the unit tests.
A: Sounds good. Next item is the object storage API, which is being called COSI, the Container Object Storage Interface. A few updates here: Jeff wasn't able to attend the meeting today, but he sent me notes offline. John, who has been leading this effort, is going to be moving to another project and won't be leading the KEP any longer.

A: Sid is going to be taking over leadership for this KEP. Jeff and Aaron are going to be responsible for the administrative aspects of the KEP: running the meetings, resolving the conditions required for getting it merged, and communications. And Rob is committed to ongoing coding; he's currently working on the sidecar.
A: They are in need of help on this project. So if you are at all interested in object storage, please reach out to me, or Jeff, or Sid, or any of the SIG leads, Xing or Michelle.
A: And we can put you in touch. This project really needs help. I think they're getting a lot of momentum; it's unfortunate that John had to leave the project, so they're probably a little bit understaffed. So if there's anybody that's interested in helping with this, please reach out and we'll put you in touch.
C: The assignee seems to not be online, so I can give an update: yes, it merged, so it made it into beta, and the doc PR also merged yesterday.
C: So now we still need to figure out the next steps. That is, we need to come up with a deprecation message, and decide how to handle the cases when the vSphere version is lower than 7.0.1.

C: So for the deprecation: for Kubernetes, does it normally require two releases? Say we were deprecating in, let's say, 1.19; if we still want to deprecate, does it take two releases for that to really happen?
E: It depends on what is being deprecated. In this case we would be deprecating behavior, I think, which is one year.

C: Oh, one year. Okay, so that means we would need to wait for one year until we declare GA, is that it?

E: I think this is what we need to discuss: whether or not we can turn things on but keep it possible to turn them off within the deprecation period.
E: To me it's a little unclear whether the deprecation period refers to changing the default or to completely removing the behavior, so I think that's something we need to sort out, and that will help us figure out exactly what timelines we can achieve for this.
F: Okay, so maybe we should seek out sig-arch for guidance on this one.
F: The only reason we are doing it is so that the API remains the same. But basically, at that point, all users should be encouraged to use CSI types in their PVs, because otherwise they won't get new features. So any existing customers' clusters cannot use this and they cannot upgrade; essentially we're just checking a checkbox, it feels to me, by supporting this migration. And it's a big enough topic that maybe we should consider what the deprecation cycle for this one should be, and talk to sig-arch for guidance.
A: How about we do an internal meeting within this SIG as a next step, and then, based on that, if we have major unresolved questions after that meeting, we can consider looping in other SIGs.

E: Yeah, and also I'm planning on meeting with SIG Cloud Provider to sync up timelines too, because we kind of have two parallel tracks of work going on; SIG Cloud Provider has their own stuff. So I think coordinating with them on their status will also help with our timelines, at least from what I know so far.

E: I think they are already going to slip the 1.21 target, so I think that gives us a little more breathing room, and we don't necessarily need to try to rush this so much and make such big changes.
C: Quickly, is there any driver ready for GA in 1.20?

A: So that is ongoing, but I mean, the beta is complete, so I'm going to go ahead and mark that as done. I think the road to GA is going to be more work.
E: He's probably sleeping, but I can give an update: Azure Disk moved to beta, Azure File is remaining alpha. I think mainly there is some bug fix pending in the external-provisioner needed for Azure File, and I think Andy is still working on getting the CI set up for it. But I think Azure File should be able to go to beta next release; it's very close.
F: The Azure File driver depends on fsGroup support at mount time. I don't know if you want to sort this out, because right now, when we were looking at the code, it was marked as a to-do item. So can it go beta without that?

E: I think we can go beta with some known issues, but definitely something like that we need to resolve for GA.
A
So
it
sounds
like
it's
partially
complete
and
what
we
should
do
for
the
next
release
is
break
this
into
two
as
you're
found
as
your
disk
instead
of
a
single
item,
so
for
now
we'll
keep
it
together,
we'll
call
it
as
started
and
then,
when
we
copy
it
over
to
120,
we'll
break
it.
Apart
into
multiple
pieces,.
A: Any updates on that? Do we know if Matt started working on this?

E: He said that it's on his radar, but at least for Q3, or for this release, he didn't have time to look at it.
A: Next up is OpenStack CSI migration, which is Cinder only.

A: Immutable secrets and config maps, that was completed. The next item is, with SIG Apps, to address issues with PVCs created by StatefulSets not being auto-deleted. Any updates on that?
H: Yeah, this is Keiki here. So we have had some discussions; I have started working on a KEP with Dave and Matt, with Michelle as the reviewer, and hopefully by next meeting I should have the KEP PR out with the required updates.

H: I was also supposed to meet up with Hemanth after the code freeze to discuss the decrease part of resize, so I need to sync up with him; I have not had a chance yet.
C: Yeah, so we had another meeting with sig-node and they still have concerns. So they asked us to look at a few more things: look at how the workflow would work for things like sending a SIGHUP or changing a log level, things different from quiescing, and also how to handle it if there are too many probes, like how to limit them.
C: They also asked us to summarize the initial KEP, which is the execution hook CRD approach. So I have sent that information to them, and maybe we should arrange another meeting and see how we can proceed.
A: If we do another meeting, let's pull in Jordan as well, because I think we're kind of playing messenger between those two groups.
A: All right, thank you for your work on that, Xing. Next item is the Kubernetes mount library, moving that into staging; the last status was that it was in progress.
E: Yeah, unfortunately we missed the code freeze deadline for getting the initial PR merged. We were fighting some weird versioning and dependency things, but I think we have a decent handle on getting this merged first thing once the branch reopens again.
A: Awesome, all right, that's good progress. I think that actually got done faster than I thought it would, so thank you for that. That is all for status updates. Switching back to our agenda, just as a reminder, August 25th is when the 1.19 release is being cut, so we actually have a considerable amount of time; usually it's much tighter.

A: The reason for this is the whole COVID thing: instead of doing four Kubernetes releases in the year, we reduced it to three, and so we have more time to get things done, which is good. Next, moving on to design reviews, we have two items. The first one, is it Yannis?
J: Yes, so yeah, basically I just wanted to get some feedback and to explain a bit what the concept is about. Is it okay? Can I share my screen?

J: So actually, I think I would be interested in contributing to the object storage API; I joined last week's design conference. It's a bit related to that, or at least it can leverage our framework.
J: Okay, so it's the Dataset Lifecycle Framework. The presentation is a bit high level, but I'm going to skip straight to the interesting bits. This is more or less the use case that we're trying to handle: there is the problem of data scientists and data engineers accessing data sets, and of the data provider providing access to these data sets.

J: Now, while there is the CSI work, and we are actually leveraging all of that work (under the hood we're using CSI), what we're trying to do is to bring in a new definition, the data set. It's a custom resource definition that is actually a pointer to remote S3 and NFS data sources. So basically what this means is that the user would declare a data set and define the type, and then, under the hood:
J: Our framework would provision the PVCs and the config maps and the secrets, and it would be directly mounted into their pod. What we are also pushing for is this bit that's called transparent data caching. Basically, what we're trying to do there is to have a pluggable architecture for bringing custom frameworks inside that pipeline without changing the core framework, with the ability for the frameworks to implement their caching on their own.
J: We have a proof of concept for that based on Ceph and Rook. Also, imagine when you cache these data sets on your Kubernetes nodes how beneficial it would be for the scheduler to have pointers and hints about where to schedule pods that use the data sets. And of course we are doing tests with Spark, TensorFlow, and Kubeflow to make sure that it works as is. So yeah, this is the overall approach, as I said, and this is the CRD.
J: I define the data set, you just annotate your pods like this, and then, if there is a caching plugin available, it makes sure that it caches the data in the caching pods.
J: This is the flow from a different perspective. The user defines the data set, or the data provider does; the dataset operator creates the corresponding PVC; and then, when they go and create the pod, the admission controller mutates this pod to use the created PVC. And this is, very quickly, just to show how the transparent caching is currently working. So this is the core framework.
J: So when the user declares a data set with these credentials, the dataset controller probes the cluster for plugins, and if there isn't one, there is this internal object, the DatasetInternal, which is hidden from the user; it's just for the components to know about. Then it goes to the DatasetInternal controller, which is responsible for creating all the native Kubernetes components: so csi-s3 and csi-nfs, which we have implemented.

J: We have actually modified the open source implementations of those, and anything else can come in the future. Now, when there is a custom plugin, this would be the flow: the dataset controller is aware that there is a custom plugin available, so it delegates the creation of the DatasetInternal to this caching plugin. So basically, the only responsibility of the caching plugin is to create a DatasetInternal.
J: We understand that this is a bit of an async process, because they might need to provision some stuff, some pods, some configuration, which we know we need to do in any case. But in the end they don't have to implement the low-level creation of PVCs and so on and so forth; they just need to create a DatasetInternal, which is then handled again by the core framework, which creates the PVCs. And if you don't have any questions, I have a very, very short demo that I can show you.
J: Just to show you what we're talking about exactly: this is the data set definition. So: type COS, cloud object store; the access key; the endpoint, in this case being on IBM Cloud, but we have tested with Azure, with AWS, and with MinIO and it works; the bucket that we want to use; and the region. So, as normal: kubectl create.
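For reference, a YAML sketch of roughly what such a Dataset definition looks like. The apiVersion and exact field names are not read out in the meeting, so treat them as illustrative assumptions; the endpoint, bucket, and credentials are placeholders.

```yaml
# Illustrative Dataset pointing at a remote object store bucket.
apiVersion: com.ie.ibm.hpsys/v1alpha1    # assumed group/version
kind: Dataset
metadata:
  name: yannis
spec:
  local:                                 # "local" = defined in this cluster
    type: "COS"                          # cloud object store
    accessKeyID: "ACCESS_KEY_ID"         # placeholder credentials
    secretAccessKey: "SECRET_ACCESS_KEY"
    endpoint: "https://s3.example.cloud" # placeholder endpoint
    bucket: "my-bucket"                  # placeholder bucket
    region: "eu-standard"
```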
J: Yeah, okay, so good question. So basically we also started the design with the idea in mind that there would be a remote catalog as well; that's why we have it under "local". "Local" is in the sense of locally defined, let's say defined in this cluster, if that makes sense. Okay, but yeah, thanks for raising this.
J: So here we have the PVC that was created as a result of this data set, and it has this storage class because our orchestrator realized that it's of type S3, so it will invoke the csi-s3 driver. And if we want to use it in a pod, it's as simple as this: in the labels we have this convention, "dataset.0.id", the number and the id, which is the name of the data set, "yannis" here, and the same with "useas" set to "mount".
J: This means that we instruct the framework that we want this data set mounted inside the pod, and not used as a config map; in the cases where the user wants to access, let's say, only the S3 API directly, and not mounted, you will see that they would change it to "configmap". We can also specify the volume mounts like that, so we can say: I want you to mount it here. But if it's omitted, by convention we mount it under a default path for the data set, "/mnt/datasets/yannis" here.
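For reference, a YAML sketch of a pod using that label convention. The label keys follow the convention just described; the pod name and container image are placeholders.

```yaml
# Illustrative pod: the admission controller sees the labels and mutates
# the pod to mount the PVC that was generated for dataset "yannis".
apiVersion: v1
kind: Pod
metadata:
  name: ml-job                   # placeholder name
  labels:
    dataset.0.id: "yannis"       # which dataset to use
    dataset.0.useas: "mount"     # mount it (vs. "configmap" for API access)
spec:
  containers:
  - name: worker
    image: registry.example.com/worker:latest   # placeholder image
    # No explicit volumeMounts: by convention the dataset is mounted at a
    # default path (e.g. /mnt/datasets/yannis).
```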
J: And it's the raw data from the remote bucket on S3. So we envision that, in that way, users would find it easy: users with no experience with CSI or any of the low-level Kubernetes components would be able to launch their pods and their jobs in a much easier way, and it would be much easier for them to onboard onto Kubernetes. And yeah, we're heavily relying, of course, on the CSI interface, and that relates to the discussion about the cloud object storage API.
J: Once that becomes more production-ready, we plan to leverage it as well. So this is a very rough idea of the framework. I have presented it at the CNCF Storage SIG as well, and basically we're looking to reach out and get feedback, trying to understand what the landscape is and where this work fits.
A: And just to make sure I understand: you can set the "useas" to mount or to something else?
J: So we just inject the environment variables to reflect the remote S3 URL, the bucket, and the username and password. We're also looking at another project, called Trusted Service Identity, which leverages Vault to keep the secrets there instead of having them as Kubernetes secrets, and we're looking at ways to integrate with that, so that the bucket credentials won't be stored in secrets or anywhere else, and only the users that create these pods can access them.
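For reference, a YAML sketch of the config-map/environment-variable variant just described. The label convention matches the earlier example; the injected variable names are assumptions for illustration, not the project's documented names.

```yaml
# Illustrative pod that consumes the dataset via the S3 API rather than a
# mount; the webhook injects connection details instead of a volume.
apiVersion: v1
kind: Pod
metadata:
  name: s3-api-client              # placeholder name
  labels:
    dataset.0.id: "yannis"
    dataset.0.useas: "configmap"   # inject endpoint/bucket/credentials
spec:
  containers:
  - name: client
    image: registry.example.com/client:latest   # placeholder image
    # After mutation, the container would see environment variables along
    # the lines of (names assumed): YANNIS_ENDPOINT, YANNIS_BUCKET,
    # YANNIS_ACCESS_KEY_ID, YANNIS_SECRET_ACCESS_KEY.
```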
J: Exactly. So basically there is a csi-s3 open source project that we took and then modified a bit, because we wanted the functionality to be a pointer. In their case, when they were deleting the PVC they were also deleting the bucket, and we just wanted to tweak this a bit to support the functionality we want, which is to keep the bucket there.

J: We just want to delete the mount point. So by default it uses s3fs; we have experimented a bit with other options, with Goofys a while ago, but s3fs, for the time being, just does the trick. csi-s3 under the hood uses s3fs.
J: The idea is just to offer one abstraction higher than csi-s3. So while it is possible to mount it directly, of course, we find that for users who are not experienced with, you know, storage classes and provisioners and dynamic provisioning and all this stuff, maybe with this approach it would be easier for them to spin up a pod using a data set that maybe was also provided by someone else.
J: It takes definitions of data sets and, in the end, creates the PVCs. Now, what happens is that the core operator working here, when it receives a definition of a data set, does a lookup on the cluster; the lookup is basically just so we can understand whether there is a plugin in the cluster.
J: For that we have just these labels: there should be a pod running that says "dlf-plugin-custom", and "dlf-plugin-name", the Ceph custom plugin in this specific case. So basically, what happens is that once the main operator sees that there is a plugin running, using these labels, it will say: you know what, I'm done with this data set.
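For reference, a YAML sketch of how such a plugin might advertise itself through pod labels. Only the two label keys are named in the meeting; the label values, pod name, and image are assumptions for illustration.

```yaml
# Illustrative plugin pod: the core operator looks for pods with these
# labels to discover that a caching plugin is available, then hands
# matching datasets off to it instead of processing them itself.
apiVersion: v1
kind: Pod
metadata:
  name: ceph-cache-plugin-0                  # placeholder name
  labels:
    dlf-plugin-custom: "true"                # assumed value
    dlf-plugin-name: "ceph-custom-plugin"    # plugin identity
spec:
  containers:
  - name: plugin
    image: registry.example.com/ceph-custom-plugin:latest   # placeholder
```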
J: I won't process it anymore; I'll leave it to you. So the caching plugin is, again, an operator that reacts to the creation of data sets with a specific label. Basically, the initial operator says: okay, because I know that there is this plugin, I will assign this plugin; so it annotates the data set and says: you know what, now this plugin is responsible for this data set.
J: Then it's when the core operator kicks in and says: oh, okay, there is a DatasetInternal which originated from somewhere else, not me, but it's still okay, because it's of a type of storage that I understand, that I can speak; basically, it's S3. So it will continue the flow: it will continue the flow to create the PVCs, the config maps, and the secrets, using the exact same code base that it would use in the case where it had created the object itself.
J: So the DatasetInternal, in the case where it's one-to-one, will still proceed in the exact same fashion. The only thing the caching plugin is responsible for is to react to creations of data sets and provide the resulting DatasetInternal. In the code of the custom plugin, they won't have to write details about s3fs, about storage classes, or whatever; they just have to give back a DatasetInternal. And yeah, we were able to create a plug-in fairly quickly.
J: Mm-hmm, so yeah, in this case, if there is a plugin available, it needs to somehow expose a different endpoint. It makes sense, because they look up the remote and they want to expose a new address that exposes the cached version of it, and it does this locally, on the local Kubernetes cluster. So we want to keep all that plumbing independent from the core framework, and from the user, of course.
J: So basically the user will, in the end, mount the PVC, but they will mount it with these details, which means that they won't be hitting the remote; they will be hitting the one the custom plugin has provided. Now, if the plugin internally does prefetching, or handles it in any other way, that's not part of the core framework. So we don't provide caching; we provide an interface for custom plugins to attach, but we also give an example implementation.
J: This is how we do it: we get a definition, we create a RADOS gateway pod, we create a user, and once we get those details and it's up and running, we create the DatasetInternal, and then the core framework can take it from there. So we don't do the caching ourselves; we just provide the caching interfaces, so the users don't have to modify or configure anything.
A: Got it, cool. Thank you for all that information. I think there's definitely a lot of overlap with the COSI project, the container object storage interface. I'm glad that you're looking into that already and planning to kind of align with it as it becomes more mature.
A: Going once, going twice... okay! Well, thank you so much. I took some notes in the agenda doc; if folks want to reference back to that, feel free. And Yannis, I look forward to working with you in the COSI community as well.