From YouTube: Kubernetes SIG Storage - Bi-Weekly Meeting 2022-11-17
Description
Kubernetes Storage Special-Interest-Group (SIG) Workgroup for Bi-Weekly Meeting - 17 November 2022
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Xing Yang (VMware)
A: Hello everyone, today is November 17, 2022. This is the Kubernetes Storage SIG meeting. Today we're going to go over the 1.26 planning spreadsheet, and then we have a design topic today; we'll go over that one later. Okay, so let's look at this one first. The first one is delegate fsGroup to CSI driver instead of kubelet.
A: Okay, I think the code is all merged.
A: Next one is that — after volumes, this is the wrap-up. Conformance test — is Jonathan on the call? Hi.

D: This is Hemant, by the way; I joined a little bit later. The delegate fsGroup one is done.
A: Okay, so we still have the doc, right — the doc PR to merge. So it just says there's a doc PR in review. Do you know if there is — there was a blog?
A: Okay, yeah, I think — let's see, what's the deadline; let's just take a look at the next deadline. I think the doc-PR-ready-for-review one — okay, so this one passed, which is the 15th, but I think the doc is already being reviewed, so I think that's okay. And the docs-ready-to-be-merged date is also — let's see — I think it's the 29th; I'll double check, but I think this is the date the doc PR needs to be ready to merge, yeah?
A: So please address comments on the docs as soon as possible, and also for the blog — please try to fill in the content, since the ready-for-review date is coming.
A: All right, thank you. And this one — do you have an update on this? This is the e2e test for the — what is that — the recover-from-resize-failure one.
D: Yeah, so this is where I am: I have not finished yet, but I opened a PR — like a dependency PR — yesterday. Basically the CSI mounted-volume file is growing like anything, so I refactored that code to make it easier, because my changes depend on that file.
A: Okay, we can probably add that one, yeah. If you have a link, you can add it — there's a board where we add PRs.

D: I will — all right, thanks.
A: Has Gene joined? All right — to this one, the volume group snapshot. This one I have not got an update on yet; I need to get back to this one.
A: And the next one is provisioning volumes from cross-namespace snapshots and PVCs. Yeah, for this one I actually got an update from Taka. Let's see — that's what it said; I need to check what the message said. API — okay, the API code is merged, right? It says the API code is merged.
A: Merged, doc PR in review — I thought you mentioned that. I thought we still have the PR in the external provisioner; is that merged yet, Michelle?

C: No — that's correct. Okay.
C: That's a good question. I will sync up with Chris and see if there's anything else — the e2e test was the main thing, but I think he also wanted to make some improvements to the scheduling logic.

A: Oh, okay.

C: But that's not, like, a strict requirement.
A: Sure, thanks. Does this also need, like, metrics before going to beta? But I think that one can be added later, yeah. All right — next one is the runtime-assisted mount. We have Deep here?
A: Or — do you know the status of this? Was Jan here? Yeah.
A: Okay, yeah — I think Deep was in the CSI community meeting the other day. He was saying it's actually on a good track; basically they got some comments from the SIG Node side. Anyway, let's see — okay, so it looks like the same on this one, the runtime class suggestion.
A: And the next one is the CSI Proxy for Windows transition to privileged containers. Is Mauricio here?

C: That's a good question — I don't know. I know there have been PRs out, but I don't know if they've been merged yet.
A: But we're not really following the release deadline with this one, right? So this is — the next one is the proxy performance issue. Yeah, I don't think anyone is working on this, so—
A: Next one is the node expansion secrets e2e test. I believe, Michelle, you said someone is interested in this one, right? Yeah.
D: I think all the PRs that were there for volume reconstruction, and the SELinux handling for CSI drivers which was missing — all those are merged. Okay — I don't think there's any pending blog; I don't know if you signed up for it, but okay.
A: Oh, I think he's not writing one, yeah — he did not opt in to this one. I know he's probably waiting for when it's almost to beta, right? That's what I thought. But I think there was also — there's still a KEP, right? Is that one merged? Let's update the KEP — I'm not sure; maybe that might not be much.
A: Fine — last release, yeah. So basically he just updated it based on the volume reconstruction change, but that PR I have not reviewed. But that's not, like, urgent; it's just, you know, if you want to update the KEP after the volume reconstruction thing is in. Yeah, that I'm not sure — maybe not much again, yeah, but it's not urgent. Okay, yeah — we are good.
A: Okay, so CSI migration — CSI migration vSphere, yeah. So this one is — it's merged; code merged. Basically the doc PR is in review. The blog PR is still there, I think, in review, and then the rest of those — yeah, those actually did not make it, so I think we have this on here to track the tests.
A: We need to have e2e tests, right — this one, yeah. So this one did not make it; we didn't get the tests ready by the merge deadline, so this one moves to 1.27.
A: And then the next one is honor the PV reclaim policy. Is Deepak here?
A: So I think he's making some changes in the — I don't know if it's in the external provisioner or in the provisioner library — so I think he's still working on code changes.
A: Oh, sorry, I should check. Let's see — this one says code merged, okay. I need to check with him; I'm not sure. I thought he was still working on something — I don't know which one he's looking at — but the test is not there yet, because the tests will be added out of tree, so we're waiting for this work that Raunak is working on. Okay, we can note this one as tests pending. The next one is volume mode conversion between source and target.
A: So yeah — I don't think we can make it, but I'll keep it like this for now. I think he said Raunak is working on adding the e2e tests; it's getting very close, but I think it's not out yet — still waiting for the tests. That's — oh, no, it's—
E: Yeah, so this ended up not making the deadline — a bug. Correct: a bug was found after the exception, and in discussion with Jordan we decided to wait. Okay.
A: Thank you. Okay, so we change this one to alpha then, right? And the next one — volume expansion for StatefulSets. Is there any progress on this one?
A: —that's what I know. Okay. So that's all we have here. Now let's come back — we have this topic here.
A: Do we have rata and — sorry, I'm not sure how you pronounce this one.
A: Hi — yeah, go ahead, please. Yeah, you added a lot of notes here; please go ahead, and can you explain this issue and the—
A: Hm, okay — I couldn't hear anything. Okay, let's see, there's something in the chat. Okay, yeah — sure, yeah.
G: Oh, okay — sorry, that was an issue with my configuration. Sorry. So we wanted to join SIG Storage to talk about—
G: —how we are planning to handle storage in pods with user namespaces support. The situation we're currently at: we merged support for stateless pods in Kubernetes 1.25, which handles only very simple volumes, where the lifecycle of the volume is tied to the lifecycle of the pod — so, like, configMaps.
G: The challenge here comes because when you create the user namespace, you create a mapping from the user inside the container to the user outside the container. So when you are trying to use volumes, if you don't use any kernel support to facilitate this, the files really need to be owned by the UID that is mapped to you as the host user. That is the solution we started with.
G: But during the review, I think Hemant from SIG Storage suggested that we just drop that part of the code and rely on fsGroup. That part was maybe problematic, because we changed configMaps, secrets and everything to honor the fsUser field that can be specified — and today it is not honored — so that change had bigger effects than just being used when user namespaces are in use.
G: So we switched to fsGroup, and we didn't change that. The thing is, fsGroup is not ideal, because it forces you to always have group permissions on the node — so things like SSH keys, or anywhere a library or application expects some permissions to *not* be granted to the group, won't work.
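For context on the SSH-key example: OpenSSH refuses private keys whose mode grants group or other access, which is exactly the kind of permission a recursive fsGroup chown/chmod adds. A minimal sketch of such a check (hypothetical helper name, not SIG code):

```python
import os
import stat
import tempfile

def key_permissions_ok(path: str) -> bool:
    """True if the file is accessible only by its owner (mode 0600
    or stricter), the way OpenSSH expects private keys to be."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & 0o077 == 0  # no group/other permission bits

# A key at 0600 passes; widening it to the group -- which is what a
# recursive fsGroup chown/chmod effectively does -- makes it fail.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o600)
print(key_permissions_ok(path))  # True
os.chmod(path, 0o660)
print(key_permissions_ok(path))  # False
os.unlink(path)
```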
G: So basically, what we have in mind is to instruct the container runtime to create all the bind mounts using the mapping for the user namespace. When you do that, you don't need to change the ownership of the files and volumes and whatever, because the kernel will do that conversion using the mapping of the user namespace.
D: Hemant here. So the reason we had to ask to drop that change, where we were applying fsUser, was that there were some issues around it. Traditionally, permissions are driven by fsGroup in Kubernetes for all the storage volumes; now, I don't remember if fsUser was, like, pod-level — like, there—
D
There
are
issues
that
we
have
to
figure
out
about
two
parts,
for
example,
can
specify
same
FS
group
and
share
the
volume
or
yes
and
then
so.
Those
are
the
issues
and
then
second
one
was
like
I,
don't
remember
top
of
my
head,
but
like
can
you
specify
different
FS
user
for
different
containers
and
the
same
and
the
same
volume?
So
one
container
can,
if
you
no.
D
Okay
and
yeah,
so
we
were
hoping
that
the
whole
design
of
like
how
the
FSU
service
permission
will
work
it
it
like.
Maybe
it
requires
a
separate
cap
actually,
rather
than
putting
it
as
part
of
the
as
part
of
the
that
username
space
137
right.
G: So basically, we don't have to do any more changes in the kubelet — we won't be changing that at all. Just when you create the mount — there's a special bind mount in the new mount API; you can specify a mapping, and the kernel will then do the translation for you when you read a file or create a new file. So we don't need to change that complicated part at all, if we go through with ID-mapped mounts.
C: How does this — does it require user namespaces to also work, or is this an independent feature from user namespaces?
G: No, no — neither. It doesn't require user namespaces to work, and you don't need user namespaces to use ID-mapped mounts. The Linux kernel feature, ID-mapped mounts, doesn't require that you're running inside a user namespace; it's just an ID-mapping conversion, which is very convenient to use with the same mapping that you use in a user namespace, and it was created to solve these problems that we have with user namespaces and some volumes. But neither of the features requires the other.
D: I guess the next question was: will this feature — this change — require users to enable the user namespaces feature in Kubernetes? I know that ID-mapped mounts don't necessarily need user namespaces, but will this change that you're proposing require that feature to be enabled in Kubernetes?
G: All right — I'm not sure I followed the question. So the thing is, when you set the feature in Kubernetes — like, "I want to use user namespaces for this pod" — then over the CRI interface, to communicate with the container runtime, we'll send a message saying "use this mapping", and whatever, and the container runtime will use the mapping for the mounts too.
D: So we are proposing that we can solve this problem — this whole fsGroup one — by using ID-mapped mounts. So I'm just trying to ask: will this change — applying ID-mapped mounts in Kubernetes — be feature-gated, because I don't— yeah, so—
G: Yeah, so yes — this will be feature-gated with the user namespaces feature gate, and as for the changes: the kubelet — no Kubernetes component — will create an ID-mapped mount. It will just send the necessary things over the CRI interface, so the container runtime, which today does the bind mounts, will do the mounts using an ID mapping.
G: So we wanted to discuss our plans early with SIG Storage. If this makes sense to SIG Storage, we'll create a proof of concept of the code and then update the KEP with the very specific changes that we need to make to do this properly. But we wanted to know whether this overall direction makes sense before we spend so much time — like several weeks — working on this.
F: Yeah — then there is no way to do that. It's like a requirement in the user namespace: each ID in the user namespace must be mapped, but if it were a single ID outside — and the same structure is maintained for ID-mapped mounts — it would be confusing for the kernel. Let's say it allowed squashing everything to a single ID; then, when it goes to write the file back on the storage, it—
C: I guess — so if I understand, then this ID map doesn't really do anything in the case when you're not using user namespaces?
G: No, no — so if you create a mount with a mapping, that's the mapping that will be used for that mount. But yeah, most use cases are with user namespaces. For example, though, systemd-homed uses ID-mapped mounts without user namespaces, because it creates per-user IDs, and then it supports having your home directory on different machines, where you log in with different IDs.
B: So for fsGroup, kubelet will just do a chown on any file that doesn't match the ID — so it's effectively squashing everything into the single ID that's provided by fsGroup, and there's not a way to signify that with ID mapping: you have to, like you said, provide a one-to-one mapping, and there's a limit to the number of mappings that you can provide.
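The recursive chown/chmod walk being described lives in kubelet's volume code; as a much-simplified sketch of the "squash to one GID" behavior (illustrative only — the real implementation also sets the setgid bit on directories and can skip the walk via fsGroupChangePolicy):

```python
import os
import stat

def apply_fs_group(root: str, fs_group: int) -> None:
    """Recursively force every file's group to fs_group and add group
    read/write -- roughly the fsGroup chown/chmod walk described above.
    Simplified sketch, not kubelet's actual code."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            st = os.lstat(path)
            os.lchown(path, st.st_uid, fs_group)  # keep owner, squash group
            if not stat.S_ISLNK(st.st_mode):
                os.chmod(path, stat.S_IMODE(st.st_mode) | 0o060)
    st = os.lstat(root)
    os.lchown(root, st.st_uid, fs_group)
```

This is exactly why the SSH-key case breaks: a file that started at 0600 comes out 0660, group-accessible.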
C: Yeah, I think the main use case was — you know, in the past we cautioned a lot against supporting fsGroup for NFS, because NFS is widely shared across multiple pods, and we didn't want to get into a case with, like, two pods specifying different fsGroups and then ending up flipping the volume permissions back and forth all the time.
D: No, but it touches on this, because of the ID-mapped mount: if we use ID-mapped mounts for the user namespace, that means that whatever is being mapped to that user must already have permission to read and write files on the host. So let's say, with the user namespace, inside the container the shifted uid is 20,000—
G: That is completely out of scope for user namespaces. Okay, so the idea is: if you start a user namespace and you mount using the same mapping, and you're running as user 0 inside the container, then for the file created on the file system — if you look at the inode, the uid will be zero, yeah.
G: So basically: if you can access the volume without user namespaces, you can access the volume with user namespaces and ID-mapped mounts. And if you cannot access the volume without user namespaces — well, we don't care; you need to fix it, and when you fix it — we are not trying to solve that problem.
G: fsGroup or other things need to be used, and those things work correctly if mounted with ID-mapped mounts — there's no mapping to take into account; the kernel takes it into account only when you read or write through that mount.
D: And yeah — but this proposal is also trying to solve the problem of, like, SSH keys and whatnot, so they're not group-readable and group-writable. That assumes that on the host there is a specific uid that has the permission to read and write the volumes, and that's what will be ID-mapped to. So it's like: there's a user — call it 200 — that can read and write those files, and then inside the user namespace it will be mapped to that user.
D: So — go ahead — so when you use ID-mapped mounts, what will this ID map mount to? Because there's no single uid which will have permission to read or write those volumes, because kubelet is not touching the uid of those volumes.
G: Yeah — exactly. So let's take a simple example: /etc/hosts or /etc/resolv.conf — files that are created on the host are owned by root on the host.
G: So you cannot modify those files. If, instead, that bind mount for /etc/resolv.conf is done with an ID-mapped mount, you will see that the file is owned by root inside the container, because the kernel is doing the ID-map translation — and this very same thing happens with any other volume that you mount.
G: The IDs are just mapped that way. So if you want to read a file, the process inside the container must be running as a user, inside the container, that matches the permissions of what is written in the file system, in the volume.
D: In that case, does the ID-mapped mount require, like, mapping this to kernel uid zero? That ID-mapped-mount system call takes two arguments, right — the uid and the kernel ID — and then there's a range. So, for example, taking /etc/hosts or /etc/resolv.conf, which are owned by root — only, like, the zero kernel uid can read them. So does this mean that the ID-mapped mount will map something to zero inside the kernel, like—
G
It's
not
something
mapped
to
zero.
It's
exactly.
The
other
way
around
is
that
zero
inside
the
container
is
mapped
to
some
user,
some
umbrellas
user.
But
let's
say
it's
six,
five,
five,
five
three
six
is
mapped
to
that
user.
So
when
you
use
that
mapping
to
read,
then,
when
you're
reading
a
file
that
is
owned
by
root,
it
will
be
automatically
translated
to
65536
and
therefore,
as
you're
running
as
that
user,
you
can
see
it.
The
kernel
handles
all
the
translation
for
you,
but
it's
exactly
the
other
way
around.
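The translation being described is just range arithmetic over the same (container-id, host-id, length) triples written to /proc/&lt;pid&gt;/uid_map; a small sketch using the 0 ↔ 65536 example above (illustrative helpers, not kernel code):

```python
def host_to_container(host_uid, mappings):
    """mappings: (container_base, host_base, length) triples, the same
    format as /proc/<pid>/uid_map. Returns the uid seen inside the
    namespace, or None for an unmapped id."""
    for container_base, host_base, length in mappings:
        if host_base <= host_uid < host_base + length:
            return container_base + (host_uid - host_base)
    return None  # surfaces as the overflow id (65534, "nobody") inside

def container_to_host(container_uid, mappings):
    """The reverse direction, applied when writing files from inside."""
    for container_base, host_base, length in mappings:
        if container_base <= container_uid < container_base + length:
            return host_base + (container_uid - container_base)
    return None

# Container uid 0 mapped to host uid 65536, with a range of 65536 ids:
m = [(0, 65536, 65536)]
print(host_to_container(65536, m))  # 0: host-65536-owned files look root-owned inside
print(container_to_host(0, m))      # 65536: files "root" creates inside land as 65536
print(host_to_container(0, m))      # None: host root is unmapped inside the pod
```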
G: So it does the translation in one direction when you read a file, and in the other direction when you're writing. So basically, it all works — it was created, basically, to solve the problems containers have with user namespaces and volumes.
G: Does it make sense — am I clear? So yeah, coming back to the doc: we switched to fsGroup, and yeah, we would like to get rid of fsGroup because of these limitations. And there are also these other limitations — like /etc/hosts and /etc/resolv.conf being owned by root on the host. So today, when you create a pod with user namespaces—
G: —all the uids as one? No, it doesn't help. But what does it change, whether it helps for any other issue or not—
G: Well, whether ID-mapped mounts help with that or not — adding support for ID-mapped mounts with user namespaces will definitely not solve it. That is a different problem, right? It would need a different KEP and different considerations — motivation, migrations, feature gates and everything.
G: Yeah, well, that is open for discussion. We can change this CRI interface to specify one mapping per mount, so it could potentially be a different mapping for different things; or we can just keep it as it is today, and in the future, if we find that ID-mapped mounts with different mappings for different volumes would have some value, then we can add that field too.
F: Yeah, it can be something incremental, because we can do that later; or, if you prefer, we can add it now — but it's internal to the CRI, right? Even if it's not used with a user namespace.
C: I think, from my perspective, I need to understand this ID map a little better. Do you have a write-up, maybe with a little more detail, and maybe, like, a diagram or something? Maybe that'll help me understand the problem that we're trying to solve, because I am very interested in also solving the general case of user ownership.
A: And also you mentioned you wanted to do a POC. So — Michelle, Hemant, Jonathan — do you guys think it's fine just for them to do a POC, and then maybe they can come back and talk about it? Or what are you guys thinking?
D: Yeah, I think I want to understand how the fsGroup part — like, I know it's kind of orthogonal to the ID-map one, but how does it interact with ID-mapped mounts? Like, what will kubelet be doing and what will the CRI be doing — how the responsibilities would be split. Right now it's not clear to me, because we're saying, okay, the CRI will do the ID-mapped mounts — and what do we expect kubelet to do?
G: Yeah — so if the user requests fsGroup, then fsGroup will be used, with the downsides that fsGroup has. What we want is: if a user doesn't specify fsGroup, you don't use it. Today, if the user doesn't specify fsGroup, we set it to the same user — that is, the host GID; we do some conversion, and it's the same as if the user had set it. So the—
Yeah
so,
but
basically
we
want
to
use
at
the
method
months
to
to
solve
up
an
issue
that
we
we
created
with
username
spaces
like
the
cubelet,
will
still
think
in,
like
from
the
host
point
of
view,
like
the
same
for
any
ideas,
it's
using,
it
won't
have
to
worry
how
they
this
will
look
inside
the
username
space,
so
it
will
look
exactly
how
it
looks
today
without
username
spaces.
The
problem
that
we
have
with
username
space
is
that
once
we
set
the
kernel,
does
the
translation
of
at
least
internal?
F: So if a file on a volume is owned by root on the host, then once we use a user namespace, inside the container it will look owned by a user that is not part of the user namespace — so it shows up as the unknown ID. So what we want — I'm—
A: Can you edit — yeah, just add it here; add a link in this agenda doc. You can review that, and then maybe you guys can come back again — or maybe we need to have a separate meeting to just discuss it. It looks like there are still a lot of concerns. Maybe start with the doc, and once people read the doc they'll understand more of what you're trying to do. Yeah, let's start from there.