From YouTube: Kubernetes SIG Storage Meeting 2023-06-15
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 15 June 2023
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.pbwxqc294u69
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
A
All right, today is June 15, 2023, and this is the meeting of the Kubernetes Storage Special Interest Group. As a reminder, this meeting is public, recorded, and posted on YouTube. On the agenda today we're going to go over the 1.28 planning.
A
1.28 is the next milestone for Kubernetes, and we have a number of items that SIG Storage is committed to, so we'll get status updates on those. And then, if there are any other items you'd like to discuss, please feel free to add them to the agenda; it looks like there's already one item there. You can find the link to the agenda in your calendar invite. As for the timeline for 1.28:
A
The next upcoming milestone is the enhancement freeze; that's coming up, in fact, at the end of this week: Thursday for Pacific and Friday for UTC.
A
What this milestone means is: if you have a feature going into the 1.28 release, you must have the KEP approved by this date. So if you are tracking a feature for the 1.28 release, please ensure that your KEP gets approved by this time.
A
With that, let me switch over to our tracking spreadsheet and we can get status updates on the pending items. I'm going to add a new column here for today's date, which is 6/15, and we can jump right into it. I don't see the owner on the call.
B
If I remember correctly, there is no update.
A
Yeah, it looks like we haven't gotten an update on that for a while, so I'll mark that as no update. The next one we're skipping for this cycle.
A
Then we have provisioning volumes from a cross-namespace snapshot or PVC. This is continuing alpha work, so no KEP here. Any other updates for this one?
A
Good. Next is CSI Volume Health additional metrics.
C
I think this is at e2e tests, so, still, yeah. There was someone working with me on the e2e tests, but it's still a work in progress, it sounds like.
C
I think that team is updating the KEP based on the new design, and we'll try to bring it up in the next CSI community sync.
E
Yeah, there was a meeting on that just yesterday, and we ended up spending most of the time talking about the CSI-level RPCs; they need to update their proposal to the CSI spec. The PR that's out there is old, so it needs to be revved to reflect the new design, and we need to review it. So we might have time in next week's CSI meeting, or it might be pushed to the following month. Got it.
A
Cool, thanks Ben, thanks Shane. Next is runtime-assisted mounting.
A
We have no active owner for this one, I think, at this point. We should probably drop it for the cycle, unless someone wants to pick it up.
A
It was being driven by Deep. We haven't seen him for a while, at least definitely not in this cycle.
A
That makes sense. So I think what we'll say is it's orphaned.
A
Cool. Next item is enabling privileged containers for Windows to replace the CSI proxy.
F
Yeah, so this is Manu here. We had a discussion with Mauricio about this. My understanding is that at this point the upstream work is pretty much complete. What is outstanding is basically that if any driver or provider needs to support this functionality, then they will need to implement the changes on their end. So I think we are evaluating this from our side.
F
We are interested in doing this work, but I think it's a significant effort, so we need to think about when we can prioritize it. I'm hoping to have an update within the next two to three weeks in terms of that effort. We do want to support it; I think it's just a question of when, not if. So...
F
Go ahead. No, I was just going to say that, you know, at least within the EBS CSI driver, we are continuing to use the CSI proxy at this point, because that's the existing implementation. We do want to get rid of that, but getting to the point where we get rid of it and use host-process containers instead, I think that's the work that, based on at least our initial understanding, sounds like it's not a small chunk of work.
E
I have the opposite perspective: I like the CSI proxy. I wish we had one on Linux; I think I mentioned this a year or two ago. I can't tell you how many conversations I've had with security people who look at CSI node plugins and get terrified of the fact that they're running privileged. If there was a CSI proxy on Linux, I would encourage people to use it, first from a security perspective. I think we're doing the wrong thing by moving away from the proxy.
F
So that's an interesting point of view; I hadn't heard that one before. I can tell you from our perspective, when we used the CSI proxy, it just seemed like there were a bunch of fairly, I don't know what the right word is, not-ideal, slash hacky, things that we had to do in order to get the CSI proxy to do all the things that should be possible to do. So, you know, at least just from a code modularity perspective...
F
...we would be happy to move away from that. I mean, if there are security concerns, I don't really know much about that at this point to be able to comment knowledgeably. But I think, at least based on what I understood, it seemed like it would be a good thing for us to move away from using the CSI proxy, and we thought that the whole point of having this feature on Windows was that it would allow us to do that.
E
Yeah, the basic issue is that Kubernetes security people don't like privileged pods, and the only way to not have your node plugin be privileged would be to have something else be privileged, like the proxy. And then, if you have to decide between some community-maintained, open-source component being privileged versus some closed-source, vendor-proprietary thing being privileged, I think you would always choose the open-source, community-maintained thing, right? Because it's easier to find bugs, it's easier to fix bugs, it's easier to do testing.
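For readers following along, "privileged" here refers to the container security flag that CSI node plugins typically set so they can perform mounts on the host. A minimal sketch using the core/v1 types; the container name and image are hypothetical:

```go
package sketch

import v1 "k8s.io/api/core/v1"

// privileged illustrates the flag being discussed: a typical CSI node
// plugin container sets this so it can mount volumes on the host, which
// is the access a proxy (privileged on the plugin's behalf) would avoid.
var privileged = true

var nodePluginContainer = v1.Container{
	Name:  "csi-node-plugin", // hypothetical name
	Image: "example.io/csi-driver:latest",
	SecurityContext: &v1.SecurityContext{
		Privileged: &privileged,
	},
}
```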
F
Makes sense to me. But I think at this point, isn't this a GA feature within Kubernetes? And, yeah, yes. And I thought that Azure, and maybe, I think, Google also, both support this; is that accurate?
E
Privileged containers, yeah. Privileged containers are valuable for other reasons, right, but I just think that there should be an option to continue using the proxy. And I wish we had a proxy on Linux, so that you could have unprivileged CSI plugins, because I think that would be a step forward for security. I lost that argument when I made it last time, because there's significant effort to build and maintain that proxy, and no one, I guess, is sufficiently incentivized to do that.
F
Ben, I think I'll probably talk to you in more detail to get more context on this. If you feel strongly about this, and if you feel that it makes sense to have something like this on Linux, then maybe we should talk about it a little bit more, to see if that's a direction that the community might be interested in. Sure, sure. But yeah, I think, at least for the foreseeable future...
F
We
will
still
try
to
implement
this
capability
within
the
AWS,
EBS,
CSR
driver
and
and
then
we'll
like
I
said
it's
just
a
question
of
finding
the
resources
to
do
it.
So
awesome.
A
Thanks
a
lot
for
the
update
there
in
the
progress
Manu
and
thanks
for
raising
the
concern,
Ben
I
think
it's
worth
the
interior.
Continuing
that
discussion.
A
No problem. Thanks for the update. Next is CSI migration: remove in-tree GCE. "PR out for review" is the last status. Do we have Matt? Doesn't look like it.
A
I'm going to mark this as no update, unless someone has an update.
A
Okay, next is VMware vSphere in-tree removal; this is tracking for 1.30. Same for Azure, that's for 1.30, so we'll ignore those. Then CSI migration for Ceph RBD; the KEP is for RBD off by default, plus end-to-end tests. Any update on this one?
A
Great, cool. Thank you for that update, Shane. And I guess a heads-up for folks: Ceph RBD is being deprecated rather than moved to CSI. So if that is a concern to you, please speak up. CephFS as well, I assume; similar situation. Yeah.
A
Cool, thank you for the update, Shane. And again, a reminder for anybody using Ceph: the in-tree plugin will get deprecated unless you speak up.
A
Okay, next item is honoring the PV reclaim policy. We have Deepak; any updates on this one? It looks like we were looking for volunteers.
A
Thank you, Jan, for the update. Next we have quality of service for volumes, so Sunny and Matt. I think the latest status here is that the KEP is ready for review, and the CSI spec changes are also in review.
A
I think there is an offline discussion on the order of operations, because the KEP deadline is this week and it looks like the CSI spec changes won't be merged until then. So I think we agreed that the KEP will get merged, and the CSI spec changes will continue, hopefully getting merged soon. If the CSI changes are not able to be merged in time, we will effectively block the KEP and punt it to the next cycle.
F
Got it. One quick question about this: in the last discussion that I was part of, it sounded like we were going to go with the parameter approach. I haven't had a chance to review the latest version, but I'm assuming that that's what we are going with.
F
Okay, great. So the only question that I have is: once we have approval on the CSI changes and the KEP is officially approved, what's the next step? Do we start with the implementation at that point, or is there anything else? Yeah.
F
Got it. And then, if there are specific things that we want to discuss about the implementation, those can still be brought up as part of subsequent meetings here? Exactly.
A
Yeah, you can bring them up in subsequent meetings. You can even bring them up in the KEP now, if you just want to open up discussion, or on the PRs themselves for the implementation; any of those forums would be okay. Sounds good.
E
Makes sense, thank you. I was also going to say: you don't have to wait for the KEP to be approved or the CSI spec change to merge to start working on the implementation. Sometimes issues are found by doing your prototype implementation, and then you discover something important that was left out of the design. So, yeah.
F
That makes sense. I do have one thing that I want to bring up about that, but it can wait until later; I don't think we need to spend time talking about it right now. So, sounds...
A
Good. All right, thanks. Next, we are going to talk about robust volume manager reconstruction. This is remaining in beta for this cycle.
A
Okay, then we have PV last phase transition time. "KEP needs update" is the last status; anything?
B
I marked that the PR is done. What is this one? We want to have in the PV status a timestamp of when the phase was last changed. So when it goes from, I don't know, Bound to Released, we want to have a timestamp of that event.
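A rough sketch of the shape being described, paraphrased from the core/v1 types; the field name follows the lastPhaseTransitionTime proposal, and its exact placement here is an assumption rather than the merged API:

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// PersistentVolumeStatus sketches the proposed addition: alongside the
// existing Phase field, a timestamp recording when Phase last changed,
// e.g. the moment a PV went from Bound to Released.
type PersistentVolumeStatus struct {
	// Phase indicates if a volume is Available, Bound, Released, or Failed.
	Phase v1.PersistentVolumePhase
	// ...existing fields (Message, Reason) elided...

	// LastPhaseTransitionTime is the timestamp of the last phase change.
	LastPhaseTransitionTime *metav1.Time
}
```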
A
Cool, thank you, Jan, for that update. The next item is volume expansion for StatefulSets. This would be in design for this cycle. It looks like, as of the last status update, the design was in progress and needed some answers from SIG Apps. Anyone have an update on this one?
A
Okay, I'll mark that as no update for this cycle. Next we have non-graceful node shutdown; anything new on this one?
C
So they don't want to target alpha in 1.28, but there is a KEP there, yeah. The KEP itself is ready for review.
A
Got it, okay. So we'll keep that in design for this cycle, and anybody interested, please help with reviewing that KEP. With that, I think we're done with reviewing the status of the items for 1.28. Going back to our agenda doc, it looks like we have a couple of items for discussion. First is this issue from, I believe it was Anish who added this?
G
So one of the issues that we recently encountered: we log the incoming RPC request, and the outgoing response as well, and we also use the protosanitizer that's offered in csi-lib-utils. Recently we started using the token requests field, right, which was added in CSI. Initially it was added because they saw that in our CSI providers we were generating the token, and then it made more sense to add that as part of the CSI spec and all of that, right?
G
So recently what we saw was that these Kubernetes service account tokens are part of the volume context, and not part of the secrets field, and the volume context field is not really marked as a secret, which means the protosanitizer does not sanitize anything that's in that map. Because of that, if someone enabled higher logging verbosity in our driver, the service account tokens were being logged. We had a CVE for that, and we fixed it.
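A minimal sketch of the failure mode just described, assuming a driver that logs requests through csi-lib-utils' protosanitizer; the logNodePublish helper and the literal values are illustrative, not from any particular driver:

```go
package main

import (
	"fmt"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"github.com/kubernetes-csi/csi-lib-utils/protosanitizer"
)

// logNodePublish is a hypothetical logging helper. StripSecrets masks only
// fields tagged csi_secret in the CSI proto (such as Secrets); VolumeContext
// carries no such tag, so anything inside it is printed verbatim.
func logNodePublish(req *csi.NodePublishVolumeRequest) {
	fmt.Printf("NodePublishVolume: %s\n", protosanitizer.StripSecrets(req))
}

func main() {
	logNodePublish(&csi.NodePublishVolumeRequest{
		VolumeId: "vol-1",
		// Masked in the output as "***stripped***".
		Secrets: map[string]string{"password": "hunter2"},
		// NOT masked: this is where the kubelet places service account
		// tokens when a CSIDriver object configures tokenRequests.
		VolumeContext: map[string]string{
			"csi.storage.k8s.io/serviceAccount.tokens": "<JWT would appear here>",
		},
	})
}
```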
G
I also looked across other CSI drivers, and there are a bunch of other CSI drivers which use this token requests field. So it is possible they could run into this issue if they are just logging the volume context field at some point, without knowing.
E
So, like, and that's not encrypted; that seems like a bigger hole than the fact that it might get logged in a log: the fact that it's just there, unencrypted, in the PV, visible to anyone who has PV read access. That's right. So maybe these things shouldn't be in the volume context, is what I'm saying.
G
Yeah, so, as I was saying: there is another field, the secrets field; maybe that is a better candidate, because that is used today for the node publish secret reference. Or is there a different way? But I do agree that this shouldn't be part of the volume context, because the volume context is just a map, and it keeps growing as we add newer fields, and then users who consume it don't really know what is part of it. So they just end up logging it.
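For reference, a minimal sketch of the existing secrets plumbing mentioned here, using the core/v1 types; the driver name and secret reference are made up. The kubelet resolves nodePublishSecretRef and delivers the data in the RPC's dedicated Secrets field, which the protosanitizer does mask:

```go
package sketch

import v1 "k8s.io/api/core/v1"

// csiSource sketches how a CSI PV names a secret today: the kubelet reads
// the referenced Secret and passes its data in NodePublishVolumeRequest's
// Secrets field, rather than in the unsanitized volume context.
var csiSource = v1.CSIPersistentVolumeSource{
	Driver:       "example.csi.driver.io",
	VolumeHandle: "vol-1",
	NodePublishSecretRef: &v1.SecretReference{
		Name:      "mount-credentials",
		Namespace: "kube-system",
	},
}
```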
G
And, given that this change has already been out there, if we have to modify it, I think we have to do it in a backward-compatible way, and maybe across a couple of releases. But I opened this issue to start a discussion, and then maybe we can follow up on Slack or something to see how we want to do it, and I can help lead the effort.
G
The service account tokens are added to the volume context from the kubelet code, right? Basically, when the kubelet calls the CSI driver to say "hey, mount this volume for me," the service account token is part of the volume context. So the change would need to be in the kubelet, so that we don't use the volume context for secrets and these kinds of fields, but rather have a dedicated field.
E
That makes... yeah, but the way that that field gets populated is already specified in the Kubernetes CSI glue layer, right? We have these objects that get created.
E
That's where the kubelet will go look for the secrets, to read them out and then shove them into that particular field of the RPC for that CSI driver. I don't know if you can overload that with some special other source of secrets, because we already know where the contents of that field are supposed to come from.
E
I
know
that
this
is
this
is
this
is
not
a
regular
CSI
driver
right?
This
is
like
in
a
ephemeral
volume
driver.
So
I,
don't
know
if
that
changes,
the
rules
I've
never
written
one
of
those
before
I.
Don't
know
if,
like
the
logic
in
cubelet
is,
is
different,
but
I
just
want
to
caution
you
and
say
that
Kubler
already
knows
how
to
get
the
contents
of
that
field
set
and
we
might
not
be
able
to
just
to
just
change
it
right.
G
Yeah, I mean, we don't necessarily have to use that field; I understand that, right? That field was added specifically for the node publish secret reference, and that makes sense. We don't have to use that field; we probably need a different field, just so that things like tokens and all of these, which are highly privileged, don't get included as part of the normal volume context.
A
Sure, I understand both of your concerns. I think what Ben is saying is that it may be non-trivial, and I think in the issue you mentioned the same thing, which is that we'll have to figure out backwards compatibility if we make this change. That's right.
E
This is a fundamentally hard problem, moving secrets around at the CSI layer. I've never been thrilled with the mechanism we have, but if it can be used, obviously that's the easiest path forward; if it can't be, we should consider doing something better, maybe. Yeah.
A
Yeah, this makes sense. And to clarify: this applies really only to Kubernetes ephemeral volumes?
G
This is the specific field I'm referring to. So I think, if you configure the audience here, you could do that for any volume.
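A minimal sketch of that field on the CSIDriver object, using the storage/v1 types; the driver name and audience are hypothetical:

```go
package sketch

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleDriver shows the tokenRequests knob under discussion: with this set,
// the kubelet requests a service account token for each listed audience and
// delivers it to the driver inside NodePublishVolume's volume context, under
// the "csi.storage.k8s.io/serviceAccount.tokens" key, which is exactly why it
// escapes the protosanitizer.
var exampleDriver = storagev1.CSIDriver{
	ObjectMeta: metav1.ObjectMeta{Name: "example.csi.driver.io"},
	Spec: storagev1.CSIDriverSpec{
		TokenRequests: []storagev1.TokenRequest{
			{Audience: "example.audience.io"},
		},
	},
}
```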
A
Does anyone have background on this token requests field?
A
Okay, I think this definitely needs to be fixed. Thank you, Anish, for highlighting that. Are you interested in helping drive the fix for it, or is this more of a call-out? Yeah, I can help with the fix. Awesome.
A
Okay, please feel free to come back to this group at any point if you need additional call-outs, discussions, anything like that. Otherwise, I think this is a high priority to fix; I'll actually go ahead and add it to our 1.28 tracking sheet as something we need to fix. Thank you for flagging this. Okay.
F
A quick question for Anish: do you think this is something that will impact storage drivers in general, or is it something that's more specific to the ephemeral drivers, or the drivers for ephemeral volumes?
G
I mean, at least for now, I looked at a couple of drivers, right? I think the CSI driver that's part of cert-manager is using it, and democratic-csi uses this in their driver, and then I also saw the GCS FUSE CSI driver uses token requests. So there are a bunch of drivers that use it, so that they can use that service account token to exchange it. I think the primary use case is that they use the service account token for workload identity.
G
Sorry, they exchange it for a cloud provider access token, and then they can use it. They mostly use it for ephemeral volumes, but also for things like attaching disks, maybe.
A
Thank you, Anish, for helping with the fix here. Yeah. All right, next item is from Ben: eliminating the Windows proxy. Oh, we just discussed this.
A
We can track it for the future. Yeah, let's continue the discussion offline on this one. Any other items for discussion today before we go?
F
I had a quick question about an old issue that folks recently reported against AWS, that we think is surfacing here; I just posted a link for it in the chat window. We have somebody reporting this behavior against the AWS EBS CSI driver, and we think this is the issue. Looking at this issue, I'm not sure if there was ever any resolution on it, and I was curious...
F
...if anybody else from the thread... it seems like folks have reported this in the past, but not recently, and I was wondering if anybody had any thoughts about what the long-term fix should be, and who should own that.
F
It seems like there's a race condition here that is driving it, and I think there was a comment from Michelle further down in the thread about that. So...
F
If you could share that, you know, maybe through Slack or something, I think that would be really helpful. I'm just trying to understand if this is something that is there as part of the upstream code, and if so, you know, what the right fix should be. So...
F
And we looked into it, and I mean, we don't know for sure if it is exactly this issue, but the symptoms seem to match up, and our current theory is that this is what they're hitting. So we need to confirm that; but in the meanwhile, if we can get more details around what was done to fix this, I think that would be helpful. So...
F
I can probably get more details on that, yeah, if necessary.
F
Yeah, yeah, I can definitely provide those details. Give me one second... oh, actually, you know what...
C
I found a fix. This is in the external-provisioner. Basically, it prevents the delete call on the CSI plugin for volumes that are still attached to a node. So let me add it here, but...
E
But because node deletion is much less common than volume deletion, that's probably why it's gone unsolved for so long, and we could probably use a similar approach to what we did for volume deletion to protect against node deletion.
F
About the exact scenario, I'll reach out via Slack, on the Slack channel. So...
A
Okay, so I think let's continue the discussion on this offline, and anything new that we learn, let's add back to the original bug, whether it's, you know, ways we could potentially fix this, or some other fix that may have partially addressed it, just to keep the bug updated. Yeah.
F
Yeah, we're trying to confirm if this is really the issue that we are hitting, so once we have more details we can update the issue. Okay.