From YouTube: SIG - Storage 2023-08-28
Description
Meeting Notes:
https://docs.google.com/document/d/1mqJMjzT1biCpImEvi76DCMZxv-DwxGYLiPRLcR6CWpE/edit#
A
All right, we're at the three-minute mark here, so I think we can go ahead and get started. Welcome, everyone, to the August 28th KubeVirt SIG Storage meeting. I have the agenda shared on the meeting now; if you have any other topics you'd like to discuss, please feel free to add those. I am, however, really excited to get into this first topic, so thank you for adding it, Michael. Why don't you go ahead and get us started, if you would?

C
The problem is, I was hoping — I invited David and Boban to join, and David... I don't know, he may or may not be able to join, maybe at 8:30, you know, a half hour from now. Okay, so I don't know if there's any other stuff we can talk about before, or...

A
Yeah, I think it would be good to have him here if possible, since he's been a major commenter on the proposal. So why don't we go down to the CDI issues for a bit here and see if we can take care of some of that stuff first? And also, if anyone else has a topic, you can add that and we'll jump back and catch it at the top here. So I'm going to go ahead and open this issue where we left off.
A
Well, I was going to say, I have some déjà vu about doing this, and as I recall the conversation kind of wandered around the need to investigate where the time is being spent, but we currently don't really have any instrumentation in the CI yet to understand that. I think maybe there was a casual agreement to investigate further, and I'm not sure if that's happened.

D
Yeah, I definitely didn't invest more time in it. I don't remember if I brought this up, but this goes back to older branches, so pre-populators. It's the same: I have this PR open on the 1.56 branch and I can see the same thing — the wait-for-first-consumer lane is taking a whole lot longer.
A
Just as a really simple thing, does Ginkgo allow you to add timestamp logging to each of the By statements in the tests, so that each time it prints out the detailed steps of the test it would actually write a timestamp? I don't know if we could somehow just turn that on; then we'd actually be able to see.

D
Yeah, I'm not familiar with the tunables for Ginkgo, but...

D
And if we don't have a Ginkgo argument to enable this, we can always do it through bash somehow. I think I've seen that in the kubevirt repo; they have timestamping of everything.
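For reference, a minimal sketch of what a timestamped By step could look like in a Ginkgo suite. This is an illustration, not existing CDI code; the byWithTimestamp helper name is made up, and it only relies on Ginkgo's By function.

```go
package tests

import (
	"fmt"
	"time"

	. "github.com/onsi/ginkgo/v2"
)

// byWithTimestamp is a hypothetical helper that prefixes each By step with a
// wall-clock timestamp, so per-step durations can be read from the test output.
func byWithTimestamp(step string) {
	By(fmt.Sprintf("%s: %s", time.Now().Format(time.RFC3339), step))
}

// Example usage inside a spec:
//   byWithTimestamp("creating the DataVolume")
//   byWithTimestamp("waiting for the first consumer pod")
```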
F
If possible, we should probably also set the seed on the tests so they don't get randomized, so we can actually compare the wait-for-first-consumer lane versus the non-wait-for-first-consumer lane where it's doing the exact same thing in the exact same order.
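As a sketch of what pinning the seed could look like, assuming Ginkgo v2, where the suite can override its configuration in the bootstrap function (I believe Ginkgo also exposes a --seed CLI flag for the same purpose); the seed value 4242 here is arbitrary.

```go
package tests

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

func TestFunctional(t *testing.T) {
	RegisterFailHandler(Fail)

	// Pin the randomization seed so the WFFC and non-WFFC lanes
	// execute their specs in the same, comparable order.
	suiteConfig, reporterConfig := GinkgoConfiguration()
	suiteConfig.RandomSeed = 4242

	RunSpecs(t, "CDI functional tests", suiteConfig, reporterConfig)
}
```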
A
Okay. Is anyone interested in taking on a couple of next steps with this?

A
Okay, thanks. Do we know the degree of slowness?

D
Yeah, I have some timings above.

D
We do expect the wait-for-first-consumer lane to run a little longer, because every consumer pod takes around four seconds to spin up, delete, and so on. So we expect some slowness, but that should be like a 30-minute difference, and I was seeing an hour, hour and a half. That's pretty consistent through our CI. Okay.

A
All right, okay, so I think we've identified the potential next steps on this one. So let's pop back up, and — okay, this was...

C
2848, because we already did the next one.
A
Right, okay. Alexander, you did a lot of commenting here, so let's see, maybe you can give us the latest.

F
So the latest is he's using a local kind or minikube cluster, or something like that.

A
Okay, all right, so let's go to "API server fails to discover Kubernetes."

A
Responded — nice, okay. So this one, I believe it was commented on last week, so I guess we can give it a little more time for the reporter to try out what you requested.

D
Couldn't we bump the RPMs? I think it's really easy nowadays with the setup we have in CDI.
A
So, okay, that kind of brings up a question: how are we handling those kinds of older branches? I don't think we have a documented policy about how long, or to what degree, we'll maintain security fixes or other updates on previous branches.

A
But that would include all the interim branches, not just the stable branches that we're backporting to, I guess, right? So...

D
Yeah, I think if we want the Go version CVEs fixed, then we have to backport stuff, like a builder bump — you know, the builder we use to run all the make targets in CDI — so I think that's intrusive. I'm not sure.

A
Yeah, I guess the question is — maybe we should take a look at KubeVirt, whether they have any kind of statement or policy on that, and see if we want to conform to it, or at least mention that you can expect a certain number of previous branches to be kept up to date with security fixes.

A
Okay, all right, sounds good, thanks. So that's all the issues; at this point I'm going to go back to the agenda.
D
Yeah, so basically this is only reproducible on, like, the OpenShift Slack bot clusters.

D
It's like ephemeral OpenShift clusters — I'm not sure what the... oh, it says GCP, so it's a GCP ephemeral cluster. They're using a storage class called standard-csi, and that maps to... I can't remember the name. Not something we test very heavily. But anyway, what happens is that at first we thought their container images were malformed — they had a bunch of directories there was no need for — but after fixing that they still see this issue, so that wasn't the problem.

D
The problem seems to be copying from the scratch space. Once we have the layers — the container image layers — and we copy the chunky file from the scratch space, we just hit EOF, which is...

D
It's an io.Copy call that fails with unexpected EOF; you can see that in this comment. I don't know — I think it's a bad sign about the storage. I don't think we can derive a CDI bug out of this: if you're accessing the scratch space and it's mounted correctly, how could it...?

D
Basically, the image should be there, but it's not — I guess that's what that EOF means — and it doesn't seem to resolve itself. So if there was some issue with the backend storage, it's been persisting for a couple of weeks now.
A
The only thing that pops into my mind is that this reminds me a little bit of bugs we had in the past where we had to put in an fsync call after an import, because the import would say it was complete, and then when you actually tried to run the disk, if the VM was scheduled on a different node and the IO hadn't fully written out, you might not be able to see it on the disk from the other node, because it's still in the first node's page cache.

A
So one thing I wonder is whether, if we added an fsync call after extracting the file to the scratch space, that could help resolve the issue.

A
I don't know how the underlying GCP storage works, but if they are doing some kind of IO caching, and then we're quickly trying to access a file that was written from a different context... I'm sure they're doing some crazy, complex things underneath in order to scale or whatever, so I do wonder.
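For context, the fsync idea is just the standard pattern of syncing the file before declaring the write complete. A minimal, hypothetical sketch of what that would look like around the scratch-space write — not the actual importer code, and the helper name is made up:

```go
package importer

import (
	"io"
	"os"
)

// writeAndSync is a hypothetical helper: it copies the extracted image to the
// scratch space and calls fsync before closing, so the data is flushed out of
// the page cache before anything else tries to read it.
func writeAndSync(dst string, src io.Reader) error {
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()

	if _, err := io.Copy(f, src); err != nil {
		return err
	}
	// Force the written data to stable storage before reporting success.
	return f.Sync()
}
```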
D
I think that information is missing for this environment, but the default storage class was picked up for the scratch space, and the same one was used for the data volume — the target data volume.

A
Yeah, Dusty's from the core team, and I've actually been working with him on an issue related to LVM activation bugs. That might be why he's been testing our stuff a little bit more.

D
With the node pull method — yeah, it works with node, but the problem with node is that they need to supply some credentials.

A
Yeah, I think this mirror redirection is interesting, but also — if I'm trying to understand — there's a performance disparity between the pod pull method and the node pull method? Is that what we're finding?

D
Dominic just jumped in and said that we possibly have a performance degradation. I don't think that's the case, but...

A
Okay, all right, I wanted to check on that, because that would almost make me more suspicious of the caching issue: if one method is super fast, is it fast because the IO is still pending, and the IO is the bottleneck or something? So I do think it would be worth... but I mean, it would be kind of difficult to test.

A
Well, we would just have to provide a version that does that, but it could be a little difficult to get it picked up in this type of environment — a version where it's adding an fsync.
C
Just so I understand the flow, because I think I forget: on the registry imports, for this case, we download the tar layers and extract them into scratch, and there should be a qcow2 file in there — and then do we use nbdkit to serve that, and then do a qemu-img convert from nbdkit?

F
I'm fairly certain we've never used nbdkit in the registry import flow. Actually, no, I'm positive we haven't, because we only start the nbdkit server in the HTTP import flow.

C
So right, so the way that works is, I guess, we open an HTTP client to download the tar file, and we peek into the tar file and write the image to...
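Roughly, that flow looks like the sketch below — an illustration of streaming a tar layer over HTTP and copying the disk image entry out to the scratch space. This is not the actual CDI importer code; the function name, the URL, and the file-name matching are placeholders, and real layers may also be compressed.

```go
package importer

import (
	"archive/tar"
	"fmt"
	"io"
	"net/http"
	"os"
	"path/filepath"
	"strings"
)

// extractImageFromLayer is a hypothetical sketch: stream one layer tarball and
// write the first disk image entry it contains into the scratch space.
func extractImageFromLayer(layerURL, scratchDir string) error {
	resp, err := http.Get(layerURL)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	tr := tar.NewReader(resp.Body)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break // no more entries in this layer
		}
		if err != nil {
			return err
		}
		// "Peek" into the tar stream: only copy out the disk image entry.
		if strings.HasSuffix(hdr.Name, ".qcow2") || strings.HasSuffix(hdr.Name, ".img") {
			out, err := os.Create(filepath.Join(scratchDir, filepath.Base(hdr.Name)))
			if err != nil {
				return err
			}
			defer out.Close()
			_, err = io.Copy(out, tr) // an unexpected EOF here is the failure being discussed
			return err
		}
	}
	return fmt.Errorf("no disk image found in layer")
}
```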
A
That makes sense. Yeah, I was trying to understand the ".go: unable to write file from data reader: unexpected EOF" error — okay, yeah.

C
Then that EOF is probably coming from the server side: it's reading from an HTTP stream and writing to the scratch space, and it got an EOF while reading from the network, is my guess.

A
Is it possible that the image file is split across multiple layers? Is that even a thing — like, let's say they decide the...

A
Okay, I was just kind of wondering if maybe — yeah, in that case it should maybe have come out as zeros or something, but we are using a low-level interface there. And I guess it's weird that this only happens here, because we can import this exact same image in a different environment and have it work.

D
Yeah, so they opened the issue without mentioning the credentials, but they do have some flows that require credentials. That's why pull method node isn't working for them across the board. But this is reproducible without accessing a credential-protected registry.

A
Okay, so we have some notes here and thoughts on it. I'm trying to understand how we can take a single step towards a solution, whatever that would be.

D
Michael's theory sounds right, but what could we do if this is like an intermittent network error? Could we keep retrying on that EOF, or would that be too nasty?

A
Well, shouldn't this failure cause the data volume to retry? Is it, like... is the...

C
I guess what I'm wondering — I assume... I don't know much about this code, but it seems that we're using some library to make this HTTP request and download the file.
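If retrying inside the importer were the direction, a minimal sketch might look like the following. This is purely illustrative, not existing CDI behavior; the download callback, the retry count, and the backoff are made up, and the alternative discussed above is simply letting the data volume controller restart the import.

```go
package importer

import (
	"errors"
	"io"
	"time"
)

// downloadWithRetry is a hypothetical wrapper: it retries a download a few
// times when the copy dies with an unexpected EOF, on the theory that the
// error is an intermittent network (or caching) hiccup.
func downloadWithRetry(download func() error, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		err = download()
		if err == nil {
			return nil
		}
		if !errors.Is(err, io.ErrUnexpectedEOF) {
			return err // only retry the specific failure being discussed
		}
		time.Sleep(time.Duration(i+1) * 5 * time.Second)
	}
	return err
}
```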
A
It fails... why are there credentials for the CentOS repo anyway, if that's where it's coming from — cloud.centos.org?

D
So the original issue is using the quay containerdisks, I think, and some other flow they have requires the credentials — like some custom-built stuff. It's not something that's mentioned here in the issue.

C
I think testing the same image, authenticated and unauthenticated, would be interesting.

C
Or specifically, yeah, one that does not require authentication. But that would be interesting too — I mean, maybe another registry with authorization, but...

A
Okay, any other suggestions we want to make? I see that David has joined, so I'd love to engage with the other topic — thanks for joining today. I'm going to add this as a comment, and then we'll jump back to the principal topic for today. So yeah, let's go ahead with that one. Go ahead, Michael.
C
We may want to, I don't know, take a step back and think about some more fundamental issues, and also, maybe if we're on the same page, we can dig into some of these technical discussions too.

C
This is just my interpretation of some of the takeaways I've had from the PR so far — the link to the PR is there. I could give a background, but I think most of the people on this call know what data volumes are and know what populators are. And David, feel free to jump in whenever. So I'll start off by saying I think one of the main issues is that by adding... so we have populators...

C
We've had data volumes for a long time; people are familiar with them. Now we have these volume populators, so we essentially have two ways of doing the same thing, and generally that's...

C
Best case, it can be confusing; worst case, it's not a good thing. So I think maybe as a community, as KubeVirt developers and users, we should think about the best path forward.

C
Again: data volumes are familiar; populators are new. They're like the community standard — we've been waiting many, many years for them, they're here, they're still relatively new, but I think they're going to become the familiar pattern for initializing persistent volume claims. So populators are new, and then, in using data volumes all these years, we've... you know, there...

C
...let's just say, for now, there have been some issues. So when populators came around, I think some of us said: okay, populators are here, they're the standard, it makes sense for all your Kubernetes applications to just use populators.

C
Data volumes have their issues — let's just deprecate them and move forward with populators. But in going through this PR, I think maybe we need to have more of a discussion about the second bullet point here: can, or should, we fix data volumes? So I think the main issue that we have with data...

C
Well, the main issue we have with data volumes in their current incarnation is that they basically cause issues with backup and restore programs, specifically at restore time.

C
The issue is specifically that we don't allow you, by default, to create a data volume if there is a PVC with the same name. It's a very easy issue to solve — there are annotations that deal with it if you use one of our plugins.

C
Partners know this behavior and they've handled it, but out of the box, data volumes as-is — it's a problem.
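For readers following along, the behavior being described boils down to a check along these lines. This is an illustrative sketch of the rule, not the actual CDI controller code, and the annotation name used here is hypothetical:

```go
package controller

import "fmt"

// Hypothetical annotation name, for illustration only.
const allowExistingPVCAnnotation = "cdi.example.io/allow-existing-pvc"

// checkPVCConflict sketches the rule discussed above: creating a DataVolume
// named "foo" fails if a PVC named "foo" already exists, unless an opt-in
// annotation (for example, set by a backup/restore or DR tool) says the
// existing claim may be adopted.
func checkPVCConflict(dvName string, dvAnnotations map[string]string, pvcExists bool) error {
	if !pvcExists {
		return nil
	}
	if dvAnnotations[allowExistingPVCAnnotation] == "true" {
		return nil // restore/DR flow explicitly opted in
	}
	return fmt.Errorf("PVC %q already exists; refusing to create DataVolume with the same name", dvName)
}
```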
C
There's also the DR issue — at least with metro DR, and Alexander can maybe speak more to the regional DR case. The metro DR case is kind of fixed with data volumes using populators internally.

C
But I think the same issue is relevant to at least the async, or regional, DR as I understand it, because the way that works is that PVCs are snapshotted on the source system — VolSync is the application that we use — and it doesn't know...

C
...anything about data volumes, so it's just going to snapshot a PVC and send it to another site. But to make the failover work you have to do some metadata munging, and you essentially encounter the same issue as we have with restore, where VolSync is going to restore this PVC and then your GitOps or your pipeline or whatever is applying is going to...

C
...apply the data volume manifest, and you're going to get an error. So you have to deal with this — maybe Alexander can speak more to it, but there's some little wonkiness there.

F
Yeah, I worked around it by using the data volume templates and having the controller essentially eat the error — it'll be like, oh, okay, the data volume can't be created because the PVC already exists — and it doesn't generate an error at the GitOps level; the error at the controller level is essentially ignored. So...

C
Yeah — but obviously with standalone data volumes it definitely can be a...

C
Could this specific behavior be fixed? Yeah — I mean, the question is, could it be fixed in a way that makes sense? To me, I think most of the time you would want this failure to occur, because if a PVC named foo exists and you create the data volume, it would be named foo as well. Well, if a PVC named foo exists and a data volume named foo does not exist, someone else is probably using that PVC — that's my general intuition. Data volume...

C
...garbage collection throws a whole weird monkey wrench into this, and that complicates things too. But could we change the behavior in some way where this restore situation works? Yes. I think there will be tradeoffs, and the mechanics of changing this behavior, because it's an API that has been around for five years, are going to be complicated.

C
Whether we should do it or not — it's just going to be a lot of work to, for example, figure out a way to make data volumes work in this... to fix data volumes for...
A
...this particular issue. I wonder, just to get into that a little bit: I understand that we have a current set of behaviors that we've been using for some period of time, and we don't want to upset anyone who's happy with that current behavior.

A
However, we already introduced one annotation to get data volumes to behave well, without populators, in the DR flow, and that could be optionally added. So if we had a particular annotation that says we want to enable — we'd still have to name it, and I'm not going to waste time naming it here — this new set of behaviors, whatever that is, it could be experimental at first and subject to change.

A
So if you opt into that behavior, you're opting into an experiment to work better with backup and restore and with DR, potentially at the risk of the compatibility you used to enjoy. If we do that, then we could work incrementally to develop this, because one of the big challenges of making these changes is that there are multiple backup and restore options.

A
There are multiple disaster recovery options, and so what may seem to work really well for one scenario might turn out to have a snag with another one, so it may take us a while to get this right. So I'm wondering if we had an experimental annotation behind which we could adopt different behavior — and once you have something like this, it would be opt-in.

A
But when you have opinionated KubeVirt deployments with something like the HCO, we could potentially introduce an API that automatically annotates data volumes with that particular annotation, or that turns this behavior on at a global level, so that people don't have to deal with the bad UX of needing to remember some crazy annotation. So I'm just wondering whether this could be a mechanism to introduce incompatible behavior in a way that can be opted into.
C
So I think — I don't know, me personally — I think the annotation on an individual, case-by-case level, that's totally fine, and I think it's something we could do, no problem. What I think is a little more concerning, or something we'd have to think more about, is automatically having some sort of global setting that does it, even on, like, OpenShift.
A
As a team we've been struggling with these issues for a while, trying to get it right; we've taken several different stabs at the problem, and it's difficult. I've always thought that until we use populators and until we solve this backup/restore issue, we should not call ourselves v1, but I think that after we come up with what works here, we could. So that's why I wonder: while we're trying to figure this thing out, we could potentially hide it behind an annotation, but it'd be nice if people could easily consume it if they like it, until we get it settled, and then maybe in time for v1 the behavior could change to use that always, and you'd have to opt out of it...

A
...if we decide to keep the old rules around. But at some point I'd like to shed it — it's a lot of cases to cover, like: did they have the annotation or not? So eventually it'd be nice to shed one of these. Like, we had an early idea that the PVC should be recreated if it's deleted, because that seemed like the declarative thing to do, and maybe a valid use case if somebody wanted to re-import the data, but these days I'm not sure I'm aware of anyone who's really using...
C
I could imagine that being used, but I don't know for sure. So yeah, this is something we could do. Me personally, I am very hesitant to change something on a global level like that without a sort of specific versioning behind it, but it's certainly an option.
A
Where a global option would help is if we wanted to, for example — let's say we get the behavior the way we think it should be, behind an annotation applied to a single data volume. If we want to run a test in the community with people and say: we are thinking about switching to data volume v1, and we would like to adopt the behavior behind this optional annotation permanently in v1, so we're asking everyone to...

A
...please try this and let us know if you run into any issues — that would be a way to do this. Occasionally, projects have a planned incompatibility; it does happen, and we try to avoid it, but at least if we do it in a careful way and give people a large enough window to test it and see, then we could have a clearer conscience about it.
C
...if the user wants to do it, maybe — yeah, I don't know. But again, this isn't something we have to solve here; it seems like this is a discussion: we could fix data volumes, but should we? Again, even what you're describing, Adam, is not a small effort, you know? Right...

A
Well — sorry, I was just going to point out that Shelly Kagan did try to fix data volumes in the past, and she ran up against some staunch opposition from the community about changing current behavior. That's why we started to look at ways to move forward with that preservation of behavior — so that is one of the things that stopped it in the beginning.
C
Yeah, right. So the implicit thing is, even with the knowledge that populators are out there — and they have their own issues or whatever — can and should we fix data volumes, knowing that populators exist and that this is the new standard, and so on? So...
B
So I need to jump to another meeting in a second, but if I might just add some thoughts: my chief concern when I was reading this proposal wasn't so much that we were getting rid of data volumes or creating another data-volume-like API. It had a lot to do with maintaining the utility of data volumes in our VM API — the ability to express the source and destination, the source being a container disk or HTTP and the destination being the PVC, declaring that inline in the VM spec. So the whole thing...

B
That was one issue, and the other issue was: if we're going to have a data-volume-like API — volume claim templates kind of overlap with the functionality of data volumes — I'd like for us to figure out a way to completely get rid of data volume templates. So what would be the transition path? Those are the kinds of things that I would like to explore.

B
If we go down the volume claim template path: what's the transition from data volume templates to volume claim templates, and can we achieve that long term? What I don't want is some sort of fragmentation where we have data volume templates and volume claim templates, and we have to explain this kind of nuanced...

B
...this nuanced behavior between the two to customers: use data volume templates if you need to do this, use volume claim templates if you need to do that — or only use volume claim templates and ignore that the data volume templates thing exists.
C
Right — and that is one of the points of data volumes: you can create this VM and you get everything all in one.

C
Well, yeah — we talked about that a bit, yeah.
B
The other concern, when I looked at volume claim templates, is that the proposal mentions a few things like wanting to preserve some data volume behavior, like the storage profiles and things like that. Maybe that's optional, I don't know, but it makes what looks like a pure Kubernetes API have a little bit of magic behind the scenes — it still depends on CDI. So it looks like we're pure Kubernetes, but then we require extra modules to get the behavior that people would expect.

C
I definitely think that is valid, and honestly, that whole magic part was really, I think, the main motivation for adding them. And I wonder...
A
I wonder if we could switch to where we keep the data volume templates section and its API, but under the covers we actually create PVCs and populator CRs, and just don't actually create the data volume object. That would be...
B
I would consider that a transient step. So we'd have data volume templates and we'd have volume claim templates, and what would be happening is they'd be analogous — if that's the right word; it's this early and I haven't had enough coffee — to each other. Basically, they would be the same thing behind the scenes, and eventually, when you drop data volume templates, there's no functional change in behavior; it's just the volume claim templates.

B
That would also give us a conversion webhook — the conversion from data volume templates to volume claim templates, say if we did a VM version two, would be really easy to translate between the two if they're using the exact same functionality behind the scenes.
A
Yeah, you still have the issue of not being a pure Kubernetes thing when you suggest volume claim templates, because one of the key features we have to maintain is the checking in with storage profiles, because it's incredibly helpful.
B
Good discussion — I think you guys get some of my points. All of these are possibilities; I'm not shooting anything down, and I'm not trying to steer one way or the other. I'm just asking some questions to make sure that we have a transition plan and are thinking long term about what these API changes mean for us. That's all.
C
So I guess what I'm wondering is: without this full path laid out, can we proceed with volume claim templates or not?

B
I'm hesitant to, because I think we might get ourselves into a situation where we have both of these APIs and they work subtly differently from each other, and then it's something we have to maintain forever. The API fragmentation is what I'm worried about, where we have to explain these differences to people, and then it's just: why do we have two APIs that do almost the exact same thing but have these slight variations?
B
I want to see it thought through all the way, so that in the long term there's only one API that we use on the VM. I think that's possible. It doesn't mean that it all has to be implemented at once; it just means that we have to have a plan that will very likely succeed long term in getting us to that...
A
...point. All right, thanks for joining, David; we do appreciate your input on this — it's really insightful and important.

A
Bye. And for the rest of us, we are slightly over time, so I'd like to end here. Thanks for the participation, and we'll continue this discussion for sure. So thanks, everyone, and we'll catch you later. Bye-bye.