From YouTube: Kubernetes SIG Storage Meeting 2023-01-12
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 12 January 2023
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.yilfuaafqpay
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
A: All right, let's go ahead and kick it off. So today is January 12, 2023. Happy New Year, everyone, and welcome back. Today we're going to go over the 1.27 planning spreadsheet. We are at the beginning of the 1.27 planning milestone. If you have enhancements that you want to go into 1.27, please ensure that your enhancement ends up with a lead-opted-in label.
A: You can reach out to Michelle, myself, Xing, or Jan to help get that done, and the important dates to be aware of are listed here. The upcoming one is February 2nd, which is the production readiness freeze. That means all the features that we want to go into 1.27 must have an enhancement issue and have their KEPs approved by this date. So if you have features that you want to get in, please keep those dates in mind.
A: Last time, if I remember correctly, Xing copied over the incomplete items from the previous cycle as a starting point for 1.27, so we'll go over these to make sure that they have owners assigned and still make sense for 1.27. If you have anything that you think should go into 1.27, now is a good time to speak up, and we can add it to the bottom as work being tracked by the SIG. With that, I'll go ahead and kick it off.
A: So first up we have recovering from resize failures. Let me create a new column here for today's date. And we'll drive that to Beta this cycle, so cool, thank you, John. Next up is issues related to assuming volumes are mount points. Is Jing on the call by any chance?
A: Awesome. Next up, PVC VolumeSnapshot namespace transfer, which we ended up dropping. Is this still a feature anyone is interested in helping drive? If so, now might be a good time to volunteer and help shepherd it along.
A: So for provisioning volumes from a cross-namespace snapshot or PVC, the last status was one PR pending on the populator, almost ready, and it may need more work to move it to Beta. So two questions: did the alpha work get completed last cycle, and do we want to move to Beta this cycle?
F: I think the work only got partially completed last cycle, so I think this cycle we will continue finishing up all the pieces.
F: And there are a couple of things we need to do before we get to Beta. We need to talk with SIG Auth to get the ReferenceGrant API that we're depending on moved into the SIG Auth space, because right now it's currently under SIG Network. I think that's the big thing we need to resolve before we can go to Beta.
A: Okay, so I'll leave this as Alpha for this cycle, and we'll continue to work on it.
A: Okay, then CSI volume health. Is that something we want to pick up again for this cycle or not?
A: Okay, so the programmatic response piece, let me go ahead and drop that for now; if we decide to pick it back up, we can reintroduce it. And volume data source, add metric support, testing out of tree: this ended up getting dropped.
A: Awesome, thank you, Akash. Is there any particular milestone that you're driving to for this cycle?
G: I think there will be nothing much coming in for this cycle.
A: There's work going on on this, yep, makes sense, so let's keep tracking it, and Alpha makes sense here. Next is new RWO access modes. I think this is in progress; is it targeting Beta in 1.27?
F: I think he is driving this, and he actually did a lot of work last cycle to prepare for the Beta promotion. Nice, so yeah.
F: But I don't remember if the KEP has been updated yet and has gone into reviews for the 1.27 cycle.
F: He's off right now, yeah. I need to double check with him, because he's going to be out for a while.
A: Cool, thank you for the update, Michelle. Next is runtime-assisted mounting. Deep?
H: Hey, so yeah, this one is really blocked on the CSI spec changes at this point. I haven't seen any reviews on the CSI spec PR, so I would really appreciate that, if possible.
H: So I was thinking, if the CSI spec does not get reviewed and approved, I don't think we can go with 1.27, so it's kind of dependent on that.
B: So Deep, do you want us to add the opt-in label for 1.27? We can take a shot, right? We can just see if it can go in. Sure, okay.
A: Thank you, Deep. Thank you, Xing. Next is CSI Proxy for Windows, transition to privileged containers, out of tree. The last status here was design spec out for review, feature branch in CSI Proxy. I'm assuming Mauricio is still going to continue to drive this for 1.27.
H: Yeah, I think the KEP is out and it has gone through some reviews. It's looking pretty solid right now; I think it's just waiting on approvals. Got it.
A: Next is the CSI Proxy performance issue. I think this has had no update for a long time; I don't think it even has an owner. Is this still an issue? Is this still something someone's going to work on?
C: Okay, so I would like to continue in 1.27. We have been testing the current approach, and maybe we will need to change the enhancement and stay Alpha for one more release. Out of curiosity, is there anyone here who uses SELinux in Kubernetes? I know about one; is there anybody else?
C: Nope? All right, I'll find people to talk to, or fine, maybe on Slack, about how to approach ReadWriteMany volumes. Maybe there is a way to speed up the proposal.
B: So we do have a customer asking for this one, but I haven't gotten to try this yet.
A: Okay, moving on. CSI migration: anything new here for 1.27?
A: So I'm going to remove the core one; it sounds like we don't have much more work there. CSI migration, remove in-tree GCE: Matt, are you on the call? Is that something we want to do for 1.27?
I: I don't think so. I mean, if I understand, this is actually tearing out the code? Yep, yep, yeah. I suspect we're not going to have bandwidth to do that. Okay.
F: I think we also said it needs to be two releases after GA, so...
A: I think delaying the removal is not a big deal. We should try to get to it soon, but it's not the end of the world. So I'll go ahead and remove it for the cycle, and then hopefully we can pick it up in 1.28.
A: I mean, we could add it if there is someone who is interested in taking this on as a contribution; this would be a good project.
A: I'll just leave it here and say: looking for volunteers.
A: All right, cool. Next is vSphere Windows support and raw block Beta on by default. This was merged for 1.26, docs PR merged, release notes added. Anything new for 1.27?
A: And remove AWS EBS, same thing; I'm guessing we're looking for volunteers.
B
Oh,
the
well
Bob
is
University
right.
Well,.
A: Cool, thank you for catching that, Xing. And Wayne, you were saying something about AWS? Yeah.
A: Got it, okay. So I'll leave it as looking for volunteers for now, until we have that person identified. Is that okay? Perfect, yeah. All right, cool. Then next up we have Azure Disk, which has been in GA for two releases, okay to delete, so I'll go ahead and remove that. And I'm guessing it's the same story here; we're just waiting for...
B: Right, as well, so probably 1.28 then.
A: Okay, and then I'll go ahead and remove the Azure File one. Then we've got Ceph RBD and CephFS; the last status here was that we'll move these to 1.27. Do you know, Xing, if Humble is still going to pick this up?
B: Oh yes, he said for Ceph RBD he's going to target Beta in 1.27. Okay.
A: Got it, so Beta for 1.27 and TBD on this one. Okay.
A: Cool, thank you, Xing. And then we've got Portworx. I think we kind of lost track of this, as we had no Portworx folks on the call. Anyone from Portworx here?
B: I think the contact was Oksana; I'm not sure whether she wants to target anything this cycle.
A: So let's uncross it for now.
A: Actually, this is Beta, so...
B: I was actually going to check on this one. It's just that he has a dependency on the next one, because Rona is adding this e2e test framework. Got it.
A: And then for the next one, is that confirmed?
B: Yeah, that one is fine; the e2e tests are in progress.
A: Perfect, thank you, Xing. And then we've got a set that we had crossed out, so let's see if we want to keep them or not. Secret protection, prevent deletion while in use: this depends on the in-use protection, and the in-use protection...
A: We kind of lost track of this; Masaki, I guess, hasn't been around for a bit. Anything new on this?
A: Okay, I'm going to propose that we drop this, unless someone else wants to pick it up or we hear back from Masaki. Both of these. Any objections?
A: I'll go ahead and delete that. Next up is SIG Auth, user ID ownership in ConfigMaps and Secrets. This one, let's see... again, I think we lost track of this; nobody was driving it. Same thing: anyone interested in driving it? If not, I'll go ahead and drop it for now.
A: Okay, I'm going to drop it for now. The remainder here are... actually, let's pull these two up; we'll put the cross-SIG ones at the bottom. So, better default StorageClass: wait for one more release before moving to GA. Do we want to do anything else here this release, or is this just for tracking some work next release?
I: Actually, there's a note in the agenda. I think we'd like to do a KEP for 1.27; we should have a draft of that available. As you all know, we've been having a discussion and a meeting on that. We haven't reached consensus, but I think we've explored the ideas enough that a possibly productive way forward is to have a KEP that we're going to propose, and then we can have a concrete discussion on that. We should have a draft out by the end of this week, tomorrow I guess, and we'll try to schedule a meeting in the next week or so.
A: Yeah, and just to be clear, is the target Alpha for this cycle, opportunistically, or just design?
A: For volumes, okay. Cool, thanks, Matt. So the rest of these are co-owned between SIG Storage and other SIGs. First up we have SIG Node, non-graceful node shutdown. I think this was adding end-to-end tests and waiting one more release for GA, so I think this is just for tracking this cycle.
G: Yeah, still not tracked, actually, the rootless mode, so yeah. Got it, so it's not going to go anywhere. Okay.
A: Oh, it's already Alpha, so it would be: no work in 1.27, target Beta in 1.28.
A: Okay, and then is it the same thing for this one, or different?
G: Yeah, they're working on it, but no, it's not changing any milestone. Got it, okay.
A: And then the final two that we have are SIG Apps. First up, address issues where PVCs created by a StatefulSet will not be auto-removed. This will target...
I: Yes, yeah. On the current work: it turns out the CSI migration for GCE has kind of screwed up a lot of the Alpha end-to-end testing stuff, so I'm trying to sort that out now. The issue that I just posted in the chat has the details. Cool.
I: Yeah, and as I said, the issue here involves redoing a bit of how the CSI storage test framework works, so please comment if you have questions. I think John has started a discussion there, so thank you for that.
G: This release I want to work with the design and, if need be, find new owners. There's already a PR open, so I need to find out if that person is still interested, and then obviously review it and everything. So we'll keep it at design for now.
A: Got it, okay, so I'll leave this open for now. With that, does anyone else have a feature that they think we should be working on for 1.27 that's not tracked here?
E: Perfect, so we'll open a tab for 1.27.
A: All right, with that we'll go ahead and switch back to our agenda. Two topics: first up, Jan, on anyone interested in adding a release timestamp to PVs.
C: So yeah, we've got a customer who wants a custom policy for when the garbage collector releases PVs. They want it to be fairly configurable, and that's probably out of scope for this call, but they need to know when a PV was released, like a timestamp or something like that. There are many ways to do it.
C: One of them is completely out of tree: use some annotation or something and watch PVs. That's easy. But is there anybody interested in having a field in the PV status recording when we moved the PV to the Released state, or something like that?
A: It seems like not a bad thing, and I don't think it would hurt anyone to have one more piece of information.
C: Okay, I have just one issue with new fields: that field wouldn't be used in Kubernetes itself, it would just be exposed, and that's one downside. I'm not sure if it's good design to have a field that's not used, I think.
A: If it's in status, that's okay, right? The intention of putting it in status is that it could be either informative to the end user or for programmatic use outside of Kubernetes.
C: Right, so yeah, I might as well design something, and that could actually be a 1.27 feature, because this is just one field somewhere, or maybe a new condition on the PV. You just need a few lines in the PV controller to do it.
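The "few lines in the PV controller" could look roughly like the sketch below. It is modeled in Python rather than the controller's actual Go, and the status.releaseTime field name is purely hypothetical:

```python
from datetime import datetime, timezone

def update_pv_phase(pv: dict, new_phase: str) -> dict:
    """Set a PV's status.phase and, on the transition into Released,
    record a (hypothetical) status.releaseTime timestamp."""
    status = pv.setdefault("status", {})
    old_phase = status.get("phase")
    status["phase"] = new_phase
    if new_phase == "Released" and old_phase != "Released":
        status["releaseTime"] = datetime.now(timezone.utc).strftime(
            "%Y-%m-%dT%H:%M:%SZ")
    return pv
```

Stamping only on the transition into Released keeps the value stable across later syncs of the same object.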
B: One of the reasons that we gave previously was that since Kubernetes is not using it, we cannot add it as a first-class field. I was just wondering... I mean, I'm fine with adding this one; I'm just wondering, can you also add a volume health status in there? Of course, it would be a different picture.
B: I was thinking it seems similar to this timestamp field, not exactly the same, but very similar, I think. Yeah.
A: I think the concern on the volume health side might be that the programmatic use of it needs to be thought through a little bit more, to make sure people don't shoot themselves in the foot.
K: Yeah, so actually, it's an interesting point that you bring up. We are actually in the process of implementing volume health monitoring for AWS, and we have been looking at the current proposal. Right now, one of the challenges we have is that we just get a single bit to declare whether the volume is healthy or unhealthy, and calling a volume unhealthy has a lot of consequences in terms of how that is perceived by the application. So we'd like a little bit more granularity in terms of when we report status, as to what might be happening with the volume. This is just based on some preliminary thinking that we have.
K: We will provide a more detailed analysis of what we feel at some point in the near future, but I just wanted to mention it here, because you were talking about volume health monitoring. Yeah.
B: Yeah, sure, if you have some feedback, that'll be great. In the beginning we actually did have a range of fields, but before the feature was merged, we reduced it to just this one flag, with the idea that we can add more in the future. That was the initial design. All right, okay.
K: So that's good to know. We can certainly come up with some kind of proposal explaining what we are thinking and how we are thinking about it, and we can have that discussed here.
B: One challenge is how to use this field. Well, that's exactly what we're discussing here, right? That's the one thing where we have not reached consensus. But for us internally, we actually do have a volume health field that we use, which is just whether the volume is accessible or not accessible, and based on that, the higher-level application makes decisions about what to do with it. Yeah.
K: So one thing that I can tell you right now is that, from the AWS perspective, there may be something going on with a volume, and it may not be in good health, but the moment you declare a volume to be unhealthy, there can be consequences in terms of how that volume is actually handled: faults, alarms, etc. So you may not want to trigger something that indicates the volume is completely unusable. What would be helpful is some kind of progression, indicating that, okay, the volume is not healthy at this point, but we don't want any further action taken yet, and then leaving it up to the application to determine what the behavior might be in that case, or something along those lines. We can certainly provide something which explains our position better in a future proposal, if necessary.
L: To add to that: getting a common set was the sticking point. What we agreed to way back when we were originally designing this feature is that it only makes sense to have multiple levels of severity if they're standardized.
K: My first preliminary thought on that, and I may be wrong about this, is that if we can come up with some kind of framework where we don't have to specify the entire set right away, but just provide a set of values that we know for a fact would be helpful, and that can be extended in the future, I think that would be the way to go. You can certainly imagine scenarios where storage providers might want to do something specific to what their implementations want them to do, so having...
L
It
does
no
good
if
one
storage
vendor
implements
a
bunch
of
different
levels
and
then
custom
codes,
their
application
to
deal
with
those
levels,
it's
not
interoperable
with
anything
else,
and
so
whatever
we
Define
has
to
be
something
that
you
know
any
you
can
plug
in
any
storage
system
and
get
the
same
results
got
it
got
it
if,
if
I
mean
it
is
true
that
store
a
lot
of
individual
storage
systems
have
tons
of
detail,
but
the
right
way
to
extract
that
detail
is
through
some
proprietary
Channel
rather
than
this.
The
standardized
interface.
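The compromise being discussed, keeping the existing coarse signal while adding a small, standardized, extensible vocabulary, could be shaped like this sketch. The reason values and the helper are hypothetical, not the CSI VolumeCondition (which today carries essentially an abnormal flag plus a free-text message):

```python
# A small standard vocabulary that any driver maps onto; the set is meant
# to be extended over time rather than having vendors invent their own levels.
STANDARD_REASONS = {"Accessible", "DegradedPerformance", "NotAccessible"}

def volume_condition(reason: str, message: str = "") -> dict:
    """Build a health condition: the coarse 'abnormal' bit stays derivable
    from the reason, so consumers that only understand the bit keep working."""
    if reason not in STANDARD_REASONS:
        raise ValueError(f"non-standard reason: {reason}")
    return {
        "abnormal": reason != "Accessible",
        "reason": reason,
        "message": message,
    }
```

Because the single bit is derived rather than replaced, a richer consumer can branch on the reason while a legacy consumer still sees only healthy versus unhealthy.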
A: Details, yeah. The good news here is that now we have more than one company trying to implement the use of this feature, so I think moving it forward will be easier when you have at least one concrete use case. Yep.
B: I was just saying that in our case we actually just have this one flag, but we don't call it healthy or not healthy; we call it accessible or not accessible. So it's just this one value, and based on that we make decisions. This is already in production, yeah, but definitely, as we say, we probably need to...
A: Being more specific might help here, but yeah, that's good to know. Okay, I think that's worth a follow-up discussion. Going back to Jan's point, it sounds like his flag is simple enough that I think we should be okay with it. Yeah.
F: I'm sorry, I also had one more use case, kind of related to the original timestamp-on-phases thing. In general, one of the things that we'd like to do is really track end-to-end latency of how long it takes for operations to happen: how long does it take from someone creating a PVC to that PVC getting provisioned, or how long does it take from when a pod gets scheduled to when that volume gets attached to a node, that kind of thing. And I think one of the challenges is what we have today.
F: We do have operation metrics today, but they're only good at tracking a single iteration of the controller loop; they're not good at tracking time taken across multiple retries. So I think having, in general, in all of our objects, a timestamp for when things reach certain phases would help with that kind of observability.
C: So what I'm hearing from Michelle is that there should be a timestamp for each phase change: when the volume was bound, when the volume was released, and so on, and the same for a PVC, when the PVC was bound and... well, there is no other state. Deleted, maybe.
F: They don't, and they don't work if the controller crashes and restarts, because it had to keep a bunch of state in memory about an operation. So I think having it in the actual Kubernetes API would be more reliable. There are also projects like kube-state-metrics that I think could take advantage of it, if we actually had the information in the API itself.
A: Phases... folks, I've got a hard stop at 10. Okay, let's take this discussion to an issue, sure; it sounds like something we do want, we are interested. Matt, do you want to quickly comment on this design? Yeah.
I: This is exactly the thing I mentioned before about the KEP proposal. As I said, we'll have a draft out shortly. I put a link into the doc that captured the existing discussion meetings from last month, so please check it out and watch this space for the KEP; you'll see the KEP issue, and please review and give comments.
G: Matt, do you have weekly meetings or anything like that? I don't know if you're sending the invites to the group, the SIG Storage group, or... I don't know.
I: I haven't had a regular meeting. We had a meeting before the holidays, and this doc represents picking that up. If there's interest in having a weekly meeting as we refine the KEP, that would be great. So yeah, I will for sure post additional details here, but you haven't missed anything.
K: It would be great if we can get to some kind of convergence around this proposal. I think quite a few people have been asking for this capability for a while, so it would be great if we can come up with some way forward here. Cool, cool.
A: All right, folks, sorry for the hard stop. Thank you very much, and we'll see you in two weeks. Thank you. Thank you.