Description
Meeting of the Kubernetes Storage Special Interest Group (SIG) working group for Container Storage Interface (CSI) implementation - 25 January 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Michelle Au (Google)
A
Today is January 25th, 2021. This is the Kubernetes CSI implementation meeting, so we'll go down our usual agenda of release statuses, starting with 1.19.
A
The only thing remaining here is fixing the CSI Proxy build. I believe Jing has been working with the test-infra folks to figure out how to get the proper bucket promotion set up, so hopefully we'll be able to finally fix this soon.
B
You know, but for CSI Proxy, are they going to update klog, and which klog version is it on?
A
Yeah, so I took a look at the repo. They're using klog v1, but they also don't have anything that's using klog v2.
A
So technically I don't think they have an issue, and they can just update it whenever they want, but I don't think it's causing any bugs or anything like that right now. Okay, but yeah, they were not planning on doing a release this cycle; I guess they don't have changes or anything. So I think we can defer upgrading klog to the next release.
B
But if someone is looking for some work to do, that's something I found.
A
All right, let's see, moving on to 1.20. I have a PR out to update our docs with all the new sidecar versions.
A
Status: yeah, I just got a bunch of test failure emails, so, yeah, but anyway, we're making some progress, so awesome. Also, we had some other sidecars, repos that we wanted to release around this time. I guess we have the hostpath driver and csi-test. Are we able to? Are there still pending things there?
C
I think there were things going on around the snapshotter and a potential update of csi-release-tools; that at least is pending, and it was supposed to go into, I think, csi-driver-host-path. Perhaps we shouldn't hold up a release because of that. I think the current approach was going to turn out to be broken, or, well, it was blocked for a while; I think the latest one might be okay. So anyway, long story short.
A
Okay, sounds good. One more thing about the hostpath repo: all the sidecars need to be updated.
B
Yeah, okay, because it's getting close. Actually, I was thinking about merging it. The reason it's still open is to get some reviews; I think Nick also reviewed it, and I think Patrick also looked at it. There was some...
C
But that was the one that was problematic for a while, because I think it was making an assumption about how, or whether, the external health monitor or snapshotter mounts some files, whether they were local or not. I think the latest version is fine. I think I reviewed everything and it looked good to me, so yeah, I would argue we should include it.
B
Something minor; I don't think we released a description, it's not really the code. Okay, so let's just merge it. And then the other one, the one about the snapshot, that's about the namespace, right? It has to do with the namespace.
A
The test, I think there are tests, I don't know which one is merged. Oh, I need to check. I think he submitted a test in host path, and then you said "looks good to me" to that test PR, but I think the one in release-tools is still not merged, so you want to take a look at that one. The test one is okay. If you think the changes are okay, then get that merged, and then we can get the csi-snapshotter one merged.
A
Okay, all right, awesome. Let's see, moving on to the klog v2 stuff. I think we got everything except for these two repos. I opened up an issue in the SMB driver; I believe Andy took care of it, so now that's done. And then CSI Proxy: like I mentioned, it's not critical because they don't have a mix of v1 and v2 going on, but it's a nice-to-have.
A
So, Xing, if you want to have someone work on it, that would be great.
B
Which one, the CSI Proxy? Oh, it's just that someone was asking why it's not updated, and I thought it was already taken care of, so I said don't worry. Now that you're saying this, I can just tell that person: if he wants to, he can go ahead and update it.
A
Yeah, yeah, if they're looking for something to do, then that's fine. Cool, all right. Okay, moving on to 1.21: I think our biggest focus for the next two weeks is getting the enhancements ready. So feature freeze is in two weeks, on Tuesday the ninth, or is it the fifth? I can't count.
A
So we need to have all the KEPs merged. We need to add each feature that we want to graduate to the new release tracking spreadsheet, where we have to add the enhancement issues to get tracked by the release team. And then the other part is that we need to have the production readiness review also done, I believe, by the feature freeze date.
A
Okay, sounds good. So yeah, let's first go down the open PRs that we have. First is the safe volume data recovery.
A
Awesome, all right, cool. And then we have a new e2e test to do performance measurements.
A
Right, I guess it looks like this just needs review from me, probably. I guess one question is, since this is adding a performance test: in general, for performance tests, do we want to put them here in the e2e framework, or do we want to use the scale framework, like ClusterLoader?
C
I think it does. I'm not entirely happy, or not entirely sure, how portable that metrics support is. In my experience it sometimes depended on SSH access to the cluster, and that part did not always work for me. But in theory, if it works, you basically get metrics out of it, in contrast to the e2e framework, which doesn't have anything.
A
Okay. Maybe, Patrick, since you've had experience running both kinds of tests, would you be able to, I don't know, put your ideas here about the pros and cons of using ClusterLoader2?
A
Sure, yeah. I guess the main thing is that we should decide whether or not we should add this performance measurement here, or whether we should try to set it up in ClusterLoader2 instead.
A
Okay, sounds good, all right. Next up we have creating a PVC with a data source. Oh, I have not submitted the PR yet. Okay.
A
Oh, is Joey not here? Okay, there was a meeting on Friday discussing this. I think the main action item is for all the clouds to come up with a plan on how they're going to reach the code removal deadline by 1.24.
A
I think, regarding the 1.21 release, there's a lot of work that Jiawei has planned, but I think none of the features are going to be promoted to a new phase. So I don't think we need the release team to be tracking the work. I think here it's mostly just about enabling more testing and getting more coverage of the beta features.
B
I think for this one it's just some PRs that are being reviewed. I think Grant has a PR on the e2e test for the metrics, so I'm looking at it, and yeah, I'd like to have others also review that one.
B
I think in general it looks good. I just have some comments: we probably should also add this in the CSI mock driver, so we could add some negative tests. Right now, for the hostpath driver, there's a positive test. And there is also the e2e test for the secrets.
B
So I've already reviewed it; I think it looks fine. I see, Michelle, you added a few others to review, so we'll wait for others to take a look as well, but I think that one is okay, getting close.
A
Sounds good, and feel free to ping them on Slack if you don't hear back.
C
Yeah, okay. I finally got the PRR reviewer to look at the pending PR, and he left some comments about what he expects. We agreed that I update the KEP to target beta, which I wanted to do anyway; I just wasn't sure whether I should first get the old PR merged and then update it. My next step is the PRR, the production readiness review, for beta, I think, for ephemeral volumes.
C
We don't really need that much more work, technically. At least, I think it's fairly stable and I don't expect many changes. The question is what the readiness requirements demand in terms of additional work, and I'm not quite up to date on that yet, so I'll see when I do the PR update for the KEP.
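For context on the feature being graduated here: a generic ephemeral volume is declared inline in the pod spec via a PVC template. A minimal sketch, where the image and the storage class name are placeholder assumptions, not from the meeting:

```yaml
# Illustrative pod using a generic ephemeral volume.
# The image and storageClassName are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
    - name: app
      image: k8s.gcr.io/pause:3.2   # placeholder image
      volumeMounts:
        - mountPath: /data
          name: scratch
  volumes:
    - name: scratch
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: example-storage-class   # placeholder
            resources:
              requests:
                storage: 1Gi
```

The PVC created from the template shares the pod's lifecycle, which is what makes the emptyDir comparison later in this meeting relevant.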
A
Yeah, one month after feature freeze, yeah.
A
Cool, all right, and then we have volume populators.
A
All right, is the KEP ready for review yet?
G
No, I mean, that's what I'm doing, but it's not done yet.
A
Okay, sounds good. And then, I guess, Xing, is this already on the spreadsheet? I don't know.
B
Okay, where is it... so for some reason we have a lot of KEPs in there compared to...
F
Yep, I opened a KEP for fsGroup on mount, like supplying fsGroup to NodePublishVolume and NodeStageVolume.
A
All right, cool. We'll go ahead and add that to the spreadsheet. Next is volume expansion.
F
Yes, so I got some help; some people from Microsoft pinged me. The plan is that this quarter we are planning to work on moving the allowVolumeExpansion field, copying it from the StorageClass to the PV and using that. But I have yet to decide whether that will be under an alpha feature gate. I hate to do that, but I may have to talk to Jordan or someone and figure out what the API machinery folks say about that.
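The field under discussion lives on the StorageClass today; a minimal sketch of where it currently sits, with the StorageClass name and provisioner as placeholders:

```yaml
# Illustrative StorageClass; name and provisioner are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-expandable
provisioner: example.csi.driver.io
allowVolumeExpansion: true   # the field planned to be copied onto the PV
```

The proposal discussed here is to copy this value onto the PV at provisioning time, so that resize decisions no longer depend on the StorageClass still existing or being unchanged.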
F
I have created the cards, like issues, for all those individual items in the to-do section. You can see: copy the allowVolumeExpansion field from SC to PV; the secret in NodeExpandVolume; and there's already another person working on node... maybe online resize. No, not that one, sorry, that actually may not be an issue, but we can talk about that later. There's the issue that you filed long back, so we are going to default to, or we are already kind of on the way to defaulting to, online expansion.
F
Yep, and the one that Xing filed about the PVC being deleted, yeah, someone from Microsoft is working on it. Basically, my plan, by the way, from talking with Jan and other folks, is to create a resizing working group until we can get the whole thing to GA. Basically, two people from Microsoft volunteered to work on this; I'm working with them to review and do whatever is necessary, but there's no long-term commitment from them.
A
Awesome, cool. I guess, do we need any feature enhancements for any of this, or are we...
F
For copying the field we may need an enhancement; I'll update it today or tomorrow.
A
All right, cool, thanks for the update. Next we have volume health.
B
So yeah, we have that PR to add the volume health support in the hostpath driver, so I think we'll get that one merged, and I updated the KEP. I see that the person who is reviewing it for production readiness has actually already given an LGTM to me, so that's good. And I see, Michelle, you have some comments; basically, it's mainly about how we are going to coordinate with SIG Node on some of the reporting, right?
B
Okay, yes, we probably need to ask them; I don't know what they're doing with that. And also you're saying that there is a problem with the monitor adding a pod informer on the nodes? You think that's a problem?
A
Yeah, yeah, that's a big performance problem, to have a...
A
Yeah, yeah, that's just, I think, one thing where I think we need to see what monitoring work SIG Node is doing. Okay.
B
Okay, actually, that reminds me, Ben reminded me of that: we had this discussion for COSI on monitoring the pod.
B
There we were talking about, you know, that finalizer thing; in COSI we talked about the monitor adding an informer for the pod, and Michelle was saying that's not recommended, or just...
A
I mean, my concern is more from a scalability perspective than a security perspective. It's mainly that creating an informer on all pods on all nodes will completely flood and drown the API server when you have a lot of nodes and a lot of pods. So, sure.
B
Unless... I think we had this discussion even when we were discussing this: do we want to make it a beta requirement? That would be easier, because of that we are saying we add those events to the pods, because otherwise we could add those events on the PVC instead. Sorry, so yeah.
A
There is a security concern about nodes being able to see information about workloads that are not running on that node, because the thing about controllers is that someone can actually run singleton controllers on dedicated nodes that are in a different security group than the rest of the nodes.
A
The nodes that have to run user workloads, we have to sort of treat them as if the user workloads could be malicious, and if a breakout occurs, we don't want those malicious workloads to be able to see data from other workloads that are just running in the cluster.
A
All right, but yeah, I guess regarding this one, I think we just need to sync up with SIG Node and see what their current monitoring plans are, and make sure that we're aligning with that. Okay, all right, thank you.
C
It's fairly obvious that whether this feature works well depends on what your workload is, how many nodes with the driver you have, and how many volumes you create per second. It may work for some cases and not for others, and I'm still unsure what that means for the feature: whether it's required that, for example, it works for every single pod in a cluster with hundreds of nodes with local volumes managed by LVM.
A
Okay, I mean, I think this feature, especially in combination with ephemeral volumes, has the potential to be really powerful and very widely used, in which case I would expect people could actually use it as an emptyDir replacement.
A
So at least from my perspective, I can see people wanting to use this as an alternative to emptyDir, in which case the scalability requirements would be pretty high. For beta, I don't think we need to block graduation if we can't meet those scale requirements, but I think it's something we should work on improving while the feature is in beta. Does that make sense?
C
Okay, yeah, I'm fine with that. It basically means that for beta we probably want some test that covers that, but it's not a blocker if the numbers don't live up to expectations or just show that it's not ready yet. That's fine; I think that's reasonable.
A
Yeah, I think maybe for going from alpha to beta we should at least define some goals, and then, while it's in beta, work on meeting those goals, yeah.
C
Another question that came up that I need to revisit: I think we discussed that if you have something like a driver that is running on a node, with distributed provisioning, and you have storage capacity tracking enabled, the current setup is that the storage capacity objects will be owned by the pod running on each node.
C
If you update the driver... and it would be nice to avoid that. My current idea for achieving that would be to drop, or to move, the ownership to a corresponding app object, which would be the StatefulSet, or, for distributed provisioning, the DaemonSet, which is the same setup that we have when having a central component, where it's also owned by the Deployment or the StatefulSet.
C
The question, then, is how we delete objects for nodes where the driver is no longer available, and I think that can be taken care of by the other still-running drivers. They just need some kind of garbage collection where they collectively look at existing objects from time to time, match them against nodes, and then check whether the DaemonSet still has a pod running on that node. I think that should be doable.
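A sketch of the ownership change being proposed, with all names and the UID as placeholder assumptions: the per-node capacity object would carry an owner reference to the app object (here a DaemonSet) instead of the individual pod, so it survives pod churn during a driver update:

```yaml
# Illustrative CSIStorageCapacity object; names, namespace, and uid
# are placeholders. The API group/version depends on the release.
apiVersion: storage.k8s.io/v1beta1
kind: CSIStorageCapacity
metadata:
  name: example-capacity-node-1
  namespace: example-driver
  ownerReferences:
    - apiVersion: apps/v1
      kind: DaemonSet        # app object instead of the per-node pod
      name: example-csi-driver
      uid: 00000000-0000-0000-0000-000000000000   # placeholder
storageClassName: example-storage-class
nodeTopology:
  matchLabels:
    kubernetes.io/hostname: node-1
capacity: 10Gi
```

With the DaemonSet as owner, built-in garbage collection only removes objects when the whole driver is uninstalled; cleanup for removed nodes is what the collective garbage-collection pass described above would handle.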
C
I was not sure I wanted to make it a beta requirement, because that implies... PMEM-CSI does still have a central component; it's where we implement the scheduler extensions. But long term, my thinking was that a CSI driver deployment could be as simple as just having a StatefulSet, and it would be nice to not force a CSI driver developer to install a central component if it can be avoided, and if it's not too costly to run that on multiple different nodes.
C
Well, so far I've worked on the external-provisioner. The external-provisioner used to have a node informer, and for distributed provisioning I parameterized that code so that it doesn't need the node informer, because it knows that it's always running on the same node.
B
But all the other informers that we're currently using in the central controller, those will be replicated, right?
A
Yeah, I think it just depends; it's going to depend on the number of objects and also how frequently those objects get updated. So, like...
C
It's not... Because the external-provisioner needs to know when it can delete PVs, and it's the same code as in the central case: it watches PVCs and PVs and compares them to figure out which of the PVs no longer have a PVC and are in the Released state, or whatever the deletion criteria is, and that's the check that it runs. It only needs to do that for volumes on the node, but there is currently no good way to filter for those.
A
It's something that we need to keep in mind when we discuss and outline the potential scalability limitations of the feature. So I think we should just call it out, and then we can see where we go from there.
A
All right, so it looks like roytech has some questions here. Is there anything I can help with on any of these?
C
I'll start looking into updating the KEP probably tomorrow, and then we will need to do both a technical review, if I decide to make changes like the ownership model, and the PRR review will also then need to be handled. So I guess, yeah, at some point this will land on your desk to be reviewed again.
A
All right, sounds good. So that's the main thing that we have with this, is that right?
C
Yeah, and then, of course, some updates in various sidecars, the external-provisioner mostly, and in core Kubernetes it's pretty much just the beta update.
C
Yes, just to recap, the question was: if we get a size back from the CSI driver, what is the meaning of that? Is it that the next volume, if it's smaller, can be created, or is it giving you the total capacity, which you only get by creating multiple volumes, perhaps? Depending on that, we can use the value for different purposes.
C
And my task is basically to clarify that in the CSI community and then, perhaps based on an updated CSI spec, update the usage of that value in Kubernetes. So yeah.
C
My personal interpretation is that we should extend the CSI spec, or actually, we should clarify the meaning of the return value. We should keep the current value, which is kind of ambiguous, but we need it for backwards compatibility, and then add new fields to the GetCapacityResponse with the total available capacity and the maximum volume size, and those two values would then have a precise definition.
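A rough sketch of what such a CSI spec clarification might look like; the new field names and numbers here are hypothetical, not the actual spec:

```protobuf
// Hypothetical extension of the CSI GetCapacityResponse; the new
// field names and numbers are illustrative only.
message GetCapacityResponse {
  // Existing field: meaning is ambiguous today, kept for
  // backwards compatibility.
  int64 available_capacity = 1;

  // Proposed: total capacity that could be consumed, possibly only
  // by creating multiple volumes.
  int64 total_available_capacity = 2;

  // Proposed: size of the largest single volume that can be
  // created right now.
  int64 maximum_volume_size = 3;
}
```

Splitting the two meanings would let Kubernetes use the per-volume maximum for scheduling decisions and the total for reporting.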
A
Awesome, all right. Let me know if I can help in terms of reviewing anything or trying to resolve some of these blockers. Next up we have the non-graceful node shutdown; there was a discussion about this last week.
B
Yeah, I think Yasin is supposed to update the KEP, incorporating the other comments, so I'll ping on that.
A
All right, sounds good. And we need to add this one to the spreadsheet too. Oh, it's already added? Okay.
A
Okay, awesome, that's good. All right, we are well over time; thanks everyone for staying late. Are there any other issues that anyone wants to quickly bring up?
G
So, Xing, you may have not been here last week, but we're observing a problem where, if you return an error like Unsupported from CreateSnapshot, the sidecar just spins and calls create over and over.
B
Oh okay, yeah, you open the issue, and yeah.