Description
Meeting of Kubernetes Storage Special Interest Group (SIG) Workgroup for Container Storage Interface (CSI) Implementation - 27 October 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A
If you have some other topics, please also add them in case. We cannot finish all of them, but we can continue next time.
A
Okay, so I think, as we mentioned, we want to talk about the issue Jan mentioned last time, related to CSI volume stage and unstage: the volume handle is the same, but it might be represented by multiple PVs. And the second item is the non-graceful node shutdown proposal.
C
And there hasn't been any change in that PR itself. It was just that when reviewing it last time we identified a few areas that might need more tests, and that was done in separate PRs, and that's now all merged and ready. So I think we can just go ahead with making it GA.
A
And so we need to review your PR.
A
So I just want to double-check: this one is the one we need to review?
C
Yeah, I think so. Well, the others are pending; I can add two more PRs for smaller ones that are pending. Right now my thinking is that the people who should be reviewing them still haven't done it, so I think I'll just try to get that merged through that bigger PR with Tim's help; that may be the faster way overall.
B
Yeah, I remember with that PR you need a lot of approvers, since many files are affected, right? I think with this PR the files are not even just in the storage directories, they're all over the place.
B
Have you pinged the people from those areas, like scheduler and kubelet?
C
And the GA one, the generic ephemeral volume GA, that is what SIG Storage should be reviewing. Okay.
A
Okay, sounds good. So, any other feature you want to give a quick update on, or any PR to review?
B
So I wonder, I don't know if he got his mic fixed, because we have not... oh yeah, I'm here. Okay.
D
Sorry, yes, I'm actually working on that. I have a PR open for the fsGroupChangePolicy going GA, and actually it's failing some unit tests, but I'll fix those; the PR is out and it should be pretty trivial to fix. I'm working on the PR for recovery from resize failures, and I'm mostly done. One of the things I was discussing with you this morning was our KEP.
D
Originally we proposed that we will only change the external resize controller for the recovery flow and will not change the in-tree one, and I think that's what I want to do for this release. But at the same time it means that any of the in-tree changes in the kubelet should not affect any of the in-tree drivers as far as expansion is concerned. So I'm just working on that bit, because that requires making sure the code path is separate, and things like that.
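(A minimal sketch of the separation being described, assuming a hypothetical feature gate and plugin descriptor; none of these names come from the actual kubelet code.)

```go
package main

import "fmt"

// Hypothetical plugin descriptor; the real kubelet types differ.
type volumePlugin struct {
	name  string
	isCSI bool
}

// recoveryFeatureEnabled stands in for a hypothetical feature gate guarding
// recovery-from-resize-failure.
const recoveryFeatureEnabled = true

// expandVolume sketches the idea discussed above: the new recovery flow is
// taken only for CSI volumes, so the in-tree expansion code path stays untouched.
func expandVolume(p volumePlugin) string {
	if recoveryFeatureEnabled && p.isCSI {
		return "csi expansion with recovery bookkeeping"
	}
	return "legacy expansion path (in-tree drivers unchanged)"
}

func main() {
	fmt.Println(expandVolume(volumePlugin{name: "ebs.csi.aws.com", isCSI: true}))
	fmt.Println(expandVolume(volumePlugin{name: "kubernetes.io/gce-pd", isCSI: false}))
}
```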
F
Thank you very much. So I created a separate issue for one of the issues I found. This one is about...
F
Everything else would work, because kubelet is smart enough to see that these two PVs have the same volume handle, so it is the same volume, and it calls NodeStage once. But unfortunately NodeStage is done into a directory that contains the PV name, so only one of the pods can actually run.
F
So I looked at where this staging path is used, and there is a case where, if kubelet cannot find the JSON file that is in the staging directory, it parses the PV name from the staging path and tries to read the PV from the API server, so it knows how to call the CSI driver to call NodeUnstage.
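(A minimal sketch of the staging layout and fallback just described; the directory names, the vol_data.json field names, and the helpers are illustrative assumptions, not the real kubelet implementation.)

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// Illustrative layout only; the exact kubelet directory names are assumptions.
const kubeletCSIDir = "/var/lib/kubelet/plugins/kubernetes.io/csi/pv"

// stagingPath shows why two PVs with the same CSI volume handle collide:
// the directory is keyed by PV name, but NodeStage runs once per volume handle,
// so only the first PV's directory is ever populated.
func stagingPath(pvName string) string {
	return filepath.Join(kubeletCSIDir, pvName, "globalmount")
}

// volData mirrors the idea of the vol_data.json file kept next to the mount;
// the field names here are illustrative.
type volData struct {
	DriverName   string `json:"driverName"`
	VolumeHandle string `json:"volumeHandle"`
}

// volumeHandleFor sketches the fallback that was described: prefer the JSON
// file, and only if it is missing parse the PV name out of the path and ask
// the API server (not shown) for the PV.
func volumeHandleFor(staging string) (string, error) {
	raw, err := os.ReadFile(filepath.Join(filepath.Dir(staging), "vol_data.json"))
	if err == nil {
		var d volData
		if err := json.Unmarshal(raw, &d); err == nil {
			return d.VolumeHandle, nil
		}
	}
	pvName := filepath.Base(filepath.Dir(staging))
	return "", fmt.Errorf("no vol_data.json; would look up PV %q via the API server", pvName)
}

func main() {
	p := stagingPath("pvc-1234")
	fmt.Println(p)
	fmt.Println(volumeHandleFor(p))
}
```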
A
Yeah, I just want to add a point: the in-tree drivers do not have this problem, because an in-tree driver stages into a global mount path, as we call it, that uses a unique kind of volume ID, like the source volume ID; it depends on the driver.
A
They will find that unique ID somehow. In CSI we use the PV name as part of the global mount path. Is there a particular reason, from back then, why this is the case?
F
To be able to call UnmountDevice. I think initially we didn't have this JSON file, so I think that was the reason: UnmountDevice retrieved the PV name from the directory name. But later on we introduced the JSON file in the staging directory, so I think we can pretty much rely on the presence of the JSON file.
F
No, that's not possible. I was thinking, what if somebody still has the same volume mounted by, I don't know, Kubernetes 1.13 or something like that, and just restarts kubelet without draining the node. I don't think that's possible, because we require nodes to be drained during updates, so yeah, this can't happen.
A
Oh, you mentioned: if kubelet restarts, what will happen if we don't drain the node?
F
Well, I was thinking there are two possibilities we discussed last time: either we do it in a minor version update, meaning 1.24.0, and require draining during the update from 1.23, or I could imagine some code that tries the old path before using the new path. I think it would be kind of error-prone, but it could work.
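(A minimal sketch of the second option — trying the old PV-name path before the new layout; both directory layouts here are assumptions for illustration.)

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Purely illustrative paths; the real directory layout is an assumption here.
func oldStagingDir(base, pvName string) string {
	return filepath.Join(base, "pv", pvName)
}

func newStagingDir(base, volumeID string) string {
	return filepath.Join(base, "volumes", volumeID)
}

// resolveStagingDir sketches the fallback idea: on kubelet restart, look for a
// pre-existing old-style directory first and keep using it, otherwise use the
// new layout. Error handling is intentionally minimal.
func resolveStagingDir(base, pvName, volumeID string) string {
	old := oldStagingDir(base, pvName)
	if _, err := os.Stat(old); err == nil {
		return old // volume was staged by an older kubelet; keep the old path
	}
	return newStagingDir(base, volumeID)
}

func main() {
	fmt.Println(resolveStagingDir("/var/lib/kubelet/plugins/kubernetes.io/csi", "pvc-1234", "hash-of-driver-and-handle"))
}
```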
B
Yeah, but if you don't handle those existing cases, then I just think there will be a lot of issues, right? Because this is not only for this particular file share thing, it's also for everything, right, all the block volumes; everything is affected. If we don't try the old path, I think we just rely on everybody draining the node, which I think is a very big assumption.
F
Oh well, we already rely on that during CSI migration.
B
So that was actually another thing. Today I asked Debian from my team to join us, and he was actually saying, yeah, since we are asking for that, can we kind of align those two changes, he was asking. But oh...
A
Otherwise, I'm just saying, without any change, just enable CSI migration.
D
So what about going in the other direction, and we actually do NodeStage twice for two different PVs, even if they use the same volume handle?
F
That would break CSI. In CSI a volume can be staged only once on a node; you can't stage it twice to different directories.
B
As for whether you do the stage or the stage is required: in our case actually, for the file shares, we don't do stage.
F
Yeah, I checked NFS, I checked EFS; they don't do stage.
B
I mean, every driver is... I mean, until we move everything to this, right, it's not a problem to keep that path, right? Why is it a problem to keep that path?
A
Yeah, I also kind of feel that using the PV name as part of the global mount path seems not correct.
F
You know, it's long, and I don't know, but then...
F
Volume handles — for in-tree volumes, the unique IDs are computed by the volume plugin, and they have their own way of doing it correctly. For example, in AWS we use the volume ID in the cloud, which is, I don't know, 20 characters total, and we are sure it contains only characters we can use in paths. So every single storage backend has something like that, which is usually very simple, just a concatenation of short strings. But in CSI we have this volume handle, which can be wrong, and actually in CFS...
F
I would need to check what they use, but I don't know what they use.
A
So we don't have any restrictions on the volume handle, like how long it can be? I think we do, but...
F
And yeah, the volume handle itself is not enough, because you must prefix it with the driver name, because two different drivers can agree on the same volume handle.
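(A minimal sketch of one way to build a path-safe, driver-scoped key from the volume handle, as an illustration of the point above; this is not the agreed design.)

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// stagingKey sketches keying the global mount directory by volume identity
// instead of PV name: hash driver name plus volume handle so the result is
// short, path-safe, and unique across drivers.
func stagingKey(driverName, volumeHandle string) string {
	sum := sha256.Sum256([]byte(driverName + "^" + volumeHandle))
	return fmt.Sprintf("%x", sum)
}

func main() {
	// Two drivers agreeing on the same handle still get distinct directories.
	fmt.Println(stagingKey("ebs.csi.aws.com", "vol-0abc"))
	fmt.Println(stagingKey("other.csi.example.com", "vol-0abc"))
}
```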
B
Is that okay, would that be enough? Could you be running multiple drivers on the same node? Not could you — running multiple of them is actually possible, right?
A
Yeah, I think you already put all the details in the issue, and I tend to agree that we want to change it to get rid of the PV name. But after you make the change, we can test in what situations it breaks and how we might address that.
D
Can it still recover from, like, an undrained node? Then it will have the wrong path. Will we have to code some fallback logic in the kubelet — is that what you're thinking?
A
Right, like we'll just make the bigger change first, and then we can test against that to know what will happen. We can restart kubelet, for example.
B
Because, yeah, I remember I looked at that part. Where is that, though? I forgot.
A
Yeah, right, for CSI it's probably different, yeah.
A
So yeah, I think Jan could go ahead with some changes to try out.
F
When two PVs use different volume handles but in the mount table they look the same — they have the same mount source — then kubelet, outside of CSI, checks, when it wants to unmount the device, whether it is mounted somewhere else. In this case it will find the other volume mounted, and it will not unmount the volume; it will always complain with an error, and you basically can't ever unmount the volume.
F
That's the problem here: we have two totally different volumes, but they have the same source in the mount table, so kubelet does this check. So I was thinking, what if we remove this check completely? Because kubelet already checks that all the pods that use the volume are gone and their local volume mounts are unmounted. We check it somewhere else, so what is this check actually good for?
B
So basically, just allow kubelet to call it as many times as it needs?
A
Do you have... I mean, the code is here?
F
It was somewhere in the operation generator.
F
Yes, well, it checks that the staging mount is the only mount of the volume on the system.
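(A minimal sketch of the mount-reference check being described and of why two PVs sharing a mount source defeat it; the mount-table representation and paths are illustrative, not the actual operation generator code.)

```go
package main

import "fmt"

// isLastMount sketches the check: before calling NodeUnstage, walk the host
// mount table and require that the staging directory is the only remaining
// mount of this source.
func isLastMount(mountTable map[string]string, stagingPath, source string) bool {
	for mountPoint, mountSource := range mountTable {
		if mountSource == source && mountPoint != stagingPath {
			return false // another mount of the same source exists somewhere else
		}
	}
	return true
}

func main() {
	// Two PVs backed by the same share end up with the same mount source,
	// so neither staging path ever looks like the "last" mount.
	mounts := map[string]string{
		"/var/lib/kubelet/csi/pv/pv-a/globalmount": "server:/share",
		"/var/lib/kubelet/csi/pv/pv-b/globalmount": "server:/share",
	}
	fmt.Println(isLastMount(mounts, "/var/lib/kubelet/csi/pv/pv-a/globalmount", "server:/share"))
}
```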
A
Yeah, typically we do that check. I'm thinking about the reason we do it — it's kind of like a mount reference check. Is it because we want to avoid some race condition, because we don't trust the actual state, or because we want to verify physically that there are no longer any mounts elsewhere?
F
I'm just thinking, if we unstage the volume, then we also remove it from the node status and we treat it as ready for detach, but if it is mounted somewhere else, I don't know.
F
By anything on the system — it's not about pods, it's about really any mount somewhere else, anywhere else. It could be, I don't know, the system mounting the volume for some reason, I don't know.
B
That's why this... yeah, right. Supposedly you are supposed to call stage only once, so now we actually could call that multiple times. Could that be causing problems for some other drivers that actually only want this to be called once? I guess that's something we just need to think about, I don't know.
F
I personally don't see too many problems, but I would appreciate any feedback before we remove this check, yeah. The only issue I see is that then, if the NodeUnstage succeeds, which it probably does, we declare the volume to be ready to detach.
B
Maybe we need to ask, and just, you know, describe this behavior: could this be a problem for other drivers if we call unstage too early? I mean, what problem would it cause?
F
The drivers would find out... I don't know, but for the drivers that do attach and detach, I think all of them use block devices, and the block device is always unique for each volume.
D
I think there could be... if the kubelet crashed and was restarted, then the counters that we maintain — like, before we unmount the device, I think we count the references, like whether this is the only reference left. So I suspect that, yeah, if we just relied on the state stored in kubelet, then we could have wrong data or something. So that's why we are referring to the mount table to verify.
F
Okay, but if it is not in the kubelet's actual state of the world...
A
But, like Jan said, I think that's not in the actual state; we never call this operation at all. So that means we should have information about the volume in the actual state.
F
I'm not sure that we have a counter. I think we have a list of pods that depend on the volume, yeah.
A
Reconstruction only happens once, when the kubelet starts; it does not run periodically.
A
Even though we have the check... I don't know, I'm trying to find the history, but yeah.
A
I can take a look, because this is my PR, and yeah, if I can remember anything...
D
Why — this is a workaround, a workaround for GCI clusters. If a GCI cluster is used, mounting goes through the GCI mounter.
B
Oh, oh, okay, okay — so it was a workaround for the mount check. Oh.
A
No, this workaround doesn't mean we don't want this mount reference check; it's just about how the mount reference check is implemented, I think.
A
So
because,
at
the
beginning,
the
maybe
some
implementation
of
month-
reference
we'll
have
issue,
because
there
are
some
months
we
should
not
come,
but
we
can't
so
it's
just
the
implementation
of
how
much
reference
is
implemented.
That's
to
do.
I
think,
but
it's
not
a
reason
why
we
want
to
check
the
amount
of
reference.
A
Okay, I think next time I'll see whether I can find something about this.
B
Okay, so I need to find the... okay, this is the PR. Yes, I just want to go through it.
B
I have not updated it yet; this is still the same one. So I think what I want to do is remove the part where we say we're going to depend on the graceful node shutdown KEP, because we're not, right? Initially we said we narrowed down the scope to only handle the graceful node shutdown, but now it's actually the opposite.
B
We
we
only
handle
the
case
that
is
not
handled
by
the
crystal
no
shutdown.
Basically,
so
I
just
want
to
see
if
this
general
idea
and
see
you
know
what
you
guys
think,
and
you
know
what
what
else
we
should
try
for
this.
So
basically
this
we
introduced
this
csi
spec
field
called
the
safety
touch,
mainly
it's
just
for
csr
driver
to
say
if
they
want
to
opt
in
for
this
feature,
obviously
meaning
you
know,
css
driver
know
knows
this
feature
and
they
think
this
can
solve
their
problems.
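(A minimal sketch of the opt-in idea, using a hypothetical field name on a simplified CSIDriver spec; the real field name in the proposal may differ.)

```go
package main

import "fmt"

// CSIDriverSpec here is a stand-in for the real object; the field name below
// is a placeholder for whatever the KEP ends up calling the opt-in flag.
type CSIDriverSpec struct {
	Name string
	// SafeDetachOnNodeFailure is hypothetical: true means the driver opts in
	// to having pods force-deleted and volumes detached from a node that is
	// believed to be shut down.
	SafeDetachOnNodeFailure bool
}

func shouldForceDetach(d CSIDriverSpec) bool {
	return d.SafeDetachOnNodeFailure
}

func main() {
	fmt.Println(shouldForceDetach(CSIDriverSpec{Name: "pd.csi.storage.gke.io", SafeDetachOnNodeFailure: true}))
	fmt.Println(shouldForceDetach(CSIDriverSpec{Name: "nfs.csi.k8s.io"}))
}
```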
B
So we have this flag here, and then in the existing logic, what happens is — let's say... I think now we should actually also consider the node partition case: a partition, or a node shutdown that is not detected by kubelet somehow, right, not detected by the graceful node shutdown logic.
B
Then it all falls into this, because what happens is that after five minutes the taint manager will try to delete the pods, and then the pods will be stuck in the terminating state. I think this is reproducible.
B
If it's not... yeah. Questions — do you have questions? Sure.
B
This controller is the pod GC controller, so here we are adding some logic just to check if, you know... but of course some of this will change, because I was initially trying to check if this is a graceful node shutdown. So right now, basically, it's just: over here, if the pod is terminating and we have basically already waited — already waited for five minutes, right — then it comes to here and we will start to...
B
We
will
check
this
we'll
check
the
safety
catch
flag
in
the
seaside
driver
so
that
one
we
introduced
if
that
is
set,
we
know
that
driver
wants
to
opt
into
this
feature.
Then
we
will
try
to
delete
the
pod,
but
we
also
you
know
here
we
are
trying
to
add
some
quarantine
paint
right
so
before
we
delete
the
part.
B
We
add
those
tint
just
this
is
trying
to
make
sure
that
when
the
part
comes
up
again
well,
no
on
the
node
on
the
node,
when
the
node
comes
up
again,
we
don't
want
to.
We
want
to
clean
up
before
we
schedule
the
part
to
them.
So
the
so,
basically
we
add
10
on
the
node
first
and
then
we
try
to
delete
those
parts
that
are
kind
of
stuck
forcefully
delete
them
and
then
yeah.
So
that's
what
it
says
here.
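(A minimal sketch of the order of operations just described — taint first, then force delete the stuck pods — with simplified stand-in types and a placeholder taint key.)

```go
package main

import (
	"fmt"
	"time"
)

// All types below are simplified stand-ins for the real API objects.
type node struct {
	name     string
	notReady time.Duration // how long the node has been NotReady
	taints   []string
}

type pod struct {
	name        string
	node        string
	terminating bool
}

const quarantineTaint = "example.k8s.io/out-of-service" // placeholder taint key

// gcStuckPods sketches the flow: once pods have been stuck terminating on a
// NotReady node past the timeout, taint the node first (so nothing is
// scheduled back before cleanup) and only then force delete the stuck pods.
func gcStuckPods(n *node, pods []pod, timeout time.Duration) []string {
	var actions []string
	if n.notReady < timeout {
		return actions
	}
	n.taints = append(n.taints, quarantineTaint)
	actions = append(actions, "taint "+n.name)
	for _, p := range pods {
		if p.node == n.name && p.terminating {
			actions = append(actions, "force delete "+p.name)
		}
	}
	return actions
}

func main() {
	n := node{name: "node-1", notReady: 6 * time.Minute}
	pods := []pod{{name: "db-0", node: "node-1", terminating: true}}
	fmt.Println(gcStuckPods(&n, pods, 5*time.Minute))
}
```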
B
So there is a check that we basically need to skip — there's a particular check there that we have to skip. So basically, if safe detach is set, then we will skip this check so that it will not block us from moving forward and allowing the volume attachment to be deleted.
B
So this will actually then go ahead and delete the volume attachments, and this way basically the volume unpublish — ControllerUnpublish — will happen before NodeUnpublish or NodeUnstage when we allow this to happen. So this is something where we allow it to go kind of out of sequence.
B
Because
of
this
you
know
abnormal
situation
and
and
then,
of
course,
the
the
attacher
external
attacher
will
detect
that
and
then
it
cause
controller
on
purpose
volume,
basically
and
then
in
the
and
then
this
one
basically
just
saying
that
well,
the
css
drive
right
see
the
driver
is
it's
opt
in
and
then
we'll
check
whether
it's
safe
to
detach
or
not.
B
So
you
know
you
see
a
drawer,
doesn't
I
want
detection
and
it
does
not
detach
and
then
it
will
it
basically
retry
it
easily.
A
Right. Overall, I think we can probably handle the shutdown case. But if we think about a network partition, then it's very hard, because with a network partition the workload is still running, and if at the same time you detach the volume it will cause some data corruption.
B
So this is why I think here we're saying the CSI driver will have to check. That's why I think this solution was initially written more for cloud providers: you can actually check your compute node and find out whether it's safe to detach or not, yeah. So I think it covers a narrower set of cases, because, yes, just as you said, in the partition case, if the CSI driver does not know, then it just has to, you know, not detach, right.
B
So
basically
this
will
be.
First
of
all,
this
will
be
like
opt-in,
opt-in
option
because
otherwise,
we'll
have
to
really
go.
You
know
there
are
some,
I
think,
there's
actually,
I
think
huaming
have
some
something
in
his
own
ripple,
that's
kind
of
old.
I
think
a
few,
maybe
a
few
years
back,
he
proposed
that
it's
a
fencing
right
that
you
actually
have
to
get.
B
You
have
to
have
to
reboot
the
node,
but
then
that's
also
very
complicated,
because
for
every
cloud
provider
or
even
for
any
driver,
it's
a
different
shutdown
command.
So
it's
very
complicated.
Also,
it's
like
external
controller.
To
do
that,
I
I
you
know
I
just
feel
kind
of
uncomfortable
to
have
that
incorporated
in
this
approach.
B
I
just
wonder
if
we
should
try
this
one
first,
I
don't
know
if
you
have
you
look
at
the
the
fencing
thing
that
he
proposed,
basically
you
so
basically
for
every
so
the
so
basically
adam
will
have
to
go
configure
that
if
it's
you
know
for
different
for
different
kind
of
environment,
you
have
to
use
different
command
to
shut
down.
Basically,.
A
That's just one thing to make it safe: we can make sure — we can get the machine state, and if we can see the machine state, we know it is shut down.
B
That's
a
driver
right
driver's
side
because
it's
we're
not
really.
We
can't
really
like
hear
we're
not
really.
We
can't
really
check
that.
B
I
don't
think
they
can
check.
What
do
you
mean
check?
The
machine
state
check
notice
that
but
no
state
already
notice
node
is
not
ready
right,
it's
not
right,
no,
it
isn't
already,
but
so
what
what
you
want
to
check?
But
what
you
are
asking
for.
I
think
it's
more
like
the
call
provider
may
be
able
to
go
check.
Those
kind
of,
if
you
know
that
but
then
but
then
I
think,
then
I
don't
think
it's.
Maybe
that's.
That
would
be
like
cloud
cloud
provider
logic
we
can
check
it.
Can
we
check?
B
Oh sorry, sorry for the confusion. This was from the time when I was trying to incorporate the graceful node shutdown logic here; that's why. This will no longer be there. At that time, from the graceful node shutdown we could know the reason, but as we discussed, graceful node shutdown can handle it only if it actually detects a graceful node shutdown, so we cannot rely on that.
B
Yeah, so I think we are actually at the top of the hour. Maybe I can ping you offline on that thing. Can we continue this on Monday? Yeah, sure.
B
Right, so that's why we need to add the quarantine taint first, before we actually even shut down, right? So that's... but I think, yeah, I actually have to jump on another meeting. Can we talk about this again on Monday? Yeah.
B
Yeah,
I
think-
maybe
just
I
will
say
I
would
suggest-
maybe
just
like
on
friday
afternoon-
check
if
there
are
anything
on
the
agenda.
If
there's
nothing
on
the
agenda,
then
we
just
cancel
that
meeting.
How
about
that
in
this
way
we
have
you
know
if
we
actually
have
this
the
time
to
talk
about
these
issues.
I
think
this
is
actually
helpful.