Description
Kubernetes Storage Special Interest Group (SIG) Object Bucket API Standup Meeting - 15 March 2021
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage

A: I've made no changes. I just shared this link on the SIG Storage COSI channel in Slack, so this is what we looked at Thursday. Ben, I know you had said you'd like it to soak a little, so hopefully there's been a chance for that to happen, and maybe you see some issues with it. But this is it. Is there anyone here who wasn't present Thursday for the COSI meeting?

A: No? Okay, then I don't need to go over it again, because you've all heard it. So I guess we can just start a discussion, Sid, unless you had anything specific.

B: Yeah, so I just want to go over the entire lifecycle one more time, because by doing that exercise we bring things to light, and others might have a different perspective on it and ask new questions.

B: So I want to walk through the exercise of how things work once. A user creates a BR, correct? (Yep.) There's also a BC associated with it, which the admin tells them about. And once the BR is created with a valid BC, the controller goes and creates a Bucket object.

B: All right, okay. Now the BAR lifecycle is similar: when a BAR is created, the admin has to let the user know of a BAC, the controller responds by creating a BA object with the policy-actions ConfigMap from the BAC, and then the provisioner uses the BA to provision access for that bucket.

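(Editor's note: a minimal sketch of the two provisioning flows just described. The type names and all helper functions are illustrative, based on the KEP draft under discussion, not a final API.)

```go
// Hypothetical reconcile logic for the two lifecycles described above.

// A user creates a BucketRequest (BR) naming an admin-provided
// BucketClass (BC); the central controller then creates the Bucket (B).
func reconcileBucketRequest(br *BucketRequest) error {
	bc, err := getBucketClass(br.Spec.BucketClassName)
	if err != nil {
		return err // the BR stays pending until a valid BC exists
	}
	return createBucket(newBucketFor(br, bc))
}

// A user creates a BucketAccessRequest (BAR) naming an admin-provided
// BucketAccessClass (BAC); the controller creates a BucketAccess (BA)
// carrying the policy-actions ConfigMap from the BAC, and the
// provisioner then uses the BA to grant access to the bucket.
func reconcileBucketAccessRequest(bar *BucketAccessRequest) error {
	bac, err := getBucketAccessClass(bar.Spec.BucketAccessClassName)
	if err != nil {
		return err
	}
	return createBucketAccess(newBucketAccessFor(bar, bac))
}
```
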
B: Yeah, what do you mean? The admin is involved in two steps: they have to let the user know of the BC name and then of the BAC.

A: Okay, if that's typical, that users can see those resources, then that's fine. But if there are a hundred of them, the user needs to know which one they're supposed to pick. So if they can see them...

D: That's the whole point: these things are globally readable. All of the details that a user might want in order to decide which one they want, they can access themselves and make that decision. And furthermore there are no restrictions, right? Anything you can see, you can use.

B: So it's fine even if you can see everything that's there. Then how about this: now that I think about it, it's technically possible to create a set of preset classes, say a bunch of preset access policies for AWS S3, like read-only bucket, read-write bucket, or write-only bucket, just those three.

B: We could actually create those up front, so the admin doesn't even have to get involved if it's one of these standard classes.

B: Yeah, they can always override it; this is more for convenience. Again, it's technically possible is all I'm saying. Should we do it is my question.

A: Yeah, I don't know. I think your main point is that there are two steps an admin has to do one time so that we can support the greenfield use case: the admin has to create a BucketClass and a BucketAccessClass. It's a one-time thing, like Ben said (and thank you, Ben), and it's discoverable, which is nice.

B: All right, that makes sense. So how does a credential revoke work? Just the same way as usual, right?

B: Right, right, okay. And bucket deletion?

A: We covered this a little Thursday: deleting the BR triggers a bucket deletion. Where we got into some discussion was the case of a force delete: if I delete the BR, then regardless of other accessors or other workloads that might be using the same B instance, the same physical bucket, we just yank the rug out and force the delete. The other idea was a "delete when all accessors have completed" mode, and the other was retain.

A: We talked about that a little Thursday, and I have updated the KEP to reflect it in that same PR that's been out there for a few weeks.

A: I called it "delete", and I think that's the default, and "retain". "Delete" means delete when all accessors are done; then I have a "force delete", which means delete it now, essentially.

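(Editor's note: the three deletion behaviors just described, sketched as Go constants; the names are provisional pending API review, as the speakers note later in the meeting.)

```go
// DeletionPolicy captures the three behaviors discussed above.
type DeletionPolicy string

const (
	// Retain: the BR can go away, but the Bucket object and the
	// backing bucket are left in place.
	DeletionPolicyRetain DeletionPolicy = "Retain"

	// Delete (the proposed default): delete the backing bucket, but
	// only once all accessors (BAs) are done.
	DeletionPolicyDelete DeletionPolicy = "Delete"

	// ForceDelete: delete now, regardless of remaining accessors.
	DeletionPolicyForceDelete DeletionPolicy = "ForceDelete"
)
```
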
D: What can prevent a pod from going away when you delete it? When I delete a pod, it always goes into the deleting state, and as soon as all the finalizers are gone it disappears. You never need to do a force delete; force delete is about overriding the finalizers, I think. No, it doesn't remove...

B: It doesn't actually remove the finalizers. Force delete exists because pods handle SIGTERM before the SIGKILL, so if they start handling the signal and not terminating, you can do a force delete.

D: Yeah, and you can delete pods on nodes that are down using force delete, to my recollection.

D: This is not an essential part of it, but I think we need to be careful about the semantics here, just because I'm pretty sure that a regular delete typically means "blow it away".

B: So what do we do? Okay, let's not talk about force delete yet, just normal delete. What do we do in that case? Say a bucket is marked for deletion...

D: One per accessor, yeah, as they come in to access it. But as you know, I was never happy with the proposal to have a finalizer per pod or anything like that.

B: Yeah, why? Otherwise you wouldn't know all the different accesses of it. We talked about this: like you said, we could just list and find all the BAs that are using it. The issue was...

A: Well, I know at one point we thought the workload, the pod, would add a finalizer naming the pod, or something like that, on a B, but it sounds like that's been changed. The reason for a finalizer on the B is that it seems like an easy way to understand who is using the bucket.

B: I remember you implemented the finalizer addition on the Bucket object, right?

C: Yeah, I believe so. It was a little while ago, right?

B: Right. So I think what you were doing was creating a finalizer for every accessor, right? Or was it per pod, or per BucketAccess?

B: Right, that makes sense. Okay, I just wanted to verify that it's not per pod, which is the right way. Now, to what Ben was saying: if you don't have a finalizer per BucketAccess, then how do we know all the different accesses for the bucket?

D: What you would do is have an informer listing all the BAs in the provisioner sidecar, and a single finalizer that it puts on the first time it sees the bucket. If the bucket is in the deleting state, it validates that there are no BAs referring to it before removing its finalizer, and it can do that because it knows, at all times, all of the BAs. And you would have to have a rule:

D: The number of bucket accesses can increase while the bucket is not deleted, but once it's deleted it can only decrease, and once it hits zero you remove the finalizer. You're then guaranteed to reliably delete it only when the number of BAs reaches zero, and you're guaranteed to eventually hit that point because you can't add any more; you just wait for them all to get deleted. Now let's say you're trying to delete a BA.

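(Editor's note: a minimal sketch of the single-finalizer rule just described. The Bucket type, BALister interface, and the ensureFinalizer/removeFinalizer helpers are hypothetical.)

```go
// One finalizer per controller, not per accessor. The sidecar adds its
// finalizer the first time it sees the Bucket; once the Bucket is
// deleting, the BA count can only shrink, so waiting for zero is safe.
const bucketFinalizer = "objectstorage.k8s.io/ba-protection" // illustrative

func syncBucket(b *Bucket, bas BALister) error {
	if b.DeletionTimestamp == nil {
		// Not deleting: make sure our single finalizer is present.
		return ensureFinalizer(b, bucketFinalizer)
	}
	// Deleting: new BAs against this bucket are rejected, so the set
	// monotonically shrinks; release the finalizer only at zero.
	remaining, err := bas.ListForBucket(b.Name)
	if err != nil {
		return err
	}
	if len(remaining) == 0 {
		return removeFinalizer(b, bucketFinalizer)
	}
	return nil // requeue; wait for the remaining BAs to be deleted
}
```
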
D: Do we have that? That's a different question. In order to play the same game, you would need the informer, or the controller that's watching the BAs, to be able to enumerate the pods. But as we've discussed, there are concerns. I don't think it's a security problem, but there's a scaling issue: I think we hit a similar issue in CSI, where I believe one of the sidecars must watch the pods, and there were concerns about how well that scales.

B: And you can list pods by some label filter or whatever, so it should be okay, I guess.

D: Well, yeah. The two approaches you could take are: one, have an informer that shows you all the pod events, so you have a comprehensive notion of all the pods at any given time; or two, just in time, do a list with a selector on the pods to get only the ones you need to know about. I don't know which would be more efficient.

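(Editor's note: the "just-in-time list with a selector" option, sketched with standard client-go calls; the label key a workload would carry to reference a bucket is hypothetical.)

```go
import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podsUsingBucket lists only the pods that reference a given bucket,
// instead of watching every pod event through an informer.
func podsUsingBucket(ctx context.Context, cs kubernetes.Interface,
	namespace, bucketName string) ([]string, error) {
	pods, err := cs.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{
		// Hypothetical label a workload would carry.
		LabelSelector: "objectstorage.k8s.io/bucket=" + bucketName,
	})
	if err != nil {
		return nil, err
	}
	names := make([]string, 0, len(pods.Items))
	for _, p := range pods.Items {
		names = append(names, p.Name)
	}
	return names, nil
}
```
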
D: It's a detail; there are multiple ways of doing it with one finalizer, and it's just a question of which one is best. I don't think we would want to go to a place where you have multiple finalizers, because then you're putting a lot of pressure on the finalizer system, which we know has problems. Like what? No, no, we also looked into this: we looked into the maximum object size in etcd, and it's actually in the megabytes, so the number of finalizers...

D: You could add thousands of finalizers before you would kill etcd, right. But I just don't know where else in Kubernetes something like that is done, I think.

B: Yeah, I understand where you're coming from. I was also trying to understand the reason behind CSI not using this finalizer method. Like, was it...

D: We should ask people who would know more about that. We should go into one of the CSI meetings, which unfortunately just happened in the previous hour, and have this discussion: is there ever a case where we create a linear number of finalizers, or is it always a constant number of finalizers, with the logic implemented to know when to remove them?

A: Then let me make sure I understand why you don't like the finalizer approach, say a list of finalizers, one per BA; we're talking about finalizers on the B. The reason you're giving is that you think it might not scale well, right?

B: [inaudible]

A: What's your main concern about a list of finalizers, though? Is it scaling, or is it something else?

D: It's just that it's not done elsewhere, I think, and it feels like the wrong pattern to follow; that's all I can say. I can't think of anything that would specifically break, but maybe if I thought about it for a while I would come up with something.

C: Yeah, I also have not seen any pattern like this before. Normally it's a limited number, a small number, of finalizers.

D: Well, every time you touch an object, all the informers that are watching it will get an event.

D: We're going to be updating these objects anyway when we add things, so I don't know; I guess I'll reserve judgment on this. Maybe we can look into it more deeply.

B: I know, it just seems like an easy solution. So yeah, but coming back to...

D: There is one other small issue I have noticed, which I think is surmountable but tricky: modifying, that is patching, the list of finalizers when it is large is tricky. You want to stay away from doing object updates; typically you want to use patches, because updates can cause issues when you version an API. If you add a new field and someone calls update, it can erase a field that somebody else has populated, and that's bad.

D: So we try to use patches, and the problem is that whenever you patch a JSON array, you can't just... there's no...

B: But that will be arrays regardless; whether you're using two finalizers or n finalizers, you can still have races.

D: Well, no. If you do it correctly, you can use a JSON patch with a test statement in it that specifies: "I believe the current list of finalizers is a, b, c, d, and I'm removing c, so I want the new list of finalizers to be a, b, d." Then, if anyone else adds e, your patch will fail and you'll have to retry, and likewise if anyone else modifies the list in any other way.

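(Editor's note: what that guarded removal looks like in practice. The test-then-remove shape is a standard RFC 6902 JSON patch; the typed bucketClient is hypothetical, but its Patch signature mirrors the generated Kubernetes clients.)

```go
import (
	"context"
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// removeFinalizerGuarded removes finalizers[idx] only if the list still
// matches what we last read; any concurrent mutation fails the "test"
// op, and therefore the whole patch, so the caller re-reads and retries.
func removeFinalizerGuarded(ctx context.Context, c bucketClient,
	name string, observed []string, idx int) error {
	patch := []map[string]interface{}{
		{"op": "test", "path": "/metadata/finalizers", "value": observed},
		{"op": "remove", "path": fmt.Sprintf("/metadata/finalizers/%d", idx)},
	}
	data, err := json.Marshal(patch)
	if err != nil {
		return err
	}
	// bucketClient is a hypothetical typed client for the Bucket API.
	_, err = c.Patch(ctx, name, types.JSONPatchType, data, metav1.PatchOptions{})
	return err
}
```
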
D: You can set up a JSON patch to fail, but you have to do it pretty carefully, and I don't believe most pieces of Kubernetes handle this right. So if you have two mutations to the finalizer list that come in very close together, it's possible for one to clobber the other. It's just hard to patch lists in Kubernetes.

B: Anywhere, actually; if you're trying to update a list, it's kind of tricky. Tell me, Ben, what is the issue with doing updates again?

D: The old controller doesn't know about the new field. So if you have a combination of old controllers and new controllers interacting with the new object, the old controllers will just always delete the contents of that field whenever they do an update, and that's bad. So we basically never do updates on an object that could evolve over time, because of the possibility that in the next version we'll add a field that we don't know about in this version.

D: Kubernetes is on v1.20 now, and pods and all the basic APIs are still v1 after 19 revs. There's a strong desire to stay on v1, because the moment something moves to v2, anything that was coded over the last 19 releases has to be changed, and there's just a vast amount of working code that they don't want to break.

B: Fair enough, okay, I understand that. Now let's talk about force deletes; we still don't have an understanding of how we differentiate between normal deletes and force deletes. If the deletion policy is retain, there's no problem: the BR just goes away, and if there is a finalizer associated with the BR on the B, that gets taken out. Is that right?

D: That would be how retain works; that's how it is with snapshots, right, Shang? If you have a VolumeSnapshotContent with retain and the user deletes the VolumeSnapshot, your content just hangs around, right?

B: Got it, got it. Okay, so then, talking about force deletes and normal deletes: for a normal delete, when someone removes the BR, do we mark the B as deleting, say set the deletion timestamp, and wait for the sidecar to go delete the actual back-end bucket?

D: With a regular deletion, the controller that does the bucket binding would call delete on the B, which puts it into the deleting state, then remove its finalizer and go back to sleep; the BR would just disappear at that point, and it's done, because now all you have to do is wait for all the finalizers on the B to go away, and then kube-apiserver will delete that too. Right, and one of the things holding a finalizer is the sidecar that has to delete the bucket, yeah.

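(Editor's note: a sketch of the sidecar's half of the flow just described; the driver interface, the sidecarFinalizer constant, and the removeFinalizer helper are hypothetical.)

```go
// The binding controller has already marked the Bucket deleting and
// dropped its own finalizer; the sidecar now deletes the backend
// bucket and releases its finalizer, after which kube-apiserver
// removes the Bucket object itself.
func (s *sidecar) syncDeletingBucket(ctx context.Context, b *Bucket) error {
	if b.DeletionTimestamp == nil {
		return nil // not marked for deletion yet
	}
	// Delete the physical bucket (e.g. on an S3-compatible backend).
	if err := s.driver.DeleteBucket(ctx, b.Spec.BucketID); err != nil {
		return err // retried on the next sync
	}
	return removeFinalizer(ctx, b, sidecarFinalizer)
}
```
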
B: Yeah, but PVCs are different: PVCs are wired into the pod at the system level, whereas buckets don't have the same limitation.

D: Well, yeah, I don't disagree. With PVCs and pods, if you have a pod, it can keep your PVC alive even if you intended to delete it; it'll just sit there until the pod goes away. But the pod is an object in the same namespace that, in principle, the user can also delete if they really want the thing to go away. So you never have a situation where the user intended to delete something and now can't, because they have control over deleting it.

B: Yes. So is deletion as important as that? If you want to revoke access, we can facilitate that, but is it important that you delete the data as well? Can we not wait? What I'm saying is that an admin can forcibly revoke access. So say deletion is being held back by accessors.

D: That's the thought exercise I wanted to go through: is it reasonable to say that there's a point past which I have willingly given up the ability to delete my bucket?

B: Yeah, it's like an assumed loss of the ability to delete, because once you start sharing, only the admin can have the full context of who is sharing it and how they're using it.

A: Right, to Ben's point: when he talks about "I" or "me", the user, the user didn't authorize sharing of that bucket; the admin did. So what Ben said is true, and it's an interesting point. Take the left side of the diagram, namespace one: I've done all those things, it's all under my control, and if there were no namespace two in that diagram, I would have full control of the life of that bucket. I can delete my BR.

A: I can delete my BARs, and those trigger the right deletions, and the bucket goes away. But as soon as the admin tells some user in namespace two the name of bucket one, and that user creates BAR 3 in the diagram, I've lost, potentially unknowingly, the ability to manage my own resources that I caused to be created. Right? Yeah, you're saying that.

B: Well, when you create a bucket, the BC has a list of allowed namespaces, so you know it can be shared. But let's...

D: I think the reason I get worried about situations like this is: if B1 is charging my credit card, or costing me quota, then my inability to delete it is a problem for me. If it's charging the admin's credit card, then I don't care as much whether I can delete it, because it's not my problem.

B: Yeah, but we're assuming we can never talk to the admin. I think it's reasonable to say that if you end up in a situation where you created a bucket knowing it could be shared, and now you want out of it but can't get out, then you talk to the admin.

D: Yeah, I guess I would want to think through the quota use cases, because we have situations with quotas on PVCs where I create a PVC and it deducts from my quota, and if I try to create another PVC, the quota system can actually veto that operation and say, "sorry, you've used up all your PVC quota." If we had something similar here, I believe what would happen is that the moment your BR went away, your quota would be replenished regardless of the existence of the bucket, because it's not my bucket anymore.

D: ...for the quota system to work along with this model, right; it's not the same as PVs. I just want to make sure we're not creating a situation where, first, we can game the quota system, or second, I can get stuck with my quota being consumed and no way to get it back by deleting my own stuff. Those are the two bad cases.

D: I would think that any quota would be on the namespaced things, the BRs and the BARs, and if I delete all of my BRs and BARs, then all my quota comes back and I can go off and create other objects. I don't worry about the non-namespaced stuff; that's someone else's problem. Okay.

A: You said S3: Amazon is going to charge you as long as the physical bucket is still there, right? And, as Sid said, you've solved that by asking the admin to delete, say, BA3, so that there are no accessors left on the bucket and it can go away; or you change the deletion policy to delete or force delete, whichever we decide means "yank the rug out", and it can go away. So there are recourses, but it seems the user has to ask for help.

D: Yeah, I think I've managed to talk myself into agreeing that we can call them "delete" and "force delete", make "delete" the default, and have it wait for the BAs to go away. I would not be surprised, though, if the API reviewers revisit this and say they don't like it, but I think I'm okay with it.

A: Yeah, I like it better myself, but we'll see what the API reviewers say. I'd like "delete" to be a soft delete, essentially, and "force delete" to explicitly mean "yank the rug out".

B: Because that's what unmount semantics look like: you can do umount -l or umount -f. The -f is force: even if someone is using it, if there are open FDs and all that, it still gets force-unmounted. And umount -l is lazy: the mount point itself stops showing up, but it doesn't actually unmount until all the writes are complete.

D: Right, we agree we can find a name we're happy with and proceed with the design. But getting back to the original question you asked 45 minutes ago: having sort of marinated in this idea, I still don't see any problems, so I'm feeling better about this proposal overall.

B: Yeah, okay, this is good. Then on Thursday let's bring it up again in support of this and see what others have to say, and if we get no pushback, then...

B: ...we should update the KEP to show this. In terms of development, I think the effort is not too high, so we can take care of that as well, because the normal workflow is the same as what was already there for greenfield within the namespace, and we haven't really implemented the sharing part yet. So I think we're in good shape there, right?

B: All right, let's do that. We have 13 minutes left, and we don't have to spend them unless someone wants to bring anything up. I think we can end the meeting now.

D: Well, are we resolved that we're going to follow up with the CSI folks about the multiple-finalizer thing and see what their feeling is? It's an implementation detail, yeah.

B: But it's an important concern, yeah, and I think we should do that, Ben. I'll join one of their meetings and put the question forward, asking why, or whether this approach was even considered, and what they think of it, because I think it'll be interesting to hear what they have to say. It's an implementation detail, though, like you said, Jeff, and it doesn't affect evaluation of the proposal. All right, any other questions?

C: Are you going to start the, what is it, the office hour next Monday? Is that the plan?

B: Yeah, that's the plan. Next Monday I'll have my terminal open and I'll answer questions about how to set things up, how to do particular things, or, if someone's adding code, what kind of conventions we follow, stuff like that.

C: Okay, and you're going to announce it somewhere, on the Slack?

B: Yeah, yeah: the email, the Slack, everywhere, and we'll also announce it in the next meeting.