Description
Kubernetes Storage Special Interest Group (SIG) Object Bucket API Design Meeting - 03 February 2022
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Sidhartha Mani (Minio)
A
Okay, am I too quiet, or is it... is it? You can hear me well.
A
Okay, good, yeah. So Michelle said she'd join today. She's been reviewing the KEP, so there are only a few comments left; we should be able to address them.
A
Ben looks like he won't be able to join. We have a new member: hi, Grant. Hello. Hey. And we have Aaron. Hey. So we can get started.
A
Let me open up the KEP back to where we left off.
A
Hi, Michelle, are you here? I see one account by the name "sig-storage".
A
So while this loads: it seems like maybe there are only one or two issues that were really big questions. One of them is multi-protocol within a single definition.
A
So I want to discuss that and make sure we all agree on an approach forward. Either way is really fine for me, having multiple protocols or having a single protocol, but the conversation that happened last time about single versus multi-protocol was simply that the same thing can be achieved with a single protocol and multiple buckets.
A
So, for those who might not be familiar with what the discussion is: this is the question I'm trying to answer here. In the definition of a Bucket, protocol is not an array; it's a single string. And providers like GCS (Google Cloud buckets) support both the GCS protocol and S3.
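A minimal sketch of the tradeoff being discussed, using illustrative field names rather than the KEP's actual schema: a single protocol string forces one Bucket object per protocol, while a protocols list lets one object advertise both.

```python
# Illustrative only: Bucket objects modeled as plain dicts, not real CRDs.

# Current shape under discussion: protocol is one string, so a GCS-backed
# bucket that also speaks S3 needs a second Bucket object.
bucket_single = {
    "metadata": {"name": "photos-gcs"},
    "spec": {"protocol": "gcs", "bucketID": "photos"},
}
bucket_single_s3 = {
    "metadata": {"name": "photos-s3"},
    "spec": {"protocol": "s3", "bucketID": "photos"},  # same backend bucket
}

# Proposed shape: one Bucket object lists every protocol the backend supports.
bucket_multi = {
    "metadata": {"name": "photos"},
    "spec": {"protocols": ["gcs", "s3"], "bucketID": "photos"},
}

def supports(bucket: dict, protocol: str) -> bool:
    """True if this Bucket object can serve the requested protocol."""
    spec = bucket["spec"]
    if "protocols" in spec:
        return protocol in spec["protocols"]
    return spec.get("protocol") == protocol

print(supports(bucket_multi, "s3"))   # True
print(supports(bucket_single, "s3"))  # False
```

With the single-string shape, two objects share one backend bucket, which is the duplication the next exchange is about.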
A
So the question came up: shouldn't we have multiple protocol definitions within a single bucket? And our answer to that was that it's not really needed; we can have a copy of this bucket with a different protocol, for whatever it supports. The idea being: if you move this bucket from one provider to another which supports, say, only one instead of two protocols, it will be easy to invalidate one Bucket and keep the other one valid.
A
Now, Michelle was asking in the KEP: wouldn't it be better if we had multiple protocols within one bucket definition? So I just want to open this question up to the people here in the community and find out if you have any thoughts on this. If not, I'm just as happy to move with multiple protocols if it'll help push things through. And if we get it wrong right now, it's okay; we're still only in pre-alpha state, and we can always fix it.
D
Yeah, so maybe I can spend just a quick moment explaining my thoughts around this, because I think the original reason was about portability. But my understanding is that this Bucket object is actually not supposed to be portable, because the Bucket object is going to contain a handle to a provider-specific implementation. So I don't think you could take a Bucket object that is tied to, say, GCS and then copy it to, say, AWS, for example, right?
A
Right, right. So there are cases where we do copy the object. When we say portable, we meant it more in a narrow sense, not portability in the general sense that we talk about in the Kubernetes community, where we want the same definition, the same user-defined YAMLs, to be reusable in a different environment.
A
Managing
these
objects
is
simpler
is,
is
all
we
thought,
but,
but
I
I
get
where
you're
coming
from
the
bucket
resource
is
really
not
user
defined,
the
user
shouldn't
even
know
that
exists
really,
and
so
their
definitions,
their
bucket
claims
and
access
claims
they're
not
going
to
be
affected.
If
you
have
multiple
protocols
here,
so
I
get
that
and-
and
I
agree
with
you
so.
A
We can move to that model, having multiple protocols. I don't think there's anything wrong with it.
D
Yeah, I mainly think that because there are also some challenges if you have, say, multiple objects that actually represent the same entity: that makes cleanup and ref counting a bit harder, because you can no longer assume that a single Bucket object refers to a single bucket on the backend.
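To illustrate the concern above, here is a hypothetical sketch (not actual controller code) of the reference counting a controller would need before deleting a backend bucket, once several Bucket objects can point at the same one:

```python
# Hypothetical sketch: with one Bucket object per protocol, several objects
# can reference the same backend bucket, so a controller must check that no
# other object still points at it before deleting the backend resource.
from collections import Counter

bucket_objects = [
    {"name": "photos-gcs", "backendBucketID": "photos"},
    {"name": "photos-s3", "backendBucketID": "photos"},
    {"name": "logs-s3", "backendBucketID": "logs"},
]

def safe_to_delete_backend(objects, deleted_name):
    """After one Bucket object is deleted, is its backend bucket unreferenced?"""
    deleted = next(o for o in objects if o["name"] == deleted_name)
    remaining = [o for o in objects if o["name"] != deleted_name]
    refs = Counter(o["backendBucketID"] for o in remaining)
    return refs[deleted["backendBucketID"]] == 0

print(safe_to_delete_backend(bucket_objects, "photos-gcs"))  # False: photos-s3 remains
print(safe_to_delete_backend(bucket_objects, "logs-s3"))     # True
```

With a single multi-protocol object, this counting step disappears, which is the simplification being agreed to here.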
A
Yeah, yeah, we had ways to deal with it, but you're right, this is simpler in that sense. We were going to have a controller look at each object's state in the backend individually, and so we didn't need some sort of ref counting there. We don't really do much in terms of preventing deletion, other than if some workload is using it.
A
Okay, so we can update this; we can do it right away. Other than that: Shin was mentioning there was some part of the KEP where we wanted to be more like PVCs and snapshots. I can't fully remember where that was. Michelle, does it ring a bell?
D
Yeah, I think the same thing popped up for me too.
D
I think it's where, in the bucket access claim, I think right now we're saying that if you want to support the brownfield case, you directly specify the bucket name in the bucket access claim. But that's pretty different from how we support brownfield for PVCs and snapshots.
A
Right, yeah. The reasoning there was that it's really not owned by any user if it's a brownfield bucket, because it wasn't created by them. In general, deletion of a bucket claim is expected to delete the bucket in the backend; I guess with a retention policy, we'll have to enforce the retention policy.
A
So
in
case
of
pvcs
and
pvs,
if
a
pvc
manually,
let
me
go
back
a
step
even
so
talking
to
ben
who
has
been
involved
in
the
snapshot
thing,
he
suggests
he
seemed
to
suggest
that
the
old
way
of
binding
pvcs
and
pvs,
where
pvs
are
created
independently
and
pvc
is
buying
to
them.
A
Okay, okay. So the reason I was bringing it up is that doing brownfield, where the bucket access points to the bucket claim and the bucket claim points to a bucket, is similar to that manual binding process we were doing with PVCs and PVs, where PVs are created initially and then PVCs are bound to them.
A
So there is another use case. Currently, for buckets that were created in another namespace: say namespace one creates a bucket claim that ends up creating a bucket, and if you want to utilize this bucket in a different namespace, we follow the same as the brownfield approach, as in the bucket access points directly to the bucket.
A
The implication here is, I guess, that all the other bucket claims, other than the one that created the bucket, would have to have a Retain policy. But if someone were to set it to Delete, then it could lead to the deletion of that bucket.
A
Okay, if that works with PVCs and PVs, yeah, there's no reason we can't expect the same here. Okay, so we can do that: we can go through a bucket claim whenever a bucket needs to be accessed, regardless of it being brownfield or greenfield, yeah.
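A small sketch of the lookup chain just agreed on, with placeholder names rather than the KEP's exact fields: access objects resolve through a claim, never straight to a Bucket, so brownfield and greenfield take the same path.

```python
# Illustrative sketch: BucketAccess -> BucketClaim -> Bucket resolution,
# identical for greenfield and brownfield buckets. Field names are made up.
buckets = {"photos": {"backendBucketID": "photos", "retain": True}}
bucket_claims = {"team-a/photos-claim": {"bucketName": "photos"}}

def resolve_bucket(access: dict) -> dict:
    """Follow the access object through its claim; no direct Bucket references."""
    claim = bucket_claims[access["bucketClaimName"]]
    return buckets[claim["bucketName"]]

access = {"bucketClaimName": "team-a/photos-claim"}
print(resolve_bucket(access)["backendBucketID"])  # photos
```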
D
I think that will also help in the future if we want to add something like quota, or some other sort of enforcement of access control, or anything like that. Having the same object for both brownfield and greenfield, I think, will help in the policy enforcement.
A
Yeah, I see what you mean, yeah. Okay, so...
E
Can I ask a clarifying question here? I did miss a little bit of this conversation. My memory of working with PVCs is that I guess it's possible to create multiple PVCs that point to the same PV, but generally...
A
Something like that, yeah. We'll likely have only one bucket claim per namespace, and we'll have multiple bucket accesses, so each bucket access corresponds to one, I would say, service account or set of credentials.
A
So it's possible that people create it that way, where there's one bucket claim in a particular namespace for a bucket, and multiple bucket accesses. So it's kind of the same model. A bucket access is the reference to the bucket that ends up fetching the credentials and putting them into the pod. So yeah, that's how it'll...
E
Okay, okay, that was my understanding, or my misunderstanding, I'm sorry. Gotcha, okay, yeah. This is just about the brownfield case. Yeah, I mean, I think it probably makes an amount of sense that there must be a claim for a bucket, even if it's a brownfield one, right?
A
Yeah,
it's
yeah
it'll,
make
sure
it's
consistent,
regardless
of
if
it
is
brownfield
or
greenfield.
A
That's
a
that's
a
good
question,
so
how
will
the?
How
will
usernamespace
know
that
there's
a
bucket
that
they
can
consume
and
and
the
the
current
you
know?
Let's
say
we
were
going
with
the
old
model
where
bucket
access
was
directly
referring
to
the
bucket?
That
problem
still
exists,
as
in
they'd
have
to
know
what
bucket
they
want
to
get
access
to.
So
I
think
it's
reasonable
to
say
that
the
user
has
to
create
the
bucket
claim
in
which
your
name
space.
They
want
to
access
the
bucket.
E
Does
this
prevent
users
from
what
is
I'm
trying
to
remember
like
there's
a
there's,
a
good
name
for
this,
but
like
there
there's
effectively
like
a
thing,
you
can
do
with
like
web
pages,
where
you
just
basically
try
all
of
like
the
various
hashes
to
like
see
what
user
data
exists
in
something
that
is
like
otherwise
opaque
to
you
like?
A
So yeah, that's a good question too. So we do have restrictions; but before we get into that...
A
So
we
the
way
we
do
access,
is
name
space
scope,
not
user
scope,
the
reason
being
if
if,
if
one
namespace
has
access
to
a
bucket
pretty
much
all
if
one
part
in
a
namespace
has
access
to
a
bucket
pretty
much
all
the
pods
in
that
namespace
have
access
to
the
same
bucket,
because
it's
assumed
that
within
a
namespace,
it's
free
for
all
a
user
that
can
access
that
that
has
access
control,
enabled
at
the
name
space
level
or
there
is
no
user
access
control
restriction
at
a
level
more
granular
than
name
space.
D
So I don't think that's true. I think you can create users that don't have access to everything in the namespace. Is that possible? Yeah, I mean, when you create a namespace there's a default user for that namespace, and that user will have all the permissions in the namespace, but you can create additional service accounts or other users and not give them everything in the namespace.
A
We
we
in
in
the
cluster
role
bindings
and
the
cluster
role
definitions,
it's
not
possible
to
restrict
at
the
names
anything
lower
than
the
namespace
level.
A
Right
right,
so
so
the
idea
is:
if,
if
someone
has
access
to
a
bucket
in
well,
you
could
get
the
so
so
is
it
possible
for
a
user
to
be
restricted
from
say,
a
specific
conflict
back
it's
either.
They
have
yes.
D
Yeah, double-check the role bindings, but there is this resourceNames field that lets you further restrict to specific resources.
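For reference, a sketch of the Kubernetes RBAC behavior mentioned here: a Role rule's optional resourceNames list narrows a grant to specific named objects. The rule below is modeled as a plain dict, and the API group and object names are made up for illustration.

```python
# Sketch of RBAC rule evaluation with resourceNames (shape mirrors a Role
# rule from rbac.authorization.k8s.io; names here are placeholders).
rule = {
    "apiGroups": ["objectstorage.example.io"],  # illustrative group
    "resources": ["bucketclaims"],
    "resourceNames": ["photos-claim"],          # only this named object
    "verbs": ["get"],
}

def rule_allows(rule, resource, name, verb):
    """True if the rule grants `verb` on the named resource instance."""
    if resource not in rule["resources"] or verb not in rule["verbs"]:
        return False
    names = rule.get("resourceNames")
    # An absent resourceNames list means "all objects of this resource".
    return names is None or name in names

print(rule_allows(rule, "bucketclaims", "photos-claim", "get"))  # True
print(rule_allows(rule, "bucketclaims", "admin-claim", "get"))   # False
```

So object-level restriction within a namespace is possible via RBAC, even though the bucket design itself scopes access to the namespace.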
E
I guess I might be asking a slightly different question. I'm specifically asking around brownfield buckets. So, you know, let's assume that an administrator wants users to be able to consume buckets through COSI that they previously had manually created. Or maybe, let's assume that the administrator has a special bucket that they have created that they keep...
E
...you know, some information in that they want to have, whatever it is. Is there a mechanism that prevents users from making brownfield claims at random? Like, you know, if the administrator is like: this is my bucket, where I store all of the information about all the users that are allowed, or whatever, yeah.
D
I can explain how we do it for snapshots, because I think the snapshot model for brownfield and this brownfield model for buckets are very similar.
A
Right, right, we're coming up with something very similar here too. Currently, in the current model, we say that nobody outside the creating namespace can access a bucket, so bucket sharing isn't possible. But in the next version, going with the last discussion, what we decided was to have a policy on the bucket that restricts buckets to certain namespaces alone.
A
So that's the level of access control we have over who gets to use a bucket. So even if someone were to guess the name, this restriction would prevent them from using it.
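A hedged sketch of the proposed sharing policy: a field on the Bucket listing which namespaces may claim it, so guessing a bucket's name is not enough to gain access. The field name below is a placeholder, not the KEP's actual spelling.

```python
# Illustrative sketch of a namespace-scoped bucket-sharing policy.
bucket = {
    "name": "admin-audit-logs",
    "allowedNamespaces": ["security", "audit"],  # hypothetical policy field
}

def namespace_may_claim(bucket: dict, namespace: str) -> bool:
    """True only if the bucket's policy explicitly lists the namespace."""
    allowed = bucket.get("allowedNamespaces")
    # No policy set: the bucket stays private to its creating namespace
    # (that default path is not modeled here).
    return allowed is not None and namespace in allowed

print(namespace_may_claim(bucket, "audit"))     # True
print(namespace_may_claim(bucket, "dev-team"))  # False
```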
A
We're going to add this as a bucket sharing policy, as a field, in the next revision.
E
I guess my point is: a bucket in the backend... well, because I deal with Ceph, so let's say a bucket that exists in Ceph doesn't have a Kubernetes namespace associated with it. It's just a bucket in the object store. I'm concerned about any user being able to create a bucket claim for a brownfield bucket that has a pre-existing bucket in the backend.
E
There is a possibility, and I don't know if it is ruled out by the current design, but there's a possibility in some worlds where the implementation would allow the user to say: I'm going to make a bunch of brownfield bucket requests to try to discover, and get malicious access to, actual backend buckets.
A
So it's not possible to create a... so bucket claims can only be used to create new buckets; the best you can do is refer to an existing Bucket object. But you can't use it to specify a backend bucket directly.
E
Okay,
so
it's
still,
it's
still
a
requirement
that
the
administrator
would
have
to
create
a
button
for
the
backend
bucket
and
then
a
bucket
claim
for
that
brownfield
bucket.
Okay,
I
guess
I
could
have
asked
that
question
before.
E
You answered it first, so yeah, thanks for your patience.
A
Of
course
you
can
ask
as
many
questions
as
you
need
yeah.
There
is
no
need
to
even
apologize
and
and
yeah
we
can
okay.
So
I
hope
that
answers
that
question,
but
coming
back
to
namespace
level
access
so
so
we
we've
always
worked
with
the
with
the
understanding
that
we
don't
need
user
level
bucket
access
control,
we've
always
scoped
it
to
the
name
space
now.
A
I
think
we
can
how
about
this
michelle?
We
we
can
proceed
with
this
model
for
for
this
version,
and
and
we
can,
we
can
reopen
it
and
constrain
it
further
for
the
next
version,
because
I
would
like
ben
to
be
here
ben
kind
of
wanted
to,
or
he
had
some
good
points
about
about.
Why
name,
space
level
bucket
access
control
is
all
that's
needed
and
and
it'll
be
good.
A
If
he's
here,
when
we,
when
we
make
changes
to
it
and
he's
not
available
today,
but
but
to
proceed
for
now
with
the
alpha
version,
do
we
need
a
stronger
access
control
model,
or
can
we
work
with
this
for
now,
which
act.
D
No, I think that's fine; that wasn't a huge concern for me. I think my main concern was having the bucket name in the bucket access claim object, because that one basically just lets you do what was being discussed, which was guess at bucket names and you just get...
A
Access
yeah,
I
mean
bucket
claim-
can
still
do
that,
but
yeah
I
see
what
you
mean.
A
Understood, okay. So yeah, we did already decide, I think during the COSI meeting, that we will change our method to always go through the bucket claim, to address that concern, yeah. So I felt like protocol, and following the more intuitive snapshot model for accessing buckets, were two of the main concerns. Is there anything else that you think we should address during the course of this meeting?
A
One
one
of
them
that
comes
to
mind?
Is
you
wanted
to
call
them
driver
methods
instead
of
provisional
methods?
So
all
of
the
grpc
calls
today
use
the
prefix
provisional,
create
bucket
or
provision
or
grant
access
yeah.
It's
just.
I
think
we
can
change
that
to
driver,
create
bucket
and
driver
grant
access,
and
so
on
so
just
wanted
to
bring
it
up,
and
you
know
confirm
that
we
can
change
it.
A
Right, where the user specifies it, because the application is the one responsible for working with that style. That was a portability concern, as in: if you move from one provider to another and the second provider didn't support, say, a particular IAM or authentication method.
A
So initially I wanted to go with this approach, but the argument that came up was that storage class today is where you specify something like the file system. Let's say your application relied on a particular file system feature: currently you don't get to say that in the PVCs.
A
So I wanted to go this way because it's the application that knows what it can work with; the application should... or the application...
A
That being said, there is still that problem: if someone writes an application that looks only for access keys and secret keys, and we can't provide them, then yes, the application wouldn't really be usable in that environment, even though the YAML definitions are technically working.
D
Yeah, I guess I wasn't aware. I think you brought up a good point that the client libraries today can already seamlessly handle the difference between IAM and key-based auth; that was something I wasn't aware of. So I think I am less concerned about needing to expose the auth method, but I do think that protocol is still very important and needs to be specified by the application.
A
Yeah
yeah
yeah,
especially
if
you're
supporting
multiple
protocols
yeah
the
application,
would
have
to
say
which
exactly
it
wants.
It's
a
requirement
now,
yeah
agreed.
We
need
it
yeah.
A
All
right,
so
so,
just
in
terms
of
change
that
we
need
to
make
it's
it's
protocol,
changes
which
is
having
multiple
protocols
in
a
bucket
and
having
the
protocol
name
and
the
bucket
claim
and
and
two
is
with
always
going
through
a
bucket
claim.
You
you're
not
directly
going
to
a
bucket
bucket
object
if
you
want
to
gain
access
to
it
talking
about
protocol.
So
so
let's
say
we.
A
So
so
let
me
ask
you
this
way,
so
it
wouldn't
just
be
a
protocol
field
in
in
bucket
claim
right.
It
would
be
a
list
of
protocols
right.
D
But I mean, there might be some theoretical wrapper that, you know, switches between various SDKs, in which case an application could technically support multiple protocols.
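That theoretical wrapper could look something like this: application code that dispatches to a protocol-specific client based on whichever protocol was actually granted. The client classes here are stand-ins, not real SDKs.

```python
# Hypothetical multi-protocol wrapper; S3Client/GCSClient are stand-ins.
class S3Client:
    def get(self, bucket, key):
        return f"s3://{bucket}/{key}"

class GCSClient:
    def get(self, bucket, key):
        return f"gs://{bucket}/{key}"

CLIENTS = {"s3": S3Client, "gcs": GCSClient}

def make_client(granted_protocol: str):
    """Pick an SDK wrapper for whichever protocol was actually granted."""
    return CLIENTS[granted_protocol]()

print(make_client("gcs").get("photos", "cat.jpg"))  # gs://photos/cat.jpg
```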
A
I was also asking this in terms of bucket creation: when someone creates a bucket claim, would they specify all the protocols that it should support?
A
Right, so yeah, the confusion here is: how would they design an application that can work with something like an "or" like that? So let's say a user requests a bucket which supports GCS or S3. While requesting access to that bucket, would they say the "or" again? So, while creating the bucket they would say either-or, and when accessing the bucket they'd create a bucket access object for it; would that object have all the protocols, or one of them?
A
I mean, that's the point: the bucket access is the point of reference for the application, so the bucket access is what goes into the pod. And yes, absolutely we need it, because access control is defined in the protocol. So...
D
But the application also has... they can access the bucket claim too.
A
Oh, so the difference here is, and that's where we differ from PVCs and PVs, I guess: with PVCs, when you claim a PVC, it's a claim to the actual drive or underlying storage itself, whereas a bucket is inherently meant to be multi-user. So gaining access... we don't actually provide the underlying storage into a pod.
A
We just give credentials and the endpoint, as a set of key-value pairs, into the pod; we just mount that into a file. So that same model doesn't apply here, as in: the bucket claim is just a claim to create a bucket.
D
Yeah, then I guess in that case you still need the protocol in the bucket access claim. I guess the question is, though: at provisioning time, do you still need it? I mean, for me it's mostly a matter of how early we can fail, but I guess, you know, it's okay if we fail post-provisioning, yeah.
A
That
will
work
so
so
to
summarize
that,
then,
while
creating
a
bucket,
they
wouldn't
have
to
specify
the
protocol
in
the
bucket
claim
what
they
would
do
is
they
would
refer
to
a
bucket
class,
which
has
a
set
of
listed
protocols
and
and
the
bucket
would
get
provisioned
if
any
one
of
them
is
supported.
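A sketch of that negotiation, with illustrative names only: the claim points at a class, and provisioning picks any class-listed protocol the driver supports.

```python
# Illustrative protocol negotiation at provisioning time.
bucket_class = {"name": "standard", "protocols": ["gcs", "s3"]}
driver_supported = {"s3", "azureblob"}

def negotiate_protocol(bucket_class: dict, driver_supported: set):
    """Return the first class-listed protocol the driver supports, else None
    (None meaning provisioning fails, post-claim rather than at claim time)."""
    for proto in bucket_class["protocols"]:
        if proto in driver_supported:
            return proto
    return None

print(negotiate_protocol(bucket_class, driver_supported))  # s3
print(negotiate_protocol(bucket_class, {"azureblob"}))     # None
```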
D
I think it would be helpful to kind of understand why it's done this way.
A
I
can
do
that,
yes,
yeah.
I
think
I
think
that
cleans
up.
I
think
that
cleans
up
the
model.
I
think
that
that's
that's!
That's
a
good
approach,
okay,
so
so
michelle
I
want
to
do
whatever
it
takes
actually
to
to
make
sure
we
get
it
through.
Today
I
mean
as
much
as
I
can
do
from
my
site,
so
yeah
I'll
address
these
issues
and
update
the
cap
as
soon
as
possible.
Is
there
any
other
concern
that
that
you'd
like
us
to
address
now.
A
Yeah, it would still be good if we can, you know, hit this deadline.
D
Yeah, I mean, I personally don't think we need to stick with the schedule, but I'll definitely, you know, try to watch out for the updates that you have, and I'll try to look at it today or tomorrow.
A
Thank
you,
yeah
that'll
help.
Okay,
so
is
there
anything
else
anyone
wants
to
bring
up.
A
All
right,
okay,
so
we
have
10
minutes
left.
I
don't
have
anything
else.
We
can
end
early,
I'm
I'm
I'm
going
to
jump
on.
You
know,
fixing
or
updating
the
cap
right
away
and
as
soon
as
I
updated
I'll
I'll
post
it
on
on
the
six
storage
cozy
channel
and
I'll
also
send
you
a
message
message
michelle:
if
that's
okay,
just
so
that
you,
you
know
that
it's
updated
yeah.
D
Sounds good to me; I am not good at keeping up with emails or GitHub notifications.