From YouTube: 2022-03-05 Object Storage WG
A: Okay, so now we are recording. This is the Object Storage Working Group, and today is April the 5th. We were chit-chatting about the schedule before the agenda: Matthias was pointing out that the morning meeting — morning for us in EMEA — is a little lightly populated. So we were discussing whether it makes sense to reconsider and do everything in the afternoon slot, which is more Americas-friendly.
B: Another option could be to just do it bi-weekly, every other week. It fluctuates a bit how much there is to talk about: sometimes a ton of things come up at the same time and then half an hour is almost too tight to go through it all, and other times there was almost nothing. I think we skipped once or twice as well and just did an async update.
A: Yeah, we could even say that if there's nothing on the agenda, we just cancel the meeting. That way we keep the opportunity for participation that is more diverse in terms of time zones, and if there's nothing to talk about, we simply cancel. That could be a good approach. Okay, so let's start with the agenda.
A: If you don't mind, I will skip point two, because it's kind of part of point four, and go to point three first: the encryption requirement. I was reading the agendas of past meetings and chatting with Matthias as well as with Jakob, and the encryption requirements came up again — not really in a discussion; rather, something new was shipped in the product that included a new bucket, again because of the encryption-at-rest requirements.
When we did the first gathering of requirements — which was more a collection of what we already have than a real requirements gathering — we kind of glossed over it. We said: okay, there is this feature here, in that case the Terraform state file, that requires encryption, but we didn't really spend much time on it. Before we end up with three, four, or five different implementations of this, I think it's worth figuring out what we really need.
So maybe we open a new issue — or, if one is already there, continue on that one — and try to figure out what options we have, because the way it's implemented doesn't really scale well. We're doing decryption in the controller; I think the encryption is done in callbacks or something like that in ActiveRecord, so we can't do direct upload, et cetera. So what are the requirements here? If it's just encryption at rest, can we leverage object storage encryption? What are we missing?
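The pattern described here — encrypting in application code (e.g. an ActiveRecord callback) before the bytes are written, and decrypting in the controller when reading — can be sketched roughly as follows. This is a minimal illustration using Ruby's OpenSSL bindings, not GitLab's actual implementation; the module and method names are hypothetical.

```ruby
require "openssl"

# Minimal sketch of application-level encryption at rest, roughly the pattern
# the discussion describes (encrypt before writing, decrypt when reading).
# All names here are hypothetical, not GitLab's actual code.
module AtRestCrypto
  CIPHER = "aes-256-gcm"

  # Encrypts plaintext with a random IV; returns the IV, auth tag and
  # ciphertext, which would all have to be stored alongside the record.
  def self.encrypt(plaintext, key)
    cipher = OpenSSL::Cipher.new(CIPHER).encrypt
    cipher.key = key
    iv = cipher.random_iv
    ciphertext = cipher.update(plaintext) + cipher.final
    { iv: iv, tag: cipher.auth_tag, data: ciphertext }
  end

  def self.decrypt(blob, key)
    cipher = OpenSSL::Cipher.new(CIPHER).decrypt
    cipher.key = key
    cipher.iv = blob[:iv]
    cipher.auth_tag = blob[:tag]
    cipher.update(blob[:data]) + cipher.final
  end
end
```

The scaling problem mentioned above follows directly from this shape: because the bytes must pass through Ruby to be encrypted, the upload cannot be handed off to Workhorse or sent straight to object storage.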
B: Yeah, we briefly talked about it yesterday in our one-on-one, and I think it would be super helpful. I just think developers probably don't even know what is okay to do right now and which practices we should just stop doing. So maybe it would be helpful to hand out some guidance around which things to stay away from and which things are still okay to do, independent of a specific implementation.
We kind of know what we want to move away from, so that we do not build up even more cases that we might eventually have to fix or migrate.
So a good next step could be to think about this and have something we can point people to. We can't just say "stop doing X" and then not offer an alternative; I think we need some clear guidance on how to implement a new upload.
A: Yeah, I totally agree, and I can add to this. When we did the first categorization of uploads and wrote some guidelines on how to upload in GitLab, some people read them, but during review it was really hard to pinpoint that an MR introduced a new upload, and maybe the reviewer or the maintainers were not really aware of the guidelines. What really changed this was introducing RuboCop rules that detect new uploads. In the API they detect that we're just adding a file, and that is a red flag, because if a file upload is happening there, it means it's not going through Workhorse.
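The real mechanism described here is a set of proper RuboCop cops operating on the Ruby AST inside the GitLab codebase. As a toy illustration of the idea only — flag code that introduces a raw file parameter, since that suggests an upload bypassing Workhorse — a simplified text-based check might look like this (the patterns and helper name are invented for the sketch):

```ruby
# Toy illustration of the idea behind the RuboCop rules mentioned above:
# flag API code that introduces a file parameter. The real rules are
# RuboCop cops working on the AST; this sketch just scans source text.
UPLOAD_PATTERNS = [
  /requires\s+:\w+,\s*type:\s*(::)?(API::)?File\b/, # Grape-style file param
  /params\[:file\]/                                 # direct access to an uploaded file
].freeze

# Returns the 1-based line numbers that look like a new upload entry point.
def suspicious_upload_lines(source)
  source.each_line.with_index(1).select do |line, _idx|
    UPLOAD_PATTERNS.any? { |re| line =~ re }
  end.map { |_line, idx| idx }
end
```

Running such a check in CI is what turns the guideline from "hope the reviewer notices" into an automatic red flag on the merge request.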
A: We can make the documentation better if it's not working, but the rules reduced the chances that something got merged into master simply because nobody knew there was a guideline. And yes, as you said, we also need to propose an alternative: we can't just say "you can't do encryption at rest the way we are doing it" and not provide one.
That's why I was suggesting something that really focuses on that one. Of the features we have in the product, encryption at rest is the only one that is clearly not correct to implement this way but has no alternative. For the other ones we have alternatives — they may be a lot of work, but, say, implementing direct upload is an alternative — and we are discussing how we can move this into better shape.
A: We made a decision to use that specific gem — I don't remember the name — for the encryption, which maybe was the boring solution, and that's fine, but there are competing requirements for running gitlab.com and things like that. We have to balance all of those together, and if it ends up that we need to implement or extend something in Workhorse or in direct upload or whatever, we should know it, and we should try to figure out if we can schedule it. Otherwise it will continue bleeding.
B: I'm just wondering what action items we can take — even something small that moves us in the right direction. I have no idea how much work this encryption issue would be. I'm not even sure I fully understand the necessity of it, because we already have transport encryption, so it's not like we send data over the wire in plain text, and we have encryption at rest at the object storage provider level. So I'm not sure.
A: Yeah, the provider can do it all for you — I think that one is implemented, but I think none of our buckets is configured that way, or if they are, it's kind of different. The point here, if I remember — and I'm specifically talking about the Terraform state file — is that Terraform state files are a clear-text representation of the infrastructure, including all the information.
So if an object has a password in it, there will be a password in the state file. That's the problem with Terraform state files, and I think the baseline of the conversation on this feature was that a customer of GitLab is storing that information in a bucket which is potentially shared among other users of the system. That's what started the conversation: we should make sure there's no way to get the state files of another client, another customer, or anything like that.
B: So this is maybe a discussion we should pick up with that team again, because it sounds like they're the first ones running into it. And to come back to the guard rails: now it seems like they cannot use direct upload, and that's not good. I did look at the MR when they first opened it, but it was a while ago.
So I'm not sure I remember, but if they moved to background uploads, that would be a problem, because we just deprecated those and they're going to be removed in a good month from now — less than two months. So that is something we would have to look at very soon.
A: If I remember how those things work, we are usually dealing with small files — small instances — that can be encrypted and uploaded within the controller's time without running into a timeout error. But still.
This is like when we had the problem with Git LFS uploads: it worked, and that one was even direct uploaded, but you still had the five-gigabyte limit, and once we moved past the five-gigabyte limit because of timeouts, someone said "I was uploading a 15-gigabyte LFS object and it's timing out." It's the same thing here: maybe today someone has a state file which is several megabytes and we can process it within the controller timeout limits, but that won't hold forever.
B: Yeah, this is maybe going into the question about future direction again. I opened an issue to discuss this. My feeling — though of course I want to know what everyone thinks about it, and I got some feedback from Jakob around this already — is that we should deprecate this buffering as well.
We should move away from doing this, because even if you don't think about Sidekiq as part of the question, it definitely means that Workhorse and Rails need to share a disk. That is currently necessary not just for uploads but for other things as well, but I know that Distribution is not happy about it.
B: That is certainly something that will get in the way. Another thing I don't like about this buffering is that it was a great solution at the time, as a workaround, to address something that was immediately painful — uploading large files in Rails controllers — but by design it is not a great solution, because it's just fixing a symptom, not the cause.
The cause is really that we shouldn't upload files through Rails controllers if we can do it in a more efficient way. We should just do all of this through direct uploads — that is my opinion — even if it's local files. That's another discussion we had separately: how can we upload to a node that uses local storage, through Workhorse? It is possible; it requires changes. It's just that we have so many disparate mechanisms for moving a file to its destination.
It makes it really hard to understand how this works. Some of those mechanisms served their purpose, and maybe now it's time to look at consolidating all of that. That's why I would be against saying "okay, just because the files are small, we send them through this buffered path and again bypass direct upload." It just adds, or maintains, complexity that I was hoping we could move away from.
A: Yeah, I totally agree. I will also double down on the fact that even if a direct upload happens to land on disk, it can't work on Kubernetes, because even within the same pod, Workhorse and Puma run in two different containers, and there is a configuration for which part of the disk is shared — which is only the temp directory, where the intermediate file lives while things are being moved.
That being said, what you said kind of opens the conversation on point number four, because there is the proposal about supporting attachments without CarrierWave, and there is Jakob's proposal about unified blob storage, and I think these two are not alternative proposals.
It's just that the first one puts the focus and more effort on "direct upload should be the default": a unified authentication point, a unified structure, a unified way of handling every file that goes from the user to the application. It leaves as an exercise for the reader the discussion of how we implement the database structure for it, and it was kind of modeled on how Active Storage behaves, because there was interest in testing whether it can be done with Active Storage, and things like that.
A: Jakob's proposal, on the other hand, is just tackling the storage: the database representation of stored files, and retrieval. It is not touching at all, not thinking about, how we authorize or anything like that — except for minor cosmetic changes, like sending back a file URL instead of an empty location. It is more about storing and retrieving this information in an efficient way without doing a huge data migration, so that it can go back to the old tables if things are stored in the old tables, or to the new tables, and so on. And the first issue — not this one — points to what you were saying, Matthias.
If we don't have a centralized, standard authorization for uploads that is already there, we can't really ask every engineer working on a new feature to implement a new authorizer and a new finalizer and to understand all the complexity around this. Without that, we can't really say "this has to be transparent to you; this is the only way of doing it," because someone will always say "this is the first iteration" or "we are just trying to figure out if this feature works."
"So I can't afford the extra work to do, say, the full-scale solution." That was my point. I would also like to hear from David here — last time was two months ago, so I don't know what happened in the meantime — whether you actually made some progress, or found some time, or are in some way working on the authorization part of it. I'd just like to know about it.
C: Yeah, I've been working on the single-point authorization. I actually have a proof of concept working now; I need to put it together and draw some conclusions. It's on a different issue than this one — I will link the issue on the agenda. On the proposals, between removing CarrierWave and Jakob's proposal, I think there is a middle ground where we can have both, because they are really, really similar.
B: That was my first thought when I read both. Of course you have more understanding than I do, but they sounded quite similar to me, except I think you had an additional join table, which is a bit more flexible.
C: The idea would be to try to support this new blob table and have a look at what changes are needed for it, and also for the attachments table.
B: No, go ahead. Oh, thanks. Sorry, just a naive question, I guess — I always try to stitch this stuff together in my head with the existing mechanisms and how we would migrate to this. With this buffering, we have this generic route that requests might fall into when they are not explicitly routed in Workhorse.
B: If we were to migrate to your suggested approach, or Jakob's, or something like that, that would replace the table structure and the logic that CarrierWave is currently responsible for. If we keep this buffering around, that would also mean we'd need a mechanism that does not use direct upload to still move files to a destination from Rails, because CarrierWave is a little bit more than just a blob table or an uploads table — it contains all of the mechanisms to actually upload files somewhere as well. Do you see where I'm getting with this?
C: Yeah, I guess the perfect example would be uploads done by background jobs, where you can't use direct upload at all. For this I would take good inspiration from the attach API of Active Storage.
There you have your business model, and you declare that a field is an uploaded file, and this generates the attach method. It's quite versatile: it can accept a blob ID, or it can accept an IO buffer, if I'm not wrong. Just with those two you already cover both usages, because you can simply pass the blob ID.
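The dispatch being described — one attach entry point that accepts either an existing blob ID or an IO whose bytes still need storing — can be sketched like this. This is loosely modeled on Active Storage's attach API as described above; the class, method, and store interface are all hypothetical, not GitLab's or Rails' actual code.

```ruby
require "stringio"

# Sketch of a versatile `attach`: accepts an existing blob ID (blob already
# uploaded, e.g. via direct upload) or an IO (bytes still need writing, e.g.
# from a background job). All names here are hypothetical.
class Attachment
  attr_reader :blob_id

  def initialize(blob_store)
    @store = blob_store # anything responding to `create_from_io(io) -> id`
  end

  def attach(source)
    @blob_id =
      case source
      when Integer then source                         # reference existing blob
      when IO, StringIO then @store.create_from_io(source) # upload the bytes
      else raise ArgumentError, "expected a blob id or an IO, got #{source.class}"
      end
  end
end
```

The point of the design is that callers never care which path is taken: a controller finishing a direct upload passes an ID, while a background job passes a stream, and everything downstream is identical.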
B: But we would definitely have to re-implement the bits that CarrierWave abstracted away behind the store call, where we actually issue an HTTP request somewhere — to object storage or whatever the destination is — because we have these use cases where, like you said, I think it's in import/export, where we have...
It's not what we call background upload, but just a Sidekiq job in which we upload directly to object storage, and with CarrierWave that's currently very easy, because you say something like "the storage type is Fog," and then you call store, and it moves it there.
Those bits — the transport that might still have to happen in Ruby in a Sidekiq job — are the bits we would have to completely re-implement.
A: And just because of this, there are limits: with huge files this can't work that way. When you have something like David was suggesting, where you can just send it a StringIO object, it basically means you can write something that generates the thing to upload in memory, while the other side consumes it and writes it directly to the object storage destination. What is important here — and it was covered in Jakob's proposal — is that this should not happen in transaction time.
So when you attach something — let's take the blob ID case, and let's use Jakob's proposal as an example — the point is that you generate the blob. I think it has a state field or something like that, which starts as "created." You generate it as created, and then the transaction is completed. Then you start uploading, and at the end of the upload you can open a new transaction.
There you say "okay, this is now finalized," which is the same thing that happens with direct upload with Workhorse in front: you do the authorization call, the blob is created, then control goes back to Workhorse, Workhorse does the upload, and then you finally reach the controller endpoint, and the controller endpoint says "this is completed; now it's finalized."
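The two-phase lifecycle just described — commit the blob row as "created" before the transfer, keep the (possibly slow) upload outside any database transaction, then flip the state to "finalized" in a second short transaction — can be sketched as follows. The field and method names are illustrative; Jakob's proposal may use different ones.

```ruby
# Sketch of the two-phase blob lifecycle described above. Names are
# illustrative assumptions, not the actual proposal's schema.
class Blob
  attr_reader :state

  def initialize
    @state = :created # first transaction: the row exists, no bytes yet
  end

  # Second transaction, after the upload completed successfully.
  def finalize!
    raise "cannot finalize a #{@state} blob" unless @state == :created
    @state = :finalized
  end

  def finalized?
    @state == :finalized
  end
end

# Upload flow: create the row, do the long-running transfer with no DB
# transaction held open, then finalize. `uploader` stands in for the
# storage client (a callable taking the IO to transfer).
def upload_blob(io, uploader)
  blob = Blob.new
  uploader.call(io)
  blob.finalize!
  blob
end
```

A blob that is still in the "created" state after some grace period can then be treated as an abandoned upload and cleaned up, which is exactly what the authorize-then-finalize flow with Workhorse enables.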
There are clients for doing this: we can finally use the official clients for S3 and the other providers. I mean, we're already using them, but we can remove all the cruft around them. Right now we use Fog, which is a third-party implementation, and then there's CarrierWave around it, and our monkey patching. We'd remove all of this and just have plain tables, plain clients, and again plain tables.
C: Yeah, I think the main point is that we have plain tables, and the clients would just be used by services really similar to the Active Storage services: you have the S3 service, the GCS service, and so on, which just implement the upload, or the ways to create blob IDs or new blobs, and everything is packed together.
So to answer your question on background jobs uploading files: they would just attach a stream, and then it's the responsibility of that service to upload the stream to a given blob, or to a blob that needs to be created. To me it would be fully transparent for developers: they would just attach things, and that's it — everything else is taken care of.
A: Another point that was raised is the reference to disk buffering as a default — a technical detail here, because direct upload can downgrade to disk buffering as the default. This buffering is a trick: in Workhorse it's exactly the same handler that deals with direct upload, but instead of making a real API request, it makes a fake API request that is answered by the Workhorse code directly, which gives back an answer saying "yes, this is authorized, and I have no clue about object storage."
"So please write this down to temporary storage." Which means that once we have the single-endpoint authorization, we just wire that API call into the single authorization point, and we basically move the control over to the Rails side. Let's say we start implementing and we only have this for GCS in the first iteration: we know the configuration, because we are in Rails, and we say "okay, this is GCS; we can give the new answer; we authorize it."
We give the final destination and just remove the delete URL, so Workhorse will not delete the file at the end — we have very granular control over it. And if it's something like Azure and we haven't implemented that yet, we can just give back the old answer, which is "do this on file." Once we've migrated everything, we can say "okay, now it's fine to remove all the old logic around this."
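The per-provider decision just described — answer with real object storage parameters when a provider has a direct-upload implementation, otherwise fall back to the old "write to a local file" answer — can be sketched like this. The field names and provider list are made up for illustration, not the actual Workhorse authorize API.

```ruby
# Sketch of the single authorization endpoint's decision described above.
# Field names and the provider list are illustrative assumptions.
SUPPORTED_PROVIDERS = %w[gcs s3].freeze

def authorize_upload(provider, object_name)
  if SUPPORTED_PROVIDERS.include?(provider)
    {
      type: :object_storage,
      provider: provider,
      remote_object: object_name,
      # No delete URL: Workhorse must not remove the object afterwards.
      delete_url: nil
    }
  else
    # Old behavior, e.g. for a provider not yet implemented: buffer on disk.
    { type: :local_file, tmp_path: "/tmp/uploads/#{object_name}" }
  end
end
```

Because the decision lives on the Rails side, migrating one provider at a time is just a matter of moving it from the fallback branch to the supported list, with no change needed in the callers.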
A: That one — so everything that does not have a specific implementation for direct upload is handled this way.
A: Yeah, the fallback. But my point is that once you have a single endpoint for authorization, you can start providing real object storage answers to those API calls that previously didn't have an authorization endpoint. I think Jakob's proposal kind of leaned into that, because it was referring to generating a single URL, which could be in the uploads bucket.
A: Long term. But the point is that Jakob's description puts a lot of effort into the fallback mechanism and into making sure you don't have to migrate stuff immediately to enjoy the benefits of the new system, so you would be able to keep things either on disk or in the new unified blob storage.
B: Right, I think we've passed time already, but in terms of action items: because this is all connected and a bit circular, it does sound like the single auth endpoint — since it keeps coming up — is the most reasonable thing to focus on right now, and David is already looking into it. That's great. In terms of preventing things from proliferating further, you mentioned that for new features being added...
...we should just always use the uploads bucket, because it already exists and it's kind of generic. Is that something that should already be in the documentation? Because it's not documented currently. So if developers add a new feature that requires a new upload, they don't know that, and they might just add a new bucket — and maybe sometimes that's necessary for some reason, I don't know. But is that something we should do now, or...?
A: Yeah, and maybe we want to provide some examples of good reasons — maybe we don't, but that's something we can just discuss in the review phase.
Depending on how strongly we feel about this, we can change the wording: we can prescribe it and say "this has to be this way, no matter what," or we can just suggest it and then try to socialize the idea a bit more. I think we are all in agreement that with a single bucket many things would be easier to handle, but none of these proposals really needs a single bucket.
A: I'll try to summarize what we discussed today and upload the video, and then see what we can do about discussing the encryption requirements. You added a link — is that an issue that is already there? It is, okay. So thank you for your time, and see you soon.