From YouTube: 2021-12-07: Object Storage Working Group
B
Oh, let me go to today's agenda. Okay, so let's keep going through the action items that are in progress. The owner of this one isn't here, so I can give the update on describing the current status of object storage. Good progress, I think, good progress. We have these big tables where we are still documenting some of the use cases.
B
There are some features that are special cases as well, and we found a feature that is probably no longer in use but is still documented and still part of the codebase. So yeah, good progress there.
B
Actually, the question now is how hard some of them are to implement, because I'm expecting that we will have different solutions. Maybe one is technically better but too hard or nearly impossible to implement, and another is a middle ground that is easier to start with or gives us a more iterative approach. Something like that, right? So this is something that I think we are ready to start tackling.
B
Before going through it, and if you have questions please interrupt me, but still on the action items: there is one missing point here, which is understanding whether there are inputs from the customer side of things that we should take into account when we're planning the new solution. So maybe this is the time to make sure we have those inputs, if we want to take them into consideration.
B
Okay,
I
have
the
first
real
agenda
item,
which
is
making
a
decision
about
platform
providers
support.
So
I
was
doing
some
kind
of
archaeology
in
the
last
week
and
I
found
a
very
old
comment.
One
year
old.
I
think
that
son
made
about
basically
making
a
decision
about
what
we
want
to
support,
because
we
have
direct
upload
and
background
uploads
for
those
that
are
not
familiar.
B
Direct upload is when things get uploaded by Workhorse itself, in transit, when we upload something. Not every object storage provider supports it, so there is this kind of backward-compatible mechanism, background uploads, which means that the file goes straight to the Rails controller, and then the Rails controller uploads it asynchronously with a Sidekiq job.
B
The problem is that if you have an installation that is not running on NFS, these things don't work at all, because there's no shared storage between the Sidekiq node pool and the controllers.
B
So yeah, I mean, I think this is probably one of the biggest deprecations that we have among the topics of this working group, so it's worth considering whether to act on this now. And I don't have a real... I was trying to build another item for today, but yeah.
B
I wasn't aware of a strict end of life for its support, so I'm glad that you found it, Gregor.
A
Yeah, it's possible, so yeah. But still, I remember an issue, or I...
B
I think we never acted on this, that's the problem. We tried to say "let's remove it, let's remove it", we removed it on GitLab.com, and the thing that you linked, the first section, just says it's not recommended. We never made the extra step of saying it's deprecated; it just says it's not recommended. But I remember...
B
Okay, so I think this is something worth exploring so that we can build things together. Let's say, if 14.12 is the deprecation date for NFS on Gitaly, maybe we can try to make a decision: can we deprecate it globally by 15.0, so that we can remove this type of complexity from the solution?
A
Yeah. Also, I think what could be useful is to elaborate on this summary of the requirements a bit, because right now it's a table and it's very difficult to understand what each entry means without, you know, searching through the topic in the issue. Perhaps having a summary with a few sentences explaining what a given requirement means could be useful. I think that might make sense.
B
Yeah, I totally agree on that. I was trying to summarize for another reason, which was...
B
I think it's time to start wrapping up this issue, so if someone wants to keep moving on this one, I think it's the right time for doing that. I also want to say that, compared to what we said initially, some of the things that we discussed are kind of hidden behind some of these requirements, so I just want to make an example.
B
Yeah, I mean, the thing is, this is something that I wanted to mention; this is why I said this was for something else. So we have Workhorse code that is capable of doing this.
B
It's already there. We could extract the handling code from there and build something new; no, this is not a problem. I mean, I think the right requirement is offloading data handling from the Rails controller, because we don't want to have the upload in the Rails controller.
B
So I'm absolutely fine with doing this. Let me try to explain why I was doing it this way first, so it makes more sense. I was trying to build up a proposal based on the current model that we have and make it, let's say, the minimum viable change to get there, and my idea was to try to figure out whether what I have in mind makes sense with what we were discussing here.
B
So, very briefly, the idea was to make a PoC based on the artifacts metadata, because the artifacts metadata is an upload that is already happening right now, but it ends up on disk because of some limits of the current solution. So it was kind of building on that.
B
I don't know what the link is; there's an old link from Camille proposing the introduction of an internal API, so that when we reach the point of uploading the artifacts metadata, which is a very specific point in the Workhorse codebase, instead of doing what we are doing today, we hit this new internal endpoint asking for a pre-signed URL, and then we use Workhorse's ability to offload the upload to object storage. So basically my idea was that we can just change a little thing and try to see if this internal API idea works, and basically something that I was discussing with...
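A minimal sketch of the pre-signed URL step described above, assuming an S3-compatible provider and the AWS SDK for Go; the bucket, key, region and expiry are illustrative placeholders, not the actual internal API:

```go
// Hypothetical sketch: generate the kind of pre-signed PUT URL the proposed
// internal endpoint could return, so the upload itself never passes through Rails.
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	// Build (but do not send) a PutObject request, then pre-sign it.
	req, _ := svc.PutObjectRequest(&s3.PutObjectInput{
		Bucket: aws.String("artifacts"),                // placeholder bucket
		Key:    aws.String("tmp/uploads/metadata-123"), // placeholder object key
	})
	url, err := req.Presign(15 * time.Minute)
	if err != nil {
		log.Fatalf("presign failed: %v", err)
	}
	fmt.Println(url) // the caller would PUT the file body to this URL
}
```

The component doing the upload would then PUT the file body directly to that URL, which is the "offload to object storage" part mentioned above.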
B
So one idea, instead of starting straight with decomposition to build a new service, was that maybe we could decompose uploading into a binary, so that Workhorse can keep doing exactly what it's doing right now: it asks Rails for credentials, because Rails controls the logic of where stuff is stored, authentication and things like that, and right now Rails gives you an API answer that tells you what to do and where to put the stuff, right? So basically, right now it's Rails.
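Purely for illustration, the "API answer" being described could be modelled roughly like this; the field names are assumptions for the sketch, not the real Workhorse authorization response:

```go
// Hypothetical shape of the "where to put the stuff" answer Rails could return
// to the component doing the upload. Field names are illustrative only.
package upload

type Instruction struct {
	// Direct-upload case: where the component should send the data.
	RemoteURL string `json:"remote_url,omitempty"` // e.g. a pre-signed PUT URL
	RemoteID  string `json:"remote_id,omitempty"`  // identifier Rails uses to link the blob afterwards

	// Fallback case: local path when object storage is not configured.
	LocalPath string `json:"local_path,omitempty"`
}
```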
B
So an idea could be that when you need to upload something from Rails, which happens, because we saw in the list of the current features that there are things that get uploaded starting from the Rails side, we could basically build the same information that we are providing to Workhorse, but instead we call a Workhorse binary. It's not real yet, okay, but we can get a smaller binary out of Workhorse, like we already do.
B
Workhorse is based on several binaries that handle specific tasks, so you have something that can inspect inside a zip file, even inspect inside a zip file over object storage, and things like that, right? So, with the same logic that we have right now for uploading, we could extract a binary that receives a configuration from Rails and uploads whatever it receives on standard input to object storage.
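A minimal sketch of such a binary, assuming Rails hands it a pre-signed PUT URL; the flag name and overall wiring are assumptions for the sketch, not an existing Workhorse binary:

```go
// Hypothetical "upload standard input to object storage" helper, in the spirit
// of the small single-purpose binaries described above. The --url flag (a
// pre-signed PUT URL provided by Rails) is an assumption for this sketch.
package main

import (
	"flag"
	"log"
	"net/http"
	"os"
)

func main() {
	url := flag.String("url", "", "pre-signed PUT URL to upload to")
	flag.Parse()
	if *url == "" {
		log.Fatal("missing --url")
	}

	// Stream stdin straight to object storage; nothing is buffered on local disk.
	// A real implementation would also set Content-Length for providers that require it.
	req, err := http.NewRequest(http.MethodPut, *url, os.Stdin)
	if err != nil {
		log.Fatalf("building request: %v", err)
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatalf("upload: %v", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		log.Fatalf("upload failed with status %s", resp.Status)
	}
}
```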
A
I think I remember a discussion where we considered doing something like that, where it's Rails that, you know, shells out to a binary, but I remember we decided against it, and I cannot exactly remember what context we made this decision in, and stuff like that.
A
Yeah, so I think it's something we can consider, and we can also find the past discussions about that, because I'm quite certain we had them. But perhaps for now we should focus more on describing these requirements a bit better, and perhaps we can divide them.
A
Like
must-haves
in
poc
or
the
first
iteration
and
future
nice
to
haves,
because
if
we
need
to
make
a
decision
about
the
solution,
that
will
also
enable
us
to
build,
on
top
of
it
to
provide
support
for
future
features,
yeah
and
yeah.
So
the
two
set
of
requirements,
something
that
we
need
right
now
and
something
that
we
might
need
in
the
future
might
be
useful.
B
Yeah, I totally agree with you on this, Gregers. My point was just that I wanted to test whether this idea could stand up to the requirements, but I totally agree with you. It's just that the less code we remove from Workhorse, the less stuff we have to do; I was more focusing on the CarrierWave aspect of the problem, which is extraneous to Workhorse itself, right? So I just agree. If someone wants to go through the requirements and build them out in a more structured and more descriptive way, I'll be happy to help review them, and then we can start making real proposals so that we can test them. We can take this on as an action point and do that.
B
So,
let's
go
through
this
items
here
so
that
we
can
just
refresh
them
without
just
thinking
about
what
we,
what
what
they
meant.
Okay,
so
offloading
data
handling
via
workers
means
we
just
want
to
upload
data
uploads.
Where
is
applicable,
we
said
workers
can
be
anything
else,
we
don't
care.
The
point
is
that
uploads
should
not
happen
in
the
rails
controller.
B
So, the conversation here, what is it, it's here in David's proposal. Basically, there is this idea of decoupling blobs from attachments. So when you upload something, you just get a UID, or some general identifier for the thing, let's say; then it ends up in object storage, and then you link it at the database level to whatever this upload is supposed to be, so that you can generate, say, the nice path to it without having to move it around.
B
You
just
have
a
link
in
the
database.
That
tells
you
what
this
up,
what
this
blob
in
the
object
storage
is
then,
obviously
this
has
to
be
backward
compatible
with
the
current
solution.
We
can't
really
think
of
just
running
online
migrations
that
move
stuffs
around
in
one
go
and
make
this
work.
So
we
have
to
think
about
what
we
have
right
now
in
database.
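To make the decoupling concrete, here is a minimal sketch of the data model, under the assumption of two separate tables; the names are illustrative and not taken from David's proposal:

```go
// Hypothetical data model for decoupling blobs from attachments: the blob is
// addressed by a generic identifier, and "moving" an upload becomes a row
// update instead of copying data around. Names are illustrative only.
package model

// Blob is the physical object in object storage, identified independently of
// whatever uses it.
type Blob struct {
	ID        string // generic identifier (e.g. a UUID) returned at upload time
	Bucket    string
	ObjectKey string
	Size      int64
	SHA256    string
}

// Attachment links a blob to whatever the upload is supposed to be (an
// artifact, an avatar, a package file, ...). Re-pointing an attachment is a
// database update; the blob itself never moves.
type Attachment struct {
	ID         int64
	BlobID     string // references Blob.ID
	RecordType string // e.g. "Project"
	RecordID   int64
	Filename   string
}
```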
B
It
was
a
request
for
better
documentation,
it's
quite
hard
to
understand
the
current
status
and
the
current
technologies
involved
in
the
uploading,
but
I
mean
if
we
converge
to
a
new
solution,
that's
going
to
be
the
we.
We
could
document
that
one
and
we
are
fine,
simplifying
the
testing
metrics
for
api
and
controllers,
because
right
now,
because
of
legacy
reason,
we
were
trying
to
require
developers
to
test
at
a
controller
level,
both
uploading
on
disk
uploading,
an
object,
storage,
and
there
was
a
quite
of
complex
metrics
of
requirement,
which
right
is
no
longer.
B
I
would
say
it's
no
longer
needed,
because
right
now
we
have
a
middleware
that
obstructs
these
things
on
the
red
side.
So
if
you
have
the
file
uploaded
representation,
don't
remember
the
name
of
the
class,
but
basically,
if
you
have
that
one,
it's
fine,
because
either
it
was
on
disk
or
directly
on
object.
Storage.
These
details
are
tested
in
that
in
the
class
unit
tests,
so
this
was
on
the
requirement
as
well
as
we
have
a
feature
specs
that
can
run
tests
with
workers
in
line.
B
There
is
this
idea
of
if
we
end
up
having
a
restricted
set
of
object,
storage
that
we
provide
as
we
support,
we
should
consider
expanding
our
qa
so
that
we
are
actually
testing
all
of
them,
because
some
of
them
are
just
on
documentation
and
we
should
make
sure
they
are
covered.
B
This
is
very
important
which
this
is
carrier.
Wave
removal,
the
couple
active
model
and
object,
storage,
api,
so
no
callbacks
on
object,
storage
should
go
over
sorry,
no
callbacks
on
active
model
stuff
should
go
to
object,
storage
and
deleting
stuff
uploading
stuff
or
doing
any
type
of
api
call
because
they
keep
transaction
open
and
they
are
a
very
big
pain
point
that
we
have
right
now.
So
new
solutions
should
be
decoupling.
Those
two
things
then:
I
have
here
the
requirements
from
the
security
side
of
it
all
of
yeah.
B
I
would
say
that
all
of
them
are
already
implemented
in
the
current
solution,
but
just
it's
kind.
I
mean
visa.
Tell
me
about
my
ideas
that
we
should
validate
the
final
solution
against
those
things
so
making
sure
that
there's
no
path
traversal,
it
protects
against
arbitrary
file,
read
and
write
permission
being
checked
and
authentication
always
required,
because
we
don't
have
uploads
that
do
not
require
at
least
authentication
yeah
yeah.
C
You are correct. I just added them so they are there and we don't forget about them, just to make sure, like, yeah, we don't forget about that. But you are correct, indeed.
B
Yeah, same thing with Geo. There's no special requirement about Geo, just to keep in mind that Geo exists and we should not build something that is not compatible with it. Right now Geo is just moving the object storage stuff manually, so let's keep this in mind. And then there are two points that are still in discussion.
B
One
is
the
this
one
about
the
replay
attack,
so
I
have
let's
let's
discuss
this
maybe
later,
because
twitter
is
here,
so
we
can
discuss
this
and
then
there
is
encryption.
Encryption
is
a
specific
requirement
for
terraform
file
state
which
happens
at
a
controller
level.
B
I suppose I will consider making this a second-level requirement, so maybe we migrate the Terraform state file at the end, and in the meantime we try to figure out what's the best way forward for it. Because if we restrict the number of object storage providers we support, maybe we can work our way through it and figure out whether they meet the security requirements to have, let's say, globally enabled encryption at the bucket level, and then this is no longer a problem. If not...
A
There is a new epic recently mentioned about mobile code signing architecture; Darby is working on that. It also involves some amount of encryption. Right now the plan is to actually store the keys, initialization vector and stuff like that on the Rails side, to encrypt the data somewhere, presumably in transit, I don't know where and how, and to store the encrypted blob in object storage. So it might also be important to keep in mind that encrypting data in object storage might be important, and presumably it should be object-store agnostic.
A
So
I
should
work
with
everything,
but
yeah
that's
an
important
requirement
in
my
opinion,
and
not
sure
if
that
should
be
in
the
level
one
or
level
two
appointments.
B
Yeah, yeah, I do agree. Let's keep this in mind and explore what we can do, because if we want file-level encryption, we can just build an encryption pipeline in whatever component is doing the upload. I mean, as long as the encryption function is a streaming function, we can encrypt while things are in transit, as we process them.
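A minimal sketch of that kind of streaming encryption in Go, using AES-CTR only as an example of a streamable cipher; key and IV management, and whether this is the right cipher at all, are the open questions discussed above:

```go
// Hypothetical streaming-encryption pipeline: wrap the upload reader so data
// is encrypted while in transit to object storage, without buffering the file.
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"io"
)

// encryptingReader returns a reader yielding the AES-CTR encryption of src,
// plus the random IV. The caller must persist the key and IV (for example on
// the Rails side, as mentioned above) to be able to decrypt later.
func encryptingReader(src io.Reader, key []byte) (io.Reader, []byte, error) {
	block, err := aes.NewCipher(key) // key must be 16, 24 or 32 bytes long
	if err != nil {
		return nil, nil, err
	}
	iv := make([]byte, block.BlockSize())
	if _, err := rand.Read(iv); err != nil {
		return nil, nil, err
	}
	stream := cipher.NewCTR(block, iv)
	return &cipher.StreamReader{S: stream, R: src}, iv, nil
}
```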
B
We can implement this. Okay, so yeah, this was just a recap of where we are at. Oh, sorry, Victor, yeah, I skipped this one, the replay attack.
C
For that, I think you're referencing mainly the package registries, or...
C
Yeah, which is indeed correct; they do allow uploading the packages several times, because that's just how it works. And your comment makes me think about being a little bit more precise on these requirements, since we can exclude what we have just discussed, the packages, for example, and some other stuff that is out of scope.
C
For that, I was more thinking about all the other cases where people could just replay requests indefinitely to try to saturate the storage, and I think we could provide protection against that, because storage has a cost for us, and if we can protect against that, then it's a good thing.
A
By storage limits, like, every user that uploads something needs to be authenticated, so we do have some storage limits already in place and we can use those to prevent saturating the storage. Or, yeah, okay.
B
Some of them are artifacts, and an artifact is streamed content, so you can only check at the end; you may go above the limit with that single upload, but that would just lock your environment, your ability to put anything more on it. So yeah, this is a good point. Okay, I understood your point, thank you.
B
Okay,
so
we
are
at
time
as
well
as
we
concluded
going
through
this,
so
we
have
an
action
item
for
gregers.
I
will
try
to
understand
how
we
can
move
forward
on
the
the
things
that
we
are
doing
weekly,
so
the
describing
current
status,
trying
to
figure
out
what
is
missing
there,
who
was
chatting
also
with
with
marin
just
trying
to
figure
out
what
what
he
needs
from
that
effort,
because
there
was
also
a
need
for
say
communicating
outside
the
impact
of
this
working
group.
A
What do you mean by PoCs? Because I feel like, right now, we are still gathering requirements. We might want to brainstorm a couple of solutions, score them, and then decide which direction is probably the best, and it might actually take a couple of weeks until we are able to select the solution we are happy with and then PoC it a bit. So I don't feel like doing some PoCs next week is feasible here.
B
No,
my
point
was
a
bit
different
was
that
if
we
end
up
having
solutions
that
goes
in
different
direction,
we
can
score
them
as
well,
as
we
can
grab
more,
let's
say
inside
on,
because
something
may
look
good
on
paper,
but
then
we
start
try
to
implement
and
basically
the
whole
application
falls
apart.
B
I'm not talking about having something that works, or even something that is mergeable, but just trying to fit the idea into the current code and seeing: does it actually work? Can this...
A
...be implemented? Is it even possible to fit something into the current codebase? I think the only thing that is possible is to ship some very small iteration and iterate on that, to apply the strangler pattern, where you actually, you know, move things to the new architecture piece by piece. This is something...
B
Yeah, but they started with PoCs. They started with PoCs that were just supposed to see; they had several options and they were not sure which one was actually implementable, because they had something that was a great solution on paper, but then, when they started trying it with the PoCs, they said no, this is too hard to implement.
A
Yeah,
of
course,
I'm
not
saying
that
poc
we
should
not
do
a
poc,
but
it
might
be
very
difficult
to
come
with
fdoc
without
having
solutions
described
somewhere
in
this
court.
So
without
you
know
even
predicting
where
we
are
going
in
which
direction
we
are
going,
the
poc
might
not
be
really
visible
yet.
B
Yeah
I
mean,
let's
move
with
the
with
the
requirement.
I
know
that
some
of
some
people
were
working
on
poc
even
before
the
we
started
this
working
group,
so
if
they
gather
some
knowledge
out
of
it,
it's
still
available
information
as
well
as
there
are
many
aspects
of
this
code,
uploading
and
object
storage,
but
they
are
kind
of
obscure
to
to
many.
So
if
they
just
get
more
familiar
with
codebase,
I
mean
I
see
the
value
on
it.
B
I
don't
just
say
just
waste
one
week
doing
something
that
we
are
going
to
throw
away.
But
if
this
moves
someone
more
familiar
to
the
to
the
code,
we
want
to
touch,
it
would
just
have
better
time
contributing
to
the
conversation
together.
Okay,
thank
you
gregor.
I
really
appreciate
I
do
yeah.
We
will
score
solution.
We
will
build
solution.
Yes,
so
yeah,
okay,
so
thank
you.
Everyone
for
participating
this
week
and
see
you
all
next
time.