From YouTube: 2022-02-08 Object Storage WG - APAC
A: On YouTube, apparently. Yeah, this is pretty cool; Maureen mentioned this last time. I hadn't used this before. It kind of makes it easier to manage the recordings as well, because earlier I recorded just to my local machine, and then you have to wait for Zoom to let the encoding finish and then upload it to YouTube. I don't know if it's going to sort it into a playlist somewhere, probably not, so I might have to do that afterwards. But yeah, welcome to the Object Storage working group sync. This is the Americas... is that right? This is the APAC meeting, and we actually have a bunch of agenda items today. So let's just jump right in. We had a couple of action items from... well, actually, let me share my screen so you can see what I'm looking at.

A: You should be seeing the agenda document now. We had a couple of action items from last week, and just general check-ins around progress as well. David, I know you had worked on the Active Storage PoC and looked into how we could potentially replace CarrierWave. Do you want to talk about this a little?
B: Yeah, I think we can close the Active Storage issue. I will leave it open for comments on this question for, I guess, one week, and then I will close it. It brought the answers I was expecting, which is: we can't use Active Storage directly, but the concepts used by Active Storage are quite powerful and flexible, and we should strive to have a custom implementation around that attach API. So I would say the next step here might be to have another quick PoC, but around the custom implementation, just to get a feeling for how hard it would be to build this.
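To make the idea concrete, the direction discussed here (borrowing Active Storage's attachments/blobs split without using the gem) could be sketched roughly as below. This is a minimal plain-Ruby illustration, not GitLab code; every name in it (Blob, Attachment, Attachable, attach_file) is hypothetical.

```ruby
# Hypothetical sketch of an Active Storage-inspired attach API,
# implemented without Rails. All names are illustrative only.

# A blob records where the bytes live, independent of what owns them.
Blob = Struct.new(:key, :filename, :byte_size, keyword_init: true)

# An attachment links an owner record to a blob under a name,
# mirroring Active Storage's attachments/blobs split.
Attachment = Struct.new(:owner, :name, :blob, keyword_init: true)

module Attachable
  # Create a blob and link it to the owner; returns the blob.
  def attach_file(name, filename:, byte_size:)
    blob = Blob.new(key: "#{name}/#{filename}", filename: filename, byte_size: byte_size)
    attachments << Attachment.new(owner: self, name: name, blob: blob)
    blob
  end

  def attachments
    @attachments ||= []
  end
end

class Project
  include Attachable
end

project = Project.new
project.attach_file(:avatar, filename: 'logo.png', byte_size: 1024)
```

The point of the split is that the blob (storage location) and the attachment (ownership) can evolve independently, which is what makes the Active Storage design flexible enough to be worth imitating.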
A: Yep, all right. Yeah, it sounds like Active Storage follows a pretty good design, that was my understanding, so we can probably be inspired by its design. But using it with GitLab as it stands would be quite tricky, because there are a lot of workarounds we would have to use.
B: The attachments and blobs... this is completely out of scope for Active Storage, and that's why there are a lot of overrides and, yeah, hacky code, we could say, around Active Storage to support those different tables. At some point they might implement the option to define and target whichever table you want from Active Storage directly, but that's not the case yet.
A: Did you actually happen to talk to the folks from the database sharding effort? I think it's called the Database Scalability working group now, or something; the people that were looking at sharding the database. It might just be an interesting finding to pass by them, that this is causing problems with Active Storage, so that, I don't know, whatever solution they're working towards, it would be good if that would work with Active Storage in the future. Would it make sense just to loop them in?
B: I don't think so, because with Active Storage you really have two tables for the whole application, and this will not work for us, since one day we can split the tables into different databases, and, well, I guess the uploads table should go with, yeah, the other table. One example is the actual decomposition which is happening for the CI tables.
A: All right. Would it be useful to track a new issue for building out a custom solution in a PoC, or do we just do that in the same issue?
A: Yeah, sounds good, that's great! I'm just going to take the same note for next week; we'll just check in again and see where we are next week. Does that make sense?
A: Awesome, cool, all right. Any questions about this, or should we move on to the next item?
A: All right, yeah, this one I had added. I think I had meant to talk about this last time and I think I forgot, so I moved it to this week's sync. We had this nice requirements table that Gregor had put together, and this is just a super quick question: is there anything we should do for this now? Where's that table...
A: Actually, I think it was a Google Sheet, but you've probably all seen it, right? It might be down here... yeah, whatever, I can't find it right now. But the last status update I remember was that we had meant to vote on complexity, but we kind of said that most people didn't feel super comfortable voting on this, because it's really hard to get a sense of the complexity there.
A: So should we just park this for now and wait until we feel a bit more comfortable with understanding the complexities behind this whole problem space? Which I hope will be improved by working on things like the documentation and refactoring issues that we have. Is everyone on board with this, or are there any other ideas for what we could be doing there?
B: Perhaps the documentation issue, so point C on the agenda, is the requirement for this one, yeah. It feels like...
A: Yeah, I personally feel the same. I spent some time in the guts of Workhorse last week to understand how the uploads are handled, and it is quite complex, right? I was actually a bit surprised even, and I definitely think as well that after we kind of dive into this more, get a sense of what's going on currently, and then write it down...
A: It will then be easier to gauge, as well, how much work something would be if we were to improve a certain aspect of it. I hope others feel similarly. For me, I started looking at Workhorse more; at the moment I'm still pretty fuzzy on the whole Rails side of things, because there are also different things that can happen within Rails, depending on which path we took in Workhorse. So yeah, that's actually a good segue into this anyway.
A: So let me just talk about this. We had one issue here about... basically, the reason I created this issue was because, when I first looked at this stuff, outside of reading the blueprint and the main working group page, it was to just see what we have in terms of documentation.
A: Developer documentation, I should say, for how to implement uploads. Coming to this not having implemented an upload before, I found that we could probably improve it. And it seems also we adopted a certain language in this document here, which is around things like direct uploads, but also accelerating uploads using disk buffering.

A: So I think one thing we could do is make this a bit more exhaustive and provide specific examples and control flows of how certain uploads are handled, and then also reflect the language back and forth between the code base and the documentation. So if we use terms like disk buffering here...
A: What does that mean exactly? Where is that implemented in Workhorse, and when exactly does that trigger? And another idea I had, a suggestion, was: I feel like there's a lot of historic information here, around how did we get here, or what was the case in the past; this is not the status quo anymore, right? Yeah. You can see what I'm looking at, right? I'm looking at the... good, yeah, good. So I was thinking...
A: Maybe it would be good to split this up into a high-level description of what the challenges are with uploading large files, or files in general. This could very well contain these diagrams, because they are useful, right, to get an understanding of what the complexities are. But as a developer...
A: If I need to implement an upload endpoint, I don't need to read this every single time, right? So I think it's good to have, but maybe we could move it out of the developer docs, or move it to the side, kind of like: if you want to read a bit more about why it is the way it is, this is the stuff you would find there. And if you look at the index for how to actually work with uploads...
A: What I have is implementing an upload; this is the stuff I'm interested in, right? So I think this should be front and center in the developer docs, and then we can just link maybe some of these more historic things, especially if they don't even happen anymore. Like this specific diagram here, which I misunderstood when I read it: this doesn't even happen anymore, right? This is what Rails would do by default if Workhorse wasn't involved, or if it would just forward the request.
A: So I find this a bit confusing, because why do I need to know this if that's not a problem anymore, right? Yeah. So this was kind of my main idea behind this, and then, you know, making sure that we use interfaces that really tell me what they're doing.
A: We have things like 'upload' and 'accelerate', but those could mean anything, right? Accelerate how? What are we accelerating here? And some of these things, I think, focus more around what we call the upload content encoding in this documentation. We have a section on upload encodings, where we discern between: is this a form-data request, or is this just a blob in the body that we upload, or something else? So I think these code switches in Workhorse at the top level are mostly around this.
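The form-data vs. body-upload distinction described here could, purely for illustration, be classified from the Content-Type header like this. This is a simplified sketch, not Workhorse's actual logic (which lives in Go), and the method name is made up.

```ruby
# Illustrative classification of an upload request by its encoding,
# loosely mirroring the multipart vs. raw-body distinction in the docs.
# This is a hypothetical sketch; real handling lives in Workhorse.

def upload_encoding(content_type)
  case content_type.to_s.downcase
  when %r{\Amultipart/form-data}
    :multipart_form_data   # the file arrives as one part of a form submission
  when ''
    :unknown               # no Content-Type header present
  else
    :body_upload           # the request body itself is the blob
  end
end
```

The value of naming the branch explicitly like this is that the top-level switch stops looking like an arbitrary dichotomy and starts documenting itself.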
A: So yeah, there are some weird dichotomies here that aren't really dichotomies. These are the things I think we can just improve with naming, and make it a bit easier for newcomers to the code base to understand what these things are doing.
A: Yeah, so Patrick picked this up, that's awesome. You probably haven't worked on it yet, but I saw you assigned it to yourself just yesterday, I think. So yeah, thanks for that. I don't know if you wanted to say anything about it.
C: I agree with all the things that you said. I mean, I was looking at it earlier as well; when you go through these, it's like, okay, next... yeah.
A: Right, and by the way, this was just my personal take; feel free to use your own best judgment for how this should be named, or for ways to move things around, especially since I haven't spent that much time in Workhorse. So just don't take this as a spec for something; it's just an idea, one suggestion. Makes sense? All right.
A: What else do we have? All right, this one... I don't have the issue linked here. I think this was the issue about... so, we noticed that we had started to categorize object storage buckets, how they were assigned to certain features in GitLab, and the respective code owners. But we very quickly realized that this is a moving target; we had closed up the issue, and basically the table that we had produced was already out of date.
A: So I think Maureen made a good suggestion that, well, maybe we shouldn't just put that in a spreadsheet, or put it in a markdown table in the issue and then close the issue, because then it's kind of out of sight, out of mind, and it won't get updated. So we said we're going to do two things.
A: The first thing is to extract these tables that we created, for which parts of the code base are assigned to which object storage bucket or feature, and also how they integrate with CarrierWave, and just turn that into developer documentation. I made my life very simple here and just put it into the document we just looked at, as a new section. That was the most obvious place to put it.
A: Now, I'm totally fine with revisiting this as part of the bigger documentation overhaul; I just wanted to get that fixed quickly, so that we can put this link in front of other developers and remind them to keep this table updated. So that's just the small MR that I opened, and it's just waiting for a copy review.
A: But nothing surprising in here; I just added this new section in the developer docs for uploads, and that's verbatim what we had in the issue. Actually, now that I'm looking at this, I had it on the agenda later anyway.
A: Pedro raised a good question. We have this distinction here between, kind of, who's the actor when we upload, right? Is it Workhorse going directly to object storage, or is it some other tool like GitLab Runner or whatever, or is it the controller? And then we have: the uploader is Sidekiq.
B: To my limited knowledge of uploads, I would think that this is an upload that is started by a background job, and so it's directly using the Rails part to upload, and that's why we have CarrierWave.
A: Right, so Sidekiq is just the thing that triggers the upload, I guess. During the upload, where is the time spent? Is that during a web transaction in Rails, or during Sidekiq? I guess maybe that's kind of the core of the question, right?
B: I'm not sure, but I recall hearing multiple times that we have uploads where, I guess, the file is built in a background job and it is uploaded by a background job. So you don't have any web requests around this. It...
B: It might be triggered by a web request, that could be possible, but the whole upload is done by the background job, and that's...
A: Yeah, if you look at these combinations here as well, it seems like CarrierWave always implies Sidekiq, but not necessarily the other way around. So the way I would read this is that if there is an upload that is not a direct upload handled by Workhorse, and CarrierWave handles it, then this triggers the Sidekiq job that then performs the actual work of transmitting the file to object storage.
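The reading above (direct uploads bypass CarrierWave; everything else goes through CarrierWave plus a background job) could be caricatured as the following dispatch. This is a hypothetical plain-Ruby sketch of the control flow as interpreted in the meeting, not actual GitLab code.

```ruby
# Hypothetical sketch of the upload dispatch discussed above:
# direct uploads go straight from Workhorse to object storage,
# everything else lands on disk and a background job moves it.

def dispatch_upload(direct_upload:)
  if direct_upload
    # Workhorse streams the file to object storage itself;
    # Rails only records the finalized location afterwards.
    [:workhorse_direct, nil]
  else
    # CarrierWave stores the file locally first, then schedules
    # a Sidekiq-style background job to transmit it to the bucket.
    [:carrierwave_local, :background_store_job]
  end
end
```

In this reading, Sidekiq appearing without CarrierWave (as in the table being discussed) would correspond to a third path not covered by this sketch, which is exactly the gap the documentation work aims to close.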
B: They kind of shortcut CarrierWave, and they...
A: Maybe because they don't need an entry in the attachments table or something; I'm not really sure, yeah. This is exactly why we're looking to document this more, right? I think this was a good start, because it gives you a very nice overview of the overall landscape of what features we have and what kind of work they are performing. But then, yeah, the specifics around what...
A: What happens in the back end, you know, that's kind of the next step that we're still working on. So maybe we can just keep this question in mind when we work on the documentation, to also document the Rails side of it better: what are the different paths, the different control flows, that can trigger when the request is handed off from Workhorse to Rails to finish the upload, and see what's going on there exactly.
A: I want to go back to this quickly, because this merge request here that adds the documentation, that's only the first step, right? Just having this documentation doesn't mean it's maintained. What I was struggling with a bit is: how can we make sure this keeps being updated? We just had a case where, I forgot which team it was, they had to add a configuration for a new bucket, which should have an entry here, right?
A: So I want to get this MR merged ASAP, so I can actually go back to that team and tell them: hey, look, we're documenting these things, it's kind of a moving target, could you just add a line here around the stuff you just added? But that's not a sustainable approach, right, because we can't constantly stay on top of what is happening around the org. So if you have any ideas around how we could prompt developers there... I looked a little bit at the MRs.
A: We have these unified object storage settings, but then we also support the per-bucket settings, right, where you can then enable, you know, direct upload and things like that per bucket. So you need to touch the GDK, the GitLab chart, and Omnibus, and yeah, I'm not totally sure where in this workflow we would hook in to notify developers: hey, look, you're adding a new bucket, please document it here, so that we know what's going on.
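For reference, the two configuration styles mentioned here look roughly like the following in an Omnibus `gitlab.rb`. This is an abbreviated, illustrative sketch; the exact key names and current recommendations should be checked against the GitLab object storage documentation.

```ruby
# Consolidated ("unified") object storage configuration: one shared
# connection, with only the bucket named per object type.
gitlab_rails['object_store']['enabled'] = true
gitlab_rails['object_store']['connection'] = {
  'provider' => 'AWS',
  'region' => 'us-east-1'
}
gitlab_rails['object_store']['objects']['artifacts']['bucket'] = 'gitlab-artifacts'
gitlab_rails['object_store']['objects']['uploads']['bucket'] = 'gitlab-uploads'

# Legacy per-bucket form: each object type carries its own settings,
# including flags like direct upload.
gitlab_rails['uploads_object_store_enabled'] = true
gitlab_rails['uploads_object_store_direct_upload'] = true
gitlab_rails['uploads_object_store_remote_directory'] = 'gitlab-uploads'
```

A new bucket means a new entry in each of these surfaces (Omnibus, the chart, the GDK), which is what makes catching additions automatically hard.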
B: We have some rules in Rails, like if you add a new Sidekiq job, you get a bunch of comments from the bot. Could we have something similar?
A: Good point. I hope I actually mentioned this here... I forgot what I wrote. Yeah, I kind of do, here. I don't know if that's noticeable enough; maybe I can also highlight this in bold or something. I'm just saying here that we ask new uploads to be documented by slotting them into the following tables. It's not the final copy, maybe, but yeah, I don't know. Maybe I can also put it into a callout.
A: You know, these notices that stand out a bit more. But it should be just below the implementation guide, so I hope this is fairly visible. We could just... I mean, we're definitely going to do this anyway: first get this documented and see if it works, and if it doesn't, I guess we can still decide to come up with something more clever.
B: So I wrote it directly on the issue, but yeah, I think the PoC will bring more answers to these questions.
A: Yeah, it sounds great. I mean, it sounds like the right approach, and we'll just see how bad it gets, basically. And usually that's exactly the point where you discover all of the hidden complexity of these kinds of things, like: oh yeah, we didn't really think about this, and then you see it, kind of.
A: Okay, good. So there aren't too many action items, but we have a couple of ongoing things. So I think maybe next week we can just check back in on status updates and see how we are progressing.