From YouTube: 2021-11-30 Object Storage working group - APAC
A: So, the recording is running — in progress? Yes, it is. So let's get started. Let's start with the action items and the things that are in progress. The first point is about describing the current status of object storage, and I'll leave this to you, Matthias — you're doing a great job there.
C: Thank you, yeah. It just took a while — it's spread across so many different stakeholders. It just took a while for everyone to find the time, and to find the right people, to even figure this out. Some areas, I guess, were trickier than others; some are quite obvious, you know, like Git LFS and stuff like that. Honestly, maybe I'll just do it myself — we're still looking for one item that is related to user avatar uploads.
C: I have not looked much myself at the upload path, but I had some exposure to it from working on image scaling, which is the opposite direction — kind of rendering them out — so I have a rough idea of where it all sits. So maybe, if there's no answer in the next day or so, I'll just look into it myself. That said, though, I think we already have a good amount of data. So I guess my next question was: is that enough for us to start?
C: I was thinking maybe we should cluster this a bit, to see which of these solutions are related, and how. Maybe you know this already — I don't know; I haven't had as much exposure to object storage — but I was wondering if there are patterns emerging that tell us a bit more about how this is used, from a higher-level perspective.
A: Yeah, so I want to cover both your questions — both avatars and the idea of clustering. So, I know how avatars work... at least, I think I know how it works.
A: Avatars are part of a multipart request, and they end up being stored on disk. So Workhorse removes them from the upload and puts them on disk, and then there's a middleware in Rails that replaces the body, so that instead of Rails dumping it on the disk itself, you get a file handle which is already open and you can read from there. This is for avatar uploads — project avatars as well.
A: Let me just — okay, I'm sorry, I said the wrong thing here. So that goes to the disk through Workhorse, and this is the exception, because avatars and everything else just go straight into Rails. So it's a multipart file that reaches Rails, and yeah, it should at least go on disk so that we... but.
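A minimal sketch of the disk-spooling idea described above — the request body lands in a temp file, and the application only ever sees an already-open handle it can read from. This is illustrative Python, not the actual Rails middleware or Workhorse code:

```python
import io
import tempfile

def multipart_to_file_handle(upload_stream):
    # Spool the multipart body to a temp file on disk, then rewind it,
    # so the app reads from an open file handle rather than the request.
    spooled = tempfile.NamedTemporaryFile()
    for chunk in iter(lambda: upload_stream.read(8192), b""):
        spooled.write(chunk)
    spooled.flush()
    spooled.seek(0)  # rewind so the app can read from the start
    return spooled

# Simulate an incoming avatar upload.
incoming = io.BytesIO(b"fake-png-bytes")
handle = multipart_to_file_handle(incoming)
print(handle.read())
```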
A: ...negligible. But the thing is that, because of this, we have a code path that we can't handle this way, and then we can't enable a shared guarantee of, say, "at least everything is removed by Workhorse and put on disk" — which is the skip-Rails authorizer. And I think you mentioned this somewhere in a conversation.
A
No,
no,
it
wasn't
you
with
someone
else.
I
don't
remember,
but
basically
there
is
jesus.
This
is
technical.
Basically,
every
request
is
wrapped
by
an
authorized
request
where
raids
provides
pre-signed
url,
but
at
workers
level
for
stuff
like
this,
there
is
a
skip
authorizer.
So
basically,
what
happens
there
is.
This
is
just
a
small
file
upload
or
something
that
was
backward
compatible
to
with
multipart
upload.
So
we
don't
care.
We
just
never
hit
object,
storage
by
default.
It
always
goes
on
a
temp
file.
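The two paths just described — the Rails-authorized pre-signed upload versus the skip authorizer that always spools to a temp file — could be sketched like this. All names here, including `skip_authorizer` and the URL shape, are illustrative assumptions, not the real Workhorse or Rails API:

```python
import tempfile

def authorize_with_rails(path):
    # Stand-in for the Rails authorize round-trip that returns a pre-signed URL.
    return "https://object-store.example/" + path + "?X-Signature=..."

def handle_upload(route, body):
    if route.get("skip_authorizer"):
        # Small/legacy multipart uploads: never hit object storage,
        # always spool to a local temp file.
        tmp = tempfile.NamedTemporaryFile(delete=False)
        tmp.write(body)
        tmp.close()
        return {"stored": "tempfile", "path": tmp.name}
    # Normal path: wrap the request in an authorize step, then stream
    # the body to the pre-signed URL Rails handed back.
    return {"stored": "object-storage", "url": authorize_with_rails(route["path"])}

print(handle_upload({"skip_authorizer": True}, b"avatar-bytes")["stored"])   # tempfile
print(handle_upload({"path": "lfs/objects/abc"}, b"blob")["stored"])         # object-storage
```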
A: So that's the thing — and I've spent even too much time describing this. But the point is, on the idea of clustering: I do love it. I think we should also try to categorize these in terms of how they are handled in Workhorse: whether they go through direct upload, whether they just pass through with no interaction, and whether they are accelerated onto disk — because this is one of the improvement points.
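The three Workhorse handling categories just mentioned could be captured as an explicit column value in the inventory. A minimal Python sketch — the feature-to-category mapping below is a made-up placeholder, not the real inventory:

```python
from enum import Enum

class WorkhorseHandling(Enum):
    # The three categories suggested above (illustrative names).
    DIRECT_UPLOAD = "streamed to object storage via a pre-signed URL"
    PASS_THROUGH = "forwarded to Rails with no Workhorse interaction"
    DISK_ACCELERATED = "spooled to disk by Workhorse; Rails gets a file handle"

# Placeholder rows of the inventory spreadsheet, each carrying the category.
inventory = [
    {"feature": "Git LFS objects", "handling": WorkhorseHandling.DIRECT_UPLOAD},
    {"feature": "user avatars", "handling": WorkhorseHandling.DISK_ACCELERATED},
]

# Cluster features by how Workhorse handles them.
clusters = {}
for row in inventory:
    clusters.setdefault(row["handling"], []).append(row["feature"])
for handling, features in clusters.items():
    print(handling.name, features)
```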
C: There was — yeah, I can have a look at that. I actually noticed I should have made this a bit clearer, maybe by putting it in the column title directly. I had left a comment on the "uploaded: how" column where I tried to enumerate the options of what you're supposed to put in there, but I think people didn't see it, because it was just a comment on a cell in the spreadsheet — so they just gave a verbal description of how that works, yeah.
C: And what the dimensions should be — because what you said already makes perfect sense to me, like, you know, is it an accelerated upload or not? But if you have any other dimensions in mind, that would help me as well in terms of clustering, just because I'm not 100% sure what information we need to drive the other... this is kind of a preparatory issue, right.
C: We want to draw as much information out of this as we can, to then drive other decisions in the follow-up issues. I think it would still help me to know what exactly we're looking for in terms of the categories, I guess.
A
Okay,
so
I
will
say
that
categorizing
by
type
of
upload
is
important.
Obviously,
so
we
we
say
the
acceleration.
There
is
another
one
which
is
there
for
at
least
for
manage
for
design
management,
which
is
sidekick
uploaded
stuff.
So
we
have
things
that
get
processing
background
and
get
uploaded
on
a
different
type
of
schedule,
and
then
it
moves
files
around
as
well
right.
I
think
yeah.
A: When I say uploads, I mean user-generated uploads, right? As opposed to everything that is computed on our side, or cases where we actually download the information into Rails to do something with it, instead of just sending a link back to customers — because most of the research we did here was about making uploads faster, more reliable, or more scalable. But there is a good point, which is: let's say we want to remove CarrierWave altogether — in how many places in Rails do we actually want to upload stuff?
C
Is
that
you
mean,
like
something
user
initiated
so
so
contrasting
like
something
like
a
like
a
git
lfs
object,
which
is
more
like
an
implicit
thing
right.
The
user
doesn't
really
care
like
whether
something
goes
into.
A: This is the user upload: it hits the controller, something happens there which is suboptimal — that's another problem — and the point is that this then triggers a Sidekiq job. So in Sidekiq we download the image again, because it's an object storage file — fine, I was sure we were downloading stuff, no problem — but then we are creating thumbnails and uploading them back to object storage from a different place.
A: So: we put all this effort into having Workhorse do all the heavy lifting, and then we find, because of this requirement, that we are actually uploading stuff from Ruby on Rails — and so, okay, how...
C
Yeah,
I'm,
I
think,
okay,
like
maybe
we
can
do
this
asynchronous
as
well,
because
this
might
take
a
while
to
figure
this
out.
I
think
I
know
what
you
mean,
but
I
think
there's
a
bit
of
nuance
still
to
to
figure
out
so
so
maybe
we
can.
Maybe
I
can
just
open
a
thread
on
the
on
this
issue
where
we
can
like
collect
this.
I
saying
kind
of
like
what
the
I
think
it's
really
important
to
understand,
at
least
for
me
as
well.
What
are
the
questions
we
want?
A: So, I don't know if anyone has further items or wants to discuss the collection of requirements; otherwise we can just jump to point C. But I don't see Lucas here, so I suppose we don't have anything on that either — which is about collecting information about what customers are using, so that we consider what features we have, what customers are using, and what problems they are experiencing, in terms of defining the requirements.
D: Yeah, so thanks, Alessio. I wanted to say that I'm still catching up after PTO, and I still don't have a full picture of what the...
D: ...what the state of the work in the working group is. But I saw some really good artifacts, and I was a bit confused that the blueprint got merged before the working group actually started. But I think it's always possible to refine it, and as far as I understand, that's the intent of the working group: to make it a bit clearer how we want to proceed with the architectural change — and we can always update the blueprint.
D: So I think that, as we proceed with our discussions in the working group, we'll probably come up with multiple solutions, and I created this Google spreadsheet to actually start scoring them.
D: That's also the place where I think we could add more solutions and more objectives.
D: So I thought that perhaps creating this document early is going to make it easier not to forget about objectives or suggested solutions. So I added two possible solutions to the document. There is this code name I came up with very quickly, off the top of my head, called Alexandria — which is kind of related to the Library of Alexandria, and is probably a very bad name.
D: They had not been able to design a very good disaster recovery solution — which we probably should design, so, yeah. So I added the two solutions; perhaps we could score both the stateless Alexandria and the stateful Alexandria. There is also the option of replacing the CarrierWave gem with something more custom, and there are probably going to be more solutions that we could score.
D
I
think
that,
right
now,
it's
not
the
solutions
described
in
the
dog.
That
is
the
most
most
important
thing,
it's
more
like
objectives
and-
and
this
will
allow
us
to
more
objectively
score
solutions,
so
yeah.
What?
What
are
your
thoughts
about
that?
I'm
just
curious.
D: Zero is bad, ten is good. This is actually something we did in a previous working group, and I think it was very useful.
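The scoring board idea boils down to a per-objective 0–10 score summed per solution. A minimal sketch — the criteria, solution names, and numbers below are placeholders, not real scores from the spreadsheet:

```python
# Objectives to score against (0 = bad, 10 = good); illustrative only.
criteria = ["easy to iterate on", "operational complexity", "addresses future problems"]

# One score per criterion, per candidate solution (made-up numbers).
scores = {
    "stateless Alexandria": [7, 6, 8],
    "stateful Alexandria": [5, 4, 8],
    "replace CarrierWave": [8, 7, 5],
}

totals = {name: sum(vals) for name, vals in scores.items()}
print(totals)
print(max(totals, key=totals.get))
```

Weighting criteria differently (e.g. operational complexity counting double) would be a natural next refinement of the same table.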
D: ...a way to collaborate on finding the best way forward. I want to avoid suggesting which solution might be better, and this is, you know, an RFC-like, collaborative way of designing how we want object storage to look.
A: Thanks for doing this, Grzegorz. We actually mentioned this last week in terms of the collection of requirements: the discussion was something like, we are collecting requirements so that we can build a scoring board where we can score, overall, the solutions that come out of this. So thank you, yeah.
A: There is an important missing criterion, which is the complexity of operation: how complex would it be to run GitLab.com with that solution? It could even be two criteria — one is complexity of operation in general, which can even be split into running it at GitLab.com scale compared to running it at whatever smaller customer scale. But I think we have to keep the operational side of it in mind. And the other, more general, point on this is that here we are talking about a specific solution to a specific problem of the working group.
A
There
is
there's
a
more
of
an
overall
discussion,
which
is
there
are
many
outstanding
problem,
part
of
the
object,
storage
space
and
the
limited
time
that
we
have
with
working
group
will
also
require
us
to
figure
out,
which
is
the
most
impactful
direction
to
to
say
to
follow
at
the
beginning,
because,
basically,
what
we're
doing
here
is-
and
it
just
mentions-
also
your
thing
about
the
blueprint.
A
So
the
blueprint
defines
a
problem
and
it
just
drafts
solution
which
is
very
work
in
progress,
because
the
the
drive
for
the
blueprint
is
the
working
group.
Then
the
working
group
is
not
really
binding
in
terms
of
allocation
for
engineers
that
are
taking
part
into
it.
So
out
of
this,
we
will
have
a
plan
and
the
first.
A
The
first
item
of
this
plan
has
to
be
actionable
in
terms
of
engineering
allocation,
so
that
marin,
our
executive
sponsor,
can
go
through
the
engineering
allocation
with
the
plan
that
gets
out
of
this
and
we
start
working
on
one
thing
and
then
I
said,
because
this
is
a
blueprint
worth
type
of
project
the
there
will
be
several
engineering
allocation
to
cover
future
improvements.
So
I
don't
know
if
this
answer
your
question
about
them.
D
So
it
does
partially,
but
how
I
think
about
architectural
blueprints
is
that
I
think
of
them
as
a
map,
so
the
map
that
describes
where
you
are
right
now,
where
we
want
to
go
and
basically
makes
it
possible
for
everyone
to
see
basically
the
path
together.
So
this
is
something
that
makes
it
easier
to
make
all
the
small
decisions
along
the
way
about
how
to
design
things.
What
the
objectives
are,
what
things
we
are
trying
to
solve.
D
So
I
think
that
ultimately,
blueprint
can't
really
describe
the
first
solution,
because
it's
this
map
will
be
very
incomplete
and
I
I
think
it's
it
should
be
like
more.
It
should
cover
like
the
problem.
It
should
basically
tell
us
where
we
want
to
go
to
eventually,
so
I
I
think
that
right
now
it
the
the
current
shape
of
the
blueprint,
might
make
people
think
that
we
basically
want
to
just
mean
io
and
improve
ruby
side
of
things,
and
I
think
it
actually
might
be
insufficient.
D
I
added,
for
example,
this
criteria,
criteria,
number
18
being
able
to
address
future
problems
because
at
some
point
we
will
say
okay.
So
this
is
the
state
we
are
in
and
we
are
quite
happy
with
that.
D
But
the
solution
we
are
designing
right
now
should
make
it
easier
to
actually
solve
future
problems
that
are
not
known
yet,
for
example-
and
there
is
this
very
important
objective-
designing
a
solution
that
is
easy
to
iterate
on,
so
that's
that's
kind
of
similar,
so
perhaps
we
should
duplicate
this
to
two
criteria
and
so
yeah.
I
think
you
answered
my
question
that
doesn't
make
sense
what
I'm
saying
as
well.
A
Yeah,
it
does
does
yeah
just.
Let
me
remark
that
the
idea
of
the
blueprint
was
yes
to
define
the
overall
solution
long
term,
but
the
the
point
was
more
about
making
it
actionable
since
the
beginning,
because
we
have-
let's
say
we
can
say
we
have
an
ownership
problem
here
right.
This
is
a
spread
problem.
It's
a
spread,
widespread
feature
set
that
is
not
owned
by
anyone
and
that's
why
it's
really
hard
to
take
the
grasp
of
it
and
start
working
on
it
so
yeah,
I
do
agree.
A
The
blueprint
should
be
a
long
term
type
of
goal,
but
what
we
want
to
get
out
of
this
as
well
is
is
an
actionable
next
step
so
along
a
long
term
solution,
plus
an
actionable
next
step
that
we
can
start
working
on
right
after.
A
Mean
I
o
is
the
actionable
next
step.
It's
not
that
one
is
not
right
everywhere
in
in
any
place.
We
are.
We
are
saying
this,
it's
actually
we
were
discussing
this
with
distribution
and
probably
will
never
be
the
first
action
item,
because
there
are
complexity
related
to
licensing
and
things
like
that.
It's
just
that
we
there
are
three
directions
that
are
in.
I
mean
it's
actually
described
in
point
number,
three,
so
material,
so
we
can
discuss
that
later
on.
A
But
the
point
is
that
there
are
three
struggle
points
which
are
the
one
that
are
described
in
the
object
in
the
blueprint
that
are
kind
of
interleaved,
so
some
of
them
are
really
hard
to
solve.
If
you
can
tackle
at
least
the
first
step
of
one
of
the
others,
and
other
solutions
like
having
specified
components
can
be
an
alternative
to
shipping
min.
I
o,
I
mean
it's
just
that
there
are
three
points
which
are.
A: We need a place to store the information — yeah, so we need a place to store information regardless of what type of features we are using; we need to remove all the complexity of moving stuff around; and we need to make a decision — not necessarily us, but at least start thinking about the decision — of whether we still want to support local storage or not, in terms of complexity.
D
Let
me
check
if
we
are
on
the
same
page
and
if
I
do
understand
that
correctly.
So
if
we,
you
know,
think
about
a
blueprint
as
an
abstraction
and
a
map,
does
it
mean
that
the
current
state
of
the
blueprint
right
now
is
that
we
basically
know
where
we
are,
and
there
are
three
paths.
D: ...six of them — and there are, like, three possible improvements. And from reading the blueprint, it's not entirely clear to me how related these are. Are these things that we want to do in parallel? Like, I do see iterations, but you are saying that this is not really actionable yet — yeah.
D
First
iteration
is
to
basically
have
a
conversation
within
a
working
group,
and
the
the
the
other
iterations
are
still
kind
of
unknown
right
now,.
A: It got merged because they were reviewing the things — so, just to make it clear: the iterations are just a draft, just a conversation starter, but the problem definition — I mean, the abstract problem definition — is good, and it's actually a good description of the current status. So let's merge it as-is, so that we can have a common ground for conversation. So that's...
C: Sure, it's just a question — I just dropped it in there because I realized, after joining the working group, that there's a whole lot of discussion that happened in the past, right? I just wasn't sure how much we had already looked at these things. And actually one of these epics — which one is it — it's 483, epic 483.
C
This
actually
had
passed
my
team
as
well
before
as
a
potential
owner
for
this,
and
we
were
back
then
a
bit
concerned
that
it
was
like,
like
these
issues
that
are
collected
in
this
epic.
They
all
made
sense,
but
they
were
all
it's
just
kind
of
a
colorful
bag.
You
know
of
like
often
unrelated
things
now
I
was
just
wondering
if
it
would
make
sense
to
like
review
these.
C
I
don't
know
how
much
work
that
would
be
and
like
like
to
see
if
there's
any
like
how
many
of
these
are
even
like
current
anymore,
there's
any
requirements
we
should
be
aware
of,
because
I
don't
even
know
where
a
lot
of
this
actually
came
from
or
how
much
of
that
is
still
relevant.
Even
yeah
like
it's.
It's
not
like.
I
have
any
strong
opinions.
C
It
was
just
something
that
occurred
to
me
that
there's
already
it's
not
being
worked
on
actively,
maybe
but
like
there
is
already,
or
there
had
been
some
kind
of
work
stream
that
seemed
to
be
going
in
a
similar
direction.
I
think
I
even
spotted
an
issue
here
around
considering
min
io
as
like
a
uniform
object,
storage,
back-end
yeah,
so
I
just
wanted
to
throw
that
out
there.
I
was
just
wondering
if
people
had
looked
at
this
at
all,
probably
I
wasn't
sure.
A
Yeah,
so
that
epic,
there
was
kind
of
a
portman
way
to
make
sure
we
didn't
lose
track
of
all
the
improvement
for
to
object,
storage
that
we
never
had
time
to
work
on.
So
it
was
kind
of
yeah,
a
box
full
of
stuff
that
was
never
worked
on.
So
that's
the
thing,
so
it's
just
a
pointer
so.
A
I
would
probably
consider
reviewing
them
when
we
are
in
a
in
a
stage
when
we
are
defining
our
requirement
just
to
make
sure
that
we
are
not
missing
anything
and
to
figure
out
if
something
is
still
important
or
is
completely
outdated.
What
I
can
tell
you
so
the
one,
the
first
one
that
you
mentioned
in
point
a
was
strongly
related
to
actually
having
the
catch
old
bucket,
because
the
point
is
that
the
way
the
system
is
designed,
you
need
to
know
where
to
put
the
stuff
up
front.
A
So
if
the
feature
set
doesn't
have
its
own
bucket,
you
you,
you
really
can't
do
this,
and
and
because
it's
specific
to
the
features
means
that
all
the
outstanding
features
that
were
not
supported
by
the
the
acceleration
or
things
like
that
had
to
be
reworked,
which
is
one
of
the
reasons
why
they
just
came.
They
just
left
were
left
in
the
yeah
and.
C
That
makes
perfect
sense,
I'm
just
so.
I
guess
I'm
also
wondering
to
to
for
like
visibility
and
to
in
terms
of
how
we
organize
things.
Just
assuming
I'm
not
even
saying
that's
the
case,
but
assuming
if
we
were
to
agree
that
this
single
bucket
approach
would
come
to
pass
in
some
way
shape
or
form.
Maybe
we
should
just
move
those
more
certain
items
that
we
will
know.
C
We
will
very
likely
work
on
maybe
out
of
this
older
epic
and
maybe
just
into
the
working
group,
epic,
so
that
we
can
kind
of
kind
of
play
separate
a
little
bit
like
the
items
that
we
think
these
are
the
most
impactful
ones
or
the
more
certain
ones,
so
that
so
as
to
declutter
like
this,
this
yeah
the
basket
one
as
well.
We.
A
Can
even
make
some
kind
of
some
set
of
scoped
labels
to
this
working
group,
so
we
can
set
what
is
still
to
be
reviewed.
What
is
still
accurate
and
what
is
just
needs
to
be?
We
don't
want
to
fix,
we
don't
it's
not
important,
or
we
just
say
it
close,
because
it's
no
longer
relevant.
C
Yeah,
and
actually
I
remember
as
well
like
I
said-
I
didn't-
have
that
much
exposure
to
it,
but
I
do
remember
when
working
on
the
image
scaling
stuff
that
just
accidentally,
I
discovered,
because
I
was
testing
serving
images
both
from
disk
and
from
object,
storage
right
like
locally
via
midio,
and
because
I
was
unfamiliar
with
the
whole
mechanism
behind
uploads.
C
Like
one
thing
like,
I
was
kind
of
acting
as
an
admin
user
in
a
way
right,
if
you
think
of
personas
like
not
super
familiar
with
implementations
behind
it,
but
kind
of
knowing
the
switches-
and
I
was
kind
of
like
a
bit
disturbed
to
see
that
if
you
wanted
to
go
back
from,
if
you
switch
between
object,
storage
and
local
storage,
which
you
can
and
omnibus
right,
it's
just
a
toggle.
The
system
is
completely
broken.
C
It's
just
every
every
file
is,
is
dead,
right,
yeah,
so
so
they're
like
some
usability
things
as
well,
that
are
quite
yeah
like
they
diminish
your
whole
installation
right.
So
I'm
wondering
like
this
is
like
what
do
we
do
about
these
things
right?
We
kind
of
don't
want
to
like
get
people
to
a
point
where
their
github
installation
is
broken
and
stuff.
So
I
think
there
might
be
just
some
valuable
things
in
there
as
well,
which
are
very,
very.
C: Yeah — actually, I think there was another issue, and then I'm going to stop, I don't want to get carried away too much here. But there was another one that sounded quite pragmatic, which I think was suggesting to just lock in that setting — to disallow usage of local storage once object storage is in use.
A: Yeah, I do remember that time. The Package team is the one most affected by all this stuff; they came later and had, let's say, a hard time trying to untangle the situation. So thank you for writing down that information, and thank you everyone for attending the working group. Let's keep moving on this, let's continue the conversation we started, and see you all next week.