From YouTube: Ceph Developer Summit Quincy: RGW
Description
00:00 - Dashboard and RGW: current status and next steps
27:12 - Alternate bucket indexing schemes: tree-based, sqlite
43:24 - Unified RGW CLI
58:24 - Caching
1:00:39 - Zipper
1:05:39 - Bucket inventory
1:09:40 - RGW Dedup
1:23:24 - Reshard without blocking writes
1:25:10 - Auto pool creation failures
1:30:20 - Internal metrics
1:34:07 - RGW workload model testing
Full agenda: https://pad.ceph.com/p/cds-quincy
D: Everything here is relevant info from the daemons. We have added columns like zone group and so on, so that when you have a multisite configuration you can see more clearly which daemon is running in which zone. Apart from this, when there is a multisite configuration, in addition to the overall performance counters for each daemon, which you can see here, we have added some graphing components for tracking multi-site sync performance and object replication for each zone.
D: When we go to users, for example, we added the ability to see the capacity limit as a progress bar. In this tooltip you see the used space and the free space, and the same applies to the object limit when you set a maximum. Apart from this, if we select, for example, this username, dev1, it comes from realm one.
D: Okay, apart from this, regarding quotas: they are not only for users but also for buckets. We don't have buckets in this realm, but in realm one we have a couple of buckets, and you can see that for the test bucket we have set up a bucket quota, also shown in the tooltip, so it is reflected in this progress bar; the same for the object limit.
D: If you edit a bucket, you can enable versioning or set up multi-factor authentication delete, which is an extra protection; here is a tooltip with a brief description of it. Locking is disabled here, because you can only enable object locking when you create a bucket; then it is enabled.
D: The last feedback that we got was about enhancing the bucket policy and bucket management in general terms, like adding quota management, because right now we're only displaying the bucket quota or the user quota but are not able to manage them yet. So that was the last piece of feedback that we got.
D: Okay then, I filed a tracker issue; it's like an epic tracker for all the work that we have to do, so we'll take all this stuff and reflect it in tracker issues in order to implement those features. And on the other side...
D: It was put on the table that the dashboard could possibly talk directly to radosgw, or at least be able to execute radosgw-admin commands.
D: Yeah, you mean there are things like mfa create.
B: Yeah, so we're missing that. My point was that we're missing some admin ops; we cannot do them through the RESTful API.
E: Can I ask a question about the way that the daemons panel works?
E: I guess this is kind of a high-level question. All three of these tabs have this thing at the top that says "select object gateway". As far as I can tell, what that is actually selecting is which realm you're looking at, is that right? That seems to be how it behaves for users and buckets, i.e. which independent namespace of users and buckets you're looking at.
E: I think it's clear if you look at the users tab or the buckets tab. It should say realm, because each realm has its own independent namespace of users; the users aren't associated with a daemon, they're associated with an entire realm, and each realm has lots of users and lots of daemons. So here it's absolutely a realm at the top.
D: Yeah, but suppose, for example, this realm has two zone groups, and each zone group has two zones. If we have, for example, one daemon per zone, there would be four daemons running in the same realm. Should we then hide the daemons and only show the ones running in the primary zone, or should we show all four? Because right now you can connect to any daemon that is running.
E: The daemons page is the same thing; buckets work the same way, they're nested under there, but the daemons page is also listing multiple daemons, and so it still seems like it should say realm, and then it should show all of the zones, and under the zones all the daemons: it should essentially be a two-level tree that shows you all the stuff for that realm. And again, it doesn't matter which daemon we talk to in order to get that information and display it. A view of the world is complicated, though, because it's unlikely that all of the zones in your realm are going to be on this cluster; you're probably only going to have...
D: One of the features we want to deliver is a topology viewer: when you have a cluster with multisite configured, we want to provide a topology view so you can see exactly what is where. This is on our queue, and it's related, because after the topology view would come applying the desired configuration from the dashboard, and that is when we need either a wrapper for the radosgw-admin that we have in common, or enriching the admin ops API, in order to be able to create zone groups and zones through the admin ops API.
A: A wrapper, yes, maybe something that lives in cephadm, but...
E: Yeah, that's a slightly related discussion; maybe we table that for a minute. Just a couple of other comments on the dashboard here: on the left it says "object gateway". I wonder if that should just say "object" or "object storage", because you're talking about the whole service, not about an individual gateway.
E: Yeah, and for this daemon view, maybe it's even a per-zone view, where you see all the daemons under the zone. I don't know. We should probably sit down with one of the RGW folks and brainstorm how these tabs should be laid out, but it might not make sense until you have that view that shows the whole realm with the zone group topology.
B: Okay, there are certain operations that you can only do against specific zones, like the master zone, so that needs to be kept in mind.
E: Maybe one last comment, because this was coming up for cephadm as well. We had a whole discussion about a year ago about creating Rook CRDs to bootstrap multi-site, and we settled on basically a CRD that creates a zone, zone group and realm in one site as the first instance of it, and then another one where you just specify a URL pointing to the first one, and that bootstraps the secondary and tertiary sites.
E: The question I had for cephadm is whether we should just basically mimic that exact same pattern in the orchestrator API and cephadm, and then surface it in the dashboard, so that you have something here where you would say "create new realm", and it would push that down to Rook or cephadm or whatever to create the new realm in this current cluster, or "link to existing realm".
E: The one thing I was hoping to hear from somebody someday was whether people have used the Rook CRDs and whether they worked well for them, so that we know whether it's something we should just assume is what we want, replicate it, and bring it all the way through the orchestrator API and into the dashboard, or whether we need to make any revisions first.
E: Maybe this can go on the list of top-level deliverables that we want to look at from this top-down perspective: the end goal is to have a dashboard panel where you can create a new realm, and a dashboard where you paste in the URL to another cluster and have the whole thing come up, plumbed all the way through all the pieces.
A: Maybe the daemons view should expose a URL directly to a gateway, so that you can point an S3 client at it and use it directly. I'm not sure exactly how to find that information otherwise.
B: Yeah, although that's more of a user story, right, not an admin one.
B: Cool, yeah. Right now there's the whole management API versus user interface distinction that we're not really there on yet, but some of that might reflect user operations: a user would want to be able to look at their own keys, for example. There's a subset of that.
I: Yeah, we've had that discussion in the past, whether the dashboard would be suitable for having this kind of multi-tenanted view, so that every user could access the dashboard. But it's not clear if this is the best fit for the dashboard, because it's more of a management view rather than an end-user or generic view on the different resources.
A: Thanks. Moving on, the next one is one that I put in about alternate bucket indexing schemes, but I think this was mostly open-ended for Matt, since he's been looking at a couple of these. We've been interested in the new SQLite stuff, considering using it for a bucket index. Matt, have you done any more investigation there? Do you think it's viable?
C: I haven't actually done more interactive testing, but the performance reports we've seen from Patrick suggest it's not implausible.
C: I think this model would have to assume that we had a single point of control within a group of RGWs, and the ability to do remote operations on a particular bucket index, for example, or another index. But I think that's not impossible, and it isn't the only way we could conceive it.
C: I think it's a plausible direction to explore, and it matches up with two things that are being worked on. One is D4N, which is the next generation of the MOC D3N cache; it has a Redis directory, and D3N already has remote operations that are strictly S3 API, but we're working on a couple of different projects that put other kinds of group communication into place.
C: A colleague is working on message-queue based communications, the Redis directory in D4N itself is an example, and Eric is working on Arrow Flight interfaces: an experimental Arrow Flight interface to data in S3, potentially formatted with S3 Select or pre-restricted with S3 Select. Those interfaces are exposed over gRPC.
C: Well, both are important, but I think our key goal is to make performance more symmetrical: provide much better index update performance while keeping indexes for listing. We want to improve listing too, but there is a point at which we can tolerate that the listing API has issues with performance.
C: Remember that upstream, bucket inventory is a thing; an implementation of an inventory is planned in my team, and there's great interest in it, from what we can tell from our downstream, quote-unquote, customers.
C: What I've seen of those inquiries suggests there's a strong ecosystem developing within the AWS community to consume bucket inventories, primarily via Parquet, and to build workflows around them. People already have things attached that will consume those inventories, either serverless functions or some Spark or Hadoop job or something, so it's reasonable to consider expecting applications to learn how to use bucket inventory.
A: ...when a particular OSD gets a whole lot of omap keys on it, whether from just having too many buckets, or shards that have too many entries, or from multi-site bucket index log entries filling up.
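The connection between omap pressure and resharding can be illustrated: index entries live on a shard chosen by hashing the object name, so changing the shard count remaps almost every key, which is why resharding has to copy entries rather than split them in place. A toy sketch (md5 stands in for RGW's actual hash function; the effect is the same):

```python
import hashlib

def shard_of(key: str, num_shards: int) -> int:
    # Stable hash of the object name, modulo the shard count.
    h = int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")
    return h % num_shards

keys = [f"obj-{i:05d}" for i in range(10_000)]
before = {k: shard_of(k, 11) for k in keys}
after = {k: shard_of(k, 23) for k in keys}   # reshard 11 -> 23

moved = sum(1 for k in keys if before[k] != after[k])
print(f"{moved}/{len(keys)} index entries change shard")  # the vast majority move
```

A split-capable structure (like the B+ tree idea discussed below) avoids this by dividing one overfull range in two while leaving every other entry where it is.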
B: That's one thing that we want to do, but I'm not sure that's a SQLite thing; it isn't dependent on it either.
C: Yeah, I don't see it either. I think it's something that we should explore.
A: So, Matt, you've also done some design work on a B-tree or B+ tree implementation for the bucket index that can split instead of reshard.
C: It appeals to me to reuse code; that's an emphasis. Patrick's remarks on that were one of the things that made me want to look at his work. Even if we do something else, I'd still like to reuse high-quality code where that's sensible, whether it's SQLite or something else.
B: One specific issue with SQLite is what you mentioned: that you need to have a lock on the bucket, like you cannot have multiple writers, and that's kind of against the baseline premise, where you don't lock.
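For a sense of why SQLite is plausible for this, a minimal sketch (not DBStore's actual schema, just an illustration): the bucket index is a table whose primary-key B-tree keeps names sorted, so marker-based ordered listing is a single range scan.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE bucket_index (
    bucket TEXT NOT NULL,
    name   TEXT NOT NULL,
    size   INTEGER NOT NULL,
    etag   TEXT NOT NULL,
    PRIMARY KEY (bucket, name))""")

# Insert objects out of order; the B-tree keeps them sorted by name.
for n in ["b/2", "a/1", "c/3", "a/9"]:
    con.execute("INSERT INTO bucket_index VALUES ('test', ?, 0, '')", (n,))

def list_objects(bucket, marker="", max_keys=2):
    # Marker-based ordered listing: one index range scan, like
    # S3's ListObjects continuation.
    rows = con.execute(
        "SELECT name FROM bucket_index WHERE bucket=? AND name>? "
        "ORDER BY name LIMIT ?", (bucket, marker, max_keys)).fetchall()
    return [r[0] for r in rows]

page1 = list_objects("test")                     # ['a/1', 'a/9']
page2 = list_objects("test", marker=page1[-1])   # ['b/2', 'c/3']
```

The single-writer limitation B raises shows up here too: SQLite serializes writers per database file, which is why the discussion assumes a single point of control per bucket index.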
C: I don't know, maybe it is, but I think this is an area where we're going to go down a rat hole. There are results in distributed computing that would lead us to consider doing it: single point of control, that type of thing. It's not just...
B: Yeah, I would love to see some kind of a B+ tree over RADOS implementation, but, having done the FIFO thing and other related stuff, it's not going to be trivial. But it might be that that's something that is needed.
B: Yeah, but do you want to have one big coarse lock, or do you want to have granular locks? That's, you know...
C: The fact is that the bucket indexes are naturally partitioned into ranges, and all of those can be operated on in parallel, and then there's a second piece that comes in. I admit this has its trade-offs, but in the wins column are single point of control and cache locality.
C: It's also worth mentioning the work going on at UC Santa Cruz, and the other database stuff that's kind of pushing in that direction. It isn't really providing solutions for this problem, but we have looked at the Skyhook stuff, and it's sort of focusing on columnar data.
E: This seems to me like a subject where we can talk ourselves in circles for several hours. Do we have a plan for how we're going to make progress here? I mean, we can say we're going to investigate, but are there specific projects in flight, or are we talking to grad students who want to look at these things? What's the plan of action to resolve some of these questions?
C: Parts of request routing could be present in Quincy. I do think that, because D3N already has it and D4N essentially has it, there are different ways of doing it in flight just with D4N alone, and some of Or Friedmann's work should also merge. But I think request routing is the most plausible piece; D3N is essentially already doing that.
B: The RGW code needs to support multiple index types, right? So there needs to be some groundwork happening there before anything else.
A: We do have some groundwork from the bucket layout types for that abstraction.
C: Maybe Carlos is interested in some of that, since they've done overlapping work. I'd certainly love to talk to them, and I think others would too.
A: All right, going back to listing real quick before we move on: I just wanted to mention that we also support unordered listing, which can be a big speed-up for large buckets, as long as the client can use the extension to request it. I think Yuval made a repo under the ceph organization somewhere that lets you plug a file into boto and use Python to request unordered listing.
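The extension in question is RGW's `allow-unordered=true` query parameter on bucket listing. One way to request it from boto3 is a botocore event hook; the hook mechanics below are a sketch based on botocore's event system, not the repo mentioned above, and the endpoint is a placeholder.

```python
def allow_unordered(params, **kwargs):
    """botocore 'before-call' handler: add RGW's non-standard
    allow-unordered query parameter before the request is signed."""
    params["query_string"]["allow-unordered"] = "true"

def make_unordered_client(endpoint):
    # Assumes boto3 is installed and the endpoint is an RGW instance.
    import boto3
    s3 = boto3.client("s3", endpoint_url=endpoint)
    s3.meta.events.register("before-call.s3.ListObjects", allow_unordered)
    return s3

# With such a client, s3.list_objects(Bucket="big-bucket") returns keys
# in no particular order, skipping the ordered index scan that is
# expensive on very large sharded buckets.
```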
E: Yeah, I kind of dropped this in at the last minute. I don't want to take a lot of time, but I just wanted to have a quick discussion around what the big-picture goal is, so that we can at least start inching in the right direction. The superficial goal is just to bring all the RGW admin commands under the ceph CLI, which basically just means making the manager able to do all those commands; that's reasonably easy to implement.
E: In fact, the manager was already shelling out to radosgw-admin for a few random tasks, and because it's all containerized, we always know we have the right version, where it's located, and all that stuff.
E: So I think that part is easy, but this kind of relates to the question that kept coming up in the dashboard section around the admin API, where there's a chicken-and-egg problem about setting up realms and zones when you don't yet have an endpoint, a radosgw daemon running, that you can connect to for its API.
E: I'm just wondering: does it make sense to move some of this functionality into the manager, so that manager modules can do things like manipulate realms and zones and this configuration stuff?
E: Or from RGW's side, I guess. I just keep hearing this issue come up around where the management API is running and what it can manage: we have a radosgw process, but it's only running in one zone, and so it can't do certain things. I don't really understand what the constraints are and what the current problems are that we're hitting.
B: Well, there are multiple issues. First, you have this one big monolithic admin command-line tool called radosgw-admin that is really way overdue for refactoring; it's pretty embarrassing how it looks, but it kind of works for us right now, and it's going to be huge work to break it down into something more modular that would actually make more sense. So that's one thing. Now, creating RESTful interfaces out of that is kind of challenging, or maybe not necessarily; I'm not sure, and I know there have been some discussions, but you need to refactor it before you do anything.
C: Well, maybe, although some previous refactoring discussions have focused a lot on command-processing abstractions, and perhaps that can be bypassed if we focus on turning it into RESTful services, which I think admin ops is basically already doing.
B: It's very powerful, so it's about getting all the commands that radosgw-admin does now into that.
B: Not everything can even be pushed into admin ops, because of the bootstrapping issues we have now. You could ask why certain things cannot run against certain zones: for stuff like user creation, while you can do it on every zone, only the master zone is the one that distributes the metadata.
B: So certain things need to happen in the master zone of the master zone group at the moment. There needs to be some work there if you want to change that.
E: I guess, to pose this as a more concrete question from a user-experience perspective: does it make sense to move maybe only some of the radosgw-admin commands, maybe all of them, I don't know, but maybe starting with some of them, under the ceph CLI? Like, maybe just starting with those related to managing the realm configuration type stuff: creating realms, zones and zone groups and so on.
E: Even if the initial implementation of that is just shelling out to radosgw-admin, it would put it under the purview of a manager module. The parallel I'm thinking of here is the volumes module, where we basically tied the creation of a CephFS file system to spinning up the daemons that actually serve that file system; similarly, we'd move managing realms and zones into the manager.
E: Does that make sense, and might that be a direction to go if there is a refactoring of the admin API stuff that has to happen anyway: moving to a model where some of it is hosted by the manager, as opposed to by a radosgw?
B: I think that's more of a user-experience question: whether it will be easier for users to do it through the ceph CLI or through the radosgw-admin commands in some way. I'm not sure.
B: Like, you want to have the stuff managing radosgw in the same place, it seems to me, but where is that place? Not necessarily where radosgw is, right?
A: If we're talking about creation of zones and deployment of gateways for them, I'm still unclear on the division between, say, ceph rgw commands and cephadm itself.
E: Yeah, the orchestrator interface defines the manager-internal API for just deploying the daemons, both for metadata servers for the file system and for radosgw.
E: The way the volumes module works is that it does all the fiddling with creating the file system, and maybe, if it's a subvolume, creating the directories and setting up caps on it and all that random stuff, which is specific to file systems; and then, at the very end, there's like one line that just does a remote call into the orchestrator module to go create a deployment of daemons to serve that file system.
E: I would imagine that we would do something similar, where, for example, we just took the realm-related management commands and mapped them to CLI commands, and it'd be something like "ceph rgw realm create", and then "ceph rgw zone create" under realm foo.
E: And when you do that, it would initially just shell out to radosgw-admin to create that zone, and then it would call into the orchestrator to go create those daemons, all in one go. And then whatever the dashboard experience would be, which maybe is the driving UX flow for deciding this, it would go and create...
E: It would do the same thing; it would call the same APIs internal to the manager.
B: It could be that there is a space where it is the gateway; I mean, there's still that if you want it. But the CLI could do scripting on top of it, like higher-level commands. Right now, to create a new realm, or a new zone in a new realm, you have at least three to five commands that you need to run, right? You need to create a realm, create a zone group, create the zone, create a user for synchronization, and set the endpoints, among other things.
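Those three-to-five commands are, roughly, the standard radosgw-admin multisite bootstrap sequence; the names and endpoints below are placeholders, and the flags follow the upstream multisite documentation:

```shell
# Bootstrap a new realm on the first cluster (placeholder names/endpoints).
radosgw-admin realm create --rgw-realm=gold --default
radosgw-admin zonegroup create --rgw-zonegroup=us \
    --endpoints=http://rgw1:8000 --master --default
radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east \
    --endpoints=http://rgw1:8000 --master --default
radosgw-admin user create --uid=sync-user \
    --display-name="Synchronization User" --system
radosgw-admin zone modify --rgw-zone=us-east \
    --access-key=<key> --secret=<secret>
radosgw-admin period update --commit
```

A secondary site would then, roughly, do a `radosgw-admin realm pull --url=...` against the first endpoint before creating its own zone, which is the bootstrap-by-URL pattern the Rook CRD discussion above settled on.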
E: It lets you create users, but there's this issue of making sure you're talking to the right zone, the master or whatever, for the realm. My understanding is that right now the dashboard module basically has all that logic, so it's actually going out and calling the RGW admin API. If we just shift that down a little bit into an rgw module in the manager, then we could expose that same functionality through the CLI, so you could have a "ceph rgw user create" that knows where to do it, which zone to talk to, and does the right thing.
I: Yeah, just a comment from our side on the multi-site or multi-daemon support: what we currently have to do is configure multiple keys for each user, so the admin has to manually enter, in a module option, a dictionary with the mappings between the keys and the different daemons. That's the way we're currently doing it, and it's not very friendly.
B: Well, I think we do need to spend time this year at least breaking radosgw-admin down into logical units or something, and build on top of that.
A: All right, next on the agenda is caching, with a question mark. I assume this is talking about the MOC work around D3N or D4N.
C: It's doing more robust metadata handling and different kinds of cache strategies. Its primary mode right now is using a Redis directory; they can manage cache state with it. There's a student project to integrate this with S3 Select, so that S3 Select can operate on data that's cached; that appears to work.
C: Right now D4N, unlike D3N, stores its materialized caches on another RGW S3 substrate. I think converging it with D3N is something we'd do while upstreaming it, but the authors of that aren't here, so I don't want to drill too much into all that. But that's the thing that's being worked on.
A: Next up, then, is Zipper. Maybe we should just recap the current status and what the next step is before talking about eventual goals for Quincy. Dan, do you want to take this one?
M: Sure. The current status is that the first-pass API and implementation are complete.
M: The first non-RADOS backend, which is based on SQLite and is called DBStore, is under development, and it's finding issues that were of course missed in the first pass of the API, which is expected. Hopefully it will be done within the next month or so, and then we'll be able to do a reasonable subset of RGW operations on this store. We're not implementing, for example, multiple instances or multiple zones or anything like that.
M: It'll only be a single RGW in the first pass, but within that context you should be able to do just about everything RGW does. The next step beyond that is to implement the first transformation layer, which will be a Lua transformation layer, and that should get us all of the basics we need to declare ourselves fully feature complete, because at that point we will have implemented non-RADOS versions of everything we need to implement.
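The layering Dan describes can be illustrated with a toy sketch (Python rather than the actual C++ Zipper API; all class names here are invented): a store interface, a concrete backend standing in for DBStore or RADOS, and a stackable filter playing the role of the Lua transformation layer.

```python
class Store:
    """Minimal stand-in for a Zipper-style store interface."""
    def put_object(self, bucket, key, data): ...
    def get_object(self, bucket, key): ...

class MemStore(Store):
    # A trivial backend, playing the role of DBStore or the RADOS store.
    def __init__(self):
        self.objs = {}
    def put_object(self, bucket, key, data):
        self.objs[(bucket, key)] = data
    def get_object(self, bucket, key):
        return self.objs[(bucket, key)]

class UpperCaseFilter(Store):
    """A transformation layer: wraps any other Store and rewrites data
    on the way in, the way a Lua filter layer could."""
    def __init__(self, next_store):
        self.next = next_store
    def put_object(self, bucket, key, data):
        self.next.put_object(bucket, key, data.upper())
    def get_object(self, bucket, key):
        return self.next.get_object(bucket, key)

# Layers compose: the caller sees one Store regardless of the stack.
store = UpperCaseFilter(MemStore())
store.put_object("b", "k", b"hello")
print(store.get_object("b", "k"))  # b'HELLO'
```

Because every layer presents the same interface, caches or redirection layers (like the s3-mirror idea mentioned below) can slot in the same way.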
A: Cool. And I do remember that for these kinds of intermediate layers we've talked about potentially moving our caches into that, at least the metadata cache, and then potentially the data cache from D3N.
C: There's also the submission by, I think, IBM Research folks upstream that they call s3-mirror; it's basically a redirection layer that they insert into the RGW process so the requests go to a remote S3. It would be nice to look at maybe relocating that into a Zipper layer.
A: All right. I'd also mention that Caleb has been working on making Zipper pluggable, basically moving the RADOS backend into a shared library, so that we can load backends as plugins and eventually potentially have external ones plugged in. I think there's a lot more to talk about in terms of API stability for things like that; at least originally it's going to be pretty tightly coupled to upstream.
M: Yeah, the plan is, once we've reached what we think is a stable level, we can start doing actual versioned APIs, so you'd be able to develop against a particular version of the API, and you'll know when the API changes in a non-compatible way because the version will change. That should help support out-of-tree implementations.
A: Cool. Well, I look forward to getting more experience with the APIs and improving their shape as we work on other backends or filter layers.
A: This is an Amazon feature that is meant to optimize bucket listings for large buckets, basically by making, in another bucket, a condensed form of the bucket's contents that you can search through with something like S3 Select.
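A toy sketch of the idea, assuming nothing about the real implementation: the inventory is a single condensed object (CSV here instead of Parquet, to avoid dependencies) written into another bucket, which consumers scan instead of listing the large bucket itself.

```python
import csv, io

# A toy "bucket": object name -> (size, etag). Illustrative only.
bucket = {f"logs/day-{i}.gz": (1000 + i, f"etag{i}") for i in range(5)}
bucket["index.html"] = (512, "etagx")

# Build the inventory object: one condensed row per object.
buf = io.StringIO()
writer = csv.writer(buf)
for name, (size, etag) in sorted(bucket.items()):
    writer.writerow([name, size, etag])
inventory_object = buf.getvalue()  # would land as CSV/Parquet in another bucket

# A consumer scans the small inventory instead of listing the big
# bucket, e.g. the equivalent of: SELECT name WHERE name LIKE 'logs/%'
matches = [row[0] for row in csv.reader(io.StringIO(inventory_object))
           if row[0].startswith("logs/")]
print(matches)
```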
C: Basically, the notion is a little bit similar to lifecycle: it's an optional thing where you set up a schedule of inventory construction, daily or weekly, on a bucket. So we need to agree on an implementation strategy for the asynchronous work to spool the inventories off and presumably manage their lifecycle.
C: Some retention policies and things would be natural, which lifecycle could handle, but I don't know how AWS handles that. Internally, we need to agree on ways to do that asynchronous processing. We've talked about it; it would still be internal, but there are probably more discussions to have about that. Some people have proposed that there should just be something outside, running in a serverless function...
C: ...that goes and does this: something that constructs a complex structured storage object, like Parquet, which is a little bit heavyweight.
A: Okay, well, if everything that this background process would need to read and write can be exposed through S3, then it could make sense for it to live outside.
A: Cool. Other than work on Parquet, is there other stuff that we need from S3 Select to be able to support that?
C: Again, it doesn't need S3 Select, although people might use S3 Select to talk to it; there's no inherent connection with S3 Select. It just writes ordinary, in this case Parquet, objects to a well-known location.
C: The precursor to this, as I understand it, is that RADOS has been developing dedup primitives, but my understanding of them is twofold. One, there probably won't be a full-fledged implementation of the stacked-pool full-deduplication model in RADOS that's being proposed; it might not be there.
C: It might also be that when it does exist, it's a bit different from the style of operation that RGW would want, since we have a lot of schema operations and so forth at the RGW layer. On the other hand, we would like to use the primitives that have been worked on: the FastCDC fingerprinting work seems to be, I think, based on...
C: From what I can tell, it's a potentially large contribution; it doesn't seem to be widely available in industry right now, though I'm not sure about that, but that's my impression. It seems like it has a lot of advantages over commercial components, which are mostly 10 or 15 years old.
C
And so I'm very excited to use that, but I've been talking with you about possible ideas of doing an offline dedup that leveraged perhaps the same technology we would use for bucket indexes, but as a discrete, on-demand index mechanism.
C
They would employ FastCDC to build those indexes, and some extension to fingerprints and object manifests, to accomplish a dedup solution at the RGW layer. I don't...
C
B
B
It mutates the existing objects in a way such that they point at the pieces that are duplicated, so there needs to be some kind of way to represent those in a new representation of the object, through the object manifest somehow. Yeah, that's the idea of ours.
C
E
On the rados side of things, if you ignore RGW for a second and just think about the rados half, there are basically two parts to it. There's the actual storage pool that has all the content-addressable object fragments and ref counts on them; all the infrastructure, as far as I can tell, exists for that to work. There's the chunking algorithm to decide how to choose your boundaries, and then there's an efficient refcount class that lets...
E
...you have complete back pointers for small numbers of refs, and as the number of refs increases it starts to keep an approximate blob of reference counts. Essentially, all that stuff is there. The rados tiering piece, where a rados tier takes a rados object and transparently puts it into a bunch of chunks, is like half implemented, maybe two-thirds implemented, but needs a lot of work.
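[Editor's note] The chunking-plus-refcount scheme described in this turn can be sketched in a few lines. This is a toy illustration only, not Ceph's FastCDC or its refcount class: a simplistic rolling-hash chunker and an in-memory content-addressable store with per-chunk reference counts, to show why writing the same data twice costs no extra storage.

```python
import hashlib

def cdc_chunks(data: bytes, mask: int = 0x3FF, min_size: int = 64) -> list:
    """Content-defined chunking: cut where a rolling hash hits a boundary
    pattern. A toy polynomial hash stands in for FastCDC/Rabin."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFFFFFF
        if i - start + 1 >= min_size and (h & mask) == mask:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

class ChunkStore:
    """Content-addressable store: fingerprint -> chunk, with refcounts."""
    def __init__(self):
        self.chunks = {}   # fingerprint -> bytes
        self.refs = {}     # fingerprint -> int

    def put(self, chunk: bytes) -> str:
        fp = hashlib.sha256(chunk).hexdigest()
        if fp in self.chunks:
            self.refs[fp] += 1          # dedup hit: bump the refcount only
        else:
            self.chunks[fp] = chunk
            self.refs[fp] = 1
        return fp

    def write_object(self, data: bytes) -> list:
        """Store an object; return its manifest (ordered fingerprints)."""
        return [self.put(c) for c in cdc_chunks(data)]

    def read_object(self, manifest: list) -> bytes:
        return b"".join(self.chunks[fp] for fp in manifest)
```

Writing the same object twice produces identical manifests and stores each chunk once with a refcount of two; deletion would decrement refcounts and reclaim chunks at zero.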
E
I wouldn't rely on that. The only advantage of it is that it can be used in place by any user of rados, and it serves as the indirection layer. In RGW's case, you already have an indirection layer: you're already hitting a gateway and you already have a manifest. Rely on that rather than rados, or else you're having multiple hops. So the big question in my mind is really: can the RGW manifest behave well for a large object, say, when you have lots of little chunks?
B
You need to have some kind of another layer of indirection, so the manifest itself should point at something else that represents that. I don't think the current manifest, as it is, could scale.
B
C
E
And not the content of the object, so yeah, there'd have to be some other scheme, I guess. I mean, it really depends on how small you make your chunks, and it's sort of a trade-off: the smaller...
E
...your chunks are, the better your dedup ratios are, but the higher the metadata overhead is and the worse the read performance is. So my sort of gut feeling is that we'd want chunks on the order of a megabyte or half a megabyte, something like that; not too small, but not too big either. In which case the manifest could be pretty big for a large S3 object, and we have lots of those.
B
F
E
E
B
It's a shorthand, right; yeah, there's an algorithm. So we'll need to take that into account, and it might be that we can have something similar to what we have now, but then in cases where it's getting too big, we say: okay, the chunk definitions are in that other object, some other rados object, yeah.
B
A
So if we're going to have some background process do the fingerprinting and dedup stuff, then it would have to modify the head object in place, right, where it's currently immutable.
B
B
Well, you create a new head at the end, right; you create something, and then at the end you do the final write. It's no different from changing ACLs, or copying something in, like overwriting the same object with itself, basically.
E
Okay, setting aside the undiscovered changes there: does making it offline significantly simplify things? Like, would it be that hard to make it so that, as you're streaming data from the client, it chunks as it goes, writes out the chunks and, at the very end, writes...
A
J
J
Having the ability to just have the references there makes it really cheap to detect the duplication.
J
B
C
J
A
All right, I think we can add this as an agenda item for our RGW refactoring calls to keep up with it.
B
A
A
C
A
A
All right, and the next one I had added was about pool creation.
A
A
H
H
But we think there's also a discussion around the secondary mode of the autoscaler, where it actually does try to use the full budget of PGs for the entire cluster from the get-go. So you do get a full level of parallelism for things like data pools, and then you have to have some cap on the metadata pools, like for RGW and CephFS, so that they don't use a whole bunch of PGs they don't need.
A
H
The idea is that the monitor check would go away, or effectively never be hit anymore.
H
So yeah, once you create the data pool, you would just start out with one PG, but if you specify a higher number, or even if you don't, the autoscaler will give it a higher allocation once you create the pool. But essentially, the new algorithm respects the target-size parameters that already exist.
H
If you set those, it uses them; and if you don't set them, it divides the rest of the PG budget evenly among the pools.
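[Editor's note] The allocation rule just described can be sketched as follows. This is a simplified illustration, not the actual pg_autoscaler code, and the pool names are made up: pools with a target ratio claim that share of the PG budget, and the remainder is split evenly among pools with no target set.

```python
def allocate_pgs(pg_budget, pools):
    """pools: mapping of pool name -> target_size_ratio, or None if unset."""
    claimed = sum(r for r in pools.values() if r is not None)
    unset = [name for name, r in pools.items() if r is None]
    remainder = max(0.0, 1.0 - claimed)
    alloc = {}
    for name, ratio in pools.items():
        if ratio is None:
            share = remainder / len(unset)   # split leftover budget evenly
        else:
            share = ratio                    # honor the explicit target
        alloc[name] = max(1, int(pg_budget * share))  # every pool gets >= 1 PG
    return alloc

# A data pool claiming most of the budget; metadata/log pools share the rest.
print(allocate_pgs(256, {"rgw.data": 0.8, "rgw.meta": None, "rgw.log": None}))
```

With a budget of 256 PGs, the data pool gets 204 and the two metadata-style pools get 25 each, which matches the intent above: full parallelism for data from the start, a modest cap for metadata.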
H
Yeah, I think it's much more reasonable to expect that to work well with this new autoscaler behavior that we're working on. In the past, the issue was that the pools we create automatically started with a minimum number of PGs and then wouldn't have enough parallelism, but with the new autoscaler behavior that shouldn't be a problem; they'll get a much higher level of parallelism out of the gate.
B
Maybe we're missing some command that would do some sanity check about the state or something. You know, it was hard to identify what the underlying issue was.
B
Maybe we're missing a way for users to check.
H
Well, the problem was we had an essentially obsolete check in the monitor. With the autoscaler running these days, we don't actually need a lot of the checks that we used to have, these kinds of things.
A
A
All right, the last thing was from Blaine on Rook and the bucket claims. Is there more to discuss here? I think we still have...
L
Oh, I was going to ask Matt: do you think it's worth mentioning the metrics stuff, or...?
C
L
C
Well, something we've wanted to do for some time; it may or may not overlap cleanly, but I had talked with Jason Dillaman in depth a few weeks ago about it. What we're hoping to do is be able to provide more per-entity information.
C
Essentially, to parameterize performance counters, and possibly other metrics, to make telemetry, eventually into Prometheus, more flexible. It appeared that the work Jason was doing would have allowed that. I assume it's still continuing, but our needs are slightly different from those that have been framed by CephFS and RBD replication.
C
Organizing things like counters, but also other variables, by parameters, so that we can talk about transient or otherwise grouped information. Examples would be the users that have been operating on the RGW cluster, or parts of it, at different times; the activity on particular buckets; or the activity from particular S3 clients.
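[Editor's note] The "parameterized counters" idea can be sketched as counters keyed by labels (user, bucket, client) rather than one flat counter per daemon, much like Prometheus labels. The counter and label names below are illustrative, not RGW's perf-counter schema.

```python
from collections import Counter

class LabeledCounters:
    """Counters addressed by (name, labels) instead of a flat name."""
    def __init__(self):
        self._c = Counter()

    def inc(self, name, n=1, **labels):
        key = (name, tuple(sorted(labels.items())))
        self._c[key] += n

    def by_label(self, name, label):
        """Aggregate one counter over a single label dimension, e.g. per user."""
        out = Counter()
        for (n, labels), v in self._c.items():
            if n == name:
                out[dict(labels).get(label)] += v
        return out

m = LabeledCounters()
m.inc("rgw_put_ops", user="alice", bucket="b1")
m.inc("rgw_put_ops", user="alice", bucket="b2")
m.inc("rgw_put_ops", user="bob", bucket="b1")
```

The same raw counts can then be rolled up per user, per bucket, or per client without predefining every breakdown, which is the flexibility being asked for here.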
C
Some of the motivation for doing that, for us, is related to QoS, but another motivation is really just to be able to provide greater observability of what the whole system is doing.
K
Did you mean to collect it as telemetry data, or is it available internally just for the user to view in the Ceph dashboard?
C
Well, you're using telemetry as a term to mean, I think, phone-home information and such. By it, I meant the internal flow of usage and other activity across the system.
C
So, examples of what I mean are things like the counter beacons that are currently sent to the manager every few minutes.
K
If you have any metrics that you wish to phone home, I'm passing along the etherpad, so...
C
K
H
So I had a general topic that I thought we could do some discussion on, which was around testing. I mean, we've been discussing lots of very large-scale changes to the way RGW is storing things and how it's maintaining data, even with different backends, and it seems like testing is going to be a large component of that. But I was kind of curious; I know that s3-tests is pretty comprehensive in its API coverage.
H
Does it have a sort of wide test-space sampling coverage? One of the things that's been very effective for RBD and rados has been having a kind of test that has a model of how the data system should look, and running random sequences of operations to explore that space and verify that the model matches the reality. Is that something that exists in the RGW tests as well?
J
I think Ali has left us for today, unless he rejoined, and he can answer better than I can. There aren't many layout-based tests in s3-tests today. There are some older tests in the QA section, like the actual qa/ tree, that do poke some parts of layout, but I'm not certain how well those have been updated for new pieces. Maybe other people can comment; I will say that for the s3-tests...
H
C
But basically I think the answer is no. The thing I've seen recently was work that Gal Salomon has added; it's really for S3 Select, but they're generating test profiles with randomization and then freezing them as big inventories of tests that can be rerun with known results, and they can run them against Amazon S3, you know, any...
C
They run them against Amazon, or they can run against RGW. The s3-tests suite itself isn't really like that; it's a big inventory of very specific expectations in each test, I think. Even though there are some options, some parameters, they don't explore a wide space, I think it's fair to say.
H
So is that some kind of thing that you're looking at, then, or is that something...?
C
I mean, the way that we sort of deal with it, from our point of view, is that we have coverage for it downstream. In particular, we cover it with COSBench directly, and so the parameters we're trying to explore are a wide bunch of workload characteristics. There's new stuff upstream, too; the MinIO folks have something called Warp.
C
That's kind of an ensemble of a bunch of their S3 test code plus a bunch of randomized, parameter-based stuff. So, first of all, for a subset of workloads those kinds of tools cover things, but that's a bit different from, I think, what you're saying in terms of how teuthology lets you probe all kinds of...
C
Exactly, that's a lot. So I think it's something we don't have. There's something called ragweed that Mark, sorry, that you could talk more about, but it's a pretty limited number of tests. It was a test suite that was designed, I think it's a nose test suite, to allow you to work with both rados-level commands and S3 or other operations, if you want to do things and then dig into and make assertions about what should be present.
C
H
H
That sort of thing. An example of how this is used in RBD is the fsx tests, which actually came from a file system test originally, but they're randomly doing different operations on images, like creating snapshots and clones and reads and writes and truncates and deletes, basically all the different kinds of operations that are supported, and then verifying that the state in the cluster matches the expected state in their memory model.
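[Editor's note] A toy version of the model-based testing being described: random operations applied both to the "system" (here just a stand-in class, not a real cluster) and to an independent in-memory model, checking that the two agree as the walk proceeds. The class and operation names are made up for illustration.

```python
import random

class TinyBucket:
    """Stand-in for the system under test (a bucket-like key/value store)."""
    def __init__(self):
        self._objs = {}
    def put(self, key, data):
        self._objs[key] = data
    def delete(self, key):
        self._objs.pop(key, None)
    def list(self):
        return sorted(self._objs)

def run_model_test(seed=0, steps=500):
    """Random walk over put/delete/list, verifying system against the model."""
    rng = random.Random(seed)
    system, model = TinyBucket(), {}
    for _ in range(steps):
        op = rng.choice(["put", "delete", "list"])
        key = f"obj-{rng.randrange(8)}"
        if op == "put":
            data = rng.randbytes(4)
            system.put(key, data)
            model[key] = data
        elif op == "delete":
            system.delete(key)
            model.pop(key, None)
        else:
            # The model must match reality at every checkpoint.
            assert system.list() == sorted(model)
    assert system.list() == sorted(model)
    return True
```

A real RGW version would substitute S3 operations (multipart uploads, versioning, lifecycle) for put/delete/list and run the walk while background work like resharding is active.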
H
That kind of thing gets you much more coverage of the general state space that the system could be in, right, than a statically defined set of tests.
C
It's interesting. This is the sort of thing that our downstream quality engineering people try to do in various ways, but they've approached it in terms of writing programs that mix specific things together, like: create some random number of versioned buckets, create bunches of objects in each one, and then apply varying combinations of lifecycle to them. That's the type of thing I've seen recently. But yeah, if we had a generative approach to produce an inventory of tests to run...
F
From my experience, the rados model-style tests have been insanely valuable in rados; they're more valuable than all of the other unit tests put together. I'd also submit that if it's worth doing downstream manually, it's extremely worth doing in teuthology.
C
F
E
E
I'm not sure; it'll be significantly more work to write something like the rados model that is a model for the full S3 API.
F
E
F
If you think about API commands as leaves hanging off of manipulations of underlying structures, then if you choose one or two representative leaves corresponding to each of those structures, you get pretty good coverage without having to cover the entire API. The point isn't so much to test the API command specifically; it's to find ways of manipulating the underlying structures into larger sets of states.
H
J
Yes. So the baseline proposal, the summary I gave to students, was: if you instrument boto with coverage while running s3-tests, what parts of boto's S3 APIs are we not yet touching?
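[Editor's note] The instrumentation idea can be sketched without any third-party tooling: wrap every function in a module so that calls are recorded, run the "test suite", and diff against the known API surface. The real proposal would point coverage.py at boto/botocore while s3-tests runs; the stand-in module and function names below are made up.

```python
import types

def instrument(module):
    """Wrap every function in `module` to record which ones get called."""
    touched = set()
    for name, obj in list(vars(module).items()):
        if callable(obj):
            def make_wrapper(fn, fn_name):
                def wrapper(*args, **kwargs):
                    touched.add(fn_name)      # record the call
                    return fn(*args, **kwargs)
                return wrapper
            setattr(module, name, make_wrapper(obj, name))
    return touched

# Stand-in "client library" with two entry points; only one gets exercised.
fake_boto = types.ModuleType("fake_boto")
exec("def put_object(): return 'ok'\ndef list_objects(): return []",
     fake_boto.__dict__)

touched = instrument(fake_boto)
fake_boto.put_object()                 # the "test suite" makes one call
api = {"put_object", "list_objects"}
uncovered = api - touched              # list_objects was never touched
```

The uncovered set is exactly what the student project would report: API surface the suite never reaches.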
J
J
Right. The parts that don't necessarily wind up being covered, and I still believe this is probably future room, are things where boto doesn't cover parts of the S3 spec.
J
For example, variants that you could put into the CORS XML, and then run the requests that should return various parts of the CORS headers; or static-website behavior, where the static-site configuration isn't well covered in boto. Boto lets you set the XML payload, but there's not really a meaningful way of pulling it back.
J
Browser POST uploads have the same problem: boto just doesn't do them. It'll happily generate the signed pieces that you need, but you can't actually perform the uploads with boto; you need other coverage for that. I mentioned browser POST specifically because there was a previous bug there, where certain policies in browser POST didn't work. So we're going to need something bigger than boto.
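[Editor's note] For reference, the "signed pieces" for a browser POST upload are a base64-encoded policy document plus a SigV4 signature over it, which can be produced with the standard library alone. This is a sketch following the AWS SigV4 POST-policy scheme; the bucket name, dates, and keys here are made up.

```python
import base64, hashlib, hmac, json

def sign_post_policy(policy: dict, secret_key: str, date: str, region: str) -> dict:
    """Return the base64 policy and its SigV4 signature (date as YYYYMMDD)."""
    policy_b64 = base64.b64encode(json.dumps(policy).encode()).decode()

    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()

    # Standard SigV4 key derivation chain: date -> region -> service -> request.
    k = _hmac(("AWS4" + secret_key).encode(), date)
    k = _hmac(k, region)
    k = _hmac(k, "s3")
    signing_key = _hmac(k, "aws4_request")
    signature = hmac.new(signing_key, policy_b64.encode(),
                         hashlib.sha256).hexdigest()
    return {"policy": policy_b64, "x-amz-signature": signature}
```

Generating these fields is the easy half; actually submitting the multipart/form-data POST and checking the policy enforcement is the part boto doesn't exercise, which is the coverage gap described above.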
G
One thing that comes to mind is that, at least in the field, we've seen certain kinds of scenarios exposed when there are combinations of RGW operations going on, like, you know, resharding and listing, things like that. So are there existing tests in teuthology that already exercise those kinds of scenarios?
H
That's exactly the kind of combined operation that this randomized testing can help find. When you're talking about background operations like resharding or GC or dedup, having these kinds of state-space walks running while those things are happening can turn up all kinds of issues that you wouldn't otherwise see.
D
We got feedback that customers are interested in this multisite sync policy, and I was wondering: is this info, at least the info you showed, also available via the admin ops API?
B
B
I'm trying to think. If you do an admin metadata get on the bucket, or get the bucket instance info for the bucket, and you see that information there, you might be able to; but if not, then no.