From YouTube: CDS Pacific: RGW
Sure. I'll add notes to the etherpad about the different pieces. The big one is dynamic resharding; there are also some data sync optimizations and some caching of sync status.
When dealing with sharding issues, or pushing some of the logic outside of the listing process directly: do you have a module that would be responsible for fetching the data, and a module responsible for actually syncing the data and finding commonalities between the different types? That is, for the metadata and the backend things.
It's going to be the same code; the transitions are going to be multi-staged, where each stage fetches things differently. Some modules list the data differently, or the data is fetched from the source differently, so it depends on how things are done. These are my thoughts about it.
This is a pretty short list, but I think each of these items is pretty significant. What do we want to do here? Do you want to talk about any of these in detail? Yep. I don't know what the scope of this dynamic resharding stuff is; it feels like we've been talking about it for the last year. Is someone working on it now?
They're not here, so I would have to discuss it with them, but I know we've had some discussions about what we would require for our API needs: how is that going to translate into what types of workflows we want? I know there are still a lot of outstanding questions. So if we want the dashboard to go along with it, I think we have action items to really help define that.
Okay, so dynamic resharding is at the top of the list and the documents are there; everything is sort of in shape there. Next, this data sync optimizations topic.
Not necessarily. Where I was going here was just being able to integrate the multi-site sync coroutines with the Boost.Asio ones, so that you can run a stackful coroutine within that and share memory. Because it's well threaded, and if we're able to do that, then we can call into all of the places that support an optional_yield and we don't have to send work out to a separate thread pool for it.
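The optional_yield pattern described here is C++ (multisite coroutines plus Boost.Asio stackful coroutines); as a loose analogy only, here is a Python sketch of the idea of one call path that either blocks or suspends cooperatively depending on whether a yield context is supplied. All names here are hypothetical, not actual RGW code.

```python
import asyncio

def read_object(key, store, yield_ctx=None):
    # Hypothetical sketch of the optional-yield idea: with no yield
    # context, do the blocking lookup directly (the "send it to a
    # separate thread pool" world); with one, return an awaitable
    # that suspends cooperatively instead of blocking the caller.
    if yield_ctx is None:
        return store[key]

    async def _read():
        await asyncio.sleep(0)  # stand-in for a non-blocking I/O wait
        return store[key]

    return _read()
```

The point mirrored here is that a single code path serves both kinds of callers: coroutine callers yield to the event loop while waiting, and plain callers fall back to a blocking read.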
And the second thing: Dan is planning to have a policy layer built into the abstraction here, potentially calling into Lua. I'm not sure exactly whether that comes first or not, but I think it will be really interesting when it comes.
That's one that I threw in there, and this is something I pulled Casey aside about in Barcelona last year: a way to expose RBD snapshots via some designated bucket in RGW. So basically integrate librbd into RGW natively, expose and provide access to RBD snapshots from RGW, and then longer-term it would be great to then integrate it with...
You know, whatever is necessary to hook it into multi-site, so that multi-site knows that, hey, there's a new snapshot available and knows how to sync it over, or integrate with NooBaa or something like that. So you could have data policies about retention of RBD backup images and things like that. But yeah, Casey's comment last year was that, well, everything's being rewritten right now, so it's not a good time to put anything in.
I mean, I can imagine wanting to have a zipper plug-in that does this for RBD, where you have a bucket that's mapped to, I guess, a pool/namespace combination or something, and then also have one that does it for a plugin that uses libcephfs, that just exports a directory in the file system.
But yes, so for the snapshot thing: we would want not only just "here's the full snapshot dump", but also a way to programmatically expose arbitrary deltas between snapshots, so that if you want to do backup policies you don't necessarily copy a full, like, one-terabyte object. You can say, well, I'll take my one-terabyte object once a month or whatever, and I'll do incremental diffs in between and determine what I'm going to copy.
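The incremental-diff idea can be illustrated with a toy block-level diff; this is a generic sketch under assumed names, not RBD's actual export-diff format or API.

```python
def snapshot_delta(base, target, block_size=4):
    """Return {offset: bytes} for blocks that differ between two
    snapshots, so a backup copies only changed extents instead of
    the whole object."""
    delta = {}
    length = max(len(base), len(target))
    for off in range(0, length, block_size):
        b = base[off:off + block_size]
        t = target[off:off + block_size]
        if b != t:
            delta[off] = t
    return delta

def apply_delta(base, delta):
    """Reconstruct the target snapshot from base + delta
    (assumes same-length snapshots for simplicity)."""
    out = bytearray(base)
    for off, chunk in delta.items():
        out[off:off + len(chunk)] = chunk
    return bytes(out)
```

A monthly full copy plus daily deltas then only ships the blocks that changed between consecutive snapshots.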
Like there was the one with the rgw service that, like, puts out blessed answers, you know, things like that, so that you could, you know, hook an extra process up, so say like...
Okay, but...
Okay, so that's one I keep running up against in working on the cephadm stuff: trying to make sure that the orchestrator's interface for spinning up collections of RGW daemons sort of maps onto what rados gateway wants to do. Right now, Rook has all these built-in assumptions, like: when you create an object store in Rook, you give it a name, and that's the name of the realm and the zone, and that's it, and it's always isolated.
But this needs to be sort of merged with the Rook view of the world, so that the orchestrator abstraction generalizes both of them and hopefully works the same across the two. For me, one of the key things is to make sure that that design matches what the orchestrator API is currently presenting, and that if there's a mismatch, we plan the orchestrator abstraction changes and the matching changes on the other side.
I don't know if that's really true. I'm not sure; I think the stuff being done is targeting somewhat of a subset of the full set of multi-site capabilities for RGW, isn't that right? Not necessarily attempting to be able to do everything.
Okay, but I guess I think the goal should be that if you're deploying with, ideally, either Rook or cephadm, but at least with cephadm, the full set of capabilities is possible. And so the strategy I took with cephadm is basically to stay out of it and focus only on running collections of daemons for a realm and zone.
What I really want to do is make sure that the developers who are working on this stuff are using it and understand how that orchestration API works, and ideally are using it in their development, so that it's clear where there are points of friction, or things that are just annoying or take many steps to set up, or whatever it is, and that things are actually lined up. Like you mentioned earlier, there's this idea of combining multiple zones into a single set of daemons.
Yeah, I don't think we've quite figured out the details there, though. I'm still a ways off.
There's a pull request that I just opened yesterday, actually, that adds tools called cpatch and cstart, and it basically lets you build a container, or patch a container, based on your local build directory. It's basically a cleaned-up version of John Spray's kubejacker script; it builds from your build machine and takes like 30 to 60 seconds to run.
And then you can use that container with either Rook or cephadm, and then cstart is sort of a vstart equivalent, but vastly simplified.
It just starts a cephadm cluster, and so you can run cpatch and cstart to boot up a fully normal cluster without any of the weird special cases that vstart has, and then use the real orchestration APIs and basically interact with the cluster the same way you really would.
Yeah, I think that makes sense. Right now with cstart I was assuming that you'd want one cluster per build directory, but I guess you'd want multiple clusters in order to do it anyway. My hope is that if we can do this, then all this testing will be driven through the orchestrator view instead of ad hoc developer scripts, or at least the ad hoc developer scripts will just be starting up the cluster and not actually doing everything.
Yeah, right now, in theory, you can build everything inside a CentOS container on top of any OS, but we need to document exactly how to do that properly. On my machine for testing this I just installed CentOS 8 so I wouldn't have to worry about it.
So it's a really big feature with a lot of surface area, so I think we've picked some good first targets in terms of the data formats that we're going to support, the execution model (whether stuff runs on the OSDs or in RGW), and then just all of the dialect that we support, the different kinds of queries.
Is that targeting the Pacific time frame?
It just caches read-only copies of objects locally; it's used by RBD right now for the parent images. It would be interesting to know whether the work there for the D3N cache couldn't also integrate with the immutable object cache, so you don't then have two competing things that cache immutable objects.
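The behavior both caches share, read-through caching of data that never changes, can be sketched generically; this is a toy with hypothetical names, not the D3N or immutable-object-cache implementation.

```python
class ImmutableObjectCache:
    """Toy read-through cache: because cached objects are immutable,
    entries never need invalidation; the first read populates the
    cache and every later read for the same name hits it."""

    def __init__(self, backend_read):
        self._read = backend_read   # function: name -> bytes
        self._cache = {}
        self.hits = 0
        self.misses = 0

    def get(self, name):
        if name in self._cache:
            self.hits += 1
        else:
            self.misses += 1
            self._cache[name] = self._read(name)
        return self._cache[name]
```

The immutability assumption is what makes unifying two such caches plausible: neither needs a coherency protocol, only shared storage and an eviction policy.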
So the item on my wish list here, and I know I've talked to some of you about this, I'm not sure if I mentioned it to you, Casey, would be basically that everything that's in radosgw-admin would map into the ceph CLI, maybe with some cleanup or whatever in the meantime, but just basically unify that entire interface with the existing CLI. And that could be...
Oh yeah, we basically tabled it because nobody from the dashboard side was here; we can come back to that in a second.
I guess on the CLI thing, I'm not sure. Partly I'm motivated just because it sort of unifies the user experience, and that seems ideal to me, but I know that the implementation, because of the way things have evolved, is a little bit awkward to actually do this way. I don't know what the level of importance here is, but I think it's an end goal; at least it's something that we should keep in our minds and keep moving in this direction.
In contrast, right now with rados gateway, you have basically three or four radosgw-admin commands to create the realm, the zone group, and the zone, and then finally you do ceph orch apply rgw with the realm and zone, to, you know, two daemons or whatever, and that starts the daemons.
One thing that we can do right now is find a way to run more than one radosgw-admin command in a single command, so you don't need to run four different commands; or at least have the zone creation also create a realm and a zone group for it, to make it so the complexity of setting up a new zone is, like, four times easier, you know, from four commands down to one. Yeah.
That's this card think from cloud is the future, where I think, right, optimizing...
Yeah, well, if we know that the request we're about to send is already in flight to RADOS, and it's a read request...
Then we can just, you know, wait for the response from RADOS and then return the result that we got, right? So we have that lower layer that wraps RADOS object reads.
We can build something on top of that, instead of doing it at the top level, do it at the bottom level. That would make sure that we don't duplicate read requests while they're in flight. It doesn't necessarily cache: if two clients request the same object and the second read arrives after the first one finished, it is sent again, which I think is fine.
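The in-flight deduplication being described, coalesce identical reads while one is outstanding, but deliberately do not cache completed results, can be sketched like this; names and structure are illustrative, not RGW code.

```python
import threading
from concurrent.futures import Future

class ReadCoalescer:
    """Toy sketch: identical reads issued while one is in flight all
    wait on the same Future. Once a read completes, its entry is
    dropped, so a later read for the same key hits the backend
    again; this is deduplication, not caching. Error handling is
    omitted for brevity."""

    def __init__(self, backend_read):
        self._read = backend_read    # function: key -> value
        self._inflight = {}          # key -> Future
        self._lock = threading.Lock()

    def get(self, key):
        with self._lock:
            fut = self._inflight.get(key)
            if fut is not None:
                owner = False        # another caller is fetching it
            else:
                fut = Future()
                self._inflight[key] = fut
                owner = True
        if owner:
            try:
                fut.set_result(self._read(key))
            finally:
                with self._lock:
                    del self._inflight[key]
        return fut.result()
```

Because entries are removed on completion, two sequential reads of the same key each reach the backend, matching the "second read arrives after the first one finished, it is sent again" behavior above.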
We were waiting for this in order to get some kind of monitoring overview about multi-site sync status too. I think it's a work in progress. That's also true of the change that they put up, I think; there were some last questions in the last comment. I don't know; you can take a look when you have enough time for this.
Yeah, I'm sorry that you guys have been blocked on APIs here and progress has been slow. I mean, the new API is just kind of a simplification: it collapses a lot of different API requests into a single one, but all of the info that the dashboard needs is currently available in the existing APIs. So maybe the best way to make progress is to write the dashboard side against those.
So there are REST APIs that will give you the status of a single shard of a single log, so the dashboard could query every shard from every zone and kind of correlate them and display them. The new APIs that we're proposing here expose one API that kind of does the per-shard stuff itself. So it's more of a simplification than anything.