From YouTube: 2019-Jan-17 :: Ceph Tech Talks - NooBaa: A data platform for distributed hybrid clouds
Description
Guy Margalit presents on NooBaa: a data platform for distributed hybrid clouds.
Twitter: @guy_mrg, @NooBaaStorage
More tech talks: https://ceph.com/ceph-tech-talks/
A: The reason we're talking about NooBaa, and not a Ceph topic as we usually do, is a couple of reasons: NooBaa was recently acquired by Red Hat and will be open sourced shortly, and from my perspective I'm interested in gauging the interest among community members (developers, users, vendors) in these sorts of hybrid multi-cloud capabilities that it provides, which Guy is going to be talking about. We're also interested in looking at how those capabilities can be combined, integrated, and can complement RGW.
A: So I think the first step in that conversation is just to give everyone an understanding of what NooBaa is today and how it works. Guy has a presentation prepared that he's going to go through, and then of course feel free to ask questions. We're going to have some time for Q&A at the end, or feel free to interrupt partway through if there are any questions. So with that, welcome, Guy.
B: Yeah, okay, so hi everyone. Thanks, Sage, for inviting me to talk at this Ceph Tech Talk. I'm going to talk about NooBaa: what is NooBaa, and what do we mean when we say a data platform for distributed hybrid clouds? I'm also going to show you a demo. Let me jump over to what I planned for today; these are just suggested times.
B: I don't have any problem with interruptions for questions, certainly during the demo, but really at any point, so I just kept a few minutes at the end, and I'm sure I'm going to get through the first parts quicker than planned. I'm going to go through the market as we see it, what our solution is, show you the demo of how you can actually use NooBaa and what features we're providing for this market, and finish with a short discussion on what's next.
B: So, a quick introduction to myself and NooBaa. I am a software architect. I wasn't born this way, but I have become a software architect, and I love software, and I've done that for a while: in the Air Force, and later on at Exanet, which was a sort of distributed, or more of a clustered, file system.
B: I'm going to show you some of our management and such, and in retrospect we think it was the best choice we've made from a product perspective. Just a little bit about the culture and what we believe in: we promote flexibility, the freedom to choose and to change choices, and usability, because we really believe that things should first be usable before they do all the rest.
B: But in many cases they don't, and the reason is that they might think they can enforce something, and then they do an M&A and suddenly they bring into the enterprise a completely new infrastructure. Now they have two of these, and each of them is different and separate. In the best case they might have a one-to-one connection between them, so you can copy stuff from one to another, or read stuff from one another.
B: So, big picture, there are multiple problems: there's the compute side of it, there are migrations, there are multiple things. What we set out to solve was to provide a data platform, ultimately, though initially, at this point, for unstructured data. Let me jump to the next slide and show you a little bit of the solution. It's a big slide; I'll take you through it.
B: So we are actually creating a flexible data service, and by that what I mean is that we wrap multiple resources that can be consumed either through an API or directly from a local resource, a local file system, whatever that is. It can be on premises, multiple data centers, clouds, and for the application we provide endpoints.
B: So what is provided here? The lower stack, the storage stack, I'm not going to go into much, because I think we can jump to the data services quicker, and you're going to see it if needed. On the data service side, a few things to note. First of all, by providing an API, an S3 API, to the application, we're using endpoints and abstracting the location, so we can provide active-active access to data at all times.
B: So even if there's a change, say we want to insert a new cloud into the platform or remove a cloud from the platform, add a new on-prem resource, burst into the cloud, whatever the use case is, data is active at all times using the existing resources. That means the data is available at the point of access, and in the next step we will optimize for the locality of the next access.
B: We support AWS, Google, Azure and a bunch of S3-compatible targets; anything that is S3-compatible works, but we also have a pluggable way of easily adding more backends for connecting into storage services. The other point of flexibility is that we have, I'd say, fine-grained control over how we map data onto storage resources, so that we can coexist and have all the data.
B: Next point: we support Lambda as an API, but not just by invoking lambdas on lambda services consumed outside of the platform; we actually support it inside the platform. We implemented that because we saw it as a core feature for data services, and it allows us to very quickly install a function, extend the platform to do something new, and actually have a data-driven platform, so that we can trigger specific functions from data events.
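As a rough sketch, a handler triggered by a data event could have a shape like the following; the event fields and function name here are illustrative assumptions, not NooBaa's actual API:

```javascript
// Hypothetical data-event handler: receives records describing created
// objects and reacts to each one (tag, transform, index, etc.).
function onObjectCreated(event) {
  return event.records.map(function (rec) {
    // rec.bucket and rec.key are assumed fields for illustration.
    return 'processed ' + rec.bucket + '/' + rec.key;
  });
}
```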
B
So
an
object
creation,
object,
deletion
can
get
trigger
functions
and
do
manipulations
and
do
whatever
is
needed
to
create
a
personalized
platform.
In
other
cases.
We
also
use
that
to
solve
immediate
requirements
in
the
field,
which
is
super
useful
and
the
reaction
time
using
that
is
is
immediate.
So
you
have
a
platform.
The
customer
suddenly
wants
something
new
and
you
just
put
a
function
into
the
existing
platform.
No
time
takes
no
time
to
do
that
and
it's
already
providing
new
functionality
and
last
is
security.
B: So if there are duplications, sort of duplicated parts in the pushed objects, it will detect that and remove them at the endpoint level, so that we don't have to ingest them over the network to the storage nodes. We compress each chunk and encrypt it, and in the last phase we actually look for the storage locations that are most optimized for that specific object.
B: That's according to the data repository that we are serving in that bucket: the object types, sizes, whatever. We can store each of these in a different location as an encrypted chunk, and the encryption keys are stored separately from the chunk itself, so there's also an opportunity to support regulated environments that require encryption with separation. In this layer we also support replication and erasure coding, whatever storage resiliency is configured.
B: I'm going to describe briefly the components of the system here. Each of these is a software component that we package together for easy delivery as a VM image, and they can also be extracted from that VM image to scale out further, to add more endpoints or daemons, and to cluster the NooBaa core to create more resiliency and scale.
B: Let me start from the beginning. An application host connects to a NooBaa endpoint through the S3 or Lambda API, and the endpoint is responsible for doing all of the protocol stuff, authentication, whatever, and it also handles all the chunking, dedup, compression and encryption, so all the network- and CPU-bound work.
B: These are the sorts of tasks that require scale-out, and the reason for doing all of that at this point is that we created the endpoint as a stateless component that can be scaled out seamlessly without any effect on the rest of the system. It does not have any state; it communicates directly with the NooBaa core and lives in a world where it doesn't need to know anything else.
B
Besides,
where
is
my
core
and
the
application
sends
the
data
and
retrieve
data
from
the
endpoint
directly
and
the
endpoint
once
it
has
the
encrypted
chunks
ready,
it
communicates
metadata
to
the
core,
but
you
also
refer
to
something
as
a
brain,
because
it's
very
smart
and
and
the
response
is
basically
instruction.
So
the
brain
is
responsible
for,
like
a
nervous
system,
just
sending
instructions
and
making
decisions
making
conscious
decisions
about
the
state
of
the
system.
What
should
be
done?
What
is
the
policy?
B: Let me jump a little bit to the cloud resources. We support, as I said, S3-compatible, Azure Blob and Google Cloud, and actually adding more is very simple. It does not require a lot of implementation, just, roughly, sort of a key-value implementation, so you can plug in anything that can provide a key-value interface and refer to it as a new storage resource.
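The key-value contract is small; purely as an illustration (the class and method names are assumptions, not NooBaa's actual plugin API), a backend might satisfy something like:

```javascript
// Minimal key-value backend sketch; an in-memory Map stands in for a real
// store such as a cloud bucket or a daemon's local file system.
class MemoryBackend {
  constructor() { this.store = new Map(); }
  put(key, value) { this.store.set(key, value); }
  get(key) { return this.store.get(key); }
  delete(key) { return this.store.delete(key); }
}
```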
B
Basically,
and
also
these
demons
are
actually
storing
key
values
and
they
use
a
file
system
for
that,
so
this
story
encrypted
chunks
on
their
local
file
systems.
In
addition
to
that,
these
can
also
be
used
to
run
lambda
functions,
so
they
can
host
sort
of
a
function
node.
It
can
invoke
functions
internally
and
for
that
and
for
the
storage
part
part,
you
know
monitor
the
hosts.
B
We
know
their
performance
CPU
utilization
over
time
and
capacity,
and
we
communicate
all
this
information
to
the
brain
periodically
so
that
it
has
concrete
view
of
the
system
and
its
status
and,
at
every
point
of
time
the
core
can
make
these
decisions
to
optimize.
All
of
these
parameters
of
of
the
operation
of
a
system,
so
these
are
the
components
just
a
note
on
on
delivery.
We
package
everything
together
as
a
VM,
so
that
you
have
when
you,
when
you
just
launch
the
the
Nuba
VM,
you
have
everything
working
together.
B: Okay, so I think it's a good time to show you the demo. Whoops, sorry. Just before I'm jumping into the demo, and yeah, if there are any questions just jump in, I want to say some words about deployment. I've said it already: you download the VM and you run it, either on your laptop in VirtualBox or any hypervisor, or from the marketplaces of any of the clouds, AWS, Azure, Google, on KVM, wherever you need it, ESX VMware of course. So you download the VM and you run it.
B: Okay, so let me jump to that. Oops, all right, yeah, okay, let me zoom in a little bit. I hope you can see my entire screen; if there are problems, tell me. So first step: you have your system running after deploying the VM. The first thing you should do in any case is connect your application, and you want to test the application against the endpoint; you want to see that it works.
B
You
want
to
see
that
that
you,
you
can
make
it
run
exactly
like
you
want
to,
and
you
don't
need
to
add
more
storage
yet
because
we
have
internal
storage
inside
the
core
that
we
handle
automatically
once
you
scale
and
we
can
get
it
get
the
data
out
of
it
automatically.
So
you
connect
your
application
to
this
end
point.
These
are
the
credentials
and,
of
course,
there's
more
users,
so
you
can
get
whatever
critters
you
want
to
for
your
specific
access
and
you
can
test
it.
B
Let
me
show
you
where
what's
going
up
next,
so
you
connect
your
application.
You
you've
written
a
few
objects.
You've
read
it.
It
works
perfect.
Now
the
next
step
is
to
see
where
the
services
are
actually
managed
and
to
scale
and
and
get
the
next
steps
of
managing
a
system.
So
let
me
jump
to
the
buckets
and
I'm
gonna
show
you
the
movies
bucket
just
for
a
second
here,
and
this
is
where
we
manage
everything
about
the
data
services.
B
Those
policies
here,
those
triggers
I'm
gonna-
show
you
a
little
bit
so
first,
first
of
all,
there's
the
data
placement
piece
move
my
face.
Okay,
so
every
bucket
can
have
a
different
policy
for
data
placement.
You
can
store.
You
can
choose
the
specific
resource
that
that
best,
that
that
is
best
for
this
specific
data
repository
for
this
bucket,
and
you
can
actually
add
that
on
the
fly
and
change
and
we
and
the
data
is
always
active
through
any
transition.
Let
me
show
you
a
little
bit.
B: For replication, you can have a mirrored set with either of the data center pools, which are groups of NooBaa daemons, and for other cases you might choose to spread it, because you want to aggregate, for example, multiple data centers which are close by, or just for bursting into new infrastructure when there are new projects and new capacity is needed. Any change to these can be made on the fly. So here I just go to a single data center, London, remove the cloud, and I've just de-clouded, basically, and updated.
B
The
policy
should
be
updated
and
the
objects
of
this
bucket
will
will
change
the
policy.
In
the
background.
We
can
also
have
visibility
here.
You
can
actually
see
we
have
three
replicas,
but
there's
one
to
be
removed
on
the
cloud.
Okay,
and
it
will
of
course
update
once
once
once
the
task.
The
background
task
is
completed.
That's
data
placement.
You
can
control
quickly
where
you
want
to
place
your
data.
We
also
support
regions
for
locality.
B
There's
a
bunch
of
things
that
is
not
visible
as
policies
but
is
implemented
behind
the
scenes
for
optimizations
data
resiliency.
You
can
easily
choose
what
whatever
policy
you
you
you
see
fit
for
this
type
of
data.
If
you
want
less
resiliency,
you
can
change
even
from
three
replicas
to
two
or,
if
you
want,
you
can
just
see
the
effect
on
failure,
tolerance
and
rebuilding
effort,
etc.
If
you
want
raise
your
coding,
I
just
go
ahead
and
define
what
what
do
you
like?
B: Object versioning: this is an API feature. Spillover is another service that can be enabled. If you'd like to add a spillover, you can create a spillover pool for this bucket, so that once the regular capacity is completely consumed, this resource will be used for any access, and once capacity is added back to the regular resources, it will spill back automatically. And quotas, again, just for limiting usage.
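The spillover decision amounts to a capacity check: a hedged sketch, where the pool objects and their fields are invented for illustration rather than taken from NooBaa:

```javascript
// Route a write to the regular pool while it has room, otherwise to the
// spillover pool; spill-back would run the same check in reverse once
// regular capacity is added.
function choosePool(regular, spillover, writeSize) {
  const free = regular.capacity - regular.used;
  return writeSize <= free ? regular : spillover;
}
```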
B
Okay.
So
so
you
got
basically
through
these
services
the
ability
to
choose
your
locations,
change
them
whenever
you
need
decide
on
on
resiliency
or
just
use
the
default,
of
course,
and
you
spill
over
whenever
needed,
and
you
can
create
multi
cloud
environments
with
this
hybrid
and
adjust
according
to
your
cloud
strategy
and
plans.
B
Now
next
very
cool
thing
that
I
want
to
show.
You
is
how
you
can
connect
between
data
and
functions,
so
the
platform
can
can
is
hosting
functions
and
these
functions
are
connected.
Okay
can
be
connected
to
buckets,
so
I
will
go
back
to
a
bucket
called
first
bucket,
which
is
the
first
of
its
name
and
I'm
going
to
jump
this
view
of
triggers.
Now
this
trigger
this
bucket
is
configured
to
trigger
piece.
B
The
mask
credit-card
function
on
object
creation
on
every
log
file.
So
what
this
means
is
that
every
object
will
trigger
every
object.
Upload
that
completes
will
trigger
this
function
and
let
me
jump
to
this
function
and
maybe
I
can
put
push
some
stuff
into
it.
I
don't
know
if
we
have
already.
Probably
you
do
have
some
stuff,
but
we
want
to
probably
first
yeah
yeah
so
in
background
I'll
try
to
just
push
something
here.
B
Can
also
you
can
actually
see
the
code.
The
code
is
just
here,
you
can
edit
it.
This
is
how
credit
card
can
be
detected
and
masked,
and
it's
as
simple
as
you
know,
few
lines
of
code
and
it
runs
in
nodejs
and
it
accesses
the
data
directly.
So
you
can.
The
function
can
actually
access
data
from
the
s3
API
as
well
just
easily
doesn't
need
to
provide
any
credentials
or
anything.
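A few lines really do suffice. As a minimal sketch (the regex and masking format are illustrative, not the function shown in the demo):

```javascript
// Detect 16-digit card numbers, optionally separated by spaces or dashes,
// and keep only the last four digits visible.
function maskCreditCards(text) {
  return text.replace(/\b(?:\d[ -]?){12}(\d{4})\b/g, '****-****-****-$1');
}
```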
B
It
just
has
credentials
through
the
trigger,
so
that
everything
is
already
set
up
to
run
these
flows,
and
you
can
have
as
many
as
you
want
some
of
these
functions
so
I
guess
just
to
give
some
sense.
Some
of
these
are
sort
of
data
flows
or
data
driven
those
that
that
answer
to
some
business
in
the
business
need
or
functionality.
Some
of
them
is
extended
functionality
and
some
of
them
is
exposing
an
API.
B
For
example,
using
lambda
is
actually
an
API
which
is
authenticated
using
the
same
methods
of
signatures
or
whatever,
and
you
can
use
it
to
consume
api's
from
the
system,
and
we
have
a
bunch
of
examples
that
we
are
adding.
There
are
functions
to
expose
APRs.
It's
really
easy,
it's
really
simple,
and
can
it
can
do
some
stuff
on
the
background
on
the
on
the
sort
of
way
to
the
API
to
the
internal
API,
and
this
gives
a
very
cool
level
of
flexibility
to
a
running
system
that
needs
something
extra.
A: Do you have some other examples of what kinds of triggers you might use, besides the credit card masking, just to get a sense of what the scope is there?
B: Yeah, so for example there's anonymization, there's tiering. One of the examples that we were given: someone wanted sort of a bronze-silver-gold tiering model, which means that you'd have sort of a three-tier lifecycle. So after 30 days in gold you're demoted to silver, and after, for example, another 30 days, you're demoted to bronze.
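The demotion rule in that example reduces to a simple age check. As a sketch, with the tier names and 30-day thresholds taken from the example but the function shape an assumption:

```javascript
// Map an object's age in days onto the bronze/silver/gold lifecycle:
// 30 days in gold, another 30 in silver, then bronze.
function currentTier(ageDays) {
  if (ageDays < 30) return 'gold';
  if (ageDays < 60) return 'silver';
  return 'bronze';
}
```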
B
It's
in
in
every
kind
of
requirement
like
this
and
optimizing
it
to
the
you
know
to
the
fine
level
of
of
perfection
before
we
know,
there's
a
business
and
before
we
know
the
customer
is
happy
with
with
what
the
customer
defined
so
the
cases
that
customers
sometimes
define
it,
but
once
they
get
it,
they
understand
they
and
they
need
something
else.
So
so
the
point
is
that
we
created
this
tiering
model
here
and
we
created
it
with
a
function.
So
there's
okay,
so
it
caused
another
function,
but
this
is
by
the
way,
I.
B: So this function will take three buckets and actually detect the objects that should move between each of these tiers, and actually make the transition between them. Okay, so these are sort of custom data flows. You can create these on the fly for any kind of requirement that you bump into, and you can react immediately. You don't need to say "okay, I don't have it"; instead it's "yeah, I do have it, I just need to, you know..."
B: It allows us to do locality optimizations of data access and decide where operations should go first and where they should go second, and we can even decide on how to promote data to a local region in such cases as well. There's also clustering of the NooBaa core, management functions, account management, and so on. And a nice thing, maybe, to play around with is analytics. It's still beta, so it's not enabled in the last release without a small tweak, but it's cool.
B: Okay, so just before I jump on that: the platform is very easy to onboard. That's a key thing that we wanted to create here, and it takes very, very little effort. It doesn't require any knowledge of the platform to start using it, it provides multiple values to multiple use cases, and it can connect to many things, even without, you know, the further things that I'm going to talk about now.
B: The effort we are currently chasing in the near to medium term is containerizing, and for us it's actually not heavy lifting. We are running inside a VM today, but everything is very decoupled from the OS; the platform doesn't have any dependencies specific to the OS, so moving to containers is not a big thing. However, we think that's a good place to be.
B: So we are aligned with federation, because we aggregate things wherever they are and can present a single API, a single endpoint, to federated applications. I think in that space we are really looking for opportunities to show use cases of federation and get better integrations.
B
That's
that's
about
it
for
the
short
term
and
we
are
actually
the
sort
of
ending
note.
We
are
looking
for
an
open
community,
so
we
can
we're
providing
lace
VM
on
on
our
website
on
download
page
they
can
download
and
just
play
with
it.
It's
it's
completely
open
for
anybody
to
play
with
and
provide
feedback
or
not-
and
that's
that's
about
it.
You
don't
need
anything
besides
VBox
on
your
laptop
or
whatever
you
want,
and
we
also
have
we're
trying
to
build
some
some
interest
from
community.
B: Yeah, so that's a good question; we've actually gotten it quite a few times. The short answer is no, and the long answer is: I think it's possible to create it quite easily, but I don't think we've seen the use cases enough to make an arbitrage system out of it. I think that arbitrage is still sort of a thing that doesn't really materialize; changes in prices are not real time, and it's more of a constant thing.
A: Can I jump in with a couple of comments? Again, feel free to interrupt if anybody has questions. So I think that, from the Ceph perspective, NooBaa is interesting because a lot of these are the same capabilities that we're driving towards adding to the RADOS Gateway. RGW today has a federation capability that lets you replicate across sites, and we're in the process of adding the ability to...
A
Our
our
users
today
are
mostly
on-premise,
but
most
of
them
either
have
or
are
planning
to
have
multiple
sites,
and
so
these
are
the
multi-site
policies
are
really
important.
Where
does
that?
Where
do
you
want
to
put
the
data
where
you
want
to
move
it?
Can
you
change
that
in
real
time,
can
you
change
another
per
bucket
basis
and,
and
they
these
users
are
enterprises
that
have
footprints?
A
So
there
are
a
lot
of
technical
questions
and
challenges
about
how
that
how
that
could
happen,
but
I
think
that
the
first
I
think
piece
for
me
is
to
understand
the
extent
to
which
the
community,
the
stuff
community,
is
interested
in
these
types
of
capabilities
and
how
important
that
Amit
is
to
provide
them
and
whether
you
know
traditionally,
seth
has
focused
on
you
know,
just
just
storing
lots
of
data
at
scale
on
your
own
art,
where
you
know
the
traditional
sort
of
software-defined,
storage
and
nuba
is
really
positioned.
A
Sort
of
one
layer
above
that
in
terms
of
managing
data
across
different
sites
and
defining
policy
and
weather
sort
of
the
in
general
stuff
community,
is
interested
in
like
expanding
its
scope
to
include
these
types
of
capabilities,
I'm
interested
in
hearing
comments
or
questions
about
any,
and
all
of
that
you
know
either
now
or
in
kickoff
an
email.
This
discussion
or
whatever.
B
Of
the
things
that
I've
been
hearing
from
from
at
least
various
people
lately
is
that
there's
there's
a
thought
that
migrating
the
block
storage,
so
the
block
volumes
should
be
done
through
an
object.
Api,
so
I
think
that
so
not
migrating
I
mean
either.
You
know
multi-site
Federation,
whatever
the
use
case
is
specifically,
and
it's
interesting
on
how
you
know
how
the
guys,
actually
you
know,
using
these
migrations
view
this.
So
what
will
be
the
process
right?
Okay,
so
you
push
that
to
the
object
and
then
pull
it
from
the
other
side.
A: Yeah. Okay, if there are no other questions, we can wrap this up. I'd like to kick off a thread on the ceph-users list as well to discuss this, so hopefully we'll get more feedback there to you, NooBaa, and we can reference this recording, yeah.