From YouTube: CDS Pacific: RBD
A
So this is the CDS session for Pacific, for RBD. Sage is going to be joining a little bit late; they said just to go ahead and get started. In the chat window I pasted the link for the pad for the CDS; if you scroll down, there's an RBD section under Thursday, April 2nd at 1400 UTC. I pasted a few things there, some of the bigger items that we have on the Trello board, in the backlog, I think, so we can start looking at them.
A
All right, I put the krbd features at the top. They're not obviously tied to the Pacific release, but these are things that I know are going to be worked on, at least, or that I would like to see get worked on during this development period. Ilya is on the line to talk to any of these. I know he's working on messenger v2, and that's not RBD-specific, but it'll help CephFS and RBD. Is there anything you want to talk to on that, Ilya, or is everything pretty much in hand?
A
And so the next item: it was something that we added to librbd late in the Octopus cycle. The first one was compression hints. You can specify on individual RBD images, or on RBD pools, that these images are incompressible, and it sends that hint to the OSD, so that if it's got aggressive compression settings it won't even attempt to compress them. Or, if you say that this is compressible, that tells the OSD, you know, "hey, try to compress it", and it'll be more aggressive about attempting the compression on it.

A
So this is just something that hopefully we can get: those hints passed up via the rbd CLI to krbd, so that krbd can send those hints on write I/Os to the backing OSDs. And the localized read hints, that's the same sort of thing, where you can specify...
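For reference, a minimal sketch of what the per-image hint looks like from the Python bindings, assuming the rbd_compression_hint option and the conf_ image-metadata override mechanism; the pool and image names are placeholders:

```python
import rados
import rbd

# Connect and open the target image (pool/image names are placeholders).
with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
    with cluster.open_ioctx('rbd') as ioctx:
        with rbd.Image(ioctx, 'vm-disk-1') as image:
            # Per-image config override: librbd attaches this hint to write
            # ops so the OSD can skip BlueStore compression attempts.
            image.metadata_set('conf_rbd_compression_hint', 'incompressible')
```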
B
This stands in the way of features such as block-layer-level timeouts, so that you could add a timeout, you know, for each request, and it would be timed out by the block layer, as opposed to the sort of big hammer that we currently have in libceph, which can time out the OSD requests only and nothing else. And the problem with this, again...
B
Basically, as long as you don't deviate from a simple image, one that doesn't have any parent, no parent chain, and a very simple object layout (pretty much just the default object layout, where the only variable that can be changed is the object size, so no striping), we know the timeouts through libceph sort of work under those conditions. But for everything else...
B
So while that feature is, you know... we haven't seen any customer or user requests for it; it's probably a nice-to-have. And the second thing that ties into this, which is, you know, kind of a timeout, except that it's a forced timeout: the forceful unmap of images that are, for some reason, stuck.
B
No, no, the object ID... computing it... it caches the hash. Basically, it caches the mapping to the PGs, and then we don't have to invoke the CRUSH algorithm when we get the PG. We do everything else without invoking the algorithm, and we worry about invalidation anyway when the map changes, and...
A
Moving on, the next item I had on the list is, while RBD-focused, actually an implementation that would go inside of RGW. I spoke to it briefly yesterday. The goal being that RBD snapshots are essentially giant immutable objects, so why not expose them as the giant immutable objects they are via RGW. And this would basically be, as I was told, with the new plug-in layer of RGW, codename Zipper: we can create a new provider, basically, which is an RBD provider.
A
The block devices become the objects within the buckets. Any RESTful calls to RGW for those buckets can then get translated down, via the Zipper layer and the Zipper plug-in, into the associated librbd calls to read and write RBD images. The goal being exposing snapshots both in a raw fashion, so the full block device, and also as the permutations of snapshot deltas.
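As a rough illustration of the property being relied on here, a snapshot can already be opened read-only through librbd and streamed like an immutable blob; the names below are placeholders, and the actual Zipper provider would live in RGW's code rather than a script:

```python
import rados
import rbd

with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
    with cluster.open_ioctx('rbd') as ioctx:
        # Opening at a snapshot yields an immutable, read-only view, which is
        # what an RGW-side provider would translate bucket GETs against.
        with rbd.Image(ioctx, 'vm-disk-1', snapshot='golden',
                       read_only=True) as snap:
            first_chunk = snap.read(0, 4 * 1024 * 1024)
```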
A
Next, for the diff commands, we're going to want to make sure those are at least a unified format, so we're not just reinventing the wheel again. And we're also going to want to extend that diff format so that we can include an optional set of indexing, which will actually tie into the next feature, which is the instant recovery from an S3- or Swift-backed RBD image. The goal with this is essentially...
A
Instead of that parent image being a true, live RBD image, we could embed an S3 or Swift client within librbd and satisfy all the reads for that parent via that S3 or Swift client endpoint. When you create a new image, you'd basically be able to say: okay, this is a new import of this image, or whatever; start an instant recovery from this S3 endpoint; provide the credentials to talk to that S3 endpoint; and go. So when the image is first opened, there's obviously no data there.
A
Any reads that need to be satisfied would be redirected towards the S3 or Swift endpoint, but then we can also start doing the things that live migration does, which is rehydrating the local, newly cloned (or what have you) image, the instant-recovered image. So in the background we can have a process that is just reading and writing, essentially copying up the data from the S3 or Swift endpoint back into the RBD image, so that eventually, just like live migration...
A
...we can eventually sever the link between the S3/Swift endpoint and the RBD image. So this is just a transitory state for how to get the image back. It's going to be pretty tractable just for raw images to start with, and then, as I mentioned, if we extend our current snapshot diff format to basically append some indexing, so we know how to seek within the data stream to particular offsets, we should be able to support snapshots as well.
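A minimal sketch of the background rehydration idea, assuming a raw image export sitting behind an S3 endpoint; the bucket, key, endpoint, and chunk size are all hypothetical, and the real implementation would live inside librbd rather than in a script:

```python
import boto3
import rados
import rbd

CHUNK = 4 * 1024 * 1024  # hypothetical copy-up granularity

# Hypothetical S3 endpoint/bucket/key holding a raw image export.
s3 = boto3.client('s3', endpoint_url='https://s3.example.com')

with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
    with cluster.open_ioctx('rbd') as ioctx:
        with rbd.Image(ioctx, 'restored-disk') as image:
            size = image.size()
            for off in range(0, size, CHUNK):
                length = min(CHUNK, size - off)
                # Ranged GET, mirroring how reads would be redirected to the
                # S3/Swift endpoint until the data is copied up locally.
                rsp = s3.get_object(Bucket='rbd-exports',
                                    Key='vm-disk-1.raw',
                                    Range='bytes=%d-%d' % (off, off + length - 1))
                image.write(rsp['Body'].read(), off)
```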
A
You could possibly get into, like, the ability to inject secrets via librbd, but I think for a straight-up, simple implementation, if you were concerned about access, you can always use your object store's ACLs to restrict access to that file to a particular set of credentials, at least in the short term.
A
Right, yeah. The next one I had down was the new librados asio interface, the one with, like, a thread pool, and potentially faster I/O handling than the current librados completion-style callback handler. I'm just entertaining benchmarking it, to see if we get any performance improvements on the I/O path; for sure, you know, in the high queue depth, high throughput, high IOPS workload cases.
A
And the Octopus release did incorporate a bunch of changes that drastically increased the throughput of librbd and brought it way closer to krbd in terms of performance. But yeah, if we can narrow that gap even more, or potentially narrow the gap while reducing the CPU usage, that would be, I think, a real win for librbd in terms of I/O per CPU.
C
So we are just doing this integration work, so I was wondering: are there any known bottlenecks or existing issues that might come in the way while we are trying to do the librbd integration work? Because I remember you bringing up some rbd export bug, I think, something to do with enabling multiple librbd threads, and a bug... I'm not talking about the one that exists currently, the crash; I'm talking about something that you brought up a while back.
A
...until there's a good use case for a user for that, and I don't see that happening anytime soon; applications are not going to rewrite themselves to say "hey, we're going to use asio as our, you know, internal event driver", so I'm not sure about the value-add there. But if we can drive more I/O through librados and get...
C
Right, are there any... do you have, I mean, are you aware of any current issues that we should expect if we want to work on, say, this integration? Are you already aware of any bottlenecks that we might face if we start working on this asio integration?
B
I might be thinking about something completely different, but I think what would help, you know, is a polling interface in librbd. This is what Haomai added many years ago.
A
Yeah, we have the polling interface already, and we have the callback interface. On the callback interface, we can't violate our existing users' assumptions that they're not going to get concurrent callbacks. But if you wanted to potentially drive more from the librbd client's point of view, if you wanted to potentially poll and drive more I/O through librbd, there's the event socket interface. It may not be optimized, and there may be some things we can do to tweak it and make it better, but I think that's the starting point.
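For context, the callback-style completion handling being discussed looks roughly like this in the Python bindings (image name and payload are placeholders); the no-concurrent-callbacks guarantee mentioned above applies to the oncomplete handler:

```python
import rados
import rbd

def on_write(completion):
    # Invoked from librbd's completion machinery; existing users assume
    # callbacks like this are never delivered concurrently.
    print('write completed, rc=%d' % completion.get_return_value())

with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
    with cluster.open_ioctx('rbd') as ioctx:
        with rbd.Image(ioctx, 'vm-disk-1') as image:
            comp = image.aio_write(b'\0' * 4096, 0, on_write)
            comp.wait_for_complete_and_cb()
```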
A
We wanted to offer a way that a smart-enough client could notice that a snapshot is about to be created; they could do something to quiesce, or freeze, their file systems or workloads that are running on top of RBD. The snapshot could then be created, and then they could be told to unfreeze, to unquiesce, the block device.
A
So rbd-nbd is one, you know, initial hook point, where we can, you know, send the ioctls for FIFREEZE and FITHAW on the block device. Other examples could be QEMU: we could modify the RBD block driver there, where we could send something to the QEMU guest agent running inside a VM, saying "hey, we're about to create a snapshot, freeze and thaw". That's obviously outside of our control.
A
But if there's enough time, we can maybe submit something upstream and say: hey, here's an example of how you could implement this in QEMU, or merge it in, if it's acceptable.
A
It'd also be nice if we could get something semi-equivalent into krbd, so when krbd sees these messages coming, it could, you know, in the worst case, the simplest case, just kind of follow the FIFREEZE/FITHAW ioctl path. Maybe even better if it could, and I don't know how, but if it could do something to coordinate application-level freezing and thawing in user space, some type of notification out.
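A minimal user-space sketch of the freeze/thaw half of that flow, using the Linux FIFREEZE/FITHAW ioctls on a mounted filesystem; the mountpoint is a placeholder and error handling is elided:

```python
import fcntl
import os

# Linux ioctl numbers for filesystem freeze/thaw (_IOWR('X', 119/120, int)).
FIFREEZE = 0xC0045877
FITHAW = 0xC0045878

def freeze_fs(mountpoint):
    # Blocks new writes and flushes dirty data so a crash-consistent
    # snapshot can be taken underneath the filesystem.
    fd = os.open(mountpoint, os.O_RDONLY)
    try:
        fcntl.ioctl(fd, FIFREEZE, 0)
    finally:
        os.close(fd)

def thaw_fs(mountpoint):
    fd = os.open(mountpoint, os.O_RDONLY)
    try:
        fcntl.ioctl(fd, FITHAW, 0)
    finally:
        os.close(fd)

# e.g. freeze_fs('/mnt/rbd0'); <take the RBD snapshot>; thaw_fs('/mnt/rbd0')
```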
A
I don't know if anybody has, you know, questions about that item.
A
Yeah, absolutely. You know, if there's, like, another way: upon receiving this notification, you could, like, I'm thinking, send some type of message or whatever back down to user space, to say "hey, if you care about it, now is the time to freeze any application you want". Maybe even via, like, a udev notification, but that's just, like, a one-way broadcast; unless something is always listening to it, that doesn't really necessarily help you.
B
Yes, so I think we need something like a two-phase commit, or a two-phase-commit procedure, for snapshot creation, for, like, fundamental reasons, and we could tie the request to freeze stuff into it. And then we could potentially, if all else fails, do the freeze at the driver level: so just, you know, flush the stuff that we have in flight and then just stop accepting any requests for a while. There's a way to do that, but we obviously need this.
B
And we need to extend that to not just, you know, the freeze, but also to the act of, like, publishing a new snapshot context, basically taking the snapshot. So if there's a client that doesn't ack, we need to fail the snap create; although, you know, we could probably add a force option, and the users could then retry with the force option if there is a misbehaving client. But I think if we go in and, you know, do some surgery on this code, then addressing this long, long-standing kind of... this is basically a data...
A
Well, we could certainly, like... when this first releases, there will be, like, no clients that support it, so I wouldn't want to change any default behavior on the API to say "fail snapshots if clients respond with EOPNOTSUPP". But if we had some type of option on the CLI, and via the API, to say, you know, which mode you want, like "required" or, you know, "best effort", you know, those types of flags... and one day in the future...
A
Moving on: the rbd-nbd daemon. We still have in our plans that we want to be able to have an optional mode of rbd-nbd where there's a single daemon that serves multiple block devices. Right now it's a single daemon per image that's being served, so it's like a one-to-one tie between, you know, a /dev/nbd device and the rbd-nbd daemon that's servicing it, for the use case of Kubernetes and containers.
A
So right now in QEMU, you can layer... QEMU has its own LUKS layering support, where you can layer LUKS on top of QEMU, or layer it on top of RBD, or what have you. But the problem that that doesn't solve is the use case of RBD cloned images, because if you have LUKS already encrypting a parent image and you clone it, you either need to share the exact same crypto...
A
...but if you wanted to be able to say, like, "oh, I can take an encrypted golden image and make my own encrypted child image", well then, that means you need to basically fully copy the child image so you can fully re-encrypt it with your new master encryption keys and new IVs, which potentially wastes a lot of space.
A
So the goal with the client-side encryption is to provide an encryption layer where each cloned image gets its own potential master key and IVs, and we'll have to have some APIs in librbd for injecting secrets for every image in the chain, and...
A
Yeah, you could do it that way. That's probably the easiest way: it would just be the ability to inject secrets into librbd, so you can say, like, imagining the KMS you set up, you define: all right, here's my parent secret to unlock... here's the passphrase to unlock the master key for the parent image, and here's the secret to it...
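A hedged sketch of what such a secret-injection call might look like from Python, modeled on a LUKS-style per-image encryption interface; the format constant, method name, image name, and passphrase here are assumptions about a not-yet-settled API, not the implemented one:

```python
import rados
import rbd

with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
    with cluster.open_ioctx('rbd') as ioctx:
        with rbd.Image(ioctx, 'child-image') as image:
            # Assumed API: load the passphrase that unwraps this clone's own
            # master key; a full parent chain would need the parent's secret
            # injected as well.
            image.encryption_load(rbd.RBD_ENCRYPTION_FORMAT_LUKS2,
                                  'child-passphrase')
            # Subsequent reads/writes are transparently de/encrypted in librbd.
            data = image.read(0, 4096)
```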
A
And the other facet of this would be creating a snapshot on an encrypted volume. Again, if you don't create a new initialization vector, it exposes you again to chosen-plaintext attacks.
A
Instead, the goal would also be, somehow, every time you create a snapshot, to create a new initialization vector that you use from that point forward for any new incoming writes. But if you read something that was created at snapshot version X, you need to use IV version X to decrypt the data correctly, because that's the IV that was used when it was created.
A
I don't quite understand... I don't have a solution yet for how to do the snapshot initialization-vector management, because I think it's pretty complicated: when you create and delete snapshots, you don't know if an object was initially written under a snapshot that has since been deleted, but you still need the IV for it. So how do you store that data permanently without potentially boundless growth of IV storage?
B
Why... like, why not encrypt the entire store... why are they not encrypting just at the BlueStore level? The IV management is very easy to get wrong, especially when we get into the, like, snapshot territory, and having to sort of segment the IV space between the snapshots, and the fact that this is encryption at rest, as opposed to just in-flight encryption.
B
So we would be working on this at, like, the wrong level, because there's nothing to be gained from doing this in librbd. Unlike, for example, with, you know, mirroring, where the generic RADOS-based solution just wouldn't fly, here it seems like the problem space is exactly the same: it's, like, exactly the same objects, exactly the same, you know, snapshot context, and I just don't see the point of doing this at the librbd level, yeah.
A
It comes down to key management. I mean, there are certain compliance requirements where tenants would need to control the keys and the key lifecycles, and things like that, and having one set of keys that controls the OSDs doesn't give that ability to the end users. It's the same reason why RGW supports encrypting objects at RGW.
F
But this one: I just have my failover/failback work, so basically just fixing that problem with ALUA path bouncing. We have the two cases: the one where we have the misconfiguration, and then the one where operating systems discover the paths in different orders, and then, when the active optimized path fails, the different hosts fail over to different active non-optimized paths. So I've just been working on the fixes and improvements for that, and I think, for iSCSI...