B: That's the right path. I did a hack: so librbd, if you don't have the cache enabled, it uses, like, a shared pointer's ability to install a deleter on a pointer. So I basically reference count the pointer; once it goes to zero, I walk any AIO completions, until the reference kind of goes to zero, and then it fires them if need be, yeah. So that solves the one issue with the messenger, you know, having, like, two copies or whatever.
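The reference-counting trick described above can be sketched roughly like this. This is a toy model for illustration only, not librbd code, and all names are made up:

```python
class RefCountedBuffer:
    """Toy model of the hack described: completions queued against a buffer
    are fired only when the last reference to that buffer is dropped."""

    def __init__(self, data):
        self.data = data
        self.refs = 0
        self.pending = []  # completion callbacks waiting for refcount zero

    def get(self):
        self.refs += 1
        return self

    def put(self):
        self.refs -= 1
        if self.refs == 0:
            # last reference dropped: walk the queued completions and fire them
            for cb in self.pending:
                cb()
            self.pending.clear()

    def add_completion(self, cb):
        if self.refs == 0:
            cb()  # nothing holds the buffer, so complete immediately
        else:
            self.pending.append(cb)
```

A completion registered while something (e.g. the messenger) still holds a reference stays queued until that last reference is released.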
B: You know, the iterators or whatever [track] where it left off, and then, like, the Objecter can basically then take over and say: yeah, I have a bufferlist to put all this data into, and read off of them. But I think, just the way that the current thing works is, yeah, you can't really pass, like, those I/O extents like that into the read path; it just kind of says...
A: When... I'm a little bit nervous about it; it really depends on [inaudible]. There was a pull request I looked at a couple weeks ago, that was [inaudible] first I think, that was trying to do the write side, actually, and I just didn't think it was gonna work.
B: Next one now, somewhat related, for the optimized data path: there's a lot of stuff that's already gone in; it's a lot lower CPU usage per I/O, and the IOPS are higher already, mm-hmm. We can always optimize more, so I don't feel like it's ready [to] get merged, yeah; especially as we start adding new features and start changing things around, that will probably break something to make it slower again, because that's part of the game.
B: The cacher by default is gone; it uses [a] write-around cache. So all it does is it uses the same rbd cache target dirty memory [settings], but it basically allows X number of megabytes of in-flight I/O to be on the wire, and it puts an ack back to the client immediately, until it gets a flush. [Once] it gets a flush, then it will actually make sure that... I think there's, like... I mean, we'll get [to that] right now; and that works in concert with the... the I/O scheduler.
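A minimal model of the write-around behavior described: writes are acked immediately while a budget of in-flight bytes lasts, and a flush acts as a barrier. The class names, the budget, and the backend interface are all illustrative assumptions, not the real librbd option names:

```python
class Backend:
    """Stand-in for the OSD back end: tracks submitted vs. durable bytes."""

    def __init__(self):
        self.submitted = 0
        self.completed = 0

    def submit(self, nbytes):
        self.submitted += nbytes

    def wait_all(self):
        self.completed = self.submitted  # simulate waiting for all I/O


class WriteAroundCache:
    """Toy write-around cache: acks writes right away while at most
    max_inflight bytes are on the wire; flush waits for everything."""

    def __init__(self, max_inflight):
        self.max_inflight = max_inflight
        self.inflight = 0  # bytes acked to the client but not yet durable
        self.stalls = 0    # writes that had to wait for budget

    def write(self, nbytes, backend):
        if self.inflight + nbytes > self.max_inflight:
            self.stalls += 1
            self.drain(backend)      # over budget: wait for outstanding I/O
        self.inflight += nbytes
        backend.submit(nbytes)
        return "ack"                 # client sees completion immediately

    def flush(self, backend):
        self.drain(backend)          # barrier: everything must be durable
        return "flushed"

    def drain(self, backend):
        backend.wait_all()
        self.inflight = 0
```

The point is that individual writes never wait for the cluster unless the in-flight budget is exhausted; only a flush forces real durability.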
B: ...or something like that: create a snapshot and let those snapshots be automatically transferred to the other side. And then, like, the third mode was, yeah, basically just manual: like, hey, take a snapshot now. But then, with those modes 2 and 3, that, I think, we'd also want to then be able to support inter-cluster live migration. So the live migration feature that was added in Nautilus only works on a single cluster; but then [we'd] be able to expand that feature to say, yeah:
B: I now want to basically put this image on this remote side and live migrate [it]. Then any I/Os, up until the last known good, committed, fully replicated snapshot, [have] to get served by the remote side; and on that remote side, every time you read or write, it will basically copy up the remaining data to the... to the local side, I think.
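The copy-up behavior just described can be modeled like this. Block granularity and all names here are assumptions for illustration, not how the actual migration code is structured:

```python
class MigrationTarget:
    """Toy model: blocks not yet present locally are pulled from the
    source image on first access (the 'copy up' described above)."""

    def __init__(self, source_blocks):
        self.source = source_blocks  # last known good, fully replicated data
        self.local = {}              # blocks already copied to this side

    def read(self, block_no):
        if block_no not in self.local:
            # first touch: copy the remaining data up from the source
            self.local[block_no] = self.source[block_no]
        return self.local[block_no]

    def write(self, block_no, data):
        self.read(block_no)          # ensure old data is copied up first
        self.local[block_no] = data
```

After enough reads and writes (or a background sweep), everything lives locally and the source is no longer needed.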
B: [As] against two and three, the inter-cluster live migration: the three is, I don't need a scheduled snapshot, so you just go into our... our rbd CLI, saying "do it now". Versus, I think, two is: now I need to have, like, all this command-line stuff set to define a schedule, and I have to have something in the background that will, you know, say "it's time now for me to create a snapshot; let me go create a snapshot."
A: What would you do... I mean, the first piece of work that's common to both is that you have to have the snapshot [replication] that's happening in rbd-mirror. Would you add the... would you do the scheduling from [mode] 2 before you did live migration from [mode] 3, or would you...? In my head, it just feels like the scheduling is gonna [be] easier.
B: The actual, like, inter-cluster [part], like, connect to the remote cluster and read the data and deep-copy the data: I mean, we have most of the things there for... for live migration, deep copy; it's just you'd now be tweaking it to say, well, now I need to open up another cluster connection. And the big thing that's kind of the unanswered question for that one, 'cause now it's a client that's actually opening up a remote connection.
B: If you have, like, a RADOS proxy, we can basically say the RADOS proxy hides all that stuff, 'cause you can do, like, a mapping inside the proxy to say, like: here's how I can map, you know, local Ceph users to permissions on the remote side, or something like that. So now you don't need to worry about key management for any individual client talking to a totally different remote cluster.
A: But yeah, I have a... I have a stupid question, because we keep talking about the current live migration feature that's in Nautilus, but I'm not sure I actually understand exactly how it works. My assumption was that you can have an active VM that has a device in [use], and you can migrate that image to [a] different pool; but I seem to recall hearing something about how you have to do, like, [a] prepare stage, and you have to, like, reattach, and... but yeah, does that actually work?
B: Yeah, so it [handles] clones, with the deep-copy support; so if you had N number of snapshots on a given [image], they'll handle it. So yeah, the problem... the problem being is that, well, we separated it in three phases: prepare, migrate, and commit, and abort/rollback. But the... the problem being is, at some point in time, you're gonna have to point your...
D: So that's the prepare stage. Actually, we need to stop [or] restart the client. So when we are doing the prepare stage, we actually create a new image, new headers, new metadata, and at this stage we need the client to reconnect with the new image; after this, it actually works for migration, as Jason said, and [it's] just copying the data in the background. And so... and actually, I was thinking about doing it without interruption, just changes in the live client, and so on.
B: It's kind of, like, those corner cases. So if we had... if we knew, and [could] guarantee, that the image had exclusive lock, we could send an RPC request to the kernel exclusive-lock owner and say: do it; like, start the prepare, and, like, it can do everything it needs. But we don't know if the image [holds] its exclusive lock; it may not, in which case we have no way to actually tell the other client "hey, we started."
A: But it seems like it would be nice if... for the cases like, imagine OpenStack, you know, for example: I would imagine the workflow would be, you would do a retype on Cinder; it would change its metadata in its database, and then it would call out [to] RBD and say, "by the way, live migrate [it]". And so, from top down, it would change the metadata.
A
So
the
next
time
the
DM
restarts
he
would
talk
to
the
new
image
name
would
probably
a
two-phase
commit,
there's
something
but
and
then
it
would
reach
underneath
and
then
ask
RVD
to
do
it
and
the
running
process
would
just
would
see
the
rename
event,
and
it
would
just
start
writing
to
the
new
clone.
As
it
happens,
they
would
do
to
prepare
whatever
without
having
to
restart
mender
the
next
time
the
VMS
parts
yeah.
B: I think... I think that's certainly something we could do, if and only if [the] image has exclusive lock, yeah; that we could coordinate it. But again, you still run [in]to that final commit stage, where commit right now... we delete the original source; you, like, basically commit the image. And if you didn't tell your higher-level orchestration layer, "hey, the new image, it's actually over here"... even if, yeah, yes, the transient version of, you know, QEMU that's running knows that, 'cause...
B: You'd use, like, something that says, like: hey, create a new volume from, you know, this volume, or from this snapshot, with, you know, some special flags to say you want [to] live migrate it; you know, create it, you know, instantly. But your workload has already stopped by [then]; you can't do things on live volumes with Kubernetes. So Cinder, obviously, I think, [is] a lot more complicated, which adds way more corner cases.
B: In terms of... that's the Intel stuff, the persistent read-only [cache], right.
B: So I keep trying to get work out of the CSI [folks], and there, you know... there's, you know, use cases coming up, about, like: well, I want to create a snapshot, but instantly I want that snapshot, you know, potentially to get flattened, or something like that. Well, we can't have the CSI run that flatten [operation]; or, if we could, what if it dies in the middle? Who's tracking all these, you know, batch requests, and who's gonna restart them after, you know, it restarts?

B: The dashboard does the same thing, where, you know, you do a long-running operation [in] the dashboard; if the... if the manager crashes where it's running the dashboard, it just kind of stopped in the middle, and there's no, like, redo log about the actions that you said "do these", to, you know, restart them upon failure after, you know, [a restart]. It...
B: Exactly. So just some random object, like a list of image IDs, what you have to do, some other metadata; and then that support module, on restart, can scan it, you know, periodically scan it: "oh, this one's for me to do", and do it, and provide feedback to say, like: hey, that remove operation, you know, you're 30% done, or whatever; not just... that's something
B: that's in progress, that you actually wanted to get feedback [on]. But then they're interested in, also, then, if that happens, to hook into it; and then we get the CSI [side to] also hook into it for any long-running operations, but they don't need to come up with their own, like, system to keep track of redo logs. Okay.
B: Removals are slow, yeah; if the CSI just removes it to the trash, they can forget about it. Also, if it has, like... if [it] has linked clones, it doesn't matter: move [it] to the trash, and you don't have to worry about [it] anymore, and then eventually it'll get deleted once all the linked clones there, you know, [get] deleted or flattened or whatever.
A: And for the... for the mode 2: if we can, I think it'd be really nice if it would work with kernel RBD, yeah; not sure how many pieces are necessary for that to happen, I guess.
B: For the kernel RBD, it's not really a problem; [it] doesn't have any maintenance work. So the [idea is] basically a librbd client gets the exclusive lock from krbd, which is great, 'cause it just [stalls] all your I/O; right, by default, [it] creates the snapshot and then releases [the] lock back to krbd. You can do that right now; so there's nothing magical about creating a snapshot.
B: [I] don't think he actually had as bad a drop on random I/Os as they did in sequential, because he had the object cacher enabled. So he was getting, for the small... for the small sequential I/O, he was seeing a bigger delta [in] speed with journaling enabled and disabled. That's just because it does a great job; yeah, [it] actually does a great job taking sequential I/Os and turning, you know, 500, like, sequential [writes] into one single [op] on the backside.
B: Other issues in terms of journaling: yeah, there's some big ones that were added, especially about breaking up larger [writes]; but we did have a big memory use [problem] on the rbd-mirror. That's not going to be an issue with the [new] memory target changes, because then we can basically say that, we know we have this much memory that we can... we can use, and now we can try to stay within those bounds.
B: [It] came up from testing during the journaling [work]: like, oh, I have a thousand images being replicated, I'm doing, like, dd operations on them with, like, four megabytes, something like that. So that means the journal entries [are], like, four megabytes, times the number of objects, times a thousand images; wow, am I using, like, you know, 16 gigabytes of, you know, whatever, RAM. Yeah, so we changed it to be, like: well, I think by default it's after you get past 16K,
B: it's breaking up write events into multiple journal events, [so] that the other side, the rbd-mirror, can limit its memory usage and just, like, nibble on [them], you know. But that just explodes the number of I/Os. Oh, and the other... the other thing, which I talked to [inaudible] about, was... oh, but you have time to test.
B: ...but right now we're very careful about not... basically, like, for each I/O coming in, we issue one, like, journal append event, mm-hmm. [I have] a tracker ticket open to, like: well, [what if] we add the same option we have for the cache, where it's kind of like you're... you're write-through on the journal until [a] flush. So we can then, once you see... once you see a flush, now we can start batching together multiple journal events, and we don't [have] to worry about, you know, consistency issues, because we can, yeah.
A: Okay, so I think that... I think that [the] main thing that I was thinking, when I was trying to understand how [it] was working and what might [be] the problem with it, [was] the flush thing: where, in principle, you can buffer all these things up until you get a flush, and then... and then, after that, also, you can coalesce the writes, yeah.
B: Yeah, so right now, with the object cacher... the object cacher, [when] it starts, the I/Os go immediately to the journal; immediately, [and the] cacher, actually, I mean, then it could ack the writes back. So as long as, like, the write-back cache is not, like, filling up, it's like you're hiding the cost of the... of the... the journal operations. But when you're doing all these benchmarks, eventually you're obviously filling up, you know, your 16 megabytes, or whatever your default size is, of the object cacher.
B: So now you're... now you're running into a brick wall of the latency. So it looks... I think, a lot of times, the journal performance looks a lot worse than it actually is, [because] many of these [benchmarks] are just, like, straight up running at full speed, you know, that benchmark speed, yeah, stressing the cluster; you know, you're at the max [cache] size. If you just have small, random, bursty I/O behavior, I don't think you're gonna see anywhere near as bad, yeah, you know, those latency hits and [IOPS hit], yeah.
C: I'll look into it. So it was that same thing that, you know, Matthew Wilcox had wanted: to create a true, you know, userspace-to-kernel device. And so I looked that over, over my vacation, and yeah, that would be cool to just finally do it, instead of adding [another hack]. But the thing is, well, [what] we need, though, in, like, a [Ceph] cluster, is, you know: we're gonna pop up to user space, but we're gonna pop down back to the kernel.
C: So we really just want, like, the data blocks to go from there to there, and then we want to add in, you know, our little header, the [Ceph] header. So we don't really need to [copy through] user space; 'cause if we do, we'll still have the same problem: that we need to, you know... have the same problem on the back end, where we're gonna hit copies in the network layer, or, you know, between user space and kernel space.
C: So we really just need to do, you know, just pop up either, so they say, "hey, add this header to this data buffer", and then have something back in the kernel [handle] it. So, like, I... I hacked it in, but it's not pretty. It's basically, like, you know, the exact same [thing] you want to do with [the] iSCSI target, too; and so it's always been the dream. And so, like, we can do it in hacky ways, but to do it in [a] nice [way]... please, yeah; I don't know.
C: Yeah, and if we can add it, like, in our own kind of, like, call-outs, and, like, special... like, basically, like, a special, you know, socket-type thing; and then, you know, we can connect it to [the] back end and do various things. But yeah, it just depends. So it just needs more work, to pretty it up, or just do more research into, you know, what exactly needs to be done, and things like that.
A: I guess that the user space daemon... like, we're gonna have some interaction with [the] user space daemon, to, like: please map this image, and tell me when it's ready; here, and tear this one down; or whatever it might be. [It would] be nice to have that. Maybe it's not worth it, 'cuz it's all gonna be hidden by this... the rbd CLI, I guess. At the end of the day, right, the rbd CLI is gonna do a map; [it'll] either have the daemon do it for you, or it's gonna do it itself. So yeah, okay; anyway.