From YouTube: 2019-06-05 :: Ceph Developer Monthly part 1
Description
Part two: https://youtu.be/vYY4yngqMsQ
A
Start recording here and then we can get started. We tried to be more proactive... Lily? Yeah, it's already recording. We tried to be more proactive this time around: to advertise the agenda ahead of time and make sure the agenda actually has relevant items, so the leads are taking a more active role in making sure this is an effective discussion. I want to continue doing that in the future. If there's anything, just let people know, as far as things we can do to improve the value of this discussion every month.
A
Let us know. Alright, so I'll put the agenda in the chat. Anybody?
B
And also the other design choice being that we wanted to reuse as much as possible, so we reused, basically, librados to connect clusters together. That meant that between the two clusters you had to have a fully routable network, so that the two clusters could talk to each other. And so after the Jewel release, you know, from our point of view everything was fine, but in reality it just turns out...
B
Not a lot of people have been using the feature, and over the past couple of releases it's, you know, become more and more apparent there's interest in RBD mirroring, and also in the current issues with it. So the journaling issue being: it's going to cost twice as many I/Os going to your storage system as without journaling, because it needs to write to the journal first and wait for the journal to be committed before it can actually go modify the backing RBD image.
B
It
also
like
doubles
your
latency
for
for
I/o
and
then
the
routable
network
issue,
because
we
use
liberators
to
from
a
few
instances
of
liberators
to
talk
to
two
different
clusters
from
the
RVD
mirror
demon.
Some
people
find
it
awkward
or
hard
to
have
to
fully
routable
stuff
cluster
networks
over
over
away
and
connection
so
going
forward.
B
We
we
want
to
be
able
to
offer
new
modes
of
actually
being
able
to
mirror
your
RVD
images
to
different
data,
centers
or
clusters,
with
clouds
or
whatever
you
want
to
call
it
and
hopefully
emit
a
way
that
is
higher
performance,
perhaps
if
they
that
are
able
to
suit
some
other
use
cases.
Besides,
these
fully
consistent
journal
based
model,
so
I.
B
What
I'm,
offering
or
what
I'm
proposing
going
forward
for
octopus
is
number
one
will
keep
the
current
system,
as
is
that
will
expand
its
by
offering
a
couple
other
optional,
mirroring
modes,
one
of
them
being
a
a
periodic
schedule
based
ultimate
automatic
snapshot,
will
have
some
background
process.
I
redeem.
Your
demon
take
snapshots
on
images
on
a
on
a
on
a
schedule,
whatever
the
user
defines
and
we'll
use
those
snapshots
and
we'll
use
different
occasion
between
the
previous
snapshot
and
the
the
current
had
snapshot
just
to
push
the
deltas
to
the
other
side.
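The snapshot-delta flow described above (diff the previous snapshot against the current head, ship only the changes) can be illustrated with a toy sketch. This is not the librbd API; plain dicts stand in for image blocks, and the function names are made up for illustration:

```python
def compute_delta(prev_snap, curr_snap):
    """Return only the blocks that changed (or appeared) since prev_snap.

    Snapshots are modeled as {block_offset: data} dicts; a real
    implementation would walk RBD object maps or diff-iterate instead.
    """
    return {off: data for off, data in curr_snap.items()
            if prev_snap.get(off) != data}

def apply_delta(remote_image, delta):
    """Apply the shipped blocks on the secondary side."""
    remote_image.update(delta)

# Example: only block 8192 changed between the two snapshots,
# so only that block has to cross the WAN link.
prev = {0: b"boot", 4096: b"data", 8192: b"old"}
curr = {0: b"boot", 4096: b"data", 8192: b"new"}
delta = compute_delta(prev, curr)
remote = dict(prev)   # the remote already holds the previous snapshot
apply_delta(remote, delta)
assert delta == {8192: b"new"}
assert remote == curr
```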
B
We just need to make sure that when we're replicating the snapshot, if you have a failure during that transmission, you can roll back to, you know, the previous last-known-good consistent state, because you're only consistent at the actual snapshot points. And then, besides the periodic mode, maybe just offer a way to say "on demand": if you have a cloud workload, say I'm running some cloud X on this cluster and I want to on-demand move it to a different cluster, I just want to be able to tell Ceph that.
B
So
that's
that's
some
options
of
how
we
can
eliminates
or
kind
of
offload
the
current
permits
hit
from
journaling.
The
other
big
issue
with
our
Burien
is
the
networking
and
how
to
connect
to
different
clusters
together
that
are
over
a
LAN
so
back
when
we
first
were
talking
about
in
designing
the
original
jewel
implementation
or
Carboni
mirror,
and
one
of
the
things
we
talked
about
was
well
in
the
future.
We
could
add
this
capability,
which
is
kind
of
like
the
concept
of
a
radios
proxy
where
you
you
run
this
proxy
on
both
sides.
B
The mon map and OSD map would basically tell it: hey, wire your connections to this cluster over here actually through this proxy, this local proxy, and it would tunnel the traffic through to the other side, and the other side would then fan it back out to the OSDs and, in the end, the mons on the other side as needed. But that would eliminate the need to have a fully routable network between the two clusters; you can kind of tunnel that traffic through a single TCP connection.
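The proxy idea boils down to multiplexing many client-to-cluster conversations over one TCP connection: tag each message with a channel id going in, and fan the frames back out on the far side. A minimal framing sketch (a hypothetical format, not the actual Ceph messenger protocol):

```python
import struct

def mux(channel_id, payload):
    """Frame one logical message: 4-byte channel id, 4-byte length, body."""
    return struct.pack("!II", channel_id, len(payload)) + payload

def demux(stream):
    """Split a byte stream of frames back into (channel_id, payload) pairs."""
    out, i = [], 0
    while i < len(stream):
        chan, length = struct.unpack_from("!II", stream, i)
        i += 8
        out.append((chan, stream[i:i + length]))
        i += length
    return out

# Two "client to OSD" requests share one tunnel and are separated again.
wire = mux(1, b"read obj.A") + mux(2, b"write obj.B")
assert demux(wire) == [(1, b"read obj.A"), (2, b"write obj.B")]
```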
B
One
of
the
downsides
with
that
approach
is
that
you
still
have
the
it
makes
it
super
easy
for
us,
I
mean
even
then
we
get
the
ratos
proxy
harmony
mirror,
doesn't
even
change
at
all.
We
keep
using
our
low-level
radios.
Fundamentals
of
you
know:
hey
watch
an
image.
You
know
read
from
the
read
from
objects.
B
I'm
sure
read
from
journals
or
to
read
snapshot
entries
or
whatever
the
other
design
choice
we
made
when
we
implant
our
bead
Amir
in
just
in
the
sake
of
time
and
make
simplicity,
was
that
we
had
a
pull
model
where,
if
the
remote
side,
that's
pulling
changes
from
the
source
primary
image
to
in
applying
it
locally.
So,
but
if
you
have
any
latency
on
that
way
and
link
that
whole
process
becomes
more
and
more
and
more
expensive,
because
those
pull
operations
are
hand
handled
on
individual
Rados
objects,
then
they're
not
handled
in
bash.
B
So
one
of
the
other
alternatives
we
could
think
about
is
kind
of
swapping
the
rolls
around
and
having
orbiting
mirror
or
some
other
thing
in
the
middle
operators
as
a
push
process
to
it.
Locally
watches
for
changes
gathers
them
up
in
a
batch
and
can
shift
those
batches
off
to
the
other
side,
and
the
other
side
could
apply
them
in
bulk.
So
you
don't
have
that
Lake
Austin
poling
operation
that
we're
dealing
with
right.
Now,
you
don't
have
to
have
the
the
same
issues
of
high
latency
links.
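The pull-versus-push trade-off is easy to put rough numbers on: per-object pulls pay a WAN round trip each, while a push that batches changes amortizes the round trip across the batch. A back-of-the-envelope sketch, with illustrative numbers only:

```python
import math

def per_object_pull_ms(n_objects, rtt_ms):
    # One round trip per individual RADOS object.
    return n_objects * rtt_ms

def batched_push_ms(n_objects, rtt_ms, batch_size):
    # One round trip per shipped batch.
    return math.ceil(n_objects / batch_size) * rtt_ms

# 10,000 changed objects over a 50 ms WAN link:
assert per_object_pull_ms(10_000, 50) == 500_000   # ~8.3 minutes
assert batched_push_ms(10_000, 50, 256) == 2_000   # 2 seconds
```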
B
So those are things we want to think about on the RBD team for Octopus. And then I've thought about some future ideas to also think about going forward. Like, I know Azure supports a REST API where you can just get access to their blob image storage, so we could have some simple tooling to import or export Azure images and that kind of thing. I also talked to a couple of people about the idea of having RGW expose, like, pseudo-buckets for multi-site RGW replication.
B
That
would
expose
our
body
image.
Snapshots
and
multi-site
could
actually
take
that
those
those
snapshots
and
properly
replicate
them
across
and
that
it
wouldn't
involve
because
it's
a
pseudo
bucket
rgw
would
actually
be
using
lib
RBD
to
like
actually
see
the
data.
It
would
actually
have
to
be
copying
the
whole
like
map
shots,
has
its
objects
within
rgw.
B
We'll continue to use the journal model. We have Nicola... he's working on improvements. We had to make some compromises when we did it because, like, we used to only support a single rbd-mirror daemon, and if someone created a thousand images we couldn't have, you know, two terabytes of memory being used for caching all those journal objects. We support active-active now, so we can split the load, and now we're integrating...
B
Basically
a
memory
pool
that
says
I.
You
have
a
target
allocations,
this
I
believe
memory,
demon
of
two
gigabytes
or
whatever
you
set
for
it,
and
if
you
only
have
five
images
to
to
mirror
it'll
try
to
use
as
much
memory
as
possible,
so
it
can
prefetch
more
objects
and
basically
run
faster.
Instead
of
taking
small
nibbles
of
a
journal
objects,
we
can,
you
know,
take
big
gulps
of
the
journal
objects
to
try
to
reduce
that
latency
from
all
those
pull
effects.
B
It's the rbd-mirror daemon, and we don't need... we already have it running. We already have the concept of leaders and things like that, so the leader can be responsible for coordinating the creation of snapshots, you know, on the defined schedule, whatever those defined schedules are. I think there'd be some overlap, I know, with the CephFS version of schedule-based snapshots, so I think a consistent way to define those schedules would be good.
B
If you do a snapshot ls, the user sees the image, and it has mirroring snapshots, especially where they get auto-deleted, and so the user doesn't have to care about them. And that way we can actually make sure, like: whoa, what if you set it to replicate every hour, and it takes longer than an hour to, you know, create a new snapshot? We don't want to create a new snapshot while you're still replicating the last one, so we can control the lifecycles of those. Yeah, yeah.
B
In theory it could, because if you have the concept of a live migration tied into it, yeah, as long as it's able to pull the latest data from the remote side, you could still use that model. Like, basically: I create a snapshot at head, I start the live migration, and I basically attach it to the head snapshot on the remote side. So it's actually copying the data in the background, and also, if on demand your live librbd client needs data, it can pull it live from the other side, basically copy it up.
A
So, I mean, I think that makes sense to me. It's just the on-demand part: does that mean that librbd has to be... say you do the live migration, but the actual data hasn't been copied yet, and you do a read on the remote side, it's supposed to...
B
That's the live migration part, right, where it's basically on demand. Like: I shut down this image, I stopped the workload on cluster A, and I basically attach a live migration on cluster B, saying, hey, you're attached to this image on cluster A. I can restart my workload instantly, or near-instantaneously, against cluster B, and as needed it can pull the data from cluster A.
C
Normal snapshots are initiated by the user, who can supposedly stop the workload or freeze the filesystem, take the snapshot, and then unfreeze it. Here it will be an external entity; that is, it will be a snapshot that is initiated by Ceph, as opposed to by the user. And I think it needs to be consistent from the point of view of a high-level application, because that's the point that you roll back to if things go wrong, right?
A
As long as your application is well-behaved, it should be able to tolerate a crash. It would be nice if, I guess, you could also have more careful timing of the snapshots, but I'm not actually sure how many applications need that. Yeah.
B
So, like, in terms of hooks: in QEMU we could do some type of call-out in the block driver. There's the guest fs-freeze stuff, so I think there are already hooks, like the QEMU guest agent, where you can instruct the guest agent, say, hey, freeze the file system. If you wanted to get to that level, if we provided those hooks, in QEMU it's certainly possible.
B
There could be something equivalent in the kernel as well: if the kernel saw, hey, a snapshot is being created, it could... Right now, with exclusive lock, it's the exclusive-lock owner that creates the snapshot, except in the kernel, which I think basically says "I can't do it." We've always had those hooks to say: yes, I can create that snapshot for you, and I'll freeze the file system, or whatever you want to do. Yeah.
A
I wonder if it would be simpler to modify the mon client and the Objecter so that they just understood that they should communicate via a proxy. So instead of having to fake it out, the MonClient would just always talk to some... yeah, the equivalent of setting an environment variable that says socks_proxy equals something, and your HTTP clients know what to do about it. You'd do something similar, where...
A
Because if that's the case, then yes, I mean, the only way to do it is to fake up the maps. But if that isn't a requirement, if it's fine to have the client understand that it needs to be talking to a proxy and behave differently, then there's a much wider design space that we can consider. I haven't really thought about it, because I just noticed it just now, but I guess faking out the maps isn't the only option. Yeah.
F
It's a lot easier to upgrade your proxies than it is to, at one point in time, upgrade two geographically separated Ceph clusters. Like, we already have users and customers who have trouble upgrading their clients at the same time as their Ceph clusters, and if they were in completely different data centers, that's a whole extra special thing.
F
But, I mean, that means that you can't sort of go around in circles doing the upgrades and get the new stuff into your new places. Like, people are going to be a lot more conservative with their backup destinations; or at least, you know, some people will be more conservative with the backup instance than they were with the production instance, and stuff. Oh no. Probably a couple...
F
But that's a lot easier when it's a proxy, and not all of the clients running in the cluster: maybe you can upgrade the RADOS client proxy package so it knows how to talk to both sides of that. And, I mean, we had a lot of trouble just with forwarding messages within the monitors; I can't imagine, like, writing a whole local proxy layer that...
A
...client. And I guess it feels like this warrants its own discussion, probably on-list. Is it sufficient, if the goal is just to not have a thousand parallel TCP connections, to just tunnel them over, like, one meta-tunnel? Semantically, that's probably what a proxy is doing, right? An OSD request is still going to pass over the wire as it would be seen on the other side.
A
Any other questions on this? I think we're going to schedule, either next week or the week after, a series of meetings to talk about the Octopus priorities, so presumably this will come up there.
B
Well, yeah, what we have right now, without the RADOS proxy or whatever: we get it to work with the caveat of the routing. The next step is diving into the design of the new modes and how they interact with one another.
A
Layers below us, yeah. I mean, in the block case you'd be, like, shipping a VM around the world, right? That requires some level of coordination from... you know.
B
Well, yeah, if you're following the Kubernetes example, and also you have, like, a workload of a thousand pods, you know, each with their own independent workloads or whatever, you kind of just, like, bring some down and bring them up, you know, on the other side. Yeah, gradually shifting the load, you know, over to the new site. So you don't have to worry about the service being down and then the service coming back up; you have N endpoints, you know, providing the service.
A
So the goal here is similar to the RBD mode that Jason was describing, where you have a local CephFS and another CephFS in another cluster somewhere else, and you just want to specify that you want to do a DR mode, where you're periodically taking snapshots on a subdirectory and then mirroring the snapshots across, on whatever timing.
A
If you do it every 10 minutes, or every hour, or whatever, that's how old your disaster-recovery backup copy will be. I kind of broke this into two parts, because I think it's somewhat related to just establishing a snapshot schedule in general. It turns out that, I think, maybe they shouldn't actually be tightly coupled, but maybe just the schedule-setting stuff would be a good place to start.
A
The idea would just be that, for any given file system in the cluster and any given subdirectory, you can specify a schedule that says: I want a snapshot to be taken every hour, or every two hours, every day, every month, or whatever it is. And then also have a retention policy.
A
I think probably the first thing is, you know, just defining a CLI experience that we like. I think the only real question I ran into here is, like, how do we want to specify the schedule?
F
Yeah, like, the numbers get used in, like, a different sequence, like you're smashing them together, and I think I understand why now, but it's still, like, super weird, whoever it was. Apparently, like... Borg backup has a different syntax, but the link is broken now, so there aren't a lot of examples, but that looks a little more understandable, maybe.
H
The idea of the Borg backup approach is to actually separate the snapshotting schedule from the pruning schedule. That would simplify the CLI quite significantly, I think, because you could, you know, create your snapshot schedule and then create a separate pruning schedule for the same subdirectory. The two are obviously related, because you want to do some pruning when you're snapshotting, but you can look at them individually and specify them individually, and the only relation between the two is actually just the subdirectory.
H
What I meant to illustrate with the Borg backup example is that you basically don't place a pruning schedule on the snapshots themselves, but on time intervals. You basically specify: okay, for thirty minutes I want to keep them, like, on a schedule of a snapshot every minute; I want to keep four weekly schedules. And then you can go and look, like: okay, within a given week, do we have one scheduled snapshot in there? Yes, good, nothing to prune. If we have two, then we prune one. I see.
H
I think that this would interact better, though, with, you know, a mix of scheduled snapshots and maybe user-created snapshots: a user could create as many snapshots as they wanted, but they would still get pruned out. I see, okay. And then, you know, I kind of put this in the Etherpad as well: you could, you know, conceivably think about having multiple schedules per subdirectory. I've had this, you know, wacky example; who knows what kind of requirements some people have, but they need to keep...
H
So again, what Borg backup does is: you get, you know, these keep-weekly, keep-monthly options, and you just give it a number. You can also keep a number of snapshots regardless of timing, so you just say: I always want to have 30 snapshots, no matter what. And you can also give it, basically, a prefix on the name of the snapshot, and you can say, like, only prune those snapshots that are prefixed with whatever you want.
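That interval-bucket retention idea can be sketched as: bucket snapshots by calendar interval, keep the newest in each of the last N buckets, and prune the rest. A simplified sketch, loosely modeled on Borg's keep-daily semantics (only the daily rule is shown; weekly or monthly rules would just be additional bucketing passes):

```python
from datetime import datetime

def prune(snapshots, keep_daily):
    """Given snapshot timestamps, return (kept, pruned).

    Keep the newest snapshot in each of the most recent `keep_daily`
    calendar days; everything else is pruned.
    """
    by_day = {}
    for ts in sorted(snapshots, reverse=True):   # newest first
        by_day.setdefault(ts.date(), ts)         # newest per day wins
    kept_days = sorted(by_day, reverse=True)[:keep_daily]
    kept = {by_day[d] for d in kept_days}
    return kept, set(snapshots) - kept

snaps = [datetime(2019, 6, 1, h) for h in (0, 6, 12)] + \
        [datetime(2019, 6, 2, 3), datetime(2019, 6, 3, 9)]
kept, pruned = prune(snaps, keep_daily=2)
assert kept == {datetime(2019, 6, 3, 9), datetime(2019, 6, 2, 3)}
assert datetime(2019, 6, 1, 12) in pruned
```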
A
The one thing that I like about the original proposal is that if you do, like, an ls or whatever, you would get a single table that would show the directory, when you're taking the snapshots, and also how long you keep them; like, you could see the whole complete view of the schedule and the pruning schedule. I wonder, if we separate the schedule and the pruning into two different pieces, whether it still makes sense to have a combined view that shows...
H
I think it does. I mean, again, the question is what you do with manual snapshots: do you list them in this kind of view? But that can still be figured out. I mean, another benefit, I think, of the separate pruning is that you can actually run it manually; we don't have to rely on the schedule.
A
I don't think anybody was expecting to start this tomorrow. Okay, I wasn't, okay. Well, that all sounds great, and that's just the snapshot management and pruning, I guess. Well, I guess the meta-question for Jason is: should we try to mirror the same thing for RBD, and does it make sense to do that either from the CLI... because here the CLI is going to be sort of combined with the CephFS one, and CephFS already has its own CLI.
I
Should we be using volume names rather than filesystem names? And then maybe the volumes and... it has the downside of tying ourselves to volumes and subvolumes. If people aren't using those, then, I don't know, we'll need to provide another interface to snapshot a particular path in the file system.
I
You're suggesting that the subvolume module would just call out to this other command-line interface to set up the snap schedules, and then you'd give the actual path instead of the subvolume name? Yeah, that's one option. That might simplify the coupling, too, because when you delete the subvolume, the subvolume module is going to need to go out and delete all its snap schedules.
A
I mean, the actual path for a subvolume is something weird, right? So I think you definitely want that. I think we'd want the convenience of being able to specify a subvolume name, like, "snapshot our entire subvolume," but I think we'd also want the flexibility... it seems like it doesn't cost us anything to have the flexibility to also specify a specific path within the subvolume, or the volume, or wherever, so that the snapshots can do that, and there's no particular reason why we...
F
One other question around this, which is: I don't actually know what a reasonable limit on the snapshot creation rate is anymore, but there probably still is one. I mean, I know we went through a lot of work to make keeping around deleted snapshots cheap within the OSD map processing, but, like, you know, creating them, and especially pruning them, aren't zero cost. I don't know if we want to surface that here by not letting people specify automatic creation at less than an hour or something, but we should probably think about that.
F
Yes, I mean, there are sort of two ways in which it matters, or scales: there's a per-OSD cost, and there's the cost of processing the OSD map. And we've tried to make the OSD-map processing cost lower, and there's other work in progress to continue making it lower. So that means it's actually going to be less of a full-cluster cost, except for the per-OSD part.
I
One more thing I want to bring up: if we have, you know, hundreds of subvolumes, all with similar schedules, and then we also have the corresponding task of mirroring those subvolumes on those schedules, then, you know, presumably the sync manager is going to spin off a bunch of containers. I'm getting a little ahead of us, but I think this is relevant to the snapshot scheduling: we'd potentially be mirroring hundreds of paths, all at the same time, on the same schedule, and that could be somewhat disruptive to the cluster.
F
What's the goal for this mirroring, then? Is it just to have, like, kind-of-up-to-date backups when you do the synchronization? Because, I mean, we could just, like, compress them all into the same snapshot at midnight and then, over the course of the day, migrate them.
A
What we'd have: in the schedule, you specify the interval, and by default the offset within that interval is, like, randomized, or whatever, but it would be consistent for that particular schedule. Or, alternatively, you can specify, like, a strict, specific offset, like zero, that's exactly at midnight, or whatever it is.
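One way to get an offset that is randomized across schedules but stable for any one schedule is to hash the schedule's identity into the interval. A sketch of that idea (a hypothetical design, not an agreed-on one):

```python
import hashlib

def schedule_offset(schedule_id: str, interval_s: int) -> int:
    """Stable pseudo-random offset in [0, interval_s) for one schedule."""
    digest = hashlib.sha256(schedule_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % interval_s

def next_run(now_s: int, schedule_id: str, interval_s: int) -> int:
    """First tick at or after `now_s` for this schedule."""
    off = schedule_offset(schedule_id, interval_s)
    if now_s <= off:
        return off
    periods = (now_s - off + interval_s - 1) // interval_s
    return off + periods * interval_s

hourly = 3600
a = schedule_offset("/volumes/grp/a", hourly)
assert 0 <= a < hourly
assert schedule_offset("/volumes/grp/a", hourly) == a   # stable across calls
assert next_run(10 * hourly, "/volumes/grp/a", hourly) % hourly == a
assert next_run(10 * hourly, "/volumes/grp/a", hourly) >= 10 * hourly
```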
A
So, I briefly mentioned this: there's a pull request in flight that adds a new primitive to CephFS to flush all the rstats for a snapshot, which will be necessary so that any subsequent rsync that you do on the snapshot can use rstats to optimize itself, skipping entire directories or whatever. So we should make sure that that gets in.
A
But currently, if you, like, go and read them, it'll block on, you know, whatever locks are held or whatever happened, and the rstats are going to converge on a stable value at some point, you know, seconds or minutes later. Does it make sense to have any kind of feedback on something like that? Is that something that you've thought about, Patrick?
A
So, right, there are two things. One is that when you create the snapshot, the actual flushing of the snapshot is totally asynchronous. The barrier happens on the client side, and it will, in the background, write out all of its dirty data for that snapshot, so the snapshot isn't really complete until that happens. And then, separately, when the snapshot is created, the MDS basically broadcasts to all clients that they should create the snapshot, but there isn't a two-phase thing where all clients quiesce in coordination, and then create the snapshot, and then proceed.
A
With the rstat propagation, it doesn't really know quite... but again, there are multiple states there: all clients may or may not have logically taken the snapshot; all clients have taken the snapshot but haven't written back the caps; all clients have flushed their caps, so all the data in the snapshot is stable, but the rstats aren't yet consistent; and then, finally, all the rstats have converged. That's, like, four phases or whatever, and right now there's no real way to get any transparency into which of these you're in.
A
Okay, so the goal of the snapshot mirroring is sort of the simplest DR model, where you would specify the source directory on one cluster, a target directory on a remote cluster, and some sort of schedule for how often you want to synchronize it. And the basic loop would be: you create a snapshot on the source cluster, you synchronize data from that snapshot, you flush it, make sure the rstats are there or whatever, push the snapshot copy over to the remote cluster, and after that data is copied...
A
You take the same snapshot, with the same name, on the remote cluster, in the same directory, and then you loop. And, getting back to your earlier comment, the idea here would be that it would be a self-regulating loop, so that if you specified a 10-minute schedule, for example, but then it takes an hour to copy all the data, then it would only happen every hour, right?
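That self-regulating behavior can be simulated: each cycle starts at the first schedule tick at or after the previous sync finished, so a slow sync naturally stretches the effective period. A small simulation sketch (fake sync durations; no cluster involved):

```python
def mirror_cycles(schedule_s, sync_durations):
    """Simulate snapshot-mirror cycles; return each cycle's start time.

    A new cycle starts at the next schedule tick at or after the moment
    the previous sync completed; it never snapshots mid-transfer.
    """
    starts, t = [], 0
    for d in sync_durations:
        starts.append(t)
        done = t + d
        # Round `done` up to the next multiple of the schedule interval.
        t = ((done + schedule_s - 1) // schedule_s) * schedule_s
    return starts

# 10-minute schedule; the second sync takes almost an hour (3500 s),
# so the third cycle slips to the next tick after it finishes.
assert mirror_cycles(600, [60, 3500, 60]) == [0, 600, 4200]
```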
A
I guess that's what the schedule is about: snapshots that you're retaining for the user's benefit. And the snapshot mirroring is probably something where it'll mirror any snapshots that already exist, just so that it mirrors the whole thing from point A to point B, but then it'll also create its own snapshots at some configured granularity, to make sure it has an up-to-date copy on the other side.
B
I was just saying it's the same issue, then, that rbd-mirror has: you have to have two clusters that can fully talk to each other, to, like, basically mount CephFS.
A
I think less so, because all the inter-cluster communication could be, like, the rsync, for example. The first thing that we'd probably implement would just be a naive thing that just actually runs rsync, and in that case rsync can tunnel over SSH or whatever, and so you could do it that way. Something's going to have to be connected.
F
Maybe no one does; maybe it's not a problem at all. But I'd be worried that if we assume that we've, like, found how frequently you can snapshot, then suddenly someone creates, like, a very large sparse file, and suddenly we're basing it off of the, like, logical size, the logical created space, rather than the actual, like, cluster throughput.
F
I mean, like, we can still do that even if we were driving rsync, because we can, like, do a wrapper, something that drives rsync, where we just, like, go say: okay, we know what the ranges are, and just, like, copy those ranges instead, or something. And actually, I mean, I guess, technically, rsync does...
A
The idea is... yeah, it's doing all kinds of cleverness, right? It's even, like, making something like a Merkle tree, it's hashing, and it knows whether or not to even consider things, whatever; it's doing all kinds of crazy stuff. I like the idea of building on rsync just because you can do the simple thing and it'll work, and you can refine and optimize from there, and also because it will generalize to non-CephFS targets.
A
Like reading the data... I mean, there's a bunch of stuff that we could do to make it all better, but even if we did all that, we would probably still want the rsync option for the non-CephFS targets and backup cases. So it seems like a good place to start, and I have a feeling that it'll work well enough for lots of these cases.
F
I mean, you can even, with a new directory, just invoke rsync on specific files. Yeah, the reason I mention this is because I've never worked with, like, the rsync community, but we are a Linux file system, so, yep, they do a lot of, you know, per-FS customizations, I think. So presumably we can get slotted in if it goes through the Linux kernel interfaces.
A
But we didn't define what this CLI would look like for the mirroring part. I think maybe it makes sense to pin down what the snapshot-scheduling one looks like first, and then build this one. I'd imagine at a high level it would be similar: the idea would be that you specify a volume, subvolume, or path, and then where you want it to go. You'd have to have some other description of how you connect to that remote cluster: which remote cluster it is...
A
...how you talk to it. Maybe that would be orthogonal, so that if you have multiple directories mirrored to the same cluster, you don't have to configure the credentials multiple times, whatever. We could iterate on that a bunch. But the assumption would be that, again, there'd be a manager module that is sort of managing the process; it would spin up workers via the orchestrator API to do whatever it is to do, in a similar way to how rbd-mirror does it.
A
In some ways... RBD has a snapshot revert command that does it atomically, but it does it for the whole image. Instead, with CephFS you just have to, like, rsync out of the .snap directory, basically, in order to do a rollback, which is less than ideal. So I think one question is: should we pursue an efficient or streamlined rollback process for rolling a directory back to a snapshot?
F
In the distant past, I think the options were either to try and just, like, do it from the MDS while everything is stopped, or else you need to, like, tell the clients to prefix every op, like every RADOS op, with a rollback, which I think is what RBD does, right? Like, it's...
A
I mean, you could do it in two ways, right? You could have a blocking command that does the rollback, and when that command finishes, you know you're done; or you could have an asynchronous command that starts the process, and then you have another interface that lets you query whether it's complete or not. Either way.
F
Okay, so a disaster happens and you're failing over to this site; then the admin, you know, wants to turn as much stuff on as quickly as possible, and so communicating how they do that is going to be hard. So I'm wondering if there's any point... like, this would be a little faster, I guess, maybe, than just doing a copy, but I'm not convinced it would be a lot faster unless it was, like, asynchronous and lazy.
A
Okay, and then the last thing is just: how do we make the syncing fast? The naive thing is just to do rsync with no optimization, and the first optimization would be to use the rctime, which will require that rstat-flush pull request to land so that we can actually rely on it on the source. But I think that's something that would make sense to add to upstream rsync anyway, with appropriate documentation to make sure that you run the right flush command on the source beforehand. And then the alternative would be to...
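The rctime optimization amounts to subtree pruning during traversal: if a directory's recursive ctime predates the last successful sync, nothing under it changed, and the whole subtree can be skipped. A sketch over an in-memory tree (the rctime semantics here are assumed to mirror CephFS's recursive ctime):

```python
def changed_paths(tree, last_sync):
    """Walk a nested {name: (rctime, children)} tree, skipping any
    subtree whose recursive ctime predates the last sync."""
    out = []
    for name, (rctime, children) in tree.items():
        if rctime <= last_sync:
            continue                      # whole subtree is unchanged
        out.append(name)
        out += [f"{name}/{p}" for p in changed_paths(children, last_sync)]
    return out

tree = {
    "a": (100, {"x": (100, {})}),         # untouched since the sync at t=200
    "b": (300, {"y": (150, {}), "z": (300, {})}),
}
# Only "b" and "b/z" changed after t=200; "a" is skipped without descending.
assert changed_paths(tree, last_sync=200) == ["b", "b/z"]
```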
A
There was a lot... I think there's a lot of possible work paths here. I think starting with the snapshot scheduling seems like a good place to start, but probably the next step, or a parallel step, would be to figure out what the interface should look like for describing the mirroring: how you would configure it, in a...
A
Aside from that, I think we also need to pick a name for it. I like the simplicity of rbd-mirror, and the idea of mirroring that...
A
So I guess that's a question I kind of asked earlier for Jason about the rbd-mirror daemon: I asked if the mirror daemon is going to be the thing that drives the RBD snapshot mirroring thing. Would you imagine that, similarly, it would all fall under the rbd-mirror feature set and there would just be different modes? It would be, like, continuous write logging versus just periodic snapshot-based? Yes...
A
So I like that, and cephfs-mirror makes sense. I think the only thing we should keep in mind as we flesh the rest of this out is that we should be thinking ahead to what future mirroring modes we might have for CephFS, like a continuous one where we're continually sending changes as they happen, in real time, across to the remote site, and possibly bi-directional replication in two directions, so that it can all fall under the same set of mirroring capabilities.
D
Yeah, so the progress module as it currently exists actually measures the progress of recovery when an OSD is marked out, but not other circumstances. Today it takes a look at which PGs are affected by an OSD being marked out, and then tracks, via the PG stat updates that come in, roughly how much they...
D
Like I said, at the high level there's a couple of different paths we can take in terms of trying to track recovery progress. One is trying to track, just estimating, how much time there's left to recover across the cluster, regardless of cause. The other is trying to track more individual causes and effects, so, like, when an OSD is marked out or in, tracking how...
D
It's like... Junior's already started working on some of this, looking at the 'in' events now, and that part is relatively straightforward, in terms of it being very similar to the existing 'out' events. I'm kind of wondering if it might make sense to try to track some of the other causes at their source, like in the PG autoscaler or balancer modules, and have them maybe create their own progress events. I...
A
I like that idea, if we can figure out how to have a clean interface so that they can fire off an event and populate it with some additional metadata, but then the progress module handles the, like, ongoing maintenance, right. I think it's actually set up to do that, but I haven't actually seen...
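A minimal sketch of the kind of interface being described: a caller fires an event with a message, the PGs it covers, and some metadata, and the tracker owns the ongoing maintenance. This is a toy model with hypothetical names, not the actual mgr progress module API:

```python
class ProgressTracker:
    """Toy event tracker: callers fire an event once; the tracker then
    maintains its progress from PG state updates and retires it."""

    def __init__(self):
        self.events = {}

    def start_event(self, ev_id, message, pgs, metadata=None):
        self.events[ev_id] = {
            "message": message,
            "pending": set(pgs),      # PGs not yet recovered
            "total": len(pgs),
            "metadata": metadata or {},   # e.g. which module fired it, why
        }

    def pg_clean(self, pg):
        """Fed from PG stat updates; fully recovered events are retired."""
        for ev_id in list(self.events):
            self.events[ev_id]["pending"].discard(pg)
            if not self.events[ev_id]["pending"]:
                del self.events[ev_id]

    def progress(self, ev_id):
        """Fraction complete; a retired event counts as fully done."""
        ev = self.events.get(ev_id)
        return 1.0 if ev is None else 1.0 - len(ev["pending"]) / ev["total"]
```

The key design point from the discussion is the split of responsibility: the firing module supplies the cause and metadata, the tracker does all the bookkeeping afterwards.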
D
Okay, I guess in this case it just might work right to track the recovery work in a similar fashion to what we're doing for the other kinds of recovery; it makes sense to try to keep that within the progress module itself.
A
I like the idea of having something else, something that does some sort of discrete event that causes activity in the cluster, generating its own progress events, because it understands what completion means. The balancer seems like a good candidate, except that the way the balancer works is a little bit... it's an incremental, hill-climbing type of thing, so it's always going to take small steps that are going to trigger some activity, and then once it gets close enough to the optimum it stops; it steps towards it, but not necessarily all the way there.
A
Or even if each step had its own event. I mean, one of the nice things about having the progress events is that if you're looking at ceph status and you see a bunch of rebalancing going on, just knowing why it happened, what this was. Even if the balancer made it a short-term event that says 'I took a step, I'm trying to improve the balance and I'm 70% done with that', even though it's going to disappear, and it's probably going to take another step after that, that might still be valuable.
D
One thing we should be careful about is not having too many in-progress events at once. So if the balancer ends up taking a bunch of small steps, maybe try to batch them into one progress event.
D
Tracking the omap recovery in the same way, like, that's a little bit harder.
A
Yeah, we have omap objects recovered and omap keys recovered; we just added omap key bytes recovered.
A
The big question for me is whether there's a way to try to generalize, a way to capture some arbitrary change in the cluster that generated a bunch of PG migration or whatever in a progress event, in a sort of catch-all way. Does anybody have any, like, intuition, insight, ideas about how that might work?
E
When this rebalancing is happening, we could probably create events using things like a change in the CRUSH map, or, like, an upmap change, to at least highlight whether the balancing is because of OSDs going up and down, or things changing in the way CRUSH is configured, or the upmaps changing. That high-level information is going to be useful, yeah.
A
Yes, except that it tries to group them into an event, right. So if you have, I don't know, an OSD marked down at time A, it'll create an event that identifies the specific PGs that are affected, and then it'll track their progress; and then two hours later another OSD is marked down, and it'll create a distinct event that identifies different PGs and has different progress, and so you can see that, oh, the first thing finished healing because of whatever. Whereas if you're looking at an arbitrary set of changes, how do you know that?
A
And it's not even... I mean, I think what it's doing, if I remember, is waking up on every new OSD map and looking for changes, but even that isn't quite right, isn't quite sufficient to map it to that epoch, because you have following changes where the PGs are peering and they change their pg_temp mappings or whatever in response to that same key event, and that happens in the next couple of epochs, so we probably should find, like, a window or something where all...
D
It might be helpful to think about what kinds of use cases people would have for looking at these kinds of events. Like, for the 'out' case, I could see it being helpful when I was trying to migrate data off an OSD before taking it out of the cluster; they want to know when it was safe to remove.
D
We already have those kinds of commands, but, you know, being able to see 'I've got some other recovery going on somewhere else, but I also know that this OSD is almost done getting data migrated off of it', yep, that would be helpful.
A
Yeah, I mean, it feels like the goal of this is to have a sort of qualitative cause-and-effect understanding for the operator, just so they know why the cluster is doing work. But any time there's an 'I need to wait for that thing to finish before I do X', there's actually something specific, like in the mark-out case you want to look at, like, safe-to-remove.
A
If I remember, I believe what the current module does is it will just yank the PG out if it changed again; I can't remember, I took a look at the PG update. It kind of feels like either both events track that PG and neither of them will complete until that PG is done doing its thing, which is one defensible option, or, if another event happens that also affects that same PG...
A
I think, in order for that to happen, the progress events would have to change; right now an event just has a list of PGs. In order for it to do this thing, it would also have to remember the mapping of each PG, so that if the mapping changes again, it will kick it out and consider it completed.
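That eviction rule could look something like this sketch; the names are hypothetical and the mapping is just a tuple of OSD ids, but the logic is the one described, remembering each PG's mapping at event creation and evicting on change:

```python
def prune_remapped(event_pgs, current_mapping):
    """event_pgs maps each tracked PG to the up/acting mapping recorded
    when the event was created. If a PG has since been remapped again,
    evict it from the event and treat it as complete for this event."""
    for pgid, recorded in list(event_pgs.items()):
        if current_mapping.get(pgid) != recorded:
            del event_pgs[pgid]
    return event_pgs

event = {"1.a": (3, 5), "1.b": (2, 7)}
now = {"1.a": (3, 5), "1.b": (4, 7)}   # 1.b was remapped again
assert prune_remapped(event, now) == {"1.a": (3, 5)}
```

Note that a PG whose pool disappears also vanishes from current_mapping, so it gets dropped by the same check.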
G
Mention the misplaced PGs and optionally a cause, and you group them on that basis. I kind of wonder if it might not be easier to do this from the OSD.
A
Backing up slightly, a real easy way to solve the problem of just having each PG owned by one event would be that when you create a new event and you add a PG to it, you just look at all the previous events and you remove that PG from the previous events, right. Then, if the module has a list of all existing events, it's probably a really quick scan to just...
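The single-ownership idea in toy form; a hypothetical class, not the actual module:

```python
class EventRegistry:
    """Each PG is owned by at most one event: adding a PG to a new event
    removes it from all previous events; an older event left with no PGs
    is considered finished and dropped."""

    def __init__(self):
        self.events = {}          # event id -> set of PGs it still tracks

    def new_event(self, ev_id, pgs):
        pgs = set(pgs)
        for other in self.events.values():
            other -= pgs          # steal these PGs from older events
        self.events[ev_id] = pgs
        # retire any older event whose PGs were all stolen
        for old_id in [e for e, s in self.events.items()
                       if not s and e != ev_id]:
            del self.events[old_id]
```

As said in the discussion, with a list of all existing events this is a quick linear scan per new event.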
A
You're right, the split can happen in any state, but the merge is only initiated when the PGs are active and clean, so presumably it wouldn't be involved in any of these. But we probably just want to make sure that there's a safety check; there's already something that checks whether the pool disappeared, in which case the PG gets dropped, so we should do the same thing if the PG...
F
Well, I think what's missing is that a user can set a target pg_num, and that will mean merging a bunch of stuff. Is that...
D
Let's move on to the downgrades. Yes, so this got discussed a bit in the past; there's an email thread linked from the pad there, from last fall. Yeah, the general idea is that it would be nice to be able to support downgrading within a major release, so downgrading from one point release to an earlier point release, without having to worry about any kind of format incompatibilities or on-disk changes that may have existed.
D
Probably the first thing to consider here is how to test this, because before we can suggest these kinds of downgrades, we need to be able to make them reliable. We have the existing point-to-point release suites that upgrade between point releases; I think we can probably use the same kinds of suites to downgrade. We might need to add a little bit more support in teuthology.
D
Whether the install tasks can do the reverse effectively, I'm not sure about that, and we would need to add more workloads than currently exist there, things like running with more than one RGW or more than one MDS. The workloads in the existing p2p suites are fairly basic, like RADOS and RBD things mostly, so we'd want to set up workloads that would exercise any kind of protocol changes between the other components. The other aspect that came up in that thread, that's been discussed previously...
D
One thing we could do is start with the existing kinds of tests but run them in reverse. So we have this readable.sh script right now that just goes through and tries to decode every current and previous version from a given release, and we have some kind of allow-listing in the ceph-object-corpus to be able to manually flag types as incompatible. At some point we can use the same kind of thing within point releases, so at least check that we can decode things and not crash; that's just kind of a...
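A toy model of what such a decode check does, with a manual allow-list for known-incompatible combinations. The real script operates on the ceph-object-corpus archive of encoded structures; everything here, including the decoder functions, is a simplified stand-in:

```python
def check_corpus(corpus, decoders, incompat):
    """corpus: {(type_name, encoder_version): encoded_blob}.
    decoders: {decoder_version: decode function}.
    incompat: allow-list of (type, encoder_version, decoder_version)
    triples manually flagged as known-incompatible.
    Returns the combinations that failed to decode and were not flagged."""
    failures = []
    for (typ, enc_ver), blob in corpus.items():
        for dec_ver, decode in decoders.items():
            if (typ, enc_ver, dec_ver) in incompat:
                continue          # manually flagged; skipped like the allow-list
            try:
                decode(blob)
            except Exception:
                failures.append((typ, enc_ver, dec_ver))
    return failures

def decode_new(blob):
    return blob                   # the newer decoder reads everything

def decode_old(blob):
    if "new_field" in blob:       # older release chokes on a new field
        raise ValueError("unknown field")
    return blob

corpus = {("pg_info_t", "v1.1"): {"x": 1, "new_field": 2}}
decoders = {"v1.1": decode_new, "v1.0": decode_old}

# without an allow-list entry, the old decoder's failure is reported
assert check_corpus(corpus, decoders, set()) == [("pg_info_t", "v1.1", "v1.0")]
# flagging it as known-incompatible silences the report
assert check_corpus(corpus, decoders, {("pg_info_t", "v1.1", "v1.0")}) == []
```

Running the same matrix with decoder versions older than the encoder version is the "in reverse" part being proposed for point releases.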
D
A few points were made about how to improve the corpus. Right now it's kind of manually populated. There was a suggestion that we could add an option to essentially dump out random structures from real clusters running different workloads, so you get more realistic objects, better serialized instances than just the test instances that we have.
D
Assuming we had the testing in place, we also need to keep more things in mind when we're backporting changes: we need to be much more careful about on-disk and on-wire formats and make sure we're not changing things in a way that is going to be misinterpreted by a previous point release. The object corpus testing can kind of help with that, but there are some classes of things that it doesn't cover.
D
There are things like protocol changes, where you're sending new messages, or changing the meaning of existing messages, or the ordering in which they're sent and how they're interpreted. We've used things like feature bits in the past, but oftentimes, once the feature bit is there, we assume it's always going to be there, rather than considering the possibility that it may go away. We may want to either be more restrictive on how we're backporting, so we never backport new feature bits like that...
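The feature-bit concern can be illustrated with a small sketch: features are negotiated as the intersection of what both sides advertise, and the sender has to keep consulting the negotiated set rather than assuming a bit, once seen, persists forever. All names here are hypothetical:

```python
FEATURE_BASE    = 1 << 0
FEATURE_NEW_MSG = 1 << 5   # hypothetical bit added in a point release

def negotiate(local_features, peer_features):
    """Both sides use the intersection of what they advertise."""
    return local_features & peer_features

def send_op(negotiated, op):
    """Check the negotiated set on every send: a peer downgraded to a
    release without the bit no longer advertises it, so fall back."""
    if op == "new-msg" and not (negotiated & FEATURE_NEW_MSG):
        return "fallback"
    return "sent"
```

The downgrade hazard is exactly the case where negotiate() once returned the bit but, after the peer downgrades and reconnects, no longer does; code that cached the old answer would send a message the peer cannot parse.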
D
Feature bits, yeah. The existing flags for, like, the major releases, like require-osd-release and those things, make sense for major release upgrades; but to keep that downgrade-ability within a point release, if we did have to make some kind of protocol change that required a feature, then, like, a flag that has to be set, or a feature bit reported by the daemons or something, could be added for that kind of change, perhaps, yeah.
E
I have a question: if we are going to start supporting downgrades, where do we start?
F
...how feasible it is. There is one other thing; we talked about this briefly in the testing meeting last week, and there was one other category of things which is not on here, I don't think, which is the manager module configurations and persistent storage. I don't know if that's likely to be covered by the encoding checks; hopefully it'll be covered by running the downgrade tests, but yeah, it's an area where I wouldn't have much confidence.
D
I think that's a good question for Casey; he's here.