From YouTube: 2018-MAY-02 :: Ceph Developer Monthly
Description
Monthly developer meeting for the coordination of Ceph project development.
http://tracker.ceph.com/projects/ceph/wiki/Planning
A: This is the May 2018 edition, and we are running this meeting on APAC-friendly hours as usual. So before going further, we have some announcements to make. The first one is Mountpoint.io, a conference about software-defined storage, organized in collaboration with Gluster, Ceph and other open source software-defined storage projects. I pasted the URL in the chat; the call for papers is open until next month, and it would be fantastic to have submissions from Ceph people at this conference.
A: The second announcement was actually made on the mailing lists recently: the Ceph user survey. We already had lots of input from our community, so I'd like to thank you if you have already answered the survey; if you haven't, please consider taking some time to contribute. So, going further, we are done with the announcements. On the meeting planning, I think Sage can talk about Mimic status and Nautilus priorities, and then we can move to the meeting agenda. Yeah.
B: We've done the feature freeze for Mimic; I guess that was two weeks ago. The last sort of trickling thing that we were blocked on was the upgrade support for converting snap realms for CephFS, but I believe Patrick said he expected to get that sorted out and into testing today. So assuming that happened, then we can cut the first RC tomorrow.
B: Just do it, okay. And I think, overall, the rados suite has been looking pretty good. There are still a few lingering issues, but the tests are doing okay. I haven't checked on the other ones, but everybody else seems happy, at least for an RC. So the tests are doing okay, and that's Mimic — still hopefully on track to release in maybe three or four weeks, towards the end of the month.
B: I think I'm less concerned about the multiplexing aspect of it, which was sort of one of the more complicated pieces, and more interested in making sure that it has the bits and pieces so that we can do encryption over the wire. That's the part that, at least on the Red Hat side, we're seeing more customer demand for. So there's that, and with that it would be really nice to get some progress on the Kerberos integration. Hopefully, both of those two things.
B: It seems like the folks involved should be able to do that in this timeframe, but I'll talk to them here shortly. Jason, is there anything sort of on your radar?
E: RBD namespaces has bubbled to the top of the list for today. Cool. It would also probably be good to stop kicking the can on, like, the rbd top stuff. And yeah, yep.
B: Okay, I think that's all I had. Leo mentioned the survey — you should go do it — and Mountpoint: we should probably plan to do, like, an evening event for all the Ceph people that are at Mountpoint. That's in August in Vancouver.
F: First, as discussed previously, we still classify all operations received by an OSD into two categories: one is those issued by clients, and the other is those issued by other OSDs. And we still think that only those issued by clients need to be replicated. So under that, the replication flow is like this.
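As a minimal sketch of the classification just described — the `source` field and dict layout are illustrative assumptions, not Ceph's actual op structure:

```python
# Toy model of the rule above: only ops issued by clients are marked
# replicatable; ops issued by peer OSDs are internal and skipped.
def is_replicatable(op):
    return op["source"] == "client"

ops = [
    {"id": 1, "source": "client"},  # client write: replicate
    {"id": 2, "source": "osd"},     # peer-OSD traffic: skip
    {"id": 3, "source": "client"},  # client write: replicate
]
replicated = [op["id"] for op in ops if is_replicatable(op)]
```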
F: Firstly, the OSDs in the main cluster keep replicating the received replicatable ops to the backup cluster through intermediate transfer nodes, regardless of time slice boundaries, and the OSDs in the backup cluster cache these in what we call a replication ops cache.
F: As to the time boundaries, they are defined by the monitors. The monitors send out a timestamp called T_bound to the OSDs every time slice interval. T_bound is calculated as T_current plus the time slice interval, where T_current is the current time of the elected monitor, and the time slice interval is the interval between boundaries; this should be configured by users. After receiving this T_bound message, the OSDs first check whether two conditions are satisfied. The first is that their system—
F: —clocks have to be synchronized with the time sync service, and the clock skew should be small enough. If these two conditions are satisfied, the OSDs set their pause timer. The pause timer is a timer that triggers when the suspension of responding to clients should be started. When these two conditions are satisfied, they set this pause timer to trigger at T_pause, which is T_bound minus T_local, minus T_error, minus T_delta, where T_local is the local time.
F: That gives time for preparation work to suspend replying to clients. When the pause timer is triggered, the OSDs suspend responding to clients for a pause window, which is two times T_error plus T_delta. And we think that, for this timing to be precise — sorry, that's an error there — we may have to implement a dedicated timer rather than using the shared one.
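A rough sketch of the timer arithmetic being proposed, as I understand it from the description above — all names are illustrative, not Ceph code:

```python
# Hypothetical arithmetic for the proposed pause timer.
def t_bound(t_current, slice_interval):
    # The monitor computes the next boundary: its current time plus the
    # user-configured time slice interval.
    return t_current + slice_interval

def pause_timer_delay(t_bound_val, t_local, t_error, t_delta):
    # Fire early enough to prepare: the boundary minus the local clock,
    # minus the clock-error bound, minus the preparation allowance.
    return t_bound_val - t_local - t_error - t_delta

def pause_window(t_error, t_delta):
    # How long client replies stay suspended once the timer fires.
    return 2 * t_error + t_delta
```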
F: The shared one could involve many other work items that could make this T_delta not very precise. After the suspension, when all replicatable ops whose timestamps are earlier than T_bound are replicated — which means the commit messages for those replicatable ops have all been received from the backup cluster — the OSDs report to the monitors that they have finished the suspension, along with the T_bound that they suspended at. Once all the OSDs have reported for this T_bound—
F: —the OSDs in the backup cluster apply all operations that are earlier than T_bound to the backing store when they receive this notification. We think that there are some circumstances where the monitors should not send the notification. The first is the clock sync condition — that is, the two conditions above are not satisfied.
B: One second — there we go, I added a couple of questions to the notes. So, going back a little bit: I think at a high level the architecture makes sense; I think you're on the right track. If you go up a little bit, you talked about how you identify which ops are client ops and which ones aren't, and only replicate the client ones. I think—
B: But you want to make sure you get things like the snap trimmer activity, which is not actually a client op — it's generated by the OSD itself — but does need to be replicated. You also need to know if there's, like, a scrub repair operation, and whether that should be captured — or maybe not, actually, it probably shouldn't be captured. There's some weirdness with cache tiering, where there are hit set operations; those ones would probably be ignored — you'd just want to ignore cache tiering in the case of replication. But that's something to think about.
B: Yeah, I think it might depend on how the — it might be that you create the pool on both ends and then link them, or it might be that, like, when a pool is created, one magically gets created on the other end. I'm not really sure which model makes sense. It might be that you want different PG counts on the different clusters; I don't know if that matters or not.
B: So I would kind of call it like a buffer or a queue or something like that. But I think the main thing that stands out — and I commented a little bit here — is the way that you're implementing the replication op cache, where you're suggesting reusing the OSD journal, adding pointers to it in order to determine what ops haven't been sent yet, and buffering those. I don't think that's a good idea, for two reasons.
B: One is that it's specific to FileStore, which is just one of multiple object store backends, and so if you relied on that, then it wouldn't work if you used the BlueStore backend or anything else that comes along in the future. That's the first reason. The second reason, though: even if that didn't bother you and you did want to use FileStore, if you're reusing this journal, then you're sort of opening yourself up to blocking, or—
B: I don't know if you'd call it denial of service or something like that, but if the link to the backup cluster goes down and that journal fills up, then you won't be able to do any more writes, and you'll basically stall I/O on the master cluster, because the replica is unable to pull things over. So I expect you'd like to find something that has a — maybe not infinitely unbounded, but probably effectively unbounded — journal, so that you can—
B: In my mind, at least, that's the simplest conceptual thing. I don't know if you want it to be infinitely unbounded — like, if it becomes 100 gigabytes or something, then maybe you want to throw it out and fall back to a full resync, like before — but in general it seems like you should be able to tolerate at least modest downtime, so, like, if your backup cluster goes down.
B: The downside, of course, is that you're just writing twice: you're writing into, like, a big journal object, and then you're also actually doing the write, so you're doubling your I/O. You don't sort of get the freeness of the journal; it's going to be slower and cost more I/O.
B: I do want to take a step back, though, and just make sort of a high-level comment, and I want to be totally upfront: I think it's highly unlikely that we're going to want to merge this type of capability into the Ceph upstream OSD anytime soon. We're in the midst of doing a major rewrite/refactor on the OSD, and it's going to be a lot of code churn and a lot of things changing around.
B: I think partly that's because we've dealt with most of the multi-site and disaster recovery type scenarios at higher layers. In RADOS Gateway there's a whole multi-site federation capability, which is a lot more than just disaster recovery, and it sort of works better when implemented at the higher layer. And on the RBD block side there's also RBD mirroring, which does multi-site replication, and it's much more flexible than doing it at the RADOS level.
E: It does a journal, but each write operation has to be sent to, you know, a journaling tier, to journal it off before it can commit it, and then the other side is just reading from that journaling tier to get the same effect. Like what rbd-mirror does right now: it's got a number of objects it kind of divides its work over, and it tries to replay in order. Yeah.
B: Like going off somewhere else — and just the operator flexibility, right: individual images can have different mirroring configurations, whereas at the RADOS layer the unit would be the entire pool, right. I think that's the main thing, so I guess—
B: I don't want to too aggressively discourage you — I mean, if this is something that makes sense for you, you should pursue it. Just don't assume that it's going to be incorporated until we sort of know more. But I would encourage you to think about what the specific, I guess, high-level user problems or business problems or whatever are that you're trying to solve — you know, whether it's just disaster recovery or something else; like, what do you really want?
B: If what you want is a file system that has DR copying to another site, maybe a better way to do that is at the file system layer — that might be less effort, it might be more flexible, whatever it is. But I would consider those sorts of questions before assuming that this is going to be the best path, because although I totally believe that we could implement this capability in RADOS, and it would be super slick, I'm not convinced that it's actually the best way to solve the whole, you know, problem.
B: You should know that — I think it was Ricardo and one other — talked about this a couple of months back; they're also interested in this problem. You might want to check in with them. They're not on the call, because they're in Europe.
B: But yeah, it would be interesting to hear, once you've sort of taken a step back or whatever, what your plans are — on the mailing list or wherever.
H: So I updated the pull request. I have done some very basic tests on putting an object with the clay plugin, and repair and decode scenarios have been tested. The unit test part is not done yet, but very basic testing is done. And this code can internally use any jerasure plugin or ISA plugin, which can be given as input in the erasure code profile.
H: Currently I have not documented this part — how the commands should be run, et cetera — but I'm planning to do that. And yeah, about the code review: I was wondering when the review will be done, and what should be the process for the unit testing as well? I would like to know what things need to be implemented for the unit testing.
B: Yeah — how big is it? I don't think there's anything really blocking. We're probably going to be focusing on Mimic bug fixing for the next couple of weeks; once that's sorted, I don't think there's anything really blocking, it's just a matter of looking at it. On the testing side — let me think, I'm going to have to remember what all we do on the EC side.
D: I'm not sure what the unit tests look like for encoding and decoding; we certainly have more integration-style tests that run through everything with different workloads. At the moment, I guess, we have different test suites in teuthology running for each erasure code, so we probably want to copy one of those and make it use clay code.
B: Yeah, yeah. I'd say, one: look at the unit tests — there are the ones that test the plugins. If you look in the test erasure-code directory, there's a bunch of stuff; you can probably just copy one of those and update the tests. There's one each for, like, jerasure, ISA, LRC and SHEC. That would be a start.
B
You
already
have
that
and
then
the
integration
test,
I
think
the
last
thing
is:
there's
there's
an
on
regression,
piece
that
I've
never
actually
looked
at,
but
I
think
what
it
is.
It's
a
bunch
of
random
data,
that's
been
encoded
and
then
stored
and
the
tests
just
make
sure
that
I
could
do
you
code
it
and
so
that
we
don't
like
break
the
code
over
time
with
some
small
change
or
something
in
a
way
that
makes
it
unsuitable.
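The non-regression idea can be illustrated with a toy round trip: encode data once, keep the encoded chunks as golden data, and later assert they still decode to the original. This uses simple XOR parity (k=2, m=1) as a stand-in, not a real Ceph plugin:

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data):
    # Split into two data chunks plus one XOR parity chunk (k=2, m=1).
    half = len(data) // 2
    a, b = data[:half], data[half:half * 2]
    return a, b, xor_bytes(a, b)

def recover_first(b, parity):
    # Any single lost chunk is recoverable by XOR-ing the other two.
    return xor_bytes(b, parity)

stored = encode(b"abcdefgh")  # "golden" chunks, kept across releases
a, b, parity = stored
```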
B: Yeah, I mean, I'd just start with that — a test in that same directory running similar tests to what the other ones are doing. The SHEC one has a few more tests, but the other ones are pretty basic.
B: But it's basically just a bunch of erasure code profiles, and then a bunch of data that was encoded — with its encoded data and its decoded data — and the test just goes and decodes it all and makes sure it's the same, and vice versa. Oh, and that's the thing that we should probably update so that we have new data — like, there's no clay data here. I don't think there's any for SHEC either; I'm not too sure.
B: Yeah, I think the thing to do would be to just come up with a bunch of erasure code profiles that stack clay on top of jerasure and maybe ISA and whatever — some variety with different values of k and m — and then just feed it a bunch of... it doesn't matter what data you're encoding, just a bunch of random data, but have it be a little bit different from each other, not just zeros or something, and then dump them in there. And that'll also catch—
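Generating that test input might look something like this — the profile keys (`scalar_mds`, `k`, `m`) follow the clay plugin's documented parameters, but treat the exact names and values here as assumptions for illustration:

```python
import itertools
import os

# Hypothetical matrix: clay stacked over jerasure or ISA, with a few
# different k/m combinations.
profiles = [
    {"plugin": "clay", "scalar_mds": mds, "k": k, "m": m}
    for mds, (k, m) in itertools.product(
        ["jerasure", "isa"], [(2, 1), (4, 2), (8, 3)])
]

# One random payload per profile -- distinct data, not all zeros.
payloads = [os.urandom(4096) for _ in profiles]
```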
B: Yeah, otherwise I think probably the big thing is just to add a facet to the teuthology tests and then run it through teuthology. That might actually be something that whoever decides to test this can do pretty easily. So once the code's reviewed, we can start just hammering on it and make sure things hold up.
B: I can give you the path — it's a little bit involved, so if you're not used to using it: essentially there are workloads that use different erasure code profiles and then hammer it. You can basically look at the ones that are in there, probably copy one, and change the erasure code profile to add new cases to the test matrix. And if it's not clear, just ask one of us on IRC and we can help you figure it out.
B: But yeah, I mean, I'm excited to hear that the revisions are done and it's ready for review again; I'm going to tag it.
E: All right, this is mostly derived from blueprints and things like that from years ago. The original proposals were basically that there be a single unified directory/bucket for all RBD images, and then the ID object would basically say, well, this particular RBD image XYZ — all its data is actually in namespace ABC.
E: —with the header outside the namespace. So what I proposed, you know, is to basically say no: if we're going to do this, let's do it totally isolated — the equivalent of, like, RGW buckets, where a namespace would be equivalent to a bucket. If I put an image in a given namespace, it's totally isolated in that namespace. So if you do an rbd ls, you won't see it; you have to do an rbd ls against that namespace.
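The isolation being proposed can be modeled with a tiny per-namespace directory map — purely illustrative data structures, not how RBD actually stores its directory:

```python
# Each namespace gets its own image directory; a plain listing only sees
# the default namespace, so images in "ns-abc" are invisible to it.
directories = {
    "": {"vm-root"},             # default namespace
    "ns-abc": {"img1", "img2"},  # isolated namespace
}

def rbd_ls(namespace=""):
    return sorted(directories.get(namespace, ()))
```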
B: Never a dull moment. So completely separating out the directories per RBD namespace — that's certainly the cleanest thing; there's just no complexity around them interfering with each other or whatever. So it's attractive from that context. I just want to make sure we've thought through the usability piece, where you won't be able to list the RBD images if you don't know—
E: I'm proposing that we have an object that tracks in-use namespaces, like an RBD namespaces object. So there'd be several objects — some we already have, like the RBD mirroring object — that sit outside the namespace. It doesn't matter what namespace images are being stored in: as a storage admin, when I set up RBD mirroring, I'm setting it up globally; I'm not setting it up on a per-namespace basis, like, oh, let me opt this one out, and things like that.
E: That's pretty complicated. And then, as a storage admin, I would need to know what namespaces are in use. So I'm kind of proposing that we would also track those, as, like, a directory of in-use namespaces, with two proposals there. One of them is that the admin would have to explicitly add the namespace before it can be used — so if you tried to do an image create in a namespace that hasn't been added to that directory, it would fail.
B: Explicitly creating them — that seems like it makes sense. Can I back up just one second, though? You said that the mirroring object is external to the namespace because that's an admin operation to configure — that makes sense — but you list it right next to the RBD trash, and I would think trash is something that happens inside the namespace, where, if I delete an image—
E: Or whatever — as I said, the RBD directory, the group directory and the RBD trash should each be duplicated per namespace. For things that a given user can read, let's try not to leak the data. But for things like RBD mirroring, right now it stores not only your peer configuration, it also stores the list of images that are being mirrored. So for each of those images, we'd have to also add on a little tag that says: this one is also in namespace ABC.
E: The reason you would be writing to that object is if you enabled mirroring on something, and that's why, lower down, I say that under the rbd-mirror daemon the caps are expanded to include the mirroring namespace, and the profile rbd caps that we have as a shortcut could be modified to allow class-method execute permissions on that specific class method, so the user doesn't get full permission to touch that object — it would be something like, hey, go add me to it — and also, at a later time—
E: On the OSD side, it would be great if we could add just a little helper method for the class methods to say: hey, please validate this cap for me. Then, within that class method of "hey, add this image to the directory in this namespace," we can say: does this person have permission, given their caps, to touch that namespace? And if they don't, they can't add that image, or delete it.
B: There are sort of three layers here: one is the cluster operator, who's creating namespaces; one is a user who has access to a single namespace and is allowed to create and delete images within that single namespace — they have to be able to poke at this object just like they poke at the directory; and then the last one is, like, I just have access to one image, and I'm just reading and writing that one image. Right, okay.
B: Like, I think that one makes sense. My actual issue is that when you get to the point where you're restricting calls to class methods in order to control access to this RBD mirroring object, that's the exact same problem we were trying to solve with the RBD directory — and we decided it was sort of too complicated, and so we just put the RBD directory inside the namespace, right?
B: So it's the same problem: you create an image or delete an image and you need to add or remove it from the directory, and the previous proposal had a global RBD directory everyone could write to, and we had to restrict that through the class method. So it's, like, an identical problem, right?
B
I
guess
my
question
is
I
I
think
it's
kind
of
attractive
to
just
have
a
separate
directory
printing
space
and
have
the
name
space
creation.
Be
this
like
operator
operation?
Can
we
just
do
the
same
thing
with
our
buddy
mirroring
object,
and
then
we
bypass
all
the
weird
cap
enforcement,
unlike
class
methods,.
B: Yeah, it just feels analogous to the directory. So it feels like, if we're going to go to the effort of making sure that the cap system is robust enough to restrict who can add to and remove from this shared object that's outside of the namespace, then we could do the exact same thing for the directory.
E: So yeah, it becomes kind of weird, because the ones that are in namespaces wouldn't contain your peer information; they'd just contain images. Right now the mirroring object contains both your configuration and what images are being mirrored, but if it's in a namespace, you're not going to have, like, namespace ABC say, "oh, but here are the peers I want to replicate with" — because it would be your storage admin that sets up all that replication stuff, not you as a tenant. It's not that big of a deal, yeah.
E: Potentially watching a number of objects — it's no different than the rbd-mirror daemon watching N images, I guess, because for every image that's open and that it's writing to — I mean, yeah, it has all those images open with a watch established on the RBD image header. Yeah.
B: Yeah, I had this vague recollection of talking, back when we were locking down caps, about how with the cap syntax for calling a class method you specify the class and the method name, but then you could also specify some arbitrary key-value pairs that are interpreted by the class however the class wants to interpret them. But I don't see it in the code — I think we just talked about it and didn't actually do it. There's nothing like this, but I think that's what you need.
E: It'd be, like, the same thing that's happening right now inside the, you know, primary replicated PG or wherever it is. Can I—
B: To be specific, the cap would have two parts: the first one would say you're allowed to read and write to namespace foo, and the second part would say you're allowed to call rbd.add or whatever; and rbd.add would ask the cap, "are they allowed to read and write namespace foo?" — yes — okay, it passes. Okay.
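A toy model of that two-part check — not Ceph's real cap grammar — in which the class method consults the caller's namespace grant before mutating the shared object:

```python
# Part 1: namespace read/write grants. Part 2: callable class methods.
caps = {
    "namespaces": {("rw", "foo")},
    "class_methods": {"rbd.add"},
}

def may_execute(caps, namespace, method):
    # The class method itself asks: does this caller hold rw on the
    # namespace it is trying to touch, and may it call this method at all?
    return (("rw", namespace) in caps["namespaces"]
            and method in caps["class_methods"])
```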
E: So that would work — that was proposal one. I'm also fine with saying it's 100% separate; rbd-mirror changes more, because now, if we're going to deal with the RBD namespaces thing with this listing, rbd-mirror will need to know which namespaces it has to, you know, periodically poll to say: hey, do you have images to mirror?
B: My recollection is that we were talking about the same set of issues for the RBD directory before, and it still feels to me like it's basically the same problem. The data is less sensitive in the directory — it's just the name of the image — whereas in RBD mirroring there's also some, like—
B: I don't have a good sense, on the mirroring side, of whether it's better to just have the mirror daemon watch a bunch of separate objects or to go to the effort of locking down the class methods. It just makes me nervous that we're doing a different thing for the directory and the mirroring. But, Sam—
B: I can't tell. I think for OpenStack that is the case — it's all going through Cinder and, like, who cares — but I can imagine scenarios where you want to say: you have your own little program and can create whatever images you want, but you're isolated from everyone else; you give out sort of a workgroup-operator type role.
C: I mean, the point of namespaces is that you can set up clients so that when you distribute the keys, they can't go look at other people's stuff. But that's because clients need to be able to look at the actual data — like, you know, the hypervisor does — in which case it's less secure, which is how we've gotten away with it. But the hypervisor case makes sense; like, there—
C: That's an attack surface for people to exploit. But you need an admin that can do RBD admin stuff, and I just can't imagine any scenario in which you say: oh, we want to have a separate service running that's allowed to create images, but it's not fully privileged, because we want a different service to do more.
B: And I think in a lot of these scenarios you're not so much limiting on quota; you're just billing based on usage after the fact — like, the more they use, the better. But I can imagine a case — I mean, the whole goal of namespaces is to separate out the data layout from the sort of policy/security part, so you have different workgroups that are logically using different pools or whatever, but they're not actually different pools, because you want to have a single set of PGs.
B: Okay, they're not logically using different pools; they're using different security domains — right, exactly. So in that case, those two different user groups — one of them might set up a Cinder thing, one of them might use the command line, who knows — but it feels like they should be able to create and delete images without going and poking at somebody else's stuff.
C: It just sounds like you're going to a lot of effort to support a use case that I don't think anyone's going to care about, and it seems like it would be a lot easier if you just said, nope, we're not going to support that use case. But I don't have any objection to either of the options you've talked about. So it's just, you know.
B
Is
a
related
question,
I
think
about
what
the
what
the
security
mode
model
is
or
whatever
I
guess,
user
roles?
I?
Guess
it's
about
resource,
because
you
said
that
that
mirroring
is
something
that
you
set
up
as
as
an
operator
as
like
a
cluster
operator,
but
we
sort
of
have
these
three
roles
right.
There's
like
the
person
who's
reading,
writing
a
single
image.
There's
the
person
that's
creating
and
deleting
images
within
a
namespace
and
then
there's
a
person
who
like
runs
the
whole
cluster
and
can
create
new
spaces.
E: —to, like, disable the journaling feature bit. Like, okay, got it. Okay, anyway — so basically, right now, if you can write to the RBD image header — which is anyone that's basically reading or writing the image — or you're in this other intermediate role... like, we don't have the three levels of control right now, because we'd have to lock down to specific class methods. Now that we can lock down specific class methods, we probably could get to that level, but it's not there.
B: Yeah, okay. I mean, it still feels awkward to have different strategies for the directory and mirroring, but it might be that their usage patterns are different enough that it sort of justifies it, because mirroring has only one privileged reader and just unprivileged writers, whereas the directory has unprivileged readers. It's a little different, but—
E: These tasks are basically going to be able to be done incrementally, right, so you can keep adding on later. So if, for a given release, it's like, "oh, if you're using namespaces, RBD mirroring doesn't work" — because that's where we cut Nautilus, and that's how much was done — I'm okay with that. Yeah, right.
B: It kind of feels to me like the big user-facing change here, aside from being able to use namespaces at all, is that a namespace is something that you explicitly create — it's like a first-class concept. Alright, I create a namespace for workgroup one and another one for workgroup two. And that sounds fine to me, yeah.
B: That almost suggests that having a single directory object that has every namespace in it might be easier, because that can track state like "this image is also in the trash, and so the namespace is no longer used," or, you know, whatever it is. I mean, I don't know how the trash works right now — when you move something into the trash, does it get removed from the directory? Yeah — so the list of namespaces could just be—
B: You know, a class operation on the directory that just takes a union or whatever — all the distinct namespaces mentioned in the directory are also stored in a little set at the beginning, and so then the listing would just sort of magically happen. But then the trash object would also have to be global, and all the rest. Just—
D: Wait, wait — with clone v2, do you just not need—
E: Another image — okay, yeah, that makes sense. So it does all the atomic reference counting and things like that for its children, and it can detect if a parent snapshot is still in use. If you want to delete it, it lets you delete it, but it moves it to a snapshot trash namespace, and it will automatically get deleted when the image gets deleted or the last clone gets deleted.
B: This is a bit of a tangent, but I seem to remember a discussion recently where somebody noticed that there was a big performance differential between v1 and v2 images. Was that because they had the object map enabled on v2 by default, and so the writes had to go update the object map before they could proceed?
B: Okay, well, I think my suggestion is still just: I would write out what the security roles, or whatever they are, are, and what they're allowed to do, and just validate that with, I don't know, some OpenStack people or something, just to make sure it makes sense. And then have one last think about whether you use the shared directory or the per-namespace directory. It still feels like we're sort of leaning towards the per-namespace one, like we wrote it up; that seems like the way to go. Alright.
D
Okay, so there was a discussion recently on the mailing list related to Seastar logging, but I think it's kind of independent of Seastar as well. We had this issue with our current logging, primarily, where it's incredibly slow, because it's doing tons of copies of strings everywhere. For example, someone on the mailing list mentioned that he tried replacing the dout with a simple LTTng trace point using the same log format, which still has the same kind of string copying, and there's no improvement in the performance at all. So that suggests that, in order to get any kind of improvement in performance and be able to turn on logging more often without such a big performance hit, we need to have fewer mem copies, which means a structured logging format rather than using the streaming operators everywhere, and probably some kind of binary format for the actual log, where we're not generating text for all our internal structures, but rather just copying certain fields directly.
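The contrast being drawn, stream-operator formatting on the hot path versus copying a few fixed fields into a binary record and formatting later, can be illustrated like this. The struct layouts and names are made up for the sketch; only the pattern is the point.

```cpp
#include <cstdint>
#include <cstring>
#include <sstream>
#include <string>

// Illustrative stand-in for some internal structure we want to log.
struct OpInfo {
    uint64_t pg_id;
    uint64_t epoch;
    uint64_t op_seq;
};

// Text path: every log call allocates and formats a string at log time.
std::string log_text(const OpInfo& op) {
    std::ostringstream ss;
    ss << "pg " << op.pg_id << " epoch " << op.epoch << " op " << op.op_seq;
    return ss.str();
}

// Binary path: a fixed-size record, just memcpy'd fields, no formatting.
struct BinRecord {
    uint64_t fields[3];
};

BinRecord log_binary(const OpInfo& op) {
    BinRecord r;
    std::memcpy(r.fields, &op, sizeof(r.fields));
    return r;
}

// The consumer, not the hot path, turns binary records into text on demand.
std::string render(const BinRecord& r) {
    std::ostringstream ss;
    ss << "pg " << r.fields[0] << " epoch " << r.fields[1]
       << " op " << r.fields[2];
    return ss.str();
}
```

The binary path moves all string allocation and formatting out of the daemon's hot path, which is exactly the saving the mailing-list experiment suggested the streaming operators were preventing.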
D
On the output-parsing and consumer side, there are other ways that we can try to minimize how much we're outputting as well. For example, on the OSD, every single output from a debug statement in a placement group comes with a whole bunch of fields from the placement group, when we really only need to have that information, perhaps, for the times that the membership of that placement group changes, yeah.
B
Yeah, the current logging is a little bit of a worst case, because, like, 80 percent of every line is identical to the line before; it's just a dump of this huge PG thing, and that's probably where the time is spent, and then you have a little bit of stuff at the end. So just getting rid of the prefixes might be a big win. It's always hard to, like, give up that information, because you always want to know what it is, right.
B
If we're gonna go to a structured log entry, then what's the downside of just using trace points? Because, I mean, maybe this is sort of my naive assumption, but isn't the whole point of trace points that they're, like, zero cost when they're disabled, and even when they're enabled they're, like, highly optimized, super fast? I don't really know how true that is, but I thought that was the whole point of using something like LTTng.
D
I think I mostly agree. One thing I did come across was that the way they implement the producer-consumer model is through a ring buffer, which has two modes of operation: one where it loses events, and one where it starts overwriting. Or, sorry: either it keeps the older events but drops new ones, or it overwrites the older ones. It never blocks; it either throws events away or overwrites them.
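The two ring-buffer policies just described, discard-new versus overwrite-oldest, can be sketched in a few lines. This is a toy model of the semantics, not LTTng's actual buffer implementation.

```cpp
#include <cstddef>
#include <deque>

// "Discard" drops new events when the buffer is full; "Overwrite" evicts
// the oldest event instead. Neither mode ever blocks the producer.
enum class Mode { Discard, Overwrite };

class RingBuffer {
public:
    RingBuffer(std::size_t capacity, Mode mode)
        : capacity_(capacity), mode_(mode) {}

    // Returns false only when a Discard-mode buffer throws the event away.
    bool push(int event) {
        if (buf_.size() < capacity_) {
            buf_.push_back(event);
            return true;
        }
        if (mode_ == Mode::Discard)
            return false;              // full: lose the new event
        buf_.pop_front();              // full: overwrite the oldest event
        buf_.push_back(event);
        return true;
    }

    std::size_t size() const { return buf_.size(); }
    int front() const { return buf_.front(); }

private:
    std::size_t capacity_;
    Mode mode_;
    std::deque<int> buf_;
};
```

For crash debugging, overwrite-oldest is usually what you want: the buffer always holds the most recent events leading up to the failure.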
B
Going with, like, straight-up LTTng also sounds attractive, because it's just sort of pushing us all the way to the other end of the spectrum, where trace points are something you construct carefully: you think about what fields you put in them, which is kind of what we want. I guess the thing that worries me about it is that, in the case where a user hits a crash and you're like, "give me a log", it's more complicated for them to do that. They don't just, like, type one command and copy a file out of /var/log or whatever; they probably have to do something else. But I've never actually used it, so I don't know how hard it is to do. Yeah.
D
So I think, well, the idea is, the architecture is basically that there's a producer and a consumer, and the consumer is typically, traditionally, a command-line application that you'd run manually. But I think, if we did want to use this for logging, we'd probably want to make that happen automatically and just go to a file, perhaps on a per-core basis, similar to the way Seastar logging works.
D
So basically you can provide it with at least a wildcard, if not a regex, based on the name of the trace point, so we could name trace points based on, like, our current debug subsystems, for example, along with a level: say, like, error, debug, info, warn, that kind of thing.
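Selecting trace points by name with a wildcard, as described, could look like the following. The "subsystem:level" naming scheme is an assumption made for illustration; only trailing-star wildcards, the simplest case mentioned, are handled.

```cpp
#include <string>

// Hypothetical sketch: trace points are named "<subsystem>:<level>" after
// the existing debug subsystems, and a trailing-star wildcard enables a
// whole group at once (e.g. "osd:*" enables every osd-subsystem event).
bool matches(const std::string& pattern, const std::string& name) {
    if (!pattern.empty() && pattern.back() == '*') {
        std::size_t n = pattern.size() - 1;   // length of the fixed prefix
        return name.compare(0, n, pattern, 0, n) == 0;
    }
    return pattern == name;                    // no wildcard: exact match
}
```

A real consumer would evaluate a list of such patterns against each event name and attach per-level thresholds, but the matching core is this small.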
B
I guess I would be a little nervous if our expected mode of operation is that this is just always on, and you always have this, like, LTTng tracer running, just logging normal stuff. Like, if it's something that we expect to run on a normal system in production... I mean, we certainly want to be able to do that, so you should be able to trace a production system without having a huge impact. So, I mean, I guess there's that, but it still feels like it should be exceptional.
C
Well, so maybe this isn't something everybody wants in there, but I'd really like us to start logging our operations more precisely than we do with the op tracker, because we'll need that if everything is a future, yeah, and that will be running at some level all the time. This could be a separate system, but if we're gonna do structured stuff, I'm really attracted to having them be the same system, and just, like, maybe you set a flag that says, like...
C
Well, sure, but what I'm saying is that we have a problem with our users right now where they say, "oh, we have this problem", we go, "well, do you have any logging turned on?", and they go, "no", and we're like, "well, we have no idea what went wrong then." So it would be really good if we could have a minimal amount of, like, seeing what paths happened on the objects that are live right now.
B
It still kind of feels to me like what you're talking about, Greg, is in that op tracker category, which is different from logging, because you want to keep track of live requests, which might be super old, right? So the op tracker will have something that started 90 seconds ago and is still stuck, whereas logging is just gonna have, like, the last, you know, 10 seconds of events or whatever it is, right.
D
With tracing those, that's exactly what we'd need: the same kind of trace points to track, like, distributed tracing, a graph for a given request, where you're tracing the different stuff that it goes through in that pipeline. I think it's kind of an orthogonal problem to how we're doing the logging itself.
C
Well, that's one use of trace points; we're talking about logging. So yes, that's what we want our logging to do, but, I mean, yes, they may be completely different systems, other than some syntactic sugar. That could be, too, but I think that both of these are going to need a structured, lightweight format.
B
You can accomplish the same thing that the op tracker is doing if you have the logging on, right: then an external tool can collate the entries and, like, identify the things that got done, right. But that requires you to sort of have logging turned on before the fact instead of after the fact; that's sort of the beauty of the op tracker that we have now, I mean.
C
A little bit sad... well, no, I mean, part of it, I think, is that even if we have nice, cheaper logging, I think that's still really hard for anyone to debug with and understand what's going on, right. I think we need to move to more of an introspective model, where you say: okay, the OSD is stuck. What is the OSD doing right now? Which of these operations is waiting on things, or is waiting on what? Oh, they're all waiting for this one thread to finish flushing to disk.
B
To be fair, I think that, like, modern tools make this much easier, because when they're structured you can do all these queries, and so you can just, like... I've never actually done it, but presumably with these fancy queries for, like, a search database or whatever, you can do it. But yeah, I think you're right; I think we need to.
D
Not even that; the best example is not log search, but, like, the distributed tracing stuff like Jaeger, which can, like, visualize where the latency is in a bunch of different requests as they go through the system. And if you see a bunch that are perhaps stuck in this one state in this one daemon, then you can figure out, perhaps, what's going on with that daemon from there.
B
I guess, and I don't know, this is my uninformed gut opinion, but if we can keep LTTng's role confined to one where it's not, like, an internal dependency, like, you can run fine without it, and the user experience doesn't change unless you're, like, profiling or debugging or developing, but, like, for a normal user there's just no dependency, then that would be nice, right, because then it wouldn't matter as much. Like, I would get nervous if we relied on LTTng.
B
But half of those debug messages are just our exceptional situations: like, you got into a weird corner case that, like, shouldn't have happened, and you're just printing out, like, "why am I here?", and, like, "there it is, X, that's what I expected", and so you just want to, like, log that so you don't lose that information, yeah. So at a certain...
G
Well, you want that if, what if, a subsequent crash happens, and so then you see that you output this, you hit this place in the code that you didn't expect, and then it later asserted. So you now have that information; but that's why you put that message there, because you didn't know why it would get there, yeah.
K
One aspect of this that I think may be useful to at least think about is that, my assumption is, whenever, like, a user or a system administrator sees something in a log that, you know, precedes an OSD crashing, or something crashing, their first instinct is to type it into Google and just say, you know: here's what I saw, what does this mean? What do I do now? So, you know, as we're talking about all this, you know, the easiest mechanism, I guess...
E
I mean, if you could do this thing where there's a built-in LTTng consumer daemon inside the OSDs or whatever, it could basically say: well, if it's a certain class of thing, it's an error or whatever, I can also redirect it to this log or whatever, like the current logging infrastructure does. So you have a consistent way to do logging and tracing: like, exceptional logging, there's an error, and a way to do tracing, but it's all, you know, using trace points, and then it's just that the consumer grabs it and says: oh, that's actually pretty important to me, dump it out. No formatting: convert it to something that's human-readable and throw it someplace where someone is going to see it. So.
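The consumer-side policy just sketched, everything flows through the trace stream, and only events the consumer deems important get rendered into a human-readable log, could look roughly like this. All names here are illustrative, not a real LTTng consumer API.

```cpp
#include <string>
#include <vector>

// Severity levels mirroring the error/warn/info/debug split mentioned above.
enum class Level { Debug = 0, Info = 1, Warn = 2, Error = 3 };

struct Event {
    Level level;
    std::string msg;
};

// The consumer keeps everything in its (binary) stream, but only renders
// events at or above the threshold into the text log a human will read.
std::vector<std::string> render_log(const std::vector<Event>& stream,
                                    Level threshold) {
    std::vector<std::string> out;
    for (const auto& e : stream)
        if (e.level >= threshold)   // scoped enums support relational compare
            out.push_back(e.msg);
    return out;
}
```

This keeps the producer side uniform (everything is a trace point) while preserving the familiar "errors and warnings end up in a readable log file" user experience.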
D
Yeah, yes, together, but the cost right now is all the string processing, so even logging into memory is too expensive. I guess, theoretically, we could, like, save all the objects that we wanted to log at that point in time and kind of snapshot them, but that's kind of crazy in terms of object lifetimes.
C
One thing we could think about is trying to do some kind of copy of the data structures, dumping them somewhere they'll be static, and putting pointers to them in the trace points, so then we can, like, print out all of whatever we're interested in. I'm not sure if there's a good way to do that, with the garbage collection or the reference issues that would come up.
C
Yeah, like, David was asking about the PG structure, and, I mean, it could be that whenever we change the PG info or whatever, we, like, take a copy of the new one and start using that for dumping out to the trace points. So we can change the live one whenever, but the trace point consumer can go see what the old values were.
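The copy-on-update scheme just proposed, writers publish a fresh copy while trace consumers keep reading whatever version was current when they looked, is essentially a shared-pointer swap. This is a sketch under that interpretation; `PgInfo` is a stand-in, not Ceph's real PG structure.

```cpp
#include <memory>
#include <mutex>

// Stand-in for the real placement-group info structure.
struct PgInfo {
    int epoch = 0;
    int objects = 0;
};

class PgInfoHolder {
public:
    // Writer path: copy, modify, and publish the new version; the live
    // structure is never mutated while a consumer might be reading it.
    void update(int epoch, int objects) {
        auto next = std::make_shared<PgInfo>(PgInfo{epoch, objects});
        std::lock_guard<std::mutex> l(lock_);
        current_ = next;
    }

    // Consumer path: grab a stable snapshot. shared_ptr keeps this version
    // alive even after the writer publishes newer ones, which answers the
    // lifetime/reference concern raised above.
    std::shared_ptr<const PgInfo> snapshot() const {
        std::lock_guard<std::mutex> l(lock_);
        return current_;
    }

private:
    mutable std::mutex lock_;
    std::shared_ptr<const PgInfo> current_ = std::make_shared<PgInfo>();
};
```

The cost is one allocation and copy per update, which is why it only makes sense for structures that change rarely relative to how often they are logged, like PG membership.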
B
Okay, so stepping back a little bit here, maybe we can come to some high-level conclusions. There are sort of three options, right: there's going whole-hog LTTng; there's inventing our own binary log thing; and there's sort of the free-form dout stuff we have now.
B
It feels like those are the same thing already implemented. I think the one thing, well, not the one thing, but the one other thing, that worries me about LTTng is that all the trace points we have are, like, super tedious to define, because you have to, like, put the tracepoint, which looks like a macro call, in the code, and then in a different file, the .tp file, you have to define all the argument types. It's, like... it's tedious.
B
But, I mean, it feels like there's just... there's the case where I'm just, like, writing the code, and there's, like, some... I can't figure out what's going on, and I just want to dump the whole structure; there are gonna be trace points like that. Like, there are a bunch of places in the monitor where, like, if the CRC mismatches encoding an OSDMap, I, like, do a hex dump of the encoded version that mismatched, so that I can, like, go back and compare them and see what actually was different.
B
I don't know, I mean, every time it's not there and there's an OSD CRC error... like, I had one maybe two years ago where the CRC from the monitor just doesn't match on the OSD. It dumps a hex dump of the thing that didn't match, and, I mean, those do pop up: every, like, two months there's a bug where there's an OSDMap encoding mismatch, and we have to go figure out where the encoding feature wasn't right and why it was.
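The dump-on-mismatch pattern described here is simple to sketch: verify a checksum over an encoded blob, and emit a hex dump of the local bytes only when it fails, so the two sides can be compared after the fact. The checksum below is a trivial placeholder, not Ceph's actual CRC; the pattern is the point.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Trivial placeholder checksum (NOT Ceph's real CRC algorithm).
uint32_t checksum(const std::vector<uint8_t>& buf) {
    uint32_t c = 0;
    for (uint8_t b : buf) c = c * 31 + b;
    return c;
}

// Render a buffer as lowercase hex, two digits per byte.
std::string hex_dump(const std::vector<uint8_t>& buf) {
    static const char* digits = "0123456789abcdef";
    std::string out;
    for (uint8_t b : buf) {
        out += digits[b >> 4];
        out += digits[b & 0x0f];
    }
    return out;
}

// Returns an empty string when the checksums agree; otherwise a hex dump of
// the local encoding, kept around so it can be diffed against the peer's.
std::string verify(const std::vector<uint8_t>& buf, uint32_t expected) {
    if (checksum(buf) == expected) return "";
    return hex_dump(buf);
}
```

The dump is only produced on the rare failure path, so its formatting cost is irrelevant, which is exactly why this kind of exceptional-case logging is cheap to keep even in a performance-sensitive daemon.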
D
I think it's actually unrelated to Seastar directly; I think we want to do this independent of Seastar, because even today we have the issue where someone hits a bug, and, you know, we can't ask them to turn on logging, because it's too slow, yeah.
D
The
existing
ad
stuff
in
for
now
and
get
it
first,
we
need
to
like,
provide
whether
we
can
switch
everything
to
over
to
LT
t
ng
and
then
you
know
start
slowly
adding
trace
points
where
make
sense
in
the
OSD
and
maybe
start
that
at
that
point,
starting
to
move
them
all
the
ad
stuff.
There
Oh.
B
Alright, cool, that's everything on the list. Any other topics? No? I don't have any. Anyway, look at that: it's exactly two hours. We're doing good at this.