From YouTube: February 2023 OpenZFS Leadership Meeting
Description
Agenda: block cloning; shared slog; mirrored L2ARC; I/O rate limiting
full notes: https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit#
A
All right, I guess we should get started. Pawel, it looks like you have the first item on the agenda, to talk about the block cloning status.
B
Correct. So I was able to finish the last missing bit, which was the ZIL. The problem was how to properly support block cloning across multiple datasets, because of how the ZIL is replayed.
B
What happens if we have some ZIL log entries that are trying to clone from one file system to another file system, but then, when we import the pool, before we mount the destination file system, we destroy the source file system? The block pointers are no longer valid. It was suggested to use zil_claim, so this is what I did. There are two situations. First, when we import the pool, we use zil_claim to just bump the references in the BRT for all the block pointers.
B
So then, when we replay the ZIL, we only add the block pointers to the file; we don't bump the references anymore. The other situation is when we do zil_claim and bump the references, but we never mount the destination file system: we destroy the destination file system before mounting it, so before replaying the ZIL. Then we have to decrement all the references. Alexander also suggested—another issue that I think is not 100 percent addressed yet is with zvols.
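The claim/replay bookkeeping described above can be sketched as a toy model. This is illustrative Python, not the actual OpenZFS C code; all names here are invented for the sketch. It shows the two situations: claim bumps BRT references at import, replay only links blocks into the file, and destroying the dataset before mount drops the claimed references again.

```python
# Toy model of the zil_claim / replay protocol described above
# (illustrative only; not the real OpenZFS implementation).
from collections import Counter

class BRT:
    """Stand-in for the Block Reference Table's refcounts."""
    def __init__(self):
        self.refs = Counter()

    def claim(self, bps):
        # Pool import: zil_claim bumps a reference for every cloned
        # block pointer found in the log.
        for bp in bps:
            self.refs[bp] += 1

    def replay(self, bps, file):
        # Mount-time replay: just attach the block pointers to the
        # file; the references were already bumped during claim.
        file.extend(bps)

    def drop(self, bps):
        # Dataset destroyed before it was ever mounted: undo the
        # references taken during claim.
        for bp in bps:
            self.refs[bp] -= 1

# Situation 1: import, then mount and replay.
brt = BRT()
log = ["bp1", "bp2"]
brt.claim(log)
file = []
brt.replay(log, file)

# Situation 2: import, then destroy before mounting.
brt2 = BRT()
brt2.claim(log)
brt2.drop(log)
```

Running the first path leaves each block pointer with one reference and both blocks linked into the file; the second path ends with the refcounts back at zero.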
B
Currently we don't support block cloning for zvols, but let's say we start supporting block cloning for them in the future, and then we have some ZIL entries for a zvol and we try to import this pool on some older ZFS version. Then, on pool import, we will bump the references during zil_claim, but we won't be able to replay the ZIL.
B
My first attempt was to deny pool import, but Alexander suggested doing something else: try to free the entries.
B
So my initial understanding was that his suggestion was to just free the entries and return success, so during ZIL replay I would just decrease the references that were increased during zil_claim. But apparently, if I understand correctly, there can be another, or maybe a previous, ZIL entry that fails—I haven't checked that much—and my understanding is that the ZIL replay will then stop.
B
If my understanding is correct. But other than that, I found one more issue, which I—
A
B
Yes, but we would need a second feature flag for zvols.
C
If I may—as I recall, on a PR some time ago, a few months ago, when implementing Linux support for, what's it called, the nullfs mount—not nullfs, but the equivalent—we added a few log entries which are implemented only for Linux, not for FreeBSD. At that point we decided that we were not going to diverge the two implementations—keeping them compatible, but allowing non-existing, not-implemented—
C
—log entries for some operating system, in this case FreeBSD. So log replay will just go up to the first unknown record and then stop. That was the particular example I was thinking about in the context of the BRT: it may happen that a pool was used with both the BRT and whatever that feature was.
C
So I was thinking that in this situation we should call free for every log entry after that, up to the point to which we claimed. I was trying to look at the code, trying to figure out whether maybe it's already supported—because we do call zil_destroy or something like that after the replay attempt—but I haven't actually found that zil_destroy would care about the exact failure position.
C
So either it's broken, or it's not working, or it was never designed to work—at least that's my impression after half an hour there yesterday; I had no chance to look deeper. But if somebody knows that area better and could take a look, it would be good to understand. As I understand it, a proper solution would be just to replay normally as much as we can.
C
Then we get the first error, and for every record after that we must call free, to free all their effects. That can include TX_WRITE data blocks, it includes BRT entries, and who knows what it may include in the future. I think if we would pass additional arguments to zil_parse—the function that actually traverses through them—we could just make it so that after the first error it doesn't return, but, instead of calling the normal process function, just calls the free function and continues. Something like that, but—
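The zil_parse change being proposed can be sketched roughly like this. This is a hypothetical model of the idea—not the real zil_parse API or its signature—using invented names: walk the log, apply each record until the first failure, then switch to calling the free callback for every remaining record instead of stopping.

```python
# Hypothetical sketch of the proposed "replay until first error,
# then free the rest" traversal (invented names, not the real API).
def parse_log(records, replay_cb, free_cb):
    """Apply replay_cb to each record; after the first failure,
    call free_cb on that record and every record after it."""
    failed = False
    for rec in records:
        if failed:
            free_cb(rec)  # release data blocks, BRT refs, etc.
            continue
        try:
            replay_cb(rec)
        except ValueError:
            failed = True
            free_cb(rec)
    return not failed

applied, freed = [], []

def replay(rec):
    if rec == "unknown":
        raise ValueError("unsupported record type")
    applied.append(rec)

ok = parse_log(["w1", "w2", "unknown", "w4"], replay, freed.append)
```

After the run, everything before the bad record has been applied, and the bad record plus everything after it has been freed rather than silently leaked.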
C
—forty hours, maybe a different file, or who knows what happened; it's better to get to the last known valid state. And I think we are printing something in the logs saying, hey, we can't replay, so the user knows that something bad happened. Yeah, it's the best possible, but it needs investigation—what to do with the free after that, because, as I said, zil_destroy is called after that. But what exactly it destroys is slightly unclear.
B
To my understanding, it's actually a surprise that this is how we handle this case, because I assumed initially that if we found some entries that we are not able to replay, we won't mount the file system—we will deny mounting—so we don't lose any data. Because currently, when we stop on an error, we still continue with the mount, right? So there is no way for the user to somehow stop this from happening.
B
And there is another issue—well, I didn't verify; I'm not sure if I have a way to verify that—but during zil_claim... because normally, when we replay, the first thing we do is check if we have—
B
Okay, I'm not sure when I cut out—did you hear everything I said, or—?
A
Up to when you were talking about what to do when we're replaying.
B
Okay, so yeah. The problem Alexander mentioned applies to replaying writes as well: we can stop before replaying a write. But basically, I'm not sure if I agree that this is the best approach, to just mount the file system and drop all the entries altogether.
B
Yeah, but my understanding was that, of course, most of the time you want to import a clean pool, but maybe your system is broken and you cannot import the pool—you have some issue on the Linux system, so you try to import it on FreeBSD as a recovery option, to diagnose what's happening—and then you lose those entries. But I... yeah.
C
—look, you're importing after errors, who knows what happened, and we should try to be clean and correct in that case. So the problem doesn't go anywhere; it's probably not directly related to the BRT, or not related at all, but someone could take a closer look at ZIL replay from that perspective, yeah.
A
Like, when we're doing the ZIL replay, can we use a different error code for "this entry is corrupt or otherwise broken" versus "your OS just doesn't support this entry"? The latter is not an error so much as an incompatibility, and what we default to doing about it is different in that case. Though if it seems like the ZIL itself is corrupt, I guess even then you'd still kind of prefer to get back into production than to sit there and ask for help, right?
B
But I'm not sure when exactly we declare an entry as replayed. If I call zil_parse again on error, would it walk through all the entries again as well? Let's say I succeeded with two, but I failed on, like, the fifth one; then, when I call zil_parse again, will it start from the beginning, or will it start from the last not-replayed entry? Because then we could just call zil_parse again and free the remaining entries.
C
From the code itself, it considers only a claim position; I haven't seen any start position there. I was thinking maybe it could start from an arbitrary block if you free some earlier blocks, but I haven't found any pointer to the first record there—and obviously the error can be not on the first record; it can be within a block. That's why I was thinking passing an additional function directly to zil_parse could be easier, but yeah, other ways are possible too.
B
Yeah, and somebody could, like, deliberately not mount some file systems so as to not lose those entries—so we don't want them to lose the entries on import.
C
It probably only matters in the case of some transactional database, where you confirm to somebody that a transaction is done, but then you roll back a couple of seconds—you already confirmed it's done, so then there's a problem; so it doesn't help there. But if the alternative is your system being down for a period of time, it's maybe better for somebody—but probably not for everybody.
A
Yeah, so that's where the idea of a tunable or a pool property, like failmode or something, comes in—to decide what you do when the ZIL doesn't want to cooperate: am I okay with going ahead anyway, or do I want to stop and hope somebody can solve it for me? Although I don't know, in reality, how often there will be anything they can do about it other than decide to throw the data away, right?
A
Yeah, there are two different rename ones that are new: one where you atomically swap two file names, and another where you rename a file and leave a whiteout behind at the old file name.
A
That makes it worse, yeah—you'd apply some of the changes but not all of them, instead of, you know, writing up to a point where it was good and then everything after it was gone. I think I agree with Alexander here that doing a smattering of the later transactions would be worse: it would, say, lose less data, but consistency-wise you would have much less of an idea of what was okay and what wasn't than if we just replayed up to a certain time and then had to stop.
B
But probably not worth the complication. Anyway, I will take a look. Let's see if I can come up with some way to not leak the entries. There is one more issue I found—I think unrelated—with byte-swapping the entries, the headers of ZIL entries, in zil_claim: because from what I see, on writes we do access the headers, but I don't think we do the byte swaps as we do during replay.
A
Kalani has some PowerPC machines or something that are big-endian and tests that kind of stuff occasionally; maybe we can describe the situation and have him check it out and make sure it works as we expect, or determine for sure that it doesn't and that we should do something about it.
D
Yeah, so I went through the PR for this recently. It's a project that I've been working on for a few months, and I figured I would just explain what it was about and what the context was, and give people a chance to ask questions about it.
D
Also, if your pools are moving around—you know, like the main storage for the pool is on, like, a SAN, and then the slog devices are locally attached SSDs—you now have a situation where you want to move these two things independently; but when adding new pools to the system, you might already be out of SSDs.
D
So
now
you
need
to
remove
it
along
from
a
pool
and
like
partition
it
more
and
make
all
these
changes
that
are
sort
of
obnoxious
to
deal
with,
and
one
of
the
reasons
that
GFS
was
created
in
the
first
place
was
to
solve
some
of
these
problems
by
integrating
the
volume
manager
with
the
file
system.
D
So the idea is to sort of integrate the volume manager with the slog concept as well, so that you can create a pool and use that as, effectively, the slog device. When you create a pool you can say: please use my shared slog pool as the log device; and then all of the slog writes will go directly to this other pool, rather than being stored locally on the normal data pool.
D
And so there are a bunch of advantages to this. The complications are that you now effectively have these two pools that are sort of interlinked with each other, and so the PR has a bunch of changes to, like, make sure that, you know, you can only import one of these client pools when the shared log pool is already imported.
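The import ordering constraint just described can be modeled in a few lines. This is a rough illustration with invented names—no such class or command exists in ZFS—showing only the rule that a client pool may be imported only after its shared log pool is.

```python
# Toy model of the client-pool / shared-log-pool import constraint
# discussed above (invented names; not real ZFS behavior or API).
class PoolManager:
    def __init__(self):
        self.imported = set()

    def import_pool(self, name, shared_log=None):
        # A client pool that relies on a shared slog can only be
        # imported once the shared log pool itself is imported.
        if shared_log is not None and shared_log not in self.imported:
            raise RuntimeError("shared log pool must be imported first")
        self.imported.add(name)

mgr = PoolManager()
rejected = False
try:
    mgr.import_pool("data", shared_log="slogpool")  # too early
except RuntimeError:
    rejected = True

mgr.import_pool("slogpool")                  # shared log first
mgr.import_pool("data", shared_log="slogpool")  # now allowed
```

The first attempt is rejected because the shared log pool is not yet present; after importing it, the client pool imports cleanly.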
D
There's also handling for if you want to get off of the shared log, and there's some metadata in the shared log pool to sort of track all the different client pools and what all of their ZILs are like, so that we don't have to do any weird space accounting. We can do ZIL replay in a similar way to what we used to, but using these new data structures instead of iterating over all the file systems on import.
D
And so that's sort of the, like, 10,000-foot view. I can drill into more details, or if people have questions I can answer those as well.
A
I guess it sounds a bit like, or a similar approach to, the shared L2ARC stuff we talked about at the dev summit.
D
Yeah, yeah—I don't remember all the details of that discussion, but at least at the high level it's pretty similar. The way that this works is that it's really very direct.
D
It just goes to the normal class of the target pool and does the allocation there; so it, like, hybridizes at a very direct level with the allocation code. And then we avoid storing the shared slog's block pointers in the client pool, because the vdev IDs might not match up, and it wouldn't be a meaningful block pointer in the client pool.
D
So what we do is: the ZIL header effectively just always has a hole in it, and if that's the case and you're using a shared slog, you go ask the shared slog, "what do you have listed as the header of my ZIL?", and then it can go through and do replay in the same way that it normally would, after sort of fetching the shared slog block pointer. And so the ZIL knows that it's supposed to go—
D
Do
these
iOS
to
this
other
pool
rather
than
its
own
employer.
So
it
really
involves
very
minimal
changes
of
the
Zell
layer
because
it
all
happens
at
like
the
the.
D
If anybody else has any other questions, great; otherwise, you know, you can reach out to me on Slack or on IRC, or comment on the pull request. I would be happy to have full reviews. I have a couple of people at Delphix lined up to take a look at it, but I would be very happy to have people from the community take a look—especially people who have ideas about use cases and stuff like that, who might have feelings about particular implementation details.
A
I had a slightly random question: is there a reason we don't support mirrored L2ARC, other than the fact that it kind of seems like a silly idea?
A
The particular customer we're talking with just depends on the performance boost they get from their L2ARC. Their working set is of such a size that it's too big for RAM, but it does actually fit within RAM plus L2ARC; and so when the SSD dies and they fall back to reading from hard drives, their performance tanks really badly. So they were looking at doing a mirror, and we noticed that the zpool command just will not let you construct a pool with a mirrored L2ARC, and I—
A
—wonder if that's actually because we don't really track stuff in the right way to be able to update one of the mirrors if we got out of sync—like the DTL—or just, you know, attaching a mirror: if you do a zpool attach, how is it going to figure out what should have been on the new SSD? Because—
A
Yeah, but yeah, I think you're right that the rebuild code—or just scrub—would then have to do something about it, and rebuild, doing attach and so on, would just be a lot of extra work; and I agree that it doesn't really seem that useful. I was just looking at it, and it looks like it could have just been a bug in the way we parse the zpool create string.
A
Where,
basically,
you
know
if
we
see
the
keyword
mirror
and
it's
not
directly
following
one
of
these
keywords,
then
we
think
it's
the
next
vdev,
and
so
the
error
you
get
is
that
you
can't
create
an
L2
Arc
with
zero
devices
in
it.
If
you
do,
you
know
the
log
or
sorry
cashmere
thing
it
thinks
mirror
is
the
next
v-dev
and
it
would
even
try
and
I
wondered
if
that
was
just
a
fluke
that
it
didn't
work
but
you're
right.
A
We've got a couple of minutes left. Pawel, did you want to talk maybe a bit about the I/O rate limiting, and see if anybody has opinions or ideas about the properties problem?
B
Yes. So we are looking for, like, a clean way to represent the following situation. We have multiple properties related to rate limiting: we can rate-limit the read bandwidth—or throughput—we can limit the write throughput, and we can limit the total throughput; and we can also limit the number of read operations, write operations, and total operations.
B
So
we
have
six
properties,
but
the
problem
is
that
they
all
overlap.
So
when
we
configure
one
property
on
data
set,
we
actually
have
to
create
the
structure
that
has
all
of
them
configured.
B
So
when
we
just
limit
the
let's
say,
read
throughput,
we
actually
create
a
structure
that
have
the
other
properties
Unlimited
and
we're
wondering
how
to
bust
please
what
would
be
the
best
way
to
to
make
this
configurable
and
like
user
friendly,
because
if
those
are
separate
properties,
then
it
might
be
confusing
that
that
setting
one
actually
sets
them
all.
Or
if
this
is
one
property,
then
it
may
be
some
weird
string
where
we
have
like,
like
shared
certain
of
us
property,
where
we
basically
put
all
the
options
into
one
property.
B
So we had ideas along the lines of having a single property, but being able to use an at sign to set its sub-properties. Let's say we have "ratelimit@read_throughput", and this would set only the read throughput part of the ratelimit property, or something like this. So I don't know if anyone has any ideas about how to do this so it's, like, more intuitive to set—and also, I don't know, looks elegant, I guess.
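The at-sign idea can be sketched concretely. Everything here is hypothetical—"ratelimit" and its sub-property names are the proposal under discussion, not existing ZFS properties: setting one @-suffixed field updates that field of the compound value while the other five limits stay untouched.

```python
# Hypothetical sketch of the proposed compound ratelimit property
# with @-suffixed sub-properties (none of these names exist in ZFS).
FIELDS = ("read_throughput", "write_throughput", "total_throughput",
          "read_ops", "write_ops", "total_ops")
UNLIMITED = "none"

def set_ratelimit(current, prop, value):
    """current: dict holding all six limits; prop: e.g.
    'ratelimit@read_throughput'. Returns an updated copy."""
    name, _, field = prop.partition("@")
    assert name == "ratelimit" and field in FIELDS
    updated = dict(current)
    updated[field] = value
    return updated

limits = {f: UNLIMITED for f in FIELDS}
limits = set_ratelimit(limits, "ratelimit@read_throughput", "100M")
```

After the call, only the read throughput sub-limit has changed; the user never has to spell out the whole six-field string.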
A
Well, the main problem is that if you set the read quota on a certain dataset, then, if before it was inheriting, now it wouldn't be: if you set only one of them, then we've set all the other ones to unlimited, and suddenly the total bandwidth quota you set higher up the tree doesn't apply anymore.
B
It would be a bit hard to explain quickly, but because the properties overlap—I cannot, let's say... If I have to pause some threads because they are over the rate limit, at each level I have different queues, so I don't have to go around and look through the entire pool for where the threads should wait and where they should wait longer.
B
Let's
say
you
have
on
one
data
set,
you
have
read
throughput
configured
and
somewhere
below,
you
have
read
operations
configured
and
then,
when
you
try
to
read
something,
then
we
would
need
to
decide
where
to
check
the
rate
limit
and
which
weight
Q
to
use
for
this
thread
Etc.
So
it's
it's
getting
very
complicated,
so
so,
basically
the
way
I
implemented.
That
is,
is
that
it's
like
a
whole
package.
So
we
check,
if
you
configure
the
rate
limit
on
this
data
set.
B
It
applies
to
this
data
sets
and
all
its
children
until
it
finds
another
rate
limit
configured.
But
basically
you
should
think
about
that.
This
is
that
this
is
like
rate
limiting
point
and
on
and
at
this
point
you
decide
what
to
rate
limit
and
how.
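The "rate-limiting point" semantics just described amount to a nearest-ancestor lookup. A toy illustration with invented names—not the actual implementation: a dataset is governed by the closest ancestor that has a rate-limit structure configured, and the search stops there, with no combining of limits further up the tree.

```python
# Toy model of the nearest-ancestor rate-limit lookup described
# above (illustrative only; names are invented for the sketch).
def effective_limit(dataset, limits):
    """limits maps dataset name -> rate-limit struct. Walk up the
    dataset hierarchy and stop at the first configured limit."""
    node = dataset
    while node is not None:
        if node in limits:
            return limits[node]
        # Step to the parent dataset, or stop at the pool root.
        node = node.rsplit("/", 1)[0] if "/" in node else None
    return None  # no rate limit anywhere up the tree

limits = {"pool/a": {"read_ops": 1000}}
governed = effective_limit("pool/a/b/c", limits)
ungoverned = effective_limit("pool/x", limits)
```

"pool/a/b/c" inherits the whole package from "pool/a" and the walk stops there; "pool/x" has no rate-limiting point anywhere above it.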
C
Yeah, I got that point—that these would be separate properties that would inherit separately, and you'd end up needing to take a read limit from one point and a write limit from another and combine them: not just create a new queue with those parameters, but combine queues.
B
Yes, that was our starting point—it would be nice, but yeah, it's getting really messy, especially if you have, like, one dataset that has multiple children: you have one rate limit here, the children have different rate limits, and you have to take all of this into account as well.
B
Yeah, and the locking would get really messy; the lock contention would be, I think, pretty high if we tried to, like, always—
A
—recurse all the way back up. Like, I think, Alexander, you brought that up when we talked about this initially a couple of months ago: your biggest complaint about the quota system is that you recurse all the way up to the parent every time. So, at least in the first implementation of this, we're trying to avoid all that. But yeah, one compound property does feel pretty kind of nasty for the user—but maybe Pawel's idea of having the at-based—
C
But so you mean, like, there will be no combination of different datasets—it will just completely override, even if you specified only one? Yeah; that way, by using a single property, you're setting the user's expectation. Yes.
A
Expectation-
and
you
know
means
that
we
don't
ever
have
to
recurse
and
look
at
anything
else
right
as
soon
as
we
we
rehearse
up
until
we
find
the
a
rate
limit
structure
and
then
we
can
stop,
we
don't
have
to
go
and
see.
Oh,
is
there
a
bigger
limit?
We
also
have
to
respect
from
a
parent
higher
up
the
tree.
C
Yeah,
it
makes
sense.
I
I
just
haven't
got
what
power
do
you
mean
with
ad
science?
I
was
thinking
you
mean
like
at
in
a
property
name
like
if
yeah
yeah,
but
then
it
would
be
not.
It
could
be
Collide
from
the
old
idea
of
limited
using
expectations.
Then
it
would
mean
there
could
be
multiple
properties
like
that
and
then
I
inheritance
would
return
back.
That's
the
original
point,
I'm
not
sure.
B
—all six values here in this one property; but to make it easier to set them individually, we could use the at sign—"ratelimit@", I don't know, "read_throughput"—and you only configure the read throughput; you don't have to provide the entire string.
A
You want to atomically change one and avoid, you know, a read-modify-write and there being a race in there or something, and so on.
C
From
perspective
of
Satan
separate
sub
property
versus
replacing
completely,
maybe
that
syntax
could
have
sensed
only
for
right
part
not
for
read
yeah,
but
this
would
be
first
of
its
kind
that
maybe
conflicting
with
like
features
where
it's
different
properties
at
science
as
part
of
property
name,
was
its
real
or
fake.
Whatever
I
can't
look,
because.
A
Yeah,
there's
just
a
is
feature
or
something
macro
that
we
deal
with.
Is
it
look
for
the
AD
Sign,
and
maybe
we
could
do
something
silly
like
use
that
the
hash
sign
instead
to
make
it
not
conflict.
But
we
don't
want
to
use
up
all
the
letters.
A
Dollar
signs
are
for
some
of
the
Hidden
properties.
I,
don't
think
we
want
to
go
there,
though,
but
definitely
open
if
anybody
has
a
better
idea
of
how
you'd
want
to
manage
this
kind
of
compound
property,
because
they
kind
of
agreed
even
just
Alexander's.
First
point
of
like
six
new
properties
is
kind
of
a
lot
already
and
looks
a
bit
crowded,
but.
C
It'll
be
okay,
would
they
be
properly
inherited
and
properly
working?
Then
I
would
say
it's
much
cleaner
yeah,
but
if
it
won't
work
that
way,
it's
triple
the
scope
of
this.
B
Project
well,
we
could
probably
inherit
the
value,
but
it
still
might
be
confusing
if
it's
inherited,
but
it
doesn't
respect.
B
So maybe this is a better solution than just—we should document it properly, but—
A
All right, we're good on time. Does anybody else have anything before we wrap up?