From YouTube: June 25, 2019 OpenZFS Leadership Meeting
Description
We discussed FreeBSD/ZoL integration; error reporting infrastructure; and using pyzfs to get zpool config information.
Meeting notes: https://docs.google.com/document/d/1w2jv2XVYFmBVvG1EGf-9A5HBVsjAYoLIFZAnWHhV-BM/edit?ts=5d125dd0#
B
I can start out with the first couple of items there. The first thing I thought would be interesting to touch on is the work being done on the FreeBSD and Linux integration of ZFS; there are some updates for that. As everybody knows, or I hope people know, the intention is to have a single repository that builds for both of these platforms, that we can test on and cut releases from. That work is, from my point of view, progressing really nicely.
B
A couple of months ago there was a call for testing for the FreeBSD side of things, and I haven't heard back from that yet, but I suspect things are going well there. The next steps we're looking at are getting the CI set up for this project, so it will actually test the FreeBSD code, and then we'd like to get a pull request open with the FreeBSD changes, so we can get code reviews done for that and get it tested. Those are the next steps people should watch for.
A
In the pull request, what changes should we expect to see? Are there going to be new files added that would be FreeBSD-specific, files that would only be compiled on FreeBSD, like vdev_geom.c or something? And should we also expect to see changes to existing files, to clean things up or to add "#ifdef Linux" / "#ifdef FreeBSD"?
B
Yeah, I think all of those things. It would be great if someone from FreeBSD could comment on this too, but yeah. The intention is that, you know, all the work was done on autoconf/configure so it can detect what platform you're running on, and then to refactor the code in such a way that the common ZFS bits are common and don't include any of the platform-specific stuff. And then we have new files, or subdirectories of files, for the particular platform bits.
C
One of the cleanup things I know Macy was looking at doing was around some of the tunables: making a macro that makes it easier to write the FreeBSD- or Linux-specific bits, instead of having, you know, "#ifdef FreeBSD, create the sysctls; otherwise create the /sys entries" or whatever.
B
We're looking at that. I think the proposal there was to just create some generic macros that both platforms would use, which I think is great. The more of this we can refactor out, you know, obscure those kinds of details or hide them away in the platform-specific code, the better. That's great, yeah.
C
But in general it will likely be similar to how it is now, where we will continue to treat it like an upstream and pull copies in. The interesting thing is, we might actually end up with both, at least for a while: there's the version of ZFS that will ship with FreeBSD, but you can also just install the newer one as a package. Okay.
A
Okay, yeah, that was my question. You know, today you can't; FreeBSD kind of did that with illumos, but then there's a whole bunch of diffs, and generally when changes are made by FreeBSD people they're made in the FreeBSD code. I'm very pleased to hear that the goal is that there would be zero diff, or very little.
E
The way BSD has traditionally done this sort of thing is they take a snapshot, configure it, and put it into the source tree, yeah, under contrib/. The normal FreeBSD tree has links to it to do the build. I expect that's what will happen with ZFS here; Alan probably has a better sense of it than I do at this point, yeah.
C
Like, for example, when we integrated zstandard, there's a vendor tree that just says "here's an unmodified copy of the zstandard code", and then the kernel, when it needs it, just reaches into that directory, grabs the C files it wants, and compiles them into itself. And so, you know, if we have some compatibility shims, we keep those files separate, so that when it's time to update the version of zstandard we don't have a big diff to carry. Cool.
A
In terms of making it the default in FreeBSD, we kind of covered that in terms of FreeBSD 13 or 14. What about the timeline for getting this integrated into the current repo, having that supported, and doing the builds on both operating systems? What does the timeline for that look like?
B
So for getting it into the current Linux repo, what it's currently queued up behind (not really blocked behind, but kind of queued up behind) are the fast clone deletion and the log spacemap changes, just to minimize conflicts. We'd like to get those in first, but it's not necessarily blocked behind them, all right; it's just a matter of rebasing when things are ready. So the intent is basically: once the CI bits are in place, which I'm looking at getting in place this week.
B
Chris Moore did some great work for that, which I'm trying to get integrated into the CI. And then once it's reviewed and, you know, passing all the tests on all the targeted platforms, and people are happy with it, I think it can go in. So I would hope in a few months, because it's currently working; that was my understanding. So as long as it doesn't destabilize things on the Linux side, and you guys will want to do extensive testing on the FreeBSD side anyway.
A
So another thing I wanted to at least briefly touch on is the naming. This has come up a couple of times, mostly from folks kind of outside of the core community. I know that, for a bunch of reasons, it's not a great look for FreeBSD to be using the quote-unquote "ZFS on Linux" repo, and we've talked about maybe making that change at some point. I guess I'd like to reiterate that my goal would be for us to have something that doesn't include the name, "ZoL for FreeBSD" or something. I think the reality is that both those platforms are going to be on equal footing, you know, technically, in the repo and in how we make those changes, but putting them on equal footing in the naming and the overall picture, I think, will help us send the right message to people that are using it or considering using it, and get us all folks on the same page, with at least that broad goal.
A
I think it matters in terms of how we go about talking about it. You know, because if we're talking about "oh, this change went into ZFS on Linux" when, basically, it's going into the repo that makes it immediately part of both the Linux version and the FreeBSD version, I think that's a little bit inaccurate.
E
Once we get to the point where we have more than one OS that can run continuous integration from the same sources, then everything, I think, will be a lot better. At that point we can convince the ZoL people and the FreeBSD people to pull from a common repo and push any changes back, because the CI will be able to catch anything that breaks for either one. And then, ideally, everyone else, any other operating system, will start picking up from that as well. That appears to be an attainable and desirable goal.
H
No, that's fine, I think. From our perspective, I think I said it: we were fine if you rename the one that's already there, out of the way, to make this thing; that's fine. All right, I would say: when you speak of equal footing between the two platforms in the repository today, I would probably give some weight to potential future other inclusions and probably just put no platform in the name, you know. It'll just be easier.
B
Just one sec; go ahead, continue. I think we laid out the tree such that we would like to include everybody who wants to get, you know, building on their platform. So if it's a matter of where to put the files in the tree, work it out in a pull request, or on the mailing list or Slack or whatever, yes.
C
The way it works currently is, when you're running configure you set a profile or something to user space only, and then when you gmake, using the autoconf stuff, it builds everything except for the kernel module. And then under module/ there's a Makefile.bsd that uses BSD make instead of GNU make and just compiles the kernel module, I think.
B
I was going to hand off to Tom here, if we wanted to talk about this a little bit. We've been talking; the issue has come up about maybe improving some of the error reporting to users that we do in ZFS. At the moment we've had a couple of issues come through where the errnos that are reported from the ioctls end up being a little confusing when they bubble up to users, about exactly what's being reported. So I know Tom wanted to talk about that.
K
Can you hear me? Okay, good. My webcam is not working, and I was very much hoping my microphone would. Anyway, so basically I've been seeing some patterns in the code, especially since encryption was merged and people are starting to hit errors that, you know, we test, and we've made sure that the errors are being thrown correctly, but sometimes the error message that gets given to the user has not been correct. This is particularly true with the send and receive code, but also in general.
K
The reason for this seems to be, from a 10,000-foot perspective rather than looking at individual cases, I think a big summary of why we have some of these cases is that our errors generally return very general error codes, like EINVAL or, you know, EEXIST, something like that. And then we take that and attempt to figure out what exactly went wrong based on a whole bunch of context that user space has, and sometimes it's right.
K
There are cases where somebody throws EINVAL and that's probably good, because it just means that we're checking user input whenever we get anything from user space, but we don't have enough error messages to match up with all the situations that come from that. What I would like to talk about today, and one of the things that has come up since then, is that we've had zfs_errno_t, which allows for much more specific error codes in general.
K
All of the new ioctls return nvlists, which can contain strings, and they're already set up to do that. Basically, what I'm getting at is, for instance, there's an error case which keeps sticking in my mind, which is removing a SLOG. When you remove a SLOG, you need to make sure that all of the datasets have completely clean ZILs. But how does the user know which datasets actually have unclean ZILs? Right now they just get an EBUSY, and we interpret that and say: hey,
K
you have a dataset somewhere that needs attention. But what would be really neat is if, as the kernel was going through here, when it finds the offending dataset it could say "hey, this particular dataset is the problem", and then actually give that string back to user space and say: this is the dataset, or the list of datasets, that you need to
K
actually, you know, that you need to mount to correct the issue. And I'm just kind of thinking: if we had something like that that was generalized, a general mechanism for returning this to users, it could solve a lot of user confusion that comes up. I think as developers we're aware of a lot of the caveats, like "well, this code works really well, but it has this one caveat where it can't do this very particular thing for this very particular reason", and, you know, we can't explain all of that in the man page, and people don't always read the man pages in general. They just want to know what to do next when they hit an issue. That's my monologue; sorry for taking a long time, but that's kind of what I wanted to discuss.
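The SLOG example above can be sketched in a few lines of userland Python. This is only an illustration of the idea (handing back the offending dataset names alongside the generic errno instead of a bare EBUSY); the function names are hypothetical, not the actual ZFS ioctl interface.

```python
import errno

def check_slog_removal(datasets):
    """Return (0, []) if the SLOG can be removed, else (EBUSY, offenders).

    `datasets` maps dataset name -> whether its ZIL is clean; the point is
    that the check itself knows exactly which datasets are the problem.
    """
    offenders = [name for name, zil_clean in datasets.items() if not zil_clean]
    if offenders:
        return errno.EBUSY, offenders
    return 0, []

def render_error(err, offenders):
    # With the extra payload, the CLI can say exactly what to do next.
    if err == errno.EBUSY:
        return ("cannot remove log device: these datasets have unclean ZILs: "
                + ", ".join(sorted(offenders)))
    return "unknown error"

err, who = check_slog_removal({"tank/a": True, "tank/b": False})
print(render_error(err, who))
```

With a payload like this, the user is told which datasets to deal with instead of being left to guess from a bare EBUSY.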
A
I think the very graceful thing that we can and should do is continue to extend zfs_errno_t, which is basically just new values that we can return for errno through the ioctls. That will allow us to specify which specific error it is, and to keep inventing new types of errors. The problem it doesn't necessarily solve is returning additional information back up about which thing it was that encountered the error.
K
No matter how specific the errno is. Basically, this is kind of the analog to the zfs_error_aux function that we have, which, you know, allows people to attach an auxiliary error along with the generic error code, because a lot of the time that is the thing that actually tells the user what to do and what actually went wrong, and then the error code just ends up printing something really generic that's not super useful. Usually, that's what happens.
K
Yeah, exactly. I don't know if we... I know we use thread-specific data for some things related to this, and I know thread-specific data can be relatively slow, but we already use it for things like history. The zpool history recording stuff is very much based on thread-specific data, at least in ZFS on Linux, or whatever we're going to call it now once it has the BSD bits built in. In the zfs_ioctl.c file it's all based on thread-specific data saving the context of what's going on. So I feel like we could either latch onto that mechanism and add this to it, or we could invent something new; I don't know, some kind of list in the SPA that, you know, associates thread IDs with error messages, something like that.
K
That could work as well, and then when the ioctl returns it gets rid of it, or the on-exit handlers that we also have, at least in Linux, could. I don't know; these are just some ideas. I don't have a specific implementation in mind, or one that I'm, you know, championing, but I just think that something like this would be good.
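The thread-specific-data mechanism being described can be mimicked in userland with thread-local storage. A minimal sketch, assuming a made-up fake_ioctl_remove_slog operation; the real mechanism in zfs_ioctl.c is kernel tsd and does not look like this API.

```python
import errno
import threading

# Per-thread slot for error context, analogous to thread-specific data.
_err_ctx = threading.local()

def set_error_detail(msg):
    """Called deep in the operation, where the real cause is known."""
    _err_ctx.detail = msg

def pop_error_detail():
    """Fetch and clear the per-thread detail, like an on-exit handler would."""
    msg = getattr(_err_ctx, "detail", None)
    _err_ctx.detail = None
    return msg

def fake_ioctl_remove_slog():
    # The deep check knows exactly which dataset is the problem,
    # but the return value is still a generic errno.
    set_error_detail("dataset tank/b has an unclean ZIL")
    return errno.EBUSY

err = fake_ioctl_remove_slog()
if err:
    print(f"error {err}: {pop_error_detail()}")
```

Because the context lives per thread, concurrent operations do not stomp on each other's detail strings, which is the same property the zpool history recording relies on.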
L
I have a question; can everybody hear me? Yeah. Is the plan for this change to improve the error messages that are presented to users, or is it also targeting better programmatic error handling? Because right now we have to resort to screen-scraping everything, yeah.
A
And kind of do everything that the old kernel can do, but fail gracefully when you try to do stuff that the old kernel can't do but the new ZFS can. I think that's kind of the goal, but I think we would definitely reserve the option of having the user side also fail gracefully at doing some other things if you end up in that mismatched environment.
K
This is something that we would support, because I know at least a lot of our admins get very confused by some of the error messages that come through. And so, if we could, you know, build a generic way of returning better error messages and then use that to fix some of the really common ones, I think that would probably be... I think I could probably argue enough to get time for that.
K
At the moment, though, the next project that I was supposed to be committed to was sequential rebuilds, as part of the dRAID stuff; that was going to be my next immediate project. So I'm not sure what order we want to get to these in, and if somebody has more free time than me, then, you know, somebody could take one of them and I could take the other. It doesn't really matter to me which, but that's what my time looks like at the moment. Cool, cool.
F
Sure, it's actually a good segue to this conversation. As anyone who's written code to either gather statistics or build an appliance or some other control-plane fanciness knows, there's not a really good interface for getting, for example, the zpool config information in a nice parseable way. And so there have been a couple of, you know,
F
So
these
guys,
like
Python
and
so
that
that
started
down
that
path
and
so
the
the
PR
came
in
this
morning
and
basically
it's
an
extension
to
the
PI
ZFS
work
that
was
done
at
cluster
HQ,
but
without
the
restriction
of
interfacing
to
live
ZFS
core.
So
for
those
of,
you
may
not
be
familiar
with
cluster
HQ,
who
is
no
longer
a
non
going
concern,
but
the
wrapped,
Python
or
built
a
python
library
wrapper.
F
So
they
could
do
their
automation
in,
of
course,
so
that
you
know
it's
creating
file
systems,
creating
snapshots,
getting
properties
all
the
usual
stuff.
We
do
in
the
ZFS
command,
and
so
that's
then
contributed
into
the
ZFS
on
Linux
source
code.
It's
in
the
contradictory,
we'll
find
it
there
and
basically
it
allows
you
to
link
a
Python
program
to
two
libs
ef-s
of
a
core
and
then
use
you
know,
native
Python
dictionaries
instead
of
invalid,
then
do
do
all
the
right
thing
and
you
know
basically
allows
you
to
make
automation
pretty
easy.
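The dictionary-to-nvlist translation being described can be sketched roughly like this. The helper below is hypothetical and far simpler than the real pyzfs binding, which marshals typed nvpairs through libzfs_core; it only shows the shape of the mapping.

```python
def dict_to_nvpairs(d, prefix=""):
    """Flatten a (possibly nested) property dict into (name, value) pairs,
    roughly the shape an nvlist carries across the ioctl boundary."""
    pairs = []
    for key, value in sorted(d.items()):
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            # Nested dicts stand in for nested nvlists.
            pairs.extend(dict_to_nvpairs(value, prefix=name + "."))
        else:
            pairs.append((name, value))
    return pairs

props = {"compression": "lz4", "quota": 10 * 2**30, "user": {"note": "demo"}}
print(dict_to_nvpairs(props))
```

The appeal for automation is exactly this: the caller works with native Python dictionaries, and the translation to nvpairs happens once, in one tested place.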
F
So
we
were
discussing
about
all
that
and
basically
comes
to
the
point
where,
for
some
of
the
things
we
need
both
for
telemetry
as
well
as
just
overall
systems
management
getting
to
the
ZFS
or
the
pool
configuration
Envy
lists
is
useful,
so
I
know
we've
had
many
discussions
of
this
over
the
years
of
how
to
make
that
consistent
and
everything-
and
this
is
just
one
more
of
those
discussions,
but
the
good
news
is.
We
have
a
pull
request,
then
that
we
can
set
along
to
refer
to
that
to
the
efforts
by
these
guys
so
I.
B
If it seems like useful functionality, and we can support it and test it, we've been allowing that kind of stuff. The contributor actually felt at the time that contrib was the best spot for it, but it doesn't have to stay there; if we want to make it a full-fledged thing that's supported everywhere, I mean, it could be moved out of the contrib directory.
B
We
have
tried
very
hard
to
keep
it
updated.
When
you
know
a
new
feature
to
get
landed
right,
we
make
sure
your
functionality
gets
added
to
exercise
less
functionally.
So
anything
that
goes
into
live.
Zfs
core
we've
tried
to
adhere.
We've
tried
to
restrict
it
to
live
ZFS
core
for
the
moment.
Just
because
it's
you
know
a
stable
interface,
so
I
guess
there's
a
question
of.
Do
we
want
to
extend
beyond
the
stable
interface?
It
sounds
like
the
intent
here
to
do
that.
So.
A
I mean, we can, if they add something to retrieve it. In terms of stability, it seems like, due to the user/kernel issues on Linux, we have an incentive to keep it more or less stable, at least in the sense that newer stuff should be able to read older stuff, and therefore we don't want to change it very often. But, I mean, I don't think we want to say that it's never going to change; it just seems like we wouldn't be changing it flagrantly, not without a good reason. Yeah, I think that makes sense. It's not, like, pretty; it's not a clean or great kind of thing, but it gets the job done.
E
Yeah, sorry, is this a case where libzfs_core could just express a subset of the data that's available, and it would satisfy most of the use cases? And the things that are not, you know, generally used, or may not be terribly useful in most cases, just aren't available, or are only there if you pass a, you know, "give me all the data, and it might be wrong" flag.
F
Yeah, so one of the attempts we made at this a while back, a couple of lifetimes ago, is we took that nvlist and then created another nvlist that, you know, transformed those arrays into key-value pairs, and then the key would remain, even though the location of the value in the array may change over time.
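That array-to-key transform might look something like this sketch; the config shape here is simplified, not the exact zpool config nvlist layout.

```python
def key_children_by_name(config):
    """Rewrite a positional 'children' array into a mapping keyed by a
    stable name, so consumers stop depending on array positions."""
    out = dict(config)
    children = out.pop("children", [])
    out["children_by_name"] = {child["name"]: child for child in children}
    return out

raw = {
    "name": "tank",
    "children": [
        {"name": "mirror-0", "state": "ONLINE"},
        {"name": "mirror-1", "state": "DEGRADED"},
    ],
}
keyed = key_children_by_name(raw)
print(keyed["children_by_name"]["mirror-1"]["state"])
```

A consumer looking up "mirror-1" keeps working even if device removal or expansion reorders the underlying array, which is the whole point of the transform.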
F
You
know
one
of
those
structures
and
somebody
will
pick
it
up
and
then
they
will
forget
to
add
it
to
PI
Z
of
s
and
there
won't
be
any
testing
and
hilarity
will
ensue.
So
I
want
to
avoid
that
in
invar
process.
So
maybe
the
best
approach
in
the
short-term
is
take
a
look
at
doing
a
an
in
a
human
friendly
envy
list
for
the
Z
pole
config
and
add
that
to
Lib
C
of
this
course
yeah.
A
Yeah, I mean, one way that we could go is to say: hey, what we really need is a programmatic zpool status, and we just say everything that's in zpool status needs to be retrievable from, you know, a programmatic, real, committed interface, whether it's a pool property or some other thing. And, you know, whenever you add new stuff to zpool status, it has to go into somewhere else too, so that it can be used programmatically.
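To make the contrast concrete, here is what consumers are forced to do today: scrape the human-oriented zpool status text. The sample output below is illustrative only; the point is that this parsing breaks the moment labels, spacing, or wording change, which is exactly what a committed programmatic interface avoids.

```python
# A snippet of zpool-status-style output (illustrative sample, not captured
# from a real pool).
SAMPLE = """\
  pool: tank
 state: DEGRADED
status: One or more devices could not be opened.
"""

def scrape_field(text, field):
    """Fragile text scraping: depends on the exact 'label: value' layout."""
    for line in text.splitlines():
        label, _, value = line.partition(":")
        if label.strip() == field:
            return value.strip()
    return None

print(scrape_field(SAMPLE, "state"))
```

A committed interface (a pool property, or a stable nvlist from libzfs_core) would return the same facts without any of this parsing.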
C
I remember we discussed this before; part of what the per-vdev properties work could do is being able to get things like those error counters and the health of each member of each vdev and so on, where you could just do something like "zpool get health <poolname> <vdevname>" and have it returned. Yeah, I think I have most of that in a branch somewhere. I have to find it, rebase it on the new repo, and push it so that people can see it, yeah.
H
You've got two things here: the pipe to get the data could be committed without committing the thing that comes through the pipe, yeah, which is... well, I mean, it's better than nothing. Yeah, committing to the pipe would at least probably mean that you'd be able to, like, stably build and link against the library, say, and then you don't have to carry a bunch of translation code around for the different contents. It would be better than also having to privately rebuild the thing constantly, I mean, from a release...
F
Okay, so to wrap that up: there is a PR, and I think we should take the different approach of doing that translation in libzfs_core, and then that'll make the PR actually simpler, because they won't have to do all the other wrappings. Even...