From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2020-06-22
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
Yep, we are okay. So hello, everybody. Volker and Rod are not here today, so I'll be your host for this meeting on June 22nd for the IPLD team. Yeah — so we have our normal HackMD pad, and I will try to take notes, even though I am not really good at that. Cool. So, my update — it's actually pretty short: most of last week was taken up figuring out how to run a Filecoin node, and a further descent into storage deals and other unsavory stuff.
A
I'm waiting for the team which works on the retrieval market stuff to put things on the test network, and then we will see if we can indeed send selectors over the wire and get parts back. As Hannah says, it's supposed to work, but nobody has tried this yet. It's all like: the types are there, the code supposedly is there, but nobody ever ran this, so we'll see how that goes. Other than that, I got a little bit closer on the DAG streamer from the filesystem.
A
It is almost doing what I want it to do, but it's still not exactly there — it's using too much memory, trying to do things over and over again. Another thing, by the way: I had a moment to try the new ARM-based AWS instances that they released for general availability on the 11th, I believe, and they can actually rival an AMD Ryzen 9 when it comes to processing stuff in a stream.
A
So on these ARM instances that are like 40 bucks a month, I'm able to get about a gigabyte and a half per second of throughput, with actual hashing and everything. So that's an interesting thing to consider, maybe for something like dumbo-drop further down the line: instead of Lambdas, to actually have stuff on on-demand boxes. And that's pretty much all I have for this week.
B
Well, I have no short-term memory when I speak, so I have to pre-write the entire thing or I just make a lot of sounds. Yeah — so this last week I tried to implement an advanced data layout, because we had had some discussions lately, and it sounded like there were some worries that we just don't have enough proof that we know what we're doing with these.
B
Our plans are actually well-formed, so I tried to just do the thing and see if I had trouble. Initially I was going to try to bite off doing HAMTs, but I thought, yeah, that's a bunch of things — so instead I just sketched out the simplest thing possible to make a dummy example advanced data layout.
B
So I invented the concept of a fanout map and started implementing that. All this is, is: for every value in the map, we're just going to encode it in a new block and put links to them in the map internals, and that's it. So this is probably a totally useless data structure — you would be very unlikely to use this — but it should be a proof of the interface working correctly. And so far, no major roadblocks; the strategy of "just conform to the node interface" seems to be working.
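The fanout map described here can be sketched in a few dozen lines. This is a toy, standalone sketch — the block store, `Link` type, and method names are invented for illustration, and it does not use the real go-ipld-prime Node interface:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Link is a hex-encoded sha256 digest, standing in for a real CID.
type Link string

// BlockStore maps links to raw block bytes.
type BlockStore map[Link][]byte

// put stores a value as its own block and returns a link to it.
func (bs BlockStore) put(value []byte) Link {
	sum := sha256.Sum256(value)
	l := Link(hex.EncodeToString(sum[:]))
	bs[l] = value
	return l
}

// FanoutMap keeps one link per entry: every value lives in its own
// block, and the map internals hold only links to those blocks.
type FanoutMap struct {
	store BlockStore
	links map[string]Link
}

func NewFanoutMap(store BlockStore) *FanoutMap {
	return &FanoutMap{store: store, links: map[string]Link{}}
}

func (m *FanoutMap) Set(key string, value []byte) {
	m.links[key] = m.store.put(value)
}

// Get resolves the link back through the store, the way an ADL would
// call its link-loader function.
func (m *FanoutMap) Get(key string) ([]byte, bool) {
	l, ok := m.links[key]
	if !ok {
		return nil, false
	}
	v, ok := m.store[l]
	return v, ok
}

func main() {
	bs := BlockStore{}
	m := NewFanoutMap(bs)
	m.Set("a", []byte("hello"))
	m.Set("b", []byte("world"))
	v, _ := m.Get("a")
	fmt.Println(string(v), len(bs)) // two separate value blocks exist
}
```

As noted above, this is not a useful structure in itself; its point is to exercise the interface shape (set, link, load) that a real ADL would present behind the standard node interface.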
B
The biggest problem I'm probably having is that the amount of boilerplate I'm needing in writing some of this stuff — writing another node implementation — is turning out rather high, and that's starting to get to me. It's not a blocker, but it's very irritating in the codegen work as well, so seeing it crop up in another example is making me — I'll talk more about that later.
B
The other thing that's interesting about trying to do an ADL interface has been: there are just a lot of things that go into setting one of them up. You need all of these pointers — you need a pointer to the link loader function to make your readable thing; you might have a bunch of configuration stuff for it that would be specific to the ADL.
B
One thing that I was tempted to put in there, but I'm probably going to rip back out, is that you might need a pointer to the node prototype for the internal structure node that you're going to use. I think I'm going to rip that one back out, because that's just the kind of configurability that I think you shouldn't need: ADL internals should be allowed to be opinionated about the memory structure they're using at runtime.
B
So yeah, the first draft is finding out all sorts of things like this, where it's like: yeah, you could make this configurable, but what if we didn't? So a little bit more work on that will probably be forthcoming, and I want to generate some design decision documentation around this.
B
The good news is that all of the major interfaces seem to just be working. This node prototype concept that got introduced — also sometimes referred to as node style — does also seem to be really helping here, because that gives me a place to put all these configuration things for the ADL in one place in memory, and so then you get to use the normal node builder to actually fill in the content, and none of those functions...

B
I probably won't talk too much about my boilerplate problems, but long story short: I have to add a lot of darn boilerplate methods to every new node implementation I make. Like, if I'm making this new ADL, it's going to act like a map, so I have to add all these methods to match the interface — like, "can I be coerced to a string?" — and the answer is no. And Go is not being helpful for me in trying to make this concise.
B
Embeds do not quite give me the ability to do what I want — or at least they don't give me the ability to have error messages that include the type information in them, because when you have an embed, it only sees itself. It has no way to reference the thing that it's embedded in, so it just literally cannot do this. In so many other languages this would be easy — it's like the extends keyword in almost every object-oriented language — but I just can't do it in Go.
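The embedding limitation being described can be shown in a few lines. This is a minimal reconstruction of the problem (the type names are made up, not from the actual codebase): an embedded type's methods only ever see the embedded type itself, so a shared error message can never name the outer node implementation.

```go
package main

import "fmt"

// boilerplate carries default method implementations shared by embedding.
type boilerplate struct{}

// AsString is the kind of must-exist-but-always-fails method mentioned
// above ("can I be coerced to a string? the answer is no"). %T here can
// only ever print the embedded type, never the type embedding it.
func (b boilerplate) AsString() (string, error) {
	return "", fmt.Errorf("cannot coerce %T to string", b)
}

// FanoutMapNode embeds boilerplate to inherit the method.
type FanoutMapNode struct{ boilerplate }

func main() {
	var n FanoutMapNode
	_, err := n.AsString()
	// Prints "cannot coerce main.boilerplate to string" — the outer
	// type name FanoutMapNode is unrecoverable from inside the embed.
	fmt.Println(err)
}
```

Method promotion in Go copies the method set but not any notion of the receiving outer type, which is why the "extends"-style behavior of other languages has no equivalent here.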
B
I've tried increasingly fancy things, like using runtime tricks to peek into the call stack info and see if I can extract relevant names from there — and I can't; that doesn't work. Go is very consistent about the logic it's using here; it's just consistently not what I want today. And this is driving me a little bit nuts, because I've also dived into the assembler, and the information is there: there is an auto-generated method stub that has exactly the relevant information I want on the interface.
A
Have you considered, you know, this experimental implementation that they released that has generics — just write against it, and after a year and a half we just...
B
I haven't looked enough to say with confidence that it's irrelevant, but I would be surprised if it is relevant. What I want in other languages here would not be generics; it would be traits, or subclassing, or some other virtual-type-inheritance thing. So I would be very surprised if Golang's definition of generics is so interesting that it would actually apply to this. It's possible; I...
A
I don't know — it basically sounds like the stuff that you were talking about will surface in order for generics to work. As described in the blog posts, exactly this info that you're missing for the errors, and so on and so forth, needs to be transported somewhere visible. So you won't have this problem anymore, but I...
C
Okay, I guess I'm up. Yeah, so I did a ton of porting things to the ESM module standard. When I started doing this, I did not realize how early I was to doing this.
C
I mean, people have been using ESM for like five years across the JS ecosystem, but using it in a Node program that still needs to be require-able, using the native Node ESM stuff — almost nobody's doing that; very few people. So Miles, who wrote it, is using it in a package, and there is one example of still being able to use CommonJS require, and it requires you compiling down a version of the package for require. So that's what we're doing across all of our modules.
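The dual-package arrangement described here — a native ESM entry point plus a compiled-down copy for CommonJS `require` — is typically wired up with conditional exports in `package.json`. A minimal sketch of the pattern (the package name and file paths are illustrative, not the actual multiformats layout):

```json
{
  "name": "example-package",
  "type": "module",
  "main": "./cjs/index.js",
  "exports": {
    ".": {
      "import": "./src/index.js",
      "require": "./cjs/index.js"
    }
  }
}
```

Node resolves the `import` target for `import` statements and the `require` target for CommonJS `require()` calls, so the compiled copy exists only to serve CommonJS consumers while ESM users get the source directly.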
C
Now there is a compiled-down version of all of the entry point files that users require, and I also figured out how to cross-compile all the tests, so that we run all of our tests against that require as well. So that's really nice. All of that is updated now — all the multiformats and almost all the dependencies; I think there's like one hashing function that I need to go and get. So that's a lot of projects across our ecosystem.
C
I also updated all of our release automation to my latest release-automation stuff, and I have a script now to update it in the future, which is great. The block API is also up to date; there's a PR now that needs review — I need Rod's input on that before I merge it — and then that migration is finally complete, which will be awesome. And then I did lots of managery stuff all week, and I need everybody's OKRs for last quarter.
C
So if you have OKRs — I think only Eric is here right now, so score your OKRs, bro; there's a PR in the roadmap repo already, you've got to score your OKRs — and then I'm working on the OKRs for next quarter.
C
The thing that's relevant most for everybody, I think, is that I want to cut out a documentation week. This is actually Eric's suggestion and I'm gonna run with it: we're gonna cut out a week where everybody on the team writes docs, and I talked to Terry, and she's agreed to set aside time each day to do a synchronous review on all of the docs that we're building that whole week — so we can get the input of somebody who knows nothing about it.
C
If we're all doing it that week, I think we'll get all the main things. Like, the main thing is that all of the current resources are bad, so we will at least take them down or replace them with something that is less bad. They won't be complete, but they will not be confusing, which is what they are now. Yes — the documentation should not make people understand it less. I'm setting the bar low, and that's the bar.
E
I'm trying to think of how I can say it. I mean, I think, overall — so there's a lot of good documentation that in some ways is a bit misleading, because it seems like things are more thought out than they are.
E
So, as a consumer, knowing what is safe to use and what isn't would be helpful.
C
I don't remember whose suggestion it was — if it was Rod, or maybe Juan, actually — but it was: we should really have an entry point for each language stack, like the one you're actually gonna write code in, and then that can do a much better job of pointing you at the things that are really solid, the things that you should really rely on, and shuttling away some of the other bits. Because it really varies by language right now, what you should be messing with and what you shouldn't.
E
Yeah, I think that's true too. Like, I came in from a JavaScript point of view, and it was later I found — well, it wouldn't take that long to realize — that Go is kind of, you know, the most mature, advanced thing, and JavaScript has actually kind of lost a step. Things kind of get done there first and trickle down — but actually that's probably not totally true; I think, in general, that's how things generally work in IPFS.
E
Yeah, it might be actually good to say "if you are…" — yeah, I don't know how to deal with the different-language thing, because there are going to be differences between them in terms of maturity and readiness and whatever. But I do know one thing, I think, for people that are just trying to get their heads wrapped around it.
E
Maybe a pointer that says: use this language, and you can do this with it successfully. Because, you know, they may use Rust or something, and they get stuck because it's got so much in flux, right? So I think having a place you know you can go to, and something you can count on, would be good. I don't know; I thought—
B
They've definitely had the same thing, where it's like: oh yeah, the DHT module or something works great in this language, is very experimental in this other language, and just doesn't exist in these four new languages yet. And they had a whole page of the website — I remember you could scroll down pretty long, and it would have lots of lists of maturity-level scores. I think I remember people from that project saying that they had some beefs with the way that they engaged with that, too. So I don't know.
C
Yeah, well — I think it's because there's a gatekeeper on when you decide something is mature. And they have a bunch of stuff that's being built by a lot of other grants, like from Ethereum. So where do you rank the relative maturity of the Rust stack compared to the JS stack, when the same people just aren't working on either one? How do they compare to each other? Yeah.
E
And then you could actually see — maybe something like that could kind of help, since it's new, for IPLD. I don't know.
C
I was just thinking about this — one thing that I did in the new JS multiformats library that you can go check out.
C
Because you have to go and implement all these plugins for hashing functions and codecs, I started to actually put in tables for all the known ones, and that actually looks really nice. So I think in the new website and the new docs we can do that in each language, and they'll all be fairly comparable, right? So, for the multiformats stuff: here are all the hashing functions, and then...
C
Oh, you can see that maybe one language doesn't have as many as the other one. And when we get into IPLD, you could even do it for some of the advanced data structures, right? So: here are our HAMTs, here are the implementations in these different languages — oh, you see, that language doesn't have it; it's probably a bit behind. Or: here are schema-validation libraries and stuff like that, in each one.
C
Cool, all right — good stuff. All right, I'm now done. Anybody else? Yeah.
A
I guess — hey Chris, I'm going to put you on the spot again a little bit. This conversation that we had with Mikeal about the maturity of the flexible byte format: is this something that helps you the way it is right now? Or are there things that are basically missing from your point of view — or have you not had a chance to read it yet?
E
I haven't had a chance — I was kind of waiting for the dust to settle — but I am so close to actually starting on my Rust implementation of what I'm doing, and that's something I'm going to look at right away. I have a busy day tomorrow, but Wednesday I have time again, so I'll probably poke around and see what I can do. My requirements may not be — you know, the most recently updated version I've seen may not be ideal for what I want.
E
So I don't know — it may just be a different set of requirements — but I'm definitely gonna give it a look, so I can report back on Monday, or on the IPLD channel, either way. Awesome.
E
Well, you know — yeah, I see that, but I'm optimistic I'm able to figure it out; we'll see how it goes. I think I've picked up Rust pretty darn well, and — he actually asked for kind of a code review on the multiformats stuff, so I may just start by digging into the code and looking at it, to get a feel for what aspects of Rust they're using and how things are laid out. So we'll see.
C
The JavaScript implementation is not very big — it's a pretty easy data structure to implement. I will say, though, that there isn't a schema-validation library in Rust yet, so that might just be something where you need to be careful, and get a lot of good review, to make sure that you're not breaking the schema anywhere — because I don't think there's anything that's going to check that for you right now.
A
Yeah, CAR files actually work across stacks now — it's kind of awesome. Yeah. So, one more thing from my side, kind of on the heels of what I just said. Chris, I'm sorry.
A
I raised this issue about what ints are in schemas, and we're kind of saying: well, it's just a number. How does this actually gel together with the flexible byte layout, which has a size where it's super important how wide this size is?
A
Basically, we discussed this last time, and we said, yeah, it's kind of implementation-specific. And my question now is: if it is implementation-specific, how do we treat the flexible byte layout, then?
C
So if you look at the data model spec: the data model spec says that, in order to be compliant, you have to support 64-bit integers — sorry, you have to support big integers; it says that in the spec — without losing precision. So that means, for instance, that JSON doesn't actually have a problem with large integers; JavaScript has a problem with large integers. So we can't use the regular JavaScript parser; we have to use one that will recognize them and use big integers.
C
For that reason — or, I think, just use a newer one; I think the latest VMs have JSON implementations that will detect it and use a big number, so that works now — but anyway, yeah. It's sort of the job of the codec implementation to make sure that, when it gets a large number, it puts it into the proper large-number format for that language.
A
So as far as a schema is concerned, our integers are essentially arbitrary-precision? Yeah, okay — that's the part that was missing, because we actually don't say this anywhere in the data model.
B
Yeah, I think the merged docs are insufficient on this. I think our agreement is basically that, yes, we're going to treat things as roughly infinite precision. We've struggled with how exactly to phrase that; maybe we should just say it.
B
Some of the clarifications that I use to make this not keep me up at night: we don't do math in IPLD — and thank god, because that means that we get to care about this a hell of a lot less, and we can safely punt to "that's implementation-specific" much more reliably, because all of the things that we're punting to be implementation-specific are either "it works" or "it should error real hard".
B
There are no undefined transitions where math occurs and it does something weird or something else; it's "do something or halt". And so I assume that that would continue to just apply in the internals of the flexible byte layout spec: if you start processing some of this stuff with a library that doesn't support big enough integers for the data that you're processing, then it should halt.
A
Right, so basically we almost need to define "halt on known out-of-bounds values", kind of thing.
C
Well, I can't think of a case in which we do that. Because we basically have two types of codecs, right? We have IPLD-native codecs, and then we have codecs for things that already exist, which we're just parsing and turning into the data model.
C
The spec that we read is effectively: we're taking this value and then we're representing it in our data model, and our data model says arbitrary precision. And so each of those languages would have to support whatever precision is in each of those specs, and if it can't support it, then it would have to raise an exception. For our specs — our codec specs — we do say you have to support arbitrarily large integers.
C
We say you need to support arbitrarily large numbers, basically, without losing precision. I mean, we should say that; if we don't — well...
A
I'll have to see what they wrote — what the base58 stuff is written against. I'll get back to you.
B
Maybe there's something in the x-series packages — I don't actually know — but I can definitely tell you that none of our codecs, in any of our things, support that right now.
C
Yeah, I mean — because what ends up happening is that this turns into a determinism problem on read/write, right? You read data, you parse it into something that doesn't have sufficient capacity for the integer precision, and then you re-serialize it — and you're not actually serializing the right data, because you've now mutated the integer where you didn't want to.
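The halt-rather-than-mutate rule from earlier in the discussion is easy to express in code. A sketch of what a codec's integer decode might do when its native type can't hold the value — the function name and error message are illustrative, not from any real IPLD codec:

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// decodeInt models the "do something or halt" rule: if the decoder's
// native integer type (int64 here) cannot hold the token without losing
// precision, it errors hard instead of silently clamping, so the value
// can never be mutated on a read/re-serialize round trip.
func decodeInt(token string) (int64, error) {
	v, err := strconv.ParseInt(token, 10, 64)
	if errors.Is(err, strconv.ErrRange) {
		return 0, fmt.Errorf("integer %s exceeds 64-bit precision: halting instead of losing data", token)
	}
	return v, err
}

func main() {
	// Bigger than float64 can represent exactly, but fine for int64.
	v, err := decodeInt("9007199254740993")
	fmt.Println(v, err)

	// 2^127 - 1: does not fit in int64, so the decoder halts.
	_, err = decodeInt("170141183460469231731687303715884105727")
	fmt.Println(err)
}
```

A full solution would fall back to a big-integer type instead of erroring, which is the point made just below: if you're writing the halt path anyway, you may as well write the big-integer path.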
C
Yeah, I mean — yeah, if you want to throw at that spot: if your language just doesn't support it, your language doesn't support it. I think, though, that if you're going to write the code to halt, you might as well just write the code to put it into a big integer.
C
So it's on the list of things that we need to worry about every time we deal with a codec — every time we're taking an existing format and figuring out how to represent it. Whenever we're doing that work, we need to — somewhere, we need to start writing down all the things that you need to go and worry about: you need to worry about map sorting, and you need to worry about things like this.
C
I think the only thing that we may need to think about is: if we want to introduce a feature into schemas that allowed you, in your schema, to add some specificity to your numbers — like, for instance, "this has to be a negative number", or, you know, "this can't be a negative number" — if we wanted to add something like that, it would still be a schema feature, though, and not a codec feature or an FBL feature.
C
It would be a schema feature first, and then maybe we use it in the FBL. But, you know, that's a different conversation, about schema features.
A
Yeah, no, this makes sense. Cool, yeah — this helps a lot. All right, that's all I have. Cool, all right, cool. Well then, bye everybody, and see you next week. Awesome.