From YouTube: 🖧 IPLD weekly Sync 🙌🏽 2020-09-21
Description
A weekly meeting to sync up on all IPLD (https://ipld.io) related topics. It's open for everyone and recorded. https://github.com/ipld/team-mgmt
A
Please put your name on the attendees list on the hackpad. By the way, the hackpad is online again; there was some internet trouble, but it should work again now. So I'll start with myself. Not that much to report: I'm still on the Rust multihash stuff.
A
But in many cases you were 100% sure there wouldn't be an error, yet you still had to unwrap, because theoretically there could be one, just not in those cases. So this led to a few API changes that make things a bit nicer. Along the way I changed almost everything and basically rewrote the whole thing, but now it's more or less back to what it used to be, which is good.
A
So there won't be too many changes when I open a PR, and I think I'm happy with the result now. I'm still making sure that everything works, and then I'm also trying again to integrate with rust-cid and libp2p, because those are kind of my reference libraries for making sure the API stays at least roughly the same. Cool, there's not much else I did on the IPLD side of things. Next on my list is Daniel.
B
Cool. So I continued with the IPLD ADL for the HAMT. I had a helpful call with Rod and Peter to discuss the questions I had about HAMTs in general, and especially about the schema for the one we're trying to build here, so I'm still making progress on that. I've gotten to a point where I actually understand everything I need to write, because before I wasn't familiar with how go-ipld-prime works or how ADLs are actually implemented.
B
I also wrote the first half of the introduction to IPLD in Go, which was part of the docs effort; there's just a little section left as a TODO, so I'll finish that up this week. I also dug myself into a rabbit hole with multicodecs, because I found that the examples for go-ipld-prime, when they use dag-cbor for example, would just use 0x71 instead of doing something like multicodec.DagCbor, or something like that.
B
So I think I'll be code-generating a little Go package with constants for all of those IDs, at least.
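For illustration, here is a sketch of what such a generated constants package might look like; the package and constant names are assumptions, while the numeric values come from the multicodec table:

```go
// Package multicodec: a hypothetical code-generated set of constants, so
// call sites can write multicodec.DagCbor instead of a bare 0x71.
// Values are from https://github.com/multiformats/multicodec.
package multicodec

// Code is a multicodec identifier.
type Code uint64

const (
	Raw     Code = 0x55   // raw binary
	DagPb   Code = 0x70   // MerkleDAG protobuf
	DagCbor Code = 0x71   // MerkleDAG CBOR
	DagJson Code = 0x0129 // MerkleDAG JSON
)
```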
And finally, with Eric, we started writing down ideas for how we could change go-ipld-prime's API in the near future, once things with Filecoin calm down a little bit: things like import paths that don't depend on GitHub, but also other things like removing some unnecessary error returns and so on. And that's it for me.
A
Thanks. Next is Mikeal.
C
Hey, yeah. So I just did a ton of docs stuff: mostly cleaning things up, shuffling them around, writing more documentation, and then a substantial change to the specs site. So there's a new specs website; it has all of the specs and all the content, not just schemas.
C
Before, it was really just for schemas, and the change was really necessary because I need the docs site to link to the specs quite often, so there are improvements going on in both of those places as a result. There are a few new files that landed in the specs repo without a PR, because they just had to go somewhere really quickly and I had to get the site up, so there's some cleanup still to do there. And there's still a little bit left to do on the docs website; we're waiting on infra to flip over the website, but in general the docs website is
C
looking pretty good; it's showable to people. We still need the Go stuff merged in, and we need the Rust stuff merged in, but in general it's looking much, much better than anything we had before. So that's good. I also took on the padding spec; I took that off of Peter's plate and I'm going to write it up. I'm working on it now.
C
If you have any relevant docs, send them my way. I have a markdown file; Eric wrote a quick doc on padding that was in the docs repo, and I moved it over. So I'm working on that. And last week I was poking at this podcast client on DagDB. At some point while building the docs website, I realized that I don't know why it takes so long, and why there's so much code in VuePress, to make a website from markdown files.
C
It seems like a lot, and it seems like it should be really easy to make a website from markdown files. So I started poking at that problem and realized there's actually something really cool you could do here: if you take the markdown files and use YAML front matter, you end up with display data and metadata, actually structured data, together in one unified tree structure.
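The project itself is JavaScript, but the idea is easy to sketch; here is a minimal Go version (the helper name and the yaml.v2 dependency are choices for this example, not part of the project):

```go
package main

import (
	"fmt"
	"strings"

	"gopkg.in/yaml.v2"
)

// splitFrontMatter separates a leading "---"-delimited YAML header from the
// markdown body: structured metadata plus display content in one document.
func splitFrontMatter(doc string) (map[string]interface{}, string, error) {
	meta := map[string]interface{}{}
	if !strings.HasPrefix(doc, "---\n") {
		return meta, doc, nil // no front matter; everything is body
	}
	parts := strings.SplitN(doc[4:], "\n---\n", 2)
	if len(parts) != 2 {
		return meta, doc, nil // unterminated header; treat it all as body
	}
	if err := yaml.Unmarshal([]byte(parts[0]), &meta); err != nil {
		return nil, "", err
	}
	return meta, parts[1], nil
}

func main() {
	doc := "---\ntitle: Hello\ntags: [ipld, dagdb]\n---\n# Hello\n"
	meta, body, _ := splitFrontMatter(doc)
	fmt.Println(meta["title"], strings.TrimSpace(body))
}
```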
C
And then I can take that and stick it in DagDB as a data structure, and see what happens when I start playing with it. I realized you can use that to build out a list of posts, and sitemaps, and all that kind of stuff, straight from the DagDB representation, because it can do all the linking and all the indexing. So I started working on this little project.
C
It basically takes a repo of markdown, optionally with YAML front matter, and turns it into data structures; then there's another piece that takes those data structures and turns them into the website. And then I thought: you could follow other websites, and they could be in your follow feed, and you could start to do some of the social-networking kind of interlinking stuff. So I've been poking at this.
C
We're calling it ipmd, because it's really just markdown plus YAML front matter as a data structure, a website that you can move around. It's kind of cool, and it's using a lot of the GitHub Actions stuff as the back end: all the DagDB data is just stored in there. That's how replicating other people's databases comes about; all of the hosting and everything is already there.
C
It automatically installs an action to do the publishing and things like that. So that's been a fun little thing to work on. It's definitely feeding a lot of stuff back into DagDB and into our primitives; I'm realizing there are things that could be a little easier in all these layers, and I'm fixing them as I go. But yeah, it's a fun little project I've been poking at, and you can feel free to look at it.
C
I think that by the new year everyone on the team will just have their own little personal website that they're publishing with this, because it does make publishing a little website really fun. Like the hackpad: I'm just going to start pushing markdown files to my personal website and you'll see them there, because that's actually easier, and some of the workflows are easier than on some of the websites we currently use for that. So I probably won't use gists anymore;
C
I'll just make those pages on my personal website. It'll be fun. Anyway, I don't know who's next. Eric?
D
Me, yeah. That sounds great, by the way, because I spent a little time this week poking around and trying to renovate my personal website. My current one is, well, decades out of date, and it lacks all of the modern tooling.
D
Anyway, in IPLD golang news: as Daniel has been writing some of the doc stuff, it's kicked me in the shins to try to make some of the things prettier, which has been good. So one of the things I worked on this week is making some porcelain-type functions to help you construct data: they take normal golang types, like maps and structs, scan over them with reflection, and turn them into data model. That's arguably long overdue.
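As a rough illustration of the reflection walk being described (this is not go-ipld-prime's actual builder API; the data model is stood in for by plain maps, lists, and scalars):

```go
package main

import (
	"fmt"
	"reflect"
)

// toDataModel recursively converts plain Go values (maps, slices, structs,
// scalars) into a generic map/list/scalar tree standing in for the IPLD
// data model. The real PR targets go-ipld-prime's node builders; this only
// sketches the reflection walk itself.
func toDataModel(v interface{}) interface{} {
	rv := reflect.ValueOf(v)
	switch rv.Kind() {
	case reflect.Map:
		m := map[string]interface{}{}
		for _, k := range rv.MapKeys() {
			m[fmt.Sprint(k.Interface())] = toDataModel(rv.MapIndex(k).Interface())
		}
		return m
	case reflect.Slice, reflect.Array:
		l := make([]interface{}, rv.Len())
		for i := range l {
			l[i] = toDataModel(rv.Index(i).Interface())
		}
		return l
	case reflect.Struct:
		m := map[string]interface{}{}
		for i := 0; i < rv.NumField(); i++ {
			f := rv.Type().Field(i)
			if f.IsExported() { // unexported fields can't be read reflectively
				m[f.Name] = toDataModel(rv.Field(i).Interface())
			}
		}
		return m
	default:
		return v // ints, strings, bools, etc. pass through as scalars
	}
}

func main() {
	type Post struct {
		Title string
		Tags  []string
	}
	fmt.Println(toDataModel(Post{Title: "hello", Tags: []string{"ipld", "go"}}))
}
```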
D
So there's a PR for that; I'm not quite finished, since you have to handle every case in reflection, and that takes some diligence. The other thing I worked on this week: I took another crack at the whole link loader story.
D
This is something there's been an open issue about in the go-ipld-prime repo for a while, number 55, and it's basically one big question: can this be simpler, please? That's been a tricky subject because it drags together all these different interfaces at once. We want codecs to be separate but all come plugged together, and the multihashes to all be separate but come plugged together, and then all of this meets at the same junction points as the rest of the link stuff.
D
So it's not a direct binding there either, and the idea is that we can then make the whole registry system just one of those function types, essentially the chooser prototype. This should give us all the flexibility we need to build registries that are simple, user-facing, and do the right thing by default, and it also gives other people the ability to make their own versions of those.
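As a sketch of that function-type idea (the names here are illustrative, not go-ipld-prime's actual API): a codec's decode side is just a function, a chooser is a function that picks one by multicodec code, and a registry's lookup method has the same shape as a chooser, so the two are interchangeable.

```go
package main

import (
	"fmt"
	"io"
)

// DecodeFunc is what a codec boils down to in this sketch: bytes in, node out.
type DecodeFunc func(r io.Reader) (interface{}, error)

// DecoderChooser picks a decoder for a multicodec code. Anything with this
// shape works: a closure, a default registry, or a user-built one.
type DecoderChooser func(code uint64) (DecodeFunc, error)

// Registry is a plain map whose Choose method satisfies DecoderChooser,
// making the whole registry system itself one of those function types.
type Registry map[uint64]DecodeFunc

func (reg Registry) Choose(code uint64) (DecodeFunc, error) {
	fn, ok := reg[code]
	if !ok {
		return nil, fmt.Errorf("no decoder registered for codec 0x%x", code)
	}
	return fn, nil
}

func main() {
	reg := Registry{
		0x71: func(io.Reader) (interface{}, error) { return "a dag-cbor node", nil },
	}
	var chooser DecoderChooser = reg.Choose // the registry used as a chooser
	decode, _ := chooser(0x71)
	node, _ := decode(nil)
	fmt.Println(node)
}
```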
D
So we have one place to collect all of those things, without pushing all that implementation effort and tons of helper functions into each implementation of links or codecs or hashes or anything else. This is still just a draft, but it feels a lot simpler, and I think we might want to try it out in the future. So that's a cool thing to have up the sleeve. That's about it for this week.
E
Okay, am I on? Yeah. My week has been all about protobufs, and my head is full of protobufs, so full of protobufs. So I think I'm done. Although, I see that Dale has posted an interesting issue on the... where did he post this? The shipyard.
E
Yeah, he's just posted an interesting issue on the ipfs explorer components, where the web UI and also the explorer are having trouble navigating through certain UnixFS paths, and he's getting an error: "type e is undefined". It looks like it's coming out of ipld-dag-pb, the older protobuf library. So that's something to look into; I'll have a look at that. So, what have I done?
E
So really, the only major change to the schema is making the hash optional, which it is in the format, and it is in Go, so I decided to just make it optional. It would be nice if every link actually had a hash, but it is possible today to construct blocks that don't have one, they just have empty links, and so, while that's not ideal, we should account for those blocks existing.
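Roughly, the shape under discussion looks like this in Go (a sketch only, not the authoritative schema; see pull request 297 in the specs repo for the real thing):

```go
// PBLink mirrors the dag-pb link shape. The point of the change is that
// Hash is optional: blocks with empty links exist in the wild, so decoders
// should tolerate them even if encoders shouldn't produce them.
type PBLink struct {
	Hash  []byte  // optional in the wire format; ideally valid CID bytes
	Name  *string // optional
	Tsize *uint64 // optional
}

// PBNode is the dag-pb node shape: a list of links plus opaque data.
type PBNode struct {
	Links []PBLink
	Data  []byte
}
```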
E
But anyway, discussion on that might be worthwhile if you think that's a bad idea and we should start blocking it, and stop treating those blocks as valid when loading. That's pull request 297 in the specs repo. In there as well, I've added a bunch of other little notes underneath the schema that clarify additional constraints that you can't express within the schema itself, things like:
E
your Links array has to have one or more entries, and there are a few other little things like that in there now. I also added some tests to go-merkledag, where the Go implementation lives; there's a pull request in the notes, and I'll try to get that merged.
E
That PR adds some tests that push at the edges of the format, to see what happens in different cases: what a value ends up as in binary and what it decodes back to, just to see the nature of these round trips, what it means for various shapes to exist at the data model level, and what happens to them when they go through a round trip, so that we can pin down an exact form for these things.
E
And then I've mirrored those tests in JavaScript. There are a couple of cases where I exclude some options. One case is that I'm saying a hash has to be a CID, so a hash that can't be interpreted as a CID should be invalid; and you could use the same argument to say, well, a hash shouldn't be undefined either.
E
Maybe that's something we should do. And there are a couple of other things, like the empty links array: you just can't do that, you can't make such a thing. One thing the new JS implementation does is that it only allows you to encode an object that is of the precise shape it should be, the shape it would end up in if it were to undergo a round trip through the encoding.
E
So you can't make an in-memory version of these things that is invalid, or that would be changed if it went through a round trip. It's really strict about checking those objects, but it also exports a prepare method that you can run as a safety measure, or as a convenience if you were a little bit sloppy about the way you created your objects. It'll do things like making sure you don't have any extraneous properties on your objects.
E
It'll turn strings into bytes, things like that; it'll sort your links list the right way, stuff that is a bit tedious to take care of yourself if you're creating complex things, or if you're just shoving data in and you don't want to make links, you just want to shove bytes in without forming all of this by hand. So there's a prepare method on there to make it easier to deal with a very strict format, and I think that works quite nicely.
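The prepare method itself lives in the JS implementation; to keep the examples here in one language, this Go sketch (reusing the PBLink and PBNode shapes from above) shows the kind of normalization being described. The link-sorting rule shown is inferred from the strictness discussion, not quoted from the spec:

```go
import "sort"

// prepare normalizes a sloppy node into the strict form the encoder expects.
// The real coercions (strings to bytes, dropping extraneous properties) are
// elided; the sort illustrates the "links sorted the right way" step.
func prepare(node *PBNode) {
	nameOf := func(l PBLink) string {
		if l.Name != nil {
			return *l.Name
		}
		return ""
	}
	// Keep links ordered by Name, stably, as the strict form requires.
	sort.SliceStable(node.Links, func(i, j int) bool {
		return nameOf(node.Links[i]) < nameOf(node.Links[j])
	})
}
```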
E
So that's that; I think I'm wrapped up. This question about the hash maybe needs to be addressed, and I'm still unsure about it, but aside from that, that's me. I think I'd like to move on from dag-pb now. Oh, the other thing is, in the JavaScript implementation I wrote a hand-rolled encoder and decoder for the protobufs, because our format really isn't that complicated.
C
We can probably include it in the defaults in the block interface now, if it's that small, too.
E
The way I did it is, the Go version has a code-generated protobuf encoder/decoder in it, because that's just how you do it in Go: you actually generate the code. I pulled that code across and converted it into JavaScript manually, because it's not that complicated, and then I simplified it, because of all the duplication in the varint encoding; I just wrote functions to encode and decode varints, and they're just not complicated.
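For reference, varint helpers of the kind described are only a few lines each. This is a generic unsigned-varint sketch in Go (7 data bits per byte, high bit as the continuation flag, the protobuf/multiformats layout), not a copy of the library's code:

```go
// encodeVarint appends v to buf in unsigned-varint form and returns the result.
func encodeVarint(buf []byte, v uint64) []byte {
	for v >= 0x80 {
		buf = append(buf, byte(v)|0x80) // low 7 bits, continuation bit set
		v >>= 7
	}
	return append(buf, byte(v)) // final byte has the high bit clear
}

// decodeVarint reads one varint from buf, returning the value and the
// number of bytes consumed (0 if the input is truncated).
func decodeVarint(buf []byte) (v uint64, n int) {
	for shift := uint(0); n < len(buf); shift += 7 {
		b := buf[n]
		n++
		v |= uint64(b&0x7f) << shift
		if b < 0x80 {
			return v, n
		}
	}
	return 0, 0
}
```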
E
So it's all inlined, it's all in the one file, there's no dependency for varints or anything, and things are deduplicated. And there is this really interesting case in protobufs where it will allow you to decode sloppy objects: dag-pb will accept objects that have extraneous data that doesn't relate to the format, and in the Go version that data just hangs off a hidden property in the dag-pb struct and is never touched.
E
I've left that in there, and it just gets dropped on the floor at the moment. But it's one of these cases where our formats have all these ways you can create blocks with the same data but different hashes, or hide data in there that doesn't get used. I didn't think dag-pb had that, but yeah, it does.
E
So would I have thought, but you never know.
E
I was in the Go code, messing around with it and testing some of the edges, so I had my head in that code, and I wanted to see what it does in JavaScript. I started porting a little bit of it, and then, why not just do all of it? It's really not hard to convert basic Go, especially Go code that doesn't do too much fancy stuff, into JavaScript.
E
It really is just changing a bit of syntax, and then you get to refactor it into something nicer. The nice thing is that it's really quite efficient in the way it lays out the bytes, and I've managed to find what I think are some pretty good efficiency gains in the byte handling: the way it lays bytes out into the block, and even the way it reads them in.
E
It'll reuse bytes in a fairly efficient way, so I wouldn't be surprised if we get a pretty significant performance boost from this. But I don't really want to spend too much more time testing that kind of stuff.
A
Cool, okay. Does anyone else have any updates or want to share?
F
I have two things real quick. One is that someone posted a question on the IPFS reddit today about IPLD for Bitcoin, and I provided a response, so feel free to review it and let me know if I missed something or if there's a better way to do it; I just linked it in the chat. And then the other thing, while you guys are looking at that, is about the js-graphsync project.
F
What I need is kind of a reference responder and requester for graphsync, and I'm planning to write those in Go, since there's already a library there that works. Obviously, ipfs has responder capabilities, so I could use that, but it's just a big dependency.
F
What are your thoughts on how I should structure this? Should I put the Go-based code in a separate repo and pull it in as an external git submodule? Should I put those Go things in some other directory of my JavaScript package or repo, and if so, what would be the directory structure for that? My main goal is to write JavaScript, but I need some Go code to use as verification for the JavaScript.
F
Yeah, probably, that's all my immediate needs are. I need a CLI to basically act as a requester against a JavaScript responder, and then vice versa, a JavaScript requester against a Go responder. So, a very simple wrapper around go-graphsync that will issue a request; I could even hard-code the CID and all that kind of stuff if I wanted to.
E
Yeah, I've had a few occasions where I've had to do things like that, and I've tried variations. I've got one repo where I've actually got JavaScript and Go mirrored side by side for publishing, but on the Go side it's not an ideal way of consuming it.
E
In other cases you just stick it into your test directory as a separate subdirectory, and then you can npm-ignore it for publishing. The nice thing about Go is that you don't bloat your repo by compiling it, that all happens elsewhere, so your code just sits there alone, and compiling doesn't leave you with tons of object files
E
that you need to clean up later. But maybe the ideal option, if you're going to do a fair bit of it, is actually to set up a separate project and put it in your Go source directory, because one of the problems with Go is that it's not happy if you don't have things set up in the right directory layout under the Go source tree.
A
All right, any other updates, questions, or things to discuss?
G
So I just have a question about encoding formats, converting CBOR to JSON, and I'm curious how conformant the IPLD spec is there. There's a new IETF draft updating RFC 7049, and I think a lot of the edge cases are not relevant for my use case, but I was curious whether you guys are following the latest draft of the IETF RFC, to make sure it actually follows the JSON serialization recommendations, which are non-normative.
C
Here's the link. You cut out a little bit at the end there. Okay, so it's a new CBOR spec?
G
It's a new draft updating the CBOR spec, I think. As I mentioned last week, I had some challenges with...
A
Well, we don't follow it exactly, but I at least took it sometimes as an inspiration for how the JSON should look, or how we structure things. At the very least it's an inspiration when we're defining how dag-cbor works, to make sure that when we do something specific in dag-cbor, it also applies when you then convert it to JSON and back.
A
Basically, the recommendations in that section are quite good for seeing what a trimmed-down CBOR looks like, and I took them as inspiration when thinking about the data model and so on. But I don't think we really follow it closely. If there are any updates, though, I think we should check them out.
G
This is the update to the DID document's CBOR section that I mentioned last week. They want me to come up with very specific guidelines for lossless conversion between CBOR and JSON, especially when it has to deal with binary representation in the CBOR, transforming that into, let's say, a base16 or base64 encoding of that binary blob, since binary isn't supported in JSON. In CBOR there are tags, tags 21, 22 and 23, to do this, but there is no tag for base58.
G
In addition to all that, I'm actually also recommending a public-key multicodec attribute property. They don't like a link, so it's not going to be a slash-CID; it's just going to be a CID, and that CID is self-describing: a multicodec that says this is base58 encoded, it's version 1, it's an ed25519 public key, and then the bytes follow in that encoding.
G
Mostly it's just corporate jousting, people trying to one-up each other, and I'm trying to make sure that the result stays basically easy.
G
What I mentioned last time is that they don't like the plain string representation, so they have this whole convoluted naming structure, say publicKeyHex, where the thing that follows is supposed to be a hexadecimal representation of a public key of a particular type, and so on. It's over-engineering if you ask me, but I'm trying to be conformant and give some recommendations for lossless conversion between CBOR and JSON.
A
Yeah, okay. I guess, because base58 is kind of a pain, at least in my experience, there aren't that many encoders for it, whereas for base64 there are so many encoders out there that it's super straightforward. So I would probably use base64 or base32.
G
Yeah, but basically they want that, if you do represent it as binary in the CBOR, these tags give you hints for when it's converted to JSON: it should be converted to, say, base64. And I need to do the same thing for all the different encodings, including base58.
G
Right, yeah. Well, this is mostly for CBOR outside of IPLD, to fit their document on lossless conversion. I'm certainly working on it all in IPLD, but I need to have it interoperable with everyone else's approach. So, major headache.
A
Yeah, I think in the dag-json specification we also use base64, if I recall correctly. I can't quite recall.
A
All right, anything else?