From YouTube: IETF101-CBOR-20180320-1330
Description
CBOR meeting session at IETF101
2018/03/20 1330
https://datatracker.ietf.org/meeting/101/proceedings/
B: Okay, let's move on. A quick status update on the working group documents before Carsten takes over. For the CDDL draft, there was a working group last call that ended on the 13th of March, and this created some review comments that are going to be discussed today. The CBOR 7049bis is a work in progress; we are at version -02. The implementation matrix was updated, so thank you to those who provided input to that.
D: I don't have to say this, because Francesca already said it, so let's go right into CDDL now. We had a pretty good discussion at IETF 100, and what we did after IETF 100 was to introduce cuts in maps for one specific application. So we are not introducing the general concept of cuts, but just for that one application.
D: So we still have some time to tweak this for CDDL 1.1 or something. We had a problem in Singapore about regular expressions, and the solution that came up after the meeting was using XSD regular expressions. They are a little bit weird, but which regular expression variant isn't? There is a little bit of text in the document that explains what kinds of pitfalls come with that. The main problem is that \d and \w are actually Unicode references, so you might get Indian numbers or Japanese numbers in your wonderful identifiers if you don't pay attention. And the third thing is that we defined matching rules in Appendix C. So this is a summary of what's going on there. There were also a few editorial things: we were trying to fix the terminology a little, and we probably have to take another round at the terminology later; there were some fixes around the examples; and, as I was reminded this morning, there are still some cobwebs in there, so some work to do there as well. I also generated a freezer document.
D: There is a proposal that maybe we should allow ABNF for specifying text strings in our documents, and there are ideas of adding a module superstructure. All of these can be done later; that's why they are in the freezer.
Now, on the current document: we have lots of good editorial comments, which is code for saying the text isn't that great editorially yet, even after rounds of improvement, and I'm wondering when we reach the point of diminishing returns.
D: Okay, so there's lots of work to do to minimize this, but I also want to warn people that this will not be Shakespeare at the end of the process. So, on to the technical issues: the main thing that gives people some pause is the map matching. So let's look at the map matching issue.
D: Linearity is something we only use for array matching, where we process the elements of the array just as we would process the characters in a language that we match with a grammar. For maps, the matching from the grammar defined by a group actually always picks any member of the map, and that is maybe slightly unfamiliar, but it seems to work quite well implementation-wise. This means you actually drive the parser from the grammar, not from the text, as you would in a normal parser.
D: That is kind of counteracted by the wildcard match, so we pulled an old proposal out of the drawer: adding cuts to the language. Again, we're not doing the entire thing; we are just doing a subset for the maps. The idea of the cut is that once something has matched, you are not going back to matching other things. You are committed to that match, and exactly that works for this case as well.
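The committed-match behavior can be sketched in a few lines. This is an illustrative toy only, not the draft's algorithm: the function name, the hypothetical rule `{? 1 => "a", * int => any}`, and the matching loop are all invented here.

```python
def match_map(data, cut):
    """Toy matcher for the hypothetical rule {? 1 => "a", * int => any}."""
    for key, value in data.items():
        if key == 1 and value == "a":
            continue          # matches the specific entry 1 => "a"
        if key == 1 and cut:
            return False      # cut: the key matched, so we are committed;
                              # a non-matching value is a hard failure
        if not isinstance(key, int):
            return False      # wildcard * int => any takes everything else
    return True

print(match_map({1: "a", 2: 99}, cut=True))   # everything matches
print(match_map({1: "oops"}, cut=False))      # wildcard absorbs the entry
print(match_map({1: "oops"}, cut=True))       # cut forbids the fallback
```

Without the cut, a failed value match can fall back to the wildcard; with the cut, matching the key commits the entry to that rule.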
D: One important prerequisite: for a cut to do anything, we have to have a match; it doesn't do anything if there is no match. And some of the confusion was about the occurrence indicators that can appear around something that uses the cut: how often does this pair — say, that the key 4 really maps to a text string — occur?
D: There's currently no way to write that down, so the occurrence indicator would say we have only one match, and all the other numbers between four and seven would go into this wildcard. That's a limitation of this approach — not one that does harm, perhaps, but maybe something that needs to be documented better than it is now. So what do we need to do? This is being discussed in GitHub issues and so on. Are the remaining comments on the editorial side, or do we actually need additional technical changes?
H: I believe that the semantics of group matching and occurrence matching are not specified well — nearly not at all. They're kind of waved at, but not specified, and one way to specify them means that the cut indicator on the last slide has no effect; I'm not sure what specification you intend that would cause it to have an effect. I think that's a technical problem, not an editorial problem: we need a description of what causes a map to match.
H: Basically, with this matching, if occurrence indicators apply by saying you get a non-deterministic expansion of the occurrence indicator, then the occurrence indicator could easily appear zero times here, causing the cut to be ignored. We need a precise specification of that matching algorithm in order to have a reasonable RFC.
D: I can give you the precise specification — this is the parsing expression grammar rationale — but you probably won't be happy with that. The parsing expression grammar is greedy, so it's trying to match as much as possible, so the entry for 4 would indeed match. If you view this as a non-deterministic automaton, then indeed you have the problem.
I: I'd actually like to modify the ordering of this thing. Basically, if you have multiple things in a row, you start by saying you're going to match the things which are most specific in terms of keys first: so you would first match the known keys, then everything that's a string, then everything that's a number, and then everything that's `any`.
D: So the specification writer can do that — or just…?
H: Yeah — this is Jeffrey Yasskin. I don't think it's possible in general to say this type is more specific than this other type. You can say it for a float and a uint, of course, but since there are choice types, they can just overlap and not be more specific than each other.
I: I'll answer that one first: I would basically say anything that's a value, then types — which would be the constructed types — then anything that's `any`, in that order. Okay — Sean Leonard. I want to express that I am opposed to introducing cuts in CDDL 1.0. Cuts can transform a context-free grammar into a context-sensitive one. An alternative, subsets or constraints, was sketched out in Singapore, from an editorial point of view.
D: That is taking the number 4 and smearing it out to the other places that might possibly also match it. For a specification writer, this means a lot of tedious work — copying all the things, making sure all the cases are actually hit, and so on — and I don't think that's a good thing for a specification language. It should be possible to write concise specifications.
H: Jeffrey Yasskin. I think it's useful to have the colon syntax act as a cut; I'm not sure that we also need the cut syntax — that seems to have some more general problems. If we can do just `:`, maybe that's easier to specify.
K: We had a long discussion about introducing cuts at the last moment and effectively how CDDL is used, and this is, I think, the point Carsten is trying to bring across here. The expectation is the behavior of cuts — there's a syntactic connotation, as in the second example — and that is what we experienced. So if that is not the case, this is very important to know, because it would be contradicting what we got as feedback in the last months.
K: If you look at the last example, I think to most of you in the room it seems clear that you want 4 to be text, and that is the only point here. So if we do not want a simplistic sentence in cut notation like the last example — because it is intuitive how cuts work — we really have to decide this very, very soon.
K: I think if you look at the last example, it is very obvious what the designer of the CDDL wants. Maybe we should just try to agree, or not agree, on the last example and its meaning as depicted — that's the important thing. And if someone else thinks it is unintuitive for an entry other than 4 to be expected to be text, then we should make that very, very openly clear.
I: Sean Leonard. So the point is, what you want to say is: 4 is text only, and all the other entries are something else, right? This slide says `uint`, and that means all uints except for 4. What happens when you later want to say 5 is only a byte string? You have to put it right at the top, but you can't put it at the top if you're supposed to be putting it at the bottom.
I: Unfortunately, I wish you had the slide that I thought I was seeing this morning, which was my example: if you build a group with cuts, you can end up appending things at the bottom, below the `* uint => any`, which are never enforced. Look — if somebody does an additional spec, they're going to expect it to be enforced, but it's never going to be checked by anything that uses the grammar to check, and that's one of the reasons I came up.
D: Okay. Now, there are two reasons why I don't like the sorting mechanism. First of all, I think the spec writer should be able to put things in the sequence in which the spec writer wants them considered. But the main thing is, I really don't know how to do this ordering: since values are types, I can't even parse the proposal that you gave.
I: Because this stuff is going to potentially be appended at the bottom wholesale — copy and paste: they say, include this spec from that document and append this. I want to put it in the front, because I want the rule to be the same; I can't put it at the bottom, because I'm going to have the wrong order for the cuts. So it's like I now have to reproduce the module over there in my document.
I: The way that constraints work is that you say, with appropriate syntax, `* uint => any` first, but then in subsequent parts of the spec you can identify specific instances of that `* uint => any` — for example, that for 4 it is text, and for 5 it is bstr. He refers to a document he has produced; the precise nature of the syntax requires a bit more fleshing out, though. Yes.
D: Yeah, so let me try to summarize. There are some proposals that the grammar should be treated as ordered — the approach would reorder things before matching on maps. There are some proposals to have the colon shortcut but not the explicit cut operator. And there are some proposals to introduce something like constraints, which we would probably have to start developing at this point, and which would still have the problem that you have to repeat the same information in multiple places. So maybe we can take this to the list.
D: Okay, next item: operator precedence. Now, I'm not aware of any non-trivial computer language where people don't discuss operator precedence, so yes, we have our share of this here as well. The operator precedence that we have right now is quite logical when you are always thinking in terms of groups and types, but if you don't, it may be surprising. For instance, this thing here, `+ a / b`, means you can have multiple a's and b's.
D: It does not mean you can have multiple a's or else a single b, and that may be surprising to people who think about this as a group. But really it's not a group: the thing after the plus is a type choice. That's why we have slash and double slash as two different choice operators. And while this is maybe surprising in this case, the same thing in a map context is suddenly very familiar — we say we have an optional integer-or-text member, and it all makes a lot of sense. I think most of us who have written specifications actually have something like this in their specifications. So this is really a very normal thing to do, but it's essentially the same operator precedence: it just says a type choice can occur within a group entry without parentheses.
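As an analogy only — CDDL is not a regex language, and the `a`/`b` rule names here are invented — the same precedence distinction shows up with regular expressions:

```python
import re

# "+ (a / b)": the occurrence applies to the whole type choice,
# so a's and b's may mix.
both_forms = re.compile(r"^(?:a|b)+$")

# "(+ a) / b": the alternative reading (several a's OR a single b),
# which is NOT what the CDDL expression means.
alt_only = re.compile(r"^(?:a+|b)$")

print(bool(both_forms.match("abab")))  # mixing allowed
print(bool(alt_only.match("abab")))    # mixing rejected
```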
D: So I'm not sure that this isn't just a matter of getting used to the language, and to the fact that groups and types are two different things in the language. I have thought about ways to modify the operator precedence, and I feel like I would be prepared to propose that, but only on a very, very small scale.
D: General changes here are probably not good; they too easily lead to situations like this. There are expressions that today are a syntax error, and if we changed the operator precedence from one to the other, we would just make them legal to write. Sometimes a syntax error is a good thing, because it reminds the implementer to put in the parentheses; but in this case it's just an annoyance.
D: So I think what we should do is add some more text to Section 3.11, which is the one that defines the operator precedence, and maybe also include a little bit of text asking the specification writer to use a style that doesn't actually rely on this precedence. Just as in ABNF, where people are continuously confused when they mix concatenation and choice, we should ask people to use parentheses when they want to mix groups and types in these ways.
K: This is Henk again. While I understand the problem and the necessity, I despair at the noise of all the parentheses, because I think this will happen all over the place, so there will be very many more parentheses after that. I'm just thinking of readability here, and our promise that it would be easy to read and write; I think we're moving away from that. But I understand the point that it might be a necessity.
D: It need not be parentheses, by the way. The reason why we are seeing this so rarely is that people actually mostly define a type — give the choice a name — before putting it into a group, so they are not actually writing this inline; they're providing additional documentation by naming the choice before using it. So I think in practice we can still make the recommendation without littering up the specification too much. We should check.
D: Okay, let's move on. I think we have covered our item four, which was essentially about dead code, with that example, unfortunately. Generally, I don't think dead code should lead to hard errors, because dead code may just be the result of applying abstractions in the right way, and if you create traps in applying abstractions, you are not better off.
D: So I think, with the discussion, we kind of have agreement on this; the fix is maybe an editorial comment. But just to make sure that people are aware of this: there is this mechanism called generics, and it can be used for both groups and types. The grammar actually says that, but the text doesn't, so we should say that.
D: And on the precedence table: actually, looking more closely, the unary operators — ampersand and tilde — probably should get their own precedence, so there's a problem there as well. Finally, someone found a problem in the grammar for the unwrap operator; essentially there was a copy-and-paste error in the document. It says groupname there, but that is of course nonsense, because unwrap applies to types. It didn't cause any problems during testing, because groupname and typename have identical right-hand sides.
D: Finally, maybe a little bit more than an editorial comment: there have been several editorial comments that essentially pointed out that we are using terms confusingly and inconsistently. For instance, there is the term "entry", which is defined in a way that makes you think it is about a member and not an entry. I think we should actually make very visible in the document that we have terms for the CBOR side and terms for the CDDL side, and that these terms never mix.
H: Jeffrey Yasskin. I wanted to make a very general comment: the CDDL spec is mostly written as a tutorial for how to use CDDL, rather than as a specification for when CBOR items match CDDL rules, and I think before it goes to the IESG it should be written as that specification.
H: Appendix C starts down that route, but it is not complete — it doesn't define all of the elements of CDDL — and it is also written rather more concisely than it should be, because I think it's trying to be a summary of the rest of the spec rather than being the specification itself. So I would like to expand Appendix C and make that be kind of the main body of the spec, and the current main body should be a tutorial on how to do it.
H: In some cases, like group matching, I don't even know the algorithm that Carsten has in mind, so that definitely has to be specified. For some of the other things — like what `..` does, and unwrapping — it's clear enough, I think, that I at least am capable of writing the specification, but it takes a fair amount of time. So, for instance, I might be able to sketch something in, I don't know, a month — maybe even with constraints — but actually getting it exactly right would take longer.
G: This is Joe Hildebrand, from the floor. I stood up to say roughly that, which was: I do see a ton of value in the specification being written the way you just said. One of the things we're trying to do is get a stake in the ground, so that for this conversation that's going on about JCR, etc., there's an actual RFC number that's the start of that conversation, from our point of view here at least. That was one of the things that I thought we had sort of talked about.
H: Jeffrey Yasskin again. Having the CDDL spec the way it is kind of limits where we can use it; for instance, I get objections when using it in web specs, because it's not as precise as web specifications need to be. But clearly there is value in getting the sketch nailed down, and it's fairly clear what you mean when you write CDDL, given the current draft. So it's definitely not "over my dead body, we must not publish this".
G: That's actually super helpful to hear. So, does anybody feel more strongly that we should stop and do this work now, before we publish a 1.0? Just looking to see if anybody's raising their hand or going to the mic… all right. So it seems like a reasonable path that we can sort of agree to, knowing that there is still work to do.
G: This is a first grab. We already knew that we were going to do a bis on this anyway, because there are a couple of features we wanted to add, and at that point the working group could decide to take on a more structural rewrite at the same time. Alexey, are you comfortable with that approach?
D: So the other document that this working group works on is the CBOR spec, and just to remind people: this is about taking it to Internet Standard level. It's not about futzing around; it is about learning from the first five years of using CBOR. Improvements in specification quality by now go way beyond fixing errata.
D: This is an old idea, so I think we are doing some good things here, and if you want to know what taking something to Internet Standard means, look into RFC 6410, which documents this — and we are really lucky that we don't have to go through the old process. Just a quick update: there are more than 45 implementations.
D: Now, we invented a few terms, and one is the term "generic data model", which is essentially the set of data items that can be represented in CBOR; we split this into a basic generic data model and the extension points. I think one of the successes of CBOR so far has been that we do have those extension points. And the whole point of being very specific about this is that we want to enable the implementation of generic serialization implementations — getting this ecosystem out there.
D: Having these 45-plus implementations helps, and if they are to be interoperable, write down what you're thinking about the data model. One thing, for instance, we decided: undefined is a nice-to-have, but it's not actually expected to be there everywhere. So when you write your CDDL spec for something, you may want to limit the use of undefined to cases where you actually need it.
D: So what did we do in this edition? Well, first of all, we accidentally duplicated the data model text — and, worse, we didn't apply changes to both copies, so we have a little bit of editorial work to do; that's going to come back to us. But I think, more importantly, we can now make use of the data model terminology: there are a few places that are already improved with it, and I would expect the next version to have more of these places.
D: My typical observation is that we have learned that people who are using CBOR want to separate the world of integers from the world of floating-point values. There is one paragraph in 7049 which essentially declares the equivalence of integers and floating-point values, and that paragraph is gone. So that is a technical change. It's not really a change to the way CBOR was being used, because people have been doing this all the time; they simply ignored what was in there. But we now take heed of that.
D: This means, for instance, that places like equivalence now separate integers and floating points, and there are a few more places where we probably have to make sure that we maintain that separation. Now, it's very clear that there are environments that cannot do this separation very easily, and the fact that environments are different is not something we are going to fix. So there needs to be some attention to this fact when you actually define an application of this specific data model, and we may want to expand on that.
G: Joe Hildebrand, from the floor. In the JSON working group we had all those notes about: if you go down this path, you might end up with something that's non-interoperable — like depending upon the difference between these things, or whatever. I'm wondering if we're going to need some of those notes.
G: In the JSON RFC it says, like: if you use something that looks like an integer but is more than 2^53, then you're going to have a bad day, because there are a lot of implementations that don't process integers that are larger. Are we going to need some of those kinds of instructions? That's my question.
D: The kind of instruction would be slightly different, because we already separate those in the serialization. So it's not the situation as in JSON, where you have a number and you need to find out how you are going to process this number in your implementation — is this "10e1" an integer or a floating-point number? That's a typical problem you have when interpreting JSON; with integers and floating points kept separate, we don't have it.
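The "10e1" point can be checked directly with Python's stock JSON parser (shown purely as an illustration; nothing CBOR-specific here):

```python
import json

# JSON has a single "number" syntax, so the receiver decides the type:
value = json.loads("10e1")
print(value)                   # 100.0
print(isinstance(value, float))  # True: integral value, parsed as a float
```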
D: But there are some applications that actually can make use of a unique representation for each potential data item, so let's help these people. So again, the question of course is: can there be generic serialization for canonical CBOR? And generally, canonicalization has certain application-dependent aspects to it.
D: There are things like key equivalence that only have a meaning at the application level, so one has to be careful about that, and there are other places — like floating-point representation — where it really depends on the shape of your application what you actually want. So we do want to encourage generic encoder writers not to ignore it when they are asked for canonicalization, and how can we help them with that? The idea is that we, as this slide says, provide some recommendations that an application can use to choose its canonicalization.
D: And that's all that is in 7049, if you read it closely. Now, the rules that we originally picked were kind of leaky: there were quite a few places where we didn't really say anything at all. Also, there's one thing that implementers have told us — in pretty subtle terms — was not so great, which is the key ordering.
D: So the recommendation for the key ordering will change to bytewise lexicographic ordering. I think we have to define what that means, but we also keep the old recommendation in there for people who want to reference it, because they already have an application standard that is based on the old recommendation. So the old rule will stay in, but the recommendation will be for bytewise lexicographic ordering — and this is the change I'm most concerned about.
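The difference between the two orderings can be seen with two hand-encoded keys (the byte strings below are standard CBOR encodings; the variable names are ours): under RFC 7049's length-first rule, a two-byte key sorts before a three-byte one, while plain bytewise comparison can reverse that.

```python
enc_1000 = bytes([0x19, 0x03, 0xE8])  # unsigned int 1000: three bytes
enc_z    = bytes([0x61, 0x7A])        # text string "z": two bytes

keys = [enc_z, enc_1000]

# RFC 7049 canonical order: shorter encoding first, ties broken bytewise.
length_first = sorted(keys, key=lambda k: (len(k), k))

# Proposed replacement: plain bytewise lexicographic order.
bytewise = sorted(keys)

print(length_first)  # "z" first (it is shorter)
print(bytewise)      # 1000 first (0x19 sorts before 0x61)
```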
D: Yeah, but that's true of all integers: integers in CBOR are 64 bits, but we represent them in 32 bits — or fewer — if they fit. So how is that different from bignums? The question is: is there ever a situation in which the choice of encoding makes you take a different application path for the same number?
H: This is Jeffrey Yasskin. Two examples from the WebAuthn/FIDO security key space. Adam Langley tested a bunch of security keys for their attestation output and discovered that many of them had bugs in the lengths they gave to the numbers: if a number was a byte shorter, many of them forgot to shorten the DER representation. So with our preference for the canonical encoding, are we going to introduce those bugs into implementations of CBOR-based protocols?
H: And regarding shortest encoding for the floats, they yelled at me; they said: make it 64-bit floats, because that's simpler to implement. Now, I don't feel that strongly, because this is just a preference — individual applications can make the other decision when it's right for them — but in both cases the fixed-length encoding seemed to be easier to intuit.
J: Matthew Miller. So, kind of going back a little bit to the cryptographic context: especially if it's used as a counter, the semantics are that it is treated as a counter, but its syntax is expected to be a set of bits of exactly this length, and losing that information can be catastrophic — to decrypting, for instance.
D: There is one case where we actually are using a CBOR encoding of an integer where we always wanted it to be five bytes, and we just represented it as a byte string. So on the CBOR level it's a byte string; that the application then turns this into an integer is not visible at the serialization level. That would be my way to handle these things that really need to be constant-size for some reason.
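A sketch of that workaround (hand-rolled bytes, not a CBOR library): the five-byte counter travels as a byte string, so its size is fixed at the data-model level rather than being a serialization detail.

```python
counter = 7

# 0x45 = major type 2 (byte string) with length 5, then the payload bytes.
payload = bytes([0x45]) + counter.to_bytes(5, "big")
print(payload.hex())  # 450000000007

# The application, not the CBOR layer, reinterprets the bytes as a number:
print(int.from_bytes(payload[1:], "big"))  # 7
```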
D: So that may be a somewhat more expansive answer than the question called for. One other thing that a decoder may want to do is check canonicalization — look at the stream and see whether it's actually canonically encoded, which trivially may be done by re-encoding it — and you may want to put this into your decoder, so that's another function for a decoder for canonical encoding. Did that answer your question? — It did. It did not satisfy me, but it did answer it.
H: This is Jeffrey Yasskin again, in response to Joe's comment. I got advice from Adam Langley that all of the specs that I do should specify canonical output, in order to facilitate testing — because then you only have to assert that your bytes are exactly these, not that they are in some set — and that the decoder should enforce canonicalization, so that, again, the encoders are constrained to produce that canonical output.
H: It is totally plausible for this document to not say what the canonical encoding should be. A change that I wrote gives kind of a base canonicalization for the basic generic data model — but not the extended generic data model — and gives that a name, so that other specifications can refer to it when they're defining their own canonicalization, and I think that's sufficient. We don't have to express preferences about how to canonicalize bignums or floats; we just say other specifications must specify it.
I: This is a comment from Sean, I guess. The point is: to what extent is CBOR type and tag information supposed to be preserved when serializing between different implementations? — This is Jim now. I tend to agree with that question: are you trying to canonicalize the data model, or are you trying to canonicalize the CBOR structure?
D: So indeed, in the data model an integer is an integer, and the data model doesn't tell you whether you need to encode it with a particular length field or immediately in the initial byte. The encoder actually has a choice, and there's also an assumption that a decoder discards the information about which length was actually used, because it's none of the business of the application to look at that length.
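That encoder choice can be sketched minimally (a toy for major type 0 only, not a real CBOR implementation): the same integer can be written in several lengths, and the preferred form is the shortest.

```python
def encode_uint(n, length=None):
    """Encode an unsigned integer as CBOR major type 0; `length` forces a
    longer (still well-formed) form than the preferred shortest one."""
    if length is None:
        if n < 24:
            return bytes([n])  # value fits in the initial byte
        for ai, size in ((24, 1), (25, 2), (26, 4), (27, 8)):
            if n < 1 << (8 * size):
                return bytes([ai]) + n.to_bytes(size, "big")
        raise ValueError("too large for major type 0")
    ai = {1: 24, 2: 25, 4: 26, 8: 27}[length]
    return bytes([ai]) + n.to_bytes(length, "big")

print(encode_uint(10).hex())            # 0a   (preferred form)
print(encode_uint(10, length=2).hex())  # 19000a (same value, longer form)
```

Both byte strings decode to the data-model integer 10; a decoder that discards the length used treats them identically.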
D: That's a pretty strong statement, but I think it is really important to enable generic decoders, because if a weird application for some reason starts to depend on serialization details, then you need a completely different decoder — which, by the way, will be slower by a factor of three than one that just discards this information. So I think it's important to be able to discard irrelevant information in a decoder. Now, whether that applies to integer-versus-bignum we certainly can discuss; I personally believe it does, but the specification currently takes no position.
G: This is Joe Hildebrand, from the floor, going back to the question that was asked from the jabber room. I do think it is a property that, if something is in canonical form as bytes on the wire, and I get those bytes and I parse them and I regenerate canonical form, then 100% of the time those bytes have to match exactly, no matter what the canonicalization is. I mean, that's what canonical means, right? Without that property we have nothing. That's not exactly the question that was asked, but I think that's the crux.
F: Given that — and I've just been thinking about what you were saying, Carsten, about decoders throwing away unimportant information — there are examples, and I've certainly seen them, and you and I talked about this the first time around, where decoders throw things away and then make an assumption about what they're looking at. I think it's important for us to keep this in mind.
F: You know, from earlier: I think we're now in the same rabbit hole again, and I think what we should do for this document is to say "canonicalization is hard" — or "OMG", as on the previous slide. We are not going to recommend things; we are going to give some observations about canonicalization, but you make up your own rules — and you're gonna cry.
H: This is Jeffrey Yasskin. I think I would heartily endorse a statement that generic canonicalization is a bad idea, because the generic thing cannot tell whether your bignum mattered — whether it mattered that it was a bignum. But I think protocol-specific canonicalization is important, whether in this document or another one. Thanks.
J: Matthew Miller. Look, I mean, ultimately this data format has some pretty strong ideas of what types are, as opposed to JSON, where we absolutely needed to do these things or you were going to run into big problems. And those big problems weren't so much that two different things might have the same core meaning but be serialized differently — they would blow up. From everything I've observed, that's not the case here.
G: This is Joe Hildebrand from the floor again. I believe that a world that had 20 different CBOR canonicalization mechanisms, each of which a generic library probably had to implement separately from one another, with all their minor twiddles, would be a world that is strictly worse than where we are today.
G: XDR, what have you, oh yeah, and ASN.1, like that? That is an anti-pattern. And I get that those are encoding rules and not canonicalization rules, but they're related; the ways that that fails are related: the myriad number of little bits that I need to get right in an ASN.1 system in order to parse their entire world and generate their entire world. And I really, really don't want that. So I would be okay with saying:
G: One of three things. I'd be fine with number one: never canonicalize, it's evil. Number two: here is a canonicalization mechanism in this doc; or number three: here is one in a different doc. Those are the three things. I would not be fine with number four, which is: hey, canonicalization might be interesting, here's some stuff you might want to consider, go write your own; because there'd be many of these, and I don't think that would fit.
I: Channeling Sean Leonard: my first impression is that there's a big distinction between equivalence within major data types, i.e., different encodings of the int 1 when they're both ints, and equivalence across tagged data types, so that'd be, you know, a bignum 1 versus an int 1, because of the bignum tag.
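Sean's distinction can be made concrete with two CBOR spellings of the number 1. The byte values follow RFC 7049 (tag 2 is the unsigned-bignum tag); the decoder is a toy written only for this illustration, not a real library's API.

```python
# CBOR can carry the integer 1 either as major-type-0 (one byte, 0x01)
# or as a tag 2 bignum wrapping the byte string 0x01 (0xC2 0x41 0x01).
INT_ONE = b"\x01"
BIGNUM_ONE = b"\xc2\x41\x01"   # tag(2) + bstr(1 byte: 0x01)

def decode_one(data: bytes) -> int:
    """Toy decoder for just these two shapes, showing value equivalence
    (both mean 1) versus the preserved tag distinction on the wire."""
    if data[0] == 0xC2:                      # tag 2: unsigned bignum
        length = data[1] & 0x1F              # short byte-string length
        return int.from_bytes(data[2 : 2 + length], "big")
    return data[0]                           # small major-type-0 int

assert decode_one(INT_ONE) == decode_one(BIGNUM_ONE) == 1
# ...but the wire bytes differ, so a canonicalizer must decide whether
# the bignum tag "mattered" -- exactly the protocol-specific question.
assert INT_ONE != BIGNUM_ONE
```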
D: Okay, so I think we still have to decide whether this becomes a separate document or is done in the main document. I think all the other questions remain on the table, by the way, so I think we can discuss the technical issues independent of the procedural issue.
You know, I'm a fan of batteries included, so you know where my preference would be; but if it's too hard to fix this in a reasonable timeframe...
H: I have a patch to improve some other bits of the CBOR definition, but I think that the rest of the spec is fairly close to ready, and from the discussion here it sounds like canonicalization is not. So, just to make progress easier, I suggest that we pull canonicalization out into its own document now and proceed on them in parallel.
M: The impression I got from the room is that it's in between strongly saying that it's a bad idea, which is one or two sentences, saying very little, or doing it in a separate document. If people can agree on saying very little, or strongly advising against it, hopefully you can do it quickly; if you want to do anything more, you should do a separate doc.
F: Paul Hoffman. Thinking of process here: I think that the vast majority, if not everything else, in this document has strong consensus, and this doesn't. To me that calls for a separate document. And in moving to full standard, one of the things that's acceptable is to split out stuff from the early version that in fact didn't work; in this case, "didn't work" means it didn't have as strong a consensus as everything else, and we pull it out.
D: Yes. So, since we already have the text, expect an internet draft for the separate document tomorrow morning, so we can start working on it. But I think what I'm taking home is that splitting it out is the best way. And I think Jeffrey gave an example of why even including strong language about the benefits of deterministic encoding, which is the term I prefer over canonicalization, is a useful thing, in cases where we don't have better [tools] for it.
D: No, this is just a reminder that we have this implementation matrix; Francesca already mentioned it. Once we all have an implementation, we'll want to look at this again and see what we can fill in. Yeah, it's not a formal requirement, but I think it's just prudent, just sanity checking, to do this at this point in time.
H: This is Jeffrey Yasskin, not directly related to any of the slides. In making changes to cbor-bis over the last period, I think several people sent pull requests and discussed them a little bit on the list, and then, kind of all at the end, you merged them all. And it was hard to tell whether consensus had been reached, whether they could have been merged earlier; and of course earlier merging makes it easier to write later ones without conflicts, and it's a conflict which caused the duplication of the data model section.
B: In general, we said that pull requests for big parts of the document should go to the mailing list. I think that's what you have done for most of them, and I understand that there were, like, old pull requests that had to be considered when merging, and new pull requests that touched the same parts, etc.
G: As we discussed sort of off-mic before the meeting: what we're not looking for is for people to do a bunch of work in order to fill out the rows here and to make sure that they've got the most complete implementation possible. What we're looking for is: what did you implement, so that we know what was interesting for you to implement for your use cases.
D: Next: tags. There are some tags in the document, there are a few tags out there, and we don't need a working group to do tags; everybody can. But there are maybe a few things where we actually want working group attention. And, just as a reminder of what's going on: one tag spec we're talking about was just approved and has been with the RFC Editor for four days, so that is a CBOR tag that we can be pretty proud of. That's good work. And there are a few drafts out there.
D: Two of them are on charter, by selection of the people who wrote the charter. The OID one needs work; we have stalled it for a while, we just have to find the [time]. The array one is on charter, and it's one of the things that's very far along. Just to remind you what it is: it mainly provides tags for homogeneous arrays represented in byte strings. So JavaScript has typed arrays, there is a Khronos standard for typed arrays, and so on.
D
That's
pretty
much
an
industry
consensus
for
what
these
typed
arrays
are
and
we
are
just
providing
tanks
for
them,
so
we
can
use
them
seamlessly
in
civil
documents.
So
this
document
takes
the
position
that
tags
are
reasonably
cheap,
so
we
can
put
the
type
information
of
what
exactly
is
in
the
hajime's
array,
into
a
tag
which,
together
with
the
various
data
types
and
the
various
organizations,
leads
to
a
total
of
24
tags
where
two
of
them
are
actually
equivalent
because
endianness
of
videos
and
such
particularly
interesting.
D: Should these tags come out of the two-byte space or the three-byte space? I showed this slide last time; I don't really want to do this again, but at some point we have to decide. And we have some reviews on the document; above all, we noticed we probably have to be a bit more explicit on the endianness issues.
D: So, one reason I wanted to have this on the agenda today is that there is actually a registration request sitting at IANA, in the first-come-first-served space, for doing pretty much the same thing in a slightly less ordered way, and IANA will have to take that registration, so that would be a nice run around this work. I have tried to contact the author of that registration; I haven't succeeded yet, but I will continue trying. So that kind of creates a little bit of an urgency that we finally come to grips with this.
I: Jim Schaad: I think it's fine. I would rather take them from the three-byte range; there's no need to try to conserve space for this, and that makes sure that you don't end up later regretting the decision.
D: I think, okay. But essentially what they did, and there is a github repository with the requests in it, so it's not totally secret: they just defined a bunch of tags for a few items out of the whole array [of types] that they needed, and these come from the first-come-first-served space, so they are three-byte tags.
N: We did something similar in CoRE, with [CoAP], and that puts the expert in a bad spot, but still, I thought that was quite useful to protect your registry. You know, everybody wants their thing, their shiny object, the most efficient registration class, and not all things are created equal in terms of how widely they're going to be used, from a deployment point of view, which I think is something where the working group can make a better call than the expert. So I would agree.
D: Okay. So we have a few people saying the 1+1 byte space is fine, and a few more people saying the 1+2 byte space is where we should be going. Because we want to finish this discussion at some point in time, it seems like we're getting ourselves into the 1+2 byte space. So that would be my summary of the discussion. I'm not happy.
G: Just, well, we've got everybody here: does anybody object to us taking this up as a working group draft? I'm just looking to see if anybody does; we could have that discussion now, since we're all here. I don't see anybody jumping up to object. And are there at least a couple of people who think this is interesting work for us to do? I don't want to do a hum, because I don't think there's... there's only one or two people. Go ahead. All right.
I: From Sean on jabber: about the RGB, it's interesting, but I was really thinking about high dynamic range. I think RGB is a bit more complicated in this day of 4K, HDR10, Dolby Vision and all that; it probably needs some specialized data types of its own, which is to say, outside the purview of this typed-arrays draft. And then he says one more thing: I agree the typed arrays should be worked on by the working group, which is relevant.
G: So, I'm going to be stepping down as working group chair for CBOR here. I won't be coming to IETF meetings as regularly going forward, for a variety of reasons. If you are interested in chairing, talk to Alexey, and do it quickly, because he's already got some interest from a variety of folks; but if you think you're the right person, this would be a fantastic time to pull Alexey aside. Anything else? No? All right, great, thanks everybody.