From YouTube: IETF112-CBOR-20211111-1430
Description
CBOR meeting session at IETF112
2021/11/11 1430
https://datatracker.ietf.org/meeting/112/proceedings/
A: [...] can speak up freely, so please ensure that people feel welcome here. If you have any questions about any of that, please follow the links that are all listed in the slides indicated here, or talk to me or the ombudsteam, depending on what kind of questions arise.
A: The first block is working group documents. I'll just give a brief update about documents that have been sent to the RFC Editor, or that are work in progress without any particular updates that need discussion here. Then, filling in for Michael, who is also busy in another meeting, Carsten will tell us a bit about issues that have come up late in file magic, and then we'll continue with CBOR-packed.
A: And then a large block on the future development of CDDL. Apologies for whatever happened in there, where it says "notable text"; that's why I stumbled here briefly. The individual document that should be listed here is also something that will need work in the working group but was not planned for today. I don't know what happened here; what it should be saying is: application-oriented literals, extended diagnostic notation.
A: The CDDL control operators document got switched over to the standards track and is now in the RFC Editor's queue. Network addresses, in the latest iterations after the last IETF, gained support for zone identifiers, which may be numeric, may be textual, or may be absent, as they always were; this is now also in the RFC Editor's queue.
A: The time tag document we adopted in May is still active, but it is largely waiting for input from the SEDATE working group: whereas much of it is rather uncontroversial, the topic of time zone indication, which will also be supported in the newer time tag, will just need to wait for whatever comes out of SEDATE. Judging from having seen the minutes, and from how often this has come up in that group's discussions, I conclude that the group is rather active, and we just follow what is happening there.
C: We have a way to use 55799, a tag that was already defined in RFC 7049, together with a 1+4 tag identifying a specific kind of data item and thus the file format, to get an eight-byte prefix for CBOR data items; but that only works for single data items.
C: So if we want to have a magic number for a CBOR Sequence, then we would use a new tag, which is defined in this document. That is also a 1+2 tag, plus the 1+4 tag, so together we have eight bytes, plus the conventional content for that tag, which is the byte string 'BOR' (which miraculously becomes 'CBOR' when you look at its representation, because the byte-string header 0x43 is ASCII 'C'). So that would be a 12-byte prefix you prepend to a CBOR Sequence.
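As a rough illustration of the arithmetic just described (my own sketch, not text from the draft), the two constant prefixes can be assembled as follows; `protocol_tag` is a placeholder for whatever 1+4 tag a specific CBOR protocol would register:

```python
import struct

def tag_head(n: int) -> bytes:
    """Encode the head of a CBOR tag (major type 6) for tag number n."""
    if n < 24:
        return bytes([0xC0 | n])
    if n <= 0xFF:
        return bytes([0xD8, n])                  # 1+1 encoding
    if n <= 0xFFFF:
        return b"\xd9" + struct.pack(">H", n)    # 1+2 encoding
    return b"\xda" + struct.pack(">I", n)        # 1+4 encoding

def item_magic(protocol_tag: int) -> bytes:
    """8-byte magic for a single CBOR item: tag 55799, then a 1+4 protocol tag."""
    return tag_head(55799) + b"\xda" + struct.pack(">I", protocol_tag)

def sequence_magic(protocol_tag: int) -> bytes:
    """12-byte magic for a CBOR Sequence: tag 55800, a 1+4 protocol tag,
    and the byte string 'BOR' (whose head 0x43 happens to be ASCII 'C')."""
    return tag_head(55800) + b"\xda" + struct.pack(">I", protocol_tag) + b"\x43BOR"
```

The point of forcing the 1+4 encoding even when the tag number would fit a shorter head is that the prefix stays constant-length, so it can simply be prepended to, or peeled off of, the data.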
C: So that is stuff we have had for a while. And then we thought, well, it might be nice to actually have pre-allocated tags for content formats. There are 2^16 content formats, so that only takes a small bite out of the 1+4 tag space.
C: All the things on this slide really work with data that are in CBOR form, either a single CBOR data item or a CBOR Sequence. And of course the mechanism just doesn't work for data that are not CBOR-shaped.
C: So these examples are really misleading, and comments came in that we would need to do byte-string wrapping for these data to fit them into either 55799 or 55800. One could do that, but then it would no longer be a constant prefix that you just slap in front of your data, so it would be more work.
C: I mean, it's not a killing amount of work, but it would be more work, and it would make it harder to peel off that prefix, so it would be a much worse situation. In addition, you wouldn't necessarily know whether you need this byte-string wrapping or not, so this is a non-starter; it doesn't make sense. We wanted to have a version of the document ready for the deadline for this IETF, but that threw a monkey wrench into it.
C: So we have since discussed this some more and came up with the idea that maybe we spend another tag, which would be 55801. It essentially works like 55800: you prefix it to something, but the something that you prefix it to doesn't need to be CBOR data. So this would be almost, but not entirely, unlike a CBOR Sequence. This works with CBOR decoders that can decode one item and then hand back the raw data for the rest of the input. For instance, in one CBOR implementation there's an interface called decode-with-rest, which takes one item off a byte string or file of binary data and gives you the decoded item plus the rest of the data in undecoded form. That is exactly the API that would be needed for this, and it happens to be a relatively common API.
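The decode-with-rest interface shape can be illustrated with a tiny hand-rolled decoder. This is my own minimal sketch (it handles only unsigned integers and definite-length byte strings, and it is not the API of any particular CBOR library):

```python
def decode_with_rest(data: bytes):
    """Decode one CBOR data item off the front of `data`;
    return (decoded_item, undecoded_rest_of_input)."""
    mt, ai = data[0] >> 5, data[0] & 0x1F        # major type, additional info
    if ai < 24:
        arg, pos = ai, 1
    elif 24 <= ai <= 27:                         # 1/2/4/8-byte argument follows
        n = 1 << (ai - 24)
        arg, pos = int.from_bytes(data[1:1 + n], "big"), 1 + n
    else:
        raise ValueError("indefinite/reserved encodings not in this sketch")
    if mt == 0:                                  # unsigned integer
        return arg, data[pos:]
    if mt == 2:                                  # definite-length byte string
        return data[pos:pos + arg], data[pos + arg:]
    raise ValueError("major type not handled in this sketch")

# One item comes off the front; the rest stays raw for the next layer:
item, rest = decode_with_rest(b"\x18\x64the rest")
```

The returned raw remainder is exactly what a 55801-style prefix needs: the tagged item is decoded, and everything after it is passed along untouched.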
C: So, with the addition of this, we would be able to actually define file magic for all content formats, which I think is desirable, but it's a bit of scope creep for this document, I must admit. So I think this is not a no-brainer, but we should think about it.
C: So, assuming that we can reach consensus to put this in, the job would be to actually put it in, that is, to keep this non-CBOR content-format thing with 55801. Again, the examples that I made kind of assume that we already have that, but they don't distinguish between CBOR and non-CBOR; that was the problem. So these examples would be kept, but changed to talk about 55801, and we would add a couple of examples that actually have a CBOR content format, using 55799 and 55800.
A: Michael, please.

D: So I think that 55801 is a good solution. But I did ask, and I don't know whether we actually have this problem; so, unless someone really thinks we should do that, I would tend a little bit towards: let's not go there, and just stick with what we have.
A: Just to get a bit of a better view: would we still have use cases for content-format numbers in all those three categories, that is, 55799, 55800, and 55801? Could we slim this down? Is there any of those where we don't really have a full use case?
C: Well, if we didn't have a use case, we shouldn't do it. So yes, I think there are meaningful examples that can be put into the document, and I think it's useful to have these examples, because we are now opening up the choice of three different ways to do things, and it certainly helps to explain when you use what.
A: So, just taking my document shepherd hat here: the byte-string version has been in there for the -05 and -06 versions, which is what the working group last call covered. So I'd just like to point out that if we extend the scope here, this will definitely put the document through another working group last call, and probably a bit of designing on the way there.
A: So that's not a reason not to do it; it's just something that I'd like to put out and make people aware of, in case there's any urgency on the rest of the document.
C: To give an example: the one example that I would build for 55801 would use content format 11542, application/vnd.oma.lwm2m+tlv.
A: Is this better now? Again? Yes, okay. So I've heard some positive input and some cautious input on going forward. So I suggest that this be explored in a -07, and that the examples there will hopefully make the use case clear enough that we can go on with this.
C: Yeah, so this is the slide we had in July. The main issue is table building, and I think we need to not boil the ocean here but, on the other hand, have something that has batteries included; I'll come to that in a minute.
C: So I just sent some additional comments to him on the mailing list, and I think that's a pretty good proposal. If you think you have comments on that, please send them to the list, because I think it will be a pretty useful addition to our library of tags.
C: So, let's go to packed. CBOR-packed really is three things. First, it's a processing model which, in contrast to actual compression schemes, is based on in-place usage of the packed data items: you do reference chasing in the data you got. Then it's the registration of a number of tags and simple values that allow you to reference items.
C: These are the origins of those arrows that the processing model foresees, and I think we have a pretty good understanding of where, in the CBOR basic data model, we have the gaps where we can put these references in. And finally (and this is the part that isn't quite as stable as the rest) there is the table building, and in particular the nesting aspect, where we may have more than one place in the data item where something is added to the table.
C: I think we now have a pretty good understanding of a push model, or shift model, depending on how you think about it, where you essentially have a stack of tables: pushing something onto the stack means that you get control over the lowest numbers in the various reference encodings, and you push the existing table entries up to higher numbers.
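As a toy sketch of that push/shift behavior (my own illustration for intuition, not the draft's normative algorithm):

```python
class PackedTables:
    """Stack of packing tables: a newly pushed table takes over the
    lowest reference numbers and shifts earlier entries upwards."""

    def __init__(self):
        self.shared = []                     # merged shared-item table

    def push(self, table):
        # The pushed table controls indices 0..len(table)-1;
        # previously visible entries move up by len(table).
        self.shared = list(table) + self.shared

    def resolve(self, index):
        # A shared-item reference is just an index into the merged table.
        return self.shared[index]

tables = PackedTables()
tables.push(["outer-a", "outer-b"])          # outer setup tag
tables.push(["inner"])                       # nested setup: "inner" is index 0
```

Pushing a nested setup table gives it the low, cheaply encoded reference numbers, while the outer table's entries remain reachable at shifted, higher numbers.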
C: So I think that is now well understood, if maybe not fully described, so that's probably a place where at least editorial work is needed. Basically, what I think we should be doing in the base document is provide the referrers, of course: we have allocated tags and simple values for sharing, for adding prefixes, and for adding suffixes.
C: We describe the packed table model, including the push mechanism, and we probably should describe it in a way that lets us add future kinds of referrers, for instance using the record or template proposals. So the model is extensible, but we only fill it in for the three kinds: share, prefix, and suffix.
C: We add a basic table-setup tag that makes use of the push model and pushes to the share, prefix, and suffix tables; that's pretty much already there. It probably just has to be qualified as something that is just one way to do things. And then we provide a framework for defining more specific setup tags, where I think we should foresee two kinds of setup tags; of course it's always possible to define other tags, these are just the two that I expect we will make.
C: One kind that will get a lot of use is an implicit reference. If an application protocol defines a dictionary, like we did 20 years ago with the SIP compression dictionary (there is an RFC that has the bytes of the dictionary in it), then, similarly, here you would write, in the specification that allocates this tag, the actual table that would be pushed onto this push model for tables.
C: The advantage, of course, is that you can have very short setup tags, if the application requires that, and you don't have to do complicated lookups; it's really just a tag. When you implement a specific application, then you implement the tag for that as well, and then you have your application-specific dictionary included.
C: So the hashed table-setup tag would include a hash value, and probably also a COSE algorithm identifier (the hash algorithm identifier), to explain how the hashing is supposed to be done.
C: That's not really something that the data format defines. It just says: if you have a hashed setup tag with this hash, then insert it here.
A: Just for understanding: the records would then be a use case of packed, and whoever defines the records would set up the table to have these specific semantics; is that the intention? ... Okay.
E: Yeah, hi, thank you. This is Henk. Unfortunately, Brendan cannot be here with us today due to a time conflict, but I want to highlight that I think it would be really cool to use the now-finalizing SUIT manifest specification as an example for CBOR-packed.
E: I think that is something we can start after this has moved out of the gate, because this will take some time; but I think it would be an excellent exercise to instantiate this in real life.
E: Okay, so I think there are a lot of redundant references in directives and, for example, identifiers for several things like classes or environments or software, effectively, in the SUIT manifest. Brendan really works on trimming down every single byte, and I think, as the manifest is already pretty compact, I assume that the packed approach will still yield a significant reduction in size. So that is something I would like to just, well, try out.
E: No, no, no! So it would be the SUIT 2.0 or the SUIT-packed manifest, you know, because we can't do this with the real one; if we delay that any further, somebody will throw stones at me, so that isn't possible. But immediately once this is stable and out, I would like to do it with packed.
E: Yeah, that is part of the experiment, so we have to find out. But again, Brendan and I will not be doing this this year, to be honest; maybe around the next hackathon, before the IETF, in some hopefully actual location. That would be nice.
C: Okay, so that maybe leads into a general request for data items. If you have any data items that are not entirely trivial and that would maybe benefit from CBOR-packed, we might want to collect these data items in a repository, so we understand what CBOR-packed does to them, and of course also how good different packer implementations would be; because packer implementations can have different qualities of implementation, and there is no one way to pack things.
A: Christian here, with a brief question on packer implementations: do you expect these to be widely used? Because my impression was that most of the time packing would be done in a more static way, so that the application would use its knowledge of the structure to create the tags. Do you think that kind of free-form compression is something that we need to expect in applications?
C: That is a good question. So, if you have a generic packer (that's probably what we should call them):
C: Then you may save some time in your application by actually doing these things, but of course it requires actually building the full structure and then submitting it to the packer. So it's not something you would do in a constrained implementation; in a constrained implementation you would always generate the packed CBOR right from the data that you have. So yes, I see some areas where generic packers might be useful, and that's why I think it's a good idea to collect some best practices for building them, but also the...
E: Yeah, just a quick question, because I have no gut feeling for this: how easy or effective would an auto-pack feature be, in contrast to a manually, I'm going to say, configured packing of content, where you maybe guide it a little bit? Would they be almost the same in the end?
C: Well, that depends on how much machine learning and AI you put into your packer, okay; but generally, writing compressors is a pretty well-understood area of work. So I would expect that if you write a generic packer, it will often be as good as your manual packing scheme is, and it will also find some opportunities for packing that you simply didn't address in your manual scheme.
A: Nobody is on the queue, so let's go on. I don't think there are prepared slides for EDN. Is there something that you'd like to see there, or shall we keep that basically for last, for the AOB section? So, let's...
C: Do it now. So, we had some positive feedback during the interim. We haven't implemented that yet, which always makes me a little bit hesitant about going for something like a working group last call (hi, Barry), but we might go for adoption; that certainly would be possible at this stage.
A: So maybe just a brief show of hands around the room. Given that we have almost 20 participants, could you please indicate, using the show-of-hands tool, whether you're interested in that document for the working group... well, I just have to find the right...
A: But I take this as kind of a preliminary show of interest in the document. This is not, on its own, an adoption call; that will come later on the mailing list. This is just a brief thing to gauge the interest in the room, and I see a lot of hands going up here.
A: So, in the minutes, please note that even within a short show of hands, 7 out of 20 raised their hands and none indicated "not raised". To me this shows that there is interest in the working group, and I think we can handle the rest on the mailing list. Thank you; next item, please.
C: We have done a few low-hanging fruit in the CDDL control specification, which gives us a few things that already go in the direction of 2.0; but there, of course, we could only do things that didn't actually require changing CDDL, we just used its extension points. What I'm describing now really goes beyond using extension points, and to me it seems that there are two aspects that are also low-hanging fruit, but low-hanging on the way to actually extending the language: one is annotation, and the other one is composition.
C: So, right now CDDL works with a single file. I mean, we don't even talk about files, because there is no file structure, so there's no reason to talk about files; but in practice you have a single CDDL file. Maybe you concatenate that together out of several input files, but essentially the thing is a sequence of rules, and the first rule, which must be a type and not a group, is the entry point, at face value.
C: The whole CDDL file defines one data type, and this has been quite useful, but we probably want to go beyond that. I think what I hear most is that we actually want to build libraries, which are CDDL files that export one or more rules (well, typically types, but they might be groups as well), and we also want to be able to import those rules from another CDDL specification, whether that was intended as a library or is a standalone spec.
C: So when you do an import, you have something you can talk about: what you are importing. You also want to control the naming of the exported or imported rule; an existing CDDL spec might want to export something, but maybe it has a very short name in that spec, and there are reasons why you don't want to change it.
C: So there is some management of names needed. I have shown a simple way of doing implicit importing.
C: So you don't have to do a lot to actually get it, and if you actually need a short name, you can simply write another rule and say oid = rfc9090.oid, and then you have a short name for this thing. So the implicit mechanism would be an easy way to do things without completely leaving the CDDL 1.0 envelope.
C: But of course the tool has to support doing this lookup. The more powerful, explicit import would identify a library, and maybe identify a versioned sequence using a semantic-versioning reference (that's a very popular subject, and I think the YANG people have been discussing this for about two years now, so maybe we can actually steal something from them). Then, when you have identified the library, you want to manage what names are introduced; I talked about name management, potential conflicts, and so on. So this would be the explicit import interface.
C: I'm not putting an example in here, because I come to the syntax in a few slides. The export interface would provide a way to name the library: you don't just have an anonymous CDDL file, but the CDDL file itself says under which name it expects to be imported, gives the version number (probably a semantic version number), and you probably also want to identify the rule names that this library intends to export.
C: This is not a required list for the importer, it's just the default set: if you just import the library without saying anything else, you get this exported set, but you can import less, and you can also import more. So we are not trying to do protection of class internals here; I mean, if you do that, then you know that you are doing something on your own.
C: The question, of course, is how we do the linkage. Somewhere on my laptop there is a CDDL file that says it exports foo, and somewhere else there is a CDDL spec that says it imports foo; but how do these two files actually meet each other? One way, of course, is doing this outside the specification language.
C: So you essentially give some CLI parameters that tell the tool that these specification files are going to be used as the library files going into that other specification. That's certainly one way to do it, and it's again useful during development. When the specification has become more established, and maybe even standardized, it should be possible to give a hint inside the spec, for instance a URI that points to a GitHub repository or something.
C: I don't have a problem with hardwiring GitHub in here, just as long as we have other ways of referencing repositories as well. Yeah, namespacing: that's probably the best way to handle these name-conflict and bad-naming issues. The rfc9090 example already shows the idea of a namespace: there is an RFC 9090 namespace from which we import the oid rule. And while we are at it, we could maybe make the move that RFC 8610 has, where we have a defined prelude that is always imported.
C: The default is to actually do that, and you would have to do extra work to not do it. And maybe we actually want to think about some mechanism that allows you to continue working when you have some namespacing errors; in particular, if you work with revisions, that might happen quite often. So that's the namespacing.
C: Many people want to use the same CDDL specification for different formats. So, for instance, you have one specification that explains how to do SenML in JSON and another one for how to do it in CBOR, and we know how to do this manually; the SenML specification defines it in a manual way, and CDDL control gives an example of another manual way. But we probably want to make this a little bit more first-class.
C: So we don't do this on the lexical level alone, because that always makes it hard for implementations to actually process it. If we make the alternatives first-class, we might actually be able to write a tool that does translations between the two representations.
C: On to automation: it should be possible to actually generate libraries. So if you have an RFC that has some CDDL in it, and I think we now have a two-digit number of those, it should be possible to generate the libraries from those automatically, and that should also be possible for new I-Ds. So we probably want to establish a few conventions for how you expose CDDL in a draft.
C: We cannot define new conventions for RFCs, but we can define them for new drafts. We also want to be able to generate libraries from IANA registries; there are several registries that are just very, very useful (think about interface types), which you just want to be able to use in a specification. And, of course, what I'm saying here for documents and registries is not just for IETF sources; this should also be possible for non-IETF sources.
C: So if there are interesting registries or interesting documents that we want to extract CDDL from automatically, we should look at those. And it should be possible from a CDDL spec to trigger that automation; not in the sense of running a random operating-system command, that's always a bit dangerous, but it should be possible to just point to an internet draft and say: I want to import the CDDL from there and put it in that namespace.
C: That's probably not the way you actually publish your specifications in the end, because, well, of course you would then reference an RFC and no longer an I-D and so on; but it would be good to make the language accessible for this kind of automation.
C: Okay, let's talk about syntax for a second. The idea is to do this transition from 1.0 to 2.0 in a way that you won't notice it happened: CDDL 1.0 files should still be 2.0 files, and CDDL 1.0 processors should be able to do useful things with 2.0 files. They won't be able to do everything that you can do with 2.0 files, but it would be good if these processors can process 2.0 files.
C: There are several places where we can stash things into 1.0 syntax; that's one way of doing it, but this needs to be designed. So I'm not sure how exactly it will look, but I showed some examples at IETF 111.
C: CDDL has a processing model that can be described with "Kernighan's car", which, interestingly, doesn't have a Wikipedia entry, so you will have to find it somewhere else: you put in an instance and the model, and the thing says yes, or it says no. We have extended that a little bit with .feature, but that's still the main processing model.
C: A lot of these rules are actually noise when you annotate trees, and of course you want to be able to put information into the specification that goes beyond rule names. And finally, rule names are these things that don't have a relationship to the real world, so maybe we should do something about that.
C: So we would need to think about representations in various forms, particularly in CBOR diagnostic notation. For annotation, I think the minimum viable product is to be able to put attributes on rule names, so you can select which rules to actually annotate, and maybe associate rule names with some real-world concept (the URI thing I talked about). You might have special description attributes that you just extract out of comments, and some additional spec-writer-defined attributes; so, for instance, a unit could be added to something.
C: You could even generate tags that are not on the wire, because the schema implies them, and in many cases this can already be taken from the unwrap information: if I have a ~time somewhere, I know that that number is a tag 1 time. Okay, and the final slide: how quickly we should be able to do this.
E: Okay, yeah; obviously I'm strongly in favor. As a part of this work, we encountered several pain points without a strict composition feature.
E: This also includes how we define code points in, for example, maps: are they global for a document, or are they specific to certain subsets of a single CDDL file? Sorry, I'm saying "document" here, but what I mean is the CDDL data definition. So, yeah, we have a lot of ideas for how this works, and I hope some of them...
E: So if this timeline is realistic, that would be awesome, because then we can incorporate it already. So, yeah, I'd say I would even go so far as to split out more time to do this, and I'm in full support of that part. On the annotation part: unfortunately, again, Brendan is not here; I think he has some really constructive views on this, so maybe in the next interim we can read him in and elaborate on that a little bit.
E: Yeah, sure, that is even better if we have a higher frequency on that, and then we can use the interim to discuss major turning points, or singularities, or something.
A: If this is to work, I think this will need design-team meetings in addition to the interims. My rough plan would be to start interims again around December 15th, in our regular schedule; but even with these, and the holidays in between, this will be a lot of work by the authors. Speaking of which: Henk and Carsten, would you collaborate on this document, or are there other interested parties that have shown up so far?
C: So I didn't even think about that yet, but I know that Henk has been pinging me for a while about whether I'm going to do anything about this. So I knew that Henk was going to contribute, and Brendan, of course, would make an awesome contributor or co-author.
A: As a user of CBOR, I'm not sure I will write much text or code, but I'm quite looking forward to the annotation features, and I'm especially curious whether this work might later allow not only using validation, but extending annotation in such a way that you can also verify whether your CDDL allows unambiguous annotation. Because right now validation is always unambiguous, but many CDDL documents out there do not allow unambiguous annotation, and the annotation extensions might facilitate checking this.
E: Yeah, again, thank you. So I haven't brought this up yet because there's nothing really tangible yet, so take this with a grain of salt; but I think there are some supporters (and they are rallying fast at the moment, due to some other things, like in the COSE realm) for whom a CDDL IDE might manifest. There are requirements for it that come closer to the annotation part. The messages, for example the input for RPCs, basically make up, like, a...
E: ...I don't know, a majority portion of all of that. So there might, again, be external syntax that can glue that together, and that might make use of some of the annotation parts. I'm just highlighting this because it's just, I don't know, a pipe dream today, but it might manifest faster than one thinks in the next months. And why is this interesting? Because there might be authors for that from that pool of interested people; but I could not name a single one today with any reliability.
C: So, I was in the SCIM meeting two hours ago, and that would be a nice benchmark for doing something like that. Barry is co-chair for that, so maybe he's now in fight-or-flight mode that we might want to contribute something to it. But we can use it as a benchmark, and if it turns out to be useful, we can still try to contribute to the standardization effort.
A: Okay, we are already in overtime, so I'll take any last comment, if there is still one.
A: As I mentioned, interims are planned to resume in the same cadence that we had them in during the last cycle, probably starting December 15th; mail will go out on that, and also on topics that we just took a rough reading on here, for example the interest in EDN, with that review on the mailing list. Thanks, and have a nice rest of the IETF. Goodbye.