From YouTube: IETF114-CBOR-20220728-2130
Description
CBOR meeting session at IETF 114
2022/07/28 21:30
https://datatracker.ietf.org/meeting/114/proceedings/
A: I had thought Christian was going to be on remotely, but I do not see him. I wonder if he got the time right, but in any case it should all go well. So, note the Note Well. You probably all know it by now, many, many times, but these are your obligations with respect to intellectual property disclosure and compliance with the various BCPs.
A: If you don't know it, learn it. Here's Christian. Hi! My co-chair is here.
A: So, everybody in the room is wearing a mask, thank you. Each of you is wearing a mask, I should say; you're not all sharing the same one. Thank you for doing that, and I'm going to keep my mask on, even though the rules say I can take mine off.
A: But if anybody in the room is coming up to this mic, which I don't think anyone is, but if you are, you're welcome to take it off. Otherwise, please use the magic code here and get on Meetecho light. If you are in the room, please add yourself to the queue through that tool.
A: So, what do we have for the agenda? It says I get three minutes, and I have one left. We're going to go through status on the working group documents, and we're going to talk about some upcoming work: CDDL, tag registry development and maintenance.
B: Yes, as I said in the chat, sorry for my delay, but I'm here now and, yeah, jumping in wherever we are.
A: Right, and you weren't on when I said this to the room, but the speakers in the room are facing everybody else, not me, so it's hard for me to understand what the people who are remote are saying. So if I look stupid, that's why.
A
All
right
document
status
we
have
rfc
9164,
cbor
tags
for
ipv4
and
ipv6
addresses
and
prefixes
that's
been
published.
Thanks
to
the
authors
and
the
reviewers
of
that
document,
yay.
A: File magic is in the RFC Editor queue; it finished IESG processing, and that's always excellent. Carsten, you have an update for time tags, so you're on.
A: We'll just defer that to when we hit your slides. Okay, and that's what we're about to do, so go ahead and request slide access.
C
Okay,
yeah:
it's
always
a
bit
confusing
to
use
this
interface,
don't
know
what
makes
it
so
confusing
anyway.
So
I
want
to
talk
about
time
tag
which
we
just
mentioned.
C
I
want
to
talk
about
packed
and
I
have
one
slide
on
92
54,
which
is
not
a
sibo
working
group
result,
but
it
has
zebra
in
the
name,
and
it's
maybe
something
that
we
should
keep
in
mind
in
in
future
work
and
then
at
the
end,
I
would
like
to
talk
about
the
city
evolution
and
maybe
a
little
bit
more
about
what
the
9165
means
and
then
the
2.0
roadmap,
which
obviously
needs
new
dates
from
those
that
we
discussed
in
atf-112.
C
Okay,
so
the
time
tag
document
defines
a
tag
for
times.
We
already
have
two
tags
for
times:
zero
and
one,
but
these
only
have
limited
possibilities
for
attaching
additional
information
and
tag.
1000
one
has
been
registered
to
carry
lots
of
additional
information
and
that
has
been
around
for
a
while
and
it
also
has
been
registered
for
a
while
already
because
at
the
time
1001
it
was
actually
a
first
come
first
served
space,
so
it
was
easy
to
register
and
we
just
did
it
and
it's
also
in
use
in
implementation.
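As a rough illustration of the byte-level difference (a sketch, not from the meeting materials): tag 1 wraps a bare epoch number, while tag 1001 wraps a map with room for additional keys. A minimal hand-rolled encoder shows the layout; using map key 1 for the base time in seconds follows the time-tag draft, but treat that detail as an assumption.

```python
def head(major, arg):
    """Encode a CBOR head: 3-bit major type plus an unsigned argument."""
    if arg < 24:
        return bytes([(major << 5) | arg])
    if arg < 0x100:
        return bytes([(major << 5) | 24, arg])
    if arg < 0x10000:
        return bytes([(major << 5) | 25]) + arg.to_bytes(2, "big")
    return bytes([(major << 5) | 26]) + arg.to_bytes(4, "big")

def uint(n):
    return head(0, n)                 # major type 0: unsigned integer

def map1(key, value):
    return head(5, 1) + key + value   # major type 5: map with one pair

def tag(number, content):
    return head(6, number) + content  # major type 6: tag

epoch = 100  # a small stand-in for a POSIX time in seconds

classic = tag(1, uint(epoch))                     # tag 1: bare epoch time
extended = tag(1001, map1(uint(1), uint(epoch)))  # tag 1001: extensible map

assert classic == bytes.fromhex("c11864")
assert extended == bytes.fromhex("d903e9a1011864")
```

Any extra information (a time zone hint, a time scale) would become further key/value pairs in the tag 1001 map, which is exactly the extensibility that tags 0 and 1 lack.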
C: So it's not something we should be changing a lot in the way of breaking it, but we can still extend it. When we discussed this about a year ago, when we did the working group adoption, we said there's this new stuff going on in the sedate working group, and we wanted to wait for their considerations to become available. They are taking the text form of internet timestamps, RFC 3339, and adding hints to them.
C
So
the
the
white
stuff
is
the
existing
3-3-9
timestamp
format
that
we
all
know
and
love
with
the
date.
The
t
and
the
time,
and
then
a
numerical
offset
if
desired.
This
is
of
course
inspired
by
iso
8601
and
what
sedate
does
is
providing
a
way
to
add
hints
to
that.
These
are
these
bracket
bracketed
things
that
are
added
as
suffixes
to
a
539
date
string,
and
that
includes
time
zone
hints.
So
you
can
say
that
the
those
minus
08
there
actually
are
america
los
angeles.
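The extended text form can be sketched as a base RFC 3339 timestamp followed by bracketed suffixes; the concrete timestamp below is illustrative, and the suffix keys follow the sedate design described above.

```python
import re

# An RFC 3339 timestamp with sedate-style bracketed suffixes: a time
# zone name hint and a u-ca (Unicode calendar) extension key.
ts = "2022-07-28T13:30:00-08:00[America/Los_Angeles][u-ca=hebrew]"

base = ts[:ts.index("[")]                   # the plain RFC 3339 part
suffixes = re.findall(r"\[([^\]]*)\]", ts)  # the added hints

assert base == "2022-07-28T13:30:00-08:00"
assert suffixes == ["America/Los_Angeles", "u-ca=hebrew"]
```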
C
So
if
some
politician
gets
the
idea
that
the
summertime
daylight
savings
time
needs
to
be
extended
to
december,
you
can
react
to
that
because
you
know
no.
This
is
an
american
time
and
not
a
canadian
time
or
whatever,
and
the
other
thing
in
the
brackets
uca
equals
hebrew
is
a
more
general
extension
mechanism.
In
this
case,
uca
means
a
unicode
calendar.
The
unicode
project
collects
certain
localization
information,
certain
types
of
that
and
they
have
calendar
formats.
C
So
this
is
the
work
that
sedate
has
been
working
on
and
it
seems
to
me
that
sedate
is
now
converging.
I
think
the
the
last
big
problem
was
solved
in
the
meeting
on
monday,
so
we
might
add
this
information
to
to
the
time
tag
definition
and
since
the
time
tag
is
designed
to
be
infinitely
extensible,
this
was
really
easy.
C: But since this is maybe not trying to be the kitchen sink, but pretty accepting, how do you say that, pretty responsive to people trying to add information to timestamps, that should be okay.
C
I
think,
however,
sedate
also
has
certain
limits
in
which
it
operates
because
of
the
way
it
is
chartered
and
this
it
is
chartered
really
not
to
leave
what
339
can
do
so,
for
instance,
the
date
is
not
going
to
have
other
time
scales
besides
utc
they
have
time
zones,
but
not
time
scales
like
tai
or
leaf
smeared,
utc
or
all
the
stuff
that
is
floating
around
out
there,
but
the
time
tank
take
long
has
had
that.
So
we
are
not
limited
to
that,
but
sedate
is
limited
to
that.
C
So
one
question
that
came
up
was
floating
time,
which
means
a
time
stamp
that
actually
is
in
local
time,
without
telling
you
how
the
local
time
relates
to
utc,
and
we
already
have
one
such
tag.
The
tag
100
the
date
tag.
That
is
a
zoneless
tag.
It
doesn't
tell
you
it
doesn't
tell
you
a
date
but
not
which
time
zone
you
were
in
when
you
experienced
that
date
and
since
we
appear
to
be
able
to
do
that
on
the
day
at
the
date
level
we
might
as
well.
C
Do
it
at
the
time
level
and
emil
had
some
some
arguments
that
we
actually
should
be
doing
that
on
on
the
mailing
list,
this
in
the
last
couple
of
days,
yeah,
so
there's
also
a
little
issue
with
sedate
that
rfc
339
added
minus
zero
as
a
numerical
offset
and
that's
incompatible
with
iso
8601
and
yeah.
We
probably
don't
have
to
react
to
that
issue,
because
we
don't
even
encode
numerical
offsets
in
time
tag
which,
by
the
way,
is
something
that
maybe
people
want
to
do.
C
But
I
haven't
found
a
use
case
for
that.
Yet
so
so
I
haven't
seen
a
reason
to
put
that
in
yeah.
So
the
question
really
is:
do
we
want
our
larger
freedom
to
actually
put
something
like
floating
times
in
and
the
the
way
we
could
do?
This
is
a
little
bit
well,
it
could
be
inspired
by
ntp
version.
Five,
the
ntp
version.
C
Five
people
also
want
to
put
in
floating
times,
and
they
can
just
use
the
or
they
plan
to
use
the
time
scale
field
for
saying
that,
so
they
would
have
utc
and
tai
and
leap
smeared,
utc
and
local
time.
We
could
just
do
the
same
thing
so
that
that
would
be
something
where
we
wouldn't
have
to
to
invent
something
new,
but
could
just
import
this
from
ngp.
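The NTP v5 idea could carry over as a time-scale entry in the tag 1001 map. This sketch uses Python dictionaries for the map content; both the key number -13 and the scale encoding are invented for illustration, not values from any draft.

```python
# Hypothetical time-scale values, mirroring the NTP v5 list mentioned above.
UTC, TAI, LEAP_SMEARED_UTC, LOCAL = range(4)

def extended_time(seconds, timescale=UTC):
    """Content of an extended time map: key 1 is the base time in
    seconds; key -13 (invented here) names the time scale.  LOCAL
    marks a floating time with no stated relationship to UTC."""
    return {1: seconds, -13: timescale}

t = extended_time(45296, timescale=LOCAL)  # a floating local time
assert t == {1: 45296, -13: LOCAL}
```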
C: So this is the situation, and I would propose that, with respect to sedate, we try to go for synchronized publication with them.
C
So
whenever
their
working
plus
call
finishes
hours
should
be
finished,
two
I
mean
we
don't
have
to
do
this
precisely,
but
that
should
be
the
general
target
date,
but
on
the
other
hand,
we
should
also
be
watching
what
ntp
v5
really
does
in
terms
of
adding
floating
time
and
maybe
even
adding
their
their
leap
smearing
time
scale
as
well,
but
we
probably
don't
want
to
wait
for
their
completion
to
actually
publish
this.
C
This
document
I
mean
there
will
be
many
things
happening
in
ntp,
v5,
obviously,
but
we
don't
have
to
copy
all
of
them.
So
that
would
be
what
I
would
consider
the
obvious
plan,
the
other
plan.
Of
course
could
be
to
to
say
well
yeah.
There
are
some
use
cases
for
floating
time,
but
these
are
not
use
cases
for
tag
1001
and
it
should
be
done
in
in
a
separate
attack.
C
So
that
would
be
an
alternative
plan.
I
haven't
heard
arguments
for
that
yet,
but
if
people
feel
that
way,
then
then
we
probably
want
to
consider
those
arguments.
D: To reiterate what I said on the mailing list earlier in response to your question: NTP v5, the protocol spec there, isn't even a unified spec to be called for adoption. The requirements and use cases have only just been adopted after a long period. So I consider NTP v5 as an RFC, or even as a late-stage stable internet draft, to be a year and a half away, on that order.
D
So
you
certainly
you
can
watch
it,
but
I
don't
think
you
want
to
wait
that
long
to
adopt
time.
D
Very
much
so
and
a
separate
tag
is
a
fine
collusion
at
some
time
in
the
future.
I
don't
you
know,
we
don't
need
to
add
it,
because
there
are
ntbp
five
use
cases
or
other
use
cases.
Unless
somebody
comes
forward
and
says
we
really
need
it.
In
the
general
case.
B: Right, hi, Christian here. So, just for clarification: even if we went with this document as it is now, there is nothing that would stop the later addition of floating time through additional keys, right?
C
No,
yes,
yes,
you're
right,
there's
nothing!
That
would
stop
us
from
doing
that,
but
we
could
do
it
now.
So
maybe
it's
just
easier
to
do
it
now.
C
Okay,
I
think
we
can
take
that
one
to
the
list.
The
next
one
is
sibo
pact
and
that
I
think,
has
taken
a
a
pretty
surprising
turn.
We
have
actually
managed
to
make
this
quite
a
bit
simpler
than
I
thought
it
would
turn
out
to
be
so
we
now
have
this
function
tag
concept,
so
we
actually
have
even
have
an
extension
point
in
the
packed
mechanism.
C
We
have
ongoing
implementation
work.
We
probably
want
to
have
at
least
one
implementation
that
actually
implements
the
current
draft
before
we
go
forward.
I
think
we
will
have
to
do
a
second
working
glass
call,
but
that's
for
the
chairs
to
decide.
C: So the one thing that came up in discussions of other tag activities is that we are doing a little bit of a sleight of hand here by just saying that all those tags that we defined in Packed can be used in place of the data they stand for. What we probably have to remember is that 8949 defines tag validity in such a way that a tag can define the shape, the structure, of valid tag content.
C: So a tag is a tag, and if an outer tag has been defined at a time when this inner tag maybe wasn't defined yet, there's essentially no way for the outer tag to make use of the fact that this inner tag maybe fits extremely well in the outer tag, because the inner tag cannot make this kind of information available.
C
But
that
is
a
problem,
because
tags
often
are.
The
role
of
attack
often
is
to
define
data
that
that
actually
stand
in
for
other
data,
and
we
currently
have
no
way
to
record
this
intention
and
we
actually
had
this
problem
already
in
87
46
in
the
tank,
the
race,
but
we
kind
of
glossed
over
it.
C
87
46
is
a
collection
of
some
25
tags,
some
of
which
create
data
structures
that
really
are
arrays
and
some
of
which
operate
on
data
structures
that
really
arrays,
and
so
we
define
the
term
typed
array
and
use
that
term
everywhere
in
this
spec,
where
an
array
is
needed.
But
if
we
now
came
in
and
defined
another.
C
Tank
26
tag
that
has
the
same
properties
as
a
typed
array.
We
cannot
do
this
because
8746
just
enumerates
those
25
tags,
and
that
is
a
complete
list
of
what
can
be
the
tag
content
for
for
such
a
tag
and
the
same
problem
of
course
happens
with
sibo
package.
Now,
a
reference
tag
stands
in
for
the
reference
data,
but
a
tag
that
has
a
reference
tag
as
tag
content,
or
has
a
data
structure
that
it
wants
to
control.
That
has
a
reference
tag
in
a
position
that
it
wants
to
control.
E: I think that there's a logical flaw here. Your tag validator must have already validated the outer CBOR Packed tag for it to ever encounter a reference tag, and if it's done that, and it's recognized that tag, and it's been able to validate that tag, then it knows full well that it's going to encounter references inside it.
C: Yes, but one that actually is allowed by 8949. So this is really about the legacy generic decoders, and a generic decoder usually has some tag processing in it.
C
And
that
means
when
it
finds
a
tag
too,
on
top
of
something
that,
for
instance,
is
a
data
compressor
tag
which
we
haven't
talked
about,
but
that's
actually
what
motivated
the
sole
discussion.
It
will
not
be
able
to
build
a
number
of
out
of
the
uncompressed
data.
It
will
just
fail
in
in
this
place,
because
the
tank
validity
checking
for
tag
2
really
is
hard
coded
in
in
the
generic
decoder.
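A sketch of the failure mode described here, with Python objects standing in for decoded CBOR: tag 2 (unsigned bignum) is from RFC 8949, while the compressor tag number 999 is invented for illustration.

```python
class Tag:
    def __init__(self, number, content):
        self.number, self.content = number, content

COMPRESSOR = 999  # invented tag number standing in for a data-compression tag

def legacy_validate(item):
    """A legacy generic decoder with tag-2 validity hard-coded:
    tag 2 (unsigned bignum) must enclose a byte string, full stop."""
    if isinstance(item, Tag) and item.number == 2:
        if not isinstance(item.content, bytes):
            raise ValueError("tag 2 content must be a byte string")
        return int.from_bytes(item.content, "big")
    return item

# Tag 2 directly over bytes: fine.
assert legacy_validate(Tag(2, b"\x01\x00")) == 256

# Tag 2 over a compressor tag that would decompress *to* bytes: the
# hard-coded check rejects it, even though the intent is valid.
try:
    legacy_validate(Tag(2, Tag(COMPRESSOR, b"...compressed...")))
    rejected = False
except ValueError:
    rejected = True
assert rejected
```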
C
So
what
what
this
concept
that
these
slides
are
about
is
actually
doing
is
describing
how
this
implementation
of
tag
validity
is
actually
deficient,
and
we
create
a
new
category
of
decoders
that
actually
can
deal
with
this
situation.
The
old
decoders,
of
course,
stay
valid
cbo
decoders.
C
Yeah,
but
when
you're
right
that
there
are
some
very
specific
assumptions
here
about
how
you
actually
reach
this
situation,
where
things
break-
and
one
such
assumption
is
that
a
table
setup
tag
is
not
supported
by
the
validity
checker
of
legacy
decoder.
So
it
will
just
present
that
setup
tag,
as
is
to
the
application.
E
But
then
it
continues
validation
within
that
yeah
yeah
yeah,
so
so
that
that's
the
flaw
right,
you've
parsed
into
a
tag
that
defines
the
structure
of
its
data
in
a
very
particular
way
and
you've
just
blithely
ignored
it
and
continued
on
validating
tags
anyway,
yeah,
okay,
fair
enough.
That
sounds
like
a
bad
decision,
but
I
guess
that's
what
we've
got.
C
Yeah
zebra
is
really
designed
to
make
that
possible,
so
the
idea
was
to
to
share
the
duties
of
tag,
validation
or
tag
processing
between
generic
decoders
and
applications.
So
when
a
generic
decoder
finds
something
it
doesn't
understand,
it
just
sends
it
to
the
application
and
if
it
finds
something
that
it
thinks
it
does
understand,
it
actually
may
blow
up
in
in
a
place
where
it
shouldn't
lower.
E: Yeah, no, I appreciate that. Okay, thanks.
C
But
that's
a
very
good
observation
that
we
probably
need
to
make
this
a
little
bit
more
explicit
anyway.
So
what
the
the
new
draft
of
packs
that
that
I
did
right
before
the
before
the
itf-
well
too
late
in
any
case,
does
is
define
a
concept
of
tag:
equivalence
where
a
tag
not
only
controls,
what
what
is
inside
it,
but
also
can
say
what
it
actually
looks
like
from
the
point
of
the
tag
validity
of
an
enclosing
tag.
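One way to picture tag equivalence, as just explained: a Python sketch of the idea, not the draft's actual mechanism. The equivalence registry and every tag number except 2 are invented.

```python
class Tag:
    def __init__(self, number, content):
        self.number, self.content = number, content

REF = 998  # invented number standing in for a packed reference tag

# Equivalence declarations: a tag number maps to a function giving the
# type its content "stands for" from an enclosing tag's point of view.
EQUIVALENCE = {
    REF: lambda t: bytes,  # assume this reference resolves to a byte string
}

def effective_type(item):
    """Type of an item as seen by an enclosing tag's validity check."""
    if isinstance(item, Tag) and item.number in EQUIVALENCE:
        return EQUIVALENCE[item.number](item)
    return type(item)

def validate_bignum(item):
    """Tag-2 validity, now phrased in terms of effective type."""
    if effective_type(item) is not bytes:
        raise ValueError("tag 2 content must (effectively) be a byte string")

validate_bignum(b"\x01\x00")           # plain bytes: fine
validate_bignum(Tag(REF, "shared#7"))  # declared equivalent to bytes: fine
```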
C: Christian has pointed out that this probably needs some caveats, because people shouldn't use this mechanism to build overly elaborate type systems that do weird things with the data. So there is a little bit of text that provides caveats; maybe we want to add some more. And now the question is: how do we handle this? We could just standardize it as part of the CBOR Packed specification. That is the smallest number of documents, the smallest editorial effort, but yeah.
C
I
just
ran
to
through
a
very
interesting
case
where
people
had
really
big
problems
with
defining
a
data
structure
together
together
with
a
new
tag
in
one
document.
So
the
problem
details
the
document
over
in
in
core
that
raised
some
eyebrows
because
it
defines
the
tag
and
yeah.
So
maybe
there
are
arguments
to
put
it
into
separate
documents,
but
my
plan
would
be
to
not
to
anticipate
them,
but
just
to
wait
whether
they
actually
come
and
do
this
in
in
the
zebra
pack
specification.
C
And
then
the
question
of
course
is:
does
this
get
an
updates
tag?
Does
it
do
an
update
to
89.49,
which
of
course
is
interesting,
proposed
standard
specification,
updating
an
internet
standard
specification?
I
have
no
idea
how
that
works
or
whether
that's
a
problem
at
all,
or
we
could
simply
not
say
that
it
updates
it,
which
would
be
a
little
bit
of
a
lie,
but
maybe
a
little
bit
of
a
white
lie.
So
I
don't
have
a
strong
opinion
on
that.
C
Yeah
again,
we
could
put
it
into
a
separate
document
and
then
we
have
a
little
bit
of
an
open
issue
here
with
the
question:
how
do
you
actually
express
tag?
Your
equivalence
in
cddl,
because
cda
doesn't
currently
allow
you
to
do
that.
It
has
exactly
the
same
structural
approach
that
sibo
had
in
1749
and
then
we
get
that
we
kept
for
89.49.
C
So
we
will
have
to
invent
something
to
be
able
to
express
these
things
as
well,
but
I'm
not
proposing
to
to
wait
with
zebra
pact
until
we
have
invented
that
new
cddl.
But
this
should
be
a
separate
effort.
I
think.
D: If I have a full standard, an internet standard, 8949, and I have CBOR Packed, and I publish it in a later RFC and say it updates 8949, in a way that's maybe stretching a point: in any case, is a proposed standard allowed to update an internet standard?
A: Absolutely. What effect that has depends on what the update is, but I will give you a more extreme situation. RFC 5321 is the current SMTP standard; it's at draft standard level, and it obsoleted RFC 821, which was full standard. So, you know, all that kind of stuff happens.
A
In
the
case
of
an
updates,
I
would
say
that
the
the
new
feature
that
the
updated
creates
is
at
the
proposed
standard
level
and
the
rest
of
the
protocol
remains
at
the
full
standard
level,
but
yeah
it's
fine.
If
that's
the
right
thing
to
do,
then
that's
what
we
do.
A: Yeah, oh, absolutely. If it turns out that we made a mistake in an internet standard level document, we need to correct the mistake and put that back through the standards track, starting at proposed. Absolutely.
C: So we started an activity that did that, and at some point we found that the best way to do this would be to encode the YANG data not in XML, which is the original way of encoding it, or in JSON, but in CBOR. So that's how the CORE group got to do the YANG-CBOR document, and this is of course pretty interesting, because with the publication of YANG-CBOR we now have a way to provide YANG-defined data in CBOR. Or you can put it the other way around:
C
That
is
actually
a
young
defined
data
structure
that
is
now
encoded
in
sibor
and
what
we
did
when
we
did
this.
We
didn't
try
to
to
compress
all
the
the
xml
wordiness
so,
for
instance,
ip
addresses
or
date
time
they
used
are
still
text
strings
when
you
have
defined
that
them
that
way
at
the
yang
level,
young
sibo
doesn't
change
that,
but
it
has
one
significant
performance
perk
and
that
is
all
the
yang
names.
C: The names for the various items in the structure, which are text strings in YANG XML and JSON, can be replaced by pure integers, and using delta encoding this usually is very, very efficient, because typically, when you use one name inside a data structure, you probably use names that are close to it. And the allocation mechanism, which is still being defined in a draft that is still in the IESG, tries to make sure that those deltas are typically relatively small.
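The size effect can be sketched by comparing the CBOR-encoded size of a text key with that of a small integer key. The name below is in real YANG identifier style, but the idea that it maps to the small number 3 is invented; real numbers come from the allocation mechanism just mentioned.

```python
def key_size(key):
    """Rough CBOR-encoded size, in bytes, of a map key."""
    if isinstance(key, int) and 0 <= key < 24:
        return 1                      # small ints fit in the head byte
    if isinstance(key, int) and key < 0x100:
        return 2                      # head byte plus one argument byte
    n = len(key.encode())             # text string key
    return (1 if n < 24 else 2) + n   # head (plus length byte if >= 24) + UTF-8

# A YANG name as a text key vs. an invented small integer delta as the key.
assert key_size("ietf-interfaces:interface") == 27
assert key_size(3) == 1
```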
C
Anyway,
we
we
have
a
new
tool
in
our
toolkit,
so
when
we
see
sibo
as
a
useful
way
to
to
structure
data
to
represent
data
that
is
being
interchanged,
but
maybe
the
the
group
that
is
doing
that-
that
is
working
on
that
is
more
familiar
with
yang
or
doesn't
like
cdl.
For
any
reason
we
may
want
to
use
yang,
and
that
of
course
opens
a
little
bit
the
question:
how
do
we
make
sure
that
these
two
parts
of
the
sibo
ecosystem
stay
on
good
terms
with
each
other
over
time?
C
So
that
that's
a
pretty
complicated
question?
But
I
think
in
the
future,
when
we
do
something
on
the
city
outside,
we
probably
will
have
one
look
at
the
yang
side
and
see
how
how
are
they
doing
things
and
are
we
doing
something
stupid
here?
We're
making
it
harder
to
combine
city,
fine
data
with
yang
defined
data
or
not,
and
actually
that
has
already
happened,
the
the
way
cgl
uses.
C: You can express co-occurrence constraints using XPath, which is a Turing-equivalent language. So you may not always have the right level of complexity in mind when you do YANG, but when you do a really complicated management-information-base standard, then maybe YANG is a better way to do this. So I just wanted to point out that this is something that will inform our CDDL development in the future; we certainly always have in mind that there is this other thing there.
C
We
want
to
make
sure
that
we
don't
do
stupid
things
good.
So
this
is
my
my
little
preliminary
for
the
the
cda2
discussion.
So
just
as
a
reminder,
what
the
sibo
working
group
does.
We
have
the
sieber
format
which
is
stable,
we're
not
currently
changing
anything
there,
but
we
have
an
extension
point,
which
is
the
the
tag
use
ecosystem.
C
So
we
have
to
think
about
good
ways
to
to
move
the
tag
ecosystem
forward,
which
is
mostly
done
by
by
the
users
of
sieber,
but
sometimes
we
generate
some
highly
reusable
shared
tags
and
the
other
item
on
the
zebra
working
group
played
as
cgdl,
which
was
standardized
as
a
1.0
and
with
a
strong
understanding
that
we
would
want
to
develop
this
further
and
that's
the
the
city,
a
2.0
discussion
cdl
defines
one
extension
point,
the
control
operator.
C
So
whenever
we
want
to
do
something,
one
question
is:
can
we
do
this
with
a
control
operator
and
it
turns
out,
we
could
do
much
more
with
the
control
operator
than
we
originally
originally
thought.
C
So,
for
instance,
we
have
rfc
9165,
which
is
really
cdl
1.1,
because
it
makes
some
significant
additions,
but
using
the
existing
extension
points.
So
it's
not
a
new
version
of
cda
at
all.
It
just
provides
additional
functionality
through
this
extension
point
and
that's
in
particular
the
abnf
support
and
the
dot
feature
support.
C
There
are
also
some
other
places
where
this
group
can
do
things.
For
instance,
when
we
did
sibo,
we
defined
diagnostic
notation,
because
we
thought
it
was
important
to
be
able
to
put
sibo
on
a
whiteboard
and
well.
Cddl
is
very,
is
great
on
a
whiteboard,
but
when
you
have
tools
that
need
to
interchange
cddl,
these
tools
would
need
to
to
have
pretty
printers
and
parsers
for
cddl,
and
maybe
it's
just
easier
to
just
interchange,
json
and
therefore
well.
C
There
is
what
you
see
on
the
right
side
of
this
slide
is
the
entirety
of
the
definition
of
the
json
grammar
for
cddl.
So
if
you
want
to
interchange
cddl
as
json,
this
is
the
document
you
may
want
to
look
at.
So
that's
something
where
we
might
want
to
do
work,
but
I'm
not
go
proposing
something
anything
concrete
at
this
point
in
time.
C
So
the
actual
city
that
put
2.0
work
is
work
where
we
need
to
go
beyond
the
city,
syntax.
We
have
defined
in
1.0
and
we
identified
annotation
and
composition
as
to
the
the
highest
priorities.
C
Let's
talk
about
composition
that
that's
where
we
want
to
build
a
cdda
specification
from
multiple
files,
possibly
files
that
come
out
of
a
library.
So
we
don't
have
to
do
all
this
cut
and
paste
stuff
that
we
have
to
do
today
to
actually
build
our
city
specifications.
C
So
there
should
be
something
like
an
export
interface
for
from
a
library
file
into
some
other
specification
using
the
library
and
an
import
interface,
so
that
the
cd8
spec
can
get
some
definitions
from
from
another
spec
and,
of
course
we
need
some
naming
conventions
or
mechanisms
to
do
this.
C
So
the
whole
name
spacing
issue.
I
gloss
over
that
because
I'm
I'm
already
over
time.
Sorry,
we
also
had
the
discussion
how
to
do
alternatives.
So
writing
one
specification
that
can
actually
generate
different
structures
depending
on
some
parameter
in
rfc
4,
84
28.
In
cinema
we
actually
used
some
some
lexical
mechanisms
where
we
told
people
combine
the
cdl
in
figure
five
with
that
in
figure
seven
and
you
can
do
json
and
combine
five
and
six
and
you
can
do
a
zebra.
C
We
now
have
that
feature
to
do
this
a
bit
on
a
more
semantic
level,
and
maybe
at
some
point
we
can
extend
this
in
some
way
that
you
actually
can
translate
between
the
alternatives.
But
translation
is
not
so
it's
currently
something
that
city
supports
at
all.
C: The other thing that maybe is even more important than coming up with a good syntax for this is doing automation: making libraries available from existing sources like RFCs and internet drafts, IANA registries, non-IETF sources, 3GPP and so on, and being able to trigger this automation from a CDDL spec.
C
So
a
cdl1
file
is
a
cdl2
file
and,
on
the
other
way,
around
a
cdi2
file
can
be
passed
by
a
cdl1
processor
and
it
can
still
do
useful
things.
It
cannot
use
all
the
functionality,
but
it
can
be
useful
things
and
that's
something
that
that
helps,
of
course,
with
an
ecosystem
where
we
now
have
multiple
implementations
and
we
all
want
to
take
them
along
for
the
ride
to
cdda2.
C: So we would put some of the functionality of CDDL 2 in comments, or control operators, or maybe rules that are otherwise unused. And one way of trying to do this is to really put all of this composition functionality into something that works a bit like a preprocessor; maybe not like C's cpp preprocessor,
C: maybe a little bit more, keeping in mind that there is a certain structure that we want to generate. But we could put all this referencing and automation and so on into something that looks like comments and is actually interpreted by the preprocessor. That actually could be useful for ABNF as well, because CDDL is so close to ABNF; whatever we would do in such a preprocessor would probably immediately be applicable to ABNF as well. Yeah. So this is a potential objective for the composition.
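A toy version of the comment-driven preprocessor idea; this is entirely an assumption about how such a thing could look. The `;# include` directive is invented, chosen because `;` already starts a CDDL comment, so a CDDL 1 tool would simply ignore it.

```python
# In-memory stand-in for a library of CDDL files.
LIBRARY = {
    "addresses.cddl": "ip-address = bytes .size 4 / bytes .size 16\n",
}

def preprocess(text, library):
    """Expand invented ';# include <name>' comment directives by splicing
    in the named library file. A CDDL 1 parser sees them as comments."""
    out = []
    for line in text.splitlines():
        if line.startswith(";# include "):
            out.append(library[line.removeprefix(";# include ")].rstrip("\n"))
        else:
            out.append(line)
    return "\n".join(out)

spec = ";# include addresses.cddl\nendpoint = [ip-address, port: uint]\n"
expanded = preprocess(spec, LIBRARY)
assert "ip-address = bytes" in expanded
assert "endpoint" in expanded
```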
C: The annotation aspect really is about going beyond Kernighan's car. Kernighan's car is a car where you only have one indicator light, which, when it's off, says everything is okay, and when it's on, says something is wrong. And that's your traditional validator, which validates an instance against the schema and tells you "that's wrong" or "that's right". The idea is that annotation can give you back more information.
C
But
right
now,
what
we
have
in
in
cdl
implementations
are
annotators
that
don't
get
the
benefit
of
information
provided
by
the
spec
writers.
So
the
spec
writers
cannot
say
whether
a
particular
rule
is
has
only
been
created
to
to
minimize
the
line
length
in
the
specification
or
something.
So
these
are
not
rules
that
are
carrying
semantic
information
that
that
an
application
would
be
interested
in
and
we
only
have
dot
feature
for
carrying
information
beyond
rule
names,
and
these
rule
names
also
are
not
related
to
the
real
world.
C
So
we
cannot
use
rdf
names
as
as
rule
names
and
so
on.
So
this
is
all
related
to
the
post.
Schema
validation
instance
thing.
I
showed
this
slide
on
itf
112,
so
I'm
going
to
go
through
this
very
quickly,
but
basically
the
idea
is
that
if
you
do
a
validation
process,
you
get
an
enriched
and
augmented
version
of
of
the
instance
and
of
course,
what
what
is
the
data
model
for
that
enriched
version?
C
And
can
we
define
json
and
sibo
diag
and
yemen
and
sieber
for
that?
So
that
would
be
an
important
addition
to
the
cdl
ecosystem.
So
you
can
take
the
validator
output
and
feed
it
in
into
some
application.
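The idea of a validator that returns an enriched instance, rather than Kernighan's single pass/fail light, can be sketched as follows; the rule names and the shape of the annotation are invented for illustration.

```python
def validate(value, rules, rule_name):
    """Validate `value` against a named rule and return an enriched
    instance: the value together with the rule that matched it.
    `rules` maps rule names to predicates (a stand-in for a schema)."""
    if not rules[rule_name](value):
        raise ValueError(f"{value!r} does not match rule {rule_name!r}")
    return {"value": value, "rule": rule_name}  # the "enriched" form

# Invented rules standing in for a schema definition.
rules = {
    "port": lambda v: isinstance(v, int) and 0 <= v < 65536,
    "host": lambda v: isinstance(v, str),
}

enriched = validate(8080, rules, "port")
assert enriched == {"value": 8080, "rule": "port"}
```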
C: So for IETF 115 we should have a prototype of this composition engine, which might be a preprocessor if we're lucky, and maybe some first elements of the annotation semantics, for instance just the definition of the post-schema-validation instance. And that would allow us to look at this at IETF 115 and decide which of the functionality that has been prototyped
C: is technically complete and agrees with the implementations that exist. And then we could maybe do some document splits, or decide how exactly this should be published, but we should have something that is reasonably stable, so implementers can actually start playing with it. And there's also a longer-term objective, which is really increasing the integration with IANA: there's so much data in the IANA registries that we currently have to manually extract to use in specifications. Like we had with CORE SID: we had a multi-year discussion with IANA that finally came up with a mechanism that we could all agree on.
C
We
could
discuss
with
ayanna
how
we
actually
get
the
web
access
interfaces.
We
need
well-defined,
maybe
even
be
able
to
tell
ayanna
how
much
additional
load
we
are
generating
by
people
using
cddl
and
maybe
even
define
some
rules
for
future
registries,
so
they
they
become
automation,
friendly
yeah.
I
already
talked
about
the
interpretation
between
yang
and
cdl.
C
So
let
me
go
back
to
to
this,
so
this
is
a
bit
of
course,
also
a
question
to
implementers.
C
What
would
what
components,
what
what
toolkit
elements,
what
they
like
to
see
to
be
to
best
be
able
to
participate
in
in
this
development.
F: Yes, hello, friends. Speaking as somebody who's done some CDDL development in DTN RFCs: the composition aspect is something that would be very helpful, because we're starting down the road of building up libraries, and, as you said, copying and pasting or concatenating things runs quickly into limits of scale.
B: Doing things via comments sounds really scary to me, and I wonder whether a better approach, or at least an approach we should consider, is to introduce new syntax elements, which would make the documents incompatible with CDDL 1 but, at the same time, define possibly very simple rules for how to remove all that new stuff and arrive at the equivalent CDDL 1 document that one would have obtained if we went with comments. So that, as a user, I don't have to tread carefully around what the new language might possibly interpret.
E: Yeah. So I ended up needing an extra control operator in CDDL, specifically to represent sequences that are key-value pairs, but not encoded as a map, encoded as an array instead. And the solution I eventually settled on was to not do anything at all in the CDDL, leave it exactly as it is, and then use an external document to annotate which paths into the CDDL had this kind of information. That seemed to be the simplest thing at this stage, and I wonder exactly how much of this is actually something that belongs in CDDL itself versus something that honestly belongs in an external annotation.
E: Definitely it's still very much in the prototype stage, and I don't have it ready yet. Ultimately, the goal of that project was to produce an automated SUIT parser generator. That, hopefully, will come at some point, but it's not quite there yet.
G: So I did a lot of work with that, where I have CDDL that can express data structures that are encoded both in JSON and CBOR, and I definitely ran into a lot of limitations with .feature.
G
I
don't
know
if
anybody
else
has
tried
this,
but
there
was
at
some
point
I
just
had
to
give
up
on
dot
feature
for
for
it,
and
that
was
where
you
were.
I
was
embedding
seaborg
and
jason
and
jason
and
sebor,
and
that
really
that
was
too
much
for
it.
I
also
found
that
the
cdl
tool,
the
diagnostic
output
or
the
error
output
from
the
cbvl
tool,
with
the
the
use
of
dot
feature
to
distinguish
json
from
seaborg,
was
hard
to
work
with.
A: Just set the mic on the table. Okay, and we are out of time. So if we need to wrap things up, let's do it quickly. Carsten?
C: Yeah, I got some pretty good feedback now, and we probably have to pursue this feedback on the list. In particular, I would like to know what Laurence's embedding situations were and how we can possibly address those.
A: Okay, so we have that. The other item we had: Christian, you had a tag registry item. Our next interim call will be on the 24th of August, and I guess we'll just put those last two things on the agenda for that, yep.
A: Okay, so, everybody, thank you for coming, and thank you especially to Marco for taking notes; and those of you for whom it's after midnight, thank you for staying up late. We'll see you all on the 24th of August and on the mailing list.