From YouTube: IETF100-CBOR-20171116-1550
Description
CBOR meeting session at IETF100
2017/11/16 1550
https://datatracker.ietf.org/meeting/100/proceedings/
A: Welcome to CBOR. Anybody who is going to participate, you might want to come a little bit closer to the front, so you don't have to trot up to the microphone.
A: So I'm Joe Hildebrand, and this is Francesca Palombini, and here are the coordinates to participate. Do we have any remote participants on Jabber? Christian? Okay, all right. If you're remote and you can hear me and you need something relayed to the microphone, we've got somebody who will do that for you: prefix your Jabber message with "MIC:" and it will be relayed for you. Dave will be doing that. The Etherpad is on here; I think that this URL should work now.
A: Agenda: after this introduction bit we'll talk about CDDL for about half an hour, then the status of the CBOR draft, and then we've got the OID draft and the array tags draft to talk through; and then there's one that I haven't read and should have, called time tags, and then we'll wrap up. Anything else that people want to put on this list? Anything that people think we ought not talk about? Great, I'm going to take that as good. All right, so.
E: Just a quick status update since Prague. CDDL was updated after Prague, and Carsten and Henk are going to present some issues that they would like the working group to decide on. Then, about CBOR: it was recently updated, and to check that you can also check the GitHub, because that's where the latest version is. There is still the implementation matrix to fill in. I have pinged the mailing list; the only one filling it in was Joe, so I think Carsten is going to talk about that a little bit. But please come forward.
A: So there are some tests in there that you can use. As you know: here are some inputs, and if you can process those inputs, then you're probably in good shape. Now, I don't implement the entire set of extensions and tags, so hopefully somebody else will have implemented the things that I haven't.
E: So we need to talk about that, and we need to see what the status is for those. Here I just reported what you can see in the datatracker about when these documents were last updated, and then we would like to hear from the authors about what they think is still needed to go for adoption and/or working group last call. For the OID tags, there has been no update since March, so this document probably needs feedback; we have asked for it at the past couple of meetings. So we'll talk about that. The array tags draft was updated after Prague and still needs reviews. I was going through the minutes of past meetings and noticed that some people who promised reviews last meeting did not provide them — maybe forgot — so I'm just going to remind them here: Jim and Paul promised reviews on this document. It's short, it shouldn't take you long, but yeah. So we'll talk about this one too. I think that's it from us, and I'm going to start with CDDL.
F: If I tip over, you have to catch me. Okay, so what's this? Okay, so if you can try switching it out — it worked after switching it on in the CoRE meeting, so I think it really just needs to be switched on. Okay. So, just a reminder of what this group is about: we are supposed to take RFC 7049 to Internet Standard level, we are supposed to standardize CDDL, and to define a few tags and structures. That's what's currently the charter, and I'll run through this in three parts.
F
The
first
one
is
about
CD
DL,
so
I'm
not
sure
Hank
is
here
because
he
has
a
conflict
with
net
conf,
so
I'm
not
sure
you
can
make
this
meeting
and
and
christophe
is
in
raven.
Unfortunately,
so
just
to
remind
people
what
CDL
is
about
and
give
a
little
bit
of
a
tutorial.
That
is
probably
needed
to
talk
about
the
actual
issue
that
we
should
discuss
today.
F: We have been using BNF since RFC 733, so that goes quite well together with being at IETF 100; this has a tradition, and for about forty years we have been using a BNF — the Augmented BNF that was developed for the mail RFCs. So this is something we have experience with, and there are 750-odd RFCs that reference the ABNF standard RFC. So one could say this is a successful RFC; it's even more successful than YANG, which is currently referenced by 160 RFCs.
F: There is some tool support for the ABNF format. It hasn't taken the world by storm, but there is support: we have our own tools like Bill Fenner's bap and abnfgen, but well-established parser generators like ANTLR also support it, and it's just normal in the IETF that if you do a text-based protocol, you write an ABNF spec.
F: So this is our role model here: we want to be like ABNF, and actually that works not only on a high level but also on a technical level. ABNF is composed of productions, so you have something like an address specification, and that is a local part, followed by an @ sign, followed by a domain. So what these ABNF rules do is give names to sublanguages, and then you have two kinds of composition. You can compose by concatenation — like this one does here, concatenating a local part, an @ sign, and a domain — or you can compose by choice. This nesting structure then at some point terminates at literals, like the @ sign here, or at the ABNF notations for literal ranges; this one just means all ASCII characters between 33 and 90 decimal. Okay.
F: So, given that the data we are using are integers, text strings, byte strings, floating-point and so on, we have to add literals for those primitive types: we not only have text strings, we have these other things as well. And we have constructors for the two container types we have in JSON and CBOR — the arrays and maps. The whole thing is inspired by RELAX NG, which was the schema language for XML done right.
F
You
probably
know
another
one
and
yeah
I'm
not
going
to
comment
on
that
one,
but
relax
and
G
has
done
some.
Some
interesting
work
here
too.
So
in
CDL
rural
names
are
types
you
can
write,
something
like
boolean
is
either
a
false
or
true,
and
then
you
have
a
boolean
type
followed
by
the
way
is
also
a
type
which
has
just
one
number:
the
value
falls.
F
You
could
have
something
like
label,
which
is
either
a
text
or
an
integer.
This
would
be
an
application
specific
type
or
an
integer
is
actually
either
an
unsigned
integer
or
a
negative
integer
in
C
bar.
So
types
are
just
sets
of
potential
values,
and
literal
is
just
a
very
small
type.
So
this
little
one
here
is
a
type
with
the
one
member
in
it,
which
is
the
number
one
and
then
there
are.
There
are
other
ways
of
writing
these
types.
For
instance,
one
two
actually
means
a
type
that
contains
the
numbers,
one
two
and
three.
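The rules just walked through, written out as CDDL (a sketch; the first three mirror definitions in CDDL's standard prelude):

```cddl
boolean = false / true    ; a choice of two one-member types
label   = text / int      ; an application-specific choice
int     = uint / nint     ; integers in CBOR: unsigned or negative
one     = 1               ; a literal is a type with one member
small   = 1..3            ; a range: the numbers 1, 2, and 3
```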
F: The label is ignored when you use a group within an array, and, on the other hand, arrays have a sequence which is ignored within maps. So groups are grammars for key-value pairs, and keys and values are types. This is where the circle closes: you compose groups out of types, and you compose types — if they contain arrays or maps — out of groups. So that's essentially the whole technology behind CDDL, and the result is that you can write something like this. This was one of the first examples.
F: When we read RFC 7071, we were annoyed that they had to define a special stylized form of English to define their JSON data type, which takes, I don't know, four or five pages; and so this is the technical content of RFC 7071 on one slide. What we have here is a name for a type; the type is reputation-object.
F
We
have
the
map
or
JSON
object,
braces
and
then
what's
in
here
is
a
group
which
says
you
can
have
the
label
application
with
a
value
of
text
and
I'll
show
you
you
must
have,
because
there's
no
question
marks
there
and
you
must
have
labeled
reputations,
which
has
an
array
of
zero
or
more
refuge
ins
in
it
and
the
refuge.
One
is
this
thing
with
some
mandatory
parts
and
some
optional
parts
and
a
wild-card
at
the
end,
that
is
there
to
express
the
extensibility
and
that's
really.
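A condensed sketch of the kind of CDDL on that slide (abridged from the reputon example used in the CDDL draft; the member list here is shortened for illustration):

```cddl
reputation-object = {
  application: text        ; mandatory: no "?" marker
  reputons: [* reputon]    ; mandatory: an array of zero or more reputons
}

reputon = {
  rater: text              ; mandatory parts
  rated: text
  rating: float16
  ? confidence: float16    ; optional parts
  ? expires: uint
  * text => any            ; wildcard at the end: the extension point
}
```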
F: The first user of CDDL was an SDO outside the IETF which cannot be named in this room, but you can guess which three-letter-and-longer acronym that might be. So, yeah, we don't want to break it, and we don't want to put in the kitchen sink; but currently one of the two focuses is getting the definition of the semantics of the language unambiguous, so tool vendors can come in and say: yes, we can implement that. And one new thing, which isn't in the internet draft yet — it's in a topic branch in the GitHub — is appendix B.
F
That
is
called
matching
and
it
summarizes
the
matching
words
that
are
used
by
CD
types
and
groups
in
a
short
way.
It
essentially
just
goes
through
the
ABN
F,
of
course,
CD
DL,
as
if
it's
a
texture
language
as
defined
in
a
B
and
F,
and
it
goes
through
that
a
B
and
F
and
for
every
CD
I
construct
that
exists.
It
concisely
summarizes
the
CDR
it
semantics,
and
what
we
need
to
find
out
is
whether
that
new
appendix
is
useful,
whether
it
is
correct
and
whether
it
is
complete.
F
So
that's
something
where
where
it
would
be
really
useful
to
get
reuse
on.
So
if
people
think
it's
useful
and
I
think
the
mailing
list
has
had
pretty
much
a
consensus
that
it
is
useful.
We
would
put
this
into
the
next
version
of
the
draft,
so
that's
one
area
of
work,
the
other
area
of
work
is
actually
making
technical
changes
and
so
far
the
appetite
for
that
has
been
limited.
So
they
have
been
few
people
who
said
I
have
said.
F
We
would
like
to
change
this
and
Dad,
except
for
one
area
and
the
problem
comes
in
when
you
add
a
wildcard
to
some
existing
map
definition,
that's
why
it
was
discussed
under
the
heading
map
validation,
but
it's
actually
a
more
general
issue.
So
since
this
is
a
productive
language
based
on
the
same
concepts
as
a
B
and
F,
these
two
parts
of
the
group
are
taken
separately.
F: They are not interpreted together; they are just interpreted as they are, and the language is then constructed by taking these two parts together, like in ABNF, you know. So we might have an entry in the map with a key of 4 and a value of type text, and we might have any kind of entry that has an unsigned integer as the key and any data type as the value, because we don't know yet how we will extend things. So that's generally fine.
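In CDDL, the map just described looks like this (a sketch of the example under discussion):

```cddl
extensible-map = {
  4 => text        ; the specific entry: key 4 with a text value
  * uint => any    ; wildcard: any unsigned-integer key, any value
}
```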
F: However, what one would really like to express here is that the 4 is taken. We don't really want that other entry here to say: oh, by the way, you also can have a key of 4 and any value — because the 4 is already taken. Now, a grammar mathematician will tell you: oh, you are making the grammar context-sensitive; and that's exactly the reason why we haven't done this yet — context-free grammars have advantages over context-sensitive grammars. But the discussion on the mailing list was:
F
This
is
really
a
feature
we
would
like
to
have.
It
would
really
be
useful
if
we
get
an
instance
that
has
a
key
of
four
and
a
value
of
floating
point
would
really
be
good
to
have
a
validator,
throw
an
error
for
that.
Instead
of
just
accepting
it
because
it's
captured
by
the
wild
card
now,
how
do
we
fix
this?
F: There is one possible answer, and this is a slide I used last time in the list of kitchen-sink things — things we might want to add — which is cuts. The cut is something that was initially defined by the Prolog programming language, and recently it has become an item of interest for people who do parsing expression grammar (PEG) parsers. So BNF started out just as a language for defining things; then the theory was built around that, and two theories emerged.
F
One
was
way
too
complicated
way
too
expensive
to
implement
the
past
expression
grammars,
so
the
theory
veered
off
into
LR
and
pauses
and
so
on.
But
in
the
meantime
people
have
started
to
do
a
pause,
expression
grammars
again
and
the
example
here
is:
we
have
a
data
type,
a
which
is
either
an
ant
or
a
cat
or
an
egg,
and
the
ant
is
an
array
with
the
first
element
being
ant
and
the
second
being
an
unsigned
integer
cat
text,
egg
float,
no,
the
yes,
yes,
oh
yeah.
This
should
said.
G
F
That
were
true,
then
this
would
be
right.
So
sorry
about
that.
So
the
reason
why
we
started
talking
about
cuts
in
the
office
team
was
that
we
wanted
to
have
better
error
messages,
because,
right
now,
when
you
put
this
into
the
CDL
tree,
it
doesn't
really
have
a
good
way
to
tell
you
what
went
wrong
it
just
says
this
is
not
an
a.
It
is
not
going
to
tell
you.
Oh
you
said,
and
and
after
an
end,
you
really
should
have
put
an
unsigned
integer,
and
this
is
what
these
cuts
are
doing.
F: They are essentially committing to a choice that has been made. They are cutting down the rest of the search tree by saying: oh, if you actually saw the word "ant", then really all the other rules that are in this choice don't matter, because this is the one you should be choosing. So that means, if you get anything but a uint after the "ant", you know this is wrong, and you can say: there should be a uint there.
F: So that was the reason we wanted to do cuts, and that was a good reason, but maybe not good enough for putting it in. Now, the interesting thing is, we can use the same construct to handle the map validation issue, by essentially inserting cuts between the keys and the values of an item in a group. So this would mean: if you match the number 4, you can ignore all the other alternatives for matching this particular key-value pair in the map — you are done, and the only value that is allowed here is text.
F: So that's this one — okay, thank you. This is, of course, a little bit noisy, so this would come with a proposal to make the existing ":" shortcut include that cut. So when you say `4: text`, this means you really mean the 4 is taken by this particular production, or by this particular part of it. So that's the proposal. Now, this needs to be fully defined.
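A sketch of the two spellings under that proposal (`^ =>` is the explicit cut; the `:` shorthand would imply it):

```cddl
extensible-map = {
  4 ^ => text      ; explicit cut: once key 4 matches, the value must be text
  * uint => any
}

extensible-map2 = {
  4: text          ; proposed shorthand: ":" implies the cut, same meaning
  * uint => any
}
```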
F
Those
people
who
know
about
the
theory
know
that
when
you
do
a
cut
to
reach
up
the
tree-
and
you
actually
find
how
far
you
reach
up,
it
may
seem
obvious
here,
but
in
more
complicated
examples
it
has
to
be
fully
defined.
That's
one
thing.
The
second
thing
we
want
to
do
is
checking
for
breakage
that
this
change
makes,
and
the
third
thing,
of
course
you
want
to
do,
is
implement
it
to
make
sure
it
works.
So
this
is
probably
month
of
work
that
is
needed
to
make
this
happen.
A: Any comment from the floor? Do you have other proposals, or is this the only one you've got? I don't want to start ideating, but if you've got other things you want to also suggest instead — no?
A: I mean, if what you're really trying to do is get the extensibility bit, and you're having to sort of retrofit this in just for extensibility — like, I get the ant example you've got there, but do you want the ant example enough to make extensibility this way? It seems relatively complicated from a theoretical perspective; the cognitive load associated with this is relatively high compared to the rest of the document.
A: I mean, there are other things you could do to get extensibility, if you just wanted extensibility. So, for example, after the end curly bracket, you could put something at the end that says: by the way, this is extensible in some way. For example, you could have, in parens after it, some sort of type statement that says how you're allowed to extend.
H: Hi, Sean Leonard. There are a couple of proposals I want to bring, but I want to first address this pretty clearly. I think it's valuable, as discussed on the mailing list, to have some way to either cut or constrain the type. Carsten and I actually worked on a related problem to this in the context of ABNF, and I would like to propose that we consider an alternative way to get the same results, namely allowing the broader definition.
H: Right, because you take that point of the parse tree, you output the error message, and then you back up, right, essentially — so then it doesn't match. However, another way to look at it is that the data production still does match the broader type: you went to any, okay; it just doesn't match the subtype.
H
So
then,
a
parser
on
the
fly
or
during
production
will
know
that
it
matched
the
broader
type,
but
not
the
subtype,
and
then
the
application
can
treat
that
as
either
an
error
or
as
something
that's
potentially
recoverable,
where
the
consumer
can
then
just
realize.
Okay,
we
have
the
broad
match,
but
not
the
subtype
that
we
were
looking
for.
Therefore,
the
data
item
is
in
is
deficient
basically
and
get
the
same
result.
So
do.
H
That
would
be,
there
is
another
control
operator
that
is
in
CD
DL
now
called
within
or,
and
so
it
would
behave
in
a
similar
way.
I
guess
I
can
propose
another
way
to
look
at
this.
I
have
a
draft
in
the
a
B
and
F
context,
which
is
a
draft
chantek
constrained,
a
B
and
F,
which
ironically
uses
the
Hat.
H: You would write the catch-all first, and then you write the specific instances that also match the catch-all, so you're constraining the generic production to these specific instances of interest. And if it doesn't match the specific instances of interest, then that's a condition that, of course, a program can process or deal with: it matched the general thing, but not the specific thing, so it's in the catch-all.
H: That makes sense — although, if the remainder type also matches all of the other specific types, you can first match the remainder type, which presumably would also be relatively cheap relative to going through the list of specific matches, and then go to the specific match. The cut is essentially a way to construct that.
E: And actually I want to take this opportunity to remind people to participate on the mailing list, because we've seen very little participation there. The working group seems interested and active in the face-to-face, but I would like to see more discussion on the mailing list; otherwise I have to keep trying to get people to talk.
B: In the current grammar, it does not matter if I do the star first or the 4 first; those would be equivalent in the grammar. This is not true anymore, and even worse than this: you now have the question of — if you have group one, which ends in `* uint => any`, and then you say "now append group two to it", all of the cuts in group two won't ever be seen, because you'll hit the `* uint => any` first.
B: You'll never reach the cut bit, and that's because of the order dependence. Look, so that's the first problem I have with all of this. The second problem I have is: I don't understand why this needs to be expressed in the grammar and cannot be expressed in semantics. This is commonly done today; for example, with email headers — I can't remember for sure, but I believe the rule that you can't have two of certain headers is not in the grammar.
F
I
Think
an
edit
of
this
document.
First
of
all,
if
there
are
good
proposals-
and
it's
basically
what
you
just
said
so
busy
took
some
of
the
words
my
mouth
that
can
express
this
as
a
control
to
this
type
definition
very
interested
to
see
your
proposals
for
that
and
then
coming
back
to
Joe's
comment.
This
is
why
it
looks
like
this
is
about
extensibility.
It
isn't.
It
is
about
the
fact
that
sometimes
we
have
the
problem
that
there
are
more
very
defined
keys
and
less
defined
keys
and
maps.
If it were about extensibility, we would use the extension points. There are extension points; our choices are extension points; we could end with an extension point, just as you proposed — that is already in CDDL. But that is not what this is about. This is about the fact that we sometimes have labels that are less specified than others, and those would basically gobble up all the more specifically identified labels.
H: Okay, great — Sean Leonard. All right, I did want to draw the working group's attention to a couple of other proposals that have come up with regard to CDDL, and the grammar specifically. The proposal that I wanted to make, or discuss, had to do with regular expressions. So I did bring up on the list, a few months ago, the issue that regular expressions currently are defined as PCRE, which is great — I think PCRE is a great regular expression implementation and everything else.
H
But
there
isn't
a
normative
reference
in
CDL
or
in
seaboard
to
that
and
I
think
that
that
is
something
that
should
be
addressed
in
both
documents,
because
both
of
them
do
in
fact
reference
regular
expressions,
then,
because
of
the
power
that
regular
expressions
have
to
identify
and
constrain
the
different
data
productions.
I
feel
strongly
that
regular
expression
should
be
first-class
syntactic
elements
inside
of
CDL.
So,
for
example,
just
as
you
can
do
in
Perl
or
in
JavaScript.
H: The way you write a regular expression there is: you do slash, then the regular expression content, then slash; and usually, if you have an editor that is language-aware, it'll color the regular expression and show you interesting stuff about it, and so forth. So the technology is all there. Right now, though, it's defined in a string, and the problem is that when it's in a string there's a ton of escaping — of backslashing — that you have to do to make that work.
H
So
I
wanted
to
bring
that
up
to
make
regular
expressions
a
first-class
syntactic
element,
another
advantage
of
doing
it.
That
way,
is
that,
then
you
can
add
the
flags
or
the
modifiers,
such
as
I
and
X
and
so
forth
on
the
end
of
it
yeah,
because
otherwise
you
essentially
have
to
construct
two
strings
and
then
put
them
together
and
then
it's
it
just
doesn't
look
it.
It
doesn't
look
good
and
that
that
alone,
the
conciseness
will
will
help
I
think
a
lot
with
readability
and
it'll
help
identify
and
catch
errors
and
such
so
yeah.
J: I sympathize with you wanting to do this, but — you're a co-author on a draft that's waiting for it, right? So you see the point. I mean, if you can think about doing this as a version-2 effort, I think that would actually probably be better for CDDL, because it gives it a place in the world, and you can then work on improvements.
H: However, I've read a number of CDDL specs in drafts and RFCs, or whatever, that have floated around; I'm not aware of standards or specifications out there yet that are relying on them normatively, where removing them, or taking them out and tweaking them, is going to cause real interoperability problems. One point about the control operators I want to point out is the .size control operator. So, right now:
H: with .size you can define a single size, or you can define a range, I believe; so it can be, like, three to 63 bytes in length, or whatever it is. So that's fine, but I have use cases where I want to be able to constrain the size of a text string or a byte string to, basically, mod two — like only even numbers, or only powers of two, or whatever — just because those are the increments of the data items I can deal with.
H: Interestingly, if I use a regular expression constraint, I can actually do that very powerfully, by just saying: the regular expression is a group of two characters or bytes, and then plus — essentially, multiples of two. I can't do that with .size, and I think, as a result, if we adopt regular expressions in their more powerful form — with the slash notation and whatnot — we may actually find that there are control operators that are superfluous, so we don't need them at all, and we might want to:
H
You
know,
make
all
these
control
operators
simplified
and
just
say
just
use,
regular
expressions
and
here's
like
a
handful
of
very
powerful
ones
that
you're
going
to
commonly
use
for
these
sorts
of
patterns.
So
with
that
said,
that's
kind
of
another
proposal
on
the
table
not
to
remove
the
control
operators,
but
to
recognize
that
there
are
perhaps
more
powerful,
simpler
ways
of
annotating
the
same
thing,
and
if
people
really
really
want
a
v-0
or
a
v1
of
CD
DL,
we
can
consider
you
know
dealing
with
the
control
operator
issue
after
the
publication
of
that.
F: What we have tried to do is not turn CDDL into Perl. So regexes are there for the few cases where they actually add something, but they are traditionally not the way in which we specify strings in the IETF. Now, one way to solve this problem would have been to add ABNF to CDDL, and maybe in CDDL version 2 we will do that; but I think regexes are there for a narrow domain, and they should stay that way.
H
So,
in
response
to
simple
and
powerful,
how
can
that
work
right?
Well,
there's
always
a
holy
grail
of
computer
science.
You
say
something
simple
and
powerful,
and
then
it
turns
out
to
be
one
or
the
other
or
whatever,
and
the
answer
is
regular.
Expressions
are
simple
when
they're
simple
and
they're
powerful
when
they're
powerful
right,
but
they
are
a
part
up
to
now
of
CDL
and
C
bore
right.
H
The
existence
of
C
DDL
in
its
current
draft
form
assumes
that
AC,
DDL
generator
or
a
parser
or
validator
of
some
kind,
that
can
form
suspect,
will
do
something
with
the
regular
expression
right,
so
that
basically
assumes
that
pcre
is
lying
around.
Does
that
make
sense?
So
assuming
that
you
have
this
piece
of
machinery,
that's
part
of
C
bore
because
it's
in
the
the
specs
right,
then
that's
what
you
got
so
you
may
as
well
make
the
most
use
of
it.
Does
that
make
does
that
make.
F: Yeah. And one question that I don't have on the slides, that maybe Alexey can answer: PCRE is not defined in a normative document that we can use. Is it also the right spec to reference here? That's probably correctly identified. So I'm not sure whether I want to be a guinea pig for the new "let's reference open source projects" policy of the IESG, but this is where it actually would apply.
I: Yeah, and finally, my concern with accepting this — let's assume we exchange the controls in general for the powerful and simple regexes — is that we are talking about a domain where, if we are ever going to do message validation, that's a bunch of overhead, maybe, in that domain. So I would be very careful about the powerful stuff in that domain, yeah.
F: So the way forward, if Alexey finds out we cannot do this: regexes are currently linked into the document using an extension point, and we could just take the regex parts out of this current document, finish it, and write a second document that has the regex reference in it, and let that sit in the internet-drafts directory until we find out how to do it all.
L: There is no control operator that's not used by at least one — you know, I know you want to stop this, but I'm confused by something. Ultimately, the validator — I want it to be in my embedded device. Oh, sorry: Dave Robin, maker of embedded devices. I want to get a CBOR message in, and I want to be able to validate it based on some compiler that looked at the CDDL and told my device how to validate it.
L: Now, if you say everything can be replaced by one gigantic regex expression, then I'm going to have to give it to a human to figure out how to actually write the code to do that validation, because I don't have a regex interpreter in the embedded environment. So let's not go overboard and say we don't need CDDL, just one big regex expression. The constraints as they are can easily be turned into embedded rules; don't go crazy with things that can't be. Those constraints can be easily turned into embedded rules.
F: Next: the CBOR specification itself. As you know, our job is to take this to Internet Standard level, and there are several things we need to do; there is a process for doing that, which you'll find in RFC 6410. Those of you who are not familiar with that process, please have a look. We have some 45 implementations, so it should not be too hard to point out two independent implementations that are interoperable.
F: What we need to find out is — as Joe said, we have fixed the errata — while looking at interoperability, we can look at whether we have unused features, and, as far as I know, we don't have any patent claims that are known so far. So the status of the document is that the -00 had already fixed the errata, and the -01 has reacted to a few comments that have been made by implementers. One is that the way the simple types like false, true, null and so on are encoded is now stated again in another place, so it's harder to miss how they are defined. We added a changes section; maybe in the next version we will separate editorial changes, fixes, and new information from each other. The only real new material is the new section about CBOR data models, and we should quickly talk about that.
F: So, those of us who have worked with JSON — we like it; however, it's not always clear what the data model actually is that is being derived from a JSON instance. And if you really paid attention, you could infer the CBOR data model from RFC 7049, but maybe it's better to actually make this very explicit.
F
So
there
is
a
proposal
for
a
new
section
2.5,
which
is
called
generic
data
model
generic,
because
it's
not
about
a
specific
data
model,
a
specific
application
might
be
using,
but
it's
about
the
complete
set
of
instances
that
can
be
realized
in
recibo
and
that
generic
data
model
comes
in
two
parts.
One
is
the
unexpended
basic
data
model
and
the
other
half
is
the
extension
points.
Of
course,
given
that
the
extension
points
are
therefore
extending
zero,
that
data
model
is
not
closed.
F
If
you
expect
generic
encoders
and
decoders
to
interoperate
and
an
ecosystem
of
such
generic
and
coastal
decoders
makes
interrogatory
so
much
more
likely
and
also
guides
the
definition
of
specific
data
models,
because
you
won't
define
data
models
in
such
a
way
that
generic
encoders
and
decoders
have
problems
with
that.
So
this
is
one
edition
which
could
be
called
editorial,
except
there
there's
also
a
little
bit
of
text
in
there
that
clarifies
some
expectations
and
the
batteries
included.
Aspect
of
our
FC
1749
is
not
not
appropriate
if
you
need
to
ship
C
bar
by
ma.
F
So
sometimes
you
have
to
leave
out
the
batteries
and
the
question
is
which
batteries
do
we
really
want
to
have
India?
Which
of
the
pre
extensions
by
the
document
are
really
basic
and
section?
Two
five
now
clarifies
that
the
three
simple
data
items-
false
true
null
do
come
in
through
an
extension
point,
but
they
are
really
not
an
extension.
They
are
part
of
what
is
expected
to
be
provided
in
a
generic
encoder
and
decoder.
F
That
still
doesn't
mean
that
a
data
model
that
you
define
using
SIBO
has
to
use
them,
but
it
means,
if
you
write
a
generic
encoder
decoder.
Please
include
false,
true
or
not,
and
everything
else
is
truly
optional
and
a
matter
of
implementation
quality.
So
that's
a
statement
and
that's
probably
a
state
where
we
want
to
be
very
clear
about
in
this
working
or
whether
that's
what
we
expect
from
generically
coders.
So
again,
this
is
a
relevant
vent
for
interoperability,
because
this
ecosystem
of
generic
encoders
and
decoders
helps
gain
gaining
interoperability.
F: Okay, the other thing that came up: as you know, we have a few implementations out there — last time I looked it was 45, and it's changing every week now — and we haven't got a lot of feedback about interoperability problems. But we do have one, and it is really not a problem in CBOR; it's a problem in the world we are bridging to, which is the base64 world. Base64 has had a little bit of an evolution since it was first defined for MIME, and we probably haven't been tracking that evolution very well.
F: We are not saying that explicitly for tag 33, so that may be a problem. And then, for the base64-classic tags, 22 and 34, they reference RFC 4648 — but RFC 4648 essentially only has decision criteria for when you are using base64: those are the reasons why you might want to use padding, and those are the reasons why you might not want to use it.
F
So we have to tell a generic converter what kind of base64 it's supposed to generate. The problem is really urgent for tag 22, because people cannot implement this right now: without that knowledge they have to pick one choice, and different implementations will pick different things. And the second thing is, yeah, the tag 33 question.
F
So these are two different but interrelated sets of problems, and this, of course, should be guided by how base64 classic and base64url are being used in practice. Now, base64url is almost always used without padding; I remember having seen a version with padding once, but I'm not even sure that was standards-compliant. So interoperability might benefit from really nailing this one down. On the base64-classic side it's pretty much a bikeshed, so it's not really possible to decide this well, yeah. So we could be more explicit about tag 33.
F
We could also go ahead and define additional tags: base64 classic with padding, without padding, with up to 70 characters per line... probably not. And, oh, by the way, there's also a question that occasionally comes up about base16, since you can write hex strings in uppercase and lowercase, but that's a side track right now. So my proposal would be to actually read RFC 4648, which tells us padding was designed to help with situations in which the decoder isn't quite sure what the length is. In CBOR we always know the length.
F
So it's really the no-padding case from RFC 4648 that we have here. Now, again, the JSON side of a potential automatic conversion might have other constraints; maybe the JSON side then converts into a MIME message or something like that. So this is not a perfect argument, but it's an argument that might be used to settle that bikeshed. So my proposal would be to go for no padding with base64 classic, but add some language, a documentation note, that this was only added.
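The no-padding proposal can be sketched in Python (an editor's illustration of the point just made, not code from the meeting; the example byte string is made up):

```python
import base64

# Sketch: under the proposed "no padding" convention, an encoder strips the
# "=" pad and a decoder restores it. This is lossless precisely because,
# unlike a MIME stream, a CBOR byte string always carries its exact length.
def encode_unpadded(raw: bytes) -> str:
    return base64.b64encode(raw).rstrip(b"=").decode("ascii")

def decode_unpadded(text: str) -> bytes:
    # Restore padding to the next multiple of 4 characters before decoding.
    return base64.b64decode(text + "=" * (-len(text) % 4))

raw = b"\x01\x02\x03\x04\x05"                  # arbitrary example bytes
assert base64.b64encode(raw) == b"AQIDBAU="    # classic base64 pads to 8 chars
assert encode_unpadded(raw) == "AQIDBAU"       # proposal: drop the pad
assert decode_unpadded("AQIDBAU") == raw       # and it still round-trips
```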
H
Sean Leonard. So I think the core issue here is that you want different implementations to emit the same sequence of characters for base64, whether it's uppercase, lowercase, padding or whatever, right? So first of all, the question is why. And when I say why, I mean: is there, like, a canonicalization issue where you really need it that way, because otherwise some digital signature or hash is not going to compute, or something like that? Or what is it? And then...
H
The second is, and I'm not saying we should do this, but another possibility is that you consider all the different permutations and you just register lots of tag numbers for all of them. I think that's not a good idea, but it is a way to deal with it, and we just pick something arbitrarily for the current ones that we believe is gonna be the most common.
D
They use both RAML and JSON Schema, and are now moving towards Swagger, and both JSON Schema and Swagger have the property of defining a type for base64 and not defining a type for base64url. So that means they use base64 not because they want to, even though they're using CBOR on the wire, but because of other external dependencies they're using base64. So, yeah. And the implementation they're using is TinyCBOR, and I don't know what that does, but offhand I'm guessing your proposal would be fine.
D
The TinyCBOR author really laments the fact that he can't use the base64url bits in the TinyCBOR implementation for OCF, because there's no reason for it other than the specification language, having had to use previously JSON Schema and now Swagger. If we could get both of those, or I should say either of those, fixed, the TinyCBOR author would be very elated.
M
For me, the times it's base64 and not base64url, it's because of trying to interoperate with existing tools, which will spit out whatever they spit out. But the parsing side, their accepting of things, is usually pretty permissive in the first place, so I think...
M
So I think it's worth moving in that direction for the 30x set of tags: just let it be pretty flexible there, but on the other side just remove all the whitespace. And the safest approach I've seen for that is to also make sure to keep the padding for base64; for base64url there's almost never padding. It's okay to be permissive on what we accept, but on what you're going to generate, just set some really strong limits.
F
So Joe already mentioned that we have to do work on the implementation matrix. I forgot to put a link to your code on the slide, so maybe you can send it again to the mailing list. So right now, again, Joe has done his homework, and, yeah, is there anyone in this room who can do their homework on this? Jim can do that. Carsten.
F
Okay, that's all I had on the CBOR document. Now, the last set of items on the agenda is CBOR tags, various CBOR tag documents. Again, RFC 7049 predefines 18 tags; maybe it's 14 when we're done today, there's stuff in there. But the point is: it's easy to register your own CBOR tags.
F
So just as one example, what's going to be completed very soon is the CBOR Web Token, which is essentially the JSON Web Token translated to CBOR: it packages a claim set into CBOR, and then you can apply COSE security to that in various forms. So here we have tag 61 assigned already.
F
We didn't even have to do an early allocation for that, because the registration policy right now is very liberal. The working group last call for the CBOR Web Token completed in the IETF ACE working group, and it will go into IETF last call. So that's an example of a tag document that we are doing because we have a standard that actually uses it. We also have some other tag drafts that are not necessarily motivated by specific standards, but are motivated by wanting to do specification work that references certain types of data structures.
F
So one of these is the OID draft, and at some point Sean and I have to sit down and see what we actually want to push through here at the moment and what maybe should go into a separate document. But if you have feedback on that document, that would be useful; I'm not going to talk about it today. The second one is the array tags draft, which has been out for a while and has been pretty stable for a while; it is still waiting for working group
F
adoption; I will talk about that in a minute. The third one is the time tag, which is off-charter and essentially is completed, process-wise, because IANA has registered an FCFS tag for it, 1001. But maybe we actually want to turn this into an RFC, and I'd like to understand whether this working group, when it has completed its current charter, wants to take care of it or not.
F
So maybe this should be in another group of tags motivated by standardization activities, and then maybe at some point we may want to do a "useful tags" document, because some of those registered tags are actually very useful, and it would be good to collect their specifications into an RFC, so people have an easier way of referencing them and we don't have downrefs for documents that use them. And that useful-tags document actually could swallow the time tag as well, if we consider that useful. So that's one way of handling this. That's about that.
F
It defines 24 contiguous tags in the two-byte space, and it also defines two more tags: one for other homogeneous arrays, which is useful in a decoder if you know ahead of time that this array of 4,000 elements you are finding is actually homogeneous, so you can map it to whatever homogeneous array type you have in your language; and a tag for multi-dimensional arrays, so when you get the elements enumerated, you know how many columns and how many rows there are. So I think these are pretty non-controversial.
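The multi-dimensional idea can be sketched like this (editor's illustration; the concrete tag layout is whatever the array-tags draft specifies, this only shows why knowing the dimensions before enumerating the elements is convenient):

```python
# Hypothetical [dimensions, elements] pairing for a 2-D array: the row and
# column counts travel ahead of a flat element list, so a decoder can
# allocate and fill the right shape in one pass.
def reshape(dims, flat):
    rows, cols = dims
    if rows * cols != len(flat):
        raise ValueError("element count does not match dimensions")
    return [flat[r * cols:(r + 1) * cols] for r in range(rows)]

matrix = reshape([2, 3], [1, 2, 3, 4, 5, 6])
assert matrix == [[1, 2, 3], [4, 5, 6]]
```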
F
But of course, eating up 24 tags is maybe a lot, and eating them out of the two-byte space is maybe even more of a lot. We have 232 there; about 20 are taken at this point in time, but I mean, this is about the size of the IP protocol number space. So we want to be a little bit careful about that, and we have had arguments on both sides. One is that it would be a waste of space, because arrays can be large, and large
F
Arrays,
obviously
are
fine
with
a
three
by
check
and
the
other
argument
is
no
arrays
can
also
be
small,
and
one
of
the
more
likely
usages
of
this
tank
is
for
an
RGB
value,
which
is
three
bytes
and
yeah.
Then
you'll
need
attending
a
three
by
tagged
to
a
three
byte
value
that
that's,
maybe
not
so
bright.
What
we
could
do
is
partition
those
24
tags
into
those
that
are
somehow
big
and
those
that
somehow
not
so
big.
F
That's
ugly
I'd
rather
spend
those
24
tags
at
once,
but
even
more
importantly,
I
would
like
to
get
this
out
of
the
way
for
some
reason.
They
said
this
has
kept
us
from
adopting
this
draft,
which
is
weird
because
we
usually
end
up
droughts
before
we
have
solved
all
tanking
issues
with
them.
So
let's
get
this
all
over
the
way.
A
Exactly, and so having that pattern sort of in our pocket, and with a name that we call it, you know, "parameterized tag" for instance, would allow us to shorthand the discussion. And then my next question here was: well, could we do a parameterized tag for this, where we have one tag specified? In which case having one in the one-byte range even might be fine, and then you'd have another byte for the array, so a total of two bytes to describe the whole thing.
H
Okay, Sean Leonard. So, to continue off of Joe's point: the CBOR tags OID draft actually discusses that premise of having stacked or parameterized tags, where you have a tag which is small and then another identifier next to it, which could be another tag, because you can in fact stack tags on top of a tag, or an integer. It's worth pointing out that in CBOR a tag is really just an unsigned integer with major type
H
Six
I
believe,
instead
of
major
type
zero,
otherwise
they're,
like
literally
the
same,
and
the
semantics
of
course,
are
that
you
can
put
one
thing
after
it,
which
is
the
thing
that's
being
also
has
a
correct,
correct,
yeah!
That's
right!
That's
correct!
Yeah!
That's
the
difference
with
regard
to
this
particular
thing.
I
think
the
draft
should
be
adopted
and
I
think
all
the
tag
draft
should
be
adopted.
We
just
get
it
over
with
I,
have
I
think
in
Prior
meetings,
Joe
expressed,
let's
put
it
in
the
three
byte
space.
I,
basically
think
for
this
block.
H
we should just put it in the three-byte space and not use the two-byte space. But the premise of allocating blocks of tags to take advantage of, or exploit, mathematical properties, like the fact that these are interleaved, is a good thing, especially because we are trying to optimize for IoT, you know, types of constrained devices, where this can then effectively be part of a jump table, right, with one subtract and then a jump to wherever you need to go. With regard to the bikeshed issue of, you know, three-byte tags versus two-byte tags:
H
If
we
have,
if
we
literally
have
an
RGB
value,
that's
three
bytes!
We
just
give
it
another
tag.
For
heaven's
sakes,
you
know
like
if
it's
gonna
be
an
array
of
lots
and
lots
of
RGB
values,
because
it
is
the
contents
of
a
graphic
buffer
or
whatever
from
one
computing
device
to
another.
That's
gonna
be
a
large
array,
so
the
tags
not
gonna,
contribute
to
it
and
if
it's
just
one
just
invent
some
RGB
tag,
you
can
put
into
one
byte
space.
So
two
bytes
space
or
just
just
call
it
a
day.
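The "tag is just an unsigned integer under major type 6" point, and the one-byte, two-byte and three-byte head sizes being argued about, can be sketched as follows (editor's illustration based on the RFC 7049 encoding rules, not code from the meeting):

```python
import struct

# Sketch: encoding the head of a CBOR tag. A tag head is an unsigned
# integer whose high 3 bits are 0b110 (major type 6).
MAJOR_TAG = 6

def tag_head(n: int) -> bytes:
    mt = MAJOR_TAG << 5                      # 0xC0
    if n < 24:                               # 1-byte head: value in low 5 bits
        return bytes([mt | n])
    if n < 0x100:                            # 2-byte head (additional info 24):
        return bytes([mt | 24, n])           #   tags 24..255, the 232-tag space
    if n < 0x10000:                          # 3-byte head (additional info 25)
        return bytes([mt | 25]) + struct.pack(">H", n)
    raise NotImplementedError("wider heads (additional info 26/27) elided")

assert tag_head(1) == b"\xc1"        # e.g. the epoch-time tag fits in one byte
assert len(tag_head(255)) == 2       # last tag reachable with a two-byte head
assert len(tag_head(256)) == 3       # beyond that, three bytes of overhead
```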
H
You
know,
or
or
I
will
point
out.
This
is
a
good
point
to
point
out.
You
don't
actually
have
to
tag
anything
at
all
if
you're
really
super
space
constrained
on
the
wire
or
whatever
just
don't
tag,
it
just
have
some
huge
huge
array
of
integers
or
whatever.
That
gets
me
to
the
issue
of.
When
should
we
tag
and
some
of
the
oid
draft,
which
is
admittedly
a
bit
of
a
kitchen
sink,
does
go
into
those
philosophical
issues
right?
When
should
we
tag
it
all?
H
The
problem
with
things
prior
to
C
bore
like
asn.1,
which
I'm
unfortunately
intimately
familiar
with,
is
that
there
are
all
these
options
for
tagging.
You
can
make
tagging
explicit,
you
can
make
it
implicit,
you
can
make
it
automatic,
which
nobody
knows.
What
does
just
the
parser
does
whatever
right
and
you
can
reassign
tags
from
one
universal
class
to
another,
so
something
that's
labeled
and
a
universal
integer.
Maybe
it's
not
an
integer,
maybe.
H
who knows whatever the heck it is. I think a real advantage of CBOR, which is not mandated by RFC 7049 but comes out of it, is that we've got this really awesome large tag space and one registry, where once you register it, that's it for all applications. And so I'd like to propose, and volunteer, to take some of the work I've done on this and kind of develop a "philosophy of CBOR tagging" internet-draft that the working group can look at and adopt, so we can all say: yeah.
H
You
know
this
is
how
you
should
use
tags,
which
mostly
is
use
them
explicitly,
but
if
your
application
doesn't
need
them,
just
don't
use
tags
at
all
and
be
okay
with
that
and
with
respect
to
that's
good,
because
if
we
agree
as
a
whole,
if
we're
gonna
use
tags
in
a
specification,
it's
always
everything's
gonna
be
explicitly
tagged
necessary.
It
is
it's
gonna,
be
great
for
debugging
right
cuz
you
get
a
Wireshark
trace
or
whatever
everything
is
tagged
right
and
they're,
not
that
big
just
couple
of
bytes,
but
you
know
exactly
what
it
is.
A
So, coming back to this: again, this is just me speaking as an individual. If you moved all this stuff into the three-byte space just to get us to the point where it's adopted, and then we could have a further discussion, that might be the most expedient way to get past this. So I don't know if you're...
F
So, on the philosophical question of whether you want to have a tag with a parameter space or use tags with their mathematical property as an integer: I agree with Sean that that's a good pattern to have. We have 18 quintillion tags, so we can do this a lot before we run out. In the particular RGB example, the tag is actually useful, because you would use uint8 if you are in the classical RGB space and you would use binary16 if you're using high dynamic range.
N
Alexander here. So I really like the idea of the document that says, you know, this is the way we should be creating tags, this is the way we should be using them. So if you go with this, I'll be willing to review it, or object, or express some opinion. So thanks, thanks for starting this. And, yeah, about these tags: I mean, for me, a tag is something that, like, it's a tag, you don't attach semantics to it. It's something that says, well,
N
the thing that follows is this, right? So I really like the idea of being able to do some mathematical miracles with it and do some jump tables or something, but I'm not sure whether attaching semantics to the numbers isn't violating the idea of the tag itself. Like, for me, with a tag, the fact that it's a number is just an implementation detail; it could have been a big string. So having these things could work, right, but, you know, I'm not sure it's something
N
We
would
like
to
start
doing
at
some
point,
because
here
now
we
have
some.
You
find
some
neat
thing
to
do
with
it
and
then
maybe
tomorrow
we
find
some
other
new
neat
things
to
do
with
some
new
tags.
So
we
say:
okay,
we
have
some
other
jump,
so
we
have
starting
code
to
the
tags
and-
and
you
know
we
example
with
the
lgb.
So
what
happens?
If,
okay,
you
have?
N
You
need
to
make
the
distinction
between
you,
int
and
binary
16,
to
make
sure
that
your
RGB
is
in
one
case
or
the
other
case
like
dynamic
range
or
non
dynamic
range.
So
what
happens
if
your
application
doesn't
use
use
tax
anymore?
At
some
point
you
say:
okay!
Well,
in
some
other
case,
you
want
to
be
super
efficient,
so
you
don't
or
we
have
some
parser
that
says:
okay
I'm
just
going
to
strip
tags,
because
you
know
it's
not
this.
F
is all you can do. And if we define it this way, then a specific data model can just say an RGB value is three numbers and use one of the tags that is being defined here. I mean, they are still in a table, right? So you don't have to do the arithmetic with the tag if you only care about a few of them; it's just expedient for a generic encoder and decoder implementation to be able to use the table.
N
Okay, I mean, that's a good point. But what I'm saying is that in this case, if you remove the tag, then you cannot parse it anymore; you cannot make this distinction between: is it dynamic range or is it something fixed? So, just saying that; that was like a minor point. The other point might be more important: do we want to make this explicit, like, mathematical thing on top of the tags?
H
Sean Leonard. I think the general design pattern is not specific to CBOR tags; it's just, when you're computing, right: you can have a huge if-else sequence of statements, which may or may not be parallelizable, but if you can have a large range of contiguous choices and then perform a single mathematical operation and jump to the right place, or, you know, compute that, then things go a lot faster and can be done in much less code.
H
So
it's
just
like
a
general
pattern
of
allocating
the
same
numbers
in
the
same
block
that
do
similar
things
for
the
same
reason
that
ASCII
0
to
9
just
happened
to
be
in
the
hex
pattern
30
to
39,
because
if
you'd
subtract
48
from
them,
then
you
actually
have
the
number
in
binary.
It's
the
same
exact
premise:.
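The contiguous-block dispatch and the ASCII analogy can be sketched together (editor's illustration; the BASE value and the element-type layout are hypothetical, not allocated values from any draft):

```python
# Sketch: if a block of typed-array tags were allocated contiguously at
# some BASE, a decoder can replace a long if/else chain with one
# subtraction and a table lookup, exactly like the ASCII digit trick.
BASE = 0x100  # hypothetical base for the block, not an allocated number

# Hypothetical layout: index position encodes element kind and width.
ELEMENT_TYPES = [f"uint{w}" for w in (8, 16, 32, 64)] + \
                [f"sint{w}" for w in (8, 16, 32, 64)]

def dispatch(tag: int) -> str:
    idx = tag - BASE                      # one subtract...
    if not 0 <= idx < len(ELEMENT_TYPES):
        raise ValueError("tag outside the block")
    return ELEMENT_TYPES[idx]             # ...then a jump-table lookup

assert dispatch(BASE) == "uint8"
assert dispatch(BASE + 5) == "sint16"
assert ord("7") - 48 == 7  # the ASCII 0x30..0x39 premise, same idea
```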
F
So he needed a way to represent time based on a count of seconds, and the people involved wanted to have tags, so they really can see in the serialized data that this is a time, and not just any array of data or something like that. So they wrote up a document, and I helped them a little bit with completing it, and they got an FCFS registration.
F
Now
we
could
document
this
tag
if
we
think
it
is
useful.
We
it's
not
only
useful
for
microseconds,
also
be
used
for
nanoseconds,
microseconds
or
milliseconds.
So
all
the
cases
when
you
don't
want
to
do
the
computation
of
converting
one
of
these
pairs
of
seconds
and
a
subunit
of
seconds
into
a
single
number,
it's
useful
to
to
have
this
kind
of
a
check.
So
it's
pretty
general
in
its
usage
and
we
could
also
go
ahead
and
add
more
of
what
what
was
in
the
original
proposal
for
the
time
check
to
it.
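The seconds-plus-subseconds point can be sketched like this (editor's illustration; the actual tag 1001 structure is defined in the time-tag draft and is not reproduced here):

```python
from fractions import Fraction

# Sketch: keeping (seconds, subseconds) as a pair avoids the lossy step of
# folding them into one binary float; scale is 3 for milliseconds, 6 for
# microseconds, 9 for nanoseconds.
def as_exact_time(seconds: int, subsecond: int, scale: int) -> Fraction:
    return Fraction(seconds) + Fraction(subsecond, 10 ** scale)

t = as_exact_time(1_510_000_000, 123_456_789, 9)   # seconds + nanoseconds
assert t == Fraction(1_510_000_000_123_456_789, 10 ** 9)
# A 64-bit float cannot hold that many significant digits exactly:
assert float(t) != t
```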
F
Yes, it would come through the working group... should this just be run through the working group, so that the working group will have to have an opinion on it? Or should I wait for the working group to complete its current charter and then work on making this a working group document, or on including it in a set of useful tags, but have the working group work on it? I...
K
I think I mostly have an opinion on whether you should complete other work items first before accepting this, but other than that, you know, I think it's a very fair question if you want to poll the room to see how many people are interested in working on this.
H
Okay, Sean Leonard. So I think this is fine as a working group item, with Alexey's caveat that we should work first on the things that we are chartered for. An interesting bookkeeping question, which I believe is completely bikeshedding, is: do you want to allocate a block of CBOR tags, for bookkeeping purposes, just for time stuff? So we just say 1001 to 1100, or whatever, is all the time tags will ever need, and then...
H
Three tags, or, like, three? Okay, okay, but I'm saying, like, you know, do we just reserve 300 tags, or just say, oh, somebody's coming up with some weird time format? Or do we have a block of tags for Haskell things, right? So it's like programming-language specific: all the Perl stuff is in 3000 to 4000, and the Haskell... so I'll support that, but I don't know. I don't know.
H
I
can
include
something
along
that
in
the
dock
in
the
document
philosophy
of
tagging,
if
that's
important
to
people
I
just
see
this
as
there's
an
opportunity
to
look
at
what
are
the
underlying
things
that
motivate
sea
bores.
You
know
advantages
over
other
serialization
formats
and
one
of
those
is
efficiency
of
the
encoders
and
decoders
I
believe
that's
explicitly
mentioned
in
1749
as
the
motivating
case.
So
to
the
extent
that
we
are
trying
to
support
IOT
devices
resource
constrained
devices,
you
know
devices
that
want
to
just
work
very
fast
on
data.
H
Hopefully,
that
will
inform
whether
we
have
a
large
number
of
time
tags
or
whatever,
to
make
the
encoding
and
decoding
of
these
things
faster
for
devices
and
easier
for
devices
than
and
protocols
that
actually
need
to
do
them
so
yeah
with
that
said,
if
we
delve
any
more
into
time,
tags
I
really
want
some
subject
matter
expert
on
this
thing,
because
I'm
not
an
expert
in
time
we
got
an
NTP
working
group.
There's
gotta
be
some
people
in
there
who
know
all
about
this
time
stuff
much
better
than
me.
Yeah.
A
Does anybody think that we ought to do a ton more work on CDDL before we publish rev 0? Any hands of people who think that we ought to add a bunch more stuff before we publish? There's a quizzical look, I see. I'm going to ask the opposite question: do people think that we're pretty close, that we should go into polish mode and just get the thing out the door? Raise your hands.
A
I do want to point out: one of the other ADs asked me about using CDDL for JSON, and he hadn't seen Appendix E of the current CDDL spec. So, Alexey, if you could just keep your eyes open for people questioning whether they're allowed to use CDDL for JSON, whatever "allowed" means: there is that appendix in the current CDDL draft.
A
You know, no, no. What I'm saying is that I think we believe that Appendix E is enough to motivate using CDDL for describing JSON protocols. If the IESG wants something more than that appendix, say something that feels a little bit more normative, or a separate doc that describes how to do it, or anything like that, we would like them to speak up immediately and give us a little bit more direction. Without hearing anything like that, we're going to move ahead assuming that Appendix E is roughly exactly what's needed.
A
Alright, anybody have any other concerns? If we're gonna put CDDL on this short timeframe, which it sounds like we really ought to at this point: does anybody else have any other concerns, or potential roadblocks, things that we ought to aggressively pull out in order to fix things? Like the regular-expression thing; I would maybe put that on the bubble if we don't have a normative reference to point to for regular expressions. Are you okay with that, Sean?
H
Sean Leonard. So, yeah, my whole thing is: I don't want us to be committed to this whole bucketload of features that we may want to change, you know, or improve or whatever, later on. So, you know, I am resigned to the fact that we've got to publish something, right, and we've got to keep people's work going; there's not really a question about that. But we also need to be clear it's version zero and we're trying to, you know, add these useful things. I mean, all that... I dunno.
A
I'm gonna have to basically start that as soon as we get the first one in the can, yeah. And so the mode here should be: if we can't come to consensus quickly on exactly what it ought to look like, or if we don't have a reference for the thing, for whatever reason, we'll rip the feature out, and we'll put it back in later, right.
A
So all I'm saying is that if we can't come to consensus on these things quickly... Now that we understand sort of what polish mode would mean, I'm also okay with somebody going up to the mic and saying, you know, we're not quite ready for that. I'm not seeing anybody running to the mic, all right, so this is just: hey, we need to be able to finish.
H
Okay, no problem, yeah. 'Cause I am of the view, the opinion, that it's more about description than about validation. I also recognize people will wanna validate things, but the point is to describe, and we want to emphasize that. One way to do that is to put some focus on the fact that if you have data that doesn't conform to the CDDL, it's not necessarily a fatal error. Okay, this is not XML land; it's more like Markdown land. If it doesn't quite fit, that's okay, still work with it.
A
Right, let me go back to what I was talking about with CDDL. So let's make sure that anything that we don't have pretty good consensus on, or anything that we don't have good references for, or anything that's not done, we can start ripping that stuff out and get a version zero out the door, so that other people can refer to it. That should be a pretty high priority for us over the next several weeks. And I see some nodding heads; I don't see anybody shaking their head.
L
Going ahead, jumping up to the mic: Dave Robin. I was just gonna say, with the exception of regex, which gets into the whole philosophical question about what it's used for, whether it can be used for generating validators: I think if you take that out and you look at the rest of it, I can easily make validator code out of it. It's fairly strict. I mean, we had some matching questions earlier that we talked about, but for the most part I can automate a validator from that.
A
All right, well, maybe that's a quick way out of this for the short term: it would be to remove that regex control. Is anybody else gonna have heartburn from that? All right, so we're going to talk about that on the list and see if anybody has an issue with it. That would be a way for us to move forward, get the thing out the door, and we can add regex back in, in version 1 or version 2 or whatever we're going to call it. All right.