From YouTube: IETF104-CORE-20190329-0900
Description
CORE meeting session at IETF104
2019/03/29 0900
https://datatracker.ietf.org/meeting/104/proceedings/
C: I guess you were encouraged to come to the front, since the lighting isn't up to snuff this year. Anyway, let me quickly introduce the topic. We have had a second proposal for a congestion control algorithm for CoAP, for advanced congestion control, and we looked at that, and we generally liked it.
C: But then we got stuck on the IPR declaration, which is on the datatracker as IPR declaration 3227, and people looked at that and said there's not enough information, so we stopped discussing it at IETF 103. Since then, the statement has been updated, and one of us alerted the mailing list to that fact. So the declaration has been updated with a statement about the assertion of that patent claim, and the claim owner chose to make the patent available for essential parts of an IETF standard.
C: So it's not for informational documents; it's just for standards-track documents, and anything that is needed to do this standards-track document. So far we have considered handling congestion control algorithms as informational, but I think it's not particularly hard to go for standards track. And, as is customary for this kind of IPR declaration, there is a reciprocity clause, so that the license expires the moment you sue the patent holder on some other patent claim.
C: So it's your classical defensive patent construction. We like unencumbered stuff, but the IETF often has been able to work with technology that had this kind of strings attached to it. The procedures say that it's the job of the working group to decide whether we can continue pursuing this, and I don't want to decide it today.
C: I want to get information on whether the working group now thinks it has sufficient information to make the decision. So, is there anything missing to make the decision, or do we have all the information we need now? I realize congestion control already is a fringe issue, and people who understand patents are, by definition, fringe people, so maybe there is only a small number of people in this room who even know what this is about.
E: I'm just wondering how we are able to get enough interest among the people in this working group, given that this is primarily a mechanism in, essentially, congestion control, where the expertise is basically in the transport area. So, in my opinion, there should have been more coordination with the transport area, with CoCoA for example as well, because it is basically in the same area. This is just my slight concern: whether there is enough interest and energy in this working group to do the work for this particular document.
C: For CoCoA, Alexey even delegated the shepherding to the transport AD, and I would expect he would do the same thing again when we finish this. Okay, so a working group adoption call on the list is next. Normally I would ask the room whether we should do an adoption, but I think you might need some time to weigh this IPR issue, so I think this is better done on the list.
I: Yeah, it's a very brief update. We are moving a bit forward on the use of GitHub, and I just wanted to summarize what other groups are doing, to see where CoRE would like to go, basically. One working group, for instance, is using GitHub for issue tracking instead of the mailing list, but basically the author or the editor needs to curate the list and choose, before every IETF meeting, what to present and what not to present, and so on; so they are not using any tagging or anything like that. In CoRE we do have some preliminary tagging, but it's not like we have a mandatory, well-structured list of tags, and then again we also don't have specific "discuss" tags for things to be discussed with the working group, or any kind of milestones, anything like that. But maybe we are a step ahead on that. And then HTTPbis, for instance, is a bit ahead of us, in the sense that they have a "discuss" tag, they have tags based on topics, they use milestones, and essentially they work on everything there. So I would like to propose:
I: Okay, so, to basically track the document life, from the very beginning as an individual submission, to working group draft, to RFC, everything on GitHub. For those who really like email, we could also have email batches; it can be easily configured to send email batches every couple of weeks, or every week, or whenever you want. We could also, if people are happy with that, assign issues to specific experts that really know a particular field, but that means that they would have to check every now and then.
I: We could also have milestones: they could be based either on the draft lifetime, you know, when we do working group adoption or working group last call, or on specific features or sections. And I think the more important one would be to start using tagging. For instance, we could have a "discuss" tag if we want to discuss something with the working group, and other tags, so some of the stuff we wouldn't even need to discuss, because it's editorial, or handled some other way of working. And then, lastly:
I: This is not supported by the IETF at the moment, but it would maybe make sense to consider having our own IETF GitLab servers that we can depend on, not GitHub only. But yeah, basically, and then I'm finished: we will have some trial with some individual submissions and perhaps also with one already-adopted working group document. I'm looking for guinea pigs.
J: My comment is on the discussion point. One of the things that Martin talks about in the "Using GitHub in your favorite working group" draft is that the working group has to decide where it wants sort of active debate to occur: whether active debate should occur on the mailing list or in the GitHub issue itself. Either one can be done; the working group decides. A number of working groups choose to say that active,
J: you know, debate should happen on the mailing list and a summary should happen in the GitHub issue. That's actually the policy of TEEP, and we need to be better at actually enforcing that, because I just sent out emails to the TEEP list this morning, basically saying that on one of the issues I started seeing some discussion in GitHub. So I think that's a job for this working group, to specifically decide: if you're going to be using GitHub, where do you want arbitrary responses?
J: I presume your question is from an OCF perspective. I think it probably wouldn't make a significant difference to OCF. I can say that OCF doesn't currently use GitHub for issue discussion. There's now a split of OCF from IoTivity, right, and I'll give you two different answers there, just because I'm at the mic. OCF uses mailing lists, and it uses its own document repository system, neither of which is GitHub.
J: It may use GitHub for, you know, data models or something like that, but not for the actual discussion. IoTivity uses the mailing list and GitHub, and it does issue tracking on the mailing list. It doesn't have a good issue-tracking history, I would say, but it does have GitHub issues, and the active discussion, to the extent that there is active discussion, is on the mailing list, and I think you're on that list.
J: I'm with ANIMA; we have also been using GitHub for quite some time. The transition was kind of painful at the beginning, because maybe people didn't use it. We can use both at the moment for issue tracking; basically even drafting the specification is done on GitHub. Some of the other discussions are on the mailing list, but less and less, in my opinion.
K: It's not necessarily a trivial thing. Being an editor of some documents right now, I have been using the tagging system, and I've actually discovered that it's a good idea to use GitHub for more than just issue tracking: it's a code repository, so we've got continuous integration going on. At the same time, GitHub also offers different kinds of insights, where you can actually see the activity on different drafts, and it's difficult to know where to draw the line.
I: I thought the reason we would like to do the trial (these were already discussed with Christian) is, sorry, we want it to be a trial: try it out, see what the outcome is, see if it is actually feasible. Maybe I can compile some document for it later on, but it's just experimentation at the moment; it isn't like we are going to switch everything right now. Okay, but at least to try.
F: I don't always know that I need to go and follow a new project whenever it starts out, so, therefore, I don't ever find out that discussions are occurring, if the discussions are occurring on GitHub. I mean, if it is one big project and everything is in the same tracker, then I only have to do it once, and it is not that hard to follow. If, on the other hand, you have a different project for each document, that means that all of a sudden I don't actually see discussions, and that always worries me. However,
I: if we had every document in one project, the issue tracker would be insane; it would be a mess of comments from any draft on any issue. Also, regarding the storage: some organizations have their own GitLab, and there you can apply whatever policy you want. That's another plus for using our own IETF GitLab, if it existed.
D: Sorry to interrupt. The HTTPbis working group has handled this by having a separate mailing list set up, and a script on GitHub that sends all of this to that mailing list. It's not on the HTTPbis mailing list directly, so it doesn't flood it, but you can always subscribe to this other mailing list, or look at its archives. That takes care of the archive problem, because it's all archived in the IETF, and it takes care of the "I've got to go to fifteen different projects to go figure out what's going on" problem.
L: Sorry, Brendan Moran here; I realize I'm still a bit new here, but this does raise the question of accessibility. GitHub is not, and has not always been, available to everyone, and that seems to be one of the core things about the IETF: making it available to everyone. And GitHub isn't; sometimes it is, but not always, and we need to be cognizant of that.
C: Okay. We don't really want to second-guess the work that is going on in the GIT working group, and we might as well wait until they have produced more of a result. But the objective here was that maybe we should also start our own small, controlled experiment on this, to gain some experience, so we can make a good decision. So this is not about completely swapping the model in which we work, but setting up an experiment, and I think your next slide said something about trying this with an individual submission.
C: I was actually going to propose to try this with Resource Directory, because that's now in working group last call. In the working group last call phase there will actually be a lot of technical issue management, which is actually done quite well on GitHub, so I would expect that handling the last call comments on GitHub would be easier that way. So that would be my suggestion, but I'm certainly interested in hearing what the main editors think about that.
J: That was Christian; I'm Dave Thaler. To respond about the GIT working group: I was there, and my understanding is you're not going to see a result from that working group that says "you must do X". The working group has a bunch of choices, and so I just want to clarify that you shouldn't wait for some answer to come down; rather, it's just "here's a cookbook", and your working group chooses which of these options work for you.
G: I just wanted to make a down-to-earth remark. I already have problems explaining the phases through which a document goes with the normal IETF procedure. If you're going to add, in the GitHub flow, even more states, then it becomes very difficult to explain what's going on. So, I mean, you should be very careful that all this tagging follows what goes on in the standard IETF process, and that all the others are just alternatives, and should be used such that it is not confusing for other people. Yeah.
C: I think it would be nice to have some of the HTTPbis tooling, which sends summaries of the GitHub activity to the list every Sunday. So maybe that's something we should procure, to make sure that there is no disconnect there. But I'm also hearing that people would be quite content with having a small, limited experiment on the Resource Directory completion process now, and then we can evaluate that and see whether we widen this experiment.
B: So what we worked out there is that we'll have the response wait until there is something published. For CoAP GET and observe, that's not really a problem: you get an ACK, and then, you know, the response doesn't have to come back right away. There will possibly be some timeout, but we'll have the application layer work that out; the publisher can always publish an empty payload there. The problem is that pub/sub only transmits payloads, only representations if you will, plus a content format, so it doesn't really know how to construct an empty payload itself.
B: It's up to the publisher or the topic creator to do that. So we're going to keep it simple, write that up, and depend on things like "accept anything" and "nothing to see here", or multipart content formats, to handle the case more gracefully. But even with SenML you can just send a bracket-bracket, an empty array, as the payload, and the subscriber can understand that there's nothing there yet. So I don't think there's any issue that we really need to deal with here.
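The "empty SenML pack" convention described above can be sketched in a few lines; this is a minimal illustration of the idea, not tied to any particular CoAP or pub/sub library:

```python
import json

def make_empty_publication() -> bytes:
    """Publisher side: an empty SenML pack is just an empty JSON array."""
    return json.dumps([]).encode()  # serializes to b"[]"

def topic_has_data(payload: bytes) -> bool:
    """Subscriber side: treat an empty SenML pack as 'nothing published yet'."""
    return len(json.loads(payload.decode())) > 0
```

A subscriber receiving `b"[]"` would simply conclude that no value has been published to the topic yet, without the broker needing any special signalling.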
B: Other than writing this up this way for empty topics, right, I guess we can probably take questions after I get through explaining all of these; there's no one at the queue now anyway. Are we ready? Okay, next slide, please. Now, the lifetime of the topic. The idea is that we would create a query parameter, topic lifetime, TLT, and you can supply that on creation; if you don't supply it, the topic just lives until you remove it.
B: A publish would refresh the counter to that value. Of course, when you do the re-create operation, you can supply a different lifetime value if you want, and then, when the counter reaches zero, the topic is removed; that's basically the idea. Right, next slide, please. Okay, data lifetime: we just wanted to handle that with Max-Age, which is already in place, and the default would be to just have the pub/sub broker not return a Max-Age option, implying the 60-second default Max-Age on all responses. That's not ideal for all situations, but it's a useful default.
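The topic-lifetime behaviour described here can be sketched as broker-side logic; this is a hypothetical model for illustration only, and `tlt` refers to the proposed (not yet standardized) topic-lifetime query parameter:

```python
import time

class Topic:
    """Minimal model of a broker-side topic with the proposed lifetime counter."""

    def __init__(self, tlt=None, now=None):
        # tlt is the hypothetical 'topic lifetime' (TLT) in seconds;
        # None means the topic lives until it is explicitly removed.
        self.tlt = tlt
        self.last_refresh = time.monotonic() if now is None else now

    def publish(self, tlt=None, now=None):
        """A publish (or re-create) refreshes the counter; re-create may change TLT."""
        if tlt is not None:
            self.tlt = tlt
        self.last_refresh = time.monotonic() if now is None else now

    def expired(self, now=None):
        """True once the counter has run out; the broker would then remove the topic."""
        if self.tlt is None:
            return False
        now = time.monotonic() if now is None else now
        return now - self.last_refresh >= self.tlt
```

A topic created with `tlt=10` thus survives indefinitely as long as something publishes to it at least every 10 seconds.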
B: We also wanted to allow publishers to have a Max-Age that is different from 60 seconds; so, if you wanted long-lived values, you could use this. Also, there's some talk of creating a header option that can be used by push notifications to indicate the lifetime of the data they're sending, so that would be an option as well. But I think, well, I'll show how we concluded later. Next slide, please. Yeah, so, lifetime of topic contents: that's basically what I just said.
B: With a Max-Age of zero, the cache can now use the value but not reuse it, I believe; that's the way it works. So if we have a subscriber ending up in that same situation, the same behavior would seem to be reasonable. Say it's a client library: the client could, for example, signal the Max-Age of zero and ask the application not to reuse the value, and then a cache that subscribes to a broker could just use Max-Age the way it's used to.

B: Okay, so here's what we concluded; next slide. The proposed profile: I was going to write this stuff up in the draft, but there was enough here that I didn't get around to it, so I wanted to just present what I'd want to put in. The idea is that the first option for empty topics is that the broker won't respond, and topic creators are responsible for publishing empty representations, and then we'll give some examples. So this is what I planned to write into the draft.
B: The slide for topic lifetime and data lifetime: we'll add the two query options. The default, if you don't supply DLT, is just to respond without the Max-Age option, which makes the default on all responses 60 seconds. If DLT is included, then the publications are sent with the DLT as the Max-Age option, and when you read or subscribe some time after a thing has been published with a DLT, you get a Max-Age equal to the DLT minus the elapsed time, basically as described in RFC 7252.
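The Max-Age arithmetic just described can be written down explicitly. Note that DLT is a hypothetical query parameter from this proposal, while the decrement-by-elapsed-time rule and the 60-second default follow the Max-Age semantics of RFC 7252:

```python
DEFAULT_MAX_AGE = 60  # RFC 7252: Max-Age defaults to 60 s when the option is absent

def max_age_for_read(dlt, elapsed):
    """
    Max-Age the broker would report when a value published with a data
    lifetime of `dlt` seconds is read `elapsed` seconds later.
    Returns None when no DLT was supplied: the broker then omits the
    Max-Age option, and the RFC 7252 default of 60 seconds applies.
    """
    if dlt is None:
        return None  # omit the option; client assumes DEFAULT_MAX_AGE
    return max(dlt - elapsed, 0)  # remaining freshness, never negative
```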
B: So, if the topic lifetime is included, the topic will be removed any time there is no publish activity, or, I guess, topic refresh, for a time equal to the topic lifetime, and when that happens, outstanding subscribers and new requesters will be sent a 4.04, resource not found.
O: We have been talking a lot about pub/sub topic configuration, and I think, if we have things like publishing an empty representation for a topic, it might make sense to be able to update this representation at a later date. So one idea could be to split the topic into two CoAP resources: one configuration resource, which exists for the whole lifetime, and then a data resource, which is what the publisher publishes to and the subscribers subscribe to.
O: No, number three... number two, sorry. The comment is on Max-Age: when a publisher hasn't published for some time, say the broker hasn't received a publication for 24 hours, do we really want to say "I can give you a representation now, but it's very likely that you will get an update in, like, zero seconds, so you shouldn't even try to cache this"? I think we need to distinguish between the data lifetime and the mechanism for keeping the observation alive, and not generate needless traffic.
N: I just think, on the topic of having those empty representations, that could be done a lot more smoothly. As I understand it, right now, when a topic is created, all the metadata is put in there, whether as part of the link that's created for the topic or as options for the creation, and I think that this could be a very suitable place.
N: This metadata could be a very suitable place to indicate that this topic has some kind of testament, last-will, tombstone, whatsoever representation that the publisher may use to satisfy observations on that resource. That would align nicely with metadata on resources that are not topics, because such metadata could just as well be expressed about any other resource. And when a client gets that, it could look up in the metadata that, hey, the data isn't there, but it has a default representation: "here's something I can pass on to you, through to the application."
C: As Klaus mentioned, there may be situations where having the broker not respond immediately to something may be more efficient; this form of long poll may actually be efficiency-improving. But this, of course, could be a situation where a topic has to be created and, two days later, the first data is published to it, and only then would you finally get the response to the request you sent in. So I think we have to consider whether that is an evolution of the architecture that we actually like.
C: Now, adding options also impacts the architecture, so, again, that's something that we need to think about, but it might be useful here. And the third observation was already made: that maybe having a control resource and a data resource might be a good thing. If I were designing this from scratch, then of course I would make a control resource that is actually multipart, with both some control structure and, potentially, optionally, the data.
N: So, as I understand it, currently the kind-of control resource is implemented as just using a different content format. I think that doing something like that has given us a bit of trouble in the Resource Directory as well, so having separate resources for metadata and the actual data makes sense, and it also frees up pub/sub topics to actually transport link-format or similar data. Because right now, getting link data on the topic means getting the metadata, whereas with the split you can have actual link-format data on the topic.
H: There's a subscribe interface and a read interface, and the read interface is the classic use case you are referring to, so that's still possible. And, in general, the theme of this update was that we're simplifying a lot of things. So you could use a CoAP client roughly as such to interact with a pub/sub broker, and then, when you do more advanced things like lifetimes and such, you'll have to follow somewhat more advanced procedures, but the basic functionality would be very simple for any vanilla CoAP client.
H: Yes, so we've mostly been talking about that interface, which is heavily based on observe. There is also the basic read interface, which is essentially the same except for one bit: you don't enable observe. But the observe part is slightly more complicated because of these data lifetime issues.
C: Okay, I think we have had some great discussion, but not really a conclusion yet, and I think the pub/sub authors can use this for generating a next version and proposing a solution to those problems, and then the working group can decide whether we're done with this. We are about ten minutes behind schedule.
K: Okay, hello. So, dynlink. Yeah, there's not very much to mention. We actually managed to finish a fairly big chunk of work already, and then, at the end of February, we had a flurry of activity when we had a joint call with the OMA LwM2M folks, and the current draft reflects that. So there were clarifications that were done; we restructured the draft so that we have conditional notification attributes that can be used with general observe requests, and that introduces the whole document.
K: I don't really know whether there was anybody in this room from that meeting, because we were also discussing some alignment with the LwM2M documents regarding the description of the different attributes. The consensus from that meeting, I believe, if I remember correctly, was that we'll wait for some information from the LwM2M folks on whether they would like to contribute some text to the document. So that's basically the major part of it, and then the last part was basically about the binding table.
K: So we changed the binding table description so that we have a new attribute value instead of an interface description, and that led to a small confusion on how we actually do the binding table, because we were not in favor of keeping the POST operation. This is the old example, in draft -07, which was about a resource collection called "bnd" which you can manipulate using POST, PUT or DELETE. But we're still looking at this a little bit, so I think we will get this done fairly soon.
K: So right now you discover the entry point to the binding table, and then we just have GET and PUT; we'll have to think about how we do PATCH and FETCH quite soon. And that's basically it. The work on the conditional notifications is completed, link bindings are completed, so what remains is just the partial changes that are needed for the binding table, and maybe we get something from the LwM2M folks, and then we're ready. Oh yeah, that's the last slide from me.
N: So there was a comment on which binding types there are, and that this should probably not be written out as a comprehensive list, but be open to scenarios that can fit easily with what's written there; the list as it stands is a bit too short, or maybe should just go away. But there's a comment on the mailing list, at least; make sure to follow up on it. Yeah.
K: I think that it should have already been ready by now; the binding table just needs a bit of work, so, discussions permitting, we should get this done in maybe the next two or three weeks, and then I think it should be... well, of course, we've got to wait for the LwM2M folks, in case they want to contribute some text; they were supposed to contribute to a GitHub issue.
C: So, one of the first drafts that we actually did in this working group, like 2012, was core-interfaces, and that was an early draft that consolidated our ideas about how this was going to work quite well. It has actually been an influential draft, because it has been taken over by other SDOs, who then adapted it; they didn't just do what we wrote.
C: They did something better, changed it. So right now there's actually no literal adoption of this document, since there were a few recommendations that kind of have been overtaken by events. We looked at it and decided there is some useful text in there, and it actually turns out that, in particular, the text on collections would fit into a series of small documents that the research group is about to generate.
C: So this is non-normative text that just describes one way of doing things, and that would fit with the RESTful design document as a parent document. And actually, if you look at the charter, the charter mentions this document as one place where there will be interaction with the research group. So the interaction in this case would be: push it over the wall and ask the research group to reuse the good parts.
K: As one of the people who have been working on core-interfaces, I think this is a good idea. There are basically two paths. I mean, it's a very well-written document; it's very small and it's easy to understand. I believe that at least some of the ideas would fit very well with the Thing-to-Thing Research Group, so that's a good thing. And then the other part is probably, yeah, like you say, that the SDOs have overtaken it, so yeah.
N: Okay, hello. I'll try to make it brief. So, Resource Directory is, as of draft -20, in working group last call, since last Wednesday. We've started to receive comments during the working group last call, one of which is an update to the security considerations from Klaus that should not pose much difficulty; it's just to help explain, basically to give the reader a better impression of what an example of such policies would be.
N: Ted Lemon pointed out an outdated reference where we are describing how all this might interact with RD-DNS-SD; we gave an example based on a probably two-year-old version of RD-DNS-SD, which doesn't fully fit with DNS-SD. So that example would probably just go out; it was one way of discovering the RD that is not fully fleshed out, and it could still be added back if the DNS-SD folks think that this is worthwhile.
N: So, just to summarize: we have had three interop events by now, and I think I've missed two implementations that are work in progress with the last interop. The sequence of events shows that we found ambiguities and problems that arise from doing it in practice with the first interops, and with the third we've already gone on to experimenting with what more we can do, what extensions we can use, for example using the CoRAL format to express the content of the Resource Directory, which Klaus will later follow up on.
N: We have shown in the interop that we can have a registrant register with link format at the Resource Directory, and the Resource Directory would then expose the information in CoRAL to someone doing a lookup, trying to find resources and find out metadata. And that did work, although, of course, the format of how precisely we do it will still change. But yeah, that all worked.
K: Thanks for the good draft; it clarified a lot of things, and also thanks for the good discussions we had, and the hallway meetings that uncovered some of the example problems. But that's okay. I want to clarify one thing. We were discussing this also with Christian, so I just wanted to raise the issue that there is some text in Resource Directory that says that currently the cardinality of the registration's base attribute is one, and that in future you might have multiple base values, and then you refer to protocol negotiation. I want to clarify that I don't really mind if you keep that text in, but there's a very high chance that protocol negotiation is not going to have multiple base values. So that's what I wanted to say: you could have multiple base values, but the semantics of what we are trying to do are a bit different for base.
N: Fate sharing and tolerance of failure, and I think there was a third one that I just don't remember off the top of my head, because it is probably a kind of overlap area, but I'll make sure that the document will outline why it is done in precisely this way.
N: This is actually the thing to do. There is a GitHub repository now in which we are kind of pooling minutes, pooling ideas. I should probably have put the link in there; I'll follow up with it on the mailing list. So, if you're interested in protocol negotiation and CoAP over alternative transports, please let us know, please watch that repository, because this is something that will eventually become relevant to CoRE, but may in the meantime be processed in T2TRG.
O: I would like to talk about CoRAL. Over in T2TRG, we have been working for some time on the Constrained RESTful Application Language, CoRAL. There are now a bunch of internet drafts related to that, the first two being the Constrained Resource Identifiers and the Constrained RESTful Application Language itself; these are now coming out of being a research topic and are getting some interest in being used in CoRE applications.
O: You can see one example here. It's a URI that you would normally write as coap://example.org and so on, and a CRI takes the components of the URI apart and then puts them into a CBOR array, where each component has an option identifier in front of it; you can see the CDDL notation here. The idea is that we want to allow constrained devices to perform URI processing in a small amount of code, covering all corner cases and doing that correctly.
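The component-array idea can be sketched as follows. This is only an illustration of the concept: the numeric identifiers below are made up for the example, and the actual CRI draft defines its own CBOR encoding and numbering:

```python
from urllib.parse import urlsplit

# Hypothetical component identifiers, for illustration only;
# the CRI draft specifies its own values.
SCHEME, HOST, PORT, PATH, QUERY = 1, 2, 3, 4, 5

def uri_to_components(uri):
    """Take a URI apart into (identifier, value) pairs, ready to be CBOR-encoded."""
    parts = urlsplit(uri)
    out = [(SCHEME, parts.scheme), (HOST, parts.hostname)]
    if parts.port is not None:
        out.append((PORT, parts.port))
    for segment in parts.path.split("/")[1:]:  # one entry per path segment
        out.append((PATH, segment))
    if parts.query:
        out.append((QUERY, parts.query))
    return out
```

A constrained device can then resolve references or compare URIs by walking this array, instead of re-implementing full textual URI parsing.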
O: Then, CoRAL itself: if you're familiar with link format, it's basically link format on steroids. It has a data and interaction model for building applications where machines can navigate between resources by following links, which we already have with link format, but it can also describe operations on resources by submitting forms. So, essentially, a CoRAL document tells you, if you have a CoRAL representation of a resource: what is that resource, what can you do with the resource, and how does the resource relate to other resources?
O: It also fixes a bunch of problems that we have in link format. For example, link format has very weird rules for generating the link anchor, and link-format attributes are not very extensible. CoRAL comes with two serialization formats. The primary format is again based on CBOR, so it's suitable for constrained devices: a typical CoRAL document can be expressed in a small number of bytes, and can also be processed by a constrained device without taking a lot of additional RAM. But because it's so compact, it's very hard to read for humans.
O
We briefly looked into the conversion between CoRAL and RDF and link format, and discussed the concept of forms. We also had a bit of high-level discussion on whether the real hypermedia applications envisioned by using CoRAL are actually feasible, and had some discussions related to possible working group adoption. And so, you know, I have very quickly a few examples of how the text format looks; you have to imagine that those can be expressed in CBOR very compactly.
O
C
Okay. So we will report this as in-room consensus and verify the consensus on the mailing list, but not do a regular adoption call; we'll just verify the consensus we already reached here. Thank you. So the final point on the agenda is new work that Jintao has started on speedy block-wise transfer.
P
This is about a speedy CoAP block-wise transfer. The problem being addressed: in the current spec, the client continuously sends requests to the server, using the block option to specify the exact segment that is expected each time. That means we do a lot of round trips. Such a design was a reasonable choice, since the server can be implemented to be truly stateless and lightweight, but is there some need in some scenarios to speed this up?
P
For example, during a firmware update, a large object, a large file, is going to be downloaded to the sensor, or the device is going to conduct a mission-critical conversation with some server. And the other case is that there are cases where the server is actually more capable than the original CoAP assumption, which means it can be more stateful in handling these.
P
So what we are actually proposing is a speed-up block option, called S here. In the speed-up option the client can specify the speedy window size, for example five here; then the server can send the five segments to the client in one conversation, as replies. And this is a little bit more tricky, because the client basically knows its requirements and its limitations, or its constraints and requirements. So, for example, here the client says: I can receive...
P
I can receive five packets in one window, but actually the transfer is divided into more than five segments. So the server will just send five segments to the client, and for each five segments, each speedy window, the client needs to reply with an ACK, and then the server can send more. This can, you know, speed up the conversation as well as keep the client a constrained device.
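The trade-off described here can be sketched as a simple counting model; this is illustrative only, not the draft's wire format, and it assumes an ideal network with no losses or retransmissions:

```python
import math

# Classic CoAP block-wise transfer: one request/response exchange per
# block, so one round trip per block.
def round_trips_classic(num_blocks):
    return num_blocks

# Windowed variant sketched in the draft: the server pushes a whole
# window of blocks, and the client sends one ACK per window.
def round_trips_windowed(num_blocks, window):
    return math.ceil(num_blocks / window)

blocks = 100
print(round_trips_classic(blocks))       # 100 exchanges
print(round_trips_windowed(blocks, 5))   # 20 exchanges
```

With a window of five, a 100-block firmware image needs roughly a fifth of the round trips, which is exactly the saving the presenter is after; the cost is the per-transfer state the server now has to keep.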
P
There are some more details in the draft, and I think there are more questions to be answered, for example how to design this. Probably this is not the best way, but I think this is one of the possible ways, and this is just a very small fix: we can have a Block-S, a speedy block option, here, so that the client can inform the server about the speedy window it would like to use. Yeah, that's it; this is, anyway, a small fix.
N
I have two things that I want to point you at. I see the use cases; I've run into them myself occasionally. But I think this could profit from looking into two areas here. First is using NSTART greater than one, which is something that was, I think, always envisioned to be possible but not really actively worked on recently. That would kind of simplify the model quite a bit, by just sending several requests without waiting for the blocks to come in.
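That alternative fits the same counting model; the sketch below is illustrative only (ideal network, every outstanding request completes within one round trip) and is not taken from any CoAP specification:

```python
import math

# Toy model: with NSTART outstanding block requests pipelined in
# parallel, roughly NSTART blocks complete per round trip.
def round_trips_nstart(num_blocks, nstart):
    return math.ceil(num_blocks / nstart)

print(round_trips_nstart(100, 1))  # 100 -- NSTART=1, the classic behaviour
print(round_trips_nstart(100, 4))  # 25  -- four requests in flight
```

The appeal of this route is that it reuses the existing per-block request/response model unchanged: the server stays stateless, and only the client's concurrency limit is negotiated upward.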
N
That would not give you all the efficiency that you are envisioning here, but it might give sufficient speed-ups without making the model any more complicated, sort of negotiating that an NSTART greater than one is okay. And if it then turns out that this is still not enough, you may want to look into what is going on with non-traditional responses, because there comes in the whole topic of how the server will even know on which token to reply, etcetera. But my gut feeling is that it wouldn't need to come to this.
N
Q
P
I think we have done some very simple evaluation of this. I think it's very obvious because, for example, here in the original way you send four or five requests for five block items, but now you just send one, because there's only one RTT happening, one round trip. Yes.
Q
But there's also, I don't know if maybe you did that already, sending this to the transport area working group, to just see what the transport area working group says, because there's obviously a congestion control issue here. You picked the number five, but I could say I pick the number 1000, and so I'm very fast. I don't think it's that easy. Oh, I'm sure it's not that easy, but...
P
Now I think, because we have CoAP over TCP defined, already specified, right: if you are using TCP this is obvious, because we have a lot of blocks to send, and that's a case that this should handle; and TCP congestion control is so stable and can make the Internet very stable.
C
Okay, so I'm seeing some interest in this, and also seeing interest in exploring related approaches, for instance with an increased NSTART. So I think we should encourage you to continue work on this and maybe also explore the alternatives; work with the people who have these ideas in mind. We are looking forward to it.
C
Thank you very much for making the time this time. There are a lot of actions, and I probably should have mentioned, as has been said before, that we will have interim meetings: we have a large number of working groups that will be coordinating to run a weekly interim meeting, probably at late Wednesday European time, early Wednesday U.S. time, I think once the Easter break is over.