From YouTube: IETF114-CORE-20220726-1900
Description
CORE meeting session at IETF114
2022/07/26 1900
https://datatracker.ietf.org/meeting/114/proceedings/
B
And of course, we've seen people in the audience that have read the documents. In the channel today, let's make good use of our time to discuss open issues and progress the work in the working group. Blue sheets are collected automatically. If you attend on site, please do in fact access the meeting through Meetecho, in the live version.
B
The chairs will take care of looking at the chat. We have Evo and Matthias, who volunteered to take minutes; thank you very much. Anyone is welcome to help them, and Christian, thanks.
This is an official IETF meeting, so as usual the Note Well applies: we are being recorded, and this is not just about IPR, patents, and so on. It's also, and especially, about our conduct, so be nice and professional with each other. Also, for the participants on site, please make sure to keep the mask on all the time, also when you go to the floor mic to give a comment, ask a question, and so on. Thanks.
B
If you want to go to the mic to ask a question, please join the queue first on Meetecho lite, then go to the mic, and then leave the queue when you come back to your seat.
B
Okay, this is the agenda for today. It's pretty packed, so please do your best to stay on time, and the chairs will guide you through that. We start with the two related documents, href and CoRAL. Then we cover two documents related to group communication: Group Communication for CoAP as such, and multicast notifications. Then we have two documents related to OSCORE: the EDHOC profiling for OSCORE, and the key update for OSCORE. And then we have a number of non-working-group documents, meaning DNS over CoAP, performance measurement, and CoAP over GATT for BLE.
D
Yeah, I just wanted to point out that if we do have time at the end, we could briefly talk about something that came up during the ANIMA meeting on Monday: the coaps+jpy URI scheme. Let's see if we have time for that.
B
Okay, so let's get into the working group documents, with those updates from the chairs.
B
So since the Vienna meeting in March, we actually had three documents published as RFCs. Many congratulations to the authors, the working group, and everyone that helped out with that. They are in order, I think strictly in chronological order of publication. The Resource Directory was indeed an achievement. And YANG-CBOR, one of the four CORECONF cluster documents, is now RFC 9254. Congratulations again, Carsten.
D
Yeah, so I think the two of you weren't there when this work was started, at least on the Resource Directory and on YANG-CBOR, so maybe I should just quickly remind people what this was about. Excuse me, I need to reconfigure my audio; that should work better. The Resource Directory was started in 2011 from a contribution by Zach Shelby, out of the EU project that I already forget the name of, but that actually created the company Sensinode. And yeah, by Peter van der Stok, and then Zach came in and added RESTCONF for that, because Peter really wanted to do this with SNMP at first. Then there were discussions whether we would need to start a new working group for this; then we had the discussions about whether to do hashing to create the identifiers, and at some point Michel Veillette and Alexander Pelov joined and came up with something they called CoOL. So some of the people in the room are members of a Telegram list called CoOL; that's the origin of that Telegram list. And yeah, we got lots of help from Andy Bierman and Jürgen Schönwälder. We invented the differential coding.
D
Michel actually wrote code for pyang to support the SIDs, and in April 2016 we actually adopted the draft. Then YANG 1.1 was published, so our basis actually changed and we had to react to that. It took another four years until we finally made it to a working group last call in 2020, and then we had the pandemic disruption. So it took us to July to have a second working group last call, and February 2021 to have an IETF last call, and then we had to uncouple the drafts to get at least one to advance. In April YANG-CBOR was approved, and then it was just a little bit of AUTH48 before we finally got it published in July. So, just as a reminder of how these things work. If you want to congratulate anyone, do congratulate Peter and all the other people who made this stuff happen. It's been a long time.
B
Okay, moving forward with the update: we also have another document, not published yet, right now in the RFC editor queue. And this is indeed an achievement, considering what has happened in the last few months: following the Vienna meeting, during which the document was still dormant, it was resurrected in April for a rush to completing it due to the urgency in 3GPP, and the result was great, I think. Thanks again to all those who contributed to that.
B
And then in IESG processing we have another of the CORECONF cluster documents, core-sid. There was a resubmission earlier today of version 19, following discussions among the authors. Does Carsten want to say something?
D
Yeah, just: this document is officially out of the hands of the working group, but the feedback we got from the IESG was pretty substantial, so there will be substantial changes to the document, and it's important that the working group actually pays attention to this. Formally, this will probably take the form of a second IETF last call, but that's for the IESG to decide.
D
I don't know that, but yeah, please follow the GitHub repository, which, by the way, is the YANG-CBOR repository for historical reasons, and there are two pull requests in there right now. One is mainly editorial, but it's a long pull request, and the other one is very short, but it's technical. We want to merge these two pull requests, get feedback on this -19, and submit the -20 to finally lift the DISCUSS that is open here.
B
Yeah, we also have a number of documents in post-working-group last call, meaning the other two CORECONF documents, CoMI and yang-library, that just have to wait for core-sid to be completed before being brought back up to the surface to work on them. And then we have the "ripple score" document, which is currently in shepherd write-up.
B
We also wanted to mention other documents that are not on the agenda for today. Two have been recently adopted: attacks on CoAP, and transport indication, for which there was also a recent resubmission with a little update, so the version 1. And then we have another one pending working group adoption, the group communication proxy. Carsten, is it fine for you? You can take this one.
B
It was presented again at an interim meeting in May, basically requesting adoption.
D
Yeah, we made a resolution there, and I probably should repeat that resolution here, but I must admit I don't remember what it was.
B
Well, trying to remember: it means for me that there seemed to be interest and energy to work on it, but then it was up to the chairs to consider a call for adoption, which, yeah, since then hasn't started.
B
And we have a few more individual documents around. There are four that have been resubmitted with some updates, meaning: restoring cacheability of OSCORE-protected responses; using OSCORE also with proxies, possibly with multiple layers of protection; extensions for the Resource Directory; and everything on CoAP as a kitchen sink. We also have around two new submissions: CoRE parameterized content format, which was also presented at the recent interim meeting, and then a new CoAP option for early data, for use with TLS or DTLS 1.3, basically equivalent to the corresponding HTTP header. So we encourage you to read through these documents that couldn't make it onto today's agenda.
D
Okay, so this is mainly a report; we don't really have time to discuss this, and I think this is really the stuff of which interim meetings are made. So just as a reminder: CRIs are Constrained Resource Identifiers. They are the concise equivalent of URIs and URI references, and essentially what we did is we looked at URIs, extracted the generic data model that URIs have, and made a structured representation of that.
D
We are finding occasional little nits that we have to fix, but most of the work at the moment is on test vectors and implementations. We want to make sure that our various implementations actually work with the URI implementations that we find on our platforms, and it turns out that if you want to do something that uses the entire envelope of the URI data model, you are fighting with your platform. That's something that we have found out.
D
So there have been two PRs lately, which are on GitHub but not yet in the Internet-Draft. One is a short lamentation about percent-encoded text, because that really is a syntactic feature of URIs and doesn't fit at all with the semantic structure of CRIs, so they are best avoided.
D
But if you want to be able to write a translator that works for any URI, you have to include that. And the other observation was that we forgot to put in a rule to prefix a relative URI with dot-slash if the first path segment in the relative URI contains a colon, because that colon would make the whole thing look like it starts with a scheme. So you have to prefix it with dot-slash to make sure it's recognized as a relative URI.
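A minimal sketch of the dot-slash rule just described, assuming CRI path segments are held as a plain list of strings (the function name and representation are illustrative, not taken from the draft):

```python
def to_relative_reference(path_segments):
    """Serialize path segments as a relative URI reference.

    If the first segment contains a ':', prefix the result with './'
    so a parser does not mistake the text before the colon for a
    URI scheme.
    """
    ref = "/".join(path_segments)
    if path_segments and ":" in path_segments[0]:
        ref = "./" + ref
    return ref
```

For example, `["foo:bar", "x"]` serializes as `./foo:bar/x`, while `["a", "b"]` stays `a/b`.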
So that's on GitHub already. What is on GitHub as an issue right now is the question: what do we do with the URI "foo:"?
D
So the URI "foo:" could be a URI for the scheme foo that does not have an authority but has zero or more path segments. It could be "foo:/bar/", but here we have zero of these segments. So in this case the representation would be the scheme foo and a null, and the null can be taken away, so it really is just an array with a scheme.
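A sketch of that corner case only, under the "option one" reading just described; the list-based CRI representation here is illustrative:

```python
def cri_for_scheme_only(uri):
    """Build a CRI-like array for a URI of the form 'foo:'.

    Treat it as a scheme plus a zero-segment path: internally
    [scheme, None], and the trailing null carries no information,
    so it is elided, leaving just an array with the scheme.
    """
    scheme, sep, rest = uri.partition(":")
    if sep != ":" or rest != "":
        raise ValueError("only the scheme-only 'foo:' case is sketched here")
    cri = [scheme, None]
    while cri and cri[-1] is None:  # trailing nulls can be taken away
        cri.pop()
    return cri
```

So `cri_for_scheme_only("foo:")` yields `["foo"]`; the competing "zero-length opaque part" reading would need a different, distinguishable representation.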
D
So this fits with what you actually see. But it also could be "foo:" plus opaque text, something like a mailto URI, and that actually would have exactly the same URI syntax, but a different CRI representation.
D
So we need to decide which of these two we don't support, and this presenter is leaning towards saying we support the zero-segment path-based URI, but not the zero-length opaque URI. One reason for that is that I have never seen a zero-length opaque URI, and I don't know how to do the proper research to actually verify that. But I think it's relatively unlikely that somebody just wants to use a URI scheme and nothing after that, but allow opaque stuff after that as well.
D
So that's a decision where we could probably flip a coin, because the practical effect will be minimal, but for consistency I would go for number one.
D
So if anybody in the audience has ever seen such a URI and has an opinion on that, please send email to the list or speak up now, so we can get some help in making that decision.
C
Might as well do it myself. The slides: thus far I haven't been able to tell whether we're discussing URLs, URIs including URNs, or something else entirely. I can see: one obviously is a URL, that one's clear. Two looks to me a lot like a URN, but you're calling it opaque stuff, and URNs are not opaque. So I'd like to be absolutely clear whether things like URNs are in scope here.
D
Okay, the terminology in this space is completely confusing. There is IETF terminology and there is WHATWG terminology, and what we call URIs are called URLs in WHATWG and in other daily vernacular. So please excuse me if I'm not always clarifying this very well, but this is all about URIs. So CRIs are supposed to cover URIs, and actually IRIs as well.
D
Well, they're all URIs. And actually URN, that's another problem: there is a URI scheme called "urn", and half of the people who say URN mean a URI that uses the scheme urn, and the other half mean the "foo:" plus opaque stuff that is number two on this slide. And for urn, of course, the opaque stuff is the registry and all that structure behind it, but it's not a hierarchical URI.
D
It's not something that we need to do, but when we do this, we might as well make sure that we handle CURIEs. CURIEs are a way to do some lexical compression of URIs that also looks good on a whiteboard, so they are used by a lot of formats that need to look good on a whiteboard, and CURIEs are a compression scheme that is based on URI syntax.
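The basic CURIE mechanism can be sketched in a few lines; the prefix binding used here is made up for illustration, not a registered one:

```python
# Hypothetical prefix map; the 'core' binding below is an assumed
# example, not an actual registered prefix.
PREFIXES = {"core": "http://example.org/core-registry#"}

def expand_curie(curie, prefixes=PREFIXES):
    """Expand a CURIE like 'core:ct' back into a full URI.

    The compression is purely lexical: the prefix boundary may fall
    anywhere in the URI string, with no relation to the URI's own
    structural components.
    """
    prefix, _, reference = curie.partition(":")
    return prefixes[prefix] + reference
```

Here `expand_curie("core:ct")` yields `http://example.org/core-registry#ct`; the cut between prefix and reference does not coincide with any URI component boundary, which is exactly what makes CURIEs hard to map onto the structured CRI model.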
D
So there is no relationship between CURIEs and the structure of a URI: you can just cut up your URI anywhere and put the different parts into different places that amount to a CURIE. That was really hard to support. The mechanism that would best support this is CBOR-packed, and CBOR-packed now has function tags that can be used for reconstructing a URI out of a CURIE.
D
So we are kind of in a position to do this now, even though it's way more complicated than the other things that CBOR-packed does for us. But in the interim I talked about, we decided to do this in a separate specification, and we didn't quite decide whether this needs to be done in CoRAL or in a separate document; we still can do that. We need to start writing that document, and we need to decide which of the potential CURIEs we actually support.
D
So two of the ones on this slide are in strikeout font because they are probably not easy to support, while the other four probably are. So let's talk about that. But the point is that whatever we come up with here and define as a better CURIE, or a concise compact CRI or whatever, doesn't have to cover the entire CURIE space.
D
That depends a bit on how many other bugs we find in our implementations, but we are down to a small number now, so I would think this is not months but weeks.
E
Hello, so this is the next part, coming from: we now have CRIs. Now that we have a way of expressing URIs in a compact and embedded-friendly way, we can use this to build structured documents that talk about resources.
E
For today, I'd like to pick up a few of the open issues and ongoing things that are around in CoRAL. For that, for everyone who is not familiar with CoRAL so far, I'll just summarize very quickly: this is a way of making statements about resources, similar to RFC 6690, but with the extended option to have really structured data in there, to build larger trees of data, and to better guide interactions by using forms.
E
The properties that are relevant for today are that this is, in its structure and in the information model that is behind it, similar to RDF. So we have statements that link one resource, one URI, through some predicate, which is also expressed as a URI, with an object that is either another URI or a literal. If you look at this example statement, "my temperature resource supports the series transfer pattern", you'll see that neither of these is what it looks like in CBOR.
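The statement shape just described (subject, predicate, object) can be sketched as a simple triple; the URIs and CURIEs below are illustrative placeholders, not values from the CoRAL draft:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Statement:
    subject: str                            # a URI (or CRI) naming the resource
    predicate: str                          # a URI naming the relation
    object: Union[str, int, float, bool]    # another URI, or a literal value

# "My temperature resource supports the series transfer pattern",
# with made-up example identifiers:
stmt = Statement(
    subject="coap://sensor.example/temp",
    predicate="core:supports",              # illustrative CURIE
    object="core:series-transfer-pattern",  # illustrative CURIE
)
```

The object position is what distinguishes the two cases in the model: a URI links to another resource, while a literal (number, string, boolean) is a plain value.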
E
Obviously, this is more of a diagnostic notation here, and you'll see that often here in the slides I'll be using the very CURIEs that Carsten just talked about, because the full URIs behind those are lengthy. But if it says "core:" here, this probably means something like "http://iana.org/registry/…", something that leads into core.
E
A few things were changed recently, and one of those is that literals are now much simpler. So if you've been to the interims, there has been discussion on what identity means if literals can still have properties.
E
The current direction in which CoRAL is going, and this is already part of the latest draft, is that we do not need properties of literals, because we can use the expressiveness of CBOR tags. So, for example, in earlier versions of CoRAL, typical metadata on a literal was the language or the text direction.
E
This is now all covered in CBOR tags, thanks also to Carsten for covering that inside problem-details, because this is now where the relevant CBOR tag that we'll be using is defined.
E
What is also ongoing, and is partly in the current document but not in full, is how we use CBOR-packed in order to express all of these statements concisely.
E
So what you see here is an example translated from the proposal on the pub/sub broker, and everything that is a CURIE here in this diagnostic notation would, in a binary representation, be expressed through CBOR-packed. This is a part that's already in the document.
E
Frankly, I think that CBOR-packed was taken up again, from a not very old idea, partially because it is really useful here.
E
What is not yet in the document, but being planned for the next update, is how we set up this dictionary. Some of those statements would ideally be loaded with a document type. So when you receive a CoRAL document, there is already a dictionary set up that says that we can use all these CoRAL terms, so that the title statement could be concise in CBOR.
E
How this can be applied is not quite trivial, because if we only set up one dictionary, then this would mean that we have to register every term in the dictionary. The plan that I'd like to go with here is to allow setting up additional dictionaries, using a mechanism very similar to what is already in CBOR-packed: by stating that we are loading a particular number of terms from a well-known additional dictionary. This means that we don't have to lug around the weight of those.
E
So, if I said dictionary earlier, my apologies: I meant tables; terminology has changed here. That means that we don't have to lug around the weight of having those tables, and we don't have to let our numbers grow incredibly large because we have so many entries in the table; we can use them selectively. And by utilizing well-known tables, we can also express this selection concisely, as is shown here in the fifth line, again using just a few bytes to select that set of tables.
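The table-loading idea can be sketched as follows; the registry name, table contents, and function are all made up for illustration, under the assumption that references resolve by index into one combined table:

```python
def build_table(base, imports, registry):
    """Assemble a packing table from a base table plus selected
    slices of well-known shared tables.

    Each import names a shared table and how many of its entries to
    load, so reference numbers stay small without registering every
    term in one global dictionary.
    """
    table = list(base)
    for name, count in imports:
        table.extend(registry[name][:count])
    return table

# Hypothetical well-known registry with one shared table:
REGISTRY = {"coral-core": ["coral:title", "coral:rel", "coral:form"]}

# Load the first two entries of the shared table after one local entry:
table = build_table(["local:item"], [("coral-core", 2)], REGISTRY)
```

Here index 0 resolves to the local entry and indices 1 and 2 to the imported slice, so the import itself costs only a name and a count rather than the full entries.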
E
CoRAL will definitely need to say something about security, but we plan not to redo things that are already described in security models such as ACE. What CoRAL will provide is, for application authors, a way of expressing what the application needs.
E
It provides a way for the CoRAL agent, like a browser, to enforce this, and for the CoRAL agent there will be rules that say how those statements from the application are applied, and what they mean in the context of loading, for example, a document with a dictionary that is externally referenced. Because, in essence, every piece of information that goes into assembling an action from a document will depend, security-wise, on the integrity of all the resources that go into making that statement.
E
So these are the areas that are being worked on. The next steps that will be taken up are that the binary serialization will need to be revisited; but in order to do that, and this is part of what I'm asking the group for input on, in order to evaluate that we'll need real-world examples.
E
So whatever you would like to use this with, please let us know; please send example documents that we can then translate, and based on that decide what good choices for the serialization will be. Problem-details is something that will go in here.
E
I don't think that these are good items for the first version. So while we should keep them in mind, I wouldn't want to have them around in an initial version, because I think it would slow things down too much. And I think I'm just on time to ask the working group for comments and questions, and for whether this is a direction that we should follow, before time's running out. Thank you.
B
I don't think there's anything in the chat, and no one is in the queue. I have a question, just to be sure I understand correctly: I suppose in the short term you want to focus mostly on the items you had in slides six and seven, while possibly working in parallel, but at a slower pace, on the items on slide eight. Correct?
E
Yes, six and seven are ongoing items, and the items on slide eight are being started and are held back by the lack of examples, but yes, both will be addressed in parallel.
B
Thanks. So in Vienna we presented version six, and after that the document entered a working group last call. We received quite a lot of comments, all very useful; thank you very much. Most of them were from reviews from Carsten. A summary of the review was actually posted on the list, but then the authors got a lot of detailed comments to address.
B
In fact, there were also more replies: from Jon Shallow, basically concurring with Carsten and pointing also to libcoap as an implementation that also supports group communication; from Rikard, confirming that the document is good; and John Mattsson started a PR on the GitHub, so far only with editorial fixes that we already adopted in the latest version, but I'm aware that more comments are coming there. Next slide, please.
B
So
for
the
cutoff
we
submitted
the
the
latest
version,
seven
to
the
best
of
our
knowledge,
it
is
addressing
all
the
received
working
groups
called
comments.
I
just
summarized
the
main
updates
here.
We
have
revised
and
extended
the
the
list
of
updates
or
actions
that
this
document
produces
on
other
documents,
especially
the
absolution
of
7390
and
the
update
of
7252.
B
We
also
had
a
figure
providing
an
example
on
an
example
of
how
you
can
combine
the
different
types
of
groups,
meaning
co-op
application
and
security
groups.
We
were
just
requested
to
build
a
real-life
storyline
around
that
examples,
and
we
did
that
considering
a
building
automation
use
case
to
make
it
felt
more
real.
B
Next
slide.
Please
yeah.
There
was
quite
a
lot
of
work
to
do
also
on
improving
content
related
to
the
the
naming
of
the
different
group
types
and
as
to
co-op
groups.
B
So
we
moved
all
those
examples
with
the
necessary
editorial
adjustments
out
of
the
document
body
and
and
to
appendices,
and
that
I
think,
improves
readability
quite
a
lot.
Also
as
to
security
groups,
we
clarify
that.
Well,
yes,
they
have
names
as
invariant
identifiers,
but
they
are
not
exactly
used
in
the
protocol
defined
in
this
document
or
in
the
actual
co-op
group
messages
exchanged.
B
They
are
useful
in
in
other
protocol
and
mechanisms
defined
in
other
documents,
especially
in
ace
as
supporting
security
for
group
communication,
and
since
we
had
the
opportunity,
we
also
clarified
that
it's
a
bad
idea
to
use
no
sec
or
its
variance
lowercase
uppercase
as
a
name
of
a
security
group
because
well
you'll,
be
very
misleading
if
it's
an
actual
security
group,
while
if
you
use
the
nosek
mode
without
security,
that
just
no
security
group,
adult
name
next
slide,
please
thanks
and
more
was
done
also
on
proxies.
B
We better clarified the limitations and issues that you have in general if you introduce a proxy in this kind of setup, and again that they are possible to address by using the method in the group-proxy document, which, by the way, also defines how you can deal with these issues when you consider specifically an HTTP-to-CoAP proxy. We also clarified a bit of terminology related to this, and we also describe what happens in the case where the client sends a group request, for instance over IP multicast, to be received by multiple proxies at once, each of which, in turn, forwards the request to the group of servers, discussing separately the cases with and without security.
It was also noted by Carsten that we were a bit too strict in saying that group communication cannot happen over reliable transports, where multicast cannot happen. We were already discussing a case when using blockwise, where a first group request can be sent over multicast with the Block2 option, and then the following requests, to request the following blocks from the origin servers, can be sent individually via unicast to each of those servers.
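The transfer pattern just described can be sketched as a request plan: one multicast GET for block 0, then per-server unicast GETs for the remaining blocks. This is an illustrative outline only; the identifiers and tuple shape are made up, not from the documents:

```python
def plan_block_transfer(servers, num_blocks):
    """Plan a blockwise group retrieval: the first request carries
    Block2 num=0 over multicast; every later block is fetched from
    each origin server individually over unicast (where a reliable
    transport may even be used)."""
    plan = [("multicast", "group", 0)]
    for server in servers:
        for block in range(1, num_blocks):
            plan.append(("unicast", server, block))
    return plan
```

For two servers and a three-block body, the plan is one multicast request followed by two unicast requests per server.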
B
Those unicast requests can, in principle, use reliable transports, and this kind of transport switching within the transfer of the same body can also be facilitated by means of the work in the CoRE transport indication document. And there was also a revision about the interworking with other protocols, especially related to multicast routing, referring to the most recent IETF documents, especially in 6lo. Next slide, please. And finally, to wrap up on the updates.
B
We revised again the phrasing around the use of the NoSec mode, trying to be uniform and assertive all over the document. The NoSec mode, where no security is used, is highly discouraged; it's not recommended. There are a few exceptions, mentioned up front, like early discovery of devices or services before you can do anything better, so basically you have no other choice. But still, even so, you need to have very well understood the security implications of this.
B
Editorially, we cover possible fragmentation issues in 6LoWPAN, and pervasive monitoring, especially highlighting how that makes a difference in group communication, plus, of course, an overall editorial revision of the whole document. Thanks. Yeah, to wrap up: this last version addresses all the comments we got so far, but we expect at least one more version to be submitted, because we expect a follow-up from Carsten with counter-comments on how we addressed his review.
B
I know that John will follow up also with more comments on his PR, and who knows, maybe more can come. But yeah, we expect at least one more version, and hopefully that can be a final one, really incorporating all comments in a stable way. And finally, I just wanted to give a reminder that Francesca at some point recommended that, when it is ready, this document would be better sent to the IESG together with the other two related documents.
B
I've just found out that, a few hours ago, the document moved to waiting for shepherd write-up. That doesn't change the fact that I was expecting a few more comments coming to address, and I also have in the queue some more points to add in this very document and in another related one. But I discussed this part already also with Daniel and his co-chair; he definitely agrees with the need to think about a single bundle to be submitted to the IESG. So this is the status, and that was my last slide.
E
Christian, I was a bit wondering when you mentioned the way that the second and later blocks are pulled out, that those happen with unicast. I suppose this is based on the statement in RFC 7959 that says that this is how it works. Was there any evaluation of whether getting the later blocks with multicast as well, so just a GET with the Block2 option, would make sense? Because 7959 says that other uses of block options with multicast are for further study, and now that the group communication document is revised, this might be a good point.
E
But as we stay with the topic of multicast: so far, original CoAP, as well as the updates that were presented right now, only describes how multicast packets are used to send requests; responses that are sent to many clients were so far not described.
E
This is what observe multicast notifications does: it allows sending responses to a group of clients whose token space is managed by the server, as a steward of that multicast address.
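A rough sketch of what "server-managed token space" means for a client, assuming the server hands out the observation parameters (all field names and values below are hypothetical, not from the draft):

```python
# Illustrative data the server hands out (e.g. via discovery) so that
# clients can bind to an existing multicast observation:
group_observation = {
    "request": {"code": "GET", "uri": "coap://server.example/temp"},
    "token": bytes.fromhex("cafe"),           # chosen by the server
    "notification_address": "ff35:30:2001:db8::1",
}

def accepts(notification, config=group_observation):
    """A client treats an incoming multicast message as a notification
    for this observation only if both the destination address and the
    server-assigned token match the configured values."""
    return (notification["to"] == config["notification_address"]
            and notification["token"] == config["token"])
```

The inversion relative to plain CoAP is that the token is not picked by each client; the server assigns one token for the whole group, so that a single multicast response can match the "same" request at every client.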
E
This was envisioned as part of discussions around the pub/sub model, which is why the illustration here shows this as a broker fanning out the responses, but it is really applicable to all kinds of observation situations in which a large number of devices, in a network that supports multicast well, is observing a single resource. As with all modern multicast components, this is supported by Group OSCORE, if only to get actual cryptographic protection of requests and responses. Mainly, since this was last presented at the last session, we've made a few updates.
In particular, we are now elaborating on how a client can obtain the necessary configuration information: that is, what is the precise request, on which token was it sent, and to which multicast address is the response being sent, et cetera.
E
The client can obtain these not only from trying to set up an observation, but also through other means, because in a sense it is just a piece of information about a state of the server that is around. A typical way of obtaining that information is because it is part of a pub/sub discovery step.
E
The current document also not only describes that this is possible, but also what this means for the server: because if the server is not handing out that information on a short-term basis, but as a general statement, that means that it must have the request already running, and must be sending out multicast notifications for as long as that information is around. This still doesn't free it from looking at whether it is still necessary to send these notifications, but it has to keep track.
E
A few updates were just editorial, in that terminology was adjusted or changed, but there was also conceptually new input, or at least, yeah, there was a lot of new text. For example, all those prerequisites that are listed in the introduction, that we have to have a multicast address that is managed, and we have to be on a network, et cetera, are now listed explicitly.
E
Another new section, also based on input from previous reviews, now lists the various modes in which this can be used, because we do have a lot of examples, and the document was previously a bit confusing as to which configuration is being used at a given point. So, for example, whether Group OSCORE is being used, or whether deterministic requests are being used: all these change subtle details of what this means for the precise packets that are being exchanged.
E
A current point that we would like to change in a future version is the handling of deterministic requests, because so far we haven't been fully clear on what it means for a deterministic client to be in a group, with respect to whether clients are required to implement deterministic requests.
E
Previously, this meant that a server could need to run two observations: one with a deterministic request, and one with a regular request created by the server itself.
E
The proposal now is to just say that, given that there's so much pre-configuration required for all this anyway, it makes sense to just state whether the group has or has no support for deterministic requests. If deterministic requests are being used, then the clients have to support these, and the server doesn't have to run two parallel observations for the different kinds of clients.
E
Another change that is coming up, and is already part of a draft branch, but not yet ready to be merged, is that we have a lot of information in the phantom requests and in the informative responses that relates to addresses of CoAP endpoints. Given that CRIs are now becoming stable enough to reuse them, in this document we are slowly switching all these occurrences over to just using CRIs. So instead of using our own list of what protocols are around that support multicast, we just use the schemes that are registered as part of CRI, and those that can do multicast are usable in this place. This is also aligned with work on proxies, which has a similar field that previously listed both the protocol and the IP address, and now just lists the CRI.
E
That being said, we've also processed a review from IANA, some of which does not fully apply anymore, because it related to how we previously defined those transports, which will now be done using CRIs.
E
I think that's largely it from me, and I think I've made good time, so thanks for your attention. Do you have questions or comments?
H
E
It does not. I don't think that this will influence pub-sub in any way: a pub-sub broker can still be set up as it always was. It just has the additional option of using this feature, and by using it, it would advertise additional information about the resource, but basic pub-sub would not change. This is purely opt-in for applications, so it just matches well.
B
You
know,
thank
you
again.
Thank
you
very
perfectly
back
on
time.
So
now
it's
ricard's
turn
with
two
documents
in
a
row,
starting
with
profiling,
add-on
for
co-op
underscore.
A
H
A
H
The main focus is how you can achieve an optimization where you combine the third EDHOC message with your actual first OSCORE request, as you can see in the blue box to the right. So it's essentially defining an optimized workflow for EDHOC when transported over CoAP, with the intent to use it for OSCORE. It defines some OSCORE-specific processing of EDHOC messages, and it also extends the EDHOC application profile.
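As a sketch of the combined-request idea described above, the following fragment shows how EDHOC message_3, wrapped as a CBOR byte string, could be prepended to the OSCORE ciphertext in a single request payload. The helper names and the minimal CBOR encoder here are illustrative, not taken from the draft.

```python
def cbor_bstr(data: bytes) -> bytes:
    """Encode `data` as a CBOR byte string (major type 2), lengths < 65536."""
    n = len(data)
    if n < 24:
        head = bytes([0x40 | n])
    elif n < 256:
        head = bytes([0x58, n])
    else:
        head = bytes([0x59, n >> 8, n & 0xFF])
    return head + data

def combined_request_payload(edhoc_message_3: bytes,
                             oscore_ciphertext: bytes) -> bytes:
    """Sketch of the EDHOC + OSCORE combined request payload: EDHOC
    message_3 (as a CBOR byte string) prepended to the OSCORE ciphertext,
    carried in one CoAP request flagged with the EDHOC option."""
    return cbor_bstr(edhoc_message_3) + oscore_ciphertext
```

Because the CBOR byte string carries its own length, the receiver can split off message_3 unambiguously before OSCORE processing.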
H
That has a number of, let's say, configuration settings related to that execution taking place, and in the document we also define parameters related to web linking for the discovery of EDHOC resources and their application profiles. So overall, the scope is EDHOC transported over CoAP, and the main item here, again, is this optimized workflow to reduce the actual number of round trips before you can start using OSCORE communication.
H
To go over some of the updates since IETF 113: some of these things were presented already at the CoRE interim during April, and the updates now in these slides are because of changes that happened in the EDHOC draft, which is now at version 15.
H
So you have the media types application/edhoc+cbor-seq and application/cid-edhoc+cbor-seq, but the actual combined EDHOC + OSCORE request still has an unnamed media type. Some other small changes: updating of the terminology, and the applicability statement was renamed to application profile. More things: some rephrasing here, changing a MUST NOT to a SHOULD NOT based on feedback during the CoRE interim.
H
Otherwise,
the
processing
of
added
messages
was
revised
and
simplified
when
it
comes
to
how
you
select
your
own
endoconnection
identifier,
which
then
is
practically
offered
as
score
your
own
oscar
recipient
id
that
you
will
receive
for
oscar
communication.
H
H
There were also continued consistency updates, and the text about EDHOC application profile templates was simplified: there is no need to say anything about EDHOC connection identifier conversion anymore. Before, there were parameters in there about which conversion method you want to use; that's no longer needed. Then it's about signaling: if you support the EDHOC + OSCORE combined request, the application profile should signal that support, and you can then also signal the use of EDHOC message_4, which can compatibly be used.
H
That is, EDHOC message_4 together with this optimization. The web linking part was also revised, removing the target attribute related to the conversion of identifiers, and also admitting multiple instances of the EAD target attribute, basically because you can have an EAD in each of the EDHOC messages: you can have an EAD_1, EAD_2, EAD_3, and so on, depending on which EDHOC message you're talking about. Some security considerations were also added, for instance on flooding attacks.
H
If
you
want
to
flood
this
or
without
the
crossover
combined
request,
that's
actually
not
the
security
problem,
because
the
server
does
not
process
the
same
electricity
multiple
times
and
server
performance
replay
checks
on
those
core
protected
application
requests.
So
this
kind
of
flooding
will
not
give
you
any
practical
attack.
H
Well,
one
such
occasion
is,
if
you
have
a
very
large
id
credit
like
a
big
certificate
chain
or
if
you
have
large
items
in
the
ead
in
adt,
specifically
like
just
a
big
big
amount
of
data
there,
then
we
covered
the
basically
client
processing
to
say
that
well
to
clarify
that
only
the
first
inner
block
actually
conveys
endocrine
and
the
other
option,
and
if
this
endoplasmic
request
exceeds
the
max
max
unfragmented
size,
you
should
stop,
but
this
maximum
fragmented
size
is
a
parameter
defined
in
those
score
rfc
to
provide
particular
attacks
like,
for
instance,
a
proxy
injecting
blocks
in
the
middle
of
a
block
wise.
H
Processing
yeah,
that's
just
the
rfc
7959
and
8613.
We
have
new
section
six
guidelines
on
not
using
clockwise
or
using
clockwise,
together
with
the
other
crossover
request.
So
when
should
you
use
it
so
the
client
might
use
inner
clockwise,
but
in
this
section
we
assume
that
it's
actually
not
using
outer
block
wise,
because,
typically
a
client
wouldn't
take
the
initiatives
used
out
of
block-wise.
There
will
be
something
approximate.
H
A
H
So
if
the
end
of
data
is
below
this
limit,
if
block-wise
is
not
used,
then
that
means
the
application
data
plus
the
analog
data
is
below
the
limit
and
if
blockers
is
used,
that's
when
one
block
plus
the
add-on
data
is
below
the
limit.
So
I'm
a
bit
long
time.
So
I
probably
don't
have
time
to
go
into
deals
and
all
these
considerations
but
check
that
section
in
the
draft,
and
it
goes
into
a
lot
more
details
on
these
aspects.
There's
also
some
corner
cases
and
kind
of
trade-offs.
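A minimal sketch of the size consideration above, assuming a simplified accounting where the EDHOC data and the application request are just added together (the draft's actual rules are more detailed):

```python
def fits_without_blockwise(edhoc_message_3_len: int,
                           oscore_request_len: int,
                           max_unfragmented_size: int) -> bool:
    """Rough check from the discussion above: the combined request avoids
    block-wise only if the EDHOC data plus the application request stay
    within MAX_UNFRAGMENTED_SIZE. Names and accounting are simplified."""
    return edhoc_message_3_len + oscore_request_len <= max_unfragmented_size
```

A client could use such a check to decide whether sending the combined request would force it into block-wise, in which case the original, non-combined workflow may be preferable.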
H
When
you
actually
you
you,
basically
you
end
up
having
to
use
block
price
just
because
of
that.
The
plus
also
request
yes,
because
the
fact
that
you're
combining
that
into
a
single
request-
and
we
concluded
basically
that
the
optimized
workflow
can
be
no
worse
than
the
original
one,
but
it
could
also
be
basically
that
in
this
case,
in
some
cases,
you
should
actually
not
consider
using
their
adolescent
request
if
using
it
means
that
you
end
up
having
such
a
big
request
that
you're
forced
to
use
clockwise,
but.
H
See that section in the draft for more considerations on this. Going into some next steps: well, we want to add more security considerations about when you should use the EDHOC + OSCORE combined request, and the relation to access control enforcement. We do have running code on this, based on Eclipse Californium and aligned with EDHOC version 15, of course implementing EDHOC and this optimization. A remaining to-do is to renew the registration of the EDHOC option.
H
That is, the registration we did earlier for the EDHOC option; the option signals usage of this optimized request. If there are no big issues, we do feel that the next version may indeed be good for a working group last call, and it may be good here to sync with the LAKE working group and basically take it in parallel with the working group last call of EDHOC. That makes sense, because these documents are very much interconnected. And again, any comments or reviews are very welcome on this document.
B
B
D
Yeah, I just wanted to make the point that we have to be a little bit careful about not stuffing our specifications with too much informational content. There are many reasons why we don't want to do that. One is that we are essentially DoSsing the IESG, but also people who look at this specification and see: oh my god, that's 28 pages.
D
Do
I
have
to
implement
all
that
might
think
this
is
a
complicated
document
to
implement.
Well,
actually,
it's
not
very
complicated,
so
I'm
wondering
whether
we
have
a
good
way
to
to
put
things
like
these
considerations
is.
Is
it
actually
an
improvement
to
to
use
this
or
not
into
a
separate
place,
so
we
can
clearly
separate
the
standard
strike
part
which,
which
actually
is
relatively
simple
from
the?
How
do
I
use
this
in
the
best
way
possible
informational
content?
D
So
I'm
not
saying
that
this
draft
is
is
the
one
where
we
absolutely
have
to
do
this
now,
but
that
struck
me
as
just
another
example
where
we
are
getting
into
this
trap
of
providing
too
much
informational
content
or
where
we
may
be
getting
into
this
trap
of
providing
too
much
information
and
content.
So
I
just
wanted
to
to
set
this
this
flag
here
that
we
should
be
thinking
about
that.
D
I
don't
want
a
response
right
now,
but
when
you
work
on
this
document,
think
about
is:
is
there
maybe
something
that
could
be
exported
into
an
informational
document
meaningfully?
So
the
the
actual
specification
stays
crisp.
H
D
If you label it as non-normative, then maybe some reviewers won't read it, which may also be another way to reduce the load. But still, when people first look at the document, they see it has 28 pages, and they won't see that only 10 of those are in the body and 18 are in the appendix. So actually splitting documents may be something that we may want to do a little bit more than we have been doing. And, by the way, once you have an informational document, it can also be used as a little bit of a road map.
D
So
you
explain
how
the
normative
documents
that
are
being
created
here
actually
fit
together
and
it
can
be
on
a
different
timeline.
So
there
are
many
many
benefits
that
can
be
had
from
this,
but
also,
of
course,
it
makes
the
management
of
those
documents
a
little
bit
more
complicated
during
creating
them.
H
And
I
saw
by
the
way,
christian
had
a
point,
also
agreeing
on
syncing
with
lake
as
a
comment
in
the
chat
yeah.
E
B
Okay, one possibility was to have them in a parallel working group last call, in fact, but your comment sounds like you'd prefer to have them strictly one first and then the other.
E
H
H
One part is about a way that has been defined to provide a lightweight means of performing a key update for OSCORE, which is what we call KUDOS. This is loosely inspired by the Appendix B.2 procedure already defined for OSCORE, and the goal is essentially to renew your Master Secret and Master Salt, which in turn makes you derive new Sender and Recipient Keys. This procedure can also achieve forward secrecy, and there are various reasons why you would like to rekey.
H
But one particular reason that this draft covers is that you may reach limits on how many times you have used your keys for encryption or decryption. There is actually work going on in CFRG where they define usage limits for AEAD algorithms. So, essentially, you need to follow certain rules on how many times you encrypt, or how many times decryption fails, with your keys.
H
I
decryption
with
your
keys
before
you
have
to
rekey
and
if
you
do
not
rekey
and
have
this
excessive
use
of
the
same
key
can
enable
breaking
the
security
properties
of
some
aed
algorithms.
So
the
second
part
is
about
these
limits,
but
the
focus
on
today
will
be
the
kudos
part
and
the
key
update
procedure.
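The limit-tracking idea can be sketched as simple counters; the names q and v follow the usual AEAD-limits analysis, and the concrete limit values would come from the actual specification, not from this sketch:

```python
class AeadUsageCounters:
    """Track encryptions and failed decryptions against usage limits
    (roughly 'q' and 'v' in AEAD-limits analyses) that motivate rekeying.
    The limit values passed in here are placeholders, not normative."""

    def __init__(self, q_limit: int, v_limit: int):
        self.q = 0            # messages encrypted with the current key
        self.v = 0            # failed decryption attempts
        self.q_limit = q_limit
        self.v_limit = v_limit

    def note_encryption(self) -> None:
        self.q += 1

    def note_failed_decryption(self) -> None:
        self.v += 1

    def rekey_needed(self) -> bool:
        # Once either counter reaches its limit, a key update is due.
        return self.q >= self.q_limit or self.v >= self.v_limit
```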
F
H
There have been updates recently. So, again, an overview: this key update procedure is basically a message exchange between a client and a server. Let's say the client is the initiator; you exchange a number of messages, as you see on the right-hand side. The key thing here is that you want to exchange nonces, and using those nonces you want to derive new OSCORE Security Contexts on both peers, with the nonces as input. We define this updateCtx function that basically takes the nonces and the old Security Context and produces a new Security Context.
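The shape of the updateCtx idea (a new context derived from the exchanged nonces plus the old context) might be sketched as follows; the draft's actual construction and labels differ, so this only illustrates mixing the inputs with an HKDF-style extract step:

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract with SHA-256: HMAC(salt, input keying material)."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def update_ctx(nonce1: bytes, nonce2: bytes,
               old_master_secret: bytes) -> bytes:
    """Sketch of updateCtx: mix the two exchanged nonces with the old
    Master Secret to obtain new keying material. Illustrative only; the
    KUDOS draft defines the real inputs, labels, and derivation."""
    return hkdf_extract(nonce1 + nonce2, old_master_secret)
```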
H
We also extended the OSCORE option, so as to have a bit that indicates the presence of this nonce parameter, and an 'x' byte that signifies both the length of the nonce and some additional signaling flags, which I will come back to later. We got some comments from IANA about this, in particular notes on the language and on whether bits 1 and 15 are really the ones needed; yes, we do need exactly bits 1 and 15. And, by the way, these nonces were previously called "ID Detail".
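Purely as an illustration of the kind of decoding involved, here is a sketch of parsing such an 'x' byte; the bit positions chosen here are assumptions made for the example, not the layout normatively defined in the draft:

```python
def parse_x_byte(x: int) -> dict:
    """Illustrative decoding of the 'x' byte in the extended OSCORE
    option: a nonce-length field plus signaling flags such as 'p'
    (no-FS mode) and 'b' (preserve observations). The bit positions
    below are assumptions for this sketch only."""
    return {
        "nonce_len": x & 0x0F,   # assumed: low 4 bits carry the length
        "p": bool(x & 0x10),     # assumed flag position for 'p'
        "b": bool(x & 0x20),     # assumed flag position for 'b'
    }
```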
H
We now simply call it the nonce, so that field in the OSCORE option is simply called Nonce now. Now I'll go over some of the updates that we've made to the draft since the last meeting. One thing we did is that we revised, and moved to the main body of the draft, the text about an alternative KUDOS mode that does not provide forward secrecy.
H
So
the
whole
idea
here
is
that
you
need
a
way
to
do
stateless,
key
updates,
because
some
devices
may
not
be
able
to
store
information
to
persistent
memory
and
the
method.
The
the
main
method
of
kudos,
the
main
mode
we
have
defined,
really
requires
that
you're
able
to
store
information
to
retrieve
it
back
after
you're
rebooted,
and
if
you
cannot
do
that,
you
couldn't
use
that
main
mode.
H
So
we
because
of
that
we
define
this
mode
that
doesn't
have
forward
secrecy,
but
then
the
benefit
is
that
it's
stateless
and
these
devices
that
cannot
store
the
persistent
memory
can
still
use
it
and
well.
There's
then,
the
need
for
a
way
to
signal
this
and
the
way
we
define
that
is
we
defined
a
bit
p
that
we
placed
in
this
x
byte
in
those
corruptions.
So
again,
the
x
byte
signifies
the
length
of
the
nonce
and
some
of
the
signaling
bits
and
well,
the
p
is
zero.
H
When p is zero, the sender indicates that it wants to run KUDOS in the forward secrecy mode, the normal, original mode. If you set p to 1, you wish to use the no-FS mode. Basically, both peers need to agree here, so both need to set p to 0, or p to 1, in their respective request and response, to align and agree on using a particular mode.
H
Basically, the Security Context used to derive a new Security Context is just the latest Security Context that you have stored. It's also a key point here that if you are capable of storing to disk, and thus capable of using the forward secrecy mode, you must use that mode.
H
The no-FS mode is a fallback if you absolutely cannot use that, or if the other peer cannot use the FS mode. So when you're using the no-FS mode, again, you sacrifice forward secrecy, because one peer cannot write to persistent memory. The difference here is that the key material you consider before running KUDOS isn't your latest key material; you consider your bootstrap key material, which is what you were pre-provisioned with during manufacturing, commissioning, or recommissioning.
H
So
again,
this
is
agree
downgrading
if
the
initiator
sets
p
to
zero.
The
responder
may
not
be
able
to.
You
know,
follow
up
on
that,
because
it
cannot
actually
use
the
fs
mode.
In
such
case,
the
server
can
respond
with
an
error
response
setting
p21
to
indicate
no,
I
want
to
use
the
no
fs
mode
or
if
it's
a
client,
that's
a
responder.
H
H
While this observation, observation one, is still ongoing, the client may send a new request, request two, that also uses that Partial IV. The problem is that you basically have two ongoing, in-progress requests with the same Partial IV, and this gives you the problem that responses or notifications by the server will cryptographically match both request one and request two.
H
So
what
you
need
to
be
careful
about,
basically,
is
that
do
not
reuse
the
same
partial
iv
right
because,
like
the
thing
is
after
you
run
kudos,
you
reset
your
partially
back
to
zero.
So
you
cannot
reuse
the
same
partial
v
that
you
already
are
using
for
an
ongoing
observation
and
to
solve
this.
We
device
this
long.
Jumping
solution,
and
essentially
what
that
means
is
that
after
kudos
has
been
run,
you
jump
your
partial
iv
forward
to
the
value
of
the
highest
rec
piv,
among
ongoing
observations
with
your
client
plus
one.
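The Partial IV "jumping" rule described above can be sketched in a few lines:

```python
def next_partial_iv_after_kudos(ongoing_observation_pivs) -> int:
    """The jumping rule sketched above: after KUDOS, instead of simply
    restarting the Partial IV at zero, jump past the highest request
    Partial IV still in use by an ongoing observation, so a notification
    can never cryptographically match two different requests."""
    if not ongoing_observation_pivs:
        return 0
    return max(ongoing_observation_pivs) + 1
```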
H
So you just jump over the highest Partial IV that's already in use for an observation, to avoid this accidental Partial IV reuse. That is, of course, if you wish to preserve observations. To be able to signal whether you wish to preserve observations, we defined one more bit, which is this 'b' bit.
H
It's
in
the
x
byte
of
those
corruption,
and
if
the
b
bit
is
set
to
zero
you're
saying
that
you're
wishing
to
cancel
all
common
observations
if
the
bit
is
set
to
one,
you
wish
to
keep
all
common
observations,
and
this
is
an
all
or
nothing
approach.
Both
players
have
to
agree
and
set
the
beat
to
one
to
decide
to
keep
them
if
one
or
both
of
the
periods
set
to
be
to
zero,
the
observations
will
not
be
kept.
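The all-or-nothing agreement on the b bit amounts to a simple conjunction:

```python
def observations_kept(b_client: int, b_server: int) -> bool:
    """All-or-nothing rule from the discussion above: ongoing
    observations survive KUDOS only if BOTH peers set the 'b' bit
    to 1 in their respective messages of the exchange."""
    return b_client == 1 and b_server == 1
```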
F
H
H
You must explicitly use cancellation requests, and only purge observations if you get a cancellation confirmation from the server, because otherwise, if you just forget observations, you may again run into this problem of accidentally using a Partial IV that is in use for an observation on the server side. And one thing that was pointed out is that if two peers actually don't want to run KUDOS for the sake of a key update, they may wish to use KUDOS just as a way to quickly cancel all ongoing observations with the other peer.
H
H
Well, the OSCORE RFC allows updating your OSCORE identifiers, basically, and the intent is that, for privacy reasons, you may actually want to switch identifiers, for instance just after having run EDHOC. Basically, if you switch networks, you may not want an attacker to be able to correlate your identity through this ID between the old network and the new network you switch to. This procedure can be run standalone or as part of a KUDOS execution.
H
This has also been moved up from the appendix to the main body, and it has a number of properties. Basically, we define a new option where you indicate your new desired Recipient ID, and that's a class E option, so it is also encrypted when using OSCORE. There are some considerations on when you should and should not use this, and some things you should be careful about in terms of remembering all the used Recipient IDs.
H
H
They end up being integrity protected, which is a very nice property. And for the actual updateCtx, there are basically two internal paths it can take. One is, if your original context was derived using EDHOC, you use EDHOC KeyUpdate to derive your new context, with the nonces as input, and X; quite simple. The other method is for when you didn't use EDHOC at all.
H
H
We have now aligned the text so it's up to date with EDHOC KeyUpdate, which takes a CBOR byte string as input, basically. We also define some rules on when you can overwrite your PRK_out and PRK_exporter keys, which are EDHOC keys that you use for deriving the final key material, the Master Secret and Master Salt. So, a bit more detail on updateCtx: basically, we needed to define a way to handle the X values and the nonces.
H
A
H
It's a way of blending the X and N values into standalone X and N parameters. You can see here that in the case of message one, X is simply X1 and N is N1, but in message two you have a slightly more complicated construction: you have a CBOR wrapping of X1 as a byte string, concatenated with the CBOR wrapping of X2 as a byte string.
H
H
So there's no risk of the kind of attack where you just have two concatenated raw byte arrays, where possibly an attacker could try to splice them so that the concatenation ends up being the same. In other contexts, it was pointed out that it's important to keep the length, and CBOR is very useful here because it has the length at hand. Then we actually invoke updateCtx with these X and N values.
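The splicing argument above can be demonstrated concretely: with CBOR byte-string wrapping, two different splits of the same raw bytes encode differently. The minimal encoder here is illustrative:

```python
def cbor_bstr(data: bytes) -> bytes:
    # Minimal CBOR byte-string encoding (major type 2), lengths < 256.
    n = len(data)
    head = bytes([0x40 | n]) if n < 24 else bytes([0x58, n])
    return head + data

def combine_x(x1: bytes, x2: bytes) -> bytes:
    """CBOR-wrap each X value before concatenating, as described above:
    the embedded lengths ensure that (x1, x2) and any other split
    (x1', x2') of the same raw concatenation encode differently,
    blocking splicing attacks."""
    return cbor_bstr(x1) + cbor_bstr(x2)
```

Note that the raw concatenations b"ab" + b"c" and b"a" + b"bc" are identical, while their CBOR-wrapped combinations are not.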
H
H
But that's really the reason we designed it like this: we want EDHOC KeyUpdate to take a CBOR byte string as input, because that's how it's designed in that draft. Open points; I'll go through this a bit quickly. There are a number of issues on the GitHub repository, but we'll go through those and fix and take care of them.
H
We
are
proposing
here
to
split
up
the
ctx
into
two
actual
separate
functions
because,
like
I
said
it
basically
has
two
internal
methods
method,
one
relying
on
that
dock
and
method,
two
relying
on
an
hqda
hkdf-based
approach
and-
and
we
also
want
to
have
a
signaling
way.
So
you
can
signal
to
use
the
method
too,
because
what
if
your
context,
was
defined
and
derived
using
edoc,
but
for
some
reason
your
addox
session
is
not
around
anymore.
H
So
you
cannot
use
that
key
update
practically
well,
then.
In
that
case,
you
would
like
to
fall
back
to
the
hkdf
way
and
we
need
some
signaling
for
that,
and
one
easy
way
to
signal
is
just
add:
one
more
beat
in
the
x
pipe
to
be
able
to
signal
this,
and
we
also
want
to
produce
an
implementation
of
this
based
on
those
coryova,
california,
implementation.
We
already
have
and,
of
course,
yeah
comments,
feedback
very
welcome
anything
you
have
to
to
mention.
E
H
Sure, we would like to follow that, but we would like to use method 1 if possible; if you can use EDHOC KeyUpdate, we would like to take advantage of that. And I would say that the signaling of the fallback could be done by either party, because it could be that the initiator still has its EDHOC session and wants to use method 1 but the responder can't, or it could be the opposite. So it has to be kind of a mutual, bidirectional signaling.
F
That's, that's...
H
That's a good question. Basically, there's the EDHOC KeyUpdate way and the HKDF-based way, and the question is whether they have the same security properties regardless; that's your point, if I understand it correctly.
H
Yeah, that's a good point. Whether there is a concrete security property that method 1 has and method 2 does not have, I cannot answer.
F
A
H
I mean, I would say it could, because practically this is a standalone procedure: you can run it standalone without KUDOS, but you can also run it embedded in a KUDOS execution. So it could be a way to go; I mean, yes, splitting this out into a separate document would be feasible.
H
F
H
Yeah, I would say it's fundamentally the same story. You could practically have one draft that's about KUDOS and one that is about the limits relating to OSCORE; it could be like that, for sure. I think it grew out like this: through earlier discussions and feedback, it ended up being joined in the same document, and they are related in the sense that, okay, one reason to do a key update is reaching the limits.
B
Thank you, Rikard and all.
B
I
Sorry, can you hear me now? Yes? Great, okay, I'll try to make it quick then. So, yeah, I want to talk about DNS over CoAP. I'll give a quick introduction for those who weren't here for the last few meetings, give you the changes since the last interim I attended, and then go a little bit into some discussions around DNS push and CoAP Observe, which came up after we pushed the current version of the draft.
I
Basically, we want to protect DNS requests from IoT devices against eavesdropping, and the typical solution for that is to encrypt the name resolution. Since other approaches don't work in the IoT scenarios we are looking at, our proposal is to use DNS over CoAP, which is able to encrypt the communication either via DTLS or OSCORE. We also have additional advantages, like block-wise message transfer to overcome path MTU problems, which you encounter, for example, with DNS over DTLS, and we can also share system components with CoAP.
I
I
I
I
But, as I already said in the beginning, this still needs some work. We resolved issue 4, which was about having some repetitions in the DNS format; at some point, we want to put that into a separate draft for a compressed content format, which will probably be CBOR-based. And left to be done, and maybe we can just discuss this here as well, are the IANA considerations, where we didn't pick an ID for the application/dns-message content format.
I
Yes,
the
main
reason
is
because
the
up,
probably
with
dms,
some
nice
numbers
we
want
to
use-
and
I
wanted
to
maybe
discuss
this
first
and
so
after
all,
that
I
go
into
the
dns
push
and
how
to
implement
this
with
co-op
observe
so
first,
a
brief
primer
on
how
dns
push
notifications
work
according
to
rc8765.
I
They
are
based
on
the
waitful
operations
and
I
can
basically
very
organize
with
the
query
response
paradigm,
so
you
send
the
dso
stateful
operation
with
a
subscribe
message
in
encapsulated
for
a
certain
record.
You
want
to
query
that,
then
is
for
that.
Then
you
can
get
an
basically
a
subscribe
ack
and
then
the
resolver
can
start
to
push
also
with
dso
messages.
I
So
the
problem
with
that
is
that,
with
that,
the
rc,
eight
seven
six,
four
five,
basically
just
states
that
we
require
dns
over
tls
for
that
and
there's
basically
a
must
to
that,
and
we
also
need
additional
state
information
so
to
not
soften
this
requirement
and
also
to
make
the
implementation
on
the
client
side
simpler,
is
to
use
rather
use
co-op,
observe
as
a
signal
to
use
subspace
instead
of
a
query
at
the
doc
server,
which
would
look
a
little
bit
like
this.
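Conceptually, the Observe-based subscription could look like the sketch below; the field names are illustrative and not a wire format:

```python
def doc_subscribe_request(dns_query: bytes) -> dict:
    """Conceptual shape of the idea above: instead of a DSO SUBSCRIBE,
    a DoC client sends its ordinary DNS query in a CoAP FETCH that
    carries Observe: 0 (register); the server then pushes updated
    answers as Observe notifications. Field names here are purely
    illustrative."""
    return {
        "code": "FETCH",
        "options": {
            "observe": 0,  # 0 = register the observation
            "content_format": "application/dns-message",
        },
        "payload": dns_query,
    }
```

The client-side logic stays the ordinary query path; only the added Observe option turns the query into a subscription.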
I
I
Oh, there's a little arrow there, but I leave that to the reader. And for unsubscribing, it's also just a normal unsubscribe with FETCH, or when a reset is coming back from the client.
I
So
as
an
example,
there
could
be.
I
put
that
on
the
mailing
list.
Maybe
some
comments
there
could
also
be
given.
Basically,
I
I
worked
out
how
dns
service
discovery
could
work
when
using
doc,
and
basically,
if
you
would
just
ask
for
a
powerpoint
of
your
co-op
services
in
a
network
and
then
get
respond
back,
and
this
would
be
the
push
and
if
there
are
new
services
joining
those,
this
list
would
be
just
updated.
I
So
yeah,
that's
basically
it,
for
my
part,
at
least
what
I
drafted
out.
So
if
there
are
any
questions
or
comments,
you're
welcome
to
give
them
now.
B
None
in
the
queue
in
the
chat
in
the
room
so
yeah,
this
draft
has
been
received
pretty
well
as
so
far,
especially
the
the
previous
idea
of
meeting
with
also
a
few
people
supporting
the
work
and
committing
to
review.
So
the
chairs
believe
the
obvious
next
step
is
starting
a
call
for
adoption
on
this.
B
G
B
It's the second icon from the left on top, to request sharing the slides.
A
B
A
B
G
G
I'm presenting on behalf of the co-author. So, just a few words about the motivation: this is not the first time we've presented this work; we also presented it at the interim meeting in March. The motivation is to find a mechanism to measure performance in CoAP, to meet operational requirements. And of course, since we are talking about constrained environments, we need a simple mechanism for network diagnostics, since it is resource-consuming for constrained nodes to read sequence numbers and store timestamps.
G
G
Then we also included a new section on management and configuration aspects, even if these aspects are not fully in scope of our draft, and we also improved the security considerations part by adding new considerations on DTLS and OSCORE as well.
G
The methodologies used by the option that this draft introduces are not new; they are already defined in the IPPM working group. There is a draft, past working group last call, that is called Explicit Flow Measurements, and these techniques employ a few marking bits inside the header of the packet for loss and delay measurement. In particular, there is the spin bit idea: creating a square wave signal using one bit.
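The spin-bit idea can be illustrated with a toy observer that estimates RTT from the edges of the square wave; this is a sketch of the principle, not the IPPM algorithm:

```python
def rtt_from_spin_bit(samples):
    """Sketch of the spin-bit idea mentioned above: an endpoint reflects
    a single bit so that its value forms a square wave whose period is
    about one RTT; an on-path observer estimates RTT from the spacing of
    the signal's edges. `samples` is a list of (timestamp, bit) pairs as
    seen at the observer."""
    # Collect timestamps where the bit value flips relative to the
    # previous sample (the edges of the square wave).
    edges = [t for (t, b), (_, prev) in zip(samples[1:], samples[:-1])
             if b != prev]
    if len(edges) < 2:
        return None  # not enough edges to estimate anything
    gaps = [b - a for a, b in zip(edges, edges[1:])]
    return sum(gaps) / len(gaps)
```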
G
G
Okay, now I want to describe one of the main additions that we made in the latest version: the details of the scenarios and use cases.
G
As I said, there are different cases: non-proxy endpoints, collaborating or non-collaborating proxies, and OSCORE. In the case of non-proxy endpoints, the CoAP option can be applied end-to-end between client and server. Since it is elective, it can simply be ignored by an endpoint that is not configured to handle this option.
G
The
the
enable
measurements
are
end-to-end,
of
course,
between
client
server,
home
parts,
botup
strip
and
downstream
on
the
observer
and
also
intra
domain,
because
if
you
are
more,
if
you
have
more
than
on
one
observer,
that
can
be
the
borders
of
a
domain,
they
can
also
do
the
measurement
between
two
observers,
so
you
can
have
a
sort
of
intra
domain
on
path
measurement.
G
All these measurements are defined in the IPPM draft, so they are not new, let's say, for this option or this environment. Then we have the scenario with collaborating or non-collaborating proxies. In this case, the option can be applied end-to-end between client and server, or also between proxies.
G
The
enabled
measurements
can
be
also
can
be
different,
depending
on
the
case
of
collaborating
proxima
or
non-collaborating
proxies
in
case
of
collaborating
process.
Of
course,
the
measurement
can
be
end-to-end
and
on
path,
the
intradomain
as
the
case
of
non-proximity
endpoints,
while
in
case
of
non-collaborating
proxy
they
can
also
be
end-to-end
and
on
path
and
intra-domain,
but
cannot
be
between
proxies
and
also
the
proxy
cannot
act
as
an
unpopped
observer.
G
G
Final slide. This draft, as I also mentioned before, is based on these methodologies, which are also optional in the QUIC specification, and there is another draft applying them to IPv6 as well.
G
So
it
aims
to
meet
the
limited
resources
of
constrained
environment,
since
the
mechanisms
are
quite
simple,
so
we
believe
that
at
some
point
maybe
core
working
loop
can
consider
to
adopt
this.
This
work
for
performance,
but
of
course,
for
now
we
welcome
questions,
comments
and
also
cooperation
on
this
work.
Thank
you.
B
Thank
you.
Unfortunately,
there
is
no
much
time
for
discussion.
Just
please
pay
attention
to
the
comments
in
the
chat
from
on
carson
and
christian
yeah.
B
To
add
a
high
level
comment,
I
was
doing
the
draft
and
I
still
have
quite
a
harsh
time
in
navigating
the
the
many
alternatives
that
are
possible
to
use
the
way
you
can
combine
them
and
which
limitations
possibly
or
drawbacks
you
have,
depending
on
which
combination
you
go
for.
A
G
Yeah, I will try my best. Consider that, regarding the limitations, there is the draft in IPPM that I reference, and in that draft there is a lot of description of all the possible measurements and also the limitations. So maybe I can add more references, so that the reader can go to the IPPM draft, if they want to go deep, to understand the limitations and the trade-offs of the different solutions. I will improve on that. Thank you for the review.
B
G
I guess, for now, it is standards track, because, yeah, we are asking for a new option, so I guess it should be standard. But maybe, if we want to... I don't know what the procedure is for defining a new option in CoRE.
D
You
just
register
one
so
this
this
could
be
a
registration
that
you
make.
We
have
space
for
for
expert
review,
so
I'm
not
saying
that
this
is
what
we
should
do.
I'm
just
saying
that's
the
gamut
of
potential
outcomes
that
we
have,
and
an
experimental
protocol
for
instance,
would
also
be
a
possible
outcome.
D
B
E
So what I'd like to present here is how we could transport CoAP over GATT, and I will digress at the end a bit into the transport-indication draft, which is not really on the agenda for today, but it is something that interacts a lot with this document.
E
The original design for CoAP over GATT, which is CoAP over the GATT protocol that is part of Bluetooth Low Energy, comes from 2020, around IETF 109.
E
Since then, I've worked more on transport indication, which provides a lot of background for CoAP over GATT, because CoAP over GATT would register a new CoAP scheme, and there is this open item of finishing that, of making something usable with respect to protocol negotiation, before we register any additional schemes.
E
Now, in early 2022, some industry interest sparked up on that topic, and there are now people from both ASSA ABLOY and EDF that are interested in furthering the specification and using it, for example, in combination with the ACE framework. So they and I have started filling out some gaps in the specification, which I'll come to in the next slide, and here I'm giving the status update on this.
E
So what has happened in the document already, in the version that I've uploaded before the cutoff, is that it is now more explicit in differentiating this from the alternatives that are around. In particular, choosing to go directly over GATT, versus choosing to go over the IPSP profile on BLE, which exists and is established, is a choice that is shaped by limitations of particular environments.
E
So, when you're operating on a cell phone, you do not have the option to even reach the necessary components in the Bluetooth stack to implement 6LoWPAN over BLE or IPSP; you're limited to basically taking GATT. Everything else is not as easily accessible as GATT from an application implementation point of view. There is one alternative that is also being considered.
E
That is the Golden Gate approach that Fitbit presented some time ago, which is taking the accessible components up to GATT and then building an IP transport on top of this. But given that, in practice, this would mean that applications are starting to implement an IP and a UDP stack, and given that this is introducing a lot of overhead, I think it is not the right approach for the particular use cases that we are looking at here.
E
Things that are happening as part of that, that are ongoing and not part of the document yet, are about fixing limitations. The original draft was intentionally very limited: there were no concurrent requests, and there was only one role described of who was the server and who was the client.
E
There were not even tokens in there, because the original idea was that there are different characteristics in GATT, so maybe these could be exploited to not even send a token at all. Some of these turned out to not work exactly that way, so these are being reshaped now: some of the assumptions didn't hold, and some changes are just necessary to, for example, get a bi-directional transport.
E
What GATT provides is highly asymmetric, so this needs a few changes. But I won't go into the details of those changes, because they are essentially something that is being hashed out with people that are experts in Bluetooth Low Energy, and outside the scope of this working group's expertise. What is very much inside the scope is how things would be addressed. So the natural choice here is to have a new scheme and put Bluetooth MAC addresses in there.
E
So, in essence, we are down to: the platform will provide some identifier, and that identifier might only be meaningful on that particular host. And while this sounds strange for a URI, it's not completely unheard of, because we are in the very same situation whenever we have link-local addresses.
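[Editor's note for the minutes: the link-local situation referred to here shows up in URI form, where RFC 6874 requires the zone identifier's "%" separator to be percent-encoded as "%25", and the resulting URI is only meaningful on the host that owns the zone. A small illustration; the coap+gatt URI at the end is purely hypothetical, not a form defined by the draft.]

```python
from urllib.parse import urlsplit, unquote

# A link-local IPv6 address only makes sense together with a zone
# (interface) identifier; per RFC 6874 the "%" separator is encoded
# as "%25", so this URI is only usable on the host that owns "eth0".
link_local_uri = "coap://[fe80::1%25eth0]/sensors/temp"

host = urlsplit(link_local_uri).hostname   # brackets stripped by urlsplit
address, _, zone = unquote(host).partition("%")
print(address, zone)   # fe80::1 eth0

# A purely hypothetical CoAP-over-GATT URI in the same spirit: the
# Bluetooth MAC address below is an identifier the local platform
# hands out, again meaningful only on this particular host.
gatt_uri = "coap+gatt://00-11-22-33-44-55/sensors/temp"
print(urlsplit(gatt_uri).scheme)   # coap+gatt
```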
E
Their URI form also only makes sense on that particular host and is not usable across the network. But the bigger question this sparks is: when do we actually need those addresses? Because, sure, if we had a MAC address, we might try to get some routing via proxies in there, but that's not what typically happens. What typically happens are two scenarios.
E
One is that the device talks to a phone acting as a proxy, and what goes over the network is not even CoAP over BLE or CoAP over GATT but something completely different, like coap+tcp, translated by the proxy on the phone, and the address of the device never gets into the whole game. And the other scenario that we often see is that the device registers itself at some service that operates like a Resource Directory; for example, in LwM2M this is what happens. And only when you start thinking about more niche use cases, like, after having registered, still doing peer-to-peer contact...
E
I think that these considerations, for what addresses we can use and when we actually need them, should shape the direction in which transport indication is going. That is, nodes whose local addresses are not particularly meaningful might not want to use only a particular endpoint identifier when registering at a Resource Directory, but start using whatever more semantic identifier they have.
E
But then again, maybe we just hit what we are doing in things like proxies and the Resource Directory. Going forward with CoAP over GATT, I'll be working on the details of how this is transported over the GATT side, but I'll need input from this working group as to how this would interact with addressing, not only for this document, but also to further transport indication.
E
And last but not least, this might be a candidate for a Standards Track document now, given the way people want to use it, and this too means that we should have more of the discussion on this here, and not just work on this document alone.
D
Yeah, I just wanted to say a plus one, but I also wanted to create a local discontinuity in the space-time continuum, so we can continue talking for a couple more minutes. A few people have to go; that's unfortunate, but yeah. So I use this name, "non-traditional addresses", here just to point out that this may actually be the interesting question.
D
How do we work in environments where we cannot make the same assumptions about addresses? Of course, NATs already provide one such environment, but we pretty much know how to manage those. These addresses are weird, and we want to understand how to use them in places like directories and proxies and so on. So yeah, let's do that.
B
D
Yeah, so I looked at the CoRE groupcomm proxy thing, and actually I had it on my to-do list in line 175, and I definitely will get to this, but maybe not next week. So yes, the idea is that Jaime and I have to come to an opinion on what we're going to do there next.
D
But definitely this is a document that merits the attention, so expect to hear from the chairs about that soon. The coaps+jpy thing I wanted to mention: in the ANIMA meeting, it turned out they want to use resource discovery for a resource...
D
...that is not your grandfather's coaps resource; they have a special tunneling protocol installed there. And our knee-jerk reaction to changing anything in the transport, of course, is to add a +transport to the URI scheme, and we should check whether that is actually what we want to suggest to the ANIMA working group to handle this, because right now, the resource type registration they are making is a little bit icky, because they are advertising a resource that doesn't exist.