From YouTube: IETF112-CORE-20211108-1600
Description
CORE meeting session at IETF112
2021/11/08 1600
https://datatracker.ietf.org/meeting/112/proceedings/
A
Okay, we are at the top of the hour, so I think we can start. Welcome, everybody, to this CoRE working group session at IETF 112. I am Marco Tiloca; my co-chair is Jaime Jiménez.
A
And we assume people at this meeting have read the documents discussed today, to better contribute and be engaged in the discussion. The Note Well applies; there's no extra slide for that. We are using Meetecho, so the list of attendees is compiled automatically. I'll keep an eye on the chat, and a few people volunteered to help taking notes, Christian and Joran. Thank you so much again.
A
So this is the agenda for today. Following an introduction from the chairs, we'll go through four working group documents and four individual submissions. We start with href and CoRAL, presented by Carsten and Christian. Then we enter into group communication and Group OSCORE territory.
A
Then we have the key update procedure for OSCORE from Rikard, Christian has updates on cacheable OSCORE, I'll give an update on OSCORE-capable proxies, and we conclude with a brand new draft about a CoAP option for performance measurement, plus ten minutes of flex time. Does anyone want to bash this agenda in any way today?
A
Hearing none, let's get into the document status, and we always have good news. This was mentioned at the latest interim, but just to bring it to the main IETF session: since July we got one more RFC published, RFC 9100 on SenML versions.
A
Thank you very much to the authors and the working group for yet another achievement. Along the same lines, in the past months we got two more documents approved for publication as Proposed Standard. They are also in the RFC Editor queue now, where they joined two more documents already waiting there. For Echo-Request-Tag there shouldn't be any more blocking conflicts in this respect, and these four documents should be published in the relatively near future.
A
We also have two of the four CORECONF documents in IESG processing now. We got very good reviews from the IESG, and the authors are now processing the comments, well tracked as GitHub issues and discussed in dedicated design team meetings occurring bi-weekly; we have the next one early next week.
A
Those are YANG-CBOR and SID. Then, to mention the post-working-group-last-call documents: it's about the other two CORECONF documents, CoMI and YANG library. They're a bit in the background now, to prioritize the first two I mentioned, and once those are done the authors can focus a bit more on these ones too. There were just some open points left following the shepherd review.
A
A few heads-up on things to come: as also suggested by Francesca, the chairs plan to have an update of the working group milestones on the entry page of CoRE in the datatracker. Some milestones have been pretty much met in the past months or even years, and it's good to add a few new ones to reflect ongoing work, especially documents that at some point next year will be shipped to the IESG. Carsten?
C
Yeah, there was a question by Katyan a couple of hours ago on the mailing list, whether we should maybe have another of those CoRE applications meetings. I said maybe not this week, because that is already pretty full, but maybe it's actually worth trying to do an interim in about one month's time, specifically to talk about these applications things, because apart from CoRAL we don't have much applications-oriented stuff on the agenda today.
A
Yes, it might work; just quickly thinking, on December 1st. I would just like to synchronize a bit better on that, and on what to cover with it, during this week, even on Gather town. I think we can do that.
C
Okay, a second observation: a couple of interims ago we decided it would be too much work, and really not necessary, to recharter. Now there is an interesting rumor circulating that the fact that we are not rechartering means the working group will be concluded next week.
D
On the rumor, I have no idea; it's the first news I hear as well. On the other stuff, on problem-details and pubsub: in an interim it's nicer to present something and show progress, so it would be nice to have first, perhaps before an interim, a working meeting with the co-authors.
A
Okay, and thinking of already scheduled things: as agreed at the latest interim, before the cut-off we are going to have one interim on December 8th, a Wednesday, at the usual local time. Then the actual regular series will resume in January, every other Wednesday at the same local time, alternating with CBOR as usual.
A
This actually concludes the chairs' introduction, and if there are no more comments, we can move on to the first document. Maybe Carsten has a comment.
C
Okay, thank you. So I wanted to quickly report on the work on the draft that is called href, but that actually standardizes what CRIs are. I have 14 slides, 11 of which you have mostly seen, so I will run through them really quickly; but please do stop me if there is a need to discuss something.
C
So, just as a quick reminder: the CoRE group really tries to do the Web of Things, even if we didn't call it that way, and the web of course has three things: hyper-references, a transfer protocol, and a representation format. Moving this to devices means we did a new transfer protocol, we did various representation formats, not just a single one, and, well, we stuck with the hyper-references that are there in the web.
C
So we're trying to pair URIs, which of course won't be going away, with a concise form: the Constrained Resource Identifier, or CRI.
C
The path is structured by slashes, and in CoAP we decided to structure our queries by ampersands, because that's the existing practice in the web. There is also something called URI references, which are used together with a base URI as a short way to talk about a URI that is close to the context we are currently in. So you can say something like `foo`, and then you take the existing URI, the URI you are at, remove the last path segment, put in `foo`, and so on.
C
So we essentially have to reverse-engineer that, and one of the contributions that the CRI document in the end will provide is a much clearer expression of the data model underlying URIs. The draft called href defines CRIs and CRI references, the relative things that you use when you have a defined context. This work started a while ago; it was started by Klaus Hartke, and Jim Schaad had some pretty good contributions in this space.
C
The format is now CBOR-based, and the short form, the abstract content of the draft, is in the little piece of CDDL here. You have something that ends with a path, a query and a fragment, which are all three optional, and you start either with a scheme or authority, most cases of which are absolute URIs, or with a discard number, which makes relative URI references. Then there is fine print here, with optional port numbers and so on.
C
So this was moved around for a while, and then with -06 we finally ran into one problem we hadn't solved yet: URNs and similar structures, which include the DIDs that are popular in certain parts of the universe.
C
There you don't necessarily have a leading slash, so you have to have a way to indicate that, and we found a way: putting `true` in the place where normally an authority would be. This gives us a nice way to essentially have a fully parsed URI. The last thing we changed in -06 was to parse the host name as well, so `tzi.de` becomes an array of two elements.
C
These are "tzi" and "de", and there is a port number down there in the example, so we put the port number there as well. This needed a little bit of fixing, so after -06 came -07 and then an -08, sorry, and we now have something which I think can be called a consistent design.
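As an illustrative sketch (my own; the real CRI format is CBOR with its own rules and element ordering), the parsed form can be modeled as nested Python lists, with the host split into DNS labels and the path into segments:

```python
from urllib.parse import urlsplit

def to_cri_like(uri: str) -> list:
    """Split a URI into a CRI-style fully parsed structure:
    scheme, host labels, port, and path segments.
    Illustrative only, not the draft's actual CBOR layout."""
    parts = urlsplit(uri)
    host = parts.hostname.split(".") if parts.hostname else []
    path = [seg for seg in parts.path.split("/") if seg]
    return [parts.scheme, host, parts.port, path]

print(to_cri_like("coap://tzi.de:5683/ns/td"))
# -> ['coap', ['tzi', 'de'], 5683, ['ns', 'td']]
```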
C
A few updates are probably needed in the implementations as well, but by the end of the year we should have implementations that actually work together. From my point of view, we just need a little bit more implementation and implementer review, and then a working group last call.
C
One remaining issue is a pretty popular way to treat URIs in document formats like JSON-LD, where you put a URI prefix into one place and then, from a different place, reference this prefix, either explicitly or implicitly via `@context`, give the rest of the URI, and concatenate that to the given prefix. As they are right now, CRIs can only do this in certain places.
C
So, for instance, if we have a scheme and an authority, we can put a path after that, and CBOR-packed can handle that nicely; we don't need any special mechanism, we can handle this within CBOR-packed. But it becomes difficult if the prefix URI goes on into a partial path and expects the referencing site to actually put in the rest of the path.
C
Take a prefix that has two elements of a path name: it's under w3.org, and it has two elements, `ns` and `td`, as in namespace and TD for thing description. It then expects that you provide the next path component, and that is difficult to do right now; CBOR-packed doesn't provide you a form of prefix compression that would enable you to do that.
C
So this is an area where we have a functional deficiency, and we currently don't have a nice idea how to fix it. There are several not-so-nice ideas how to fix it, and this probably needs a little bit more thinking. Of course, we can also leave the functional deficiency in place rather than address it.
C
So this is one thorn in our side. The other thorn in our side is that CoAP has decided to not support percent-encoding in URIs, except for the specific case where the CoAP URI's own delimiters are escaped by the percent-encoding.
C
So when CoAP sees a path containing `%2F` (an escaped slash), the URI-to-CoAP conversion converts the `%2F` to a slash, so that slash is part of the path segment and not the delimiter between path segments, and everything works like it should. But that doesn't work with W3C DIDs, which we probably have to support in some way. We had a meeting just last Friday where we discussed this, and we're a little bit unhappy with the situation that we cannot really express all potential DIDs using CRIs.
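A minimal sketch (my own, not normative) of the URI-to-CoAP path conversion just described: split on literal slashes first, then percent-decode each segment, so an escaped `%2F` survives inside a segment instead of acting as a delimiter:

```python
from urllib.parse import unquote

def uri_path_to_coap_segments(path: str) -> list:
    """Convert a URI path into CoAP Uri-Path option values.
    Splitting happens before decoding, so "%2F" ends up as a
    slash *inside* a segment rather than as a delimiter."""
    return [unquote(seg) for seg in path.lstrip("/").split("/")]

print(uri_path_to_coap_segments("/a%2Fb/foo"))  # -> ['a/b', 'foo']
```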
C
So, yeah, we thought a little bit, and there is a pretty easy fix, which maybe is a little bit easier on the specification side than on the implementation side, but that's not that bad: all the text strings that are used in URIs get an optional form in which they alternate unencoded and percent-encoded parts.
C
So the `bb%3Ac` up there in the URN example, which would look similar for the DID example, contains a colon that is not actually used by the URN's internal syntax, and the name may have been registered that way by someone before; there's no way a generic processor is going to know what the internal syntax of that URN is.
C
So percent-encoding may be needed to escape things that would otherwise look like the delimiters used for the internal syntax, and the idea is to allow, instead of the string `a:bb:c` in the second-to-last line, an array there. The array is structured in such a way that the odd-numbered elements are the ones that get percent-encoded.
C
But they don't need to be stored percent-encoded; there's no need to actually do the percent-encoding. They can be fully processed by the implementation, except that they carry the special percent-encoding semantics, so they would not be picked up by a string scanner that looks for colon delimiters in the URN syntax.
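A sketch of that proposal (my own illustration; the function name and the exact escaping set are assumptions, not the draft's rules): even-indexed elements are emitted verbatim, odd-indexed elements get percent-encoded on the way out, so delimiter-like characters inside them cannot be mistaken for syntax:

```python
from urllib.parse import quote

def render_alternating(parts: list) -> str:
    """Render an alternating [plain, to-escape, plain, ...] array into
    URI text form: odd-indexed elements are percent-encoded so they
    cannot be confused with the internal syntax's delimiters."""
    out = []
    for i, part in enumerate(parts):
        out.append(quote(part, safe="") if i % 2 else part)
    return "".join(out)

print(render_alternating(["a:bb", ":", "c"]))  # -> a:bb%3Ac
```

An implementation can still work on the decoded pieces directly; only the textual rendering needs the escaping.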
C
C
So here we have a proposal, and now of course it's a matter of design taste whether people think this is a proposal we can pick up, or whether it is just too ugly. Not picking it up, of course, would mean that we don't have a way to handle all these cases of percent-encoded URIs. So yeah, I'm happy that we have a proposal, that's a good thing, but I'm still not entirely convinced myself. I do think that if we actually want to support percent-encoding in some form, this is by far the simplest way to do it; everything else one can come up with is even uglier.
E
I'd just like to point out that the alternative to doing these percent-encodings, and other things we might apparently need to do, is to just admit URIs in several places where CRIs are acceptable. That will be something to work through, also in the design team, but I'd also solicit input here on whether that is just the easier thing to do.
E
Yep. Whereas doing these percent-encoding tricks here, and some others, means that you might have to go into a URI anyway, because processing it out of the separated form, if you have semantics in there, might be comparatively tricky too. I think we should just keep an open mind towards both solutions while we are exploring them.
F
So, yeah, of course it's tricky to encompass them all, but my suspicion, and maybe my hope, is that when we try to support the majority, or the most, of it...
F
...in the end, the way they're constructed today might be contained a little bit, and the CRI could then be the guidance for how you would do it. That might ease the wilderness out there a little bit and calm it down a bit, so I think the effort of trying this is worthwhile. I understand that, of course, you fall back to a URI if nothing fits, but in the end, maybe that's something we can make go away over time.
C
Yeah, basically the idea in the CRI draft was to actually document which URIs are not supported, and there are still some very weird URIs that are not supported even with this percent-encoding fix.
C
But I think those don't have much practical relevance where percent-encoding is actually being used. Even though we know that implementations usually are really bad in this space, so they probably won't be fully correct, they will probably at least cover something like 80 percent of the cases that actually do occur.
E
Hello. Like Carsten did, I'd like to go through a bit of an introduction of what CoRAL does, because it's been a while since this has been presented at a full IETF meeting.
E
CoRAL is a data model and a language that allows us to talk about resources on CoAP and other protocols, and about how to interact with them, in a way that is suitable for constrained devices. Similar to how the CRI fits the analogy of being a constrained-device-processable form for URIs, here we have a format that covers areas of metadata. Before I get into the concrete formats, here is what this could replace or is similar to.
E
We already have a few users that would want to use this. The first two documents, problem-details and pubsub, are in this working group. Group manager administration is in ACE, which would be a bit simplified if built on CoRAL. SDF might have applications that we're starting to explore, and basically anything that so far uses Link Format could just as well build on CoRAL, though it's not precisely the same.
E
So, for example, going ahead with Link Format, RFC 6690: this had a lot of string parsing, like we had before with processing URIs byte by byte and working out what they mean semantically. In CoRAL, all the information that in Link Format is conveyed by possibly escaped strings, and by processing parts of a URI out of them, is expressed in CBOR. But then again, compared to plain CBOR, we have semantic information there.
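To illustrate the kind of string parsing that Link Format requires (a simplified sketch of my own; real RFC 6690 parsing has more quoting, escaping, and multi-link rules):

```python
def parse_link(line: str):
    """Parse a single RFC 6690-style link such as
    '</sensors/temp>;rt="temperature-c";ct=40'
    into (target, attributes). Simplified: assumes no embedded
    semicolons or escaped quotes inside attribute values."""
    target_part, *attr_parts = line.split(";")
    target = target_part.strip()[1:-1]  # strip the surrounding '<' and '>'
    attrs = {}
    for part in attr_parts:
        name, _, value = part.partition("=")
        attrs[name] = value.strip('"')
    return target, attrs

print(parse_link('</sensors/temp>;rt="temperature-c";ct=40'))
# -> ('/sensors/temp', {'rt': 'temperature-c', 'ct': '40'})
```

In CoRAL the same information arrives as already-structured CBOR items, so none of this delimiter scanning is needed.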
E
We have properties of resources that are described by predicates, quite similar to RDF, which allows us to use several domain languages in the same document. So we can, for example, extend, say, a pubsub broker that uses terminology originally designed for pubsub with application-specific semantics that are ignored by processors unaware of that application, but that can be used to augment functionality. So terminology can be reused when building an application on top of CoRAL.
E
Terminology can be reused, and there's no need to define a fully custom format based on the old terminology; things can just be mixed in a single document. Compared to RDF, to which this is also similar, especially in the data model, this is processable by constrained devices: there's no URI processor that you need, and it's quite compact.
E
Still, all those are similar: for RDF and for RFC 6690 Link Format, the latest version now defines conversions back and forth, at least for a subset. With RDF the exceptions are relatively minor; with Link Format it needs a bit more thinking. Link Format can only be converted to CoRAL provided the application uses the semantics that we use in CoRE for the attributes, because otherwise the values of the attributes in the Link Format document are not necessarily well defined or registered anywhere.
E
The interaction model didn't change too much; it was already in very good shape a few versions ago. Basically, it describes a user agent, similar to a web browser, that hops along this graph or tree: it may need to fetch a resource that it encounters, gets a representation of that resource (typically a CoRAL document again), and finds in there a link or form that it knows how to follow.
E
It performs that until the program terminates or reaches a state where it waits. Compared to the version presented a year ago, we've changed a bit how the dictionary is precisely defined: CBOR now has, at least as an adopted item, the packed CBOR format, which reserves a few CBOR tags that can be thought of as compressions of larger CBOR items. This is basically what CoRAL has been doing for a long time, but now, instead of using plain integers, we use those tags and all the semantics defined for them.
E
What is not clear there yet is how precisely we will set up the dictionary based on which the compression happens, because it's not like gzip or zstd, which can ship the whole dictionary in the file. The typical situation, especially for Link-Format-ish documents, will be that the document format is declared as a media type with a parameter, which is all compressed into a single content-format number.
E
Which of those options we take will probably be guided by the applications that we try this out on. The binary serialization we didn't change a lot; that is also something for a later stage, when we have a corpus of example cases that we can evaluate.
E
Does it make sense to spend a byte here or spend a byte there? This is best done with the larger basis that I'll come to in the next step. There used to be a text serialization, very similar to Turtle, that took up a large portion of the document, and we've removed that for the time being, expressing CoRAL in CBOR diagnostic notation instead. The diagnostic notation of CBOR is now a bit easier...
E
...now that there's the EDN (extended diagnostic notation) draft around, which allows us to just write CRIs in text URI-reference representation. Occasional examples are also expressed in Turtle, in those cases where we just need the triple semantics and not all the structure that is in the tree shape, and where we don't need to express whether something is written down in packed format or just spelled out. Then there is the topic of how we do queries around this, and how we modify a document.
How
we
modify
a
document
like?
E
Can we use this with FETCH, with POST? And how do we describe where the data comes from, which is related to reification of those statements, that is, making them into something we can later talk about as having come from this or that authority? That is very, very important to have, but we're currently just putting it off: we're keeping those things in mind, but they are for later phases in the specification, and some of them we might also defer to a second stage, when the basic model is already done.
E
This is relatively easy for things like problem-details, where we're already using those documents, and aligning this with SDF will hopefully give us a bit more insight, because we don't precisely know how things like forms are used there at all. From that we hope to get a corpus of items that we can try the binary serializations against, and also evaluate which steps in the dictionary setup we need.
E
Something we could use working group input on right now is defining the subset of features that we want to have right away, especially when it comes to things like patching or fetching: is this something we should aim for in the current iteration, or is it something that would be okay to ship as an update? Yep, thanks for your time.
G
Okay, I need to do it right; let's see, sorry, presentation view, that's not the right icon there. Okay, ask to share slides, is that the one?
G
Okay, I see it at least now; it works, there you go. So I'm going to talk about groupcomm-bis, now at version -05 already. Let's briefly recap the goal of this document (I'll make it shorter than last time): we are working on a normative successor to RFC 7390, which was Experimental and addressed CoAP group communication.
G
We obsolete that predecessor RFC and also make some updates to CoAP and Observe. The idea was to have a new kind of standard reference document for group communication that implementers can also use. In scope, we cover various things all around group communication, so not just over UDP or IP multicast but more in general now as well, and we also cover the latest features like Observe and block-wise, and, of course, security.
G
Security is also a major part of the draft: we now define Group OSCORE-based security besides the unsecured CoAP group communication. There are also somewhat more extensive definitions of the group types and how they relate to each other, and some guidelines for secure group communication.
G
Also, a change was made in section 2.2.1 about how you can identify or name an application group within the group URI, or more generally within the CoAP request.
G
We put the resolution for this issue, issue 28, in the new version, but it's still pending working group review and approval, of course. So now I'll just move to that. This is a little bit of an intermezzo where I introduce issue 28, application group naming. What we have defined in the draft is that application groups can be named with any identifier, such as a string, a number, or a complete URI.
G
You can also put it explicitly in the port number; the application group then becomes a number, and in that case it is also part of the CoAP group, because the port number is included in the CoAP group. So there's some overlap in that case between what is the CoAP group and what is the application group. In these cases the receiver can at least identify the group, and the sender can identify it within the URI as well, before it is encoded into CoAP form.
G
Alternatively, the sender adds this, for example, in a CoAP option added to the request, so it's not part of the group URI but is still in the request. You could also have it implicit: then the receiver has to figure out what the application group is, or there is just one default application group associated to that CoAP group.
G
Okay, we'll just continue now with the updates; we made three more. One was in section 2.2.3.
G
We also had another open issue, also based on a review by Christian, about what kinds of group discovery are possible using CoAP. We mentioned there is RD group discovery, and we mentioned there is also discovery of groups you can do with pure CoAP, client-to-server without using an RD; it was not so clear what this really was and what kinds of discovery are possible. So now we expanded the text here and also added some examples of what you can discover with basically CoAP discovery and link format.
G
This is also pending working group review and approval. A second important change was the second point on the slides: we have stronger advice on unsecured group communication, now just saying, in capitals, NOT RECOMMENDED. This was one of the open issues, number 20. We hope that's okay for the working group, to have it as a normative statement. It is still possible to do it, but if you have ways to protect the communication, then that is definitely recommended.
G
I think multicast for discovery was mentioned as one of the cases where you often do still need it. Marco, that's correct, right?
A
Regarding this point, Carsten: following John's review, we also added in this version more content about the risk of amplification attacks, so that's discussed more now than in the previous version. Good, thank you.
G
Okay, yeah, that's right. And then, finally, there's the catch-all improvement: we made some editorial improvements and fixes, for instance in the description of the group relations and in the diagrams.
G
That's basically doing a query on /.well-known/core to discover something, and of course all of this is very much application-dependent; there's no single way to do it. It depends if and how the groups are encoded as part of resources on servers. In this case we assume that application groups are represented as resources, and in the top example these resources are located within a specific path, /g/<something>, where that something is the group name.
G
So basically this sends a query to the CoAP group cg1, which could be a URI that resolves to a multicast address, or just a plain multicast address and port. This goes to all the cg1 members, and these members will be queried for application groups. With the star we basically have a wildcard, so any application group name should match here.
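A rough sketch (my own; the resource names and query patterns are made up for illustration, not taken from the draft) of how a server might match such a wildcard query against its application-group resources:

```python
import fnmatch

# Hypothetical application-group resources hosted under /g/
resources = ["/g/lights-floor1", "/g/lights-floor2", "/g/hvac"]

def match_groups(pattern: str) -> list:
    """Return the application-group resources whose path matches a
    wildcard query such as '/g/*' or '/g/lights-*'."""
    return [r for r in resources if fnmatch.fnmatch(r, pattern)]

print(match_groups("/g/lights-*"))
# -> ['/g/lights-floor1', '/g/lights-floor2']
```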
G
Then the second example is sending a multicast specifically to the realm-local CoAP group of All CoAP Nodes, the ff03::fd address. There it's querying a specific group, encoded under a /group parent resource, with the /group1 child resource in there.
D
G
It's true that they are kind of in text there. Maybe it would help to also have these example URIs; they are a bit more specific than what we wrote down, I think.
D
G
Okay, yeah, that could be helpful to add, maybe this particular example. Now we have some text, and you could have a request-response type of thing added to that. Okay, it's definitely something to consider.
G
Yeah, that's fine. Let's see, we can then go, I think, to the next steps for this.
G
Now the question is: do we need more reviews of the entire document? I copy-pasted here the promises from IETF 108, so we have Christian's and Francesca's reviews listed. Christian actually did a review, because I got at least a couple of comments from him, so I'm not sure if you wanted to do more than that.
B
G
Yeah, okay, no problem. The question was also: we can do this as part of the working group last call once it is started, and then that could trigger another review. The previous review comments, and there were a lot from John and Christian (so thanks for that), are now addressed, we think at least. That's why we believe version -05 may be ready for working group last call now.
G
Okay, and yeah, that's it for this time. Thanks for your attention; in case you have some questions, let me know.
G
Okay, very good, yeah, to see if we solved the issues in an understandable way, basically.
A
We have submitted version -13 before the cut-off, and the version before that was indeed a major revision based on comments from working group last call and some follow-up comments. In comparison to that, this version comes with much simpler updates. To start, we updated the terminology to be aligned with what the EDHOC draft in the LAKE working group is doing in the naming of public keys and authentication credentials.
A
So now we are referring to CCSs, and it's sufficient to refer to RFC 8392. While proofreading the draft again, we also noticed there was an oversight in defining the key derivation of one particular key.
A
That's also fixed. Other than this, the major update in this version was some more specific text about what is mandatory to implement. There was an issue that John opened about this, I think even two years ago, and thanks to this update it's also resolved now.
A
It boils down to what we can expect constrained devices to support, especially when it comes to the signature algorithms and the companion key agreement algorithms for the pairwise mode. We are fundamentally taking the same rationale used in the EDHOC document in the LAKE working group, adapted to the Group OSCORE case. It reads pretty much like this: for the group mode, we expect non-constrained devices to support both the EdDSA algorithm and the ECDSA algorithm.
A
We expect constrained devices to support at least one or the other, in order, at the end of the day, to support as much interoperability as we reasonably can. As a parallel thing, the pairwise mode follows the same rationale: we expect non-constrained devices to implement both key agreement curves, and constrained devices to support at least one. This can probably be relaxed in the near future as more algorithms are supported also in hardware, but it seems a reasonable thing to do for the time being.
A
We waited anyway, considering that we had these small points still to close. At the moment we are not aware of any other open points or issues, and we have also updated our implementation for Californium, aligned now with the latest version -13.
A
So we believe this version -13 is now ready for a second working group last call. Independent of that, we are also starting to produce test vectors, taking as a starting point the ones of the OSCORE RFC; but we expect these to be a bit longer, since we want to cover the group and the pairwise mode, and possibly the combinations of the different signature and key agreement algorithms.
A
So content-wise they are aligned and both mature, I believe. I think it's easier if they proceed to working group last call in parallel; certainly it's easier, I believe, for the IESG to receive them together.
D
All right, I think we can do that. How long would it take to have the test vectors ready, by the way? Do you have an estimation?
D
And could we also get, already in this meeting, an estimation of how many people have read the current version? Maybe not the current one, but the one that was the major update, the previous one, just to see some participation there.
D
Right. And also, I mean, for the second working group last call we need more eyes on this. Do we have some volunteers already to have a look at the latest version? Let's go. Okay.
J
So I will be presenting this work on a key update for OSCORE, now also called KUDOS as a short name. Let's start with a short recap. First of all, what this is about: OSCORE uses AEAD algorithms for providing security, and there is a CFRG document, which is referenced here in this slide, that defines the fact that you need to obey certain limits in terms of key usage, when it comes to the amount of encryptions and the number of failed decryptions.
J
And if you reach those limits, you should rekey, because extensive use of the same key can enable breaking security properties of the AEAD algorithms. So basically this draft has two main parts. The first part is the study of these limits and their impact on OSCORE, which means, among other things, that we derive some appropriate limits for OSCORE, for a variety of algorithms.
J
We also define counters, message processing details, and practical steps to take when limits are reached. So it's about the limits and how you should change the message processing in OSCORE to take this into account, which practically means counting key usage. We also took into account, by the way, input from John Mattsson at the April CORE interim, and we got very recently some further input from him, which we will be taking into account.
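The "counting key usage" described here can be sketched as a pair of per-context counters. This is only an illustrative model, not the draft's actual data structures or names: `q` counts encryptions performed with the key, `v` counts failed decryptions, and crossing either limit triggers a rekeying.

```python
# Illustrative sketch of per-context AEAD usage counters (names are
# hypothetical, not taken from the KUDOS draft).
class UsageCounters:
    def __init__(self, q_limit, v_limit):
        self.q = 0                # messages encrypted with this key
        self.v = 0                # failed decryption attempts observed
        self.q_limit = q_limit
        self.v_limit = v_limit

    def on_encrypt(self):
        self.q += 1

    def on_failed_decrypt(self):
        self.v += 1

    def needs_rekey(self):
        # Rekey when either usage limit is reached.
        return self.q >= self.q_limit or self.v >= self.v_limit

ctx = UsageCounters(q_limit=2**20, v_limit=2**20)
ctx.on_encrypt()
assert not ctx.needs_rekey()
```

In practice the counters would have to survive reboots (a point raised later in this session under "material to save to disk").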
J
What you want to do is renew the Master Secret and Master Salt, and thus get new Sender and Recipient Keys, practically rekeying your context; this method also achieves perfect forward secrecy. Going into some details and updates on the key limits: first of all, a recap again on the key limits. They are discussed in this CFRG document.
J
You need to limit key usage for encryption, which is counted as the 'q' parameter or 'q' variable, and invalid decryptions, which is the 'v' variable. So basically what this draft does is define fixed values for q, v and l, and from those values you calculate the CA and IA probabilities, the confidentiality advantage and integrity advantage, which is basically the probability of breaking these properties of the algorithm.
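One piece of this arithmetic that is easy to check independently, and which explains why AES-128-CCM-8 gets special treatment later in the presentation, is the generic tag-guessing bound: with an 8-byte (64-bit) tag, an attacker making v forgery attempts succeeds with probability on the order of v / 2^64, regardless of q and l. The numbers below are illustrative, not the draft's chosen values:

```python
# Generic blind-forgery bound for a 64-bit AEAD tag (AES-128-CCM-8).
# Illustrative arithmetic only; the CFRG document's full formulas
# include additional per-algorithm terms.
TAG_BITS = 64

def forgery_advantage(v):
    """Lower bound on integrity advantage from blind tag guessing."""
    return v / 2**TAG_BITS

# With v = 2^20 failed decryptions allowed, IA is about 2^-44, already
# above a 2^-50 target, unlike the longer-tag algorithms.
ia = forgery_advantage(2**20)
assert ia == 2**-44
```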
J
So what you want is to set q, v and l, and from those calculate acceptable values of CA and IA. We also added some text; now I go into some updates since the last version. So, what we did:
J
We added an explicit mention of the fact that, when you send an OSCORE message, you have to obey the l value, which means you have a practical size limit on the amount of data you may send, since l is basically the message size in cipher blocks, and you should not exceed that. We have some text there specifically on how you can easily calculate that.
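The blocks-to-bytes conversion being discussed is a one-line calculation. As a sketch, assuming a 16-byte cipher block (the AES block size; other algorithms may differ, which is exactly why the draft's table is per-algorithm):

```python
# Convert the l limit from cipher blocks to bytes, assuming a 16-byte
# (AES) block. Illustrative only; see the draft's table for actual
# per-algorithm values.
BLOCK_BYTES = 16

def max_message_bytes(l_blocks):
    return l_blocks * BLOCK_BYTES

assert max_message_bytes(2**8) == 4096    # l = 2^8 blocks -> 4 KiB
assert max_message_bytes(2**10) == 16384  # l = 2^10 blocks -> 16 KiB
```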
J
We also now, after a suggestion from Christian, have a table showing the values of l not just in cipher blocks but also in actual bytes. Continuing on: we have this table where we show the IA and CA probabilities for a number of algorithms, and these are all algorithms except AES-128-CCM-8. We deal with AES-128-CCM-8 in a separate table, and these algorithms are the ones for which the CFRG document defines formulas.
J
So our probabilities are in fact even lower than 2^-50, and that brings me to this red line here, which says that we do intend to increase q and l further, because we do seem to have some margin for doing that, as we are still way lower than 2^-50 in terms of the probabilities. So there's...
E
Listen, I just think that if we can increment l by at least one power of two, then full messages can fit, because 1024 bytes is enough as a payload, but the whole message will be longer. So if that's the limit, it's kind of impractical, because it limits block-wise transfers to 512-byte blocks.
J
We don't present it here, but if you check the draft, we do have a table now which shows the actual l value in bytes, which then of course depends on the algorithm; but check that, we have an actual table showing that information. And so basically here's the table where we deal with AES-128-CCM-8, because that's a bit of a special case that you need to treat separately, since here you end up with quite a high IA probability.
J
So we chose custom values for AES-128-CCM-8 to try to optimize towards reasonable values of q, v and l, because if you had l = 2^10 here, you know, you couldn't have a very high q or v. So here we have l = 2^8, as you see at the green arrow, which marks the recommended values that we have currently set. And of course, here's an open question.
J
...the key update for OSCORE. So essentially the client and server exchange two nonces, R1 and R2, and you have this updateCtx function that you use to derive new contexts using the nonces. Practically, you start with your current context, you have one intermediate context, and then you end up with your new security context; and we also managed to get a number of beneficial properties here.
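The two-step derivation described here (current context, then an intermediate, then the new context, mixing in R1 and R2) can be sketched with an HKDF-like chain. This is only a shape sketch: the actual KDF, labels, and inputs are specified by the KUDOS draft, and everything below (function names, label strings) is a placeholder.

```python
# Minimal sketch of a KUDOS-style two-step context update: mix the two
# exchanged nonces into the Master Secret via successive HKDF-like
# steps. Labels and key sizes are hypothetical, not the draft's.
import hmac, hashlib

def hkdf_like(secret, salt, info, length=16):
    # HKDF extract-then-expand, collapsed to one expand block.
    prk = hmac.new(salt, secret, hashlib.sha256).digest()
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()[:length]

def update_ctx(master_secret, r1, r2):
    # Step 1: current context + R1 -> intermediate context.
    intermediate = hkdf_like(master_secret, r1, b"key update step 1")
    # Step 2: intermediate context + R2 -> new context.
    return hkdf_like(intermediate, r2, b"key update step 2")

new_secret = update_ctx(b"\x00" * 16, b"nonce-R1", b"nonce-R2")
assert len(new_secret) == 16 and new_secret != b"\x00" * 16
```

Because each step only feeds forward, an attacker who later learns the old Master Secret cannot walk backwards, which is the forward-secrecy property mentioned above.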
J
It's also robust and secure against a peer rebooting, and compatible with prior key establishment using the EDHOC protocol, because updateCtx can actually use the EDHOC-Exporter if your original context was built using EDHOC. And, by the way, I have this red box here: what we did is also extend the OSCORE option with a new flag bit and a field called ID Detail, where we practically exchange the nonces.
J
Other updates: we now have recommendations on the minimum length of R1 and R2, which are used as nonces; the motivation is similar to what is written in Appendix B.2. As things stand now, we recommend a minimum of eight bytes, and here's an open question: whether this is sufficient or not.
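A quick sanity check on the eight-byte minimum: with uniformly random 64-bit nonces, the birthday bound puts the collision probability over n key updates at roughly n(n-1)/2 / 2^64. The arithmetic below is purely illustrative of that bound, not an analysis from the draft:

```python
# Birthday-bound collision probability for random nonces of a given
# bit length. Illustrative arithmetic only.
def collision_probability(n, bits=64):
    return n * (n - 1) / 2 / 2**bits

# Even a device performing a million key updates stays far below 2^-20:
p = collision_probability(10**6)
assert p < 2**-20
```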
J
We also discussed a bit now on observations, and our conclusion currently is that you must terminate observations after a rekeying, because basically you don't have the cryptographic binding between notifications in some situations. There is a possibility to keep them, by paying a price, and the suggested solution we have here is what you can do if you have ongoing observations.
J
Basically, after every rekeying, you need to jump your Partial IV, the sequence number, to higher than the maximum Partial IV for any ongoing observation you have. Of course, the drawback here is that you have very big jumps in the Partial IV, which means faster consumption and larger communication overhead. There is also some possibility for more complicated solutions, like reserving some PIVs in a bitmap, but for now our proposal and plan is to not keep observations after a rekeying, because you can basically re-establish the observations.
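The "jump the Partial IV" workaround just described can be sketched in a few lines. Names and structures are illustrative only; the point is simply that the next sequence number after a rekeying must exceed the highest Partial IV of any still-ongoing observation:

```python
# Sketch of the Partial IV jump after a rekeying: start the new
# context's sequence numbers above every ongoing observation's highest
# Partial IV, so notification ordering stays unambiguous.
def next_sequence_number(current_seq, ongoing_observation_pivs):
    if not ongoing_observation_pivs:
        return current_seq
    return max(max(ongoing_observation_pivs) + 1, current_seq)

# After rekeying with observations at PIVs 40000 and 123456, the next
# message jumps past both:
assert next_sequence_number(0, [40000, 123456]) == 123457
```

The drawback noted in the talk is visible here: a long-lived observation at a high PIV forces a large jump, consuming sequence-number space faster and enlarging the encoded Partial IV.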
C
If the observations are going away, this means that the rekeying event is visible on the application layer. Is that something that we should be doing?
J
Yeah, that's right; the application would then probably have to take the responsibility to re-establish these observations. Yeah, that's open. Of course, that's one drawback of terminating observations.
J
Yeah, so it would basically be as we stated in the possible solution: right after a rekeying, you would have to check the highest Partial IV among all your ongoing observations, and then, when the client starts a new observation, you need to jump your sequence number to that highest value plus one. So of course it depends on the scenario, and how often and how much...
J
...this client is using Observe; but in some situations it would mean big jumps in the Partial IV, a lot faster consumption, and then larger communication overhead. So yeah, it depends, I would say, also on the way the application is acting and how much it is actually observing. But also, by the way, one thing we thought about here is that I think the way we have to deal with this is that the two peers...
I
Yeah, so just a question for Carsten: if you need to update keys for some other reason, is your proposal that it should not be visible, that it should just happen? What would you prefer? Would it be that this is controlled, or does it just happen without any notice?
C
Well, the application probably should have a way to ask the security component to rekey. But what's in here right now means that each time the security component, just based on the number of messages that have been sent, decides to rekey, it needs to involve the application in doing that, and that's a bit inconvenient. I mean, it's not a disaster, but yeah, you have to send all these messages then.
J
Yes, I'll proceed then. So, again, we added this sixth use case, and you can read about this in the draft; I will speed up a bit. But essentially one big benefit of this new procedure is that you preserve the ID Context, which is used as a pledge identifier.
J
We have some more general updates: we improved the table of contents, made some editorial improvements, alignment with EDHOC, and IANA considerations. We updated the title, now "Key Update for OSCORE" (KUDOS), and that's also an open question: any feedback on the title? The title, of course, mostly reflects the key update section of the draft and not really the limits.
J
Then we have some next steps: addressing some open points; we have a number of issues on the GitLab repo.
J
We need to look at what material to save to disk to support rebooting, at the applicability considerations from OSCORE Appendix B.2, and at updated security considerations; and then we want to further refine the key limits, as I mentioned earlier.
D
So when you say working group adoption, you mean within this week, or even in this meeting, right? And the question also goes to Marco, who is a co-author; I'm just asking because, as a co-author, he maybe cannot call for the working group adoption right now. Right, okay. So, as in similar processes before, I would like to know first, on the chat, who has read the current version of the draft, other than the authors.
D
Right. Well then, I think we could do a working group adoption call on the current version and then take it also to the main list. Let me just... I'm trying to use now the show-of-hands tool, to see.
D
Let me phrase the title of the question here, so, something like... it's the first time I've used this, actually, sorry for that: who thinks the draft is ready for adoption?
D
If you agree... and I guess you can also maybe... let me put it as "raise your hand if you agree", just to avoid confusion, and you should have it on the... there is a session going on.
D
I think it's a good show of hands, so let's keep it in the minutes that we have got the result; a screen capture also.
D
Yes, I agree with that, because we have so many Jabber participants anyway in the session. Now, I think it's a sufficient number; I mean, previous documents we have adopted based on that, and anyway we have to confirm on the main list, so I'll take care of sending a message to the mailing list later, after this. Thank you.
A
Okay, next is Christian, with Cacheable OSCORE.
E
This is an update on Cacheable OSCORE, which was last presented during the interims. Today I'd like to focus not so much on the mechanics, which are slowly maturing and basically working, but on a topic that came up during the work on the current -03 draft, which is more along the lines of: what does this do, precisely, in terms of request-response binding?
E
So what I thought I could do, and I'll walk through the steps with you, is split this into: how do we obtain request-response binding in OSCORE in general; in particular, how do we obtain request-response binding when we do not have source authentication for the request; and then, building on that, how do we obtain cacheable OSCORE responses?
E
Originally, I was afraid that this would be a very, very large change to the document. I tried it anyway, and it turned out that it hasn't been that large a change in terms of text; in terms of semantics, it probably still is.
E
This is why I'd like to focus on this part one today. Just to set up the stage: in the context of OSCORE, we get request-response binding by repeating the sequence number and the Sender ID in the response, not in the actual text but as part of the additional authenticated data, so any mismatch is detected there; and that's one of those CoAP attacks that OSCORE is designed to prevent.
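The binding mechanism just described can be sketched with a toy protection function in which the response's authentication tag covers the request's (Sender ID, Partial IV). This is only a model of the idea, not the OSCORE wire format: HMAC stands in for the AEAD's authentication, and all names are illustrative.

```python
# Toy model of OSCORE request-response binding: the response tag covers
# the request's Sender ID and Partial IV via the AAD, so a response
# spliced onto a different request fails verification. HMAC stands in
# for the AEAD; not the actual OSCORE construction.
import hmac, hashlib

def protect_response(key, payload, req_kid, req_piv):
    aad = req_kid + req_piv.to_bytes(5, "big")
    tag = hmac.new(key, aad + payload, hashlib.sha256).digest()[:8]
    return payload, tag

def verify_response(key, payload, tag, req_kid, req_piv):
    aad = req_kid + req_piv.to_bytes(5, "big")
    expected = hmac.new(key, aad + payload, hashlib.sha256).digest()[:8]
    return hmac.compare_digest(tag, expected)

key = b"k" * 16
payload, tag = protect_response(key, b"2.05 content", b"client-A", 42)
assert verify_response(key, payload, tag, b"client-A", 42)
# The same response checked against a different request (other PIV) fails:
assert not verify_response(key, payload, tag, b"client-A", 43)
```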
E
The server verifies that the request is coming from the client. So either the client would need to maliciously send two requests, which doesn't make any sense, because it's the client that wants the binding to be there; or the server would need to lie, and the server can lie anyway, because it's the authority on all the things here. In Group OSCORE, things look a bit different. There is shared key material for the symmetric encryption part, so any client other than our particular client C could have done the encryption.
E
But there is the signature in group mode; and in group mode and in pairwise mode there is source authentication, because either there is a shared key used between client and server that only those two parties know, or there is a signature by the client. So, again, the server can know that the request originated from the client, processes that information into the response, and then the client can understand that response, knowing that the server knew who sent it.
E
But this also means that anyone on the wire who might be a member of the group cannot trust responses at all, unless it also trusts the party that sent the original request, the original client C; and that is not generally the case. So we are usually assuming that there might be malicious members in the group, which may then read responses, but they cannot spoof answers from any other group member.
E
So, in general, a third party cannot use those responses at all, because it doesn't know whether the client might have sent one request, trusting that third party, but another to the server; and then the server sent a response, and those don't line up anymore. And this is, in retrospect, the very situation we were finding ourselves in when defining Group OSCORE, sorry, Cacheable OSCORE in the first place. But it might not be the only situation.
E
So how can we salvage this? We can, as is done in the non-traditional responses, just send the full request in the response again, which we want to avoid, because this is all about saving bytes here; so repeating a request is something to avoid.
E
Instead, the request becomes part of the additional authenticated data; that is what verifies that the server saw the request, without actually transmitting it. Previous versions of Cacheable OSCORE had that information in a modified external AAD; currently we are leaning more towards putting it in a Class I option, and then not necessarily sending that Class I option.
E
One way or another, that information needs to wind up in the AAD; and thus it's not really replacing the mechanism, because the parts are still there, but practically it's augmenting the request-response binding mechanism. So now the receiver of the response, even without necessarily trusting that the server was able to perform source authentication on the request, can be sure that this response is a response created by the server for this particular request.
E
...just goes into that half. So the statement that we don't get freshness with Cacheable OSCORE is really a more general statement. The statement should be that whenever we don't rely on the original request-response binding, but just on some additional request-response binding, we lose freshness; and then with deterministic...
E
But the request-response binding is probably a somewhat different topic, and it could even be useful for other cases. So one example, and actually the example that triggered all this, is that Group OSCORE used to have an appendix in which it was described that, under certain circumstances, it might be okay not to verify the signature on a request.
E
That is, when you have a group where you're just doing requests, and the client does not really have an interest in doing an asymmetric signature on a request, in sending 64 bytes just to send a request that really any of the other clients could just as well have sent; and where the server might not even depend on source authentication for its regular operation, for example because any client might be eligible to use that information, or just because the request handler is side-effect free. I mean, it's a GET; there's nothing wrong in encrypting the response and sending it to a party that can read it, if it is the authorized party; and if not, it would just not be able to process it.
E
But this needs very careful considerations for request-response binding, and I think that all the tools that we set up in Cacheable OSCORE can do this, and thus not only benefit Cacheable OSCORE, by becoming more readable and easier to verify, but also open up those new cases.
E
So, for me, the main questions for today are: is this a split that you would consider useful? And, if so, is this something that is useful to have in the same document? I think so; I think it can be meaningfully presented in a document doing really two parts: describing request-response binding in the absence of source authentication in group communication, and then, building on that, doing Cacheable OSCORE. But I'd like to hear your opinions here too.
I
Yeah, I actually don't have a strong opinion here. I'm really happy that there is actually a solution to this cacheability problem, and I think that we should prioritize making that as simple as possible to use; whether it's useful in other instances is lower priority, in my view. Okay, so, but yeah, that's my input.
A
Okay, so this is relatively new work, but not a super new idea.
A
It was also presented some interims ago, earlier this year. To recap: you may have a proxy deployed between a client and the server, and your use case, and we have examples, may need a security association between the client and the proxy; and it may be very convenient that that is based on OSCORE, especially if you're already using OSCORE end-to-end between origin client and server. And you end up in something that, right now, is not defined, or even forbidden, in OSCORE.
A
As a recap of the main use cases, also described in the document: the group communication proxy was the first one; you want the proxy to identify the client before forwarding a request over multicast to a group of servers. The second one is, in a sense, also related to group communication.
A
You may have a server sending multicast notifications to a group of clients, all observing the same resource; and if you have Group OSCORE end-to-end and the proxy deployed, clients are required to take an additional step, which means providing an additional ticket request to the proxy to make things work, and that exchange, among others, is better protected, for instance with OSCORE again. And then another use case comes from the LwM2M specification, where OSCORE is used; the LwM2M client may want to use it also end-to-end with an external application server, using the LwM2M server as a CoAP proxy.
A
So the contribution of this document is trying to update the OSCORE RFC by defining also intermediaries as possible OSCORE endpoints, so consuming the OSCORE option and an OSCORE layer, and thereby admitting a double, triple, as many layers as you want, of protection on the same message. We did have a limit of two layers at most in the previous version of the draft, and it was lifted based on feedback we got. And, yeah, we really consider OSCORE, but what you see in the document can be applied right away also for Group OSCORE.
A
So, version zero was presented also at an interim. We got some early comments from Christian and Göran that were very constructive, essentially suggesting some more use cases and lifting the limit of at most two protections of the same message; and their main feedback was that the message processing description and notation of version zero was way too complicated and required some restructuring, which we did. As to the use cases:
A
We mentioned also the case of a cross-proxy acting as a third-party service for the sake of transport indication, which is also Christian's work; a proxy as a traditional big firewall at the entry point of a network, which is required to identify the exact nodes joining the network; and then we started to think, though it requires much more elaboration, about a case where you have a long chain of proxies and you really want to hide the position in the network of the client from most of the chain elements and from the final origin server.
A
And, coming to the main point in Christian's and Göran's feedback: we got rid of the very complicated notation and the too fine-grained message processing steps, and we came up with a general algorithm that is applicable right away to any endpoint in the chain, so a client, an intermediary or a server. We also now say explicitly that we are not defining any explicit signaling of what is happening, and we don't need to.
A
Basically, the presence, possibly in combination, of certain CoAP options is just sufficient for an endpoint to understand exactly what is going on and what to do. A main deviation from the OSCORE RFC is that an endpoint shouldn't panic anymore if, after decryption, an OSCORE option is still there, because that just means one more OSCORE layer to strip; and some options have to be protected, or to be treated as Class E, if you want, unlike in the OSCORE RFC; and this includes, in fact, options intended for a proxy.
A
So, with this in mind, it's pretty easy to protect the request: just apply the OSCORE layers one after the other, typically using, as the first one, the one shared end-to-end between the origin client and the server. Things get interesting for an incoming request, and here's the generalized algorithm: based on the new text we have, it's really about evaluating which of these conditions apply to a request.
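The generalized per-endpoint processing being described, strip one OSCORE layer at a time and re-evaluate, can be sketched as a small loop. Everything here (dict-based messages, the case names) is an illustrative model of the talk's description, not the draft's actual procedure:

```python
# Toy model of the generalized processing: proxy-related options mean
# "forward" (case C); an OSCORE option matching a held context means
# "decrypt one layer and re-evaluate"; a plain message is delivered to
# the application. Structures and names are illustrative only.
def process(msg, contexts, decrypt):
    while True:
        if msg.get("proxy_options"):
            return "forward", msg          # case C: act as proxy
        kid = msg.get("oscore_kid")
        if kid is not None and kid in contexts:
            msg = decrypt(contexts[kid], msg)  # strip one OSCORE layer
            continue                           # ...and re-evaluate
        return "deliver", msg              # case A/B: hand to application

# Two nested layers; the outer one is for us, the inner one carries
# proxy options, so after one strip we forward:
def toy_decrypt(ctx, msg):
    return msg["inner"]

outer = {"oscore_kid": "proxy-ctx",
         "inner": {"oscore_kid": "e2e-ctx", "inner": None,
                   "proxy_options": ["Proxy-Uri"]}}
action, _ = process(outer, {"proxy-ctx": 1}, toy_decrypt)
assert action == "forward"
```

Note how the same loop serves a client, an intermediary, or a server: only the set of contexts it holds differs, which is the point made above.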
A
If there is any at all, you are in case C. Otherwise, meaning you don't have proxy options but you have an OSCORE option, use the Recipient Context pointed to by the OSCORE option, decrypt, take the result and, on it, assess again which condition applies; eventually you land in case A or B, for forwarding or for delivering to the application. Omitted in the slides, but of course we have also error handling covered already. For responses, it is easier for a responder.
A
We want to add examples considering caching, but that should be possible just using the Cacheable OSCORE proposal that Christian presented before; and we want to elaborate a bit more on a use case where having more than two layers per message is useful. And, perhaps a bit longer term, we want to look into RFC 8824, which defines SCHC header compression for CoAP...
A
...also for the case when OSCORE is used. Maybe not as-is, but we think that approach can possibly be adapted a bit to be used also in the case where a message is protected with multiple layers, so that we can, at the end of the day, reduce the overhead in this case too. This is the plan for version -02; but until then, comments and input are very welcome. Anyone?
A
Esko is already suggesting a name, "matryoshka"; we'll probably think about it.
K
Okay, good. I have to present by myself, right?
K
For this meeting, yeah. First of all, let's explain the motivation for this work. You are, of course, the experts on CoAP, and you know that there are two modes: reliable mode and unreliable mode. In the case of reliable mode, reliability is provided with acknowledgements, because the message is marked as confirmable.
K
In this case we can think that, if we want to implement some measurement, we can use the Message ID and the acknowledgement to identify the packets, measure round-trip time and verify losses. This can be done in the case of reliable mode; in the case of unreliable mode, of course, this is not possible, and there is no easy way to do measurement of round-trip time, losses or delay. In any case, even if we are in reliable mode, it is resource-consuming to read IDs and sequence numbers and to store timestamps for each packet.
K
This is a draft that has just been adopted in the IPPM working group, and the techniques described in this draft employ a few marking bits inside the header of a packet for loss and delay measurement. In particular, I want to start with the two ideas. One is the spin bit idea; this is already optional in the QUIC protocol, so if you go to RFC 9000, there is one bit that is dedicated to the spin bit.
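The spin-bit idea can be illustrated with a toy observer: the endpoints keep flipping the bit once per round trip, so an on-path observer can estimate RTT purely from the time between bit transitions ("edges") it sees pass by. This is a model of the mechanism only, not the QUIC or proposed CoAP encoding:

```python
# Toy spin-bit observer: given (timestamp, spin_bit) pairs seen at one
# point on the path, estimate RTT as the time between successive bit
# flips. Illustrative model only.
def observer_rtts(timestamped_bits):
    rtts, last_edge, last_bit = [], None, None
    for t, b in timestamped_bits:
        if last_bit is not None and b != last_bit:
            if last_edge is not None:
                rtts.append(t - last_edge)  # one full round trip
            last_edge = t
        last_bit = b
    return rtts

# The bit flips every ~10 time units, so the observer reads RTT = 10:
trace = [(0, 0), (2, 0), (10, 1), (13, 1), (20, 0)]
assert observer_rtts(trace) == [10]
```

Note that the observer stores no per-packet state beyond the last bit and last edge time, which is the constrained-node appeal mentioned later.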
K
Also in this case we are talking about square waves, but here the square wave is made of a fixed number of packets. In this way, these fixed numbers of packets can be recognized between the client and the server, and you can measure the losses. Just to give a quick view of these two methodologies:
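The sQuare-bit loss measurement can likewise be modelled in a few lines: the sender toggles the bit every N packets, and an observer counts packets per half-period; any shortfall from N in a completed half-period is loss. Again, purely an illustrative model of the idea:

```python
# Toy sQuare-bit loss counter: the sender toggles the bit every n
# packets; an observer counts packets per half-period, and a completed
# half-period shorter than n indicates that many lost packets.
def count_losses(bits, n):
    losses, run, last = 0, 0, None
    for b in bits:
        if last is not None and b != last:
            losses += n - run  # completed half-period fell short of n
            run = 0
        run += 1
        last = b
    return losses

# Blocks of n = 4; one packet is missing from the second block:
bits = [0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0]
assert count_losses(bits, 4) == 1
```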
K
What are the key points and benefits of this solution? The first one, as I said, is that there are no IDs, no sequence numbers, no timestamping for each packet; so there is an easy way to measure RTT and delay, and that fits well with the requirements of constrained nodes.
K
There is also a proposal to improve the square-bit mechanism, to find a synergy with the spin bit in order to make the methodology simpler. But I don't want to explain this here, because it is mentioned in the draft, and maybe we can discuss it on the list, or I can present it during the next meeting, if the idea is of interest to the working group. And once you are able to do performance measurement, you can also think about possible advanced usages of this methodology.
K
For example, an on-path observer, which can be a probe or a proxy, can use this information to adjust, for example, protocol parameters, or to decide whether to use reliable or unreliable message transmission based on the conditions of the network. So you can also think about this kind of advanced usage. So, yeah, the next steps:
K
The draft is based, as I said, on well-known methodologies, and we had this idea to extend them to this kind of environment, because these methodologies can be easily extended to constrained environments; and, considering that IoT and machine-to-machine devices keep growing nowadays, the performance measurement aspect is also something to be considered for an enterprise or a network operator that wants to use the CoAP protocol in a constrained environment. So, okay, it's just a proposal for discussion; we welcome collaboration, requests and comments on this work. So that's all, thank you.
E
Thanks for presenting this here. You mentioned, in the response to previous communication and here again, that you would like to use this across proxies, so that the spin bit would kind of be set by the client, then not modified by the proxy, and then sent to the server; or other values of that option.
E
Could you briefly describe which parties would all need to cooperate here? Because I understand this to be coming from a telecom background, and would this be implemented on all those devices? Can you give a concrete example of which devices interact here in a proxy situation? Because my impression is that, when proxies are involved, it could really be link-local, and then things like what Carsten suggested, the use of the Message ID, could come into play.
K
Yes, I mentioned in the reply that there are several ways to use this methodology. Since it is based on the client-server exchange, it's kind of applicable to the session; so, for example, if the proxy is going to start a session on behalf of the client, of course the proxy can be the client in terms of the measurement. So it depends on the situation that we want to monitor.