From YouTube: IETF111-CORE-20210728-1900
Description
CORE meeting session at IETF 111
2021/07/28 19:00
https://datatracker.ietf.org/meeting/111/proceedings/
A
We need to make good use of this time to advance the work. The Note Well, which I will also summarize on the next slide, applies. We are on Meetecho, so blue sheets are collected automatically. There's no official Jabber scribe, but the chairs will keep an eye on the chat, and we have a notetaker for today, Bill; thank you very much for accepting. Everyone is welcome to help Bill take notes.
A
A few practicalities: since we are on Meetecho, please request to enter the queue using the leftmost icon, or, if you really need to, request to have your comment relayed at the mic by one of the attendees. This meeting is recorded as we speak, and again attendance is collected automatically as blue sheets.
A
Also discussion-oriented, John will present a revived document on CoAP attacks, which bridges into a kind of second segment of the agenda, more on group communication and security. So we have group communication for CoAP, Group OSCORE for its security, and a number of more recent works differently related to OSCORE matching this agenda.
A
Now then, a quick recap of the recent updates to our documents. First of all, we have recently published RFC 9039, which used to be core-dev-urn. Thank you very much indeed to the working group and the authors for this nice achievement.
A
We
have
a
number
of
documents
in
a
very
advanced
stage
in
the
rfcq
resource
director
in
new
block
and
miss
ref
waiting
for
an
echo
request,
tag
that
should
also
enter
the
same
stage
pretty
soon
now
and
cinnamon
versions
enter
out
48.
A
We have a number of documents in IESG processing, where Echo-Request-Tag technically still is, or at least there's an announcement to be sent; but the authors submitted a version before the cutoff, so it is now technically back in the AD follow-up stage, and of the four documents, two were requested for publication.
A
We got tons of good comments from the IESG. They are now under "Revised I-D Needed", and since Carsten has joined the author list of both of them, the shepherding was transferred to the two chairs.
A
We also have a few documents in post-working-group-last-call. We actually requested publication of senml-data-ct. Then we have the other two CORECONF documents that have to kind of proceed in parallel: CoMI requires both some editorial fixes and technical clarifications, and the authors are on them; YANG Library basically has to follow. And then we have the Group OSCORE document that passed the first working group last call; there have been major updates to that.
A
I'll come back to that in a later presentation today. And then we have one more point that we thought of raising here at this meeting with you: the chairs have recently discussed a possible update to the working group charter, since it is now pretty old and outdated with respect to what we actually did already, what we are doing, and what we might have in the queue to do. The main motivation for this is that charters are now looked at more and more closely by the IESG when receiving a document.
A
So it's not at all about any major or drastic change of direction; we'd like to keep the same overall scope and track. But at the same time it's good to update the description of work so that it faithfully reflects what we actually achieved already, what we are doing, and what we plan to do building on that.
A
So the first step, of course, would be for the chairs to prepare a first draft of the updated text to share with the working group, and that will be followed by a joint revision on the list, also using some time in the interim meetings we have during the autumn. But of course we want to hear some early thoughts or objections against this already today.
B
Just a comment that we do these charter updates when we have to. So yes, these are all the reasons, but the real churn we need to go through to do this is pretty large.
A
Silence is good, then; we'll take the next steps and come back to you with this. And we can actually start with the first item on the agenda, which is Carsten with href, so I relinquish the screen sharing.
B
This will be very brief, but if you have a question, please do ask it. So, basically, when the web was invented in 1990, there were three big components: the URI, the HTTP protocol, and HTML, and we know that this continues to work today, 30 years later. And when we did the thing web, or our version of the thing web that we call CoAP, we left the URIs essentially in place.
B
We did a new transfer protocol, and, well, we have several representation formats in use, some of which are CBOR-based and some of which are not, so we have a little bit more diversity there. But maybe it's time to look at the URI part, because that's really the oldest part in the combination, and this is essentially what this href draft is about. "href" is just an abbreviation for "hyper-reference", because we couldn't agree on a name for the CRIs, but I think that has been mostly settled now.
B
So what's the reason why we want to get rid of URIs? Well, Klaus has put down a sentence in the document that pretty much describes it: there are lots of implementations of URIs, and they're all wrong, they're all subsets, they're all non-interoperable in some corner cases, and so on. And the main problem is that it's really not trivial to transform between the URI syntax and the data model that we would actually want.
B
So, just to remind everyone: RFC 3986 is the URI RFC, and it tells us a URI has five components: a scheme; an authority, which is the host name plus user info and port number.
B
We have a path, which is composed of path segments; we have a query; and we have a fragment. And when we designed CoAP, we actually mapped these components onto CoAP options in a way that a CoAP server in the best case only ever sees a parsed URI. So we already did kind of the work that the general CRI document is now trying to do.
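For reference, the five components mentioned here can be pulled apart with the regular expression that RFC 3986 itself provides in its Appendix B; a minimal Python sketch (the mapping onto CoAP options is not shown):

```python
import re

# Reference regex from RFC 3986, Appendix B: splits a URI into its
# five components (scheme, authority, path, query, fragment).
URI_RE = re.compile(
    r'^(?:([^:/?#]+):)?(?://([^/?#]*))?([^?#]*)(?:\?([^#]*))?(?:#(.*))?'
)

def split_uri(uri):
    """Return the five URI components; absent components are None."""
    scheme, authority, path, query, fragment = URI_RE.match(uri).groups()
    return {"scheme": scheme, "authority": authority,
            "path": path, "query": query, "fragment": fragment}
```

A CoAP request then carries these pieces in separate options (Uri-Host, Uri-Port, Uri-Path, Uri-Query), which is why a CoAP server never has to run a parser like this itself.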
B
The other part is that, in a data item in a document being interchanged, we also want to support URI references, which are partial URIs, like in the left column of the table here, and there are resolution rules in 3986 that are, well, interesting. So it's really unlikely that the random URI implementation that you find somewhere completely implements the whole resolution procedure; it's that complicated.
B
So that's another part of the problem that we are seeing. And relative URIs are really useful to make documents independent of where they are located, so we cannot just get rid of them and say "let's do all absolute" and lose this complexity.
B
So what are CRIs? CRIs are essentially a new representation format for the same old URI data model, a little bit constrained, so that you cannot put user info or certain other parts from a URI into a CRI, because we have really learned why we don't want to use them. But for the practical URIs you would find in an IoT environment, CRIs are going to cover that; and not just in an IoT environment, there are also other environments that might want to use them.
B
So draft-ietf-core-href defines CRIs and CRI references in a modern representation form. It's no longer text-based; it actually has the components identified in a structure.
B
So we worked on this for a while, and then we finally had, in -04, the new-syntax branch in the GitHub repository, if you have followed that, and that contains a number of suggestions from Jim Schaad, a little bit optimized. So this was a very efficient form to write down CRIs, but also a form that requires a number of decisions and a number of parsing steps during ingestion.
B
So the abstract content here is that we take the five-tuple that a URI is and add a sixth element to support URI references, that is, the discard component, and that's essentially all we need. Path and query are arrays, and authority is sometimes an array in that version of the document. So that worked, but, yeah, it was maybe more complicated than we like in a constrained environment.
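As a toy illustration of the abstract six-tuple being described (scheme, authority, discard, path, query, fragment): the field names and the flat string splitting below are assumptions for illustration, not the draft's actual CBOR encoding.

```python
import re

def uri_to_six_tuple(uri):
    """Toy decomposition of a URI into a six-tuple. Illustrative
    only: `rooted` stands in for the discard component here as a
    simple flag, while the draft's discard component is richer."""
    m = re.match(r'^(?:([^:/?#]+):)?(?://([^/?#]*))?([^?#]*)'
                 r'(?:\?([^#]*))?(?:#(.*))?', uri)
    scheme, authority, path, query, fragment = m.groups()
    rooted = path.startswith('/')
    segments = path.split('/')[1:] if rooted else path.split('/')
    queries = query.split('&') if query else []
    return (scheme, authority, rooted, segments, queries, fragment)
```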
B
So in -05 we took away the syntax that needs to be parsed and essentially directly used the six-tuple, already in two compacted variants. These variants are essentially the absolute URI and the relative reference, with a few corner cases that straddle that boundary, so it would not be correct to talk about absolute and relative entirely; it's just two different syntaxes for the information.
B
This simplification is particularly expensive for relative CRI references, which might get two to four bytes larger. Otherwise, for absolute URIs, it's pretty inconsequential; it just makes ingestion simpler. And then we implemented that and kind of liked it. But we found two places where we actually wanted to make a change, and one is something that needs to be fixed, because URIs can be something like a URN, which doesn't really have a path but an opaque component, which can be thought of as a single path component.
B
But
the
interesting
thing
is
that
that
un,
colon
x
and
un
colon
slash
x
mean
two
different
things
in
the
ui
syntax,
so
translating
them
both
to
the
the
same
ci
means
that
cis
cannot
reflect
that
detail.
B
So we didn't really like that, but we came up with one way of handling this anomaly: given that these all depend on the fact that there is no authority there at all, once you have an authority, you no longer have this anomaly.
B
We
just
use
the
authority
field,
which
is
the
second
field
here
and
then
the
tuple
to
indicate
this.
This
is
an
opaque
that
doesn't
have
a
slash.
So
so
that's
what
the
true
says.
So
this
is
not
beautiful,
but
it
kind
of
completes
the
picture.
B
The other observation is that the syntax of host names itself is something that we might want to relieve the recipients from parsing, so -06 proposes to actually parse the host name. In the URI at the bottom of the slide we have tzi.de, and this comes in as an array with two components, "tzi" and "de", plus another component, which is a number, which is the port number.
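In other words, something like the following toy conversion (names and layout are illustrative, not the draft's exact CBOR layout):

```python
def authority_to_array(host, port):
    """Split a DNS name into its labels and append the port number,
    in the spirit of the -06 host-name proposal described above."""
    return host.split('.') + [port]
```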
B
This is a way to do the same thing CoAP would have done, except that we didn't really think we needed to optimize the transfer of host names there, so we didn't do that. So this is one part that can be undone if we don't like it, but -06 currently contains this version.
B
We think the design is feature-complete, in the sense that everything that you want to have in a URI can be covered, but at the same time processing CRIs is much simpler than processing URIs. And what we need to do is fix our implementations to be entirely compatible with what we describe in -06, and publish those implementations; we didn't manage to do that before the IETF.
B
There was a question from Thomas Fossati: can you tell more about what the use case is for parsing domain name labels? Well, what I didn't say is that, of course, not all authorities contain domain names, so you would have an IP address here in many cases, an IPv4 address or an IPv6 address.
B
So the previous version just has a type that distinguishes between those IP addresses and a text string which conforms to the reg-name production that is defined by 3986, and the observation here is that having to parse these DNS names is a little bit out of character with the rest of CRIs, which actually parse everything down.
B
Pulling them apart makes it easy to do it in CBOR, but I cannot say there is a use case that absolutely requires this; it is just trying to make the whole thing look consistent. This is not something that we need to do, so we may decide against it; it is not something today's implementations do.
A
Go ahead, Christian.
D
On the topic of the authority anomaly, I'd like to add that I think there is probably a better solution that keeps the complexity more in the conversion between URIs and CRIs and simplifies the handling of the CRIs. But I think that's best demonstrated when I have my implementation ready to the point where I can show a diff between those two, and that's basically a promise for an implementation review.
B
Yeah, this version here has the advantage that there is no special-casing in the resolution rules, except that you have to set something from true to another value in a certain case; but you don't have to juggle the array of path segments in a different way depending on the specific case that you are in.
B
So
that's
why
I
like
this
one
here
better
this.
This
was
pretty
easy
for
me
to
implement
the
other
one
that
had
has
been
in
there
before
yeah.
It
signifies
the
the
the
lack
of
a
leading
slash
with
a
null
path,
name
component,
and
that
that
this
yeah,
that
this
is
completely
monkey
range
2
to
the
resolution
algorithm.
So
you
have
to
to
change
the
the
whole
resolution.
Everything
to
do
this
right.
A
Thank you, Carsten. Then it's up to Bill with dynlink.
E
Okay, can you first see me and hear me?
E
Hello, everybody. The next few minutes are basically an incremental update from last month's interim meeting in CoRE, where we were discussing just the dynlink draft; the title here is "dynlink and conditional attributes". So this is going to be discussing two drafts instead of one, and I hope to take just a few minutes.
E
Okay, so a bit of history: dynlink version 13 has been split into two working group drafts, and this was done after discussion with the authors and the area director and also the chairs.
E
So now there are two drafts, the original dynlink and then a new draft called core-conditional-attributes, and obviously both drafts will evolve, and they will continue to incorporate all the feedback and reviews and so on and so forth. So today's presentation is mostly about why this happened, and also perhaps just a quick delta from the last meeting.
E
Okay, so this slide will not be unfamiliar to those who were in the last interim meeting, but just to reflect on this: dynlink, or should I say dynlink version 13, had essentially two parts.
E
The first part of dynlink focused a lot on the CoAP Observe mechanism and how we can provide conditional attributes. The second part of the draft focused on the link bindings themselves: what is a dynamic link, how do we use it, how can it be created, how can it be addressed and subsequently updated?
E
So these are two parts of the draft, and they were rather distinct from each other. What we did was basically take the part that is on conditional attributes and put it into conditional-attributes -00, and then take the remaining part of dynlink, and that became dynlink version 14; both were then submitted, with an informative reference from conditional-attributes towards dynlink-14.
E
Why was this done? There were basically two things. Firstly, the conditional observe attributes have been available and ready for some time, while in the section on link bindings, the description of a link binding is okay, but the binding tables part is still under development, specifically because the way we are expressing relationships is not that good. So there's a lot of work to be done there.
E
So LwM2M has a dependency on dynlink, and this dependency is almost exclusively (I should say actually exclusively) on the conditional observe part and not on the link bindings, and the work on the conditional attributes is almost finished. This basically means that separating them into two drafts would allow the conditional-attributes draft to move forward into an RFC very quickly, and that will resolve the reference towards LwM2M.
E
So, firstly, thanks to the co-chairs for setting up the GitHub; the draft is already available at this address. And I've also transferred almost all the issues that were common to dynlink, so the ones that were specifically on conditional attributes are now in the GitHub repository as new issues (or rather, transferred issues; I guess they are new issues), but essentially the discussion has not been lost.
E
There will be text for possible security considerations added in -01, and then there will be other things done for -01: basically updating the reference code, and thinking about how pmin and pmax work; I think we have introduced a couple of new conditional attributes.
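As a rough sketch of what pmin/pmax-style pacing means for an observed resource (the semantics here are a paraphrase for illustration, not text from the draft: no two notifications closer together than pmin seconds, and at least one notification every pmax seconds):

```python
def should_notify(now, last_sent, value_changed, pmin, pmax):
    """Decide whether an Observe notification may be sent at `now`
    (seconds). pmin suppresses overly frequent notifications; pmax
    forces a notification even without a value change."""
    elapsed = now - last_sent
    if elapsed < pmin:        # too soon, regardless of changes
        return False
    if elapsed >= pmax:       # overdue: always notify
        return True
    return value_changed      # in between: only on a change
```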
E
Then there will be small other changes. And, Ellen, I think you're online: there was a discussion on the mailing list just a few days ago regarding IPR disclosures, so that will be done for conditional-attributes, I think. So, Ellen, if you have any kind of input on that, let's just go there. Okay, Ellen, you're in the queue.
F
I'm working it, doing my best to get it moving in the right direction; I'll let you know when I get confirmation that things are moving. You know how sometimes these things take a little bit of time. But, Bill, I just wanted to say thanks as well; I think this is a great separation.
F
I
know
I've
got
that
action
to
provide
some
examples,
but
yeah
it'd
be
great
to
get
that
done
quickly
and
then
we
can
move
forward.
So
that's.
E
So
that's,
I
think
I
think
these
are
the
the
small
changes
necessary
before
the
draft.
This
is
ready,
and
obviously
please
please
send
send
reviews
to
this
and
then
we'll
be
ready
for
working
group.
Last
call
okay,
I'll
move
forward
if
the
slide
changes,
okay,
right
so
dangling
downlink
itself,
so
moving
from
conditional
attributes
to
the
dialing
draft,
so
so
the
chord
and
drafts
is,
is
obviously
now
a
lot
a
lot
more
concise.
E
There was a shortcoming seen with the current format, and that has been a key sticking point; we had several ways in which we could try to describe this, and none of them have been that ideal. I think Christian had a lot of input on this as well.
E
So we'll do some exploration; I think probably CoRAL would be something that we'll be looking at, and dynlink-15 will basically do that. And then we'll try to include examples: the original dynlink draft had a lot of examples, but they were mostly on the conditional attributes, not so much on how dynlink is being used. So I think it is important now to address that; dynlink version 15 should have some examples and use cases.
E
Yes, good. Christian, do you have a question, or are you just coming up for the next presentation?
E
Okay, good. Do I have to give up the slides?
D
Unfortunately, when CoAP over TCP came along, things were a bit controversial on whether this needs a new scheme or not. There were revisions that had it, there were revisions that did not have it, and there was still a lot of unhappiness in the "No Objection" ballots when RFC 8323 was finally accepted.
D
But it's not like this was not known to be a problem before; even back in 2014 we summarized the situation of where the information that distinguishes which CoAP transport protocol is to be used could go, and what was finally picked was the scheme, the first line you see. None of those options is perfect; the scheme has one particular downside, and nothing had fewer. So this was a good decision, but the one thing that still sticks around to date is the issue of URI aliasing.
D
So
we
have
different
uris
for
the
same
topic.
If
you
look
at
the
example
at
the
top,
in
most
cases
and
with
most
devices,
if
you
see
the
left
and
the
right
uri,
you
can
assume
that
both
is
about
the
same
coffee
machine.
D
But
there's
we
have
no
terminology
to
tell
that
and
when
your
eyes
like
that
are
announced,
the
application
has
to
make
a
choice.
Do
I
advertise
as
a
co-op
s
and
co-op
s
plus
tcp?
At
the
same
time,
then,
my
user
might
see
two
coffee
machines
and
there
might
be
duplicate
addressing
caches,
or
do
I
only
advertise
one
as
the
canonical
uri
and
then
the
application
has
no
way
to
switch
over
to
tcp.
If
it
has
the
capability
to
do
so
or
might
not
even
connect
be
able
to
connect,
for
there
was
no.
D
There
is
no
mandatory
to
implement
protocol.
So
you,
if
you
get
a
co-op
s,
uri
and
your
implementation
only
has
coke
over
tcp
you
might
there
might
that
server
might
provide
another
transport
as
well,
but
you
can't
know
that
without
any
indication-
and
this
is
precisely
what
core
protocol
indication
transport
indication
is
now
about
before
I
get
to
the
solution
proper.
Let
me
briefly
point
to
two
tools.
We
have.
We
have
resource
metadata
that
we
can
sprinkle
over
our
our
discovery
results
and
that
we
can
add
there
for
reasons
of
efficiency.
D
So the difference between a request and the same request being sent over a proxy is literally just one option, with five bytes added, that says "send this, and this is a proxy request", and that's the Proxy-Scheme option. Combining these, the solution that I proposed to the issue of URI aliasing and discoverability of resources is to treat the alternative transports more like proxies.
D
That's the second line: by advertising a link from the root of the host to its CoAP-over-TCP version (which might even have a different port number, and might possibly also be available on a different IP address, say if there's IPv4 and IPv6 involved), with a relation that states "there is a proxy for the actual resource over there". And then the client can do either of two things.
D
It would either ignore that, or it would just open that connection and indicate with that five-byte option "please proxy me to that same host, but on the CoAP side"; and then this is all really requesting the same resource. And on the server side, this is not adding any complexity to the implementation.
D
At first glance it looks like you're really adding a proxy in there, but what you're actually doing is take this Proxy-Scheme option (which is critical, so processing it is mandatory), and if there is no URI-Host option, which you would need to process anyway, and the Proxy-Scheme indicates a scheme that you support, just ignore it; and that's all that is needed in an implementation.
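A hypothetical advertisement of such a link in CoRE Link Format (RFC 6690), with a minimal parser to pick it apart; the "has-proxy" relation name and the exact attribute layout are assumptions for illustration, not quoted text from the draft:

```python
def parse_links(link_format):
    """Very small CoRE Link Format (RFC 6690) parser: returns a list
    of (target, attributes) pairs. Handles only simple cases with no
    commas or semicolons inside quoted strings."""
    links = []
    for part in link_format.split(','):
        fields = part.strip().split(';')
        target = fields[0].strip()[1:-1]      # strip '<' and '>'
        attrs = {}
        for field in fields[1:]:
            key, _, value = field.partition('=')
            attrs[key.strip()] = value.strip('"')
        links.append((target, attrs))
    return links

# Hypothetical advertisement: the host's CoAP-over-TCP endpoint is
# announced as a proxy for its canonical coap:// identity.
ADV = '<coap+tcp://h.example>;rel="has-proxy";anchor="coap://h.example"'
```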
D
Of course, this five-byte option is something we don't want to send for every request. So one option explored in this document (and I'm pretty sure we can't do this in all cases, but we will probably be able to do it in very many applications) is to say that this proxy is really not even looking into the URI-Host and the Proxy-Scheme fields.
D
So
when
you
send
your
request,
you
can
just
do
away
with
those
few
bytes,
and
then
we
add
that
we
are
per
message
on
the
same
efficiency
as
we've
always
been,
but
there's
an
explanation
for
how
this
happens
and
it
does
not
create
and
it's
transparent
proxies,
and
they
cannot.
Everyone
can
know
that
these
you,
these
requests
all
go
to
that
particular
resource.
D
How
this
interacts
with
proxies
that
are
not
on
the
same
host,
is
actually
pretty
straightforward,
but
I
won't
go
into
the
details
here
because
that's
probably
better
for
hallway
discussions
unless
someone
asks
about
it
later
on.
G
D
It's a proxy: deal with it as you're dealing with a proxy; it needs to be suitably authenticated.
D
That's very simple for OSCORE, because for OSCORE the proxy is not trusted, so it's just a matter of who has had my traffic over there or not. It's a bit more problematic on the DTLS side, and I could really use some input from people that have experience with proxies and DTLS on how these things are done in realistic scenarios. Right now I'm outlining two options in the draft: in the simple case, the proxy just presents the same credentials on both transports, but there are corner cases where it's not that simple, and I'd like to understand this better.
D
So this is where I could really use some additional input. The gist of all this is that it can probably be done quite simply; we don't incur all of those bad things that we've seen in previous discussions, where a lot of new URIs show up for the same resources. And with that, I'd like to go over to questions, comments, and all the rest that follows. Thank you.
A
H
Sorry, just to clarify: that was about the previous presentation. I've been thinking about it; my apologies, it doesn't apply to this.
D
Carsten mentioned in the chat that the scheme is only one variable and the IP address is another. If that is related to this (and I'm not sure which slide is best to go back to), it really works for both: everything that is identifying the connection, from the IP address down to the protocol, and possibly even, in the HTTP case, where we have different versions of the protocol.
D
That,
for
us,
would
even
have
different
schemes
all
that
is
part
of
that
proxy
information
and
can
be
swapped
around.
At
the
same
time,.
B
Yeah, I think that's really a beauty of this proposal: it does solve the IP address rollover problem together with the multi-scheme problem. So the only part that we really need to understand is what exactly the security properties are of a resource directory, or anything non-CoRE, saying something about a node, and the client blindly following that.
D
What makes it a bit easier is that it's easily possible to obtain all that information from the origin source: you can get these statements from a query to /.well-known/core, and then you get it from the ultimate authority, which says that if that host wants to be talked to through that protocol, then so be it. When this is announced over a resource directory or any similar advertisement, then yes, this can be used to redirect traffic.
D
It
cannot
be
used
to
change
the
security
properties,
but
there
can
be
traffic
analysis
on
this,
but
then
again
that
source,
where
you
get
the
information
from,
is
typically
the
source
that
sent
you
over
to
that
other
to
that
other
place
in
the
first
place,
so
they
could
just
as
well
have
sent
you
anywhere
else.
D
There
are
issues
there
are
issues
to
be
pointed
out,
but
I
think
that
pointing
them
out
is
also
about
as
much
as
we
can
do
as
long
as
we
don't
have
things
like
frag
coral
coral
signed
in
fragments
and
all
those
things
that
can
later
augment
this,
and
I
don't
think
this
would
need
to
be
a
showstopper
here.
D
Just
switching
to
the
backup
slides
in
case
this,
this
elicits
some
other
questions,
but
we
have
bill
in
the
queue
yeah
bill.
E
Okay, can you hear me? Yeah, okay. Hi, Christian. Firstly, thanks for doing this work; this is really good work, thank you. And I think "transport indication" or "protocol indication" is a much nicer title than "protocol negotiation"; it's much more accurate. I have two comments. The first thing is that, going through this presentation, I think that you're coming to the same problems that we did when we started on protocol negotiation.
E
Firstly,
are
we
doing
protocol
indication
or
are
we
doing
proxying
and
and
this
this
is
a
compromise
that
you
might
have
to
have
to
look
at
quite
carefully,
because
what
is
this
mechanism
trying
to
achieve?
Are
you?
Are
you
trying
to
find
a
new
transport,
or
are
you
trying
to
find
a
new
uri
that
you
can
serve
the
same
resource
under
which
basically
means
that
with,
if
you
have
a
proxy
mechanism
that
that
essentially
could
represent
the
same
transport
with
a
different
uri?
E
How do you handle that? And secondly, you might have the same situation when you are looking at a secure transport and an insecure transport, so, for example, coaps with UDP versus CoAP over UDP. So these are things that we need to look at; that's my first part. And the second part here is that, generally, what we also needed to understand is whether we have any kind of priority mechanism, where the CoAP client can indicate to the server, or the server can actually indicate, all the origins.
D
I'd like to briefly answer the first question, which I hope I got right here: this is not about getting a new scheme for an existing transport; it's more about getting a new transport for an existing scheme.
D
So ideally a device (especially a device that does not want to produce any more URI aliasing) would advertise one single transport which, if there is no other information about how to connect there, can still be used as it is; and then the additional proxy entries are just additional transports, but they are still used for the same URI. And this is also providing one answer to the priority issue: by picking the canonical URI, the server automatically picks the one that clients would most likely use, especially if they are unaware of this mechanism.
D
So there is, in a sense, one distinguished transport already: the one that is indicated with the URI. Now, I suggest this will in many cases be CoAP, or coaps, without any additional indication, over UDP, but that will be an application choice.
D
As for further priorities, I don't see why this should not be easy to pack into the link by indicating some kind of numeric priority; but then again, these being absolute statements, there can be a priority, but what does it actually mean? The client might have its own priorities. So in the end the client will have to choose, and we can probably provide something that will guide that client's decision.
D
If the client even has a choice; but my guess is that in most cases the client will find one that it prefers and just pick that one, and if that's really a no-go, then the server will probably not advertise it.
D
That's a tricky one, because that's one that behaves quite differently on CoAP over DTLS and on OSCORE, and I think we don't have a good view on the whole of whether the coap URI is even kind of the best choice, for example, for OSCORE; because where does the indication to use OSCORE come from? Or, on the other hand, what does a coaps URI really mean?
D
So that's something that will need a lot better understanding of when to use which security transport, and I hope that by phrasing this as "whatever you pick for the alternative transport, just use the same requirements for the proxy as well", we can resolve this issue without resolving that issue. Which doesn't mean that that issue doesn't need resolving, but having things compartmentalized should make things easier.
A
D
There's a note on DNS-SD; I'm not sure whether this belongs here. DNS-SD is a mess; I won't make any comment on it, but see the footnote on the slides.
I
Our initial intention, at least, was to publish CoAP attacks as an Informational RFC, but depending on the later discussion that might change. During this work we realized, we think, that CoAP needs to discuss amplification attacks and probably should have harder requirements on how to mitigate them. CoAP amplification attacks have gotten quite a lot of media attention in the last years.
I
Here's
a
list
of
just
what
I
found
there
are,
more
so
in
an
amplification
attack
or
deny
distributed
denial
of
service.
The
most
important
metric
to
for
mitigation
is
the
amplification
factor
or
the
anti-amplification
limit.
I
The
map
here
shows
the
number
of
open,
no
sec
servers
in
the
world.
They
concentrated
to
a
few
countries
and
implementations,
and
you
could
argue
that
they
do
not
fulfill
the
requirements
in
co-op,
but
the
requirements
are
quite
soft.
I
If the response is A times larger than the request, the amplification factor is A. You can increase the bandwidth by sending more GETs, and if you're really advanced and have the ability to do a POST, you can also increase the amplification factor that way. Observe makes things much worse, because a single request results in a number of responses, so the amplification factor is A times the number of notifications; and an attacker can register the same client several times by using a different token or a different port number.
I: If you can POST to the resource, you might trigger notifications, and the newly added conditional attributes make things much worse, because then you can request that your target should get a notification every second, or ten times every second. Multicast or group requests also make things worse: it's not a single server sending several responses anymore, it's several servers each sending a response. If the number of servers is m, you get a times m as the amplification factor.
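The arithmetic above can be sketched as follows. This is illustrative only: the request and response sizes are made-up example values, not measurements of any deployment.

```python
# Illustrative amplification-factor arithmetic; sizes below are invented
# example values, not measurements of any real CoAP deployment.

def amplification_factor(request_bytes, response_bytes, responses_per_request=1):
    """Bytes sent back by the server(s) divided by bytes sent by the attacker."""
    return (response_bytes * responses_per_request) / request_bytes

# A single GET whose response is 8x larger: factor a = 8.
a = amplification_factor(20, 160)

# Observe: one registration triggers m notifications, so the factor is a * m.
observe = amplification_factor(20, 160, responses_per_request=10)

# Multicast/group request answered by m servers: again a * m.
multicast = amplification_factor(20, 160, responses_per_request=5)

print(a, observe, multicast)  # 8.0 80.0 40.0
```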
I: We think there is a need for harder requirements. The current requirements are quite soft; they are SHOULD and not MUST. CoAP does not define what a large amplification factor is; there is just an example where 100 is likely large and 10 is probably not.
I: This is further softened by "if possible" and "generally" in some of the documents, and there are several documents talking about this. Observe is a bit more strict, it has a MUST, but it does not say when this MUST applies or how many messages you can send, and it also says that you must send a Confirmable message and receive an acknowledgment, and this acknowledgement can be spoofed.
I: Recently, the IETF has published QUIC, and QUIC has a very strict anti-amplification limit of three times: a server must not send more than three times what it has received before it has validated the address. In CoAP you can validate the address either by using a security protocol or with the newly standardized Echo option.
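A minimal sketch of what QUIC-style accounting could look like on a CoAP server. Everything here (class and method names) is hypothetical; the only rule taken from the discussion is "send at most three times the bytes received until the peer's address is validated, for example via the Echo option".

```python
# Hypothetical sketch of QUIC-style 3x anti-amplification accounting applied
# to a CoAP server; names are invented, only the 3x rule comes from the talk.

class AntiAmplificationState:
    LIMIT = 3  # the QUIC anti-amplification factor mentioned above

    def __init__(self):
        self.bytes_received = 0
        self.bytes_sent = 0
        self.address_validated = False  # set True after a correct Echo exchange

    def on_request(self, size):
        self.bytes_received += size

    def may_send(self, size):
        if self.address_validated:
            return True
        return self.bytes_sent + size <= self.LIMIT * self.bytes_received

    def on_send(self, size):
        self.bytes_sent += size

state = AntiAmplificationState()
state.on_request(50)        # unvalidated client sends a 50-byte request
print(state.may_send(120))  # True: within 3 * 50 = 150 bytes
print(state.may_send(200))  # False: exceeds the limit; send a small Echo instead
```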
I: The denial of service is a problem; it tarnishes CoAP's reputation, and I think CORE should at least make sure it doesn't get worse. We can probably not fix all the existing implementations, but this absolutely needs to be done; we think it needs to go into the core CoAP document, RFC 7252. Multicast is a little bit harder, and I don't know exactly what those requirements should say. I think QUIC is a good starting point, and everything can be discussed. Then the question is, if you agree on that, where should it go?
I: Should it be in Echo-Request-Tag, which has already shipped? Should it be an update of RFC 7252, should it be in this document, coap-attacks, or should it be in another document? So I basically have three questions for CORE. Do you agree that, for the security awareness it raises, it would be good to publish this? Do you agree that it is good to strengthen the anti-amplification limits? And third, where should such strengthened anti-amplification limits be published?
B: Yeah, I have several comments here. One is a technical comment: QUIC gets by with this hard limit because the client can simply inflate the request.
B: I think what we really should be doing, because this is not a problem that can be solved with a simple rule like the QUIC rule, is write up something like a BCP that tells people how to avoid being a valuable target for an amplification attacker, and I think there are some not-so-trivial answers that this document needs to provide.
B: So it's not something that we can cover together in five minutes, and I think we have to avoid overloading our existing documents with this, because we want to get documents out and make sure that people don't put in new stuff there that hasn't been discussed and that creates problems, and so on. And I think it needs to be a BCP, because really there is no hard answer that you can give here; it needs to be explained what needs to be done.
B: Multicast is hard, and Observe is obviously something that could benefit from a few clarifications; it would also be nice to address the spoofing situation there, and so on. So I think it would be really useful to have this BCP, but I think we need to make sure that it is developed in such a way that other documents are not stopped in their tracks.
B: Well, conditional Observe is something that is out there today, so I don't think that publishing or not publishing will change anything here; I mean, this is essentially an application protocol.
A: Marco here. An option could be to address especially points one and four in your top list already in this document, because you are giving the background and discussing the problem. Points one and four are really about CoAP and Observe, let's say one-to-one communication, while points two, three and five are about group communication, and they could possibly go into groupcomm-bis, which is already updating CoAP and Observe for other reasons.
B: I don't know what to say about that. One way is: let's do a BCP that really addresses the problem and that can be referenced in RFPs, so people can say: folks, you have to use CoAP; do you actually follow this BCP? That's a useful way to take on the problem and actually have an effect. The other way is: yeah, let's do a few more RFC 6919 keywords in our various documents, but that's not helping anything.
G: Can you hear me? A question for Carsten, just trying to understand: why is it a problem if we have both the BCP and also bring up the problem in the various drafts which we are working on?
J: A question for the authors. I'm wondering: are there any documented solutions to DDoS attacks based on the underlying UDP protocol? Because that is, at the end of the day, the problem, right? I don't think there are. I mean, the specification can only guide; we cannot really propose solutions unless there are solutions, because I'm not aware of any.
A: And since we were talking about that, we jump to groupcomm-bis. Right, this is an update to the groupcomm-bis document. As a recap, the idea here is to replace the old experimental RFC 7390, formally obsoleting it, and at the same time updating CoAP and Observe when used for group communication. It has a number of things in scope, with CoAP used by default over UDP and IP multicast.
A: This update was mostly about a number of points discussed at the June interim meeting, and most of them were about the caching model. The focus of this document was agreed to remain on caching from the origin client's point of view.
A: So we are basically following the original rules from CoAP, with a little addition: we are allowing a case where the client may not send the group request to the servers in the group at all, but entirely serve it from its cache.
A: That is only possible if the client has full, up-to-date knowledge of the group membership, so of the servers in the group, and for each of those it has an up-to-date response cached. How to achieve that is out of scope, but this condition has to hold to possibly do that. That's for the freshness model. For the validation model, we had introduced in previous versions a new option that was considered just a bit too complicated at the interim.
A: We compared and discussed four different alternatives and, in the end, we converged on a proposal from Esko about using basically the original ETag option, pretty much in the same way for the servers, just giving a bit more homework to the client in case of collisions or value conflicts in the ETag values offered by different servers in the group.
A: The client is free to ignore it. The other things related to caching were actually moved out of the document, because they were about the caching model on the proxy, which builds on the one for the origin client but, of course, introduces a number of additional features and checks.
A: All this was considered a bit too specific for this document, and it was agreed to move it to a more appropriate document that we actually have for proxy-based group communication, which is groupcomm-proxy. That is not on the agenda for today, but all this kind of content was moved there.
A: Then we got a review from John on version four; thanks a lot for that. It was basically five points, and we think we addressed three of them. So transporting CoAP over UDP and IP multicast is now just the default transport; you can go for different ones. Similarly, Group OSCORE is the default choice for security in the group, in principle.
A: Also, as shown in the previous presentation, and building on John's review, we still have to clarify in more detail how we are updating and obsoleting; we already have some text on that, which we will improve. On the biggest point, about amplification attacks, we added a new section exactly about this. It's not complete, and we plan to revise it.
A: Also considering the recent submission of coap-attacks, it should be good enough for a first reading to get more feedback and more input, especially from John. Also, following the discussion on this already at the interim, we are now stating it more strongly than before.
A: I hope it is clear that we really don't recommend the NoSec mode; it's strongly discouraged. But we are mentioning examples where, if you really know what you are doing, in particular cases it may still be acceptable, or even the only choice, especially for a just-deployed device that is trying to discover what surrounds it and to get in touch, for instance, with an entry point like a Resource Directory. Carsten?
B: Yeah, I think that's again one of those places where it's easy to say that NoSec mode is not recommended. RFC 6919 has this additional phrase: "but we know that you will do it anyway". So that's not the useful part. The useful part is to point out that the way this is used needs some things, like rate limiting or whatever is needed, to make the application of the NoSec mode safer. And yeah, there's also Block2, there's Echo.
B: I think at this point we can rely on those a little bit, but I think the important thing is that NoSec is not something we can deprecate in any way; it will still be needed for specific applications before security can actually be established. In the DISPATCH meeting, I don't know if you were there, there was an interesting discussion in the SDP space. There is something called SDES, security descriptions, which is, well, a weird way to do security, and people wanted to deprecate that, and... no.
A: Yeah, some other little additions. We also included a new small section that Esko proposed, which was also quickly touched on at the interim, giving a bit more of a high-level description of what security you can have on the different communication legs if you have a proxy. You should really go end-to-end for sure if you have a forward proxy, and you should do the same if you have a reverse proxy, though we are admitting a case with a reverse proxy that is really totally trusted, where security can end at the proxy.
A: We gave some more clarifications on some options where more work was needed, and we raised an open point about terminology that maybe we can quickly check today. We are now consistently using "backward/forward security", and Esko was wondering if we should switch to "backward/forward secrecy".
A: I'm a bit worried that "forward secrecy" can be confused with the "perfect forward secrecy" used in different contexts, but I have seen both terminologies used in the literature about group communication. Anyway, is there any input or opinion from anyone about this?
I: John here. It's very good to explain what you mean, like "compromise of key x does not lead to compromise of key y, or of future keys". We have an explanation, and we don't mean that. I think you should explain what you protect against, and not rely on the terms: the informal IETF security glossary provides a definition, but it also says this area is a mess and experts disagree.
A: Yeah, thank you. And that's it. In the next version we plan, for sure, to complete addressing the comments from John. We have a few still-open issues on GitHub, and three new ones added today by Christian, thank you. We still have to do final tests on advanced functionality, with block-wise for the little you can do with it in a group. So hopefully the next revision can be considered for Working Group Last Call.
A: Right, this is instead a much more substantial update, to the Group OSCORE document. Just to give a very high-level overview: this was about introducing a proposal from Christian about recycling Group Identifiers in the same group; addressing a big point from Ben Kaduk about the use of the same public keys both for signing and for the Diffie-Hellman key derivation; and then a number of changes, especially to the group mode.
A: As to the input from Christian: we were forbidding the Group Manager from reassigning the same Group ID in a group, because that was going to break the security of very long-lived observations. As a reminder, the Group ID changes when the Group Manager rekeys the group and changes the key material. There is actually a fix for that which enables perpetual recycling; it just requires some more homework for the Group Manager.
A: The Group Manager has to remember, for each group member, the Group ID that was used in the group when that group member joined. We call these "birth Gids", and if the Group Manager decides, for any reason, to rekey the group, it has to notice whether it is about to assign a Group ID that corresponds to any of those birth Gids.
A: If that's the case, it also has to evict from the group those very old group members whose birth Gid matches; they will eventually join the group again, dropping their observations and solving the problem we had in the first place. So, bottom line: with this addition it is now possible to safely reassign the same Group ID in the group.
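The Group Manager bookkeeping described above can be sketched as follows. The data model is hypothetical; the only rule taken from the discussion is: on rekeying, if the new Group ID equals the birth Gid of any current member, evict that member first, so its old observations cannot survive under the reused Gid.

```python
# Hypothetical sketch of the Group Manager's "birth Gid" bookkeeping; the
# class and field names are invented for illustration.

class GroupManager:
    def __init__(self, gid):
        self.current_gid = gid
        self.birth_gid = {}          # member id -> Gid in use when it joined

    def join(self, member):
        self.birth_gid[member] = self.current_gid

    def rekey(self, new_gid):
        # Evict every member whose birth Gid collides with the recycled Gid.
        evicted = [m for m, g in self.birth_gid.items() if g == new_gid]
        for m in evicted:
            del self.birth_gid[m]    # must rejoin, dropping old observations
        self.current_gid = new_gid
        return evicted

gm = GroupManager(gid=0)
gm.join("node-a")                    # birth Gid of node-a is 0
gm.rekey(new_gid=1)                  # no collision, nobody evicted
gm.join("node-b")                    # birth Gid of node-b is 1
print(gm.rekey(new_gid=0))           # Gid 0 recycled: ['node-a'] is evicted
```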
A: And then we had the big point from Ben. To recap: in the group you can use both the group mode and the pairwise mode, which requires deriving symmetric pairwise keys, and for that we were using the public keys in two ways, both for the signing in group mode and for this key derivation.
A: This is not exactly common practice; you may do that, but you need to really know what you are doing and prove you are not breaking any security property. Before, we were building on a pretty well-known paper, reference [5] here, that provided a proof about this, but not in a context so specifically aligned to Group OSCORE.
A: So we ended up producing a proof specific to Group OSCORE, which was provided by our colleague from Ericsson; thank you very much for that. You can find it in reference [6] here. The proof required a little adaptation to the way we derive the symmetric keys: basically, we also need to include the public keys of the two endpoints in the derivation of the pairwise keys.
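The adaptation just described can be sketched like this. The only point illustrated is that both endpoints' public keys now enter the key derivation alongside the shared secret; the field layout and label are invented for the example, and the draft defines the exact HKDF inputs.

```python
import hashlib
import hmac

# Sketch only: the info layout and label below are invented; the draft pins
# down the real HKDF inputs for the Group OSCORE pairwise keys.

def hkdf_sha256(ikm, salt, info, length=16):
    """Minimal HKDF (extract-then-expand) over SHA-256."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    t, okm, i = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        okm += t
        i += 1
    return okm[:length]

def pairwise_key(shared_secret, sender_key, pub_self, pub_peer):
    # Including both public keys is the adaptation required by the proof.
    info = b"Group OSCORE pairwise key" + pub_self + pub_peer
    return hkdf_sha256(ikm=shared_secret, salt=sender_key, info=info)

k = pairwise_key(b"\x01" * 32, b"\x02" * 16, b"A-pub", b"B-pub")
print(len(k))  # 16
```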
A: After this little change, it was possible to show that, if you are using ECDSA signing keys, the original proof actually applies to Group OSCORE just as well. If you are using EdDSA signing keys, then you really need a proof of its own, and that was provided. We don't have the full proof in the draft, but it is presented at a high level in the security considerations, and we also added a considerable amount of text to better document it.
A: In particular, the conversion process that you have to do from Edwards to Montgomery coordinates if you use EdDSA keys. Then we have the points from John; most are common to Group OSCORE, others are specific to the group mode. This one is general, and it was about the need to say something at all about the format of the public keys to use in the group: it has to be a format that includes not only the key, but also a full description of the public key algorithm and possibly related parameters.
A: And, of course, it can carry additional metadata, like the key's subject. Good formats to use, which we now mention in the draft, are certificates of different kinds, CWTs, and unprotected CWT Claims Sets, like the one in the example here. But of course one group works in one way, and every member has to use the same format in the group.
A: And the group can work in one way, but now it is one way out of three possible, not two: the group can work in group mode only, in pairwise mode only (which is new), or in both modes, and a group member has to know which, at the latest when joining the group. Enabling these alternatives really required fully decoupling the two modes and revising the information included in the security context.
A: The one in the figure, taken from the draft, again builds as a delta on top of the original context defined in the OSCORE RFC, but I have highlighted in green the new elements we have added in this revision.
A: So, yes, we have now also involved the public key of the Group Manager; it becomes clear why in the next slide. We indicate specifically the Signature Encryption Algorithm, that is, the algorithm used to encrypt the message in group mode, when a signature is in fact also involved. And we have added a new common Group Encryption Key there, like the other common keys in the group, which is used (more on that later) to separately further encrypt the signature in group mode. So, just to recap what you are supposed to use in which mode: in group mode, you encrypt the message with the Signature Encryption Algorithm, then you sign the ciphertext with the Signature Algorithm, then you take that signature and encrypt it, using the Group Encryption Key available for that; in pairwise mode you use the derived pairwise keys instead.
A: Similar updates had to be done in the external AAD.
A: We have now specified the additional algorithm entries from the Common Context here, also in the algorithms array, from which we have removed a number of elements that were descriptive of the algorithms and their parameters. That was just redundant, because now all that information is in the public keys in the first place, which come at the end of the external AAD, as you can see. So there you find, serialized in the format used in the group, the public key of the sender protecting the message and the public key of the Group Manager.
A: It is useful to have the public key of the Group Manager there too, both to have a full, complete description of how the group works and to cheaply prevent a possible group-cloning attack that we found. It is pretty hard to mount, so to say, but this was a cheap solution to prevent it by construction altogether, and there is a discussion of the details in the security considerations. As I mentioned, in group mode we are now also separately encrypting the signature. To not confuse the steps:
A: This is the order: you encrypt the COSE plaintext first, you sign the resulting COSE ciphertext, and then you separately encrypt that signature.
A: So how is the signature encrypted? It is XORed with a keystream, which is generated with an HKDF construction, the same one used to derive keys, taking as input the common Group Encryption Key I mentioned before and some information from the message in question: the Partial IV of the message and the Sender ID of the Partial IV's generator. Without overthinking this, it is basically the same rationale that OSCORE uses for building the AAD and the nonce.
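The XOR-with-keystream step can be sketched as below. The HKDF expand and the info layout are illustrative assumptions; the draft pins down the real inputs, and the only properties shown are that the keystream is derived from the Group Encryption Key plus per-message data, and that applying the XOR twice round-trips.

```python
import hashlib
import hmac

# Sketch only: the label and info layout are invented; the draft defines the
# actual HKDF inputs for the signature keystream.

def hkdf_expand(prk, info, length):
    t, okm, i = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        okm += t
        i += 1
    return okm[:length]

def encrypt_signature(signature, group_encryption_key, sender_id, partial_iv):
    # Keystream bound to the message via Partial IV and its generator's ID.
    info = b"signature-keystream" + sender_id + partial_iv
    keystream = hkdf_expand(group_encryption_key, info, len(signature))
    return bytes(s ^ k for s, k in zip(signature, keystream))

sig = b"\xaa" * 64
enc = encrypt_signature(sig, b"\x07" * 16, b"\x01", b"\x00\x05")
# XOR with the same keystream decrypts, so applying it twice round-trips:
print(encrypt_signature(enc, b"\x07" * 16, b"\x01", b"\x00\x05") == sig)  # True
```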
A: Related to signature verification: on the receiving side, we also added a pretty strict rule about verifying the signature first and, only if that is valid, continuing with the decryption of the ciphertext.
A: Right. Then we also wanted to admit, in group mode only, encryption algorithms that provide only confidentiality and not integrity protection, so basically they don't have a tag; there are a number of these that are going to be registered in COSE at some point.
A: From an integrity-protection point of view you are still fine anyway, thanks to the presence of the signature, and the good thing is that, of course, you reduce overhead that could have been unbearable in some use cases that would otherwise have to carry both the tag and the large signature in every group-mode message. I think John especially mentioned cellular networks in his main message on the list at some point. So integrity per se is fine.
A: To fix this point, we actually made the key management policies stricter. Basically now, if a group member leaves the group, the Group Manager must rekey the group and, when doing so, it must inform the remaining group members about the leaving node. It basically has to inform the remaining group members of the now-invalid Sender IDs, especially the ones of the leaving nodes, so that the current group members can do some cleanup and, especially, forget the public keys of the nodes that left.
A: Of course, a group member can be unlucky, miss some rekeying instances, and stay behind, and the Group Manager must provide some kind of recovery mechanism for that group member to catch up and be in sync again. In this document we are phrasing this as requirements for the Group Manager, as services to provide, but not defining in detail how; the suggested and recommended Group Manager that we are defining in ACE actually defines in detail how this can be achieved.
A: The other, easier things: we added more guidelines for external signature checkers and for how to handle some possible corner cases when they do their job. We removed some appendices that became moot due to the changes we made, or that included content no longer considered secure enough anyway. And we made a good revision of the security considerations in light of the updates I showed in the previous slides, especially about the group mode and the new proof from Ericsson.
A: So that's it for this version. We still have to improve some parts of the security considerations; I think John also wanted to mention some more properties that we are fulfilling, just with a better analysis. And we need to double-check that the few issues still open on GitHub can actually be closed, because they are mostly addressed.
A: I believe so. We definitely expect one more submission and, if we are really addressing all those points raised before and no more issues arise, that could be considered for a second Working Group Last Call.
A: As to tests, we hope after the summer. Rikard is updating the Java implementation, and I think Christian is also interested, so those two implementations can hopefully be the next ones to interop. I hope it can happen after the summer.
K: Right, I hope you can hear me. Yes? Good. So this is the presentation of the draft about combining OSCORE and EDHOC, an update to it. Let me go to the next slide. Yes, as a recap of what this is about:
K: First of all, EDHOC is a lightweight authenticated key exchange which is being developed in LAKE, and it has a number of use cases, but the main use case can be said to be keying OSCORE, that is, establishing an OSCORE security context. Normally this would take two round trips: run EDHOC and then have an OSCORE security context set up. What this draft does is combine them.
K: Basically, we combine EDHOC message 3 with the first OSCORE-protected request into a single request that we call the EDHOC + OSCORE request. We transport both the actual protected OSCORE request and EDHOC message 3 together, and the point of doing this is to achieve the minimum number of round trips for setting up the OSCORE security context and completing the first OSCORE transaction.
K: In the combined flow, you can see that the client sends EDHOC message 1 and gets EDHOC message 2 from the server, but now, instead of sending a plain EDHOC message 3, it sends EDHOC message 3 plus the OSCORE request. The key point is that, after the client has received EDHOC message 2, it can already derive the OSCORE security context, so it is in a position to protect the OSCORE request and send it to the server right after receiving message 2.
Two
and
in
the
same
way,
then
the
server
will
first
process
the
head
of
message:
three
produce
those
core
security
context
and
then
it
can
unprotect
the
oscar
request.
That
comes
together
with
other
message:
three
and
here's
kind
of
an
overview
of
the
message:
how
it
can
look.
Basically
so
in
the
payload,
you
essentially
have
deadlock
message:
three
plus
the
overscore
ciphertext,
and
the
way
we
signal.
This
combination
is
by
using
a
new
option,
which
we
call
this
edit
option.
As
you
can
see.
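The payload layout just described can be sketched as follows. The byte values and the length-prefix framing are placeholders invented for the example; the draft specifies the real framing of message 3 within the payload.

```python
# Sketch of the combined EDHOC + OSCORE request payload: EDHOC message 3
# concatenated with the OSCORE ciphertext. The one-byte length prefix is an
# invented stand-in for the draft's actual framing.

EDHOC_OPTION = 21  # option number the authors intend to request from IANA

def build_combined_payload(edhoc_message_3, oscore_ciphertext):
    assert len(edhoc_message_3) < 256  # sketch-only single-byte prefix
    return bytes([len(edhoc_message_3)]) + edhoc_message_3 + oscore_ciphertext

def split_combined_payload(payload):
    # The server splits the payload back into message 3 and the ciphertext.
    n = payload[0]
    return payload[1:1 + n], payload[1 + n:]

msg3 = b"\x58\x20" + b"\x11" * 32
ct = b"\x99" * 10
m3, c = split_combined_payload(build_combined_payload(msg3, ct))
print(m3 == msg3 and c == ct)  # True
```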
K: Option numbers 21 and 13 would both work for this use case, and we intend to make a request for early IANA allocation of number 21, because it gives us these properties and, as I said, a one-byte size for this EDHOC option: since we assume that the OSCORE option is going to be in the message, which it is, the option delta will only be 12.
K: Further than that, we extended and improved a bit the background information about EDHOC itself, explaining a bit more in this draft what it actually is; this was based on feedback from the IETF 110 meeting. We also tried to keep track of, and align this draft with, what's going on in the EDHOC draft, because there have been a number of updates there.
K: One point especially was about the connection identifiers, which previously were these special encoded values; now they can be either CBOR byte strings or CBOR integers, so that affected this work a bit too. Other than that, we improved and updated the examples a little.
K: So we had started by defining, in this CORE document, a way to convert from EDHOC connection identifiers to OSCORE Sender/Recipient IDs, based on a proposal from Christian.
K: In addition to that, we also worked on defining a conversion method from the OSCORE Sender/Recipient IDs to the EDHOC IDs. The reason we need this is that, since the EDHOC IDs can be both CBOR byte strings and CBOR integers, there are basically two equivalent EDHOC IDs for each OSCORE ID.
K: So, in the solution we propose in this document, we need a deterministic way to do this conversion, picking either the integer or the byte-string representation of the identifier. When you get this combined EDHOC + OSCORE request, the way you retrieve the EDHOC session is by looking at the OSCORE option and taking the kid from it; you then need a deterministic conversion to one specific EDHOC ID to be able to look up the correct EDHOC session.
K: So there can't be any ambiguity: you have to know whether to take the integer or the byte-string representation. A good property of the rule we propose here is that you always choose the smallest representation in terms of size. We currently have this defined in Appendix A, and one thing we were discussing is whether to move it, in fact, to the other document.
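One way the deterministic conversion could look is sketched below. The concrete rule is an assumption to be checked against the draft's Appendix A: a one-byte kid that is itself a valid CBOR encoding of an integer in the range -24..23 maps to that CBOR integer, since that is the smaller representation; every other kid stays a CBOR byte string.

```python
# Hypothetical sketch of the OSCORE-kid -> EDHOC-connection-ID conversion;
# the exact rule lives in the draft's Appendix A, this encodes one plausible
# "smallest representation wins" reading of it.

def oscore_id_to_edhoc_id(kid: bytes):
    if len(kid) == 1:
        b = kid[0]
        if 0x00 <= b <= 0x17:        # CBOR encodings of ints 0..23
            return b
        if 0x20 <= b <= 0x37:        # CBOR encodings of ints -1..-24
            return -1 - (b - 0x20)
    return kid                        # no integer form: keep the byte string

print(oscore_id_to_edhoc_id(b"\x05"))  # 5 (integer form is smaller)
print(oscore_id_to_edhoc_id(b"\x21"))  # -2
print(oscore_id_to_edhoc_id(b"\xff"))  # b'\xff' (no integer form exists)
```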
K: Yeah, and some open points we have. One general open point was about expanding the scope of this document, let's say, because previously we had some earlier attempts to expand the scope. For instance, we had the derivation of the OSCORE security context, which used to be in the EDHOC draft, was moved to this CORE draft, and then back to the EDHOC draft; same thing with the conversion from EDHOC to OSCORE IDs, which used to be in this CORE draft and was moved to the EDHOC draft.
K: So one thing we were thinking about was to expand the scope of this document, so that it is not only about this combined-request optimization, but more generally about profiling the use of EDHOC for CoAP and OSCORE. There, for instance, we already have this conversion from OSCORE IDs to EDHOC IDs included in this CORE document, but we had some other ideas of things to include, like the usage of a URI compression option.
K: That's a proposal from Christian: you can indicate the URI using an option, basically to reduce the overhead. There is also defining information related to web linking: the resource type attribute could possibly be registered in the EDHOC main draft, but further target attributes, and covering the applicability statement, could possibly be added here in this CORE document; basically, any other things that would be considered too detailed to be included in the EDHOC draft could potentially be included in this CORE draft.
K: And then, next steps. As I mentioned, we plan to do this request for early IANA allocation of option number 21 for the EDHOC option that will signal the use of this combined request. As for the actual draft, we plan to update the text and figures a bit to make them consistent with the way the EDHOC draft now uses the content format, because that has been changed a little now that the connection identifiers are plain CBOR values.
K: The use of the content format is a bit different, and we want to add a bit more information about the applicability statement, to carry information like: does this client or server support the EDHOC + OSCORE request, and what ID conversion method does it use? That can be defined in the actual applicability statement.
K: More than that, we want to update our actual implementation, because we have running code: we have an implementation of this, except it is based on EDHOC version 7, so we need to update it. First we will update our general EDHOC implementation, and then the next step is to update this combined-request implementation. We would also like, and need, reviews of this document; those are very welcome.
G: Yeah, I think this is really good work, and maybe it's a little bit not so nice that some of the content you produce is actually moved out of the draft, but I think it's great that you actually fill in a lot of the details, and I don't have strong opinions on where it ends up. In the case where things were moved back to EDHOC, it was because the chairs of the LAKE working group had support in the charter: they said that this information sort of belongs to the LAKE charter, so it should be in the EDHOC draft.
G: I think those are the types of considerations that may decide where specific content goes, but I'm really happy that you help out and produce all this input here. Everything you have listed here, I think, is worth documenting somewhere, so please put it in this draft. Mališa?
C: I'd just like to reiterate what Göran said about the scope of the draft. Essentially, the LAKE working group has a pretty clear charter: we need to produce a deliverable, a solution that is going to key OSCORE. So essentially the EDHOC draft needs to be self-contained, in the sense that it can work as-is to key OSCORE.
C: The way I see it, based on the points that you raise in this slide: the core points about keying OSCORE, such as the conversion of the connection IDs, belong to the EDHOC draft, and then any potential optimizations, so essentially compressing the URI or web linking, I guess these belong to the CORE draft. That is at least how I see it.
A: Carsten is asking what Christian's separate proposal is. I think it was presented at an interim, yeah.
K
That was the compression option; it was presented, but to keep it short: it was about using an option to indicate the URI, so you can save some more. Instead of having to spell out /.well-known/core, you can have a very small option indicating the same thing.
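The saving can be illustrated with the CoAP option encoding from RFC 7252. This is a rough sketch; the short marker option (its number and semantics) is a hypothetical stand-in for the proposal, not something defined in any draft:

```python
# Sketch: byte cost of naming /.well-known/core via Uri-Path options
# (RFC 7252) versus a hypothetical empty marker option meaning the same
# thing. The marker option is an assumption for illustration only.

def option_size(delta: int, value: bytes) -> int:
    """Encoded size of one CoAP option (simple case: delta, length < 13)."""
    assert delta < 13 and len(value) < 13, "extended encodings not sketched"
    return 1 + len(value)  # 1 header byte (delta/length nibbles) + value

# GET /.well-known/core carries two Uri-Path options (option number 11):
uri_path = option_size(11, b".well-known") + option_size(0, b"core")

# A hypothetical empty marker option costs a single header byte:
marker = option_size(1, b"")

print(uri_path, marker)  # 17 bytes vs 1 byte
```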
K
First of all, OSCORE uses AEAD algorithms to provide security, and there is a draft in CFRG that defines certain limits you have to take into account in terms of key usage, meaning how many times you use a key to encrypt, and failed decryptions, meaning how many times decryption with a key fails, before you need to renew these keys. If you do not follow these limits, it would be possible to break the security properties of these AEAD algorithms; I have a reference here to the CFRG document.
K
So what this draft does is look at the AEAD limits and their impact on OSCORE, including defining good limits for OSCORE. We originally started from assumptions from TLS and DTLS, but we have now revised things: John Mattsson suggested a bit of a different approach at the April CoRE interim. Beyond that, the draft currently describes how you should handle these limits within OSCORE, meaning you have to have counters in your security context to count key encryption usage and invalid decryptions.
K
Those are what we call the q and v counters, and when these usage limits on the keys are exceeded, you need to rekey the context.
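The counter bookkeeping just described can be sketched as follows. The class and method names are illustrative, not the draft's API; the 2^20 limit values are the fixed ones proposed later in the talk:

```python
# Sketch of per-context usage counters: count encryptions (q) and failed
# decryptions (v), and signal when the keys must be renewed.

Q_LIMIT = 2**20  # max encryptions with one key (fixed limit from the talk)
V_LIMIT = 2**20  # max failed decryptions with one key

class SecurityContext:
    def __init__(self):
        self.q = 0  # encryption count
        self.v = 0  # failed-decryption count

    def note_encryption(self):
        self.q += 1

    def note_failed_decryption(self):
        self.v += 1

    def needs_rekey(self) -> bool:
        return self.q >= Q_LIMIT or self.v >= V_LIMIT

ctx = SecurityContext()
ctx.note_encryption()
assert not ctx.needs_rekey()
ctx.v = V_LIMIT  # pretend the failed-decryption limit was hit
assert ctx.needs_rekey()
```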
Point three here is the new material we added, which is about defining a method for rekeying OSCORE. It is loosely inspired by the current Appendix B.2 of OSCORE, and the general goal is to get a new Master Secret and Salt.
K
That then means you derive new keys, so you have new Sender and Recipient Keys; basically, you renew the keys you actually use for encrypting the data you are sending. The method we propose here also achieves perfect forward secrecy. Now, a bit of an overview on the key limits. Previously we calculated the limits based on suggested probabilities from the CFRG and DTLS documents, but after input from John we took a bit of a different approach.
K
So basically, what we do now is set fixed limits: we say that the q limit is 2^20, v is also 2^20, and l is 2^8, where l is the maximum message length in cipher blocks. We just set these fixed values and then calculate what the probabilities would be given those q, v and l values. Here in the table you can see the resulting probabilities for these four algorithms: the IA probability, the integrity advantage, which relates to the integrity properties, and the CA, the confidentiality advantage, which relates to the confidentiality properties. The general thing to take home is that with these q, v and l values, and using the formulas from the CFRG document, the values we present in this table should be safe for these four algorithms.
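As a rough illustration of how such a table is produced, here is the shape of the computation for an AES-CCM-style algorithm, using bound formulas of the general form found in the CFRG AEAD limits draft. Treat the formulas and the resulting numbers as a sketch, not as a reproduction of the draft's table:

```python
from math import log2

# Illustrative: confidentiality (CA) and integrity (IA) advantage bounds
# for an AES-CCM-style AEAD given the fixed usage limits from the talk.
# The CCM-shaped formulas are an assumption, not the normative ones.

q = 2**20   # messages encrypted under one key
v = 2**20   # failed decryption attempts
l = 2**8    # max message length in cipher blocks
n = 128     # AES block size in bits
t = 64      # tag length in bits (the short CCM tag)

ca = (2 * l * (q + v))**2 / 2**n              # confidentiality advantage
ia = v / 2**t + (2 * l * (q + v))**2 / 2**n   # integrity advantage

print(f"CA ~ 2^{log2(ca):.0f}, IA ~ 2^{log2(ia):.0f}")
```

The point of fixing q, v and l first is that the advantages then fall out of the formulas, rather than picking a target probability and solving for the limits.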
K
So one question that comes up here is: is, for instance, an IA probability of 2^-54 good enough, sufficiently secure? The CA here is 2^-70, so that should not be an issue, let's say. But again, AES-128-CCM-8 is the most problematic due to its short tag length.
K
So we had a bit of a special study on that. Then we come to the actual key update method: basically, we defined a new method for updating and rekeying OSCORE. It is based on the exchange of nonces R1 and R2 between the client and the server, and we have a special update function that derives the new OSCORE security context.
K
In the approach we present, you never change the ID Context, while the current Appendix B.2 does in fact change the ID Context. Other general properties: it can be initiated either by the client or the server; it works even if one of the peers reboots; there is no issue with key reuse or nonce reuse; the actual procedure completes in a single round trip, after which you start using the new context; and finally, it is compatible with a possible prior key establishment through the EDHOC protocol.
K
This is an overview of the updates to the OSCORE option. We made a number of updates here, but the key ones are that we defined the use of one of the flag bits to extend the OSCORE option with flag bits 8 to 15, so that we can fit this new flag bit 15.
K
We use that to indicate that the OSCORE option now includes an ID Detail; the ID Detail is basically a length and then the value, and this is what we actually use for exchanging the nonces. Another key point here is that this bit 15 can be used to explicitly indicate that this is a key update message, so there is no ambiguity or confusion about whether a message is a key update message. In the current Appendix B.2 there is no explicit indication like that; you can still figure it out through some logical steps, but there is no explicit indication.
here's
the
general
message
flow.
So
essentially
the
the
key
point
here
is
that
what
what
ends
up
happening
is
we
have
this
update
function
to
the
right
that
can
either
use
an
hkdf
to
produce
a
new
master
secret
or
it
can
use
the
add-on
exporter
depending
on?
If
your
linear
context
started
with
ad
or
not,
and
the
client
will
send
an
initial
request
protected
with
an
intermediate
context.
K
The server receives that request and can unprotect it with the intermediate context, and then the server actually generates the final context, to be used to protect the response. In the same way, the client generates the final context, to be used to unprotect the response. The key to the way we update the context is that we simply take the nonces as input; then it depends on whether it is EDHOC-based or not, but fundamentally that is what ends up happening.
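The non-EDHOC branch of the update function can be pictured with a generic HKDF-based derivation (RFC 5869). This is a simplification with assumed inputs and an assumed info label, not the draft's exact construction:

```python
import hashlib
import hmac

# Sketch: derive a new Master Secret from the exchanged nonces R1, R2 and
# the old Master Secret via HKDF-Extract/Expand (RFC 5869, SHA-256).
# The "key update" label and the exact inputs are illustrative assumptions.

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    out, block, i = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([i]), hashlib.sha256).digest()
        out += block
        i += 1
    return out[:length]

def update_master_secret(old_secret: bytes, r1: bytes, r2: bytes) -> bytes:
    prk = hkdf_extract(salt=r1 + r2, ikm=old_secret)
    return hkdf_expand(prk, b"key update", 16)

new1 = update_master_secret(b"\x00" * 16, b"\xaa" * 8, b"\xbb" * 8)
new2 = update_master_secret(b"\x00" * 16, b"\xaa" * 8, b"\xcc" * 8)
assert new1 != new2  # fresh nonces yield a fresh Master Secret
```

A new Master Secret derived this way then feeds the normal OSCORE key derivation, giving the new Sender and Recipient Keys mentioned earlier.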
K
And some summary: basically, this document is essentially a twofold update to OSCORE. First of all, we have this material about the limits: how you should track them, and what you should do in terms of message processing, considering the limits, to preserve security. Then we also have this new method for efficient key update, for efficiently rekeying OSCORE with perfect forward secrecy. Initially, this update procedure was planned as a separate draft.
K
However, after some feedback from the CoRE interim, we decided to put these together into the same draft, because it can also make sense: we define the limits and say, okay, if you reach the limits you have to rekey, and then the next section says, okay, this is how you can do the rekeying in practice.
G
Yes, yes, I think this looks really good. I have been watching this from the sideline, and now I have been reading through the draft, and I had a question about what happens. As you know, Appendix B.2 handles the situation both when the client initiates and when the server initiates.
K
Yes, that is a good point; I should have mentioned it here. What is shown here is the client-initiated version, but there is also a server-initiated version, so both the server and the client can initiate this procedure. In practice, when the server initiates it, the client sends any kind of normal request to the server, and then the first response from the server will be the first message of this procedure. But fundamentally, both the client and the server can take the initiative to trigger this process.
K
Yes. Currently we have a list of possible ways to rekey OSCORE, where we mention, for instance, the ACE OSCORE profile and also KNX. But yes, I think, like you said, it would make sense that what we produce here is an alternative that would deprecate the existing Appendix B.2.
G
So my final comment is this: there is one thing that is not attractive with this procedure, but it is even worse in Appendix B.2, and that is that you need to derive intermediate security contexts. That is true, and in this case it is just one intermediate context.
G
Yeah, so if anyone has any feedback on that... but I don't think it is possible to avoid it, because that is basically how you trigger the update procedure in a secure way: you produce a new security context which only the endpoint that has the old context can use.
D
Christian here. I think that we can't in all situations avoid having the second context, but there might be situations in which it is known sufficiently long in advance that a rekeying will need to happen, where it could be avoided. But then again, that opens up another code path that might make things more complicated, and then the question is whether any good comes of it.
A
So, if there are no other questions: we are at the top of the hour. We would have one more document, but there is no time left in the session, I'm afraid; it can be moved to the interims. By the way, we have already scheduled four interim meetings for the autumn, starting from September 15, I think. So see you there, see you in the Gather town, and enjoy the rest of the meeting.