From YouTube: CORE WG Interim Meeting, 2021-04-28
A
My co-chair is taking minutes, and we are going to have six interim meetings before IETF 111, and as you can see we are back on Wednesdays.

So, anyone who wants to bash the agenda for today? If not, then we can move to the first topic.

B
Yeah, just wanted to mention that draft-ietf-core-new-block has finished last call, and the ballot has been issued, so it will most likely go onto next week's telechat. The other document that is waiting with me is core-sid, and there I am waiting on Carsten; I think he mentioned that.

C
The status was that we wanted to get a meeting going between the authors and the shepherd, and maybe a couple more people, and that is still pending.

D
v is the number of forgery attempts, i.e. the number of failed AEAD decryption invocations, approximately, and l is the maximum length of each message in blocks, so blocks of 128 bits or 16 bytes. I completely agree with the analysis in TLS and DTLS that this is necessary: we should have these kinds of counters, and we should re-key. With re-keying, what is maybe even more important is that we limit the impact of key compromise, compared to other limits where you want to avoid nonce reuse or something.

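As a rough illustration of the quantities just defined (a sketch, not taken from any draft): for an ideal MAC with a t-bit tag, the forgery advantage after v failed decryptions is about v·2^-t, so the remaining security level in bits can be estimated as:

```python
import math

def ideal_mac_security_level(v, tag_bits=64):
    """Security level (in bits) of an ideal MAC with a tag_bits tag
    after v forgery attempts: advantage is roughly v / 2**tag_bits."""
    if v < 1:
        return float(tag_bits)
    return tag_bits - math.log2(v)

# With no forgeries attempted, a 64-bit tag (as in CCM_8) gives ~64 bits;
# after 2**20 failed decryptions the level drops to ~44 bits.
print(ideal_mac_security_level(1))      # 64.0
print(ideal_mac_security_level(2**20))  # 44.0
```

This is only the idealized bound the discussion keeps comparing CCM_8 against; the real CCM_8 curve deviates from it for small v, as noted later in the meeting.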
D
So basically, what I think we can do is use a slightly better process for how to calculate these numbers. This slide is not about what the actual re-keying process will be in OSCORE; this is about how we set the numbers for the q and v limits that Rikard's draft will use. I think some of the TLS numbers we can relax enormously for CCM_8, and some of the other values we should maybe make a little bit stricter, so that CCM_8 can be treated as a perfect MAC.

D
So yeah, we don't need to go through these, mostly. I think you don't need any limit at all; it's basically a perfect PRF, if you want to treat it the same way as AES is a permutation. Maybe it is important to keep in mind that the attacks you're protecting against here are very, very different. One is an online attack which, if it happens, has very drastic consequences for a message.

D
The other is distinguishing, which is offline, but on the other hand probably nothing happens: if the attacker can distinguish this, he can just see that it's OSCORE. He can just look at the port number or something and see that. So there can be a quite large gap between this theoretical attack and something practical. Next.

D
I think we can move on, since CCM_8 behaves quite a lot like a perfect 64-bit MAC, and also limiting the advantage per key does not make very much sense in a security protocol where you have multiple keys per connection, and you can also have a lot of connections between the same two peers, yeah.

D
It's quite easy: you don't need to do very much advanced calculation to see when it makes sense to re-key. If you have a linear function, you don't really get any benefits from re-keying directly. If it's quadratic, things will get worse and worse, and at some point you need to re-key. We can take the next slide; I think we can go back to these slides if there's something we need.

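The linear-versus-quadratic point can be illustrated numerically (an assumption-laden sketch, not from the slides): splitting n messages across k keys leaves a linear advantage bound unchanged, but shrinks a quadratic one by a factor of k.

```python
def total_advantage(n_messages, n_keys, per_key_bound):
    """Sum a per-key advantage bound over n_keys keys, each handling
    an equal share of the messages (simple union bound)."""
    per_key = n_messages // n_keys
    return n_keys * per_key_bound(per_key)

linear = lambda n: n * 2**-64        # e.g. forgery-style bound ~ v / 2**64
quadratic = lambda n: n**2 * 2**-64  # e.g. collision-style bound ~ q**2 / 2**64

n = 2**20
# Linear bound: re-keying does not help, the total is the same either way.
print(total_advantage(n, 1, linear) == total_advantage(n, 16, linear))        # True
# Quadratic bound: 16 keys cut the total by a factor of 16.
print(total_advantage(n, 1, quadratic) / total_advantage(n, 16, quadratic))  # 16.0
```

The bound shapes and constants here are illustrative; the actual formulas come from the CFRG AEAD limits work discussed in this meeting.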
D
Yeah, the second bullet here is that some of the values in TLS we might want to make a bit stricter. Right now, CCM_8 also deviates in the beginning from an ideal MAC, which we'll see in the graphs in the coming slides.

E
John, maybe worth clarifying: what is the practical implication of lowering q and l?

D
Right. This slide suggests lowering q and l, but we could also keep them, at least if it's not a problem doing so. If you do that, CCM behaves like an ideal MAC also for low values of v.

D
Here are just some first graphs. You can see how changing these parameters improves the security if you lower l, on the left side. On the right side, you can see how the suggested process that I made seems, I think, better than the TLS one, but I have only used it for a week. It shows how you can measure the security level: you basically take the minimum y value of the graph, from here 0 to 20, then you draw a line, and then you get your security level.

D
So, what in the academic literature is known as single-user versus multi-user is in reality just one or more keys: either the attacker attacks one of many keys, or it attacks a single key.

D
So on the left side you see the security level, or actually not the security level; the security level is the minimum of these. It's rather the complexity, for different values of v. But as you can see, the green line is a perfect 64-bit MAC and the blue curve is CCM_8, and as we can see, it deviates a little bit from a perfect MAC in the beginning, and you actually get only 60 bits.

D
But
if
you
lower
lnq
a
little
bit,
then
it
behaves
like
a
perfect
mac
until
almost
two
to
the
power
of
40,
at
least
to
the
power
35
or
something
so
we
can
definitely
and
ccm8
is
not
worse
than
ccm
with
a
full
mac
in
this
aspect.
D
So I think that's not the problem; CCM_8 is basically a perfect MAC. So basically, my recommendation would be to take the TLS values. The process is a bit flawed, but most of the values they get out are quite reasonable anyway, except for CCM_8. I think we can take the TLS values and use a much, much higher v value for CCM_8, since we will probably mostly use CCM_8.

D
q and v should be similar, or something; I don't remember the details of this. But I think whatever values we choose will be quite arbitrary to some degree; at least, whatever you choose will be based on a lot of assumptions, and I think as long as we choose anything in this range, it will be very, very secure.

D
I don't know; I think inventing a new process will probably not go anywhere. My suggestion would be that we take the TLS values and lower them; then we can at least say we are more secure than TLS. I think as long as we choose values lower than or equal to the TLS ones, we don't need to justify them very much.

E
So, sort of outlining this and motivating this would go into the security considerations in Rikard's draft, then.

D
Personally, I don't think it's worth the effort to do that. I would just pick some values that are equal to or lower than TLS, write that we are at least as secure as TLS, and then significantly raise the CCM_8 v value and motivate this with this slide set.

F
Right, then it's just about how we justify it. For q, because it's lower, that's easy to justify: like you said, we're taking a safer limit here. For v, it would be lower, except for CCM_8, right, where we would then choose a higher limit and skip the formula. So that would be the important thing to motivate.

F
Right, and then the place to put that would basically be in this draft, in this OSCORE AEAD limits draft.

F
Right, well, okay then, yeah. I would definitely appreciate your input also on how to formulate this text to motivate the v limit, so maybe we can have further discussions on that. Yeah.

A
John, one question about raising v: how can we motivate that it has been raised enough?

D
Yeah, I think these values will probably change. I hope that the formulas and inequalities are correct, but any values they get out, I don't know. The CFRG document talks a little bit about the process, and then it talks a little bit about the values that TLS and DTLS have chosen, but yeah.

D
It's important to re-key before you get to extremely high numbers here, but otherwise there are much more important security things to do; this has been blown out of proportion. Yeah, I don't know. I think the implementation aspects, like Christian wrote up last time, are the thing: what would two to the power of 20 or two to the power of 23 mean? Will there be a difference in practice? And also, message length is very different for each application.

D
I think I can do that after we have decided on some numbers; that's easy writing, five minutes of work for me. But I think the CORE group needs to decide on what numbers we should choose. That's mostly not a security question; it's more about implementation, and about how it will impact applications using OSCORE if we choose different numbers.

A
Moving on then, the follow-up to this, and more practical to a draft, considering this is a starting point for OSCORE. So just tell me when to change slide, Rikard.

F
Right, right. So this is a presentation on this draft about the AEAD key usage limits in the context of OSCORE. Next slide, please. So, just to recap the problem, which of course has been recapped a bit already: OSCORE uses AEAD algorithms to provide security, and there are these forgery attacks against AEAD algorithms, and this is described in the CFRG document.

F
As
we
discussed,
it
may
be
the
case
that
I'll
score
shouldn't
just
blindly
take
the
the
formulas
and
limits
described
in
the
cpg
document,
and
the
second
step
is
how
does
this
for
their
attack
and
the
limits
affect
those
core
that
can
be
in
terms
of
what
steps
you
need
to
take
during
message:
processing,
for
instance,
counting,
send
messages
or
receive
messages,
and
what
actions
should
you
take
when
the
limits
are
exceeded?
F
So
the
q
value
represents
the
number
of
messages
protected
with
the
specific
key
meaning
the
number
of
times
it
keeps
being
used
to
encrypt
data,
and
the
v
value
is
that
the
number
of
40
attempts
made
against
a
specific
key
meaning
the
amount
of
failed
encryptions
for
that
key
and
in
the
context
of
oscore
what
this
means
is
and
what
we
have
added
to
those
core
security
contexts
for
these
new
parameters,
and
one
is
count
q
to
count
the
number
of
times
a
sender
key
has
been
used,
encryption
count
v
to
count
the
number
of
times
the
recipient
key
has
been
used
for
failed
decryption
and
both
of
these
counters
then
have
associated
limits,
limit
q
and
limit
we,
which
will
eliminate
how
high
these
counters
may
go
and
if
the
limits
exceeded
now
the
context
must
be
repeated,
and
this
draft
also
has
an
overview
of
existing
methods
for
keying
of
score.
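The counters just described can be sketched as follows (a minimal illustration; the names count_q, count_v, limit_q and limit_v follow the talk, while the rekey callback is a hypothetical placeholder, not the draft's actual mechanism):

```python
class OscoreKeyUsage:
    """Track AEAD key usage for one OSCORE security context:
    count_q = encryptions with the sender key,
    count_v = failed decryptions with the recipient key.
    When either limit is exceeded, the context must be re-keyed."""

    def __init__(self, limit_q, limit_v, rekey):
        self.count_q, self.count_v = 0, 0
        self.limit_q, self.limit_v = limit_q, limit_v
        self.rekey = rekey  # hypothetical callback establishing a fresh context

    def on_encrypt(self):
        self.count_q += 1
        if self.count_q > self.limit_q:
            self.rekey()

    def on_failed_decrypt(self, was_replay=False):
        # Messages detected as replays do not count as forgery attempts,
        # as agreed later in this meeting.
        if was_replay:
            return
        self.count_v += 1
        if self.count_v > self.limit_v:
            self.rekey()
```

A constrained implementation would additionally have to persist these counters across reboots, which is exactly the Appendix B.1-style problem discussed later.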
F
So, one thing that was added was a table with the q and v limits for further algorithms: AES-128-CCM, AES-128-GCM, AES-256-GCM and ChaCha20-Poly1305; before, we only had AES-128-CCM-8. Of course, these values are still based on the CFRG document and the assumptions from DTLS. In addition to that, we have extended the section about methods for re-keying OSCORE.

F
So
now
it
also
mentions
previously
mentioned
the
the
ace
oscar
profile
hoc
and,
like
manual
key
update,
let's
say
and
those
core
appendix
p2.
Now
it
also
mentions
that
you
may
as
an
alternative,
if
you're
using
lightweight
m2m-
and
this
is
a
situation
where
you
have
a
five
return
term
server
and
lightweight
m2m
client.
F
You
may
bootstrap
the
client
make
bootstrap
again
towards
the
lightweight
bootstrap
server,
which
will
provide
it
with
updated
security
context
if
the
material
on
the
bootstrap
server
was
actually
updated
and
both
like
the
temp
time,
client
and
the
lightweight
server
may
initiate
this
bootstrapping
procedure.
F
Next slide, please. Another update is stating the fact that messages that are detected as replays do not affect the count_v value. This is also something that was brought up at the previous interim; we got agreement on the point that, since these are fundamentally replayed messages, they should not be counted as failed decryptions, so they will not affect the count_v parameter.

F
There is also an expiration time that indicates at which point in time an OSCORE security context may not be used anymore for processing messages. The idea is that, when the context is established, you take the current time and add a certain time offset that would be the lifetime; then you can calculate your expiration timestamp, and when that is reached, you should not use this context any further. The last point was general editorial improvements: some restructuring, fixing some sentences, and general improvements.

F
But if you don't have an actual lifetime defined, or it is not provided, there should be an appropriate default lifetime to use. By the way, the lifetimes and this expiration date and time don't have to match on the two peers, because if one has a shorter lifetime, the one that reaches the expiration first will simply take the initiative to re-key with the other party. So they don't have to synchronize on exactly the same values, but there would be a need to choose some appropriate default if none is provided.

F
I mean the case where a constrained device supports Appendix B.1, meaning it stores its sender sequence number to be able to reboot and then continue using the same context. It now also needs to store count_v and count_q, so as to not lose track of them upon reboot, and the point would be to allow safe continued usage of the OSCORE security context after reboot. In Appendix B.1 there is the solution of not storing every sender sequence number:

F
you only store it periodically, to reduce the number of writes to non-volatile memory. This would be a similar situation, where you don't want to store every count_q and count_v, because then you're really writing a lot to the disk; you only want to store them periodically, but still make sure that you have them available after reboot.

F
But the thing here is that we need to consider what rate to store this at, because if the rate is too large, then when you reboot — let's say you save every 100 count_v — that means that if you are at count_v 1 and then you reboot, you jump all the way to count_v 100. And if the v limit is quite low, as it is now in the current figures — of course that will change — then a reboot, or even two reboots, would put you over the current limit.

F
So you need to be a bit careful about the rates that are decided on for how often to store these values, especially for count_v.

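The Appendix B.1-style scheme being discussed can be sketched as a counter that is persisted only every K increments and, after a reboot, resumes from the next multiple of K, a safe overestimate (K and the dict standing in for flash are illustrative assumptions, not values from the draft):

```python
class PersistedCounter:
    """Counter persisted to non-volatile storage every k increments.
    After a reboot, resume from stored + k: the true value is unknown,
    but is guaranteed not to have exceeded that bound."""

    def __init__(self, k, storage):
        self.k = k
        self.storage = storage          # dict standing in for flash
        self.value = storage.get("count", 0)

    def increment(self):
        self.value += 1
        if self.value % self.k == 0:    # write only every k-th increment
            self.storage["count"] = self.value

    def resume_after_reboot(self):
        # Jump past any unpersisted increments; overestimates by < k.
        self.value = self.storage.get("count", 0) + self.k
        self.storage["count"] = self.value
        return self.value
```

With K = 100, a reboot at count 1 resumes at 100, which is exactly why the talk warns that a low limit_v combined with a coarse storage rate can burn through the limit in one or two reboots.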
F
As you reboot — that's true, yeah. Definitely, you can have that choice: let's say you don't even support Appendix B.1, or you simply say that, well, if I reboot, either way I will lose the sender sequence number. You can choose to re-key if you reboot, so it's not mandatory in our considerations; it's just that, if you want to support Appendix B.1 and be able to continue using your security context from where you left off, then you also need to save count_q and count_v.

C
You can actually do a simple linear formula for the count that should be applying at the time a reboot actually happens, and as long as you stay below that line, you don't have to store anything. That would mean, again, as long as you have a real clock you can rely on, at the level of security considerations; but if you can draw this line, then you essentially never have to save the counts.

C
...the conditions the device will work under. So you can actually choose a constant and a linear factor and use that, but again, it only works if you have a reliable clock, yeah.

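Carsten's idea can be sketched as a clock-based bound: pick a constant plus a linear rate, and after a reboot assume the count equals the bound at the current time (the constant and rate below are illustrative assumptions, not values from the discussion):

```python
def count_bound(t_now, t_start, constant, rate):
    """Upper bound on the usage count at time t_now, for a context
    established at t_start: a constant offset plus a linear rate."""
    return constant + rate * (t_now - t_start)

# A device that sends at most 10 messages/second, plus a margin of 50:
# nothing needs to be stored as long as the real count stays below the
# line, and after a reboot at t = 300 s the count is assumed to be the bound.
assumed = count_bound(300, 0, constant=50, rate=10)
print(assumed)  # 3050
```

As noted in the discussion, this only works if the device has a reliable clock and the implementation stays on the safe side of the line.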
E
So, Carsten, just to understand your idea — are you saying, in my words, that basically the application has an idea of the maximum number of messages that can be processed in a certain time interval; then you draw the line, sort of a line slightly below that, and that means that, if you don't even reach up to that line, then you don't need to store the count. Is it something like that?

C
Yeah, the idea is that what you save in these storage locations allows you to have a safe assumption for continuing after a reboot. So whatever you can save there that allows you to derive such an assumption at a reboot is fine, as long as the implementation that actually counts remembers that it needs to be on the safe side of that line.

F
Good, thanks. All right, thanks for the input, yeah. Next slide, please.

F
Right, and then another point is to further explore optimizations for tracking count_q, and this can again be helpful for constrained devices. This would be, in general, about how to keep track of q. Basically, one idea could be that you don't have an explicit count_q parameter, but instead rely on the sender sequence number. So one example is using the sender sequence number together with this X, where X would be the number of outgoing messages you sent without a Partial IV.

F
So basically, the SSN plus X would be the same as count_q, so you don't need an explicit count_q variable, and you can save some memory overhead by cleverly reusing the SSN also for this. Then another possibility would be that you rely only on the sender sequence number.

F
They
have
some
backup
slides
on
this
point
also,
but
basically
you
would
then,
in
that
case,
sacrifice
some
accuracy
and
accept
more
frequent
rekeyings,
because
you
may
end
up
in
a
situation
where
you
don't
go
all
the
way
up
until
the
full
theoretical
limit
you
could
have
gone
to,
but
we
can
come
back
to
that
if
we
cover
the
backup,
slides
and
then
yeah.
Another
point
here
was:
if
these
limits
kill
this,
could
they
possibly
be
defined
in
a
more
general
location?
F
Of
course
they
can
be
defined
now
in
this,
this
osc
limits
drafts,
but
if
they
are
for
algorithm
specific,
then
they
could
possibly
be
defined
in
some
place,
like
the
cosi
algorithms
registry,
although
as
we've
discussed
now,
maybe
this
would
be.
This
limit
would
be,
in
this
case
specific
to
score,
and
then
in
that
case
it
makes
more
sense
to
keep
them
all
in
this.
F
Document
and
then
the
last
point
here
was
yeah
how
to
adapt
these
limits
to
be
more
specific
and
suitable
for
all
score,
and
there
one
point
was
this:
like:
should
we,
for
instance,
consider
different
probabilities,
pq
and
pv,
and
I
understood
from
yeon
that
he
thought
that
wasn't
something
we
should
consider.
Basically,
we
shouldn't
really
consider
these
these
probabilities
or
to
plug
them
into
those
formulas
from
the
c4d
document.
F
And
the
other
points
would
be
like
if
we
now
can
see
the
different
limits
for
all
score,
what
kind
of
authoritative
and
appropriate
reference
to
use
to
produce
these
these
numbers
like
that
could
either
be.
If
now
the
c4d
document
will
be
updated,
it
could
be
some
other
source
or,
as
we
discussed
earlier,
it
could
also
end
up
that
we
justify
and
describe
why
we
chose
these
numbers
in
this
actual
draft.
D
No question, just a high-level comment: the two aspects that I think are important for the group to discuss, and which we have so far not discussed a single minute this meeting, are how different limits affect applications and, secondly, how we do re-keying.

F
Right, yeah. And right now this draft only describes existing methods for re-keying. Of course, EDHOC is a good one there, if you want forward secrecy. But did you have in mind some, let's say, some new method for re-keying?

F
Right
I
mean
yeah,
I
think.
As
far
as
I
recall
earlier
discussions,
we
kind
of
had
this
idea
of
splitting
things
like
this
would
be
the
first
draft.
Then
there
could
be
a
follow-up
about
new
methods,
free
keying,
but
I
guess
there
could
also
be
something
to
reconsider
and
add
more
material
to
this
current
draft.
F
Yeah
yeah,
but
it
does,
I
mean
it
does
describe
the
existing
methods,
so
you
have,
I
think,
it's
well
fundamentally,
for
which
would
be
the
yeah,
the
asos
core
profile,
ad
hoc
or
score
appendix
p2,
the
current
incarnation
of
it
or
then
also
like
to
attempt
to
impossibility,
or
as
a
fifth
one
manually,
changing
the
the
context
information.
E
If
I
understand
you're
right
john,
what
you're
saying
is
that
if
we
have
another
draft
which
describes
different
ways
of
updating
the
keys
and
that
would
sort
of
be
dependent
on
this
one,
so
we
would
what
is
written
currently
in
this
draft
would
need
to
be
updated,
so
they
are
somehow
closely
linked
to
each
other
yeah.
I
I
don't
I
don't
mind
if
we
expand
this
on
this
draft
in
this
direction.
I
don't
know
if
there
is.
A
Thanks for the input, John. If there is no more input on this, I think, Rikard, you mentioned some backup slides on that particular point about the optimization; we may go through them.

F
Yes,
we
could
certainly
do
that.
I
noticed
john
mentioned
also
how
the
limits
affect
the
applications,
but
I
think
I
can
go
through
the
backup
slides.
F
Then
it
shouldn't
take
too
long,
just
two
slides
okay,
so
this
is
describing
a
possible
optimization
for
the
count
q
to
keeping
track
of
count
q
without
explicitly
having
a
count.
Q
parameter
in
those
contexts-
and
one
drawback
of
this
would
be
that
you
basically
have
a
pessimistic
overestimation.
So
you
overestimate
the
actual
value
of
count
q.
F
It
will
be
higher
than
if
you
had
an
explicit
counter,
so
you
may
end
up
breaking
earlier
than
need
to
be,
and
the
point
is
that
basically
like
at
any
point
in
time,
you
know
that
an
end
point,
the
maximum
number
of
encryptions-
it's
it
has
done-
is
its
own
sender,
sequence,
number
added
with
the
sender
sequence
number
of
the
other
endpoint,
because
the
other
endpoints
and
the
sequence
number
can
serve
as
an
overestimation
of
the
responses
without
partial
av
that
us.
You
know
the
december
test,
yeah.
F
You re-key if your sender sequence number plus X exceeds limit_q. So basically, count_q will be represented as the sender sequence number plus X, and in this case you determine X in one of the following two ways. If you're producing an outgoing response, X would be the Partial IV in the request you're responding to; so basically X is kind of a stand-in for the other party's sender sequence number, which would be the Partial IV in the request it has just sent to you. And back to your point:

F
I
mean
this
could
work
regardless.
Actually,
if
you're
a
client
or
server
it's
just,
it
works
in
either
case.
Okay
and
on
the
other
hand,
if
you're
producing
an
outcome
request,
then
x
would
be
the
highest
partial
portion
of
e
value
that
you
have
received
in
your
replay
window
or
the
wrinkly
window.
Size
minus
one
if
you
have
not
received
any
messages
yet
so
basically
x
in
the
case
of
producing
an
outcome
request,
would
be
the
highest
portion
of
eu
seen
from
the
other
party,
meaning
the
highest
sender
sequence
number.
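The two estimation rules just described can be sketched as follows (a hedged illustration of the talk's SSN + X idea; the function and parameter names are made up for the sketch):

```python
def estimate_count_q(own_ssn, is_response, request_partial_iv=None,
                     highest_received_partial_iv=None, replay_window_size=32):
    """Overestimate count_q as own SSN plus X, where X stands in for the
    peer's responses sent without a Partial IV.
    - Producing a response: X = Partial IV of the request being answered.
    - Producing a request:  X = highest Partial IV seen in the replay
      window, or replay_window_size - 1 if nothing was received yet."""
    if is_response:
        x = request_partial_iv
    elif highest_received_partial_iv is not None:
        x = highest_received_partial_iv
    else:
        x = replay_window_size - 1
    return own_ssn + x

# Responding to a request carrying Partial IV 40, after 100 own encryptions:
print(estimate_count_q(100, True, request_partial_iv=40))  # 140
# Sending a request before anything was ever received (window size 32):
print(estimate_count_q(100, False))                        # 131
```

The estimate is safe but pessimistic: as discussed next, depending on the traffic pattern it can approach a two-times overestimation, triggering re-keying earlier than an exact counter would.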
F
So
if
you
follow
that
rule,
you
know
you
will
be
safe.
On
the
other
hand,
you
may
you
may
end
up
overestimating
the
the
the
count
q
and
then
re-keying
before
you
actually
reach
the
the
limit
q.
Basically,
the
actual
limit
q,
because
you're
just
estimating
calcul
you're
overestimating
how
q,
by
using
this
method,.
F
Yeah, I believe it could be; it depends, I guess, on the message pattern. I think, yes, it makes sense: the highest it could be would be up to a two-times overestimation. Of course it depends: let's say you send one request and the other party sends you tons of notifications, or it could be that you're sending tons of requests and the other party is not even responding at all. It depends on the traffic pattern, but yeah.

F
Yeah, yeah, two to the power of... yeah, I see your point, right, yeah. So, depending on the traffic pattern, or depending on your actual setup, that may not be a nice overestimation. On the other hand, if you use this optimization, you save the need to have an explicit count_q parameter; you can use the SSN, which saves the memory overhead. I also recall that in some earlier meetings we discussed some ideas with Christian.

A
Okay, trying to sum up: we got some good input from John on new, better numbers to use here as more appropriate, so we can build on that, both as actual numbers and in the security considerations. We are going to have a thread on the list to discuss that more, and there is feedback to address on some of these points, and the one on rebooting, on Appendix B.1 of OSCORE, can actually be further expanded.

A
And
then
we
say
we
can
actually
take
the
path
where
a
broader
draft
can
cover
both
this
topic
and
the
possible
actual
lightweight
working
approach.
A
And
I
know
you
also
like
to
have
more
discussion
on
how
applications
are
affected
by
this.
I
suppose
you
mean
both
the
numbers
and
the
lightweight
working
procedure.
D
Yeah, I think a lightweight re-keying procedure is basically required if you want to do frequent re-keying, and I think you want to do that basically equally much because of the AEAD limits and to get forward secrecy, which is very quickly becoming best practice. I think we should consider what limits we should use: should we use two to the power of 23, or should we use different values for the different algorithms?

D
But then, of course, you get complexity in the application. Yes, the most exact approach would be to actually calculate — sum up — the lengths of all the messages and use that, instead of just n times the maximum allowed size, but that's probably not something we want to do.

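The two accounting options John contrasts here can be sketched as follows (an illustration under assumed message sizes, not taken from any draft):

```python
def blocks(length_bytes, block_bytes=16):
    """Number of 16-byte AES blocks needed for a message (ceiling)."""
    return -(-length_bytes // block_bytes)

def exact_usage(message_lengths):
    """Exact accounting: sum the block count of every message sent."""
    return sum(blocks(n) for n in message_lengths)

def worst_case_usage(n_messages, max_length):
    """Simple accounting: n messages times the maximum allowed size."""
    return n_messages * blocks(max_length)

msgs = [64, 200, 1024]                    # hypothetical message sizes in bytes
print(exact_usage(msgs))                  # 81  (4 + 13 + 64 blocks)
print(worst_case_usage(len(msgs), 1024))  # 192 (3 * 64 blocks)
```

The gap between the two numbers shows why exact accounting is tighter but, as noted, probably more bookkeeping than a constrained implementation wants.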