From YouTube: CORE WG Interim Meeting, 2020-10-22
Recording

A: Right, welcome everyone to this CORE interim meeting. I'm Marco Tiloca, co-chairing with Jaime Jiménez. The Note Well applies; please get familiar with it if you're not already.
B: Hi. So I just pasted two links into the notes. There is one pull request relating to automatic key management that I would like to get a little more review on — this is issue number 11 — and then we have five items, issues that were raised, that I believe are in various states of won't-fix, and I would like to also get some comments about that. One of them, number eight, is related to the pull request, and the others probably... probably could be...

B: Some could be closed. So basically I just need a little bit more help to close these — to just say that what I've done is okay — and then I will post a new version of the draft this afternoon, I hope.
B: So this is the pull request that's open that has to do with automatic key management. So what I think happened is that there was...

B: This relates to the fact that you probably want to encrypt the token that you are sending out, such that you can recognize it coming back, while keeping it private and integrity-protected, and there was a reference to some...
B: So that's what this text changes quite significantly, and there's just the question of how often you need to re-key to avoid reusing nonces. That's really the major part, and that is actually dealt with in this text with a reference to the RFC 8613 sender sequence number. It's the same problem: if you do the same thing and use the same sequence numbers, you're probably all good. And then I had some other text here about...
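The re-keying rule described here — never reuse a nonce, so stop sending and re-key once the sender sequence number space is exhausted — can be sketched as follows. This is a minimal illustration, not text from the draft; the class and the tiny limit are made up, and OSCORE (RFC 8613) derives the real bound from the AEAD nonce size.

```python
# Hypothetical sketch: a sender context that refuses to encrypt once its
# sequence number would repeat, forcing a re-key first.

class SenderContext:
    def __init__(self, seq_limit):
        self.seq = 0
        self.seq_limit = seq_limit  # highest sequence number we may use

    def next_seq(self):
        """Return the next sequence number, or raise if a re-key is needed."""
        if self.seq > self.seq_limit:
            raise RuntimeError("sequence space exhausted: re-key before sending")
        n = self.seq
        self.seq += 1
        return n

ctx = SenderContext(seq_limit=2)
assert [ctx.next_seq() for _ in range(3)] == [0, 1, 2]
try:
    ctx.next_seq()
    exhausted = False
except RuntimeError:
    exhausted = True
```

The point of the guard is exactly the one made above: reusing a (key, nonce) pair is the failure mode, so the sender must count and stop, not hope.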
B
You
know
how
often
how
many
outstanding
things
you
might
need
and
how
big
a
window
you
might
need,
and
so
that
may
require
some
a
little
bit
of
someone
to
sanity,
check
my
math
and
assumptions
here
that,
essentially
you
know
if
you're
sending
out
things
that
you
know
you
can
expect,
and
you
expect
to
have
a
response
within
say
two
seconds,
which
seems
very
very
generous-
that
the
number
of
requests
outstanding
determines
your
your
replay
window,
and
I
assume
this
is
like
an
ipsec
like
repay
window,
where
you
have
some
set
of
bits
that
you
set
and
then
you
have
a
low
water
account
mark
for
for
that.
B
But
you
could
implement
something
simpler,
even
if
you
wanted
to
for
that.
So
I
just
really
looking
for
some
review
of
this
text
and
an
endorsement
or
two
and
then
I
will
post
the
document.
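The IPsec-style replay window just described — a bitmap of recently seen sequence numbers with a low-water mark below which everything is rejected — can be sketched like this. A minimal illustration with made-up names, not code from any draft or from IPsec implementations:

```python
# Sliding replay window: bit i of `bits` set means (top - i) was seen.
# Anything older than (top - size) is below the low-water mark and rejected.

class ReplayWindow:
    def __init__(self, size=32):
        self.size = size
        self.top = -1      # highest sequence number accepted so far
        self.bits = 0

    def accept(self, seq):
        """Return True and record seq if it is new and inside the window."""
        if seq > self.top:
            shift = seq - self.top
            self.bits = ((self.bits << shift) | 1) & ((1 << self.size) - 1)
            self.top = seq
            return True
        offset = self.top - seq
        if offset >= self.size:
            return False              # below the low-water mark: too old
        if self.bits & (1 << offset):
            return False              # already seen: replay
        self.bits |= 1 << offset
        return True
```

The connection to the math above: the window size has to cover the number of requests that can be outstanding within the expected response time, otherwise legitimate late responses fall below the low-water mark.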
C: I see that you now have started to put in some math there, which I would have to check. My knee-jerk reaction is that it's entirely wrong. Okay...

C: I'd have to read it in detail, specifically.

B: Yes, okay. I could accept that it's completely wrong; that's what I want to get right here.

C: ...is actionable, and yeah, I'm not sure, but I'd have to read it for more than 30 seconds.

B: Sure. So what I'm looking for is somewhat longer thinking about what this means, and then whatever changes you think we need to do. I won't take...
A: Okay. Michael, since you mentioned resubmitting even this afternoon, that would be great. Jaime and I had a chat, and considering the work you've put into this, we think it would be fair and good if you want to add yourself as second author of the document, if you are fine with it.

A: Okay, we can move to the next item. I understand Jon will show the slides himself. Jon, if there are updated slides compared to the ones in the datatracker, please push the latest version later on.
D: Okay. So, just talking about the alternative block-wise transfer options for faster transmission of data, in particular in a lossy type of network environment. I'll just go through the requirements, a reminder of what we did in the -01 updates, and some things I observed as part of the implementation — which raises some questions about what is the best way to move forward — and what the next steps are.

D: Okay, so just as a reminder: we want to be able to transmit data faster, rather than waiting for the latency turnaround of a send, some sort of ACK coming back, then the next packet, and so on and so forth. Rather, we want to be able to send a block of data — but obviously subject to congestion control, which is covered there in the draft.

D: We also have the requirement to be able to send unidirectional data, in the same way that we can send a single NON packet: if we don't get an answer back, it's not the end of the world. We want to be able to send out a set of NON blocks that comprise a NON body, and we don't necessarily need to worry about getting a response back. The reason for that — again, with DDoS attacks — is that you're likely to have a pipe flooded somewhere.
D: The faster blocks — the quick blocks — are modeled on Block1 and Block2, but they are not a replacement for them; they're in addition to them. If there are any questions as we go along here, please just say. Okay, so just general updates in the -01: we updated the applicability scope just to make it more relevant, and we moved away from having our own error code to piggybacking on 4.08 instead.
D: We've renamed the options to Quick-Block1 and Quick-Block2, rather than Block3 and Block4, so that Quick-Block1 models Block1 and Quick-Block2 models Block2. The options are marked Unsafe, and the cache behavior is updated: basically, we're not expecting a cache to be able to cache individual blocks.

D: As before, we've moved the congestion-control text into a new dedicated section, to break that out specifically. We were fairly normative, fairly prescriptive, in the use of tokens; that's now pulled out into a separate section, and the text in the various places is more generic. Plus just other kinds of edits and typo fixes.
D: So, in terms of moving forward to the actual implementation that I did: I'm using libcoap as a base, which I'm relatively familiar with, and I have raised PR 554 to move all the Block1/Block2 handling out of the application layer down into libcoap. So it's up to libcoap to do all the recovery, if necessary, on Block1/Block2 operations, and to only pass the large item of data up to the application — or, if the application wants it, just the individual blocks, but after they've already been checked and recovered, etc.

D: And then, on top of that, having done that work, I added in the support for Quick-Block1 and 2, based on the fact that everything is there in the libcoap area. So all we need to do is say "we're happy to have Block1/Block2", let libcoap get on with it, and check with the remote end whether things work or not. At some point, when we're happy with that code, it will also become a PR into libcoap.
D: Some observations from the implementation.

D: At the moment, an environment can support either Quick-Block1 or Quick-Block2, or neither of them, or both of them. This gave me some challenges, from the fact that we could support only one of the two, when we come to find out whether the remote end likes us or not — it's a critical option.

D: We get some sort of response coming back, but it's then unclear — if we're sending both Quick-Block1 and Quick-Block2 out in that initial packet — whether the response that comes back was referring to Quick-Block1 or to Quick-Block2. Updating the spec to say that either both of Quick-Block1 and 2 are supported, or neither of them, just makes life a lot easier — in particular because it's very difficult to work out which critical option is not liked by the remote end.
D: The base standard, RFC 7252, just says something goes in the diagnostic payload; it doesn't talk about formatting. A NON packet just returns an empty Reset. libcoap happens to return the bad critical option as an option, but I don't think that's standard across all the implementations, so we can't rely on that kind of thing. So, a suggestion — a question — for mutual support: either both Quick-Block1 and 2 have to be implemented, or neither of them. Are there any thoughts on that?
E: A question: do you expect that the Quick-Block1 and 2 options are supported on every resource on the same server in the same way?

D: No. Basically, for a server: if it sees a Quick-Block1 or Quick-Block2 coming in, it knows that that particular functionality is enabled by the client, and it has the ability to say "I like you" or "I don't like you", it being a critical option. But there can be sessions coming from client A and client B, where client A doesn't use Quick-Block and client B does. As far as the server is concerned, the individual sessions will be treated as appropriate, using either the old Block or the Quick-Block.

E: I was rather thinking the other way around: a single client interacting with different resources on the same server.
D: Okay — are we talking about the same session or different sessions? If you talk about different sessions: a session is a connection to an endpoint, coming from an endpoint that's got a particular source port.

E: Or, put differently: if the client discovers the support for Quick-Block1 and 2, is it supposed to remember that about that server, or about that resource, in general? I mean, when is this even... what's the case in which this actually is a decision to make? Because...
D: Yeah, well, okay. We're not mandating Quick-Block for the DOTS type of environment, so it could be that the DOTS client has it and the DOTS server does not, depending on manufacturers and all the rest of it. That's why we put it in that we just need to see whether the remote end likes it or not. If it doesn't, we just carry on with the original Block stuff; but if it does like it, we then use the Quick-Block stuff for faster transmission and recovery, etc.
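The probe-and-fall-back behaviour just described — offer the Quick-Block critical options, and if the peer rejects them, drop back to standard block-wise for good — could be sketched like this. The names are hypothetical (this is not the libcoap API), and 4.02 Bad Option is mentioned only as the typical rejection a CON request would see:

```python
# Sketch: remember, per peer, whether the Quick-Block critical options were
# accepted; until a rejection is seen, optimistically offer Quick-Block.

class BlockModeSelector:
    def __init__(self):
        self.peer_supports = {}   # peer -> bool, learned by probing

    def mode_for(self, peer):
        return "quick-block" if self.peer_supports.get(peer, True) else "block"

    def on_rejected(self, peer):
        # Peer rejected the critical option (e.g. a 4.02): fall back for good.
        self.peer_supports[peer] = False

sel = BlockModeSelector()
assert sel.mode_for("server-a") == "quick-block"
sel.on_rejected("server-a")
assert sel.mode_for("server-a") == "block"
assert sel.mode_for("server-b") == "quick-block"
```

This is also why the "both or neither" rule discussed here helps: with a single yes/no per peer there is no ambiguity about which of the two options was rejected.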
E: But then supporting them as a single thing or not seems to make sense to me, because if you start with it and fail, you fail over to traditional block-wise transfer. And if only one of them were supported, that would probably incur mixing block types, and that's the kind of can of worms I don't think we want to open.

D: I mean, I agree with you; it's just that at the moment the spec says you can do either of them. But I think it's got to be mandated that you support both or neither. Okay, thanks, Christian. Okay — congestion control; some things came out of it. So at the moment we have in place that every MAX_PAYLOADS — every 10 packets by default —
D: — we wait for an ACK timeout before we then send the next 10. If there's a fairly large set of data to pass backwards and forwards, we have the possibility of using a CON every MAX_PAYLOADS, just to reduce the turnaround times, and that works pretty well. But that does not always work:

D: the CON fails if there's unidirectional traffic and a likelihood of loss; and the NON environment — the NON-type traffic — always waits for the ACK timeout before the next set of payloads. So the tenth packet, or the next payload packet, if that's a NON, will always wait for the timeout.

D: Now, that slows things down quite considerably, because we're waiting the two seconds. And we also can't assume that MAX_PAYLOADS is configured to the same default at both ends to act as a trigger, so we can't just rely on the 10th packet.
D
Therefore,
we
can
go
faster
because
we're
sending
some
sort
of
response
back,
because
we
know
it's
a
10th
bracket
also
tied
up
with
this
is
if
we
are
probing
for
whether
quickblock
one
or
two
is
sent,
I
can
only
send
out
the
first
block
and
wait
for
response.
I
can't
send
out
max
payloads
and
then
suddenly
get
back
10.
We
don't
like
you
critical
options.
D
So
thinking
about
this,
the
suggestion
is
something
in
the
max
payload
packet
to
indicate
that
the
other
end,
if
it
can
sends
back
a
response,
if
that
response
gets
lost,
we
still
wait
for
the
act
timeout.
If
not,
we
can
then
carry
on
transmitting.
You
know
faster.
E: We already have something for that: that's No-Response. So I think what should happen here is... I don't really know what the best default is for a Block1 transfer, but you can control whether the server should respond to that NON request by adding the No-Response option, and that option would then be sent by the client every MAX_PAYLOADS to indicate to the server that it is asking for a response, even...

E: What you could do, if that's a matter of traffic that you don't want to have: it would be an option to say that a Quick-Block1 transfer implicitly has a default value for No-Response, and if you do want to have a response, then set No-Response with an explicit value of — I think it's zero.
D
Okay,
yep,
certainly
that
would
work
and
then
that
would
work
in
both
quick
block
one
and
quick
block
two
yep.
I
will
work
with
that
thanks
very
much
for
that.
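The No-Response interplay suggested here can be sketched as follows. The option values come from RFC 7967 (2, 8, and 16 suppress the 2.xx, 4.xx, and 5.xx response classes; 0 suppresses nothing); the idea that a Quick-Block1 NON body *implicitly* behaves as if all responses were suppressed, with an explicit 0 requesting an interim response, is only the proposal floated in this discussion, not settled text:

```python
# RFC 7967 No-Response class-suppression bits.
SUPPRESS_ALL = 2 | 8 | 16   # suppress 2.xx, 4.xx and 5.xx => value 26

def effective_no_response(option_value, quick_block1_default=SUPPRESS_ALL):
    """Suppression mask in effect for a (hypothetical) Quick-Block1 NON
    request: absent option -> implicit suppress-all; explicit 0 -> respond."""
    return quick_block1_default if option_value is None else option_value

assert effective_no_response(None) == 26   # implicit: no response expected
assert effective_no_response(0) == 0       # explicit 0: please respond
```

The client would then attach an explicit No-Response of 0 on every MAX_PAYLOADS-th packet to solicit the "keep going" response without paying for a response per block.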
D
Okay,
so
the
quick
block,
2
implementation
was
the
easiest
to
implement
the
the
size
required
for
missing
blocks,
as
options
was
difficult
to
compute,
so
we're
just
looking
at
when
we're
building
up
the
error
response,
the
408
response,
not
for
it
this
or
the
response
is
coming
back
and
saying.
I
need
some
more
yeah.
It's
a
forward.
The
sponsor
comes
back
says
this
is
the
list
of
missing
blocks.
D
Okay,
what
just
happens
to
be
within
libco
app
is
that
you
build
your
pdu
and
you
then
try
and
send
it.
It
then
chokes
because
it's
too
large
yeah,
rather
than
as
you're
building
the
pdu
it
computes
the
size
and
returns
an
area.
You
can't
add
in
the
next
one.
E: But this is something more implementation-specific. In that particular implementation, you make the PDU as long as you think is good for that particular application — for example, to get all those MAX_PAYLOADS in there. Then you put in as many numbers as you can fit, and if you are still missing blocks, you can still ask for them later, when you're getting the next blocks around.
D: Okay. A secondary challenge in that is that the CDDL defines an array in CBOR, and the array size is encoded as a count. So if you say "I'm going to add in X missing blocks", then the array size for that is going to be X; but if you can only fit a subset of X in, you've got to go back and recompute what the array size is when you build the CBOR.

D: So — let's consider using a CBOR sequence? To an extent, yes, but it still requires something that says what effectively ends up as an array: "there are this number of elements that follow."
E: What you can do is: as long as your number of elements is less than 23 — I think 23 is the magic limit — then you can start writing the array and put the number in at the end. But again, that's more of an implementation thing.
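Carsten's trick works because, for small counts, a CBOR array's length lives entirely in the single initial byte (0x80 + count, per RFC 8949), so an encoder can reserve that byte, append elements until the PDU is full, and patch the count in afterwards without shifting anything. A minimal sketch (restricted, for brevity, to small unsigned integers as elements):

```python
def encode_small_array(elements):
    """Encode up to 23 small unsigned ints (each < 24) as a CBOR array,
    patching the element count into the initial byte at the end."""
    assert len(elements) < 24            # count must fit in the initial byte
    buf = bytearray([0x80])              # placeholder header: array(0)
    for e in elements:
        assert 0 <= e < 24
        buf.append(0x00 + e)             # major type 0, immediate value
    buf[0] = 0x80 | len(elements)        # patch the count in afterwards
    return bytes(buf)

assert encode_small_array([1, 2, 3]) == bytes([0x83, 0x01, 0x02, 0x03])
```

Once the count can reach 24 or more, the header grows to multiple bytes and the patch-in-place trick no longer applies as-is, which is why the limit matters.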
D: Yep, that certainly would work, yeah. So it's an implementation thing, but we may just have to make a suggestion in the text that there may be a challenge here. Okay, great, thanks very much.

D: Okay, so Quick-Block1. I did struggle with the CDDL — thanks, Carsten, for your help and feedback in working it out; I believe I've got it right now.

D: We just had to add some CBOR knowledge into libcoap; we couldn't just assume that it could pull in a CBOR library. But the actual CBOR knowledge required within libcoap is very, very small — a small subset of what's required for full CBOR — so that wasn't that difficult to do. The next Quick-Block implementation challenge I had was tracking tokens.
D: If we are sending a MAX_PAYLOADS set of packets out at once, and they all have unique tokens following the current rules, we then have to track that MAX_PAYLOADS worth of tokens and then garbage-collect them whenever things get themselves sorted out. And we certainly cannot rely on the MAX_PAYLOADS packet arriving at the target.

D: So the response that we get back from the server may be against a previous token, so we have to maintain all those different tokens — again, down within libcoap — so we can see what's going on and pass the correct original token back to any application, should it need to know what's going on there. So it's tracking tokens.
D: I haven't really done that code yet, because it just needs a lot of it, and I'm trying to think whether there are any ways of getting around this. One of the thoughts I had: we have an associated response — as in Observe — where traffic coming back from the server all has the same token. Is it worth considering an associated request for Quick-Block1, where all the tokens can be the same — with the obvious caveat that if there's any re-request, we'll definitely use a different token?

E: I think that if you go that way, it will need a lot more thought with respect to what it means for other things, for other associated responses. So if there is an easier way — and if it's not the thing that's coming up on the next slide, I can suggest one — then it's probably better to take that.

D: Okay, and I'll hear your suggestion. As I say, I have one; it's not on the next slide — I didn't add it into the slides, I only thought of it this morning.
D: Yes, okay. So my solution was that the bottom 32 bits would be a token and the top 32 bits would be the block number.

D: And so I'm then using up to the full eight bytes of the token. I know there's the stateless token work coming downstream, but I didn't particularly want to go there just yet.

D: So with Quick-Block, we're sending it to the server; to the server it's totally opaque, and it sends back whatever token it wants to match against. But the client, in sending it, knows "I generated a token whose first 32 bits are these."
C: Yeah, my assumption here is that the server never needs to actually keep track of tokens, because it responds in an instant. (Yes.) So there is no tracking at the server; on the client side, by building the tokens in a specific way, you can make tracking them much easier.
D: So really it's just moving on to the next steps. We'll update -02 with what we follow through from this discussion — I'm sure there are going to be a couple of other tweaks, maybe coming out somewhere else — update the implementation, so that we make sure we haven't forgotten something and that it all works and is viable, and then go for working group last call.

C: I missed part of the discussion — I like the plan — I missed part of the discussion because I had some weird WebEx audio problem, I think.

C: Yeah, it was simply one slide earlier. Actually, it was even one earlier than that. Yes — no, five! No, four. I'm sorry. I have a little problem here, which is that calling the option "Quick-Block" creates two problems.
C: One is that the IETF has a major standardization effort that is called QUIC — without a "k" — right? And people will be starting to think about CoAP over QUIC and all that at some point. If we want to create confusion, then we give the thing a name that sounds like QUIC. So that's my one problem. Okay.

C: Yeah, and that's my other problem, actually: I think the distinguishing property is not so much the speed but the robustness.

D: Okay — certainly the word "robust" works, immediately thinking about it. But then people will say, well, this is the alternative to use for the old Block1/Block2, which is not necessarily the case.
B: That's actually a good question. Anyway, I concur with Carsten — this is Michael Richardson — that the naming is unfortunate. But on top of that, if I can jump in, because I think you had a second point, Carsten: there are no references to QUIC (without the "k") in the document. I only really hadn't paid much attention to it until today, and the QUIC people did a...

B: It looks very similar in lots of ways, and so some contrast to it, and some reference to their congestion mechanism, would probably be prudent — because if not, the transport people are going to ask us to do that anyway. That's going to be the first question as well: "this sounds like QUIC — have you reviewed all of the congestion-control work in QUIC?" Thanks.

B: And, you know, yeah — it can be me saying it now or someone else in six months, right? So that's, I think, the major thing. And the question of why we're not doing Block is a good question and probably should be dealt with.
E: On distinguishing it from the regular block-wise transfer, which Michael brought up: I'd like to throw in that the thing traditional Block can do — and this one, if it can, it's not thought through and would need a lot of thought — is working both with atomic operations and with random access.

B: Traditional Block mode would let me get at byte 2000 of an object without getting the rest of it, and this won't let me do that. (Correct.) An interesting point.

E: I even think that this, as it is now, can be a very solid base for an RFC 7959 bis, if we were to do one at any time. It will just take some time to get the messiness and the precise implications of random access out.
D: No, it's just something that came up, because there was a challenge: if you're just asking for block zero, in the standard Block2 case, you just... it's...

D: The issue is that if you only want to ask for one block, you get it when you're using the old Block2; but with the new Block2, if you ask for block zero, you'll get the next 10, which may or may not be what you want to get back. And so, you know, you can identify from block one or further blocks that this is a subsequent block that got lost — but how do you handle just that initial block that got lost?
D: Yeah — I need to actually duck out and go somewhere else, but thank you guys for listening, and especially thank you for all the excellent positive feedback that you've come back with. Much appreciated.

E: Hello — just because my screen sharing has traditionally not worked in WebEx, could you maybe call up the document that I just linked in the minutes? I mean, it's not much; it's just basically quick slides for the topics that I'd like to talk about.
E: Here we go — you see it here? Sure. Okay: on top of the things that we discussed last time, Esko has been so kind as to throw in a bunch of review comments. Most of them are just finding small mistakes that have persisted in the document.

E: There's one good point about the necessity of "anchor", though, and there's been a bit of back and forth. Long story short, of all those interpretations that are floating around for 6690 — which does leave some room for ambiguity, but probably has a right interpretation somewhere...
E: The decision we took in 2018, basically, was to expand everything at lookup, because you could interpret things this way or that way. The discussion with Esko has shown me that, basically, the only implementation and the only interpretation that necessitated this particular insertion of the anchor under all conditions was my own interpretation.

E: So — I know it's rather late in the process, and that was also part of my original response.
E
If
chairs
and
group
think
that
it's
too
late
to
do
anything
about
this,
and
we
are
saying
that
we
have
different
that
it's
hard
to
interpret
and
we
have
different
implementations
that
do
all
those
things
I'm
fine
with
leaving
it
as
is
at,
is
but
at
the
same
time,
I'd
hate
for
my
own
mistake
to
kind
of
blow
up
every
response
to
a
resource
directory
request
by
a
factor
of
two
which
is
not
precisely
that,
but
pretty
close
so
especially
with
long
host
names
it
can
easily
or
ports.
E
It
can
easily
be
that
the
response
is
almost
twice
as
long
if
we
serve
it
with
the
response
anchors
yeah.
So
what
do
we
do
about
this?
The
two
way.
E
So
I
think
that
the
only
two
ways
forward
are
to
say
that
yeah
not
all
of
this
may
be
necessary,
but
there
is
this
wiggly
room
and
there
have
been
those
interpretations,
and
this
is
why
we
keep
it
or
to
go
through
the
precise
resolution,
steps
one
more
and
kick
out
the
necessity
of
inserting
an
anchor
along
with
precise
rules
as
they
are
now,
but
different
rules.
Let's
say
when
you
need
to
expand
the
anchor,
this
would
not
render
any
existing
resource
directory
non-compliant.
F
Update
the
emails
that
I
remember
one,
the
suggestion
was:
if
rel
is
not
present,
then
anchor
is
allowed
and
if
it
is
not
sorry
yeah.
E: That was a suggestion which I did not put in here in this detail, because... so there is no implementation and no interpretation of the resolution rules where this really makes a difference, and the only one that needs the anchor in there is mine of 2018 — and if that goes away, so do even those additional rules about "rel". So the additional rules don't make it easier, and they don't make the payload smaller, and — yeah.

C: So, whatever we do, making things depend on an interpretation of "rel" sounds about the wrongest possible thing that we can do. I mean — looking retroactively — if we were still in 2012 and could re-discuss the rules that are in Section 2.1 of RFC 6690...
C: The question is: is the interpretation of Section 2.1 of RFC 6690 sufficiently stable that we can state when you don't need it? There's a little bit that's questionable in 2.1(b), where it says "when specified"; it's not entirely clear that this is really well defined. But I think we can explain this a little more — I don't know how much variation there is between implementations about that — but other than that...

C: So, the other thing I wanted to say is that, internally, inside a resource directory implementation, the only correct way to implement this is to expand the anchor in the database, because that's what you will, in the end, send back to a lookup client.
E: I don't think that storing the fully expanded anchor is practical in the implementation, because the base might change in the course of an update. I think that with the simplification — also, if we go that route, I'll go through the implementations I looked at in 2018 all over again — what will come out of it is that the anchor either is or is not necessary to include, and then either the relative anchor can be stored, or no anchor is stored.

E: It behaves as if it had stored the anchor in its original form. It does not behave as if it had stored the anchor in its resolved form, because the resolution result can change when the base changes.
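The point about original-form versus resolved-form anchors can be shown with standard URI reference resolution: if the directory keeps the anchor exactly as registered, the resolved value automatically tracks the registrant's current base, whereas a pre-resolved anchor would go stale when the base changes. A small sketch with made-up hostnames; `http` URLs are used because Python's `urljoin` resolves them out of the box, while a real RD would be handling `coap` URIs:

```python
from urllib.parse import urljoin

stored_anchor = "/sensors"             # kept in its original, relative form
base_v1 = "http://rd-host-1.example/"  # base at registration time
base_v2 = "http://rd-host-2.example/"  # base after the registrant updates it

# Resolving at lookup time tracks whatever the current base is:
assert urljoin(base_v1, stored_anchor) == "http://rd-host-1.example/sensors"
assert urljoin(base_v2, stored_anchor) == "http://rd-host-2.example/sensors"
```

Had the directory stored the first resolved value instead, every registration update that moves the base would require rewriting stored anchors, which is the impracticality raised above.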
F: I'm sorry, a stupid question, but when performing a lookup, will the response be a resolved anchor or not? I didn't get that. My understanding is that, even if you register in this new proposal in the small, compressed way, when you do a lookup you will get a resolved anchor. Or would you get the same thing that was stored during the registration?

F: Yeah, I think we are all... I also lean towards: yes, of course — this is much simpler, and you save a lot of bandwidth on the response, so why not?
A: For the document, Christian: I think you mentioned you would be giving clarification in the appendix about limited link format, and fixes to your implementation — but I guess that's under control.

E: Yes. So we already have a place where we kind of go through the existing rules in a tutorial fashion — that's Appendix B — and we have a place where we outline what we do in excess of the 6690 requirements to get all the implementations in line, as Appendix C. So, of course, they would get updates, but it's not like there's anything completely new that needs adding.
E: So, if there are no more comments on this topic — if you could go back to the slides — I'd just like to go through the other open points. Again, it was just one slide, by the way. Right — oh no, you can page down; go to the next one.

E: So there are two topics left over from last time that just need group action. One is the topic of server authorization, where this originally came from the question of how bad it is if someone puts in a statement at a resource directory about someone who is not the registrant. And it turns out this is the same topic we have with the unauthenticated discovery.
E: So, Michael, you commented last time that OAuth — basically because of all this suite of problems, which could certainly be solved in a more elegant way — just recommends to only run a single service on one host, or behind one authority.

B: One authority... do you have... I don't have a reference for that; it's what I've observed in the field, so I would have to dig through the OAuth documents to see if it actually says that. It's just what I observe in the field: basically, you can't really send different cookies in your browser to different places, and so they solve that by not having more places.
B: No — but yes, I understand that. But you still have OSCORE contexts, right, which are effectively the same thing in terms of going to a particular place... not really.

E: We have a token with a set of claims associated to an OSCORE context — or a DTLS context, for that matter — and nothing in the request limits the set of claims, either from the client or on the server, down to what the client actually intends with this request.
E: So the ways forward I see here are two, in parallel. One is to say that implementations have to be careful here — especially, you should probably do as people do in OAuth, unless you have better control of the issue — and, in parallel, to follow up with something that allows us to get better control.

E: There is a mailing list thread on that, and I'd appreciate it if people could comment on that — or, of course, now.
E: I'm not following OAuth too closely. What does this mean for ACE? As I understand it, ACE is largely taking the semantics of OAuth but putting them into the CoAP area. So does this mean ACE might be on the same way out, or does this mean that we have the chance to do something better with ACE and survive OAuth?

C: Well, Michael just sent the reference to GNAP — or "nap", depending on how you want to pronounce it. So this is one working group that does something that might be replacing OAuth. But in the ACE working group we have had a long-standing battle about whether the OAuth model was even right for IoT — and it's certainly right for certain kinds of IoT.
C
But
there
is
a
whole
area
that
is
not
being
addressed
by
ours
where
the
earth
people
kind
of
assumed
in
in
2000.
When
did
we
decide
that
2017
that
it
might
grow
into
that?
But
that
hasn't
happened,
and
now
we
have
gnab
and
and
other
proposals
of
this
kind.
So
I
would
expect
that
ace
will
be
rechartering
to
pick
up
some
of
this
work
at
some
point
in
time,
but
for
for
now
we
just
should
make
sure
that
resource
directory
is
not
married,
two
hours.
C
B
Olaf,
the
other
reindeer
so
christian.
I
think
that
the
goal
here
is
that
that
was
to
learn
from
oauth,
not
to
cite
it
so
whether
or
not
ace,
dops
sort
of
replaces
it.
B
I
don't
think
that
exactly
matters
I
I
I'm
not
sure,
but
your
example
two
and
your
email
on
that
thread
with
the
temperature
becoming
the
time
it's
kind
of
interesting,
but
I
I
I
it
depends
upon
a
malicious
rd
operator,
and-
and
I
so
I
I
was
trying
to
understand
that-
and
the
relationship
to
the
other
kind
of
issues.
B
So
that's
where
I'm
kind
of
a
bit
lost
here
myself.
I
think
that
it's
just.
E
So we face what I perceive as the same problem on two sides. The one side is: we present the Resource Directory as an optimization, basically as something better, something more efficient, as an efficient cache of requests to multicast, basically. Which implies that the Resource Directory should not really be trusted, at least in some applications, to present correct information about the registrants.
B
So if we could solve the problem that the Resource Directory is essentially relaying untrusted information, by adding some trust to the information that the Resource Directory couldn't change, then that would also solve the second problem of having to put some trust in the Resource Directory. Because then it could be whatever; it wouldn't matter, right? That's what I'm hearing: that we're caching statements that don't include identities.
E
I don't know. So one way forward here is to ask the client, in all situations, to verify that whichever information gets mangled into the request that eventually gets sent comes from an entity that is trusted with that.
E
To the necessary amount, for that request. The thing is, we have different ways of achieving that, and (a) we have to pick one for the discovery, and (b) we'll have to recommend something to application authors. We can be a bit squishy on the recommendation side, but we have to say something about how to discover an RD.
B
Well, part of the issue is that, you know, for rd.example.com, or time.example.com, we have no real way of mapping that to an identity, to know whether something was really even stated by them. That's what it comes down to: if the Resource Directory simply cached "identity XYZ said blah blah blah", then it would always be up to the client to say "well, that's not a valid identity according to my system". It doesn't matter if it's cached or not.
E
But we can't associate that information, especially not in a signed way, with the information in link format. So that would...
E
And even if we could... I mean, there are probably ways to do even directory services for this, but this would wind up with some kind of... or, I mean, if the Resource Directory is not part of the trust chain, as it should not be, then this would basically, one, give us individually signed links and so on, so that the RD could pass them on efficiently, or some kind of...
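A minimal sketch of what "individually signed links" could look like, using a shared-secret MAC purely for illustration; the key, the helper names, and the example link are all hypothetical, and a real design would more likely use asymmetric COSE signatures so that clients never hold the endpoint's secret:

```python
import hashlib
import hmac

# Hypothetical key known to the registrant endpoint and its clients; an
# asymmetric signature would avoid sharing a secret like this.
ENDPOINT_KEY = b"hypothetical-endpoint-key"

def sign_link(link: str) -> bytes:
    """Registrant endpoint tags one link-format entry before registering it."""
    return hmac.new(ENDPOINT_KEY, link.encode(), hashlib.sha256).digest()

def verify_link(link: str, tag: bytes) -> bool:
    """Client checks a link relayed by the (untrusted) Resource Directory."""
    expected = hmac.new(ENDPOINT_KEY, link.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

link = '</sensors/temp>;rt="temperature-c"'
tag = sign_link(link)
unmodified = verify_link(link, tag)                        # RD relayed it intact
tampered = verify_link(link.replace("temp", "time"), tag)  # RD altered it
```

The point is only that the tag binds each link to its originator, so the RD can pass links on efficiently without being able to alter them undetected.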
E
This is probably okay in the RD discovery process, because it means only one rare case where the client has to send another request; but for applications that depend on the RD, that might easily double their discovery traffic. So it's an easy way out for the RD, and I'm leaning towards taking that.
E
Yes, if we still have time; I mean, the kind of backup style, yeah. Then the next one, please. So another thing we talked about last time is that we can't really ensure that the requests arrive at the Resource Directory in an ordered way, which basically comes from the comments about replay protection. So the trouble one would have if requests to the RD could be replayed extends even beyond replay. What we talked about last time was that Echo would be an option to mitigate that.
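The Echo idea mentioned here (the CoAP option defined in RFC 9175) can be sketched roughly like this; the class, the dict-shaped messages, and the freshness window are hypothetical illustrations, not a real CoAP library's API:

```python
import os
import time

class EchoGate:
    """A server that cannot tell whether a request is fresh challenges the
    client with an Echo value and only accepts the retransmission that
    repeats it, which defeats simple replays."""

    FRESHNESS_WINDOW = 30.0  # seconds; a made-up policy for this sketch

    def __init__(self):
        self._issued = {}  # Echo value -> time it was handed out

    def handle(self, request):
        echo = request.get("echo")
        issued_at = self._issued.pop(echo, None) if echo is not None else None
        if issued_at is None or time.monotonic() - issued_at > self.FRESHNESS_WINDOW:
            fresh = os.urandom(8)
            self._issued[fresh] = time.monotonic()
            # 4.01 Unauthorized carrying the Echo value the client must repeat
            return {"code": "4.01", "echo": fresh}
        return {"code": "2.04"}  # proven fresh: accept the update

gate = EchoGate()
first = gate.handle({"payload": b"lt=600"})                          # challenged
second = gate.handle({"payload": b"lt=600", "echo": first["echo"]})  # accepted
replayed = gate.handle({"payload": b"lt=600", "echo": first["echo"]})  # rejected
```

Because each Echo value is consumed on use, replaying the same request a third time is challenged again rather than accepted.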
E
That would basically allow us to even associate an ETag with the state of the registration, and then just ask the client to send that ETag again as an If-Match; and then we would get basically the same thing as with Echo, just that it's nicely scoped to the resource, rather than being a thing between the client and the server.
E
So the current pull request (as discussed last time, I've made this into a pull request first, so that we have text to talk about) shows the two and a half options we have for this. One is using Echo, the other is ETag, and the half option is only good against deletion: that is, the Resource Directory picking different registration resources each time a device comes back. So it's at best a partial solution.
E
If the client didn't even send an ETag, then in effect the server would be asking the client: hey, send this with If-Match and that ETag, or else I won't accept it. And the other question is whether it can send the current ETag with that error response, so that the client can, in the event of desynchronization, properly resume the registration. If those two hold... and Carsten, you're probably best qualified for that.
E
Because you were one of the original authors. Then we might manage to do this without Echo, which saves us a normative dependency, and generally it is using the concept that is there in, say, more regular cases, where the client puts something somewhere and then changes it by putting something else there; it would be using the same mechanism.
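The flow just described, a server that refuses updates lacking the current ETag and reports that ETag in its Precondition Failed response, might look something like this; the class and message shapes are a hypothetical sketch under that assumption, not text from the RD draft:

```python
import os

class Registration:
    """Updates must carry the current ETag in If-Match; otherwise the server
    answers 4.12 Precondition Failed and (the open question above) includes
    the current ETag so a desynchronized client can resume."""

    def __init__(self):
        self.state = b""
        self.etag = os.urandom(4)  # ETag for the current registration state

    def update(self, payload, if_match=None):
        if if_match != self.etag:
            # Refuse, telling the client which ETag to echo back
            return {"code": "4.12", "etag": self.etag}
        self.state = payload
        self.etag = os.urandom(4)  # state changed, so a new ETag
        return {"code": "2.04", "etag": self.etag}

reg = Registration()
rejected = reg.update(b"lt=600")                             # no If-Match
accepted = reg.update(b"lt=600", if_match=rejected["etag"])  # resynced
replayed = reg.update(b"lt=600", if_match=rejected["etag"])  # stale: refused
```

As with Echo, a replayed request carries a stale ETag and is refused, but the state is scoped to the registration resource rather than to the client-server pair.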
C
Well, ETag is mostly there so a client can verify that a cached representation is still useful.
C
Okay, and yeah, so giving more application semantics to If-Match is certainly a questionable thing. Okay, that doesn't mean that we absolutely cannot do this, but I think we should be really certain of what we are doing here and whether it has the right consequences, and I haven't thought about this very much yet. The other thing, sending an ETag in a Precondition Failed response, is certainly interesting.
E
Yeah, but that's basically it; the rest just needs me pouring a few more days into it. So yeah, that's basically the last pull request. Carsten has already put in some valuable stuff there, so if someone else could also do that, it might help, just because this is something where a lot of new text is going in without having the benefit of the several iterations that the rest had.
C
Yes, so I just have a question: at a previous meeting I brought up making Christian the first author on this document. Are we done with that? Is that decision taken?
A
Thank you. And we bikeshedded probably a bit the exact meaning of "editor"; it can be ambiguous whether editor is greater than author or not.
C
But yeah, if Christian is happy with that, I think everybody else should be happy with that, and we can just do it.
A
Right, so yes, Christian, go ahead with that too. Thank you. I had a side question for Carsten, by the way, about the status of the CORECONF write-ups.
C
There must be something about my emails that makes his system drop them, because in other groups, for instance in COSE, it seems that he never saw my messages when replying to other messages. So that might be a technical problem. So if you could maybe ask him what's going on, that would help.
A
Yeah, thanks for saying; I can reach out to him, ask him to send you a mail, and probably that will start working, or otherwise he can recommend other channels. He used to be often on Jabber, but he's stopped, one way or another; yeah, I don't know why. But okay, I'll give it a try, Carsten, and come back to you.
E
Okay, just a quick question about CORECONF: are there good public implementations of it that anyone could recommend? Because I might need to have a look into that at some point, and an implementation is always a good point to start.
A
Yeah, well, someone left already, but yeah. If there are no other points to discuss that anyone wants to raise, I think we can close the meeting. We have one more interim; we don't have, off the top of our heads, big topics in the queue to discuss. So of course let's keep the interim scheduled till the last moment, but we may eventually cancel that one if there's nothing exactly to discuss.
C
Strictly proportional to the time you need to process the first round of comments, because the IESG has to read so many documents that the knowledge about what happened just falls out of the cache over time. So the longer you wait in submitting a new version, the longer the processing of that will take; and that's why it's sometimes a good idea to even submit a new version that only addresses part of the comments, so the cache gets refreshed.