From YouTube: CORE WG Interim Meeting, 2020-05-13
A: So welcome to this virtual interim, the second of the series. Today we mostly discuss a new proposal for block-wise transfer, and related work started some time ago on non-traditional responses. As the first main item, we have a presentation from Jon and Mohamed on the new block options.
B: So this essentially is to talk about some of the challenges that we have had within the DOTS environment, the distributed-denial-of-service threat signaling type of environment. We came up against the fact that, when we are working within networks that are potentially lossy because of the DDoS attacks, some of the existing CoAP machinery that is in place needs to be looked at and modified. So here we just have a kind of schematic diagram of what we have.

B: I'll just move the microphone, yep. Apologies for that. Okay, so the general operation for DOTS is that for configuration we actually use confirmable CoAP, and this is done in peacetime when there are no attacks taking place, but the actual mitigations and responses all have to be non-confirmable, as I mentioned before. We have found that in the general case a single packet contains all the necessary information, so a mitigation request can be in one packet, or some sort of status update coming back from the server can be in a single packet, and we have found, and have done testing, that this works fine with packet losses. It is just that the client may not see some of the information coming back. To make the protocol robust we have what's known as application heartbeats, which do some sanity checks of what's taking place.
B: Through the way that we've done it, the server component out on the internet is able to identify that the client is always alive, even if there is traffic loss, but the client may just have to carry on blind, assuming that the server is still there when there is traffic loss. We have then come to want to extend the capability beyond just a simple "mitigate, help me" type of request and a "this is what we're doing" type of response, to telemetry.

B: Likewise, the server can send back information about how it is mitigating the attack, what type of attacks it is seeing and so on, and the consequence of this telemetry information is that it is going to be larger than what can fit into a single packet. In an environment without packet loss, Block1 and Block2 work perfectly for sending the stuff up to the server and getting the information back with non-confirmable messages. If it was confirmable, it works.
B
But
if
there's
any
packet
loss
we
went
to
b,
twice
and
so
on,
but
typically
with
packet
loss,
block
ones.
Response
saying
give
me
the
next
one
gets
lost
from
the
server
coming
back
to
the
client
or
server.
When
he's
sending
block
two's
downstream,
the
packet
gets
lost
and
there's
no
way
easy
way
of
recovering
from
that,
and
what
basically
happens
within
the
environment.
Is
that
everything
kind
of
grinds
to
halt
whilst
we're
waiting
for
things
to
happen,
it's
just
not
acceptable
when
we're
under
sort
of
ddos
attack
type
environment.
B
So
we've
been
looking
at
several
ways
of
handling
oversized
packets.
One
way
is
just
to
use
ip
fragmentation
so
that,
instead
of
just
a
co-app
message
fitting
into
a
single
packet,
we
let
it
go
into
multiple
packets
and
just
send
the
whole
data
and
assume
that
all
the
fragments
are
going
to
make
it.
But
we
have
no
way
of
recovering
from
missing
fragments.
B: Another way is that the application breaks up the telemetry data into chunks that fit into individual packets and sends those, but we have a bit of a challenge there, because the YANG anydata requires each individual chunk to be full JSON in its own right. So we then have to look at how we can break the telemetry data up into individual, fully blown JSON chunks.

B: We can do that with lots of very small chunks, but we want to actually minimize the number of chunks or packets being sent, so that we maximize the usage of any individual packet. Using Block1 and Block2, we found, has limitations, as I alluded to before. The performance issue is that, if you're sending Block2s from the server back to the client, the server sends one block and then the client has to request the next one before the next packet can go.
B
So
we
we've
got
into
packet,
latencies,
taking
places,
we've
got
turnaround
times
and
all
the
rest
of
it
and
we
fall
apart
when
we're
in
a
lossy
type
environment
when
the
pipes
are
running
for
when
there
is
a
serious
ddos
attack,
that's
taking
place
so
out
of
that,
we
then
thought
well,
let's
propose
options,
block
options,
three
and
block
option,
four,
which
work
in
the
same
similar
sort
of
way
as
block
one
and
block
two,
but
there's
several
additions.
B
One
is
that
we
can
send
all
the
block
threes
serially,
one
after
another,
up
to
the
upstream
server
or
likewise
the
server
can
send
all
this
block
fours
with
some
sort
of
major
status
message.
You
can
just
send
them
down
serially
down
the
pipe
as
if
you
were
sort
of
doing
it
with
fragmented
packets.
They
all
just
immediately
follow
one
after
another.
But
if
we're
going
to
be
using
block,
3
and
block
4
in
a
confirmable
environment,
then
we
obviously
need
to
increase
and
start
which
is
slightly
uncharted
territory.
B: Missing blocks can then be re-requested, so that we can make sure that we get all the blocks that make up the body, the entire chunk of data that is to be passed down. We've come up with using a Block ID for reassembly; there has been a bit of discussion about using ETag, but we're not entirely sure whether we can use ETag, because it is a resource-local identifier.
B: So, just looking at Block1 versus Block3. With Block1 we are limited by the probing rate: if we're sending blocks out and there's no response, because the pipe coming back towards us is running full, we're limited to that probing rate of one byte per second, which, for whatever reason, is quite a low rate, so we'll be hanging around for some time. It will take time for the telemetry information to be passed, whereas with Block3 we would just send the whole body.
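As a rough worked example of why that one-byte-per-second figure hurts: PROBING_RATE is the RFC 7252 default, while the body size below is purely an illustrative assumption.

```python
# Hypothetical numbers to illustrate the PROBING_RATE limit described above.
PROBING_RATE = 1.0        # bytes per second, the RFC 7252 default
BODY_SIZE = 3 * 1024      # assumed size of one telemetry body, in bytes

# With no responses coming back, a sender must keep its long-term average
# below PROBING_RATE, so after pushing one body out "blind" it has to wait
# roughly this long before the next body may go:
wait_seconds = BODY_SIZE / PROBING_RATE
print(f"~{wait_seconds / 60:.0f} minutes between bodies")  # ~51 minutes
```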
B: Both Block1 and Block3 can utilize the 4.08, you know, the "Request Entity Incomplete" type of response for missing blocks, but we need to extend the message that's in the 4.08 response, either with the data body carrying an array of what's missing, or by including in the 4.08 response the fact that we've got Block3s and which of those blocks we've seen and which we haven't.
B: Likewise with Block2 versus Block4: the server has to wait for the next block request before it can send the next block down, and so we have to maintain a copy of the body up in the server for a period of time; whereas with Block4 the entire set of blocks can be sent without waiting, hence giving the highest performance, the shortest kind of transfer time, just to get the data across. Likewise, as we mentioned with the Block1 stuff, the client can indicate multiple missing blocks, and the server can delete a body on a successful transfer, so we can do some sensible management there. And thinking about this, it makes sense to us, as non-CoAP experts, that caches can keep the data at the block or the body level for ease of caching.
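To put rough numbers on the lock-step versus send-everything difference described above, here is a back-of-the-envelope sketch; the RTT, block size, body size and link rate are all illustrative assumptions, not values from any draft.

```python
# Illustrative comparison of lock-step Block2 vs. a Block4-style burst.
RTT = 0.2                    # assumed round-trip time, seconds
BLOCK_SIZE = 1024            # assumed block size, bytes
BODY_SIZE = 20 * 1024        # assumed telemetry body size, bytes
LINK_RATE = 1_000_000 / 8    # assumed 1 Mbit/s path, in bytes per second

num_blocks = -(-BODY_SIZE // BLOCK_SIZE)     # ceiling division

# Lock-step Block2: every block costs one request/response round trip.
lockstep_time = num_blocks * RTT

# Block4-style burst: one request, then the server pushes all blocks
# back to back, so roughly one RTT plus serialization time.
burst_time = RTT + BODY_SIZE / LINK_RATE

print(f"{num_blocks} blocks: lock-step ~{lockstep_time:.1f}s, burst ~{burst_time:.2f}s")
```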
B: Then there are the tokens, and how they should be used, or not used, as the case may be. Essentially a token matches a response to the request, so that the client can work out what this particular message that has come back is associated with. If we're going with the Block3/Block4 type of stuff, then with Block4 there are no additional client requests going upstream, because all the Block4s are just coming back down; do we use the same token across all the Block4s coming back? So we've got some question marks about what to do with tokens, and likewise, are there any implications for proxies that we haven't yet thought about?
E: I have a bunch of comments, but if someone else has one... I might not stop talking for some time, so maybe I'll put myself at the end of the queue.

E: Go ahead, Christian, I think, yeah.

C: Shall we? I mean, Christian, maybe you can start, and then Carsten can just jump in whenever the audio is fixed.
E: Okay, maybe let's start with one of the smaller items, on the ETag. I don't quite understand what the issue about the identifier being resource-local would be. I understand "resource local" to mean that this identifier is scoped to that resource.

E: That is, it can't be reused between resources, but that's not something that should impede this particular use case.

E: But that's something that could just as well be done with an ETag, if the server insists on it, if the server wants to use this particular model.

E: Okay, so I'm sorry, I mixed things up. Yes, that mail was a bit earlier in the week; this morning's mail was just kind of taking the thoughts from there on, but I think that wouldn't affect this discussion too much.
E: So my impression of this overall thing is that all of this can be done without kind of changing the whole block system, partially by mechanisms that are somewhere else in the pipeline, and partially just by doing concrete extensions without blocking things on proxies that would be processing it. So, for example, if you could go to the, yeah, right here, it is on the missing blocks being re-requested.

E: This should, for the Block3 case or the Block1 case, be quite straightforward if there is just a response sent by the server that indicates which blocks are missing at the point in time when it comes to processing the PUT. So, if that is supposed to be done in an atomic fashion, then sure, we would need to define a media format that says: this is a reason for a failure to reassemble and to act on a message, and it indicates what parts are missing. But that information could easily be packed into a CBOR array, or anything, CBOR is probably the most straightforward, could be defined as a new media type and then just sent back with the Block1; and I think that would then catch most of the Block1 problems that I mentioned so far.
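A minimal sketch of what such a payload could look like if the missing block numbers are simply packed into a CBOR array, as suggested here; the array-of-integers layout is an illustrative assumption, not the layout of any published media type.

```python
import cbor2  # third-party CBOR codec (pip install cbor2)

# Block numbers the server never received; purely illustrative values.
missing = [2, 3, 7]

payload = cbor2.dumps(missing)          # -> b'\x83\x02\x03\x07'
# The client would decode this from the 4.08 response and retransmit only
# the listed blocks instead of restarting the whole body transfer.
assert cbor2.loads(payload) == missing
```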
B: Okay, so from your email that I read earlier this week, the slide I've got up at the moment just talks about it being a 4.08, which is what you were referring to. The question is just: how do we include it? Okay, yes, it could be another CBOR body.

B: Yeah, that's okay, though. That's just why I'm so quiet on the microphone, I apologize. But okay, so yeah, it is just that, in the email that I saw earlier this week, which is why, on the slide that I have now got up here, we talked about the 4.08 for the missing blocks, which could be some CBOR-embedded stuff, which is how we can handle the Block1 stuff.
E: For Block2, I think it's really not that different from what Block4 does. The main point is to get the server to send the blocks en masse in the first place, and the token issues that you're stumbling on are kind of the linchpin that this will all revolve around, because this is the tricky part. And however we resolve that, for example by using non-traditional responses, the client then receives the list of all blocks that are there and can just get the ones that it's missing in a second round.
B
Sure
I
understand
that
okay,
so,
but
what
I
I
guess,
what
partly
concerned
me
was.
We
are
potentially
changing
the
semantics
of
block
two,
which
is
that
a
response
is
sent.
Then
the
client
requests
the
next
one
and
then
another
response
is
sent
so
on
and
we're
changing
it
to
sending
them
all
out
in
one
go
somehow
and
does
that
affect
the
appropriate
rsc.
E
B
E
Yeah
yeah,
but
that
kind
of
that
still
leaves
us
with
two
mechanisms
where
I
think
that
one
that
has
a
small
extension
can
can
suffice
here
and
that
one
mechanism
and
that
extension
could
be
generally
used
for
for
other
purposes.
Just
as
well.
E: So I think that if we combine all those building blocks that we have or might have, if we continue on the non-traditional responses, then the only thing that remains is that we would not have a mechanism for "I missed many blocks, and precisely, I need this and that and that and that one". So is this something that you expect to happen frequently? Because the examples that you had in there were all about bursts of missing blocks.

B: I agree that in the really flooded condition, certainly with Block4s coming back downstream to the clients, we're likely to miss all of them; and certainly within the draft we have it that it is up to the client whether it decides to re-request missing blocks, or just say "okay, well, there's just too much loss going on here, let's carry on and wait for the next set of data to come in my direction".
F: So, does anybody hear me now? Yes, okay. Well, they reconnected me. So, just for everybody's amusement: Michael Richardson has found out that if you are on the Internet, WebEx doesn't work; you need to be behind a NAT. And I didn't remember that I was on the Internet when I started the call, so yeah, very interesting.

F: So what I was suggesting was that maybe we can structure this discussion into a number of items, and I may not be catching all of them, but one big item, of course, is congestion control, and we need to understand what we are doing here. And maybe a meta-item behind that is: are we doing something that is very specific to DOTS?
F
So
are
we
kind
of
ignoring
that
there
might
be
other
applications
that
have
similar
applications,
similar
requirements
for
the
application,
or
are
we
trying
to
build
something
general
that
will
work
in
in
other
places,
because
with
dots,
of
course,
the
the
the
idea
of
applying
congestion
control
to
a
ddos
situation
is
is
never
going
to
work
well,
so
we
we
probably
have
to
build
something
that
actually
actively
cuts
through
excessive
congestion,
so
that
that
makes
the
the
solution
probably
look
different
from
solutions
that
are
generally
applicable.
F
So
that's
one
thing
or
one
and
a
half
things,
and
the
other
question,
of
course
is
what
is
the
the
proper
way
to
extend
the
co-op
model
here,
which
was
not
designed
to
be
optimized
for
for
performance
in
in
the
block
situation,
and
so
far
the
the
general
idea
has
been.
If
you
need
performance
for
large
objects,
then
run
co-op
over
tcp,
but
there
are
good
reasons
why
we
don't
want
to
do
this
here
as
well,
which
is
maybe
another
observation
that
that
we
are
designing
something
here.
B: Okay, so I totally agree that there's some very DOTS-specific type of stuff here, because of the potentially nasty environments that we work in, but I think that there will be other people using UDP where there are potentially lossy networks, where there is an occasional packet that gets dropped, which is one of the reasons why we perhaps use UDP, so that we don't care too much if something gets lost. It's like the old IP fragmentation challenge: you always got nine out of ten packets, but you missed the tenth, in an environment that was giving problems for whatever reason.

B: So from that perspective, having the ability to send more data as quickly as we can get it over an environment is, I think, a more general case than DOTS, but I agree that it is certainly generated by what DOTS is doing. Again, with DOTS, TCP is virtually a no-no, because if there's any loss at all, TCP goes into recovery, things slow right down, and it can potentially just give up.
F: Yeah, so to get this by the IESG, we probably have to make sure that we don't sell this as "hey, if you want to do something aggressive, ignoring congestion control, then CoAP is the protocol you want to choose", even though it is exactly that here. But I think we have to be a bit careful here on the transport side.

B: Well, there is a larger body of data, made up of chunks, moving in one direction at any one time, but that body would be subject to things like the probing rate and so on, which is absolutely right; otherwise we would just create a DDoS attack in its own right, just by using the DOTS protocol aggressively and sending too much data down the pipe.
E: That's confusing me, because if DOTS gets a license to send arbitrary amounts of packets, ignoring the probing rate, that sounds much more complicated to me than just stating that for this particular situation the probing rate is such-and-such, and that might be higher than the kind of default probing rate CoAP specifies in case you don't know any better.

B: So you have a set of blocks which make up the body, which is three or four packets. That entity is allowed to be sent, and then, when the next entity wants to come along to be sent, it is subject to the probing rate, which currently is one byte per second, which is quite low, but that's another story.
F: We don't have a DDoS going on, so I think there is still some incentive to do this in a friendly way, even though it's certainly priority traffic in a certain sense; but it never makes sense to send packets at a higher rate than the network can actually transport. So that's why you need congestion control, even if you are alone on a particular path. Getting something going there would be nice, but then congestion control needs feedback.

F: So how do we get feedback if the packets are lost in the other direction? I think that's an interesting question: is there a good way to get feedback for the direction away from the DDoSed network that allows us to do this at the highest rate that makes sense congestion-wise?
B: It's a way of detecting, up here at the client, that there's a loss of traffic coming down the pipe towards us.

B: This is an application-specific thing. So, going back, it's the ability to get feedback to say that the traffic coming down the pipe, inbound from the Internet to the cloud, is seeing a loss, and we have a way of indicating that loss back up; and that could be a CoAP option in its own right, talking about some sort of determined loss.
E: I suppose that could even be part of the telemetry that you're sending; it would not need to be made into a new CoAP option at all. So if the telemetry that the server sends to the client has some information that goes into the algorithm that determines the probing rate for the client, that would be DOTS-specific, and that wouldn't, in my opinion, need to be in a CoAP option; it could just as well be in the generic telemetry that's in there.

B: But absolutely, yes, we can pick up that kind of stuff. Certainly, as part of the telemetry we're able to configure what we believe the different pipe sizes to be and so on; you know, we're on a hundred-meg circuit or a 50-meg circuit or whatever it may be, so we're able to pass that information between the client and the server, which helps the mitigation process know that it's only got a 50-meg pipe it can send stuff down.
F: So that might be one component of the solution: that we essentially define a way for CoAP to run with external congestion control input, so that the CoAP machine does not have to itself find all the congestion control information; it gets some additional information from the application, which has more information available to it. So that could be one way to solve this congestion issue, and then we could focus on the protocol.
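A small sketch of that idea, feeding CoAP an externally supplied rate rather than having it derive one itself; the pacing loop and the negotiated_rate parameter are illustrative assumptions, not an interface from any implementation.

```python
import time

def send_blocks_paced(blocks, send, negotiated_rate):
    """Send NON blocks back to back, but never exceed the externally
    supplied rate limit (bytes per second) that the application negotiated."""
    next_allowed = time.monotonic()
    for block in blocks:
        now = time.monotonic()
        if next_allowed > now:
            time.sleep(next_allowed - now)   # stay within the external limit
        send(block)                          # hand the datagram to the stack
        next_allowed = max(next_allowed, now) + len(block) / negotiated_rate
```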
D: What we have right now is CoAP over UDP and DTLS, and we have CoAP over TCP and TLS. When we started with CoAP over UDP ten years ago, we tried to keep CoAP as simple as possible, and what we did in the beginning was that we tried to get away with not even having confirmable messages and acknowledgements, or even specifying congestion control for CoAP over UDP; and somewhat reluctantly we put those features in. Confirmable messages seemed to be a good idea, but of course the cost was that we now had this additional messaging layer between CoAP request/response and UDP, and of course we couldn't get around defining congestion control for CoAP over UDP; for that we followed the RFC on UDP usage guidelines, which had different recommendations based on whether you have round-trip times available or not, and so on. I think we modeled congestion control mostly after the case that was called low data-volume applications, and then, as Carsten pointed out, we noticed that for some people this CoAP over UDP doesn't work. If you have large payloads, for example, then you might want to have something more sophisticated, and with CoAP over TCP we have such a solution. So the idea, as Carsten said, was: if you need something more than this simple congestion control, simple retransmissions, just use CoAP over TCP. Now, ten years later, things have changed a bit; we're not talking that much anymore about 802.15.4 networks.

D: Maybe our constrained devices have also grown a bit and are more like Raspberry Pis today than Cortex-M0s and so on, and it's always great to see when a protocol can adapt to use cases and scenarios that weren't envisioned in the beginning. Now, in some cases the use cases get pretty far away from the initial use case, and at some point it starts hurting a bit because of some initially designed limitations, and in that case we have a dilemma, and that dilemma is always the same.

D: So if we now have requirements like, just as an example, selective acknowledgements or negative acknowledgements, and transmission windows, and DOTS-specific congestion control and so on, wouldn't it be the best solution, if we bring this to the transport area, that we build something in between UDP and TCP, and then we just define a new CoAP transport: CoAP over DOTS-TP, the DOTS transport?
F: Yeah, I think what Klaus said is quite true, but I think it's also about adding features to CoAP that become part of the standard protocol that we would recommend people to use outside the DOTS use case. I think we do have a little bit more flexibility in adding things to CoAP, a couple of new options or something like that.

F: But of course I'm still interested in optimizing this, so that there is as much as possible that we can take home from this exercise for the more global CoAP community, and I also think we should not be designing another transport protocol while we are doing this, and so on. So keeping it simple is also an important consideration.

A: And just to double-check, Christian: have you explored, in a way that is good for you, the alternative use of non-traditional responses, as they are now or as they can be developed?
E: I think that the proposals are in the mails; discussing them through is probably not on the agenda for today, and I think that the main part, the direction this is going, is more about the congestion control anyway; and then whether one can use non-traditional responses or wants to go the Block3/Block4 direction doesn't matter too much in this discussion.

E: So if the outcome of this is that we have something about congestion control, and then a Block3/Block4 that's maybe updated to not interfere with the probing rate but just state what it can do, then it would be easier to go through another iteration of what non-traditional responses can do for that.
C: And I'm willing to then maybe throw another question to the group, like: is the current block-wise transfer sufficient for the cases we have, or do we need to define a new Block3/Block4? Is it worthwhile to go back to the drawing board and see how block-wise could be updated, or is it maybe a bit too early? Okay, Carsten, sorry, I jumped in. Thank you. I think Mohamed is there, but maybe, Carsten, you want to reply to this, I guess.
G: First, actually, I wanted to comment about the comment from Christian about the probing rate, but I think it's not related to the question from Jaime, so I can wait until you finish.

F: Okay. So, basically, I think we have identified two things that hurt here. One is that the block protocol is entirely lock-step.

F: So this is kind of the inverse of what I just said: if we can get something like a bitmap back, saying which blocks have made it and which haven't, that would be a way to solve that problem.
E
Work
I
mean
there's.
I
don't
think
that
there's
anything
that
would
stop
a
client
from
kind
of
say,
putting
the
first
block
then
putting
a
few
other
blocks
in
a
non-request
with
no
response
in
the
success
case
and
then
put
the
say,
10th
or
20th
block
in
a
con
again
and
see
whether
the
server
complains
or.
E
It
knows
well
in
india
in
the
end,
it
will
need
some
pretty.
It
will
need
some
concrete
payload
in
the
4.0
or
something
I
don't
have
everything
to
process.
This
response.
F: Yeah, so what I was trying to say is that something like a SACK for this situation, a selective acknowledgement for this situation, is exactly what we need, and we essentially have to come up with a data structure that represents that; but we also have to understand how the protocol flow makes use of that data structure. So who sends this, when, why, and so on.
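One possible shape for such a data structure is a plain bitmap of received blocks, from which the sender derives exactly what to retransmit; the encoding below is an illustrative assumption, not a structure taken from the drafts under discussion.

```python
def encode_received_bitmap(received, total):
    """Pack 'block i received' flags into bytes, least significant bit first."""
    bits = bytearray((total + 7) // 8)
    for i in received:
        bits[i // 8] |= 1 << (i % 8)
    return bytes(bits)

def missing_blocks(bitmap, total):
    """Recover the block numbers that still need to be retransmitted."""
    return [i for i in range(total) if not (bitmap[i // 8] >> (i % 8)) & 1]

bm = encode_received_bitmap({0, 1, 3, 4}, total=6)
print(missing_blocks(bm, 6))   # -> [2, 5]
```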
F: Yeah, I certainly agree with that from a theoretical protocol-purity point of view, but I still think we should explore whether it can be done, and whether the shape of what we get is not just ugly but has some actual problems that would stop us from using it.

F: Yeah, to me it seems that it makes sense to write up some of these flows in a little more detail. I couldn't quite make out, from new-block, what the protocol machinery behind these flows was going to be. So maybe, if we get a little bit more detail into that, we could find out whether we can solve it that way.
B: I haven't done any specific coding with Block3/Block4, but in terms of the coding that I have done, which currently uses Block1/Block2, I don't see any challenges there. As I said, in terms of Block4, provided the client sends the Block4 option to the server,

B: the server can then remove the lock-step requirement to send blocks; it can send the set of blocks in a sequential chain and then go into the recovery after that. I haven't been thinking it through since the new-block stuff came along; it has been modified a little bit from our new-block discussions, based on those discussions and on thinking things further through from a coding perspective.

F: There were a few big statements about tokens, and so on, that I couldn't quite understand. Okay.
B: So yeah, it really is about when one needs to provide a token; we had discussions about empty tokens and so on. But should the token that is provided on a Block4 set of block transfers be the same token, which is what's controlled by the client, or should it be individually, randomly generated tokens, which is against the whole spirit of things? Which is why, in the Block3/Block4 draft, I had them all be the same token, to kind of remove the ambiguity there.
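A minimal sketch of why one token per body keeps client-side reassembly simple; the bookkeeping below is an illustrative assumption, not code from any CoAP stack.

```python
pending = {}    # token (bytes) -> Body being reassembled

class Body:
    def __init__(self):
        self.blocks = {}    # block number -> payload bytes
        self.last = None    # number of the final block, once known

def on_request_sent(token):
    pending[token] = Body()

def on_block_response(token, num, payload, more):
    """Every Block4-style response for one body carries the request's token,
    so a single dictionary lookup finds the right reassembly buffer."""
    body = pending.get(token)
    if body is None:
        return                       # a body we are no longer tracking
    body.blocks[num] = payload
    if not more:
        body.last = num              # the block with the "more" flag clear is the last
    if body.last is not None and len(body.blocks) == body.last + 1:
        data = b"".join(body.blocks[i] for i in range(body.last + 1))
        del pending[token]
        print(f"reassembled {len(data)} bytes")   # hand to the application
```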
B: Yes, well, the token is... okay. So when the GET request is done, it uses a token, at which point the responses come back with it. If the GET request initiates an observe at the same time, then the token is remembered by the server for the period of time that the observe lasts, so there is a possibility that the client will be cycling through lots of requests and hence lots of tokens.

B: A token that's being used for an outstanding observe request, the client should know about and maintain; the client keeps the list of tokens for the observe requests that it has done, so that whenever an observe response comes back, it knows what to associate it with. So it will be maintaining a list of tokens which are already allocated against observe requests, and then it just uses other tokens for its other requests over time.
F: Right, so it's hard to run out of tokens, because there are two to the power of 64 of those, agreed. Maybe I didn't understand what the point was.

F: Of course you need to keep track of your outstanding requests, there is no way around that, and storing a token with the outstanding request shouldn't be too hard, right? The question really is: when do you actually retire a request?

B: Okay, well, by and large we use observe, so it's about retiring the observe; but in terms of when a request retires, okay, so we send off a request and there are ten blocks that can come back, and if all those ten have the same token, we may get only nine of them back because of a lossy environment.
F: Yeah, I at least think this can be made to work. It may not be beautiful, but yeah, so my problem really is probably more with the current text; it doesn't explain all the things that we seem to have in mind. Yeah.

F: Okay, so we probably need this applicability statement, we need some way of saying there is congestion control information outside of CoAP that goes into this process, we need the definitions of the Block3 and Block4 options, and we need the data structure that does the selective acknowledgement, or the selective re-request.
E: I still think that it would be more elegant to solve this by just using a more generic phrasing of non-traditional responses, but if it's kind of tight, if it turns out that that is more complicated and slower, and this is just what we need for this very particular use case...

F: Yeah, so I think we should simply go through both in parallel: do the very specific, very limited solution, but also think about a solution that has a broader scope of applicability, and at some point we need to decide which of these we actually do. So how much time can we give ourselves for doing this?
B: A couple of weeks max; I would expect less than that. In terms of the non-traditional responses, I need to think quite a bit about that, because that is more kind of unsolicited non-traditional responses, in that the server can elect to send something because it feels like it, for whatever reason.

B: They would be subject to any probing-rate type of stuff, because you're sending them out and you're not necessarily getting a response back, and therefore you don't know whether you're killing the network by overload or whatever. So all that kind of congestion control discussion is also relevant there.
E: So one thing that I'd like to add here is that if, for example, giving more indication to the server about which blocks the client wants is something that you need, this can be added; but one of the design points for this new option that allows the server to send more responses is that all those kinds of specifications, like "send me this and that and that block", or "no, if there is a link over there, follow it and give me that response as well", could be done in such a way that the proxy doesn't necessarily need to understand them. So I didn't add them in there, because that's kind of the generic framework, but something like "and give me slices from two as well" can be an option that I just didn't write about.
E: I'm also happy to kind of update that more frequently. Procedurally, I don't really know where to do this best right now; I've written a mail, because it's basically updating a draft that's not mine, or suggesting an update to a draft that's not mine.

F: So is the DOTS universe going to explode on August 31st if we don't have this solution defined, or do we have a year, or what is the... Of course everybody wants everything as soon as possible, I understand that, but what is the realistic timeline for this?
G: It depends. If you want to have this included in the base DOTS telemetry specification, which is currently, I would say, quite advanced in the version we have so far: my target is to have it in working group last call by mid-July. So for the time being I'm really careful on this point, that we know that this is really, I would say, a problem that we need to solve.

G: So far we are careful in the use of, I would say, there is no normative language pointing to the block options we have defined so far. So for me, yeah, I don't want to add, I would say, any dependency in the telemetry specification on something that I am not sure I will get soon. So perhaps we can manage to have, I would say, an update to the DOTS telemetry itself.

G: If we are confident, we will, I would say, carry on the two tracks and then decide whether we maintain the proposal with the blocks or the one about the unsolicited responses; that can be another option, if it makes progress in the publication process faster than the unsolicited specification itself. So it's really open. I don't have, sorry to not be, I would say, more precise in terms of the milestone. But yes, this is the current situation: we have a specification that has advanced, and we have this pending issue about, I would say, the large notifications.

G: Ideally we would like to have the block options, I would say, specified, but I understand that the CoRE working group will take more time to, I would say, compare the various proposals and then make a decision. So yeah, it won't be the end of the world if we don't have this issue, I would say, frozen in the current specification.
G: What I can do by July is that, if I don't see any progress on this front, I would just declare that an open issue and say that the current telemetry specification won't solve it, and this is for a future version of the specification. So that's my current take on this.

F: Okay, thank you. That was a very detailed answer to my question. I think I understand the situation, so I think we have about six weeks we can use to play Lego with our various elements of solutions we have in mind, and then I think we should be starting to make decisions.
G: Yeah, yeah. Actually, when we are opening the, we say, the session between the DOTS client and the server, we are also negotiating the probing rate that will be followed by the DOTS agent. So we are increasing the default one that we have currently in CoAP; we are recommending to use five, because there are a lot of overheads there. But once we have agreed on, I would say, the probing rate between the DOTS client and the DOTS servers, we are only bypassing that congestion control in one exception: when there is an attack and we want to place a mitigation request. Apart from that, we are really, I would say, following the rate which is indicated by the probing rate, and we are also following the recommendation.
G: There's also the comment, the question made by Carsten, about whether we have feedback from the agents themselves about whether they are receiving, I would say, some messages. As mentioned by Jon, we are defining at the application level what we call the heartbeats, and in the heartbeats we are including, I would say, information about the heartbeats which are received from the other peer.

G: So, if there is, for instance, a condition that means that in one direction you will be able to send your heartbeats to the other agent, but you don't receive any from the other one, then the agent which sent the heartbeats that are lost is aware of the loss, because it's also reported by the destination agent. So we have this elementary information, which is shared between the client and the server.
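A small sketch of that heartbeat idea: each peer reports in its own heartbeat how many heartbeats it has received from the other side, so a sender can notice that its outgoing direction is lossy even when nothing else comes back. The counters and field names are illustrative assumptions, not the DOTS heartbeat format.

```python
class HeartbeatState:
    def __init__(self):
        self.sent = 0                      # heartbeats we have sent
        self.received_from_peer = 0        # heartbeats we have received
        self.peer_saw = 0                  # how many of ours the peer reports seeing

    def build_heartbeat(self):
        """Payload for our next heartbeat (hypothetical field names)."""
        self.sent += 1
        return {"hb-sent": self.sent, "hb-received": self.received_from_peer}

    def on_peer_heartbeat(self, hb):
        self.received_from_peer += 1
        self.peer_saw = hb["hb-received"]  # peer tells us how many of ours arrived

    def outgoing_loss(self):
        """Heartbeats of ours that apparently never reached the peer."""
        return self.sent - self.peer_saw
```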
F: I think that's a useful part, and having the heartbeats for additional congestion control information, we would have to check whether that's enough of a measurement to use here; but yeah, for the under-attack situation you want to have at least a little bit of information about whether the data does flow in the non-attacked direction.

F: Yeah, so at the protocol level, in the current protocol, we don't really have to do much; we just say that there is external input that allows us to do this in a proper way, and we just make sure we stay within those limits provided by the external input. So I think we are pretty safe, but I think, yeah, this has to be written up somewhere, probably in the DOTS document.
F: Yes, so if you're doing something like Block3/Block4, you have a number of NONs, your messages, you can send; and how do you know how fast you should send them, how do you manage that?

B: As an average overall, yes; so there'll be a set of blocks, then there'll be a big gap, and then another set of blocks, and the average will be across the blocks that are sent, so that the next set can't be sent until the probing rate is down enough for the next one to be sent.

B: Well, we have done some active data reduction to try to reduce the amount of data, by having pre-negotiated mapping tables as part of the telemetry information, so that we can actually reduce the amount of telemetry data that gets passed across.