From YouTube: IETF101-CORE-20180320-0930
Description
CORE meeting session at IETF101
2018/03/20 0930
https://datatracker.ietf.org/meeting/101/proceedings/
A
This is an IETF meeting. The IPR principles of the IETF apply, including the Note Well. This is the original agenda for today, and we are going to move up the dev URN item, because the presenter is only available around 10 o'clock. We have already covered two items, and we have covered this one, but we have to return to it, so we will do that. Are there any other things we need to change in the agenda?
B
So, first of all, let's take a look at the status of the draft. The document is currently in working group state "Submitted to IESG for Publication", and the last revision is -03, which incorporates mostly editorial updates and addresses comments by Wesley Eddy, from the TSV-ART early review, and Mirja Kühlewind, who has the responsibility for this document. By the way, both reviewers have expressed that the last revision of the document satisfies their comments. Then, for the next revision:
B
We need to address the comments by four additional reviewers, that is, the reviews from Scott Bradner, Vincent Roca and Christer Holmberg, and in addition there was a further set of comments received yesterday from Gorry. So we need to address all these additional comments. Later today in the presentation I'll refer to how we plan to handle, or what our position is on, the comment by Scott Bradner. However, the other three reviews were received just a few days ago, or even a few hours ago, so for those we don't have slides; we're still processing the comments.
B
Okay, so let's go quickly through the updates in -03. The first update is in Section 1, the introduction. We have added a new paragraph there which, in its original form, was previously in Section 5, the section that focuses on NONs. It has now been adapted, made a bit more general, and it provides an overview of what CoCoA does.
B
We explain that CoCoA computes the RTO based on weak or strong RTTs, because we use weak RTTs in addition to strong RTTs. The reaction of CoCoA to congestion is to use a lower sending rate, and specifically for NONs the sending rate is limited to one message per RTO per destination endpoint, which, if everything works as expected, is more conservative than what is stated in RFC 7641, which would define a limit of one message per RTT per destination endpoint.
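The dual-estimator idea described here can be sketched as follows. This is an illustrative reconstruction, not the draft's normative text: the class and function names are invented, and the blending weights (0.5 for strong, 0.25 for weak samples) and initial RTO are my reading of the CoCoA draft's defaults.

```python
# Sketch of an RTO built from two RTT estimators, as discussed above:
# a "strong" estimator fed by RTTs of exchanges with no retransmission,
# and a "weak" one fed by ambiguous RTTs measured across retransmissions.

def update_estimator(est, rtt, k):
    """RFC 6298-style SRTT/RTTVAR update; k scales the variance term."""
    if est["srtt"] is None:
        est["srtt"], est["rttvar"] = rtt, rtt / 2
    else:
        alpha, beta = 1 / 8, 1 / 4
        est["rttvar"] = (1 - beta) * est["rttvar"] + beta * abs(est["srtt"] - rtt)
        est["srtt"] = (1 - alpha) * est["srtt"] + alpha * rtt
    return est["srtt"] + k * est["rttvar"]

class CocoaLikeRto:
    def __init__(self):
        self.strong = {"srtt": None, "rttvar": None}
        self.weak = {"srtt": None, "rttvar": None}
        self.rto = 2.0  # assumed initial RTO, seconds

    def on_rtt(self, rtt, retransmitted):
        if retransmitted:  # ambiguous sample: weak estimator, small weight
            e_rto = update_estimator(self.weak, rtt, k=1)
            self.rto = 0.25 * e_rto + 0.75 * self.rto
        else:              # unambiguous sample: strong estimator
            e_rto = update_estimator(self.strong, rtt, k=4)
            self.rto = 0.5 * e_rto + 0.5 * self.rto
        return self.rto
```

For NONs, as presented, the resulting `self.rto` would also drive the pacing of at most one message per RTO per destination endpoint.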
Then, in Section 3, we have added details on the scenarios where CoCoA has been found to perform well. We explain that these scenarios comprise latencies that range from milliseconds up to peaks of dozens of seconds. There is an additional comment by Jaime, which is that we might need to detail a bit more which reference contributes to what within this range. Also, the scenarios used in the evaluations comprise single-hop and multi-hop network topologies, and the link technologies that have been used in the evaluations comprise IEEE 802.15.4, GPRS, UMTS and Wi-Fi.
B
Also, we've added that CoCoA is expected to work suitably across the general Internet, so not only within the limits of a constrained-node network. Then, in Section 4.2, which is the one that defines the algorithm for the RTO, we have added an explanation for the default weight values used for the strong and weak RTO estimators.
B
Then, in Section 4.3, we explain that the state of the RTO estimators for an endpoint should be kept long enough, and now we provide the motivation for this. The idea is that we want to avoid frequent returns to inappropriate initial values of the algorithm, and we also write that, for the default parameters in CoAP, it is recommended to keep such state for at least 255 seconds.
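The 255-second retention recommendation can be pictured as a per-endpoint cache that only falls back to the initial RTO once the state has been idle that long. This is a sketch under assumptions: the class name is hypothetical and the initial RTO value is assumed.

```python
import time

RTO_STATE_LIFETIME = 255.0  # seconds, per the recommendation above
INITIAL_RTO = 2.0           # assumed initial RTO value

class RtoStateCache:
    """Keeps per-endpoint RTO state; expires it after a quiet period."""
    def __init__(self, now=time.monotonic):
        self.now = now           # injectable clock, eases testing
        self.state = {}          # endpoint -> (rto, last_used)

    def get(self, endpoint):
        entry = self.state.get(endpoint)
        if entry and self.now() - entry[1] < RTO_STATE_LIFETIME:
            return entry[0]
        return INITIAL_RTO       # expired or unknown: back to initial value

    def put(self, endpoint, rto):
        self.state[endpoint] = (rto, self.now())
```

Keeping the state avoids the frequent returns to inappropriate initial values mentioned above; expiring it bounds memory on constrained nodes.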
B
So, for the next revision we have the several comments I mentioned before. On Scott Bradner's comment: he has only one comment, which is that the draft makes no reference to RFC 5033, which is a document that provides guidelines for specifying new congestion control algorithms for the Internet. However, we can argue that we have actually taken such guidance into account when designing CoCoA, and this slide and the next one show the different guidelines in RFC 5033 and what our position is on each of them. Carlos?
A
C
Hi, I'm Gorry Fairhurst. (Aha, that's good, now amplified, okay.) So I'm coming at this just parachuting into the room, so my initial response was just to provide some comments as I read through it on the 5033 side. I think this guidance is mainly meant for transport protocols like the ones that are listed: TCP, SCTP, DCCP, QUIC and other transport protocols. The transport I see here looks a bit more like what's described in RFC 8085; in other words, it's a timer-based, lockstep retransmission method. So maybe some of these checklist items do not apply, and you could say that. But point number one does apply: you still shouldn't impact standard transports. So if there's anything that happens here that could appear on the Internet and could stop TCP behaving in the way it normally does...
C
D
B
Sure, maybe just quickly. Well, in RFC 5033 there are nine guidelines, and you can see that for each one of them we have a lower-level bullet which indicates our position on how CoCoA, or the design of CoCoA, has taken them into consideration. So the first is that deviations from the congestion control principles in RFC 2914 need to be motivated.
B
However, we believe the CoCoA design is aligned with those congestion control principles. Then, for guideline one, we believe CoCoA has no negative impact on existing protocols such as TCP, although we may want to discuss something later with the second presentation in this slot. Then, CoCoA has been designed for difficult environments in terms of what RFC 5033 defines for this term, and a range of environments has been evaluated, as has been shown before. Also, we believe CoCoA protects against congestion collapse because of the values used for the variable backoff factor when retries are needed.
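The variable backoff factor mentioned here depends on the current RTO: per my reading of the CoCoA draft, small RTOs back off aggressively, mid-range RTOs use the classic binary exponential backoff, and large RTOs back off mildly. Treat the exact thresholds and values below as assumptions taken from the draft, not from this meeting.

```python
def variable_backoff_factor(rto):
    """CoCoA-style VBF: the backoff factor is a function of the
    current RTO in seconds, as sketched from the draft."""
    if rto < 1.0:
        return 3.0   # small RTO: back off hard so retries spread out
    if rto <= 3.0:
        return 2.0   # mid range: classic binary exponential backoff
    return 1.5       # large RTO: milder backoff to avoid huge timeouts

def next_rto(rto):
    """RTO after one retransmission."""
    return rto * variable_backoff_factor(rto)
```

The mild 1.5 factor for large RTOs is exactly the point questioned later in the session, since it weakens the backoff precisely when the network is slowest.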
B
We have evaluated the fairness of the new congestion control mechanism and we have found it is high, especially thanks to how the variable backoff factor compensates for possible fairness issues in some topologies. Then, we have considered situations where there may be performance issues with misbehaving nodes. We have text on that in the Security Considerations section, and in that case, when nodes may want to drop packets to decrease performance, we explain that the weak estimator may help recovering from such a situation.
E
F
E
The uplink is 60 kilobits per second and the round-trip time a little bit over 600 milliseconds, so this kind of emulates an NB-IoT-like environment, although we don't claim that it is an NB-IoT environment; it is good enough to see the effects. From the router to the fixed host we have a random delay between 10 and 20 milliseconds on the fast links, so it is not really affecting the traffic in any significant way.
E
On the router we use varying buffer sizes: a small buffer of 2500 bytes, around the recommended BDP size, then increasing sizes, and then what we call an infinite buffer. It is of course not infinite, but it is large enough that there will be no packet losses with the load that we generate.
E
We implemented the client and the server using libcoap, so for the default CoAP we used the RFC 7252 implementation as it was implemented there; we added some bug fixes, but that's all. CoCoA was implemented per draft version -01 as well as -03, because we noticed that the variable backoff factor changed between those versions, so we wanted to experiment with that as well. For the default CoAP we modified MAX_RETRANSMIT to ten.
E
This is just to ensure that there are no aborts for any of the exchanges, which there would be with the MAX_RETRANSMIT of four as defined. This way we could make sure that our test runs go all the way to the end and we get results that are comparable. 32 seconds, as specified, was used as the max RTO for CoCoA; for the default CoAP we used 60 seconds, because otherwise, in some cases, the RTO rises very high.
E
Those tests would take a very long time. We implemented CoAP over TCP per draft -09, and it implements only the necessary features needed to run the tests. We used the Linux TCP stack, but we modified it so that we don't use any of the fancy features: it uses NewReno, so we disabled SACK, CUBIC, timestamps, and the other experimental features such as TCP RACK and so on. Moreover, we set the delayed-ACK timer to a constant 200 milliseconds, and we also adjusted the retry counts for TCP.
E
Okay, for the workload we use small request/response exchanges, so that they fit into a single CoAP message, and then we increase the load by going from one client all the way up to 400 clients. This gives an increasing trend in the offered workload and, at the same time, of course, in the congestion, so it puts the system under realistic stress with the higher numbers of clients, or client/server pairs. We used two types of workloads: continuous clients, where the clients exchange
E
fifty request/response exchanges; for TCP the connection is pre-established, so the three-way handshake doesn't affect the measurement and we can compare TCP with the other two, default CoAP and CoCoA. Then we have the random clients, which emulate short-lived clients: each short-lived random client exchanges a random number, from 1 to 10, of request/response exchanges, then the next client starts immediately and exchanges yet another 1 to 10, until the 50 exchanges are completed. So this is kind of a...
E
We can compare the results of the random and the continuous clients in this sense. The difference, of course, being that now the system is put under more stress, because the retransmission timer state is reset for each random client, and the same for TCP: a TCP connection is opened for each new random client. Okay, on to the test results. Here we have the results for one and ten clients, which just provide a baseline for the flow completion time.
E
So how long does it take to complete these 50 exchanges? It takes roughly 33 seconds in all cases, except TCP with the random clients, where, of course, because TCP needs to establish a new connection every now and then, that gives the overhead, so those TCP completion times are over 40 seconds.
E
F
E
With more load, but still not quite congesting the link, the exception is TCP, which has a larger header. That means fewer TCP packets fit in the router buffer, so TCP has a little bit more losses compared to default CoAP and CoCoA, which have just a few losses there. That shows on the left-hand side with the continuous clients, where TCP has a little bit higher flow completion time, and then over on the right-hand side, because the TCP connections are created per client.
E
Okay, then 100 clients, and now something starts to happen. What now happens is that when we increase the load, the system becomes congested, and with the infinite buffer, or when we increase the size of the buffer, of course more queuing delay occurs, and because of the queuing delay the RTT increases to over 2.5 seconds.
E
Default CoAP then basically needs to retransmit every request once, so it does double the work, and this is the first sign of congestion collapse. There is no sharp line beyond which we have a congestion collapse; congestion collapse is basically defined such that if we increase the load and the useful work done by the system then decreases, we have signs of congestion collapse, and that is what we will see as we further increase the load, with the 200 clients.
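The working definition of collapse used here, offered load going up while useful work goes down, can be checked mechanically over a series of load/goodput measurements. A sketch; the function name and the data in the test are made up for illustration.

```python
def shows_collapse(samples):
    """samples: list of (offered_load, goodput) pairs taken at
    increasing load. Returns True if goodput ever falls while the
    load keeps rising, the sign of congestion collapse described
    in the talk."""
    for (load_a, good_a), (load_b, good_b) in zip(samples, samples[1:]):
        if load_b > load_a and good_b < good_a:
            return True
    return False
```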
E
If you look at the infinite buffers, what happens there with the default CoAP is that it gets slower and slower. But if you look at the small buffer on the left, with the continuous clients, you can see in the flow completion times that there is some difference between TCP and default CoAP and CoCoA.
E
So basically they are pretty much the same, but there is much more variation with TCP, and this is because TCP does the full TCP-compatible backoff. It means that some of the clients back off for a longer time, while the others can proceed. This is different from what happens with the defaults, because both CoCoA and default CoAP have the same problem: for the next exchange they restore the retransmission timer.
E
If you look at the number of retransmissions, there is a clear difference: with the small buffer of 2500 bytes, TCP has clearly fewer retransmissions than default CoAP and CoCoA. This is not visible in the flow completion time, because all these unnecessary, or rather too aggressive, retransmissions that default CoAP and CoCoA do are dropped in the router.
E
E
What we see is that CoCoA also starts to collapse with the continuous clients, not as badly as default CoAP, and with the random clients on the right-hand side CoCoA collapses as well. What is more interesting is that version -03 is worse than -01, and why this happens shows up if you look at the number of retransmissions. If you look at the left-hand side first, the continuous clients with the infinite buffer, we can see that
E
CoCoA version -03 has much more retransmissions than version -01, though not as many as default CoAP has. What happens there is that a bit more than half of the clients are not able to adjust their timer, because the RTT is now so high that in the first round they are not even able to get the weak sample; and the reason they don't get the weak sample applies to -03, but they do get it in -01.
E
E
In -03, above 3 seconds we use the backoff factor 1.5, and if the RTO is below 3 seconds we use the binary exponential backoff. Now, what happens for those clients that do get the sample and are able to adjust their timers is that for the first retransmission they practically double the timer, but for the next one they don't anymore, while in -01 they continue doubling the initial timer, which is more conservative. This is one difference, and then there is another difference, and this relates to the aging.
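The aging rules under discussion pull long-idle RTO estimates back toward a middle range. The constants below are my reading of the CoCoA draft (and they match the "age it in 12 seconds" figure mentioned later, 4 times a 3-second RTO); treat them as assumptions.

```python
def age_rto(rto, idle_time):
    """CoCoA-style aging of an idle RTO estimate (seconds).
    Very small estimates are doubled after a long quiet period;
    very large ones are decayed toward a moderate value."""
    if rto < 1.0 and idle_time > 16 * rto:
        return 2 * rto           # too small to trust after idling: grow it
    if rto > 3.0 and idle_time > 4 * rto:
        return 1.0 + rto / 2     # too large: decay toward ~2 s
    return rto
```

The criticism raised in this session is the second branch: under persistent congestion, decaying a large, backed-off RTO makes senders more aggressive exactly when they should not be.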
E
We should rather be increasing the value, not decreasing the value. Then the third thing that has an effect is that CoCoA has the upper bound of 32 seconds for the timer, and in this case, especially when we are backing off, the timer should be allowed to go to, or even beyond, 32 seconds. All right, so these are our results, and it seems to me that protocol actions are needed. With RFC 7252 CoAP, because it doesn't employ a full backoff that is TCP-compatible,
E
F
E
the RTT estimate and the current RTO estimate are in some cases not enough, as we saw. And with an RTO estimate larger than three seconds, we are applying this aging blindly, decreasing the RTO, which, as we saw, is in some cases not a good thing. It is justified in the CoCoA draft that aging after an idle period is a good thing.
E
There is the TCPM RTO considerations draft, which gives general considerations for the timer mechanisms that we should use in the Internet. So for the proposed actions: I think what we need is to update the CoCoA draft; that should be easy, because we can just address it there. But then for RFC 7252 it seems that we are needing an update, so we could write a short I-D that updates it. That should be an easy thing to do; it's not a big change, maybe. And for CoCoA,
E
we should reconsider this aging at RTO values larger than three seconds, as well as reconsider this upper bound of 32 seconds, which actually, after the change in version -03, doesn't apply in any case anymore except when the RTO value is roughly above seven seconds, and it never reaches that, or there would be too many retransmissions. Okay.
A
E
TCP backs off, and then it doesn't restore the RTO value; it keeps the exponentially backed-off value until you get an acknowledgement without retransmissions, that is, when you send a new segment and get an acknowledgement for it without a retransmit. Only after that is the RTO restored; this is the RFC 6298 behaviour.
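A sketch of the TCP behaviour being described, simplified from RFC 6298: the backed-off RTO is retained until a segment is acknowledged without having been retransmitted. Class and method names are invented for illustration.

```python
class TcpLikeRtoState:
    """Keeps the backed-off RTO until a send is ACKed cleanly,
    i.e. without any retransmission: the RFC 6298 behaviour
    described above, heavily simplified."""
    def __init__(self, base_rto=1.0):
        self.base_rto = base_rto
        self.rto = base_rto

    def on_timeout(self):
        self.rto *= 2                 # exponential backoff on each timeout
        return self.rto

    def on_ack(self, was_retransmitted):
        if not was_retransmitted:
            self.rto = self.base_rto  # only a clean ACK resets the backoff
        return self.rto
```

The contrast drawn in the session is that default CoAP and CoCoA, as evaluated, instead restore the timer for the next exchange, which is what feeds the collapse.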
A
If I open a new TCP connection, I get new state, right? And that's exactly what happens with CoAP here. We have all experienced the congestion collapse that you get when the network is not even fast enough to open the TCP connection; CoAP has exactly the same problem here now.
E
You don't, because, as we see here with the random clients, a TCP connection is opened and a new TCP connection comes every now and then, and what happens? When you open it and send a SYN, you have a timer, right? And then, if the delay is too long, you retransmit with the backoff factor, and after a while you get the SYN-ACK, and then for the next segment your timer is higher, and you...
A
A
E
E
E
A
E
There is a two-and-a-half to three seconds difference. I understand that with some of the constrained devices you are not necessarily able to keep the state, but that's not the whole story. What we should say in the spec is that, if it is possible, you should keep the state, and for most devices, or at least a large number of devices, this should be okay today.
A
D
A
E
A
I'm not sure that it's going to break the Internet. With respect to CoCoA, I think there are a number of really useful observations here. One thing is the 32 seconds that is the upper bound for the RTO in CoCoA: it really is based on the default parameters. So if you have a MAX_RETRANSMIT of five, this is exactly what you should use as your upper bound, while if you have a much larger MAX_RETRANSMIT, then you also should change your upper bound, and that's something that we definitely missed when defining CoCoA.
A
E
A
Yes, yeah, okay. So maybe the function that we need to define for that is based on MAX_RETRANSMIT, but won't yield exactly 32 even for the current default value of MAX_RETRANSMIT. So let's discuss that; I think that is one point we can take home. The other one is the aging.
A
A
Yes, so the observation is that the number 32, the upper bound for the RTO, should depend on MAX_RETRANSMIT, and also that maybe the value 32 for the default value of MAX_RETRANSMIT is not sufficient. Okay, so the other thing is the aging issue, and I think that's really interesting,
A
because your simulation is based on a situation where you essentially have continuous conditions for the whole time of the simulation. Now, aging was not designed to handle continuous conditions; it was designed to handle bursts. If we have a burst in the network, we want to go back to a relatively normal situation quickly, and the problem with not having aging is that random losses make your reaction...
A
It can make your reaction time very high when you don't have some form of aging. So fundamentally the idea of aging is right, and from a deployment perspective nobody in their right mind would deploy CoCoA if we didn't have aging, because some random losses from a quick burst would turn your implementation into something very sluggish. So we have to have some form of aging.
E
A
E
That was our understanding with the -01 version, and that's why we first didn't even implement aging; then we implemented it, but per -03, which has two rules for the aging. The rule especially for the below-one-second case: we saw exactly that while you are retransmitting, it kind of takes...
A
E
It means that you will age it in 12 seconds, or a bit more than 12 seconds, right? And now it might be that your exchanges are spaced so that they happen every 15 seconds, and if you get congested, you have persistent congestion; just because of these features of the default CoAP it does not necessarily go away within 15 seconds. So it is not necessarily a good idea to age the value in such a case, even though I admit that in a wireless case, where you have wireless losses, this is the hard problem.
E
We have had this problem with TCP and there is no solution; the best solution there is, is to try to get the samples as often as you can, and in that sense the weak sample may be a good idea. But the problem there is that you have to use these ambiguous samples from retransmissions, which actually give you a very high value; it's better than nothing, but it's still problematic. The problem is hard, yeah.
A
E
If I may say one thing that is important: we have this performance problem with the wireless, for sure, yes, but congestion is the thing that we need to deal with first. We need to ensure that we are safe on that side, and then we can do whatever modifications to handle the wireless losses better. That is kind of a basic guideline that we should have there.
A
Right, but we also have to have a protocol that's actually deployable. We have too little information, that's the whole problem, and we are trying to run an estimator that does something reasonably useful, and we know that in a network that just doesn't give you replies, you don't get enough information to actually always react in a sensible way. So again, I think what we need to take home here is that, first of all, it needs to be made very clear...
A
E
Those are pretty much the results where we implemented -01 without aging, okay? It still has this problem with the random clients with a high enough load. So that is there, but it is not the major cause; the major cause for this is not retaining the backed-off RTO value. That's the main cause: if we fix that, then I think this aging and this upper bound of 32 seconds stay there, they have a role there, but we would mostly fix everything.
E
E
A
And I think we have to look at exactly why that is not happening in your situation, because the current RTO estimate contains backoff information, and maybe we have to look into this a little bit more in detail; but we probably can't do this at the microphone right now, yeah. But thank you, this was really useful, and I think it really shows that there are limits of the default CoAP congestion control, and there are also limits
G
in one of the others. So, one thing that wasn't on the slides, but is interesting, related to this congestion collapse: actually, even for the 400 clients, the actual RTT is something like 10 to 15 seconds, depending on whether you have continuous or random clients. So it's still clearly below
G
even this 32 seconds, if the congestion control would work well enough to prevent the collapse. But now, as the congestion control causes the collapse to happen, the actual RTT rises much higher. So you have these unnecessary retransmissions, which consume some of the capacity, but not only that: they also increase the RTT much beyond what it could be.
G
So if you have just one request/reply per client outstanding, which is not always possible with the random clients because of the state that is lost, but for continuous clients you can always have just one request/reply outstanding, then even with 400 clients the actual RTT is slightly more than 10 seconds. So if we can prevent the collapse, the RTT also will be much lower.
E
A
E
Aging actually is implemented the same in both cases, so this CoCoA without aging simply doesn't do it; in that sense, in these experiments it is equal to the case where you apply aging only during idle periods, okay, so it never happens. But as we can see with the random clients with the infinite buffer, the results are also the same: we still have this situation where every request is retransmitted at least four times, more or less four times.
E
That is only 20% useful work that the system does there, and if you increase the load further, to 800 clients, then this more or less doubles, so then only 10 percent, and finally there will be very little forward progress, or useful work, in the system. Okay.
A
E
A
E
With the default CoAP we are not able to do that, if I understood. Yeah, but after all it is only one variable, so I would say that many systems can afford it. So I think we should recommend a little bit more than what you said: you don't necessarily need to implement CoCoA, but you can add one variable to default CoAP, maybe.
E
If you want to stay on the safe side, that's all you really need. So let's not go into this custom idea, I know, but we need to understand safety first, and then think about the wireless losses after we have that. Okay. Unfortunately, time doesn't allow it: we have results for the wireless case as well, and for comparing these, but there is no more time.
B
Carlos Gomez. So I was wondering about just a couple of minor details, and by the way, thanks for all this work, it's really helpful. Is it possible that when you mention zero-one, it is not draft-ietf-core-cocoa-01, but instead it's draft-bormann...? No?
B
E
E
E
E
On the right-hand side, in those cases the client could fail, because this exchange would be aborted and it won't get the result for that, and in that sense the results for that client would not be comparable with the others. So just to make them comparable we had this, and as we can see, there are some retransmits beyond four.
E
G
And so, if we had not changed MAX_RETRANSMIT, the offered load would have varied depending on how many of those clients fail. Because of that we couldn't have compared those cases so well, because these are sort of random effects: one of the congestion controls might have hit more of the aborts than some other. So the number of clients which are able to complete successfully would have varied if we had not increased this MAX_RETRANSMIT.
G
So this is just to make the tests useful. Of course, it would be possible to run with the MAX_RETRANSMIT of four, and then some of them fail, but then you have this issue that the offered load would not have been the same between the different congestion controls, so it's harder then to compare the flow completion times and whatever.
E
It wouldn't change the final outcome, because it's just a setup that happens to be running a certain number of clients, each with these fifty exchanges; you could have exactly the same load with a larger number of clients that are maybe exchanging every ten seconds, or whatever. It's just based on what the amount of offered load is: however many clients you have, you will have exactly the same result. It doesn't matter, in that sense. Yes.
A
C
First, I'd just like to come back at the end: first of all, I found this discussion really helpful. When I point to Scott Bradner's document and I talk about TCP, I'm talking about TCP flows; you shouldn't directly compare one TCP SYN with one other packet, so you need to talk about it in this way, talk about the effect of congestion collapse in the network. Congestion collapse is more important, I think, than performance for the network.
C
In doing that, I have some concerns which I think you should look at, which is this idle time. I don't think you can reset the RTT without really seriously considering that in the presence of failures. We talked about that, but I think that has to be talked about more: either it has to be really discussed in the draft, or you have to address the issue. And if you then expand the backoff appropriately, I think you can have a document that actually satisfies the congestion collapse conditions.
C
H
Since you have a bit of time (it's not quite three minutes? one minute? yes, good): Zach Shelby from Arm. Matthias and I did some work years ago on CoAP at scale, and one thing that concerns me is: do we have the right use case in mind here, when we're doing measurements and research, around the scenario we're worried about? What I see happening in the industry right now is CoAP being used in quite a centralized way.
H
Yes, we have some kind of local communication and gateway things over wireless happening, but actually we have very large cloud providers and operators collecting data from hundreds of thousands, and now scaling into millions, of devices into centralized cloud platforms. So this is big data collection in practice. Have you guys, in your simulation work looking at congestion control, looked at that kind of scenario: one server communicating with very large numbers of low-performance, low-bandwidth devices, where there are just lots of them, right? So...
E
Basically, as of now, it depends on where the bottleneck is. How your server, or your back end, communicates with all these devices, how many paths there are, how many routers there are, doesn't matter as such; what matters is where the bottleneck is and what the offered load over that particular link is, and that's what we are emulating here, from the congestion control point of view.
E
H
Okay, I think for that we probably need to do a little brainstorming with some of the operators in the room, some of the LPWAN providers, about where these bottlenecks might be showing up, where we might have fairness problems with TCP traffic. I can't answer off the top of my head where those might be, but just to make sure that what we're doing is real from an industry perspective.
A
I
Dev URNs are a namespace for hardware device identifiers. We support MAC addresses, EUI-64 addresses, 1-Wire addresses, and also a sort of free-form organizational device identifier, if anybody has those. At the bottom there's one example, urn:dev:mac:something, something pretty simple. Just before the deadline I posted a -00 for the working group version of this draft, and then earlier this week published the -01 version.
I
The -01 updates the URN registration template, because that has been updated: there's a new RFC, RFC 8141, which says how to register URNs from that point onwards, and we've updated to that. That's a sort of textual affair, lots changed, and it also had me answer more questions than I had answered before, which is the usual good thing: the new template actually forces you to think through more cases than the old one did. So you'll see some of the findings
I
actually there. So I have a couple of requests and questions. One is: can people read the new template? It appeared on Monday, so I'd appreciate feedback; it's new text, so take a look at that. Also, given that there were a few more questions to answer, there were some that I didn't actually know how to answer, and one of them is that the new template asks us to specify how the particular URN type deals with q-, r- or f-components in URNs, and I wasn't really sure about this.
I
So for the moment the draft says that they're not used. I'm not quite sure if that's the right answer; I would appreciate feedback on that. So that's sort of the basics of this URN type, and then there are two classes, or two items of another type, that I'm, or we're, wondering about, and those relate to possibly adding new branches under the DEV URN. The first one is something that we had discussed briefly in previous slides.
I
That is, adding the device IDs specified in the oneM2M and Lightweight M2M groups, and I think we basically agreed that would be a sensible thing to do. If that's what you still think, that's great; but since I'm not personally working in those groups, maybe there would be somebody who'd be kind enough to send me some text that we should actually add, because we actually have to specify the syntax in detail. And then there was a discussion.
I
Matthias Kovatsch and I discussed a little bit during the hackathon about possibly adding Web of Things identifiers, the schemes that he's developing, and this would seem to possibly fall under the DEV URNs, but it could also be a separate thing. Remember, the DEV URNs are not the only way of identifying devices; you can still use UUIDs, you can still use, you know, regular URLs, and so on and so forth.
I
So the 'org' ones, the free-form ones, are sort of the catch-all for the things that we missed or have not defined before. We can add more, but we can also do separate ones. And for this Web of Things scheme, I think it would seem, at least, that that's the thing where you have to define the syntax; the next step is to define the semantics of that, how they get allocated underneath this high-level branch of the tree, and then go from there.
I
So that's it, really. So, you know: feedback on the template, an answer to my question on the different components that could or could not be used in the URN, and then what to do about these additional things. I think my general approach is that if I don't get text, or, you know, clarity, on some additional thing, then my inclination would be to try and publish the RFC, because it is always possible to add more branches under the DEV URN later also. So that's it; looking for feedback.
J
I
J
K
Hi, I'm Badou from Nokia. So this third bullet, about the decision, the text is unclear, like what is expected from Lightweight M2M. But what we have done, I believe, with the 1.1 release that we are working on right now in OMA, is we assume the URNs and URLs are basically, as I understand it, all under URI, so Lightweight M2M could do a URI, meaning anything under that subset can be a device ID. That's what we have updated. It's still a draft, so it's unclear how to handle it.
K
I
So I'm not very well aware of the details of what you guys are doing, so that might be, you know, maybe an offline thing for us to look into so I would fully understand what's going on there, and maybe it's the case that we don't need to do anything. I'm just basically standing here saying that if anybody has a burning need to add things to these URN types, now would be a good time to say 'yes, please add' and then give me text; otherwise we'll move ahead.
L
M
M
What happens if packets get delayed for a long time by an adversary? This document provides some solutions to that. There is, in parallel to it, a document that outlines all the attacks, which is being updated constantly, but this one should be concise and provide everything that's needed, for example for OSCORE, to solve those issues, and to also solve those issues with other security bindings. So what happened since the last meeting is that token processing was added.
M
This would now update RFC 7252, because the way tokens were described to work over DTLS could result in responses being matched to requests that they don't belong to. So that is one of the updates. The second update is that the whole Echo section was shuffled around a bit to be a little more easily readable.
M
How do you apply this if you're a client and it matters to you that your blocks aren't shuffled around? In the current draft version this is only being hinted at because, as I said, it is under active development, and there will be an updated version roughly at the end of the week, or shortly after that, which will be a bit shorter.
M
And if you have any opinion on whether this can work that way, by saying that block-wise operates on cache-key data, and if it's part of the cache key the blocks are safe: if you have any opinion on that, please voice it now, because the results of this will probably also go into, likely, the implementation guidance, and basically clarify what was intended with RFC 7959.
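The cache-key argument above can be sketched as follows. The dict-based request model and the option names are illustrative assumptions; the real cache key is defined by RFC 7252, and the Block1/Block2 options by RFC 7959.

```python
# Sketch of "blocks are safe if matched on cache-key data": two block
# requests belong to the same transfer exactly when everything except
# the Block options is identical.
def cache_key(options: dict) -> tuple:
    # Exclude Block1/Block2: they differ per block, but all blocks
    # belong to the same logical request.
    return tuple(sorted((k, v) for k, v in options.items()
                        if k not in ("Block1", "Block2")))

req_block0 = {"Uri-Path": "/fw", "Block2": (0, 64)}  # block number 0
req_block1 = {"Uri-Path": "/fw", "Block2": (1, 64)}  # block number 1
other_req  = {"Uri-Path": "/cfg", "Block2": (1, 64)}

assert cache_key(req_block0) == cache_key(req_block1)  # same transfer
assert cache_key(req_block0) != cache_key(other_req)   # different one
```

Under this rule a proxy or client that reorders responses can still re-associate blocks safely, which is the intuition voiced in the discussion.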
A
And, of course, we would still want to keep the informative part, why we're defining this thing. Also, it's probably a good idea to have a common understanding in the community of which option you use for sending something that doesn't have any semantics, because you want to keep the block requests together. So I think it's just moving over some material; not really a big change, but.
M
A
We see a few hands of people who would like to review this: Jim, Klassen, I can't see you, Julian, Francesca, Michael. Okay, that should be enough, I think. So I think the timeline of this should be: we shouldn't be sitting for too long on it, because it solves some real problems. So, yeah: next version, reviews, working group last call, ship it. Thank you.
A
A
There has been a contribution here on this slide, and everybody has had a chance to think about the pending issue during the last 20 hours or so. So I would love to hear other views on this subject: how should we handle this in general, and how do we handle, specifically, the pending requirement that comes from the EST document? So, does anybody have an opinion on this one?
N
N
So the proposal of having new response codes and so on, this is more on a meta level: there it's basically the base implementation that has to deal with that, and you do not have to think about it in your application. The media-type proposal is more: okay, you have to design your applications following this state-machine thought, kind of the hypermedia way of doing this, and have it explicit in your application. So a question is: are all applications, yeah, recommended to follow this approach?
N
Of course, it's something you would like to see, but maybe it goes a bit too far if we kind of make them all do this and use the media types as they should be used. We have a lot of proposals already that use CoAP in a more simple way, that use what is defined there, and they don't put so much thought into how to design a hypermedia-driven application, meaning thinking about what the states of the state machine are and so on.
N
What is the right media type to use, and so on. I think, in particular for this pub/sub draft, a nicer solution would be to have it on the meta level, meaning the new response code, because the people who want to adopt pub/sub won't think in these hypermedia terms. So these are my thoughts on this. As I see it, it's two different strategies for how to solve this.
N
A
N
I don't have a strong one, as I said; it depends on the applications. On the one hand, it would be nice if more applications would follow the hypermedia-driven approach, so it would be good to have something there. But then, if I look at the case of pub/sub, where we want to have people at least move, let's say, from MQTT to CoAP pub/sub, where we have some more metadata on what content is sent around.
N
We have more features for interoperability, and then, let's make it easy for them; and then I think this response-code solution is the nicer one. There are cases for both; that's kind of the main message. So I see both solutions, they are valid, but it's two different domains where they are valid. And I haven't thought a lot about the EST use case, but I think they had a similar case: it's not people designing hypermedia-driven applications, but they're using the CoAP protocol and they look, okay.
N
O
Peter van der Stok, because it's my turn, I think, to say: actually, I agree completely with Matthias. If you have a response code, it is of a more general nature than when you use the media format. The media format is actually for a client which talks to a server, and they are part of one application, so they know about what is going on, and there's no need to export all this knowledge about what the media format means. In the other case, when you have a more general service, well, you have the response code.
O
I think it can also be used by other applications that would like to try a response code. I understand that there are problems, because there may be proxies which do not understand it and do not know how to behave, and, on the other side, it might be that you have a client which gets a response back that it doesn't understand. I think that it depends very much on the type of response code that you do, and the kind of consequences which are attached to this return.
O
P
You can transfer to this, or can I? Okay, Alexander Pelov. So I'd like to see, like, one use case or an application that will say: now, how does this map, and why do we do this? It seems to me like it's a pretty meta approach, and like, yes, we could do it, and yes, but what will it serve? Like one specific use case where you want to see: okay, well, we solve this problem, and then we can say, yes, it's interesting, or maybe not.
F
Michael Koster, SmartThings; excuse my voice. Yeah, I agree with the idea that this is more application-oriented, and the idea of a response code, a status code, is more transfer-layer. So, you know, pub/sub: I agree that pub/sub just transfers representations, so we really probably can't try to synthesize media types in pub/sub. In terms of the other use case, I don't have much to say, but I think I'd like to see a little more discussion on what's wrong with response codes.
A
Those are just specific things, and we do believe we shall guide application developers to defining media types for their application states. So, yeah, on one hand I agree with my peers, maybe to the level of saying, yes, there is a decision to be made, but I would prefer to only have response codes for things that actually are somewhat universal. Now, to the question of what's bad about a response code: in general, nothing is bad about a response code.
A
So things like proxies, but also the CoAP layers, the caching layer in a client implementation, have to know about the response code, and what I really don't want to get is a situation in which somebody cannot get their application going because their CoAP layer, the CoAP library, or the proxy that they are using hasn't defined that response code yet. That's really a bad situation, where to make a deployment you actually have to get multiple entities to agree that it's a good idea to do that deployment. We generally try to avoid that.
A
A
Nobody has submitted anything on this topic yet, but here's a default value and so on. So this is fundamentally application-specific and calls for an application-specific media type already, so I don't think it's a lot of onus on an application developer to develop that media type. And the final observation: one problem we ran into when looking at the pub/sub case is that observe currently requires all the notifications in a stream of responses to have the same content format.
N
As a question you didn't have, or that was already, okay, yes. So I came a bit late to this discussion, so I discussed it a bit during the hackathon, what the issue is there. For the pub/sub use case, it is exactly what Carsten just mentioned to me: yeah, at first glance it feels like, okay, this is kind of a strong restriction that we put in the observe RFC.
N
Maybe then having something more drastic, let's say like a response code that can fix that, might be the right direction, because there were some thoughts on why it should be the same content format during an observation. I think that's something you should think about. So the main point is: it's connected to a lot of other decisions that have been made, and now that more and more applications pop up, we actually get more evidence of what would have been.
N
Maybe that's the right decision. With response codes, it's a bit similar to the methods: we originally stuck to a minimal set, then it turned out, yeah, actually these additional methods are a good idea; they optimize things for particular use cases, especially if you look at FETCH. And for the response codes, for instance, there's also still this gap: what if you just want to say, yes, this was processed correctly, the resource state didn't change, so it's not a Changed?
N
There's no content to return; there's also still a gap there. And it's like the workarounds that we had, let's say, in HTTP when there was no FETCH method and you had to send a POST, and everything was a bit, yeah, sloppy, let's say, because there was a gap in the specification. So I think, with the evidence that we have, maybe we can collect more of these use cases where everyone has a problem picking the right solution from the RFCs, and rethink what the response codes are that we need.
N
A
N
So this isn't fully figured out; I started thinking about this during the hackathon. The one thing about this response code in question here: it tells you, okay, there's this resource, everything is fine, but there is no content. So it's kind of this HTTP 204 No Content. And something similar was missing in those cases where you sent a POST to process something and you don't change resource state; there's also kind of no content to deliver, but that is more like a 200.
N
A
204? For the HTTP 204, which actually, even though it's described as No Content, has slightly different semantics, which is: the previous content you already got for this is still valid, and you don't have to update it. It's really weird that this thing is called No Content, so I'm not sure we can draw a parallel to HTTP here. But maybe that's one question that we should try to decide for this new response code: what does it mean about the actual content behind that resource?
N
Just an observation here: the 204, I think, was also returned if you changed something, because HTTP doesn't have an explicit Changed response code and so on. So this is originally already this problem: okay, I don't really have the right thing to pick from; there's the general confusion, from the name of the response code to the actual semantics. And I think we are stuck in the same problem here, but with the expectation that CoAP is for machine-to-machine, so we have to be way more explicit about all this.
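To make the 204 confusion concrete, here is a rough side-by-side of the existing CoAP success codes from RFC 7252 and their nearest HTTP analogues. The HTTP mappings are approximate, and the "TBD" entry stands in for the code proposed in the discussion; it has no allocated number.

```python
# CoAP success codes (RFC 7252) next to the HTTP behaviour each one
# disambiguates. Note how HTTP 204 appears twice: that overload is
# exactly the ambiguity being discussed.
coap_success = {
    "2.01 Created": "HTTP 201 Created",
    "2.02 Deleted": "HTTP 204 (after DELETE)",
    "2.03 Valid":   "HTTP 304 (cached representation still valid)",
    "2.04 Changed": "HTTP 204 (after a state-changing request)",
    "2.05 Content": "HTTP 200 OK",
    "TBD  (processed, no content to return)": "no clean HTTP analogue",
}

for coap, http in coap_success.items():
    print(f"{coap:<42} ~ {http}")
```

The point of splitting 2.02/2.04 out of HTTP 204 was explicitness for machine-to-machine use; the proposed new code would continue that pattern.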
A
Yeah, so I would propose that we define a new response code, 'not what you wanted', and that is used for representations that come back that are not what was originally requested, but that somehow are useful in the application to make progress. Is it successful? Yes: successful, not what you wanted.
F
Michael Koster, SmartThings. Yeah, on further reading, the 204 in HTTP does instruct the client to use the previous value, so that would not be appropriate for the pub/sub case with the same semantics; we do need something a little different. I believe the other one was more analogous to 202 Accepted, which says something like 'I might process this later, or I might not', and 202 Accepted, as used in some IoT APIs, says go.
F
Go deal with this some other way. Like, for example, one API uses that to indicate that you're supposed to go get your asynchronous notifications somewhere else. So you do an HTTP thing that says, hey, I want to observe, but it gives you back a 202 and it says, here's where you go observe this thing. It's not a redirect, but it's an 'I am processing this, but'.
F
You have to get your answer somewhere else. I'm not sure if that's exactly what our use cases are either, but I think what you said, I kind of agree with: that's really the semantics of what we want to say for pub/sub. It's 'not what you wanted' in a general case, so doing that would cover pub/sub as well as maybe some other general cases.
Q
Ari Keränen, Ericsson. I think we need a response code, but maybe the one to suggest is something generic: okay, yeah, here's more information on how to go forward. Maybe that is actually the right solution here. So instead of having three new success codes, have one that is relatively future-proof. I think that would solve the pub/sub case and most likely the EST case too, so I would think it makes sense to explore that.
A
A
N
F
You can look at this, screen and folder... oh, thanks. So what we want to do is split out these response codes into separate drafts, so that there's no dependency or impact. We're going to need to reference both, so we are creating a dependency, but we don't want them internal in the draft. We want them to be, as Carsten said, general purpose, for everyone to use.
F
So we'd like to just refer to those, whatever the 'no content' one ends up being; that's the TBD. 'Too many requests' seems to be less controversial, so that shouldn't be a problem. Now that we have a really clear idea of how observe and groups and multicast and the different sorts of security considerations work, or at least a better idea than we had a year ago, we're ready to put some specific stuff into the security considerations for OSCORE, and there are some more issues and comments that we need to address.
F
There are really good issues, really some sort of unspecified cases. Like, we have a pub/sub resource, that's a resource type that works sort of like the RD resource type does, where that's where you sort of access the functions and queries and stuff. And the question is: do all the topics sort of show up under that in the tree, or are they able to be created just anywhere?
F
Conditional notification: it seems like conditional notification and pub/sub are two patterns that really need to be used together. Even though we don't really have the numeric stuff, we have pmin and pmax; that would be really good for controlling, you know, the flow of data in a proactive way, instead of depending on 4.29 when things go wrong. So we'd like to look at that, and also dynlink, just to be able to use dynlink with pub/sub: how do you use that with a broker?
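The pmin/pmax behaviour referred to above can be sketched like this. The function shape and parameter handling are an assumption for illustration, not the dynlink draft's normative algorithm: pmin suppresses notifications that come too fast, pmax forces one out when the resource has been quiet too long.

```python
# Sketch of conditional-notification gating with pmin/pmax (seconds).
# Attribute names follow the dynlink discussion above.
def should_notify(now: float, last_sent: float,
                  value_changed: bool, pmin: float, pmax: float) -> bool:
    elapsed = now - last_sent
    if elapsed < pmin:      # too soon: suppress even if the value changed
        return False
    if elapsed >= pmax:     # heartbeat: notify regardless of change
        return True
    return value_changed    # in between: notify only on change

assert should_notify(now=5,  last_sent=0, value_changed=True,  pmin=10, pmax=60) is False
assert should_notify(now=70, last_sent=0, value_changed=False, pmin=10, pmax=60) is True
assert should_notify(now=20, last_sent=0, value_changed=True,  pmin=10, pmax=60) is True
```

This is the "proactive" flow control mentioned above: the sender shapes its own rate instead of waiting for a 4.29 Too Many Requests from the receiver.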
F
Do we create a binding table on the broker, or do we have a way of putting link bindings in, associating them with topics in the tree? So that needs to be worked out. And also there are some questions around how topic discovery works with topic trees: when you create things with a number of levels all at once, are there intermediate nodes created? We need to be clear about that, and there may be a couple of other small issues.
F
But I think this is the flavor of what's left to nail down before we're done; so there's a bit of work left to be done. We'd like to schedule an interim meeting so that we can be ready for last call by the next IETF. That's basically how we'd like to proceed: sort of do one big final push, sort of the way we did with RD, to just get everything in and get it done before then, with IETF 102 as the deadline.
A
F
A
A
F
A
F
Q
Okay, great. Michael, before we go forward, if we have time, it would be good to discuss the point, by the way... this, sorry, Carolyn... discuss the point: where do we want our topic trees to land? Do we always want to have them under the API resource, or would we like them to be able to be anywhere? So that's that first bullet over here, yeah.
F
Q
M
F
Q
F
Q
F
F
So I'm going to talk about interfaces and dynlink now, which are also pretty close, in our opinion. So the interfaces draft is really just informational. We define some link attributes, 'if', in the interfaces draft, basically as some high-level guidance about what this 'if' attribute is about, and you can use it to say: this thing is a sensor, this thing's an actuator, this thing is a collection that has stuff in it.
F
It's basically an application-layer tag that tells you how to process the resource. Originally, other SDOs, notably OCF, were using interface, and their examples are a lot different from ours. There was an idea that we would try to show what OCF is doing in our draft, but I think we've decided not to do that and to keep our original examples.
F
F
Maybe, if people feel strongly about it, we could bring in a couple of examples, but we should show different ways of doing it, if anything, and not try to imply that there's only one way to use the interfaces target attribute. Look what's happening, someone coming to the mic... immediately? Oh, no.
F
All right, so that might be a little controversial, but that's done for a reason, to simplify things. Also, we're going to use SenML, so the examples are going to be more SenML examples that show content according to another IETF draft, which seems to be a little more consistent than bringing in stuff from an external SDO. And we have some remaining issues to close, but not too many.
F
F
We won't have the last call at IETF 102? Yes, but we want to be prepared to do that, I guess. Okay, yes, I guess it's just a little less certain that we'll have everything done, versus pub/sub, where we really sort of feel like we need to tie it up. But that's because, you know, I guess we were just not quite sure about the state of that draft relative to the other ones. Right? Am I representing this correctly?
F
Bill Silverajan has been doing a lot of the work on this draft lately, but we're all pulling together here. Okay, so dynlink: dynlink has a little more going on with it, but probably a little less work to do; we really kind of understand the scope of how we want to finish dynlink. There are two components in the dynlink draft. There are dynamic links, which sort of use a link to define an asynchronous data transfer from one resource to another.
F
The other thing in the draft is conditional notification parameters, which we basically call observe attributes in some other areas, and basically they control the notification behavior: the timing, how much the value needs to change, and things like that. Note that the conditional notification parameters can be included in the dynamic link, so that's really one way to use them.
F
So we want to make sure we define all those three ways of using the parameters really clearly. I think it's probably already there, but mainly we want to put the draft into two sections: the dynamic links in one section, and the observe parameters in the other. And then there's the thing about a binding table, which is how you might organize things on a server; but they don't all have to be organized that way, so that's optional. So we kind of know what we want to do with it.
F
We have one thing that we're adding, and that is another notification attribute. We've looked at a lot of different notification attributes to have a lot of different, you know, tunable behavior and what-have-you, and the one that seemed to really stand out as being consistently requested, you know, people say 'why didn't you do it this way?', is to have the notification happen within a certain signal range, a value range, and not when you're outside of that value range. And this was really popular; this is from working with Lightweight M2M.
F
Originally, a lot of the folks thought that that's what lt and gt were for: to say you can only notify when you're between lt and gt, or outside the limits, or whatever. You know, they wanted to have the endpoint be quiet when nothing was happening, and only do notification when something unusual is happening. So we're adding the 'band' attribute to modify the behavior of lt and gt to be this 'notify within band'.
F
So if you use lt and gt and that band, which is a boolean flag, then you get the special behavior on those. If you don't, you just get notifications when lt and gt are crossed as a limit, sort of a crossing-limit behavior. And then, finally, we decided not to rename lt: even though Resource Directory uses lt as lifetime, we don't see any significant conflict there, so we'd like to keep this as lt.
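The band modifier described above might look like this in code. The attribute names (lt, gt, band) follow the discussion, but the exact crossing rules here are an assumption for illustration, not finalized dynlink text.

```python
def band_notify(value: float, lt: float, gt: float) -> bool:
    """band=True behaviour: notify only while value lies within [lt, gt]."""
    return lt <= value <= gt

def limit_notify(prev: float, value: float, lt: float, gt: float) -> bool:
    """Default behaviour: notify when the value crosses the lt or gt limit."""
    def crossed(threshold: float) -> bool:
        return (prev < threshold <= value) or (value <= threshold < prev)
    return crossed(lt) or crossed(gt)

# With band set, an in-range temperature keeps notifying; out of range is quiet.
assert band_notify(21.0, lt=18.0, gt=25.0) is True
assert band_notify(30.0, lt=18.0, gt=25.0) is False
# Without band, only threshold crossings notify.
assert limit_notify(prev=24.0, value=26.0, lt=18.0, gt=25.0) is True
assert limit_notify(prev=20.0, value=22.0, lt=18.0, gt=25.0) is False
```

This matches the "quiet when nothing is happening" request from the LwM2M folks: pick the band behavior or the crossing behavior per resource via the boolean flag.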
F
F
J
J
F
We'll clarify that in the draft. Well, we probably need some examples there, to show some ASCII art; I signed up to create a little ASCII art to illustrate those. And on the roadmap, the last changes are scope: we want to, you know, do these three things, and security considerations. We didn't really talk a lot about that, but when you have a dynamic link, there's sort of an implication that there's some client functionality there that has to process the link to do the output.
F
A
A
I think, now, who would be willing to contribute a review? Christian and... Christian, okay, and Ari, thank you. And on the subject of the interfaces document, who has read a recent version? Christian and Zach. So we probably should reserve the first row for Christian, because he has read all the drafts. And who would be willing to contribute a review of interfaces? Just Jan.
A
S
Okay, yeah, all right, so we'll start with protocol negotiation first. So there are two drafts right now that are separated for alternative transports. One is for describing where the transport information should be residing in the URI, and the other one is for discovering alternative transport endpoints. So protocol negotiation is about doing the second part. A bit of context: the document aims at talking about nodes that have multiple transports and that wish to allow every request/response to use some or all of these transports.
S
So we started with thinking only about per-server models, but then recent discussions also showed that per-resource models are useful, and the draft also evolved. Initially we went with the CoRE Resource Directory-only model, and right now we also have a model where you can directly query the origin servers for the available transports.
S
Okay, current status: in version -08 we did not introduce anything new; we clarified some of the parameters based on reviews that we received. And also thanks to Christian for doing good work on the Resource Directory; from that, we were able to do a lot more. We have the 'ol' (other locations) attribute, which allows multiple base URIs, to align it with the way OCF does it; and then we have the 'at' and 'tt' parameters, which are also repeatable, that do the same thing with the Resource Directory.
S
Some of these actually came from the corridor discussions that we had, and some came from the reviews. These were some methods that we think, going forward, we'll try to evaluate, to see if any of them are viable ways of doing this. So, for example, using FETCH to provide a payload when you do a request and then retrieving the list of transport endpoints, or using a well-known location for such metadata, or doing what we do in dynlink with the binding-table entry.
S
A
A
L
In my opinion it's completed, so we need just... actually, from the point of view of Lightweight M2M: since we have now multiple transports, I believe the protocol negotiation one could also be useful. We are finishing 1.1; maybe in 1.2 it could be something to discuss. I don't know what Pat or Zach or Hannes, if he's in the room, would think, but it sounds like a useful feature, yeah.
S
So we wanted to figure out: if you wish to expose the transport endpoint in the URI, how do you do it? That work took a while, and we discovered that when you look at all the URI components, you can't put it in the query, nor in that part of the authority, and certainly not in the fragment; and with the requirements that we had, the only place left to do that was in the URI scheme.
S
So the draft basically just crystallizes that point; it tells you the design decisions that drove the decision that is currently being used by RFC 8323 to put the CoAP TCP and WebSockets transport information in the URI scheme. So the current draft, -11, is just a small delta. Yeah, that's basically it.
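The outcome of putting the transport in the scheme, as RFC 8323 does, can be shown as a small lookup. The helper function is an illustrative sketch, but the scheme names and default ports are the registered ones from RFC 7252 and RFC 8323.

```python
from urllib.parse import urlsplit

# One URI scheme per CoAP transport, each with its default port,
# per RFC 7252 (UDP/DTLS) and RFC 8323 (TCP/TLS/WebSockets).
COAP_SCHEMES = {
    "coap":      ("UDP", 5683),
    "coaps":     ("DTLS", 5684),
    "coap+tcp":  ("TCP", 5683),
    "coaps+tcp": ("TLS", 5684),
    "coap+ws":   ("WebSockets", 80),
    "coaps+ws":  ("WebSockets over TLS", 443),
}

def transport_of(uri: str) -> tuple:
    """Resolve a CoAP URI to its (transport, default_port) via the scheme."""
    return COAP_SCHEMES[urlsplit(uri).scheme]

assert transport_of("coap+tcp://sensor.example/temp") == ("TCP", 5683)
```

This is exactly the design point the draft crystallizes: the scheme is the only URI component left that can carry the transport, given the constraints on query, authority, and fragment discussed above.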
A
How are we going to use this analysis next? One thing that we left unfinished when we completed RFC 8323 was: how do you actually do a URI scheme that is open with respect to the transport being used? And we kind of have an unpaid IOU on this one, because we agreed that we should have such a URI scheme as well, but we haven't really done the work on that yet, and I think the protocol negotiation draft certainly will go into that in some form.
A
A
Now, Dave is going to remind us that OCF already has such a scheme, which is specific to the way OCF is naming endpoints, and maybe that is also something that we have to take into consideration when doing something like a more general coap+<transport> scheme. So I think we have two nice pieces of raw material here, and we have an unfinished, unpaid check.
J
What Carsten said is what I was going to say, and then to extend on that, I'd say my comment is not entirely specific to OCF, although you're correct that that is sort of the main case that we know of today that's making use of CoAP and so on. But I would say: any time that you have an organization or a vendor or whatever that makes use of CoAP, but potentially makes use of transports besides CoAP's, then such an organization would probably never use a coap+<transport> URI.
J
J
J
Now you know what the implications are. When you say 'this scheme', you mean the coap+<transport> scheme? Yes, yeah. Yeah, that's exactly the sort of thing that I'm arguing for, is to say: if you do that, then you can use, you know, whether it's, you know, OCF's or whatever, which can point to things which may or may not start with coap+, because you don't constrain it to that, so that it could be extended in the future.
J
If something else comes along, then yes, that's exactly what I'm arguing for, yeah. Okay, and then you're basically covering the OCF use case, and that depends on, you know, what the rest of the syntax says, which you mentioned: you know, what's the right way that you're naming things and so on, and whether you're naming things along the lines of what the URI presented or something else, right?
A
S
A
So I take it that we have just transmogrified this set of two documents into a slightly larger work item, which is again an unpaid check that we still have with Roche, and also something that I think the community really could use. So let's take this discussion offline and make some progress there. Okay, thank you. Okay, so we finally have arrived at the flexible-time part of the meeting, and right now the only slide set I have is about OPC UA. You're on; do you want to say anything about the timescale thing? Nothing, nothing.
A
It's not a talk; I would just like to put a comment to the working group on Angie's draft. The goal was to have something so that, when you send a request, you can say how long the server can keep it. For very slow devices this could be very important, but we need feedback from the group on this.
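The idea floated above, carrying a request lifetime so the server knows how long it may keep the request, could plausibly be expressed as a CoAP uint option. The option itself and its semantics are hypothetical (nothing standardized is being described here); only the value encoding below follows the standard CoAP uint option rules from RFC 7252 (minimal-length big-endian, with zero encoded as an empty value).

```python
def encode_uint_option(value: int) -> bytes:
    """Encode a non-negative integer as a CoAP uint option value.

    Per RFC 7252, uint option values use the shortest big-endian
    representation; the value 0 is encoded as zero bytes.
    """
    if value < 0:
        raise ValueError("uint option must be non-negative")
    length = (value.bit_length() + 7) // 8
    return value.to_bytes(length, "big")

def decode_uint_option(data: bytes) -> int:
    """Decode a CoAP uint option value back to an integer."""
    return int.from_bytes(data, "big")

# Hypothetical use: a request-lifetime of 900 seconds attached to a
# request destined for a very slow device.
lifetime_value = encode_uint_option(900)  # two bytes: 0x03 0x84
```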
A
Okay, I'll be happy if people raise their hands, so, yeah, we know who you are, good. So maybe we should take this offline, and maybe use a break at some point to discuss how to make progress with this. Okay, so that was the timescale segment, and now let's go to the OPC UA segment. There are seven names on this slide; I don't know who is actually going to present this.
T
We made some changes according to the comments from the last meeting, especially adding some use cases in this version. The first was resource-constrained industrial scenarios: if we want to use OPC UA to consolidate different types of data and protocols into a unified information model, as well as using web services, HTTP is not a good choice because it is too heavyweight. Using CoAP instead of HTTP is a better choice because it can achieve lightweight communication.
T
The first use case was based on this, and we only needed to change the OPC UA client and server to support CoAP. The second use case is using a CoAP-to-HTTP proxy; it is not necessary to change the OPC UA client. With the development of cloud technology, factory data can be uploaded to the cloud for further processing, and many cloud APIs can support OPC UA and CoAP. So, obviously, OPC UA over CoAP can serve lightweight devices.
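The second use case above hinges on a CoAP-to-HTTP cross-proxy, so the constrained OPC UA client keeps speaking CoAP while the server side stays HTTP. A minimal sketch of the translation step is below; real cross-proxy behavior is specified in RFC 8075, and this only shows the method and URI mapping, with the example path being purely illustrative.

```python
# CoAP method codes from RFC 7252: 0.01 GET, 0.02 POST, 0.03 PUT, 0.04 DELETE.
COAP_TO_HTTP_METHOD = {1: "GET", 2: "POST", 3: "PUT", 4: "DELETE"}

def translate_request(coap_method_code: int, coap_uri: str) -> tuple[str, str]:
    """Map a CoAP request onto an HTTP (method, URI) pair.

    Sketch only: a real cross-proxy per RFC 8075 also maps options,
    response codes, and content formats, which are omitted here.
    """
    method = COAP_TO_HTTP_METHOD[coap_method_code]
    # Simplistic scheme rewrite: coap://... becomes http://...
    http_uri = "http://" + coap_uri.removeprefix("coap://")
    return method, http_uri
```

For example, a CoAP GET for a node on a hypothetical gateway would come out of the proxy as an HTTP GET for the same path.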
J
Dave Thaler. So, yeah, I agree your next step should be to go to the OPC Foundation and get reviewed there, as we talked about last IETF, because, yes, I think it's just informational to us. This work really belongs, if they accept it, in the OPC Foundation, not here, because it's on top of us; it's a user of us, and so they would be the ones to do the bindings from OPC UA to various things.
J
With that said, my comments are sort of technical comments or questions that might be more appropriate in that forum than this one, but I'm happy to give them to you now, since I'm here and I'm not there. My understanding is that, although OPC UA defines like three or four different transports, of which you have a good picture in the draft, the only one that's actually used is not HTTP; it's the one that's TCP. And here you mention CoAP, and I don't remember if, in the draft, you talked about CoAP over UDP versus CoAP over TCP, but for the cloud it seems like you care about the CoAP-over-TCP thing. So I guess that's part of my question here: do you assume it's CoAP over TCP because you care about congestion control across the Internet to the cloud? And the second question would be: have you compared the compression that you get from, you know...?
A
Good. So I think we continue to be interested in finding out what other organizations might be using our protocol and what influence this has on further design decisions. So I would encourage you to bring your work back to us again, but, as Dave said, it would be good to know what the OPC Foundation thinks about this, and it also would be good to have some numbers, like the message sizes and the compression. Okay, thank you very much. So we are 15 minutes ahead of schedule and we're done with our agenda.