From YouTube: IETF96-TAPS-20160721-1830
Description
TAPS meeting session at IETF96
2016/07/21 1830
[Pre-meeting chatter] So we should combine together... I'm getting too sweet. Nothing, yeah, I know, just a little.
D: We have updates. Basically the agenda boils down to this: we have updates on a few of the drafts that are in progress, we're going to hear about a couple of related research and implementation projects, and then, you know, go get beer or whatever happens next. So the main piece of our first milestone, which we've been referring to as "doc one", draft-ietf-taps-transports: we finished working group last call.
D: We gave it to Spencer; Spencer had some comments, Brian read the draft, and so I don't believe there are any outstanding comments. I believe the next step is that it's going to go to IESG review, and so that's the first half of that document. Then we agreed that we were going to add in (oh look, a little notification about my Verizon coverage)...
D: We agreed that we're going to add the taps-transports-usage document as a second part of the first milestone. So it's not really a charter change; it's just that the milestone now has multiple documents in it, and we're going to hear an update on that. And so one of the questions to talk about is whether that's done. The taps minset has actually been talked about a couple of times, and I...
D: Here's an update on the schedule. These dates reflect changes that we agreed on at our last meeting. So we're going to try to finish the first milestone by the next IETF meeting; hopefully that means finishing up the usage documents, and then by the subsequent IETF finishing up the minset documents. And I don't think we can really know enough about...
B: Thank you. Yes. We have been thinking about these steps, and there are some topics that have popped up; we have also been watching what's happening on the mailing list. And this is about something that I think I personally have been missing, that we have not discussed in taps, and we both agree that we should actually tell you some things about that.
B: So basically taps says you abstract everything away from the application; not everything, but as much as possible. But actually the applications are getting smarter now, because they have to choose what transport they use, and they actually do some switching between the transports they use. So they actually would like to know...
B: ...what's going on underneath. I mean, if I want to send a packet, maybe I'll change my behavior depending on which transport I'm using underneath. This is happening, and it is a kind of self-tuning: if you know how the transport behaves, you can package your information in a different way when the transport underneath is different. And another thing: the applications are responsible for the user experience. So if the application doesn't work, nobody is going to blame the transport.
B: It's always "the application doesn't work". So basically, what's really important is what happens when it doesn't work, and how you know what transport selection is happening. And there is also a robust, predictable behavior that the applications need: in some networks you may get this selection of transport pretty fast, and in some networks it might take time, but the application wants to know why this is happening. I have a session setup time, so the latency also matters, like the time to get the first byte.
B: And another observation we have made: there is a mobility concern. We used to think mobility means you are moving and changing your IP address, so you have a new path on which you would like to continue your session. But there is also a current trend:
B: When you change your path, you might also need to change the connection's attachment point to the service provider, like all the servers. So that's the new notion, and the application would like to actually know this information and perhaps handle things differently. These are the observations we've made, and we think this working group should be future-proof, and not only provide features in the current way.
B: If somebody wants to use taps and have taps be really useful to them, the taps working group needs to think about this kind of northbound information. It's not just about what you hide or expose; I mean, taps should make the application developer's life simple, but it should also give some information back to the applications, so that they can do some kind of self-tuning, self-adapting behavior. So the northbound function is important as well.
R: I'll try to keep this short. This is an update on draft-ietf-taps-transports-usage. Basically, we updated from -00 to -01, and one of the first things we updated was that we provided some clarification on how the nomenclature in the document is defined, and what it actually means. So basically it is set out in a static way: the categories can either be "connection" or "data", and the subcategories for connection are establishment, availability, maintenance and termination.
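The two-level nomenclature described here can be sketched as a tiny data structure. The names below are illustrative only; the draft's own text is authoritative:

```python
from enum import Enum

class Category(Enum):
    """Top-level categories from the usage draft's nomenclature."""
    CONNECTION = "connection"
    DATA = "data"

# Subcategories of CONNECTION, as listed in the talk.
CONNECTION_PHASES = {"establishment", "availability", "maintenance", "termination"}

def classify(phase: str) -> Category:
    # Anything that is not a connection-lifecycle phase falls under data
    # transfer in this two-level scheme (a simplification for illustration).
    return Category.CONNECTION if phase in CONNECTION_PHASES else Category.DATA
```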
R: There are some additional services in pass 3, which is basically adding a subflow, removing it, or disabling it, in particular because it's already in the kernel. We also made a change to the rules in the document that allows us to use experimental RFCs in addition to the standards-track RFCs, so that we could have the possibility of including the MPTCP RFCs. And then there is the discussion we had at the previous IETF about the incorporation of DTLS into the document.
R: We had some conversations with the TLS chairs, and we also got in touch with them directly. The feedback that we got from the TLS chairs was basically: there are lots of documents that incorporate TLS without actually describing an API, and there are lots of features deployed in different applications that are also not described in a TLS RFC. In addition to that, TLS has got lots of options that rely on X.509, making it actually more complex.
R: We would try to come up with some sort of definition of an API, and we would either end up having a too-simplistic API or a too-complex one; it wasn't really possible to come up with the right trade-off, an API that is rich enough but not so complex, to actually do the work.
R
So
we're
going
to
leave
it
at
that
and
the
future
plan
further
for
the
draft
is
basically
to
incorporate
all
the
CTP
related
documents,
rfcs
that
are
beyond
RFC
4960,
basically
to
cover
a
different
CTP
document.
Some
of
them
basically
have
their
own
API
related
subsections.
In
addition
to
that,
we
would
like
to
incorporate
some
experimental
RFC's
that
are
related
to
tcp,
like
TFO
and
others
that
we
need
also.
S: Next item. Oh, don't forget: the main change since draft -01 is that we talked to lots of people in the IETF, and it was fun. They told us lots about UDP. There's not a lot of documentation about UDP, but we gathered what we could, and it's all in there. And we did a bit of work: we actually tried to implement what we wrote.
S: So there is some running code in the NEAT project GitHub, which now supports UDP, which is a little bit of a triumph for UDP. And UDP is not that difficult, aha, or so they say; yes, well, we'll fix them. I think we're done: we did what we set out to do. We wrote down everything we discovered about UDP, and it's in the draft. We could get more feedback, but I don't think we will get a lot more useful feedback.
S: Do you want to publish the document in two volumes, together with the other working group document, getting some UDP reviewers on the UDP stuff, and TCP, SCTP and other reviewers on the second document? Putting the two together might, I think, be an interesting way to get IESG review and other people's review in an IETF last call. I don't care; I've done the text, and I give the text to the working group. If the chairs and AD decide they want them separately, I could see a rationale for doing that. Otherwise I'll work with the main editor to choose.
O: Spencer Dawkins, responsible area director. I would always say: do the right thing. I'm fine taking documents through in smaller pieces; I would be fine doing that. I would encourage you not to let anyone but yourselves decide how big a document can be, and I say that with love for the university guys.
S: Happy to do it either way. The editors would have to coordinate with the other editors anyway, so I'm happy to take it either way; I don't need the working group to tell me. But I'd say, if we do publish them as a pair, they should be tracked as a pair: we should freeze them and let the working group do maintenance on them until they're all ready to be published. (It's a good idea.)
P: I'm going to make this quick; it's the minset, and it's a pretty small update. Just for context: this is about the second charter item, the chartered part that reads that the subset of the transport services coming from the first item will be defined here; not all the capabilities need to be exposed. I will start with a caveat, a busy statement about abstraction. Remember, abstraction is a trade-off. Taps is about identifying more services than we normally use, identifying all the things from all the transports.
P
At
least
we
care
about
here
and
if
we
expose
all
of
that,
we
make
all
applications
happy,
but
we
haven't
really
achieved
very
much
if
you
also
expose
the
choice
of
the
transport
and
then
we
have
achieved
anything,
so
we
need
to
make
it
possible
for
this
system
to
automatically
at
them,
but
describe
the
possibility
of
hiding
things
and
this
this
draft
is
about
these
trade-offs.
So
this
is
the
job
that
hurts
we
categorize
things
as
being
functional
as
optimizing
automatable.
This
is
nothing
new.
We
have
had
that
already,
but
we
have
removed.
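The functional/optimizing/automatable split can be sketched like this. The feature names and their assignments are examples inferred from the talk, not the draft's actual list:

```python
# Minset-style categorization (sketch). "Functional" features change what the
# application observes; "optimizing" features only affect performance;
# "automatable" features can be handled by the system without involving
# the application at all.
MINSET_CATEGORIES = {
    "ordered_delivery": "functional",
    "partial_reliability": "functional",
    "disable_nagle": "optimizing",
    "multi_streaming": "automatable",   # per the talk's decision
    "multipath": "automatable",         # likewise
}

def needs_app_involvement(feature: str) -> bool:
    """A transport system must expose functional features to the application;
    automatable ones it may exercise internally."""
    return MINSET_CATEGORIES.get(feature) == "functional"
```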
P: There's no way you could do this automatically with unordered delivery, for instance; this kind of categorization is what we're looking at. There are multiple changes, not very many, but a few small things worth mentioning. We have made two major decisions in this document. One is about calling things automatable, saying it's possible to do them automatically, without involving the application: that would be multi-streaming and multiple paths. I hope people hate this, so that we have some discussion; maybe not so much discussion...
P
Here's
we
have
time
for
the
other
presentations
as
well,
but
voice
your
opinion
Oh
at
the
bit
active
on
the
list.
Maybe
there
was
a
category
called
deleted:
I
don't
want
to
go
into
it
because
we
just
removed
it.
It
turned
out
to
be
a
redundant
thing.
The
now
brief
implementation
hints
everywhere,
I
believe
that
this
group
need
more
activity
and
the
hope
that
we
get
more
activity
when
we
would
start
discussing
cold
and
start
discussing
out
without
the
dode
such
a
thing.
So
this
is
where
it
begins.
P: There are some hints about how we think these things could be implemented. Please take a look at them and tell us if you disagree, that it couldn't be implemented in that way, or maybe you know some other way. So we have implementation hints, and we have some small fixes that were applied to pass 3 of the first document; but this is only about halfway through updating to the newer version. I think that's it.
D: Well then, I'll start a big argument. Let me ask my question first, and if it turns out that mine doesn't start an argument, you can have your turn and blow it up. So when we say the API will expose multi-streaming and multiple paths as automatable functions: does that mean that it will prohibit an application from having the ability to control multi-streaming or multiple paths, or just that you're going to require that the application need not do that if it doesn't want to?
P: Applications that want to get their fingers into that kind of control will still be able to do that; we're not going to prohibit it. But we, as a working group, are going to make the effort to expose an API so that the apps don't have to, if they don't want to. (Yeah, okay, but... I don't...)
P: This tracks what happens with the first document, since it is based on the first one, and the first one is going to grow. There are a lot of SCTP RFCs that are not covered in the first document; TFO isn't covered in the first. All that stuff needs to be incorporated there and not lost, because this document is just taking that, going through the list, and categorizing it.
B: So my comment is: the thing that you just described to us, it would be good to put it in the document, so that we understand what exactly "automatable" means. (Yeah, that's not clearly there yet. Thank you.)
T: So this is related to the third work item, which hopefully people are excited to get to. We have in the charter that we need to explain how to select and engage an appropriate protocol, and how to discover which protocols are available for the selected service between a given pair of endpoints. And of course, if you're going to do this and you don't have some a-priori knowledge or explicit signaling, the way to find out that a protocol is supported is by trying it out.
T: So this calls for some kind of happy-eyeballs mechanism to solve the problem. Happy eyeballs, of course, means that we don't try the different alternatives serially, because that would take a lot of time and add delay for the application; instead we look at trying multiple transport solutions in parallel. We have done some measurements, because if you are going to test a number of protocols in parallel, that is going to be nice for the delay, but it's also going to come with some costs, right?
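A minimal sketch of the parallel racing being described, using simulated connection attempts in place of real TCP and SCTP handshakes (the transport names and delays here are stand-ins, not measured values):

```python
import asyncio

async def try_transport(name: str, connect_delay: float) -> str:
    # Stand-in for a real connection attempt; a real implementation would
    # perform the handshake on a socket of the corresponding type.
    await asyncio.sleep(connect_delay)
    return name

async def happy_eyeballs(attempts):
    """Race connection attempts in parallel and return the first to complete,
    cancelling the losers. The raced attempts are exactly the extra cost the
    measurements above quantify (CPU and server-side state)."""
    tasks = [asyncio.create_task(try_transport(n, d)) for n, d in attempts]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()
    return done.pop().result()

# Here the simulated TCP attempt completes first, so it is selected and the
# slower SCTP attempt is cancelled.
winner = asyncio.run(happy_eyeballs([("tcp", 0.01), ("sctp", 0.05)]))
```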
T: This will require some extra processing, and maybe extra memory at the server, potentially. So we set up some experiments where we have a web server supporting both TCP and SCTP, a custom web client sending parallel requests using a happy-eyeballs mechanism, and, in the middle, an emulator to add some delays to the experiments. We looked at three different test cases. The first one is very basic, the simplest scenario: you just have the connection that starts, and there is no encryption, there is no caching.
T: With no encryption and no caching, we're just sending two requests in parallel. What we have on the x-axis are the different loads that I talked about, something like a hundred requests per second or a thousand requests per second; each experimental run here lasts for 10 minutes, and then of course we had a number of these. Then you see the CPU utilization on the y-axis, and the two colors are the two different object sizes.
T
So
the
blue
one
is
the
1k
bite
and
the
red
one
is
the
35
k
bite,
and
this
is
perhaps
to
look
over
here.
The
thousand
requests
per
second,
because
then
we
have
the
largest
bars
and
we
can
see
that
if
you
just
use
the
graph,
sears
tcp
means
no
happy
eyeballs.
One
tcp
connection
as
ATP
means
no
happy
eyeballs,
wellness,
TP
connection,
and
then
we
compare
these
two
happy
eyeballs
and
with
an
outcome
of
TCP
being
selected.
T
So
we
can
see
that
if
we
just
run
TCP
and
then,
if
we're
unhappy,
eyeballs
we're
going
to
increase
the
cpu
utilization,
it
almost
doubles
here
and
that's
not
surprising
right
because
we're
sending
to
request-
and
there
is
not
much
work.
We
can
see
that
already.
If
we
increase
the
size
of
the
page,
this
difference
is
going
to
be
smaller
and
we
can
also
see
that
using
a
ctp.
In
this
case
the
two
protocols
differs.
So
you
actually
have
not
much
of
a
difference.
T: The second test scenario is the same setup, but now we add TLS into the picture, so the connection now uses TLS. Maybe I go back and show you: here the CPU utilization was about six percent. If we add TLS, you see that the CPU utilization goes up to twenty-five percent. So you can see that the impact of TLS here is much larger than the impact of happy eyeballs, and of course that also means that we now have a much smaller relative difference between the protocols.
T: Then, in the third scenario, we add caching into the picture. If you want to implement this, of course, you wouldn't want to try everything every time; you want to learn as you go. So if you know that a protocol succeeds, or that a protocol does not succeed, you don't need to try it: if your preferred protocol succeeded before, you can try that immediately and maybe not try the other candidates; and if there's something you know doesn't work, you shouldn't try it every time.
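The caching idea can be sketched as a small lookup table with expiry. All names here are hypothetical, and a real policy system would track much more state:

```python
import time

class ProtocolCache:
    """Remember happy-eyeballs outcomes per destination, so racing is skipped
    on a cache hit; entries expire so failed protocols get re-probed at
    regular intervals, as suggested later in the discussion."""

    def __init__(self, ttl_seconds: float = 600.0):
        self.ttl = ttl_seconds
        self._entries = {}  # (host, port) -> (winning protocol, timestamp)

    def lookup(self, host, port):
        entry = self._entries.get((host, port))
        if entry is None:
            return None            # cache miss: run happy eyeballs
        winner, stamp = entry
        if time.monotonic() - stamp > self.ttl:
            del self._entries[(host, port)]  # stale: force a fresh race
            return None
        return winner              # cache hit: connect directly

    def record(self, host, port, winner):
        self._entries[(host, port)] = (winner, time.monotonic())

cache = ProtocolCache()
cache.record("example.org", 443, "tcp")
```

A hit rate of 1 in the experiments corresponds to `lookup` always returning a protocol, so no parallel attempts are made at all.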
T: So we have now added caching to all the scenarios you saw before. Here we have the unencrypted scenario, here the TLS-encrypted scenario, with the 1 KB page size and the 35 KB page size; and now we don't have the load on the x-axis anymore, now we have the cache hit rate. Over here we have a cache hit rate of zero, meaning the case where we run happy eyeballs on everything; at this end we have a cache hit rate of 1, which means that you never have to do happy eyeballs (in this case the outcome was always TCP); and in between they range: 0.8 (it's very hard to see on the slide), 0.6, 0.4, 0.2. Perhaps look at this scenario, because this was the case where it's easiest to see.
T
Of
course,
if
you
add
the
caching,
this
is
going
to
help
you
because
now
you're
not
going
to
have
to
do
the
happy
eyeballs
mechanism
all
the
time.
So
if
we
look
over
here
at
the
pouring,
the
eight
eighty
percent
cache
hit
rate
you,
you
can
see
that
the
difference
between
using
happy
eyeballs
or
not
it's
not
really
big.
And,
of
course,
if
we
go
to
the
TLS
encrypted
case
here,
it's
already
a
smaller
difference
between
the
protocol.
T
You
see
some
impact
of
the
caching,
but
it's
already
quite
similar
and
if
you're
wondering
at
the
different
three
balls
here.
That
is
the
outcome
of
the
happy
eyeballs
mechanism.
So
in
the
previous
graphs
it
was
always
TCP
that
one
here
we
also
have
added
so
TCP
being
the
winner
as
City
fifty
percent
of
the
time,
so
either
of
the
protocols
can
win,
or
always
a
ctp
being
the
winner.
T
Okay,
so
this
was
a
very
quick
summary
of
some
of
the
results
from
from
this
measurements
and
I.
Think
the
conclusions
from
the
results
that
we
see
so
far
is
that
it's
feasible
to
use
happy
eyeballs
as
a
transport
protocol
selection
mechanisms,
because
you
can
amortize
over
the
cost
of
all
the
work
for
the
connection
and
with
caching.
You
can
also
reduce
the
load
quite
a
bit,
so
the
experiments
here
was
was
done.
T
You
know
as
a
separate
component
as
I
showed
you
with
a
standalone
yesterday
with
custom
web
client
and
and
the
server
so
we're
also
now
building
this
happy
eyeballs
mechanism
into
the
neat
system.
That
was
mentioned
also
on
the
previous
slide
here
that
we're
building
in
the
this
neat
project,
which
is
a
taps
like
system
and
the
code
is
also
available
here.
So
with
that,
of
course,
we
want
to
do
more
extensive
evaluations
also
measure
in
in
real
networks,
and
we
also
submitted
first
very
rough
draft
of
the
happy
eyeballs
framework.
U: So, is there a way that we... In those cases we can guess from, say, previous RTTs which one we think is faster, just try that one, wait until we think it should have finished, and then kick off the next; and that has a very high success rate for us. Is there something similar we could do for protocols here: guess, historically, which one we think is better, and race them a little bit staggered?
T
What
we're
actually
suggesting
in
the
in
the
draft
and
in
the
framework
for
it
is
that
you
have
some
notion
of
which
protocol
will
provide
you
with
the
best
service,
whether
this
comes
from
from
knowledge
of
what
the
application
needs
or
some
policy
system,
or
so
you
actually
try
the
preferred
transport
a
little
bit
before
the
next
transport,
so
kind
of
having
a
priority
for
what
you
would
like.
You
know
that
outcome
to
be
so
that
you
have
some.
T
Course
so
that
you
don't
have
to
combine
with
the
caching
right.
So
if
you
have
protocols
that
didn't
succeed
and
you
have
recently
tried
them,
then
I
think
you
should
not
try
those
at
all
and
then
eventually
you
have
to
time
out
that
so
that
you,
you
know
probe
it
at
regular
intervals,
but
clearly
you
should
not
have
any.
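The staggered variant discussed here, giving the preferred transport a head start before launching the fallback, might look like this (simulated attempts again; the head-start value is an arbitrary example, not from the draft):

```python
import asyncio

async def attempt(name: str, connect_delay: float) -> str:
    # Stand-in for a real handshake attempt over one transport.
    await asyncio.sleep(connect_delay)
    return name

async def staggered_race(preferred, fallback, head_start=0.05):
    """Launch the preferred transport first; only if it has not completed
    within its head start is the fallback raced against it. This biases the
    outcome toward what policy prefers while bounding the worst-case delay."""
    first = asyncio.create_task(attempt(*preferred))
    done, _ = await asyncio.wait({first}, timeout=head_start)
    if done:
        return first.result()          # preferred won within its head start
    second = asyncio.create_task(attempt(*fallback))
    done, pending = await asyncio.wait({first, second},
                                       return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()
    return done.pop().result()

# SCTP is preferred and "connects" within its head start, so no TCP attempt
# is ever started in this run.
winner = asyncio.run(staggered_race(("sctp", 0.01), ("tcp", 0.01)))
```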
L: Oh, I'm not going to answer the question; I just want to make some trouble. (That's fine, too.) I mean, I like this idea of trying happy eyeballs for transport selection. How does happy eyeballs operating at the transport layer interplay with happy eyeballs operating at the network layer, and how do you race different things happening at the same time? (Yes.)
L
Need
to
combine
these
right
in
some
way
so
right
on
I
mean
I
was
talking
to
folks
about
this
earlier,
but
the
way
that
lots
of
real-time
applications
you
do
this
is
with
ace,
which
is
for
connectivity.
Establishment.
Son
gets
you
into
a
really
messy
place.
Are
the
really
nasty
algorithm
with
lots
of
preferences
specifies
my.
U: Tommy Pauly, Apple, again. Just in regards to that: we have a bit of experience with double happy eyeballs, not between transports, but between address families and then interfaces, so we have two levels of it. And if you do it carefully, it can be okay. So yeah, I think it'd be good to specify something like: if you are doing that underneath, just make sure they're kept as kind of independent layers. You don't have to, but...
D: Please try to be brief.
V: I sat in today on the working group that does interactive connectivity establishment, ICE, and there are really good results and actually a lot of ongoing discussion around that. I think Cullen Jennings ran an experiment where he bought a bunch of NATs and ran a test, and he said that he could basically pace the checks at five-second intervals and they all went through.
V
But
what
they
found
out
was
that
they're
about
thirty
six
candidates
pairs
for
each
address
family,
so
there
were
72
total
checks
that
needed
to
be
done
just
for
ipv4
ipv6
and
they
have
like
real
eight
candidates
and
so
on
so
forth,
and
they
found
out
that
if
you
did
this,
there
would
be
one
megabit
per
second,
just
connectivity
traffic.
Look
like
a
fight
at
the
beginning
of
look
like
there's,
nothing
even
started.
V
You
just
put
call-
and
this
starts
up
right
so
so
the
group
through
went
through
a
lot
of
discussion
and,
of
course,
this
UDP
and
it's
not
congestion
control.
So
basically
you
could
the
only
thing.
That's
controlling
the
whole
checks
is
the
timer
like
da
that's
the
timer
in
the
5245
so
like
coming
back
to
here,
at
least
it's
a
tractable
problem,
the
sense
that
there
are
some
transports
here
which
are
like
TCP
and
sctp,
which
are
congestion
control.
V: I think the question again: if you have 72 candidate pairs for two address families, and then you add UDP and TCP and SCTP, you can just do the math, 72 times three in this case, and that's a lot of stuff to go through. I think what we're looking for is: if you come up with guidance here, then ICE could basically take that guidance from here into their working group.
Q: ...and QUIC... this is essentially ICE; the bits are encoded in a different way, but you're essentially doing the same thing.
K: Michael Tüxen. I just wanted to note that, at least in the NEAT project, what we do is control the protocol selection at the transport and network layer completely. So we don't just let the operating system choose; we choose explicitly whether we want to test this over IPv4 or v6, and we try to avoid these two layers of interaction which we can't control. So that's the way we are approaching this: we control the stuff we are aware of, and we try to do something with it.
H: The idea is that a lot of the work we're trying to do here to make everything more flexible keeps running into the fact that we're using yesterday's interface, SOCK_STREAM. Right, like I said at the plenary last night, for the people who didn't have better dinner plans: we have a really great interface for getting a tape from one side of the room to the other, and an excellent protocol for getting a tape from one side of the room to the other.
H
But
you
know
you're
running
into
problems,
we're
scaling
a
protocol
and
we're
also
running
into
problems
with
scaling
the
interface
it's
synchronous.
It's
unicast
got
no
framing
support
single
stream,
single
path,
there's
no
path,
abstraction,
there's,
no
security,
and
you
can
measure
it,
but
everything
is
implicit
right,
doesn't
make
the
network
look
like
a
file
I,
don't
want
to
say
that
this
was
a
bad
idea.
H
This
is
the
reason
that
everybody
who
could
program
a
UNIX
machine
in
the
1970s
became
everybody
who
could
program
Internet
programs,
the
1980s,
which
is
the
reason
we
have
an
Internet.
It's
also
a
reason.
We
have
an
internet
security
problem,
fortunately
about
15
years
ago,
some
very
smart
people,
many
of
whom
are
in
this
room,
came
up
with
shock
stream,
which
is
yesterday's
interface
today
right,
we
can
actually
get
this.
It's
still
synchronous.
Nobody
really
cares
about.
Unicast,
though,
is
in
multicast
routing
insecurity
or
too
hard.
H
There's
no
framing
support,
but
nobody
cares
because
the
lack
of
framing
support
in
TCP
means
anything
that
runs
over
TCP
that
needs
framing
support,
invented
it
anyway,
single
stream,
hi.
You
know
what
actually
we
can
just
open,
multiple
flows.
We
forget
this
out
as
soon
as
we
start
admitting
browsers,
MP
TCP
looks
like
it's
actually
deploying
so
that
fixes
the
single
path
problem
and
TLS
and
open
SSL
solve
all
of
our
problems.
So
we
don't
have
to
worry
about
security
right,
we're
still
missing
a
path.
H
Abstraction
and
the
question
is:
can
we
do
better
than
this?
So
if
you
look
at
seat
pocket,
it
actually
sort
of
fixes.
Most
of
these
problems
eat
unicast
multicast
at
framing
support,
single
or
multiple
stream.
You
get
multipath
for
failover,
but
you
can
actually
control
it,
so
you
can
also
do
it
for
bandwidth.
Cheering
stolen
security
still
know
pop
obstruction.
H
So
these
are
insights
that
I
had
while
staring
at
this
problem,
which
might
also
just
be
silly
assumptions
silly
assumption
number
one
is
that
applications
deal
and
objects
of
arbitrary
size.
There
are
a
few
times
when
you
actually
do
have
to
stream
something
from
one
side
of
like
when
you're,
when
you
actually
have
the
problem
that
you
have
a
tape
and
you're
trying
to
get
to
the
other
side
of
the
room.
You
don't
know
how
much
more
tape
you
have
right,
so
sometimes
you're
actually
streaming
things
often
you're.
H
Not
often
you
know
how
big
the
objects
are
or
can
come
up
with
a
stream
of
objects
that
are
of
the
same
size.
This
is
how
we've
done
video
over
TCP.
By
the
way
we
keep
saying,
multipath
people
jump
up
and
plus
this
morning
said
multipass
multipass
multipass,
and
we
said
yes,
yes,
yes,
I
think
the
network
of
the
future
is
explicitly
multipath.
It's
not
going
to
be
about
often
that
users
or
servers
have
only
one
route
out
to
the
network.
H
I've
heard
of
crazy
people
already
running
bgp
all
the
way
to
the
top
of
the
rack
switches.
I
think
that
basically
saying-
and
this
was
one
of
the
things
that
we
talked
about
when
we
were
putting
the
taps
transports
document-
saying
that
security
is
not
really
a
property
of
transport-
that
is
a
property
of
some
operator.
I
don't
really
think
we
can
get
away
with
that
now.
I
think
future
transports
have
to
guarantee
these
properties.
H
The
fourth
thing
which
is
harder
to
do
on
the
api
side
is
that
message:
resumption
is
inherently
asynchronous
right:
somebody's
gonna
send
you
a
packet
or
a
stream
of
package.
The
transport
layer
is
going
to
put
them
together
and
go
say
hey,
you
have
something,
and
you
know
sitting
there
in
busy
polar
bit
busy
polling
or
busy
waiting
on
it
or
running
a
thread
for
everything
you
know
like
you
know.
Anybody
who
I
think
there
are
few
people
in
the
room
or
a
DHI
balin
programming,
the
java
before
niño,
so
I
mean
like
that.
H
If
you
actually,
if
you
actually
look
at
how
things
are
coming
off
the
network,
everything
is
inherently
asynchronous
and
if
you
look
at
how
scalable
programming
works
right
now,
it's
enabling
this
asynchronous
programming
and
some
of
them
even
actually
require
you
to
a
synchronously
I'm,
going
to
shorten
my
talk,
so
I'm
not
going
to
go
through
this
slide.
This
is
basically
this
talk
is
an
advertisement
to
read
this
slide.
H: It's obvious if you stare at it for a couple of seconds: there are some boxes, and we also have some lines. The idea here is that there's an association; over the association you can send some objects; and then there are some events that come off of things, and you can bind handlers and stuff. This is intended to start a discussion about what we would do if we were going to throw it all out and start over, knowing now what we wish we had known then.
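A minimal sketch of the object-and-event style of interface the slide describes. The class and event names here are invented for illustration and are not from any published API:

```python
# Sketch of an asynchronous, object-oriented transport interface: an
# association carries whole objects and delivers events to bound handlers,
# instead of the application blocking on a byte stream.
class Association:
    def __init__(self, remote: str):
        self.remote = remote
        self._handlers = {}        # event name -> handler callable

    def on(self, event: str, handler):
        """Bind a handler for an event such as 'received' or 'closed'."""
        self._handlers[event] = handler

    def send(self, obj: bytes):
        # A real transport would frame and transmit obj; here we loop it back
        # locally just to show that reception is a callback, not a blocking
        # read on a stream.
        self._dispatch("received", obj)

    def _dispatch(self, event: str, *args):
        handler = self._handlers.get(event)
        if handler:
            handler(*args)

received = []
assoc = Association("example.org:443")
assoc.on("received", received.append)   # reception is inherently asynchronous
assoc.send(b"hello")
```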
H: I'm not going to go through these either. Why am I talking about this in taps? Taps allows you to select transport protocols, but each of those transport protocols inherently needs its own API to deal with, right? So if you're writing to SOCK_STREAM or SOCK_SEQPACKET and you get TCP, there's got to be a shim layer that makes TCP look like SOCK_SEQPACKET, and yeah.
H
So,
if
you're
trying
to
to
to
do
this,
you
need
an
API,
a
single
API
that
runs
over
all
of
these
transport
protocols,
no
matter
what
their
no
matter,
what
their
properties
are.
The
neat
API
is
one
solution
to
this
problem.
It
looks
a
lot
like
current
api's.
That's
actually
a
pretty
good
way
to
go
about
it
if
you
want
to
minimally
disrupt
existing
application
programming,
but
if
you're
going
to
have
to
touch
the
implication
application
anyway,
why
not
go
with
this
more
radical
approach?
H
Anyone
who's
interested
in
talking
about
this,
unfortunately
I
haven't
had
time
to
work
on
this
since
the
last
time,
I
mention
it
the
mic,
please.
This
is
a
I
wish.
I
had
a
paper
published
to
point
at
this
actually
call
and
publish
most
of
the
paper,
because
this
is
like
three
quarters
of
the
TCP
Hollywood
thing
is
that
what
you're
going
to
say,
yeah
read
Collins
paper
about
my
awesome,
API
I.
Guess
we
have
like
three
seconds
for
questions
or
because
I
want
I,
want
Steven
to
get
up
and
have
a
chance.
Q: I also wanted to say: Clark and Tennenhouse, application-level framing; we need all the deadlines, lifetimes... (Sorry, again?) Clark and Tennenhouse, application-level framing, SIGCOMM 1990, mumble. Yeah, okay.
U: Tommy Pauly, Apple. I think this is great; I like your model, and I think it's very similar to a lot of what we're doing. I think the tricky thing will be getting everyone to agree on it and actually having something that people use. So I think it'd be important, early on, to get a discussion group going of the people who are actually implementing the stacks that people are using for networking. I've had some conversations this week with people about doing that, and so we should loop this group in with that group. (Yeah, sounds good.)
U
D
Thank you. Brian, thanks a lot. I forgot to mention at the beginning, and I didn't because I had sent email to the list, that we're going to run probably about 10 minutes over; we had a last-minute talk that came up at the Applied Networking Research Workshop on Saturday, and so we're squeezing it in. Except, is Philipp here? Oh good. Yeah, so just FYI, Bits-N-Bites doesn't start for 20 minutes. Wait, I think we have one more! Oh yeah. What's the last one? Well.
D
N
Nope, it's a five-minute talk. Okay, fine. So today I want to talk about Socket Intents. This is our attempt to automate something, namely the access selection problem, on today's APIs, and trying to do this you really need some information. One of the most valuable pieces of information you could have for access selection is the answer to the question: what's the application going to do with this socket it's now opening? And thinking about this, the obvious answer is that applications know more than they can express using today's socket API, namely nothing.
N
So if I have something like a web browser, it most probably knows approximately what size of object it's fetching, or if you have a video streaming node, it could tell the socket API: I'm trying to download something that will have a constant, or more or less constant, bursty flow of information, and not something that will just fetch 20 bytes. So what could this look like? This is joint work in which we've thought about how this could look.
N
The basic idea is that the application specifies something in a very rough manner, saying: I know this is something like a streaming flow, or this is something like a bulk download, or just some control or chat flow; something like: I know this file I'm going to download is 10 megabytes; whether I care that the datagrams will be delivered timely, or whether delay more or less doesn't matter to me; which bitrate and duration the stream might have; and how resilient the transport should be.
N
So, do I really care about packet loss, or is it something where I'm fine with the operating system doing retransmission, even if nobody sees the data until it arrives a bit later? This got us to a table of some ideas of how these categories could look; I'll skip over that. Employing this, the question is how to do it. We started with the original BSD socket API, thinking: okay, probably the best place to implement this is as a socket option.
N
This is the least invasive way to do this, and at first we modified all the basic socket calls to include a context with which we could bind the calls together, because these calls are really only loosely coupled. Later on, when trying to solve the problem of how to do this, we found out it would have been sufficient to change getaddrinfo in a way that lets us provide all this information, and afterwards it returns the information like this: the socket to use, representing the socket settings, and the bind address the application should bind to for doing the access selection.
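A rough sketch of that idea, as I understand it from the talk. The function name, intent categories, policy table, and addresses below are all illustrative, not the actual Socket Intents API; real access selection would consult per-interface measurements rather than a static table.

```python
import socket
from dataclasses import dataclass

# Hypothetical intent categories, loosely based on the talk's examples.
INTENT_BULK = "bulk"      # large download, delay-tolerant
INTENT_STREAM = "stream"  # constant or bursty flow, timing matters
INTENT_QUERY = "query"    # tiny request/response exchange

# Hypothetical policy mapping an intent to a local (interface) address.
# 192.0.2.x / 198.51.100.x are documentation addresses, stand-ins for,
# say, a high-throughput interface versus a low-latency one.
ACCESS_POLICY = {
    INTENT_BULK: "192.0.2.10",
    INTENT_STREAM: "198.51.100.10",
    INTENT_QUERY: "198.51.100.10",
}

@dataclass
class AddrChoice:
    family: int
    remote: tuple  # where to connect
    local: tuple   # where to bind, i.e. the selected access network

def getaddrinfo_with_intents(host, port, intent):
    """Resolve like getaddrinfo, then pick a bind address per intent."""
    family, _, _, _, remote = socket.getaddrinfo(
        host, port, proto=socket.IPPROTO_TCP)[0]
    return AddrChoice(family, remote, (ACCESS_POLICY[intent], 0))
```

The appeal of this shape is what the speaker notes: only one existing call changes, and the caller gets back both the remote address and the bind address that implements the access selection.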
The next question we asked ourselves: okay, if we want to put this into a web browser, we need connection reuse, and in doing connection reuse we also said we want to automate this, and I think this is also a point to take into account. If we need to do connection reuse, we need something like a rather different API: we need something that first does some kind of "yes, we want to connect to a host", and has just this one socket call that gets everything.
N
All the usual socket calls, done in one place; and you see this pattern in almost all applications that use sockets. And you need something that can do connection reuse, so you say: okay, I want to connect to the following host and port, I have the following properties for the connection, and please give me a socket that I can reuse, or give me a new one if there is no suitable socket already open. By having this call, we can really speed up HTTP a little bit, and take the burden of implementing the whole access selection and the whole connection reuse stuff from the application, moving it into some kind of socket API or shim layer that does this automatically, which is really nice. And afterwards there's a socket release that frees the socket or closes it, I don't mind.
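The connect-or-reuse call described above might be sketched like this. The names (`socket_connect`, `socket_release`) and the pooling policy are hypothetical, not the project's actual API; a real implementation would also match on connection properties and evict stale sockets.

```python
import socket

# Hypothetical connection-reuse layer: one call either hands back an
# already-open socket to (host, port) with a compatible intent, or
# opens a new TCP connection.
_pool = {}  # (host, port, intent) -> list of idle sockets

def socket_connect(host, port, intent="bulk"):
    """Return a reusable open socket if one exists, else a new one."""
    idle = _pool.get((host, port, intent), [])
    if idle:
        return idle.pop()  # reuse: no new TCP handshake needed
    return socket.create_connection((host, port))

def socket_release(sock, host, port, intent="bulk"):
    """Return the socket to the pool instead of closing it."""
    _pool.setdefault((host, port, intent), []).append(sock)
```

This is exactly the HTTP speed-up the speaker mentions: the application releases a socket rather than closing it, and the next `socket_connect` to the same host avoids a fresh handshake.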
N
So if you want to play around with it a little bit, there's a prototype on GitHub, including the connection reuse stuff, and we released the most recent version, 0.5, yesterday. So if you want to play around, have fun.
N
D
W
So, I'm Stephen McQuistin, and I'll be talking about transport services for real-time applications. The approach of the working group so far, for defining transport services, has been somewhat bottom-up: we've been looking at transport protocols that have been standardized, and we've been breaking them down into the transport services that these protocols provide. The approach that we take in this work is a top-down approach.
W
So, just a quick note about these applications: they're characterized by a maximum delay bound, and for interactive applications that's in the low hundreds of milliseconds. For voice, for example, it's a couple of hundred milliseconds, and that's based on human perception: above a delay bound of about 200 milliseconds, the interactivity breaks down and users start to notice. For non-interactive applications, the delay bound is in the tens of seconds, and that's based on the desired experience that we want to provide users.
W
Ultimately, the services that we specify for these applications need to respect this maximum delay bound, this timeliness constraint, and at the same time add as little latency as they can, so that we've got more of the delay budget to use up at the other layers of the stack. So that's the applications, and now we'll define a set of transport services that these applications need. The first of these is a timing and deadlines service: again, data has a set time by which it needs to have arrived in order to be useful to the application.
W
So an IPTV application, for example, has frames of video, and each frame has got a time by which it needs to have arrived to be used; once we're past that time, once the frame arrives, it will not be used by the receiving application. If the transport layer doesn't know about this deadline, then we run the risk of sending data that's effectively useless. So what we want to do is estimate the likelihood that this data will arrive on time. For this we need the deadline for the data, an estimate of the one-way network delay, and some notion of how much buffering will be carried out at the receiver; then we can estimate whether or not the data will arrive on time, and make transport decisions based on that. This is really the fundamental service for this class of applications, and all the other services that we define follow on from this one. So the next service is partial reliability.
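A back-of-the-envelope version of that arrival-time estimate. All names and the millisecond arithmetic here are illustrative, assuming the three inputs the talk lists: the deadline, a one-way delay estimate, and the receiver's buffering headroom.

```python
def will_arrive_on_time(deadline_ms, one_way_delay_ms,
                        receiver_buffer_ms, now_ms):
    """Estimate whether data sent now is still useful to the receiver.

    deadline_ms: time by which the data must be available for playout
    one_way_delay_ms: current estimate of the one-way network delay
    receiver_buffer_ms: extra headroom from receiver-side buffering
    now_ms: current sender clock
    """
    estimated_arrival = now_ms + one_way_delay_ms
    return estimated_arrival <= deadline_ms + receiver_buffer_ms

def should_send(frame_deadline_ms, owd_ms, buffer_ms, now_ms):
    # Skip sending (or retransmitting) data that would arrive too late
    # anyway; that is the talk's "effectively useless" data.
    return will_arrive_on_time(frame_deadline_ms, owd_ms,
                               buffer_ms, now_ms)
```

Everything that follows in the talk, partial reliability in particular, is a policy built on top of a check like this one.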
W
So, we know that IP gives us a best-effort packet delivery service, and so some packets will be lost, and we know about this timeliness constraint, where data has a deadline after which it will effectively become useless. If we want to provide guaranteed reliability, then we're going to start missing deadlines, and we're going to start transmitting data that's effectively useless. So what we want is partial reliability: we want to retransmit lost data, but only when it's going to be useful to the receiving application.
W
So, only while the data will arrive within its deadline. Now, of course, partial reliability means that ultimately some data won't arrive; some packets won't be delivered to the receiver, and so what we need to do is maximize the utility of the packets that do arrive. This implies some sort of application-level framing, with the application sending ADUs, and so a message-oriented transport service, given that these messages will be independently useful and we want to reduce latency.
W
We then introduce a substreams service. These applications are often comprised of two or more subflows, like an audio flow and a video flow, and we want to multiplex those across a single transport connection, so we have a substream service to do so. Partial reliability means that only some messages will arrive successfully, and if we look at applications, there are often interdependencies amongst the data: an MPEG-1 application, for example, has I, P and B frames, and dependencies between those.
W
So if we've got an I-frame with a P-frame that's dependent on it, and the I-frame isn't delivered successfully, then we shouldn't send the P-frame either, because it's dependent on data that won't have arrived. Of course, utility is quite difficult to define for these applications, because a frame might not be useful in the sense that it won't be played out, but it's still useful in the sense that a frame that depends on it needs it to be decoded.
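The I-frame/P-frame rule above can be sketched as a dependency-aware send filter. This is a toy formulation (frame ids and the single-dependency model are assumptions, not from the talk's prototype):

```python
def filter_sendable(frames, delivered):
    """Keep only frames whose dependency (if any) is known to have
    arrived, or is itself about to be sent.

    frames: list of (frame_id, depends_on_or_None), in decode order
    delivered: set of frame ids known to have arrived at the receiver
    """
    sendable = []
    for frame_id, depends_on in frames:
        if (depends_on is None
                or depends_on in delivered
                or depends_on in sendable):
            sendable.append(frame_id)
        # else: the dependency was lost past its deadline, so this
        # frame cannot be decoded and is not worth sending.
    return sendable
```

This captures the utility subtlety mentioned above: a frame's worth isn't just its own playout, but whether frames depending on it can still be decoded.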
W
So we have these multiple versions of utility. The next service is congestion control; it's important for the health of the network and for other applications. We don't specify a congestion control algorithm for the application to use; we just simply say that one must be used, and that the application should be able to select an appropriate algorithm for its use.
W
So, if we provide these over UDP, we can see that UDP already supports a message-orientated service, so there's no change needed for that. Supporting partial reliability means we need to be able to detect loss and retransmit messages if they're going to arrive before the deadline; we'd need to add some headers and other mechanisms for that, and we'll need to add an estimate of the one-way network delay for that function as well.
W
More interestingly, though, we look at how we can provide them over TCP. TCP gives us a byte stream abstraction, so to provide a message-orientated abstraction on top of that, it's not sufficient to place the data for each message in a single segment, because segments may be resegmented in the network; we need some sort of framing mechanism on top of that. If we look at Minion and unordered TCP, we see a byte stuffing algorithm there, and our framing mechanism builds on that.
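For context, the byte stuffing referred to here is typically COBS (Consistent Overhead Byte Stuffing): it removes all zero bytes from a message so that 0x00 can delimit message boundaries inside a TCP byte stream. A compact sketch (not the TCP Hollywood or Minion code; encode/decode are written as a matched pair):

```python
def cobs_encode(data: bytes) -> bytes:
    """Encode so the output contains no 0x00 bytes: each run of up to
    254 non-zero bytes is prefixed by its length + 1; an implicit zero
    follows every block shorter than 254 bytes."""
    out = bytearray()
    idx = 0
    while True:
        block = data[idx:idx + 254]
        zero = block.find(0)
        if zero == -1:
            out.append(len(block) + 1)
            out.extend(block)
            idx += len(block)
            if len(block) < 254:
                break  # end of input
        else:
            out.append(zero + 1)
            out.extend(block[:zero])
            idx += zero + 1  # consume the zero byte itself
    return bytes(out)

def cobs_decode(data: bytes) -> bytes:
    """Invert cobs_encode, restoring the elided zero bytes."""
    out = bytearray()
    idx = 0
    while idx < len(data):
        code = data[idx]
        out.extend(data[idx + 1:idx + code])
        idx += code
        if code < 255 and idx < len(data):
            out.append(0)  # a short block implies a zero followed
    return bytes(out)
```

With zeros eliminated, a receiver can recover message boundaries from a TCP stream regardless of how the network resegmented it, which is exactly the framing problem described above.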
W
Substream support means adding a small header to each message, and TCP is already connection-orientated and already supports congestion control. Its congestion control algorithm is fixed, but we could modify the API to allow applications to specify the congestion control algorithm to use. Ultimately, that leaves us with one final service, partial reliability. Middleboxes in the network have ossified around TCP's reliability mechanism, and they do not expect to see gaps in TCP sequence space; so that means that providing partial reliability isn't as easy as simply not sending TCP retransmissions.
W
So we always need to send TCP retransmissions, to ensure that the TCP sequence space is filled; but what we don't always do is send the payload that would be associated with that segment. If the payload for that TCP retransmission won't arrive by its deadline, then we'll swap in another message that will arrive by its deadline. We call this mechanism inconsistent retransmissions. We've done some small-scale evaluations with our prototype, TCP Hollywood, and some other people have done evaluations of inconsistent retransmissions, and largely it seems to work.
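The payload-swap decision can be sketched as below. This is only a schematic reading of the mechanism just described, not the TCP Hollywood implementation; in particular, requiring the replacement to be exactly the same length is a simplification so the byte stream (and sequence space) stays aligned.

```python
def choose_retransmit_payload(original, queue, now_ms, owd_ms):
    """Pick the payload for a TCP retransmission.

    original: (payload, deadline_ms) carried by the lost segment
    queue: list of (payload, deadline_ms) waiting to be sent
    now_ms / owd_ms: sender clock and one-way delay estimate
    """
    payload, deadline = original
    if now_ms + owd_ms <= deadline:
        return payload  # still useful: ordinary, consistent retransmit
    for new_payload, new_deadline in queue:
        # Replacement must fit the original segment exactly so the
        # sequence space shows no gap to middleboxes.
        if (len(new_payload) == len(payload)
                and now_ms + owd_ms <= new_deadline):
            return new_payload  # inconsistent retransmission
    return payload  # nothing suitable; resend the stale payload
```

Either way a segment with the expected sequence numbers goes out, which is what keeps ossified middleboxes happy.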
W
There are some cases where we see caching in the network, and we get the original instead of the new version, but broadly those are the failure modes for this mechanism. So, in summary then: we've defined a set of transport services for real-time multimedia applications, and we've shown that these services can be provided over both UDP and TCP. Ultimately, that allows us to use UDP in the majority of cases, but to fall back to TCP and provide the same services as well.
W
C
K
And all the stuff you can get with SCTP, you could get using SCTP? Yes.
H
Thanks, Steven, this is really awesome stuff. It sounds like you actually ought to get drafted into that group of people on the stack side, and also on the application side, to talk about how to make this better.
K
H
I'm a little skeptical of the mechanism that you're using to support partial reliability; I'd really like to see a larger-scale measurement of that. Yeah, I know that you know this, but I want to say it in the room.
K
H
Please consider, if you have the ability to run something bigger, going ahead and doing it and submitting it to MAPRG. If you don't, send mail to the maprg at irtf.org list and ask for help, because this seems like a really interesting problem to dig into again.