From YouTube: IETF97-LUNCH-20161117-1230
Description
LUNCH meeting session at IETF97
2016/11/17 1230
C: Okay, hi. I'm Barry Leiba, with Huawei. If you were at the plenary last night, you heard Joe tell you how pleased we were to be hosting this meeting and sponsoring the hackathon throughout the year. We're also pleased to present a technical topic for the lunch speaker series today, so we have Andy Malis, who many of you know.
A: Hello, everybody, I'm Andy. Can you all hear me okay in the back? Okay, cool, very good. So anyway, this right here is an outline of the talk. Basically, what I want to talk to you about is an outline of the problem space, the requirements and challenges that we're seeing from new applications on the internet; current work to address these requirements, both here at the IETF and elsewhere in other SDOs as well; what other work may be needed; and proposals for the future. And I do have a spoiler alert here.
A: I don't plan to come with any pre-packaged solutions to basically cram down your throats. What I'm here for is really to provide you all with some food for thought, and hopefully further discussion down the line, and perhaps the formation of some new work here in the IETF and elsewhere. So with that, let's get going.
A: The internet, of course, has been a huge success, and IPv4 and v6 are the narrow waist of the internet; they support countless applications, as we know. And there are some people who try to make the argument that HTTP is the new narrow waist, but there's still a lot of applications, in fact a majority of applications, that don't run over HTTP in a browser; they really do run straight over IP and then UDP, SCTP or TCP and so on. However, today's internet, especially what we think of as the public best-effort internet, does have a number of challenges that still need to be addressed.
A: I'm talking about the wide-area, multi-domain open Internet. I really don't want to talk about private campus and data center networks, or very well managed single-domain traffic-engineered networks, because those all basically run pretty well, but they're for limited applications, and it's not really what we deal with here, which is the public Internet.
A: So here are a few of the architectural... just before I move off this slide, there was one example that I just wanted to speak to, from personal experience. I've got pretty good internet service at home; I've got FiOS from Verizon, which is a GPON-based service.
A: So, just with that, here are some of the architectural issues that really prevent end-to-end internet service guarantees. Some of them are coordination across multiple layers, and I'm going to be talking more about this; this is kind of like a preview of a lot of the things that I'll be talking about today.
A: There's work in 3GPP and so on and so forth, and each is basically looking at its own slice of the puzzle and saying: okay, what can we do to improve things? For example, from the Broadband Forum, what can we do to improve things in access networks? For 3GPP, what can we do to improve things in mobile networks? And IEEE, what can we do to improve things at the Ethernet layer? But in some cases there isn't a whole lot of coordination going on, and it's difficult to have interoperable solutions as a result. There certainly is a mechanism for cross-SDO cooperation, which is liaisons. However, liaisons often don't work very effectively when you have a very complicated problem set.
A: You need some more concentrated thinking and cooperation than just passing liaisons back and forth to each other. And then there are things going on in the service providers as well, and I come from a service provider background; I was with Verizon for a number of years before joining Huawei, for example, so I've certainly got a lot of experience here. There's very inconsistent implementation of a lot of the work we've already done.
A: DiffServ is, what, 15 years old now, if not longer than that, maybe 20 years old, and then we have DiffServ-TE, which goes back to around 2005 or so. You see a lot of implementation of it in private networks and in well-managed enterprise service networks, but you really don't see a lot of it end-to-end in the public Internet.
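As a concrete aside: an application requests DiffServ treatment by setting the DSCP bits on its packets, which the standard socket API exposes. A minimal sketch; the EF codepoint value 46 is the standard one from RFC 3246, but whether any provider along the path honors the marking end-to-end is exactly the deployment gap described above:

```python
import socket

# DSCP occupies the upper six bits of the IP TOS byte.
EF = 46              # Expedited Forwarding (RFC 3246)
tos = EF << 2        # shift the DSCP into the TOS field

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)

# Packets sent on this socket now carry DSCP 46; each provider
# along the path may honor, remark, or ignore the marking.
sock.sendto(b"low-latency payload", ("127.0.0.1", 9000))
```

Marking is the easy part; the talk's point is that consistent treatment of that marking across AS boundaries is what's missing.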
A: In the public Internet you also see an inability for the service providers to aggregate like flows together, so that like flows can have similar treatment through the backbone. Now, in the public Internet it's really impossible to do flow-by-flow treatment, treating each individual flow separately in the network, because you just can't scale, and if there's one thing we have to do in the internet, it's scale. But aggregation works very well when it comes to scaling.
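The scaling argument can be made concrete: instead of keeping per-flow state, a core router keeps per-class state and maps every flow's marking onto a handful of aggregate queues. A toy illustration (the class names and flow tuples are invented for the example):

```python
# Toy classifier: arbitrarily many flows collapse into a few
# treatment aggregates, so core state stays constant-sized.
DSCP_TO_CLASS = {46: "low-latency", 26: "assured", 0: "best-effort"}

def classify(flow):
    """Map a (src, dst, dscp) flow record to its aggregate class."""
    return DSCP_TO_CLASS.get(flow[2], "best-effort")

flows = [
    ("10.0.0.1", "198.51.100.7", 46),   # VR session
    ("10.0.0.2", "198.51.100.8", 0),    # bulk download
    ("10.0.0.3", "198.51.100.9", 46),   # gaming
]

queues = {}
for f in flows:
    queues.setdefault(classify(f), []).append(f)

# Two classes in use, regardless of how many flows arrive.
print(sorted(queues))   # ['best-effort', 'low-latency']
```

The state the backbone has to carry grows with the number of classes, not the number of flows, which is why aggregation scales where per-flow treatment cannot.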
A: Certainly when we're here in Korea, we're locally in a very different AS than all the servers that we talk to back home, and so on, and there's a whole lot of lack of coordination across AS boundaries. Very often, across AS boundaries, what you actually have is a layer-two link. You have, what, like a 100-gig link between service providers, which is fine in terms of bandwidth, but other than that...
A: You lose a whole lot at the IP layer in the treatment of the packets, and there's a whole lot of new things coming down the line related to that. It leads right into the topic, so anyway.
A: So, some of the challenges: if you take a look at the chart, that's from the NGMN 5G white paper; that's the source for this chart. It's talking about a whole lot of different applications that the folks who are designing 5G want to be able to run over the 5G mobile networks, and it talks about the requirements in terms of user experience.
A: The data rates are easy enough to meet, perhaps, but what's very hard is what you see in red, which is the M2M latency that they need to see in order for these particular applications to work. In some cases you have latencies under one millisecond in order for a particular application to work over 5G, and this is what the mobile folks are aiming towards. Then the other thing they're working towards is called network slicing, and there'll be a bit more on the next slide.
A: You can think of it as having your own private part of the 5G network that's supposed to be completely unaffected by anything anyone else does on the 5G network, with completely private data on it, and so on. And that's a very hard problem as well, because you have to have complete isolation of one user's traffic from another user's traffic while still sharing common resources in the network. So it's a very difficult problem, and that's why we spent two hours on Tuesday night talking about it, and that was really just scratching the surface.
A: The folks over in the mobile SDOs are spending a whole lot more time talking about this, and, as I said, M2M latency in particular is a very strong requirement for what's coming up in 5G.
A: So the work on 5G slicing is going to really require a whole lot of innovation in the world of quality of service.
A: Just because you have to have guaranteed services for the applications that are going to be running over 5G, and those require complete isolation from each other. And what I think the mobile folks are finding is that the current QoS mechanisms that we have in routers and switches today don't always meet the requirements that they're coming up with. So it's going to require a lot of innovation throughout the networking industry.
A: Not just in the mobile space and the radio space, but in the router space, because routers make up the backbone of mobile networks. As we all know, if you're doing mobile fronthaul and mobile backhaul, that's all implemented on routers, right? So that's going to require a lot of innovation in the router space in order to be able to implement the requirements coming out of 5G. Now, some other applications that we're seeing that people would like to be able to use:
A: A lot of people are into multiplayer gaming over the net, and the next extension of that is going to be virtual-reality-based experiences, whether it's single-person or multi-person virtual reality and augmented reality, and there are some very definite latency requirements that come out of being able to use virtual reality and augmented reality over the internet.
A: So in a lot of cases this is going to require not only thinking about what we do about latency, but also some intelligent work towards what's the right placement for the servers for the service, and routing the users' packets to the proper server, so they get the proper latency that they need.
A: So in cases like this you can imagine a whole set of distributed servers, and then part of the puzzle is not only what we can do in the network to reduce the latency between the users and the service, but also making sure that we route the users to the proper server as well. And of course another part of the puzzle is bandwidth, because you want to make sure that you have enough bandwidth that's basically guaranteed to the service, so that the user gets excellent quality of service in their headsets.
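The placement-plus-routing idea can be sketched as a simple selection problem: given measured latency and available bandwidth to each replica (all numbers invented for illustration), pick the closest server that still meets the application's bandwidth floor.

```python
def pick_server(replicas, max_latency_ms, min_bw_mbps):
    """Choose the lowest-latency replica that meets both constraints."""
    ok = [r for r in replicas
          if r["latency_ms"] <= max_latency_ms
          and r["bw_mbps"] >= min_bw_mbps]
    return min(ok, key=lambda r: r["latency_ms"]) if ok else None

replicas = [
    {"name": "seoul",  "latency_ms": 4,  "bw_mbps": 80},
    {"name": "tokyo",  "latency_ms": 12, "bw_mbps": 400},
    {"name": "oregon", "latency_ms": 95, "bw_mbps": 400},
]

# A VR session needing under 20 ms and at least 100 Mbps skips
# the nearest (bandwidth-starved) replica and lands on Tokyo.
best = pick_server(replicas, max_latency_ms=20, min_bw_mbps=100)
print(best["name"])   # tokyo
```

Note that if no replica satisfies both constraints the function returns `None`, which is the case where only network-side improvements (the subject of this talk) can help.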
A: Basically 4K video. Then we have other applications that are coming online that people would like to be able to bring into the wide area, like remote health care. That's becoming very important, and in some cases people are already attempting this, but they're really not doing it over the public Internet; they're doing it over private circuits, for example, just because at this point in time you really can't trust doing things like remote surgery over the public internet, right? But in the future...
A: If they could use the same internet pipe that's coming into their office for this, as well as for just the record keeping and so on and regular internet access, that would be a big boon. It would certainly cut down on their costs, and that would be a big win. Factory automation, that's something that's being done an awful lot already, right?
A: Now, this is not the first time, of course, that people have been looking into this problem. In 2013 the Internet Society held a workshop on latency; that's three years ago now. They basically held a Reducing Internet Latency workshop, and what they produced was a taxonomy of the major sources of latency in the internet. Basically, there's processing delay: computational, translation, forwarding, encapsulation and decapsulation, NAT, encryption, compression, error coding and so on.
A: Then you have multiplexing delays needed to support sharing: shared channel acquisition, output queuing and so on. And then you have grouping, which is reduced frequency of control information and processing: packetization, message aggregation. The workshop report is online, and you're certainly free to check it out anytime you like. Oh, and by the way, you don't have to write down stuff like this.
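The taxonomy above can be read as a simple additive budget: one-way delay is the sum of processing, multiplexing (queuing) and grouping components plus propagation, which makes clear why no single point solution recovers everything. A back-of-the-envelope sketch with invented numbers:

```python
# Invented per-path delay components (milliseconds), following the
# ISOC workshop taxonomy: processing, multiplexing, grouping.
budget = {
    "processing":   0.20,   # forwarding, encap/decap, NAT, crypto
    "multiplexing": 1.50,   # channel acquisition, output queuing
    "grouping":     0.30,   # packetization, message aggregation
}
propagation_ms = 5.0        # distance-bound: physics, not design

one_way = sum(budget.values()) + propagation_ms
print(f"{one_way:.2f} ms")  # 7.00 ms

# Queuing dominates the controllable part of the budget, which is
# why queue management and resource reservation get so much attention.
```

Propagation is the "fundamental limit" the report mentions; the other three terms are the "considerable capacity for improvement."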
A: We're going to be making these slides available. They're not yet online, but we will be putting them online as part of the IETF proceedings, so you'll be able to go to the proceedings page and get access to the talks; you can get all the links and everything else. So anyway, the workshop report concluded, basically, that there are fundamental limits...
A: This was in 2013: fundamental limits to the extent to which latency can be reduced in the Internet, but also considerable capacity for improvement, and it's that capacity for improvement that I'd like to talk about. The report called internet latency a multi-faceted challenge throughout the system, and it certainly is, but as it says, there's this capacity for improvement, and that's what we're going to be looking at. They also said that standardizing a definition of access network latency, so that latency can be used, like bandwidth, as a unit of commerce, turns out to be a hard problem, and that's something actually that the Broadband Forum is looking at now; I'll be talking about that in a moment. So, what's changed since the report in 2013? Well, there's a whole lot of point solutions going on, and I'll be talking a lot about those point solutions.
A: Low latency is so crucial that now we're finding that content as a whole is much more distributed than it was even three years ago. Now we have CDNs; we have service providers like Verizon who are taking all their video content and putting it into their access network, very close to their customers, and so on and so forth.
A: Basically, the service providers who provide the content are moving the content as close to the customer as possible, and that's especially true in things like multi-user gaming, where you need low latency, and in other applications where you basically want to be able to provide some amount of bandwidth or quality of service for the content that's coming to the user, so that the user has a good experience as they consume the content.
A: There are also management systems which need access and visibility into network latency characteristics, to really intelligently place the flows going through the network, so that the best server is utilized for each individual user. And, of course, this is much easier with stored content than it is with dynamically rendered content such as virtual or augmented reality or gaming and so on, although there's been some amount of work starting on that as well.
A: There have been advancements and innovations at various layers, exploring technologies to meet low-latency requirements. At the upper layers there's QUIC; there's the QUIC working group here, which I think is some really innovative work from Google. If you're not really familiar with it, check it out. What they basically did is take a more systematic view of what the latencies are in bringing up a web page in your browser, and they said, well, gee...
A: There are a couple of components of latency in bringing up a web page. One of them is actually getting the TCP connection open, and another is getting security going for HTTPS, right? So they said, well, rather than first getting the TCP connection open with the handshake and then doing the security handshake, let's combine them, and that's QUIC. So basically what they do is they have a new transport protocol which combines the security and transport aspects and runs over UDP, and it's an innovative solution to the problem. But it's a point solution.
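The saving QUIC goes after can be stated as round-trip arithmetic. A simplified sketch, assuming TLS 1.2 over TCP as commonly deployed in 2016 and ignoring TCP Fast Open and session resumption:

```python
# Round trips before the first HTTP request byte can be sent.
TCP_HANDSHAKE = 1        # SYN / SYN-ACK / ACK
TLS12_HANDSHAKE = 2      # two full round trips in TLS 1.2

https_over_tcp = TCP_HANDSHAKE + TLS12_HANDSHAKE  # run sequentially
quic_first_contact = 1   # transport + crypto combined in one exchange
quic_repeat = 0          # 0-RTT once the server's credentials are cached

rtt_ms = 50  # invented wide-area round-trip time
saved = (https_over_tcp - quic_first_contact) * rtt_ms
print(saved, "ms saved")   # 100 ms saved
```

On a long path, cutting the setup from three round trips to one (or zero on repeat connections) is a large fraction of perceived page-load latency, which is why this point solution matters despite touching nothing below the transport layer.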
A: But it's a nice point solution to one of the problems that people see. Another one is L4S, which is taking a look at TCP and seeing what changes we can make in TCP, but it's really aimed at a particular point solution, and as it stands it's more aimed at data centers than anything else, so it's not really meant for wide-area use. At the network layer we also have an RFC on IP/MPLS hardened pipes; that's really more intended for business service networks, but it's there as well.
A: There's latency-optimized router design, which is something you don't really standardize, but it's something that certainly a lot of the router vendors are working on, and the Broadband Forum's Broadband Assured Services, which I'll have a couple of slides on coming up. Then at the link layer there's work going on here called DetNet. DetNet really is aimed at deterministic delay rather than particularly low delay; making it deterministic cuts down on the jitter, or variation in delay, and it's also meant for particularly constrained environments.
A: It's not meant for use on the wide-area Internet. There's IEEE 802.1 Time-Sensitive Networking, which is again taking a link-layer view of what can be done for time-sensitive applications over Ethernet. There's Flex Ethernet, which is joint work by the IEEE and the OIF, and which basically takes a TDM-like approach to Ethernet: you still use Ethernet framing, but in the transport of the Ethernet frames you actually allocate TDM-like circuits, and so you're able to provide bandwidth guarantees for Ethernet.
A: But again, this is just an Ethernet-layer solution. And then 3GPP, with regards to mobile, has started multiple projects basically related to reducing latency in the radio access networks, in the mobile access core and in mobile backhaul, but again, that's really aimed at the mobile solution. With the latest technology advancements in packet networks, it's really becoming feasible for us to partition access network resources and dedicate resources to flows that need deterministic latency.
A: That's something that we'll be looking at a little more closely as well, and it's really no longer impossible, as we thought three years ago, to have solutions to the hard problems of the access network.
A: And that brings me to the work in the Broadband Forum, which is looking specifically at access networks. They have a project ongoing called Broadband Assured Services, actually Broadband Assured IP Services, which is being called BAS, in its innovation area and work areas.
A: The whole point of this work is that they are also recognizing that there are new requirements for dynamic, high-speed, on-demand services for broadband access network customers. Some of the applications that they've been paying a lot of attention to are interactive videoconferencing, interactive gaming on new platforms, high-quality 4K and 8K content delivery, 5G mobile, and secure vertical-market applications such as finance, health care, which we already talked about, and government.
A: They're also looking at the emergence of particular technologies for use in access networks, such as SDN; G.fast, which you can think of as a particular type of copper- and fiber-based, really fast DSL; NG-PON, which is the next generation of passive optical networking; mobile 4G and 5G solutions used for broadband access; and other new access technologies that bring down infrastructure costs for access network providers and also enable new service offerings. And in the BBF there's really a lot of service provider participation.
A: The participation has been very strong; there's network service provider support and participation in this work, and also from their customers as well, for supporting new services and applications over and above their current IP service offerings. So the BBF is exploring on-demand, performance-assured IP services, with cloud services, data center interconnect, and fixed and mobile convergence being their initial focus for the work, and then getting into a lot of the other applications as well. So the scope of the work is actually quite large.
A: So if there's something that needs to be done at the IP layer, they'll be coming to us to work with us on it. If changes are necessary, they expect to partner with other SDOs, such as the IETF, and the IEEE as well, as required, to jointly improve the technology. There's also Flex Ethernet, which I talked about a bit before. Basically, this was a joint project between the OIF and the IEEE to give TDM-like reliability and performance guarantees to Ethernet traffic.
A: But again, as I said, it's a point solution. The IETF also has work: I already talked about the work that's being done in L4S, which is looking at queuing delay and latency, but it's really, as I said before, a point solution. QUIC I talked about already, and DetNet I already talked about as well. But it's important just to recognize that we already have work going on in the IETF, and we want to make sure that that continues.
A: What we're really seeing as a result of this is a lot of point solutions in multiple SDOs: from the link layer, with Flex Ethernet interconnections and FlexE channels, MPLS-based slicing and adaptive queue management, and DetNet; working up through the network layer to 5G network slicing and L4S; and then at the application layer with things like QUIC and so on. But what we really want to see is more system-level thinking about the problem.
A: We have point solutions, but I think what's more important than just point solutions is trying to look at an overall system approach to latency: what we can do to think about this as an end-to-end problem, not just as a series of point solutions being done in various places.
A: So we've been doing some thinking about this inside Huawei, and what we're doing is a bit of experimentation that we're calling SEAN for now: Service Experience Assured Network. What we're trying to aim for, in particular environments, is ultra-low latency and jitter, zero packet loss, constant bandwidth assurance, and on-demand, zero-wait deployment, and we're basically doing that by taking a more cross-layer approach to the problem and looking at, okay...
A: What are the problems at each of the layers, and what can we do at each of the layers? So, for example, for service deployment, looking at SDN-based solutions as a possibility to manage end-to-end service and signaling across various network domains via protocol extensions. Then, what can we do within a particular network, such as network resource allocation and reservation for particular large flows, and algorithms for network resource and delay planning? And then, within a single router, what can we do there?
A: You can build an ultra-low-latency forwarding architecture in routers, optimizing packet processing, for example; improve your traffic management capabilities for low latency and zero packet loss; and do efficient service classification within the routers. And then at layers 1 and 2 we can make use of Flex Ethernet interconnects between the routers, optimized Wi-Fi, and next-generation low-latency access technologies.
A: Now, this right here is a demo that we're working on putting together based on virtual reality. We didn't get it done in time for Bits-N-Bites tonight, so Bits-N-Bites tonight is going to be focused on SFC, but we hope to bring this in for the next Bits-N-Bites. So in Chicago we're hoping that we'll be able to bring this in and show it to you.
A: Basically, what we'd like to do is further development of new technologies to achieve low latency end-to-end over wide-area packet networks; to enable more latency-sensitive applications; to better utilize the technologies we already have; and to do more system-level thinking for technologies that we don't yet have, with work being done both here and in other SDOs as well. So one of the possible things that occurred to us is a workshop where people can just get together for a couple of days.
A: Talk about their own aspects of it from their particular points of view, and see what we can do about some system-level thinking, to have more effective solutions than just the point solutions that we've been talking about so far. We really want to focus on cross-layer interactions and system-level issues, perhaps leading to new standardization here or elsewhere based on the results, to answer...
A: Basically, the questions that you see here: is there advantage to be gained from taking a broader view of the different technologies at the different layers and what their interactions are? What are the effective interactions and coordination between upper and lower layers, such that we can make effective optimizations for latency-sensitive services? And given the larger issues that prevent end-to-end service guarantees, where can we most intelligently tackle the problem to get the biggest bang for the buck?
D: Obviously the topic of quality of service is not a new one, as you well know; we seem to be recycling it every five to ten years. I am wondering about the reasons that I think previous efforts have not been quite successful. This is not an easy problem, and there's not a single failure mode, but one that you can imagine is that you will have to worry about prioritized denial-of-service attacks.
D: If I manage to get access to that priority setting, I can obviously abuse it, which means you will probably have to charge extra for it, so that I don't keep using it, either deliberately or just because I'm greedy, right? Which now means that if I have that service as a consumer and one of my applications goes rogue, I don't just have a denial-of-service attack on my link, I also have a denial-of-service attack on my bank account. So I wonder if, as part of your SEAN exploration, you, or the Broadband Forum for that matter, have thought about how you can prevent exactly that, because that will probably stifle this; something like that will probably take people's enjoyment out of augmented reality.

A: Yes, absolutely.
A: Especially in the BAS work in the Broadband Forum, security is absolutely a part of the architectural work that they're doing there. I didn't have a chance to include everything, but absolutely, security is a part of it. As far as SEAN goes, as I said, it's in the early stages, but certainly security absolutely has some part there as well.
E: To follow up on that: you also mentioned as an obstacle the lack of DiffServ deployment across ASes. There is a draft on the next IESG telechat agenda that takes a first shot at that problem: draft-ietf-tsvwg-diffserv-intercon. It's co-authored by a colleague from DT, so it's real.

A: Okay. I'll have to read it.
E: Please, please, everybody, do read that, absolutely, thanks. Finally, I have a question for you. You talked about, in the VR and I think some other places, end-to-end reservation. As I'm sure you're aware, traffic aggregation QoS-wise is a headache; reservation aggregation, by comparison, is a nightmare.

A: Yes.
E: But really, right? (Someone in the audience shouted "money.")
A: Exactly. To be more serious: reservation is a problem, absolutely, I agree with you, but there may be other ways to solve it as well. One of the things that the BBF is actually looking into in terms of reservation is applications where you might actually do something like have a web form where you can...
E: I would also encourage people thinking about this to step away from the entirely too attractive E2E acronym and start figuring out where the actual problems are and how to solve them. You mentioned in particular that typically the access networks are the cause of headaches, not the backbones; the backbone operators have their backbones and backhaul links under control. Maybe, just maybe, an attack on reservation in the access network as a single domain might get somewhere.

A: Okay, cool, thanks.
F: One of your first slides contained the hourglass model, yes, and from an architectural point of view, the way I see it is: okay, so we are creating sort of these different slices; 5G calls them slices, other people have different terminology. Would you expect that we'll still be able to sustain that hourglass, or do we get a multi-stem hourglass, or would it be multiple hourglasses sort of glued to the same base? Where would we end up architecturally, do you think?
A: Well, I don't see a whole lot of change coming to the hourglass model, just because that's the way it is.
A: And in the future, as I said, people have talked about the hourglass moving up to HTTP, right? So you can think of it, maybe, as rather than an hourglass you have the top, then you have HTTP, then you have a widening, then another narrowing at IP, then it expands again. It may end up being a model that looks something like that in the future.
G: Linda Dunbar from Huawei. To address David Black's question about what has changed: end-to-end QoS has been a headache in the past, and there is an answer on the money side, I think. I just want to give a few more examples. Nowadays, if you just search the internet for latency monitoring services, you will see so many new companies popping up to provide those services. Why?
G: Because the content managers and the content distribution CDNs need that kind of exposure and visibility into the network, so that they can distribute the content more intelligently. And, as David said about end to end: end to end doesn't mean from here to a server across the globe. End to end more likely will be like the access domain, or something shorter, like multiple hops, and then being able to provide that kind of low-latency service becomes more visible. Okay.