From YouTube: IETF115-PANRG-20221110-0930
Description
PANRG meeting session at IETF 115
2022/11/10 09:30
https://datatracker.ietf.org/meeting/115/proceedings/
A: So... in, in Switzerland now, yeah, I'm gonna use the [unclear], I just... Switzerland. Now those are actually classified as weapons and you're supposed to give them to the companies, yeah.
B: If you would like to read it, you can download the slides and enjoy it. And just to remind you that it's here, and also that we are supposed to be nice to each other, which is what we're all trying to do, I'm sure. Housekeeping: it's not a big room, but it's still... the chairs would appreciate it if the in-person people would still join the Meetecho.

B: Remote speakers can probably present themselves, or just ask me to run your slides. We have a... actually, we have a minute taker, I guess, thanks. But Xing is remote, so if anyone in the room would like to assist... because I guess it might be beneficial if someone in the room also helps with the meeting minutes. Any volunteers for the worst part of every session?
B: So yeah, thank you very much. Yeah, we are supposed to wear a mask; I'll get mine on as soon as I'm finished with my coffee. You don't want me not caffeinated enough. Agenda: we actually do not have a really full agenda today, so we have plenty of time for discussion, for everything. So Brian will start today with giving you an update on SCION, and then we have a presentation about IPv6, wow, and it's not my fault this time, actually. And then we have a number of ALTO presentations.

B: Those were the guest speakers. So, any last-minute agenda revisions? Anyone who has a great idea they want to share with us this morning?

B: Okay, good. So, not much happened, to be honest. The path properties draft is now also in IRSG review; hopefully we'll get it published soon. So if you have any great drafts this working... this research group should adopt, please send them to the list. And with this, I guess, Brian will start presenting as soon as he finishes helping Anna. I am going to share the slides.
A: So, how we got to where we are with SCION: back at IETF 113 in Vienna there was a side meeting about SCION and then a presentation at the Routing Area Open Meeting, and the feedback that came from that was, like: neat, that's an architecture; we don't do architectures. So we decided to pull into PANRG the work of thinking about how to turn this into things that can be worked on more or less independently within the IETF.

A: So there are three drafts that were the basis for this discussion in PANRG. One is the overview draft, one is the component analysis. So I think it was at the first interim meeting that we had where the feedback was actually: can you look at how we can take the system and break it up into components, and figure out sort of what the interfaces are between those, as input to the decision of, you know, how to document those components. And we were happy to accept the work of having that discussion within PANRG. The rest of these slides here are largely an overview of what's going on in SCION.
A: Have a look at the overview analysis draft for that. The interesting slide is here. So, incorporating that feedback about doing the component analysis, three components pretty quickly fell out of that discussion, and these are... I don't want to use the L word, but these are almost layered on top of each other: there's the control-plane PKI, the control plane, and then the data plane.

A: Some of these are closer to things that could be standardized; some of them are open research questions; some of these are kind of maybe the SCION way of doing things, and maybe less interesting on their own within the IETF. But the feedback from that was to essentially take what's been written about SCION to date, right, the source content for the specifications, and actually bring that into Internet-Drafts, with the intent of publishing on the Independent Submission Stream, as sort of, you know, ETH's implementation of SCION, or the SCIONLab implementation of SCION. A very common thing: when work comes into the IETF, we don't really know what to do with it.
A: So, the SCION folks, if you have questions or anything, talk to them; they're over here. Corine and Nicola will be doing that over the winter, I guess we could say, and PANRG will remain a discussion forum for things about SCION until we look at those documents and figure out: oh yeah, this actually needs to go into this area open meeting for a possible BoF, or this needs to be brought into this working group, or this needs to be wholesale replaced with this thing that already exists in the IETF and can be very easily adapted to work with SCION, etc., etc.

A: So, in the meantime: questions, comments, etc. about SCION, we're happy to have those discussions on the PANRG list. I think we're not planning another interim because, like, you guys just have some work to do before you can come back, and we expect to have sort of a report out on that, and possibly a discussion about next steps in research.
A: So, if anyone has questions about what's going on in SCION, those really wouldn't be for me; this is just, you know, catching up people who weren't in the interims on what's happened. I guess we can take those and put Nicola and Corine on the spot. Otherwise we can move on to the next presentation.
D: So, good morning. I'm Maxime Piraux from UCLouvain, and I'm going to present something that was a really collaborative work within the lab. I'm a PhD researcher, my advisor is Olivier, and also colleagues from UCLouvain. So we've been reconsidering what IPv6 can do, what roles it can play. Next slide, please. So our kind of thought experiment started by looking at how IP addresses are used today. We've been looking, obviously, at IPv4, and in IPv4 one address usually identifies a network interface, and so a host that has multiple network interfaces has one IPv4 address per network interface, but not much more than that, due to several reasons. The kernel is not really great at handling multiple IPv4 addresses on an interface; that's one reason. The second reason is also that we've been accustomed to the limited IPv4 address space, and this is a big constraint.
D: So when IPv6 came in, it obviously alleviated this constrained-space problem by bringing many more addresses, but the stance... So, this is a position paper, an editorial paper published at CCR, and we tried looking beyond that. Okay, we now have many more addresses, but do those addresses bring more possibilities for end hosts, or in the network? And so we essentially tried to look: okay, can we do better than one IP means one interface? And one of the findings, which goes hand in hand with one of the strengths of the lab, is that multipath transport protocols really help in that endeavor. So, next slide, please. You're probably wondering what those roles are, and if you have a quick glance at the paper you'll probably see a beast like that, and you're like: okay, so what are those roles? This is a nice meme, courtesy of one of my colleagues, Louis, that I took for the slide. Next slide.
D: So actually we looked at many, many aspects that can be improved thanks to IPv6 addresses. We looked at privacy, load balancing, segment routing, differentiated services or differentiated routing, and multicast. Next slide. So I want to start by addressing the elephant in the slide, let's call it; so let's talk about multihoming. So, in IPv4, when an AS is multihomed, so here it's AS1, its two providers are AS2 and AS3.
D: What is usually done is that AS1, first, since it's an AS, requests an AS number, and it will announce its prefix. It has one prefix that is announced over the two links, and there are many, many tools and many, many tricks in BGP to do that and to be flexible; this is a whole domain in its own right that I'm not that knowledgeable of. But we've been looking at what the effect of this is. The effect is that actually a lot of the ASes are stub ASes: they don't have clients, they're just reaching the Internet through one of several providers. And when you look at the quantity of BGP messages that travel over the Internet, you can see that roughly half of them are messages coming from stubs. So stubs are really putting a strain on BGP in the Internet, and we've been looking at this proportion for the last 10 years.
D: I think this is data from the paper, and it's roughly constant, and it's also roughly constant between IPv4 and IPv6, so they're very likely used the same way, actually. Next slide. So obviously the IETF is aware of this problem, and this is one of the latest attempts that I know of, obviously. Next slide.
D: So in IPv6 the proposed solution is a bit different. The first striking difference from IPv4 is that the bottom network is no longer an AS; here we call it an enterprise, because it could be an enterprise that is interested in being multihomed for resiliency and that kind of stuff. So here it's no longer an AS. What this enterprise does is that it receives provider-aggregated prefixes from each of its providers: it will receive a blue prefix coming from AS2 and it will receive a yellow-greenish prefix from AS3, and it will distribute those two prefixes inside the network, and each of the devices, here we have a smartphone, will receive one IP from each of the prefixes. The real advantage of this solution is that it doesn't require BGP.
D: So BGP is less used for solving this problem, but it also has a really cool feature, which is that the device can choose which address to use. Next slide. And so, when the device has a multipath transport protocol and those two addresses, then it can really do cool stuff, such as quickly reacting to a provider failure: because it's monitoring how the connection is going through one provider, it can migrate the connection to the other provider. It could also dynamically choose the best provider, and the best provider could be defined by the application. One application needing high bandwidth could migrate from one provider to the other, hoping that there is more bandwidth on the other, or could migrate to a lower-latency provider if lower latency is required. But there are many, many combinations that we can do. We can also use the two providers together: redundantly, using FEC, using, I don't know, plain redundancy, or in an aggregated fashion.
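The failover and provider-selection logic described here can be sketched in a few lines. This is a minimal illustration only, not the paper's implementation: the prefixes, the RTT numbers and the liveness bookkeeping are all invented, and a real multipath stack (MPTCP, Multipath QUIC) would make this choice below the application.

```python
import ipaddress

# Hypothetical provider-aggregated prefixes delegated to the enterprise
# (one per upstream provider, like the slide's blue and yellow prefixes).
PROVIDER_PREFIXES = {
    "AS2": ipaddress.ip_network("2001:db8:a::/48"),
    "AS3": ipaddress.ip_network("2001:db8:b::/48"),
}

def host_addresses(iid: int) -> dict:
    """One address per provider prefix for the same host (same interface ID)."""
    return {asn: prefix[iid] for asn, prefix in PROVIDER_PREFIXES.items()}

def pick_source(addresses: dict, path_alive: dict, rtt_ms: dict) -> str:
    """Prefer the lowest-RTT provider among those still passing liveness checks;
    a multipath transport can migrate the connection when this choice changes."""
    candidates = [asn for asn in addresses if path_alive.get(asn)]
    if not candidates:
        raise RuntimeError("all providers down")
    best = min(candidates, key=lambda asn: rtt_ms.get(asn, float("inf")))
    return str(addresses[best])

addrs = host_addresses(iid=0x1234)
# Normal case: both providers up, AS3 currently has lower latency.
src = pick_source(addrs, {"AS2": True, "AS3": True}, {"AS2": 30, "AS3": 12})
# Failure case: AS3 goes down, the connection migrates to the AS2 address.
fallback = pick_source(addrs, {"AS2": True, "AS3": False}, {"AS2": 30, "AS3": 12})
```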
D: Next slide, please. So, to benefit from that, I thought it might be good to look at the status of multipath transport protocols within the IETF. There are a bunch of multipath transport protocols with different sets of features; most of them are overlapping, but all have their particularities. The first I thought of is SCTP, for which the work is still ongoing through this draft, and SCTP today is deployed mostly for WebRTC. The second multipath transport protocol is MPTCP, which has a new RFC for its version 1, and today MPTCP is the most largely deployed, in the sense that it's available in mainline Linux since 5.6, and it's apparently available in many Apple devices as well.
D: Then, going further in the timeline, closer to us, looking at QUIC: I put QUIC in the multipath transport protocol category because it has a feature or two that relate to multipath transport, but it also has a constraint, which is that you can only actively use one path at a time. But QUIC has large-scale deployment today, and the following work in that direction is Multipath QUIC, which is still an ongoing effort and enables the use of several network paths. Next slide.
D: So we've been involved in QUIC for some years, and we've been looking at how QUIC can be used on multihomed servers. We've been looking at a multihomed client in the example, but it could also be a multihomed server that would likewise want to be connected to several providers, for failure resiliency or for redundancy and all those purposes. When looking at QUIC, there is a bunch of features that are useful. QUIC enables the client to change local addresses; this is called connection migration in RFC 9000. The server has a mechanism to defer the client to another address just after the handshake, and that's called the server's preferred address. And then there is this ongoing Multipath QUIC, which would enable the use of several network paths. But there is a hole in all that, which is that QUIC v1 lacks a way for a server to announce, or to advertise, additional addresses that relate to the connection. Next slide, please.
D: And so we've proposed a small extension for that. I'm not going to discuss in detail what it's about, but if you're interested in using QUIC on multihomed servers, this is something that would likely interest you. Next slide, please. Okay, so let's look now at the other animals. I'll take some of them here;
D: there is more in the paper, but let's look at privacy, load balancing and segment routing, and try to reconsider how IPv6 addresses can help those areas. Next slide, please. So, for clients, there is something that is well known, RFC 8981, which defines temporary addresses: a device receives a prefix from the network and then fills in the remaining bits with temporary addresses with a limited lifetime. Here in the example we have an IPv6 client that uses a temporary address, IP A, and then establishes several flows towards the Internet with it. What the RFC, or the implementations, do is that as long as those red flows remain, IP A is still used; and then at some point IP A expires and a new IP comes up, this is IP B, the blue one. But IP A will remain as long as there are flows that use it, and non-multipath transport protocols cannot migrate.
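The RFC 8981 mechanism being described, a stable prefix plus a short-lived random interface identifier, can be sketched roughly like this. This is an illustration of the idea only: the prefix is made up, and real implementations derive, time out and retire these addresses inside the kernel, with duplicate-address detection and reserved-IID checks omitted here.

```python
import os
import ipaddress

PREFIX = ipaddress.ip_network("2001:db8:1::/64")  # prefix learned from the network

def temporary_address() -> ipaddress.IPv6Address:
    """Fill the low 64 bits of the /64 with a random interface identifier,
    in the spirit of RFC 8981 temporary addresses (heavily simplified)."""
    iid = int.from_bytes(os.urandom(8), "big")
    return PREFIX[iid]

ip_a = temporary_address()
ip_b = temporary_address()  # generated later, when ip_a's lifetime ends

# Existing flows keep using ip_a until they finish; new flows use ip_b.
assert ip_a in PREFIX and ip_b in PREFIX
```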
D: Next slide, please. Then, on the server side, there has recently been a proposal for adopting kind of a moving-target defense with IPv6; this is kind of fun. It's called "Choi Opera" [phonetic]. This is not work from the lab, but I took the idea just to explain it simply. What it proposes is to use a temporary IP for the server, within a prefix, and this temporary IP is determined cryptographically: given a shared key, a timestamp and a salt that you choose, you get a random IP value, and by random I mean cryptographically random. And so if you share this key with the client by some means, it could be DNS for instance, then you're able to hop from one IP to another, and that kind of hides your server from scanners, from all those annoying researchers that try to understand what your service is doing, and that kind of stuff.
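The cryptographic derivation just described (shared key, timestamp and salt mapped to an interface identifier) can be sketched with standard primitives. This is a guess at the general shape, not the cited proposal's actual construction: the HMAC choice, the 30-second hopping window and the prefix are all assumptions for illustration.

```python
import hmac
import hashlib
import ipaddress

PREFIX = ipaddress.ip_network("2001:db8:5::/64")  # server's announced prefix

def hopping_address(key: bytes, salt: bytes, now: int, epoch: int = 30):
    """Derive the server address for the current time window. Client and
    server share (key, salt), so both compute the same address independently."""
    window = now // epoch  # address changes every `epoch` seconds
    digest = hmac.new(key, salt + window.to_bytes(8, "big"), hashlib.sha256).digest()
    iid = int.from_bytes(digest[:8], "big")  # becomes the low 64 bits
    return PREFIX[iid]

key, salt = b"shared-secret", b"svc1"
a1 = hopping_address(key, salt, now=1000)
a2 = hopping_address(key, salt, now=1010)  # same 30 s window: same address
a3 = hopping_address(key, salt, now=1031)  # next window: the server has hopped
```

A scanner without the key sees only one short-lived address out of the 2^64 in the prefix, which is the point of the defense.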
D: Please... So, we've also been looking at load balancing, and we've been doing a simple prototype to try to see. So, today's servers are very, very likely multi-core, and we started thinking: okay, does it help to put one IP per CPU core? So, instead of having one IP on the network interface, you would have several IPs, and each of the IPs corresponds to a CPU core. And we've been doing an experiment with QUIC: we had a setup in the lab with QUIC and 128 clients that make repeated requests, so a lot of requests come in, every time, for a certain amount of time, and we've been looking at how assigning one IP per core helps the load balancing. In the graph there are two box plots: one for one IP per core, and then one for a single IP, which is a single IP on the network interface.
D: And we've been observing that one IP per core could be a nice way to balance the load of incoming clients. The reason for that is that in the single-IP case, when the NIC receives a packet, it spreads the flows based on a hash: it takes the packet header, does the hash, and then decides, okay, I'm going to assign this new incoming flow to this core, and this is a costly, time-consuming mechanism. In the one-IP-per-core case, the clients use the DNS to select one of the server IPs, and so this one stage of load balancing is kind of offloaded to the client, and then on the NIC it becomes much simpler, because it's simply about looking at the IP and, based on the IP, choosing the core.
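The two dispatch strategies being compared can be sketched side by side. This is a toy model of the idea only: real NICs do the hashing in hardware (RSS), and the four-tuple, core count and addresses here are invented.

```python
import ipaddress
import zlib

CORES = 4
# One-IP-per-core scheme: each server address names one core.
CORE_ADDRS = [ipaddress.ip_address(f"2001:db8::{c}") for c in range(CORES)]

def core_by_hash(src: str, sport: int, dst: str, dport: int) -> int:
    """Single-IP case: hash the flow's four-tuple (RSS-style) to pick a core."""
    key = f"{src},{sport},{dst},{dport}".encode()
    return zlib.crc32(key) % CORES

def core_by_dst(dst: str) -> int:
    """One-IP-per-core case: the destination address the client resolved via
    DNS directly identifies the core; no per-packet hashing is needed."""
    return CORE_ADDRS.index(ipaddress.ip_address(dst))

# A client that resolved the third server address always lands on core 2.
assert core_by_dst("2001:db8::2") == 2
# The hash-based scheme also spreads flows, but the mapping is opaque
# to clients and must be computed per new flow.
h = core_by_hash("2001:db8:f::1", 50000, "2001:db8::1", 443)
assert 0 <= h < CORES
```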
D: So this is one step, but we could go further. Like many of the load-balancing works from academia that you probably know of, we could have some sort of mechanism to spread the load further, since not all workloads are the same. In our experiment here the workloads were the same, so it's kind of a toy example. But if you want to spread the load further, you could use multipath transport protocols to transfer flows to a different IP, meaning transferring the flows to a different core, and that helps spread the load. Next slide, please.
D: Lastly, one idea that sits in the network rather than on the host: we've been looking at ideas on the host in this presentation, but there are more ideas for the network in the paper. One of the ideas is combining IPv6 prefixes and segment routing domains, and basically putting all the related services inside one IPv6 prefix. This is kind of a simple idea, but it has the nice effect that, for incoming traffic, you're able to route the traffic through service chains just based on the destination IP. Next slide, please.
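The idea, grouping the services that share a chain under one IPv6 prefix so that ingress routing needs only a longest-prefix match on the destination, can be sketched as follows. The prefixes and chain names are invented for illustration.

```python
import ipaddress

# Hypothetical mapping: each service chain owns one prefix; every service
# instance belonging to that chain gets an address inside it.
CHAINS = {
    ipaddress.ip_network("2001:db8:100::/48"): ["firewall", "dpi", "web"],
    ipaddress.ip_network("2001:db8:200::/48"): ["firewall", "video-cache"],
}

def chain_for(dst: str):
    """Ingress decision: a longest-prefix match on the destination address
    selects the whole service chain; no per-flow state is needed."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in CHAINS if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return CHAINS[best]

assert chain_for("2001:db8:100::42") == ["firewall", "dpi", "web"]
assert chain_for("2001:db8:200::7") == ["firewall", "video-cache"]
```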
D: And so, that's some of the ideas we've been exploring in the paper. But really the key message of our position paper, and of this presentation, is that with IPv6 we have a lot more addresses to play with, and the only way to use them isn't necessarily to use more devices: we could simply reconsider how they're used on the device, and there is much more to do than just assigning one IP to each interface. There could be a lot more thought in this direction, to try to explore what all this means, and obviously multipath transport protocols are quite useful on the end hosts to be able to leverage all those possibilities. So, if you're interested in developing those use cases with multipath transport protocols and several IPv6 addresses, just reach out to us, myself or Olivier, or share any thoughts you have. Now, I think that's the end of my presentation; I'm ready to take questions.
B: First, I'm using my chair power and putting myself in the queue. Thank you very much, very interesting; glad to see that someone else is trying to solve this multihoming-without-BGP-with-IPv6 stuff. One comment and one question... actually, two comments and one question. First of all, yes, IPv6 is basically already like that: by definition, almost any IPv6-enabled host already has multiple addresses, right? Link-local, stable, privacy, multiple prefixes, right, and so on.

B: You might be interested to go and look into a v6ops thread going on, because I started some fire there by noticing that, while it's all great, operationally, if you start using multiple addresses, we now hit some hidden limitations which vendors tend to put on us. And some people in v6ops might actually find your presentation quite scary, because, as has been correctly mentioned, multiple addresses come with a cost to the network, because, yeah, you need to have various types of caches on your network devices for every address.
D: So we're pretty early in the process of experimenting, so the quick answer is no, but we're looking to continue that work, and this is definitely the kind of question we would like to answer. We thought about how we could propose to the students on our campus to do experiments at their homes, and really have a lot of people toying with this on a practical level, and see what the drawbacks of having multiple addresses are: how is the OS handling that, and that kind of stuff. So, definitely; you talked about v6ops, and we are also interested not just on an academic level, but in: okay, today, what are the drawbacks, what are the obstacles in the way of being able to use multiple addresses? Because I don't think many people have experience with that, with practical deployments and stuff like that.
E: That's me, hi. Thank you very much for the presentation, it was very interesting. My question is more about: are you just interested in developing use cases and, in a certain way, how you deploy and use IPv6 addresses, or do you think we can go further and actually revise the IPv6 addressing model?
D: Somehow... so, giving such advice is not an easy position. My personal interest is rather in transport protocols; that's also my project. But really, in the paper, the discussion goes as far as: do we need ports, or is the IP address alone enough? Can we assign one IP address per process, and does that work? Obviously, from an operational point of view it might seem complicated, but the concrete step for us is experimenting with all that, rather than, like, proposing a new IPv6 addressing model. And I know there's been an addressing discussion in the IETF going on, but I have not looked at it.
D: Certainly, if you want...

E: ...to learn, then I can send you a couple of pointers. We...
F: Hi. So, from the prior question I figured that you are more interested in transport, and that's the sort of focus. So, have you looked at anything underneath the transport layer, at the network layer for example? Because at the end of the day it comes down to an architectural naming issue, where there is, you know, a clear sort of semantic overloading of IP addresses, right? The IP address is considered harmful, and, you know, this kind of paper has shown that. So my point is, it might be interesting for you to look at the cohesion of, say, the network layer, sort of tidying up the naming architecture approach, at the same time as you look at the transport protocol.
F: You know, something like an identifier/locator split, a network-protocol-aware transport protocol, might be something that might be fun to explore. During my PhD I worked on that sort of area, so it might be good if we take this offline and chat more later, if that's...
D: Yeah, definitely, yeah. I talked about transport, but we like root-cause analysis in our lab, so definitely, if the network layer is involved and there are some impediments in the network, we're interested to know them. We're not just sitting at the transport layer. But yeah, definitely.
B: So, the idea of, like, an address per application and using v6 addresses instead of ports: it's actually quite interesting. One thing, I think: if we want to go in that direction, we would need to change the deployment model, specifically go to something like a /64 per host, because all these addresses come, as I said, with a price in the network, and current networks would probably not survive, unless you go to something like what 3GPP is doing, where from the network perspective it's still just... the networks are designed to deal with routes, yeah. And if you go to a /64 per host, and you give the host the whole /64 to do whatever it wants with, then those will be host limitations, not network limitations, yeah.
D: No, very interesting comments. So yes, the intent was to give a /64 to the host, and I also just wanted to say that our knowledge is kind of textbook IPv6: the network courses are all IPv6, so that's the only thing we know, from our university at least. But then, yeah, our understanding is kind of: IPv6 defines prefixes, prefixes are routed, only routes to prefixes exist. But then in reality there might be subtle things that we don't know of, and I'm pretty sure Olivier has some insight about that. But, like, the exploration of thoughts started from there, from this context, yeah.
H: Right, so yeah, thank you very much. I'm Jordi Ros from Qualcomm and I'm going to be talking about bottleneck structures. I have a couple of presentations. The first one is, first, the basics of bottleneck structures, an introduction and some of the use cases, and I'm going to talk a little bit also about production deployments. And then the second presentation, after it, is going to be about how to compute bottleneck structures with partial information, in the context of multi-domain or multi-autonomous-system networks, yeah.
H: So, next slide, yeah, thanks; next one. So yeah, you know, I'm not going to go into the math, but basically you have papers: there is a SIGMETRICS paper and an extended technical report also, and then a couple of drafts that you can read. Currently the discussion has been hosted in the ALTO working group, but I'm here more in the research group; it's also [unclear], applications of bottleneck structures could be within other working groups and research groups as well.
H: So, next up, yeah, thanks. So, to start the introduction, I'm going to pick on one specific communication-network problem, which is the problem of congestion control. The prevailing view in the problem of congestion control has been based on this notion that the performance of a flow is uniquely determined by its bottleneck link. This sort of view goes all the way back to, you know, Jacobson's famous paper from 1988, which proposed the first congestion control algorithm, with this diagram showing the notion of the bottleneck link, and which literally saved the Internet from congestion collapse. Now, next slide. Well, this is a true statement, that the performance of a flow is uniquely determined by its bottleneck link.
H: What we find is that there is a more fundamental, or hidden, story behind it, and the analogy that we like to build to understand what a bottleneck structure is, is that if the problem of congestion control were an iceberg, the notion that the performance of a flow is uniquely determined by its bottleneck link would be the tip of the iceberg. What's underneath, the submerged part, is what we call the bottleneck structure, which is latent but reveals the system-wide performance of the communication network, right? So we'll see what that means in the next slides, yeah. Next. Now I'm going to use an example to illustrate what a bottleneck structure is. I'm going to start with a communication network, and in this example circles are links. So we have four links, link 1 through link 4, each one with a capacity, c1 through c4, and then lines are flows traversing the links. So we have six flows, each one with a color, flow 1 through flow 6. So, for instance, flow 1 traverses link 1 only; flow 6 traverses link 1, link 2 and link 3; and so on.
H: Okay, now, next slide. I'm going to fast-forward and forget about the math, and I'm going to show you what the bottleneck structure is and how to interpret it. This is the bottleneck structure of this communication network configuration, and the way to read it is as follows. White vertices correspond to links, so we have link 1, link 2, link 3 and link 4, our four white vertices; colored vertices are flows. If there is a directed edge from a link vertex to a flow, it means that that flow is bottlenecked at that link. So in this case, flow 3 is bottlenecked at link 1, because there is a directed edge from link 1 to flow 3. And the other relationship is as follows: if there's a directed edge from a flow to a link, then it simply means that that flow traverses that link, right?
H: So in this case, we have that flow 3 traverses link 2, because there is this relationship, and that's actually true: flow 3 traverses link 2 and it traverses link 1. So this one is bidirectional, but flow 3 is bottlenecked at link 1, so there is also the directed edge from link 1 to flow 3, and so on. Okay, so those are the core relationships described by the bottleneck structure, and you can start thinking about what it is telling us: it's telling us where flows are bottlenecked, and more information.
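The construction the speaker skips ("forget about the math") can be sketched with the standard max-min water-filling argument: repeatedly freeze the link with the smallest fair share, mark its unfrozen flows as bottlenecked there, and add link-to-flow edges for bottlenecked flows and flow-to-link edges for the other links a flow traverses. The topology below is invented, since the slides' exact figure isn't in the transcript; this is a sketch of the technique, not the authors' actual code.

```python
def bottleneck_structure(links, flows):
    """links: {link: capacity}; flows: {flow: [links traversed]}.
    Returns (max-min fair rate per flow, set of directed edges)."""
    rate, remaining = {}, dict(links)
    active = {f: set(ls) for f, ls in flows.items()}
    edges = set()
    while active:
        # Fair share each still-loaded link can offer its unfrozen flows.
        share = {l: remaining[l] / n
                 for l in remaining
                 if (n := sum(l in ls for ls in active.values()))}
        bl = min(share, key=share.get)  # the tightest link freezes first
        for f in [f for f, ls in active.items() if bl in ls]:
            rate[f] = share[bl]
            edges.add((bl, f))  # link -> flow: f is bottlenecked at bl
            edges.update((f, l) for l in active[f] if l != bl)  # flow -> link
            for l in active.pop(f):  # this flow's bandwidth is now fixed
                remaining[l] -= rate[f]
        del remaining[bl]
    return rate, edges

# Invented toy example in the spirit of the talk's 4-link / 6-flow figure.
links = {"l1": 10, "l2": 30, "l3": 30, "l4": 40}
flows = {"f1": ["l1"], "f3": ["l1", "l2"], "f6": ["l1", "l2", "l3"]}
rate, edges = bottleneck_structure(links, flows)
```

Here all three flows share link l1, the tightest link, so all are bottlenecked there; f3 and f6 additionally get flow-to-link edges for the other links they traverse, exactly the bidirectional pattern described for flow 3 above.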
H: So, if you want to click next, thanks. Now let's see how we can start interpreting this. One of the things that the bottleneck structure provides is how perturbations propagate. If there's a perturbation on this link, on link 2, say, maybe its capacity is changing or something, the bottleneck structure tells us how it propagates. If you want to click on next, thanks. And it tells us that the propagation of this perturbation will proceed according to this graph, and what that means is that it's going to have an effect on flow 4, flow 2 and flow 5, and then link 3, but it's not going to have an effect on flow 1, flow 3 and flow 6, because there is no directed path from link 2 to any of those other regions of the network.

H: So let's see how it works. If we say there's a perturbation on link 2, maybe a change of capacity, then that's actually going to have an effect on flow 5. Notice that flow 5 does not traverse link 2, but it's sort of interconnected, and the perturbation is going to propagate, right? We can also see other kinds of information; it works in reverse too. So, for instance, this tells us that actually, if I perturb flow 1 somehow, maybe, let's say, I put a traffic shaper on flow 1, or maybe flow 1 disappears...
H: Finally, if that flow disappears, it's actually not going to have an effect on the other flow, because there is no directed path from the one to the other, right? So you can start seeing that the graph is telling us stories about how perturbations propagate through a network. Okay, I haven't told you how to derive the bottleneck structure, but that's in the paper; maybe I'll omit that for this occasion. But if you want to click on the next one... now, what I've been mentioning so far is sort of a qualitative analysis of the network.
H: I've been telling you about relationships, how perturbations propagate, right? But one thing about bottleneck structures is that they actually allow us to compute things, and that's what we call the quantitative theory of bottleneck structures: we can actually quantify these effects. So, if there's a change in the capacity of link 2 by a certain magnitude, how is that change going to propagate? What's the effect on a flow in terms of throughput, and so on, right? So we can actually quantify this. I'm not going to go into what these numbers are, but we can actually compute them, and there's a conversation about: can we use these to compute path metrics? That's sort of the connection with PANRG that we're going to try to build, that we could actually use some of these numbers as path metrics, and we'll build that story in the next presentation.
H: But the point about this is that you can not only qualify but also quantify these relationships using the bottleneck structure. So, if you want to click on next, thanks. Okay, so now: can the flap of a butterfly's wings in America set off a tornado in Asia? Of course the answer is no, but everything is interrelated, and even the flapping of the wings should have an effect somewhere, somewhere in China, right? And that's the point, and that's sort of what bottleneck structures capture: this idea that everything is interrelated, and how things correlate in the structure, the structure of bottlenecks that is latent in a communication system. So, next one, okay. And then I'll just summarize what I said informally a little bit more formally, not too much.
H
But that's what we call the propagation lemma, and it goes as follows: we say that a flow f can influence the performance of another flow f' if and only if the bottleneck structure has a directed path from f to f'. And this works for flows, for links, and for the two interleaved as well. So in this case, let's say there's a perturbation in this flow.
H
These flows will not be affected. Same thing if you, next one: if there's a perturbation in the capacity of this link, then, next, this portion of the network will be affected again, but not this portion of the network, right? So this is how it works, and then next one, okay. And so, as I mentioned, you can actually quantify these things. There's what we call the linear flow equations, and these tell us how these perturbations propagate and in which quantity they propagate.
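The qualitative rule stated above, that one flow can influence another only if the bottleneck structure has a directed path between them, amounts to a plain reachability check on a digraph. Here is a minimal sketch; the edge list is an invented toy (edges point in the direction a perturbation travels), not the network from the talk:

```python
from collections import defaultdict

def can_influence(edges, src, dst):
    """True iff a directed path exists from src to dst,
    i.e. a perturbation at src can reach dst."""
    graph = defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph[node])
    return False

# Toy structure: perturbing f1 reaches link l1, which bottlenecks f2;
# f2 also crosses l2, which bottlenecks f3.
edges = [("f1", "l1"), ("l1", "f2"), ("f2", "l2"), ("l2", "f3")]
```

With this toy, `can_influence(edges, "f1", "f3")` holds, while the reverse direction does not, mirroring the asymmetry in the slides.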
H
These are the equations; I will not go into them. But one of the points I will make about this, I'm going to click on next, yeah, is that bottleneck structures are effectively computational graphs. That's one of the advantages of the framework: they allow us to make these calculations very fast, because a perturbation is really a derivative. When you perturb a flow, you're
H
really measuring the effect of that: you're really computing the derivative with respect to that small change on that flow, right? So it's a tool to actually compute derivatives, and once we have a tool to compute derivatives, we have a tool to optimize communication systems. That's sort of the whole point: we can use bottleneck structures as a framework to optimize communication networks, and we'll see the applications that has in a couple of slides, I guess.
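The "perturbation is a derivative" idea can be sketched numerically. The progressive-filling allocator below is a standard textbook stand-in for the talk's bottleneck-structure computation (not the paper's algorithm), and the link capacities and flow paths are invented for illustration:

```python
def max_min_rates(links, flows):
    """Progressive filling. links: {link: capacity};
    flows: {flow: set of links it traverses}."""
    cap = dict(links)
    rates = {f: 0.0 for f in flows}
    active = set(flows)
    while active:
        # Remaining fair share on each link still used by an active flow.
        share = {}
        for l, c in cap.items():
            n = sum(1 for f in active if l in flows[f])
            if n:
                share[l] = c / n
        inc = min(share.values())
        saturated = {l for l in share if abs(share[l] - inc) < 1e-12}
        for f in active:
            rates[f] += inc
        for l in share:
            n = sum(1 for f in active if l in flows[f])
            cap[l] -= inc * n
        # Flows crossing a saturated link have found their bottleneck.
        active = {f for f in active if not (flows[f] & saturated)}
    return rates

def throughput_derivative(links, flows, link, eps=1e-6):
    """Finite-difference d(total throughput)/d(capacity of `link`):
    a small perturbation of one capacity, re-solved end to end."""
    base = sum(max_min_rates(links, flows).values())
    bumped = dict(links)
    bumped[link] += eps
    return (sum(max_min_rates(bumped, flows).values()) - base) / eps

links = {"l1": 10.0, "l2": 25.0}
flows = {"f1": {"l1"}, "f2": {"l1", "l2"}, "f3": {"l2"}}
```

Here a tiny bump to `l1` helps `f1` and `f2` but steals from `f3` downstream, so the net derivative of total throughput is a fraction of the bump, which is exactly the kind of ripple the gradient computation captures.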
H
But the bottom line is that we can use these computational-graph bottleneck structures to compute gradients. And here is an example, an analogy for those who may be more versed in the AI field: a communication network is really like a neural network, a computational graph, and so you can actually do backward and forward propagation computations on it to compute gradients, all the derivatives. And so we started thinking about how we can use this graph to do very scalable calculations: if you're trying to find ways to optimize your network, in one pass over the graph you can actually compute all the derivatives and get that result, which might otherwise be unscalable using other tools. This also connects to the field of automatic differentiation, where you can compute these derivatives not only fast but also accurately, as opposed to using limits, which is numerically unstable. But with the computational graph you can
H
actually use AD to make these calculations without error, with 100% precision. So if you want to click on next, thank you, yeah.
H
We can ask the system: which of the flows is such that, when I remove that flow, I get the maximal improvement in total throughput on the network? It's sort of like finding the elephant flow, if you will: the flow that has the highest impact on the network. And we can use bottleneck structures to actually do this. If you click on next, then we can scan the graph, compute the derivatives, and we obtain these derivatives, where capital F is total throughput.
H
So what we're doing here is computing the derivative of total throughput with respect to changing the rate of a flow, which could be flow one, two, three, four, five, or six, and then I'll pick the one that has the smallest value; actually, in this case, the largest negative value, because what I want is that when I reduce the rate of the flow, I maximally increase total throughput, right, so it has to be a negative derivative. And if you look for the smallest possible value, it's this one.
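The same "which flow has the highest impact" question can be sketched by brute force: remove each flow in turn, re-solve the allocation, and keep the removal that raises total throughput the most. The allocator is the same textbook progressive-filling stand-in as before, repeated here so the sketch is self-contained, and the topology is invented so that the lowest-rate flow is the elephant:

```python
def max_min_rates(links, flows):
    """Progressive-filling max-min fair allocation (toy stand-in)."""
    cap = dict(links)
    rates = {f: 0.0 for f in flows}
    active = set(flows)
    while active:
        share = {}
        for l, c in cap.items():
            n = sum(1 for f in active if l in flows[f])
            if n:
                share[l] = c / n
        inc = min(share.values())
        saturated = {l for l in share if abs(share[l] - inc) < 1e-12}
        for f in active:
            rates[f] += inc
        for l in share:
            n = sum(1 for f in active if l in flows[f])
            cap[l] -= inc * n
        active = {f for f in active if not (flows[f] & saturated)}
    return rates

def elephant_flow(links, flows):
    """Leave-one-out search: removing which flow raises total throughput most?"""
    base = sum(max_min_rates(links, flows).values())
    best, best_gain = None, float("-inf")
    for f in flows:
        rest = {g: p for g, p in flows.items() if g != f}
        gain = sum(max_min_rates(links, rest).values()) - base
        if gain > best_gain:
            best, best_gain = f, gain
    return best, best_gain

links = {"a": 10.0, "b": 10.0}
flows = {"f1": {"a"}, "f2": {"a"}, "f3": {"b"}, "f4": {"a", "b"}}
```

In this toy, `f4` crosses both links, gets the lowest rate of all, and is still the flow whose removal improves total throughput the most, which is the counterintuitive effect the talk illustrates with flow six.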
H
So it's revealing to us that the elephant flow, the flow that has the highest impact on total throughput of the system, would actually be flow six. This may not necessarily be intuitive: if you look at elephant-flow detection technologies, they typically focus on the heavy-hitter flow, and the heavy hitter here, if this is the throughput of each flow, would actually be flow five.
H
Flow five is getting 75 megabits per second, and this is telling us that flow six, which is only getting 8.3 megabits per second, is actually the elephant flow here. That may not be discernible to the human eye, but if you look at the graph, then it starts making sense, because flow six here is a highly strategic flow that traverses the core of the network, and so it starts to make sense why flow six has such an impact.
H
So if you want to click on next: and indeed flow six, sort of here, is very strategic on the graph. This is the bottleneck structure, and you can see the reason why this is a high-impact flow even though it's getting low bandwidth. And here is the actual simulation showing that this indeed happens: this is running the network that I just showed, with all these flows, and you get this performance. The purple flow,
H
the high one, is the heavy-hitter flow, the big flow. When I remove the heavy-hitter flow, which is flow five, sitting here at the bottom of the bottleneck structure, we know from the theory that this has no effect on everyone else, because there is no path from flow five to anywhere else. And indeed that's what we see: we remove the purple flow and the rest of the flows actually experience no change; they all have the same completion time. But if instead I remove flow six, which is this flow here,
H
one of these flows here, then we see that everyone is getting more throughput, and the completion time of the whole execution here is reduced from 679 to 457, right? So by removing the mouse flow, actually, in this case, we get the maximum increase in total throughput, if that's what we care about, say. So you can start seeing some of the insights that the bottleneck structure provides here,
H
for example, if you will. Okay, then, all right: these are the types of perturbations supported by bottleneck structures. Remember, a perturbation is a derivative, right? These are the types of derivatives that you can compute: flow rerouting, traffic shaping, link capacity upgrades, link capacity fluctuations, shortcuts, flow scheduling, flow completion, job mapping. Say you want to map a job,
H
a neural network, onto a data center, say. These are some examples of how you could actually use bottleneck structures to optimize these kinds of problems by computing these kinds of perturbations. These are perturbations on a system: when you place a job, that's a perturbation. How does that affect the system? And next one?
H
Okay, then I'll talk a little bit about use cases again, and so, yeah, thanks. There's this sort of key idea here that bottleneck structures are at the root, and then these are the applications we like to think about: traffic engineering, link upgrades, network design, AI; you can see some of the applications: 5G resource allocation, performance prediction, network modeling, slicing, routing, congestion control, MAC, resilience, capacity planning, robustness analysis, data center design, HPC networks, even modeling and simulation.
H
These are some of the applications; some of them we're actually working on, and some of them are being researched. And I put up a list of potential research groups that could be relevant. Right now, as I said, we're in the ALTO working group, but by coming here, more on the research side, I hope to get feedback on where some of these things could
H
potentially, you know, make a contribution in some of the research groups or working groups that you are working on. So then, next one, thanks. And in the draft that we have in the ALTO working group, at -02 right now, in the first draft we mentioned a few use cases: application rate limiting for CDN and edge cloud applications,
H
time-bound constrained flow acceleration for large data sets, throughput optimization through AI modeling, joint routing and congestion control, service placement, in-network computing, training neural networks, mapping a neural network onto a GPU cluster or data center, and network slicing. These are use cases that we elaborate a little bit more, so if you are interested you can go to the draft and see how we think bottleneck structures could help there. Then I'm going to click on one example, which is optimizing routing and congestion control; that's also from the draft.
H
That's the same thing, a little bit more human-friendly: Google's B4 network, from the SIGCOMM paper, and how it's interconnected. Okay, so we're going to apply a bottleneck structure here to reason about some of the insights from this network. I'm going to click on next, yeah. So assume a simple configuration with a pair of flows between every data center in the US and Europe; we're going to keep this really simple.
H
All links are assumed to have a capacity of 10 gigabits, except for the transatlantic ones at 25 gigabits. With these two assumptions we compute the bottleneck structure, and we obtain this one. Okay, if we click on next, that's a little more graphical; this is the bottleneck structure. Here the white vertices are links, and, if you want to click on next, the gray ones are our flows, or paths, and there's some mapping here.
H
So these two transatlantic landing links, link 8 and link 10, would be sitting at this location in the bottleneck structure, and these two other links here would be sitting at this other location at the bottom of the structure. We are representing the bottleneck structure here in
H
what we call canonical form, where the direction of the edges always goes down, not up, unless they are bidirectional. And there's this property, this lemma that we show in the paper, that the available bandwidth is monotonically increasing as you go down the graph. So the flows at the bottom of the graph will tend to have more bandwidth, or the paths at the bottom of the graph will have more bandwidth; you can see that I'm using flows and paths interchangeably.
H
I haven't mentioned this, but there's a version of the bottleneck structure which we call the path gradient graph, which folds all the flows that follow the same path into a single path, and you can do the same reasoning, interchanging flows and paths. And I think for PANRG that could be more interesting, because we really want to talk about paths, I think. But sorry.
G
H
I think I got it, I think that's right. So what I wanted to say about this is that you can see that these two links in the US are actually highly strategic, because they are at the top, at the root of the graph. This means that any perturbation on the capacity of these links will actually have an effect on the rest of the network. And this is not true for these other links at the bottom; they are not as strategic, because perturbing these links will not have the same effect.
H
So that's kind of the qualitative reasoning about this setting. Okay, now the question is: suppose that an application or a data center needs to transfer a large data set from data center 11 to data center 4, and we need to decide which path to choose.
H
We have multiple path options, and so what we can do here is run the bottleneck structure and figure out what bandwidth we would get on each of these paths. And the idea here,
H
what we're doing here, is sort of solving joint congestion control and routing at the same time, asking the question: if I place the flow on this path, what's the bandwidth that this path will get, and what is the effect that all the other paths will see? So the ripple effect, the ex-post result: I don't want what the path gets now, but the actual performance I'm going to get after I place the flow, because placing that flow itself is going to have an effect on the network. So how is that ripple effect computed?
H
So that's what we're going to do using bottleneck structures. There's an algorithm to actually solve this problem which combines Dijkstra's algorithm with bottleneck structures. You know that Dijkstra's is a greedy algorithm, so at every step it's actually querying the bottleneck structure, building the bottleneck structure in parallel, to discard paths that would never be optimal.
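The draft's algorithm interleaves Dijkstra steps with bottleneck-structure queries; as a much simpler stand-in for the flavor of it, here is a widest-path (maximum-bottleneck-bandwidth) variant of Dijkstra. It assumes a precomputed available-bandwidth estimate per link and ignores the ripple effect that the real algorithm accounts for; the topology and numbers are invented, though they echo the 2.5 vs 1.4 Gbps comparison from the slides:

```python
import heapq

def widest_path(adj, src, dst):
    """adj: {node: [(neighbor, available_bw), ...]}.
    Returns (bottleneck bandwidth, path) maximizing the path's
    minimum available bandwidth, Dijkstra-style."""
    best = {src: float("inf")}
    prev = {}
    heap = [(-float("inf"), src)]
    while heap:
        neg_bw, u = heapq.heappop(heap)
        bw = -neg_bw
        if u == dst:
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return bw, path[::-1]
        if bw < best.get(u, 0):
            continue  # stale heap entry
        for v, w in adj.get(u, ()):
            cand = min(bw, w)  # path bandwidth is capped by its tightest link
            if cand > best.get(v, 0):
                best[v] = cand
                prev[v] = u
                heapq.heappush(heap, (-cand, v))
    return 0.0, []

adj = {
    "dc11": [("a", 10.0), ("b", 25.0)],
    "a": [("dc4", 1.4)],   # shorter route, little headroom
    "b": [("c", 25.0)],
    "c": [("dc4", 2.5)],   # longer route, more available bandwidth
}
```

On this toy graph the longer route wins on throughput, which is the kind of informed shortest-vs-widest trade-off the talk describes.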
H
Using a really nice approach, and that's why the complexity of this algorithm is O(V + E log V), like traditional Dijkstra. And then we get this result: in this case we get that this is the optimal path, and it will give you 2.5 gigabits per second, as predicted by the bottleneck structure. And this is a non-shortest path, actually, that gives you higher throughput; if you actually use the shortest path, you would get 1.4 gigabits per second of throughput.
H
So then you can decide whether you want the high-throughput path or the low-latency path, but you can make that an informed decision. You can also use this for SLA management, because, as I mentioned, once you place that flow, you can see how the rest of the flows will be affected.
H
Will they have a violation of their SLA? So before I place that flow, I want to make sure that it will not violate SLA agreements on other paths, say, right? So that's kind of the idea, yeah. Next, thanks. And this is sort of the result here: we're using bottleneck structures to project what will happen if you place the flow on the shortest path or not on the shortest path.
H
If you use the shortest path, which has lower throughput, 1.4, you would place the flow here at the top; when you do that, the flow would land right here at the first level. If you place it on the longer path, the flow would be placed here, and remember the monotonic property: if you place something at the bottom, you get more throughput. That's what's happening here.
H
So the recommendation, if you care about throughput here, is to place it on the longer, non-shortest path, which will give you 2.5 of throughput as opposed to 1.4. Then you can do the SLA-management sort of thing, and then, say, if you want to go next: this is the ripple effect. The red circles are the portions of the network that will see an impact if I place the flow on the non-shortest path.
H
So it's also kind of more stable, maybe, as opposed to here, where placing it creates a ripple effect over much more of the network. And then, next one, there's this idea that I mentioned, the monotonic property: the flows at the top are getting less bandwidth, and the paths at the bottom more. If I place the flow at the top, I'm stealing bandwidth from the ones that are already getting less bandwidth.
H
So you could reason that maybe that's not a fair thing to do, because all these flows at the top that were already getting low bandwidth are going to get even less. But if I place the flow at the second level, these flows see no impact, right? Their low rate does not get deteriorated any further. So maybe from a fairness standpoint that's better.
H
We have two production deployments. One is the National Research Platform, which used to be called the Pacific Research Platform, a UCSD-led network connecting the Pacific side and the west of the US, connecting universities and research labs; it's called NRP, and it's now a US-wide network, as I mentioned. And the other one is the DOE's ESnet, which I'm sure many of you know as well, also a US-wide network, connecting the national labs, etc.
H
So one application we're using these for is capacity planning, but we're also looking at traffic engineering. For lack of time we will not go into the deployments, but maybe if there's time at some point we can discuss them.
B
H
So we have time, because I think that's the last one. Okay, and one note on discussion: I would love to get feedback in terms of where we think this could connect with other working groups or research groups. So that's one topic of discussion, but anyhow, it's open for any other questions as well, of course.
I
So I have a question, but more about the graphs you are using than about the working groups; I won't be able to help you on that. In fact, when I looked at your graph of bottlenecks, and how you, so yeah, when you.
H
I
When I look at your graph, you have flows that are nodes, but you also have the nodes that the flows go across represented as nodes. It reminded me of the work of Matthieu Latapy, a researcher in France working on a tool called stream graphs. It's a sort of mix between time series and graphs, and you have mathematical tools that help you associate properties with the temporality of when flows cross some nodes in the network.
I
J
Thanks, great work; it sounds very interesting to me.
J
Maybe ICCRG is also a good place to come to, because we probably have to think about different ways of doing congestion control, in the sense that maybe the current closed-loop congestion controls are some kind of last resort, and we move more in the direction of having better planning, so that you can ramp up flows very early, for example. And with your framework one could actually see the effect of admitting a certain flow.
J
H
Yeah, the next presentation is going to be about how to do this under partial information, and the case of ICCRG, I think, or congestion control in general, can be seen as a specific case of partial information, because in a distributed congestion control algorithm every node has partial information and they're all trying to converge to optimality, basically. So that could probably enable another conversation, but yeah, thanks.
A
So, yeah, Brian Trammell, as an individual, not as PANRG chair. I had one question that I missed at the beginning of the presentation; I should have stopped you, but now I'm going to have to go back and re-watch at 0.75 speed. It's the difference between an edge that is unidirectional and one that is bidirectional, right: when we're going through and actually constructing this graph, how do I know I'm putting two arrows on it?
H
A
Got it, okay, yeah, good, okay, yeah, wow. The rest of the talk makes a lot more sense now, cool. Thank you very much. The second thing, riffing off of Roland's point a little bit: I did very much have the feeling that I was watching an ICCRG talk here, which is great, because I missed ICCRG this time. So thank you very much for bringing this to PANRG. I think a lot of the questions about computing the optimality.
A
That's a good venue to get good feedback on that and sort of develop the idea there. The thing that I would be interested in seeing in PANRG is: okay, so we have this mathematical tool.
A
How can we, you know, I'm going to go ahead and take the mask off so you can see my mouth moving; how can we tie that to some of the signaling stuff that we're thinking about in PANRG, right? Like, you brought up the B4 graph from the SIGCOMM paper, and okay, you can very clearly see how this would be used in a centralized-planning, SDN, SD-WAN situation.
H
Okay, so then the second part is about: okay, we know what the bottleneck structure is at some level, and we'd like to actually be a little more practical. One way to make this practical is to ask how we implement it under partial information, which is typically the situation that we face in a communication system: you don't know everything, but you still have to do the best you can based on the information that you have.
H
So let's talk about this a little bit. Yeah, I'll skip the first part, actually; we've done that, so checkbox. We'll talk about computing bottleneck structures under partial information; I'll talk about the distributed protocol that would allow us to do that, the signaling part, if you will; I'll argue, at a high level, that this protocol will work and achieve global convergence; and then discussion. Yeah, right, next one; yeah, same theme, papers and drafts; next one; you can actually skip this, yeah.
H
Sorry, one more; one more, sorry.
H
Thank you, yeah. Okay, so now, remember, I'm going to use the same example, so hopefully we're a little bit familiarized: the same network configuration. But so far we've assumed that we have full knowledge of the network. Now let's assume that that's not the case, and that we have two autonomous systems. So if you click on next, okay: we know that this is the bottleneck structure.
H
We have full knowledge there. But if we click on next, now we're going to assume that this network actually corresponds to two autonomous systems, and each AS knows only its own region. So AS1 doesn't have any information about AS2, and AS2 doesn't have any information about AS1. Now we run the algorithm, so you can click on next: we run the bottleneck structure algorithm that's in the paper to compute the bottleneck structure at AS1, which converges to this.
H
We think that this is the view of the world, that this graph is the state of the world, but obviously, based on, sorry, this is AS1. So now, oh, this is your presentation, actually, somehow.
G
C
A
I'll work with whatever you see; what we've got is fine.
C
B
Is it still the old one, right? That's.
H
A
Let's move on, then, and we'll upload the correct one.
H
Wherever you see an F it should be a P, and we'll get the latest version uploaded, because this works for paths, basically; so instead of flows these are paths. So AS1 doesn't know anything about AS2 and vice versa. And, this is also swapped, but if AS2 tries to figure out the bottleneck structure, it will compute this, which is incorrect, right?
H
This is not the same as that. And if AS1 actually computes the bottleneck structure, it will compute the right thing, but that's because it got lucky: there is this property that, if all my paths are bottlenecked in my own autonomous system, then I can compute the bottleneck structure without any information from anybody else.
H
That's what's happening here with AS1: it happens that all the paths in AS1 are bottlenecked in AS1, so it can actually converge, but that's because it got lucky; in AS2 the paths aren't, so it's not going to get it right. So obviously we need to share some information to make this work. So we can click on next, and the proposed protocol here has these three properties.
H
Correctness: there's the global bottleneck structure, and each AS has a bottleneck substructure, and the idea is that we want each AS to compute its bottleneck substructure correctly. What is the correct bottleneck substructure? Intuitively, it's the subset of the global bottleneck structure that corresponds to that AS, basically.
H
And this is saying that all we need to do is share a path metric, which I'll tell you in a minute what that is. Scalability: focus on building the path gradient graph. Here's another thing that we describe in the draft: there are different kinds of bottleneck structures.
H
One is what we call the flow gradient graph, which is per flow, but there's this one, called the path gradient graph, which is per path, and that's way more scalable, because you may have hundreds of paths as opposed to millions of flows, and the transition from one to the other is straightforward, actually. So for this we focus on the path gradient graph, and so it requires only per-path state, not per-flow state. And then privacy.
H
This is subject to discussion, I guess, but we think this could be good in the sense that we only need to share one scalar, one metric per path. We don't need to reveal flow information; there's no NetFlow data or topology data shared, obviously; it's a single scalar per path. So hopefully this is good in terms of privacy, depending on the use case.
H
So next one. Then this is the distributed protocol; it's in the second draft, but these are the high-level ideas, and I don't expect you to understand all the details, but the intuition is actually fairly straightforward, which is a good thing. Okay, so it has a timer event and a message event, a message-exchange event, and the timer runs periodically.
H
That's right, these two events; each AS runs its own instantiation. The first thing we do is some definitions; that's described in the draft. L is the set of links; A_i is autonomous system i, so this runs for each A_i; PL is what we call the path-link dictionary, giving, for every path, the set of links that that path traverses; C is the capacity dictionary, the capacity of each link; and PM is the path metric dictionary, which holds the information we've received from our neighbors. Okay.
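The per-AS state just defined can be pictured as a handful of dictionaries; the sketch below mirrors the draft's notation with invented names and numbers, and adds the min-merge rule that the protocol applies when a neighbor's announcement arrives:

```python
# Hypothetical per-AS state: L (own links), PL (path -> links it traverses),
# C (link -> capacity), PM (path -> best metric heard so far).
as_state = {
    "L": {"l1", "l2"},
    "PL": {"p1": {"l1"}, "p2": {"l1", "l2"}},
    "C": {"l1": 10.0, "l2": 25.0},
    "PM": {"p2": 8.3},
}

def on_path_metric_announcement(state, announced):
    """Merge a neighbor's path metric dictionary, keeping the minimum
    (most constrained) value per path, as the protocol prescribes."""
    for path, metric in announced.items():
        state["PM"][path] = min(metric, state["PM"].get(path, float("inf")))
```

Taking the minimum encodes the idea, discussed just below, that a path's end-to-end metric is set by its single most constrained bottleneck.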
H
What this does is call compute-bottleneck-substructure, which is a function that's in an annex slide, and it tries to compute the bottleneck substructure as accurately as it can, based on the local information that we have and the path metrics that we've received from the neighbors. With the output of this, we then consolidate it into the new path metric dictionary; this is what we're going to be sharing with our neighbors. And then, in the next step, we actually share it with our network neighbors.
H
We share the output of our computation, the path metric dictionary, with the neighbors: for all A_j in N, where these are the neighbors, we send a path metric announcement message, passing our path metric dictionary. And then this is the event upon receiving a path metric announcement from a neighbor: we simply take the minimum. This captures the notion that the bottleneck is always the minimum; it goes back to Jacobson's notion
H
that the bottleneck is the single, most constrained link, right? So that's that notion, but this takes into account the fuller structure. And then, if we want to zoom into this compute-bottleneck-substructure function, that's in an annex slide, and this is what it does.
H
It runs this while loop, which at each iteration invokes compute-bottleneck-structure, again the one that's available in the paper, and it runs multiple times until it reaches agreement with the state that's being shared by our neighbors; that's the intuition. So we compute the bottleneck structure based on the link set, the path-link dictionary, and the capacity dictionary, and here we check whether the output of our computation,
H
our local computation, is in agreement with what we've been told by our neighbors. If so, we're good; we break out. If not, then we need to do something: we need to add a virtual link. We don't know our neighbor's network, so we're going to model it by putting in a virtual link and putting a constraint there. That's what this is:
H
adding a virtual link with a capacity equal to the path metric provided by the neighbor AS, and then we go back. And it can be proven mathematically that, if you do this over and over, eventually everyone converges to the right bottleneck substructure, and when you aggregate them you get the right global bottleneck structure. We can see this with an example.
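The convergence loop just described can be caricatured as a fixed-point iteration. The sketch below is a crude stand-in for the draft's algorithm (which recomputes full bottleneck substructures, not single scalars): each AS caps a path's metric by the tighter of its own locally computed bottleneck and the best value any neighbor has announced, which plays the role of the virtual link. ASes, paths, and rates are invented:

```python
def converge(limits, paths, max_rounds=10):
    """limits: {as_name: {path: locally computed bottleneck rate}}.
    Assumes at least two ASes. Returns each AS's final path metric dict."""
    pm = {a: {p: float("inf") for p in paths} for a in limits}
    for _ in range(max_rounds):
        new = {}
        for a in limits:
            new[a] = {}
            for p in paths:
                # Best (most constrained) value heard from any other AS:
                foreign = min(pm[b][p] for b in limits if b != a)
                # Virtual-link idea: cap by local limit and foreign metric.
                new[a][p] = min(limits[a].get(p, float("inf")), foreign)
        if new == pm:  # termination: local view agrees with neighbors'
            break
        pm = new
    return pm
```

Two toy ASes with different local limits per path agree after a couple of exchange rounds, which is the flavor of the agreement check shown in the example slides.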
H
Maybe with an example it's going to be easier to understand, but hopefully that gives the intuition about the signaling. And then, before getting to the example: what we call the termination condition and the convergence condition. In the previous function you saw that there was a break; it was a while-true loop, but there was a break statement.
H
The termination condition, intuitively, says that when my local computation is in agreement with the path metric dictionaries being shared by my neighbors, when we are in agreement, we can terminate. And the convergence condition says that if you run this algorithm, we ensure that at the end everyone is in agreement: all the path metrics from all the autonomous systems A_i, A_j, for all the paths, are in agreement. That's guaranteed by the algorithm, okay. So we have these two conditions.
H
Let's look at an example of how this works. So back to our configuration: two autonomous systems. We can click on next; this will be iteration one, and again, the latest version is going to have some corrections on this, but it's fine. So at the first iteration, autonomous system two actually gets it wrong: this is the path metric dictionary that it believes is the state of the world, but it's wrong. And autonomous system one,
H
again with the swap, actually gets it right, and the reason is that property I mentioned: all its paths are actually bottlenecked in its own domain, so this AS actually gets it right, and it has already converged. Now they exchange the path metrics, so this path metric dictionary is going to be shared with this AS. Now, with that information, for AS2 it's going to take another iteration; so if you click on next, then in the next iteration this AS is going to get it right.
H
Okay, and the trick is, remember, we are adding these virtual links: we don't know our neighbor, but we know that this path, path four, is bottlenecked somewhere else, and we model that by using a virtual node here; and this path, path six, is bottlenecked somewhere else too, and we model that with another virtual node. And then, if you look at this structure and redo the computation with it, you get the correct bottleneck structure.
H
So if you click on next, we can also now check the convergence condition and verify that all the path metrics in the path metric dictionaries of the two ASes are in agreement: 16 and 16, 8.3 and 8.3, and so on. Okay, so then next: and this is sort of the higher-level idea, that you would have multiple ASes, each one doing its bottleneck substructure calculation and then sharing these path metric announcement messages,
H
— providing the AS ID and the path-metric dictionary, and then, ensuring that this exchange over the network stabilizes, eventually everyone gets the bottleneck structure right. I think that's it, and then the same discussion can be opened in terms of potential applications — or questions.
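The distributed scheme just described — each AS computes metrics for the paths bottlenecked in its own domain, announces (AS ID, path-metric dictionary) messages, and iterates until all dictionaries agree — can be sketched roughly as follows. This is an illustrative toy, not the presenters' implementation; the AS names, path names, and metric values are all invented.

```python
# Toy sketch of the iterative path-metric dictionary exchange.
# Locally bottlenecked paths are authoritative for each AS; everything
# else is learned from neighbors' announcements (the "virtual node" view).

def local_compute(local_metrics, learned):
    """Recompute one AS's view: start from announced metrics,
    then override with the metrics this AS is authoritative for."""
    d = dict(learned)
    d.update(local_metrics)
    return d

def run(dicts_local):
    """dicts_local: {as_id: {path: metric}} for locally bottlenecked paths."""
    views = {a: dict(m) for a, m in dicts_local.items()}
    for iteration in range(10):
        # every AS announces its (AS id, path-metric dictionary)
        announced = {p: v for view in views.values() for p, v in view.items()}
        new_views = {a: local_compute(dicts_local[a], announced) for a in views}
        if new_views == views:        # convergence condition: agreement
            return views, iteration
        views = new_views
    return views, None

# AS1 bottlenecks p1 and p3; AS2 bottlenecks p2 — values are made up.
views, it = run({"AS1": {"p1": 16, "p3": 8.3}, "AS2": {"p2": 12}})
```

After one exchange, both ASes hold identical dictionaries, which is exactly the agreement check ("16/16, 8.3/8.3") shown in the demo.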
J
I think two comments here. One is: I'm not sure that the ASes or providers will be that candid in exposing that they have a bottleneck inside — that is maybe one thing that could be a little pushback for this. And the other thing is —
J
Let's assume that you know that paths are congested; then probably whatever routing decision you make will be based on that. And we all know that if you do, let's say, routing based on dynamic metrics like this one, where you have latency or congestion, that will typically lead to oscillations, at least in several cases. So have you thought about — it's not totally clear to me what you want to do with that outcome, once you identify what the bottleneck ASes on a path are. What's the reaction to that?
H
Okay, so on the first question: this actually doesn't reveal that a path is bottlenecked at my AS. You could reveal it if you want to, but you can keep that private. What it reveals —
H
What the AS ends up knowing is whether I am the bottleneck for this path or not — that, for sure. And then, if I'm not the bottleneck, I know it's somewhere else, but I don't know where — some other AS. Now, if you want to do SLA management and figure that out, you could envision an overlay protocol that then sort of propagates that data — "I'm not the bottleneck", "I'm not the bottleneck" — and eventually would find —
H
— who is the bottleneck. But if you're not interested, you may not participate, and then you would not be revealing whether you are the bottleneck — it's only an option to reveal it. What you get is the information that this path is actually bottlenecked in my domain; if not, I don't know where it's bottlenecked. And I think the second question is one about not just qualifying these relationships but also quantifying them.
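The optional overlay idea in this answer — each participating AS answers only "am I the bottleneck for this path?", and an AS that opts out reveals nothing — could look roughly like this. All names and predicates are hypothetical.

```python
# Hypothetical sketch: walk the ASes on a path, querying only the
# participants. Non-participants keep their status private.

def find_bottleneck(as_path, is_bottleneck, participates):
    """as_path: ordered AS ids; the predicates model each AS's answer."""
    for asn in as_path:
        if not participates(asn):
            continue                 # opting out reveals nothing
        if is_bottleneck(asn):
            return asn               # "this path is bottlenecked in my domain"
    return None                      # bottleneck lies in a non-participant

path = ["AS1", "AS2", "AS3"]
found = find_bottleneck(path,
                        is_bottleneck=lambda a: a == "AS2",
                        participates=lambda a: a != "AS3")
# found == "AS2"
```

If the actual bottleneck AS declines to participate, the search simply fails to localize it, which matches the privacy property described.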
H
Yeah, so that's the sort of idea: we want to compute the bottleneck structure so that we can then do the derivative analysis on top of it, without knowing other networks' bottleneck structures, but still being practical. Yeah. Okay, thanks. Okay.
A
So, yes — Brian Trammell, as an individual. Again, thank you, Roland, for actually asking half the questions I was going to ask. I think there's another thing we can consider: there's also a scalability problem here, right? The scaling of the convergence of the inter-AS paths has to do with the number of ASes in the full network, right?
H
Let me try to see — let me show the bottleneck structure; I'll just talk about convergence, about how it actually converges. So let's go back — back, back — let's try to pull up the global bottleneck structure.
H
Yeah — the convergence time is revealed by the bottleneck structure itself, and it works as follows. Okay, there are six paths here, right? The first to converge will be path one, path three, and path six: they converge immediately because they have no dependency on anybody else. Once these three paths converge, then flow two and flow four can converge, and so on.
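The convergence order described here falls out of treating the bottleneck structure as a dependency DAG: everything with no dependencies converges in the first round, then whatever depends only on those, and so on. A toy sketch, with the dependencies invented to mirror the six-path example:

```python
# Invented dependency DAG: paths 1, 3, 6 depend on nothing;
# flows 2 and 4 depend on some of them (paths 5's deps omitted for brevity).
deps = {"p1": [], "p3": [], "p6": [], "p2": ["p1", "p3"], "p4": ["p3", "p6"]}

def convergence_levels(deps):
    """Group paths into rounds: a path converges once all its
    dependencies in the bottleneck structure have converged."""
    levels, done = [], set()
    while len(done) < len(deps):
        ready = [p for p in deps
                 if p not in done and all(d in done for d in deps[p])]
        levels.append(sorted(ready))
        done.update(ready)
    return levels

print(convergence_levels(deps))  # [['p1', 'p3', 'p6'], ['p2', 'p4']]
```

The number of rounds is the depth of this DAG — which is Brian's point in the next comment: convergence complexity depends on the bottleneck structure, not on how the network is partitioned into ASes.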
A
The convergence complexity is basically solely dependent on the bottleneck structure, not on how you split it up into ASes — correct? Okay. One of the tricks that we've seen in other path-aware networking approaches — I'm thinking about SCION here, mainly — is that when you end up with a substructure that is getting too complex for the algorithm that you're using to figure out the route or whatever, you can abstract it away, right?
A
The way that SCION does this is that you can essentially take an AS and collapse it down so it looks like a switch, right? And I'm actually wondering if there are ways that you could iteratively do that within an AS, so that you get a very simplified substructure, so that the overall structure gets you a nearly optimal result with way fewer nodes. Yeah.
H
So then we go forward — I wonder whether that captures it; you can tell me, Brian. More — yeah, more forward here. So this is exactly, I think, what this is doing here: you can see V2 and V1, and what they're doing is collapsing. Okay.
K
Okay — to start, good morning, respected colleagues and experts. It's a great honor to present here, and on behalf of my group in our corporation I'd like to talk about the fine-granularity service provisioning of the network. The topic is an IPv6-based, database-assisted network service: the framework, use cases, and requirements. Next slide, please.
K
Requirements and framework: as listed here, this slide illustrates the background information, the development intentions, and our considerations, beginning with the challenges of current networks and a gap analysis of existing solutions. Next slide, please. With the rapid development of computing power and the popularization of cloud computing, the mobile internet has become a popular platform for various enterprises and government departments to host data and services.
K
So data explosion and massive access to the cloud have proved to be an inevitable trend, resulting in historically accelerating rates of traffic traversing the networks. As depicted here in this figure, the network domain from the client side to the cloud is divided into several sections, and the network infrastructure in different sections may vary from one to another.
K
With such distinctions in network capabilities, applications may have diversified requirements on latency, bandwidth, or reliability, and accordingly have differentiated demands — like applications A, B, and C here: application A requires high bandwidth, application B demands low latency, while application C requires high reliability. Ideally, traffic of distinctive applications —
K
— is steered respectively onto the paths shown here in different colors. However, in conventional networks these details, including path distances and reserved bandwidth in a network domain, are concealed: the capabilities of the network remain invisible, and those differentiated services are not provided. Applications with various requirements cannot be distinguished and are treated equally, so latency-sensitive applications could end up with their traffic forwarded over a path with restricted bandwidth.
K
To sum up, conventional networks only provide clients with coarse-grained connection services, while differentiated service treatment is desired. Under current circumstances the network resources are not orchestrated properly either; for example, in China, resource utilization tends to be relatively low, at about 30 to 50 percent.
K
Existing solutions like SD-WAN handle the problem by monitoring the latency of multiple candidate paths and applying dynamic multipath optimization algorithms. However, this requires traffic-detection techniques, and accuracy cannot be immediately guaranteed simply by collecting statistics.
K
This one is capable of configuring different priorities in accordance with requirements from the clients, and further generates and publishes corresponding QoS policies to ensure service delivery. Latency-sensitive traffic, such as Voice over Internet Protocol and web-conferencing meetings, is configured with higher priority and is further steered onto a specific path with better performance, while services like file backups are assigned lower priorities, since they are less time-sensitive, and may even be rate-reduced or blocked on network links. Furthermore, dedicated lines are usually configured between data centers to ensure network quality.
K
In another scenario, service provisioning is relatively complicated, and the period of service deployment and provisioning is comparatively long; delays, packet losses, and interruptions also occur. So, as a conclusive view, fine-granularity services and network-resource-utilization enhancements are considered required. Next slide, please.
K
Beyond bandwidth, networks have also been equipped with various other capabilities, including deterministic quality, network slicing, endogenous security, etc., which can be developed as services. Referring to Software as a Service, a database-based open resource service framework is proposed, aiming to practice the concept of NaaS, namely Network as a Service. It is envisioned that applications on terminals and CPEs subscribe to and acquire fine-granularity, customized network services.
K
In the framework, the network controller collects the running status of the network assets and network functions by extracting key attributes, including the virtual links — VB-links and VT-links — and nodes. In particular, the information includes descriptors of the head nodes and tail nodes of the virtual layer-3 links, such as SID identifiers, a terminal system index, etc. A distributed database is also introduced here, which ensures strong consistency, and a typical subscribe/publish mechanism is applied. Capabilities can be abstracted in a key-value scheme, and a standard schema template file is utilized for the descriptions.
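The key-value abstraction and subscribe/publish mechanism just described might be sketched like this. The field names (logic_id, binding_sid, and so on) are illustrative guesses assembled from the talk, not the draft's actual schema, and the store is a toy stand-in for the distributed database.

```python
import json

# One VT-link capability record in a key-value scheme (invented fields).
vt_link = {
    "logic_id": "VT-0001",            # globally unique in the network domain
    "head_node": "A",
    "tail_node": "D",
    "segment_lists": [["A", "D"], ["A", "C", "D"]],
    "max_reservable_bw_mbps": 1000,
    "binding_sid": "2001:db8::b1",
}

# The distributed database modeled as a dict; subscribers get callbacks
# on publish. A real deployment would need strong consistency across nodes.
store, subscribers = {}, []

def publish(record):
    store[record["logic_id"]] = record
    for cb in subscribers:
        cb(record)

received = []
subscribers.append(lambda r: received.append(r["logic_id"]))
publish(vt_link)
print(json.dumps(store["VT-0001"]["segment_lists"]))
```

The point of the key-value shape is that new capability types (slicing, deterministic QoS, security) can be added as new records without changing the store's interface.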
K
To illustrate this abstraction process, we here present a detailed instance. The network connects various clouds and multiple applications; data-center interconnection and cloud access from various applications are the example scenarios. In the overall network domain here, A, B, C, and D constitute part of the physical topology, and policies identified by binding SIDs are also assigned. The network resources and capabilities in the sub-domain are abstracted in the form of VB-links and VT-links.
K
The VB-links and VT-links constitute a substitution for the original links, and unique logical topologies are perceived from the perspectives of the different applications. Here, for example, seen from cloud A, the VB-link from A to D includes two paths as segment lists — A to D directly, or A to D via relay C — but this path is observed with only a single segment list by cloud B. Resources like bandwidth are also allocated respectively for the different virtual links.
K
A VB-link can be identified by local and remote node descriptors, interface addresses, and other parameters of its capabilities. Similarly, a VT-link represents a virtual tunnel — take SRv6 over IPv6 as a typical example; its typical attributes include the logic ID, node descriptors, maximum reservable link bandwidth, binding SID, etc. To facilitate the abstraction and the referencing in clouds, a new logic ID is defined here to identify a VB-link or VT-link; the logic IDs must be globally unique in the network domain.
K
Moreover, in order to meet customized requirements from different applications at the same time, the parent network splits the link resources at layer 2 and the topology resources at layer 3. The maximum reservable link bandwidth, for instance, is the maximum share of the physical link bandwidth resources available to a virtual link: where multiple clients share an identical physical link to the network, we must reduce the maximum reservable link bandwidth allocated to every one of them.
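The sharing rule stated here — shrink the maximum reservable bandwidth advertised to each client so that the shared physical link is never oversubscribed — reduces to simple arithmetic. An equal split is one assumed policy (the talk does not specify the split); weighted splits would work the same way.

```python
# Minimal sketch: divide a physical link's capacity among the clients
# whose virtual links ride it, so reservations can never exceed capacity.

def max_reservable_per_client(physical_bw, clients):
    """Equal-split policy (an assumption): each client's advertised
    maximum reservable bandwidth is capacity / number of sharers."""
    return {c: physical_bw / len(clients) for c in clients}

alloc = max_reservable_per_client(1000, ["cloud_a", "cloud_b", "cloud_c"])
# each client is advertised at most 1000/3 of the 1000 units of capacity
assert sum(alloc.values()) <= 1000
```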
K
Compared to ALTO — this is Application-Layer Traffic Optimization — the similarities and differences are listed in this table. To be honest, the abstraction of network capabilities here is designed to be more explicit and diversified, which leads to better conditions for fine-granularity services. In ALTO we have a specific ALTO server, while here the distributed database acts as the server. The framework also shares identical features and advantages with ALTO, including accessibility and a standardized API.
K
Definitely, ALTO provides a universal and normalized scheme for the exposure of network capabilities; and our design, with its focus on the role of the network controller and the specific cost-calculation operator to facilitate network service operation, cooperates well with the future trend of the convergence of the clouds and the network. Next slide, please.
K
For future considerations, it is easy to expand to more open resources beyond the link and bandwidth resource services — different capability exposures, such as topologies, security, and deterministic QoS — which makes the framework capable of satisfying the future requirements appearing with the trend of the convergence of the cloud and the network. In conclusion, more perceptions and drafts are expected in the future; we are looking forward to promoting this work and to cooperating with the working groups and colleagues who are interested in the issue.
L
Hello — Sabine from Nokia. Thanks for your presentation, very interesting. So, to mention it: I am contributing to the ALTO working group, and definitely we do not take exactly the same approach. You try to steer traffic at the network layer, while we at the ALTO working group try to provide guidance to the application — so it's an off-path approach to redirecting the traffic. I would invite you to look at the ALTO work that is ongoing, where there are proposals to extend the protocol to integrate compute information. Thanks.
K
Thanks. Actually, I've learned some information about ALTO. A note on the concept of ALTO: it is a kind of application-layer protocol that just makes endpoint selections to guide the traffic, but it still treats the network domain as a black box. Here, maybe we want to expose some capabilities of the network and do some traffic-redirection work inside it.