From YouTube: IETF112-ALTO-20211110-1600
Description
ALTO meeting session at IETF112
2021/11/10 1600
https://datatracker.ietf.org/meeting/112/proceedings/
A
The Note Well, as usual, tells us how the IETF is operating, including its process, policies and rules, and IPR. One thing I want to mention is that the IESG encourages all the working group chairs to highlight the code of conduct. The code of conduct means you should be courteous to your colleagues and focus on technical discussion. For these topics we have a bunch of BCPs; if you are not familiar with them, please read them.
A
Other logistics: this session will be recorded, and we'll use the Meetecho queue control. If you want to speak, please enter the queue by pressing the raise-hand button.
A
If you want to leave the queue, please click the raise-hand button a second time. Before you speak, please unmute yourself, and after speaking, please mute yourself. For the audience especially, you can mute your video stream. We also have Jabber, which serves as a side-comment channel, and we'll keep track of the side comments. If you want to speak at the mic but cannot make it, please type "@mic" and we will forward the comment to the meeting room. And on the blue sheet:
A
We have an electronic blue sheet, so your attendance will be recorded automatically. For note taking, we have a CodiMD (HedgeDoc) document, and I think we already have Jensen and Daniel to take the minutes. If you want to add additional minutes, feel free to do so.
A
So this is the agenda for today's discussion, and the agenda is very tight. First, as chairs, we will introduce the working group status, because we have our new charter approved; so thanks, Martin. I especially want to highlight the two working group document status updates, because there are some open issues that need to be tracked, and we want to make sure all the comments are addressed. For today's discussion we'll focus on charter items. We have two charter items: the first one is ALTO OAM support, and Jensen will lead the discussion.
A
And also we have a deployment experience update. We received two different updates: one is from G2, the second is from Flow Director. We have actually already had a discussion with the team, so Kai will introduce how G2 is integrated with ALTO. The second is Flow Director: Daniel will give an update on Flow Director, and hopefully this will be good input for ALTO deployment.
A
The lead will be Luis. The second is compute-aware network use cases: we will invite Peng Liu to introduce these potential new deployment use cases. For the last one, we will invite Leo Zu Jung from London University to introduce bandwidth estimation on OpenNetLab. After that, we'll wrap up the discussion and the agenda bash.
A
Okay, let's move on. Actually, we are already familiar with working remotely, but I want to mention and highlight three things. First, we should utilize the mailing list: usually, consensus is judged on the mailing list. Today we have important discussions in this meeting, and we also have virtual interim meetings and informal meetings, but ordinarily the working group decisions are made on the mailing list.
A
So it's important to actually close down topics and show that a topic gets done, and it's also important to start new topics. If you have any new idea, please publish your document, make an intro and a summary of what you do on the list, and begin the discussion on the list, also ahead of the next IETF meeting.
A
We hope we can probably have a mixed meeting, both online and in person, but it hasn't been announced yet, so let's stay tuned. And lastly, we have online meetings and also informal meetings: if the working group thinks we need these kinds of resources, the working group chairs can arrange them.
A
And so this is the ALTO working group charter. For this charter we actually have two important deliverables: one is ALTO OAM, the second is ALTO new transport. In addition, we can cover the ALTO deployment update, and we can also explore the use cases and protocol maintenance.
A
And we actually break down the charter items into several categories. You can see that for the first two we already have relevant individual drafts, so these will be the focus of this agenda's discussion. We also have one long charter item that actually has a bunch of drafts; hopefully they will serve as good input to the ALTO deployment update.
A
And we also have implementation development tracking. You can see that for the client we have two implementations, and for the server we have four implementations. If you are aware of any new implementation, please let us know and we'll update this wiki page.
A
And for the document updates: we already have four working group drafts actually ready for the IESG telechat, arranged for December 2. So today we will discuss the two documents which actually have some open issues.
A
Let's move to the first topic; I think it is Kai's.
B
Hello, everyone, this is Kai. Qin, can you share my slides?
A
B
So this is a presentation about the open issues raised in the last-call reviews for the ALTO path vector extension. So next slide, please.
B
For the Gen-ART and ART-ART last-call reviews, both gave some minor issues, and the review from the OPS-DIR had more substantial comments. So the main content of this presentation will be on the review from the OPS-DIR. So next slide, please.
B
For ART-ART and Gen-ART, most of the issues were actually minor or nit issues. One issue is that the use of Content-ID did not conform to previous RFCs, and the second issue is that we needed examples for IPv6.
B
And we made a major revision for the first issue, basically the use of Content-IDs: in the last revision we defined the format to be compatible with RFC 2387 and RFC 5322, and we also revised the protocol specifications and examples to accommodate the changes for the use of Content-ID. So next slide, please.
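As an aside, a Content-ID of the kind discussed here (an RFC 5322 msg-id used to label a part of a multipart/related message per RFC 2387) can be sketched with Python's standard library; the domain below is a placeholder, not something from the draft:

```python
from email.utils import make_msgid

# Generate an RFC 5322-compliant msg-id to use as the Content-ID of a
# part in a multipart/related response (the domain is a placeholder).
content_id = make_msgid(domain="alto.example.com")

# make_msgid wraps the identifier in angle brackets: "<...@alto.example.com>"
assert content_id.startswith("<")
assert content_id.endswith("@alto.example.com>")
```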
B
The first is that the reviewer asks for more real use cases and examples to better clarify the extensions in the document. There are also some concerns about the clarity of the concept of abstract network element, and there are some terms that may not be very clear and may lead to ambiguous meanings. The last is that the reviewer asks for some examples of how the path vector can be used in practice.
B
Next slide, please. This slide gives the review for the first issue. Basically, the reviewer argues that in the -17 version of path vector, the use case section does not reflect real traffic steering objectives of existing systems, such as the LHC project; also, some specific examples are needed to show real use cases, and some discussion and examples for identifying bottlenecks are needed. So next slide, please.
B
In our latest revision we have introduced more detailed examples of how this extension can be used in practice. The examples are extracted from existing systems where we believe this extension can be used or integrated. For details about these examples, please refer to the latest document.
B
And we also... oh sorry, I don't see the screen so well; which slide is this, page eight, right?
B
Okay. The second issue is that some specific examples of ANEs are needed to show real use cases. For the use cases that we presented in the last slide, we also list how the different network components will be considered as ANEs in a specific use case, and we also refer these use cases to their research papers or other materials.
B
And to address the issue of identifying bottlenecks, we first point to the early studies, so readers of the document can refer to these research papers to understand how the bottlenecks can be identified; and we also put some concrete examples in the document to show how path vector can help expose bottleneck information. So next slide, please.
B
And for the second open issue, basically the reviewer asks for clarification and examples for the abstract network element. So first we give some examples in the use case section, and also, where we specify the meaning of abstract network element, we put another example there to show that different objects,
B
if they have the same property, will be treated as equivalent abstract network elements. This basically gives a better sense of how different network components can be considered as ANEs, and of why we use the notion of abstract network element: basically, to capture some common properties of different physical entities in the network.
B
And for the last issue, basically we needed to give some pointers for how clients can use this extension to orchestrate their traffic. We give some examples and pointers when we demonstrate the use cases, and it is emphasized in the document that ALTO is only used for information exposure; traffic steering is done by the application.
B
Yeah, sure. Basically, the next page is a summary of what we have done in the last revision. The issues from the last-call reviews are addressed, from my understanding, and we are actually waiting for the reviewers' feedback. So we have one question for the working group: how do you suggest that we proceed with this document?
B
Yes, yes, I think these changes are already in the -19 version.
C
Okay, good. I just wanted to make sure that the IESG has it. It would certainly be nice if the last-call reviewers responded to your changes, but I don't view that as necessary. The other ADs can judge whether or not you've addressed this, so I think we're fine to proceed to the ballot, which will happen.
D
Go ahead. Great, yeah. For the performance metrics document, we received the Gen-ART review from Elwyn Davies, a very, very nice review, and then we also got the ART-ART review from Christian Amsüss, also a very nice review. The main changes were to address the reviews, and there's one remaining thing which we have not really confirmed yet, but I'll go over what it really is. Okay, so next slide, please. So here are the main things we changed to address the Gen-ART and ART-ART reviews.
D
Basically, we made the changes from version 17 to version 19. Number one is mostly to clarify the definition of the cost metric string; that mostly came from the review from Christian. We also, of course, clarified the IANA considerations, from the AD and from other reviewers as well. And the last one, in terms of high-level structure, is an ongoing discussion with the Gen-ART reviewer, Elwyn, mostly about the cost context parameters; if you want a little reminder, the cost context parameter is on the lower right corner. That's the one where we're still trying to reach the final decision with Elwyn. I'll go over a little of the details, but those are the high-level changes we made. So now let me first go over the first change, which was triggered by the review by Christian, and the main issue here
D
is that he wanted to clarify the grammar. If you look at an earlier version, we gave a formal grammar, and after all these email exchanges back and forth, the suggestion, eventually, from Christian was to maybe just use an English description of what a cost metric string really is. And we agreed with it, because the authors
D
looked at the multiple ways to specify the grammar, and every one looks slightly more complex than necessary; we're not really dealing with a very complex grammar, just with what exactly the format of a cost metric string really is. So eventually we adopted the paragraph shown over here; of course it is in the newest version. So the definition is basically as follows.
D
Everything in quotation marks is literal text. It would be a base metric identifier, followed by an optional statistical operator string; that's a new term, but of course only editorial, not a content change. Then there was a discussion about using dot or colon, and we eventually decided to adopt the colon, because the reviewer said: look, you already have "priv:" with a colon, so why don't you use that? Essentially, our understanding is: why not just do the same thing?
D
So therefore we adopted that small change: the separator from the previous version now becomes a colon. In the examples, it becomes delay-ow, delay-ow:mean, and delay-ow:p99. So that's one change. In the same review, Christian also raised a discussion about exactly how to specify which statistical operators you want to use; for example, for a maximum residual bandwidth, you already have a max, so which one is the good metric?
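As a small illustration of the colon convention just described (a base metric identifier with an optional statistical operator; the operator names are the examples from the talk, not an exhaustive list), a client-side parser might look like:

```python
# Split a cost metric string of the form "<base-metric>" or
# "<base-metric>:<statistical-operator>", following the format above.
def parse_cost_metric(s: str):
    base, sep, op = s.partition(":")
    if not base:
        raise ValueError("missing base metric identifier")
    return base, (op if sep else None)

assert parse_cost_metric("delay-ow") == ("delay-ow", None)
assert parse_cost_metric("delay-ow:mean") == ("delay-ow", "mean")
assert parse_cost_metric("delay-ow:p99") == ("delay-ow", "p99")
```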
D
This one is the IANA considerations, and we added all these paragraphs. We clarified, from the review, that for any new cost metric value you must include three things: the identifier, the intended semantics, and the security considerations. That part is also addressed; we think we're pretty happy about that. Next one, please. So the only open issue now is how much information we want to give to the parameters field, which is shown in the rectangle at the lower right, and the main issue,
D
I think, from the review comments from Elwyn, is how much we want to make this machine readable. The current version uses a JSON value, and I think the working group's early discussion was: let's make it opaque. But the question from the reviewer is how much you want to make it machine readable. I think that's the main issue.
D
We did talk about it, and to really answer this question we have to ask ourselves why exactly a machine would want to read it, and in which sense a machine, instead of a developer, would read it. So we talked about it, and there are essentially two cases where you might want it to be machine readable. Number one is, for example, the estimation in the rightmost column: you might want to tell clients which estimation algorithm you used to estimate one-way delay and so on.
D
For example, later today you will see Kai talk about the G2 work, which uses max-min fairness, which is from a SIGCOMM paper and, I believe, is being deployed in ESnet and so on; and of course other prediction methods can be used as well, as we talked about in the conversation. So one way to make it machine readable is really an index to the method you use. The second way we identified to make it machine readable is maybe to keep some kind of detailed parameters.
D
So overall, thinking about this one, we still believe a generic JSON parameter is probably the best way to go. Your application would be able to see, for example, what measurements you are using and what the parameters are, and we can get initial deployment; then, after we have some experience, eventually I think we can upload some of this work to a bis of RFC 7971, the ALTO deployment considerations.
C
On the machine-readable thing: you know, we talked about this during the AD review prior to that, and ultimately, yeah, there's probably some value for debugging, and I could imagine a particularly power user looking at the stuff and deciding if an ALTO server was good enough based on what the algorithms were. But ultimately, for this to be useful at all, you have to write scripts
C
that say: ignore the results if I'm unsatisfied with the criteria. So that implies, you know, a registry and all that. It doesn't have to happen in this document; it's a little late for that. But ultimately, yeah, it doesn't have to be a long document or a BCP, just something defining your registry; and I think that's where this has to go for it to be worthwhile in the long run.
D
Yes, exactly.
F
This is the presentation for the new item about the YANG data model for ALTO OAM. This is already ongoing work: we try to summarize the scope and the requirements for this work, and we give an initial proposal for the current model. And we have had many open discussions on the mailing list and in our previous meetings.
F
The main goal of this work is to define the YANG data model for operation and management purposes for the ALTO protocol, because this is the blank part for the ALTO working group: we already have some extensions for the ALTO services, but we don't have any standard for operation and management purposes. The main references we base this work on are the RFCs for the base protocol and the deployment considerations. The latest version we have already uploaded to the datatracker, but we still have many discussions online.
F
But before we go into the details, I want to make clear what the scope of this work is. In the scope of this document, I think we should define a data model for ALTO server and client operation and management.
F
It defines data models for the functionality and capability configuration of the different ALTO services. We also need to define performance monitoring for operation purposes. But I also want to make clear what is not in the scope: in this work we are not trying to define any data models related to any specific implementation.
F
So, for example, we are not trying to give any specific data structure for how to store or how to deliver any ALTO information resources like the network map, because future extensions might define other information resources.
F
Now, based on these requirements, we make the objectives of the document clear. In this document we focus on these four main objectives: we support the configuration for the ALTO server setup, and we provide configurable data models for administrators to create, update and remove the ALTO information resources.
F
This part includes the different types of information resources, and it allows developers to augment new APIs. We will not try to define any specific algorithm, but we allow developers to augment the data model to plug in their own algorithm interfaces to generate the ALTO information resources, and we also allow future new extensions to extend this data model to support new information resources.
F
But the current version of this document, our current progress, only focuses on the second and the third part of this objective list.
F
The model can specify a new ALTO information resource by giving some common parameters, like the resource ID, the type of the ALTO information resource, and the dependencies of this information resource on other information resources, like a cost map depending on another network map. And also, for a specific information resource,
F
it can specify some resource-specific parameters. The main part is that you should specify some kind of creation algorithm, and this creation algorithm is provided by other developers who can augment this model. Let's give an example: we use the Layer 3 unicast topology algorithm to generate the network map.
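As a rough sketch of the kind of configuration this model describes (all field and algorithm names below are illustrative, not taken from the draft), creating two resources with a dependency might look like:

```python
# Hypothetical resource-creation entries mirroring the common parameters
# described above: resource ID, type, dependencies, creation algorithm.
network_map = {
    "resource-id": "my-network-map",
    "resource-type": "network-map",
    "dependencies": [],
    "creation-algorithm": "l3-unicast-topology",  # developer-augmented plug-in
}

cost_map = {
    "resource-id": "my-cost-map",
    "resource-type": "cost-map",
    "dependencies": ["my-network-map"],  # a cost map depends on a network map
    "creation-algorithm": "hop-count",   # illustrative algorithm name
}

# The dependency field ties the cost map to its network map.
assert network_map["resource-id"] in cost_map["dependencies"]
```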
F
And the operator can import different kinds of data sources from different protocols: the BGP routes, the routing data, the PCE data, and some other network management data. For the data source import, the operator can specify the mode to retrieve the data: you can use a reactive approach, like some pub/sub type of mechanism, or just use the proactive polling mode to periodically get the data. And for the different
F
data sources, there are different approaches: if the data is from a YANG datastore, it can be retrieved from the YANG datastore; and if it is network management information, the data can come from some other network management system, like Prometheus and other services.
F
So for the algorithms, we have some examples proposed by different individual drafts; this one gives an example of how to translate the device data into the network map.
F
In the deployment considerations, it is already suggested that some management information should be supported by the ALTO server: it should support some application performance information, and some system and service performance information, like the requests and responses for each information resource, CPU and memory utilization, the ALTO map updates, the number of PIDs, and other items like the ALTO map sizes. But we also have some ALTO extensions, like the cost calendar, the SSE, and the unified properties, and they all have different metrics.
F
But we still have some missing parts not covered by the current model; they will be considered in future versions: how to configure the server-level parameters, how to support more data source retrieval mechanisms, some pub/sub communication support, and more options for the integration policies. We also have some discussion on this part, and on the data model for lifecycle management.
F
Following the first steps, I try to summarize some main items generated from the online discussions. Sabine proposed some comments, like: does this document provide any generic network model, for example like those in the Google papers?
F
Actually, the generic network model is not in the scope of this document, but we did try to define some common interfaces to connect the ALTO information resources to other ALTO resources and the related data sources, so we think this can be useful for this document. And we also try to understand the intent-based interface for information resource creation.
F
We have not made the decision whether the creation interface should be intent-based, but in this document we try to make the creation
F
interface reactive: if the operator configures a connection between the information resource and some data sources, it reacts to that change.
F
There are also many open issues, like how to support server discovery, and how to support lifecycle management and performance monitoring.
F
Another question is how to define the YANG model for the ALTO client side, because the current model only focuses on the ALTO servers, but actually the ALTO client is also in the scope of the charter. We need to clarify the actual use cases: who will use this data model to configure the ALTO clients? It is not useful for all the ALTO clients, but for some specific use cases, like the network-application integration case or the multi-domain case, it may be useful; and also for the security part and the performance monitoring part.
F
And some comments were also given about how to refine the current model.
A
So therefore this work, I think, should closely align with the ALTO deployment RFC and also the other protocol work. As a first step for this work, I think we need to make it clear what is in the scope. I think there are three most important parts: one is information resource management; the second is data source connection; the third is server infrastructure monitoring. You already touch on these, but I think they need to be further cooked.
D
Can I go first? Okay, yeah, just let me make sure I follow the queue. A very quick question: one complexity of ALTO server configuration is the algorithm to compute the network maps, because that is oftentimes a foundational service. But oftentimes that will be very algorithmic, not declared using, for example, a YANG model. So what is the long-term solution for this that you want to address in this OAM effort?
D
Some can be declarative; for example, declare that every network node, say every autonomous system, is one PID, or every federation; or, for example, if you model a data center, which is one thing I'm considering, then that potentially can be declarative. But overall, most will be algorithmic. So how do you plan to address this?
F
I'm not sure I understand the question exactly. We're not trying to... yeah, let me switch to the page, maybe page four.
D
You collect data, but you need to specify a way, if you're doing management, and probably you want to specify how these multiple types of ALTO network maps can be constructed. So somehow this should be specified in some way to make a good management story; at the same time, the specification can be very common. Okay.
F
Yes, maybe you're right: this part is the missing part in the current model. But I'm not sure if the data model should cover this part, because I think it's more implementation specific: the OAM system should consider how to implement the data management, that is, if you collect the data from data sources, how to store it and how to query and translate it.
C
Yeah, just very briefly on the ALTO client thing: look, I'm not an ALTO practitioner, so I don't have the answers, but I just think that certainly that's a good separation point, and you should really think about it. I don't know whether there's an actual use case, particularly when you're talking about, you know, inter-domain stuff that isn't even specced. My instinct is that that is maybe not something worth putting the effort into. Thanks.
D
I'll start, yes. Okay, so I'll talk about ALTO transport; that's a charter item. Right now there's a small team of people, Roland and Danny and others, working together; we're discussing ALTO transport. Of course, the long-term goal is to provide some design to address the working group charter item, which is ALTO transport on new protocols, but overall we actually take a slightly broader, more fundamental approach to designing the ALTO transport.
D
Of course, this is only one small team right now; other people are giving feedback, but currently these are the results from this four-person team. Next slide, please.
D
So, the high-level goal: to really address this working group charter item, we decided to take a slightly more systematic approach. We're not designing anything new yet, because before we really design anything new, we decided we should have a systematic approach. Of course, the long-term goal, or the ultimate goal, for example by next year, is really to push for deployment, not to design something super fancy. So we should address real issues.
D
Therefore, for the ALTO transport work, we focused lately on the following four items. Number one is a systematic analysis of the transport workload.
D
If you want to talk about transport, you really have to understand what services and what service requirements we're dealing with, before we talk about designing for HTTP/2 or HTTP/3. Then, after we have the workload, we want to very quickly set up the environment to evaluate the performance and effectiveness of the current ALTO transport, which is the base protocol using HTTP/1.x, and also RFC 8895, ALTO SSE. Therefore we want to develop
D
benchmarking, to understand where exactly the weak points really are. Then we also got started talking about the design, and the benefits, or lack of benefits, of integrating a new transport, for example HTTP/2 or /3, into the ALTO transport. So we're getting started; we have some initial efforts, and that's actually a part of the effort as well. And then,
D
finally, of course, this one is absolutely not in the charter, but we also thought it would be quite interesting, when we are designing the new transport using HTTP/2 or /3, to also take a look at the generic transport of network information to applications: what exactly other transports are doing, so that our design fits into the bigger picture, or at least is comparable, and we can use other people's designs as well.
D
So this is the first line of work, which we did lately, I think after the charter was approved. Basically, what we did was, well, there's of course a much longer version, but we don't want to make the slides too busy to read: we did the work of listing all the major ALTO services.
D
For example, if you look at the leftmost column, that's a list of all the major ALTO services which we think we are dealing with. From the beginning, of course, you have the information resource directory, and you have the network map, the cost map and the filtered maps, the endpoint property service and the endpoint cost service; then the calendar, unified properties and path vector, which are ongoing, and I think they are getting somewhere; and the CDNI capabilities and footprints. So those are the services. The second column gives us the design:
D
what kind of input we have, and whether they are really GET services or POST services; and if it is a GET or POST service, what the basic design is, what kind of input we have and what kind of output we have, what the encoding really is, and so on. So that is the second thing we analyzed. Then we looked at what exactly the data structure is, because transport effectiveness often depends heavily on the data structure, on the query, and so on.
D
So we classified all these different services by what kind of fundamental data structure they're using, for example the network map. Well, let's take a look at the cost map, the third row: its basic information structure is keyed by the source. Fundamentally, because ALTO was designed based on the historical design of RFC 7285,
D
it is a three-level key-value store: you index from the source PID of the network map, then to the destination, and then index into the values; and this one is dependent on network maps, so there are dependencies with other information models. So, how do we handle this? Then we analyzed the scaling of each information resource: how much data are we talking about? For example, let's look at even the simple one, the network map, and we did some analysis.
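The three-level indexing described here (source, then destination, then the value, in the RFC 7285 cost map style) can be pictured as nested key-value maps; the PIDs and costs below are made up for illustration:

```python
# An RFC 7285-style cost map viewed as nested key-value maps:
# source PID -> destination PID -> cost value (all values illustrative).
cost_map = {
    "pid1": {"pid1": 1, "pid2": 5, "pid3": 10},
    "pid2": {"pid1": 5, "pid2": 1, "pid3": 15},
}

# Indexing goes source, then destination, then the value itself.
assert cost_map["pid1"]["pid3"] == 10
```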
D
We said: oh, okay, the size of this one will be proportional to the number of CIDRs, because that's how you group a network. So we did some initial analysis. For example, even for a very simple case, say you want to model the global Internet and aggregate everything together: if you don't do any aggregation and you want to transport the information, then we're talking about the 866k global BGP prefixes, which is already around 170k CIDRs to transport.
D
So that's the size we're talking about, around 800k or 900k addresses, so it'll be huge if we want to really address it in that way. Then we analyzed the transport and... oh, for me, yeah, I actually don't have a lot of slides. So, next one: we talk about what kind of stability expectations... oh, can you go back?
D
Some can be dynamic, and then we talk about: if it's really not stable, what kind of incremental changes do you have? For example, the red column shows the potential operations to support, because they have different implications on scheduling, on the capability of the transport, as well as on efficiency. Okay, next slide, please.
D
Next, please. Okay, so that gives us an analysis of all the services, which is very nice, and then this is our current planned effort. Before we really do the design, we want to get initial results. So therefore Danny and I, and Roland and colleagues, will start to talk about it. Right now our decision is that the first step is actually to evaluate the current transport efficiency and also provide initial benchmarking of ALTO services.
D
So at this moment — very generously — BENOCS is fully open to using its infrastructure as an evaluation environment; we can evaluate all kinds of transport. And Greater Bay Network, which is a quite large network covering Hong Kong and Shenzhen, about 13 cities, is also open to using its infrastructure to evaluate all the transports, and also as a way to model their networks. Then, for the very basic part about transport, we have now decided to deploy five benchmarking services — you could call it a kind of ALTO transport spec.
D
So, for example, we have the filtered cost map, the endpoint and unified property map, and CDN nodes — because there are CDN nodes inside the network and we model them as well. Then we do flow redirection based on FlowDirector pointing to the CDN nodes, and we also deployed the path vector providing available residual bandwidth for all the networks. So actually we're really trying to use them not only to do transport but also to model those networks themselves — in particular, Greater Bay Network.
D
It is a relatively larger network, and then we're going to evaluate the transports. Oh, I have two minutes. So transport one is HTTP/1.x, of course with keep-alive; two and three are initial designs, which we call ALTO SSE; and then on the right-hand side, the last column, are the collected metrics — for example, what each metric collects — so that we can compare the different transports. Next slide, please.
D
So here is the initial design — and of course it's really a benchmarking initial design; I don't want to say this is in any way or form a finalized design at all. So that's the initial design of ALTO transport using HTTP/2 — well, HTTP/3 and HTTP/2. The left-hand side is ALTO SSE, which is RFC 8895, and the initial current design is based on that design. But we want to move this into the right-hand design, which we call ALTO H2, over HTTP/2.
D
So here is a list of requirements. Basically we want to serve all the resources possible using this design; you can do addition or deletion, and you can signal start or stop — you know, what ALTO SSE can provide already. And of course a very important — actually challenging — part is R4: how to do incremental updates. And then we also say: if we're using HTTP as the transport, we want to make sure we follow all its semantics, and so on. Next slide, please.
D
So here the main design issue — okay, I'm running over; I only have 18 seconds left — is: how do we really encode incremental updates? Because one very nice design feature of ALTO SSE is that we allow you to send the whole information, or you can do JSON patch, or you can do JSON merge patch — it's very flexible. But how do we really encode that using HTTP/2? Next slide, please.
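As a concrete illustration of the JSON merge patch option mentioned here, a minimal sketch of RFC 7386 semantics applied to a made-up cost-map fragment (the PID names and costs are illustrative, not from any real ALTO server):

```python
def merge_patch(target, patch):
    """Apply an RFC 7386 JSON Merge Patch: objects merge recursively,
    a null member removes the key, anything else replaces wholesale."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)       # null means "delete this member"
        else:
            result[key] = merge_patch(result.get(key), value)
    return result

cost_map = {"PID1": {"PID2": 5, "PID3": 10}}
update = {"PID1": {"PID2": 7, "PID3": None}}  # change one cost, delete another
print(merge_patch(cost_map, update))  # {'PID1': {'PID2': 7}}
```

The open question the speaker raises is not this patch logic but how to frame such updates on HTTP/2 streams.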
D
Oh — so that's the initial design. So here is a list of initial design options. Design one: an incremental update stream is equal to a single HTTP stream. So essentially we then have a content-indication layer to encode what content type you are really using — one way is to reuse exactly the ALTO SSE encoding where reasonable and possible, or we can design a very simple content layer. The second option — I think that's really the one suggested in RFC 8895, Section 3.3 — is that for every single update
D
we open a new HTTP/2 stream. Which one will be good, and so on? But the major problem here is that we might be violating the PUSH_PROMISE semantics. Overall, I think that's an issue coming from HTTP/2, meaning server push was really not designed for push notification or incremental updates; it was mostly designed essentially for prefetch — web page prefetching.
D
Please — yeah, so this slide basically says that's also the space we're working in; we will take a look at that. Actually, ALTO is one of the efforts in the IETF to send network information to applications; the IETF also has the effort of RFC 3168 to send ECN, and there are also efforts, for example, from 3GPP. I want to make sure our design will be compatible, or matching as much as possible — but that's of course side work; we're also focusing on the early part.
A
G
C
Richard? Martin? Sure. Yeah, I read the draft. I was a little confused about where you were going with it — which maybe is appropriate for an initial shot, so thanks for that. I do — well, finally, you know, as an introduction you talk about multi-streaming, head-of-line blocking and stuff like that. Really, we don't need to write a specification for just taking existing RFC 7285 requests and responses and putting them on multi-streaming. That should not be a focus of the spec.
C
I think we need to look at things where you'd actually modify the mechanisms — and you alluded to that, with the push versus SSE. And by the way, as a comment: H3 has push too, so you don't need to treat H3 as a separate thing. The only mechanism I could think of that isn't there is priorities.
D
We do, we do. For example, they have dependencies, and we want to use all the dependencies — for example, we need stream reset, allocation, exclusive new stream and so on. So we do want that: because a cost map depends on a network map, with the allocation you want to give resources according to those dependencies — you want to send updates to the network map before the cost map; otherwise you're just increasing the inconsistency, yeah.
C
It sounds like it's really going in the right direction. I just want to steer you away from writing a lot of text about just taking 7285 request/response and putting it on H2, because that should be seamless — and toward focusing on really what the new APIs are that H2 and H3 give you, because that's what needs to be specced out. Thanks. — Sure, definitely.
B
Oh — so does everyone see the slides now? Okay, okay. So hello, everyone, it's me again. Today I'm going to give an update on our integration with the G2 system, and since they have some IP issues, we are probably more focused on what kind of services can be provided with this system instead of how they are realized inside their framework. So today, basically, I'm talking about the bottleneck service with ALTO. So next slide, please.
B
And in this talk we basically cover four aspects. First we give some basic concepts of the bottleneck service, and some use cases of how this information can be used by applications to better orchestrate their traffic. Then we give some basic information about the G2 optimization framework, and then some initial designs — basically some extensions that we want to make to integrate the bottleneck service with ALTO — with some examples.
B
And so here we basically give an overview of what the bottleneck information is as a service. Basically, we try to argue that the bottleneck information can be provided as an ALTO service. I think there are a few points here.
B
First: many networks use dynamic resource allocation to improve network utility. For example, in networks we basically use TCP congestion control to allocate bandwidth resources, and TCP congestion control can actually be modeled as an optimization problem that dynamically allocates resources based on different attributes. Also, many cloud platforms or orchestration systems
B
basically manage the resources of their infrastructures using dynamic resource allocation. For example, Google's WAN optimizer, the BwE system, actually uses some form of max-min fairness to allocate resources between different flow groups. And the bottleneck information is actually important for applications to predict network performance and also to get guidance for the traffic optimization process in such networks.
B
Basically, some examples include throughput prediction, and also optimization for time-bounded flows by rate-limiting the application's own flows. The G2 papers also talk about use cases such as network planning, but since we want to find the common services provided by the bottleneck service and the ALTO service, we will be mostly focusing on the first two use cases. And I think the last point: the bottleneck service can be an important motivating use case for ALTO. We already have the G2 system.
B
They already have a lot of initial results and also software development, even with some simulations on network simulators, and I think they are actually moving towards some potential deployment in networks for supercomputing or software-defined WAN optimization. So I think this service can be valuable for future deployment.
B
So next slide, please. And here are some definitions for the bottlenecks. First, dynamic bandwidth allocation is usually based on some optimization problem — for example, for max-min fairness, the system is trying to maximize the minimal flow allocation for the traffic in the network. And in the general sense
B
we also have many others — for example, TCP congestion control actually allocates resources based on some unconstrained network utility maximization problem. For these problems we can give formal derivations of the bottlenecks, and if we describe them in English: a link is a bottleneck of a flow if, when we increase the capacity of the link, the rate of the flow can be increased as well. So, basically, that's an informal definition of the bottlenecks.
B
So next slide, please. And here we actually have an example of what bottleneck links look like. Assume in this network we have two links, L1 and L2, and we have three flows: F1, F2 and F3. If the network is allocating resources with max-min fairness, then, using the definition we just introduced, we can see that the bottleneck of F1 is actually L2, the bottleneck of F2 is L1, and the bottleneck of F3 is L2.
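This example can be reproduced with a minimal progressive-filling (water-filling) sketch. The capacities (L1 = 1.0, L2 = 2/3) and paths (F1 crosses both links, F2 only L1, F3 only L2) are assumptions chosen so the bottleneck assignment matches the one described above:

```python
def max_min_allocation(capacities, paths):
    """Progressive filling for max-min fairness: repeatedly saturate the
    link offering the smallest equal share; flows crossing it are frozen
    at that share, and the saturating link is their bottleneck."""
    rates, bottlenecks = {}, {}
    cap = dict(capacities)                     # remaining capacity per link
    active = {f: set(p) for f, p in paths.items()}
    while active:
        share = {}
        for link, c in cap.items():
            n = sum(link in p for p in active.values())
            if n:
                share[link] = c / n
        sat = min(share, key=share.get)        # link that saturates first
        for f in [f for f, p in active.items() if sat in p]:
            rates[f], bottlenecks[f] = share[sat], sat
            for link in active.pop(f):
                cap[link] -= share[sat]
        del cap[sat]
    return rates, bottlenecks

# Assumed topology reproducing the slide's example
rates, bn = max_min_allocation(
    {"L1": 1.0, "L2": 2 / 3},
    {"F1": ["L1", "L2"], "F2": ["L1"], "F3": ["L2"]},
)
print(rates, bn)
```

With these numbers, F1 and F3 each get 1/3 (bottlenecked by L2) and F2 gets the remaining 2/3 of L1 (bottlenecked by L1), matching the assignment in the talk.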
B
And based on the analysis we just made, we can actually draw a graph of the bottleneck relations between the links and the flows. For the network we just described, its bottleneck structure looks like the one on this page, and we can read the directions of the edges: for example, if an edge is pointing from a link to a flow,
B
it basically says that this link is a bottleneck link of the flow; and if a flow is pointing to a link, it basically means the link is not a bottleneck of the flow, but the rate of the flow has an impact on the link's
B
share for the other flows. And this structure enables a quantitative analysis of the bottleneck effects of flows on other flows. So next slide, please. For example, if you want to answer the question of what happens if the capacity of L1 is increased by a small amount, delta, we can propagate this change along the bottleneck structure, and we can see that —
B
okay, if we increase the capacity of L1, then the rate of F2 can also be increased, exactly. And another type of question is: what if we decrease the rate of, for example, F3 by delta? First, with the bottleneck structure we just described, if we decrease the rate of F3, the bottleneck structure actually changes to the one on this page, and next up —
B
and again we can analyze how this rate change propagates along the bottleneck structure, and then we can analyze how other flows might be impacted by the change we just made. So let's start with this.
B
And with this bottleneck structure, we actually can analyze —
B
So the first use case we can provide is a throughput prediction service. In this service, basically, the application can specify a set of flows, and then the server will be able to answer what the predicted throughput is for this set of flows in the network. So next slide, please.
B
And basically here is a complete example of the throughput prediction service. Assuming we have three background flows — F1, F2 and F3 — and a client wants to query the rate for F4 before even establishing the connection: the server monitors the background flows, and then — so, next slide —
B
it returns, for example, 2/3 megabits per second — so it is able to answer this throughput prediction query. So next slide, please.
B
And another use case: for example, in the G2 paper they describe the case where you want to speed up a certain flow, but you can only rate-limit some flows with lower priority. To be able to solve this kind of problem, you can also use the bottleneck service. So next slide.
B
And, for example, assume that the application wants to optimize the rate of F5 to be 0.8, and it can only rate-limit flows F2, F3 and F4. Then you can, for example, query the ALTO server and get the response shown on the next page.
B
And for this query: if we look at the flow gradients in the bottleneck structure, we can see that with this structure we can actually derive how different flows would impact the rate of the target flow. Basically, we list the values at the top, and from this page we can see that if we rate-limit F2, it actually does not help us to increase the rate of F5. So on this page —
B
with this information, the application can choose to rate-limit F3 and F4. So, next — and again, when we rate-limit F3 and F4, the bottleneck structure actually changes, and with this new bottleneck structure we can again get a quantitative analysis.
B
Yep — so, basically, we just give these two examples to demonstrate how we can use the bottleneck service to solve flow-scheduling problems in the application. And here is an introduction to the G2 work: G2 is the application framework that, based on this quantitative theory of bottleneck structures, develops efficient algorithms to compute the bottleneck structure we introduced, and they also give some examples of how you can conduct application-layer optimizations for your own flows in networks with max-min fairness.
B
And I think the G2 work is getting a lot of attention in both academia and industry, and they actually have some ongoing deployment efforts in both supercomputing networks and also data center networks. So next slide, please. And here are some examples of the evaluation results that they got from their studies.
B
So it has shown some potential in predicting throughput for TCP traffic, and also demonstrates many use cases — such as optimal routing, network planning and TE for time-bounded traffic, as we just described — that can be effectively solved using the bottleneck structure.
B
So next slide, please. And in the next few slides we give some initial ideas of how this service can be provided, both with ALTO as the southbound and also as the northbound. Basically, in this slide we focus more on providing the bottleneck service as an ALTO service — basically using ALTO as the northbound. So next slide, please.
B
And we summarize the requirements that we get from the use cases. For throughput prediction, we need to specify flows and their status, and we also need to get the throughput prediction results for those flows. For the second use case, we not only need to specify flows and get throughput predictions; we also need to specify some customized linear constraints on the flows, and we might also need information about the bottleneck structure, including link gradient and flow gradient values.
B
And actually, the bottleneck service can be provided for some specific use cases even with the current ALTO base protocol and extensions. For example, if the ALTO server is providing throughput prediction for site-level flows — for example, flows between different networks, say between different data centers — and the resources are allocated using some kind of max-min fairness,
B
then, if we model each data center as a PID, this type of information can already be provided using some existing ALTO extensions. For example, here we need to specify whether the flows are unestablished or already established, and then we can use the cost metrics from the performance metrics extension — such as throughput — as the cost metric, and in the response we can return, basically, the predicted throughput for each flow. Next slide, please.
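A hedged sketch of what such a query could look like as an RFC 7285 Endpoint Cost Service request body. The addresses are RFC 5737 documentation addresses, and "throughput" stands in for whichever metric name the performance-metrics extension actually registers — both are illustrative, not taken from the talk:

```python
import json

# Illustrative ALTO Endpoint Cost Service request (RFC 7285 shape);
# the cost-metric name is an assumption standing in for the
# performance-metrics extension's throughput metric.
request = {
    "cost-type": {"cost-mode": "numerical", "cost-metric": "throughput"},
    "endpoints": {
        "srcs": ["ipv4:192.0.2.2"],
        "dsts": ["ipv4:198.51.100.34"],
    },
}
print(json.dumps(request, indent=2))
```

The server's response would then carry the predicted throughput as the numerical cost for each source/destination pair.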
B
Sure — I think we only have a few pages left. And we can also specify throughput prediction for TCP-level flows. So next slide.
B
Yeah — and also, to support the use case of time-bounded data transfers, we have some initial design as well. So next slide, please.
B
For example, here we actually use the multi-cost extension to get multiple pieces of information about the bottleneck service — for example, we need the throughput and also the flow gradient information. Then the response will contain both the throughput and the flow gradients, and the flow gradient information can be used by the application to decide which flow to rate-limit.
B
So next — and the application can add the rate limits as flow constraints, and then the ALTO server can predict the new throughput based on the constraints. Basically, in this example the application can get the expected flow rate for its own target flows in two iterations. So next slide, please.
B
Yeah, so here is a summary of this talk. Our first argument is that the bottleneck service can be useful in many networks; the solid groundwork has been established by G2 for max-min fairness and is moving towards real deployment. And we also believe that the bottleneck service can potentially be integrated as part of the ALTO framework. So basically we are interested in how we can proceed with this work — is there any interest from the working group to standardize this type of service? Yep.
G
So, Richard — I think we don't have time. Can we move to the next presentation? Okay, yeah. I mean, just like —
D
One very quick comment: basically, the G2 team presented to the ALTO team in the meetings, and there is tremendous interest. I think this actually can be a very good use case for ALTO deployment.
D
We can focus initially not really on extending ALTO — just use the existing ALTO for deployment — but later, of course, to fully use the features, we might need to do a little bit of simple addition: for example, inside the current filter, a flow-level filter, which is very simple — essentially like a MIME type.
A
F
H
Next — yeah, let's start with FlowDirector. In a few words: FlowDirector collects data from the internal state of the ISP. Basically, it builds an inventory from the network configuration and the control plane. Secondly, FlowDirector computes the prefix mapping and compiles that into an accessible database, and finally it communicates this information to the hypergiants.
H
Just to quickly introduce these terms: hypergiants — for us at least; you know, there are multiple definitions — are large networks that provide services mostly to end users. They are globally distributed and generate a large amount of data — for example Facebook, Netflix and so on. And basically FlowDirector communicates the network information to the hypergiants that want to collaborate, using different protocols, including the ALTO protocol. Next slide,
H
please. Yeah, just a couple of words on the history of this: it took more or less 10 years of research.
H
First of all, we have a collaboration with one hypergiant. This hypergiant had more than 10 percent of the total traffic inside the ISP's network.
H
Basically, there were two KPIs to consider: for the ISP, we wanted to reduce the long-haul traffic, and for the hypergiant, we wanted to reduce the latency — you know, basically bring the server closer to the user. FlowDirector basically uses a mapping function that is a combination of the path length and the distance. And just to say: when FlowDirector gives a path ranking, this ranking is basically a suggestion that can be used by the hypergiant.
H
This is basically because there was, you know, a sort of misconfiguration and the mapping was reset to random; but when the system was re-enabled, the traffic went down, and in the end, you know, we had a significant reduction, especially in the long-haul traffic, which is the most expensive for the ISP. Okay, this is basically one of the benefits for the ISP. Next slide, please. And in terms of the benefits for the hypergiants: like I said before, we use the distance as a proxy for latency.
H
So in terms of the distance, we reduced the gap — that is, the distance between the servers and the clients — by about 40 percent, which means that FlowDirector localizes the traffic. And again, you can see here that in the broken phase this goes up really high, you know, because the mapping got broken; but at the end, when the system restarted, the traffic goes down again.
H
Okay. Next slide, please. Just to give an overview of what was implemented and deployed: currently implemented is the base ALTO protocol with all the provided features — you know: information resource directory, network map, filtered network map, cost map, endpoint services, etc.
H
Here is one point where we differ from the RFC, in at least one point: the endpoints are not IP addresses, they are IP subnets. Therefore we don't identify just one single host, okay. And in the case of the incremental updates,
H
this was partially implemented, for two reasons: it was still a draft when we started the implementation, and it also requires, you know, some structural changes, because it's a little bit quirky. In terms of deployment in production, we basically have the network map and cost map features.
H
This is because, you know, the main traffic simply flows from the CDNs to the end users — and, you know, the CDN caches are not only embedded into the ASes of the CDN itself, but can also be embedded into foreign ASes. And one interesting question here is how to group the prefixes to form the PIDs and, in the end, create the ALTO resources. Next slide, please.
H
Okay, just to quickly provide info about how the data is accumulated: with routing protocols, you know, we collect information about the links, routers and networks; with flow information — for example, from NetFlow — we can collect ingress and egress points; and we can get, you know, information from network monitoring to try to collect information about utilization, bandwidth, etc. Great, next slide, please.
H
And okay — just for the ALTO map calculation: we are, you know, attached to a large European IXP. As you can see here, there are a lot of routers and BGP prefixes.
A
H
Slide, please. Okay, for the network map, we basically defined three different PIDs: internal, external and off-net — off-net basically for prefixes from directly connected ASes; this means no third-party traffic on peering links. And yeah — next slide, please; let's see how the time is going. Okay, for the cost map: basically, we provide three different cost maps, including the hop distance.
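The three-PID layout described here could look like the following RFC 7285 network-map fragment. The prefixes are RFC 5737 documentation ranges and the vtag is made up — this is only a sketch of the shape, not BENOCS's actual map:

```python
# Illustrative RFC 7285 network map with the three PIDs described in the
# talk; prefixes and vtag are placeholders, not real deployment data.
network_map = {
    "meta": {
        "vtag": {"resource-id": "default-network-map", "tag": "v1"}
    },
    "network-map": {
        "internal": {"ipv4": ["192.0.2.0/24"]},
        "external": {"ipv4": ["198.51.100.0/24"]},
        "offnet":   {"ipv4": ["203.0.113.0/24"]},
    },
}

# e.g. enumerate the PIDs an ALTO client would see
print(sorted(network_map["network-map"]))  # ['external', 'internal', 'offnet']
```

A cost map would then be keyed over these same PID names, which is why, as noted later, the network map must be published before (or together with) the cost map that depends on it.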
H
Okay, some statistics from our running ALTO server: all the network maps and cost maps are updated every five minutes; we have more or less, you know, 250,000 prefixes across 1,700 PIDs, and the average network map size is about 6 megabytes. In the case of the custom cost map metrics, we have more than 3 million PID pairs, and the average size of the cost map is more or less 47 megabytes.
H
In this case, a new network map is always available before the currently available cost map. On the other hand, there is a limitation on the IP addresses in the endpoint cost services: you know, the RFC states that the sources and destinations are endpoint IP addresses; however, we want to get the cost for a certain region.
H
Yeah — to address this, we considered a set of modifications to deal with the previous problems: for example, mechanisms to publish all maps together when the last one is ready; on the other hand, our ALTO server supports prefixes in the endpoint cost services; and we are also adding metadata fields like TTL timestamps in the ALTO server responses.
H
Today I wanted to briefly show how this collaboration works in a system that is currently deployed — and, you know, it's worth it for both ISPs and hypergiants. In terms of next steps, the idea, you know, is to contribute our ALTO implementation and deployment experiences to the community, and — while, you know, ALTO may not be the highest-priority activity in terms of implementation for us — as Richard said, we are fully open to using our infrastructure as an evaluation or testing environment. So I think that's all. Okay, thank you.
A
Thank you, Danny — and I think, glad to see FlowDirector has already been moved to a production stage. So thank you for sharing the problems and limitations you found in the ALTO implementation. I actually suggest you take it to the list for some discussion; it will be good input to the ALTO deployment experience update, yeah.
A
I
Yeah — the motivation of this work, essentially, is the understanding that networks are becoming more and more consumable by applications and services. We are observing this trend, and there are discussions in other working groups in the IETF as well about this idea of application-network integration, or network-application integration — so, different ways of interchanging and exposing information and capabilities in one direction or the other.
I
So this is also a trend that is being observed in other initiatives outside the IETF — these new forms of exposing capabilities. And yeah, the final idea, in the end, is to inform, or to allow applications to get informed about, the situation of the network, in such a way that these applications can take better and optimal decisions. So it's —
I
I mean, moving from the decoupled way in which applications and services are run today into something much more, yeah, information-based — trying to avoid that inferring or guessing of network capabilities and status, and basically collecting information that could be useful for the applications and services to have better, optimal criteria for the delivery. There are a number of examples in, let's say, other solutions and other networks — for instance the 3GPP network exposure function, which somehow is an inspiration for this work,
I
where the application functions can be informed about capabilities of the 3GPP underlying network. Also, we have other initiatives like the ETSI multi-access edge computing APIs, where these APIs can provide information about the capabilities of the access network or even radio information; and similarly, some ideas for the O-RAN RAN Intelligent Controller, and so on.
I
So — Qin, okay, thank you. So yeah, ALTO in fact was conceived from its inception as a mechanism for providing information to support optimization decisions in applications. So in this sense, ALTO seems to be very well positioned to take this role of network exposure function for IETF capabilities.
I
Initially, the information that can be exposed by ALTO is the topology information, which is now being expanded to a number of other capabilities — such as performance, or the segment view according to the path vector, etc. We saw at the beginning some of these possibilities. So, taking into account these capabilities, we structure in the draft different possibilities of information exposure — somehow we are collecting a kind of catalog based on existing and foreseen work.
I
There has been existing work, and assuming what is documented in RFCs, or in documents that will become RFCs soon, we can count on the network topology with associated cost metrics, on the performance metrics, and on the segmented view leveraging the path vector solution — and we could even try to cover some other cases that are not actually documented at all.
I
Also, we can count on information exposed by proposed augmentations — and here maybe we can mention the ALTO service edge, also the underlay view for overlays: the APN, the cellular network or the CDN overlays — and there are also some individual drafts now in the group analyzing those situations. And also, just to mention other information:
I
other potential information that could be exposed has also been discussed in the weekly ALTO meetings. This is not yet documented, but the idea, according to the progress of the discussion, would be to document it and include it in the draft. Here we could mention, for instance, multipath support — that could also be part of these exposure capabilities.
I
So next slide, please. Thank you. So here is just a view — graphically, or for illustration purposes — of what could be the interaction between ALTO as a network exposure function and potential clients of these capabilities. We could have external applications; here we can count on external CDN logic, so we could consider external CDNs — like the case that Danny presented before — or internal CDNs.
I
That, for instance, is the proof of concept that Telefonica is integrating with its own internal CDN. But we could also consider cloud application orchestration, or the 3GPP network exposure function, or whatever other external customer overlay network that could leverage IETF network capabilities. And we could also have a number of internal customers, like the SDN controller, or the CDN logic that has also been mentioned, and so on — maybe some other applications.
I
Next slide, please, Qin. Also, a potential alternative way of collecting and providing information would be to position these applications internal to the network. Here we put the 3GPP application function as internal to the network, in the case that this 3GPP network is also handled by the same administrative domain, in the end, yeah.
I
Essentially, the message here is to integrate, to collect the information from the network into ALTO, and to expose this information to external or internal customers through the ALTO protocol, with or without extensions, depending on the kind of information.
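As a rough illustration of this exposure model, the sketch below shows an ALTO-style client choosing the cheapest source PID from a cost map whose layout follows the RFC 7285 cost-map structure; the map contents and PID names are invented for illustration, not taken from any real deployment.

```python
import json

# Hedged sketch: pick the lowest-cost source PID toward a destination,
# using a cost map laid out as in RFC 7285 (cost-mode "numerical",
# cost-metric "routingcost"). The data below is illustrative only.

def best_source(cost_map: dict, dst_pid: str) -> str:
    """Return the source PID with the lowest routing cost toward dst_pid."""
    costs = cost_map["cost-map"]
    candidates = {src: dsts[dst_pid]
                  for src, dsts in costs.items() if dst_pid in dsts}
    return min(candidates, key=candidates.get)

example = json.loads("""
{
  "meta": {"cost-type": {"cost-mode": "numerical",
                         "cost-metric": "routingcost"}},
  "cost-map": {
    "pid1": {"pid3": 10},
    "pid2": {"pid3": 4}
  }
}
""")
print(best_source(example, "pid3"))  # pid2: lower routing cost toward pid3
```

A real client would fetch such a map over HTTP with the ALTO media types; the point here is only that the exposed information is easy to consume programmatically.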
So next slide; next and final slide, please.
So the idea would be to collect feedback from the working group, and for sure we are working on preparing a submission with more detail on the usage of ALTO as the IETF network exposure function, with the final purpose, of course, of positioning ALTO as this IETF NEF for feeding applications and services.
D
I just want to give one very quick comment. I think this is super, super cool, and I really hope that there is some way we can push this forward, either, you know, supporting the NEF, for instance in 3GPP, or in other areas. I think this should be very interesting work; I strongly support it, and I'm very interested.
J
So, is my screen available?
J
Okay, hello everyone, this is Pawnee from China Mobile, and this is a computing-aware networking use case of ALTO. First is the background about the ICT infrastructure evolution: service providers are offering an integrated computing and networking infrastructure to provide the best user experience, such as low latency and high reliability, and to optimize the utilization of network and computing resources. But there are also some challenges in edge computing, such as geographically scattered sites, a large number of sites, resource limitation at the edge, heterogeneous hardware, dynamic load, and others.
J
All of these challenges are not solvable solely in the computing domain nor in the network domain, so we need to find a collaborative approach.
J
The case that we want to see is that both the network and computing factors are at the same level in influencing the user experience, so it is better to have scheduling among the different sites to find the suitable one to offer the service.
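The scheduling idea described here can be sketched as a tiny cost function over candidate sites; the metrics, weights, and site values below are assumptions for illustration, not part of any CAN specification.

```python
# Illustrative sketch of joint computing + network scheduling: choose the
# edge site minimizing a weighted combination of network delay and site
# load. Weights and site data are hypothetical placeholders.

def select_site(sites, w_net=0.5, w_cpu=0.5):
    """sites: iterable of (name, network_delay_ms, cpu_load_fraction)."""
    def cost(site):
        _, delay_ms, load = site
        return w_net * delay_ms + w_cpu * 100 * load  # load scaled to 0-100
    return min(sites, key=cost)[0]

sites = [
    ("site-a", 5.0, 0.90),   # nearby but heavily loaded
    ("site-b", 20.0, 0.10),  # farther away but mostly idle
]
print(select_site(sites))  # site-b: its combined cost is lower
```

The point of the sketch is that neither the nearest site nor the least-loaded site alone is necessarily the right answer; both factors enter the decision at the same level.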
So, due to these challenges and this use case, we put forward computing-aware networking. Here is a definition of it: computing-aware networking is proposed based on ubiquitous network connection and highly distributed computing resources.
J
It proposes new mechanisms to be aware of the distribution and status of computing sites in the network, and combines service addressing with a framework of joint optimal routing and load balancing to schedule the computing and network resources based on the awareness of service requests, so as to improve the efficiency of the computing and network resources. Something that needs to be clarified is the relationship between CAN and Dyncast, which I had joined the meetings about at previous IETF meetings; Dyncast is a key function of computing-aware networking.
J
If the resources are insufficient, the operator can be informed to increase the hardware resources, so ALTO can be used to transmit that information; it has a similar idea to the draft on the service edge. Another aspect is the scheduling of service requests.
J
The real-time node selection still depends on distributed routing such as Dyncast, and we clearly know that the CAN framework, or any component of it, might not be a suitable work item now, but the trend of infrastructure evolution may bring a new opportunity for it, since ALTO can help give suggestions on where to deploy the service nodes and can collect some useful information. If the network can get the computing information, it can also send it to the ALTO server.
So here are some questions received from the mailing list.
J
Question one is: should multiple protocols or one protocol be used to collect the information? We think multiple protocols will bring the issue of synchronization, which is not easy and causes some additional expense.
J
Maybe extending BGP can be an option, but the update frequency is a problem. Another question is whether to decouple ALTO from the specific CAN architecture; it could be a good way to combine them and also to find some way to solve the problems now.
And the last question is about the requirements for real-time information: how to shorten the period of information refreshment and measurement. Real-time information is an important factor in the CAN framework, and we can find some mechanisms in the existing RFCs.
J
Okay, yeah, so this is the last one. Okay, that's all, thank you.
E
Okay, let's get started. Hi, I'm Jushung from Microsoft Research Asia, and I'm going to give a talk on bandwidth estimation on OpenNetLab, on behalf of the OpenNetLab community. This talk mainly focuses on introducing bandwidth estimation and discussing the possibility of making it a part of ALTO.
E
I think the most important indicator reflecting the user experience is the poor call rate. In the figure, we can see that the number of poor calls and the poor call rate increased dramatically when COVID-19 broke out in the U.S. around March. So it is very urgent for RTC to continue improving call quality to attract more users.
E
Bandwidth estimation is actually one of the key reasons for the poor call rate. In the table, we list the top 10 reasons for poor calls: you can see that 28.9 percent of poor calls are highly related to bandwidth estimation, and 40.9 percent are related to it. The problems include no sound, distorted audio, background noise, audio delay, dropped calls, low video quality, and video freezing.
E
That means every company has its own design, and the implementation is kind of closed, so it usually uses a single model for all users, and it is hard to innovate in this area.
E
So I think we can use the concept of ALTO to make bandwidth estimation a standard service. It can make the architecture simpler and make the service open; it can enable more customization, and everyone can contribute to this service and share its technology to boost innovation. Actually, we ran a bandwidth estimation challenge at ACM MMSys this year.
E
The goal of the challenge is to optimize the QoE for real-time communication, for example the video and audio call quality, and every participant had to design a bandwidth estimation model or algorithm to estimate the current bandwidth based on the network status.
E
We used OpenNetLab as the testbed for this challenge. In the evaluation period, we actually used more than 40 runs per scheme on OpenNetLab: it includes nine videos and three networks (high, medium, and low bandwidth), with five runs per scheme, run in a round-robin way. The final score is the weighted sum of the video score, audio score, and network score.
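The scoring just described could look like the sketch below: a weighted sum of the three component scores. The weights are placeholders, since the actual challenge weights are not given in the talk.

```python
# Hedged sketch of the final-score computation: a weighted sum of video,
# audio, and network scores. The weights are illustrative assumptions.

def final_score(video: float, audio: float, network: float,
                weights=(0.4, 0.3, 0.3)) -> float:
    w_video, w_audio, w_network = weights
    return w_video * video + w_audio * audio + w_network * network

print(final_score(80.0, 75.0, 78.0))  # weighted average of the three scores
```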
E
Finally, the Nanjing University team was the winner; their score is 78.33, which is better than Google Congestion Control at 71.47. We can see that innovation is very promising in bandwidth estimation, and we can also see that bandwidth estimation can be a part of ALTO. The figure shows the location of the ALTO server and ALTO clients: the server can be independent or co-located with the clients, and the client can request the bandwidth through the interface. Actually, they can use a kind of standard ALTO interface to communicate.
E
The potential applications can be RTC software, such as Teams, Tencent Meeting, or other RTC software. The input could be packet statistics, and the output can be the estimated bandwidth sent to the sender. Of course, the input and output can be changed to adapt to the needs of the application. By the way, all of our research is actually on top of OpenNetLab.
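A minimal sketch of such a service, assuming per-interval packet statistics as the input and an estimated rate for the sender as the output; the EWMA update is an illustrative stand-in for a real estimator such as a learned model or Google Congestion Control.

```python
# Hedged sketch of a bandwidth-estimation service: input is per-interval
# packet statistics, output is an estimated send rate in bit/s. The EWMA
# smoothing below is a placeholder for a real BWE model.

class BweService:
    def __init__(self, alpha: float = 0.8):
        self.alpha = alpha        # EWMA smoothing factor
        self.estimate_bps = None  # current bandwidth estimate in bit/s

    def update(self, bytes_received: int, interval_s: float) -> float:
        sample = 8 * bytes_received / interval_s  # observed throughput
        if self.estimate_bps is None:
            self.estimate_bps = sample
        else:
            self.estimate_bps = (self.alpha * self.estimate_bps
                                 + (1 - self.alpha) * sample)
        return self.estimate_bps

bwe = BweService()
print(bwe.update(250_000, 1.0))  # first sample: 2,000,000 bit/s
```

Swapping the `update` body for a different model is exactly the kind of customization the talk argues an open, ALTO-style service would enable.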
E
It is a kind of data-centric networking research community. The community is founded by top universities across Asia, including Nanjing University, Peking University, and Tsinghua University, and also Microsoft Research Asia. Professor Tim Chen is the chair of the community this year.
E
Due to the time limitation, I think I can only give a brief introduction of OpenNetLab. It provides a framework and also heterogeneous nodes to support data-centric networking research, and we are building more and more nodes across Asia and expanding OpenNetLab worldwide to increase the coverage. Thank you.
A
L
From Alibaba, and I have a question: you just mentioned that forty percent of the poor call rate is related to bandwidth estimation, and I...
E
D
Okay, I think this is great. My question is the following: suppose we want to integrate this as a service using ALTO. I think the API is not an immediate problem; we already have it. One concern is the frequency of the data. So currently, in your implementation, how much data, and at which frequency, is your BWE server sending information to the client?
E
Actually, in kind of WebRTC, or some RTC like this, we exchange the bandwidth estimation data every, I think, 200 milliseconds.
D
A
Okay, okay, thank you. Thank you, Richard. So we are on time, actually ending right here. Thanks to Mohamed, actually Med, for the reminder from behind about the time limit. Thank you to Daniel and the other minute taker for taking the minutes, and thank you to all the participants. Any last words from Med or Martin?