From YouTube: IETF113-ALTO-20220323-0900
Description
ALTO meeting session at IETF113
2022/03/23 0900
https://datatracker.ietf.org/meeting/113/proceedings/
B
Okay, let's get started. Hi everyone, and welcome to the ALTO session. This is a hybrid meeting. My name is Qin Wu. Joining me online we have Jan, Siddharth, and Mohamed Boucadair, who will host remotely, and in the room we have Jordi to coordinate and to manage the Meetecho. So thanks, Jordi, for volunteering; we really appreciate it.
B
Not
aware,
and
probably
already
familiar
with
this,
if
you
make
any
contribution
and
please
make
sure
you
follow
itf
rules
and
if
you
have
ipr
please
just
close
it.
B
If you do a presentation or make comments, please enter the queue by pressing the red button. If you speak, please identify yourself, and when you stop speaking, please mute yourself. As Jabber scribe and minutes takers, we have Daniel King and Richard to help take the minutes.
B
Actually,
I
think
the
rule
can
help
you
to
watch
the
java
yeah
blue
shield.
Actually,
we
have
electrical
blue
shift.
Please
make
sure
you
join
the
medical,
so
your
attendance
will
be
automatically
recorded
and
you
attend
them
in
person.
Please
make
sure
you
join
the
medical
as
well.
B
On working remotely: as we know, most of the work will be done on the mailing list, so please leverage the mailing list. If you have any new idea, please introduce your idea on the mailing list, and for working group drafts it's important to raise the discussion on the list to resolve any open issues. For informal meetings, we already have a weekly WebEx meeting to facilitate the chartered items' progress.
B
For online meetings: if you think you have a good proposal that can fit into the ALTO charter, you can ask the chairs to schedule an online meeting. We are happy to do this.
B
The meeting agenda for today's discussion will focus on chartered items, and two items will be discussed: one is ALTO O&M support, the second is the ALTO new transport. We also have a deployment experience update, which is also part of our ALTO charter. If we still have time, we will discuss a new non-charter item, which introduces a new idea on how to integrate G2 into ALTO. We also have an ALTO-related activity, the Computing-Aware Networking BoF, which will be introduced by Luis. Any agenda bash?
B
Okay, let's move on to the document update. Since the last IETF meeting we moved four existing work items to the RFC Editor queue. Thanks to the authors and editors for the tremendous effort; we finally delivered these four works, so let's give applause to all the authors and editors. In addition, we have a new work item to be adopted, which is the cost mode draft. This is actually a companion document for the path vector draft, and we already initiated the adoption call before this meeting.
B
Actually,
this
has
already
have
a
zero
zero
working
group
job
and
our
chairs
plan
is.
We
were
immediately
initially
the
working
last
call
right
after
this
meeting
for
new
era.
Actually,
we
got
a
four
new
era.
Actually
all
of
these
right
actually
are
related
to
the
auto
based
protocol,
and
so
for
the
first
one
actually
raised
by
the
samuel
during
his
security
direct
review,
he
identified
a
typo.
Actually
these
need
to
be
verified
and
also
for
another
one.
Another
two
actually
relate
to
the
course
mode.
B
Actually,
we
need
to
you
know,
make
update
to
the
other
baseball
game.
It's
kind
of
type.
Already,
this
has
been
discussed
on
the
list
with
auto
based
political
author
and
for
all
these
three.
I
think
this
need
to
be
verified
by
our
id.
B
Milestone
update
actually,
currently,
we
have
already
have
a
three
milestone
and
but
because
of
the
existing
work
item,
actually
will
take
quite
a
long
time.
Actually
so
so
we
chair
actually
suggest
that
we
can
make
a
milestone
update.
In
addition,
we
actually
introduce
a
new
work
item
customer
specification,
and
this
has
already
discussed
with
our
ad
and
the
set
of
the
milestone.
So
we
propose
the
milestone
actually
to
change.
It
changes
the
time
frame.
B
Any suggestions for this? Any input or opinion on the proposed milestones?
B
Last
one:
actually,
we
shared
this
class
actually
how
to
socialize
auto.
Actually,
we
really
want
to
increase
more
visibility
to
the
auto
activity
and
our
purpose
actually
is
disseminate
this
worker
to
all
the
other
areas
in
ibtl,
and
we
also
need
to
you
know
socialize
this
author
to
the
operator
community
and
operation
developer
community,
and
we
actually
have
a
sitcom
ai
actually
actually
proposed
by
the
auto
design
team
member,
and
this
will
be
a
continued
effort.
B
But we need other venues to promote ALTO in the operator community and the application community, and in addition we seek more volunteers to build an ALTO tutorial. This could be presented in, for example, the OPS area, HotRFC, or some other venues, and if you have any input, please let us know.
B
So I see Luis.
C
Yes, this is Luis from Telefonica. Jordi and I were talking about the different potential venues that we are identifying here in this meeting.
B
Okay, now we can kick off the discussion of the chartered items, and I think the first one...
D
I work with Roland and Kai, and we are working on this document right now.
D
They
pick
a
summary
about
this
document.
The
main
goal
of
the
document
is
to
try
to
try
to
provide
the
young
data
module
for
the
om
and
the
management
of
the
auto
protocols,
unless
it
version
already
uploaded
to
the
data
tracker.
Also,
we
also
have
the
other
copies
on
the
github
and
in
the
recent
version
we
also
include
the
aussie
young
audio
code,
also
on
the
github
can
be
accessed.
D
For
the
major
changes
since
the
latest
between
this
latest
version
and
our
previous
version,
so
why
not
change
the
vision
day
document
title
from
the
om
to
the
o
and
m
so
so,
which
means
the
oem
and
management
that
we
follow
the
guideline.
This
is
comments
from
the
edu
in
the
mailing
list,
so
we
check
the
gala
of
the
art
6291,
so
I
think
they,
this
document
not
just
target
to
the
om.
D
This is the current set of basic requirements we posted in the latest version. You can see the basic requirements: there are several requirements, which align with the existing management RFCs.
D
And
we
also
have
the
additional
requirements,
so
that's
for
the
extensibility
so
as
the
auto
purchase
is
extensible,
so
we
also
requested
it.
Data
model
for
the
auto
onm
still
should
allow
for
the
augmentation
to
support
the
potential
future
expansion.
So
that's
a
the
current
additional
requirements.
I
think,
is
very
important
for
this
document.
D
And
then
so,
this
is
the
current
status
of
the
our
progress
for
of
those
requirements.
So
you
can
see
we
have
already
have
the
initial
proposal
for
the
from
three
times:
five:
five
one
five
point:
two
and
the
six
and
eight
so
right
now
we
are
working
progress
on
the
the
first
requirements
and
the
5.3
and
day
7.
D
transfer
it's
in
our
plan,
but
I've
been
still
in
the
to
delete
the
I'm,
not
any
purpose
right
now,
but
hope
you
can
finish
it
by
the
next
scientific
meeting.
D
So
we
want
to
use
this
slide
to
give
a
very
quick
overview
about
how
we
manage
this
data
model,
so
how
the
model
can
land
to
the
other
server
architectures.
D
So
they
did
blackbox
its
black
clothes
means
component
in
the
auto
server,
so
it
is
inside
model
scope.
So
this
this
model
will
provide
a.
D
Use
this
clock
from
this
resources
and
also
need
some
implement
specific
algorithm
plugins
to
really
define
how
to
translate
this
data
data
sources
to
the
specific
information
resources
they
are
in
the
autoscope,
but
not
in
the
scope
of
this
data
model
document.
D
And
they,
so
this
is
the
gift
for
the
server
setup,
so
which
is
the
first
requirement,
so
this
part
will
define
some
meta
information
for
the
server
level,
oem
and
management,
but
it's
still
in
working
progress.
So
you
can
see
the
current
information
word.
D
And
this
one
will
be
the
major
piece
of
this
data
model,
which
is
provided
model
for
the
information
resource
management
and
to
achieve
this,
so
that
this
is
the
requirement
five.
So
to
achieve
this,
we
separated
to
the
three
pieces.
Why
is
how
to
define
information
results,
so
it
provides
some
common
parameters
for
its
resource
and
the
resource
specific
parameters
and
in
each
research,
specific
parameters,
the
opportunity
to
specify
what
algorithm
used
to
generate
this
information
result
and
which
data
sources
can
be
used
to
generate
information.
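To make the split just described concrete, here is a rough sketch of what one resource's configuration instance could look like, written as JSON via Python. All key names below are invented for illustration; the draft's actual YANG node names may differ.

    # Hypothetical configuration instance for one ALTO information resource,
    # illustrating common parameters plus resource-specific parameters that
    # name an algorithm plugin and its data sources. Key names are invented.
    import json

    resource_config = {
        "resource-id": "my-cost-map",            # common parameter
        "resource-type": "cost-map",             # common parameter
        "cost-mode": "numerical",                # resource-specific parameter
        "cost-metric": "routingcost",            # resource-specific parameter
        "algorithm": "l3-topology-to-cost-map",  # which plugin generates it
        "data-sources": ["l3-topology-ds"],      # which data sources it reads
    }

    print(json.dumps(resource_config, indent=2))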
D
They
so
they,
the
green
landmark
to
the
these
atoms
marked
in
the
green
colors,
means
it's
already
included
in
the
current
model
and
the
atom
marking
the
red
colors
means
they
still
work
in
progress.
It's
not
still
not
in
the
conjugate
models
part,
but
we
are
trying
to
work
on
them
to
finish
this
by
the
entire
active
meeting.
D
So
the
meterpiece,
we
still
need
some
discussion
about
how
which
specific
information
needs
to
be
put
for
the
mailman
on
impact,
which
is
because
some
information
will
associate
with
the
application
performance.
So.
D
That
probably
will
be
implementation
related.
Not
so
we'll
try
to
summarize
some
common
features
from
this.
D
So
it's
required
simple,
so
you
just
need
to
specify
so
which
source
id
should
be
used,
and
the
third
item
will
link
to
a
layer,
three
networks
about
which
is
defined
in
another,
your
model,
so
it
can
be
provided
by
the
netcom
and
we
reference
another.
D
Individual
draft,
which
provide
the
reference
algorithm
implementation
so
how
to
translate
this?
The
layer,
3
network
model
to
the
auto
information,
the
network
map
and
the
custom
app.
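As an illustration of the plugin idea, here is a minimal Python sketch that collapses a toy Layer 3 topology into PIDs and a hop-count cost map. The data layout and function names are assumptions for illustration only; the individual draft referenced above defines the actual reference algorithm.

    # Toy "algorithm plugin": translate a Layer 3 topology (node -> prefixes,
    # plus hop counts between nodes) into an ALTO network map and cost map.
    l3_topology = {
        "nodes": {"r1": ["10.0.1.0/24"], "r2": ["10.0.2.0/24"]},
        "hops": {("r1", "r2"): 3},
    }

    def to_network_map(topo):
        # One PID per router; the PID name is derived from the node name.
        return {f"pid-{n}": {"ipv4": p} for n, p in topo["nodes"].items()}

    def to_cost_map(topo):
        # Symmetric hop count as a numerical "routingcost".
        cost = {}
        for (a, b), hops in topo["hops"].items():
            cost.setdefault(f"pid-{a}", {})[f"pid-{b}"] = hops
            cost.setdefault(f"pid-{b}", {})[f"pid-{a}"] = hops
        return cost

    print(to_network_map(l3_topology))
    print(to_cost_map(l3_topology))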
D
That is the current progress we have achieved for the data model, but we still have some questions about how to proceed. The first question was raised recently on the mailing list: this data model references several identities and enumeration types whose values are managed by IANA registries.
D
So
to
make
it
this
model
extensible.
One
question
is
that
to
remove
them
into
a
separate
ion
management
match
the
data
model
like
there
are
no
auto
tabs.
We
just
keep
in
the
common
models,
though,
so
we
already
asked
the
young
doctors
about
what's
the
best
practice
for
this
for
argument
that
if
we
make
this
change
so
it
can
fill
up
update
like
the
metrics
and
the
customer
updates.
So
it
will
be
easy
to
support
it
without
updating
the
our
basic
community
module,
but
the
him
also
introduced
some
complexity.
D
So
for
chair
doing
two
stuff
here
and
there's
some
discussion:
oh
we
just
post
all
the
questions
and
then
we
can.
B
Yeah,
I
see
this
issue,
you
are
already
you
know,
sent
to
the
list
and
also
copied
to
the
young
doctor
to
get
a
confirmation,
so
we
can
actually
discuss
with
you
and
offline
and
we
have
two
proposals
on
the
table.
One
is
you
can
change
your
numeration
into
the
identity?
The
second?
Actually,
you
define
all
all
these
type
of
definition
in
the
ana
module.
So
I
think
both
proposal
actually
works,
but
we
need
to
you
know
double
check.
A
young
doctor
get
their
feedback.
B
My
I
think
we
can
open
the
floor
to
get
any
feedback
for
anyone
to
share
your
opinion.
D
Okay,
I
see
richard
asked.
The
question
was
complexity
when
decomposed.
D
Sorry
to
kind
of
explain
a
little
bit,
not
sure
I
understand
your
question.
E
I
I
heard
jason
want
to
discuss
about
this
composition.
He
mentioned
there
could
be
some
potential
complexes.
Therefore,
I
was
wondering
what
kind
of
complex
is.
I
think
I
saw
some
a
quick
comment
from
druva
that
there
might
be
complexity.
E
I
think
maybe
there's
some
like
a
potential
caveat
that
people
need
need
to
pay
attention.
So
what
would
that
be
being
when
you
mentioned
about
profile?
Matrix
performance
can
can
have
a
some
complex,
for
example,
for
example,
tcp
throughput
right,
a
big
part
of
configuration
would
be
specified
which
tcp
model
so
they're
all
kind
of
issues.
So
what
oh?
Okay?
So
therefore
I'm
listening
to
see?
Okay,
I
I
I
saw
some
comments
by
matt
as
well,
but
I
want
to
hear
from
you
as
well
jensen
go
ahead.
D
Yeah, on the complexity part: what I want is to get the feedback from the YANG doctors, because I see some YANG module documents integrate the IANA-maintained module into a single module, but some separate them, and I'm not sure what the best practice is.
D
So yeah, we can move the more detailed discussion to the mailing list; I just posted some other questions here.
D
But
we
need
some
principles
for
so
which
one
should
be
defined
in
the
standard
basic
module
and
which
one
should
just
delegate
to
the
algorithm
not
to
be
the
common
parameters
like
I
see
some
one
be
suggested
by
the
base
particles
and
also
the
f6m9
symptomizing
knight,
but
yeah,
for
example,
the
east
coast
magnitude
configures
the
different
two
pids,
and
I
also
probably
have
some
parameters
to
point
out.
D
What's
granularity
for
each
network
map
and
cosmic
and
also
probably
have
some
plan-specific
computing
to
define
the
the
information
resource
level
access
control
so
which
client
can
request
this
resource
and
which
can
come
out,
it
can
be
detached
by
the
current
ip
or
some
some
other
metadata
from
the
client.
B
So
jason
go
back
to
the
question.
D
Yes, do you have some comments? Do you want to speak?
D
This can be finished in the meeting. Another discussion point: so far, as you can see, all the requirements are for the ALTO server; we don't have any for the ALTO client side. But in the initial versions we mentioned that we were trying to include the ALTO client in the scope of this document, because applications using the ALTO client also need some O&M.
D
For
example,
we
need
to
configure
the
the
caching
management,
the
server
discovery
part,
and
if
the
house
client
can
access
the
multiple
auto
servers,
I
also
need
some
measurements
for
which
one
to
be
choice
so,
but
I
think
the
om
for
the
auto
client
should
also
be
important,
but
one
thing
I'm
not
sure
is
that
if
the
yamamoto
is
a
good
approach
to
expand
for
the
clan
oem.
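For discussion purposes, here is a rough sketch of the client-side knobs Jensen lists (caching, server discovery, multi-server selection), written as a plain Python/JSON configuration. Every key is hypothetical; the open question above is precisely whether such knobs should live in a YANG model at all.

    # Hypothetical ALTO client configuration covering the three concerns
    # mentioned above. All keys are invented for illustration only.
    alto_client_config = {
        "cache": {"enabled": True, "max-age-seconds": 300},
        "discovery": {
            "method": "static",              # vs., e.g., DNS-based discovery
            "servers": ["https://alto1.example.net/",
                        "https://alto2.example.net/"],
        },
        "server-selection": "lowest-rtt",    # policy when several servers work
    }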
B
Okay, any comments on this question? I see Martin.
G
Martin,
duke
google,
so
I
think
you're
asking
the
right
questions
here,
but
I
think
chin's
advice
is
good.
You
should
strive
to
reduce
the
scope
of
this
wherever
you
can
and
if
no
one
is
willing
to
speak
up
for
any
of
these
particular
features.
You
should
probably
just
omit
them
make
sure
that
they're
extensible
to
support
later,
but
I
mean,
like
I
said
if
someone
wants
to
speak
up
for
any
for
this
or
for
any
of
the
other
features,
that's
great,
but
otherwise
just
leave
it
out.
Thanks.
E
I think Martin made wonderful comments and gave very good suggestions. But I do think the issue of how to configure ALTO clients is an interesting one, because that's an issue we encountered during the hackathon integration: we started to configure the ALTO client used for the user download command, and there was a huge amount of back and forth about how to configure the client. Do we do it using the command line?
E
How
do
we
use
computer
file?
What
kind
of
primary
do
we
configure
they're?
Clearly,
they
are
application,
specific
they're,
all
different
culture
right,
really
essentially
command
line
from
cern,
so
they
have
different
suggestions.
So
I
think
marketing
suggestions
is
great,
but
maybe
there's
some
way
to
tell
people
exactly
how
I
think
you
will
mention
the
consistency
across
different
young
modules,
and
maybe
here
we
can
also
subscribe
to
people
for
different,
auto
clients.
B
Yeah,
I
tend
to
agree
with
richer
actually,
otherwise,
for
normal
client
without
auto
protocol
support
can
discover
the
auto
server.
You
know,
there's
no
way
to
do
this.
You
need
some
configuration
for
auto
client.
B
I
say
joe
and.
H
Hi,
so
one
query
which
I
had
for
the
auto
working
group
to
also
guide
us,
is
whether
the
current
auto
clients
which
are
envisioned
do
they
already
support.
Yang
based
interfaces
for
om
techniques
is
if
that
is
not
the
case.
Even
if
we
write
a
yang
model,
the
issue
would
be
that
yang
model
may
not
get
deployed,
because
these
are
the
kind
of
client
client
devices
that
do
not
use
rescon
for,
like
the
typical
techniques
that
we
have
for
young
models.
D
Yeah,
that's
a
very
good
point.
Yeah!
That's
also
my
concern
because
yeah
for
my
own
experience
so
far,
we
are
not
using
the
yam
mode
to
develop
the
client,
because
some
applications
that
try
to
use
their
level
is
the
auto
server
the
auto
pro
code.
But
you
not
use
the
red
comp
netcom
to
be
because
the
the
high
level
application
now
they
some
application
for
the
network
devices.
Are
the
controllers
awesome,
I
think
so.
D
H
And
I
think
med
also
mentioned
that
on
the
chat,
it
doesn't
hurt
us
to
create
an
alto
client
yang
model.
Like
you
know,
it
will
act
as
sort
of
an
information
model
for
even
if
somebody
wants
to
use
some
other
techniques.
But
this
will
be
the
common
data
that
any
auto
client
must
use
so
sort
of
using
yang
as
an
info
model
that
we
have
standardized.
But
it's
okay,
but,
like
you
know,
we
would
need
more
feedback
from
the
working
group
so
that
we
can
do
the
right
thing.
I
Hi, Jordi from Qualcomm. There was a conversation just a couple of days ago with Alberto and Fabio, and also the idea of, you know, how do we avoid imposing modifications on the application to integrate with an ALTO client, potentially providing something like a proxy so that you don't have to modify the application, so that the ALTO client would run in a proxy. And then there are also conversations on edge computing, for instance on load balancing: there are presentations at Computing-Aware Networking and so on about performing load balancing, and the load balancer that decides how to steer traffic in the edge cloud could benefit from an ALTO client implementation.
I
So
those
are
a
couple
of
use
cases
that
you
know
it's
worth
asking
whether
this
would
require
some
kind
of
a
good
way
to
actually
configure
this
specific
use
case,
whether
it's
a
proxy
or
an
lb
that
could
benefit
from
from
you
know,
interaction
with
an
autoclient.
Basically,
so
thanks.
D
Yeah,
so
that's
just
some
proposal
to
update
the
milestones
yeah.
I
think
we
already
integrated
into
the
slide
yeah
so
try
to
okay.
J
Hello everybody. The next slides are about ALTO transport, and the draft, in a nutshell, covers the topic of how HTTP/2 and ALTO can be combined and work together. The work was done with Richard, Kai, and Jensen. My name is Roland Schott, and I will do the presentation today.
J
As stated, the motivation and requirement is that HTTP/2 at the moment is not usable, or not used, for ALTO, and this work tries to make a proposal for how this could be established and how it could work. We want to introduce the ideas that we have and then have a discussion about them.
J
ALTO SSE is an example, but unfortunately this mechanism is only available with HTTP/1, and the IESG review suggested making a proposal for how HTTP/2 could work. The idea is, in a way, to use what's already defined in ALTO SSE but make it available also with HTTP/2.
J
Okay. Before going into the details, just a double check on the requirements. We have collected the requirements from the protocol: a client can have an ALTO resource connection as it is defined in RFC 7285.
J
Now we come to the design overview and the idea of how to make it workable. Next slide, please. In this slide you can see, at the top, the current capabilities of the ALTO server: we have information resources; we can have static resources, like a network map, but also filterable resources, like a cost map.
J
So
it
is
shown
here
in
the
picture
and
by
we
are
in
a
way
implementing
this
transport
queue
in
the
in
this
information
model.
We
have
also
an
yeah,
let's
say
an
incremental
update,
skew
and
receiver
set
information
that
is
automatically
created
with
transport
queue
is
established,
so
various
clients
can
connect
to
different
yeah.
Let's
say
10,
transport
queues
and
one
client
could
also
connect
to
different
transport
queues
to
get
the
different
information.
J
The basic operations here are create, read, and delete, similar to ALTO SSE. We have here a method, in a way, to create the appropriate response from the server: the client creates the queue by sending a queue request, and the server responds with the information of the transport queue; this is more or less the JSON text that's used. Next slide: how does it look? In the server request here you have your POST information and...
J
...the path of the transport queue where the information of the server can be pushed to the client, and the server responds here with the establishment of the transport queue towards the client. As already stated, we then have the client read possibility, but also explicitly a delete of the transport queue. The deletion of the transport queue is mainly from the view of the client.
J
So
if
other
clients
are
connected
to
the
transport
users,
transport,
you
as
itself
cannot
be
a
delete
because
there
are
some
dependencies,
so
the
transfer
for
client
is
and
feral
and
in
case
the
client
deletes
it.
So
from
the
client
view,
he
cannot
expect
that
this
transport
queue
is
still
valid,
so
the
client
has
also
the
possibility
to
delete.
Let's
go
then
the
next
slide
will
show
the
read
option
of
the
transcript
queue.
J
Could
you
please
go
to
the
next
slide,
so
here
transport
queue
example
read
so
when
the
transfer
queue
is
established,
then
the
client
has
here.
The
information
of
the
appropriate
transport
queue
sends
information
to
the
server
and
the
server
response,
for
example,
here
with
incremental
update
queues
and
the
server
set,
and
then
we
have
here
some
sequence
numbers
with,
let's
say
some
tech
ideas
and,
as
shown
before
in
the
picture,
we
have
some
relationship
between
the
transport
queue
and
the
the
dedicated
stream
tags.
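As a rough client-side sketch of the create/read/delete lifecycle just described, using the Python requests library: the URL paths and JSON field names below are placeholders guessed from the slides, not the draft's exact strings, and requests itself only speaks HTTP/1.1, so a real client would use an HTTP/2-capable library.

    # Sketch of the transport-queue lifecycle: create (POST), read (GET),
    # delete (DELETE). Endpoint paths and field names are placeholders.
    import requests

    BASE = "https://alto.example.net"   # hypothetical ALTO server

    # Create: ask the server to set up a transport queue for a resource.
    resp = requests.post(f"{BASE}/tq", json={"resource-id": "my-cost-map"})
    tq_uri = resp.json()["uri"]         # server returns the queue's location

    # Read: fetch the queue's state (updates queue, receiver set, tags...).
    state = requests.get(f"{BASE}{tq_uri}").json()
    print(state)

    # Delete: the client's view of the queue is ephemeral, so the client
    # simply drops its handle; a queue shared by others persists server-side.
    requests.delete(f"{BASE}{tq_uri}")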
J
The next slide: the incremental updates queue and how it works. Looking into this, it's important to know that the incremental updates queue's basic operation is only a read operation, so the client cannot create, update, or delete the incremental updates queue directly; it's associated with the transport queue automatically, as I stated. If there is a request, for example, to a dedicated transport queue, we have here also the link to the transport queue and then the dedicated information that is then forwarded to the client.
J
This
is
related
to
our
requirements
that
we
have
stated
here
and
yeah.
That
is
how
the
incremental
update
works
so
next
slide.
Individual
updates.
J
So
here
we
have
the
pull:
read
push
read
important
if
you
have
now
a
client
pull
so
with
the
get
information
and
update
the
ue,
and
this
looks
like
followed.
Well
next
slide,
please
yeah!
So
we
have
here
again
the
client
pool
with
the
path,
and
then
it's
rca
associated
here.
The
information,
for
example,
a
cost
map
that
is
then
pulled
down
to
the
to
the
client,
the
normal
alter
information.
J
One
can
say:
let's
need
it,
then
the
server
push
yes,
and
now
it
becomes
to
the
to
the
in
interesting
thing
and
the
relationship
also
to
http
to
what's
a
little
bit
tricky.
So
the
server
push
is,
though,
the
client
needs
to
to
have
an
information
more
or
less
on
which
yeah
away
the
server
push
the
information
and
we
use,
as
mentioned
earlier,
the
push
promise
mechanism
and
yeah
and
how
this
is
established.
J
That's
shown
in
the
next
slide,
so
you
can
move
this
so
the
server
push
and
as
an
authentication
example
as
phone
for
we
have
here
the
sequence
number
and
the
dedicated
text.
J
So
if
the
client
has
no
match
it
chooses,
let's
say
the
first
entry
and
if
the
client
has
already
some
matching
criteria,
it
chooses
the
next
next
in
for
the
next
number
and
then
now
going
to
the
push
prompt
mechanism
of
http
2,
for
example,
when
the
server
push,
for
example,
in
a
dedicated
stream,
and
hear
the
information
in
each
stream,
then
the
relevant
information
us
is
auto
solver
sn
pushed
down
towards
the
client.
J
Now,
let's
say
then,
therefore,
the
client
knows
in
which
queue
or
where
the
integrated
update
happens
and
then
also
receive
a
set
information,
how
it
is
yeah
sweet,
get
status,
delete
yeah
for
sure.
The
client
in
case
has
the
opportunity
to
delete
also
the
transport
queue,
but
in
case
also
connection
is
not
available.
Then,
from
the
point
of
view
of
the
client,
the
transfer
is
also
closed.
J
Yeah
then
next
slide
stream
management
is
also
important.
How
is
now
all
the
stuff?
It
was
more
or
less
the
the
intermediate
layer
between
the
alto
and
http
2,
but
we
have
also
a
relationship
how
stream
management
is
done.
The
objectives
are
here
to
allow
concurrency
of
the
streams
to
reduce
latency,
reducing
minimum
numbers
of
streams,
etc
is
more
or
less
a
goal
of
http
2,
and
here
is
now
shown.
J
The
relationship
between
the
frame,
layout
and
headers
without
so
important
thing
is
that
we
use
something
like
a
stream
id
and
the
stream
id
is
then
written
down
in
the
stream
identifier
or
in
the
steam
dependency.
If
we
create
a
transport
queue
for
the
steam
dependency
is
zero
because
nothing
is
there
before.
Let's
go
please
to
the
next
slide
and
yeah
for
sure.
J
If
you
close
transport
queue,
it
must
be
sure
that
there
is
no
steam
dependency
and
then
also
the
relevant
information
is
written
down
and
yeah
in
the
fields
are
properly
yeah
and
close
transport
queue
and
then,
for
example,
it's
if
you
have
here
just
an
request
or
wanted
to
to
read
something.
Then
the
stream
id
is
also
written
down
and
then
for
sure,
as
long
as
something
is
streamed
and
the
steam.
J
One detail: each push needs to open a new stream on the transport queue. I think we can omit this slide and go to the next one, please. And then we now come to the discussion and open points. An open point is what is missing: we do not create a generic message queue, or rather the draft does not allow creation of a message queue, and the client cannot publish info shared with other clients; so we do not have the ability to have a direct exchange of such things there.
J
We
are
also
aware
of
the
broker
discussion
that
has
done
so.
We
haven't
looked
so
in
the
details,
but
these
are
also
options
that
can
be
discussed,
yeah
and
then
the
capability
of
negligation
we
have
as
mentioned
earlier.
This
occurrence
is
not
addressed
in
the
draft
next
slide
and
also
calendar
semantics
is
not
addressed
yeah
and
that's
it
now
we
can
discuss
or
get
your
feedback.
B
Okay,
thanks
ronan,
actually
a
good
presentation.
So
actually
I
have
some
comments
on
the
slides.
Actually,
so
in
the
page,
25
is
any
out
of
order.
Delivery
issue
for
streaming
management.
J
35? 25? Okay, the next one: objectives. So, what was the question? Do you see any issues regarding this?
E
Yeah, I would like to answer this question, because it is an excellent one. Yes, there are a lot of streams going on inside this system, because that's really the purpose of using HTTP/2: streaming concurrency, for example. From the design side, of course, the newest version has not been posted, and all the details are, of course, only on the slide deck.
E
We
were
very
careful
that
there
will
be
no
out
of
order
delivery
and,
of
course,
we
need
to
go
over
a
very
formal
proof,
but
let
me
just
give
you
a
sense
about
why
we
don't
have
auto
order
delivery.
For
example,
the
main
main
thing
which
we're
trying
to
push
through
using
the
new
transport
is
to
send
increment
updates.
E
There
are
actually
multiple
mechanisms
to
make
sure
that
we
don't
have
other
delivery
of
the
income
updates
and
number
one.
Is
that
all
the
push
promise
push
promise?
They
are
only
sent
into
a
single
stream,
which
is
a
stream
which
is
open
for
this
particular
transport
queue.
So,
therefore,
because
if
you're
inside,
a
single
stream
and
you're
carried
by
by
tcp
and
all
the
packets
inside,
the
single
stream
will
be
delivered
in
order
by
tcp.
So,
therefore,
all
the
project
promises
one
by
one
will
be
sent
to
the
client
in
order.
E
So,
therefore,
if
the
client
will
process
all
the
promise
in
order
and
therefore
the
client
actually
can
know,
for
example,
put
the
prime
minister
six
and
eight
and
twenty
they're
all
sending
other,
and
so
therefore
the
client
can
know
the
order
of
the
the
income.
That's
therefore
they're
safe
and
number
two
there's
also
building
a
mechanism
and
to
assign
a
sequential
sequence
number
to
the
incoming
updates.
So
therefore
the
client
actually
can
check
into
it.
E
So,
therefore,
the
client
can
really
do
it
in
the
transfer
layer
using
stream
or
can
also
go
to
application
layer
to
yield
the
second
number.
So
there
are
two
ways
to
protect:
they.
They
are
the
other
delivery.
So
therefore,
I
think
that's
an
example
because
that's
the
main
part
we
we
are
sure,
but
of
course
we
want
to
even
try
to
write
down
some
kind
of
formal
proof
that
there
will
be
no
out
of
delivery
in
terms
of
the
the
requirement,
of
course.
Otherwise.
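As a small sketch of the second, application-layer safeguard Richard describes: the client buffers incremental updates and applies them strictly in sequence-number order. The message fields are invented for illustration.

    # Apply incremental updates strictly in sequence-number order, buffering
    # any update that arrives early. The (seq, patch) layout is invented.
    import heapq

    class InOrderApplier:
        def __init__(self, apply_fn):
            self.next_seq = 1
            self.pending = []          # min-heap of (seq, patch)
            self.apply_fn = apply_fn

        def receive(self, seq, patch):
            heapq.heappush(self.pending, (seq, patch))
            # Drain every update now contiguous with what was applied.
            while self.pending and self.pending[0][0] == self.next_seq:
                _, p = heapq.heappop(self.pending)
                self.apply_fn(p)
                self.next_seq += 1

    applier = InOrderApplier(lambda p: print("applied", p))
    applier.receive(2, "update-b")     # buffered: seq 1 not seen yet
    applier.receive(1, "update-a")     # applies 1, then drains 2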
B
Okay,
so
another
question:
actually,
I
want
to
make
sure
make
sure
the
auto
neutral
spot
doesn't
require.
You
know,
extension
to
the
existing
http
protocols.
Is
that
right.
E
Yes,
and
so
john-
can
also
answer
right.
Of
course,
I
think,
from
from
the
way
we
design
is,
we
make
sure
we
only
use
the
the
central
hb2
capabilities.
E
Of
course,
the
api
need
to
be
a
little
more
careful
when
the
client
or
the
server
any
product
client
is
using
http
2,
because
the
api
should
really
allow
the
application.
The
kind
of
specifics
okay
for
this
request
send
to
this
particular
stream.
For
this
one
please
send
to
this
stream
and
then,
of
course,
the
server
also
need
to
be
confirmed
in
the
sense
that
server
should
pick
the
right
when
they,
when
they
send
the
the
request.
B
Okay, last question from me. In the ALTO transport you define four functionalities: the transport queue, the incremental updates queue, the request, and also the receiver set. Will all these four functionalities be mandatory, or somehow just optional?
E
Okay,
so,
given
that
I
I'm
on
the
queue
already,
they
are
all
necessary
to
really
get
incremental
push
updates.
But
if
you
don't
want
incremental
push
updates,
for
example,
in
the
initial
request,
when
the
increment
updates
increment
updates
flag
is
set
to
be
false,
which
is
two
in
which
it
can
be
by
default
like
the
true,
but
of
course
you
can
make
it
a
false
and
then
the
increment
updates
feature
will
be
turned
off.
So
therefore,
basically,
there's
no
increments.
E
So,
therefore,
if
application
only
wants
to
provide
a
way
to
provide
this
insight
to
the
large
scope
deployment
to
show
that
there
are
large
number
of
clients
watching
and
you
can
turn
off
increment
updates.
But
now
the
client
can
only
see
the
queue.
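As a sketch of that on/off switch, assuming a hypothetical field name for the flag in the create request:

    # Hypothetical create request turning incremental push updates off: the
    # client still gets the transport queue, but no pushed updates.
    create_body = {
        "resource-id": "my-cost-map",
        "incremental-updates": False,   # assumed default would be True
    }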
J
So we can bring this into this draft, or we can separate it and handle the open issues in a separate work, because I think the main issues regarding the transport and HTTP/2 are already addressed here in this draft. I think the mandatory things, the main things, are already addressed and described.
B
Okay,
so,
let's
for
always,
you
take
take
it
to
the
list
and
we
can
see
how
do
you.
J
Yeah
yeah
yeah-
I
I
I
it's
also
for
me,
I'm
quite
open,
but
potentially
it
would
make
sense
to
split
some
some
work.
But
let's
do
it
discuss
on
the
list.
I
both
ways
are
possible.
E
I
do
want
to
get
one
question
quickly
and,
given
that
we
have
other
people
here
is
this
design
is
very,
very
http,
2
specific
and
the
guidance
I
think
roland
and
I,
and
with
this
design
team
with
this
cast,
is
how
much
do
we
should
we
consider
essential,
https
three,
like
a
quick.
E
Any
guidance
and
oh,
we
should
really
just
get
this
one
down
and
focusing
on
http
2,
because
quick,
somehow
iq3
will
open
a
total
different
way.
For
example,
all
these
stream
assignments
somehow
or
just
like
a
dependency
they're,
all
quite
quite
http,
http
2
specific.
So
any
guidance
from
the
working
group
to
us.
G
Hp3's
application
interface
should
be
very
similar
to
http2,
so
I'm
a
little.
E
So, Martin, for example: if you look at this current slide, it depends on, for example, how all the streams block, because fundamentally everything runs over a single underlying TCP connection. Because we interleave everything, it's a single TCP connection; HTTP/2 is essentially running on a single TCP connection.
E
So
therefore,
a
lot
of,
for
example,
all
the
other
things
probably
would
be
very
safe,
and
so
therefore,
on
the
client
side
on
the
server
side,
you
probably
don't
know
what
about
outdoor
delivery
at
all,
because
tcp
enforcing
essentially
a
single
stream.
Of
course,
you
should
have
issue
of
had
a
height
of
a
line
blocking
in
this
level,
even
though
you
interview
all
the
streams
and
my
understanding
of
quick
rgb
3
would
be,
then
this
kind
of
highlight
blocking
will
disappear
right
because
they're
using
essential,
different
udp
and
you're
not
going
to.
E
If
there's
a
single
packet
loss,
you're
not
going
to
block
everything
below
be
behind
it.
So
therefore,
then
we
we
need
to
really
check
if
that
will
cause
potential
issues
for
us
in
terms
of,
and
this
may
not
be
the
functionality,
but
I
didn't
come
for
performance
or
design
requirement.
So
therefore
we
they
need
to
investigate
to
see
how
what
the
impact
it
really
is.
So.
G
If it turns out that you need a bunch more protocol machinery to support HTTP/3, then that would be a different discussion to have. I don't know if I want to make this, you know, 20 pages longer or something to support HTTP/3; maybe it could be a different document. But, I mean, easy for me to say: I would encourage you to do a little bit of analysis.
E
Sure
sure
what
will
investigate,
but
let
us
clarify
with
you
so
if
we
deliver
this
document
initially,
for
example,
by
a
next
ietf
and
we're
mostly
very
specific
to
hpd
only
and
we
leave
the
work
of
looking
at
the
capability
or
impact
of
hdb3
to
be
to
be
the
future
work.
Are
you
happy
or
our
working
group
happy
or
that
working?
We
would
recommend
that
we
should
always
like
initially
already
start
look
at
until
the
impact
of
potential
hdpd3.
G
Certainly,
like
you
know,
like
the
the
document,
so
first
of
all,
it
is
perfectly
fine
to
like
for
early
versions
of
the
draft
to
just
consider,
hp2
and
like
and
like
consider
hp3
later.
G
So
I
would
say
that
that
to
me
the
the
case
where
we
do
the
analysis,
it
turns
out
that
http
3
does
require,
like
a
bunch
of
additional
specification.
I'd
be
100
fine
with
submitting
it,
as
is
now
in
in
the
case
where
we
just
do
not
do
the
analysis.
G
That
would
I
would
not
like
that
I
would
have
to
think
about
whether
that
would
be
like
unacceptable.
My
inclination
would
say
that
I
would
probably
grudgingly
accept
something
where
we'd
just
not
done
the
analysis,
but
that
seems.
G
It
seems
like
just
something
to
do
like,
in
my
opinion,
that's
kind
of
a
due
diligence
thing
to
do,
but,
but
again
like
I
don't
I
don't
know
if
I
want
to
put
the
full
force
of
my
id
role
behind
that
assertion
as
an
individual,
I
would
say
that
let's
say
for
now
is
that
does
that
make
sense.
E
Yeah,
I
think
it
makes
sense
to
me
because,
right
because
your
land
is
essentially
where
guy
yelling
is
guiding
the
most
thing,
we're
focusing
on
b2
y'all,
then
what
do
you
think
we.
J
Need
to
go
to
queue
so
so,
when
we
just
this,
this
discussed
this,
the
main
motivation
was
to
focus
on
http
2
because
it
was
mandatory
and
it
was
a
quick
way
forward,
and
so
we
didn't
focus
on
http
not
to
overload
the
work.
So,
in
my
opinion,
the
analysis
needs
to
be
done
too,
and
I
am
happy
to
yeah
as.
G
I
said
like
when
we
chartered
this,
I
was.
I
was
offering
an
assumption
that
that
the
upper
layers,
the
upper
layer,
interface
for
hp,
2
and
3
were
almost
identical
and
that
the
value
proposition
of
the
application
was
almost
identical
and
that
there
should
not
be
a
bunch
more
design.
If
that
turns
out
to
be
untrue,
then
I'm
I'm
100
comfortable,
just
shipping
this
as
an
http
2
rfc
and
like
maybe,
if
there's
the,
then
later
doing
it
for
h3
separately.
I
Hi, yeah, I'm Jordi from Qualcomm, and for this one, Kai and I are actually going to be talking about the ALTO code base project; I'm going to start, and Kai is going to take over.
I
Yeah,
because
most
of
you
are
familiar,
so
I'm
gonna
sort
of
script
through,
even
though
the
slide
deck
is
self-contained
so
because
it's
gonna
be
offline,
so
people
can
actually
take
a
look
at
the
slide,
deck
and
sort
of
think
of
this
as
a
self-contained
deck,
but
mostly
I'm
going
to
skip
through
this
slide
here.
To
begin
with,
and
what
I'll
say
to,
I
guess
is
that
you
know
we.
I
We
came
to
the
realization
that
to
help
alto
get
some
visibility
and
actually
adoption,
basically
that
there
is
a
need
to
to
have
a
code
base
for
alto,
actually
there's
already
codebase
and
there's
an
open,
auto
project
that
kai
and
jensen
and
others
have
actually
been
leading.
They
need
to
bring
it
up
to
speed
and
actually
have
sort
of
a
a
community
to
help
sort
of
drive
that
development,
and
the
reason
we
think
this
is
relevant
is
because
we
think
that
you
know
visibility.
I
Network
visibility
is
very
important
for
the
kind
of
applications
we
we're
looking
at
at
these
days
and
alto
provides
a
key
component
for
that
visibility,
specifically
in
applications
like
edge
computing,
for
instance.
So
we're
going
to
be
talking
about
that.
Secondly,
but
yeah
and
many
others
actually
not
just
edge
computing,
but
many
others
really,
you
think
about.
How
do
we
get
application
performance
is
really
three
components:
visibility,
intelligence
and
controllability,
and
so
with
so
much
going
on
controllability
and
and
so
much
effort.
I
You
know
we
think
visibility
is
lagging
behind
so
without
proper
visibility,
it's
very
hard
to
do,
control
and
and
then
sort
of
make
intelligent
decisions.
So
that's
what
this
is
sort
of
about
in
this
case,
to
help
out
to
get
some
visibility
to
running
code
and
trying
to
bring
the
community
together.
I
There
are
actually
several
efforts
already
going
on
with
different
vendors,
but
it's
sort
of
fragmented
and
because
we're
going
to
be
looking
at
many
different
kinds
of
networks,
and
we
think
that
you
know
we
can
really
leverage
some
networking
effects
here
and
sort
of
build
that
code
base.
I
So
if
you
look
at
the
alto
architecture
which
you
which
I'm
sure
so
you
understand
already,
you
know
just
two
main
building
blocks:
the
alpha
server
and
the
outdoor
client
right
and
so
there's
a
northbound
api
and
the
southbound
api
and
the
service
is
provided
by
the
alto,
the
visibility
services
that
in
between
so
on
the
northbound,
basically
you're
interfacing
with
applications.
So
a
few
examples,
but
you
can
see
it.
I
Some
of
these
are
coming
from
the
edge
h
cloud
already,
so
you
know
xr
v2x,
iot
and
then
the
hackathon
demo
we
put
over
the
weekend
was
with
science
networks,
another
very
interesting
use
case
and
and
ncdms.
Of
course,
historically
alto
has
been
very
predominant
on
cdns
and
and
originally
back
to
the
original
days
on
p2p.
I
But,
as
you
can
see,
the
applications
are
evolving
sort
of
the
need
to
have
this
visibility
and
on
this
new
landscape,
basically,
but
in
the
southbound,
we're
looking
at
actually
multi-domain
networks
right,
so
many
different
components
and
with
the
conversions
of
the
edge
and
and
the
ip
together,
the
wireless
and
the
nd
and
the
and
the
core
ip
at
the
edge.
It's
on
the
edge
computing
side.
I
This
is
a
it's
about
the
main
problem
really
so
going
expanding
from
data
centers,
which
are
in
the
core,
but
also
in
the
edge
and
then
once
the
backhaul
mid
call.
Throne
hall
ran
mobile
core
multi-domain,
so
this
sort
of
landscape
tells
us
that
you
know
we
could
really
leverage
by
sort
of
bringing
some
code
base
together
and
not
having
to
reinvent
the
wheel.
You know, developing plug-ins for different SDN controllers, and likewise providing some code reference for the application side, and so on and so forth.
I
So the effort here decided to sort of leverage all these efforts so as not to have to reinvent the wheel. So let's talk, then, about how that maps. Again, a different view but essentially the same picture: northbound API, southbound API, and then we're looking at building and open-sourcing...
I
...some of these components here, building some open components that would allow us to avoid having to develop this over and over. Where value is really being created is at the core here, but then you have the southbound API plugins to interface with the various SDN controllers. Over the weekend we demonstrated integration with Mininet, and the open ALTO code already integrates with OpenDaylight.
I
But
of
course,
there's
a
need
to
sort
of
integrate
more
with
more
com,
sdn
controllers,
to
help
adoption
of
the
standard
right,
the
same
thing
for
the
itf,
sorry
for
the
alto
client
side,
and
then
you
know
you
could
have
you
would
have
the
the
the
apis
potentially
being
open
and
then
but
then
vendors
could
also
develop
their
own
components
within
these.
I
Okay,
I
want
to
talk
a
little
bit
about,
if
I
hand
it
over
to
kai
about
how
do
we
envision
this
happening.
Basically,
so
within
that
the
hackathon
actually
provide
a
sort
of
a
good
good
mechanism
here,
so
we're
looking
at
you
know,
the
autocode
based
project
ends
up
providing
a
parallel
track
to
the
the
working
groups,
standardization
effort
towards
implementing
the
features
introducing
the
are
in
the
latest
rfcs,
that's
one,
but
then
you
know
the
idf
hackathons.
I
We
intend
to
use
them
as
three
checkpoints
a
year
for
us
to
test
interoperability,
test
the
the
sort
of
the
new
features
and
and
demonstrate
them
and
socialize
them
and
get
feedback
basically
on
running
code.
So
and
the
the
focus
of
this
effort
is
also
gonna,
be
on
really
on
production
use
cases
right
so
really
applications
that
are
already
running
and
how
can
we
help
them
by
bringing
more
visibility
like
the
one
we
had
over
the
weekend
with
science
and
traffic
with
rusio,
and
so
on
so
yeah?
I
The vision is to build production open-source environments for use cases and deployments, and sort of follow the lean startup approach, where, you know, we drive this by use cases that are meaningful. Many of you here actually have use cases that are relevant; we want to know about them. And then what we plan to do is, you know, starting next week after the hackathon, we're going to regroup and then collect all the feedback from this week.
I
Look
at
all
the
possible
use
cases
and
select,
maybe
one
or
two
and
work
for
them
for
the
next
hackathon.
Basically
so,
and
we're
gonna
continue
to
do
this
as
we
continue
to
build
a
code
base
as
well
right
so
three
checkpoints
and
in
between
you
can
you
know
you.
I
You
know
there's
a
time
period
here
of
two
three
weeks
where
we
are
going
to
be
opening
in
the
outer
working
group
suggestions
we're
already
getting
some
suggestions
on
which
features
which
are
which
use
cases
are
important
and
then
psyc
one
and
then
aim
at
demonstration
in
the
next
hackathon,
as
we
continue
to
build
that
that
code
base
okay
yeah.
So
basically
you
know
I
don't
have
to
go
through
the
whole
thing
here,
but
we
are
invoking
the
community
for
participation
and
then
this
is
actually
gonna.
I
We're driving this through: there's a lot of interest from the students, actually, to participate, and this is a big thank-you to Richard Yang, Professor Yang, and Professor Kai, you know, from Yale University and Sichuan University, for bringing the students. It was actually a very good experience over the past month, you know, working with them. So we envision these two roles, developers and mentors; mentors are usually experienced...
I
People
from
the
ietf
work,
auto
working
group,
basically,
and
then
developers
are
usually
coming
from
universities,
students
that
are
interested
in
learning
you
know
and
on
interesting
production
production
use
cases
and
then
real.
You
know
something
that's
really
really
running
on
a
standard.
We
think
it
can
be
really
good
a
learning
experience,
and
so
today
we
have,
you
know
two
three
universities
already
participating,
but
we
also
call
for
you
know
other
universities,
other
universities
that
might
be
interested
in
participating.
I
So
you
know
we
are
already
potentially
discussing
with
maybe
upc
and
other
universities.
We
really
want
to
make
this
also
sort
of
comprehensive
in
terms
of
people
who
might
want
to
participate,
yeah
project
resources,
so
we're
gonna.
You
know
leverage
github,
some
of
the
new
features
for
scam
management
if
you
will
and
yeah
that's
an
example
of
the
the
dashboard
that
the
the
scrum
dashboard
that
we
use
for
the
hackathon.
This
is
now
completed.
Basically
next
week
we're
gonna
be
sort
of
moving
on
to
the
next
one
yeah.
K
Yup,
thank
you
jordy,
so
carry
on
hear
me.
Yeah!
Okay,
okay,
sounds
good,
so
so
so
after
this
introduction
I'll
give
some
update
on
the
auto
deployment
and
also
the
state
status
of
the
cloud
base,
and
also
the
demo
that
we
did
in
during
the
itf
hex.
So
next
page,
please.
K
So,
as
part
of
the
chatter
item,
we're
actually
collecting
information
about
existing
employment
using
auto,
and
we
have
the
wiki
on
the
itf
webpage,
which
collects
a
list
of
implementations,
and
we
also.
There
are
some
widely
known
deployments
that
we
already
see
from
the
previous
auto
working
group
presenting
in
2008.
K
The
comcast
and
other
vendors
actually
connect
very
few,
a
relatively
large
scale,
a
few
tests
to
for
the
p4p
protocol,
which
is
kind
of
a
predecessor
of
the
auto
protocol.
And
then
we
also
have
the
binox
deployment
with
dash
telecom.
I
think
roughly
starts
around
2017
and
has
been
running
for
several
years
and
they
also
use
auto
as
one
of
the
northbound
apis,
and
we
are
also
working
with
telefonica
to
use
auto
for
their
cdn
deployments
and
I
think
from
the
deployment
we
actually
see.
K
There's
actually
a
shift
from
for
the
work
use
cases
so
previously
is
mostly
about
the
p2p
traffic
and
right
now
it's
shifting
to
a
cdn
traffic,
and
we
we're
also
working
with
russo
and
also
the
pacific
research
platform,
and
they
are
providing
use
cases
such
as
5g,
and
I
think
they
also
mention
like
network
slicing
using
techniques
such
as
srv6.
K
So
well.
So
we
believe,
like
the
five
five
shooting
techniques
and
also
the
large
scale.
Data
management
could
be
the
next
use
cases
that,
where
auto,
can
play
an
important
role.
So
next
page,
please.
K
Okay,
so
so
right
now
I'll
be
talking
about
the
the
hacksaw
that
we
did
for
the
itf
113,
and
here
is
basically
a
summary
of
the
demo.
So
we
are
using
minute
to
simulate
a
network
and
all
the
applications
are
actually
running.
Real
software,
using
specifically
in
containers
and
the
demo
environment,
is
packed
as
multiple
containers
for
future
enhancement.
K
Basically,
so
jensen
is
doing
most
of
the
work
and
we
have
the
the
doctors
will
be
made
available
available
through
the
ietf
hacksaw
on
github,
and
so
what
we
did
during
the
icf
hacksaw
is.
Actually
we
demonstrate
the
capabilities
of
auto
to
select
like
to
give
cost
information
between
deep
sources
and
a
single
client,
so
that
basically
enables
the
source
selection
based
on
the
network,
map
and
cost
map,
and
we
also
compute
two
types
of
costs.
So
this
will
be
based
on
the
new
document.
K
Basically
the
performance
metrics
and
we
are
able
to
basically
provide
two
metrics,
but
well
we'll
give
the
details
later
and
then
for
the
hacksaw.
We
actually
conduct
some
development
and
a
new
library
written
python
is
provided
so
that
people
can
use
these
libraries
to
fulfill.
For
for
future
development,
and
also
we
we
add,
the
auto-based
replica
selection
support
in
the
lucio
scientific
data
management
system.
So
next
page
please.
K
So
for
people
who
are
not
familiar
with
the
russia
develop
data
management
system,
here's
basically
introduction.
So
the
russell
data
management
system
is
used
by
lhc-1,
which
is
also
part
of
us.
We
used
to
host
the
data
generated
by
the
student
project
and
the
data
will
be
actually
spanned
across
multiple
projects,
including
cms,
and
also
atlas.
K
So
this
large-scale
physical
experiments
and
what
we
did
is
we
modified
the
auto,
the
lucio,
a
client
code,
and
we
basically
previously
the
russo
client
code,
enables
the
clients
to
select
replicas
based
on,
for
example,
random
order
or
using
dual
location
information,
and
what
we
did
in
the
hack
zone
is
to
enable
basically
to
integrate
auto
in
basic,
integrated,
auto
client,
with
a
lucio
code,
and
then
we
allow
the
russo
client
to
select
information
collected
by
the
auto
okay.
So
next
page,
please.
K
And
the
hack
during
the
hexon
we
actually
implement
functionalities
from
these
three
drafts
so
for
the
base
protocol.
We
actually
use
the
network
map,
and
course
map
and
for
the
flow
based
course
query.
So
this
these
two
are
individual
documents
and
we
actually
implement
a
flow
code
service
so
that
we
can
express
not
only
the
cross
product
between
source
synthesization,
but
in
a
more
fine-grained
way,
and
also
we
implement
two
metrics
in
the
auto
performance
cosmetic
document.
K
So
here
is
a
summary
of
what
is
achieved
during
the
hacksaw,
so
we
have
like
we
said
before.
We
had
a
client
library
in
python
and
then
we
integrate
autoclient
with
a
certain
russo
replica
download
command,
and
we
also
we
plan
for
three
demos
and
we
were
able
to
achieve
two
of
them
and
also
the
third
one
is
still
partially
partially
completed
and
we
are
still
working
and
hopefully
to
get
working
before
the
next
hacksaw.
K
And
then
we
also
have
some
saucepan
auto
integration
with
sdn
so
and
during
the
process,
we're
actually
using
what
jordy
has
mentioned
before
that
we're
using
the
scrum
board
to
basically
keep
the
software
management
yep.
So
next
page,
please-
and
here
is
basically
the
two
other
measures
that
we
implement
during
the
hacksaw.
So
what
the
the
one?
The
first
is
actually
the
one-way
delay
metric
and
the
second
is
available
bandwidth.
So
next
page
piece.
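For reference, a sketch of how a client might request each of the two metrics through a filtered cost map query. The metric identifiers "delay-ow" and "bw-available" are our reading of the performance cost metrics draft's names, and the endpoint path is a placeholder; check the draft for the exact strings.

    # Filtered cost map queries for the two demo metrics (RFC 7285 style).
    # Metric names and the endpoint path are assumptions.
    import requests

    def query_cost(metric):
        body = {
            "cost-type": {"cost-mode": "numerical", "cost-metric": metric},
            "pids": {"srcs": ["pid-src"], "dsts": ["pid-dst1", "pid-dst2"]},
        }
        r = requests.post("https://alto.example.net/costmap/filtered",
                          json=body,
                          headers={"Accept": "application/alto-costmap+json"})
        return r.json()

    one_way_delay = query_cost("delay-ow")    # assumed metric identifier
    avail_bw = query_cost("bw-available")     # assumed metric identifier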
K
And
so
here
is
a
list
of
the
docker
images
that
we
used
during
the
as
the
test
environment.
So
some
of
the
containers
are
provided
by
the
lucio
project,
so
the
for
example.
The
lucio
container
basically
is
where
the
client
is
located
and
then
we
have
the
xrd
container.
Basically,
that's
where
the
data
are
stored
and
then
we
have
the
auto
the
audio
container,
which
is
used
to
generate
the
auto
maps.
K
And
so
here
is
actually
some
screenshots
for
the
demo.
First,
we
were
using
the
container
nets.
Basically,
canadian
is
a
container
that
enables
the
containers
to
be
connected
to
a
virtual
network.
So
here
we
use
containers
to
construct.
A
network
looks
like
this
and
then
from
this
network
we
actually
construct
to
all
the
resources.
The
first
is
auto
natural
map
and
for
the
natural
map.
K
K
...we actually group the hosts based on the access link, and then we collect bandwidth information between the hosts as the ALTO cost, provided through the ALTO cost map. The third step is, by passing the address of the ALTO cost map to the Rucio command, which we have modified to integrate the ALTO capabilities, it enables the sorting of the replicas based on the ALTO information. Because we are using bandwidth, the sorting order is from the largest bandwidth to the smallest, and then we were able to select the one with the highest bandwidth in this demo. So next page, please.
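A condensed sketch of the selection logic just described: given an ALTO cost map keyed by PID, sort the candidate replicas by bandwidth, descending. All names are illustrative; the real integration lives in the modified Rucio client.

    # Sort replica sources for one client by ALTO cost. For a bandwidth-like
    # metric, higher is better (reverse sort); for latency, lower is better.
    cost_map = {  # cost_map[src_pid][dst_pid] = available bandwidth (Mbps)
        "pid-a": {"pid-client": 400},
        "pid-b": {"pid-client": 950},
        "pid-c": {"pid-client": 120},
    }
    replicas = {"replica1": "pid-a", "replica2": "pid-b", "replica3": "pid-c"}

    def sort_replicas(replicas, cost_map, client_pid, higher_is_better=True):
        return sorted(replicas,
                      key=lambda r: cost_map[replicas[r]][client_pid],
                      reverse=higher_is_better)

    print(sort_replicas(replicas, cost_map, "pid-client"))
    # -> ['replica2', 'replica1', 'replica3']: pick the highest-bandwidth one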
K
And then in this figure, obviously, we are doing the same selection process, except that we're using a different metric: we're not using the bandwidth information but the latency, and for latency the sorting order will be slightly different, in increasing order, so we select the one with the smallest one-way latency. But for this use case, because downloading is mostly determined by the bandwidth, when we use the latency-based selection it does not give the best performance; but it still shows that we enable the capability of exposing this network visibility to the Rucio client. So next page, please.
K
And
here
is
what
we're
doing
for
demo2,
so
we're
actually
providing
the
multiple
throughput
prediction
for
multiple
flows,
and
here
the
screen,
basically
we're
using
two
we're
initializing
like
multiple
flows,
downloading
downloading
requirements.
And
then
we
used
a
prediction
through
multiplication
based
on
the
basically
network
utility
maximization
model,
and
we
already
modified
the
auto
interfaces
to
support
the
flow
core
service
so
that
the
cost
would
not
be
provided
for
the
prosta,
the
cross-product
of
sources
and
destinations,
but
for
more
fine-grained
approaches.
K
And
this
is
actually
proposed
in
one
of
the
individual
drafts
which
we
are
pushing
for
to
become
a
working
group
document.
So
next
specialist.
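To illustrate the contrast with the base protocol's source-by-destination cross product, here is a sketch of what a fine-grained, per-flow cost request could look like; the body layout and field names are guesses at the individual draft's format.

    # Hypothetical flow-cost request: costs for an explicit list of flows
    # rather than the full srcs x dsts cross product. Field names are guesses.
    flow_cost_request = {
        "cost-type": {"cost-mode": "numerical", "cost-metric": "tput"},
        "flows": [
            {"src": "ipv4:10.0.1.5", "dst": "ipv4:10.0.2.9"},
            {"src": "ipv4:10.0.1.5", "dst": "ipv4:10.0.3.7"},
        ],
    }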
K
And
what
is
not
a
fully
achieved
during
the
hacksaw
is
the
third
drama
is
to
so
in
the
in
the
demo,
one
we're
actually
starting
a
download
for
a
single
client
and
in
practice
what
is
what
usually
happens
is
where
there
actually
are
multiple
concurrent
downloads
and
for
multiple
congruent
dentals.
Choosing
the
one
with
the
maximum
bandwidth
may
not
be
the
optimal
solution,
and
what
we're
trying
to
demonstrate
in
this
demo
is
that
maybe
we
can
we
can
average
the
information
provided
in
demo
too.
K
Basically,
the
super
prediction
when
there
are
concrete
tcp
flows
and
so
for
demo,
three
we're
actually
trying
to
developing
basically
to
integrate
the
multiple
flow
throughput
prediction
capability
into
the
russell
clan.
But
this
is
not
fully
completely
and
we
look.
We
are
hoping
to
get
this
demo
down
in
the
next
item,
hack
song,
and
then
we
can
maybe
get
also
have
like
further
demonstrations
about
how
it
can
improve
the
performance.
I
Yeah, I guess, any questions? But before that, I just wanted to also say thank you, because, you know, there are a lot of people who actually worked endless hours for the last 10 days. It has actually been, like, a couple of months of planning and so on, but for the last 10 days, you know, I should really say thank you, because people spanned three time zones, and it's the same late night. So thank you all, everyone, for the adaptations and hard work, yeah. So, any questions on this?
I
So I think this is a new thing; Richard might have more historical background. We will have some... yeah, Richard, actually.
E
Yeah,
so
martin
could
clarify
this
code
and
7285
and
all
the
stuff
they're
part
of
rules
already.
So
of
course,
they
really
need
to
finalize
and
do
all
unit
has
and
so
on.
So
but
overall,
let
me
just
clarify
that
russo
is
the
the
data
management
system
for
search
and
for
audio
items
and
so
on,
and
the
workflow
actually
is
complex,
so
a
workflow.
Typically,
what
using
their
terminology?
E
I
don't
know
if
there
are
any
like
certain
people
or
rules
of
people
here
doing
a
call
here,
but
of
course
they
can
ask
him
to
clarify
and
we're
mostly
working
with
the
project
lead
and
martin
and
and
mario
lesnik,
and
they
are
the
project
leads
of
the
russo
team
and
there's
also
radhu
who's.
The
transfer
lead.
So
therefore
they
they
guided
us
about
the
integration
of
auto
and
with
the
rule,
the
system
and
for
the
user
at
high
level.
E
There
are
two
workloads
what
they
call
and
one
workload
is
called
the
manual
workload,
which
basically
means
whenever
a
client,
for
example,
when
they
download
the
file,
they
won't
really
do
analysis
and
they
would
start
something
called
a
user
download
and
that
part
actually
worked
out.
So
this
part
of
code
actually
is
already
in
the
system
they
probably
hopefully
we
can
get
fully
deployed
onto
the
drupal
system
and
soon
and
all
the
code.
E
I
think
we
went
through
the
old
review
last
week,
so
hopefully
this
will
really
become
part
of
rusev
and
but
actually
the
main
main
work.
Main
workload
is
what
they
call
the
automatic
workload
which
is
automatically
replication
of
all
the
data
into
all
kinds
of
sites
that
part
of
code
to
do.
The
integration
need
to
modify
their
database
schema,
which
is
tricky.
E
So
we
are
hoping
that
we
can
really
like
hammer
down
the
details
with
them
for
next
week,
and
then
we
can
start
really
modify
the
database
schema
and
integrate
into
the
total
the
total
automated
workload.
So,
therefore,
then
we
can
claim
that
we
have
a
full
integration
of
the
both
auto.
This
moment,
mostly,
is
for
the
manual
workflow,
which
is
actually
a
smaller
part
of
total
workload.
G
I
So there is a deployment: we are working with ESnet and the Pacific Research Platform — not with Rucio yet, but on something called GradientGraph, which is something I am actually going to present in a couple of minutes. I don't know if that is the reference you were pointing to.
I
These are science networks, and we are actually going to be working on packaging that into ALTO and deploying it at ESnet and PRP — starting with PRP, I should say — and then potentially moving into ESnet. And there are a lot of synergies there because, as you know, all these networks are interconnected. So yeah — okay.
G
B
Okay — thanks, Jordi; thanks, Kai. You did a very good job with the hackathon; actually building the code base is very important for ALTO. Let's move to the next topic. Given the time limit, I want to suggest, Jordi, that you may only have 15 minutes for this topic.
I
Let's get through this. This is work actually coming from research — there are a couple of papers we published at SIGMETRICS and SIGCOMM — and then Professor Richard Yang, about nine months ago, reached out to us saying: hey, this is interesting, maybe we could discuss bringing this into the ALTO work. So, first of all, thank you to Richard for reaching out; it has been a very interesting nine months since then. And thank you, everyone, for the coaching and the guidance from everyone at the ALTO working group. So, this is an informational draft. I am going to skip the details, of course — if you are more interested, you can look into the papers — and what I will try to do is briefly introduce bottleneck structures, then talk about potential use cases for ALTO, and then requirements.
I
Let's see. So, for context here, I am going to pick on the congestion control problem. The conventional view on congestion control, basically for the last 30 years, is the idea that the performance of a flow is uniquely determined by its bottleneck link. This comes from Jacobson's paper back in 1988, which literally saved the Internet from congestion collapse by inventing the first congestion control algorithm. And this is a true statement: the performance of a flow...
I
...is determined by its bottleneck link. But while this is a true statement, we realized that there is a much more fundamental, or intricate, element going on in a communication network.
I
The analogy here is with an iceberg: if a communication network were an iceberg, the notion that the performance of a flow is uniquely determined by its bottleneck link would be the tip of the iceberg. Underneath, the submerged part is what we call the bottleneck structure, which really reveals system-wide performance: how flows and bottleneck links relate to each other, and the forces that they exert on each other.
I
Basically — so, let's see how that works. I am going to use a very simple example, to see if we can capture the idea and what the relationship with ALTO is. We are going to show that bottleneck structures are a very compact way to summarize the state of the network — one that includes topology, routing, and flow information in a single directed graph — and that actually allows you to quantify things and compute derivatives on the network.
I
It is a way to capture the state of the network that applications could potentially leverage. So, let's take this network as an example. Links are circles — there are four links, link one through link four, each with a different capacity. Flows are lines, each with a different color — there are six flows. I am going to skip forward and just show you the bottleneck structure. So this is the bottleneck structure of this network. How do we read a bottleneck structure?
I
If there is a directed edge from a flow to a link, then that flow traverses that link — that is the relationship. So flow three traverses link two, because there is a directed edge from flow three to link two. Because flow three is bottlenecked at link one — which it also traverses — there is a bidirectional edge there: whenever a flow is bottlenecked at a link, the edge is bidirectional.
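As an illustration of those reading rules — using a reduced, invented set of flows and links rather than the four-link, six-flow example on the slide, whose capacities are not given here — one might encode the graph like this:

```python
import networkx as nx  # assumes networkx is installed

# Toy bottleneck structure, following the reading rules from the talk:
#   flow -> link  : the flow traverses that link
#   link <-> flow : the flow is also *bottlenecked* at that link
traverses  = {"f1": ["l1"], "f2": ["l2"], "f3": ["l1", "l2"]}
bottleneck = {"f1": "l1", "f2": "l2", "f3": "l1"}

G = nx.DiGraph()
for flow, links in traverses.items():
    for link in links:
        G.add_edge(flow, link)            # traversal edge
        if bottleneck[flow] == link:
            G.add_edge(link, flow)        # bottlenecked => bidirectional

print(sorted(G.edges()))
```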
I
Now, why this graph is relevant: it actually allows us to both qualify and quantify the forces that flows and bottlenecks exert on each other, and it reveals the hierarchical structure of the submerged part of that iceberg, which tells us how to drive system-wide performance.
I
If you want to know how this graph can be computed: it can be computed in polynomial time. You can actually compute this graph for a US-wide network in a fraction of a second — the algorithms are highly scalable, and that is an important point. But basically, I am going to talk about a couple of concepts.
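An aside on that polynomial-time claim: the talk does not give the algorithm, but the classic way to obtain the raw ingredients of the structure — each flow's max-min rate and the link where it is bottlenecked — is progressive filling. A minimal sketch, with invented capacities and paths:

```python
def maxmin_rates(cap, flows):
    """Max-min fair rates by progressive filling.

    cap:   {link: capacity}
    flows: {flow: [links it traverses]}
    Every iteration saturates at least one link, so the loop runs at
    most len(cap) times -- polynomial, as claimed in the talk.
    """
    rate = {f: 0.0 for f in flows}
    cap = dict(cap)                      # residual link capacities
    active = set(flows)                  # flows whose rate still grows
    while active:
        n = {l: sum(1 for f in active if l in flows[f]) for l in cap}
        # the link that saturates first bounds the common rate increment
        l_star = min((l for l in cap if n[l]), key=lambda l: cap[l] / n[l])
        inc = cap[l_star] / n[l_star]
        for f in list(active):
            rate[f] += inc
            for l in flows[f]:
                if l in cap:
                    cap[l] -= inc
            if l_star in flows[f]:       # f is bottlenecked at l_star
                active.remove(f)
        del cap[l_star]
    return rate

# Invented two-link example: f2 is bottlenecked at l2, f1 at l1.
print(maxmin_rates({"l1": 10, "l2": 4},
                   {"f1": ["l1"], "f2": ["l1", "l2"]}))
# -> {'f1': 6.0, 'f2': 4.0}
```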
I
One is the fact that this graph reveals how perturbations on a network propagate through the system — basically, the ripple effects. Suppose there is a perturbation on a link. That could mean several things: for instance, if it is a wireless link, it could be that the signal-to-noise ratio is changing. What the graph tells us is that, if there is a perturbation on link two, it is going to have an effect on the flows that can be reached from this link according to the bottleneck structure.
I
So this tells us that only these flows here, which are reachable — they have a path — will be affected by such a perturbation. These other flows will not be affected, because you cannot go from link two to flow one: there is no path, it is broken here, and so on. So, first, it tells us how things are interconnected.
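Continuing with the toy graph G from the sketch above, the ripple effect described here is plain graph reachability:

```python
import networkx as nx  # continues the toy graph G built earlier

# Only flow nodes reachable from the perturbed link are affected.
print(nx.descendants(G, "l2"))  # {'f2'}: f3 crosses l2 but is
                                # bottlenecked elsewhere, so to first
                                # order it is untouched
print(nx.descendants(G, "l1"))  # f1, f3, l2, f2 (order may vary): the
                                # change spills through f3 into l2,
                                # and from there into f2
```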
I
The same applies to flows: if there is a perturbation on a flow — if I traffic-shape a flow — that is going to create a ripple effect on the network, and the graph tells us how it propagates. So far we have been talking about the qualitative aspects of the problem, but the other element is that bottleneck structures are actually computational graphs themselves: we can also use them to compute the magnitude of change.
I
If I have a perturbation, I can quantify how much the change is — how much it is going to induce a change on a flow. And what is a perturbation? A perturbation is basically taking a derivative on the system: a small, infinitesimal change on a link capacity or a flow rate is going to have an effect, and that is like taking a derivative.
I
So once we have a tool that allows us to compute derivatives, that is a tool to help us optimize application performance — and that is the connection with ALTO. We think bottleneck structures can be a good way to summarize the state of the network and empower applications to figure out how to do better routing, better flow traffic shaping, or rate limiting — for instance, for XR applications, where the reality is that you need to encode at the rate of your sender...
I
...according to the multi-domain available bandwidth — you could actually use this kind of framework. I am not going to go into this, but it is a tool to also quantify these changes and compute derivatives. So, anyway, there is this notion: can a butterfly in Mexico create a tornado in Asia? Of course the answer is no, but everything is interrelated, and bottleneck structures tell us how.
I
What is the effect of a butterfly flapping its wings in Mexico on, say, China, for instance? So yeah — I am actually going to skip through this; the slide deck is there — and just jump ahead. This just reflects the idea that you can actually compute what we call gradients, or derivatives, using the bottleneck structure: think of the bottleneck structure as a computational graph.
I
So one of the values is that this allows us to do these calculations very efficiently, because these are delta calculations on a graph. If you try to solve these problems using linear programming, that rapidly becomes non-scalable, because you have millions of flows and so on — it is hard. But these kinds of calculations...
I
...you can actually do two or three orders of magnitude faster using these techniques. So it is not only that we can qualify and quantify, but also the speed at which we can do these calculations: we are looking at doing this kind of analysis, in many cases, in near real time.
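To make the "perturbations are derivatives" point concrete, here is a brute-force finite-difference version of a link gradient, reusing the maxmin_rates() helper from the progressive-filling sketch earlier. This is only an illustration: the draft's graph-based algorithms compute the same derivative directly on the bottleneck structure, which is where the two-to-three orders of magnitude come from.

```python
def link_gradient(cap, flows, link, flow, delta=1e-3):
    """d(rate of `flow`) / d(capacity of `link`), by finite difference.
    Brute force, for illustration only; uses maxmin_rates() from the
    progressive-filling sketch above."""
    base = maxmin_rates(cap, flows)[flow]
    bumped = dict(cap)
    bumped[link] += delta
    return (maxmin_rates(bumped, flows)[flow] - base) / delta

cap   = {"l1": 10, "l2": 4}
flows = {"f1": ["l1"], "f2": ["l1", "l2"]}
print(link_gradient(cap, flows, "l2", "f2"))  # ~ +1.0: f2 speeds up
print(link_gradient(cap, flows, "l2", "f1"))  # ~ -1.0: the ripple
                                              # effect takes from f1
```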
I
Okay — so I am going to skip through these: types of perturbations. Perturbations are derivatives, basically, and we can compute many different kinds of perturbations and think of them all as derivatives. Let me actually skip through this and just jump into the ALTO use cases. So, we have this sort of tree; we realized that bottleneck structures...
I
...are a sort of foundational element, if you will, and that is what we realized through the research: there are potentially many applications where you could use them — network design, traffic engineering, even AI, because bottleneck structures are computational graphs and neural networks are computational graphs too, so you could think of a bottleneck structure as a neural network itself. But the point of the work here is also: how does this get connected?
I
We think that there is a strong connection with ALTO — thanks again to Richard for connecting us — and also, throughout this week, participating here, you can see that in many different working groups there are potential connections. I listed some of them here, but we are still exploring, and we look forward, starting from the ALTO work, to seeing how we could explore potential cross-working-group collaborations.
I
Yeah — the use cases: you will have them in the I-D, so you may want to look at them there. What I am going to do is just pick one of these use cases, optimized routing and congestion control, and for this one we have a worked example. What we do in the I-D is take an arbitrary topology — we actually chose Google's B4 topology; this is from the 2013 SIGCOMM paper, so it is a simplified version.
Now,
of
course,
it
has
many
more
data
centers,
but
and
then
I'm
gonna
do
I'm
gonna
just
show
you
the
more
sort
of
human
readable
version
of
that
slide.
So
this
is
a
the
subset,
I
guess,
of
the
google's
before
network
with
links
across
the
globe
and
then
we're
going
to
compute
the
bond
mecha
structure
of
this
network
and
what
you
get
is
this
one?
So
this
is
in
the
I
draft,
but
I'm
going
to
make
this
more
user-friendly.
I
The assumption here is that I am going to make some ad hoc choices: everything has 10 gigabits per second of capacity, and the two transatlantic links have 25 gigabits per second. And I am just going to take a very simple use case in which every pair of data centers between the US and Europe is connected both ways.
I
Okay — in fact, as you know, B4 runs as a sort of multi-path network, but in this case we are going to assume a single path. You can then actually compute the bottleneck structure of that, and this is what you get. Then you can start reasoning: okay, if I want to do a large dataset transfer, which path should I be using? For instance, here in the bottleneck structure we see that the transatlantic links are at this level...
I
...of the bottleneck structure, and these two links are here. I highlighted these two links because they are relevant: there is this property that the links at the top get less bandwidth, so these are actually the links that are more bottlenecked. There is a notion of being more influential, because these links actually influence the performance of the whole network, so they tend to be more relevant. And then — I am actually going to skip ahead in the interest of time.
I
But basically, what we do in this use case is this: I need to transfer data from this point to this point, and I have multiple choices, and I am going to use the bottleneck structure to predict the performance — the rate. This is sort of like solving the joint routing and congestion control problem together, because what bottleneck structures do is model the congestion control algorithm and tell you: if I place a flow on this network along this path, that is going to create a ripple effect, and I can compute what congestion control will do and what rate it will get you after you place the flow. If you do that, then in this case we showed that it is actually better to not use the shortest path — you are going to get more bandwidth if you use the non-shortest path. Sometimes some of these outcomes are non-intuitive, basically, and you can also reason about them.
yeah
we
have
a
few
requirements
that
we
start
sort
of
discuss
in
the
group
or
through
the
draft
we're
starting
to
discuss
it.
This
is
very
preliminary.
I
should
say
the
the
initial
requirements
are
in
the
draft,
so
maybe
we
don't
have
to
actually
maybe
time
because
to
do
that,
but
they
are
in
in
the
in
the
draft.
They
are
very
intuitive.
I
At this point, the requirements are actually structured in four groups. One is the bottleneck structure graph service abstraction: do we want to create an abstraction for bottleneck structures — an object, basically — that would go into the specification? Requirements group two is the information received from the network: what kind of information do we need to extract from the network in order to compute the bottleneck structure? Then there are requirements regarding the information passed to the application — so you can think of one as the southbound and the other as the northbound.
I
Do we want to pass the whole bottleneck structure, as a compact way to represent the state, or maybe only some elements of it? And then, the fourth group: features that would go into this potential bottleneck structure graph service — some initial debate, or discussion, about what features we want to put up for consideration. So we welcome any feedback; at this point we are just discussing this and building understanding.
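To make the first requirement concrete, here is one purely hypothetical shape such a northbound object could take — none of these field names exist in any ALTO document today, and the draft has not defined a media type for this:

```python
import json

# Hypothetical ALTO "bottleneck structure graph" resource, invented
# only to illustrate the abstraction question; not part of any draft.
bsg = {
    "meta": {"vtag": {"resource-id": "bsg-example", "tag": "1234abcd"}},
    "bottleneck-structure": {
        "links": {"l1": {"capacity-gbps": 10}, "l2": {"capacity-gbps": 25}},
        "flows": {"f1": {"rate-gbps": 5.0, "path": ["l1", "l2"]}},
        "edges": [["f1", "l1"], ["l1", "f1"], ["f1", "l2"]],
    },
}
print(json.dumps(bsg, indent=2))
```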
I
We are trying to understand what the connection is, and to get some coaching and guidance on the next steps here. Looking forward to that conversation.
B
Thanks, Jordi. Actually, given the time limit, we do not have time to take questions — maybe we can take them to the list. You can introduce your draft on the list, and we can keep on discussing it at the ALTO weekly WebEx meeting.
C
Hello, this is Luis Contreras from Telefonica. Yeah — next slide, please.
C
Yeah — well, the idea of this presentation is simply to comment on compute-awareness capabilities and on what the role of ALTO could be in this story. Content service providers have done this for a long time, for sure, but now operators too are developing their networks by adding compute capabilities, spread and distributed across the network. This is not necessarily related to the edge — for sure, the edge is there — but we can talk in general terms about compute capabilities: the cloud, that is, more or less centralized data centers, and for sure also edge computing.
C
So it seems interesting to know where and how these compute capabilities are connected, and also to extract metrics from them — to understand, for instance, what the latency would be, or what throughput we could get, for reaching whatever amount of CPUs and storage along the cloud path.
C
This is clear for plain resources, but it could also be augmented for connecting a service function: so we would have a notion of where we can find a gateway, or where we can deploy a gateway or whatever other function we might consider. So there is some space for optimization of service delivery, management, and planning by combining the compute and the network information together — breaking the silos that exist today between the two — and permitting decisions on the placement of functions, or simply on connecting to and accessing resources.
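A purely illustrative example of the kind of combined decision this would enable — every property name below is invented, not from any ALTO specification:

```python
# Toy placement decision over combined network+compute data of the
# kind an ALTO-like service could expose (hypothetical properties).
sites = {
    "edge-1":  {"latency-ms": 5,  "throughput-mbps": 900,
                "free-vcpus": 16,  "free-ram-gb": 64},
    "cloud-1": {"latency-ms": 40, "throughput-mbps": 2000,
                "free-vcpus": 512, "free-ram-gb": 4096},
}
# Place a latency-sensitive function: meet a 10 ms budget first, then
# prefer the feasible site with the most free compute.
feasible = {s: p for s, p in sites.items() if p["latency-ms"] <= 10}
print(max(feasible, key=lambda s: feasible[s]["free-vcpus"]))  # edge-1
```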
C
So, next, please. Yeah — just to finish: this week there was a BoF named CAN, about computing-aware networking. The work there was precisely trying to address this same problem space, but from the routing perspective — trying to define routing solutions for it. We consider that ALTO can address the same problem space, but from a different engineering perspective: from an application-layer point of view, or angle.
C
So the idea would be to have ALTO as an element capable of exposing combined information between the network and the compute capabilities: adding metrics, resources, a topology view, reachability, and so on. In fact, there are two existing pieces of work on this subject so far — I put the drafts on the slide. And, just simply, the final message for this:
C
The goal we were considering with this presentation would be to explore how ALTO can contribute to this — to define a solution to this problem space — and to propose this as a subject for future working group rechartering. And with that, I have finished. Thank you.
B
Okay — thanks, Luis; actually a good topic. ALTO, you know, was actually designed for the CDN application, and this compute-aware networking work is very similar to the CDN application, so I think ALTO can also be a potential solution for compute-aware networking. So let's keep on discussing this and keep cooking your draft. And, for the audience:
B
If you have any input on this, feel free to join in on the mailing list or contact Luis, as a follow-up to the CAN discussion. And with this, we are actually at the end of the meeting. So — Med, do you have any last words, or nothing?
C
B
Actually — yeah; he is not here, but I would like to thank Yen, who served as an author and as ALTO chair. So, thanks to all, and that closes the meeting.