From YouTube: IETF114 ALTO 20220726 1400
B: So this is Qin, and the co-chair is Mohamed. We are the ALTO co-chairs, and this is the ALTO working group session, so welcome to Philadelphia. For people in the room: if you are not speaking, please keep your mask on; if you make a comment or give a presentation, you can take the mask off.
B: When you make a comment, please state your name. For the blue sheets, we have an electronic blue sheet, and it will automatically record your attendance. For jabber scribe and note taker, we already have Jordi and Richard volunteering, so thanks for taking notes. The slides are already uploaded, so please take a look at them.
B: A reminder about working remotely: most of our work has been done on the mailing list, so please make good use of the mailing list. We can also schedule interim meetings if needed; we already hold ALTO working group interim meetings using the IETF working group WebEx resources.
B: This is the meeting agenda we propose for today's discussion. The first item is the session introduction and working group status update, and then we will focus on the charter items. We have two important charter items: the first is ALTO OAM support, and the second is ALTO over new transport. For ALTO OAM support we will focus on the status update and resolving the pending issues; we received a lot of comments before this meeting, so hopefully we can have a good discussion on this topic. The second is ALTO over new transport.
B: As chairs, we requested reviews from the other areas and got a lot of quite good quality comments, so we will take more iterations, and we hope the authors will pay more attention to this. For ALTO over new transport we need to engage with the experts from the ART area to get this work done, because, just a reminder, our milestone is set for September.
B: We will also discuss the deployment experience updates. We continued our hackathon; this is our second hackathon, led by Jordi, and we also have some other people, and Luis gave some updates on what they are doing, so we will have a joint presentation for this deployment experience update.
B: If we have time, we will discuss the non-charter items. Both works relate to computing-aware networking. The first one considers ALTO as the network exposure function; we have three relevant drafts, and the first two are existing drafts that got updated before the meeting. The last draft discusses ALTO service functions; this is a new draft, so Luis will give an update on its current status. In addition, we have another draft on the architecture of computing-aware networking, and this draft is also related to Luis's work.
B: Maybe we can have some discussion on this. That is how we will present this topic; any agenda bash for this?
B: I see none, so we move forward to the document update. First, the new RFCs: since the last IETF meeting we have made a lot of progress, and we have three documents delivered. The first two are existing working group documents, unified properties and CDNI request routing. The last one went to the RFC Editor queue; just before the last IETF meeting it was still an individual draft, but after that meeting we made huge progress.
B: We already moved that individual draft to working group adoption, then to the RFC Editor queue, and it got published, which shows the speedy progress on this work. Second, we still have two working group drafts waiting to be delivered; I hope the authors can engage with the RFC Editor to address the editorial comments and move these two works forward. For working group drafts, since the last IETF meeting we adopted two drafts: OAM and new transport.
B: This is the current milestone status update. You can see we already delivered one of our milestones, cost mode, for publication, so that milestone is concluded, and we still have three ongoing milestones. Please bear in mind the time frame for this work, and especially pay more attention to ALTO over new transport; for this we need to engage more with the experts from the ART area.
B: Jensen, Jensen, you can speak.
B: For external review, as we just mentioned, we already received early reviews and got some comments from the HTTPBIS working group, like from Mike, so thanks to Mike and Martin, and also Spencer, who gave very valuable reviews. So please focus on this external review.
B: I also want to mention that we had ALTO interim meetings to discuss cellular information exposure. There are two relevant works: one is MoWIE and the second is PBE-CC, and it is not only IETF people who are interested in this kind of topic; we also see that Alibaba and Princeton have published a lot of relevant papers. So we scheduled meetings to discuss with these experts and researchers.
B: Professor Jennifer and her team also developed some open-source tools, which are very helpful for addressing some issues in cellular information exposure, and they also shared some experience on cellular information aggregation. Before this meeting they posted a new version and addressed some comments based on the discussions in the interim meetings, and Tess from Alibaba shared his paper.
B: This paper has already been accepted by SIGCOMM and is focused on an inter-AP solution. We agreed that, for the next step, we should come up with a framework to investigate what cellular information to collect, how to transport the collected information to the application, and how to react to that information. Hopefully we can make more progress on this topic after this meeting. The last item: we have put a lot of effort into socializing ALTO.
B: In this meeting Luis will give a presentation introducing their CDN practice and its integration with ALTO, and I will discuss the ALTO introduction in the OPSAWG working group; we already got a time slot for this. In addition, we are working with ALTO design team members on some papers targeted at IETF publications and the IP Journal, and we will keep cooking these ideas.
B: Hopefully we can make more progress. So that's it, and I think we can go to the first presenter.
A: I'm Matty. I'm here on behalf of the authors of the OAM module for ALTO, who couldn't be here.
A: They couldn't be here in person, so I'm doing the presentation on their behalf, but the discussion will be guided by Jensen, who is in the virtual room. The reason I'm actually focusing on this point is that this presentation is intended to start a large part of the discussions on OAM, because we intend to increase the range that it actually covers.
A: Well, the name OAM stands for Operations, Administration, Maintenance, and Management of the ALTO protocol. In this slide you can see the link to the latest version in the datatracker and also in GitHub, which is good, because a lot of discussions are going on in the IETF ALTO WG GitHub, along with the definition of the YANG module.
A: After IETF 113, the group received many reviews and addressed a lot of them. We received five reviews on the working group mailing list, we have an ongoing discussion with the NETMOD working group, and we had five discussions on GitHub. We also achieved two milestones for OAM: the first one is document adoption, and the second is that in the IETF 114 hackathon we implemented the concepts that are part of OAM for ALTO, which are given in the GitHub link.
A: Okay, in this slide you can see a data model overview of the YANG modules that we have included in the draft and also in the implementation. We have the server management information: the resource manager, the performance monitor, and the logging and fault manager, and we also have the data sources inside.
A: We have color-coded the parts that are outside the scope of OAM but inside the scope of ALTO, and also the parts that are even outside the scope of ALTO, such as the data sources that interact with southbound APIs through data source listeners.
A: The first three questions I have raised on the working group mailing list; questions four and five have been discussed internally in the ALTO working group; and the last question has not been discussed yet, but it is an important question that Jensen himself will talk about, and we will devote a large portion of this meeting to deciding on it.
A: So, the first question is how to deploy data types in ALTO-related IANA registries, and we have two proposed models for that, each with its own pros and cons.
A: The first one is to define enumerations in IANA ALTO types. It has a pro in that it guarantees consistency between different types, but it has a con that is a major consideration for us: it is hard to extend to new data types for experimental drafts, and it also requires IANA to have YANG skills to interpret these data types.
A: The second proposal is to use identities in the IETF ALTO YANG module. It has a pro in that it is easily extendable to new data types, but the con is that it lacks control for consistency and may result in challenging interoperability between implementations. So we are considering which one we should go on with: the first was to use enumerations and the second to use identities in the YANG module.
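The enumeration-versus-identity trade-off described above can be sketched in Python terms (an illustrative analogy only; the value names come from the base ALTO cost modes, while the extension name and draft name are hypothetical, not from any IANA registry or the meeting materials):

```python
from enum import Enum

# Option 1 (enumeration): every legal value is fixed when the module is
# published, like a YANG "enumeration" in an IANA-maintained module.
class CostMode(Enum):
    NUMERICAL = "numerical"   # from the base ALTO protocol
    ORDINAL = "ordinal"       # from the base ALTO protocol
    # An experimental draft cannot add a value without revising this
    # definition (an IANA action, which also needs YANG expertise).

# Option 2 (identity): an open registry that other modules can extend
# without touching the base definition, like a YANG "identity".
COST_MODE_REGISTRY = {"numerical": "RFC 7285", "ordinal": "RFC 7285"}

def register_cost_mode(name: str, defined_in: str) -> None:
    # Nothing enforces cross-implementation consistency here: two parties
    # can independently register different names for the same concept.
    if name in COST_MODE_REGISTRY:
        raise ValueError(f"cost mode {name!r} already registered")
    COST_MODE_REGISTRY[name] = defined_in

# A hypothetical experimental draft extends the registry freely:
register_cost_mode("experimental-mode", "draft-example-alto-new-mode")
```

The closed `Enum` mirrors the IANA-enumeration option (consistent but frozen), while the dict registry mirrors the identity option (extendable by anyone, with only local consistency checks).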
A: The second question is server-level management. Inspired by RESTCONF, we added a server list and a listen stack, which are configurable in the YANG module, but the remaining question here is whether we should include the configuration parameters of lower-level underlying protocols there too or not; for example, lower-level HTTP configurations like Cache-Control or Retry-After, etc.
A: The third question that we try to answer is logging and fault management. Currently in the YANG module we have defined success/failure counts for requests and responses, but the reviewers have proposed three more logging criteria that we will add in the revision of the draft. The first one is success/failure records for the configuration updates themselves.
A: The second one is records of which configuration types the YANG module supports, and we were wondering whether the YANG model can provide this parameter itself or not. The last one is status updates for the connections to the data sources that the ALTO server is supporting, and we are looking forward to getting more suggested useful metrics in the comments and discussions, or from the experience of real deployments.
A: For the next question, the first part is how to configure the way the ALTO client accesses ALTO services: for example, the URIs, the resource IDs, and the parameters that the ALTO client should use to be able to query the ALTO server. The second part is the data model for transport mechanism control, to which we will come back later.
A: There are three main ways: data polling, pub/sub, or on-demand query, and they are very different from each other. So for this one, the main question we want comments on is whether we should add a new top-level container or list for the ALTO client, or whether to add a new data source type for the ALTO server and consider the ALTO client as an ALTO data source listener. These are very different.
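The three collection mechanisms just listed (polling, pub/sub, on-demand) can be contrasted with a small configuration sketch; the field names below are hypothetical illustrations, not taken from the OAM YANG module:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical configuration shape for the three update mechanisms named
# in the discussion; each mechanism needs different standing parameters.
@dataclass
class DataSourceConfig:
    source_id: str
    mechanism: str                         # "polling" | "pubsub" | "on-demand"
    poll_interval_s: Optional[int] = None  # meaningful only for polling
    topic: Optional[str] = None            # meaningful only for pubsub

def validate(cfg: DataSourceConfig) -> bool:
    """Check that the mechanism-specific parameters are present."""
    if cfg.mechanism == "polling":
        return cfg.poll_interval_s is not None and cfg.poll_interval_s > 0
    if cfg.mechanism == "pubsub":
        return cfg.topic is not None
    # On-demand queries carry no standing parameters in this sketch.
    return cfg.mechanism == "on-demand"
```

The point of the sketch is that the mechanisms do not share one parameter set, which is why the question of a unified data model (question six below) is hard.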
A: Okay, the next major question that we want to answer is server-to-server communication. We have introduced multi-domain settings for ALTO in two major drafts: the first one is for use cases and the second one is for analytics. The links to these two drafts are given in this slide, so you can refer to them. Through the work for this IETF meeting, we have faced two main questions.
A: The first one is that an ALTO server sometimes needs to be a data source for another ALTO server, to provide it with network information. We have two design options here.
A: The first one is to collect data using ALTO directly, in which an ALTO server can act like an ALTO client of another ALTO server, and that may need extensions to ALTO. The second design option is to use other southbound protocols to expose the databases of ALTO servers between multiple ALTO servers, for which we have no standards so far, and we need to decide whether that had better be a standard or not.
A: And the second main question here, in server-to-server multi-domain communications between ALTO servers, is cross-domain path discovery, which is a question we ran into in the hackathon and in the deployment, so it's a real question: none of the existing ALTO services can provide information for cross-domain path discovery, and we actually require some mechanisms to look up the ingress and egress points of inter-domain paths in different administrative domains, to trace back the routes between multiple administrative domains.
A: The last question that we are seeking an answer to is the one we don't have an answer for yet, and for which we want to ask for the most comments: how to unify the different data sources in the YANG data model and configuration. We need different data models for different aspects.
A: The first one is, for example, the southbound protocol stack parameters, like URLs, protocol versioning, or the authentication settings that I previously talked about. The second one is setting the query parameters, the query expressions. And the last one, which is the most conflicting one, is the data collection mechanism parameters: whether the clients have the ability to periodically poll the ALTO server, or use pub/sub mechanisms, or on-demand querying. For this one, Jensen will continue the discussion. So, Jensen?
C: Thank you. Yeah, so for this question: in the current document we do introduce these three kinds of parameters for each data source, configured by the current data model in the document, and we define two mechanisms to track the data: one is that it can do periodical polling, and the other is pub/sub. But in our current, real deployment...
C: ...some information resources may need to do an on-demand query to the data sources. Instead of the data source automatically updating the data broker once the data changes, the algorithm plugin may want to have some direct access to the data source. So the question is whether we are supposed to add a new parameter selecting this type, because it is on demand.
C: That means the algorithms likely don't need to access the internal data broker inside the ALTO server, but just translate the computation into a direct query to the data source. But this introduces complexity about how to do the binding between the data model, the data structure, and the raw data collected from the data sources. So this is still an open question; we want to receive some feedback, because other members may have their own requirements from real deployments for how to handle this.
A: You can go on. So the next steps for ALTO OAM would be that the authors will make decisions about questions one to three, which we discussed, as soon as possible, and we will submit a new version of the OAM document.
A: To give you the milestones: the first one is that we will reach agreement on questions one to six and completely revise the document before the next IETF meeting, and the second one is that we want to push the ALTO OAM YANG model deployment into the current deployment of ALTO that we have on the OpenALTO GitHub page, before March 2023.
E: Martin Duke, Google. Can you go back to slide five? Or question five, pardon me. So why would you not just use the existing ALTO protocol design? Why not? What are the drawbacks of using design option one?
C: We don't have any documents specifying how to handle the...
E: Logically, if you have an ALTO server and you want to get the configuration of that ALTO server, you ought to be able to use ALTO, and it has the advantage of not having more extensions. I don't see why you would necessarily need an extension to do that, but, without looking at the details, architecturally that would be way cleaner than doing a whole other design, in my opinion.
F: Yeah, I think the comment made by Martin is very good. The only small case, which may be a question for Jensen, because you guys did the hackathon just a few days ago and it was multi-domain: if you do multi-domain discovery using ALTO, I think you are missing the egress point, which may or may not be interesting to the client.
F: Would that really be an example? One way is, as Martin said, to just extend the ALTO protocol itself a little bit, which is very clean by itself. Or maybe there are really some other cases: maybe it's easier for ALTO servers to reach agreement, because those are essentially enterprise-to-enterprise agreements rather than consumer-to-enterprise, and even for that case, maybe the issue can be solved by some kind of authentication or authorization.
A: So that's the point: clients might not be interested in finding that out. But Martin's point also makes sense to me, because we should consider some extensions in ALTO itself that enable the transfer of this kind of information. But Richard also had another point that I have not answered yet.
A: It might be that some sort of communication between enterprises makes sense, with some sort of specific authentication or handshake or something like that, and the client does not have access to that level of information. I think that would be the main difference between these two interfaces. But if we realize that we can handle this concern without adding another level of complexity to the protocol, we will definitely go with design option one, with some extensions.
E: Well, I'm sorry, I'm still fully processing this slide. So is this whole discussion relevant only if... do the currently adopted documents in ALTO allow us to do the multi-domain use case? Okay, so this doesn't assume that these other documents go anywhere.
A: No, the previous one. So this question is one of the major concerns of the authors, so I'm bringing it up on their behalf.
C: Yeah, we want to share some experience from experiments about how to handle...
C: ...the connection between the algorithm plugin, which generates the ALTO information, and the data sources. So far we have two approaches. One is that the ALTO server can build an internal data broker to store the network information collected from the data sources, and the algorithm plugin will use the information stored in the internal data broker to compute the information resources. Another approach is that the ALTO plugin directly queries the data sources and does not use any data broker.
C: So those are the two approaches; each can happen in different cases, but within the same system it's actually hard to find a current approach that defines what the model should be to handle this.
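The two approaches Jensen contrasts, an internal data broker versus direct on-demand queries, can be sketched as follows (illustrative Python; the class and method names are assumptions for this sketch, not from the draft):

```python
# Two ways the algorithm plugin can reach network data: via an internal
# data broker that the server keeps refreshed, or by translating each
# computation into a direct query to the data source.
class DataSource:
    def __init__(self):
        self.queries = 0          # count queries to show the cost difference
    def query(self) -> dict:
        self.queries += 1
        return {"delay-ms": 10}

class DataBroker:
    """Approach 1: a server-side cache; plugins never touch the source."""
    def __init__(self, source: DataSource):
        self._source = source
        self._cache: dict = {}
    def refresh(self) -> None:    # run by the server, e.g. on a schedule
        self._cache = self._source.query()
    def read(self) -> dict:
        return self._cache

def plugin_via_broker(broker: DataBroker) -> dict:
    return broker.read()          # no per-computation query cost

def plugin_direct(source: DataSource) -> dict:
    """Approach 2: on-demand; every computation queries the source."""
    return source.query()
```

The broker amortizes source queries across computations but adds the model-binding problem mentioned above; direct queries avoid the broker but pay the query cost on every computation.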
F: For example, you're really talking about quite advanced data sources. Just a few days ago, maybe three or four days ago, Marty and I were talking to Chin Guok, who is an architect of ESnet, and we were asking him how to deploy ALTO. Actually, his suggestion was: can I just send you a file every few minutes, and then you just read it from the file, in a format we can agree on, and load it into your own server? Wouldn't that be simple? So there are a lot of different data sources.
F: Do you really want to visit all of them? My question is: you mentioned servers, and I think for the hackathon you really have all kinds of issues. For the IETF 113 hackathon I think we were using G2 and Mininet, which is a different format, and now you are talking about all the different ones. So it's quite large; that's my main concern.
A
I
think
refrigerator's
comments
is
like
quite
relevant
here,
sorry,
but
for
20
seconds.
So
we
can
decide
to
delegate
discussing
this
question
for
further
Improvement
after
IDF,
like
115
and
consent
focus
on
like
addressing
question
one
two:
five
before
tackling
this
problem.
So
that
might
be
a
question
but
I
don't
know
if
you
have
realized
Jensen's
comments
or
not
I
think
he
thinks
that
in
deployments
it
would
be
relevant.
E: This data source thing: I mean, we don't define any of the data source APIs, right? Okay, yeah. And I don't know about this idea of data brokers; I mean, ALTO is supposed to be a data source, a server itself, right? And there's just this proprietary thing where you're gathering stuff from around the network, and you're proposing a broker that is collecting the data and then distributing it to ALTO.
E
It's
like
that's
the
that's
the
alto
server,
so
you're
like
inventing
a
new
Alto
to
speed
Alto,
which,
just
to
zoom
out
and
again
I'm,
not
you
know
a
practitioner
that
seems
like
very
I,
think
you're
creating
abstraction
to
make
this
problem
practical
tractable.
That
is
not
does
not
sound,
very
realistic,
I
guess
and
like
maybe.
E: There may be interesting work in defining APIs for data collection, but you would have to get a lot of devices to subscribe to that, and that seems like a big scope increase that we shouldn't get into. Thanks.
G: I fully agree here. I think that going through the southbound interfaces is something that... I mean, ALTO is about exposing data, not about collecting data. How you collect the data is quite important for the implementation, and for making decisions on the implementation and on the design, for sure, but making it part of the standards would probably be a little bit of over-standardizing.
A
So
if
you
don't
have
any
more
questions,
I
can
wrap
up
the
comments
that
were
very
useful,
interesting
comments
made
by
Diego
and
Martin
I,
also
Richard,
so
I
think
Jensen
answer
to
this
question,
for
you
is
that
Martin
and
Martin
Diego
said
that,
in
terms
of
his
standardization,
it's
out
of
the
scope
of
Auto
as
data
aggregation
protocol,
but
in
terms
of
implementation,
we
can
decide
later.
A
So
all
in
all,
after
Richard
Martin
and
Diego's
comments,
I
think
like
we
should
focus
on
question
one
to
question
five
for
this
standardization
and
Tackle
question
six.
Only
in
the
implementation
afterward.
Thank
you,
everybody
for
listening,
I
think
like
we
are
reaching
the
end
of
time
for
this
presentation.
So
if
you
have
any
more
comments,
we
would
be
happy
to
have
you
in
the
queue
for
the
last
minute
and
also
we
will
appreciate
discussions
going
on
the
working
group
mailing
list.
F: Okay, so I'm going to go over the ALTO transport. Next slide, please. Okay, so here is an outline of the things I want to cover. First, I really want to show some of the context of what we are doing, so that people get a sense of where the modifications come from and where they should come from. Then, for people who are new (we do have a bunch of new people who are not familiar with the transport protocol), I'm going to give a very high-level, very quick, two-to-three-minute overview of...
F
What's
going
on
about
new
transport,
because
that's
imported
on
your
protocol
and
I'll
highlight
a
little
bit
like
like
a
meter
changes
from
last
ietf
and
then
to
the
current
version,
and
then
I
want
to
spend
all
the
majority
of
the
time
in
real
time
to
really
talk
about
discussions
and
very
many
issues
to
to
be
decided
to
really
focus
on
engineering
side
to
really
make
all
the
decisions,
and
so
on
next
slide.
Please
so,
first,
a
bunch
of
all
this
happened
and
slightly
initially
it's
a
little
bit
slow
and
and
now
we're
we're
moving.
F: ...quite actively. Hopefully we'll really try to maintain our momentum. So initially the document was adopted as a working group document on June 22nd, and then we did a bunch of reviews.
F: Thank you so much to the many reviewers: Luis, Sabine, and Jordi, and a lot of reviewers from the whole group gave a lot of reviews, and we finalized and submitted a new version on July 10th, basically right before the deadline. Now, the major issue, as you'll see shortly, which is why I want to give you a little pointer, would be:
F: There are some very excellent HTTP expert reviews, from Martin Thomson, from Spencer, and also from Mark Nottingham. They are very nice reviews, and if you're interested in reading some of the discussions or reviews, here are the links. There are also some ongoing discussions, delayed a little because of the hackathon, and now we're resuming; hopefully we can have all the discussions involved. So, next slide, please. Okay, let's review a little bit what we're doing in terms of the requirements.
F
So,
therefore,
we
can
all
refresh
our
memory,
because
we
are
we're
not
designing
a
new
transport
protocol
or
new
other
protocol
really
but
not
possible.
It's
a
design
is,
let's
really
make
sure
that's
impossible,
follow
all
existing
designs
and
by
now,
using
essentially
a
new
transform
mechanism,
HTTP
203.
So
therefore,
here
is
bunch
of
requirements
with
Central.
That's
a
constraining
our
design
space
and
we're
not
supposed
to
really
inventing
any
any
new
things
whenever
possible.
So
basically,
number
one
is
like
really
follow
the
base
protocol
possible.
So
therefore,
r0
is
request.
F: So R0 is to request ALTO resources as in the base protocol wherever possible; you can see that this puts some kind of fundamental constraint on the way we design the new transport for the ALTO protocol. Then the immediate base, the obvious one, is ALTO/SSE, Server-Sent Events, which is somewhat old technology; we borrowed it, and it initially started from Wendy Roome, who is a wonderful designer. Essentially there is the SSE protocol, and therefore some requirements are inherited from there, because we're going to replace SSE, since quite a few people mentioned that SSE is very important.
F
So,
therefore,
this
protocol
is
certainly
trying
to
replace
the
functions
of
SSE.
So,
therefore,
you
must
be
able
to
request
an
increment
updates.
You
must
be
able
to
stop
income
updates
and
you
must
really.
The
server
can
tell
the
client,
okay
I'm
starting
or
I'm
stopping.
So
if
I
keep
the
signaling
and
then
the
server
can
all
have
all
kind
of
Freedom
really
choosing
the
content
type,
which
you
can
say
later
or
the
major
implication,
and
also
why
I
will
read
the
design.
F
We're
not
going
to
revisit
this
point
and
they're
already
determined
in
the
early
design
with
obviously
88.95,
and
then
we
want
essentially
record
as
a
major
decision
and
people
can
discuss
after
that
would
be
I
want
to
refresh
memory
a
little
bit
for
people,
and
the
decision
is
we're
still
following
the
HTTP
request
for
API.
Actually,
that's
a
somehow.
This
requirement
actually
really
bites.
Why?
F
Because
you
can
really
use
HTTP
2
2
to
do
a
lot
of
things,
but
if
you
really
want
to
follow
the
rest
for
API
philosophy
and
activities
will
come
in
really
bites,
and
you
know
says
that,
with
the
concerned,
the
way
we
design
things
I
want
a
little
bit
with
these
costs.
That,
of
course,
allow
flexibility
of
deployment
next
slide,
please
to
refresh
the
memory
a
little
bit.
So
here
is
the
new
design
in
the
protocol,
and
you
have
not
read
the
document.
It
is
becoming
a
little
bit
longer
now.
F: So let me just summarize the structure very quickly, and then we can go through the discussions and so on. The basic design is very simple: you have the ALTO server over there, and you have all the ALTO information resources, quite a lot of them: all kinds of information resources, cost maps, network maps, endpoint properties, and path vectors.
F: The client over here uses a single HTTP/2 or HTTP/3 connection to really get updates from multiple update queues. So that's really the overall design architecture, borrowed a lot from the early design. That's the basic structure, and the design is that the client can pull items from an update queue and the server can also push from it, which are at least equivalent in terms of latency. You can see there are some things not finalized. Next slide, please.
F: So here is a set of all the major changes. I'm not going to bore you with all the details; they probably don't make much sense without context, and I edited a lot of things. But overall you can see that there are a lot of changes: basically the left-hand side is before IETF 113 and here are the changes, really separated into different sections, making everything as concrete and as detailed as possible. So, a lot of very detailed updates; those are a lot of specific details.
F: So now to the design issues we want to engage on, because that's the purpose of a face-to-face meeting: to really make all the major decisions. The discussion is triggered by very excellent comments and discussions from HTTP experts, of course; the working group reviews were already also addressed, and then we got the three excellent reviews, and we want to address their issues and discussion.
F: So the main point I want to get people to talk about is the following. If you worry that this discussion would involve a major restructuring of the whole document: no, actually it would not. The main thing is the main remaining issues from the reviews, which are excellent, essentially two issues: one is concurrency control, and one is the semantics of the transport. The good news is that they may represent generic concepts for HTTP/2- and HTTP/3-based designs, which we suddenly realized; for people who follow the email thread, it's not really ALTO-specific.
F: Somehow it really represents some kind of generic concept, but we want to really get the engineering done as fast as possible, and, as I mentioned, the remaining issues do not look like really major changes; mostly they are actually very minor changes, but very important ones. So we want the working group to have a very careful decision and discussion, and then we can finalize a lot of things and make very good progress. Next slide, please.
F: So now let's really talk about the first thing we want the working group to make a decision on, and, to present the issues clearly, let's very quickly review the basic concept; then we can really discuss. So what are the basic issues here? Remember, the main thing about the new transport for our protocol, the main reason for using HTTP/2 or HTTP/3 instead of SSE, is to do very low latency and very low overhead push of network information to the client. The current design in the current document essentially has two mechanisms.
F
So that's the client pull. Next slide, please — just to refresh your memory — and then here is the ability to do server push, which is also already designed. So let's see what it really is and then what issues it encounters. The approach is very simple for people who are familiar with HTTP/2: it uses the server PUSH_PROMISE. Basically, if I'm the server and I want to push new content, I'll send a PUSH_PROMISE for a promised stream, for example promised stream number 4.
F
You can see it on the right-hand side. The server pretends — because that is the HTTP/2 semantics we want to follow — that the client sent a request: it synthesizes a virtual GET request on behalf of the client, say a GET for sequence number 101, and then the server directly pushes the content to the client. So server push is conceptually an agent representing a potential client pull. Those are the basic concepts. Next slide, please.
F
Now, that's the interesting part. It turns out, after the very interesting discussion — in particular the discussion from Mark Nottingham in his review — the question is whether that is what you really want in terms of client pull. Given that, for HTTP/2, for example, the client overhead may not be as big, why don't you introduce a client long poll as a way to solve the issue? That was a very interesting discussion, raised very quickly in the comments.
F
So basically, here is one proposal for exactly how to get it to work. Of course, the email discussion did not go into technical details, so here is the engineering; let's see how it really works. The discussion point is very simple. On your right-hand side, essentially, there is a transport queue with sequence numbers 101, 102, 103.
F
But what if you want to do a long poll using HTTP/2? How to really do that is very simple: the client can issue a GET request to the server for the next sequence number beyond what has already been delivered. So you say: hey, I'm going to GET 104. Basically, you always have a hanging — essentially outstanding — request for the next one. This can be sent without any extra TCP handshake and overhead, because you stay inside the single connection, and with that you implement the whole client pull.
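As a rough illustration of the long-poll loop just described — a minimal sketch in which the `/updates/{seq}`-style endpoint and the `fetch_update` callable are hypothetical stand-ins, not the draft's actual API:

```python
# Minimal long-poll sketch. fetch_update(seq) is a hypothetical blocking
# call that stands in for an HTTP GET of update `seq`; it hangs until the
# server has produced that update, then returns its body.

def long_poll(fetch_update, start_seq, handle, max_updates):
    """Always keep one outstanding request for the next sequence number."""
    seq = start_seq
    for _ in range(max_updates):
        body = fetch_update(seq)   # blocks until update `seq` exists
        handle(seq, body)          # deliver to the application layer
        seq += 1                   # immediately re-poll for the next update
    return seq
```

Over HTTP/2 all of these GETs share the single existing connection, so there is no per-poll TCP or TLS handshake, which is the point made above.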
F
So that's one feature we are discussing, and we liked it a lot during our internal weekly meetings. Basically, the only change is very small: we would probably add a paragraph saying, okay, for client pull, the current model only allows you to request a sequence number that is already in the transport queue; now we would also allow you to request the current sequence number plus one. That essentially becomes the long poll. So that's design decision one.
F
So that's the one proposal; we want to adopt it, but we want to get feedback from the working group. It is a very simple, very elegant technical change, but let's see if we are missing something somewhere. Next slide, please. The next one, which is also very interesting, also came essentially out of the HTTP review, and we really liked it; but this one is slightly more conceptual. The change is very interesting, but we really need the working group and all our experts to discuss it. So here is the basic proposal about how to do it.
F
The proposal is very simple and very elegant. Remember that earlier we talked about the PUSH_PROMISE approach of using server push. In the current model, when the client wants update 101, the server sends a PUSH_PROMISE on, say, promised stream 4, using the GET method for sequence number 101. On the right-hand side, one possibility is simply to reverse it: the server sends a PUT carrying the content.
F
So that's one way of doing it. Why? Basically, there were some discussions, at least from the HTTP review, with some pushback on server push: interestingly, HTTP/2 and HTTP/3 implement PUSH_PROMISE themselves, but it is somehow considered an anti-pattern, and so there is some pushback on using push. But this alternative is more standard.
F
There's no PUSH_PROMISE; everything just becomes standard HTTP methods — verbs like PUT — and so it's very nice. An additional benefit is that it can get rid of one awkwardness which we have in the current design. If people read the document very carefully, we have one place with a specific requirement: in the current design, the client must not cancel the promise. Why?
F
If you push a promise, then, because of all the dependencies, canceling makes it very hard to really synchronize the state. But if we allow the server to PUT, the fact that the client cannot cancel stops being an awkward special case. Of course, the semantics now really change. The concept used to be that the ALTO server holds all the network state and essentially pushes the information into the client as a cache.
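A rough sketch of the reversed model being proposed, in which the client side accepts server-initiated PUTs and stores the pushed resources. Everything here — the `/resources/{id}` URL layout and the in-memory dict — is illustrative, not the draft's actual design:

```python
# Sketch of the "server PUT" model: the client side accepts PUT requests
# and stores the pushed resources, acting as a small writable data store.
# The URL layout (/resources/<id>) and the dict store are hypothetical.

class ClientStore:
    def __init__(self):
        self.resources = {}

    def handle_put(self, path, body):
        """Apply a server-initiated PUT; returns an HTTP-style status code."""
        if not path.startswith("/resources/"):
            return 404
        resource_id = path[len("/resources/"):]
        self.resources[resource_id] = body   # the client becomes stateful
        return 204                           # No Content: update accepted
```

This makes concrete the shift discussed next: the client is no longer a read-only cache but a store the server writes into.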
F
But now, if you do PUT, conceptually the client essentially becomes a stateful machine — it holds state. So people who prefer a purely stateless HTTP design may resent it. So far we like this one, but we want to have a lot of discussion on it. That's the second issue. Next slide, please. So at this point we have finished two issues; the next one also actually came out of the review, from all three reviewers — Mark Nottingham, Martin Thomson, and Spencer Dawkins.
F
So basically, that's the third issue. The discussion — as we interpreted it — is: can you make the binding to the HTTP/2 spec as thin as possible; ideally you would specify nothing. But here is a conceptual issue we encountered, and it is actually also a generic issue. Let me state the issue very clearly; we will give three proposals and see which one the working group likes, so we can make a decision. So what exactly is the issue?
F
You might move one IP address from location one to location two, and that change to the network map therefore also changes, for example, the costs in the cost map. So the resources form a dependency graph; overall, it is a DAG — a directed acyclic graph — of dependencies among all the resources being pushed or sent from the server to the client. In the most ideal case, the requirement would be: if you want this, ALTO should specify nothing.
F
In that case, you just send all the information: you ask the HTTP server to send it from the server to the client. That is the specify-nothing approach. Conceptually, you hand all of your resources to the HTTP/2 server — into its cache, through a socket API, or whatever — and it just sends all of them. So that's design number one: ALTO specifies nothing, and you map each resource onto an independent HTTP stream.
F
HTTP just does all the scheduling. The main issue is that we might lose a substantial performance gain of using HTTP/2. We gave all of our resources to the HTTP layer at the server, and the server doesn't understand anything about the application semantics; so the server's scheduler might say: let me send R4 first, because they are all just opaque objects in my buffers — so it sends R4 first, then R3, R2, R1. Then the application cannot process anything: it buffers R4, R2, R3, R1 until everything is received, and only then processes.
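The buffering problem just described can be shown with a toy simulation — a sketch, with hypothetical resource names, of a dependency-aware client that can only process a resource once everything it depends on has been processed:

```python
# Sketch of the issue with design one: if the transport delivers resources
# in arbitrary order (e.g. R4 first), a dependency-aware client must buffer
# each resource until all of its dependencies have been processed.
# `deps` maps resource -> set of resources it depends on.

def process_as_delivered(arrival_order, deps):
    buffered, processed = set(), []
    for r in arrival_order:
        buffered.add(r)
        # Drain: process any buffered resource whose deps are all processed.
        progress = True
        while progress:
            progress = False
            for b in sorted(buffered):
                if deps[b] <= set(processed):
                    processed.append(b)
                    buffered.remove(b)
                    progress = True
                    break
    return processed
```

In the worst case (delivery in reverse dependency order) nothing is processed until the last resource arrives, which is exactly the lost pipelining gain described above.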
F
So you lose the essential gain, because of the dependency. That's issue number one; we did not like that early design. Therefore we went to design two, but of course the issue there is that we are no longer specifying nothing — we might introduce some over-specification, and so on. So let's discuss the current design two a little bit. It also has an issue; it's not perfect, but we are doing engineering — we're not trying to get everything perfect.
F
So, for example, you are the ALTO server sending the information, asking HTTP/2 to transport it for you. You first submit the resources that have no dependency — R1, for example — and you wait until the transport says it is finished. Now the ones without remaining dependencies are R2 and R3, so you send R2 and R3, and then you wait until the transport says, okay, they are transported; and only then you send R4.
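The dependency-ordered submission of design two can be sketched as follows — a minimal illustration in which the hypothetical blocking `send` callable stands in for the HTTP transport layer:

```python
# Design-two sketch: submit resources to the transport in dependency order,
# waiting for each "generation" to finish before releasing its dependents.
# `deps` maps resource -> set of resources it depends on; `send` is a
# hypothetical blocking call standing in for the HTTP transport layer.

def send_in_dependency_order(deps, send):
    remaining = {r: set(d) for r, d in deps.items()}
    sent = []
    while remaining:
        # All resources whose dependencies have already been transported.
        ready = sorted(r for r, d in remaining.items() if not d)
        if not ready:
            raise ValueError("dependency cycle")  # the graph must be a DAG
        for r in ready:
            send(r)                  # blocks until the transport confirms
            del remaining[r]
        for d in remaining.values():
            d.difference_update(ready)
        sent.extend(ready)
    return sent
```

The wait between generations is what introduces the round-trip delays discussed next.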
F
So essentially you enforce that they are transported in the given order. Of course — and we mention this in the document — even if you send in a given order, the receiver side may have its own semantics: the HTTP client's buffering might still hold R1, R2, R3 and not deliver them to the application layer right away; delivery might still be delayed. But we think this is slightly better. Of course, this design also has another potential issue.
F
Remember, these are long-running connections, and the TCP window size might be big; the sliding window might be large enough to send R1, R2, R3, R4 in a single shot, like one big buffer. But by sending sequentially, we essentially introduce round-trip delays. There's no perfect solution. Okay, so those are designs one and two; design three is: I could indicate the dependencies.
F
That is, declare that those resources are really dependent. But that one probably involves modification of HTTP/3 semantics, or HTTP/2 semantics, so we are not targeting it. We want the working group to make a decision between one and two; we recommended two, but we'll see what the recommendation really is. Next slide, please. By the way, how am I doing on time — am I too slow? You have the timer. Oh.
H
F
Great, yeah, I do have time — wonderful. So the third issue is how exactly to specify the settings. That's also an issue we encountered, and it was wonderfully pointed out by Spencer as well, which is a very good point. So that's the issue we encountered; let's see if there is any suggestion or feedback from the working group.
F
Essentially, we want to control the specification a little bit. For example, right now in the current draft we specify that the client must inform the server that server push is enabled, so that the behavior is self-consistent. We also want to allow the server and the client to limit concurrency, so that you're not going to overwhelm a very slow ALTO client. But unfortunately — and this was very nicely pointed out by Spencer — this specification scheme changed in HTTP/3.
F
Basically, what happened is that the HTTP/2 setting for enabling push is essentially removed in HTTP/3; if you specify it, it's even an error. Instead, you really should use the MAX_PUSH_ID frame, which carries the information bounding the maximum push concurrency. So the scheme changed, and therefore HTTP/2 and HTTP/3 end up with two different designs. So how do we handle this issue?
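To make the version difference concrete, here is a small illustrative sketch. SETTINGS_ENABLE_PUSH and MAX_PUSH_ID are the real HTTP/2 and HTTP/3 identifiers; the function itself and its return shape are hypothetical:

```python
# Illustrative sketch of how a client bounds server push per HTTP version.
# SETTINGS_ENABLE_PUSH / MAX_PUSH_ID are the real HTTP/2 / HTTP/3 names;
# the function and its return shape are hypothetical.

def push_control(http_version, max_pushes):
    if http_version == 2:
        # HTTP/2: push is enabled via a setting; concurrency is bounded
        # by SETTINGS_MAX_CONCURRENT_STREAMS.
        return {"SETTINGS_ENABLE_PUSH": 1,
                "SETTINGS_MAX_CONCURRENT_STREAMS": max_pushes}
    if http_version == 3:
        # HTTP/3: the enable-push setting is gone (sending the HTTP/2 one
        # is an error); the client grants credit with MAX_PUSH_ID frames.
        return {"MAX_PUSH_ID": max_pushes - 1}  # allows push IDs 0..N-1
    raise ValueError("unsupported HTTP version")
```

The two branches are why a single normative text cannot name one mechanism for both versions, which is the choice put to the working group below.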
F
One option is to remove it from the model: we don't specify anything anymore; we just say, okay, this is operational — when you configure your servers, make sure you configure them all properly. Then the document stays independent of HTTP/2 or HTTP/3. The second option is based on requirements: you must enable push, you must, for example, control the concurrency; but how to really do it is left generic, and then it is handled at deployment. That's.
F
That is also relevant to OAM: when you deploy, you have to configure how to send that information in the correct way.
F
We would like feedback from the working group on that. So that's that one; next slide, please. The next one might be tricky, or maybe hard, or maybe easy. Here is what the question really is. Initially, all our examples were written in the HTTP/1.1 format — very standard, everyone is familiar with it. Then we got the feedback: okay, this document is really about HTTP/2; can you render the examples in the HTTP/2 format — for example, the right-hand-side format? In the middle, I think, for some versions we actually had both 1.1 and 2.
F
Then every example would have two formats, which became very messy; eventually we ended up writing all of them in the HTTP/2 style. We give some comments saying, okay, send this using promised stream number 4, and, for example, we have the SETTINGS examples, and also some pseudo-instructions to really show how the information should be sent. But wonderful guidance from Mark actually recommended writing the examples using 1.1.
F
So, should we just roll back all the examples to 1.1, but add some kind of pseudo-annotation to indicate the extra information, or not? We really need some guidance, because we don't want to go back and forth; we just want to finalize it. Next slide, please. So the last one is the media type: we really need to finalize the media type. At this point it is possible; this was also pointed out by Spencer.
F
So we can finalize this media type — we actually have an internal version already, but we wanted to wait a little bit until everything else is final. Our intent is to submit the media-type registration, which should be relatively quick; if people have any comments, we're more than happy to go back, but otherwise we're going to follow the traditional RFC 7285 format. Last slide, please — next slide.
E
Okay, Martin Duke, Google. So, one minor thing — can you go to discussion three, please? Sorry, it must be three. So I'm in favor of, you know, saying what you mean in documents. I don't know how long it takes you to find the slide, but if you'd like to say something, then say something; don't just ignore it and assume people do the right thing. Discussion three.
E
All right — no, no, okay, anyway, forget it. So write something down: you could just write it for HTTP/2 and HTTP/3, or you could have a generic requirement. I would have a requirement; I would not just say "configure your thing properly", because that doesn't mean anything to implementers.
E
Yeah, I mean, I would just say: make sure you can send enough pushes — assuming you're using push — and, you know, make sure you have enough streams; whatever the requirement is. Okay, sounds good. So, to zoom out — well, okay, actually there's one clarifying question first: are you now considering not using push at all?
F
Let me see — oh, next slide. Okay. So basically, client pull is already there, and anyone can support it, right? And server push we can replace with server PUT. Do we keep both? Of course, the problem is that these two are mutually exclusive alternatives, so you could say we don't keep the old one. So yes: basically we would change to server PUT and get rid of server push completely.
F
We would get rid of PUSH_PROMISE.
E
Right, okay — so this is interesting. You got the best possible reviewers for this.
E
So I want to zoom out on this draft. First of all, number one: the naive way — I think as Martin Thomson put it in his review — the naive way to do ALTO over HTTP/2 is to just take the requests and put them in streams; all the RFC 7285 stuff also goes in streams, and you're done, right? Including SSE, all that. There's nothing wrong with that, correct?
F
So that's related to design three, right? Basically — oh no, two, I think: some kind of basic ordering construct.
E
I mean — we have a deliverable, but I don't want to get tunnel vision on the deliverable, if the answer is that just not having an HTTP/2 document is the right answer. Because you can just take almost any of many other HTTP applications: you take one, you rip it out, you put it in.
E
You're right, and I think it's always important to benchmark whatever we're doing against doing that. What I — and this is a terrible thing to say, because it's potentially really changing the deliverable — but what I feel like happened, if I'm understanding correctly, is that the HTTP experts essentially redesigned SSE for us. Yes — in a version-agnostic way that maybe was better. Okay. And if that is the case — like, you know, I've been thinking about this for 10 minutes, so this is.
F
And now, of course, we got this feedback, which came very recently — only a few days ago. The PUT approach suggestion is a wonderful suggestion; we never thought of the possibility of reversing it. Of course, there are conceptual issues. Some people may object on philosophical grounds — no, that's not the right philosophy — even though the syntax is simple and equivalent; the philosophy is different. But we are doing engineering, so.
E
Serialize — okay, fair enough, all right. Conversely, you could do push, which I guess you have code for, and.
F
Client pull only, right? Because basically the models we have are client pull only, push, and long poll — and long poll we can definitely implement.
F
A very, very elegant, very minor change. The major conceptual debate is: do we really switch the model from the server pushing — essentially issuing virtual requests on behalf of the client — to the server just putting the content into the client?
E
Diving into this — I would like you to think about it. I would like you to free yourself of what you're chartered to do for a minute, and think about what we are trying to solve.
E
Is it best positioned in terms of HTTP versions, or as a new mode that replaces RFC 7285 or supplements RFC 7285? So that's one — worth some thinking. And two: given that you have some code — and I know there's not infinite software resources — certainly do some experimentation, yeah.
F
To put it conceptually: the downside is that on the client software side, like a library, it'll be slightly harder, because typically clients are not really ready to be written into; a client typically reads its cache — it's not a writable cache, mostly written only once. But with PUT, conceptually it becomes more like a data store. So, okay.
E
So that's my advice: do those two things. My thoughts are fragmented, so I'm not going to continue to share.
E
Offline, right after this, maybe we can.
F
Yeah, we should talk it through a little bit more — okay, thank you. And, of course, there is also discussion two — sorry, 1.2 here — which is also relevant to how much to specify. If you look at all the discussions, including the discussion about not specifying anything: you can make the case, because generically we are solving a generic HTTP/2 and HTTP/3 case, which is how to handle multiple dependencies and how to give instructions from the application to the transport layer.
E
Yeah, let's talk after and see what the right thing to do with this is. Like I said: if we need to change the charter, we could change the charter, sure — because I think we've got a lot of great insight we didn't have when we chartered this. Yeah — well, thank you.
J
So hi, I'm Jordi — welcome — and I'm going to be talking about the ALTO code bases and deployments. This is actually a presentation by several of us, so I'm going to be calling on more people, I guess, to join me.
J
Next, yeah. I'm going to, at a very high level, just recap the code base architecture, go very fast over project management and approach, and then jump into deployments. We're probably going to skip the hackathon part, because that was presented on Sunday — so you have the YouTube video if you want to look at that.
J
Yeah, so this is a summary of the charter — you all know about it — and the history of ALTO, starting from peer-to-peer, then CDNs, and recently moving into new applications. There are a lot of discussions — this is not really in the charter, but there are a lot of discussions — about using ALTO for edge computing, 5G, cellular, and others, with participation from multiple vendors and so on. So, next.
J
So this is the traditional RFC 7285 architecture — next, next, a couple more — so yeah, that's how we map it. Basically, there is a northbound API and a southbound API; the standard focuses primarily on the northbound API. There was a discussion about the southbound API and about not over-standardizing that part, but that's how we envision the deployment of the code base on the northbound.
J
You have applications across the board: from traditional CDNs, to science traffic — moving large-scale data sets globally with the science networks — and then the edge cloud applications: augmented reality, IoT, the metaverse. And then the southbound, again across the board: going from data centers all the way to the edge, and the backhaul. So, moving on — next.
You
know
this
is
pretty
much
what
we
discussed
in
the
last
ATF,
so
I'm
going
to
just
cruise
through
these
I
guess
very
fast,
but
we
have
an
approach
whereby
we
are
leveraging
the
hackathons
as
checkpoints
and
Muslims
to
progress.
So
you
saw
that
over
the
weekend
the
team
delivered
a
demo
of
integration
of
Alto
with
science
networks,
FPS
Russian
and
also
multipath,
quick
and
and
ptcp
by
Z
Yang
as
well
yeah.
J
So, moving on — next. If you're interested: we use scrum to manage the project, and we have the dashboards available. Everything is under GitHub, openly available, so anyone can check anytime. There's a dashboard for the IETF 114 hackathon and a general dashboard for the overall progress of OpenALTO. Moving on — next. That's just the dashboard; next — and now jumping into the topic of this conversation as well.
J
So, the ALTO deployments — next. The first part is what's been discussed in the past: the current implementations. There's a wiki available; there are the Comcast, Benocs, and Telefónica previous deployments, and then we have a bunch of new deployments, work in progress, that we're going to be discussing right now, in this conversation and ongoing.
J
So: the Pacific Research Platform in California; then CERN in Europe — really a global network; UCSD; the 5G deployment and the QUIC/MPTCP work; and then the edge cloud and the science networks in general. Moving on to the next.
J
All right, yes — that's a good question; let's answer it. Some of these are just work in progress; they may not actually be deployments yet, so we might be abusing the terminology a little bit, yeah.
F
Okay, yeah. So this is basically a deployment — or essentially an implementation — that we're working on with quite a lot of people; I think some people saw it, for people who follow the ALTO mailing list. This is really a collaboration between the ALTO working group and the CERN team. The CERN team actually includes several teams — it's rather complex. One of them is the FTS team; I think you probably saw the email.
F
On the ALTO mailing list we have the project lead of the FTS project — FTS is essentially the system to which all the data transfers at CERN are submitted; FTS does all the scheduling. And Mihai, also part of the FTS team — you probably saw his email as well — is in this case the operations manager. So that's one part; the other part, in terms of the people involved, would be:
F
Rucio, which sits on top of FTS. Rucio basically selects which sources to really transmit from, and FTS collects all the transmission requests and then tries to schedule them. For this one in particular, the major missing feature in FTS at CERN is that you cannot really control the bandwidth usage on every single link. So that is the main new use case of ALTO here: a typical ALTO use would be source selection or peer selection, and so on.
F
Here, for this particular use case — which they found to be very exciting for CERN — you basically submit a data transfer: you have a source, you have a destination, and you send the data. So we use ALTO to really map the source-destination pairs — for example, the DTNs, the data transfer nodes — onto the physical links, using ALTO's own cost map and path vector. Then the application on top, at the app layer, can really compute the link usage.
F
You aggregate all the transfers over the upper-layer links and then compute the total usage per link. They do the optimization control every 30 seconds or one minute; ALTO tells you how much you're using on every single physical link, and then you can essentially convert that into constraints. For example, you say: hey, I want to make sure I'm not using more than, say, 10 gig on this link; or, for these two links, or for this organization:
F
for these two experiments — CMS and, for example, whatever other experiment — the resource share should be one half each. Basically, FTS specifies the resource-control goals, and then we implemented the algorithm, essentially using the ALTO information mapping as a constraint, and we implemented a zeroth-order gradient algorithm, which is actually quite exciting; I think we did a demo of it. There is also a composition framework to go further.
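The per-link accounting step described above can be sketched as a toy example — mapping flows onto links via path data (as an ALTO path vector would provide) and checking a capacity constraint. The data shapes here are illustrative, not OpenALTO's actual code:

```python
# Toy sketch of the per-link accounting step: given each flow's path
# (as a list of link names, e.g. derived from an ALTO path-vector
# response) and its current rate, compute total usage per link and
# flag capacity violations. Shapes are illustrative, not OpenALTO code.

def link_usage(flow_paths, flow_rates):
    usage = {}
    for flow, path in flow_paths.items():
        for link in path:
            usage[link] = usage.get(link, 0.0) + flow_rates[flow]
    return usage

def violations(usage, capacities):
    """Links whose aggregate usage exceeds the configured cap (e.g. 10 Gbps)."""
    return sorted(l for l, u in usage.items()
                  if u > capacities.get(l, float("inf")))
```

A control loop of the kind described — run every 30 seconds or so — would recompute this and adjust flow rates until no link violates its cap.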
F
So it's not only zeroth order but also first order; the first-order work is actually George's work on bottleneck structures. Of course, right now the bottleneck-structure work is not in the scope of the charter, so we're mostly focusing first on really getting the information and control, which is quite a new and interesting use case. You probably saw the email on the ALTO mailing list — the goal is really to get this into production, and I think the meeting is with both Mihai and also Mario.
E
So is this informing a network management tool that's repositioning the network based on this data, or is this client-oriented?
F
I'm not sure what "client" means here. Basically, the workflow is the following: FTS is actually centralized control — logically centralized; of course, it can be replicated. So basically there is a centralized controller running; there might be multiple, and each one actually has a huge database collecting all the transfer requests from across all the data centers.
F
Initially, right now, we're really using some static data. We had a discussion with Qing, and I think we're also going to have a very quick meeting with Inder Monga, who is the director of ESnet — which is the US part of this infrastructure. That's why I asked the question: the initial suggestion from the team last week was, can I just give you a file, and then that's your data source — you just load it into the ALTO server and then serve it.
J
Okay, thanks Richard. Moving on to the next one — yeah, we can actually move on to the next one. So there's a paper on this deployment that's going to be published at SIGCOMM, and we'll put another demo in the next hackathon that will continue to progress that deployment.
K
Hello. This is what will be presented on Friday in the Media Operations working group: the integration of ALTO in the Telefónica network in order to expose the network's capabilities to the Telefónica CDN.
K
These were some tests in the lab, and we are now in the process of moving this into the production network, as I will show later. Essentially, what we identified is: we assign PIDs for the streamers — the CDN streamers — and we also have PIDs identifying where, let's say, the prefixes of the customers of the Telefónica network connect, in the different central offices, in the different PoPs. The request-routing logic of the Telefónica CDN takes into account a number of inputs.
K
You know: the streamer status, the load level, and so on — always from the perspective of the CDN. So the idea is to complement this with the perspective of the network — where the customers actually are — in such a way that we can determine, for example, the number of hops. Right now, with the Telefónica ALTO capabilities we are playing with, it is simply the number of hops from the streamers.
K
The idea would be to enrich that information with performance metrics and all the capabilities that are being developed in ALTO in general. So essentially the point is: with this information, the Telefónica CDN will consume the topological information, identify where the prefixes of the customer are, and then take decisions based not only on the CDN's streamer and server information, but also on the network information itself, to take the best decision at the time of delivering the content. Next, please.
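The selection logic just described can be sketched minimally — choosing a streamer by hop count from an ALTO-style cost map. The map layout is a simplified illustration in the spirit of RFC 7285 cost maps, not Telefónica's actual code:

```python
# Minimal sketch of hop-count-based streamer selection from an
# ALTO-style cost map: cost_map[streamer_pid][customer_pid] = hops.
# The map layout is a simplified illustration, not Telefonica's code.

def pick_streamer(cost_map, customer_pid, candidates):
    """Return the candidate streamer PID with the fewest hops to the customer."""
    reachable = [s for s in candidates
                 if customer_pid in cost_map.get(s, {})]
    if not reachable:
        return None
    return min(reachable, key=lambda s: cost_map[s][customer_pid])
```

In the fuller design described above, the CDN would combine this network-side cost with its own streamer load and status before deciding.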
K
So this is the process that we are following. We started really playing with ALTO in the technology lab — essentially very basic setups, which I will detail later as well: a very simplistic network, just for understanding the capabilities and the feasibility and viability of this approach. Then we moved to the pre-production lab, facing a real configuration of the network and its complexities, which I will also detail. And the point where we are now is just prior to introducing it into the production network.
K
It's important to note that in the pre-production lab we are playing with the real configurations, but we don't have good insight about the scalability. Once we move into the production level, we will get more stats and more information that we would like to share with all of you, just so you can see what a real deployment is. Just for you to understand what we are talking about in a real deployment:
K
the idea would be for ALTO to handle around 3,000 routers, or something like that, in full production; in pre-production we are talking about 40 routers, something like that. So the next step is basically to understand the scalability and all this stuff. Next, please. So this is the very last slide. I would just like to comment on the technical problems that we faced — the different engineering fights that we had with this deployment, starting with the technology-lab tests.
K
In the technology lab we faced a mono-vendor router scenario with virtualized routers, so somehow a lab environment: for sure a simplistic network configuration, with just a single IGP (OSPF), in this case a single autonomous system, and simple metrics like hop count. And ALTO was connected essentially to some of the routers acting as route reflectors, so a very constrained environment. Then we moved to the pre-production environment, we migrated the BGP module from ODL, and we started finding issues in BGP-LS.
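For readers who want to picture the data involved, here is a minimal, self-contained sketch of an RFC 7285-style cost map using the hop-count routing cost described for the lab setup; the PID names and cost values are hypothetical illustrations, not taken from the deployment:

```python
# Sketch of an RFC 7285-style ALTO cost map whose "routingcost" metric is
# hop count, as in the lab setup. PIDs and values are hypothetical.
cost_map = {
    "meta": {
        "cost-type": {"cost-mode": "numerical", "cost-metric": "routingcost"}
    },
    "cost-map": {
        "PID1": {"PID1": 0, "PID2": 1, "PID3": 2},
        "PID2": {"PID1": 1, "PID2": 0, "PID3": 1},
    },
}

def cheapest_destination(cost_map: dict, src: str, candidates: list) -> str:
    """Return the candidate PID with the lowest cost from src."""
    costs = cost_map["cost-map"][src]
    return min(candidates, key=lambda pid: costs[pid])

print(cheapest_destination(cost_map, "PID1", ["PID2", "PID3"]))  # PID2
```

A CDN-style client would issue this selection over the costs returned by the server instead of a hard-coded dictionary.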
K
We
write
a
number
of
tickets
that
have
been
already
solved,
especially
for
bepls,
so
I
mentioned
later.
Okay,
in
this
pre-production
environment,
we
started
playing
with
multivendor
environments,
multivender
our
routers
physical
routers.
You
know
so
moving
from
the
vehicle
to
the
physical,
dedicated
Alto
server,
and
also
facing
the
complexity
of
the
real
Network
for
the
particular
affiliate
of
telefonica,
where
we
are
doing
this.
This
test
there
is
a
this
is
a
network
that
has
a
multiple
private
autonomous
systems
and
also
for
sure
public
thermal
system.
K
Okay, so the very last step is the integration with the production network. We are now in the process of being adapted to the production processes and rules: the security, the addressing, the internal routing of the network, all this stuff that you can imagine, and hardening all the environment in order to protect it from external attacks.
H
Just one question for you on what you have presented so far. So, yes, thank you for sharing this interesting data, which really shows that there is something concrete and really happening with the protocol. I really invite the others who have experience with the protocol to share it as well. So I understand that you had, I would say, some issues with the integration and so on, which is, I would say, as expected.
K
As usual, the most problematic thing was to parse the information from the protocols, OSPF and IS-IS. There was no special issue with the fact of building a network map; that and the cost maps were more or less straightforward. It was basically the parsing of the protocols and trying to expose, let's say, the information of the network to be digested by ALTO. Once this was solved, the processing in ALTO was straightforward.
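The "straightforward" map-building step described above can be pictured as follows; the one-PID-per-router grouping policy is a hypothetical assumption for illustration, not the policy used in the Telefonica deployment:

```python
# Sketch of the easy half of the pipeline the speaker describes: once
# prefixes have been extracted from the IGP (the hard part), grouping them
# into an RFC 7285 network map is simple. The grouping is hypothetical.
def build_network_map(prefixes_by_router: dict) -> dict:
    """Map each router's advertised prefixes into one PID per router."""
    network_map = {}
    for i, (router, prefixes) in enumerate(sorted(prefixes_by_router.items()), 1):
        network_map[f"pid{i}"] = {"ipv4": sorted(prefixes)}
    return network_map

prefixes = {
    "r1": ["10.0.1.0/24", "10.0.2.0/24"],
    "r2": ["10.0.3.0/24"],
}
print(build_network_map(prefixes))
```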
H
Yeah, and from the performance perspective, for example: how is ALTO behaving, how are the requests, and is there any, I would say, shortcoming in the way the operation of the protocol is currently designed? Is there something, for example an enhancement, that we can consider in the future? I think this is actually important; this is for me the key part that we need to focus on, and if you have input we would really appreciate it.
K
No, nothing special for now. Take into account that we are in pre-production, moving toward production. I'm sure that in production we will face more complexities, because the network will be huge. By now we are dealing with 30 to 40 routers, so there is nothing visible regarding limitations or considerations for ALTO.
K
Yeah, I think the next step would be to include performance metrics on top of the picture: not only taking into account the hops, not only taking into account the IGP metrics, but also taking into account the situation of the network, in such a way that the selection of the streamer could be richer, in the sense of considering not only the path, I mean the length of the path, but also the characteristics of the path. So this is the next step. We have a number of use cases here in mind; probably it is too soon to be talking about them. So we are concentrating on the deployment, but the idea would be to enrich the decisions at the end.
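The richer selection described here might look something like the following sketch, where hop count is blended with a performance metric such as one-way delay; the weighting scheme and candidate values are purely illustrative assumptions, not part of the deployment:

```python
# Hypothetical sketch of "richer" CDN streamer selection: rank candidates
# by a weighted blend of path length (hops) and a path characteristic
# (delay in ms), instead of hops alone. Weights and values are assumptions.
def rank_streamers(candidates: dict, w_hops: float = 1.0, w_delay: float = 0.1) -> list:
    """Order streamers by a weighted blend of hop count and delay (ms)."""
    score = lambda m: w_hops * m["hops"] + w_delay * m["delay_ms"]
    return sorted(candidates, key=lambda name: score(candidates[name]))

candidates = {
    "streamer-a": {"hops": 2, "delay_ms": 40},
    "streamer-b": {"hops": 3, "delay_ms": 5},
}
# With delay in the mix, the longer but much faster path wins.
print(rank_streamers(candidates))
```

Setting `w_delay=0.0` recovers today's hop-count-only behaviour, which is why adding the metric changes the chosen streamer.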
H
Okay, and one last question on this one, about the bootstrapping. I would say, how are you automating the way the various engines are currently running? Do you have...
K
For now it is something manual. I mean, we are in a phase of, how to say, being sure that this is the way to follow. But the next step will be to integrate this with the logic of the CDN, so automating all of this from the CDN perspective: essentially so that we can retrieve the information automatically and frequently. As for the network side, no, we don't expect special things.
J
Right, so I was going to talk a little bit about the PRP deployment. Okay, so just real quick, the next one.
J
Yeah, just to put things a little bit in perspective: I really should talk about CERN, which is the problem of the large-scale data transfers from CERN, globally, to scientists and research labs around the world.
J
But in general the idea is that there is interest in expanding through science networks, so we're also collaborating with ESnet, also in the PRP deployment, and, as Richard mentioned, we're meeting with Inder Monga, executive director of ESnet, to discuss these deployments. This is a family of science networks whose architecture is shared, so if we deploy ALTO in one,
J
it can be deployed in the rest of the networks as well. So that's the approach and the potential upside here. So, next one. Just real quick on this: what these PRP networks are also doing is extending to the edge. They are actually extending the networks into the 5G domain and building the edge cloud as well, whether it's the Department of Energy's ESnet, you know, building sensors, collecting data through wireless, and then building the 5G edge cloud. An example is the PRP,
J
that's building a 5G edge cloud at UCSD and other universities, and NYU and other universities in the US as well.
J
They're looking at applications like the Holodeck, vehicular networks and, in general, you know, the metaverse or augmented reality. And so in this project, you know, there's an ecosystem of collaborators coming in: Caltech, ESnet, PRP and others are actually involved here as well. So, moving on to the next one, yeah. And this is about closing the loop here, building what we call sort of the edge loop. So from a network,
J
you have first visibility, then you apply intelligence, and then control back to the network, and orchestration as well. We believe that ALTO is, you know, suitable for visibility, and that's the deployment here: the PRP is basically building this architecture and has ALTO
J
to enable this visibility. In the intelligence, we're looking at putting in bottleneck structure analysis, basically to be able to make optimized decisions, whether it's routing or rate limiting or service placement. And then controllability could include technologies like segment routing in order to help steer the flows. So moving on to the next one, yeah, and so that's just that, then.
J
Next one. I don't know if we have... yeah, for ZN to make some comments, yeah.
J
D
Okay, I will introduce my project. We know that in the default mode an SDN controller only selects one path each time. For MPQUIC and MPTCP, there are lots of paths not working in the SDN. So my idea is to pick the paths working with ALTO, so that the selection is correct.
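The idea sketched here, ALTO-informed multipath selection instead of a single default path, could look roughly like this; path names and costs are illustrative assumptions, not the experiment's actual values:

```python
# Hypothetical sketch of the experiment's idea: let the controller pick the
# N lowest-cost paths (per ALTO costs) for MPTCP/MPQUIC subflows, instead
# of one default path. Path names and costs are illustrative.
def select_subflow_paths(path_costs: dict, n: int) -> list:
    """Return the n lowest-cost paths for use as multipath subflows."""
    return sorted(path_costs, key=path_costs.get)[:n]

path_costs = {"path-1": 10, "path-2": 3, "path-3": 7}
print(select_subflow_paths(path_costs, 2))  # ['path-2', 'path-3']
```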
D
Next page, yeah. So the result is that the throughput of MPTCP and MPQUIC with ALTO is higher than without it, especially in a lossy network; we can see from the picture that when the loss rate is higher, the throughput gain is more defined. Thank you.
J
And if you have ideas, we can take this offline, I guess. There is a YouTube recording you can access, and the slide decks from yesterday, actually from Sunday. So I think this concludes this talk. Thanks.
K
Thank you. Yes, in this presentation I will cover three drafts. The overall working idea is to consider ALTO as a network exposure function for IETF technologies, and we will cover the drafts that you can see listed there. I will do this on behalf of my co-authors, Danny, Christian, Sabine and Anshu Fang. Next, please.
K
So, to present the relationship among these drafts: the working one will be the one entitled "IETF Network Exposure Function". The idea is to consider ALTO as playing this role of the exposure function that is able to expose capabilities of the network to applications, external or internal, in such a way that they can consume what the network can provide, whatever kind of information the network can provide. Here we have two examples, which are the other two drafts involved.
K
There could be many others, right. One example could be, for instance, to determine the most convenient compute environment for instantiating any kind of application function or whatever; this is covered by the service edge draft, essentially thinking of exposing compute capabilities: CPU, RAM, storage and so on. The other draft, the one entitled "service functions", follows the same approach but with the idea of exposing where service functions are, and also the characteristics of reaching those service functions. Here we can consider isolated service functions, or we could even consider service function chains.
K
In such a way, the objective would be to characterize the path to reach these functions, or to compose, let's say, the service chain among those functions. In the order of presentation, I will start with the NEF working document, then follow with the service edge; these two have been already presented in the past, and I will focus a little bit more later on the latest one,
K
the service functions draft, which is the new one to be presented here. So, on the IETF Network Exposure Function draft: the current version is 01. The intention again is to align this with the current industry trend of network-application integration, and there are similar initiatives in other SDOs. Here we can consider, for instance, the 3GPP Network Exposure Function, which is one inspiration for this work, and also the MEC APIs, the O-RAN RIC, and the Linux Foundation CAMARA initiative,
K
which is very recent, with the idea also of offering APIs to request and consume capabilities of the network, and so on. So, again, the problem statement is very easy: you know the networks are becoming consumable by applications and services, so let's consider ALTO as the entry point for that exposure of capabilities with respect to IETF network technologies. The final objective is that the applications can make informed decisions, taking into account the network information; so no blind decisions, as happens today.
K
Today the applications are inferring or guessing characteristics of the network through algorithms, so the idea would be: okay, let's ask the network, and the network will provide this information in such a way that I can make an informed decision and then improve the quality of experience, and so on. This draft, as I said before, is a kind of overarching draft, trying to lay out what would be the different kinds of capabilities to be exposed.
K
We are considering for now the existing capabilities, for sure: the topology and the cost map, which are somehow the initial capabilities of ALTO, but also the performance metrics, the semantic view that can be provided by the path vector, and so on; and we also include in this proposal some other proposals, such as the service edge.
K
One addition, specifically, has been the content about service functions, detailed in the draft that I will comment on later, and also security aspects that were not present in the previous version. So, next one, please. This is the draft on the service edge; the current version is 05. This draft is related to the computing-aware networking discussion that is happening in the routing working group area, but in the case of ALTO, what ALTO can provide here is an off-path solution.
K
The routing working group area is working on an on-path solution; here we are addressing the same problem space, but with a different perspective, with a different approach. It is also clear that there are multiple and heterogeneous data centers being deployed across the networks, so there are compute capabilities, in terms of CPU, storage, bandwidth, memory and so on, in different points of the network. So the objective here would be to expose all of this together with the topological information of the network, and in the future with the performance metrics and so on and so forth.
K
The purpose of this would be that the applications that consume this information can instantiate the application or the service with informed knowledge about the resources that are available, but also about the characteristics of the path to reach those compute capabilities across the network.
K
So the solution is to leverage ALTO for aggregating all that information and for exposing it to the external applications, once more in such a way that the application can make an informed decision: no guessing about the characteristics of the network, but just collecting the real information from the network. As for the updates that have been provided in version 05: we described potential extensions for the path vector and unified properties.
K
So the idea is to leverage the existing work, and to maintain that existing work, to cover what could be the information related to the computing environments. We also provide example queries for a filtered entity property map, providing some examples, and so on. So, moving on, and I will detail this last draft a little bit more: this is about service functions. The problem statement, essentially, is that nowadays services are formed by a concatenation of service functions, that is, service function chains.
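The filtered entity property map queries mentioned above could be pictured like this; the compute property names (`cpu`, `ram-gb`, `storage-gb`) and PID entities are hypothetical stand-ins for illustration, not the identifiers defined in the draft:

```python
# Hedged sketch of a filtered entity property query for compute exposure:
# a client asks only for the properties it cares about, for the entities it
# cares about. Entity and property names are hypothetical.
property_map = {
    "pid-edge-1": {"cpu": 8, "ram-gb": 32, "storage-gb": 500},
    "pid-edge-2": {"cpu": 16, "ram-gb": 64, "storage-gb": 200},
}

def filtered_property_map(entities: list, properties: list) -> dict:
    """Return only the requested properties of the requested entities."""
    return {
        e: {p: property_map[e][p] for p in properties if p in property_map[e]}
        for e in entities if e in property_map
    }

print(filtered_property_map(["pid-edge-2"], ["cpu", "ram-gb"]))
```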
K
So we have a connected graph of service functions, but for now we don't have combined information between the functions, the service chain, plus the characteristics of the links that connect these different service functions, so the characteristics of the chain. There is typically more than one instance of a service function in the network, so there is a problem of which service function instance to select, individually, or to form a chain, to form a graph.
K
So, in the end, the purpose of all of this is to help with the service realization by selecting the most suitable service instance or instances, depending on whether we are addressing just one single function or a chain of service functions. In the end, the purpose is to characterize the path to reach a particular service function instance, or a type of service function.
K
We could have several instances of the same type, so we can try to determine which would offer the better characteristics in the network for reaching one of the instances, and also to characterize the path among a sequence of service functions; again, I mean, to characterize the path in a service function chain. Next, please.
K
So, the kinds of information of interest that a client could consume would be, well, here at least I will comment on some of them: the path characteristics from an endpoint to any instance of a service function type; the same, but from an endpoint to a specific instance of a service function type; the characteristics for a given chain; or, at the time of composing a service, from one service function to another function, and so on. I will not spend more time on this.
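One of the query types listed above, characterizing paths from an endpoint through candidate instances of each service function type, can be sketched as a small selection problem; the instance names and pairwise costs are assumptions for illustration:

```python
# Illustrative sketch: given pairwise path costs (as an ALTO client might
# assemble them), pick one instance per service function type so the total
# chain cost is minimal. Instance names and costs are assumptions.
import itertools

def best_chain(cost, stages):
    """Pick one instance per stage minimizing the summed pairwise cost."""
    best = None
    for combo in itertools.product(*stages):
        hops = zip(("client",) + combo, combo)  # client -> sf1 -> sf2 ...
        total = sum(cost[(a, b)] for a, b in hops)
        if best is None or total < best[1]:
            best = (combo, total)
    return best

cost = {
    ("client", "fw1"): 1, ("client", "fw2"): 3,
    ("fw1", "nat1"): 5, ("fw2", "nat1"): 1,
}
# The nearer firewall is not the better chain member once the next hop counts.
print(best_chain(cost, [["fw1", "fw2"], ["nat1"]]))  # (('fw2', 'nat1'), 4)
```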
K
So there are several cases that could be favored, let's say, by complementing the information with the network information in place. Next, please. The kind of solution would be similar; the picture is similar to the case of the service edge: essentially to combine and to complement, to use the ALTO server to integrate the information on the service functions, and to integrate all of this with the topological information retrieved via BGP-LS and so on, as we saw before.
K
So we are proposing in this draft a number of extensions: extensions to enable the ALTO clients to request the information of interest, the kind of information that we showed in the previous slide, and also extensions to collect and combine both the service function and the network information all together, for enriching the decision on the service function chain. We do foresee extensions that could involve augmentations of the path vector and the unified properties drafts.
K
So we will leverage existing work for this. And just to remark that there is a clear link with related activities in the IETF, like the service function chaining activity, the service programming with segment routing in the SPRING working group, or the service function topology work; we clearly see this link with this work in the IETF. But there is also a clear link with activities outside the IETF, as would be the one about virtual network function forwarding graphs in ETSI NFV. Next, please. So, just for finishing, the next steps.
K
Clearly, our idea is to work on these different aspects for the future ALTO chartering; we have made this clear, this is for future chartering, so we want to provide all this work for directing, let's say, the future in this direction. Our idea is also to complement the other work in other working groups of the IETF, helping with the automation of the service functions, the instantiation of the instances of the service functions, and so on.
K
So we see this as a complement to what is already in other working groups, and our idea is to prepare updated versions of all the documents for IETF 115. For sure, any comments or feedback are more than welcome. Thank you.
E
I was very excited to see on one of those slides that you think there's something worth doing without any extensions; I think that was actually going to be my question, having reviewed this draft before.
E
Because to me, I think the hump is just to get adoption of this thing, of ALTO, and, you know, deployed, with servers really servicing clients out there, and then we can talk about extensions.
E
You know, at this point, doing extensions in the hope that somebody will adopt the use case is where we've failed a bunch of times, and I think this is very promising in that respect. So good luck to you in getting this out there, and I think if this does get out there and deployed, then any extensions that that use case might need would, in my mind, move very close to the front of the line for re-chartering. Thanks.
Thank
you.
G
From my memory of what the guys at CERN, and in general in the academic networks, are doing, this SFC case will be of interest to them as well, because precisely one of the things that they were interested in was to do some kind of massaging of the data before it arrives at its final destination. So probably this could be something that we should offer to them as well. It's simply something that came to my mind, nothing more.
I
Motivation: with the rapid popularization and application of cloud computing, artificial intelligence and other technologies, the total amount of data has increased explosively, and the demand for data storage, computing and transmission has increased significantly. Therefore, this data needs to be processed.
I
Combining the computing-related network with the optical network realizes the capability linkage between them. Next one: use case one, network resource requirement. The edge network management layer receives information from the client, obtains the current computing user information, and provides it to the cloud management platform for network resources.
I
The cloud management platform obtains global information about the applications and the networks. The architecture of the computing power optical network is shown in the following figure: it includes the cloud management platform, with computing power
I
orchestration, computing resource and computing power scheduling; cloud network management; the edge management platform, with edge computing power orchestration, computing resource, and computing power routing and forwarding; and edge network management. Next, I will introduce the functions of each component module. In the architecture, the edge management platform receives application requirements from users.
I
Its functions include: report the network information to the computing power scheduling layer; inform the computing power servers and perceive the computing power status through the computing power scheduling layer; generate the computing power routes and monitor the routes in real time; and send the generated computing power arrangement information to cloud network management. Cloud network management then distributes the received computing power arrangement information to each network management.
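The scheduling step described, combining reported network information with the perceived computing power status to place a workload, might be sketched like this; site names, loads and weights are illustrative assumptions, not part of the presented architecture:

```python
# Hypothetical sketch of computing power scheduling: pick the edge site
# minimizing a weighted sum of reported network cost and perceived compute
# load. Sites, loads, and weights are illustrative assumptions.
def schedule_site(sites: dict, w_net: float = 1.0, w_load: float = 5.0) -> str:
    """Choose the site minimizing weighted network cost plus compute load."""
    return min(
        sites,
        key=lambda s: w_net * sites[s]["net_cost"] + w_load * sites[s]["load"],
    )

sites = {
    "edge-1": {"net_cost": 2, "load": 0.9},
    "edge-2": {"net_cost": 4, "load": 0.1},
}
# The nearer site loses because it is heavily loaded.
print(schedule_site(sites))  # edge-2
```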
B
For this topic, let's wrap up today's discussion. This time we could attend in person, so it's a good time for us to get together; hopefully we can have more people attending in person at the London meeting. So, see you at the London meeting. And Martin, do you have anything to add?