From YouTube: IETF115-ALTO-20221111-0930
Description
ALTO meeting session at IETF115
2022/11/11 0930
https://datatracker.ietf.org/meeting/115/proceedings/
A: So, this is the Note Well; it applies. You can probably read it, along with the code of conduct. Please be nice to your colleagues in the room. Also, please make sure you wear a mask when you are not speaking. There is an agenda for today's discussion, a pretty tight agenda, so we will focus on the chartered items.
A: Document update: we have two new RFCs that got published since the last IETF meeting. One is the path vector extension; the second is the cost mode extension. We still have the performance metrics draft, which has a dependency on a TCPM working group draft, and we are working with them on that. Then, in the ALTO working group, we have ALTO O&M; this will be presented in today's meeting. The second is ALTO new transport.
A: Based on the discussion at the last IETF meeting, we are considering splitting it into four components, so we will give an update on where we are and how we can move forward.
B: About ALTO transport: we are still struggling with how to implement ALTO on top of HTTP/2 and HTTP/3. We have a new version of the ALTO transport draft, version 03, and it still includes some information regarding the classical transport, but also regarding the pull mechanism and the feedback that we have got. Therefore I will start, on the first slide, with the proposal to put this into a separate document, to split, let's say, the current document into two documents, additional documents. Next slide, please. I just wanted to recap a little bit what we had discussed, where the position is, and the feedback we have got so far. Next slide, please. We have a couple more slides. We also got feedback from HTTP experts: Martin Thomson, Nalini Elkins, Mark Nottingham, and Martin Duke, and we more or less incorporated it; the idea to split the document into separate parts is also partly a result of that feedback.
B: Next slide, please. Also, just as a reminder: there was a discussion at IETF 114 regarding the client pull and the long-poll mechanism. We saw that HTTP/1.x relies on strict sequence numbering, and, as you can see on the right-hand side, this makes it a little bit inflexible regarding server push; the idea now is to describe the push promise mechanism in a separate document. So we need to split a little bit. The idea is to have one draft that talks about the general issues, and then an additional draft where we describe the push mechanism; I also have a slide later that describes this. It might also be that we will discuss a server PUT, but at the moment this is out of scope; we are not focusing so much on it.
B: If you have a transport queue mechanism, you rely on strict sequencing of the numbers and strict scheduling, and this is a little bit inefficient. The idea is that the server can also push multiple pieces of information, so that we take advantage of the features that HTTP/2 and HTTP/3 have. That is the basic idea, and that was also discussed at the last IETF meeting.
B: Yeah, so the discussion must go on. Can you move on, please? There was also a discussion last time regarding the HTTP/2 control knobs, and the point that QUIC, or HTTP/3, is a little bit different. The idea now is to remove the places where we specify or discuss these operations. That is also something that we have discussed so far, and I think the idea was accepted.
B: These slides I just want to go over quickly; you can skip this slide. Now we come to the major changes and the ideas that we have, where we also need feedback from this group, as I said. Regarding the major structure of the transport document, the idea is to change this: to split the current document into separate documents. At the moment the idea is to have three documents.
B: The first document specifies the common model, so the support for the incremental updates and the queue, and is valid, let's say, for all HTTP connections. We have made a proposal for a second document, and this one focuses especially on the client pull and long-poll mechanisms.
B: If we agree here in this group on some parts of the first document, we have prepared, let's say, a draft version, and if the working group accepts that, we will move these sections to a separate draft. What is already on the table, what we have presented so far and already uploaded, is the server push mechanism. But we also had an internal discussion regarding the move of the pull section into a separate document.
B: We were also wondering where to place this discussion. The idea is then to have multiple documents: one describing the overall mechanism, and then the various mechanisms of HTTP, let's say a pull or push mechanism, each described in a separate document; so the client pull, and push. And if we also want to describe the PUT mechanism, we would need to do this in a separate document as well. At the moment we have no clear idea how to handle this. Then you can move to the next slide pretty fast.
B: Here you can see the structure visualized, the idea that we had. In document one we have, let's say, the incremental updates, the queue, create, read, etc. Then we need to decide whether we want to take the long-poll mechanism out of the first document and describe it in a separate document. This is something that at the moment is officially included in the current transport document; we need to decide whether to take it out.
B: Then there is a new proposal regarding the push and the client side, which relates especially to HTTP/2 and HTTP/3, so this would be some kind of version-specific document. Out of scope at the moment is a server PUT, but that could also become a document; let's say, a topic for later discussion. Okay, here is a short recollection of what is placed in document one.
B: In document one you have, for example, the definition of the transport information, and the incremental updates are described there, with the ALTO server as the master of the updates. We have, as mentioned before, the CRUD operations. In this document we are tied to a strict sequencing: only the ALTO server can write, and the client can issue commands sequentially or in parallel. That is the basic material that we have discussed so far.
B: The sequence number is, let's say, up to 64 bits, as described here, and the structure is pretty similar to what we have discussed so far. Then let's move to the next slide: the proposal for the pull document. Client read of updates is a very simple design; only the GET method is used here, and it could also be used for caching and content distribution at scale between servers and clients.
B: It also has long-poll support and transfer scheduling, and the pull design allows the client to issue concurrent pull requests, which optimizes the whole design, the transfer processing, and the requirements. They are specified more or less as discussed.
B: We specify attributes for transport control here, but this is more or less transparent to the ALTO design. Next slide, please. The server push: that is something that is a little bit new or different here, since HTTP/2 and HTTP/3 both use the push promise mechanism, and this was a main advantage. The idea is to put this into a separate document, and this would also give, from the transport perspective, one additional advantage. The idea here is to specify this mechanism, which was also defined at an early stage in the original draft; the plan is to take it out and have a separate document that describes how it can be used. That is more or less what the current proposal is so far.
B: So I don't know if we can say the documents are really doing the same thing; let's say they do the same thing a little bit differently. The push promise mechanism is then described only in this one document, valid for HTTP/2 and HTTP/3, and the basic document gives, let's say, more or less a general overview and the structure of the whole concept. Yeah, that was the idea.
G: Yeah, sorry, my mind is jumping, and actually I waited, okay, but I am trying to answer a question. The dependency: documents 2 and 3 are independent; they both depend on document 1.
F
Yeah
I
I
would
I'm
strongly
disinclined
to
publish
two
different
solutions
to
this
problem:
okay
I,
if,
if
like,
unless,
unless
this
like
clearly
unless
one,
unless
they
address
different
use
cases
where,
like
you
know
use
case,
a
like
two-
is
much
better
because
of
these
certain
metrics
and
like
for
use
case
b
doc.
Three
is
much
better
and
like
that
would
have
to
be
pretty
strongly
motivated.
I
think
we
already
have
one
solution
to
this,
which
is
SSC
I.
G
Ssc
has
much
a
larger
overhead.
The
income,
actually
quite
a
complex
and
I,
could
2003
are
we
simpler,
I
think
essential.
Three
essentially
is
complete
replacement
of
SSE
using
a
much
more
modern
design.
Of
course,
if
you
really
push
SSC,
for
example,
and
and
really
for
example,
eventually
it's
really
built
up
like
you
said,
and
it
should
be
built
on
top
of
independent.
G
If
you
build
on
top
of
HTTP
three,
for
example,
then
actually
you
even
can
even
gain
performance
over
SSE,
because
your
fundamental
and
the
line,
supposedly
it's
a
single
using
essential,
total
serialization.
So
therefore
you
serialize
everything
and
about
here,
actually
you
can
even
have
concurrency.
Even
you
can
push
all
all
the
updates
concurrently.
So
therefore
you
can
get
latency
in
the
worst
case.
You
have
a
lot
a
lot
of
number
of
subscriptions
and
you
want
to
push
all
updates.
Docker
three
would
even
compete,
beat
the
performance
of
SSE
great.
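As background for this comparison: SSE, which RFC 8895 uses for ALTO incremental updates, delivers all events over one connection in one total order. A minimal parser for the SSE wire format (per the WHATWG specification; the event names here are invented) makes that serialization visible, since events can only be consumed in the order the single stream carries them:

```python
def parse_sse(stream_text):
    """Parse a Server-Sent Events text stream into (event_type, data) pairs.

    All events share one connection, so they arrive totally ordered; this is
    the serialization that HTTP/2 and HTTP/3 streams can avoid."""
    events, data, event_type = [], [], "message"
    for line in stream_text.split("\n"):
        if line.startswith("event:"):
            event_type = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and data:
            # a blank line dispatches the accumulated event
            events.append((event_type, "\n".join(data)))
            data, event_type = [], "message"
    return events
```

With HTTP/2 or HTTP/3, each update could instead travel on its own stream, so a large update for one resource does not block small updates for others.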
F: All right, that's what I thought. My understanding is that push is not much beloved by the HTTP community, and maybe not supported that well. So, without having any data whatsoever, I would be inclined towards the document 2 approach rather than the document 3 approach; but, you know, if you guys have data that document 3 is better, that's great. And we are in the problem space of server updates, right? In a perfect world we would have either one; I mean, whether you want to combine documents 1 and 2, I am not going to say, since I haven't looked at whether that works editorially or not, but I would certainly like to do either 2 or 3, and in a perfect world consider obsoleting SSE.
F
Okay
like
an
ideal
world
like
I
mean
I
could
certainly
you
could
certainly
give
me
a
reason
why
that
would
be
a
bad
idea,
but
I
think
having
three
solutions.
This
problem
is
way
too
much
for
alto
thanks,
I.
G
Well,
I
think
personally,
I
will
agree
with
you,
I
would
say
yeah,
so
I
think
I
could
yeah
doc.
Two
really
is
a
clean
cleanse.
F
All
right
yeah,
so
so
there
there
are
three
solutions:
SSE
solution,
one
right,
so,
okay,
so
no
someone
I
do
want
to
say
that
if
so,
if
someone
can
make
a
convincing
case
that
that
these
things
address
different
requirements
in
different
use
cases,
like
that's
that's
an
argument,
that's
a
discussion
we
can
have,
but
failing
that
I
would
certainly
want
to
down
select
between
two
and
three
doc.
F
Two
and
Doc,
three
and
and
again
I
I'm
I,
don't
have
the
data,
but
from
what
I
know,
I
would
be
prejudiced
towards
doc.
Two
because
it
doesn't
use
push
and
then
like
as
whether
doc
one
is
merged
with
Doc.
Two
or
three
is
an
editorial
thing:
I,
don't
you
know
whichever
and
then
I
would
certainly
encourage
the
the
working
group
to
look
at
obsoleting,
SSC
and
I.
Don't
have
the
information
implementation
Wiki
like
in
front
of
me
right
now
on
how
like
how
installed
that
is
in
in
the
base?
F: If it is not deployed and implemented much, then that is all the more reason to just get rid of it, if we think this is way superior. I mean, this is not such a widely deployed protocol that we can't make really sensible revisions to it, I would say. But that's the second issue; I think the first thing is to down-select among all this stuff and figure out what to do here, and then we can have a discussion about obsoleting other documents.
G
Okay,
so
how
do
proposal
we
proceed
so
we're
going
to
Real
Estate,
because
right
now,
by
the
way
very
quickly
at
high
level,
is
you
can
emulate
to
actually
get
ambulance
through
using
two
of
course,
then
assumption
is
you
should
there's
some
mechanism,
even
the
amulet
three
using
two?
Basically,
you
you,
you,
you
allow
the
client
to
essentially
pre-fetch,
essentially
put
a
large
number
of
pending
put
because
essentially
is
a
server
portal,
can
reduce
the
latency
even
below
a
single
one
round,
three
time
and
poor
you
really
conceptually.
G
If
you
don't
want
to
really
have
a
lot
of
pending
requests,
you
you
can,
it
can
have
a
large
number
of
pending
pull
requests
put
on
the
server.
So,
therefore,
you
put
a
load
on
a
server,
but
if
you
you
think
like,
for
example,
in
implementation,
we
think,
for
example,
overhead
on
a
server
side
may
not
be
a
major
problem.
We
can
essentially
ask
a
client
to
really
send
out
a
larger
number
of
pending
pull
requests
on
future
secret
numbers
to
emulate.
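The emulation idea can be sketched as a toy simulation; the class, the window size, and the refill policy below are illustrative assumptions, not anything specified in a draft. The client keeps a window of pull requests pending on future sequence numbers, so when the server produces update N it answers the already-open request for N immediately, with no fresh round trip:

```python
from collections import deque

class PullEmulatedPush:
    """Toy model of emulating server push with pre-issued long polls."""

    def __init__(self, window=4):
        self.window = window
        self.pending = deque()   # sequence numbers with a request already open
        self.next_seq = 1
        self.delivered = []      # (seq, payload) pairs seen by the client

    def refill(self):
        # The client pre-issues GETs for future sequence numbers
        # until `window` requests are pending on the server.
        while len(self.pending) < self.window:
            self.pending.append(self.next_seq)
            self.next_seq += 1

    def server_produces(self, seq, payload):
        # A new update is answered from the already-pending request,
        # so the client incurs no extra request/response round trip.
        if self.pending and self.pending[0] == seq:
            self.pending.popleft()
            self.delivered.append((seq, payload))
            self.refill()
```

The trade-off discussed above is visible here: a larger `window` means lower delivery latency but more open requests held by the server.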
F
So
I
don't
have
a
strong
opinion
on
on
this,
except
what
I
already
said,
like
I
mean
you
guys
have
been
looking
at
this
problem
closely.
I'm
sure
you
guys
can
come
to
a
smart
decision
about
this.
F
The
other
thing
I
will
say
about
two
versus
three:
is
that
if
we
pick
three,
then
I
think
we
have
to
keep
SSE,
because
that
we
need
an
HTTP
one
solution:
I'm
not
mistaken,
and
that's
okay
like
I
guess,
but
yeah
I
mean
I,
would
look
at
it
and,
like
I,
mean
other
people
in
the
work
whose
benefit
opinion.
Of
course,
I
I,
don't
I.
H: On this part, I think what would be really useful for the working group is that the rationale, I would say the need, for these three functionalities is really documented, so that we understand the scope of the problem; and then, if they are overlapping, to justify and motivate why we need a further solution, rather than just picking one of them.
H: If we have this material, I would say, clearly formulated somehow, that would be really good for the working group to know; and exactly that can be included in the current draft you have, without proceeding with any merge or any split. Just explain: these are the three problems we are trying to solve, these are the approaches, and we will pick or select among them based on the feedback we receive. That would be really good to have in the next version of this one.
B: Next steps: okay, I understood. We already more or less talked about it: describe the rationale, the motivation, and the use cases exactly, but also do not have too many documents. That is also understood. So I am done with the presentation. Any other questions, anyone in the queue, or...?
H: Just a comment about the, I would say, more logistical aspects. As you know, we are behind schedule on the milestone for this one; we were supposed to deliver something in late September. I don't know if you have a plan, or at least an expected schedule, for delivering, I would say, the pieces we have so far. Do you expect to have something, I would say, in the next three months that is stable, or...?
B
So
so
what
we
discussed
in
the
group
I
think
we
can
can
do
because
at
the
moment
it's
a
split
of
what
we
already
had
yeah
and
we
support
the
motivation
to
say.
Okay,
let's
describe
the
reasoning,
why
why
it
is
splitted
and
discusses
on
the
list.
I
think
that
could
be
a
growth,
good
progress
and
I
won't
think
that
there
need
to
be
so
much
change
regarding
the
in
also
the
the
content
or
how
it
works
in
the
draft.
I
think
that
is
pretty
much
this
described.
We
got
feedback
regarding.
B
What's
not
working,
some
of
the
feedback
is
not
analyzed
and
discussed
so
far.
We
will
do
this,
but
in
I
would
say
three
months
could
be
a
feasible
thing.
Yeah,
that's
fine!
Thank.
I: This is a quick status. We uploaded new versions to the datatracker, and in the GitHub we also have some quick fixes, because we had text errors in the YANG models; we will upload those to the datatracker very soon, once that is decided. So that's it.
I
And
for
the
new
revisions,
they
mentioned
it
to
resolve
a
the
three
open
discussions.
They
have
in
the
and
heard
that
at
least
about
the
discussion
on
mailing
list
pages
and
some
persecution
around
the
document
and
also
I
figurative
about
in
the
the
IIT
105
item.
So
they
have
popular
impacted,
the
Olympic
model
in
their
profile
implications
and
it
can
be
checked
and
they
have
to
message.
I: Here is a very quick update: we will reorganize the table of contents.
I: Yeah, from this, we will discuss some details about how we are making progress on the three open discussion topics. Topic one is about how to handle the data sets in the ALTO-related data analysis. The current decision is to use identities to define the object types; this allows the data to be managed in a modular way, it can guarantee some backward compatibility, and future documents can define new types by adding new identities in extension modules.
I: But on whether it is better to use IANA-maintained modules, the working group has different opinions. Some people support this, because IANA-maintained modules can guarantee the compatibility, but others have a different view, because the values are not expected to change frequently, so an IANA-maintained module may not be worth it. So, in the example, we explain what it would look like if we used IANA-maintained modules. May I ask if there are comments on that?
H
Yeah
yeah
I
have
only
one
comment
about
the
the
what
you
have
mentioned
about
the
new
support
for
the
animated
model.
Did
this
actually
defeat
the
idea
of
having
an
Ayana
registry
for
the
protocol?
So
if
you
are
not
expecting
to
have
something
which
is
extensible
and
new
values
defined
in
the
future,
then
what
what
these
same
arguments
can
be
against
having
this
element
end
model,
so
I'm,
not
I,
don't
think
that's.
H
This
is
really
an
argument
to
take
to
take
into
account
the
key
one
is
that
if
you
want
to
have
something
which
is
really
I
would
say
open
to
new
implementation
without
having
to
say
heavy
changes
in
the
process.
The
animated
models
are
there
for
free
a
new
entry
once
it
will
be
added
to
the
previous
video
model
will
be
automatically
generated
and
there
is
no
cost
there,
but
the
other
one,
the
so
I
I
I'm,
not
sure
I
understand
the
the
the
other
argument.
I
Yeah
actually
I
agree
with
you.
So
in
the
example
we
show
the
how
they
are
limiting
modules
will
work
so
the
next
page
will
you
will
go
to
the
next
page,
so
you
can
say
once
a
experimental
notice
become
the
standard,
it
will
update
the
entire
modules,
but
because
the
basic
mother
depends
on
the
other
model,
we
don't
need
to
update
the
basic
model,
so
only
the
model
will
be
updated.
So
that's
a
I
think
that's
the
main
benefit.
I: That, at a high level, was the comment from the working group.
I: And that's the major part; we have had a lot of discussion on the mailing list. To achieve the server-to-server communication, we identified that the O&M model may need the following three parts of configuration. One: we need to configure how the server is to be discovered by another server; this is probably already covered by the existing definitions, and we will provide a new grouping, the ALTO server discovery grouping, which can support different kinds of manners. The server has three predefined cases for this discovery.
I
We
learned
from
the
some
practical
Department
like
the
university,
but
in
the
future
it
can
also
be
extended
through
the
augmentation
and
nasty.
I
And
the
the
second
part.
Actually,
we
also
need
to
configure
the
how
the
server
to
discover
another
server.
So
that
part
is
still
in
the
progress
and
we
have
the
kind
of
proposal
it
will
have
the
another
booking
and
this
grouping
will
include
the
contribution
about
some
parameters,
how
the
server
can
be
used
to
discover
another
so
like
for
the
reversing
ads,
and
you
configure
the
idea,
servers
and
yeah,
but
they.
I
So
the
student
models
can
also
be
eaten
by
augmentation,
but
not
the
side
part.
Is
that
and
decide
for
this
group,
in
which
part
should
be
put?
We
can't
put
it
to
the
basic
Auto
server
instance
and
another
option
that
actually
the
council
put
in
interviews
a
separate.
I
Containers
like
they
have
the
client
engineer,
because
we
think
this
part
should
be
more
like
the
communication
parameter
for
the
cloud
and
other
server.
So
actually
next
phase,
it
will
discuss
some
options
for
this.
I
Yeah
so
then,
so
this
will
also
invoke
the
start,
so
I
probably
like
the
immigration
parameters.
So
we
need
to
configure
how
the
server
to
connect
it
to
discover
servers.
So
that's
part
is
not
determine
whether
CP,
including
in
the
this
document,
because
they
it
has
a
multiple,
continuous
solutions
to
undo
this.
We
have
to
have
some
discussions
on
the
mailing
list
in
the
previous
ITF
meetings
and
we
have
some
individual
jobs
to
talk
about
this,
but
to
a
very
quick
summary.
So
this
can
be
the
three
kind
of
the
solution.
I
Yet
so,
actually,
in
the
next
talk
in
the
incentives,
we
will
have
some
discussions
about
more
details,
also
what
we
observed,
how
to
stop
the
subject
that,
in
this
talk
so
far,
the
the
very
observation
that
we
think
if
we
considered
using
the
First
Source,
it's
a
transfer
remote
to
using
the
auto
particle
to
do
the
set
of
some
multiplication.
So
that
can
be
the
simplest
approach
to
leverage
the
existing
Auto
Centers
and
to
support
this
so
OEM.
Can
it
tends
to
I've
got
so
far?
I
They,
the
oml,
only
have
the
auto
server
instance,
but
to
support
the
severance
communication
using
Auto,
so
they
may
have
a
single
Auto
server
instance,
but
the
multiple
of
the
client
instance
and
for
the
other
client
you
can
configure
the
how
to
do
the
server
recovery
use
the
previous
roofing
system
yeah,
so
the
left
is,
and
the
third
part,
which
is
the
main
part,
is
about
how
to
handle
the
information,
how
to
configure
the
information
rate
of
creation
and
that's
the
major
part
of
the
OM
model,
and
it's
just
by
the
f37285.
I: What we need to do is provide basic, unified models to cover the common configuration state and the common parameters. The next page gives some information about this, from our practice. We determined that, considering the O&M, there will be some slight differences depending on the implementation and on the information deployment strategy.
I
Well,
they
need
to
progress,
is
how
to
handle
the
heterogeneous
format
of
the
data
sources
and
how
to
process
the
detected
from
the
difficulties,
but
for
the
OEM
actually
more
interest
in
how
to
handle
the
Heritage
mechanism
to
assess
this
result
and
how,
to
correctly
configure
the
the
calling
flow
for
the
infamous
resource
creation.
So
in
the
oriented
model
to
separate
the
information
of
creation,
to
scrape
up
part
of
the
data
model.
The
message
layers,
algorithm
layer
and
the
data
soft
layers
and
for
the
next
page,
we'll
be
a
current
example.
I: For example, the resource ID, the resource type, the URI, the capabilities, and the basic algorithm should be used to create the information resource; and for the algorithm you need to configure the data sources that will be used to generate this resource. There are also some implementation-specific parameters: for example, the network map may need to configure the granularity, and for the cost map you need to configure the precision used to compute the cost. And for the data sources, it will include some parameters about the resource and the update cycle. Also, a major part that we learned from our practice is conflict resolution; that is a major lesson from the real implementation. The different data sources may have conflicts: different data sets can have different values for the same entries, so we need to actually resolve that.
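One simple way to realize such a conflict-resolution step, purely as an illustration (the draft does not prescribe any particular policy, and the source names here are invented), is to give the data sources an explicit priority order and let the higher-priority source win for conflicting entries:

```python
def merge_data_sources(sources, priority):
    """Merge per-source entry maps; the source earlier in `priority` wins.

    `sources` maps a source name to {entry: value}; `priority` lists source
    names from most to least authoritative."""
    merged = {}
    # Apply lowest-priority sources first so higher-priority ones overwrite.
    for name in reversed(priority):
        merged.update(sources.get(name, {}))
    return merged
```

Other policies (most recent value, consensus among sources) would slot into the same place; the point is that the merge rule must be an explicit, configurable part of the data source layer.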
F: Martin Duke. I am concerned, and someone's comment at IETF 114 was similar, that we are going a little bit beyond ALTO configuration, to things that are not ALTO, which to me is out of scope. Data collection is not part of ALTO, and this T3 item seems to be trying to configure something that is not ALTO, together with T2.
F
So
right
so,
but
there's
no
server
to
server
discovery.
There's
no
server-to-server
communication
spec
in
Alto,
like
client,
discovery
of
server
versus
in
scope.
So
2.1,
that's
fine,
I'm
concerned
that
the
other
two
are
something
that
is
not
Alto,
that
we're
trying
to
configure
in
this
document
which
to
me
is
out
of
scope,
do
I,
misunderstand
What's,
Happening
Here,.
F
Okay,
well,
if,
if
that
is
correct,
I
would
you
know
you're
welcome
to
like
take
this
other
and
don't
throw
the
work
away?
You
can
you
can
put
in
a
different
draft
or
or
publish
it
some
other
way
or
whatever,
but
I
would
like
this
draft
to
actually
focus
on
configuring.
A: So, Jason, I agree with Martin. Actually, one way forward, as we discussed offline: you can take it into an appendix, you know, to discuss it, to give an example, or we can complete this without it. So much for this.
I: Okay, that's the last page, actually. Yeah, okay.
A
Yeah,
maybe
we
already
run
all
the
time,
maybe
let's
move
on
to
the
next
presentation.
So
next
one
is
about
the
auto
deployment
update
so
Jody
and
Richard,
and
also
Louisa,
and
share
your
update.
J: Yeah, this is Jordi Ros from Qualcomm. I will talk about the deployment update. This is a collective effort, so I am going to hand it over.
K: Just one quick question: I think we probably have a version three of the slides; I think this is the older version.
H: Can you just proceed with uploading the new version, and then we can show it? In the meantime, Luis can present.
L: So, hello everybody. I will present the update on the deployment, the integration of ALTO in Telefonica, in such a way that we can expose the Telefonica network information to the Telefonica CDN. It is a single-domain environment, a single administrative-domain environment, but we will note across the presentation that it also has some influence on other domains; I mean, for users in other domains, in other operators, in the end. This is an update from the last IETF. Next slide, please. Just a quick reminder.
L
So
the
objective
of
all
of
this,
for
all
of
these
work
is
to
improve
the
little
bit
of
traffic
from
the
telephonica
CDN
in
the
tunnel
of
the
network,
so
such
a
way
that
the
decisions
that
the
telephonica
CDN
the
logic
of
the
CDN
could
take
could
consider
also
the
network
information,
the
topological
information.
By
now
we
are
playing
with
the
number
of
hops.
The
idea
is
to
include
more
power
for
more
Rich
metrics
in
the
in
the
future.
L
That
could
be
maybe
the
the
occupancy
of
the
links
or
the
latency,
and
so
by
now
it's
just
simply
the
the
number
of
hops
or
the
igp
metric
you
wish,
so
the
project
was
already
presented
in
last
ITF
or
in
in
Alto
for
sure,
but
also
in
in
media
operations,
and
so
I
will
provide
an
update
here
next,
please
well.
Let
me
also
recommend
that
this
was
already
presented
this
in
this
itf2
media
operations
and
on
Monday.
So
this
is
just
I
know
that
you
are
very
familiar
with
that.
L
So
what
we
are
playing
the
pieces
that
we
are
playing
is
the
network
map,
which
is
essentially
a
groups,
the
different
prefixes
representing
them
points.
In
this
case
the
endpoints
will
be
on
one
hand
the
prefixes
allocated
for
the
end
users
for
the
consumers
of
the
streaming
content.
On
the
other
hand,
the
other
endpoints,
let's
say,
will
be
the
caches.
The
IP
addresses
of
the
caches
for
delivering
the
the
content.
L: Just to remark: the network map is obtained through BGP, so we establish a number of BGP sessions with the route reflectors in charge of advertising the prefixes of the end users; and, on the other hand, for the cost map we established BGP-LS sessions with different route reflectors, in a manner that lets us build the topological relationship between the nodes, connecting the endpoints on each side. Okay. So next, please. This slide summarizes the process we followed.
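The two resources just described follow the standard RFC 7285 JSON shapes. A toy example, with invented PID names, prefixes, and metric values rather than the real Telefonica data, shows how the CDN logic could pick the lowest-cost cache for a user PID:

```python
# Shapes follow RFC 7285; all names, prefixes, and costs are made up.
network_map = {
    "meta": {"vtag": {"resource-id": "default-network-map", "tag": "0001"}},
    "network-map": {
        "users-region-1": {"ipv4": ["192.0.2.0/24"]},    # end-user prefixes learned via BGP
        "cache-pop-1": {"ipv4": ["198.51.100.10/32"]},   # CDN cache addresses
        "cache-pop-2": {"ipv4": ["198.51.100.20/32"]},
    },
}

cost_map = {
    "meta": {"cost-type": {"cost-mode": "numerical", "cost-metric": "routingcost"}},
    "cost-map": {  # IGP metric between PIDs, built from the BGP-LS topology
        "users-region-1": {"cache-pop-1": 30, "cache-pop-2": 10},
    },
}

def best_cache(user_pid):
    """Pick the cache PID with the lowest routing cost from the user's PID."""
    costs = cost_map["cost-map"][user_pid]
    return min(costs, key=costs.get)
```

Swapping the routing-cost values for link occupancy or latency, once richer metrics are exposed, would change the numbers but not this selection logic.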
L
Last
ITF
I
presented
the
the
initial
part,
so
we
started
playing
with
a
toy
environment
in
the
in
the
lab.
Just
for
validating
the
concept,
then
we
moved
to
the
pre-production
lab
of
one
of
the
operations
of
telefonica.
So
now,
I
started
playing
with
more
realistic
environments,
different
vendors
and
and
understandings
of
how
the
the
issues
of
that
the
real
architecture
could
bring
into
the
deployment.
For
instance,
in
the
lab
we
were
playing
simply
with
ospf
in
the
preproduction
lab.
L
We
start
playing
with
Isis
and
this
produce
all
these
motivational
changes
in
the
alto
and
so
on
so
far.
So
the
point
where
we
are
now
is:
we
will
run
a
pilot
last
month
or
October
27th
in
the
real
production
network
of
telefonica
in
in
this
operation
in
Spain.
L
So
the
time
in
in
the
Captivity
114
and
90f155,
we
essentially
prepare
all
the
environments,
so
they
plug
in
Alto
connect
the
financial
reflectors
doing
the
the
Field
Works
of
server
installation,
all
the
security
aspects
for
hardening
the
deployment
in
a
real
Network,
so
trying
to
be
sure
that
the
flows
between
the
road,
reflectors
and
Delta
server
goes
I
mean
where
they
should
go,
etc,
etc.
So
what
we
are
we
I
will
comment
is
the
result
of
this
POC?
Is
this
pilot?
So
it's
just
a
work
in
progress,
but
I
will
provide
you.
L
Some
information
in
this
respect
so
next
place
yeah
an
initial
known
restriction
that
we
have
big
because
from
dependencies
on
the
on
the
real
Network,
this
graph
represents
a
a
kind
of
yeah
topological
hierarchy
in
which
we
structure
the
the
networks
in
telephonica,
more
or
less
all
of
them
follow
this
this
kind
of
of
schema,
so
we
have
different
levels:
different
hierarchical
levels,
being
the
hierarchical
level,
one
the
inter
the
one
for
interconnection
so
connecting
to
other
operators
being
the
hierarchical
level,
five,
the
the
100
in
the
SSI
routers,
and
so
in
the
middle.
L
L: In the middle we have the different levels: aggregation, regional, and so on. So we have a restriction on the connections from hierarchical level 2 to hierarchical level 3. The problem is that for a specific vendor we have an issue with the software and we cannot include the proper information in OSPF; a hierarchical level 2 node is connected to hierarchical level 3 through OSPF in this case. So not in all the footprint of this network have we yet solved this issue of being able to retrieve the information with BGP-LS, so there is a gap in this respect: we have a small area, a small region of the country, solved, but not the whole country. This was known in advance and, well, it will take time to be solved, because we need to wait for the new software release and so on. Next, please. So, good news, bright news, about the deployment.
L: We connected the ALTO server, with its BGP speaker, to the route reflectors, and we started to retrieve a number of summarized IP address ranges, more than 16,000 of them; these ranges correspond to different kinds of users: fixed users, mobile users, and enterprise users.
This
is
this
is
good,
because
we
can
then
start
differentiating
the
kind
of
flows
that
we
can
deliver
from
the
CDN
to
the
different
users
at
the
end,
according
to
the
type
of
user
that
we
could,
the
user
that
is
requesting
the
content.
L
Those IP ranges are both internal and external; the majority of them, for sure, are internal, but there are also some external ones. What are those external IP prefixes? They correspond to the national interconnections, so the ones that we are getting from the peering points. Why national and not international? Well, all the international connectivity in Telefónica is handled by a specific carrier, the Tier 1 carrier for the Telefónica group, so the only nodes that deal with interconnection in this operation are the ones for the national interconnection. Why is this relevant?
L
This is relevant because we can also improve the delivery towards the users that are coming from other operators. Note that the CDN is delivering live streaming, is delivering OTT traffic, right? Sports and this kind of thing. So it is also important for us to understand what could be the better cache to serve external users, so trying to identify also the proper interconnection point.
L
L
Pretty well. So, note that I commented before that we cannot build all the cost map, but we do retrieve the information, so we don't expect a higher load once we resolve that point for the cost map; the load should be more or less the one that we have now, and this is important. Next slide, please.
H
L
Just wait for that; let's wait for that. So next, please. This is just an example of the information retrieved: you can see there the actual PIDs, without the actual IP address ranges from Telefónica, and also the cost map with the relationship of the IGP metrics among them. Interesting to note, and I will just highlight one point that I will comment on at the end: you see there in the PIDs some asterisks, and this is because, while running the PoC,
L
we realized that this information, the identifier of the PID, is sensitive. So this is why, somehow, we applied here this kind of simple obfuscation; but I will elaborate a little bit more at the end on this, as a potential topic to address as well. But those were the good news; there are also not-so-good news, things that we need to get fixed: there is no information of IP ranges for about five percent of the PoPs of the Telefónica footprint.
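The "simple obfuscation" of PID identifiers mentioned here could, for example, be a keyed hash, so that PID names stay stable across map updates but no longer reveal router loopbacks. A minimal sketch, assuming a hash-based scheme; the `pid-` naming and truncation length are illustrative, not Telefónica's actual method:

```python
import hashlib
import hmac

def obfuscate_pid(pid: str, key: bytes) -> str:
    # Keyed hash: stable for the same (pid, key) pair, not reversible,
    # and unlinkable across operators without the key.
    digest = hmac.new(key, pid.encode("utf-8"), hashlib.sha256).hexdigest()
    return "pid-" + digest[:12]
```

The key stays inside the operator, so external ALTO clients see consistent but opaque PID names.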
L
So we need to analyze what particularities are in these PoPs, to understand why they are not advertising the information. Also, some other IP ranges seem not to be retrieved: we have a good number of them, but there are some missing, so we need to understand what is happening with them. Probably we need to connect to some further route reflector, to look for where these prefixes are being advertised.
L
Only 27 PIDs are both in the network map and in the cost map. This could be the result of the known issue with the aggregation between hierarchical levels 2 and 3, but we need to assess whether this is really the cause. And, finally, the PIDs for the CDN nodes are not yet captured, and this could be just a matter of connecting, again, to a different route reflector.
L
This operation in Telefónica has a number of route reflectors for different purposes, so it could be the case that we are missing some relevant BGP connection for this. So, next steps, not taking much more time: for the pilot, we need to understand how to consume all the information. Okay, we now have the information; now, how should we consume it, how often can we retrieve the maps, and so on? So we need to continue analyzing the information received, to understand the dynamics in a production network; so there will be changes.
L
Also, to work on proper logging, registering all the events that we could monitor, and so on; and also to work on how to automatically load, upload, the topology. So, for instance, one of the questions arising for us is: what happens if, at some point in time, the topology that is uploaded is quite different from the existing topology? What should we do: consider the new one, keep the old one, discard the new one?
L
These kinds of things are something that we still need to learn and provide. And then, finally, for the ALTO working group, the idea would be to document the pilot; and probably this could also be interesting for MOPS: to identify the gaps, issues and improvements in the solutions that could be worth working on. In this respect,
L
I would like to emphasize the point of security: the obfuscation of the PIDs and, probably, other security capabilities in the ALTO environment. And, for sure, to provide another update at the next IETF, hopefully with all of this solved; let's see if we are able to do so. So, there is a question? Thank
D
you, Luis, for your presentation. My name is Ayo Misuse; I'm from a physics research institute in Europe. I found this presentation very, very interesting. Can you go back, please, to the hierarchical scheme that you showed for the different layers? I'm very interested in this topological IP transport hierarchy. My question is: are you considering security as part of this hierarchy? Can security aspects be integrated in this hierarchy?
L
Yes, I mean, well, this topology is internal to the domain, so it is hardened, it is secure. What we would like to prevent is revealing sensitive information, as, for instance, I commented for the PIDs: the PIDs provide an identifier, but it is quite simple, from that identifier, to derive the loopback of the routers. So we need to know how to hide that information. When the application is internal,
L
it's not a major problem; but with the idea of exposing this information to external parties, it could for sure be sensitive information. So that would be one aspect, and there will be some other aspects to cover in security. So, yeah, the priority is a matter of starting to work on it, and trying to identify points for making ALTO more secure and more robust.
D
L
L
A
E
A
E
J
Yeah, Jordi Ros-Giralt from Qualcomm. So this is a collaborative work, and Kai, Jensen and myself are going to be doing this presentation. I'm just going to start off: we're going to be talking about a few deployments of ALTO that are taking place in some of the science networks in Europe and the US. So next, thanks. Yeah, I'll skip this one. So, yeah, the context: this is two things. One is the OpenALTO source base
J
that's been developed as part of the OpenALTO project: it's an open-source implementation and platform with an MIT license, and it's available on GitHub. The other development is the openalto.org domain, a running instance of the OpenALTO deployment, providing network information in the context of science networks, like I mentioned, such as LHCONE, LHCOPN, CERN, ESnet, NRP and so on. We'll talk about these deployments in this conversation and, yeah, it's available under this domain.
J
Next, yeah. This is just a snapshot of one instantiation here of a network where we're deploying OpenALTO. This illustrates the different regions where the network is running: Asia, the Americas and Europe. This is, in particular, the LHCONE network that's connecting the experiments at CERN in Geneva, basically dedicated to moving these massive data sets; we're talking about, you know, petabytes of information that come from the LHC collider in Geneva.
J
You know, the experiments produce these massive data sets that then need to be transferred to scientists in other regions of the world, whether it's in the US or Asia or anywhere. So, yeah, just to get an idea. Next, yeah. This is the specific architecture that's been running on these science networks, specifically in the CERN network: this is sort of a layering view, with the network at the bottom.
J
Then we have the visibility component, which is implemented using OpenALTO; that's deployed and, sort of, monitoring the network and then providing the maps. These maps then are fed into the next layer, which is what's called TCN plus FTS: the transport control network and the file transfer system. FTS is science-networks terminology, a technology which basically schedules data transfers
J
by, you know, looking at paths in the network and finding the right way to actually transfer the data from point A to point B; we'll talk a little bit more about these as well. And then, above this layer, there is the ALTO-Rucio integration, which does the orchestration, and this is what the user interfaces with. So, for instance, when a scientist needs to share data with another scientist, they would actually go into the Rucio interface and say: please transfer this data set, or please give me access to this data set. Then the transfer would actually get scheduled: the request goes into the next layer, the TCN-FTS, which then, based on the network visibility, makes the decisions on where to access data from and also, you know, which path to select, and so on.
J
Next. And this is, yeah, the architecture. You can see here, in the real picture, that the gray boxes are components that we already had before IETF 114; the green are components that we now have, as of IETF 115; and in blue are components to be developed.
J
The way to look at this is as the various sources of information that we are pulling from, and we organize them in three blocks, or three classes: on the left side the control plane, in the middle data-plane control, and then data-plane data.
J
Basically, on the control plane it's pulling information from the control plane: the adjacencies, metrics, subnets, the BGP input, and then the generation of the FIB; and from here, then, reconstructing, you know, the topology, the paths and so on, and creating the network map and the cost map. Another source of information is the data-plane control, which is control information, but we pull it from the devices, from the data plane: that's specifically the looking glass.
J
That's the running deployment right now, in green, with P4 as a future development, in blue. And then, yeah, this leads to another way to access the FIB. And then, on the right side, data-plane data: this actually gets the information from the data plane,
J
from the actual, you know, sampled packets: so technologies like NetFlow, sFlow, perfSONAR, ICMP. And this is an integration with GradientGraph (G2), which is running in one of the deployments: so, actually, the development of a plugin, an agent, an OpenALTO plugin, that actually gets this data from G2, and G2 actually gets it from NetFlow, sFlow, ICMP and so on. Then there is another component
J
that's been developed also by Jensen and Kai and others on the team: the equivalence classes. This is something needed because, in this data-plane data mode, we're actually getting the information from the data plane, you know, the actual running flows on the network.
J
You don't have visibility of, you know, a path that is actually not active, and so here the solution is building equivalence classes, because we still want to figure out those flows: creating a mapping, creating these classes, so that, if you make a query on a path that doesn't exist, that is not active right now, you can still resolve that path.
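A minimal sketch of the equivalence-class idea described above: observed flows are grouped by source/destination prefix, so a later query for an endpoint pair with no active flow can still be resolved through the class that covers it. Names and structures here are illustrative, not the OpenALTO implementation:

```python
from collections import defaultdict
from ipaddress import ip_address, ip_network

def build_equivalence_classes(observed_flows):
    # observed_flows: iterable of (src_prefix, dst_prefix, path_id) samples
    # taken from active traffic; each prefix pair becomes one class.
    classes = defaultdict(set)
    for src_pfx, dst_pfx, path_id in observed_flows:
        classes[(ip_network(src_pfx), ip_network(dst_pfx))].add(path_id)
    return classes

def resolve(classes, src, dst):
    # Resolve a (src, dst) query even when no flow between the exact
    # endpoints is active, via the covering prefix pair.
    for (s, d), paths in classes.items():
        if ip_address(src) in s and ip_address(dst) in d:
            return paths
    return None
```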
J
K
Thank you. Okay, so here is basically an overview of our deployment. Right now we have, like, three deployments; the first two deployments are mostly based on our server deployment efforts. The first one is to deploy the ALTO server inside CERN, and right now it is already up and running inside the CERN internal network which, unfortunately, is not accessible from the internet. But then we also have a public mirror hosted at openalto.org.
K
So people can use this server to basically get the information that we provide in the CERN deployment. This server basically implements RFC 7285 and also RFC 9275. And then the second deployment is also running on the NRP platform, and I believe that host is accessible:
K
it has open access on the internet; but we also have another public mirror hosted at openalto.org, and we have implemented RFC 7285 and also RFC 9240, basically the unified property map document. And the last deployment is some update from the client side, because we're not only developing ALTO servers for the LHCONE use case; we are also developing some applications that can leverage the visibility information. So next slide, please.
K
And in the next few slides we basically give some details about all these deployment efforts. The first one is the CERN deployment update. We basically fetch information from the CRIC database, which provides the map of the IP addresses we are interested in, because the LHCONE network does not actually use every IP address in the whole internet; it only has some IP prefixes between what they call the sites,
K
basically, like, connected universities and also science institutions. We basically pull this information to help create the IP prefixes that we are interested in; and then we also use the LHCONE Looking Glass server as a data source, to get information about, like, AS paths and also the next-hop router information. So next slide, please. And this slide basically gives a detailed view of how we actually pull information from the Looking Glass server.
K
With the CERN Looking Glass server we can actually get the routing table of one of the border routers. We fetch this information from the Looking Glass interfaces, then we parse the data and extract the information that we are interested in, for example, from the routing table.
K
On the left side we can actually extract, for example, the destination IP prefix, the AS path for that prefix, and also the next hop for the prefix; and using this information we are able to construct the path vectors between the source IP addresses at CERN and the destinations in the other LHCONE sites. Next slide, please. And here are some examples where we make a query to the CERN ALTO server, and it will basically respond with
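As a sketch of the kind of parsing described here, the snippet below extracts the destination prefix, next hop and AS path from one routing-table line. The line format is hypothetical, modeled loosely on typical "show route" output; the real CERN looking-glass format will differ in detail:

```python
import re

# Hypothetical route-line format for illustration only.
ROUTE_RE = re.compile(
    r"(?P<prefix>\d+\.\d+\.\d+\.\d+/\d+)\s+"
    r"via\s+(?P<nexthop>\d+\.\d+\.\d+\.\d+)\s+"
    r"AS path:\s+(?P<aspath>[\d ]+)"
)

def parse_route_line(line):
    # Return prefix, next hop and AS path, or None if the line
    # does not look like a route entry.
    m = ROUTE_RE.search(line)
    if not m:
        return None
    return {
        "prefix": m.group("prefix"),
        "next_hop": m.group("nexthop"),
        "as_path": [int(asn) for asn in m.group("aspath").split()],
    }
```

Entries parsed this way are what the plugin would assemble into per-destination path vectors.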
K
the ECS service with the path vector extension, which includes the next-hop information and also the AS paths, which are encapsulated as ANEs. And on the bottom, basically, it gives an example of how we actually make the plugin configurable, where we can specify, for example, the back-end agent class which will be responsible for fetching the information, and then some parameters that are specific to the agent. Okay, next slide,
K
please. And our second deployment effort is to deploy ALTO on the NRP platform. With the NRP platform we are actually leveraging the current deployment of GradientGraph, and we are making G2 a kind of interface between the raw information that is currently at NRP and the ALTO server. We also get information from NRP itself; basically, it provides some information about the NRP platform devices and so on. And then, with G2 as the interface,
K
we can actually get information about, like, link capacity and link delay in the overlay network. So next slide, please. And here is an illustration of how we actually get information from G2. Basically, G2 provides a snapshot of the information that it collects from the underlying services, like sFlow and other tools; then we identify the active flows from the snapshot, and we map them into, like, equivalence classes, so that we can infer information about flows that are not currently running. And then, basically, we can
K
use that information to create the ANE paths. One thing that is interesting here is that the G2 snapshot also contains the topology information, and then we can use the links: for example, here the link has the ID 1070, and we can use that ID to query the topology data and get information about the link. Next slide, please. And here is also an example of the queries, and the responses you can receive from the NRP ALTO server; and on the bottom
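A small sketch of the link lookup described above, assuming a G2 snapshot is available as JSON-like data; the field names (`topology`, `links`, `id`, `capacity_mbps`) are assumptions for illustration, not G2's actual schema:

```python
def index_links(snapshot):
    # Index the snapshot's topology section by link id, so per-link
    # attributes (capacity, delay, ...) can be looked up for the link
    # ids appearing in a flow's path.
    return {link["id"]: link for link in snapshot["topology"]["links"]}
```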
K
we also give examples of how we configure the backend. For the NRP case, since we are actually using the equivalence classes to map the sampling results to, like, per-prefix data, on the right-hand side we also give an example of how we are configuring the equivalence classes right now. So next slide, please. Okay, and the last part is some update about the Rucio integration. Earlier we actually added an option to Rucio to sort the replicas.
K
K
The first part is about the entities: basically, we can query the replicas and do filtering based on, for example, the geolocation of the replicas, and also the distance between the replica and the place which issues the download requests. And to support the Rucio integration use case we are actually using an additional data source, basically the MaxMind GeoIP database. So next slide, please. And here is the illustration of what we are extending in Rucio.
K
Basically, we're using what is called an ALTO-based sorting expression, which basically gives a simpler syntax that can be used by the clients to query ALTO, and which will be translated into the underlying ALTO requests; basically, that will help us simplify the development on the client side.
K
So next slide, please. And here is basically an example of the ALTO sorting expression. Right now it is actually a SQL-like language, and it has two components. The first component is the statement after the 'bind' keyword, which specifies, for example, what cost metrics we use to sort the replicas; and then there is also the 'where' statement, which specifies some filtering conditions on the replicas. We also developed a syntax, basically using a BNF format, to express the sorting expressions. So next slide, please.
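As a rough illustration of how such a "bind ... where ..." expression could be evaluated on the client side, a minimal sketch; the function, parameter and metric names here are hypothetical, not the draft's actual syntax or the Rucio patch:

```python
def sort_replicas(replicas, metrics, key_metric, where=None):
    # 'where' plays the role of the filtering clause; 'key_metric'
    # is the cost metric bound in the 'bind' clause. 'metrics' maps
    # metric name -> {replica: value}, already fetched from ALTO.
    candidates = [r for r in replicas if where is None or where(r)]
    return sorted(candidates, key=lambda r: metrics[key_metric][r])
```

For example, sorting by a one-way-delay metric while filtering out one replica would order the remaining replicas from lowest to highest delay.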
K
And here are some examples of how we implement the Rucio integration. On the left-hand side, basically, we show how we can configure some metrics that can be used in the sorting expression, and each metric will be configured with, for example, where the clients should fetch the information (for example, the ALTO resource the client will be getting the information from), and also how to parse the raw ALTO information into the metrics that can be used in the sorting expression. And then, on the left-hand side,
K
we basically give two examples. The first is hopcount: basically, it will be extracted from the resource ID called cern-path-vector, and then from the AS Path property of the ANEs; and because the AS path is actually a list of elements, we also need to specify how to map this property into a value, and then how to aggregate those values, and so on.
K
We also have another cost called delay, a one-way delay; and for the one-way delay, because it is fetched from the cost map, we don't have to specify how to aggregate the results. And then, on the right-hand side, we give an example of the sorting expression, and on the bottom we give the results, where we sort the replicas using this ALTO sorting expression.
K
So next slide, please. Future plans, for the three deployments. About the LHCONE deployment: as Jordi has shown in the beginning, the LHCONE network actually has multiple domains, and right now we are only deploying in the CERN network, so the next step is to extend the deployment to multiple domains and provide a multi-domain endpoint cost service; we expect to finish this before IETF 116. And for the NRP and G2 deployment,
K
right now we are actually only using the sFlow data through the G2 service, so the next step is to bring a further integration with G2 and provide the flow prediction service. One of the, maybe, low-hanging fruits is to expose, like, fair share as a cost between IP addresses, and we will also try to expose the bottleneck structure as well. And for the Rucio and FTS integration:
K
what we showed earlier is running basically in our Docker environment, so the next step is to finish the unified replica-sorting feature and send the pull request to Rucio. We are also developing an ALTO-assisted FTS scheduler, which would use ALTO information to achieve, like, resource control in the science networks. And, yeah, sure, I know we only have one minute left, so next slide, please.
K
During the process we also identified some problems, and we have some experiences from trying to tackle them. Basically, we had some slides describing the issues and also some potential solutions to those problems; maybe we quickly scan through all these pages, and then we can arrive at the last one, basically the feedback to the working group.
A
So, okay, you want to take a question? I just want to, you know, ask a question: you mentioned, for NRP, there is actually some conversation on how this NRP configuration, or user configuration, relates to ALTO OAM.
K
One page back, before this page. Yep. And so, during the process we, for example, identified issues such as data conflicts, and then we proposed that maybe the precedence of data sources, as a way to resolve conflicts between the sources, could be a feature of OAM. And also, we use equivalence classes to specify, for example, what we call the atomic query spaces.
K
So basically, we're wondering whether this should be considered as part of the OAM document, or maybe it is actually out of scope. And also, about the extensions to the protocols: one of the problems we identified is that using source and destination IP addresses is not sufficient to support, like, cross-domain scenarios, where a source IP address could enter a network from multiple places; but with some, for example, constraints from the upstream ASes,
K
the downstream AS might be able to resolve this issue and determine which ingress port the traffic from the source actually enters the network through. And also, in the NRP case, because the paths for the flows can be established on demand, there might be a need to enable some flow-level queries. And, yeah, basically that's it; unfortunately, we don't have time to explain each of the problems and how we handle them.
K
So maybe we can bring the discussion offline, to the mailing list.
I
Just some comments about the OAM part. So far, in the OpenALTO implementation, we don't use the YANG model to do the OAM configuration; we have some different formats to handle this, which also include the server setup, the algorithm part and the data-source configuration, and there are some information-source creation configurations and so on. Right now we are writing parsers to transform the YANG model into our own configuration scheme, and the target is to have that fully implemented by the next IETF.
G
I do want to give a quick comment: one of the main, quite different, use cases we're doing right now, of course, is ALTO-TCN, right? You probably saw the title; we were just presenting it in the morning, and we presented it a couple of weeks ago at the GRP. For that one, it's quite a different use case: the main integration is that we're modifying the FTS controller. FTS is the main scheduling engine, which moves, like, exabytes of data.
G
So, therefore, a really huge amount of data moving around. The one main issue we encountered over there is that, right now, in all the implementations we have, they are essentially single control clusters; a cluster would typically have around, you know, six servers, and so on. So I got a lot of discussion, push back, not push back, a lot of discussions coming back from CERN, from all the discussions and so on: eventually, to really achieve the full goal,
G
we should really be able to do the orchestration among all these FTS servers. So, basically, each FTS controller will have one ALTO client embedded into it, and you control it. Right now, for example, I think there are around 14 instances running globally, so we need the ALTO client to essentially do the integration; we really need some way for the ALTO server, or client, to do some kind of aggregation, to really be able to do global coordination.
G
So, therefore, that's really a very important part of the effort, and it can probably give input into OAM as well, because I believe Markie mentioned server-to-server use cases; it's actually one of the server-client use cases, or maybe a central server will become peer-to-peer, really pulling information from all the clusters. That's turning into a major issue, but we don't have a full understanding of how it really works yet, because right now we're focusing mostly on just getting this single domain fully functional.
G
L
Hello again. I will comment on a few updates on the draft about the usage of ALTO for determining the service edge. So next slide, please.
L
So the idea here would be to leverage on ALTO for taking a decision on what data center, what compute environment, to use for the instantiation of application functions, considering at the same time both the information coming from the compute side and the information coming from the network: the topological information, enriched with some metrics that could help to take the more optimal decision, for instance in terms of latency, bandwidth and so on. So next, please. The updates from the latest version: we have refined the text in sections 3 and 4.
L
We have added a clarification statement in section 3 about the fact that ALTO currently does not allow a server, or let's say an entity, to be represented as an entity in both domains; again, we are talking about the compute domain and the network domain. So we identified that point. We have also elaborated the model text in section 4, and we have provided some detailed examples for retrieving information when these kinds of servers, or these kinds of compute facilities, are part of both the IPv4 and ANE domains.
L
So next, please. Yeah, so this is an example of the new content; I mean, this reflects the new content now in the draft. Here we describe how a server could be part of both the IPv4 and ANE domains. Essentially, we play with different identifiers in one domain and the other, and we try to relate those two domains, I mean, having a way to reconcile the information on both sides; for sure, we need to think more about this.
L
By now we are providing the details of the host. Something to be considered: maybe, instead of providing the detail of the host, we could provide the detail of the site, grouping the whole information. For all of this, I mean, we need more work to understand what kind of information we could retrieve from the cloud managers, for instance; so, to understand what kind of information we could take from things like Kubernetes and so on.
L
We also added an example of a filtered property map; here the entities are reflected as ANEs. The approach we are following in this example is to consider this idea of quotas, or bundles of resources: having X CPUs, Y amount of storage and Z amount of disk, and so on. This is why we are categorizing them as small, medium and large, the usual way for some of the hyperscalers at the time of offering compute capabilities.
L
L
We named these instances flavors, quotas or instances. And, essentially, we discovered that there were some typos in the JSON that we included in the example, so we will fix this in the next version. Next, please. As next steps, we want to improve the description of the interaction with the cloud managers, as I said.
L
We have now something generic, so we need to understand what kind of information could be retrieved from the most common cloud managers existing, or being deployed, today, just to ensure that the approach we are taking is correct. In that respect we would like to collect comments and feedback from the working group; and the next step will be to keep working on this, maybe to serve as input for rechartering in the future. So that's all from my side. Thank you.
A
All right, thanks, Luis. Actually, just to mention, there is another draft at the end that talks about a database open-source service, so maybe you can align and have some discussion at the end. Sure. Okay, let's move on: the next item is MoWIE for network-aware applications, from Johan Hayward, who is going to present remotely.
C
J
Okay,
so
so
yeah,
so
this
is
a
new.
A
new
draft
I
was
actually
presented
more
in
depth
actually
yesterday
during
the
NRG
session,
so
I'll
I'll
refer
you
to
that
presentation.
From
yesterday,
where
we
explained
the
details
and
the
context
of
this
draft
is
bottleneck
structures
and
we
go
one
level
sort
of
deeper
and
we
talk
about
how
to
compute
parallel
negastructure
under
partial
information,
when
you
know
you
have
an,
as
that
doesn't
know
about
the
information
from
from
another
as
and
you
still
try
to
compute
the
bottlenecker
structure.
J
So we think that this conversation is more at the research-group level; that's why we are, sort of, moving it to PANRG, and considering maybe other RGs. But bottleneck structures have also been discussed in ALTO, and actually the idea is that, just like we have a network map and we have a cost map,
J
we could also have a bottleneck-structure map, which is a compact and efficient way to represent the state of the network and to be able to, sort of, know the available bandwidth on a path, and so on. So we also have a small conversation here about this draft.
J
J
Okay, so we discussed bottleneck structures at IETF 113, so I'm not going to go through that again. But basically, given a network, we're going to go next, yeah: from a given network you can compute the bottleneck structure; you can find that in the paper. We're not going to go through that, but it assumes that you have full knowledge of the network. Now, what happens when that's not true, when you have partial information? So, if you click on next:
J
one instance of the problem of partial information is the problem that we face in the internet when you have multiple ASes. AS1 doesn't know about the information of AS2, and vice versa: you don't know the flow information on the other AS, you don't know its topology information; you only know information about your own domain, right?
J
So the question is: how do we compute the bottleneck structure in areas like these? And so, if you want to go next: if AS1 tries to compute its own bottleneck structure just using the information that it knows, it's going to find this bottleneck structure, and clearly this is incorrect. If you look at this bottleneck structure here, it is not the subgraph, which should be this region here, of the global solution, the one computed with global knowledge.
J
If you click on next: if AS2 tries to find the bottleneck structure just using local information, it's actually going to get it right in this example, but just because it got lucky. There is this sort of math property that says that if all the paths I'm seeing are bottlenecked inside my AS, then I'm actually going to find the right bottleneck structure. That's what's happening here in this case, but it just got lucky.
J
So the point is that, in order for both ASes to be able to compute the bottleneck structure correctly, there is some information that needs to be shared between the ASes. Now the question is: what is the minimum information they can share in order to converge to the right global bottleneck structure, which I've shown, without sharing the full topology or flow information, which is of course very sensitive? That's what this draft discusses; it proposes a distributed algorithm that works at the border level and has three properties.
J
Convergence: sharing one metric per path is enough to ensure convergence to the correct bottleneck structure, so that's one of the results. Scalability: it focuses on building the path gradient graph. In the previous draft we discussed that there are different versions of the bottleneck structure: there's the flow gradient graph and the path gradient graph. The path gradient graph is way more scalable because it works at the path level, so you may only have hundreds of paths as opposed to millions of flows, right.
J
So this border protocol works at the path level, which makes it way more scalable in that sense; it requires only per-path state. And then privacy: it does not require sharing internal network information. All you need to exchange is the path metric, a scalar, basically. So we can go next. I'm actually going to skip the details of the protocol, so move on, next, next, yeah, next. Just at a very high level, how does this work?
J
So again we have a network that has two ASes, so we go next. We don't have to understand everything; this is just the intuition. AS2 is trying to find the bottleneck structure, and at iteration one it believes that this is the state of the network, the bottleneck structure. This is incorrect.
J
AS1 does the same; it's a local computation. It finds this bottleneck structure, it gets lucky, and actually this is correct. But the point is that there's this notion of a path-metric dictionary: each AS is tracking a metric per path in the dictionary, and what they do is share the PMDs, which is only one metric per path that they need to share. So they share them, and this one has converged here, so it's good, but then the next iteration...
J
Thanks. Now we can compare and see that this subgraph is correct, basically, right; it corresponds to this portion of the global bottleneck structure. So this is a mechanism for both to converge to the correct bottleneck structure without sharing sensitive information. You can see the intuition behind this, what this is doing: AS2 converges to the right bottleneck structure, but it can only model the remote side as these virtual nodes. It doesn't know what V1 and V2 actually are.
J
It has a path metric also, but V1 and V2 in fact happen to be inside AS1; AS2 doesn't know that, because it doesn't have any visibility. That's sort of how it models them. The bottom line is that with this information you can actually make traffic engineering decisions without exposing the full knowledge of the neighboring ASes, okay. So next: at a very high level, how would this work?
J
Basically, you have ASes, each one computing the bottleneck structure locally, and then you have these messages being shared, the path-metric announcement packets. They just share the path-metric dictionary between neighboring ASes, okay. And the convergence time of this algorithm is actually logarithmic, because it converges in the same number of steps as the bottleneck structure has levels, basically from top to bottom. So from that standpoint it should have some good scalability properties, okay. So then next, yeah. And then, so what does this mean?
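The exchange just described, each AS solving locally and then advertising one scalar per path until nothing changes, can be sketched as a fixed-point iteration. The update rule below (take the minimum of the local capacity and the neighbor's advertised metric) is an invented stand-in for the draft's real local computation; it is only meant to show the shape of the convergence loop.

```python
# Each AS exposes a "solver" that recomputes its path-metric dictionary
# (PMD) given the neighbor's last advertisement; iterate until stable.

def make_solver(local_caps):
    # local bottleneck computation, abstracted to one scalar per path
    def solve(neighbor_pmd):
        return {p: min(c, neighbor_pmd.get(p, float("inf")))
                for p, c in local_caps.items()}
    return solve

def converge(solve_a, solve_b, max_iters=32):
    pmd_a, pmd_b = solve_a({}), solve_b({})   # iteration 1: local-only view
    for _ in range(max_iters):
        new_a = solve_a(pmd_b)                # exchange PMDs both ways
        new_b = solve_b(new_a)
        if (new_a, new_b) == (pmd_a, pmd_b):
            return pmd_a, pmd_b               # fixed point reached
        pmd_a, pmd_b = new_a, new_b
    return pmd_a, pmd_b

pmd1, pmd2 = converge(make_solver({"p1": 10, "p2": 5}),
                      make_solver({"p1": 8, "p2": 7}))
# both ASes agree on the per-path metric: p1 -> 8, p2 -> 5
```

Note how each side only ever sees one scalar per shared path, matching the privacy property claimed in the talk.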
J
How do we connect this with ALTO? So, without really discussing whether to standardize it: bottleneck structures are another way to represent the network. We have network maps, we have cost maps, and then we could also have a representation of the bottleneck structure as an object, right.
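To picture the "structure map" idea floated here alongside ALTO's network and cost maps: the object below is purely illustrative. No such ALTO resource is standardized, and every field name is invented for discussion (the meta/vtag wrapper only loosely mirrors the style of RFC 7285 map resources).

```python
# Hypothetical "bottleneck-structure map" object, sitting conceptually
# next to an ALTO network map and cost map. All names are invented.

structure_map = {
    "meta": {"vtag": {"resource-id": "my-structure-map", "tag": "0"}},
    "bottleneck-structure": {
        # links with their capacities, as a network map exposes topology
        "links": {"l1": {"capacity-mbps": 100}, "l2": {"capacity-mbps": 25}},
        # which links each path traverses
        "paths": {"p1": ["l1"], "p2": ["l1", "l2"]},
        # one scalar per path: the available bandwidth mentioned earlier
        "path-metrics": {"p1": 87.5, "p2": 12.5},
    },
}
```

The per-path scalar is exactly the piece the distributed protocol would exchange between ALTO servers.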
J
The question now is: in typical networks there are multiple domains, so you have multiple autonomous systems, and each one could have an ALTO server, right. So how do you compute the bottleneck structure in a multi-domain environment? This would provide a solution. And then the requirements, at a very high level, just to start a conversation: requirement one is the capability to compute the bottleneck structures, which is actually the same requirement as in the previous draft.
J
It just says that now we have to have a way to compute the bottleneck structure, basically. And requirement number two is the requirement for two autonomous systems, or their ALTO servers, to be able to communicate and exchange this path-metric dictionary. So this would be sort of a new requirement.
J
As was mentioned this morning, there is no server-to-server communication RFC right now for ALTO, but this would be a use case to help that potential conversation about how ALTO servers communicate with each other and what kind of information they could be sharing. The path-metric information from the bottleneck-structure computation could be one of the use cases. I think that covers this part.
A
Is this distributed border protocol related to BGP, or to the ALTO protocol?
J
Yeah, it's similar to BGP. BGP is an algorithm whose quality is that, with local information, you can converge to the globally optimal routing, right, just by sharing some messages between neighbors.
J
Basically, this is the same concept, but in this case what we're sharing is not just topology information, whereas bottleneck structures take into account traffic engineering information, because it's taking the flows that are running on the network and providing this path metric, the available capacity. In terms of implementation, it could run in parallel to BGP, it could run as part of ALTO, or it could run separately.
A
Maybe we can have some discussion after the video. Okay, next slide.

M
So far we have proposed the requirements and framework drafts listed here. This slide illustrates the background information and our considerations, which begin with the challenges of the current network. Next part, please.
M
The explosion of access to the cloud is considered to be an inevitable trend. As depicted in this figure, the network domain from terminal to cloud is divided into several sections, and the network infrastructure in different sections may vary from one to another, which results in distinctions in network capabilities.
M
Applications have diversified requirements for latency and bandwidth. In this figure, applications A, B and C, shown in different colors, have different requirements. But in conventional networks, the details, including per-section bandwidths within each domain, are concealed; the capabilities of the network remain invisible, so differentiated services are not provided, and applications with various requirements cannot be distinguished and served accordingly.
M
For the flows in this illustration, the bandwidth of the overall path is constrained by its bottleneck section, so resources in the other sections are wasted and the overall utilization is low.
M
Besides bandwidth, the network has also been unveiled with various other capabilities, including deterministic quality and intrinsic security, which can be delivered as services. Our framework is aiming to practice the concept of NaaS, namely network as a service, and allows applications, cloud, terminal and CPE to subscribe to corresponding customized network services. Next slide, please.
M
In the framework, the network controller tracks the running status of the network and abstracts network capabilities by extracting key attributes. A distributed database is introduced which ensures strong consistency, and a typical pub/sub mechanism is applied. Capabilities can be accessed in a key-value scheme, and a standard schema template file is utilized for the descriptions, with the cloud controller in a subscriber architecture.
M
Subscribers further obtain the updated information of services with a watch mechanism, which enhances the efficiency of information advertisement. With the knowledge of network capabilities, path calculation can be performed and services can be re-orchestrated and enforced with specific policies. Also, with database clients watching the updated information, path degradation, for instance, can be reflected, and the relevant traffic can be promptly redirected. Next slide, please.
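The key-value store with a watch mechanism described above is a common pattern, and a rough sketch may help fix the idea: capabilities live under keys, and clients register watchers that fire when an entry changes (for example a degraded link). This is generic pub/sub code for illustration, not the draft's actual schema; the key names are invented.

```python
# Minimal key-value store with watch callbacks: a put() notifies every
# watcher registered on that key with the old and new values.

class CapabilityStore:
    def __init__(self):
        self._data, self._watchers = {}, {}

    def put(self, key, value):
        old = self._data.get(key)
        self._data[key] = value
        for cb in self._watchers.get(key, []):
            cb(key, old, value)           # push the change to subscribers

    def watch(self, key, callback):
        self._watchers.setdefault(key, []).append(callback)

store = CapabilityStore()
events = []
store.watch("link/l2/bandwidth-mbps",
            lambda k, old, new: events.append((k, old, new)))
store.put("link/l2/bandwidth-mbps", 25)   # initial advertisement
store.put("link/l2/bandwidth-mbps", 10)   # degradation is reflected
```

A subscriber seeing the second event could trigger the re-orchestration or traffic redirection mentioned in the talk.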
M
To illustrate the working process, we present a data instance. Nodes A, B, C and D construct part of the physical topology of the network. Resources among them are abstracted in the form of two kinds of virtual links, which are used as substitutions for the original links, and unique logical topologies are perceived by different cloud applications. For example, the path from A to D perceived by cloud A includes two segment lists, while cloud B perceives only a single segment list.
M
For logical topologies with identical paths, resources are reserved exclusively. Next slide, please. In particular, we define the two kinds of virtual links. The former is a virtual version of a physical link, which can be identified by the local and remote node IDs, interface indexes and other parameters of capabilities. Similarly, the latter represents a virtual tunnel, taking segment routing over IPv6 as an example; typical attributes include a logical ID, node descriptors, maximum reservable bandwidth and binding SID. To facilitate path calculation, a logical ID is defined here to identify the virtual links.
M
More refinements of the draft are expected in the future, and we are looking forward to cooperating with the working group and with experts who are concerned with this issue. Security and service-affinity issues will also be considered in the future. That is all for this presentation; thank you all.
N
I think the presentation is very wonderful, and I have two comments. The first one is about the comparison with APN, which is application-aware networking. I think the use case in this proposal may be similar to the use case in APN, and it makes sense to analyze what the differences between them are. The second question is about computing-aware networking, which is called CAN; CAN also has the use case of the integration of the cloud and the network.
M
Okay, thanks, and let me answer the questions. As for APN and CAN, in our perception they share similar or identical use cases addressing the future scenario of the convergence of the network and the cloud, but we have some different solutions. APN and CAN focus on layer 3, that is, the network layer: they need extra extension headers to carry the identifiers of applications into the network domain, and the forwarding behavior could be instructed by some mapping relationships, but security is a concern there.
A
Okay, could you try to make your answer short?
L
This is Luis Contreras from Telefónica. One question: you are characterizing the internals of the data center, and you have included delay as a metric. But the delay in the internals of the data center, as you reflected, is in terms of microseconds. Taking these microseconds into the overall computation of the delay introduced by the network, which will be on the order of milliseconds, is probably not relevant. So the question is whether you consider this delay metric relevant inside the data center.
O
Can you hear me? Okay, great. I think I can go directly to the updated parts. So, the next slide, please; next slide.
O
Yes, for MoWIE we have some standardization considerations in 3GPP. In Release 17, the focus is on enhancements for interactive services, including cloud gaming, XR and so on, and in Release 18 there is an item which is called XRM.
O
In this slide, we have some substantial updates based on comments from other experts during the IETF 114 meeting. We have some further clarifications: for example, we added some motivations for predicting the network conditions, and we added some related work relevant to 3GPP and 5G which can be utilized by the MoWIE framework.
O
So I think the main point here I would like to illustrate is that we give a framework for the convergence of the 5G network architecture and IETF ALTO. One important point is that the ALTO client can act as an AF, or be implemented on the functional model of the AF.
O
There are some implementations. Next slide, please. Here you can see that the AF, or here the ALTO client, can receive some network information through MoWIE, by step one and step two.
O
Another client can also receive some ALTO information from the ALTO servers through ALTO server discovery, so the ALTO clients can support application adaptation using this exposed information.
O
To conclude one point: the MoWIE architecture has been proposed and refined in several versions, and the convergence with the ALTO architecture is proposed in this version. So I would like to ask you if there are any suggestions for this work, or on how to promote this work to a charter item. That's all; thank you.
A
Thanks for your presentation. Actually, this is not a new work, so just one quick question: do you have an implementation of MoWIE? The second one is: how do you collaborate with 3GPP, and is there any feedback from 3GPP? Thank you.
O
Yeah, actually we already have some analysis on this topic, for example how the NEF or AF relate to this work. I think the main point is that we can have more discussion if this work can be promoted to a charter item.
A
Okay, I think we can wrap up now. We can take it to the list and see how we can move forward. By the way, I know there's a sign-in sheet for all the participants who attend in person. Any last comments?
D
I'm from Fujitsu Research of Europe, and I basically have a quick comment. I had the chance to discuss with different working-group members, and they found this topic very interesting. In our work at Fujitsu we're trying to focus on network trust, and we have already found some areas where we can collaborate. Hopefully, after this meeting, we will try to join the discussion and maybe find specific use cases where we can implement network trust and help move ALTO forward in the future.
A
Thank you for your interest. I suggest you write an email to the list that summarizes what your focus is, to get more feedback, so we can see how to move on. Definitely, for the multi-domain setting the security issue is a big issue, so we need to have more discussion on this.
D
To give you a quick heads-up: in maybe a week or two we do have a white paper coming. Once we have published that white paper I will share it with the group, so it will better explain our interest in this topic and how we can move together, hopefully by the Yokohama meeting, because Fujitsu is also participating in the organization there. So we'll try to prepare something for that meeting. Okay.