From YouTube: IETF106-ALTO-20191121-1550
Description
ALTO meeting session at IETF106
2019/11/21 1550
https://datatracker.ietf.org/meeting/106/proceedings/
B
The problem is that the authors have relaxed in finalizing their documents, for example the SSE draft. So, as a solution, the chairs will divide the remaining drafts between them and follow up with authors and editors to close them out by the next IETF in Vancouver. The agenda is as follows: reflecting the priorities, we will have three full presentations relating to working group items, so performance metrics, unified properties, path vector, and footprint and capabilities, and we will also have two presentations on personal explorations.
B
So as for the working group progress since the last meeting in Montreal: we have two working group items that are progressing well, and the SSE draft is basically done; the second working group last call is over, and all older working group items are very close to being finalized. In more detail, the cross-domain server discovery draft is now in the RFC Editor queue, and the cost calendar is in IESG processing.
B
As for SSE, the old working group last call had expired, so another one was issued about a week ago and it will expire on December 8th. We are right now looking for two volunteers to read the draft; I believe we had already identified some during the last meeting. So please send us a notice, or if you are in the room and volunteer to review it, please let us know. As for the CDNI FCI draft, a new version has been published.
B
It's ready, but it really needs feedback from the CDNI working group, so a message was sent out to the CDNI working group to review that document, with a tentative deadline of December 15th, and Jensen will present it today. For performance metrics, a new version was published; Richard will present it today as well, and we'll see if it's ready for working group last call. Unified properties and path vector progress as a bundle, so new versions have been published as well.
B
Kai will present path vector, unified properties will also be presented, and we'll see how far they are from working group last call. Last but not least, we had yesterday morning a side meeting on application and network integration, which was organized in two sessions: the first session with presentations and the other one with discussion. The purpose was to study how application and network can integrate, and to perform a gap analysis of what is missing on either side, and how this can help define some work for ALTO, but also beyond ALTO.
B
So the agenda there was as follows. We had four presentations: one by Tencent on network-aware 5G cloud interactive services; another one by Telefónica on the integration of the Telefónica CDN with the transport network; another one by Huawei, which was about application-aware networking, whereas ALTO is more about network-aware applications; and finally we had a presentation by Richard on application and network integration: possibilities, challenges and next steps.
L
Thank you. So, okay, I'll talk about the updates on the ALTO performance metrics document. This is one document on which we got a lot of feedback and which I think we are wrapping up. I think we made pretty good progress, and hopefully by this time we'll get the feedback from the working group and then we can really wrap up. Oh, I think we're missing the top of the slide; here is the issue with how the presentation displays. Do you know how to fix it?
L
Great, thank you. So basically I'll first talk about the updates, what we did from version six to seven, and the changes from seven to eight. I'll then talk about the single remaining issue, which I think we know how to resolve but want to check with the working group, and then I'll talk about the plan for the next step. Okay, so let me first talk about the changes that we made from version seven to version eight, essentially between the last two IETFs, and remind people of exactly why we're making the changes.
L
There was a very interesting conversation during the IETF last time, and essentially at that time the key decision point was: do we want the ALTO performance metrics to consist of, or be based on, only existing metrics defined by IPPM? We got feedback, and people from IPPM reviewed it and gave very incisive opinions, and so on. Just to refresh your memory a little bit: on the one side are the ALTO performance metrics, for example latency, and so on.
L
So essentially we have all our metrics over there, the ALTO metrics, and the right-hand side would be all the metrics defined by IPPM. Those are quite detailed, for example round-trip delay, active, IP-UDP, depending on which one you are seeing, in which section, and so on. There was quite an intensive discussion about this topic, and I think eventually it was made quite clear, in the session and in the feedback afterwards, that the ALTO metrics are really guidance.
L
An operator is offering a service-level agreement, and can issue, for example, a monthly estimation and so on. Oftentimes you don't update it very frequently, certainly not at as high a frequency as IPPM would, but we should have multiple source types, for example service-level agreement and estimation. So how do we solve this issue?
L
Remember, at the IETF the conclusion was that the IPPM basis can be optional, given all these requirements. So what we did, therefore, was this: the cost type, which is a data type in our encoding, now includes an optional extension field called "cost-source". Basically the document defines this one as the source, and essentially we define an optional field over there. And then what happens is that a cost-source value should be registered through an ALTO extension registry.
L
So that's the second requirement: essentially, in the IANA section we introduced the optional field and made it point to a registry. Right now one registered value is estimation and one is SLA. So now we can really solve the problem with this one, and life becomes easy. So, the next change we made:
L
It was also pretty simple and straightforward, because we have all the examples, and what they really are is guidance in the IPPM sense. And here, I guess, is what really makes it clear: we basically said we keep the cost-source field, but we make estimation the default value. For example, we did not even modify all the examples; we just say, okay, for all the examples, because there is no optional cost-source extension field, the cost source is really the default. So we just added a comment to every single example saying: remember, the cost type does not include the cost-source field, so the value should essentially be interpreted as an estimation. That solves this issue. Okay, and given this one, given that we no longer depend on IPPM, we also made a second change, which I think I should also clarify: we no longer base all these definitions on IPPM, and we now introduce a much, much simpler representation of the ALTO cost values, as we can see here.
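The default rule just described, that a cost type without the optional cost-source field is read as an estimation, can be sketched in a few lines. This is only a sketch: the member name "cost-source" and the values follow the discussion here, but the exact JSON names are whatever the draft finally registers.

```python
# Sketch of the default-to-estimation rule for the optional cost-source field.
# Field and value names ("cost-source", "estimation", "sla") are assumptions
# drawn from this discussion, not a normative encoding.

def effective_cost_source(cost_type: dict) -> str:
    """Cost types lacking the optional cost-source field are read as estimation."""
    return cost_type.get("cost-source", "estimation")

# A legacy cost type (no cost-source) and a new one with an explicit source.
legacy = {"cost-mode": "numerical", "cost-metric": "routingcost"}
with_sla = {"cost-mode": "numerical", "cost-metric": "delay-ow", "cost-source": "sla"}
```

With this rule, none of the existing examples in the document need to change: their cost types simply resolve to "estimation".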
L
L
Okay, the only potential complexity introduced by the cost-source optional field is backward compatibility, and here is the issue, which we described in the document. Consider an information resource directory that defines two cost types with the same cost mode and cost metric, one with cost-source being estimation and the other one being SLA. An old ALTO client wouldn't know which one is which.
L
Is it really an estimation, or is it really SLA? Because it's an optional field, a client that doesn't understand it can, per the definition, just skip this particular optional field, and that can potentially be an issue. Now, our solution: the only metric defined in the standards-based ALTO protocol, the only metric defined there, was routingcost, and we made it explicit that this one can only be estimation. So therefore, if you use a new cost metric, then for the new cost metric you...
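The backward-compatibility rule just stated, that routingcost from the base protocol is pinned to estimation, could be checked mechanically over an IRD's cost types. A minimal sketch, with illustrative entry names and the same assumed field names as above, not taken from the draft:

```python
# Sketch: verify the "routingcost can only be estimation" rule over the cost
# types advertised in an IRD. Entry and field names here are illustrative.

def check_ird_cost_types(cost_types: dict) -> list:
    """Return the names of entries that violate the routingcost rule."""
    bad = []
    for name, ct in cost_types.items():
        if (ct.get("cost-metric") == "routingcost"
                and ct.get("cost-source", "estimation") != "estimation"):
            bad.append(name)
    return bad
```

An old client skipping the optional field then cannot be misled about routingcost, since only one interpretation is allowed.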
L
Otherwise, everything is just fine, so I think that's the only issue we tried to solve. And now let me talk about the only remaining issue here, which turns out to be very simple. At the moment we have defined two cost-source types: one is SLA, one is estimation. What was mentioned last time, during the IETF 105 meeting, was that maybe we should really have a third type, called the nominal type. What is the nominal type? For example, for bandwidth:
L
For example, capacity means that's really my link capacity according to my configuration, so the point is that it's a nominal value, and therefore it's not really an estimation. One view is that everything from the SLA is good enough and we do not really need to define nominal, but there was some other discussion suggesting that maybe we really should introduce nominal. Essentially it is a very, very simple change: we add one value. And for most cases, for example link capacity, which is probably the most relevant one, you define essentially a fixed value, and that's...
L
That's what capacity is. So that's the only remaining issue, on which we want to get people's opinions, if any. And here, for the next step, it turns out to be pretty simple: number one is to ask for feedback on the remaining item; we will also really talk to some people from IPPM; and then what we'll do is submit an update, if any changes are needed.
N
Okay, thank you. Thank you, Richard, for taking care of this work. Actually, one comment on this: you mentioned one type where you introduce this estimation, and I want to make sure it is a default behavior, because in some examples you actually make the comment that if it doesn't include a cost-source, that indicates it is an estimation. Does that mean this estimation type is kind of a default type?
L
I think that's because an SLA really makes a guarantee, right? Typically you see all the websites where they really announce SLAs. For estimation, for the example, we talked to several people about what is typical: it's really best effort, so it's an estimation. Let me show over here; that's a good example. So typically, the publisher of this information may find it hard to use it precisely.
L
Essentially they really, say, need to measure across all the sites, you don't have fresh data, and so on, so it's an estimation. I think this is the estimation case: they update it every month or every once in a while. We don't want to require an update on every change, so we continue to really treat it as an estimation, not nominal.
L
What we want to say is essentially that it's a value which is, for example, like a normal value that is expected to be, say, 10 gig, but the real value may be below that. If it is really not right, that is something you should really take into account, because it is only guidance and there's no guarantee about that. Okay.
O
The discussion we had last time was that there were some use cases where people actually want to use this information on a very small time scale, and then it would be important to actually understand how it was measured, what the time scale is for the measurement, and so on, and that has, like, a lot of problems. And when I'm looking at this slide right now, that also doesn't seem to be the intention here. So does this draft say anything about this, about time scales?
L
Right, so the way we want to do it is the following. Remember, in the cost type system we have a cost mode, a cost metric and now a cost source; we also have a description. What I would suggest at this moment is that in the cost-type description, which is optional, you give a link to describe
L
how we derive this value, and people can retrieve it. But we don't want to define, to go into, the details, for example defining figures such as measured every five minutes, in seconds, in milliseconds and so on; we don't want to go down that path. We want to give a coarse-grained, high-level description to use as essential guidance. Is that okay with you, or maybe...?
L
I think, typically, for the number one source, estimation for example, we probably expect people to examine how often it updates. Typically people update it, say, daily or monthly, I think; oftentimes it is like a measured average value, for example, and we are not redefining the degree of...
P
Hi, this is Luis from Telefónica. When you have mentioned the nominal value, you have referred to the bandwidth, which is clearly, let's say, an absolute value. But I was wondering if, for other cases, that could be the delay, for instance, we could manage percentiles, I mean not absolute values but, say, 95 percent of the time. So my question is if this also could be considered as a nominal value, or would be...
L
Yeah, I think that's the complexity of introducing a nominal value: what exactly it is, and what was normal versus abnormal. You're right: for a 95th percentile, up front I don't know how to specify it, and it would introduce complexity. Over-specifying is the complexity that we don't really want to get into, all these IPPM complexities.
L
So complex specifications; we don't need that kind of information at all. But the short answer is: I'll think about that; maybe you can send me some comments. At the moment, I think, for me, if we can give estimation and we can give, for example, some nominal bandwidth, normally that's all; we don't have to define more at this moment.
L
Okay, so at the last IETF the conclusion about this draft was that it was very complicated, and that the introduction of the features of unified properties was so complex and hard to understand; and this is actually not the case as long as it is clearly explained and introduced. So the major changes in the present version are on simplification and clarification of how to introduce the new concepts. The major changes are in sections 1, 2 and 3, and of course we started looking at some typos, but that was not the hardest part.
B
So in the meantime the draft grew even fatter, but we are now working on how to make it simple and concise. There is no change in the design, as I said, but there is a very strong need to simplify the text, so we opted for a didactic approach and progressively introduce the concepts with progressive complexity. Of course we need to do that for the rest of the document as well, and that requires substantial revisions. So, as a digest: the introduction has been made non-technical.
B
We removed any text that was already specifying things. The second section is now about the basic features of the new unified property extension, as they were defined in the first version of the document, and the third section is about advanced features for unified properties; this section explains the limitations, in some cases, of using the basic features.
B
It also explains the risk of ambiguous client requests and how to solve it. This ambiguity issue was around for more than a year and caused some head scratching, and we finally found a design that solves it. This is now the new outline for the first three sections. In Section 2, as you will see, we introduce the basic features: what is an entity (it generalizes endpoints), with examples; and what is an entity domain, defined as a set of entities of the same type, which is also defining the type of the entity domain.
B
It also defines an entity ID format, and we give examples. Section 2.4 covers entity properties, which can be network-aware, like an AS number, or network-agnostic, like a geographical region. Section 2.5 covers the new information resource, because that's one of the key points of the draft: we introduce a new media type and a new resource which is called the ALTO property map, which you can get via GET mode or via POST mode. Section 3 then introduces the advanced unified property extension features.
B
So we have section 3.1, which establishes the relation between an entity identifier and an entity domain, with some rules, and especially a rule saying that an entity points to a physical or logical object, but you can have two different entities that point to the same physical or logical object, because ALTO is like a map, and the objects in an ALTO map are identifiers. So you can have one endpoint that has an IPv4 address and an IPv6 address.
B
You have two entities, because different applications use different address formats, and so you need those two entities. Section 3.2 defines resource-specific entity domain names. Why do we need that? Because a PID, for example, an entity in a PID domain, has an identifier like
B
okay, here, such that you can use the same identifier in different network maps. So if you only use this identifier for a PID, you cannot distinguish in which information resource, in which property map, it is defined. This is why we compose the entity domain name with the resource where it has been defined. For example, here you want to define a PID entity domain, and if you want to distinguish between PID number 10 defined in two different network maps, you need to compose your entity domain name with the resource ID.
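The composition just described, resource ID plus domain type, can be sketched as follows. The separator characters and the example identifiers are illustrative assumptions for this sketch, not the draft's normative syntax.

```python
# Sketch of resource-specific entity domain names: "<resource-id>.<domain-type>",
# so that PID10 from two different network maps stays distinguishable.
# Separators and identifiers are illustrative, not the normative grammar.

def compose_domain(resource_id: str, domain_type: str) -> str:
    """Build a resource-specific entity domain name."""
    return f"{resource_id}.{domain_type}"

def parse_entity_id(entity_id: str):
    """Split e.g. 'net-map-1.pid:PID10' into (resource, domain type, identifier)."""
    domain, ident = entity_id.split(":", 1)
    resource, dtype = domain.rsplit(".", 1)
    return resource, dtype, ident
```

With this naming, "net-map-1.pid:PID10" and "net-map-2.pid:PID10" are two distinct entities even though the bare identifier "PID10" is the same.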
B
So here, if you use this and this, you can distinguish between the two entities, which otherwise you cannot. Section 3.3 does, similarly, the same: we have the similar issue for entity properties. If a property is defined relative to an information resource, the property value can change depending on this information resource. So if you query a property for an entity, for example defined in the IPv4 entity domain: this endpoint can have two PID properties in two different network maps, when two different network maps use this endpoint and define a PID property. So if you want to query the value of the PID property for this IPv4 entity, you need to specify in which network map you want it. The resource here is impacting the property value. You can look up the text, and the text explains how to do this.
B
No comments on section 3.4; 3.5 will be on the next slide. And 3.6, okay, yeah, the idea is that section 3.6 needs clarification; we are not done yet with the clarification. It's about IANA registration, and there is a discussion regarding mapping definitions. Section 3.5 actually solves that ambiguity issue.
B
We had the use case, more than one year ago, of the FCI map capability use case. In the initial design there was one member in the capabilities that was defining which entity domain types you can query, and the other main member was listing what properties you can query on these entity domains. And the problem is that actually you cannot query PID on every one of those entities.
B
PID does not exist on a country code; it does not exist on an ASN. How does the client figure that out? So the solution is now to introduce a new member in the capabilities, and this member, for each of the listed entity domain types, gives you the list of properties that you can query on this entity domain type, and it specifies whether it is resource-dependent or not. With this design, okay, you can end up with very long names, but it solves the problem in a clean way.
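The new capabilities member described here, mapping each entity domain type to the properties a client may query on it, could look roughly like this. The member name "mappings" and the domain and property names are illustrative assumptions for the sketch.

```python
# Sketch of per-domain-type property capabilities: for each entity domain,
# the properties a client may query there. Names are illustrative.
CAPABILITIES = {
    "mappings": {
        "net-map-1.pid": ["net-map-1.pid"],   # resource-specific entity domain
        "ipv4": ["net-map-1.pid", "asn"],     # endpoints: PID and ASN queryable
        "countrycode": ["iso-region"],        # no PID here: not queryable
    }
}

def queryable(caps: dict, domain: str, prop: str) -> bool:
    """A client checks the capabilities before asking, instead of failing."""
    return prop in caps["mappings"].get(domain, [])
```

This is exactly the check that resolves the FCI ambiguity: a client never asks for PID on a country-code domain, because the capabilities say it is not there.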
B
So of course we really need to simplify the text, but basically it appears to be pretty clearly defined now. There is other work we started on the illustrative sections, as some sections do not provide any protocol specification; they are meant to explain the design, but they introduce complexity and notions that are otherwise not useful to the implementers.
B
Given that RFCs are meant for implementers, we also want to clarify this. So the next steps will be to fix typos and errors that I detected right after sending the new version, and to continue the clarification and cleanup, and this will require substantial work. We also need to make a last check on the IANA section, regarding how we define mappings and the many things that have been added in this section, and once we are done, we will propose it for working group last call.
L
One reason I think the draft got pretty long was because of the large number of examples. So when you say you want to clean up, do you mean to really clarify, or to remove examples? I just want to get a little bit of a sense of what you really mean. Clarification, of course, always makes sense, but when you say cleaning up, which direction might you be pursuing?
B
Definitely, as far as examples are concerned, I rather tend to be willing to add examples. We need to add examples on abstracted network elements; we need to add examples on FCI capabilities, because they were the ones that caused, that actually helped us identify, the ambiguity issue and led us to design this resource dependency. And I'm more thinking about the text, where it is just the wording that can be really simplified.
Q
I'm Kai, and I'm here to talk about the ALTO path vector extension. For those who are familiar, and not familiar, with the path vector extension, I'd like to give a quick summary of the current status of the draft. First, what is the motivation behind this draft, what kind of new features does it provide? In addition to the cost metrics between endpoints, we also want to reveal some internal structures and the detailed properties associated with them.
Q
These internal structures define the ISP's point of view of the paths between, for example, source and destination pairs, and an example is given in the draft. The draft includes the motivations, security considerations, and so on. Recently we also received some inputs from other working groups, for example on 5G UPF (user plane functions), and also mobile edge computing, and probably service edges as well; they might benefit from this information we provide. And why is this extension essential to the ALTO framework? There are two reasons. First, such information is useful.
Q
So how does this extension provide such information? To represent the internal structures, we introduce a concept called abstract elements, and we will give more details later about why we do this. Then, for the detailed property information, we reuse the unified property map for these abstract elements; and to represent the paths between endpoints, especially the potential communications, we reuse the ALTO base protocol.
Q
Basically, the cost map and endpoint cost services, which use source and destination pairs to represent potential communications between hosts. So what problems do we need to solve in this draft? There are two parts to the problem. First, we need to consider some privacy concerns, and then we need to determine, basically this relates to how we determine, the representation of the internal structures. First, of course, we don't want to expose the physical properties of the network elements, so we want these elements to be abstract. And then there's the question of:
Q
are these internal structures persistent in the network, or can they be dynamic and constructed on demand? Our current decision is that, first, we want to make these network elements abstract; and then, right now, we are making their scope basically the query: the identifiers for the elements are assumed to be temporary, valid within your query, but we added a property to optionally expose persistent entities in the network. And we also designed this into the protocol.
Q
We also consider some performance issues that may arise when developers are building their servers or clients. One problem is scalability, and also consistency, as well as the pattern of work, because essentially we are now requesting information from two resources in ALTO. For the path vector, for the correlation with the application, we need the resource of the cost map or the endpoint cost service, but for the properties we want to use the unified property map. So the problem is, we are actually now requesting two resources.
Q
So do we want to request these two resources in a single query, or in two separate, two consecutive, queries? That brings us to the problem of one-round communication versus two-round communication. And there's also implementation complexity: basically, when you want to upgrade your ALTO implementation to support the path vector extension, do you want to use a new message format, or do you want to reuse the old format that is already supported in the ALTO base protocol?
Q
Our decision is to use one-round communication with multiple responses, because if we used two-round communication, then the server would have to keep track of the first request before it could provide the properties for the second request. That means you need to cache the first request before you can return the result of the second request, and that could create a big state problem on the server, and also a problem with correlating the multiple responses.
Q
So that means we use one-round communication, and then we'll actually reuse a format that we already have for the protocol. So the decision is to make the two results into two different parts of a single message, and these two parts are included in one multipart response. And now a quick summary of what we did since the last IETF, since IETF 105.
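The one-round, multipart decision just described can be sketched with a plain MIME parser: a single response carries a cost-map part (the arrays of abstract elements per path) and a property-map part for those elements. The boundary string, media type names and JSON bodies below are illustrative assumptions for the sketch, not the draft's normative encoding.

```python
# Sketch: split one multipart path-vector response into its two parts,
# an ALTO cost map and a unified property map for the abstract elements.
# Boundary, media types and JSON payloads are illustrative.
import json
from email import message_from_string

RESPONSE = """\
Content-Type: multipart/related; boundary="pv"

--pv
Content-Type: application/alto-costmap+json

{"cost-map": {"PID1": {"PID2": ["ane:L1", "ane:L2"]}}}
--pv
Content-Type: application/alto-propmap+json

{"property-map": {"ane:L1": {"max-reservable-bandwidth": 10000000}}}
--pv--
"""

def split_path_vector(raw: str):
    """Return (cost_map, property_map) JSON objects from one multipart reply."""
    msg = message_from_string(raw)
    parts = {p.get_content_type(): json.loads(p.get_payload())
             for p in msg.walk() if not p.is_multipart()}
    return (parts["application/alto-costmap+json"],
            parts["application/alto-propmap+json"])

cost_map, prop_map = split_path_vector(RESPONSE)
```

The client makes one request and gets both the path (as element arrays) and the element properties back together, so the server keeps no cross-request state.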
Q
Basically, we did a kind of major revision. First, we finalized the specification for the cost type extension: for the extension of the cost type, we essentially introduced a new cost type, where the cost mode is "array" and the cost metric is "ane-path". We also clarified the property negotiation process, because before version 08 there was no explicit explanation of how the property negotiation happens between a client and a server, and we basically clarified that.
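The new cost type mentioned here (cost mode "array", cost metric "ane-path") would sit among the IRD's advertised cost types, so a client can detect path-vector support by scanning them. A minimal sketch, with illustrative entry names:

```python
# Sketch: detect the path-vector cost type (mode "array", metric "ane-path")
# among an IRD's cost types. Entry names are illustrative, not from the draft.
PV_IRD_COST_TYPES = {
    "path-vector": {"cost-mode": "array", "cost-metric": "ane-path"},
    "num-routing": {"cost-mode": "numerical", "cost-metric": "routingcost"},
}

def supports_path_vector(cost_types: dict) -> bool:
    """True if any advertised cost type is the path-vector one."""
    return any(ct.get("cost-mode") == "array"
               and ct.get("cost-metric") == "ane-path"
               for ct in cost_types.values())
```

A legacy server advertises no such pair, so the same check doubles as graceful fallback for clients.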
Q
We added a new section to explicitly specify the negotiation process of the properties for the abstract elements; we also introduced a persistent-entity property as an initial registry entry for the element properties; and we clarified the part resource ID in the multipart message, because this is actually related to the integration with incremental updates. So in the last revision it is synchronized with SSE draft version 16, and I recently checked the status of SSE and I see there's a new version popping up.
Q
So this is actually a remaining issue that we need to resolve in the next revision. Also, before version 08 we had left the capability of cost calendar for future work, and in the last revision we actually proposed solutions for the cost calendar capability problem. And so around IETF 105: basically, before IETF 105 we did a lot of work, and after it we had a minor revision.
Q
That's the latest version, and we emphasize that the abstract elements can be generated dynamically, per query, and we highlight the benefits of this decision in multiple places in the draft. And here I'm going to talk about some remaining issues. First, as was mentioned in the chairs' slides, there's a dependency between the path vector draft and the unified property draft, and I have identified three dependencies: first, there is the terminology dependency; also, the format of the property map part basically reuses the response data format from the unified property draft; and also, oh:
Q
we need to register some property domains; sorry, there's a typo, that should be entity domains. Entity domains and entity properties should be registered using the mechanism defined in the unified property draft, and currently it is synchronized with unified property version 08; I think there is a newer version.
Q
Okay, and also there's a dependency on the SSE draft: basically, SSE version 17 includes a section saying how we should handle multipart messages using SSE, so this part should now be removed from the current path vector extension. And I also identified a terminology inconsistency between these two drafts: the same object is called "part resource ID" in path vector and is called "content ID" in SSE, and we need to resolve this issue in the next revision. And here's a revision plan. First:
Q
in terms of writing, we need to fix the dependency issues and improve the quality of the writing, and to improve the quality of the writing we need some feedback from the working group as well. And another revision is based on the discussions and also some inputs from other IETF working groups.
Q
So in our current design, the abstract elements are designed to be homogeneous, so we provide the same properties for all the elements returned by the path vector extension. But there is actually a growing demand that we might want information about heterogeneous elements. For example, in the side-meeting talks we had yesterday, a Chinese company, Tencent, is building a gaming platform which might use this information; they are collaborating with some ISPs to get it.
Q
So what we need to do is, first, define the entity types here. The structure for this is basically that we first need the ability to identify what types of elements we need, and what should be the properties associated with the different types of elements; and in order to do that, the property negotiation process should be slightly tweaked in the current draft. But this is actually not a big deal, because the capabilities are listing them.
Q
So right now we are using a more simplified version, because previously we only had one type of element, and right now, with the extension, we actually have multiple types. And that is actually good news, because the different elements can be considered as different types of entities. That means we can reuse the capabilities from the unified property map. So actually I think this is good news for us, because that basically means the unified property draft and the path vector draft have more in common:
Q
we don't have to build a new mechanism to expose the properties for different entities. And after we adopt this change, what follows is that we might need to identify more element types, maybe working with other working groups, for example working groups related to the mobile edge, or 5G and MEC, and then we need to show how to relate each element type and its properties to the unified property map. But the part included in "what follows" is not part of the path vector extension.
Q
First, I think we need to make a revision to adopt the changes that we mentioned earlier, and we're going to set the milestone for working group last call. I was actually more conservative, and I think we could do this by IETF 108, but I think the working group chairs would like to push us to finish it at the next one, basically in Vancouver, and we also need a couple of reviews. Yeah, that's all. Thank you.
L
Great. So can I ask some questions? One is simple and one is maybe more complex. The simple one is the following: for the performance metrics we have introduced your cost-source field, with SLA and estimation. Right now there's no dependency, but actually I think the path vector will most likely use a cost metric of bandwidth, available bandwidth.
L
That's probably one of the most typical use cases — available bandwidth, or loss — because I think that's one of the major use cases, being used in so many places. So very likely that particular metric, the bandwidth, would be defined in the performance metrics document, which means now you actually could have a dependency: one option is you define and measure it yourself, and one is you use a metric defined in the performance metrics document, and then you're going to create a dependency, very likely.
L
My understanding would be that the performance metric document will probably go first and probably become an RFC. So what do you think about how to solve this dependency issue? And for the other one — let me be very specific. Number one is: do you think, for example, that the bandwidth would be SLA, or estimation, or do you need something like nominal, or some different type of cost source?
Q
I think, for example, in the example we have in the path vector draft, we are probably using either SLA or nominal, basically with some guarantees from the ISP. But I don't think that part — basically whether it's SLA or estimation or nominal — should be included as part of the path vector draft, because the path vector draft does not specify what the specific metrics or properties should be.
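To make the question concrete, a cost-type entry carrying the proposed cost-source field might look roughly like this; the field names follow the discussion in the meeting, not a published specification.

```python
import json

# Hypothetical IRD cost-type entry for an available-bandwidth metric,
# annotated with the proposed "cost-source" field.  The three values
# discussed are "sla", "estimation" and "nominal"; whether the path
# vector draft should pin one down is exactly the open question here.
cost_type = {
    "cost-metric": "bw-available",
    "cost-mode": "numerical",
    "cost-source": "sla",  # or "estimation" / "nominal"
}
print(json.dumps(cost_type))
```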
L
The property drafts — but this one really comes from a cost metric, from cost types, so it's separate, and therefore it's probably not handled by the unified property map; most likely it couples with the performance metrics. So of course we will probably take this offline, but on the high-level question: potentially there is some feedback which we can give to the performance metrics document — that's really a new type of guidance — and so, what exactly?
L
Really, it is — I think that's the one you should settle; I realized maybe there is such a dependency, and I think that's one. And number two is about the heterogeneity: if you really want the extension to handle heterogeneous ANEs. That's actually — number one, of course, it's super cool, right? Because even right now, today, in Denver, Colorado, I think there were these Caltech people doing demos.
L
I think one of them was using this — like ANEs and the path vector — and they're doing all the demos, with the major use case being big data analytics. So therefore I think, from that perspective, it's a most useful thing. But then, if you do this extension, like you mentioned, you potentially can add a lot. So do you really envision the final version would include all these heterogeneous ANEs, or really a single, simple, most basic class of ANE?
Q
We probably need — first, we need to define it. When we design a protocol, we definitely should leave some space for future extensions, but I don't think we should, for example, consider every single type of ANE and put them all in this draft. In the extension we probably just define a very basic — basically an initial — registry for the ANEs.
Q
Basically, we probably need to consider the ANE as a family of entities, not a specific type of entity, and then we probably want to introduce only the very basic ANEs, with, for example, some simple properties, before we go on. And maybe later we may want to consider more complicated use cases such as MEC and UPF.
B
A very, very quick comment as a co-author, because I am very motivated by this heterogeneous stuff. The metrics that are conveyed by the ALTO performance metrics are metrics on paths, and the properties here are properties on network elements. So to me, the bandwidth in a performance metric is not the same as the bandwidth of some data center or element. And the other quick comment is that what makes this draft interesting, in my view, is that it has a design that opens the way to any type of abstracted network element.
L
Okay, I'll be quick, like I mentioned, and then I had some questions — I can be very quick. So I'm going to give an update on the CDNI FCI document, and this one is pretty simple now; it's a good collaboration between Yang and Kevin and Jon and also Jensen. So let me go through it — I probably can get it done in like five minutes. The changes are actually mostly textual edits, I think.
L
That's really mostly editorial and textual. In particular, most of the edits in the two versions are to be consistent in the use of terminology, plus a very thorough check of internal usage. Also, I think one reason why we encounter all these textual edits all the time lately is because of all the dependencies among the documents.
L
So therefore we need to be consistent with the dependencies — for example, using the same terms as the unified property document — and that's why we have all these textual edits. Okay, so the number one change we made, which was very simple, is the consistent use of terminology within the same document: we sometimes called it the CDNI FCI service and sometimes the CDNI FCI map service. Eventually we unified the terminology and made sure it's all essentially using one term.
L
So that's change number one. Then, of course, we also made it consistent with the other documents — the terminology used in the ALTO documents, such as the property map — we did the same thing there. So basically, I don't need to go through all the internal consistency changes; we also needed to be consistent with essentially the unified property document, for example the name change from "entity address" to "entity identifier".
L
We also went through the whole document to make sure we use the same terms. I don't think we really depend on that document, but that's essentially what we did — dual updates. For example, previously we said "ALTO domain registry" and right now it's called the "ALTO entity domain registry"; the term changed because we are chasing the dependencies.
L
We now use the full names — if there is any other clever way to do it, let us know. And we're waiting for some comments from the CDNI working group; once they give us their final comments and sign off on the other changes, then we essentially want to go to working group last call for this document. That's pretty much it. I know it's short, but we really don't need to change it too much.
P
Potentially, we will be able to benefit from the emerging computing capabilities, and so the idea would be to leverage those capabilities in order to help us identify the better edge for a given service. Here the point is to differentiate between the physical edge and the service edge: not all the services necessarily have to go close to the access; the proper edge for each service depends on the kind of service to be delivered.
P
Basically, the situation is that operators are now starting to deploy computing capabilities across the network: edge environments, more centralized data centers, large data centers close to the interconnection points, et cetera. These data centers all have different capabilities in terms of the environment itself — the size, the number of CPUs, memory, bandwidth, even for delivering the traffic. So the idea, the objective, would be to think of mechanisms for assisting the decision of which data center a given service is deployed in.
P
According to the restrictions on these services in terms of latency, bandwidth, et cetera — and we consider that ALTO could play a role in this. So basically the idea would be to incorporate into ALTO all the information related to the computing environments — CPUs, memory, storage — and combine that with the topological information from the network, in such a way that the ALTO client could request from the ALTO server information based on its needs for computing capabilities.
P
So typically these are structured in bundles of CPU, RAM and storage units, and we can see this in commercial examples like Amazon Web Services or Microsoft. There is another example that is being promoted, let's say, between the Linux Foundation and the GSMA, which is the Common NFV Infrastructure Telecom Taskforce, CNTT. In this particular case, which we have taken as an example for the draft, the different flavors or instances are characterized by five different items. The first is the type of instance.
P
So basically an instance can be characterized as basic, network intensive or compute intensive. Then there is the interface option, where basically one declares the bandwidth of the interface that is required for the function to be deployed; on top of that, the compute flavor, which basically refers to a combination of CPU, RAM and disk; and the bandwidth for the management interface of the instance.
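The flavor description above can be sketched as a small record type; the fields mirror the items the speaker lists, and all names and numbers below are illustrative, not taken from the CNTT catalog.

```python
from dataclasses import dataclass

# Illustrative sketch of a CNTT-style instance flavor: an instance
# type, the required interface bandwidth, a compute bundle of
# CPU/RAM/disk, and the management-interface bandwidth.
@dataclass
class Flavor:
    instance_type: str   # "basic", "network-intensive" or "compute-intensive"
    interface_mbps: int  # bandwidth required by the function's data interface
    vcpus: int           # compute bundle: CPU
    ram_gb: int          # compute bundle: RAM
    disk_gb: int         # compute bundle: disk
    mgmt_mbps: int       # management-interface bandwidth

small = Flavor("network-intensive", 10_000, 4, 8, 40, 100)
print(small)
```

Handling such bundles as single named flavors, rather than as free-form per-resource values, is exactly the abstraction the presenter argues is easier to expose through a property map.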
P
Optionally, there are storage extensions to request additional storage capacity, and optional hardware acceleration characteristics, in case there is a specific need for the application running on top of these computing capabilities regarding acceleration in the delivery of the traffic.
P
Here you can see more or less the kinds of flavors that are being proposed in this CNTT initiative, and you also have here the link for checking that. We have performed, in this draft, an initial mapping to the property map seen in ALTO. So this would be an initial exercise that we would like to develop and complete in future revisions of the document: basically to take this catalog as an example for exercising how ALTO could play a role in this part.
P
On the association of computing capabilities with network topology, we have identified so far three potential solutions. This does not mean there could not be others, for sure, but these are the potential solutions we have identified. One could be to leverage, and possibly extend, the service-function-aware network topology model that has been moved into the TEAS working group. In fact, this service-aware topology model already includes, in the current version, some information about the data center capabilities.
Let's
say
a
second
ocean
could
be
to
extend
the
BGP
LS
or
to
propose
to
stand
the
VPLS
RFC,
including
among
other
attributes,
the
information
of
the
compute
part,
and
a
third
option
could
be
there
to
combine
somehow
the
information
from
this
researcher
profiles
catalog
with
a
topological
information
by
leveraging
on
the
AP
prefixes
allocated
here
to
the
Gateway,
all
the
data
centers,
basically
populating
this
probably
doing
this
together
with
a
IP
Alton,
gave
me
so
very
next
steps.
We
are
considering
to
elaborate
more
on
the
mapping
exercise
to
the
property
property
maps
in
Alto.
N
P
I refer to compute as a general term, so it covers CPUs, memory, storage — the complete information. In the end, probably, we are trying to abstract this with the idea of bundles or flavors, which makes it easier to handle than the individual values of memory and storage. But this is something to explore. To come back to your question: the idea would be to address everything, not only CPUs but memory, storage and so on — so I refer to compute in a generic way.
N
P
This initiative tries to standardize the way in which the NFVI capabilities can be requested — to normalize not only the way they are requested, but also the way in which the NFVI capabilities can be somehow exposed, in such a way that the different network functions can be deployed in a similar way in different environments. So somehow we have this link with the NFVI stuff.
N
Just a reminder: there are some open issues in the service-aware topology model related to the NFVI. Actually, I have been following that discussion very closely, and it seems they need to, you know, talk together to find a joint way to address that open issue, so probably you need to consider this. Okay, thank you.
L
We can — yes, I think this is super interesting, and I do want to ask: will it only be the intradomain setting — is it going to be a single network — or eventually will this model involve multiple domains, meaning multiple autonomous systems? And then will the computational or storage resources be aggregated, or maybe, eventually, is there essentially only a single one so you don't worry about it? Eventually there are information aggregation issues.
P
The initial approach is single domain, sure. It could be applicable, I guess — I presume — to the multi-domain case; that should probably not be too complex. There will probably be issues of authentication, privacy, probably different levels of abstraction, maybe. I think it can be extended to multi-domain, but the initial approach is single domain only, to start with the easy part and maybe not to...
L
...complicate things, sure. Yeah — one reason why I ask is the following: several years ago some IBM guys were trying to use this, actually a map, to really do information aggregation in the IBM private cloud, and of course that was in a multi-domain setting, and it turns out the information aggregation turns out to be very, very tricky. So I can give you a very quick example.
L
If you compute latency, say from me to you to somebody else: five plus three equals eight — the addition is very easy, it just works. I add my cost from me to you, you add your cost from you to somebody else, we add them together, we get eight; everything seems to be okay. But somehow this computation has a very weird property — somehow the algebraic system seems to break — and, for example, the case which the IBM guys encountered was a very simple use case.
L
The case was, for example: I have two units of CPU, I got that information and I tell you; and you heard from Kai that he has two units of CPU — he also tells you — and you might then tell others that you have four units of CPU available. But the funny thing might be that my two units of CPU might be, for example, the same two units as Kai's — we are all pointing at Howie's.
L
We have two units and you got two units, and eventually you report four units — but actually, for example, if you count the CPUs, there are only two units. With simple addition, when you propagate it, somehow they cannot be distinguished; somehow the information aggregation becomes quite complex. That's why you surely will encounter this issue, and actually I don't know how the IBM guys eventually solved it, I think.
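The double-counting problem described above can be made concrete in a few lines: latency composes by simple addition, but resource counts do not, because two advertisements may refer to the same underlying units. The names are from the speaker's example.

```python
# Latency aggregates by addition: 5 ms + 3 ms = 8 ms, no ambiguity.
assert 5 + 3 == 8

# Resource counts are different.  Suppose both "me" and "Kai" advertise
# two CPU units, but both advertisements refer to the same physical
# units owned by "Howie".  Naive addition double-counts them.
advertised = {"me": 2, "kai": 2}
naive_total = sum(advertised.values())         # 4 units reported

actual_units = {"howie-cpu-0", "howie-cpu-1"}  # only 2 real units exist
print(naive_total, len(actual_units))          # 4 vs 2
```

Once the per-unit identities are dropped — as they are in an aggregated advertisement — the two cases cannot be told apart, which is exactly the aggregation trap the speaker describes.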
R
...from Sprint — so thank you for the presentation. I think it's very interesting and I support further work in this area. In addition to the properties of the data center, I'm wondering if we should also consider the backhaul, or the transport going into the data center, as a property, right?
P
For sure — so there is the backhaul part, and for sure we intend to, let's say, mix it all: the play with ALTO will help us, or allow us, to mix both worlds, let's say the network part together with the compute part, in a generic way, in such a way that we can take the best decision by looking at both things together at the same time. So my answer would be yes, and for the second part, for sure we will talk about that. Thank you. Okay, thank you.
P
Okay, so here the idea — the objective — is to present an idea, which would be to extend ALTO by using BGP communities. The BGP communities are a BGP attribute, basically commonly used for grouping destinations: you associate a number of prefixes to a community, you define this community, and you basically use the communities for policy purposes — for influencing the delivery of the traffic to certain prefixes. So this is basically the background.
P
These communities are represented as an integer number, 32 bits in length, basically including the information of the autonomous system. And, also interesting, these communities can be carried across autonomous systems, so somehow this could help us to support the multi-domain cases in the future as well. So the problem, when looking at this issue, was that we operators use the BGP communities extensively as a way of putting together some prefixes, some IP destinations.
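As a reminder of the encoding being discussed, a classic (RFC 1997) BGP community is a 32-bit integer whose high 16 bits conventionally carry an AS number and whose low 16 bits carry an operator-chosen value; the specific ASN and value below are made up for the example.

```python
def encode_community(asn: int, value: int) -> int:
    """Pack an RFC 1997 community: high 16 bits = ASN, low 16 bits = value."""
    assert 0 <= asn < 2**16 and 0 <= value < 2**16
    return (asn << 16) | value

def decode_community(community: int) -> tuple[int, int]:
    """Split a 32-bit community back into (ASN, value)."""
    return community >> 16, community & 0xFFFF

# Example: a private-range ASN tagging a group of prefixes with value 100.
c = encode_community(64512, 100)
print(f"{c:#010x}")  # -> 0xfc000064
assert decode_community(c) == (64512, 100)
```

One such community can stand in for an arbitrarily large set of prefixes, which is what makes it attractive as a query key toward an ALTO server.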
P
The ALTO protocol is based on IP prefixes, and the point here is that we usually use these communities at aggregation nodes. So basically, when we allocate IP prefixes to a PGW, a packet gateway — which is how we are identifying the users — we handle these IP prefixes with just a single identifier. So the point here would be, instead of going to individual IP prefixes, to play with the BGP communities in order to obtain the information — to retrieve the information from the ALTO server. Well, we see benefits in this.
P
One would probably be the reduction in the number of queries to the ALTO server: just working with BGP communities, we could address information for a number of prefixes at once. Another is to fully absorb the natural churn in prefixes, because with all the migration of users we need to move prefixes here and there, and sometimes this complicates, let's say, the management of the IP addressing.
P
...the addressing in the internals of the network. So going with BGP communities, somehow we are hiding this complexity, and we would keep the ALTO side more or less constant, without so many changes. So, regarding the next steps for this idea: it would be to elaborate more on the proposal. This is just to present you the...