From YouTube: IETF115-RTGWG-20221110-1300
Description: RTGWG meeting session at IETF 115, 2022/11/10 13:00
https://datatracker.ietf.org/meeting/115/proceedings/
A: Hi everyone, and welcome to the second Routing Working Group meeting session. Please make yourselves familiar with the Note Well: by participating in the IETF, you adhere to the policies that are in effect. So, the agenda for today: we've got an invited talk, which got the best paper award at SIGCOMM this year, on software-defined network assimilation, and thanks to Huangxun Chen for coming to talk to us at her first IETF. Please support her. Then we'll have 15 to 30 minutes on new routing architectural proposals.
A: So first of all, the detection items; we'll also have David. Thank you, David, for being here to talk about NVMe and what should and should not be done; we'll have a semi-formal five-minute presentation from David on that topic, and then there is some SPRING material as usual. So again, the fact that you see something here doesn't mean that it belongs in routing; we often offer people a place to present and then dispatch them to the right place.
B: For the first presentation, we are very happy to have the author who received this year's ACM SIGCOMM best paper award to give us a talk. A quick introduction: Dr. Huangxun Chen is a researcher at the Huawei Hong Kong Research Center. Before joining Huawei, she received her PhD degree in computer science and engineering from the Hong Kong University of Science and Technology in 2020. Her research interests include network configuration management, intelligent sensing, cyber-physical security, and their intersection with machine learning. So with that, let's welcome Dr. Chen.
C: Okay, can you hear me?
C: Okay, thank you. Thanks for the introduction, and good afternoon. This is Huangxun Chen, a researcher from the Huawei Hong Kong Research Center. Actually, this is my first time attending an IETF meeting; I'm so glad and honored to be invited here to share our recent work on network configuration management. This work was accepted at this year's SIGCOMM, and it is titled "Software-Defined Network Assimilation: Bridging the Last Mile towards Enterprise Network Configuration Management with NAssim."
C: This is collaborative work with Dr. Yukai Miao, Dr. Li Chen, Professor Haifeng Sun, Professor Hong Xu, Dr. Libin Liu, Dr. Gong Zhang, and Professor Wei Wang. Now let me begin my presentation. The network is one of the most important pieces of infrastructure supporting numerous applications. In a perfect world, all network devices would have homogeneous device models, which means that, given the same configuration, they would execute the same action; in that case, the network operator would be very happy.
C: So in common practice we have a software-defined controller situated between the upper-layer network functions and the heterogeneous network devices.
C
It's
desired
that
the
network,
the
controller,
can
present
a
logically
centralized
control
plan
to
config
multivendor
devices,
as
if
they
are
the
same.
Does
the
network
operator
will
Define
a
unified
model
at
the
controller
it's
often
stored
as
three
hierarchy
nodes
of
the
tree
denotes.
Some
configuration
attributes
such
as
IP
address
of
an
interface
named
of
a
SEO
policies
and
Etc
subtrees
will
generally
denote
a
group
of
relevant
attributes.
C
For
example,
some
attribute
for
specific
Network
protocol
and
the
network
operator
will
construct
this
unified
model
and
annotate
some
brief
context
for
attributes
and
then,
after
that,
they
will
go
to
understand
the
menus.
The
vendor
menus
and
file,
correct
commands
and
draft
and
validate
the
template
and
provide
the
mapping
rule
from
the
parameter
of
a
specific
vendor
command
to
the
attributes
in
the
unified
device
models,
and
then
they
should
repeat
the
same
process
again
for
a
new
device
model.
C
So
we
found
that
this
is
the
process
of
introducing
heterogeneous
Network
device
into
a
century
controlled
existing
self-defined
Network
and
we
Define
it
as
a
soft
defined
Network
or
simulation
sna
in
short,
and
we
identify
that.
The
key
problem
here
is
the
mismatch
between
two
configuration
models,
the
heterogeneous
device
model
and
the
unified
model
in
the
controller,
and
we
found
that
the
current
SMA
process
will
require
significant
human
efforts
to
bridge
them.
C: The second challenge is the errors and ambiguity in the manuals. After all, manuals are human-written documents, so it is inevitable to have mistakes and typos, and it is also impractical to audit a manual from the first page to the last; actually, some problems are hard to catch by human eyes. So handling this issue automatically is very critical for extracting a reliable device configuration model.
C
And
the
third
challenge
is
to
bridge
the
heterogeneity
between
configuration
models.
Actually,
the
configuration
language
is
designed
by
each
vendor
to
be
visibly
different
with
each
other
for
the
same
concept
or
intent.
Different
vendors
could
use
different
wordings
or
syntax,
but
due
to
the
sheer
number
of
CRI
commands
and
parameters,
handcrafting
mapping
is
very
tedious
and
error-prone.
Automating.
This
process
will
require
a
powerful
semantic
comprehension
model
to
understand
and
match
similar
network
configuration
context.
C: And we further designed a method to address the third challenge: we trained a semantic similarity inference model for the network configuration domain to output the mapping between parameters of different models. We also release a validated dataset of parsed manual corpora for future research.
C: Can you see the screen? I can flip the slides, but, oh, I see, it's a very slow response. Let me begin here. The key insight of the parser is actually this: despite the diverse styles of manuals, all manuals serve the same purpose, showing us how to configure the devices. They should cover the CLI commands supported by the devices, the function descriptions, the working views, the parameter descriptions, and some representative examples.
C: Sorry, can you see the screen now? Because I cannot see this screen, I cannot tell which slide you can see; it's a very slow response. But I think maybe it's better to let me control the slides; it's just that the response of the mouse here is a little bit delayed.
C
Oh,
just
let
me
continue
sorry
and,
and
so
we
designed
a
unified
format
here,
additionally
with
keys
and
value
type
constraints
as
showing
the
these
tables
and
for
mainstream
vendor.
We
build
a
password
to
cast
this
menu
into
this
vendor
independent
format.
So.
To
avoid
this
here
we
will
show
the
sample
Corpus
generated
by
our
passing
framework
and
actually
to
avoid
the
potential
bugs
in
the
developed
process.
We
will
adopt
some
test
driven
methodology
to
ensure
the
quality,
so
here
I
will
skip
this
part.
C
Please
refers
to
our
paper
for
more
technical
details,
so
in
this
way
we
actually
can
extract
all
essential
informations
from
the
menus
and
meanwhile
normalize
diverse
menu
style
into
a
unified
format
to
facilitate
the
following
processing.
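As a loose illustration of what such a vendor-independent record might look like, here is a minimal sketch; the field names and the sample entry are hypothetical, not the paper's actual schema:

```python
# Hypothetical sketch of a vendor-independent manual record, in the
# spirit of the unified format described above. Field names and the
# sample values are illustrative assumptions, not NAssim's real schema.
from dataclasses import dataclass, field

@dataclass
class Parameter:
    name: str          # parameter key, e.g. "as-number"
    value_type: str    # value-type constraint, e.g. "integer"
    value_range: str   # textual range constraint from the manual
    description: str   # parameter description text

@dataclass
class CommandEntry:
    syntax: str                      # CLI command template
    function: str                    # function description from the manual
    view: str                        # working view in which the command runs
    parameters: list = field(default_factory=list)
    examples: list = field(default_factory=list)

entry = CommandEntry(
    syntax="bgp <as-number>",
    function="Enables BGP and enters the BGP view.",
    view="system-view",
    parameters=[Parameter("as-number", "integer", "1-4294967295",
                          "Autonomous system number.")],
    examples=["bgp 100"],
)
```

Each parsed command thus carries its syntax, function description, working view, parameter descriptions, and examples, which are exactly the elements the talk says every manual should cover.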
So the second part is the validator. Here the key insight is that we want to extract a reliable device model, but the manuals are not fully reliable, so we identified two main types of errors and ambiguities, and the first type is syntactic ambiguities.
C: For example, this is a CLI command from a vendor manual, and it has an unpaired left bracket before the "remote-as" symbol. There are actually multiple potentially valid fixing options: we can remove the left bracket, or add a right bracket after the "remote-as" symbol, or add a right bracket at the end of the whole command, and choosing among them requires judgment from the experts. And the second type is hierarchy ambiguities, for example...
C: Yeah, okay, thanks, so let me continue. Despite all the potential errors and ambiguities I mentioned previously, we still want to derive a reliable device model from the manuals, so here we designed a multi-level validation scheme, and the first part works on the command level.
C: We want to fully audit the syntax of the CLI commands, so we first find the command conventions in the manual preamble and express them in the equivalent Backus-Naur form; in this way we can use a parser-generation tool to generate a CLI command syntax parser and call it to identify all syntactic errors. Then the experts can audit only the incorrect ones, in a more targeted and efficient way. And the second part works on the inter-command level.
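The paper's command-level audit uses a parser generated from BNF; as a much smaller sketch of the same idea, the toy check below only covers the bracket-pairing class of errors discussed in the "remote-as" example above:

```python
# Toy syntax check in the spirit of the command-level audit: flag
# unpaired square/round/curly brackets in a CLI command template.
# (The real validator uses a full parser generated from BNF; this
# only demonstrates the bracket-pairing error class.)
def bracket_errors(template: str) -> list:
    pairs = {")": "(", "]": "[", "}": "{"}
    stack, errors = [], []
    for i, ch in enumerate(template):
        if ch in "([{":
            stack.append((ch, i))
        elif ch in pairs:
            if stack and stack[-1][0] == pairs[ch]:
                stack.pop()
            else:
                errors.append((i, ch, "unpaired closing bracket"))
    # Anything left on the stack never got a closing bracket.
    errors.extend((i, ch, "unpaired opening bracket") for ch, i in stack)
    return errors

# The manual's faulty template with an unpaired "[" before "remote-as":
faulty = "peer <ip-address> [ remote-as <as-number>"
print(bracket_errors(faulty))   # reports one unpaired opening bracket
```

A generated parser would catch far more than bracket errors, but the flagged position is exactly what lets the expert audit only the incorrect commands.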
C
In
order
to
derive
this
tree
base
hierarchy,
which
is
implicit
in
menus,
we
actually
exploit
the
example
Snippets
in
the
menus.
It
actually
is,
example,
generally
demonstrate
an
instance
of
the
current
command
and
its
Parent
Command.
So
we
will
construct
one
model
to
get
their
relationship
on
CRI
template
level,
and
here
we
we
use
to
construct
a
graph
from
this
template
energy
with
the
CR
instance,
and
so
I
will
skip
the
algorithm
details.
C: So in this example we can infer that the CLI command template "bgp <as-number>" actually enters the BGP view, and the other commands with the same parent view can also reuse this derivation. In order to handle the hierarchy ambiguities, we quantify the certainty of each derivation, so as to facilitate the expert audit later. And the third part of the design works on the snippet level. Here we leverage the configuration files collected from the running devices: because they are collected from running devices, they have a correctness guarantee and can be used for validation. So for each configuration line in the configuration files, we can check whether its matched CLI templates follow the correct parent-child relationship in our derived hierarchy, and we record all the mismatched ones for the experts to audit later and correct our derived model.
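The snippet-level check just described can be sketched roughly as follows; the parent-child table, the template matching (simplified to a prefix test), and the indentation-based nesting are all illustrative assumptions, not the paper's actual algorithm:

```python
# Toy sketch of the snippet-level validation: given a derived
# parent-child hierarchy of CLI templates, walk a running-device
# configuration and record lines whose actual parent contradicts
# the derived hierarchy, for later expert audit.
derived_parent = {            # child template -> expected parent template
    "peer": "bgp",
    "network": "bgp",
    "ip address": "interface",
}

def audit_config(lines):
    """lines: (indent_level, text) pairs taken from a config file."""
    stack, mismatches = [], []
    for indent, text in lines:
        del stack[indent:]                 # pop back to the current nesting level
        template = next((t for t in derived_parent if text.startswith(t)), None)
        if template is not None:
            parent = stack[-1] if stack else None
            expected = derived_parent[template]
            if parent is None or not parent.startswith(expected):
                mismatches.append(text)    # contradicts the derived hierarchy
        stack.append(text)
    return mismatches

config = [(0, "bgp 100"), (1, "peer 10.0.0.1"),
          (0, "interface eth0"), (1, "network 10.0.0.0")]
print(audit_config(config))    # "network 10.0.0.0" sits under the wrong parent
```

Because the config came from a running device, a mismatch points at an error in the derived model rather than in the configuration, which is why these lines go to the expert.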
C: So let's take a quick look at the results of our automatic vendor device model construction phase. Our parser plus validator successfully constructed refined and validated vendor device models from four manuals of popular vendors, and we also identified more than 200 syntactic errors and ambiguities in the manuals. Please refer to our GitHub repo for more details. And the third designed component is the mapper.
C: To replace the existing tedious handcrafted mappings, we found that the key of SNA is to pair semantically similar configuration items. Since the parser and validator have obtained the device model with abundant semantic information from the manuals, and the parameters of the unified model are also enriched with context information assigned by the network operating experts, we wanted to design a mapper that can understand this contextual information by applying recent advances in natural language processing.
C: So let's take a look at how our resultant semantic comprehension model works. For a given configuration item, we want the model to find its semantically similar ones in the counterpart model, so we run the following process for each pair of them. First, we locate and extract their context information; depending on the amount of information available in each model, we can select a different number of context sequences for the vendor device model.
C: We found that the parameter names and descriptions, the corresponding CLI commands, the function descriptions, and the working views are very valuable for the mapping task; for the unified device model, we just directly retrieve the description information of each parameter. Secondly, we leverage a popular pre-trained natural language processing model, for which we trained a domain-adapted version, to encode the context information into vectors. So for one pair of parameters, the encoder produces a pair of context embedding vectors.
C
And
thirdly,
we
will
evaluate
the
similarity
of
embedding
better
using
cosine
similarity
and
to
Quantified
the
semantic
mechanisms.
So
actually
we
will
repeat
this
process
to
multiple
times
to
get
a
ranked
list
about
multiple
parameter
pairs
and
so
therefore,
for
a
configuration
parameter,
our
model
Network
can
recommend
top
case
similar
mappings.
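The ranking step just described can be sketched as below; the embeddings here are toy vectors and the parameter names are made up, whereas the paper uses a domain-adapted pre-trained language model as the encoder:

```python
# Sketch of the mapper's ranking step: given a context embedding for a
# vendor parameter and embeddings for candidate unified-model parameters,
# rank the candidates by cosine similarity and recommend the top k.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def top_k(query_vec, candidates, k=2):
    """candidates: dict mapping parameter name -> embedding vector."""
    ranked = sorted(candidates.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]

# Toy embeddings standing in for the encoder's output vectors:
vendor_param = [1.0, 0.2, 0.0]
unified = {"as-number": [0.9, 0.3, 0.1],
           "mtu":       [0.0, 1.0, 0.0],
           "acl-name":  [0.1, 0.0, 1.0]}
print(top_k(vendor_param, unified))   # "as-number" ranks first
```

Presenting a ranked top-k list, rather than a single guess, is what lets the operator confirm the mapping instead of handcrafting it.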
C: The power of the model comes from two sources. The first is the pre-trained model itself: it is pre-trained on a large natural language inference dataset with a sentence-matching objective, so it is capable of encoding two semantically similar text sequences into two embedding vectors that are close in the embedding space. However, this pre-trained model may have limited performance in a domain that was never seen in the pre-training corpus, so we introduce the second source, a domain corpus of network configuration; to generate it, we leverage a few...
C: Actually, the mapper is meant to recommend the most likely mappings, so here we adopt recall at top-k as the evaluation metric, which denotes the percentage of test cases where the correct matching parameter is within the top-k recommendations by the mapper. A higher recall at a small k implies that the mapper is more helpful for assisting network operators.
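The recall-at-k metric as defined above is straightforward to compute; a minimal sketch with made-up recommendation lists:

```python
# Recall@k as described: the fraction of test cases whose ground-truth
# matching parameter appears in the mapper's top-k recommendation list.
def recall_at_k(recommendations, ground_truth, k):
    """recommendations: per-case ranked lists; ground_truth: correct match per case."""
    hits = sum(1 for recs, gold in zip(recommendations, ground_truth)
               if gold in recs[:k])
    return hits / len(ground_truth)

# Three toy test cases with ranked recommendations and their true matches:
recs = [["a", "b", "c"], ["x", "y", "z"], ["p", "q", "r"]]
gold = ["b", "z", "m"]
print(recall_at_k(recs, gold, 2))   # only "b" lands in a top-2 list
```

Raising k trades audit effort for coverage: in the toy data, recall rises from 1/3 at k=2 to 2/3 at k=3, which mirrors the talk's point that recall within a small k is what saves operator effort.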
C: We used cross-vendor tuning and validation to evaluate the effectiveness of fine-tuning, and as shown in this figure, our fine-tuned model outperforms the other models, because we adapted it to the domain of network configuration. Regarding the saving of human effort, we can take the Huawei case as an example: we can find the correct matching within the top-10 recommendations with 89% accuracy.
C: NAssim transforms the originally tedious and error-prone process into an automatic and efficient one. Configuration manuals, as human-written documents, are not fully reliable and include inevitable errors and ambiguities, so NAssim features a universal parser framework, a validator, and a mapper that uses a domain-adapted model to produce recommended mappings between heterogeneous configuration models, and we also release a validated dataset for future research. And this is all of my presentation today; thanks for listening, and any questions are welcome.
A: I've got one comment, as someone who's been working on this for a very long time. You're trying to do a brute-force attack on inconsistent configuration, with inconsistent CLIs, across different vendors. It's a noble task; CLI is really the wrong abstraction to work with, and you know, we wish it had been consistent even within a single vendor and a single release.
A: It's not; there's usually some product manager who puts something in and hopes it works, and configuration has errors to some degree. So running it through neural nets and trying to figure out what's right and reinforce it is a noble task, but try to use other abstractions. Look at YANG: it's not complete, but it gives you a very healthy base to at least get consistent behavior. Doing the CLI, again, is really noble, but having been involved in this for 20 years, I don't think it's even possible.
C: Yeah, and actually, since you mentioned YANG: I think YANG is a very evident trend. We focus on the command-line interface, which is a very entry-level way to configure network devices; we chose it as the research target because it is supported by almost all network devices, both legacy and new. But we agree that YANG is an evident trend and a more advanced way to configure network devices. Still, YANG does not fully address the difficulty of managing a multi-vendor network, I think, because although YANG is a standard, each vendor seems to have their vendor-specific YANG. So we are also currently working on some topics that try to address the heterogeneity between the different YANG models of different vendors. But yeah, I agree that this field still needs much effort to address this heterogeneity.
G: Okay, so let's hope this works. This is about Routing on Service Addresses, a newly submitted draft together with Luis from Telefonica, who is somewhere over there, I think. The goal of the draft, as the name suggests, is to propose an approach to transition from locator-based addressing to an addressing scheme where the address represents services being invoked, for computation processes and their generated information, requests and responses.
G: There's a structure that we've put together so far; I realized after the submission that I forgot the last two sections, which were still empty, and they're already updated. So we go through a number of use cases, which I will very briefly cover, formulate a number of requirements, and then outline the initial design that we've been working with.
G: We also provide the terminology, and I want to highlight a couple of things; I'm not going to read them all, but it gives you an idea of what we are describing as a service. We have the notion of the service instance, which is a realization of the service at possibly several network locations across a network.
G: The definition of the service address that we're using is also important, and in the description there is the notion of the service transaction, which is a sequence of an initial request, which we call the service request, followed by the so-called affinity requests, which are subsequent requests maintained because of ephemeral state. So the idea is that if I have been directed to a certain network location, to a certain service instance, I stick with that instance because of ephemeral state; once my transaction is over, I can be assigned to another service instance. That's important to incorporate into our model of service interactions in the design. We have different architectural roles, the ROSA provider, the domain, and the endpoint, which are explained in the actual text, but we put these into the terminology section up front.
G: As I mentioned, we outlined a number of use cases, and we're working on elaborating and adding a few more. Section 3.1 talks about CDN interconnect and distribution. The main key aspects we're highlighting here are multi-site replication (you most probably have more than one CDN site), dynamic decision-making as to which of the sites a client is assigned to, and also the ability to actually do multi-site retrieval of content to reduce latency variance.
G: We presented some work yesterday in a side meeting where we showed the impact on the latency variance if you do such a thing: instead of sticking to one site for the duration of, let's say, a long video, you actually pull content from all the different sites, and that reduces your variance quite nicely. The second use case is distributed user planes for mobile and fixed access.
G: So here the idea is to take the so-called user plane functions, which are the data plane in a 4G/5G system, and distribute them. The selection of the UPF that handles your request may be service-specific, and therefore policy-specific, and some of the use cases there are specifically aiming at edge compute capabilities, which you've probably heard enough about in 5G types of scenarios.
G: The third one is multi-homed and multi-domain services, that is, services that are deployed across administrative domains, for instance in enterprise scenarios, and again multi-site, but in this case across multi-site enterprise facilities. Multi-homing is very often used both at the client and at the server side, and the consideration here is to pick the "best" service instance, where the definition of "best" may be very, very service-specific; hence it's written in quotes.
G: Work is meant to come out after this IETF on how to efficiently steer traffic across these possibly significantly distributed networks, enabling also dynamicity in selecting the best service instance; so not doing this only over relatively long periods, but maybe for relatively short service transactions as well.
G: The main idea that we outline in the draft, as the name Routing on Service Addresses suggests, is to replace the typical DNS resolution plus IP data transfer, meaning the off-path resolution of the service name onto an IP locator, with an on-path discovery of a suitable service instance location. So for that, and you'll see this later in a bit more detail, we send an initial IP packet; it's "directed", and again this is in air quotes, because even though we're using the standard IP semantics, we are putting the service... oh no, I don't know what happened.
G: Sorry for that. We're putting the actual service address in the extension header, and the packet is directed to a special shim overlay, which you'll see on the next slide, and the shim overlay routes the packet on path based on the service name, not based on the locator.
G: It uses mappings that are a bit similar in role, but not in implementation, to DNS records: mappings between the service name and the possible service instance locations. Once the packet arrives at the service instance location, the instance responds back to the client with the initial packet, and the client then uses, for the subsequent so-called affinity requests, the direct IPv6 address of the service instance. So only the initial packets, one and two, go through the shim overlay, and after that you go down the direct IPv6 path.
G: And you do that as long as the service transaction lasts. If you finish the service transaction, you go back to step number one: you initiate a new service discovery, and you may potentially be assigned to a different instance. If you have stateless services, you may only ever execute steps one and two, so you never send an affinity request, because you have exactly one request in your transaction. So the key point is that the in-band discovery is performed at the IP packet level and not at the application level; that's the characteristic of the design. So how do we do this? This shows the gray part in the figure (a slightly nicer figure than in the draft): a so-called ROSA domain, which is defined in the terminology, and which may connect via traditional locator-based IP to other ROSA domains, for instance on the right-hand side, which have other services deployed.
G: The rectangular boxes are the so-called service address routers, SARs, while the round boxes are traditional IPv6 routers, so you have the normal IPv6 routers as well as the actual SARs in there, co-located; that's why we call this layer 3.5. It uses ROSA-specific IPv6 destination extension headers, and it is deployed either in the network, as you can see here, or, like SAR five for instance, as ingress nodes at the edge side, so SARs may also be deployed at the edge site.
G: It's on-path shim overlay routing for the initial service request, and it doesn't do a dedicated off-path indirection like the various existing methods such as DNS, GSLB, and so on, which are also described in the draft. The traffic steering uses service-specific policies. You can either do an ingress-based selection, and I'll show later how this works and how it is differentiated, which selects the service instance location at the ingress directly, or you use inter-SAR routing, meaning you route the request between the individual SARs until it ends up at an egress towards a data center, for instance. As I mentioned before, the instance affinity is done over native IPv6, so the overlay is not involved in that anymore. And routing table sizes are limited: given that we introduced a ROSA domain and a ROSA provider, the routing table size is limited to the contractual relations you have; only the services that are announced to the ROSA provider are in the ROSA routing table. If you see a certain affinity here to information-centric networking: the routing table does not include all the services available in the Internet, only the ROSA-announced services, and that keeps the routing tables as small as you want. And there's no client awareness needed: we describe in the draft how a ROSA-enabled client can issue requests to a ROSA service, but also to a non-ROSA service; this is handled by the so-called service address gateway, which then gateways to the locator-based IP Internet.
G: Hence you can access anything, really. That's weird, and now my control went away... no, this was the right one.
G: Sorry. We have three different messages. We have the service announcement, which is used by a service instance to announce its own IP address in relation to a service address, with certain constraints; the constraints are used for the traffic steering. Then, after that, the initial service request is sent to the ingress SAR, the client-specific ingress SAR, and it carries the client IP, the ingress IP, as well as the service ID. The reason they are there, and you'll see this in a moment in the message, carried in the extension header, is to avoid client-specific state in the ingress: we want to avoid having to carry any particular client-specific state, which is why we carry this in the extension header. The service response has the same entries, but the instance amends its own instance IP in the response.
G: So that's how the client learns where it needs to go for the following requests. On the positioning: the implementation we're currently working on, which we haven't brought to this IETF because we're not entirely finished, but which we plan on bringing to the next one, sits under the transport protocol; it sits on top of IPv6, and it uses IPv6 as well for the affinities, and it's realized under the transport, hence the layer 3.5, as we call it.
G: The message flow is shown in this graph, which again looks a bit nicer than the ASCII version in the draft. The client sends an initial service request; as an explanation of the syntax we're using, the first brackets are the source and destination addresses and the second brackets are the extension headers. It sends to the ingress SAR a service request which carries its own IP, the SAR IP, and the service ID. The service ID is used to determine the next hop, and I'll come to that in a second.
G: There are two different methods that you can use, which then send the service request eventually to the service instance, where it arrives; the instance generates the response, according to whatever the service semantics are, and sends the service response back, which includes its own IP address, and that address is then used in the following requests, the so-called affinity requests, by the client directly. So they go straight to the chosen service instance.
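The message flow just walked through can be sketched as a toy end-to-end trace; the service name, the addresses, and the single-table ingress are all made-up assumptions, standing in for the draft's actual ROSA machinery:

```python
# Toy walk-through of the ROSA message flow described above: the initial
# service request traverses the ingress SAR, which resolves the service
# ID to an instance; the service response carries the instance's IP, and
# subsequent affinity requests bypass the overlay and go to that IPv6
# address directly. All names/addresses here are illustrative.
SERVICE_TABLE = {"svc.example": ["2001:db8::10", "2001:db8::11"]}

def ingress_sar(service_id):
    # Request-scheduling mode: the ingress picks the instance directly.
    return SERVICE_TABLE[service_id][0]

def service_transaction(client_ip, service_id):
    instance_ip = ingress_sar(service_id)              # (1) service request
    response = {"from": instance_ip, "to": client_ip}  # (2) service response
    # (3..n) affinity requests use the learned instance IP directly,
    # without involving the shim overlay:
    affinity_dest = response["from"]
    return instance_ip, affinity_dest

inst, dest = service_transaction("2001:db8::1", "svc.example")
print(inst == dest)   # affinity requests go straight to the instance
```

Restarting the transaction simply calls `service_transaction` again, at which point a different instance may be selected, matching the dotted-line restart in the figure.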
G: Once the transaction is over, which is the dotted line, you restart the whole process, so in the next iteration the service request may be sent to a different instance; that depends on your policy. So the key point is really that only the service requests are sent over the shim overlay; the affinity requests follow the direct IPv6 path. My implementation at the moment is through a socket interface, which specifies a different address family rather than an IP address.
G: You use the address family for a service address, and the entire mapping to the IP address is hidden from the client: you send to a socket, and if you keep sending, it automatically switches to the IPv6 address for the subsequent requests. On the forwarding engine, which I show here, the difference is that we describe two modes in the draft. One is what's called request scheduling, where the ingress directly decides what the destination of the request is; that allows you to do runtime scheduling. Or you may use, for instance, multi-optimality routing (this was actually meant to be a link: if you click on the underlined text, you find one of the really nice papers on multi-optimality routing in service-specific environments, published at SIGCOMM in 2020), which means in this case you actually forward the request to a next-hop SAR instead of forwarding it straight to the instance. The difference really shows up in the forwarding information base.
G: Service.org has three next hops, and if you read the next-hop information base, they point directly to an instance, while the other entries actually point to a next-hop SAR; that indicates that for service.org you have a secondary forwarding step that needs to select one of the three possible choices, and this is the runtime scheduling that I mentioned before. You can also use a special wildcard.
G: The last entry in the forwarding information base is a wildcard, which means that if you ask for anything the ROSA provider doesn't know, it is by default forwarded to the SAG, the service address gateway, which interconnects either to a different ROSA domain or to the public Internet. That's how the client unawareness is realized.
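The forwarding information base described above, with per-service next hops and a wildcard default to the gateway, can be sketched as a lookup table; the entries and names are illustrative, not taken from the draft:

```python
# Sketch of the SAR forwarding information base described above:
# service entries map to one or more next hops (instance IPs for
# runtime scheduling, or another SAR for inter-SAR routing), and a
# wildcard entry defaults unknown services to the service address
# gateway (SAG). Entry names are illustrative assumptions.
FIB = {
    "service.org": ["inst-1", "inst-2", "inst-3"],  # 3 choices: runtime scheduling
    "cdn.example": ["sar-2"],                        # forward to a next-hop SAR
    "*": ["sag"],                                    # wildcard: hand off to the SAG
}

def lookup(service_id):
    next_hops = FIB.get(service_id, FIB["*"])
    # A multi-entry result implies a secondary selection step at runtime;
    # for simplicity this sketch just picks the first next hop.
    return next_hops[0]

print(lookup("service.org"))   # an instance is chosen directly
print(lookup("unknown.svc"))   # falls through to the SAG
```

The wildcard entry is what keeps clients unaware: anything not announced inside the ROSA domain silently falls through to the locator-based Internet via the gateway.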
G: So what are the plans moving forward with this draft? This is the first version; it's already quite detailed, and we worked a bit on it, but we want to provide more details on the design realization in the larger Section 5 and incorporate any feedback that we've received so far, including proper header descriptions, which are all still a little bit schematic. Jens suggested adding support for multi-homed service instances; we had that already, but we didn't want to have it in the first draft.
G: We can do this, and that allows you to also cover other service scenarios where you don't necessarily use the classical domain name system. More use-case insights: I already spoke of different use cases. And implementation insights: we've implemented this through eBPF in a standard Linux router, and we will come back with more performance results at the next IETF; we didn't manage to do this in time for the first draft, but there is more, and we also plan a demo. What we'd like feedback on is the problem space.
G: On the motivation: are we approaching this from the wrong angle? Also comments on the architectural approach and on the realization. Anybody who's interested in contributing to or fiddling with the draft, please send us an email; we're very happy to do this and to discuss the way forward. Thank you, and sorry for the few seconds more.
I: Can you hear me? Yeah. I'm from ZTE. I appreciated the wonderful presentation from Dirk, and I have two comments. The first one is about the service address, which I understand as the service identification. My primary understanding, from your terminology, is that the service address, or service identification, is globally unique across the terminal, the network, and the cloud side where the service resides. So here is a quite important issue: the service address, or in other words the service identification, has to be managed and published by a quite high-level entity which can coordinate and manage all of the entities from the terminal, network, and cloud side. So it's not simple; from my personal understanding, the network alone cannot manage this kind of service address, because there are multiple parties involved.
The second question: what I can say is that the first benefit of your proposal is that the service location discovery process is moved from off-path, such as DNS, to on-path routing, so the benefit is the latency saving, something like that. But actually, there is a quite heavy price involved here, because this...
G
So, on the first part: we are not relying on — we are reusing — the namespace governance of the application space in which you work. We do this very similarly to the ICN community: if you utilize the domain name, you're obviously relying on the domain governance that exists in the internet, and that means that if you do a service announcement, I cannot announce facebook.com. That's not possible — I mean, it's possible to announce, but the announcement will be rejected, obviously, because I'm not Facebook.
G
So that's the only place where this happens, where you may of course fake a service announcement, but we are tying into the actual governance that exists, and we utilize certification/authorization of the announcement, which you can do. If you use your own namespace, I don't care — and this is the nice thing about the use of RFC 8609 in the draft: you can use your own namespace, the RFC allows that, and then you can do your own stuff, which I don't particularly care about.
G
The second part is about performance. The point about performance is that we are putting this on-path, and it's distributed, which means that even in the early performance evaluations we are getting hundreds of thousands of service requests routed — this is significantly faster than any DNS resolution you can do. And that's in a distributed manner, which means each ingress is only dealing with the actual requests coming from the clients it's attached to.
G
So it's way faster than doing a DNS resolution at high rate, so I don't think — that's one of the reasons we did this, because the performance could potentially be so much better.
A
J
Just briefly: while putting application-level things down into the network provides speed, a drawback could be that the application-level semantics cause complexity inside the network. So what about that — what kinds of application semantics can you put down into your service gateway?
G
Well, the only application-level thing I'm putting in is the description, or the identifier, of the entity you want to talk to, and that's something we can relatively easily deal with. We're using a hash-based lookup, which is relatively fast, and that's the only thing we're really doing. At the announcement level you need to do the additional verification of the service announcement — that's okay, but that's not done at data-path speed. Okay, thanks! Thank you.
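The hash-based service lookup described above might be sketched roughly as follows. This is purely illustrative — the class name, the service-identifier strings, and the use of a truncated SHA-256 digest as the table key are all assumptions, not details from the talk:

```python
import hashlib

def service_key(service_name: str) -> bytes:
    """Fixed-size key for a service identifier, suitable for a fast-path lookup."""
    return hashlib.sha256(service_name.encode()).digest()[:8]

class ServiceTable:
    """Maps hashed service identifiers to next hops, as a service gateway might."""
    def __init__(self):
        self.table = {}  # hashed service id -> next hop

    def announce(self, service_name: str, next_hop: str) -> None:
        # A (verified) service announcement installs forwarding state.
        self.table[service_key(service_name)] = next_hop

    def lookup(self, service_name: str):
        # Data-path lookup: one hash plus one dictionary access.
        return self.table.get(service_key(service_name))
```

The point of hashing is that the data path only ever compares fixed-size keys, regardless of how long the application-level service name is.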
A
J
Hello, my name is Roland Bless. I have the pleasure of introducing KIRA to you, which is a scalable, ID-based routing architecture specifically designed for control planes; this is joint work with my colleagues. So control-plane connectivity is really important: you may have heard of last year's major service disruptions at large providers, mainly caused by configuration mistakes. They took a large set of services down for a considerable amount of time, so services depend on resilient connectivity and the control plane.
J
Control-plane connectivity is inherently important. In the Meta case, for example, they misconfigured BGP routing and cut themselves off from their control network, and it was hard to regain control over their routers again — just to SSH into them, or whatever, in order to get the whole thing running again. So KIRA aims to provide self-organized, robust control-plane connectivity. It is designed to interconnect a large pool of network resources — compute, storage, network —
J
What have you — and by design it should also provide this resilient connectivity for the control entities that try to control the various resources you have. Furthermore, since it's ID-based, it provides stable addresses for moving resources, such as virtual machines. So it tries to provide all-in-one features, namely: massive scalability — it scales up to hundreds of thousands of nodes in a single domain, so to say; and it is zero-touch — it doesn't require configuration, so it cannot be broken by configuration mistakes.
J
It has really fast convergence. It's loop-free, even during convergence. And it works well in many different topologies, which we call topological versatility.
J
So we don't need any special variants of the routing protocol in denser topologies, like data-center topologies. And last but not least, we also try to provide efficient routes. There are some related works that do some things in a related manner, but they either don't handle dynamics well, or they are non-ID-based approaches which work in specific topologies only. So KIRA is a routing architecture which comprises two tiers.
J
One is the routing tier, where the R²/Kad routing protocol sits; it is the ID-based routing protocol, it uses source routing, and it works on top of any link layer. The forwarding tier is a kind of optimization where we use path-ID-based forwarding, which eliminates the source routing that we use for the routing protocol itself. It can be seen as basically similar to a label-switching approach, and it aims to reduce overhead, since we get rid of source routing for the control traffic.
J
So the first thing that we need is to learn the paths in our network. Here we have a very small topology — just a small excerpt from a larger topology, maybe. The white circle nodes are basically link-layer nodes, and every node creates a node ID, basically randomly — these are the uppercase letters in the blue dots.
J
So, in the beginning, every node discovers its physical vicinity. X, for example, learns the contacts A, Y, B, M — we call them contacts. And now the question is: if X wants to reach Z, how does that work? The idea here is to construct the underlay routes by using the node-ID-based overlay, and we use Kademlia for that.
J
That's why we have the K in the name — or the cat. The idea is that we source-route to the contact that is closest to the destination node ID; I have an example on the next slide. So what does "closest" mean? To say things are closer to each other, you need some kind of distance metric, and we use the XOR metric from Kademlia for that, which roughly boils down to: the longer the shared prefix of two node IDs, the closer they are in the ID space.
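The Kademlia XOR metric just described can be sketched in a few lines. This is a toy illustration with small integer IDs (real KIRA node IDs are much longer); the helper names are mine, not the protocol's:

```python
def xor_distance(a: int, b: int) -> int:
    """Kademlia XOR distance between two node IDs."""
    return a ^ b

def shared_prefix_len(a: int, b: int, bits: int = 4) -> int:
    """Length of the shared ID prefix; a longer shared prefix means a smaller
    XOR distance, i.e. the IDs are closer in the ID space."""
    return bits - xor_distance(a, b).bit_length()

def closest_contact(contacts, target):
    """Pick the contact with the smallest XOR distance to the target ID."""
    return min(contacts, key=lambda c: xor_distance(c, target))
```

A node forwards a lookup toward `closest_contact(known_contacts, destination_id)`, which is exactly the "route to the contact closest to the destination node ID" step in the talk.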
J
So now let's assume that X knows Y, as discovered in the first step — and here, just for this example, we also assume that letters closer in the alphabet are also closer in the node-ID space. The next overlay hop would then be Y, for example, so X source-routes to Y.
J
The find-node requests that we use in order to discover a path will then also contain that small cycle here, and naturally that can incur some path stretch — the resulting routes are longer than a shortest path. But we can do some optimizations. First, when Z responds to X, it can shorten the recorded route by just cutting out the cycles; and later packets can actually use a shorter route, because X knows a route to M which is shorter, and it can then use that route for later packets.
J
So the initial stretch that we have can be reduced for later packets, in case you can spend some state on that. Furthermore, R²/Kad offers a flexible memory-stretch trade-off, and this comes from the fact that we are using a Kademlia-based routing table, which arranges its routing table as a tree of buckets, where each bucket has a size of up to K contacts.
J
So here, in this example, we have one bucket that covers all contacts that have a different first bit in the ID space — so one bucket holds only, say, 20 contacts for that half of the ID space.
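A rough sketch of such a bucket table: contacts are grouped by the length of the ID prefix they share with the local node, with at most K entries per bucket. The toy 8-bit ID length, the function names, and the flat-dictionary representation of the bucket tree are assumptions for illustration only:

```python
def bucket_index(own_id: int, contact_id: int, bits: int = 8) -> int:
    """Bucket = length of the ID prefix the contact shares with the local node."""
    return bits - (own_id ^ contact_id).bit_length()

def add_contact(buckets: dict, own_id: int, contact_id: int,
                k: int = 20, bits: int = 8) -> None:
    """Insert a contact into its bucket, capped at k entries per bucket.
    A larger k stores more contacts (more memory) but yields lower stretch."""
    b = buckets.setdefault(bucket_index(own_id, contact_id, bits), [])
    if contact_id not in b and len(b) < k:
        b.append(contact_id)
```

Bucket 0 covers the half of the ID space with a different first bit, bucket 1 the quarter sharing exactly one leading bit, and so on — which is where the logarithmic table size comes from.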
J
J
What we now do is store, in addition to the node ID, also the path vector — the paths that we learned are just attached to these contacts — and we can do that in a way that prefers shorter routes. This actually leads to the fact that we learn shortest paths to all
J
the contacts that we know. And the size of the buckets can be chosen, so we have a flexible memory-stretch trade-off here; basically, the routing-table size depends only logarithmically on the number of nodes in the network. So what about dynamics? Routing is also about coping with dynamics, in case links fail. We assume that a node detects a failure in the underlay — so let's assume the link between X and B breaks.
J
Then we have a two-step strategy for dealing with that. First, you inform your ID-wise neighbors about the failed link; after that, you basically try to rediscover an alternative path via the overlay routing, and we include not-via information, so that nodes on the path that hadn't heard about the broken link learn of it and cannot reuse routes that also include that broken link.
J
We also periodically probe contacts over broken paths, and you also have to periodically look up your own node ID in order to detect any network partitioning and so on. The validity of route information is ensured by using sequence numbers in the contacts, for path-information aging. Sorry — that was an animation. So, briefly, about the path-ID-based forwarding: the forwarding tier.
J
J
The path ID is basically just the hash of the source route, and we use the path ID as a label for the source route; basically, we then use label switching between the nodes. The reason for discovering the two-hop vicinity in the first place is that we can actually pre-calculate path IDs within the two-hop vicinity, so we only need explicit path setup for paths that are longer than four hops.
J
We also implemented the whole thing, and showed that, basically, we can use the IPv6 packet format: we can embed the node ID into IPv6 addresses. So the idea is that the forwarding tier then has a node-ID forwarding table and a path-ID forwarding table, and in order to use the path IDs we use GRE encapsulation — you have a GRE encapsulation header in case you're using path IDs.
J
If you don't need path IDs, you can just use a plain IPv6 header, and so any control application that is able to send IPv6 packets may use KIRA connectivity; the R²/Kad messages themselves are also sent over IPv6. We simulated the whole thing in topologies of up to 200,000 nodes — various topologies — just to briefly show that this works not so badly.
J
So, for example, here we have various topologies of size 10,000 nodes along the x-axis, and shown along the y-axis is the average multiplicative stretch. The green dots are
J
the first-packet stretch — what the first packet sees — and later packets are the blue triangles. We compared here against related approaches in their different variants, and you can see that we can achieve even better stretch values for the first packets on average. What is really nice is that for the stretch to the contacts — all the contacts that you have in your routing table — you effectively have shortest-path routes.
J
J
J
Yeah, no — those are backup slides, don't worry. That's just showing the dynamics part: in case you have a hundred thousand nodes and 15% of the links fail randomly at 20 seconds,
J
then we are able to converge within roughly 10 seconds until we have nearly full connectivity again, with scalable overheads — the packet rate is quite low in that case. So, to conclude: KIRA provides self-organized, robust control-plane connectivity. It is not meant to be a replacement for OSPF, IS-IS, or BGP, because it's not data-plane routing; it puts connectivity first and efficiency
J
second. It is designed for large provider domains, as we expect them in 5G or 6G networks, and it could even be used across multiple providers. We're currently developing a domain concept where you can keep the routing inside your domain more local, so you're not depending on other domains. Security can easily be added, in the sense that we can have self-certifying node IDs and such; it is a path-vector protocol, so we can also have hash values or MACs for paths and so on.
J
We also designed a special end-system mode that reduces the overhead even more: in case you're not a router, you can still use the node IDs, but you don't have to actually send routing updates and such.
J
It also supports multipath routing — the forwarding scheme with the path IDs enables that — and we can also easily integrate a DHT for a simple name-resolution and service-discovery service. Recently we developed a new scheme that also enables very efficient topology discovery, so you can discover a hundred-thousand-node topology, with most of its links, within seconds. Having said that: we have a side meeting today, starting at 7:00 pm, in small room 912.
J
I have to leave for the ANIMA working group, because I'm supposed to give a presentation there as well, but Paul, sitting here next to the mic, will be happy to answer questions in case you have some after the session. So thanks. Thank you, Roland.
K
E
J
The thing is that you route simply to the closest node you know — closest in the ID space — and the XOR metric is very nice because it's unique, so you can always tell whether you're making progress in the ID space. And in case you don't know a node that is closer than the target you chose, then there's some inconsistency in the routing table, or the node doesn't exist anymore.
E
E
What if you send it the wrong way? What if it's partitioned — if you send it right, and the guy is to the left?
J
Right — we do not see so much stretch; the progress is usually very localized in that sense. And if you don't make progress, then you go the other way. I mean, yeah: if the ID-wise overlay is consistent, then we always make progress, that's for sure.
J
A
F
Okay, hello. This is Haibo from Huawei. This time I'll present our draft on the requirements for faster failure detection — a framework for fast failure detection in IP-based networks. Next, please.
F
Okay. This is the motivation of our draft. Today, most applications use a long timeout to identify network failures, which makes failure recovery slow; faster failure detection is especially needed for high-performance applications such as IP-based NVMe and cluster computing.
F
They can hardly tolerate the long duration of failures. For example, when a failure occurs on IP-based NVMe, the IOPS will drop to zero until the host can identify the failure through the keep-alive timeout. For cloud computing it is similar: when the IP connection to a server is down, the corresponding computing phase will be blocked, and the whole computing progress may be affected.
F
For failure detection, there are some existing failure-detection mechanisms, such as BFD, and we can deploy multihop BFD to accelerate fault detection; but these mechanisms typically consume system resources heavily, especially on hosts and servers. So, from the IP-network point of view, we need a mechanism to help the host accelerate fault detection and provide a better experience for high-performance applications.
F
Such high-performance applications usually run in a controlled domain, such as a data-center network, and this should be considered when designing the solution and its deployment. Next, please.
F
This is the IP-based NVMe use case. Here we show a small illustration of an NVMe network: all the hosts and the storage are connected to two switches, and host 1 creates an NVMe connection to the storage via IP1. But when the IP1 link fails, host 1 will not detect it until its keep-alive times out; the failure may last for tens of seconds before being handled, and during this time the connection between the host and the storage is disrupted.
F
This is another, cluster-computing use case. There are many distributed computing algorithms; in some of them, the servers are divided into several pairs, and each pair communicates with each other over the network according to the computing model. Here, server 1 and server 3 are paired, and server 2 and server 4 are another pair.
F
They both take part in the cluster computation, but when server 3's link to leaf 3 fails, the communication between server 1 and server 3 stops working. This will block that step of the computation, and furthermore will block the whole cluster computation: the job scheduler cannot reschedule the computing task until it detects server 3's failure, and this may last for one or more minutes.
F
These are our requirements. First, we want the network devices to be able to detect link or network failures. Second, we want a network device to be able to synchronize the failure to the other network devices, because there are many network devices in the network; a network device can then notify local or remote failure information to its locally attached endpoints, and the network devices send the notifications to the endpoints.
F
This is the framework of our reference model. In this model, within a controlled domain, both the client endpoints and the server endpoints are allowed to register their IP information, and some other information, with their access switches. The server endpoints must register their information with the network, but registration is optional for client endpoints.
F
Each client endpoint subscribes with the network for the reachability of the IPs it is interested in. The registration and subscription information is synchronized and propagated through the network.
F
When a network device, such as switch 1 or switch 2, detects a link failure or some other network failure, the switch will quickly notify the fault to those client endpoints that subscribed to the affected information. When a client endpoint receives the notification, it can immediately start the recovery by switching to the backup path.
F
Next slide, please. Here is a procedure example; this is again an IP-based NVMe network. In this network, all hosts and storage devices register their information with the IP network — for example, where the host or storage is attached and the corresponding IPs. The hosts, as client endpoints, create NVMe connections to specific storage devices: here, host 1 creates an NVMe connection to storage 1 via IP1, and it may also create a backup connection to storage 1 via IP2.
F
Host 1 wants to know IP1's status, so it subscribes with a request to the network — it submits the request to switch 1. When the IP1 link fails, switch 1 can quickly detect it and notify the failure to host 1. When host 1 receives the notification, it can quickly start the reset or recovery process; how to do that may be defined by the NVMe scheme.
D
F
In this cluster-computing example, the four servers are connected to the network. The job scheduler takes a task and divides the four servers into two pairs: server 1 with server 3 is one pair, and server 2 with server 4 is another pair; they all create connections to each other. The job scheduler wants to know all the servers' IP status, so it subscribes to
F
every server's IP. When server 3's link fails, leaf 3 can quickly detect the failure, and leaf 3 will synchronize the status change to the other leaves. When leaf 1 receives the synchronized information, it notifies the job scheduler based on its subscription; the job scheduler identifies the failed path and can then reassign the computing task to other, healthy servers, so the computation can continue. Next slide, please. So this is our draft.
F
A
L
Greg Mirsky, Ericsson. Can you kindly bring up the requirements slide? In the meantime, I just wonder: what is your goal — what are you planning to achieve with this work?
F
Our goal is for the network to help the client endpoints quickly detect network failures — such as an access-link failure, or a network failure the endpoint cannot observe itself.
L
So, okay — regardless of the size of the network? Do you plan to do it as a distributed system, or are you looking at a centralized system that needs to know about the failure?
F
F
L
Well, I think that can already be achieved through the IGP protocol.
F
Yeah — you mean how to implement the network-failure signaling? That is not described in our framework yet.
A
A
The infrastructure is highly parallel — massively parallel — so there's no single point of failure. The goal, and the requirement, is to detect a failure as soon as possible and route around it. In a Clos network this is commonly implemented on the host today: you can use FlowBender or a variety of other techniques to change entropy and route around the failure. If you need to notify a controller, you're into seconds; your machine-learning job is dead, and you've potentially lost hundreds of millions of dollars. So the requirements aren't suitable for machine-learning clusters in any possible way.
M
Okay — my name is David Black; I work for Dell EMC. I sort of feel like — there's a wonderful British phrase for the opposition party in Parliament; now that Charles is King, it's "His Majesty's Loyal Opposition" — and I would lay emphasis on the "loyal" in that, please, as I would like to make a positive contribution here. I'm one of the original designers of NVMe over Fabrics and of the NVMe over TCP transport that is of primary interest to the authors, I guess.
M
The first thing I should say is that the storage-networking configurations shown in the slides are unrealistic — that's an active-passive configuration, and storage networking has typically moved to active-active these days, which means that, as opposed to the second path being a backup,
M
the second path is active. So if you've got a failure on the first path, there's an opportunity to immediately use the second path to get that failure information communicated, without having to go indirectly through all the switches — and that's probably the better place to start for NVMe, because you're not relying on that level of switch interactions. Now, turning to those interactions: I can't quite figure out what's going on here, but it looks like the failure-detection mechanism is building a model of the topology and, in particular, of IP reachability for the cases in which a link in the network fails. It's not a good idea to go building your own routing system: the routing system is the authority on what the network topology is, what the connectivity is, and which IP addresses are reachable from where — please don't reinvent that. And then one major point and a minor one. The major point: the draft labeled its security considerations as "N/A", which I presume stood for "not applicable"; unfortunately, it also stands for "not acceptable".
M
This class of "it's broken, it's gone" mechanism is a great vector for a denial-of-service attack, and that's going to take some serious security thought. The minor point is that when we talk about link failures, we tend to treat them as binary — the link is either working or it's not — but real links fail in really interesting ways. The draft authors said that they don't want to use BFD; okay, but they need to do something.
M
So if the switch has decided the link has failed, turn the link off: the other end of the link generally tends to notice pretty quickly that lack of light means a dead link, and then you don't have a problem with the two ends disagreeing on the link failure. Okay — thanks for the opportunity; I'd be happy to take any questions, including from Haibo, if he's still on.
A
Thank you. Just to let you know: all storage protocols, as well as RDMA-like protocols, do implement their own liveness mechanisms at much lower layers than the application, and it's very fast — it's actually one or two RTTs. So we are talking microseconds, definitely not seconds.
K
Can you please show the slide with the storage-network architecture from the previous presentation, if possible? Next — there is — the next one.
K
Yes, this one. I just would like to say that, in this configuration, if switch 1 and switch 2 were connected, and if the hosts accessed storage 1 through storage 3 not via the IP addresses of these devices on the links connecting them to the switches, but via some kind of, say, loopback addresses, then
K
the switches — which in fact can detect link failures very quickly — could simply reroute whatever exchange happens between any specific host and any specific storage, without involving any interaction with the hosts. The host would remain completely ignorant of what happens in the network, which, most probably, is what most network operators would prefer.
K
This looks, indeed, as I think David has said, like a somewhat problematic network architecture, and I'm not sure that we here should try to address what I personally see as poor network design by propagating some new functionality to hosts.
B
Okay, thank you for the presentation, and all the comments are great. So let's go to the next one.
N
Hello everyone, I will present the SRv6 deployment use cases. This document is the SRv6 deployment considerations draft, and this is the -06 version. First I will give a simple introduction to the document and to why SRv6 is significant.
N
Many operators have chosen to use SRv6: so far, many networks have deployed SRv6, and SRv6 policy has also been deployed in many networks; features such as BFD and TI-LFA have also been deployed to improve the SLA of the network. Currently, many network operators are thinking about a smooth migration to SRv6 — hence this document.
D
N
In this case, the Agricultural Bank of China (ABC) — one of the top five biggest banks in China — deployed SRv6 policy and a controller in the backbone network. First of all, let me introduce the network status: ABC deployed SRv6 across the backbone to connect its sites and data centers, and L3VPN is deployed over the network to carry services such as financial services, office services, and internet services.
N
N
These are deployed across the network. Since the SID list length is mostly less than six, plain IPv6 encapsulation is used and compression is not used; the VPNs are divided by service.
A
N
A
Look, we'll take it to the list, but looking through the list of features implemented, there's nothing that prevents implementation of these features with other technologies that exist today.
A
So, looking at the list here: you said that it requires SRv6 in order to implement new services. I don't see a single service that cannot be implemented with other technologies. I think you really need to clarify what the unique and distinct advantages are that would justify significant investment into a new technology in order to benefit from it. Once again, what you see here doesn't —
F
D
N
A
D
Just a quick question: with your SRv6 deployment, is there an interworking that you're using between the data center and the core — so the core is SRv6 and the data center is VXLAN?
H
H
Yeah, I guess the background is that we have a good definition of the SRv6 hierarchy: the first level is the policy, then comes the path, and the bottom is the SID list. This is good: when we get a failure, we can switch within an SR policy.
H
So that's a good protection; but consider the case where one policy runs out of resources, or the policy hits some failure — for example, some of the SID lists fail, so the bandwidth is impacted — then not all of the services can be assured with good quality. What we want is that, in case there are still some other paths available, they can carry that traffic.
H
So that's the problem we are thinking about. For example, say there are office traffic and voice traffic with different SR colors; that means they get different SR policies, because each SR policy can be mapped to one color, so they are carried by different policies.
H
If a failure happens to the SR policy carrying the office traffic or the voice traffic, we should have some way to keep our services running.
H
So the idea — what we think — is: first, we should maximize protection against failure or degradation, in case there are still some resources we can use. Second, we should minimize the impact after taking the repair action; that means that when failures happen, we need to not —
D
H
The last one is that we want to maximize the bandwidth efficiency, because in case there is still some bandwidth available, we can reuse it to keep our services running.
H
So then the basic idea is that we need to set some rules on which policy may protect which policy.
H
The idea is to group the policies together and then have some priority mechanism to give the order of traffic switchover.
H
So this is the basic idea. At step zero there is flow classification, which identifies the service class; the service class then maps to the color of the SR policy. Then we have flow steering, which steers not to an individual policy but to a policy group.
H
Then there is a new unit, or component, the intelligent routing unit, which takes the traffic and makes the decision on which policy should forward the traffic.
H
The intelligent routing unit also takes some input about the network environment — for example BFD, or some kind of latency or loss measurement — as input.
H
The first part is simple: the flow classification just takes some of the tuples, or similar information, to identify the service class; and the flow steering unit is also very straightforward.
H
So the main thing is the intelligent routing unit. This part takes the SR policy group as input, and also some of the measurement results as input. Inside it there is a policy decision function which decides which policy the traffic should go to. We have several policies, each with a priority set, where the priority represents which is preferred.
H
So the priority defines, in a configured manner, which policy is the most preferred.
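A minimal sketch of the priority-based decision function described here, under stated assumptions: the names are hypothetical, a lower number is taken to mean a higher priority, and the usability check stands in for whatever BFD / latency / loss verdict the measurement unit produces.

```python
# Hypothetical sketch of the policy decision function described in the talk:
# within a policy group, steer traffic to the highest-priority SR policy
# that the measurements (e.g. BFD, latency, loss) still consider usable.

def decide_policy(policy_group, is_usable):
    """policy_group: list of (policy_name, priority) pairs, lower = preferred.
    is_usable: callable reporting whether a policy currently passes its
    BFD / latency / loss checks."""
    candidates = [p for p in policy_group if is_usable(p[0])]
    if not candidates:
        return None  # nothing left in the group to switch over to
    return min(candidates, key=lambda p: p[1])[0]
```

With this shape, switchover order falls out of the priorities: when the preferred policy fails its checks, traffic moves to the next-best usable one in the same group.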
H
There is also a unit called the measurement unit, which takes some telemetry: some latency, or jitter, or loss. The last one is the flow forwarding unit, which is normal. So we have an example.
Here is the example: first, we define some policies with colors; the second step, we combine some of the policies together into a group with a color; the third step, we just map the traffic to those policy groups.
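The three configuration steps just described might look like the following sketch; all of the names, colors, and class-to-group mappings are made up for illustration, since the talk only outlines the mechanism.

```python
# Hypothetical sketch of the three steps: (1) define colored SR policies,
# (2) combine policies into a group, (3) map traffic (by service class)
# to a policy group rather than to an individual policy.

# Step 1: SR policies, each identified by a color.
policies = {100: "voice-policy", 200: "video-policy", 300: "best-effort-policy"}

# Step 2: group some of the colored policies together, in preference order.
policy_groups = {"premium-group": [100, 200], "default-group": [300]}

# Step 3: flow classification maps a flow's service class to a group.
class_to_group = {"voice": "premium-group", "bulk": "default-group"}

def steer(service_class):
    """Return the ordered list of candidate policies for this class."""
    group = class_to_group[service_class]
    return [policies[color] for color in policy_groups[group]]
```

Steering to the group, not to one policy, is what leaves room for the priority mechanism to pick a survivor when a member policy fails.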
H
So that's the idea of what we want to do. Maybe it's not perfect, but we can improve it. So that's all.
O
So probably I can already start. My name is David Lou, from Huawei Technologies. This is actually joint work with my colleagues Luigi and others, talking about the signaling of in-network computing operations.
O
Since I only have a few minutes left, I will just speed up. The motivation, basically, is that we have already observed a lot of network devices taking on some computation tasks to improve the overall network performance, the overall system performance, and typically this is done in programmable switches. But we lack a generic and general way to define what to do, how to do it, where to get the data, and how to process it. So that's the purpose of this draft.
O
Those hosts will take part of the calculation and send the results back, and there is a PS, a parameter server, that tries to aggregate those data and push back the results. So traditionally the way is that you build a tree or star topology, where the PS server will receive all the results from those hosts. But apparently you have an incast problem, number one.
O
Secondly, the PS server becomes a bottleneck. Although there are other solutions, like all-reduce, which tries to use a ring topology to distribute the task across different servers, this is still not really ideal. So we find that letting the switch do that task on the path, at line rate, would be the best option. So this is one of the actual use cases.
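The gain being described, letting a sync-capable switch aggregate instead of the parameter server, can be illustrated with a toy model; the element-wise sum below simply stands in for whatever reduction the real switch would perform at line rate.

```python
# Toy illustration of in-network aggregation: instead of N hosts each
# sending a full gradient vector to the parameter server (incast), a
# sync-capable switch sums the vectors and forwards a single result.

def switch_aggregate(updates):
    """updates: list of equal-length gradient vectors from the hosts."""
    assert updates and all(len(u) == len(updates[0]) for u in updates)
    return [sum(col) for col in zip(*updates)]
```

One aggregated vector leaves the switch regardless of how many hosts contributed, which is exactly what removes both the incast and the PS bottleneck.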
O
There are two other use cases happening in the data storage network, where, when you want to store some data, you first try to get the write lock: you have to check the lock, and then once you have it, you can do the write. But I will not dive deep into those use cases; those are the real use cases.
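The storage use case, check the lock and then write, could likewise be served by a switch holding the lock state; this is only an illustrative model of the idea, with invented names, not anything specified in the draft.

```python
# Toy model of in-network lock coordination for the storage use case:
# the switch keeps the lock table, so hosts learn in one round trip
# whether they may write.

class LockSwitch:
    def __init__(self):
        self.locks = {}  # object id -> current holder

    def acquire(self, obj, host):
        """Grant the write lock if free (or already held by this host)."""
        holder = self.locks.setdefault(obj, host)
        return holder == host

    def release(self, obj, host):
        # Only the holder may release its own lock.
        if self.locks.get(obj) == host:
            del self.locks[obj]
```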
O
So the idea, basically, is to offload those coordination and bottleneck computing operations to the network device in order to improve the system performance. Of course, we don't want to affect the forwarding performance, so there are two things we need. One is generic and simple operators designed to execute those scenarios: you need to know what to do, where the data comes from, and how to process it.
O
Secondly, you need to have an expected way to route the packets to the right place, so that they can be operated on and executed. So, a quick overview: we have the hosts, which should tell the network what to do. There is a sync header, created on top of UDP, to say: okay, I want to do this aggregation, the data are coming from those sources, and this is how many of them there are; this kind of information.
O
This is the first part. The second part, actually, is that you have to have a mechanism to route the packet to the right place, because you probably don't know which switch has that capability. For that we can use many kinds of mechanisms to route it to the right place. In this draft we use the example of service function chaining; we could also use segment routing, or MPLS, or even others, but in this particular example we use SFC.
O
Okay, so first of all, the sync header itself: this header is used to tell the switch what to do. You will see that we have a group ID, we have the number of data sources, the data source ID, and a sequence number; they are all combined to tell the switch where the data is and which data it needs to operate on. Then you need to specify what kind of operation it is: is it a sum, is it a maximum or a minimum, or others.
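Putting the fields mentioned so far together, a 16-byte sync header could be laid out as below. The exact bit layout lives in the draft; the field widths and order here are assumptions, chosen only to hit the 16-byte size quoted later in the talk.

```python
import struct

# Assumed 16-byte layout for the sync header described in the talk:
# group ID, data source ID, sequence number, number of data sources,
# operation code (e.g. 1 = sum, 2 = max, 3 = min), flags (bit 0 =
# loopback), and the data offset inside the packet.
SYNC_FMT = "!IIHHBBH"  # network byte order, 16 bytes total

def pack_sync(group, src, seq, nsrc, op, flags, offset):
    return struct.pack(SYNC_FMT, group, src, seq, nsrc, op, flags, offset)
```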
O
Of course, you have the data offset to indicate where the data is in this packet. In particular, there is a loopback flag, which indicates whether, after this data operation, you need to send the data back to the source, or you need to continue forwarding it to the destination.
O
So the second part actually creates a kind of tunnel. In this case we use SFC, as I indicated. At the ingress you wrap the packet up with the network service header, it is carried until the egress, and then it is sent to the destination. Of course, in the middle there are some nodes with the sync capability to do the data operation.
O
So this is the actual encapsulation using SFC. We use the base header, the service path header, and the context header. The context header is the sync header I just explained; the base and service path headers are defined in RFC 8300. Of course, in the base header we have to specify a few things, like the length, the TTL, and others. For the metadata type: by coincidence, our sync header is only 16 bytes, so we can use MD Type 1, which fits very well.
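The NSH layout referenced here follows RFC 8300: a 4-byte base header, a 4-byte service path header, and, with MD Type 1, a fixed 16-byte context header, 24 bytes in all. The sketch below packs that structure; the TTL and next-protocol values are illustrative defaults, and the 16-byte context is wherever the sync header goes.

```python
import struct

def nsh_md1(spi, si, context16, ttl=63, next_proto=0x01):
    """Build an NSH header per RFC 8300 with MD Type 1.
    Base header: Ver(2)=0, O(1)=0, U(1)=0, TTL(6), Length(6) in 4-byte
    words (6 for MD Type 1), 4 unassigned bits, MD Type(4)=1, and the
    Next Protocol(8). Service path header: SPI(24) + SI(8)."""
    assert len(context16) == 16, "MD Type 1 context header is fixed at 16 bytes"
    base = (ttl << 22) | (6 << 16) | (0x1 << 8) | next_proto
    sph = (spi << 8) | si
    return struct.pack("!II", base, sph) + context16
```

MD Type 1 avoids the variable-length TLV machinery of MD Type 2, which is why a sync header that happens to be exactly 16 bytes is such a convenient fit.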
O
Okay, so these are the last slides. Basically, this is the first try, and we know that it's not very clear to us where the home for this kind of draft is, because this is working on the data plane by leveraging programmable switches.
O
We use SFC as a carrier but, as I said, other kinds of routing mechanisms might be better suited for this case. So we welcome any kind of discussion, and we are going to update the draft based on that. Of course, this is only the data plane; as a next step we will probably do the control plane and others, yeah.