From YouTube: IETF109-LISP-20201119-0500
Description
LISP meeting session at IETF109
2020/11/19 0500
https://datatracker.ietf.org/meeting/109/proceedings/
A: Now, in the schedule, we have Sharon speaking about LISP Nexagon, but he may actually be speaking right now in the COIN RG meeting. So if he's late, if it's running late for any reason, we will postpone his talk until later on, okay, and we will give priority to Albert, who will give us an update on the NAT traversal document. Then Dino on his distinguished-name encoding; there's been some discussion about this on the mailing list. And the last talk scheduled today is Charlie; she will show us a real deployment of LISP in a campus network as well, as I have understood. The long title that you can see is "SD-Access: practical experience in designing and deploying software-defined enterprise networks."
D: Okay, so I'm going to give the, let's call it, traditional update, and I hope that this is the last one, because I believe that we are very close to finishing. So if we go to the next slide, please: as was said, we posted three revisions since the last IETF, one for the data plane and two for the control plane, and now we are down to only one DISCUSS for each document. And the DISCUSSes are really... I don't want to say minor things, because they are DISCUSSes, right, but at least they are super easy to fix, and I have already replied to both Éric and Martin. So hopefully they clear the DISCUSSes and we're pretty much done. Next slide.
D: So I have one slide per document specifying what has been changed. For the data plane, we have pretty much clarified some different aspects. I would say that the most relevant ones are the limitations for ICMP, in the case of the use of ICMP for path MTU discovery; and we also clarified that the instance IDs are not protected and, when they are carried, on-path attackers can change them. Next slide. And then the control plane has more changes.
D: We have removed a bunch of text related to verifying Map-Requests; there was some specification of what you should do when you receive piggybacked data on the Map-Request, and we have removed pretty much all the text related to that. We have also elevated a MUST and a SHOULD for the different crypto suites for the Map-Register. Then we have also clarified that the record count can actually be larger than the requested count, because you may have more-specifics. We also specified better the use of the salt in the key-derivation mechanism, because it was not clear how to use it for different messages; and then some clarifications on how Map-Notifies and Map-Notify acknowledgements are actually retransmitted.
G: These are the summary of changes in the last version. Basically, the big chunk of changes was about how to establish a security association between the xTR and the Map-Server without, of course, requiring pre-shared keys with everyone. The design that we wrote down in the document, after the discussion with the working group, can be summarized as: we use LISP-SEC to establish that secure communication, and basically you derive a key from the OTK that you exchange; the details are in the document, and they are not that complicated. And this is what I wanted to bring to the working group: we had this early review, and I just realized, when I was preparing the slide, that the working group was not copied on the review.
D: This is Alberto. We had the same issue for the bis documents, but at the end I believe that they accepted the terminology, because it has been used in the past for so long that it was hard to change. But I don't know if this applies here.
A: True, might be, might be. Maybe something to put in the document somewhere is just a sentence that says we use the term nonce to be consistent with the main specs of LISP; because, if we look at the definition in the RFC 4949 security glossary, this would be categorized like a token. Just take it as it is, but we keep nonce for consistency: a simple sentence to add somewhere, I don't know.
I: Yeah, I think we shouldn't change it at all; and plus, you know, adding extra sentences with new terminology just adds a term that's not going to be used anywhere. And you know how many occurrences of nonce we have across all the documents. This is just busy work, and I don't think we should make any changes.
G: Okay, so maybe let's go to the next slide, please.
G: So this is the other comment that Chris put on the review that I mentioned: that is, what to do when the value of the nonce exhausts the field space. And here I think the answer is what we have said in other cases in the past: that the field space should be big enough to accommodate plenty of updates. I'm doing some numbers here that I will check like three times, but I am sure that they are right.
G: So if we assume one update per xTR-ID per EID-record per second, and that's a lot of updates, we will actually, you know, run out of nonces in, I don't know how many years; that's a very big number. So I think that we are probably safe saying that we are not going to exceed the field space, especially now that we made that change that the nonces are per xTR-ID per EID-record. So the space that we have is big enough for those updates.
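A rough sanity check of the numbers quoted above (my own arithmetic, not from the slides): a 64-bit nonce consumed at one update per second lasts on the order of hundreds of billions of years.

```python
# Back-of-the-envelope check of the argument above: with a 64-bit nonce
# field and one update per <xTR-ID, EID-record> pair per second, how long
# until the nonce space is exhausted?

SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_exhaust(bits: int, updates_per_second: float) -> float:
    """Years needed to consume a 2**bits nonce space at the given rate."""
    return (2 ** bits) / updates_per_second / SECONDS_PER_YEAR

# One update per second against a 64-bit field: on the order of 5.8e11 years.
print(f"{years_to_exhaust(64, 1):.3e} years")
```

Even at a million updates per second the space still lasts hundreds of thousands of years, which supports the "probably safe" conclusion above.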
A: Can I ask a question? Of course, yes. Did you specify the fact that, once you reach the top, I mean 2 power 64 minus 1, then you wrap around to zero? And then there is the usual thing, you know: a value in the half of the space behind is considered smaller, and the half of the space in front is an increase, so zero is bigger than 2 power 64 minus 1.
I: In previous protocols, when you're comparing sequence numbers that wrap around, you usually look at the old sequence number; and if it's in the upper half of the space, and you're comparing it to a new sequence number which is in the lower half of the space, then the latter is greater than the former.
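The comparison rule Dino describes can be sketched as serial-number arithmetic in the style of RFC 1982 (a sketch; the draft's exact rule may differ):

```python
# Sketch of the wrap-around comparison described above: with serial-number
# arithmetic, a value up to half the space ahead of the old one (modulo
# 2**64) counts as newer, so 0 is "greater than" 2**64 - 1.

BITS = 64
HALF = 2 ** (BITS - 1)
MOD = 2 ** BITS

def is_newer(new: int, old: int) -> bool:
    """True if `new` follows `old` in a 64-bit wrapping sequence space."""
    return 0 < (new - old) % MOD < HALF

print(is_newer(0, 2**64 - 1))  # True: zero follows the maximum value
print(is_newer(2**64 - 1, 0))  # False: the maximum is behind zero
```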
G: So the very last slide, if we go there, is just to say: once we, you know, put these two words about the wrapping nonces, we get back to Chris on the comment he made on the nonces.
G: I don't know what else is to be done on this document, so I think we are ready for last call. So I will open the floor now for anyone to comment on that; or, if not, I think it's just a matter of publishing the new version and then sending it to the list for last call.
A
So
the
first
one
is,
maybe
you
meant
to
say,
working
group
adoption.
As
far
as
I
remember,
the
documents
still
came
again.
J: I think that the two pending issues are the ones that were raised by the directorate review, and those are really minor ones that we can easily handle. And I think that, yeah, the document has been here for such a while, and so many iterations; even if, yeah, we are a little bit slow with the published versions, I think that the document is really stable enough, and it is really ready for the working group last call.
A: No, this is Albert López; we have to make the distinction here, too many Alberts and Albertos. So now, NAT traversal. Let me put up the slides.
K: As we explained in the presentation at the last edition, NAT traversal is a critical point in current deployments, especially for mobile node devices that are constantly moving and changing their endpoint connections.
K: This indicates to the RTR that the xTR with xTR-ID A has left the RTR. As we cannot authenticate the Map-Notify, because the authentication data of the Map-Notify is for the destination xTR, we need to validate its information; so we store the Map-Notify record and request the mapping from the mapping system.
K: The reply is used to validate the Map-Notify, not to insert it in the cache; otherwise, in the example of this slide, the RTR would lose the state associated with xTR-ID B. Once the Map-Notify is validated, we can remove from the entry the locators associated with the xTR-ID A indicated in the Map-Notify.
K: So, in the example that we were explaining, the RTR checks if its local RLOC A, the one in red in the slide, is present in the Map-Notify record. If the RLOC were present, the RTR would forward the Map-Notify, encapsulated in a data packet, to the xTR A; but as that is not the case, this means that the xTR-ID A is no longer associated with the RTR.
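The check-and-remove logic just described could be sketched like this (illustrative only; the cache layout and all names are my assumptions, not taken from the draft):

```python
# Illustrative sketch of the RTR-side logic described above. The cache
# structure and names are hypothetical, not from the NAT traversal draft.

def handle_validated_map_notify(rtr_cache, eid, record_rlocs,
                                local_rloc, xtr_id):
    """Process a Map-Notify record already validated via the mapping system."""
    if local_rloc in record_rlocs:
        # Our RLOC is still listed: the xTR is still behind this RTR, so
        # forward the Map-Notify, data-encapsulated, toward the xTR.
        return "forward"
    # Our RLOC is absent: the xTR with this xTR-ID has left this RTR.
    # Remove only that xTR-ID's locators; state for other xTR-IDs
    # (e.g. B in the slide's example) is preserved.
    rtr_cache[eid].pop(xtr_id, None)
    return "removed"

cache = {"eid-1": {"A": ["rloc-x"], "B": ["rloc-y"]}}
print(handle_validated_map_notify(cache, "eid-1", {"rloc-z"}, "rloc-a", "A"))
print(cache)  # xTR-ID A removed, xTR-ID B preserved
```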
K: Until here we presented the part about mapping updates; now I will present a couple more of the modifications introduced in the draft. The first one is related to the Map-Request reply handling. The previous version of the draft gave the option for an xTR to reply directly to the Map-Request; to do that, it is required that the Map-Server has information to punch the NAT, to forward the Map-Request to the xTR.
D: Yes, I have a comment. The draft has at least two implementations.
D: It has gone through several iterations and, as far as I know, it is still an individual submission, and I would like to understand if we can move to working group adoption.
A: Excellent question. I like the work that you did, because this is a nice draft that is really based on the pain you experienced while implementing this, as far as I have understood. So it's a nice piece of work, and it's been around for a while. I have only one question: last time, Dino presented also an alternative solution.
A: So let's say, what do you plan to do with your document? Because, if you want to push it, I mean, we cannot have two documents that explain two different NAT traversal approaches. So the working group at some point has to discuss whether or not to merge the documents.
I: Well, anyways, like I said in my presentation last time, I'll support whatever the working group wants to do with the document that I wrote. I'm not sure if I'll support a merge, because you have to take the pros and cons of each one and merge them to have a sensible design, and that will be a lot of work, and it typically compromises the design.
I: So, you know, okay, we would have to go through a thorough review of the pluses and minuses of each proposal, and either decide to go with one or the other, or try to come up with a third option, which is to take some of the ideas from here and the other ideas from there; and that's what you're suggesting with a possible merge.
G
Alternatives,
my
question
is
why
why
can't
we
have,
I
mean,
if
need
be,
and
there
is
that,
for
the
the,
not
capital
graphs
are
always.
A: What I suggest is the following. I understand that the request from the authors is to adopt the document, okay.
A: And if Joel agrees, what we can do is formulate a call for adoption on the mailing list. I just want to formulate it in a way that makes all the participants in the mailing list aware that there has been, recently, a different approach, which is the one Dino presented, so that they take note and can comment on that, okay, and let's see.
L: Yes, hi, good morning, good evening, hello. Sorry for being late, but I think I did a good PR for this; it took longer than I had planned, but there was good interest. So I want to give a quick update on what has changed in LISP Nexagon since IETF 108, mostly courtesy of Luigi, thank you. We had a pre-review of the security aspects, and that resulted in some clarifications in the wording.
L
One
was
tapping
the
ability
to
tap
into
traffic
from
mobility,
clients,
uploading
a
potentially
sensitive
data,
and
here
the
answer
is
that
we
use
an
rtr
scheme,
meaning
all
communications
are
tunneled
between
the
eid
and
the
rtr,
between
the
rtrs
and
in
between
the
rtrs
and
the
services,
the
h3
services,
and
therefore
we
can
do
tunnel
encryption.
L
I
know
tunnel
is
not
the
perfect
term
for
this,
but
these
are
tunnel
routers,
but
there
are
more
dynamic
encapsulations,
but
we
can
still
use
a
point-to-point
in-cap
encryptions
by
default
ipsec.
L
But
if
there
is
support
for
more
advanced
mechanisms,
then
we
can
do
that
on
a
per
tunnel
basis
without
tying
the
clients
which
are
in
cars
to
services,
which
is
in
edges
or
cloud
which
have
a
very
different
development
and
update
rates,
so
we're
not
tightly
coupling
but
we're
doing
it
by
tunnel
by
town.
L
The
next
concern
was
spoofing,
and
here
what
we
pointed
to
is
that
there
is
a
aaa
stage
where
the
eid
clients
are
allocated,
an
eid
that
reflects
it's
completely
logical
and
reflects
their
credentials
and
affiliations
and
also
their
kind
of
channels
they
are
allowed
to
subscribe
to.
This
is
all
encoded
in
the
eid
and
that
eid
is
provisioned
at
the
rtr,
which
is
the
home.
L
So
even
if
it's
two
homes,
you
cannot
skip
between
rtrs
as
you
choose
and
that's
the
spoofing
protection
you
have
to
spoof,
both
the
arloc
and
the
eid,
and
that
should
be
detected
by
the
underlay
network
same
goes
for
the
h3
services.
L
Devops
considerations
and
therefore
provision
at
the
rtrs
and
again,
if
you
need,
if
you
want
to
spoof
them,
you
you'd
have
to
spoof
both
underlaying
over
the
addresses
and
the
routers
between
the
services
and
the
rtr
should
detect
that
based
on
a
double
lookup.
L
The
next
concern
was
a
privacy,
and
here
we
pointed
to
the
fact
that
that
was
the
point
of
using
standard
lisp
to
create
the
skeleton
on
his
network
and
the
geoprivacy
is
protected
by
the
ability
of
say
toyota
to
bring
their
own
rtr.
It
would
be
provisioned
to
the
system.
It
just
has
to
function
like
a
standard
list
rtr
and
support
the
rfcs
that
are
used
in
the
list
next
ones,
mostly
six,
eight
three
zero
and
eight
three
seven
and
three,
the
the
signal
free
and
the
a3
services
themselves.
L
Don't
have
the
ip
address
of
the
age
of
the
eid
clients.
So
they
don't
know
they
know
the
eid,
the
eid
of
the
client.
They
know
its
credentials
its
credibility,
but
they
don't
know
it's
real
ip
address.
L
So
we
have
a
geo
privacy.
The
last
point
was
about
fake
news.
What,
if
you
spread
a
bunch
of
traffic
jams
and
problems
where
they're
not
so
you'll,
have
a
clear
path
to
work,
and
here
you
know
the
the
explanation
was
that
this
is
a
source
crowdsource
system,
so
nothing
is
set
and
published
to
subscribers,
based
just
on
one
account.
There
has
to
be
multiple
accounts.
L
Are
adding
or
subtracting
to
its
credit
score?
The
credit
score
based
on
what
it
detected
versus
what
other
people
detected
and
was
verified
is
passed
back
to
the
h3
to
the
to
the
aaa
service
to
the
diameter
server,
but
not
from
a
specific
h3
service,
but
from
all
h3
services.
So
the
aaa
doesn't
know
where
the
client
has
been.
Why
did
he
get
these
credit,
scored,
increase
or
decrease,
and-
and
that
was
okay-
I
think
for
this
examiner
it
was
okay.
L
So
if
any
questions
I
can
move
on.
L: All right, so the next topic was, you know, the presentation to COIN, and it has to do with a presentation given yesterday about using virtual routing for edge computing and for doing concurrency in computing. And here we showed this Nexagon in two examples, to triangulate what's going on here, to try and triangulate and maybe, as a next step, generalize.
L
This
draft
for
other
use
cases
and
the
logic
may
be
interesting
to
the
group
and
I
can
go
through
it
really
quickly.
I'm
not
gonna
repeat
the
whole
presentation,
but
if
you
can
go
down
one
side,
okay,
so
here's
the
problem.
L
At
the
data
center
using
frameworks
like
spark
or
data,
breaks,
leveraging
spine
lifts
get
together
for
concurrency,
but
what
if
we
have
to
process
on
the
edge
I
mean,
if
we
don't
have
to,
we
will
process
in
the
data
center,
but
if
we
have
to
we
have
to
process
on
the
edge.
L
Why
do
we
have
to
process
on
the
edge
there's
multiple
reasons
a
like,
if
you
think
of
a
data
center
as
a
brain
that
can
think
about
things
and
spawn
the
concurrent
load
or
for
it
takes
a
few
seconds,
and
all
that
the
edge
is
more
like
a
a
spine
reaction,
an
intelligent
reaction
to
physical
world
activities
and
has
to
respond
with
a
subsequent.
L
The
other
reason
is
that,
if
the
raw
data,
the
fresh
raw
data
load,
is
too
much
to
upload
to
to
the
cloud,
I
have
to
process
it
in
the
edge
and
the
last
is
a
regulatory
business.
L
Also,
if
I
wanna,
if
I
wanna,
pull
as
much
ai
as
I
can
out
of
the
car
from
the
same
exact
principle
of
concentrate
as
much
as
possible,
then
I
have
to
do
it
at
the
edge.
I
cannot
pull
a
lot
of
the
ai
all
the
way
to
the
cloud
and
and
keep
only
the
necessary
ai
in
the
car.
L
There's
multiple
reasons
for
that.
The
car
is
very
idle.
It
has
a
different
depreciation
and
so
on.
So
this
is
the
general
a
problem
statement
and
if
you
go
next
slide,
we
gave
nexagon
as
an
example
where
the
cars
are
generating.
Every
car
generates
four
megabits
per
second
data,
so
10
000
cars
is
40
gigabits
and
that's
nothing
compared
to
100
000
cards,
so
I
have
to
process
it
at
the
edge.
I
have
to
react
quickly
if
there
was
a
stroller
there
and
of
course,
if
I'm
an
av
I'll
stop.
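The aggregate-bandwidth figures quoted above check out (my own arithmetic, just restating the slide's numbers):

```python
# Checking the numbers quoted above: each car generates about 4 Mb/s of
# raw data, so 10,000 cars produce roughly 40 Gb/s, and 100,000 cars
# roughly 400 Gb/s, which is why the processing has to stay at the edge.

def aggregate_gbps(per_car_mbps: float, cars: int) -> float:
    """Aggregate load in Gb/s for a fleet of cars."""
    return per_car_mbps * cars / 1000.0

print(aggregate_gbps(4, 10_000))    # 40.0 Gb/s
print(aggregate_gbps(4, 100_000))   # 400.0 Gb/s
```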
L: But what about the cars behind me? They should know about it, and so on; and there is municipal and OEM reasoning why you should do it at the edge, and we give Nexagon as an example. If you go down one more.
L
We
gave
another
example
a
cyber
example
where
terabit
switches,
maybe
hundreds
of
those,
are
sample.
The
the
0.1
percent
rate
and
to
create
gigabits
per
second,
and
that
cannot
be
go,
cannot
go
all
the
way
to
the
cloud,
and
I
have
to
process
that
in
the
edge,
and
it
will
explain
how
you
use
the
kind
of
a
nexagon-like
architecture
to
do
that.
L
So,
if
you
go
down
yeah
okay,
this
is
why
this
is.
If
you
go
down,
you
all
know
that
this
ability
to
use
the
purely
logical
addresses
no
topological
constraints
because
of
the
mapping
system
and
list
of
ability.
If
you
go
down
to
do
a
channels,
multicast
channels
which
are
good
for
millions
of
channels
for
thousands
of
users
very
good,
for
if
I
reduce
data
to
indexes,
then
I
can
subscribe
to
those
indexes,
and
this
subscription
is
for
a
much
more
portable
feed
than
the
raw
data
after
reduction.
Okay.
L
So
this
is
why
the
least
part
of
all
the
overlays
out
there
it's
very
ready
next
slide,
and
this
is
the
you
know.
What
is
the
hexagon?
What
are
the
tiles?
Every
problem
will
have
its
own
tiles.
The
problem
that
the
tiles
of
cyber
is
the
flows,
the
five
couple
throws
of
masks.
L
So
if
I,
if
I'm
scooping
thousands
of
five
tuples
out
of
millions,
then
in
multiple
points
in
the
network,
they
will
still
be
steered
to
the
same
location
where
they'll
be
reduced.
The
distribution
of
the
flow,
the
bayesian
behavior,
will
be
learned,
visibility
will
be
reflected
and
the
surprise
will
be,
which
will
from
what
is
expected
will
create
a
hyper
attention.
L
So
this
has
been
very
successful.
It
was
able
to
it's
very
easy
to
put
on
top
of
the
network
and
it
was
able
to
detect
ddos
like
a
minute
before
a
firewall
next
slide,
and
this
is
the
reduction
factors
next
slide.
I
mean
I
want
to
get
to
the
gist
of
it.
Okay,
this
is
the
gist
of
it.
Is
we
use
this
for
this
pattern?
L
We
take
a
problem.
We
pre-divide
it
algorithmically
to
something
that
can
be
addressable.
L
We
steer
the
raw
samples
to
these
addressable
contexts.
These
contexts
apply
reduction
functions
and
then
we
subscribe
to
those
that's
it.
So
the
point
here
is
that
if
there
is
interest
in
the
group
to
further
develop
this
the
design
of
what
luigi
presented
than
other
use
cases,
then
it
would
be
good
to
know,
and
if
anybody
wants
to
help
on
that
very
welcome,
that's
it.
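The three-step pattern just summarized (pre-divide, steer, reduce, subscribe) can be sketched as a toy illustration; the tiling function and the reduction used here are placeholders of my own, not the draft's:

```python
from collections import defaultdict

# Toy sketch of the pattern described above: pre-divide the problem into
# addressable contexts (tiles), steer raw samples to them, apply a
# reduction function per tile, and let consumers subscribe to the
# reduced feed. Tiling and reduction here are placeholder assumptions.

def tile_of(sample: dict) -> str:
    """Pre-divide: map a raw sample to its addressable context (tile)."""
    return f"tile-{sample['x'] // 10}-{sample['y'] // 10}"

def reduce_tile(samples: list) -> dict:
    """Reduction function applied inside each tile."""
    return {"count": len(samples)}

def process(raw_samples: list) -> dict:
    tiles = defaultdict(list)
    for s in raw_samples:  # steer each sample to its tile
        tiles[tile_of(s)].append(s)
    # subscribers receive only the reduced per-tile feed, not raw data
    return {tid: reduce_tile(ss) for tid, ss in tiles.items()}

feed = process([{"x": 3, "y": 4}, {"x": 5, "y": 9}, {"x": 42, "y": 7}])
print(feed)  # {'tile-0-0': {'count': 2}, 'tile-4-0': {'count': 1}}
```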
L: Okay, so I have a question, Luigi. Yes. So, given that, you know, we want to mark this and move on to generalize, so that every problem can have its own tiles and channels and feeds: I want to finish the Nexagon draft to be an RFC, and the question to you and Joel is, do we have to wait for the bis documents to do that, or can we just publish it as-is and then refer to the bis in a generalized draft?
G: Yes, it is Alberto, just a quick comment, and I don't have an opinion, I just raise the point: the intended status of Nexagon right now is informational, not experimental. So I don't know if we still have to wait for the bis documents; I'm perfectly fine waiting, but, you know, I don't know if we need to.
A
My
personal
opinion,
honestly
is
even
if
it
is
informational,
it
would
be
better
if
it
points
to
the
beast
documents,
because
is
just
for
more
credibility
from
the
for
the
content,
because
at
some
point
somebody
could
read
and
say
yeah
that
is
based
on
the
old
specs,
not
the
the
proposed
standard.
L: All right, so I think, you know, it's your consideration, it's your decision; I understand what was just said. Just be aware of the other side of it, which is: if you want to hand off this Nexagon, as an RFC, to organizations like AECC, the automotive edge...
L
Anything
itf
is
all
right
already,
not
their
industry.
It's
not
like
dealing
with
juniper
and
cisco
and
those
and
the
minimum
has
to
be
in
rfc.
So
until
then
you
know
it's,
it's
really
delaying
it.
So
you
know
just
observe
the
the
progress
and,
if
it's
reasonable,
then
fine,
if
not
just
make
a
call
up
to
you.
A
Again,
I
will
give
you
my
my
personal
opinion.
Okay,
these
documents
are
very,
very
close
to
be
done
so,
okay,
I
would
wait
a
few
weeks
just
to
see
if
the
albert
just
submitted
new
versions,
maybe
today
or
tomorrow,
they
they
clear
all
the
discuss,
if
not
actually,
like,
I
think,
as
a
chair
to
please
have
a
look
to
the
to
the
to
the
new
versions.
Okay,
so
maybe
maybe
in
a
couple
of
weeks
these
documents
are
through.
A: Okay, okay. So, unless there are other comments or questions, we move forward to Dino, for encoding names. I'm going to take up your slides. Can you hear me okay? Yes.
I: Okay, great. Okay, so I'm going to talk about, I guess, the 11th revision of the LISP name-encoding spec. Next slide. So, an overview of the spec: it's an extraordinarily simple spec.
I: Basically, this is a compact way of encoding an ASCII string in either an EID record or an RLOC record. There's an AFI that has been defined before, AFI number 17, that's called a Distinguished Name; I thought the draft could use that, and we could encode names in either of those records. It's much more compact than LCAFs, and it has much more flexibility than LCAFs, because it's shorter; and these things will typically be nested in other things, so the shorter the better, to make the packet sizes smaller.
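A minimal sketch of what such an encoding looks like on the wire: a 16-bit AFI of 17 followed by the ASCII string. The null terminator used here is my assumption about the format; check the draft for the exact rules.

```python
import struct

# Sketch of the Distinguished Name encoding described above: AFI 17
# (16 bits, network byte order) followed by the ASCII name. The trailing
# null byte is an assumption, not confirmed wire format.

AFI_DISTINGUISHED_NAME = 17

def encode_dist_name(name: str) -> bytes:
    """Encode an ASCII name as an AFI-17 Distinguished Name field."""
    ascii_bytes = name.encode("ascii")
    return struct.pack("!H", AFI_DISTINGUISHED_NAME) + ascii_bytes + b"\x00"

print(encode_dist_name("xtr1").hex())  # 0011 + 78747231 + 00
```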
I: What's the advantage of distinguished names is that you can provide a self-documenting set of mapping records in the mapping database. You know, since I've done my implementation, it's been eight years now, these names are lifesavers in debugging problems and maintaining systems. You know, the idea is not very original: it basically came from IS-IS first and OSPF second, where you could put host names to name LSP IDs in IS PDUs; those were, you know, in the early 90s.
I
Those
things
were
invaluable
tools,
and
I
thought
we
could
use
something
in
inside
of
list
to
do
that
here,
and
it
also
allows
you
to
do
these
groupings
if
you
wanted
to
have
a
name
that
maps
to
other
things,
to
other
names
which
then
maps
to
you
know
traditional
our
looks
or
whatever
the
lcaps
can
be.
You
can
set
up
a
multi-stage
lookup
if
you
chose
to
do
so,
and
it
turns
out
that
the
names
could
be
formatted
in
a
way
where
they
fit
into
a
hierarchy
of
list
ddt.
I
So,
for
instance,
a
look
up
of
slash
root,
slash
dino
slide,
slash
bangkok,
which
is
a
fully
a
fully
specified
eid,
distinguished
name
the
root.
Could
you
know
parse
the
slash
roots?
Do
you
know,
because
that's
the
authoritative
prefix
at
the
root
level
and
then
point
you
to
a
set
of
children
that
are
the
next
level
like
slides
and
then
the
next
level
would
be
bangkok
would
be
the
map
servers.
So
there's
no
changes
at
all
that
have
to
be
done
list
ddt.
I
You
just
have
to
con
figure
the
authoritative
pre-six
to
be
the
set
of
characters
versus
the
set
of
bits
that
we
use
for
ipv4
and
ipv6
eids,
and
the
same
goes
for
supporting
lisp
decent
list.
Decent
is
a
way
of
hashing
the
id
to
pick
a
map
server
to
register
to
or
a
map
server
a
map
resolver
to
send
map
requests
to.
So
since
it's
a
fully
it's
since
it's
just
a
set
of
bits,
you
can
still
hash
across
that
to
pick
a
map
server.
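The hashing idea just described can be sketched as follows; because a distinguished-name EID is just bits, it hashes exactly like an IPv4 or IPv6 EID. The server list and the choice of SHA-256 here are illustrative assumptions, not taken from LISP-Decent.

```python
import hashlib

# Sketch of the LISP-Decent idea described above: hash the EID (here a
# distinguished-name string) to deterministically pick a Map-Server from
# a configured set. Hash choice and server names are assumptions.

def pick_map_server(eid_name: str, map_servers: list) -> str:
    """Deterministically map an EID name onto one of the Map-Servers."""
    digest = hashlib.sha256(eid_name.encode("ascii")).digest()
    index = int.from_bytes(digest[:8], "big") % len(map_servers)
    return map_servers[index]

servers = ["ms1.example", "ms2.example", "ms3.example"]
print(pick_map_server("/root/dino/bangkok", servers))
```

Every node that applies the same hash to the same EID picks the same Map-Server, which is what lets the decentralized mapping system work without extra coordination.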
I: So the decentralized LISP mapping system works with zero changes as well. And Map-Request lookups usually are doing exact match, typically: so if you have a fixed name, or you're using maybe a hash of a public key where it's been defined as being fixed length, you're always doing a lookup with a mask length, and you're always getting the Map-Reply back with the same mask length; that's what I meant by exact match. Next slide.
I
So
here's
an
example:
if
you
look
at
the
top,
this
is
the
eid
prefix
that's
being
registered
as
g
xtr,
one,
the
the
actual
mass
length
of
that
would
be.
What
is
it
one?
Two,
three
four
five
six
seven
times
eight
so
56
bit
mask
length
that
would
be
stored
in
the
system
if
you
wanted
to
use
the
same
concept
as
as
ipv4
and
ipv6
masculine.
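The mask-length arithmetic Dino does on the slide is simply eight bits per ASCII character (the 7-character name used below is a stand-in, since the exact name on the slide isn't legible in the transcript):

```python
# Mask-length arithmetic from the slide: each ASCII character contributes
# 8 bits, so a 7-character distinguished name yields a /56.

def name_mask_length(name: str) -> int:
    """Mask length, in bits, of a distinguished-name EID prefix."""
    return len(name.encode("ascii")) * 8

print(name_mask_length("xtr-one"))  # 7 characters -> 56
```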
I: Now, the RLOC name that you see down there, that's called xtr1: that's encoded in this RLOC record as an AFI list, where the first item in the AFI list is an IPv4 RLOC, an AFI, and then the second one is the AFI 17. So this is the representation of how an LCAF AFI list would be encoded; an example of something being nested. Next slide.
I
So
the
changes
that
I
put
in
to
reflect
the
comments
that
came
on
the
mailing
list
was
describe
better
how
the
mass
lengths
work
in
each
of
the
messages.
So
I
did
that
and
then
I
added
a
use
case
section
saying
that
we
have
some
examples
that
use
distinguished
names
and
those
three
documents.
There
are
examples
of
using
distinguished
names.
I: I also added a name-collision considerations section, because the structure of a distinguished name can be decided based on what instance ID you register in the mapping system, and the mapping system could decide what it is. Joel had a comment that, if there are two different types of distinguished names registered, will they conflict with each other; and if they would, they would have to be in separate instance IDs.
I
So
we
started
the
draft
in
april
of
2016
and
we're
now
at
the
latest
draft
that
I
just
submitted
a
few
days
ago
is
2020..
So
this
this
very
simple
seven
page
draft
has
been
going
on
now
for
four
years
next
slide
and
if
you
look
at
what
we
did
is
we
had
the
initial
submission
there
on
the
right
which
is
in
green
and
then
we
just
spent
a
good.
I
You
know
one
two,
three
four,
five,
six,
seven
eight
four
years
of
just
updating
the
changes
to
the
draft,
and
so
you
know
maybe
it
comes
to
a
point
where
you
have
to
ask:
why
is
the
progress
been
so
slow?
There
is
utility.
There
has
been
some
support.
There
has
been
commentary
over
the
course
of
the
six
years
and
there's
really
has
been
no
strong
objections.
I: Well, I have no reply to that. I don't know what your definition of a distinguished name is, but I'm using that term because it's assigned to AFI 17. What it is, is an ASCII encoding, or it can be a Unicode encoding if we use a different AFI; but it's just a string, and, you know, it's no different than a DNS name, where you have to coordinate what the value is. So I don't know what...
I: In the particular use case, they have to have structure; and if you look at the ECDSA auth draft, it shows an encoding where you do not have collisions, and it's as simple as that. And if you think another use of it is going to collide with that specification, you don't run it in the same instance ID. So these are very solvable problems. So I don't know what the objection...
I: So this is my third attempt, at least, to request the working group to work on it. So it's not like we have to make this an RFC right away, because we can't do that anyway; but does the working group want to work on such a feature? Because it's actually useful, and it would be nice to build practical solutions.
A: Let me try to explain my concerns, okay. So the point is: the document has not been really evaluated by the group. You presented the document once, then you had several updates, etc.; but what happened is that you moved the document forward, and at some point you said it's time for the working group to adopt this document.
A: No, no, no, no! That's not fair, that's not fair! I'm trying to see what is happening. So there are some concerns about the document, and what I suggest...
A: What I suggest is that you and Joel really try to discuss what the concerns are and what the technical issues are concerning this document, okay; and then we go back to the mailing list and the working group and say: these are the issues that Joel had, and does anybody have other concerns?
A
Are
people
really
interested
in
having
these
in
lisp,
because
you
think
this
is
a
feature
that
lease
needs?
It's
not
a
feature
that
you
need
and
it
works
for
you
when
you
are
alone
in
your
implementation,
something
that
has
to
work
with
with
the
internet
and
the
lisp
as
a
whole.
That's
my
point.
Okay,
if
we
reach
this
this
level
of
specification
because,
in
my
opinion,
the
documents
I
I
told
you
this
on
the
native
list-
is
under
specified.
A: Keep in mind that this is the point being discussed; because it's not that I write something concerning LISP and I say, okay, the working group needs to adopt, should consider adopting, this document. Okay, it's about the interest: how useful it is for LISP. I understand that from your perspective it is very useful, but it's about the working group, see what I mean?
Q: I just wanted to make a short comment, sorry. So, you know, it looks like Joel has some specific objections, right; and probably it seems that it would help to have him articulate better what the objections are. So I think at this point the suggestion that Luigi is giving is: spend some time with Joel. If the two of you sit down, and you try to basically understand what Joel is saying, and Joel tries to articulate in more detail what his objections are, that will eventually lead to...
A: Is it okay for everybody if we proceed in this way? Does anybody have further comments? I invite all of you to read the document anyway and, besides the technical concerns that you may or may not have, make up your mind whether or not this is something useful for LISP and the working group.
G: Okay, so a quick comment. Yeah, sorry, I haven't spent the time, maybe I should have, on this document, and I don't have the expertise; but my personal opinion, as a working group participant, is that this could potentially be something difficult for the protocol.
A: Try to understand: it's just that we were not able to declare a strong consensus, and there were technical concerns about your document. That's the point. We can work on the technical concerns and make a formal call, checking whether or not the working group is interested. That's it; simple as that.
O: Okay, great, and I hope you can also see me. Yes? Okay. So, in this presentation we'll show how we use LISP in real-life deployments, and this presentation is based on a paper that you can find here on arXiv.
O: We support the requirements of core enterprise networks; in short: mobility, segmentation, and resource efficiency. Our technology also provides segmentation: we're using the VXLAN VNI and also group-based policies, which is quite a widespread standard in the industry, basically mapping IP addresses to a group and then enforcing policies on this group. And it also provides layer-2 stretching, and we use some LISP capabilities for that as well. Next slide, please.
O
So why did we use LISP for this product? Well, first of all, to provide mobility at both layer 2 and layer 3; also to reduce and distribute the data-plane state, and especially to reduce the forwarding state. Our main goal is reducing CAPEX: basically, if we have fewer entries in the data plane, we need a smaller FIB.
O
This means less memory, and this means cheaper routers, because they will need smaller chips or memories or TCAMs or whatever they are using. And also because with LISP we can provide incremental deployment: basically, we can leave all routers in the underlay and just update the ones that are using LISP. Next slide, please.
O
D
O
So we use standard LISP with a few tweaks. As I said before, we also use VXLAN as the data plane, and we map endpoints individually. That means that all EIDs are /32s, and we store four mappings for each endpoint: we have their IPv4 address, IPv6 address and MAC address, and then we also have a mapping of MAC EID to IP EID, which we use for the layer 2 stretching that I will explain later. Next slide.
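The four per-endpoint mappings described here could be sketched roughly as follows. This is a minimal illustration only; the addresses and names are hypothetical and not taken from the actual SD-Access implementation.

```python
# Illustrative sketch of the four map-server entries kept per endpoint,
# as described in the talk. Names and addresses are hypothetical.

def endpoint_mappings(ipv4_eid, ipv6_eid, mac_eid, xtr_rloc):
    """Build the four mappings registered for a single endpoint."""
    return {
        ipv4_eid + "/32": xtr_rloc,      # IPv4 EID (a /32) -> xTR RLOC
        ipv6_eid + "/128": xtr_rloc,     # IPv6 EID -> xTR RLOC
        mac_eid: xtr_rloc,               # MAC EID -> xTR RLOC
        ("mac->ip", mac_eid): ipv4_eid,  # MAC EID -> IP EID (for L2 stretching)
    }

mappings = endpoint_mappings("10.1.2.3", "2001:db8::3",
                             "aa:bb:cc:dd:ee:ff", "192.0.2.10")
```

The first three entries all resolve to the same xTR RLOC; only the fourth maps one EID type to another, which is what the layer 2 stretching described later relies on.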
O
And regarding the wireless part: basically, to join LISP and wireless, what we do is, on one hand, connect access points to xTRs and, on the other hand, connect the WLAN controller to the map server. This way, when the WLAN controller detects that a host is doing a mobility event, it sends a Map-Register to the map server, so the location is updated. And I also have to mention that we add a VXLAN tunnel between the xTRs and the access points. Next slide, please.
O
Well, actually, I mean, this VXLAN tunnel I was mentioning between the access point and the xTR is different from the ones in the underlay. The one that exists between the xTR and the access point is just to be able to have several access points on the same xTR, but this information is not propagated into the mapping system; we just have the regular ones from the underlay. Next slide, please.
O
So I will explain just a few details about how we handle all the mobility with the wireless part. As I said, we have this static VXLAN tunnel; basically, we needed it to carry the layer 2 frames between the xTR and the access points. And what we also do is store in the map server the IP of the access point and the xTR where it is connected; we'll see later why we need that. And when the WLAN controller detects a roaming event from an access point via the CAPWAP protocol.
O
As I said before, it sends the Map-Register to the map server to update the location of this endpoint. And here is where the mapping of access point (AP) to xTR RLOC is useful: basically, the WLAN controller does a Map-Request to learn the xTR RLOC of the new access point, and then it sends the actual Map-Register for this endpoint. Next slide, please.
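The two-step roaming flow just described, the WLAN controller first resolving the new access point's xTR RLOC and then re-registering the endpoint, might be sketched like this. It is a toy model: `MappingSystem`, its methods and all identifiers are assumptions made for illustration, not real APIs.

```python
# Toy model of the wireless roaming flow described above.
# All class and function names are illustrative assumptions.

class MappingSystem:
    """Stands in for the LISP map server (EID -> RLOC database)."""
    def __init__(self):
        self.db = {}

    def register(self, eid, rloc):   # models a Map-Register
        self.db[eid] = rloc

    def lookup(self, eid):           # models a Map-Request / Map-Reply
        return self.db[eid]

def on_roaming_event(ms, host_eid, new_ap):
    """WLAN controller reaction to a CAPWAP roaming event."""
    # 1. Map-Request: which xTR RLOC sits behind the new access point?
    xtr_rloc = ms.lookup(new_ap)
    # 2. Map-Register: update the endpoint's location at the map server.
    ms.register(host_eid, xtr_rloc)
    return xtr_rloc
```

With the AP-to-xTR entries pre-registered, a single roaming event resolves to one lookup plus one registration, which is why the AP mapping mentioned in the talk is stored in the map server at all.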
O
So once we have updated the location of the endpoint in the map server, the new xTR receives a Map-Notify in order to update its local state, so that traffic can reach the new host. And we also use a couple of mechanisms to improve connectivity.
O
They are the away entry and the SMR. Can you switch to the next slide, please? Basically, what we do is use the SMR: if an xTR receives traffic for an EID that is no longer on this xTR, it sends an SMR to the origin xTR, so it can update its map-cache. Well, how does this xTR know that the EID is no longer connected? We have the other mechanism, the away entry, that basically remembers this information. And also the away entry.
O
What it does is tell the xTR to forward this traffic to the new xTR. So, well, it's what you can see here in the graph: basically, with the away entry, instead of dropping traffic for an EID that is no longer present, the xTR forwards it to the new xTR.
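The combined away-entry and SMR behaviour described above could be approximated like this. It is a simplified sketch; the dictionary layout and all names are invented for illustration, not taken from any router implementation.

```python
# Simplified sketch: the old xTR handling traffic for an EID that moved away.
# Data layout and names are invented for illustration.

def handle_incoming(xtr_state, dst_eid, packet):
    """Forward (not drop) traffic for moved EIDs and solicit a map-cache update."""
    away = xtr_state["away_entries"]           # EID -> new xTR RLOC
    if dst_eid in away:
        xtr_state["smr_sent"].append(dst_eid)  # SMR toward the origin xTR
        return ("forward", away[dst_eid])      # relay to the new xTR
    return ("deliver", None)                   # EID still local
```

The away entry avoids packet loss during the handover, while the SMR makes the origin xTR refresh its map-cache so subsequent packets go directly to the new xTR.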
O
I think that's all. Next slide, please. Oh yeah, and finally, a design note about.
O
O
So this way we can forward the packets until the Map-Request is resolved in the xTR, and this way we don't drop any packet. We also use this proxy to provide external connectivity. Next slide, please. And well, finally, just a few notes about layer 2 stretching. By layer 2 stretching I mean being able to extend layer 2 domains in a scalable way, and this is something that is quite useful for enterprises, because sometimes you want to avoid ARP broadcasts propagating through the whole network.
O
I mean, it's especially important in medium and large deployments. Also, you have some layer 2 protocols that require layer 2 visibility, for example Apple Bonjour, and it's also very common to have legacy IoT devices that are not using IP, so you need some way to connect them. And well, what we basically do is quite simple and straightforward.
O
The source xTR encapsulates the layer 2 frames to the destination xTR; we can do that because we have that option in VXLAN, and we use the map server to resolve the missing information. For example, we use the destination MAC address to know which xTR has this endpoint, and we know this because we previously registered in the map server both the IP address and the MAC address of an endpoint. And also, to be able to propagate ARP requests, we use the MAC-to-EID mapping.
O
So this way we can know which IP EID corresponds to a specific MAC address. And well, just as a small note, we forward all ARP requests instead of answering them: I mean, the xTR could respond to some ARP requests itself, but we forward them for coherence with IPv6 NDP.
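Using the two MAC-keyed registrations mentioned above, resolution at the source xTR might look roughly like this. This is a hypothetical helper for illustration only, not the product code.

```python
# Hypothetical helper: resolve forwarding info for a layer 2 frame from the
# MAC-keyed map-server registrations described above.

def resolve_l2(map_server, dst_mac):
    """Return (xTR RLOC holding the MAC EID, IP EID behind that MAC)."""
    rloc = map_server["mac_to_rloc"].get(dst_mac)   # which xTR to encapsulate to
    ip_eid = map_server["mac_to_ip"].get(dst_mac)   # used when propagating ARP
    return rloc, ip_eid
```

The RLOC answers "where do I send this frame", while the MAC-to-IP-EID entry is what lets ARP requests be steered toward the right xTR instead of flooded network-wide.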
O
So now I will provide some results that prove, more or less, what I've been saying until now. Next slide, please. So the first measurement we performed was trying to prove that we could reduce the data-plane state, and what we did was count the map-cache entries in the proxies and the xTRs, since, as I told you before, the proxies have all the map server data.
O
We should be seeing in the xTRs just a fraction of the mappings that the proxies have, and we have data from two deployments. You can see here the topology, which is quite simple; I'm not showing the underlay for clarity.
O
We have a deployment with 150 hosts and another one with 450 hosts. There are usually two proxies for redundancy, six xTRs and approximately 20 access points per xTR. Next slide, please. And here you can see the results: these graphs plot the average number of map-cache entries for IPv4 EIDs, for the proxies and the xTRs, for the two deployments and for two weeks, and you can see that usually we have more entries in the proxies than in the xTRs. Next slide.
O
O
For example, you can see that in deployment B there is a number of devices that are always connected; these are iPhones, if I recall correctly. And you can also see that the proxy, since it's using PubSub, or well, actually a variant of PubSub, is always following the workday schedule.
O
For example, you can see that there is no one during weekends, and you have all these spikes across the weekdays that go down when people leave the office, because we're talking about an enterprise network, so people come and go. And you can also see, on the Saturday afternoon, well, here in the top right, something like a large drop in the number of entries in the xTRs.
O
O
This is due to the fact that we are using a 24-hour TTL for the map-cache, so more or less on Saturday afternoon, a day after people have left the office, all these entries get evicted. And you don't see this in the proxies because, since they are synchronized, when an endpoint leaves the office it's very possible that someone wants to talk with it, and since this endpoint is no longer in the network, the proxy will receive a negative Map-Reply and the entry will get deleted from the map-cache.
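The 24-hour TTL eviction behaviour described here can be mimicked with a tiny sketch. It is deliberately simplified (times in hours, hypothetical names), just to make the Saturday-afternoon drop in the graphs concrete.

```python
# Tiny sketch of map-cache TTL eviction (times in hours; hypothetical names).

TTL_HOURS = 24

def evict_expired(map_cache, now_h):
    """Keep only entries refreshed within the last TTL_HOURS."""
    return {eid: entry for eid, entry in map_cache.items()
            if now_h - entry["last_refresh"] <= TTL_HOURS}
```

An entry last refreshed when its owner left the office on Friday evening survives until roughly Saturday afternoon, which matches the drop in xTR entries visible in the plot.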
O
O
O
We also used the traffic generator to emulate roughly 200 xTRs, and they were all generating traffic to an external network. And we also used the traffic generator to create handover events between the two physical routers at a rate of 800 moves per second; this was a requirement of the deployment that I was telling you about before. Next slide.
O
Please. And you can see here the results: we measured the handover delay for the LISP control plane and also for an implementation with a BGP control plane, and you can see there's a difference of approximately one order of magnitude. This is due to the fact that LISP only notifies the routers that need an update, while BGP needs to send updates to all the routers. And you can also see that this creates more variability, you know, because it depends on the moment that BGP receives the update, and then the connection will restart.
O
So you can see that there is much more variability in the BGP delay than in the LISP delay. Next slide, please. And that's all. In conclusion, we presented an example of a LISP deployment in a private setup. We have shown how LISP can be used to reduce data-plane state and how we created a distributed mobility data plane.
O
When we compare this setup to the classical WLAN controllers, we have improved routing, because we are not sending traffic to the WLAN controller and back into the network, and this also makes it possible to scale better. And we have shown how it can reduce the mobility handover delay. I think that's all, last slide, but yeah, thanks. If you have any questions.
A
Thank you very much, Jordi. Any questions or comments from the audience?
A
I actually have one. Oh, there is Alberto; let's go ahead.
G
Yeah, I just have one quick comment. I wanted to say that Jordi has done a great summary of the paper in the presentation, but the actual paper is available at that link. That is very.
A
I'll switch back to my question. So, it's a nice piece of work that you did; first of all, it's very interesting. And you seem to rely quite a bit on the MAC address, okay? Now, that works very well in enterprise networks, but if you have a more public deployment, let's say a huge campus that wants to offer Wi-Fi connectivity, maybe you will connect users on iOS or Android, and I discovered this week that they have features to change the MAC address dynamically because of privacy reasons, okay.
A
O
O
O
I really don't know how we would solve it, but I think it's a problem that everyone else would have as well, because it's not LISP-specific, no.
B
But for those who are tracking the missing pieces: there was a BoF called MADINAS, M-A-D-I-N-A-S, yesterday, driven by a bunch of work over at IEEE on randomizing MAC addresses, periodically changing MAC addresses, and what that does for IP activities which depend on MAC addresses.
N
N
Q
In general, having a centralized database, right, for the MAC and the IP, like the table that you have there, that probably makes things a little simpler. And keeping in mind that, sure, if you change the MAC address at every change of access point, then, yeah, I mean, you need something in the first place.
N
Yeah, actually it's interesting. I mean, it's what you say, right: from a scalability perspective, the solution just holds; the main effect is in mobility, right. The problem is that this device that randomizes its MAC every time it changes access point needs to re-authenticate again, and this affects the speed at which things happen. But yeah, the way SDA has been designed, you can detect that the device is the same, right, once it re-authenticates. So the database is.
I
I
Like, the problem is, a MAC address is being used as an EID, and when you reassign EIDs, that causes problems. If you just run layer 3 over this, you don't care what the MAC address is. So it could arguably be a misuse of layer 2, right, and the problems we traditionally have with layer 2.
Q
I don't think there's a problem if you just randomize the MAC address, as long as it's not being used anywhere else but locally, like on a layer 2 LAN, right. So, I mean, I don't see the problem there. The fact that you want to build L2 overlays means that the problem now moves into the overlay. That's the problem.
G
One thing to highlight is what Mar just said: on SDA, which is the solution that this paper evaluates, you still have an authentication to be allowed into the network, right, and in that authentication you have to establish your unique identity against the system. So it's not going to matter in this particular solution.
F
O
M
M
I
I mean, the EID anonymity draft basically says that's a feature: by randomizing an EID, you can use multiple EIDs at the same time and phase them out. And since you have a mapping, you know, they can be added to the system a lot easier when you're on an overlay. So.
I
A
I
Are the RFC numbers going to be assigned soon to this document?
B
I remember that once it goes into the queue, an RFC number is allocated, and then you can start referencing it. I'm trying to address Sharon's concern: we could still write documents that reference the bis, but we can start referencing the RFC numbers, and we could have some parallelism or concurrency while it's sitting in the editor queue and we're doing all the, you know, editorial comments with the RFC Editor, right. You believe that we.
B
I
B
I
I want the documents that depend on this, the ones that are going to go fast-track right on its heels, to be rewritten, so that the editing can be done while we're waiting for the RFC Editor. You know, trying to build some concurrency instead of serializing behind the RFC Editor.
I
Yeah, but I remember, in the past when we had 6830, that the current documents we were working on had to be changed, right, and they will, the ones that have not been sent to the IESG yet. That's what I'm talking about, because those things tend to have long lifetimes and you'd like to point to the latest specs. So the working group documents are going to have to be modified by us, not by the RFC Editor.
A
I
L
It would be good to know as soon as possible, also for the papers; people have, like, deadlines, and they're probably much better off, because.
B
A
Q
For keeping the meeting going, you do a lot of work, and thank you.
A
Okay, so in that case, thank you all. The next meeting will again be virtual, in the European time zone, because it was supposed to be held in Prague, so a little bit easier for the American people than the Thai one, right. So anyhow, have a good evening, good day, good morning, and stay safe. Okay, bye-bye. All right.