From YouTube: IETF106-LSR-20191118-1330
Description
LSR meeting session at IETF106
2019/11/18 1330
https://datatracker.ietf.org/meeting/106/proceedings/
G: Welcome to Singapore, everybody. I'm going to go through the status here. We don't have any RFCs since Montreal, but we have a lot of stuff in progress. The segment routing drafts are coming; they're all part of a cluster, so a lot of these are already done or almost done. The top two are already approved, and the bottom one is almost approved as well; it's in AUTH48 state.
G: Now, these are drafts that are in the RFC queue with missing references. The tunnel encap draft, I mean the OSPF tunnel encapsulation draft, shares an IANA registry with the IDR tunnel encapsulation draft, and that draft is finally moving along: it's gotten through the working group, the shepherd, and the chairs, and I guess it's now waiting with Alvaro. We've also got two YANG models.
G: The bottom one is already in IESG review; that's coming up on 12/5. And OSPF routing with cross-address-family traffic engineering tunnels, that is already in AUTH48, isn't it? Yes, so that's almost done as well. The other two are just waiting in the RFC queue for the RFC editors to get to them; I'm sure they will soon.
After this large cluster of segment routing drafts, there are these two drafts: the multi-attribute OSPF and IS-IS encodings, the two drafts for handling application-specific attributes.
G: I just requested publication on these. We had one problem here: when we simplified these drafts and went to prefix-level entropy label capability encoding, we negated the need for the BGP-LS draft, because we just used the existing encodings from a different draft. So we sort of consolidated the BGP-LS into both of those, and that's a good thing; it saves work for all of us by having the BGP-LS included in the IGP drafts, and we're going to try it.
G: This should probably go to working group last call ASAP, I think, even though it's only been a working group document for one IETF, now that we've got good review from a number of vendors. Flexible algorithms we covered today; all these YANG models are going to be covered on Friday, along with an update on the SRv6 IS-IS extensions. Dynamic flooding is covered today, and hierarchy.
G: Let's see, on Friday we're going to have YANG model updates, and then we have these two. I don't know that they were covered recently; I believe there were Les's comments on IGP extensions for PCEP security capability discovery using IGPs. The last one has been pretty quiet; I'm going to have to check with the authors to see what their intention is and whether anybody's going to implement it. And here's a number of non-working-group documents whose authors have asked, in various and sundry manners, for adoption calls.
G
You
probably
saw
those
of
you
phone
lists
that
we
did
start
some
discussions
on
the
third
one
SRV
six
enhancements
to
SPF.
They
correspond
you
can
map
them
directly
to
the
is,
is
ones
that
it's
already
a
working
group
document,
but
the
rest
of
these
we're
gonna
kind
of
take
them,
and
what
I'd
like
people
to
do
is
the
offers
to
initiate
just
send
the
request
to
the
list.
G
Chris
and
I
discussed
these.
We
think
this
is
the
best
way
for
it
send
the
request
for
an
adoption
call
to
the
list
and
then
we'll
take
a
look
at
it
and
do
them
in
that
order.
One
thing
we
don't
get
enough
of,
is
we
don't
get
enough
discussion
on
the
list?
You
know
it
seems
like
one
week
after
the
ietf
and
the
two
weeks
leading
up
to
the
next
IETF
is
when
we
get.
I: ...from, like, your most productive working group. I just want to ask the working group for a favor. Many of these, and especially this slide, are about IS-IS/OSPF parity: someone raises something in one protocol and then does it in the other protocol. That's great; I'm really happy about that.
I: If the functionality is different, so you're doing, I don't know, just to pick on the first one, reverse metric for OSPF, and the functionality is different, please highlight it somewhere: send an email to the list and say so, so that when I review the draft I don't have to ask you later why this is different, or why you didn't cover that other functionality. We're trying to do parity; it's nice when the same functionality is there. Maybe it's not possible, maybe it's not needed, whatever.
H: The part that I did want to talk about is something of an implementation report. The Arista Networks implementation is now complete; this has shipped, is shipping, is going to ship real soon now, in 4.23.1F. It was supposed to ship November 8th; we're slipping, oops. All of our testing is complete, all of our known bugs are fixed, and we have simulated up to 5,000 nodes; it's tedious, and it works.
H: A quick static demo for you; this is a cut-and-paste from a working implementation. Basically, we have a CLI command that shows you all of the nodes in the network and their index assignments, and then it shows you the paths that the area leader has distributed. In debugging, this is very, very tedious to work with, so we added another command that translates it into host names for you, so you can see all the paths in the topology. And that's about it; it just works. Any questions?
H: We wanted to be sure to allow people to specify multiple identifiers for a particular area, and this is important because sometimes you need multiple identifiers, for example if you are changing your area assignments around; you want to be able to enumerate multiple synonymous area identifiers. We also changed it so that we describe the entire set of areas that a particular router is implementing in a single TLV, so this compresses TLV space just a bit.
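The single-TLV idea described here (enumerating all of a router's synonymous area identifiers in one TLV) can be sketched roughly as below. This is a hypothetical layout for illustration only; the type code 200 and the per-identifier length prefix are assumptions, not the draft's actual encoding:

```python
import struct

def encode_area_tlv(tlv_type: int, area_ids: list[bytes]) -> bytes:
    """Pack all synonymous area IDs for a router into a single TLV.

    Hypothetical layout: 1-byte type, 1-byte total length, then for each
    area ID a 1-byte length followed by the ID octets.
    """
    body = b"".join(struct.pack("B", len(a)) + a for a in area_ids)
    if len(body) > 255:
        raise ValueError("TLV body exceeds 255 octets")
    return struct.pack("BB", tlv_type, len(body)) + body

# Two synonymous area IDs (useful while renumbering areas) in one TLV.
tlv = encode_area_tlv(200, [bytes.fromhex("490001"), bytes.fromhex("490002")])
assert len(tlv) == 2 + 2 * (1 + 3)
```

Packing every area ID into one TLV, rather than one TLV per ID, saves the repeated type/length overhead, which is the "compresses TLV space" point.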
K: You could just use the packet to do it. But I think the other reason for that is legacy routers: if you use the existing multicast MAC addresses, they're going to receive these PDUs, they're going to look at them, and who knows what they're going to do? This was the same thing that happened when we did the multi-instance draft; we had that discussion, and we eventually decided to use a separate multicast MAC address, so we followed the same approach here.
H: Sorry about that; it's not my choice, but it is what it is. We've made further simplifications to the draft. You may recall that one of the things we were doing was a heck of a lot of tunneling: we were tunneling all of the transit traffic, and we were tunneling LSPs to get around having layer-two holes in the inside area.
H: We decided that that was just way too messy, and it would be much simpler and more straightforward if we simply said that all of the inside-area routers run L1/L2, and then all of the tunneling needs go away. So this simplifies everything: now the entire area collapses to one level-2 node, and everything becomes very straightforward.
K: I find that, conceptually, you're trying to run another instance of the protocol inside an existing instance, in a way, and it depends upon defining this proxy system ID and making it consistent. So I'm not very comfortable with the draft as is; I'd like to see us have more discussion about whether this is a good solution to the problem space before we just ask the question about working group adoption. That said, I have one mechanical question, which is: I don't believe you said anything in the draft about consistency.
K: In other words, you've got a set of L1/L2 nodes that are going to participate in this proxy area, and they all have to agree on the proxy ID for the area. You didn't say anything about how each of the nodes decides what it's going to propose as the proxy ID, and so I'm wondering what happens when the area leader goes down: does everything flap?
H: Well, the reason we didn't go down that path is that we didn't want BGP involved. This is supposed to be a simple IGP-based network, without trying to get autonomous systems involved.
E: Okay, yeah. Hello everyone, I'm Martin from Futurewei. Today I'm going to talk about TTZ, which is short for topology-transparent zone. You can see that we have had a group of people working on this TTZ for quite a few years; I think maybe we can say eight years. We have worked on TTZ starting from 2011. Basically, TTZ focuses on a zone, which is a block of an area; a special case is the entire area.
E: So in the beginning we proposed solutions to virtualize the zone as a single pseudo node; you can see that in this picture there's just a block of the area, and this area is abstracted into a single node. During the process of this IETF work, we can see that from 2011 until now eight years have gone by, and we proposed different solutions. One solution is to abstract the zone into a single node; we also proposed a solution to abstract a zone into a full mesh of its edges.
E: The green part is what we standardized in OSPF. We can see that this is only a small piece of the cake of the whole TTZ work; it took us seven years, from the start in 2011 through 2017, and we achieved RFC 8099, but we still have a big chunk of the cake left.
E: And then we have the other pieces of work, for example, our solution to abstract a zone into a single node in OSPF and IS-IS, and also the solution for smooth migration from a zone to a single node in IS-IS and OSPF, and also smooth transition from a zone to its edges' full-mesh connection in IS-IS. So that's it; the blue part is what is in the current drafts.
E: So here we just put a little focus on the smooth migration between a zone and a single node. We can see that originally we have a flat network, and we have a zone in the cloud that is abstracted, so we need to transfer that zone to a single node. When we do this kind of transfer, normally people like a smooth transfer; that means that we don't have...
E: We have minimal traffic interruption. Here we have solutions to achieve that, because when we did the standardization for OSPF TTZ we had an implementation, and we can show that we can do the smooth migration between a zone and the zone's edges' full-mesh connection, and we know that we have no abnormal traffic loss during that smooth migration. For this smooth migration, the basic idea can also be applied to the smooth migration between a zone and a single pseudo node.
E: The idea is that when we do this smooth migration, we can see that in the flat network we have connections to the zone through the outside nodes that connect to the zone; that's the original state. After we migrate it to a single node, the connections are different: the single big node connects, through the blue solid lines, to those edge nodes. So we should have a smooth transition between the blue solid lines and the dashed lines.
E: We put a lot of effort there. So here the question to present to the group is: how should we move forward? My thinking is that we have the area abstraction, or area proxy, which is one piece of work; I don't know whether we should move forward with that piece alone, and then we have this work, so should they move forward separately, or should we merge them together? That's the question to all of us here.
A: It's not precluded, right. No, so what I'm very curious about is what state that collaboration draft is in. Because what stops us as a working group, I mean, first of all, assuming we even want to adopt this work at all, the abstraction of the area or zones: if we did, what stops us from just taking the existing collaboration draft as a working group? We as a working group don't care about edicts, right? I mean, if the work is almost done.
H: Tony Li, Arista. So, Chris, I'm sorry to say that I don't think we made a heck of a lot of progress on the collaboration anyway. I think you're welcome to look; I don't have read or write access anymore, I think, but I don't think there's a whole lot of value there above and beyond the drafts that are already out there.
M: Hey, I'm Jen, and I have two questions. First: if I model this zone as a single pseudo node, how does this affect my LFA calculations? Can a node, like in the diagram between the two, calculate some PQ node or the like via this zone, or are there restrictions? That is actually not clear in the draft. And my second question is about the TE database: what if I have an LSP going across this zone?
G: It's unfortunate. I don't know precisely how to move forward with this if we can't merge, obviously, yeah.
E: A simple question, because the IETF is in the public domain, right? I think maybe it's the lawyers, or whoever made the proposal, saying we cannot do any cooperation. But I think, from the IETF's standpoint, even after the decision on the entities, there are some kinds of policies there. My understanding is that we can still cooperate in the public domain, right, because this technology does not belong to whatever export or import restrictions and that stuff, yeah.
A: I can throw out a few things, process-wise, that people don't always think about. When a working group adopts a document, it's technically no longer really owned by the author; it's owned by the working group. Typically, that's not an issue.
A: But, you know, even if, and this is totally hypothetical, I'm not leaning towards anything, but even if we adopted, for example, Tony's draft, it would no longer be Arista's work; it would become the IETF's work. That's outside of IPR claims; IPR claims are different. Now, Tony might immediately step down as author at that point; he's at the mic, let's see.
H: Tony Li again. I think I should clarify the layer-8 interrupt context a little bit, because I think it will prevent us from causing problems. And I am NOT a lawyer; I do not play one on TV; I don't know what I'm talking about, okay? Take it with a grain of salt. That said, my understanding is that we are not allowed to have a private conversation about technology whatsoever.
A: That's actually a great clarification, because I actually misunderstood when I talked to you; I thought it was more restrictive than that. So I think there's more than just you in this room who has probably been given similar instructions. I mean, it's weird, crazy stuff that I certainly hate, but yeah, really.
I: That's why I haven't actually said anything, but now that you brought it up, I'm going to say two things that I would say often. One is that I obviously need to recuse myself from any discussion of this draft, or whatever the working group is going to do with it, because I'm one of the authors. I'm sure you two can deal with whatever, and if you need any help, Martin, the other AD, can help you with that.
I: There are statements that the LLC put out last year and this year about precisely the open nature of the IETF, and what you both said is actually what the statement says, meaning the IETF is an open organization: there's nothing that, according to the IETF legal team, precludes communication between participants on mailing lists or in open forums and all that. Having said that, I'm also not a lawyer, and all of you should talk to your own lawyers about what you can do or can't do, or what you think.
I: You know, the other thing that I don't say too often is that I'm going to agree with Les, in that I think that before we start adopting stuff, and now I'm talking as a working group member, we need to figure out: is this something we need? Is this something we want? What are the characteristics of what we want? And then we can go and say, well, we have these two things.
I: This one satisfies this, or doesn't satisfy that, and go from there. That's my suggestion as a working group member. But yes, you're completely right: we don't have another routing AD here right now, but as I said, the chairs can help whenever you need them to; I have full confidence in them.
N: I'll tell you what the problem is, what the parameters were that we were solving for, and give a very rough outline of the solution, which is actually, you know, embarrassingly simple, and then I hope for some discussion at the mic; depending on that, I may call for adoption or not. All right. So what is the problem? The problem is that some very big customers' networks are growing, right?
N: They are building IS-IS backbones where, in the number of links and nodes, even when you scale the implementation very, very well, you start to actually hit protocol limitations, right, in terms of flooding rates: how much can you really put into IS-IS and compute over it?
N: So, you know, some possibilities are to forklift the protocol; there are some suggestions here. The other limitation these customers are hitting is that IGPs are kind of a one-hop, multiple-spokes model, right, which limits them. The spokes are good for access, so it's good to scale an access network that way, but if what you really have is a backbone that grows really, really big, then this model is not particularly good for it, because we did not build areas as transit the way BGP has ASes as transit.
N: IGPs were built for something actually different. I'm just describing the problem these people are hitting, without passing any judgment: they have a tool, they try to fit it, they try to get something done, and they go, "it ain't working." And the other trend that we observe is that, traditionally, IGPs were built for relatively sparse meshes, and that's why we picked Dijkstra; Dijkstra behaved much better on sparse stuff. Bellman-Ford is much better on dense, but you'd have said, who's going to go dense? Yeah.
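To put rough numbers on the sparse-versus-dense point, here is a back-of-the-envelope estimate using the textbook cost of Dijkstra with a binary heap, roughly (V + E) * log2(V). The node and edge counts are made up for illustration; this estimates operation counts, not any real implementation:

```python
import math

def dijkstra_cost(v: int, e: int) -> float:
    """Textbook operation-count estimate for Dijkstra with a binary heap."""
    return (v + e) * math.log2(v)

v = 1000
sparse_e = 3 * v            # a few links per node: the classic sparse IGP mesh
dense_e = v * (v - 1) // 2  # near-full mesh of many small boxes

# SPF work is dominated by the edge count, so densifying the mesh
# inflates every node's computation by roughly the ratio of edge counts.
blowup = dijkstra_cost(v, dense_e) / dijkstra_cost(v, sparse_e)
assert blowup > 100  # two orders of magnitude more work per SPF run
```

Flooding behaves similarly: each of the O(V) LSPs is re-flooded over O(E) adjacencies, so a dense mesh hits the flooding-rate wall long before a sparse network of the same node count.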
N: We did a couple of rounds of thinking and tossed a lot of things around, and then the customer, of course, will not accept just any solution, right? They have lots of other constraints, anything from technical to political to operational and anything in between. What they basically wanted is: scale the backbone without triggering the control plane scaling limitations. That's really what they wanted: somehow remove the control plane scale limitation and give me much more scale on the backbone. And they want many more, smaller boxes, which kind of aggravates the problem and also leads to the dense meshes.
N: They go, "I don't want these huge boxes that look like a single box; I want more and more small boxes that I'm slapping together." But then they need a lot of links between them, so they hit these control plane limitations even faster. And then: yes, seriously, no, please do not forklift the protocol. "We love you all, but flag days are something we hate, you know, from long operational experience. Don't give me something where I now have to forklift the protocol." Tony?
N: I'm not here to talk about RIFT, but I can tell you how it plays into this stuff. Actually, I won't tell you; you'll figure it out in a couple of years. Okay, I don't have to be right now; I just have to be able, in a couple of years, to tell you "I told you so," okay? All right. But no, RIFT is neither here nor there; I'm not talking about RIFT. So, they also wanted a very simple, robust configuration.
N: "So please do not load me up with a hugely complicated operational model where I have to provision everything just so, okay, and then go look for trouble. No proprietary solutions, okay?" And they didn't want any kind of centralized point of failure: "please do not build me something centralized where you store some state that can blow up," because the next thing we'd be reinventing is NSR. So, okay, it has to be a fully distributed solution, yeah.
N: They like them; they're extremely robust. And then what they also wanted was: well, at the end, if you scale it up, I still want my optimal traffic engineering. Like, yeah, well, that's hard again, right? You want it abstracted, but then you want the full thing to do all the optimal path selection; like, okay, everything's impossible. So we went and graphed through a lot of different solutions, and the TTZ stuff was around, and so on, right? All right, so what's the outline of the solution? It lies somewhere in these directions.
N: People were poking around here, but basically, having looked at a lot of things and played with them, we tossed up flood reflection, which is really a knock-off of route reflection, and it seems to meet the bill surprisingly well against all these impossible requirements. So basically, what do you do? You pick three colors, right? Three colors are good. You take a piece of the backbone and you make it L1.
N: So this is this red box, and outside are the L2 guys, so the L2 is blue. But now, instead of sticking everything into L1/L2, which blows out your L2 scale again, you just pick a bunch of nodes and make them reflectors. They're in the blue, and the thin lines are tunnels, because those L2 nodes are not necessarily directly connected.
N: So you pick some kind of tunneling, who cares, and you flood over those tunnels. Your L2 flooding topology starts to look like multiple stars, and that has much better scaling properties than full meshes. It gives you redundancy, and it doesn't give you any single point of failure, because this is still fully distributed IS-IS.
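The scaling claim can be made concrete by counting L2 flooding adjacencies. This is a toy model with arbitrary numbers (500 clients, 3 reflectors), not figures from any deployment:

```python
def full_mesh_adjacencies(n: int) -> int:
    """Full mesh: every L2 node maintains an adjacency to every other one."""
    return n * (n - 1) // 2

def flood_reflection_adjacencies(n_clients: int, k_reflectors: int) -> int:
    """Overlapping stars: each client keeps one tunnel per reflector."""
    return n_clients * k_reflectors

mesh = full_mesh_adjacencies(500)             # quadratic: 124,750 adjacencies
stars = flood_reflection_adjacencies(500, 3)  # linear: 1,500 adjacencies
assert (mesh, stars) == (124750, 1500)

# Doubling the clients doubles the star count (linear growth), while the
# full mesh would roughly quadruple; and with 3 reflectors there is no
# single point of failure.
assert flood_reflection_adjacencies(1000, 3) == 2 * stars
```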
N: I'll talk about that stuff later. So how do we transit through this thing? The guys on the L2 outside see those stars, so if you do kind of naive forwarding, everything would go through these tunnels: first you go out, you forward into the tunnels; these L2 tunnels could be okay or not, but then everything bunches up in those tunnels, and these flood reflectors become forwarding choke points.
N: Like, yeah, great, you scaled up your control plane, but you just killed my data plane. And if there is some L1 topology somewhere outside of this L2 path, that capacity will never get utilized, so you lose all the path diversity of the underlying network. Like, no, that doesn't help. Well, so you have to change the SPF on the incoming leaves; you don't strictly have to do it this way, but I'm describing the simplest possible solution.
N: The orange colors are our L1 forwarding tunnels, which are invisible to the L2 outside, so you still have a full mesh in L1. And the guys on the edges, when the L2 traffic comes in, basically shortcut through these L1 tunnels. That's actually a fairly simple SPF modification: they look at whether the L2 route that they compute goes through a flood-reflector adjacency (we called them a couple of different things; the language changed a couple of times).
N: If it does, then you don't use it; well, there are a couple of solutions, but basically you don't use that adjacency for forwarding. You shortcut to your exit through the L1. And you can also shortcut these other things; I'm not showing it here, but where the flood-reflector clients also have a direct link, you forward along that direct link, and that works just fine. Tony?
N: You could have done that without the tunnels; you can do something even smarter than that, but you have to read the draft and think about it. Okay, so this is the simplest variant, and it works like a charm for practical purposes. Now, the nice thing is that what you're really lifting are only those leaves and the reflectors, and the configuration in the most extreme case is just local, on the interface or on this tunneling interface. The L2 people outside are completely oblivious to what's going on, and so are most of these L1 nodes.
N: You don't have to lift them either, so you really have a minimal lift, and you can do it partially, because that will also work; some of these nodes are not even clients. So that's the outline of the solution. Now, what we also put in: you don't want to misconfigure this thing in a way where you start to build really weird stuff with it; that has been done with BGP route reflectors and led to a lot of problems.
N: So we stuck an indication of which cluster ID you are in, and whether you are a reflector, into the IIH, and we have very strict adjacency-forming rules. We killed off flood-reflector hierarchies; route-reflector hierarchies are a beautiful thing to behold when you run into operational problems with them. We also killed off what people tend to do within the same cluster, building horizontal links between reflectors. But those are relatively simple rules that we can argue about.
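The strict adjacency-forming rules described at the mic (cluster ID carried in the IIH, no horizontal reflector-to-reflector links, no reflector hierarchies) could be checked along these lines. The tuple representation and the exact rules are a paraphrase of what was said here, with invented field names, not the draft's normative text:

```python
def adjacency_allowed(a, b):
    """Decide whether an L2 flood-reflection adjacency may form.

    Each endpoint is (cluster_id, is_reflector), as advertised in the IIH.
    Reflector-client adjacencies must stay within one cluster; reflector-
    reflector and client-client reflection tunnels are refused.
    """
    (cluster_a, refl_a), (cluster_b, refl_b) = a, b
    if refl_a and refl_b:
        return False  # no horizontal links, hence no reflector hierarchies
    if refl_a != refl_b:
        return cluster_a == cluster_b  # client must match its reflector's cluster
    return False      # two clients: ordinary adjacencies, not reflection

assert adjacency_allowed((1, True), (1, False))      # reflector-client, same cluster
assert not adjacency_allowed((1, True), (2, False))  # cluster mismatch
assert not adjacency_allowed((1, True), (1, True))   # reflector-reflector
```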
N: I mean, if people want to work on the draft and feel strongly that we should have a hierarchy, well, it can be made to work. What we had was people asking us for maximum simplicity: foolproof, almost impossible to misconfigure. That's where we were leaning. And I should say that this orange L1 tunnel mesh is optional; that's a more advanced discussion.
N: So what's the big deal? Well, you only have to lift the flood reflectors and the clients, and they actually only need local configuration knobs. We also put in discovery: the cluster discovery is not only in the IIHs, we also advertise it on the links, so you can figure out what the flood-reflector topology is. But you don't have to look at that stuff, and you can actually even implement it without advertising anything.
N: So this is a local configuration knob on the router, like "this interface goes to my flood reflector." The whole thing allows you to scale the L2 control plane in a roughly linear fashion: you don't have the n-squared full L2 tunnel meshes, which, practically speaking, gives you a very, very long runway for very little work. What you get out is this L1; but actually the L1 doesn't have to be L1, and this leads to further thinking.
N: If you abstract these things as L1, you could run another protocol: you can run OSPF underneath as L1, and you can run L2 IS-IS with flood reflection on top, which gives you interesting properties in terms of fate sharing across protocols. You don't have to, but it gives you this degree of freedom. So L2 runs with these flood reflectors.
N: You know, they make better and better fools every year; that's why we're still working, right? But this is as foolproof as fools come these days. I'm sure they'll invent better fools that will break this stuff, but as far as we could, we designed it to rule out anything that can be misconfigured and lead you into interesting problems, which I think we understood fairly well, because we went through the whole BGP route-reflector curve and it was actually extremely educational. All right, and so it's KISS, so yeah.
N: The draft says you'd better do it; okay, for a lot of practical purposes. We saw in deployment that nothing is as ugly as a broken tunnel over which you're happily trying to forward, right? So yeah, we suggest people do. But yes, there is this degree of flexibility in L1: you can even run the RIFT protocol in L1, which makes it ZTP if you do it correctly.
H: Tony Li. The way we are doing things, while it puts everything into L2, the L2 information about the inside area does not leave the inside area. This is absolutely key, because it means that when we actually abstract things, we effectively take them out of the link state database for the rest of the network.
H: If you want diversity, then we're talking about traffic engineering again, and your traffic engineering mechanisms, like BGP-LS, are highly applicable. Okay, you cannot have both abstraction and detail at the same time; it doesn't work. And for our purposes right now, scaling the IGP, we need the abstraction.
N: Well, yes and no. This keeps the abstraction without losing the diversity: with the BGP-LS that they take out of L1 and L2 into the controller, they can do a full path computation. Yes, I cannot do optimal distributed TE; whatever solution we take, once you hide things, it's a controlled lie, right? So what I'm telling people is: you need to run BGP-LS, take everything from L1 and L2 into the controller, and do the optimal computation that way.
A: So, back to my comment about redistributing routes: Tony, you mentioned that it could be dangerous. I'd be interested, I mean, maybe we can mitigate that danger. Maybe it's worth exploring a small protocol extension that allows using otherwise normal, already deployed things like multi-instance with redistribution, with a little bit of extra information there, like the down bit back when, you know; there could be a way that we could look at this and just keep the danger away.
K: Les Ginsberg. I mean, we do have the R bits now, as far as, you know, signaling that a route has been redistributed in some way, plus we have the X bit to indicate that it's external, that it came from another protocol, which could be another IS-IS protocol instance. That said, I don't think the big danger with widespread redistribution has to do with the protocol itself not figuring out that a particular route was redistributed.
N: Well, yes, that is a caveat which, I mean, we can argue about now until the paint comes off the wall. In graph theory, if you start to abstract in any way, there is no bloody way to keep optimality, and that's what I tell people when they want TE with absolute optimality: they have to take the full topology into the controller, make it flat, and run the computation flat. That works as well, but it has a cost, right?
K: What's simple is the protocol extensions, which are very minimal, yeah, okay. I think what gets complex is the implementation of the changes, and the deployment. You haven't talked at all, and I don't think you've talked at all in the draft, and granted it's version -00, so there may be more coming, but you haven't talked at all in the draft about the population of the flood reflectors and the distribution of the information to the clients, and you just...
N: Okay, I take that. All right, so that's the advertisement part; I take the KISS point, yeah, that's absolutely valid, so let's drop the KISS. But I think the draft tells you everything that is necessary to implement this stuff. Yes, SPF is modified; we describe it. There are actually multiple ways to implement it; you can modify SPF in three or four different ways, and we just outlined one way to implement it. The flooding procedures have literally zero modification; otherwise, you know, we would be forklifting. The TLVs can be safely ignored on the outside.
N: Actually, even on the inside you implement local configuration knobs. If you find holes, I am more than happy to invite you to participate on the draft and iron this stuff out, or throw the technical arguments at me, because I'm not foolproof; we may have missed a loophole that size. We're very confident we didn't, but hey, we know this is an arcane and very difficult art, so technical arguments are more than welcome.
N
G
N
G
A
As Acee mentioned a minute ago, this is also working on the same sort of problem. So I like, yeah, I like the energy. Now, can we take this energy to the list? I don't think we should be calling for adoption on any of these drafts yet, but if we could keep this energy going and maybe argue about the solutions on the list, maybe we can relook at the adoption idea next time.
G
I think that's a good thing. Hopefully we'll hear from, like you said, the operators on the requirement to do this. At the same time, if we can get the flooding reduction, that really mitigates the problem, and I actually think that's less complex than these area abstractions.
H
G
Agreed, they're orthogonal. I was just questioning whether or not we need the area abstraction, given the complexity, and yours is... you did drop the KISS claim for the flood reflectors too. I actually think it would be good for your draft, if you put in the different... you said there are four or five ways; if you put in those alternatives, and how to do it without the tunnels. The tunnels, yeah, okay, keen on a working relation.
N
And then, well, you know, if I get the stuff adopted, I iron all this stuff out; if I don't get it adopted, I may as well just let it sit and expire. Thank you very much, right? So yeah, I mean, if that gets adopted, yes, that's perfect. The other very, very valid discussion is: should we build hierarchies, right? Which I killed, but well, there's a certain runway that you get with this, right? If you build a hierarchy of flood reflectors, it's pretty much infinite size, but there's a certain size...
N
You can go with that. We think the practical runway, from what we saw, is about... you can 5x the scale, though they already are insane scale, so they get like 5x insane with something very simple, without forklifting, so that's pretty cool. And what we saw with that is that if you do this, you pretty much don't need flood reduction, right, because it collapses all these highly dense things into relatively simple stars; current flooding is good enough, as far as we saw, so yeah.
N
So absolutely we can work all this stuff out, but, you know, there's only interest from our side if we get the stuff adopted and, you know, we get enough people to work on the stuff and chew through the details, because, you know, we know this stuff; we don't have to write it down, right? And then, who cares? If you don't care, well, then why should we put in the work?
A
A
C
L
L
The only change that happened to the draft from the last version is that we moved the A-flag from the locator options, or flags, and we added it to the prefix attribute flags, because the prefix attribute sub-TLV is supported inside the locator. So there's no functional change; we only moved it, and we said we'll use the prefix attribute flags as the way to advertise the A bit.
L
G
G
The draft, yeah. Yeah, let me just say, just to get an idea: okay, are there people who are willing to let us know that they've implemented it? Okay. And I haven't read the recent version; I don't think I've read it for about a year, so I wonder... I think... Chris, you read it recently? I did read the draft, yeah. Okay, yeah, I want to read it from the start. Describe the...
A
L
L
Okay, so the next one is the Flex-Algo draft. There are two changes in the draft from the previous version. The first is a more mechanical change, where previously, in the IS-IS and OSPF application-specific link attributes draft, there was an X bit, which means the application is Flex-Algo. We basically took that bit out of that draft, moved it, and defined the bit with exactly the same semantics in the Flex-Algo draft; there's no functional change.
L
L
And we discovered one problem in the Flex-Algo specification, which we are now trying to fix in the next revision of the draft. So here I give you an example where we have two areas, or domains: you have the right one and the left one. We have the prefix 1.1.1.1/32, and here are the links with the IGP metric of 10 and a flexible metric, which could be delay or anything else, being 100, and the definition of the algo says: prune any links which are red.
L
So what would happen... and there's no link between R2 and R3 in any of those domains or areas. So what the R2 and R3 routers would do: they would advertise the prefix, and without... so, in the previous version of the draft, we defined what we call a Flex-Algo-specific metric, the Flex-Algo prefix metric. Even without this, basically what happens is that R1 will still believe he can reach the prefix 1.1.1.1 in Flex-Algo 128 via R2, but R2, because of the link he has in the right area...
L
It's not participating, or he's excluded from the calculation, so he would actually calculate the path via the left area, and there we have a loop. So we defined, as I said, the Flex-Algo-specific metric previously, but even that doesn't solve the problem, because the previous version of the draft says: if the advertisement of the prefix doesn't have the Flex-Algo-specific metric, fall back to the IGP metric. So R1 would still believe that there is actually an even better path via R2, because the IGP metric is lower, and that would cause a loop again.
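The loop just described can be reproduced with a small, hypothetical sketch (router names, metrics, and the topology are illustrative, not taken from the draft): R2, computing Flex-Algo 128 strictly, sends traffic for the prefix back via R1, while R1, falling back to the plain IGP metric because no Flex-Algo prefix metric was advertised, picks R2 as its next hop, so the two forward to each other forever.

```python
import heapq

def dijkstra_next_hop(links, src, dst):
    """links: {(a, b): cost} directed links. Return src's first hop toward dst."""
    dist, first = {src: 0}, {src: None}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for (a, b), c in links.items():
            if a != u:
                continue
            if d + c < dist.get(b, float("inf")):
                dist[b] = d + c
                first[b] = b if u == src else first[u]
                heapq.heappush(pq, (d + c, b))
    return first.get(dst)

# Illustrative topology: the prefix sits behind R3; R1's direct IGP path to R3
# is expensive, the path via R2 is cheap, and the R2-R3 link is "red".
igp = {("R1", "R2"): 5, ("R2", "R1"): 5,
       ("R1", "R3"): 30, ("R3", "R1"): 30,
       ("R2", "R3"): 10, ("R3", "R2"): 10}

# Flex-Algo 128 excludes (prunes) the red R2-R3 link from its topology.
algo128 = {k: v for k, v in igp.items() if set(k) != {"R2", "R3"}}

hop_r2 = dijkstra_next_hop(algo128, "R2", "R3")  # R2 computes algo 128 strictly
hop_r1 = dijkstra_next_hop(igp, "R1", "R3")      # R1 falls back to the IGP metric
assert hop_r2 == "R1" and hop_r1 == "R2"         # R1 -> R2 -> R1: a forwarding loop
```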
The problem really is we have no way of saying, or we had no way of saying: don't use me for the Flex-Algo 128 path, but you can use me for the algo 0 path, because the reachability is basically taken from algo 0, and that's why the prefix is advertised. So, to fix that problem...
L
What we say is that when the ABR or ASBR advertises, or redistributes, or leaks the prefix between the areas or levels, it must set the Flex-Algo-specific metric, and then anyone computing the destination for a Flex-Algo for the prefix must use this Flex-Algo-specific metric; and if the prefix doesn't have it, it is basically unreachable. So this way we kind of make the announcement of the reachability per algo. Well, obviously, this only works if the algo has the Flex-Algo prefix metric as part of its Flex-Algo definition.
So that way we fix the problem. The question is: what do we do if we don't use the Flex-Algo prefix metric as a Flex-Algo definition attribute? In that case, what the draft says is that it's not recommended to use it for inter-area or inter-domain reachability, because it can actually cause loops or black holes. If people feel this is not strong enough language, we can even say you are not allowed to compute the inter-domain or inter-area path.
G
Just a second here, let me restate that. So the advantage of not doing a hard prohibition of this is that you could use the flex algorithm within one area, but still forward the prefix through other areas using the default, right? That's the advantage. But the problem is, in certain topologies, somewhat pathological, with this kind of configuration you could have loops. Yes.
L
N
Tony P, Juniper. Hey Peter, so I think even if you forward between the... the inter-area default, you may still end up looping, because you cannot mix and match, right? If you forward default between inter-area, you know, it brings it to some exit which is based on shortest metrics; when it brings it in, to, you know, funnel it out, which uses default again, you may end up in a loop. But I need a napkin to go and draw that stuff.