From YouTube: IETF106-LSR-20191122-1220
Description
LSR meeting session at IETF106
2019/11/22 1220
https://datatracker.ietf.org/meeting/106/proceedings/
E
All right, thank you. So this is a talk about our YANG model for dynamic flooding. Apparently, YANG models are now required for everything, so we attempted to jump in and do one. Please note that both of us are YANG novices, so we have no idea what we're doing — please be gentle; we're happy to fix whatever. Let's see: there's a link to the current draft. We are spinning it as fast as we can go, which is not very fast. We're trying to cover both OSPF and IS-IS with this.
E
So I'm not going to go through all of this, because it gets really boring very quickly, but basically we tried to cover everything. This covers the dynamic flooding capability sub-TLV in the router capabilities — relatively straightforward — the config stuff, all of the various knobs for the configuration; enabling dynamic flooding is this object here; and status information, so you can suck out everything that's going on. This gives you the list of paths that the system has learned from the area leader.
F
One major change we did: you know, in OSPF and IS-IS we have a Flags field, and we used to use this field to say, oh, one flag, another flag. But we figured out those things are not augmentable, so we changed the Flags from type bits to an identity. This is one major change since the last version. If you are interested, please take a look at the models listed here.
F
This is shown here as an example — it's really just changing the type, so it's easier for augmentation later, so the modules can be augmented. And then the next two are the OSPF and IS-IS modules for segment routing. With all the drafts related to the OSPF and IS-IS protocols being published, these two are in the pipeline to be published. The plan is that we will publish the base SR model in SPRING first, and then these two will follow, so this gives you a last opportunity to review them — if you have any comments, please send them.
F
So, the changes we've done so far: both of these models have been stable for some time, so the change we've done so far is basically just to change everything to match what we've done in the base OSPF and IS-IS models, and we expect to publish these two very soon. And we have two new working group documents.
F
So that's about the OSPF and IS-IS SR models. Because these two augment the segment routing base model, this week the authors of the segment routing IS-IS module had a meeting with the SRv6 base model authors, and we made sure we aligned our configurations and also some common definitions. Once the base module is provided and we make sure everything is aligned, the next step will be to progress these.
F
So, the new working group documents: the OSPFv3 extended LSA YANG. This one is actually very important, because all the new OSPFv3 features will have a dependency on this module if they are using the new extended LSA format. The plan is to progress this one once we're done with the base OSPF module. And then we have the OSPF YANG augmentation version one.
F
So, this one right now has, I think, four modules included. This time we added a new OSPF module, and we also have the other modules included in this one. So far, I think the plan is to keep these four modules in this one document. If there are more features that need to be augmented into OSPF later, they will go into another augmentation document published later, just to keep all the modules up to date.
F
This is a new document: the OSPFv3 segment routing YANG. It's for OSPFv3, for the segment routing MPLS feature, and this one will augment the base OSPF segment routing YANG module and include all those TLVs. So, people who are interested in implementing this feature, please take a look. We didn't have this in the base segment routing model, because this one needs the OSPFv3 extended LSA, so we'll need to progress that one first and then this one.
F
Okay, so with that, the OSPF augmentation modules have been introduced; we started that at the last IETF, and that one is already a working group document. This one is a new one we are doing for IS-IS — we are doing the same activities for IS-IS. This page lists all the RFCs published for IS-IS since 2016. Right now this draft only has the following:
F
The reverse metric, which Chris will present later, is in a separate document and will be kept there. The next revision will probably consider adding multi-instance and so on, and if you see any IS-IS feature that's not covered yet, please let us know — if you need it, we'll add it. So what we have so far in the document is just the minimum remaining lifetime; I covered RFC 7987. It's a very simple one. Please take a look and send us your comments.
F
So, the SRv6 IS-IS document is already a working group document, and this one defines the YANG module for that document. It defines the configuration, how to do the locator handling, and it also includes the fast reroute features. Please review it and send us your comments. As the next step for this one, we'd like to request working group adoption for this draft.
B
Is there a plan, or is it coming together? Yeah — so I had done the reverse metric one before we sort of agreed to try to do this in bunches, which is where these augmentation version 1 things came from. That's what we're looking at now, but you also see that there are other modules there that aren't included in that. So I think we're sort of still learning, right? It's a work in progress, and we're trying to figure out the best way to do it.
B
You know, personally, I wasn't a fan — I don't know if this is with my chair hat on or not. As I remember, I wasn't a big fan of the augmentations, but other people seemed to like that idea, so I was willing to go with the flow on that. Yeah, you're grabbing the mic — go ahead.
C
We're not in complete agreement — I'm hoping to get it all done. I did not want to say so, because it didn't work for SNMP, and I don't see why it would work that much better here. Well, YANG is a lot easier to augment, but what I meant was to make anybody who did a draft also do the YANG augmentations right in that draft. That's the idealistic approach. We haven't taken that, so as not to put such a barrier on drafts.
C
We can discuss that for future functions, but because we have people contributing that don't know YANG, we have separate drafts. So, for better or worse, right now we're doing separate drafts, unless we change that, with the augmentations. Okay.
B
That's what I was going to say: this is our first shot at this, right? We're trying it out. I mean, I don't think there's any danger in doing it sort of both ways and figuring out which works better. My general thinking is that, certainly for complex features, it makes more sense to have a standalone draft.
B
Chris Bowers with Juniper. Perhaps a reasonable compromise would be something like: they can be in separate drafts — the protocol draft for the new feature and the YANG model for the new feature as separate drafts, you know, different expertise — but maybe a requirement of working group last call is that at least the YANG model is at a level where it's adopted as a working group document. Some kind of staged approach like that could make everyone happy.
F
My personal opinion: I think if it's a large feature, we don't want the YANG model to delay the publication of that draft as an RFC at all. If it's a small feature, it may make more sense, if it's possible, to include the YANG model in the same draft. If you don't know how to do it, you can ask any of us for help — we'll be happy to provide the YANG model.
B
So, speaking as a working group member: last time this came around, I thought that we could have, you know, sort of a localized YANG doctor, or YANG experts, in the LSR group. I know the problem is that these author lists get so big, but these things aren't hard to write for people that have written them before, and if there was a group of people you could go to and say, you know, we're authoring this draft...
G
So the reason I was asking is because maybe I had missed the discussion and there was some plan somewhere. I think it's important that we discuss it, and whether we get to a conclusion of requiring it or not, or whatever it is — that way we know. And if there's no specific plan, it's good to know that there's no specific plan; there's not a requirement that we have a plan.
J
Okay, I'm Les Ginsberg. At the last IETF we had a couple of presentations talking about issues associated with increasing the rate of flooding, and there were two different points of view presented. Since that time, my co-authors and I have issued this draft; there is also another draft that was authored by Bruno and several others.
J
It's relatively slow — significantly slow by today's standards. And if you have a large network, and you're now talking about, for example, a node that has hundreds or a thousand neighbors, and that node goes down, that means you've got a thousand LSPs that have to be flooded network-wide. Well, you do the math: if you're rate-limited to 33 per second on the interface, it's going to take 30 seconds or more, which is a significant amount of time. So that's why we're all looking at this.
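The arithmetic behind that "30 seconds or more" figure can be checked in a few lines (a sketch; the rates are the ones mentioned in the talk, the function name is just illustrative):

```python
# Back-of-the-envelope check of the flooding example above: ~1000 LSPs
# must cross an interface that paces at the classic 33 LSPs/second.

def flood_time_seconds(num_lsps: int, lsps_per_second: float) -> float:
    """Time for num_lsps to cross one interface at a fixed pacing rate."""
    return num_lsps / lsps_per_second

print(round(flood_time_seconds(1000, 33), 1))    # roughly 30 seconds per hop
print(round(flood_time_seconds(1000, 1000), 1))  # about 1 second at 1000/s
```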
J
What can we do — what happens if we increase this rate significantly, and what issues do we have to deal with? Convergence consists of a number of steps. One, the nodes that are adjacent to the topology change have to detect the topology change. They then update and flood their LSPs. As the LSPs are received on other nodes in the network, they run a new SPF, and then you update the forwarding plane. Steps 2, 3 and 4 need to be done network-wide.
J
So the aim of flooding is really to get a consistent link state database network-wide as fast as possible, and this quote here from ISO 10589 indicates how 10589 originally thought about this: at each interval you look at all of the LSPs that still need to be sent — you keep track of this on a per-interface basis — and you send them all.
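The per-interface behaviour that quote describes can be sketched roughly as follows (the data structures and names are illustrative, not taken from ISO 10589):

```python
# Sketch of the ISO 10589 idea quoted above: pending LSPs are tracked
# per interface, and at each interval everything still pending on an
# interface is transmitted.  Names and structures are illustrative only.

def run_interval(pending_by_interface):
    """Send every LSP still pending on each interface; return what was sent."""
    sent = {}
    for ifname, pending in pending_by_interface.items():
        sent[ifname] = sorted(pending)  # transmit all of them this interval
        pending.clear()                 # re-marked only if a retransmit is needed
    return sent

state = {"eth0": {"lsp-A", "lsp-B"}, "eth1": {"lsp-A"}}
print(run_interval(state))  # each interface drains its own pending set
```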
J
So, let's talk about the consequences, or the tools that we have, to deal with faster flooding. This is just a little example showing that bandwidth is not a major consideration here; these are just numbers for how much bandwidth we consume if we go up to a hundred per second or a thousand per second. Flow control: if I start sending LSPs to my neighbor at a much faster rate, it's possible, due to whatever else my neighbor may be doing at that time, that there may be periods where he gets overwhelmed.
J
So, if this happens occasionally, it's not a major issue. If this happens consistently, we've got a significant problem. What does it mean? It means maybe you've got a node in the network that is underpowered and simply can't deal with the network at the size that this particular network is, or perhaps its configuration is such that it's doing more work than it should be doing, or it needs to be in another position.
J
As far as flow control is concerned, the position that we're presenting here: the base protocol update process already tracks, for each LSP that's been updated, the set of interfaces that we need to flood it on and whether we have received an acknowledgment for it, so the update process is completely reliable.
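A minimal sketch of what transmitter-based flow control on top of that existing per-interface acknowledgment state could look like (the window size and all names are assumptions for illustration, not from the draft):

```python
# Hedged sketch: the update process already knows which LSPs are
# unacknowledged per interface, so a transmitter can simply cap the
# number of unacknowledged LSPs in flight on each interface.

class Interface:
    def __init__(self, window=10):
        self.window = window   # max unacknowledged LSPs in flight (assumed)
        self.unacked = set()

    def can_send(self):
        return len(self.unacked) < self.window

    def send(self, lsp_id):
        if not self.can_send():
            return False       # back off; retry after acknowledgments arrive
        self.unacked.add(lsp_id)
        return True

    def ack(self, lsp_id):
        self.unacked.discard(lsp_id)  # an SNP acknowledged this LSP

iface = Interface(window=2)
assert iface.send("lsp-1") and iface.send("lsp-2")
assert not iface.send("lsp-3")   # window full: flow control kicks in
iface.ack("lsp-1")
assert iface.send("lsp-3")       # an ack opened the window again
```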
J
So flow control is one tool to deal with faster flooding. There's also packet prioritization. Most implementations today, or at least a number of them, already prioritize receiving hellos over receiving SNPs and LSPs. Why do we do that? We do that because, if we get a burst of LSPs and that causes us to drop our adjacencies, that means we're going to have to generate even more LSPs, so it just exacerbates the problem.
J
What we need to do here is to prioritize the reception of SNPs over LSPs, because SNPs provide the acknowledgment for LSPs, and if we drop SNPs then potentially we're doing needless retransmissions of LSPs: we've already sent it to the neighbor, he's already received it, he's already acknowledged it, but we lost track of the fact that he acknowledged it. So this is another tool we can use to help make faster flooding operate more smoothly, along with minimizing LSP generation.
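The receive-side prioritization argued for here — hellos first, then SNPs, then LSPs — can be sketched as a simple ordering rule (a toy model; real implementations act on packet queues, not lists):

```python
# Sketch of the prioritization above: hellos keep adjacencies up, SNPs
# acknowledge LSPs we sent (dropping them causes needless retransmits),
# and LSPs come last.  Lower priority value = served first.

PRIORITY = {"hello": 0, "snp": 1, "lsp": 2}

def drain(queue):
    """Process queued (pdu_type, pdu_id) pairs in priority order."""
    ordered = sorted(queue, key=lambda pdu: PRIORITY[pdu[0]])
    return [pdu_id for _, pdu_id in ordered]

rx = [("lsp", "L1"), ("snp", "S1"), ("hello", "H1"), ("lsp", "L2")]
print(drain(rx))  # hellos first, then SNPs, then LSPs
```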
J
In the base specification we have up to 256 LSPs per level that can be generated by a given node. If I have — and I'll use the example here — a bunch of neighbors, and I've got them distributed into three different LSPs as I've shown here, and one of the neighbors goes down, I've got two strategies that I can use to update the LSPs to distribute this information to the rest of the network. With method number 1:
J
In this case neighbor number 3 went down. Neighbor number 3 was in LSP 0; I simply remove neighbor number 3 from LSP 0 and I flood LSP 0. With method number 2, I decide: well, gee, now that neighbor number 3 is gone, I can fit all of my neighbors into 2 LSPs, whereas before I needed 3 LSPs, so I'm going to shift everything and compact it.
J
But when I do that, I pay a heavy price, because now I've got to flood 3 LSPs: the update to LSP 0, the update to LSP 1, and I've got to purge the LSP that no longer has any content. At scale this exacerbates the flooding problem by an order of magnitude, and I think those of us who have done implementations and dealt with scale have learned this lesson the hard way in some cases. So one of the lessons here is: don't try to compact your LSPs unnecessarily.
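The cost difference between the two strategies can be made concrete with a toy fragment-packing model (the fragment size and packing rule here are illustrative assumptions):

```python
# Toy model of the two update strategies above: neighbors are packed
# into fixed-size LSP fragments, and we count how many fragments must
# be re-flooded (or purged) when one neighbor goes down.

def pack(neighbors, per_lsp):
    return [neighbors[i:i + per_lsp] for i in range(0, len(neighbors), per_lsp)]

def floods_after_failure(neighbors, per_lsp, failed, compact):
    before = pack(neighbors, per_lsp)
    remaining = [n for n in neighbors if n != failed]
    if not compact:
        # method 1: rewrite only the fragment that held the failed neighbor
        return sum(1 for frag in before if failed in frag)
    # method 2: re-pack everything; every changed fragment is re-flooded,
    # and a fragment that ends up empty must be purged (also a flood).
    after = pack(remaining, per_lsp)
    after += [[]] * (len(before) - len(after))
    return sum(1 for b, a in zip(before, after) if b != a)

nbrs = list(range(1, 8))  # 7 neighbors packed 3 per LSP fragment
print(floods_after_failure(nbrs, 3, failed=3, compact=False))  # 1 flood
print(floods_after_failure(nbrs, 3, failed=3, compact=True))   # 3 floods
```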
J
Try to minimize the number of LSPs that you need to update when a topology change occurs. Redundant flooding: there are a number of mechanisms available to reduce redundant flooding when you have a highly meshed network. One of them requires no protocol extensions whatsoever — you just make a local decision and say: I've got parallel links to the same neighbor; I don't need to flood this LSP update on all the links. I just want to make sure that each of my neighbors has received the update on at least one interface.
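That local decision can be sketched in a few lines (link and neighbor names are illustrative):

```python
# Sketch of the no-protocol-extension optimization above: with parallel
# links to the same neighbor, flood each LSP update on only one link
# per neighbor rather than on every link.

def links_to_flood(links):
    """links: (link_name, neighbor_id) pairs; pick one link per neighbor."""
    chosen = {}
    for link, neighbor in links:
        chosen.setdefault(neighbor, link)  # first link to each neighbor wins
    return sorted(chosen.values())

topo = [("eth0", "R2"), ("eth1", "R2"), ("eth2", "R2"), ("eth3", "R3")]
print(links_to_flood(topo))  # one of the three parallel links to R2, plus eth3
```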
J
We need to emphasize that the flooding rate is not a per-interface parameter. The goal of flooding is to get things flooded as fast as we can network-wide, not to try to tune this to a particular neighbor. Use transmit-based flow control to make sure that we don't overwhelm the neighbors, prioritize SNP reception, minimize LSP generation, reduce redundant flooding, and use jumbo frames. The other comment I'd make before opening it up to questions or comments: we wrote this draft.
J
It's intended to be an informational draft. As I said, there's a competing draft in the same space; we'll have to see where the discussion goes as far as the differences in the two drafts. I'm not totally convinced that at the end of the day we need a draft at all, regardless of what the consensus of the working group is. It's good to have the discussion publicly, but whether we actually need to publish, be it informational or standards track, is also something that the working group should consider.
D
So let's recap. The goal of the draft is about flooding, full stop. I think the problem is big enough and complex enough; we don't need to look at independent or additional points, such as reducing redundant flooding. Yes, I agree it's good — we have a working group document working on that, for example — but I think it's orthogonal; I don't think it helps to bring it into this discussion. Minimizing the number of LSP regenerations:
D
Yes, please, let's do that; but again, it's orthogonal. I think we should focus on flooding, which is already a big point, and one on which we had a lot of discussion — thank you, Les — and also a big chunk in Vancouver; there were many, many comments on the list. So I think we should focus on flooding. So what is flooding? I have a set of information that I need to send to you.
D
That's all. I think we are in agreement that we want to do flooding as fast as possible — are we? Yes? Okay, thank you. I think we are also coming to an agreement that we are fine to flow-control your neighbor — that means your interface. Okay, so that's good, and that's already a good improvement. If you can come to slide four — I don't want to spend too much time on what we disagree on.
D
Okay, so that comes back to my comment: I think we should focus on what we're trying to do, which is flooding. I don't think bringing convergence into the discussion is a good thing — quite the contrary, I think it's harmful. It's a largely different subject, and if we talk about convergence we are going to talk about packet loss, we are going to try to order the computations in the network; but convergence is about updating the FIB.
D
It's completely — largely — different from flooding. And it's important, because for convergence, as I proposed a long time ago, we could see that we would like, for ordering purposes, to have an order within the network, and it could be beneficial to delay the updates depending on the type of topology change. But that's not what we want for flooding. For flooding, we don't want to delay the flooding, we don't want to order the flooding — we want to flood the information. We can try to do something intelligent for convergence, but I'd rather not discuss that subject at the same time, and I don't think it's helpful. So with that...
D
...we are in agreement. So the point where we are not fully in agreement — but we are going to work on it — is whether to do flow control per neighbor or per interface. I think that doing some explicit flow control with your neighbor is safer and better. On that we have a disagreement, but I'll leave that point to Tony.
E
Do we agree that flooding as fast as possible does not mean dumping the entire LSDB as back-to-back packets? Yes, we agree — good. So we're trying to maximize goodput, the actual transfer between two nodes, and now this boils down to a control theory question, right: how can we maximize that goodput, given that we are in a situation where we don't know the exact characteristics, and the dynamic characteristics, of the receiver?
J
You know — "here's what I could handle", or "please slow down", or some kind of message to that effect — which requires signaling from the data plane to the control plane about the queue size, if you will, on a per-interface basis for a particular set of protocol packets. I find that very challenging for any platform to implement. So from a practical standpoint, I'm very concerned that even if we could agree that this is the best solution, I don't know how practical it is to implement.
J
The second point I would make is that the time when you need the signaling is when you're actually overloaded, and it requires, you know, sending another hello back to your neighbor to say, hey, slow down — and that's the very time when you're more likely to be dropping packets. So I'm also concerned about that.
B
Question with my chair hat on: you don't like their proposal, but couldn't your proposal and their proposal actually live together? I'm basically going off what Tony just said: if it's very easy, given your hardware, to determine packet types in your queue and to get the information needed, then that's easy to implement; if not, couldn't they fall back to your proposal?
J
So, Chris, if I understand you correctly, what you're suggesting is that we define a transmitter-based flow control, which everybody can implement, and those implementations that can support the detection of the receive queue length can send that and use the extensions defined in the other draft, which then become optional. And then you have to define what happens when you're using the transmitter-based flow control by default and you happen to get the receiver-based information — how do you decide how they interact?
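One way the interaction being discussed could resolve is a sketch under the assumption that the stricter constraint wins; the default rate and all names below are illustrative, not from either draft:

```python
# Hedged sketch of the compromise: transmitter-based flow control is
# the default for everyone; if a neighbor advertises a receive rate
# (the optional extension), the transmitter honours whichever
# constraint is stricter.

DEFAULT_RATE = 33.0  # legacy LSPs/second, used when nothing else is known

def effective_rate(tx_estimate, advertised=None):
    """Combine the transmitter's own estimate with an optional advertised cap."""
    if advertised is None:
        return tx_estimate               # pure transmitter-based flow control
    return min(tx_estimate, advertised)  # receiver-based cap, when present

print(effective_rate(500.0))         # no advertisement: transmitter decides
print(effective_rate(500.0, 200.0))  # advertised cap wins when stricter
```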
E
Seems like a reasonable compromise. Your second point: we never want to get into a situation where the receiver is congested, regardless of what's going on. We never want to get to the point where our receive queue has zero free entries — that means we're going to start dropping packets, and that's guaranteed to hurt goodput. So what we know is that we want the transmitter to keep that queue somewhat full, right.
J
So, you know, part of what we're talking about here is that the goal is to flood the set of LSP changes network-wide, quote-unquote, as fast as possible. Okay. I would submit that in most cases flow control won't actually kick in. There are obviously some cases where we get a large number of LSPs and it may kick in, but I think in general we probably don't have to pace LSPs for 95% of the topology changes.
J
So I think this goes back to my concern that it's very difficult for the receiver to specify a rate. As I've expressed, I think it's difficult for a receiver even to detect the peak conditions and communicate them to the control plane when it's needed to do so. Trying to pre-calculate — you know: here I am, everything's quiet, I'm looking at my configuration, I'm looking at the size of my LSPDB, I'm looking at all of the other protocols that are running and all of the other features that are running, and somehow I'm supposed to figure out how many IS-IS LSPs I can support on a particular interface — I think that's a pretty complex problem.
K
Peter from Cisco. I just want to make the comment that on a distributed system there's no single queue to monitor. You have a queue on a line card, you have a queue between the line card and the RP, you have a queue on the RP — you'd have to look at all these queues, and, as was said, these queues are not just for us; they are shared with other traffic. I don't see a simple way we can figure out the rate that we can sustain.
C
Just as a point of reference: we looked at this. We had lots of problems with earlier OSPF implementations, and in the early-to-mid 2000s a guy from AT&T Research published a draft on this. Because of these complexities, we made a number of recommendations, but they didn't involve explicit per-neighbor or per-interface flow control, and since implementations have done that, the problems have pretty much gone away in OSPF.
I
Yeah, anyway — so I think that would actually help a lot. So, Chris Bowers: I just had a comment. When I originally worked on the draft with Bruno, the idea that I had in mind wasn't the dynamic adjustment of this receive value, and I believe there's still text in the draft that talks about that. So, as a fallback: even if you don't believe you're able to, for whatever reason, compute dynamically what your maximum receive rate should be...
I
You know, by advertising no value — by saying we just can't advertise any value — we're really putting it back on the service providers to individually test every single hardware platform, software release, and vendor, and set the value themselves, and I think we can possibly do a little better than that. They are using extremely conservative values now. You know, a method that says: I'm on a really limited platform, so I'm going to use the current value, like 33 milliseconds or something; or, I'm on a stronger platform in general, and I figure out I have only 10 interfaces, so I'm going to go: well, it's probably okay for me to go down to five milliseconds — something along those lines. And that value doesn't change over time; it's like a configured value — it changes maybe as you change the configuration and the number of interfaces. That seems like a reasonable advertised value. It could be...
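A sketch of that kind of static, configuration-time estimate (the platform classes, thresholds, and values below are purely illustrative — the talk only mentions 33 ms and 5 ms as examples):

```python
# Hedged sketch of Chris Bowers' suggestion: instead of advertising
# nothing, derive a conservative static receive interval from coarse,
# configuration-time facts.  All thresholds here are assumptions.

def advertised_interval_ms(limited_platform, num_interfaces):
    if limited_platform:
        return 33.0   # stick with the legacy pacing interval
    if num_interfaces <= 10:
        return 5.0    # stronger box, few interfaces: safe to go faster
    return 15.0       # stronger box, many interfaces: middle ground

print(advertised_interval_ms(True, 4))    # limited platform
print(advertised_interval_ms(False, 10))  # strong platform, 10 interfaces
```

The point is that the value changes only with configuration, not dynamically, so it stays easy to compute and to reason about.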
I
...an estimate. The estimate that you would probably want to be using for this static value should assume, okay, you've got sort of a maximum amount of BGP going on, or whatever. But again, if we do that testing and are willing to publish that value in a TLV, at least in some scenarios, that's better than the current situation, where they're choosing the most conservative values. Yeah.
I
So, if we want people to be able to lower them, then we have to be willing to say: okay, at the very least, this value is reasonably safe. Your flow control mechanism — of, you know, not acknowledging in the SNPs — could then kick in, for example. But at least be willing to advertise some information now, so that we can at least get below the values that have been in the network for 10 or 20 years.
L
This is the prefix unreachable announcement (PUA) draft. The first scenario is inter-area: normally, for inter-area routing, the ABR advertises a summary route. If the ABR advertises a summary route and one node in my area fails, the other areas beyond will not be notified immediately; they have to depend on other mechanisms to detect the failure of the node.
L
Another scenario, also for inter-area, is a link failure rather than a node failure: the link that connects one node goes down, so the node is disconnected from the ABR — the ABR can no longer reach that node. But because the ABR still advertises the summary route...
L
...the other routers still think they can reach the unreachable node, so the traffic toward it will be dropped by the ABR. The traffic should be rerouted to the other ABRs, but in the current situation there is no solution for this in OSPF or IS-IS. The same issue exists for the inter-area case through the backbone: for example, in area 0, the summary route hides the failure of the node's prefix.
L
So if the node failure happens, the routers in the other area receive the summary route that covers the failed node's prefix, and they still think the failed prefix can be reached. Because of the summarization, we cannot bypass the failed node. This is the problem: the prefix is unreachable, but it still appears reachable.
L
Based on this, we propose some solutions. The first one is for the inter-area case. For both OSPF and IS-IS, we define a way to carry the unreachable information, so we can use this information to notify the routers in the other areas that one prefix is unreachable. For example, if the node fails, the failure will be detected by the ABR routers.
L
The ABRs — routers R2 and R4 here — will announce not only the summary routes but also the unreachable prefix, carried in a PUA LSA. This LSA will be flooded to the other areas. When a router in another area receives this LSA, it will generate a black-hole route, because it knows it cannot reach the failed node.
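The receiver behaviour just described — a covering unreachable announcement overrides the summary route — can be sketched as a lookup rule (the data structures are illustrative, not from the draft):

```python
# Hedged sketch: a destination covered by a PUA prefix is black-holed
# (or rerouted via another ABR) even though the summary route would
# otherwise match it.  Addresses here are made-up examples.

import ipaddress

def route_for(dst, summary, pua_prefixes):
    addr = ipaddress.ip_address(dst)
    for pua in pua_prefixes:
        if addr in ipaddress.ip_network(pua):
            return "blackhole"       # announced unreachable: drop or reroute
    if addr in ipaddress.ip_network(summary):
        return "via-summary"         # normal inter-area forwarding
    return "no-route"

print(route_for("10.0.3.1", "10.0.0.0/16", ["10.0.3.0/24"]))  # blackhole
print(route_for("10.0.4.1", "10.0.0.0/16", ["10.0.3.0/24"]))  # via-summary
```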
L
Another solution is for the intra-area case. For intra-area, there are also scenarios where some routers cannot reach the failed node or link prefix. In such a situation, when the ABR receives this information and finds that the failed node or prefix is within the scope of its summary route, it will also automatically generate a black-hole route for the failed node or link.
L
Some other considerations: the current solution only covers the most simple scenarios for generating and consuming the PUA. There may be some more complex situations, for example on a multi-access network, so currently we just add some more conservative rules for the PUA.
L
The first rule is that the ABR limits the PUAs to a configured maximum number of detailed unreachable addresses per summary route. When the number of unreachable prefixes is less than the maximum, the ABR advertises each unreachable address individually.
L
The same applies on the receiver side: when the number of unreachable addresses is less than the maximum, the ABR advertises only the detailed unreachable addresses. But there are scenarios where the unreachable addresses would be more than the maximum. In such a case we make a simple solution: just advertise the summary route with the maximum metric, which decreases the preference of the summary route; and on the receiver side, a black-hole route is installed for a predefined time interval.
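A sketch of that fallback rule and the timed black-hole (the cap and the hold time below are illustrative values; the talk describes them only as configured/predefined):

```python
# Hedged sketch: when too many prefixes under a summary become
# unreachable, the ABR stops enumerating them and instead deprecates
# the whole summary with maximum metric; receivers hold black-hole
# state only for a bounded interval.

MAX_PUA = 8          # illustrative configured cap
HOLD_SECONDS = 60.0  # illustrative black-hole lifetime

def abr_advertisement(unreachable):
    if len(unreachable) <= MAX_PUA:
        return {"summary_metric": "normal", "pua": list(unreachable)}
    # too many to enumerate: deprecate the whole summary instead
    return {"summary_metric": "max", "pua": []}

def blackhole_active(installed_at, now):
    return (now - installed_at) < HOLD_SECONDS  # expires after the interval

print(abr_advertisement(["10.0.3.0/24"]))
print(abr_advertisement(["10.0.%d.0/24" % i for i in range(20)]))
```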
L
After the failed node or prefix has converged, the black-hole route will be removed. For the next revision, we welcome comments from the experts, and we welcome further, more complex scenarios that are not yet covered; we also welcome collaboration from others interested in this work. I think such scenarios also exist in DC networks. Okay.
C
From Cisco Systems: so, to do this effectively, you have to know what you expect in that range, because otherwise you don't know what you're missing, because of the timing and the sequencing. Are you going to map, at any place where you summarize, what the ABRs are expecting, so that you know what pieces are missing — say, when you first come up?
L
Normally the ABR can find that through the LSA updates — it can see which link of each node is missing. And even if a node cannot come up, maybe there are other routers that can reach it later.
K
Peter from Cisco. As you said — we had a chat about this — there are multiple issues here. First of all, you announce something is unreachable: how long is it going to stay there? It's going to be unreachable forever if it never comes back, so you have to time out the information at a certain point if the address that you made unreachable is not coming up. As well, you know, if the ABR loses connectivity to many of the prefixes, you've basically lost the summarization effect, because you are going to advertise...
J
Watkins Burke. So I share all the concerns that Peter expressed, and I know we did talk about this privately. The other point I would make is, from a procedural standpoint, you're actually violating some existing protocol specifications. It's currently not legal to send a router ID of zero, and this is completely non-backwards-compatible. There's no implementation today that would interpret a prefix reachability advertisement as indicating negative reachability, so it would require a forklift upgrade. I think the problem space is interesting, and I...
M
So this is the draft on the IGP mechanism which can be applicable for the transport network slice. This document mainly defines the IGP mechanisms and extensions for SR-based VPN+. The purpose is the distribution of the required attributes to both the network nodes and the controller. In this version we also take the control-plane scalability into consideration, and for the controller analysis we have another draft in TEAS; we can take a look at that one.
M
Okay, here is the methodology. First, the functionality required from the IGP is to advertise and track the attributes of the different virtual networks, and it also needs to compute the routing and forwarding tables for each virtual network. So we think, for the IGP, the flexibility and scalability are important, and we need to consider them at the beginning of the design.
M
So that we can support the 5G network slicing deployment in different scenarios and phases, we can have a consistent solution for tens, hundreds, or thousands of network slices. So basically the design is: we proposed a multi-dimensional network slice definition, so that the network slice is defined as a combination of several key attributes, like the topology attributes and the resource attributes; we may also have other attributes in the future.
M
Okay, here's an example. We can see the topology on the top; based on this topology — this is the underlay network — we can define the sub-topologies using existing technology, like multi-topology or flex-algo. In this example we have two different sub-topologies defined based on this underlay topology. And the second step is, we can use another identifier, called the resource ID, to define the different groups of resources allocated to different services or network slices, so that we can have several network slices which have the same topology.
M
Okay, here are the extensions to the IGP. The first is, we need to advertise the definition of what we call the transport network slice. In this version, basically it contains a 32-bit global identifier for the transport network slice, and it can also carry optional sub-TLVs. Currently we have two optional sub-TLVs defined, which are the topology and the resource.
M
Okay, here are the encodings of the sub-TLVs. For the topology information advertisement, we think we can reuse the existing multi-topology and flex-algo to identify the topology which belongs to a network slice: we have the MT-ID and algorithm fields in this sub-TLV, and we also use the flags to control which field is used as the topology identifier. And the second sub-TLV is the network resource sub-TLV. In this one, we use a new 32-bit globally significant identifier to identify the group of resources allocated to a slice in the network.
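The encodings just described can be sketched roughly as follows. This is a hypothetical byte layout for illustration only: the draft's actual TLV codepoints and field order are not given in the transcript, so the type values (`SLICE_TLV`, `TOPOLOGY_SUBTLV`, `RESOURCE_SUBTLV`) and the exact layouts are assumptions; only the described contents (32-bit slice identifier, MT-ID/algorithm/flags, 32-bit resource identifier) come from the talk.

```python
import struct

SLICE_TLV = 200        # assumed codepoint, not from the draft
TOPOLOGY_SUBTLV = 1    # assumed
RESOURCE_SUBTLV = 2    # assumed

def topology_subtlv(mt_id, algorithm, flags):
    # MT-ID and algorithm fields, plus flags controlling which
    # field is used as the topology identifier (as described above).
    body = struct.pack("!BHB", flags, mt_id, algorithm)
    return struct.pack("!BB", TOPOLOGY_SUBTLV, len(body)) + body

def resource_subtlv(resource_id):
    # New 32-bit globally significant resource identifier.
    body = struct.pack("!I", resource_id)
    return struct.pack("!BB", RESOURCE_SUBTLV, len(body)) + body

def slice_tlv(slice_id, subtlvs=()):
    # 32-bit global identifier for the transport network slice,
    # followed by the optional sub-TLVs.
    body = struct.pack("!I", slice_id) + b"".join(subtlvs)
    return struct.pack("!BB", SLICE_TLV, len(body)) + body
```

For example, `slice_tlv(7, [topology_subtlv(2, 128, 0x80), resource_subtlv(42)])` would build one slice definition carrying both optional sub-TLVs.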
M
Okay, in the following slides we show how to advertise the topology attributes and the resource attributes independently. The first is that we can reuse multi-topology for the topology advertisement, because we have the MT-ID, and it can be used together with segment routing to define the network topology and advertise the topology-specific SIDs, locators, and attributes. And the second option is, we can also consider reusing the flex-algo-based approach to define the topology constraints for a virtual network.
M
This is also applicable to both SR-MPLS and SRv6, and you can define the algorithm-specific SIDs and locators. So we think, for the topology attribute advertisement, we can use either option. Okay, for the resource attribute advertisement, here we also reuse the existing technology with the necessary extensions.
M
Here we can see, we use the L2 bundle mechanism with extensions to advertise the resources associated with a particular network slice. And we can see, in this case, we consider a subset of the link resources as a physical or virtual member link of a Layer 3 interface. So for the Layer 2 bundle mechanism, we define new flags, like a flag to indicate whether the member link under the parent interface is a virtual or physical link. And another extension is a new resource ID sub-TLV, which is defined to advertise the identifier of this subset of resources on the link, and this information will be advertised together with the existing TE attributes for each member link. Okay. Okay, that's all for the extensions, and we would like to hear the feedback, and we will refine this document.
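The L2 bundle extension just described can be modeled minimally as follows. This is an assumed data model, not the draft's wire format: the type and field names are invented for illustration; what comes from the talk is that each member link carries a virtual/physical flag and a resource ID naming the subset of link resources, alongside its existing TE attributes.

```python
from dataclasses import dataclass

@dataclass
class MemberLink:
    """One member link of an L2 bundle (illustrative model)."""
    link_id: int
    is_virtual: bool     # new flag: virtual vs. physical member link
    resource_id: int     # new resource ID sub-TLV value
    te_bandwidth: float  # stands in for the existing TE attributes

def links_for_slice(bundle, resource_id):
    # Pick out the member links whose resources belong to one slice.
    return [m for m in bundle if m.resource_id == resource_id]
```

A controller or node could then map a slice's resource ID to the set of member links reserved for it.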
H
I'm just echoing the similar comment provided in SPRING. I don't think you should be defining what a network slice or network topology slice is; you should refer to the relevant documents — that's going to be the product of the design team. And I think you've heard it before: "VPN" is so loaded, which I do not really like. Give it a less controversial name; "VTN", for example, might be good.
N
Lou Berger, as TEAS co-chair. Just to be clear: we do have a draft on a framework for enhanced VPN, yeah. We don't have any drafts on slicing solutions yet; we do have a design team that's looking at a framework for slicing. The solution that's described here is nothing more than an individual draft, and it has no more than individual-draft standing in TEAS as well.
L
I think, for future networking, the resource is a key piece of information, so it needs to be distributed, or flooded, in the IGP network, so that every node can receive the required resource information.
N
Lou Berger. I'll defer to the chairs as to how much of a conversation you want to have about the TEAS working group, but in the TEAS working group we have the VPN+, or enhanced VPN, framework as a working group document. It does mention that slicing is a potential use case for that technology; it is not something specific to slicing. We also have a design team that's looking at the slicing framework, and they have yet to produce their recommendation into a working group. So that's where it stands.
B
Yeah, I mean, we're going to listen to the TEAS co-chair when we're hearing about what's an individual draft. And, you know, I think that we need to wait, maybe, for the TEAS working group to be selecting at least a direction the technology is going, and this shouldn't be — LSR shouldn't be used as an end-run to try to present, you know, things that should be first decided on in TEAS. Here you're going to be doing TLV extensions and stuff to support a technology, right? This isn't the place.
O
I want to add one point. I think, in addition — in fact, the VPN+ work started about two years ago. I assume you may know the history of this draft: I think it started from a side meeting, then went to the design team, and then to the TEAS working group. But I think at the beginning, you know...
O
I think this proposal will help the implementation of network slicing, but, you know, it is not bound to network slicing. There are at least other technologies for implementing network slicing, so which technology will be selected by the network operators is an open question; I think it should not be coupled with the network slicing solution selection.
H
You're addressing a particular layer in the transport slice — you don't address all of it — and it's one of the possible solutions. It's definitely not going to be exposed northbound to the consumer of it, so don't make it more generic, because it shouldn't be. There's going to be a more generic definition that's going to use this kind of technology below it, but this is not it; it's really a reasonably small part of it, right? So.
O
In fact, here I probably share the concern, because I don't think this is a good way to argue about the name. I think we always talk about the use case of slicing like this; when the framework is here, I think that will be clear. Maybe later, to 3GPP, you say: please read our architecture, the network slicing proposed by 3GPP — I don't see...
C
I think we have one working group last call — a request for a working group last call — we'll look at that. And we are also looking at ways forward: first of all, to make sure, since we have the two drafts in the first session, the TTZ and the Area Proxy for IS-IS, to look at whether or not we actually need to do this in terms of requirements.