From YouTube: IETF100-DCROUTING-20171115-0930
Description: DCROUTING meeting session at IETF 100
2017/11/15 09:30
https://datatracker.ietf.org/meeting/100/proceedings/
A: It isn't perfect: we had pre-planned a note-taker, and unfortunately that person is actually on some sort of an escalation call. So we are looking for another volunteer right now to actually take notes, and another volunteer to be part of the Jabber scribing. So, is anybody willing to pick up that particular task?
A: Okay, so let's get started, because we do have a pretty intense agenda today. First some administrivia; this is the quick agenda here. You have probably seen this a few times by now: the Note Well. Keep it in mind. Even though you've seen it a few times, whatever you say is going to be recorded and will be public record, so be aware of that and keep the Note Well in your attention. We have also handed around the blue sheets, so fill those in; then at least we have an idea how many people actually were here, which is one of the data metrics we intend to capture for this particular session on DC routing. For the next two and a half hours we have a pretty intense agenda. We have a lot of topics to talk about, and for some of the elements we have the expectation that there will be lots of Q&A and discussion going around. Actually, it's pretty nice.
A: So we're going to be covering, first, a few presentations and discussions around the requirements and the problem space. That's going to be followed by some people who have been looking into some of the changing requirements from the routing perspective and have been looking into new protocol work on how to address that: how to make new, tailored solutions for this changing problem and requirements space. That is going to be followed by another approach, where people have been looking into augmenting existing technologies: so not really tailor-made, but using what we have, augmented for this changing data center routing environment. And then, at the end, we're going to do a quick wrap-up of where we are. Now, because Victor and I have been wearing these badges, "ask us about the BoF", before we actually start with this Birds of a Feather about data center
A: routing, I want to make clear that everybody in the room understands what the intent of this particular session is. The main intent here is to provide our wonderful area directors with input which they can actually use for potential future actions regarding the developing problem space for data center routing. In addition to that, we would also like to get a better understanding of what the modern data center needs are with respect to routing: management, resiliency, programming, operations, traffic flows and so on. Also, it is not the direct intention of this particular BoF (it is a non-working-group-forming BoF) that there will be, as an outcome, a general DC routing working group. That may be a result of this, but it does not necessarily have to be the case.
A: So what we expect from you is to receive some guidance and input, to understand some of the leading indicators of change for routing in modern data center environments. Also, it is not the expectation that we gain consensus on the requirements as they are right now. What we are trying to understand here is whether these requirements point towards an environment where the needs, from a data center routing perspective, are different from the traditional WAN routing environment. Another assumption we are making is that the potential to augment existing protocols should not exclude potential new protocol work. This actually means we may do some things in parallel: at the same time as we work on augmenting existing protocols, why not also create some purpose-built data center routing technologies? So, to conclude the introduction: when going through the BoF, I want you to think about the following aspects. The first thing is: if you do Q&A, please be concise.
A: We don't have that much time, and we have a lot of content to cover here. The questions I want you to think about, and for which we are going to be asking your feedback, are: are the data center requirements clear enough to justify additional IETF focus, or what else is actually needed? And again, it is not our intent to create full consensus on everything. Another question you need to think about is: can a single solution actually achieve all of the requirements? That is something to think about. In addition, do you have interest to actually work on the solution space, if there are different sets of requirements available? And as a last one: how should the IETF organize the work on the solution space? That means: should we just leave it as it is right now, nothing really dedicated; should we create something like a data center routing working group which covers the general elements; or should we actually carve up a set of requirements and then follow them up with solution-space protocol work as such? I think that is what I wanted to say here. So, I think: Jeff.
C: Good morning everybody, it's great to be here. A lot of work and effort went into making this happen; thanks to our ADs, who made it happen. We started about a year ago, thinking about how data centers are different from the rest of our networks, what's been done before, and what needs to be done in the next few years. We formed the team, and not just because we are long-time friends: we know each other, we trust each other, we respect each other.
C: So why do we feel the requirements draft is needed? We really want to avoid the beauty contest. We have seen this kind of debate many times: my OSPF is better than your IS-IS, and I have run the internet on this rather than anything else. The idea here is to create a single set of requirements and definitions that every contender, or any potential solution, could be compared against; so, really, to avoid a beauty contest.
C: The first few slides are the definitions of the fabric. This is what we see today and what we see happening maybe within a year or two; we know new types of silicon are coming that will provide some new functionality, some next year, some two years from now. So we have tried to provide a high-level set of definitions that identify what this fabric is today. To be very clear: the fabric provides basic connectivity. We received the question: is it EVPN, is it LISP, is it something more fancy?
C: It is not; it is really basic connectivity. If you wish to run a VPN on top, that is an overlay service, which still requires a reachable next hop, and this next hop will be delivered by the fabric. Domain separation within the fabric is not provided by the protocol itself; it is provided by an overlay, if needed. So we are not looking into separation within the fabric. Now, some characteristics we have seen; and again, many people who have built rather large data centers contributed to this, and we have gone through probably 50 or 60 iterations of the initial draft.
C: So hopefully this is something you can identify yourself in. The fabrics are mostly regular, they repeat themselves, and they are usually recursive, with very high bisectional bandwidth and path diversity; a fan-out of half a thousand paths is not unheard of. What we see today is that most data centers are ECMP based, meaning we have many, many different paths to the same destination and can load-balance per flow.
C: Limited physical diameter is important. What we see today is no more than 80 kilometers between data centers within a metro, and a fabric could be within one location. However, we see more and more requests to extend a fabric beyond a single physical location, so it is not unthinkable to have a fabric extended over a metro link. It is not classical DCI; in a way it is really a stretched fabric, and such a construction should be supported.
C: Another very important property is the very tight propagation time, which is much shorter than the desired convergence delay. We will see numbers later. This gives the ability to build a protocol in such a way that convergence using the protocol itself is shorter, due to the very short propagation time going through additional Clos fabrics where every endpoint is equidistant, so you know exactly the time it takes to get from one point to another and back. We are also seeing more and more data centers where paths are asymmetrical, so the path to one destination could be longer than another.
C: The return path, again, might be different from the forward path. This should be accommodated when designing a new protocol or extending an existing one. And, for link state, obviously the number of links is significantly larger than the number of nodes. This should be well understood when we think about scale and when we think about convergence.
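As a quick worked example of that link-versus-node ratio (the figures are illustrative, not from the presentation): in a simple two-level leaf-and-spine fabric where every leaf connects to every spine, the link count grows with the product of the two, so the link-state view is dominated by adjacencies rather than nodes.

```python
# Illustrative arithmetic: links vastly outnumber nodes in a leaf-spine fabric.
leaves, spines = 512, 32          # hypothetical fabric size
nodes = leaves + spines
links = leaves * spines           # full mesh between the two levels
print(nodes, links, round(links / nodes, 1))   # -> 544 16384 30.1
```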
C: On to what we have set as requirements. We believe the fabric must support non-equidistant endpoints, so it is not the classical today's data center where everything is exactly the same number of hops from everything else. Since spine-and-leaf is the most prevalent architecture today, it must be supported; however, we don't exclude support for anything else. It is an evolution, we are not trying to revolutionize data centers; however, as we go and design new topologies, they should be supported.
C: KPIs: maybe that is product-management language, but to make it clear, the KPIs identified here are for a single-dimensional, single failure, and they will be changed. We look at it from the application perspective: what can an application survive. Typically we looked at what it takes to propagate a failure, what it takes for the control plane to understand there is a failure, process the failure, download it into the forwarding plane and react.
C: So we have provided three different sets of KPIs based on the size of the problem. You can read the numbers yourselves. We would really like you to look at them, compare them to your use case, and see whether they meet your requirements. Do you need something that is faster? Do you think they should be relaxed? We are really calling for your input; these are not numbers we impose on you. And, very important, the total convergence time is always a combination of the number of routes, the number of paths, and the time it takes to detect.
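To make that point concrete, here is a minimal sketch of such a convergence budget, assuming a simple additive model in which failure detection, per-hop propagation of the change, and FIB programming proportional to the number of affected routes add up. The function and every constant in it are illustrative assumptions for this transcript, not figures from the requirements draft.

```python
# Illustrative only: a rough convergence-budget model for a single failure,
# reflecting the speaker's point that total convergence combines detection time,
# propagation through the fabric, and FIB updates that scale with the number of
# affected routes and paths. All constants are hypothetical examples.

def convergence_estimate_ms(detect_ms, hops, per_hop_ms,
                            affected_routes, fib_write_us_per_route):
    """Return an approximate convergence time in milliseconds."""
    propagation = hops * per_hop_ms                     # flooding the change across the fabric
    fib_update = affected_routes * fib_write_us_per_route / 1000.0
    return detect_ms + propagation + fib_update

# Example: ~150 ms detection, 3 hops of flooding at 5 ms each,
# 10,000 affected routes programmed at 20 us per route.
print(convergence_estimate_ms(150, 3, 5, 10_000, 20))   # -> 365.0
```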
C: So, if we look at today's data center, we must support load balancing using ECMP, which is the case today. We should be able to support weighted ECMP, where, besides the metric to the path, we can also provide a weight to a particular path or class, and we need to support actual load balancing as well, based on particular metrics; and those metrics might not exist today as we know them. It might be something new coming with a new protocol, so we may support new load metrics we are not familiar with today that would bring more granularity.
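As a rough illustration of the weighted ECMP idea mentioned above, the sketch below hashes a flow key onto next hops in proportion to an advertised weight, so one flow always stays on one path. The weights, names, and hashing choice are assumptions made for the example; how such weights would actually be signaled is exactly the open question in the talk.

```python
# A minimal sketch of weighted ECMP next-hop selection; weights and names are
# hypothetical. A stable per-flow hash keeps packets of one flow on one path.
import hashlib

def pick_next_hop(flow_key, paths):
    """paths: list of (next_hop, weight); heavier paths attract more flows."""
    total = sum(w for _, w in paths)
    bucket = int(hashlib.sha256(flow_key.encode()).hexdigest(), 16) % total
    for next_hop, weight in paths:
        if bucket < weight:
            return next_hop
        bucket -= weight
    return paths[-1][0]

print(pick_next_hop("10.0.0.1:443->10.1.0.9:55332",
                    [("spine1", 3), ("spine2", 1)]))   # spine1 carries ~75% of flows
```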
C: A classical DC fabric distributes IP reachability. However, we believe that a new protocol should be able to distribute any other type of reachability, be it MPLS labels or any third-party data, meaning an additional placeholder for metadata. Think about segment routing in the data center: some people think it is really important, so being able to distribute labels, or bindings from prefixes to labels, within the same protocol, as we do today in classical segment routing, is important.
C: Telemetry is important as well. If we need to distribute some data that is not necessarily reachability data, but any metadata related to quality or to third-party metrics, the protocol should provide the ability, and an easier encoding, to distribute such information. We have been looking into encodings for quite some time; we decided to postpone this discussion, as it is a bit of a poisonous topic, but there is definitely interest in new encodings.
C: We believe a mechanism such as BFD is needed; BFD is specifically mentioned here because it is the de facto standard today, though there might be something else in the future. If you look at what is supported in today's silicon, fast failure detection today is BFD. The ability to track the state of BFD, such that it results in state changes in the protocol, is mandatory.
C: We also believe that the protocol should be able to bootstrap the BFD session. So, when there is no configuration on both sides, we should be able to leverage the routing protocol to provide enough information to bootstrap, given that the session could conceptually be used to track liveness of the link.
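A hypothetical illustration of that bootstrap idea: if the routing adjacency already carries addresses, discriminators, and desired timers, a BFD session can be derived from it with no per-link BFD configuration. The message fields and default values below are invented for the sketch, not taken from any BFD or routing specification.

```python
# Hypothetical sketch: derive a BFD session from routing-protocol hellos so that
# no static BFD configuration is needed on either side. Field names are invented.
from dataclasses import dataclass

@dataclass
class Hello:
    router_id: str
    link_address: str
    bfd_discriminator: int
    desired_min_tx_us: int = 100_000   # 100 ms, an example default

def bootstrap_bfd(local: Hello, remote: Hello) -> dict:
    """Build BFD session parameters from the two exchanged hellos."""
    return {
        "local_addr": local.link_address,
        "peer_addr": remote.link_address,
        "my_discriminator": local.bfd_discriminator,
        "your_discriminator": remote.bfd_discriminator,
        # Run at the slower of the two requested rates.
        "tx_interval_us": max(local.desired_min_tx_us, remote.desired_min_tx_us),
    }

print(bootstrap_bfd(Hello("leaf1", "fe80::1", 11), Hello("spine1", "fe80::2", 42)))
```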
C: Operational requirements. This is getting more and more interesting, because the amount of data and the amount of information expected is growing by the day. We believe a new protocol must support real-time, or near real-time, notifications, definitely in milliseconds. It should be able to communicate its state, and its state relative to its neighbors, to a potentially out-of-band system that could change the graph, recompute the graph, or do something with this information.
C: We should be able to take a snapshot at any given time, so that when we want to figure out what happened 20 minutes ago, we can look at the snapshot that was taken 20 minutes ago, compare it to the operational state, and understand what has changed, what has happened, and why, maybe, we leaked particular routes to the world.
C: A very important requirement is to be able to commission and decommission a node without degradation of the operational network. We are not talking about graceful restart or non-stop routing; we all know those switches, they are complicated, they bring a huge amount of bugs, and potentially we would like not to see them. We would like to see an evolution of graceful restart, if you wish, so that when we commission or decommission a node it does not affect operation of the existing network.
C: There are a few items that we looked at which require additional study, and we would like your input on these items. Non-classical encodings: GPB, Thrift, similar self-describing encodings where, rather than trying to encode something in a TLV, we could provide a proto file that describes the data.
C: That gives the ability to innovate, the ability to build things within our own data center, possibly without requiring standardization, and the ability to do it in minutes, not in months. The ability to function as an overlay: the focus of a fabric protocol is the underlay; however, running the same protocol to provide overlay services is definitely possible. It will bring new requirements with regards to scale and with regards to reliability, but it is possible and we are looking into it. Flowlet signaling: a flowlet is a sub-flow of a flow. It requires particular knowledge about interleaving.
C: It requires particular knowledge about how to reassemble the flow so that there is no reordering at the receiving side. We will be looking into it; we believe it is an interesting technology. Multicast: most people in the data center are afraid of multicast, and I hate it. So there are some requirements for multicast; there is BIER coming next year in implementations, so we will definitely look into how to provide a facility to replicate traffic and how to address broadcast and multicast traffic in the fabric. Route aggregation and conditional de-aggregation is a very interesting topic; most data centers today run low-end silicon.
C: So, in terms of scale, you should always watch where you are in terms of the number of routes and the number of labels. When we aggregate we can reduce the amount of state in the network, since it is aggregated. However, as a result of aggregation (and there is a very good description in the BGP-in-large-DCs RFC about potential blackholing and other artifacts of aggregation), being able to de-aggregate dynamically, based on a particular condition, is an important thing in protocols, and we will be looking into it.
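As a simplified illustration of conditional de-aggregation: a node normally advertises only an aggregate, but if some of the more-specific prefixes covered by that aggregate become unreachable, it falls back to advertising the specifics so traffic is not blackholed. The prefixes and the trigger condition below are invented for the example and deliberately ignore the multi-node subtleties discussed in the RFC.

```python
# A toy model of conditional de-aggregation: advertise just the aggregate while
# everything under it is reachable, otherwise advertise the reachable specifics.
# Prefixes and the trigger are illustrative only.
from ipaddress import ip_network

AGGREGATE = ip_network("10.128.0.0/16")

def routes_to_advertise(reachable_specifics, all_specifics):
    missing = set(all_specifics) - set(reachable_specifics)
    if not missing:
        return [AGGREGATE]                       # normal case: summarize
    return sorted(reachable_specifics, key=str)  # failure case: de-aggregate

leaves = [ip_network(p) for p in ("10.128.1.0/24", "10.128.2.0/24", "10.128.3.0/24")]
print(routes_to_advertise(leaves, leaves))       # -> [IPv4Network('10.128.0.0/16')]
print(routes_to_advertise(leaves[:2], leaves))   # leaf 3 lost -> advertise specifics
```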
C: State representation: the ability to build a graph that could be analyzed in real time or offline is important, so facilitating an easily understandable networking graph at scale is important. We will be looking into how the networking state could and should be represented northbound. And the last point here is pretty much the consequence: if you want to do traffic engineering in the data center, you need something like a PCE.
B: [Name unclear.] I'm a little concerned about the requirements doc kind of missing a basic problem statement. Essentially, if you have looked at the MSDC draft, which you do refer to in the document, what would be good to understand is that this is not a new problem, right: data center people have been building these, the requirements are very varied, and what seems to have happened here is a collection of all sets of varied requirements becoming a god list.
B: Secondly, from a fundamental problem-statement perspective: we do not have a requirements document to tell service providers how to design their backbone networks. Data center network design is how companies compete and differentiate themselves, and people run everything from 4-way to 128-way to maybe 4, 5, 12 in the future. Why are we coming up with a set of requirements to standardize? What do I need to standardize the requirements for, is my question. I'm fine with the proposals.
B: If you are saying that in one case you can use BGP, some people run link state, and some people may want to run RIFT or BGP SPF, that is perfectly fine. But why are we trying to standardize a set of requirements which are so varied across different customers, plus it is their business differentiator, and all these requirements together really don't make much sense as a whole? So that's kind of my comment.
B: It is kind of odd, because we're defining what the fabric is; you are saying there is a leaf-spine, plus there are certain requirements which are kind of assumed based on what you've said. For example, equidistance could be extremely important to some customers who operate data centers, but for others a structured leaf, spine, super-spine may be fine. So, point being, it is just varied, right, and one set of requirements doesn't apply to everybody.
C: Please do understand, we are not imposing an architecture. We are gathering requirements so that people who are going to build either a new protocol or extend an existing one (and the problem space is well known in both IGPs and BGP) can look into them and say: hey, my design doesn't address this. Not the DC design, the routing protocol design, because this is about routing protocols; it is not about how you design data centers.
A: That helps clarify it. And I would like to complement that: one of the reasons we are doing this requirements overview is to come to an understanding of whether it is sufficient to just augment existing technology, or whether we should look at more tailored solutions going forward. So a question to ask yourself is basically around that: is what we have right now sufficient to fulfill these sets of requirements?
B: Is it on the informational track, or is it becoming a standard? I don't understand: is it being proposed as an informational kind of draft, eventually to become an RFC on that track, or is it not? It is fine if it is a guideline, but is it actually on the informational RFC track?
D: Sorry, just to add a comment for clarity as well. Remember what Gunther stated in the beginning: we're trying to understand the leading indicators of what is driving various things in these environments. So, for this BoF, the point is to understand: what is this leading us to understand about the environments, and what does it dictate for us in terms of potentially new work, etc.? We're not here specifically trying to say these are the finalized requirements. This is trying to understand: where is this leading us, are there augmentations, is this...
C: If you look at the number of proposals, this is definitely not happening in a vacuum. Proposals are coming, and they are trying to address limitations of today's protocols, right? So it is not something we have invented and are imposing on the rest of the world. We are seeing the need for a better, more scalable, faster-converging protocol, which could be either an enhanced existing protocol or a new protocol, and we are trying to figure out which it is. Okay.
E: In ten minutes with Jeff, anyway: a lot of these requirements seem to be all-encompassing. It seems like you should at least classify them as being routing control-plane requirements, data-plane requirements, or platform requirements, because when I saw the list, some of them are closer to routing and some of them are closer to platform. But it's a good start, I think.
C: I fully agree with you. The intention of publishing it in this form was to request your comments. After more comments and data have been gathered, we will structure it in a way you can read and say: this is what I want from the management plane, this is what I want from the control plane, this is for the data plane.
F: You guys need to just split it into different documents, different concepts, because this one: I love management, I love my management, but it is really not as much a routing problem as maybe the other things you have in this segment. So I think the document started to put too many things in one document, which makes it kind of hard to digest and use.
I: Greg Mirsky, ZTE. You emphasized and called out the role of BFD in triggering convergence. That is very good, I appreciate it. What about degradation of links and paths? Do you think that indirect analysis of telemetry, like buffer and queue utilization, is sufficient, or might there be some requirements towards performance measurement?
C: We would like to keep the functionality of the protocol limited, in a way that does not increase the size of the link-state database. So, given the ability of telemetry systems today, we believe that the amount of information that can be provided with regards to dropped packets, the reason why a packet was dropped, and queue occupancy is enough to declare that there is degradation, and this is where telemetry could come into play, for example.
I: Another question: if I understood correctly, you mentioned, as a routing protocol requirement, distribution of telemetry. So is it routing or management? Because for routing I wouldn't imagine that it would be flooded information. Do you think that telemetry needs to be delivered within the fabric, or do you need to deliver it to a certain host that will do the analysis, the digging and such? It is a slightly different functionality, and I think it then has to be worded differently.
C: A very good question, thank you for asking. The basic telemetry is a system-to-management interaction; I don't think we need to distribute counters across the fabric. However, there is some amount of metadata that might benefit how we load-balance, how we choose the path. So finding the right balance between what should be distributed and what should be sent up to the management system is yet to be figured out. Okay, I think.
J: Randy Bush, IIJ. Could you pull up slide seven, please? We provide trans-global transit, we also have massive banks, we also build data centers. We see those as very different things, and I love this slide, because it makes clear the delineation between a data center and an enterprise. And I was initially horrified at the enormous laundry list of requirements which, as I posted on the list, kind of implies there is something behind them. I don't know if there is or isn't. But I guess, in another sense, this being the IETF, if you hadn't laid out the requirements we would complain that you hadn't, so what the heck. But it is an enormous list, and I think some focus is going to be needed.
C: Absolutely. My first comment on anything is: please don't try to boil the ocean. However, if the 00 draft hadn't triggered any comments from you, any unhappy faces, we wouldn't have done a great job. So it is really up to you to read it and to figure out: hey, I disagree, I don't like this, I know better. And let us know; we are here to listen to you. We are not imposing any of what has been presented; we would like you to assess what has been presented and let us know what you think.
L: If you can hear me: we're just doing this to give a different perspective. If I can introduce ourselves: I don't want to say we represent anybody, but you can see that the people who have written the draft come from large corporate environments; I don't know if legacy is the word, so we use the word brick-and-mortar.
L: These are the people who've been around for a long time, and we'll talk about that. If I can give a little bit of history: I might call us the 99%, and we're probably the 99%. I don't know what the number is, but we're probably the 99% who are not here at the IETF, and yet in terms of data centers, or presence, there's a lot of them. For example, take Michael.
L: He's with Blue Cross in Michigan, which is a large healthcare enterprise. The healthcare industry and the insurance companies, a lot of them, meet once every quarter, and I was sitting around the table with him one time and somebody looked around and said: that's 10% of the US economy sitting right here. I myself have a very small software company, and I have probably 2,000 enterprises and data centers
L: that I talk to, you know, on my email list. I'm nobody, just a little software company, so this is just to give you a little perspective. We have, I guess, different requirements, and some of the stuff that you guys talk about, with fabric and everything else, is an aspirational goal for us, but a lot of us are definitely not there. So, just to give perspective on where we are.
M: I can say, I was just telling my new friend Linda, that means you have to wake up now. We have the typical setup of data center and campus, and then, unique to Michigan, a statewide network, and that's important to us. But we have Blue Cross plans in every other state, and we connect to them via a private network that's called BlueNet, so that may represent a unique set of requirements, and that's what we want to take into consideration.
L: But anyway: we use the internet for business, but that isn't our business. I don't know if the analogy of the 1% works, but for the people for whom fabric and all that stuff is a huge consideration, the Internet is their business, and for us it definitely isn't.
L: And we have multiple kinds of networks, and we'll talk about that, and we will focus on what happens inside the data center, because I know that's of concern. But, as I said, we get questions like: why don't you just go to the cloud? So I'm going to kind of tell you why we are the way we are, and why it's really hard to change. And part of it is because we got in early: we've been computerized for the last 40 or 50 years, not just the last five.
L
You
know,
I
mean
it's
it's
very
it's
it's
there's
a
lot
of
inertia.
I
mean
a
lot
of
stuff
to
move
very
large
IT
staff.
We
have
a
lot
of
baggage,
yeah
and
very
large
IT
staffs.
Thousands
and
thousands
of
people
and
what's
important
in
a
lot
of
decisions,
is
as
in
any
business
is
you
know,
time
to
market
return
on
investment,
business
reasons
and
the
business
reasons
are
not.
How
fast
can
you
get
your
content
out
under
the
internet
that
it's?
How
many
insurance
policies
can
you
sell?
L
So
it's
a
very
different
kind
of
motivation.
So
what
kind
of
networks
do
we
have
with?
Usually
people
have
large
campus
lands
private
lands?
L
You
know
going
out
that,
for
example,
to
the
state
of
Michigan
I
used
to
work
for
a
very
large
oil
company,
and
we
would
have
you
know
our
own
private
microwave
networks
in
the
swamps
of
Louisiana,
because
there
was
no
service,
and
so
we
had
to
build
our
own
microwave
towers,
I
mean
so
we
we
do
stuff
where
we
need
to
a
lot
of
extranet
a
lot
of
business
to
business
connections
or
he'll
connect
to
the
government
to
Medicare.
You
know
so
on
Social
Security
and
we
do
have
internet
facing
apps.
L
You
know
and
but
that's
everything
is
firewall,
but
let
me
show
you
this
is,
but
inside
the
data
center.
That's
because
I
know
that's
what
we're
interested
in.
What
is
the
inside
of
our
data
center?
Look
like
it
looks
horrible,
it's
it's
tons
and
tons
of
layers
of
servers,
middle
boxes
and
so
on
and
all
I'll
kind
of
I'll
face
this
out
lots
and
lots
of
different
kinds
of
applications.
Let
me
see
if
I
can
give
you.
L
This
pathing
is
done
through
this
through
this,
these
kind
of
things
and
the
biggest
things
which
you
cannot
forget
is
that
in
in
so
so
so
so
many
of
these
kind
of
data
centers
there's
we
have
a
back-end
mainframe
application.
I,
it's
it's
I
know
it
seems
unfamiliar
to
a
lot
of
people,
but
you
cannot
take
these
large
mainframes
and
shift
them
over
to
the
cloud.
L
I've
worked
with
people
for
years
and
years
and
years
and
I've
only
known
to
somewhat
medium-sized
companies
who
are
able
to
do
that,
and
a
lot
of
the
data
resides
on
these
on
these
large
mainframes
they'll
have
a
web
application,
but
the
back
end
is
the
mainframe,
and
so
so
that's
that's
kind
of
where
we're
sitting.
So
let
me
go
back
the
in
terms
of
the
proud
routing
protocols.
We
used
a
lot
of
people
use
OSPF.
These
guys
use
a
lot
of.
A
L
L: Sorry, and it's all different. I'm just collecting this from different people; not everybody does all of this, I'm just collecting it from the different people I know. But this is what we do, and what I'm getting at is: I asked around, people that I talk to all the time, I talk to these companies, five, six, seven of them, every single day, and I said: are you guys having problems with routing?
L: And, totally, if other people find that they need other solutions, or they have a problem, that's totally, totally fine. And, you know, we don't speak for every single enterprise, every single data center in the world. I just talked to a number of them, and they all kind of said yeah.
J: Bush, IIJ. Two comments. One is: probably half the people in this room understand what there is to understand so far about enterprise routing. Yeah, IGRP, by the way, went away once we got multiple vendors. Sorry, you've got to snuggle up to this one, so I'm going to do it again: okay, most of the people in this room understand, or at least flail at, enterprise routing. IGRP went away once we got multiple vendors, a decade or two ago. But I think you've done an excellent job of delineating.
C: Thank you for your work, I really appreciate it, and the time you are spending with us here. It happens to be a completely different world: most of us live in a bubble, we assume things are modern, built and replaced every three years. In your case it is very different; it was eye-opening for me. When looking at what you've been doing, what I really could see is that the network has been built in a way that either represents your entire organization, or whatever the vendor
C: has managed to sell you over the last 20 years. So the intention of the work here together is really to give you tools to fight this: to tell, the next time a salesperson or the next IT manager comes, this is what we figured out, this is what could work for us, it is simple, it is operational, and this is what we want, rather than trying to fight the unknown. So it is really to make things known for you.
C: This was the intention of publishing BGP in large DCs; thanks to Ilya for pushing it through, and I believe it was published in less than a year. It is really to make you aware, to make information public about how things could be done, leveraging previous experiences, so it could work for you and you have enough ammo to fight the sales pitch.
L: You know, Jeff, I really, really appreciate that, because, and I'll let Mike talk, where I think a lot of us would like to go somehow, this is kind of how I think we see our future: we'd like to go to SDN, containers and that kind of stuff, hybrid cloud. We have a problem, though, which is how fast we could possibly go. We'd love to have some BCPs, some paths, you know, forward, best practices on how to do that.
L: Telemetry: we're talking to some people about the kinds of visibility and stuff we need. The problems, because we have some problems that others might not have, are that we need to encrypt inside the data center; we have regulatory requirements for separation and visibility and so on; and we have tons and tons of applications, some of which were written, as I say, in the Stone Age of computing, which are very difficult to convert. And we have a relationship here, and it is expanding.
M: But we want this to be as much of a two-way street as possible. We're going to learn a whole lot more from you than you will from us, but to the extent that our requirements are valuable, great. I'll echo what Nalini said about us being somewhat anchored in the past; that kind of hits at what Randy said as well, that IGRP has gone away. It hasn't gone away in our environment, unfortunately.
O: In a different way, right: the one problem that you are facing, that's killing you, is the IT staff, right? Continuing on this trajectory will probably not be feasible, so you have to ask yourself: how will I reduce IT staff? Which leads you to your automation question and, in a sense, to more regular, consumable infrastructure. That's why people are building fabrics: this stuff is regular.
O: It is easier to manage. The second one is: what is the trajectory in terms of consumption of bandwidth that you see? Because, ultimately, those are components: you consume CPUs as components, you consume memory as components, you consume storage as components. You are not running punch cards or tapes anymore, right? The same thing is happening to bandwidth.
O: All right, and that's where I think this BoF is driving; that's where the world is going, right: the bandwidth has to be packaged into something you buy at Fry's, which it is not today. So if you are on this trajectory, it will push you in this direction, because this is how it is being packaged, as a component. That's the way the world is going, so all this stuff that you have today will go more and more to a regular pattern, to reduce IT staff and basically allow for cheaper consumption at larger scales.
M: You make a lot of very good points, and we're not as different as maybe I'm making it out. I'll agree with what Randy said: I like that slide-seven presentation as well. So we do have a lot more in common than maybe I'm letting on, and we do look forward to working together, if you can continue to tolerate us. Okay.
D: There will be more opportunity in the next sections, so thank you very much. So, what we want to do is, we're going to try to be very definitive: we're going to have a few questions now, after this first section. Hold it. Sorry, you can't hear me: we're going to ask a few quick questions after each section, to try to do a bit of a wrap-up and to get a sense of the room. We're going to
D: do a show of hands here. Again, we're not voting for any leader here; we're just trying to get a sense of where people in the room are. So I'm going to ask three questions, and for each you are going to take a yes, a kind-of-yes, or a no position. So the first one is (and we added this to our deck recently): do you feel that there are requirements that are not yet captured from what we discussed today
D
That
should
be
within
the
focus
of
DC
routing
war
potential,
DC
routing
work.
So
if
you
feel
that
yes
there's,
there
are
requirements
that
we
have
not
yet
discussed,
but
they
should
be
considered
within
DC
routing
potential
routing
work.
Would
you
mind
raising
your
hand,
so
see
me
about
20-30,
okay,
okay,
so
a
few
hands?
Now,
if
you
feel
that
this
pretty
much
captures
at
all
and
there's
really
not
a
lot
additional
requirements
that
would
really
go
into
potential
DC
writing
work.
You
can
now
raise
your
hand.
D
Little
less
okay,
few
hands;
okay,
so
I
guess
a
few
of
us
still.
Okay,
that's
fair
enough!
Yeah!
Sorry,
second
question:
do
we
do
you
agree
that
the
DC
frat,
like
there's
enough
augmentation
in
some
of
the
clients
that
we've
seen
here
that
make
it
different
from
traditional
routing
or
work
we've
done
in
the
past?
So
there's
an
so.
D
The
question
here
is:
do
we
feel
that,
from
what
we've
seen
in
these
requirements
that
there's
enough
augmentation
to
what
it
the
demands
of
the
DC,
the
new
DC
routing
environments,
that
we
should
that
we
think
there's
a
lot
of
augmentation
well
used
to
do
so?
Is
there
enough
differences
here
that
we've
been
exposed
to
that?
We
feel
that
it's
quite
different
than
traditional
work,
we've
done
or
traditional
networks
we
built
or
data
centers.
We
built
okay,
so
show
of
hands.
If
you
do
feel
there's
a
significant
difference.
P
I
have
a
comment
on
this.
You
are
asking
from
the
requirement
presentation
from
those
two
people's
presentations.
Any
differences
I
will
say
that
it's
not
because
of
they
presented
is
actual
data
center.
Today
is
different.
Then
there's
like
service
provider
network
and
data
center.
Then
it
has
specific
characteristics
and
I
haven't
seen
that
being
presented,
because
the
topology
is
very
simple
and
very
kind
of
far
different
than
the
traditional
ones.
I
think
there's
significant
differences,
yeah.
D
I
understand
so
again,
unfortunately,
yeah
you're,
correct
you,
you
are
high
percent
correct
and
we
agreed.
We
didn't
see
all
the
potential
requirements
that
could
possibly
go
into
this.
We
could
only
cover
sone
so
much.
We
took
some
initial
data
and
that's
why
I
tried
ask
question
up
front.
Are
there
additional
requirements
that
we're
not
yet
just
right
here?
That
would
go
into
it?
I.
G
P
A
A: You know, I think you're right. From some perspectives we need to remove certain functionality, but at the same time you probably need to add functionality that compensates for the new characteristics of the data center routing environment. By the way, this is not a Q&A round; we're just trying to get the sense of the room, because otherwise we're going to run totally out of time. Okay, so no comments, just hand raising, basically.
D: Sorry, repeat the question, is that what you're asking? Okay. So the question was: do we feel, from what we've seen of the DC routing environment, or the characteristics that we've been exposed to, that this drives us to do potential protocol work? So I'm repeating the question; you're asking the same question I just asked a second ago. I apologize.
D: One way or the other, okay. Okay, so just to wrap up, and then, if you want, to ask a quick question: it seems, from that last question, that more people believe that yes, this seems to indicate that potential new protocol work might be needed to help address some of these characteristics. I'll use the word characteristics; maybe that's a better term to use at this point than a hard set of requirements.
Q: A comment: [name unclear], Bloomberg LP. From the comments that I heard, there is definitely a need for possibly more protocol work, and there is definitely a need to develop more things for fabric-like implementations that are distinct from, let's say, how traditional data centers got done. But does it really have to be, I mean, I could see how this could be called fabric, but does it absolutely need to be called DC routing? Does DC have to be in it?
Just
to
comment
that
you're
correct
the
DC
routing
was
the
name
of
the
boss
and
we
were
charged
trying
to
get
input
there.
This
is
a
non
working
group,
foreign
Boff
and
there's
no
specific
indication.
We're
gonna
have
a
DC
routing
working
group
per
se.
This
is
just
trying
to
get
input
from
the
group
in
community
so
just
to.
C
N
D
S
S
S: So the main problem we were trying to target was, again, massively scalable data centers. They have implemented some sort of layer-3 routing; this has been done today. There has been some kind of centralized route control, using a controller-based solution, no surprises there, and, for operational simplicity, most of these folks have deployed BGP as their routing protocol.
S: Typically, the route reflectors that are used inside BGP, which are not in the forwarding path, assume the presence of an IGP-like protocol to resolve the next hops, the underlying next hops for BGP. In MSDCs, which are typically Clos networks, this problem is resolved by having hop-by-hop sessions set up, and therefore the next hops are just recursing over directly connected interfaces.
S: So the solution that is proposed here helps a deployment of a controller-type model where you could still use BGP and get faster convergence from BGP, by avoiding the head-of-line blocking that you would otherwise have to incur using a distance-vector protocol. Here are the standard advantages of running OSPF, sorry, over any distance-vector protocol: nodes have a complete view of the topology, therefore it is an ideal sort of algorithm that you want to run as an underlay protocol.
S: So, with that in mind, if you were to combine both link state and BGP together, here are the changes you would probably need to make to the protocol. You want to define a new SAFI, so you carry this over a completely different address family. We have done that: we have defined a SAFI that pretty much mimics the link-state SAFI that has already been defined inside BGP, and the reason to do that is that the packet formats in the link-state SAFI very closely mimic the IGP packet formats.
S: You have a new capability that lets you exchange the new SAFI and establish the connection only if the peer on the other end supports it; it supports multiple peering models, and it essentially runs Dijkstra instead of the distance vector. In the announcements, the next hop and the path attributes that are carried to resolve the next hops are kept intact, so that you don't break the base RFC 4271.
S: You do, however, replace the decision process of BGP, in particular phases 1 and 2, with SPF. Phase 3 of the decision process can be short-circuited, because you are talking about node IDs and prefixes here. And finally, you need to ensure that, when you are announcing these updates, only the most recent version of an NLRI update is accepted. This is something that BGP already does; you could augment it with sequence numbers on an update message and actually take care of this problem from an SPF standpoint. That is defined in this proposal.
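To tie those pieces together, here is a minimal sketch of the mechanism the speaker describes: link-state information carried in a BGP SAFI, a sequence number deciding which copy of an NLRI is freshest, and a plain Dijkstra SPF standing in for best-path phases 1 and 2. The data structures, field names, and topology are illustrative assumptions, not the encoding from the draft.

```python
# A minimal sketch: sequence numbers pick the freshest NLRI, and SPF replaces
# the BGP best-path phases 1 & 2. Everything here is illustrative.
import heapq

link_state_db = {}   # (origin_node, nlri_id) -> {"seq": int, "links": {neighbor: cost}}

def install_nlri(origin, nlri_id, seq, links):
    """Accept an update only if it is newer than what we already hold."""
    cur = link_state_db.get((origin, nlri_id))
    if cur is None or seq > cur["seq"]:
        link_state_db[(origin, nlri_id)] = {"seq": seq, "links": links}

def spf(root):
    """Dijkstra over the BGP-learned topology."""
    graph = {origin: entry["links"] for (origin, _), entry in link_state_db.items()}
    dist, heap = {root: 0}, [(0, root)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in graph.get(node, {}).items():
            if d + cost < dist.get(nbr, float("inf")):
                dist[nbr] = d + cost
                heapq.heappush(heap, (d + cost, nbr))
    return dist

install_nlri("leaf1", "node", seq=7, links={"spine1": 1, "spine2": 1})
install_nlri("spine1", "node", seq=3, links={"leaf1": 1, "leaf2": 1})
install_nlri("spine2", "node", seq=4, links={"leaf1": 1, "leaf2": 1})
install_nlri("leaf2", "node", seq=9, links={"spine1": 1, "spine2": 1})
print(spf("leaf1"))   # shortest-path distances from leaf1
```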
S: The part of the work that is not covered here is: if it runs as the underlay SAFI or address family, how do you stitch it with the overlay BGP address families? We think this is a matter of local implementation policy that can be taken care of by individual implementations, and therefore it doesn't need to be standardized.
R: [Name unclear], VMware. A question: in the slides you mentioned that you will be replacing the phase 1 and phase 2 processes. But BGP provides policies, which are kind of central to those decision processes, and you should be able to continue having the capability to apply policies at RIB-in and RIB-out. So what are your thoughts there; I mean, why do you say that it is replacing them?
S
So
the
policies
that
are
applied
within
bgp,
whether
they
are
at
an
inbound
or
an
outbound,
they
are
typically
done
either
before
you
run
a
decision
process
or
after
you
run
decision
process.
Those
policies
would
still
be
supported
by
this
protocol,
but
the
decision
that
the
best
path
process
that
you
would
run
post
an
application
of
a
policy
for
inbound
processing,
simply
gets
replaced
with
an
SPF.
So
you
don't
need
to
run
any
in
rib
evaluation
into
bgp
best
part,
which
is
the
ten
step
best
part
process.
R
Yeah
sure
welfare
phase
phase
one
and
phase
two
processes
do
mention
that
you
applied
there.
Even
propolis
is
I
mean
in
the
phase
one.
You
do
apply
the
policy
before.
Do
you
say
that?
Okay,
you
know
this
route
is
now
selected
to
apply
to
you
know
selected
for
the
local
rift
and
same
thing
for
Phase
two
as
well.
You
do
apply
that
out.
Policy
is
outbound
in
this
I
believe
are
considered
as
part
of
the
part
of
the
decision
process
and
in
the
phase
processes.
R
U
S
U
U: What I'm trying to put in perspective here is: there are many other things that we captured, I won't call them requirements, but as part of the wish list, that we need to solve, ten or five different things, and what we are addressing here is a very specific one, which, I mean, I agree, we need some kind of SPF sort of thing, right.
S: Yes, and my answer still stands: we are looking at optimizing the protocol in a way that helps reduce the effects on the network, by achieving better convergence, and better manageability by introducing the presence of a controller-like model. Yes, there are other problems that are listed out in the requirements, and we can address them; maybe to address them you would require a different set of solutions, while this is specifically focused towards solving some of these issues of the network. So we are in agreement.
U: That makes sense. I just wanted to echo one of the other things that someone brought up: we want to solve the problem, but we are looking more for simplification and removing features than for adding a lot. I mean, I agree we need enhancements, but it's not about overloading some existing BGP and trying to solve it that way; simplification is also very important, at least that's required. That's my opinion.
E: Acee Lindem. Just a couple of comments on some of the previous points. With regards to the phase 1 and phase 2 BGP normal processing: you really have to modify that for this SAFI if you want to get the same convergence behavior as an IGP; you can't keep it exactly as it is. That's the one comment, and the other comment is: I think all of the requirements that are actual routing requirements, and not data-plane or management or platform requirements, I think it covers more than just one of them.
J: We have a long history of things that we designed for the local area network going out to the wide area network and getting in trouble. One of my hats is research: I think I could model this in a data center. I think I haven't had enough coffee to understand what happens if somebody tries this on a large transport backbone, and I don't think I want to have that much coffee.
J
So
I
think
this
has
the
limited
for
the
moment
I
can
put
up
with
this
on
the
limited
scope
of
a
data
center
and
I
think
it
scales
well
for
that
and
I'd
relies
on
the
homogeneity
and
so
on
and
so
forth.
So
I
see
that
just
red
flags
if
people
start
to
either
add
requirements
to
make
it
work
on
the
lan
or
try
to
apply
it
to
the
way
and
I
think
we're
stepping
over
the
line
there.
O
A
S
S
E
E: Just one more comment on that: today we're already using BGP-LS for other purposes, and we're reusing a lot of those encodings for the link and node attributes. So if we want to change the behavior, like we are, to get the faster convergence and do a different selection to determine which version of an NLRI we use, we really need a different SAFI to attach this new behavior to. We can't just poke it in, you know, reuse it and put communities on it, I don't think.
R: I think what would be good to have in this draft is, you know, to take an example topology, like a typical leaf-spine topology in the data center, because we are trying to do this for the data center, and kind of walk through it: today, if you have BGP or OSPF, this is how the routing works, and this is how it's going to work with this new proposal, and these are the benefits.
D: Speaking as a commenter here, as an implementer, one that's focused a lot on using BGP, for a number of reasons: one is, I see a lot of potential personally, in so much that a lot of us have gone to try to do holistic BGP deployments for a number of operational reasons. Second, to what Randy had mentioned: potentially, were something like this to occur,
V: For example, the last one addresses a problem we've had a long time with iBGP, and it's been the topic of study for a long time, and it might indeed help the people in the brick-and-mortar space. Whether it's doing everything that is in the final thing that Jeff indicated: what is the scope of the discussion we're hearing? Is this question germane to it? Because it's hard to determine what I'm listening to and what I'm trying to provide input on.
A: Yes, so I think we mentioned that during the beginning of the session here. The scope of this BoF is first to get, you know, to come to an agreement and an understanding that routing in the data center may be a different kind of beast than routing in the WAN environment. The majority of the routing protocols right now were developed for the WAN environment, like 15 years ago or something like that.
A: So, you know, with that in mind, we first tackle that by looking into some of the requirements, and then come to an agreement that, yes, potentially something actually has changed. That is then followed up with, you know, some people have realized this and have developed new concepts, new ideas, and that's what we're presenting right now with the BGP SPF.
V: So the problem is, there's a third option. Some of the things we've looked at for a time, like the BGP SPF work, didn't really fit what we had. But again, we need more requirements out of the brick-and-mortar space and the rest, in order to say: gee, it may not fit those characteristics, but it might really help data centers in the brick-and-mortar world to really do a much better job.
C: If I may comment: we are not trying to choose a king, and it doesn't have to be a single thing for everybody. If your IT staff is happy with OSPF, maybe just reducing flooding, or optimizing it, would be good enough. So mileage varies per company, per type of IT staff. So it's important, again, to make people aware there are different solutions, and it is good to provide guidance for implementers and for designers as to what it should be, what people are looking for.
J: It would be good if we didn't try to boil the ocean. I know it's a tradition to do so in the IETF, but having a narrow focus... I mean, I love the fact that slide seven was great. And, excuse the drift, but David Siqueiros, a very famous Mexican painter of a previous century who did ceilings and churches, among other things, has one of a man who was so open-minded that his brains fell out. I don't think we need to do that here.
O: Can you guys hear me? Yeah, okay, that's constraining. Good morning, or whatever time zone we're in. So I'll be showing something radically different. I got interested in this specific IP fabric problem ten years ago; well, that may be a bit unfair, because MAC-in-MAC was the hottest thing then. I worked with people along those BGP angles, looked at IS-IS modifications, and have been around all these things for quite a long time, and these requirements were emerging.
O: I was talking to a lot of people, and what I observed is that the most interesting angle in this IP fabric development is that we have a chance to build a component; we have a chance to build a RAM chip. Yes, we are the IETF, and we in networking are used to OPEX, right? We were driving insane sizes of IT departments in corporations, and that can be done, right, we can continue on this IP fabric along those lines, and the hyperscalers are showing that OPEX is free.
O: Well, the reality for the majority of people is not that, right? A very interesting angle is to get the IT departments down and make bandwidth, networking, a consumable: like you go to Fry's and buy a RAM chip. Who knows what the CAS configuration on RAM chips is? So basically my angle is that the most important requirement I was looking at, for a good IP fabric, data center fabric solution, would be something that has zero OPEX. So you pay your CAPEX, and you pay more CAPEX
O: as you go to get more bandwidth, and you don't have to scale your OPEX: train people, configure stuff, build controllers, do all these things. So, what it boils down to: is this thing working? Okay, it is. So, the requirements draft is a good wish list; lots of good arguments have been made, though 512-way is not really the reality.
O: 512-way ECMP is not the reality for most people, but if we do the job well, it will be. How much memory do you have in your PC? Memory has been done well as a component. Look how many cores, what capacity you're running, how many CPUs, how many solid-state disks, how much storage you are buying compared to what you were buying 10 years ago; and it isn't because the solid-state disks are so bloody complicated to run, like punch cards.
O: One of the main reasons is that that stuff became a component, and the volumes were driven up and the prices came down; volumes up, prices down, call it the McKinsey curve. So how could we take something like an IP fabric and make bandwidth an easily consumable component for corporations, or whoever, right? Whoever needs bandwidth can consume it very easily. So the wish list is pretty good, but there's more stuff that needs to be met, and that's kind of the additional things that RIFT puts on top. I'll not be talking
O: about protocol formats; if you want to inspect the protocol formats and understand how the protocol is put together in great detail, there's a draft, there are more authors on it, and so on. I'm giving here more of a pitch on why it would make sense to do something radically new, and why just incrementally progressing with what we have will not bring us into a new landscape. All right.
O: So the most prevailing thing is that we need zero config, ZTP. You just punch those things in, just like you punch in RAM chips: buy more switches, buy more links, and you're done. There are two flavors to that. You can assume that people mis-cable, which leads you into a different area where you need some minimum configuration, or you can assume that people cable correctly, which leads you to a complete zero-provisioning solution. The draft that I put out doesn't have it yet; we have it solved, so the next revision will talk about that.
O: So we can really make an IP fabric zero config: you just buy more switches and you cable them up. The next one is an interesting observation: if you look at data center fabrics, the main volumes are ToRs and servers, and those cannot hold large FIBs, all right? The FIBs have to be small, so on the ToRs you really want default routes, because that allows you to run really cheap silicon.
O: The next one is that we will be going to a high degree of ECMP, because otherwise you have to build very deep fabrics, and the delay starts to go up. So, the way I see it, the economics and the delay requirements will force us into high fan-out, so I think a solution that can address fairly large fan-outs is necessary.
O: We do not optimize seventeen different types of RAM and fourteen types of storage, okay; we just have a lot of it, and we make sure that we can saturate all of the stuff that we buy, and I think that's where the IP fabrics will be going. We're talking IP fabric; we're not talking WAN, right? The problem changes when you are restricted by geography, expensive bandwidth, a lot of optimization problems.
O
Here
we
are
talking
locality
where
I
can
put
this
thing
in
a
highly
structured
way
together,
predictable
way
together
and
just
by
very
cheaply
and
quickly
a
lot
of
it
and
provision
it.
If
it
is
zero
touch
provision,
we
nevertheless
have
to
see
the
whole
fabric
fabrics
will
not
become
well.
We
have
in
RAM
chips,
ECC
I,
think
the
fabric
will
still
need
more,
so
I
need
some
kind
of
observation.
What's
happening
inside
the
fabric.
Where
do
I
have
lost
is
what's
fails
so
we
have
to
address
the
requirement.
O
O
O
The
automatic
desegregation
falls
out
from
the
other
requirement,
where
you
really
want
you
towards
to
be
cheap.
If
you
Tour's
have
to
hold
large
fits,
the
silicon
will
be
expensive
and
there
will
be
lot
of
sloshing
information
which
will
lead
to
convergence
problems
all
kind
of
anomalies,
but
once
you
are
gate
and
links
fail
or
node
fails,
you
have
to
da
greg
8,
so
that
has
been
well
explaining
the
RFC
73
98,
don't
remember
the
number
and
what
is
very
beneficial
if,
if
you
keep
a
minimal
blast
radius,
so
what
would
I
mean
by
that?
O
If you grow these fabrics and you end up with a solution where adding a virtual machine or a link failing shakes the whole fabric, you'll be inherently limited in the scale and stability of this thing, and it has to do with multiple factors: how much information does a node have to hold, how much information do you have to exchange to converge the whole thing so everybody has the information necessary — and the fluctuations on these fabrics are actually quite phenomenal.
O
On large fabrics, just rolling updates of servers generate quite a significant amount of routing churn, and I think it will only get worse, because it is a consumable, okay — people will just do whatever they do without thinking what stress it puts on the underlying infrastructure. The fabric should become an infrastructure. So those are kind of the additional requirements.
O
Alright. So what I'll show you quickly is the general concept, because the concept is radically different from the routing we did so far. I'll show you an example of how automatic disaggregation would work — which in traditional routing no one has addressed so far and which I think will be very difficult to address — and I'll talk a little bit about horizontal links to show you how differently the whole thing works, and then I'll give you a couple more things that fall elegantly out of it.
O
So when you look at the fabric, what strikes you is: why is it so different? The regular pattern is enticing, but the fundamental change compared to what we are running today as generic routing is that we have a sense of direction. It is an ordered lattice, whatever that means, so assuming this compass direction, we know what is north and what is south, and with that of course we also know east and west, which is interchangeable, but that's a different thing. We can kind of assume some kind of numbering of the levels.
O
So let's just assume the leaves are level zero and then we go level one, level two, recursively. Having this sense of direction at the head and the tail — the first concept that we apply is that we have this topological sort; that's really what the sense of direction means. We put in the concept of link state flooding going up only, so the leaves propagate up and everybody propagates up.
O
So the guys at the top have basically the whole topological information and can compute trees, and that is actually necessary, because once your traffic hits the super-spine, you must know which way to go — you have to choose whether it's left or right. The buck stops there, right: everybody passes the buck up, but the buck stops with someone, and worst case that is the super-spine.
O
The last — well, not the last — the next concept that you have to add, which is not that obvious, is that the Clos fabrics, which is what is mostly used, do not have horizontal links. That has a lot to do with blocking probabilities, and your fabric starts to behave funky if you do that — you can do it, but you run certain risks.
O
So you want the nodes at the same level to see each other, and since they just propagate this vector down and you only flood up north, the red nodes would not see each other. So to make them aware of each other, and of the state of the links, we have to bounce that off the lower layer — and it's completely recursive again: every layer bounces the upper layer's node information back up, so the red nodes are aware of each other. The leaves are not aware of each other, and there is no need for that.
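To make the two flooding behaviours just described concrete, here is a minimal sketch — not the draft's actual machinery, and with illustrative names only — of flooding strictly northbound plus the lower layer "reflecting" its northbound neighbors back up so that same-level nodes learn about each other:

```python
# Flood node information north only, then reflect northbound-neighbor lists
# back up so that nodes at the same level become aware of each other.
class Node:
    def __init__(self, name, level):
        self.name = name
        self.level = level
        self.north = []            # neighbors one level up
        self.south = []            # neighbors one level down
        self.northbound_db = set() # info learned from below (flooded north)
        self.reflected_db = set()  # same-level peers learned via reflection

def connect(lower, upper):
    lower.north.append(upper)
    upper.south.append(lower)

def flood_north(nodes):
    # Every node sends its own identity, plus anything it heard from the
    # south, strictly to its northbound neighbors.
    for node in sorted(nodes, key=lambda n: n.level):
        info = {node.name} | node.northbound_db
        for up in node.north:
            up.northbound_db |= info

def reflect_up(nodes):
    # Each node tells every northbound neighbor which *other* northbound
    # neighbors it has, so the level above sees its peers without any
    # east-west flooding.
    for node in nodes:
        names = {n.name for n in node.north}
        for up in node.north:
            up.reflected_db |= names - {up.name}

# Tiny two-level example: two leaves under two spines.
l1, l2 = Node("leaf1", 0), Node("leaf2", 0)
s1, s2 = Node("spine1", 1), Node("spine2", 1)
for leaf in (l1, l2):
    for spine in (s1, s2):
        connect(leaf, spine)

flood_north([l1, l2, s1, s2])
reflect_up([l1, l2, s1, s2])
print(s1.northbound_db)  # {'leaf1', 'leaf2'} -- the spines see the whole south
print(s1.reflected_db)   # {'spine2'}         -- spines learn about each other
print(l1.reflected_db)   # set()              -- the leaves stay ignorant of each other
```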
O
Why would that be? So, one example: once we have aggregated everything and links fail, we could black-hole very easily, and that's one of the examples. That slide is showing one hyperplane — it's slightly flipped, not very relevant. What we have in here is the hyperplane of two red nodes at the top, the green aggregation switches, and the blue leaves.
O
Now imagine the orange link fails — or, worse, was never there. So imagine what the routing would look like. The leaves on the left have just default routes, so they just pump the traffic up. If they pump it to the left green node, everything is fine — I blanked out the other hyperplane through the other green switch, so let's not even consider that — and the green switch again has defaults towards the red spines, so it would just randomly load-balance the traffic, which all works fine if the fabric behaves correctly, or is, you know, in a perfect state.
O
But if the orange link fails and you try to go to P1 through the top red node, you will black-hole, so you need some kind of disaggregation to prevent black-holing, okay. Now, if you run this kind of protocol — and remember the red nodes see each other, because there's a reflection off the green layer — the lower red node can, after the SPF computation, understand very easily that P1 cannot be reached by the upper node, because its only possible next hop is the green switch on the right and it has no adjacency to it.
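A small sketch of that check, with made-up names, assuming the lower spine already knows its peer's remaining southbound adjacencies via the reflection:

```python
# After its SPF run, the lower spine asks: does my peer spine still have a
# feasible southbound next hop towards prefix P1?
def peer_can_reach(peer_adjacencies, prefix_origin_switches):
    return any(sw in peer_adjacencies for sw in prefix_origin_switches)

# P1 is advertised northbound only by the right-hand green switch.
prefix_origins = {"green_right"}

# The orange link (top red node -> green_right) has failed, so the top red
# node's remaining southbound adjacencies no longer include green_right.
top_red_adjacencies = {"green_left"}

if not peer_can_reach(top_red_adjacencies, prefix_origins):
    print("peer spine cannot reach P1 -> disaggregate P1 southbound")
```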
O
And the reaction is very simple. You just disaggregate P1 in the lower red node, so you don't only advertise the default route but also P1. That does not propagate all the way down to the leaves — observe, it just propagates to the green layer. So the very left green switch will have a forwarding table with the default route over those two red nodes, but P1 only through the lower one, and the longest prefix match will take care of itself.
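As a toy illustration of that forwarding state — the addresses and node names are made up — the left green switch ends up with a default spread over both spines and a more-specific entry for P1 through the lower one, and ordinary longest-prefix-match does the rest:

```python
import ipaddress

fib = [
    # (prefix, next hops)
    (ipaddress.ip_network("0.0.0.0/0"), ["red_upper", "red_lower"]),  # default, ECMP north
    (ipaddress.ip_network("10.1.1.0/24"), ["red_lower"]),             # disaggregated P1
]

def lookup(dst):
    dst = ipaddress.ip_address(dst)
    best = max((p for p, _ in fib if dst in p), key=lambda p: p.prefixlen)
    return dict(fib)[best]

print(lookup("10.1.1.7"))   # ['red_lower']              -- avoids the black hole
print(lookup("10.9.9.9"))   # ['red_upper', 'red_lower'] -- normal ECMP default
```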
O
It takes a while to sink in because it's so novel, but it works like a charm. So what you get is maximum aggregation, to make sure that your blast radius is at a minimum — the leaves just hold full defaults — and on failures the algorithm, in a kind of asynchronous fashion — observe, it's a fully distributed, asynchronous algorithm — will take care of healing the fabric for you, just like a solid-state disk just relocates a sector which failed, which is exactly what you expect.
O
The B would get disaggregated on the second switch, and the horizontal links behave just like the southbound links, so A would see B and go through node 2, and if anything goes wrong, Y1 will also see B and traverse the horizontal link to go through node 2 — but the horizontal links are only used in failure cases, okay, because the default will not be propagated over horizontal links.
O
So that was just a small smattering — especially the disaggregation is very novel and only made possible by those concepts. What falls out on top, when you run this kind of very novel approach to routing, is that we can solve a lot of stuff which has been worked on for a long time and cannot be addressed with traditional routing — and one of them is, for example, automatic flood reduction, which is a significant problem.
O
If you go very wide in terms of fan-outs on IGPs — once you start to flood a couple hundred adjacencies and you try to implement it, you understand the implications of that. So you can use the fact that you have such dense connectivity to load-balance the flooding, but normally that has been done through OpEx: people configured special flooding meshes and trees and made sure they don't overlap, and so on.
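A hedged sketch of that idea — this is not the draft's actual algorithm, just an illustration of spreading flooding load across a dense set of northbound adjacencies, for example by hashing the LSA identifier:

```python
import hashlib

def flooding_targets(lsa_id, north_neighbors, copies=2):
    """Pick a small, deterministic subset of northbound neighbors to flood to."""
    digest = int(hashlib.sha256(lsa_id.encode()).hexdigest(), 16)
    start = digest % len(north_neighbors)
    return [north_neighbors[(start + i) % len(north_neighbors)] for i in range(copies)]

spines = ["s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8"]
print(flooding_targets("leaf17-node-tie", spines))  # e.g. ['s5', 's6']
```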
O
This protocol can run automatic flooding reduction and load-balanced flooding, and active-active just like EVPN — so I'm not drilling into detail because then we'd be here for the whole day. Like I said, leaf-to-leaf bi-directional shortcuts are allowed. We do have a flooded distance-vector overlay, which allows policies and can be used for something like traffic engineering.
O
Okay, so the requirement is solved like I said — what it will be used for, we'll see. The packet formats have been pushed towards being completely model-based, okay. That was not feasible performance-wise until fairly recently; at this point in time it does not pose a significant roadblock, it can be done at very high performance. The advantage of it is that we come off of the very narrow packet encodings which have been optimized, over a long tradition in networking, for performance.
O
We were always bandwidth-restricted, right, or also CPU-restricted in terms of processing. Those restrictions are going away, especially in data centers: we have enough RAM, we have impressive CPUs, we have bandwidth to burn, okay. What we are lacking with the traditional protocols is that the turnaround to introduce any kind of feature is very, very long. We have a lot of non-orthogonal encodings. We cannot generate code very well.
O
Everybody is writing the nth parser with their own set of bugs, and we go through those drafts, which frustrates a lot of these data center guys, who see the fabric like a CPU: it's just something there should be enough of, and it shouldn't get in my way — I don't want to deal with routing bugs that crash my box. So that's one of the ways to innovate.
O
The delivery channel is actually transport-agnostic. The initial discovery goes over multicast — it's really still the simplest way because you don't have to configure anything, right — but once you get to flooding and distributing traffic, the channel really doesn't matter all that much. We have all these religious battles; you just go hop by hop and you take whatever you want: you can go over UDP, QUIC is promising, we can run it over TCP. Each of them has its pluses and minuses.
O
One thing the draft addresses is that it gives you very wide — what you would consider — LSA spaces, right. It is very feasible today to maintain a lot of information elements in an IGP, which earlier was a restriction where BGP had a leg up: one prefix changes, you send one update. RIFT gives you the possibility to generate an LSA per prefix — it's an implementation thing — and the spaces are wide enough.
O
If you go over UDP, this is much, much cheaper, right: you don't have 500 TCP state machines trying to get the work done, you just push this thing out there over UDP. Purging has been completely taken out — that's a small detail — and we have key-value store support, which means you can push, during convergence time, certain things which are important for you over the fabric, and one of them is, for example, service configuration.
O
When you run an overlay, you want very quick service configuration; you don't want to just converge your reachability, which is really nothing — it's like: your PC booted, now what? Unless I run an application... well, cool, I got the login. So if we put key-value store support — to allow, for example, service configuration — into the reachability protocol, then the moment your reachability has converged, your basic service set has converged. It's like a Mac booting: your Mac doesn't boot into nothing, it boots into the login.
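An illustrative sketch of that key-value idea: small key/value pairs ride inside the flooded routing information, so the working set (for example a basic service configuration) is available everywhere as soon as reachability converges. The structure below is an assumption, not the draft's wire format:

```python
flooded_node_info = {
    "node": "leaf17",
    "prefixes": ["10.1.17.0/24"],
    "kv": {                         # opaque to the routing protocol itself
        "overlay/vni": "20017",
        "service/anycast-gw": "10.1.17.1",
    },
}

def on_flood_received(db, info):
    db[info["node"]] = info
    # Applications can subscribe to the KV portion; by the time the last
    # node's reachability is installed, its service keys are there too.
    return info["kv"]

database = {}
print(on_flood_received(database, flooded_node_info))
```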
O
We give people a possibility to very easily put the working set onto the reachability protocol, so when they bring up the fabric the services are up — that's one possible application. Good. So, summary — we have limited time here. When you look at RIFT and you give it the thought necessary, if you're interested in the technology, you find that you get the advantages of both distance vector and link state, because the vexing thing with the data center fabric is...
O
...if you want to really drive it to zero OpEx, it's neither one nor the other. So you get the fastest possible convergence, because everything is being flooded. You get automatic detection of the topology, which is really a precondition for ZTP — unless you run a controller and you believe the controller is a ZTP solution, which I don't. You get minimal routes on the ToRs, which allows you — and actually the solution gets simpler and simpler the more you go towards the server.
O
The server side is a very simple implementation, so you can pull it all the way down to the server, and that allows you to run cheaper and cheaper silicon at the bottom, where you have more and more of the stuff, right. It supports very easily a very high degree of ECMP, because where the ECMP kicks in it's link state — which is what distance vector is not very good at; it can do ECMP, but it's a little bit of a crutch.
O
We can get very fast decommissioning of nodes out of the fabric — think of the overload bit in an IGP, it's the same mechanism — and we have maximum propagation speed, because we don't wait until we decide something like in a distance vector protocol, right; it's link state, we just propagate the information, you blast it out at maximum speed. And because we have this flexible number of prefixes, we can get to BGP efficiency in terms of prefixes per update element.
O
If you want to run it that way. But we do not face the disadvantages of either in the fabric: the flooding is normally what limits an IGP at a certain point in time, performance-wise, and this can reduce the flooding automatically — there's load-balanced flooding, sorry — and we have automatic neighbor detection, which of course can be added to BGP; I mean, any protocol can be molded to the point where it looks like another protocol, right — it's software. Write enough of it...
O
...and it looks just like something else, and it brings some advantages — again, all driven toward zero OpEx, which the other protocols will have a very hard time addressing unless we get to the point where they start to deal with the concept of north and south. This is really the deciding piece: do you build a protocol that understands the direction of the fabric, or don't you? So it does the automatic disaggregation on failures; the key-value store is another aspect; we get horizontal links; the minimal blast radius.
O
If you think through the protocol, it scales extremely well, because adding more leaves does not push any load on any other leaves or other parts — it only adds information at the very top, okay. And when something fails, the distribution radius of the change is very, very limited. And what also falls out of the protocol — and that's very non-intuitive — is that the protocol cannot loop.
O
There are no loops here: the packet goes up until it turns down, and then it only goes down, which allows you something very interesting. It allows you to run a completely different class of path algorithms, rather than just shortest path, which we all love and try to stick to. So RIFT can actually saturate all the feasible paths through the fabric — minimum-hop paths — and it's up to you to decide how you use the weights.
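A small sketch of that "all feasible paths" idea: in a fabric with a strict sense of north and south, a valid path goes up some number of hops and then only down, so it cannot loop, and you can enumerate every feasible path instead of a single shortest one. The topology and levels below are illustrative only:

```python
levels = {"l1": 0, "l2": 0, "s1": 1, "s2": 1}
links = {
    "l1": ["s1", "s2"],
    "l2": ["s1", "s2"],
    "s1": ["l1", "l2"],
    "s2": ["l1", "l2"],
}

def feasible_paths(src, dst, node=None, path=None, going_up=True):
    node = node or src
    path = path or [src]
    if node == dst:
        return [path]
    found = []
    for nxt in links[node]:
        if nxt in path:
            continue
        up = levels[nxt] > levels[node]
        if not going_up and up:      # once you turn south you never go north again
            continue
        found += feasible_paths(src, dst, nxt, path + [nxt], going_up and up)
    return found

print(feasible_paths("l1", "l2"))   # [['l1', 's1', 'l2'], ['l1', 's2', 'l2']]
```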
O
You can just blast through all the feasible paths, or you can just take the shortest one, or the second shortest, and so on. Running bits, or just PowerPoint? Yes, lots of real work has been done, so this is not just slideware out there. If you're interested to participate further, look at the stuff, have a chat with me, we take it from there — and thanks for your attention. And then he's already standing in a very threatening pose.
A
H
We talked about it, yeah — so, and that is one part, for me to be able to follow; the other part is this is sort of the basics. But I have a... it's a very stupid thing, but your name is already taken by a company in Boston and you should really think about renaming it, because they have a trademark on that. My...
O
Tony, I have one question here. I think you have the control plane worked out perfectly well, but the problem is with the failure. The failure is addressed by the disaggregation, right — you inject the P1 down from the spine to the leaves and you fix the problem. But doing that is actually a convergence problem, because it takes hundreds of milliseconds before you advertise the route in any protocol today, going into the RIB and out of the RIB, so I think it's way too slow to actually be, you know, a production-quality solution. So, I mean, can RIFT run without aggregation?
O
A
O
No discussion — I don't think there's a way around it, but then the whole solution will become more expensive and more complex. If people can live with something like 250 milliseconds single-failure convergence, I'm very comfortable with that, okay, and I found that this requirement is very squishy depending on how people built their service architecture on top. Okay, some people are very comfortable with, like, one-second blips; some people are very uncomfortable with 100-millisecond blips.
O
We wildly agree, right: if you aggregate, you basically do a controlled lie, so if you have to unravel the lie, it will take some time, and if you go through a control plane there are certain limitations, which today, with a good implementation, can take you into the hundreds-of-milliseconds range. But if you look at 30, 40, 50 milliseconds — well, that's why we did all the BFD, the LFA and all of that, and all the chip vendors are doing all the crazy stuff. So, I mean, I can't change the speed of light and we can only push electrons about five...
C
O
W
O
I'm assuming the stuff runs on the switches — it does. So for the controller side, it has two flavors: one is the distance vector overlay, where you can actually go and push prefixes and policies onto these things, and they will be preferred over the shortest path. But when it comes simply to the shortest-path convergence, I assume that the stuff is distributed, for the simple reason that it will always be faster than anything you can push from a controller, all right — unless there is some absolutely impossible magic involved.
O
Yes, so I can give you a simple number: it converges three times faster than a host-based IGP, both well implemented, so you get about a 3x convergence speed-up over a reasonable, like, three-layer fabric, you know. So this is a number I have, and I haven't worked long enough with BGP to know that well. I'm not here in a beauty contest, and I don't think the convergence speed is really the highest requirement, from what I found with the people I'm interested in, who are willing to pay for solutions.
O
R
A
D
G
D
One simple question — this was the second section of the BoF, so one simple question for the group, with a show of hands, and this is from the positive side. So we're gonna ask one question: would you agree that the proposed solutions are logical — or, rather, do you agree that the proposed solutions address unique challenges in the DC routing fabric space?
D
Y
So there is one new TLV proposed in this proposal. The tier field indicates which tier level the node is at — the tier in the current proposal is from 0 to 15, with 15 meaning the tier level is undefined — the L bit indicates this is a leaf on the adjacency, and the other bit is for the spine to indicate that you can use this adjacency as the default gateway. And there are two optional sub-TLVs defined for the black-hole avoidance.
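Purely as an illustration of packing a 4-bit tier value (0–15, with 15 meaning undefined) together with a leaf flag and a "use me as default gateway" flag, here is a toy encoder/decoder. This is NOT the draft's actual wire encoding; the field positions are assumptions made only to show the idea:

```python
TIER_UNDEFINED = 15

def pack(tier, is_leaf, offer_default):
    assert 0 <= tier <= 15
    return (tier & 0x0F) | (0x10 if is_leaf else 0) | (0x20 if offer_default else 0)

def unpack(byte):
    return {
        "tier": byte & 0x0F,
        "leaf": bool(byte & 0x10),
        "offer_default": bool(byte & 0x20),
    }

leaf_adv  = pack(tier=0, is_leaf=True, offer_default=False)
spine_adv = pack(tier=1, is_leaf=False, offer_default=True)
print(unpack(leaf_adv))    # {'tier': 0, 'leaf': True, 'offer_default': False}
print(unpack(spine_adv))   # {'tier': 1, 'leaf': False, 'offer_default': True}
```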
Y
So, in a very basic example here we have a few tiers; the bottom layer is the leaf and the top one is the spine. The leaf node will set the tier level to zero, set the leaf bit, and flood its own LSP to the spine, and the spine will in turn, in the hello, set the bit that lets the leaf use it as the default gateway, and the LSPs are not flooded back down to the leaf nodes.
Y
So the spine nodes have the full IS-IS topology database, but the leaf nodes just point defaults towards the spine. It supports multiple tier levels. This is the tier auto-discovery: by this we mean that as long as tier 0 is defined, all the other tiers are dynamically learned by exchanging the packets, and each node learns its own position.
Y
So you can see that the tier 0's view is relayed to the tier 1, and the tier 1 can in turn relay it up to the tier 2 nodes. So in this particular case, the leaf node L6 at tier 0 only floods its own LSP, L6, towards S3 and S4, which are its neighbor nodes, and then, for example, S4 in tier 1 will flood L4, L5, L6 plus its own.
Y
As for the black-hole situation, the last slides talk about how to handle the link-down events to avoid the black hole. In this particular case, talking about the convergence — actually this mechanism is only an optimization, because if you think about the S3-to-L6 link being down, even if L4 is sending a packet to S3 to reach L6, S3, by BFD or some other mechanism, may know that it cannot forward to L6.
Y
It can forward up the link to C1 or C2. So in the long run you don't want to do this because it increases the packet latency, but in the short term what we can do is forward the packets to the upper layer, and the upper layer has the full topology of the bottom layer, so the convergence can actually be preserved in this particular case — but not in the long term.
Y
Z
Another thing I'll point out is that this draft is compatible with, or complementary to, the other two drafts in this section — Naiming's and Cisco's — and those guys and we are actually working together to make sure that our drafts use compatible signaling and things like this, so we're just two different spins on the same thing. So the idea here with OpenFabric is that we really want to just simplify things. We see a lot of complexity in the data center fabrics.
Z
LinkedIn runs a pretty big data center fabric — a bunch of pretty big data center fabrics. We just want something really simple to give us IP reachability and label distribution, and we don't want all the policy stuffed into it that BGP gives us; we don't want anything else. We do want a link state protocol, because we want to have the full topology information — so, separate reachability from policy.
Z
We want to minimize or eliminate configuration — the way this is set up, there's only one router on the fabric that needs to be configured, and everything else is computed and can come off of servers or configuration systems or whatever you want. We want link state, and we want to optimize convergence and optimize scale. So what we're going to do, to separate the complexity — the complexity is topology and policy — is we use a distributed...
Z
...control plane, like I said, to give us reachability and label distribution, and then we want to have a controller-based overlay using something like PCEP or I2RS or some yet-undefined thing that actually pushes policy to the top-of-rack switches and uses segment routing to do all the TE and other things like this. So that's kind of the thing that we're doing here. The goal in the distributed protocol is to build the simplest possible distributed link state protocol: we don't want any policy, we don't want any configuration, we don't want any extra stuff.
Z
I don't know about your data centers, but in my data center one of the problems I have is that I have SRE and netops guys and engineering guys who want to go use BGP policies and stuff to play around and make traffic engineering work, and we really shouldn't be doing that, because it just creates all sorts of weird stuff, all right. So, fabric location is really simple, as long as you have one top-of-rack switch, or T0, configured as a T0.
Z
When you boot the fabric, you can compute everything else using the hop count to and from that T0 that is defined. You can compute your tier level, no matter what your topology is, from that one T0. Once the fabric boots and a bunch of other nodes have computed themselves as T0, the requirement for the single one to be configured goes away — the requirement for a single configured T0 box is only there to boot the fabric when you have nothing up to begin with. You can read this in the draft.
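A hedged sketch of the hop-count part of that idea: once a single switch is configured as T0, other nodes can measure their distance to it. The draft layers more rules on top (hop counts both to and from T0s, and nodes that have already computed themselves as T0), which this toy BFS deliberately leaves out, so it only illustrates the first step; the topology is made up:

```python
from collections import deque

links = {                      # toy 3-stage topology
    "t0-a": ["t1-a", "t1-b"],
    "t0-b": ["t1-a", "t1-b"],
    "t1-a": ["t0-a", "t0-b", "t2-a"],
    "t1-b": ["t0-a", "t0-b", "t2-a"],
    "t2-a": ["t1-a", "t1-b"],
}

def hop_counts(configured_t0):
    dist = {configured_t0: 0}
    queue = deque([configured_t0])
    while queue:
        node = queue.popleft()
        for nbr in links[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

# Odd distance from a T0 implies an odd tier; an even distance is either
# another T0 or a higher even tier, which the full algorithm disambiguates.
print(hop_counts("t0-a"))  # {'t0-a': 0, 't1-a': 1, 't1-b': 1, 't0-b': 2, 't2-a': 2}
```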
Z
It's actually pretty simple, trivial. There's some IPR around it, but it's been released to the IETF and stuff like that. So, optimization — there are two stages of optimization: one is flooding optimization, one is forwarding. So we look at the flooding coming off of a T0 and going towards the spine. There is a particular way that you use neighbors' neighbors, which, by the way, we have experience with from mobile ad hoc networking — there are actually extensions in OSPF...
Z
...that do something very similar to this already, deployed and working in the field. So we have experience with this, and we know this works in this direction. Nico, who works for LinkedIn, suggested a reverse optimization, which is just to not flood back down the SPT towards the originator, since you already know from your LSDB what the topology looks like — so this actually covers the back flooding. The way this turns out depends on how you set these parameters.
You
can
set
parameters
for
how
many
neighbors
neighbors
you
want
to
pick
a
reef
letters
or
things
like
that.
Every
is
on
the
fabric
and
by
the
way
this
is
grounded
in
is
to
is,
if
you
read,
the
draft
only
receives
one
or
two
copies
of
every
LSP
on
the
fabric.
So
once
you
know
your
location,
we
assume
you're
gonna,
connect
to
a
controller
you're
gonna
get
things
like
you're
gonna
get
things
like
DCP
pools.
If
you
need
them,
you're
gonna
get
maybe
you'll
get
your
label
pools
if
they're
not
calculated
locally
things
like
this.
Z
All
of
this
stuff
is
going
to
be
pulled
off
of
a
controller,
probably
using
a
pub
subsystem
or
G
RPC,
or
something
like
that
to
make
it
very
fast
and
efficient.
So
that
gives
you
the
ability
to
pull
receive
status
as
possible.
We
are
currently
working
on
an
initial
implementation.
This
in
the
is
IIST
part.
Z
We
don't
have
an
implementation
underway
or
undertow
for
a
controller,
yet
we're
still
working
on
that,
but
we
actually
have
an
initial
implementation
undertow
in
free
range
routing,
so
there
will
be
a
free
range
routing
implementation
of
this,
hopefully
well,
no,
no
mechs,
three
to
four
months
five
months,
something
like
that.
So
further
updates
to
current
drafts.
In
the
queue
I
have
a
couple
more
things:
I
need
to
do.
I
need
to
change
something
about
the
hello
processing
to
make
it
a
little
bit
simpler.
Z
We've
received
a
lot
of
help
from
the
community,
I'd
like
to
say,
thanks
to
everybody
who
sent
the
comments
on
that
there's
a
huge
contributors
list.
It's
been
really
fantastic,
getting
a
lot
of
people
from
the
community,
so
that's
kind
of
it
for
eius
eius
support
for
open
fabric.
Any
questions
at
the
mics
I
see
nobody.
Is
there
read
the
draft
and
make
comments
and
tell
us
whether
it
fits
your
use
case
or
not,
and
what
could
be
changed
to
make
it
better.
Z
X
A
Z
A
T
Since we are short of time, I will not discuss many details about the solution; I'll just show the general idea of this solution. It's a mix of centralized link state distribution and distributed SPF calculation. For instance, OSPF or IS-IS routers within the Clos just need to exchange hello packets to find each other; there is no need for them to further exchange LSAs or LSPs. They only need to exchange LSAs and LSPs with the controller or the management layer — in fact the controller is exactly like the OSPF DR.
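A hedged sketch of that division of labour — entirely illustrative, not the proposal's actual messages: routers only discover their directly connected neighbors with hellos and report them to the controller over the management network; the controller, acting like a DR for the whole Clos, reflects the collected link state back to every router, and each router still runs its own SPF locally:

```python
from collections import deque

class Controller:
    def __init__(self):
        self.lsdb = {}                        # node -> list of neighbors

    def report(self, node, neighbors):        # router -> controller
        self.lsdb[node] = neighbors

    def full_topology(self):                  # controller -> every router
        return dict(self.lsdb)

def local_spf(topology, me):
    # Plain BFS hop-count SPF, enough to show that the computation stays
    # distributed even though the link state distribution is centralized.
    dist, queue = {me: 0}, deque([me])
    while queue:
        n = queue.popleft()
        for nbr in topology.get(n, []):
            if nbr not in dist:
                dist[nbr] = dist[n] + 1
                queue.append(nbr)
    return dist

ctrl = Controller()
ctrl.report("leaf1", ["spine1", "spine2"])
ctrl.report("leaf2", ["spine1", "spine2"])
ctrl.report("spine1", ["leaf1", "leaf2"])
ctrl.report("spine2", ["leaf1", "leaf2"])
print(local_spf(ctrl.full_topology(), "leaf1"))
# {'leaf1': 0, 'spine1': 1, 'spine2': 1, 'leaf2': 2}
```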
T
It heavily depends on the existing protocol capability. The only requirement is you need a dedicated management network to interconnect the controller and all the routers within the Clos network. If you are familiar with Google Firepath, there are many similarities between these two solutions. The differences include, first, that this is an open, standard-protocol-compliant solution.
T
Second, it doesn't have a heavy dependency on the reliability of the management layer — the counterpart in Firepath is the CPN — because this solution has a very simple rollback mechanism: in the worst case, when a router in the Clos loses connection to the controller, it can roll back to the traditional IGP mode.
D
All right, thank you, sir, yeah. Okay. So we're gonna ask one question at this point, and then maybe get comments from the AD if he wants to share. So the one single question at this point — noting that contributing to any new solution is not mutually exclusive — show of hands: who here would be interested in participating in and contributing to potential new work in the routing area?
D
N
D
AA
Not that one, but — so the purpose of the BoF, as you guys said at the very beginning, was to try and see what the landscape is, get some requirements, see what people are doing and figure out if there is a need for new work, and I think that many people raised their hand and said yeah, there may be new work to be done at the IETF. Now many of you, I think, also raised your hand saying yes, we would be willing to work on something new.
AA
Now, there were two proposals of new work here. What I think I want to ask extra is to get a sense here in the room of the people who would be willing to work on the specific proposals that were put forward. In other words, if we're going to use this BoF so that in the future, at some point, maybe this work gets a home in the IETF, it would be nice to know if you want to work on that solution or not — does that make sense?
AA
Yes — hmm, something, anyone, raise your hand, whatever — okay, good. So the question we ask is this — two questions, since we have two proposals, right. Assuming that we go forward with work on those proposals, let's take the BGP SPF proposal because it was presented first: how many of you would be willing to contribute to that work?
AA
Meaning — before you raise your hand, you know, there's one hand — meaning: would you be willing to not only read the drafts but make contributions, do reviews, participate in some kind of effort? The type of effort I don't know — it may be a new working group, it may be something that we roll into RTGWG or something else — but how many of you would be willing to contribute to that effort on BGP SPF?
AA
And the other question is, of course, for the other proposal, for RIFT: how many of you would be willing to contribute to that effort — again, read the drafts, discuss on the mailing list, propose enhancements, do reviews, you know, all the work that would need to be done there. Okay, now you can raise your hands for that.
AA
So that's it — I think this is what we wanted to get out of this room, right: the sense of whether there is a problem or not, and whether we're going to go forward with anything. This is a decision that we're gonna make with the chairs and the other routing ADs, and hopefully this has shown us what we do next: do we charter something, do we do more discussion, do we get, you know, some more presentations before we move somewhere else? So that's all I have — thank you so much.
AA
A
So we actually have about, like, two or three minutes left before, you know, the end of the time here. So if anybody actually, you know, has any other comments or things they want to radiate from their hearts or whatever — now is your opportunity. Otherwise we're gonna close the session.