From YouTube: IETF110-GROW-20210308-1200
Description
GROW meeting session at IETF110
2021/03/08 1200
https://datatracker.ietf.org/meeting/110/proceedings/
Hey Jared, the GROW meeting has been delayed by one hour. It will start in 56 minutes, because of a scheduling conflict with one of the presenters, and the fact that the GROW agenda was shorter than the two-hour block allocated to it. So get some water, get some tea, some cereal. I'm gonna put up a slide that indicates that we're starting a little bit later. Apologies.

Oh okay, got it, all right, I'm going back to...

I think today's meeting was originally envisioned to take place in Prague, but unfortunately all of us are gathering in our houses, wherever that is on this planet. That will not take away from the joy of this meeting, but before we start, I would like to draw attention to the IETF Note Well. This is a reminder of IETF policies that are in effect on various topics, such as patents or the code of conduct.

As mentioned before, we started one hour later. This was because there was a scheduling conflict that we could not resolve at the last minute, and it appeared to be the simpler solution to just start this meeting one hour later. In today's meeting we're gonna cover the administrivia, some BMP- and YANG-related reports from the hackathon, a new initiative called BMP seamless session, and an update on the Internet-Draft about AS path prepending.

All right, so we've got a minute taker in the shape of Christopher Morrow, and then I will volunteer to monitor the Jabber room.

Oh, an update about the charter. At the last meeting we discussed that a charter refresh was requested by the area director, and we faithfully labored to produce an update, to hopefully align the work of this working group more with what was described in the system. This charter update was discussed in the working group, and to me it appeared that most in the working group had an idea of what the update meant and what this working group does.

But when it was brought forward to the IESG review stage, a lot of questions came back, where apparently there is still some work to be done to make this charter understandable, not just by the working group participants, but also by the people who review our output. So I have it on my to-do list to go over the feedback from the IESG review round and incorporate that feedback into a revised charter proposal, which I'll then share with the working group mailing list, and if people feel that that is a good update, then we'll try again with the IESG.

I think that's it for now. I'm gonna hand over the microphone to Thomas Graf, who is gonna share with us an update on the BMP/YANG hackathon.

So here our interest was basically: we want to see how BMP adj-RIB-in, adj-RIB-out and local RIB, including the Path Marking TLV, is affecting the CPU and memory consumption, but also what happens when congestion on the network is happening, so BGP starts sending a lot of updates and withdrawals back and forth.

Under such circumstances, we want to ensure that the RIB which is collected through BMP route monitoring is always like a mirror: the RIB is always representing the same state as we have on the network. And last but not least, we also want to understand, for a router in transit, whether BMP is enabled or not, what the difference is in terms of BGP propagation delay.

So what we achieved this time in the lab environment is that we now have a fully automated test.

So we can monitor basically prefix loss, as a metric loss in BMP, and CPU and memory usage. As a next step for the next hackathon, we can go down to BGP process level.

This is basically what we did on the UDP-notif.

That was a test message we generated into Apache Kafka for YANG Push, and this was a test performance setup we did with YANG Push. Very impressive, the throughput we achieved with YANG Push; I think this is one of the fastest implementations currently available. So on one core, with 500,000 messages, we were achieving up to 10 to 11 gigabits per second with jumbo frames. So here is basically the test setup from the network side. As I mentioned, we enabled BMP at various points and we're measuring the effect of the BMP.

What was really impressive was that, even though we had congestion, so we were generating up to one million routes on the network, under no circumstance did we have loss of metrics. So in all cases, basically, the RIB state could be achieved.

That's one good note here. The other one is regarding BGP propagation delay: we discovered that the timestamping was not accurate enough, so we could not draw final conclusions on how much impact BMP brings to the BGP propagation delay. So there we're going to improve the timestamping accuracy, and we will do the tests at the next IETF hackathon.

Here are some examples of the tests we did. The test setup generated one million BGP VPNv4 unicast paths, and it was advertising them as fast as possible to ten peers. As you can see here on the very left-hand side, the CPU usage during the BGP propagation period did not change whether BMP was off or on, but later on...

Basically, when the propagation was through, and the CPU usage went back down to normal, the export started to perform, and during the export period we basically had the same peak that we had during the BGP propagation period.

In regard to memory consumption, we can see that once the prefixes are being propagated, initially we are consuming more memory on the BGP process, but throughout the test cycle that did not change.

Then we did another test, basically on the route reflector, where, while propagating the BGP prefixes, we were flapping the BMP session, and here basically we could see that there is no impact on memory nor on CPU usage when the BMP session is flapping.

Basically, since it's the fifth time, we were more prepared than we were before. Test automation was very important, and also being connected throughout the hackathon with Slack and Teams, and of course, since it's all virtual, we all miss the beers and cocktails afterwards.

Jeffrey Haas? Thank you, Thomas. First, a question about the test methodology for the reflector: on the slide there you're showing that BMP is kicking in much later than the BGP convergence. Is that because BMP is being enabled after the fact, or is there actually a gap there where the system is converging in some fashion?

Okay, so the implementation in question is choosing to wait, and defers advertising BMP. Exactly, okay. Thank you. And my second thing is an observation about the timestamping. I certainly cannot speak for other people's implementations, but Junos has second-level granularity for its timestamps; it does not waste the extra space to keep anything less than that.

We put in effectively an increasing counter in the additional field, to make sure that if we have more than one event within the same second, you get something of use, namely the sequencing. So the two observations are that the timestamp will be closer to second-level granularity for at least one implementation, and that the timestamping is going to be somewhat separated from the BGP events in some cases, based on pipelining.

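Jeffrey's counter idea can be sketched in a few lines. This is an illustrative reconstruction, not Junos code; all names here are invented:

```python
import itertools

class SecondStamper:
    """Pair a whole-second timestamp with an increasing counter so that
    events inside the same second stay totally ordered (the sequencing
    Jeffrey describes), without storing sub-second precision."""
    def __init__(self):
        self._last_sec = None
        self._counter = None

    def stamp(self, now_sec):
        # A new wall-clock second resets the per-second counter to zero.
        if now_sec != self._last_sec:
            self._last_sec = now_sec
            self._counter = itertools.count()
        return (now_sec, next(self._counter))

s = SecondStamper()
print(s.stamp(100))  # (100, 0)
print(s.stamp(100))  # (100, 1): same second, counter provides the order
print(s.stamp(101))  # (101, 0)
```

Two events in the same second then differ only in the counter, which is enough to recover their order even though the timestamp itself stays second-granular.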
So just think about the time it takes for a packet to be placed on the wire by a transmitter, and maybe the TCP delays before it's processed in the queue, potentially with some additional wiggle room, you know, within a second of granularity, as it's being run through the BGP pipeline.

Any remarks? If there are no additional questions at this point in time, I would like to invite you to continue with the BMP seamless session presentation. Sure then.

Basically, the first one shows how many messages per second we are producing in terms of IPFIX, the second one in YANG Push, and the third one in BMP. As you observe, BMP is quite bursty, and if you look at the count, basically, if you have a burst, it can easily be higher than the IPFIX data collection.

So from an evolution point of view, I can say that, as of the end of last year, we were collecting network telemetry from roughly two and a half thousand devices.

With BMP, IPFIX and YANG Push we had up to one million messages per second at peak, and we are considering onboarding up to 40,000 devices this year, with up to 10 million messages per second, and of these 40,000 devices, most of them, 38,000, will have BMP enabled.

And this is the point which we tried to address in this draft, or basically to optimize. So one angle is that we want to try to avoid data duplication, and the reason why we want to avoid data duplication is because in the BGP network, basically, the amount of BGP paths in a service provider is steadily increasing, but also, on the other hand, thanks to the new support in BMP...

...RIBs such as adj-RIB-out and also the local RIB are now supported, so, basically, more and more metrics are exposed, and those metrics, thanks to the TLVs, contain more and more additional information. So, overall, the amount of metrics from the routers is steadily increasing, so duplication needs to be prevented, and we see those duplications happening most of the time when the BMP session is re-established. We see basically two main contributing factors why this BMP session is re-established.

So if we look a bit more closely into the BMP session handling: Section 3.3 of RFC 7854 basically describes that when the TCP session towards the BMP server is established or closed, the BMP session is also established or closed, and in Section 5 it is described that once the BMP session is established, the initial RIB dump with route monitoring messages is performed, and this RIB dump is performed regardless of whether the TCP session is initially established or re-established.

TCP Fast Open is defined in RFC 7413, and basically it gives us the unique capability to distinguish between an initial TCP establishment and a re-establishment of the TCP session. This draft suggests not only using TCP Fast Open for the BMP session, but also extending the BMP session life cycle in the BMP application on the BMP client, so that the BMP session is not closed immediately.

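A minimal sketch of the "seamless" idea as presented, with hypothetical names: if a router's TCP session comes back within a grace window, the monitoring side treats it as a resumption and skips the fresh RIB dump. The 60-second grace period is the draft's suggested default mentioned later in the discussion; everything else here is illustrative, not text from the draft.

```python
GRACE_SECONDS = 60.0  # suggested default from the draft discussion

class BmpSessionTable:
    """Track BMP sessions per router so that a quick TCP re-establishment
    is treated as a resumption rather than a brand-new session."""
    def __init__(self, grace=GRACE_SECONDS):
        self.grace = grace
        self._closed_at = {}  # router address -> time its TCP session closed

    def on_tcp_close(self, router, now):
        self._closed_at[router] = now

    def on_tcp_open(self, router, now):
        closed = self._closed_at.pop(router, None)
        if closed is not None and now - closed <= self.grace:
            return "resume"            # skip the duplicate RIB dump
        return "initial-rib-dump"      # new session: full dump required

table = BmpSessionTable()
print(table.on_tcp_open("192.0.2.1", now=0.0))    # initial-rib-dump
table.on_tcp_close("192.0.2.1", now=100.0)
print(table.on_tcp_open("192.0.2.1", now=130.0))  # resume (30 s < 60 s)
table.on_tcp_close("192.0.2.1", now=140.0)
print(table.on_tcp_open("192.0.2.1", now=300.0))  # initial-rib-dump again
```

In the actual proposal the initial-versus-re-established distinction would come from TCP Fast Open rather than from application-level bookkeeping; the sketch only shows the resulting session-handling logic.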
What we wanted to preserve in this draft is the key principle of BMP, that it is still a unidirectional collection protocol. The suggested extensions are optional, so the TCP Fast Open TCP session is either supported or not. And, as already described in RFC 7854, buffering on the router can happen, and in most implementations today we've seen that it does happen; this draft basically makes use of such a BMP buffer, so the messages can be preserved between the TCP establishments.

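The buffering Thomas refers to could look roughly like this on the router side. This is a sketch under assumed limits: only the 60-second lifetime appears in the discussion, and the byte cap is an invented illustration of "memory is limited".

```python
from collections import deque

class BmpReplayBuffer:
    """Keep recently sent BMP messages for a bounded time so they can be
    re-sent after a short TCP interruption instead of triggering a fresh
    RIB dump. max_age follows the 60 s default from the discussion; the
    byte cap is an assumption, since router memory is finite."""
    def __init__(self, max_age=60.0, max_bytes=1 << 20):
        self.max_age = max_age
        self.max_bytes = max_bytes
        self._buf = deque()   # (timestamp, message bytes), oldest first
        self._size = 0

    def push(self, msg, now):
        self._buf.append((now, msg))
        self._size += len(msg)
        self._evict(now)

    def _evict(self, now):
        # Drop messages that aged out or that exceed the memory cap.
        while self._buf and (now - self._buf[0][0] > self.max_age
                             or self._size > self.max_bytes):
            _, old = self._buf.popleft()
            self._size -= len(old)

    def replay(self, now):
        self._evict(now)
        return [m for _, m in self._buf]

b = BmpReplayBuffer(max_age=60.0)
b.push(b"msg1", now=0.0)
b.push(b"msg2", now=30.0)
print(b.replay(now=70.0))   # [b'msg2']: msg1 aged out, msg2 is replayable
```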
So this is the 00 version. We are looking very much for feedback on the GROW mailing list. We would love to hear your comments and feedback, also on whether this draft has merit or not, and what you think about TCP Fast Open, or whether you would prefer to have the implementation be done within the BMP application. But keep in mind: if the implementation needs to be done in the BMP application, we cannot preserve the main principle of BMP, that it is a unidirectional protocol.

I see an analogy to the session ID in RRDP, and how sessions are resumable in RTR, similar to how TCP Fast Open cookies are handled, and I think mechanisms like this are useful and arguably help conserve resources, so I would encourage you to pursue this work.

We were thinking about that, and in the draft we specify a default value of 60 seconds. So the main goal is basically: if you have a small interruption in connectivity between client and server, or you just want to restart the BMP server, then within that period of time the router should be able to buffer the metrics. But of course, memory is limited on the BMP client, and therefore you cannot buffer for an unlimited time.

So I don't know if we actually support TCP Fast Open in either of our two BGP stacks; I'm vaguely familiar with the feature.

The thing I would, I guess, comment to Thomas is that the amount of state that needs to be sort of kept in flight is stuff that the kernel would have already owned anyway, from the TCP portion of things, and, you know, maybe some state that BMP had serialized at that point; but anything that's not in that category is basically unserialized state that's still in the state machine, and that shouldn't cost any additional memory.

So the main consideration, from my perspective, without having read the RFC in detail for some time, is that really, as long as your kernel can hide the details about the session being lost from the application to some extent, I don't expect there to be much difficulty in implementing this. So I think that this would probably be a good mechanism.

But even if that's not the case, this is not a bad thing. I think the major challenge for most vendors, for real routers rather than, you know, easy-to-host stacks, is that non-stop routing TCP extensions are very messy and complicated as it already is, and to some extent, well, I think that the Fast Open stuff strikes some good balances with those features.

This working group should realize that the seamlessness can be solved at multiple layers of the stack. So the current proposal is to leverage existing work in, for instance, the Linux TCP kernel stack to allow resuming a session, but it's not the only path. We can also resume the session within the BMP protocol itself, and there are pros and cons to both approaches, but both approaches will give us the same result, namely avoiding resending information that was sent previously and is still in the buffers of the client.

But with this in mind, let's just explore one path, and if we somehow encounter an issue, or if there are negative interactions with graceful restart, or who knows what crazy things exist in the wild, then we can revisit it, and the resumption mechanism can be put at a different layer.

All right, very good. So I thought it'd be a good idea to give a brief update. We did present this draft last IETF; we've received many comments, thank you so much, and we've incorporated all of them. We just received one this morning from Thomas, thank you very much; we'll incorporate that comment in the memory section of this draft in the next rev. That comment was regarding BGP's impact on IPFIX/NetFlow; Jacob wrote that section, and we'll make sure we get that section updated. So thank you, Thomas.

We've also had several authors changing affiliations, so we updated that as well.

So the background, just real briefly, is that Doug Madory presented on AS path prepending at a NANOG, and participants asked if there was any sort of a BCP coming out of the IETF or any other SDO, and the answer is no, and so we thought it'd probably be a good idea for this working group to have an opinion on AS path prepending.

We've included a variety of use cases where AS path prepending is in use today, including preferring one ISP over another ISP, preferring one ASBR over another ASBR, utilizing one path exclusively and another solely as backup, and signaling that one path may have a different amount of capacity than another.

So, some examples of the problems; again, you can look at the draft. You could have an attacker wanting to intercept or manipulate traffic to a prefix, who enlists a data center to allow announcements of that same prefix with a fabricated, shorter AS path, and that malicious route would be preferred due to the shortened AS path; and that has happened. Then there have also been routing leaks where routes from one country are being preferred over another country because of excessively prepended AS paths, and in that case illegitimate routes can be preferred over legitimate routes. And, as was mentioned, and this is the comment that Thomas made, long AS paths can cause an increase in memory usage among BGP speakers.

Robert is a fan of using IGP or EGP origin, which takes precedence over an incomplete origin code, while keeping the path lengths the same, and so we include that in the draft. There may be others, but I think probably the most important part is the best practices, and in bold here is what we've added since the last rev: Geoff Huston offered these suggestions, and so we included those.

Based upon analysis that Doug had done, which he presented at NANOG, we included in the draft that there's no need to prepend more than five ASes, and we include a diagram that shows that ninety percent of AS path lengths are five ASes or fewer in length.

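The reasoning behind the number five can be reproduced mechanically: pick the smallest path length that covers, say, ninety percent of observed AS paths. The sample below is made up for illustration; the draft's figure comes from real measurement data.

```python
import math

def prepend_guideline(path_lengths, coverage=0.90):
    """Smallest AS-path length L such that at least `coverage` of the
    observed paths are L ASes or fewer; prepending much beyond L stops
    mattering for most of the Internet."""
    ordered = sorted(path_lengths)
    idx = max(math.ceil(coverage * len(ordered)) - 1, 0)
    return ordered[idx]

# Hypothetical sample of observed AS-path lengths (not measurement data).
sample = [2, 3, 3, 3, 4, 4, 4, 5, 6, 9]
print(prepend_guideline(sample))  # 6: 9 of the 10 sampled paths are <= 6 ASes
```

Run against a real table dump, this kind of quantile is where a "five or fewer covers ninety percent" statement would come from.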
In the chat I see a comment from Jeffrey Haas: for AS path prepending, communities are effectively scoped one AS over, which doesn't help beyond the adjacent AS in many cases. Jeffrey, perhaps you can elaborate?

In presentations in this working group, as an example, people talk about their community policies; people are very particular about how they actually do their filtering on stuff. You know, the chair, as an example, is part of an organization that very strictly scrubs the communities that pass through their system.

So the problem with using communities to shape traffic is that if you can't get the behavior you desire using some sort of community addressed to your adjacent ISP, there's probably not anything you can do about the next hop over. So for traffic control, single-hop communities can be very, very helpful, especially if you're doing things like selectively leaking more specifics.

But once you get past that point, the practice that has held up over the years, for more than one hop, unfortunately has been prepending. And while I generally agree that the number five is probably fairly reasonable, the sort of challenge that you end up running into, which Geoff Huston would recognize as a BGP wedgie from the RFC that he put out on the topic, is that if you don't have something that is strongly prepended, sometimes near the beginning, you know, it's like...

We've gone very much closer to hub and spoke; the core has continued to condense over the years. But if you happen to be a stub AS that was purchasing multiple streams of, you know, service from different providers...

So I just want to make a point about what Jeff was talking about, about the scope of communities. I'm not sure that that one-hop radius is true in general. Certainly for us, we scrub anything that's in the well-known space or starts with a private AS, and we're very strict about what we accept in our own name space, but everything other than that we leave untouched.

So, for example, one of our customers can send a community in one of our transits' name space and expect that to be honored at our transit's boundary, and I know of customers that do use that.

I think the limitation is that if your transit happens to be a, you know, inverted-commas tier one, or a transit-free network, the next hop over is going to be a peer or another customer, and then unlikely to honor traffic engineering communities of any sort. But I don't think that the scope is limited by virtue of any scrubbing that necessarily happens at your immediate upstream's edge. Whether or not that's worth including in the draft, I'm not sure, but it certainly does mean that whether you've got alternatives gets complicated.

I think the additional text that you've added, about the fact that people will typically use local-pref to prefer customer routes, is important, and I think you can probably go a bit further than it sounds like you've gone, and point out that this saves resorting to very, very aggressive de-aggregation.

A challenge with documents of this type is that there is a multitude of audiences that have to interpret this recommendation and apply it to their local situation. On the one hand, there's the group of people who self-inflict harm by needlessly prepending themselves, which negates the efforts of, say, origin validation.

What is it we want to teach the reader? I feel that the number five appears almost arbitrary, but on the other hand, some recommendation is perhaps better than vague hand-waving like "keep it as short as possible".

A
But
these
are
commons
with
just
my
working
group
participant
haddon.
I
see
that
rudiger
joins
the
queue
rudicar.
The
floor
is.
There have been some good comments in the chat as well. Yeah, let's...

Yeah, we should be able to add some of your comments, Jeffrey, and others' into the draft as well, where useful.

Lars Prehn asks: can you explain why the average AS path length affects the number of useful prepend sizes? If I want to differentiate between the preference of N peers, I need up to N minus 1 different prepending sizes. From my own experience, the average Internet path length should just limit the maximum useful difference between any two of those N minus 1 sizes.

Yeah, so the thought there was that if there's an average of five AS hops in the Internet, then prepending ten times doesn't do you any good. That's the general reasoning behind us putting that in there. And to the other comment: if five is arbitrary, I'm not sure how arbitrary that is, and I think the draft is trying to show that it's not that arbitrary.

Because, you know, ninety percent of the AS path lengths in the Internet are five ASes or less; so that's why we put that number in there.

If you think that's not a wise thing, then we can avoid doing so, but, like you said, I think we should have some recommendation in there.

Another approach might be to say that you should not prepend more times than you have eBGP neighbors; that might apply to some smaller ISPs as a useful rule of thumb. Or perhaps there are other mechanisms to arrive at the number five, without prescribing five as the magic number. But I'm doing a lot of talking; let's go back to the queue. I saw Alexander and then Randy Bush, so we'll go through it in that order. Alexander, it's your turn.

As far as I understand, the idea is to limit the propagation of the leaked routes, by keeping the AS path length of the leaked routes smaller, or just preventing it from being meaninglessly big.

So the idea that you should not prepend more than two or three times, or whatever, is very clear. But there is another side to this, and maybe it's a good opportunity to cite it in one of the documents. Here you are speaking from the perspective of the victim: the source of the route, the prefix, the address space whose traffic is redirected by the route leak. There is another side: the receiver of the route leak, who sends the traffic, you know, into a black hole, or to a man in the middle.

It doesn't really matter which. And there is another simple, stupid idea here: if you have a flat policy, I mean that, for example, if you are a content provider and you have the same local preference values for all your peers, you are getting really good protection against route leaks, because the route leak will have a bigger AS path, and, you know, it will not become a best route for you. So maybe a suggestion: do not use multiple layers of local preference, for example.

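Alexander's flat-local-preference point can be illustrated with a toy slice of the BGP decision process (highest LOCAL_PREF first, then shortest AS path; real BGP has many more tie-breakers, and the ASNs and peer names here are made-up documentation values):

```python
def best_route(routes):
    """Pick the best route using a simplified slice of the BGP decision
    process: highest LOCAL_PREF, then shortest AS path. This only
    illustrates Alexander's point, not the full algorithm."""
    return max(routes, key=lambda r: (r["local_pref"], -len(r["as_path"])))

legit = {"via": "peer-A", "local_pref": 100, "as_path": [64500, 64501]}
leak  = {"via": "peer-B", "local_pref": 100,
         "as_path": [64502, 64510, 64500, 64501]}

# With a flat LOCAL_PREF policy the longer (leaked) path loses...
print(best_route([legit, leak])["via"])               # peer-A

# ...but a higher LOCAL_PREF on the leaking peer overrides path length.
print(best_route([legit, dict(leak, local_pref=200)])["via"])  # peer-B
```

The second case is exactly the "multiple layers of local preference" situation he warns about: LOCAL_PREF is evaluated before AS path length, so the leak's longer path no longer protects you.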
I think that's a reasonable compromise between additional complication and making it relatively useful. Just quickly, another point that I forgot to make when I was talking last: I think that the stuff around the origin code is a little silly.

I think mostly the reason that it's effective is because most receivers tend to just ignore it, because it doesn't actually change their path selection process meaningfully, and I think that if people started actively using it at scale to do traffic engineering, you'd find that people just reset it on their borders and the effect goes away. I don't think that's a recommendation that we should be making to people.

Yeah, I find myself agreeing with Randy and Ben to some extent. The shape of the Internet has changed significantly over the past, you know, 25 years or so that I've been meddling with it, as has the question of what the diameter is and what service providers support.

We've been damaged in the past by prior implementations, and so we have AS path limitations implemented in the 20940 network that limit AS paths to 128, which is probably a safe, protectionist number, but is also driven by prior software defects. So identifying a number that is appropriate, based upon what service providers have...

...what capabilities, and coming back to protocol implementers with a way to describe that this number is okay and this number is not, I think is going to be really difficult for this group to undertake, just based on, you know, the role and position in the topology of the Internet.

Thanks. Two people are in the queue, Randy and Jeffrey. We'll start with Randy, and then there's an opportunity for Mike to answer, if there is an answer, and then it's up to Jeffrey. Randy, go ahead.

Ben, if where I am in the topology is part of the consideration for how I prepend, then that's fine. I didn't say the algorithm is based strictly on how many peers I have, or what color Wednesday is.

Jeffrey: so the one thing I will follow up on, from the Jared intervention, that might be worth discussing in the document, is the bugs that have happened from prepending, and the single biggest one is simply, you know, segment overflow. The AS path basically says: here's the number of ASes for this segment type, and there can be at most 255 in a given segment.

If you want to go to, you know, two segments in a row that are the same type, you have to add a new segment, and an awful lot of people had bugs, you know, for a number of years, that didn't get exercised until people like RIPE started playing around with these things experimentally, which people found to their horror when the Internet started crashing.

So it might be worthwhile mentioning that this is one of the possible contributors towards long paths being bad; not necessarily that it's broken in BGP, but implementations often are broken in this respect, and it's something to watch out for.

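The overflow pitfall Jeffrey describes comes from the one-octet path segment length in the AS_PATH attribute (RFC 4271): a segment holds at most 255 ASes, so a longer sequence must be split into additional segments of the same type. A structural sketch of correct encoding (data structures only, not the wire format):

```python
def as_path_segments(as_numbers, seg_type=2, max_len=255):
    """Encode an AS_SEQUENCE (segment type 2) into AS_PATH segments.
    Each segment header carries a one-octet AS count, so a segment can
    hold at most 255 ASes; heavily prepended paths that exceed this must
    spill into further segments of the same type, which is the case
    implementations historically got wrong."""
    segments = []
    for i in range(0, len(as_numbers), max_len):
        chunk = as_numbers[i:i + max_len]
        segments.append((seg_type, len(chunk), chunk))
    return segments

# A heavily prepended path of 300 ASes needs two AS_SEQUENCE segments.
path = [64500] * 300
segs = as_path_segments(path)
print([(t, n) for t, n, _ in segs])   # [(2, 255), (2, 45)]
```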
On the difference in opinion on the origin code: what I understood from Robert Raszuk's contribution is that he merely posited that it may be possible to use the origin code as an alternative mechanism to accomplish the same as what was set out to be accomplished with the prepending; not that the origin code is a better practice than prepending. He was just enumerating, was my understanding of it.

So, since we're deprecating AS_SETs from the AS_PATH attribute, does the group feel that that frees up room for longer AS paths? No, I'm only joking.

Mike, my summary of the feedback on this presentation appears to be this: nobody's questioning whether five is an appropriate number today, based on the data it was derived from, but to make the document more future-proof, it would be good to see whether the number five can be constructed from different data sources, perhaps ones more applicable to the local environment in which the operator reading the future BCP resides.

Thank you for your time, Mike. With that, we have arrived at the top of the hour, and I would like to conclude this GROW session.

Leaving the meeting... I guess the chair just leaves the room. I will try that.