From YouTube: IETF105-IEPG-20190721-1000
Description
IEPG meeting session at IETF105
2019/07/21 1000
https://datatracker.ietf.org/meeting/105/proceedings/
A
Okay, there's no Note Well... well, Geoff will get angry if I don't do the Note Well thing. Yeah, he would. We have four presenters (five presenters, sorry) and potentially a sixth, so we're going to skip along pretty quickly between the presentations. If you have any questions, I guess the presenters can take those as they see fit. First up is Joel. We have about 20 minutes per person. Okay, if you don't need that much, that's good!
B
This particular complex of vulnerabilities: the basic gist of what's going on here is that there are various ways to panic your kernel using really small packets, and if you're a malicious client, you can actually get the server to participate in this by setting your TCP MSS rather low. One of the proposed mitigations, assuming that you cannot simply reboot all of your machines in a big hurry, is to limit the minimum MSS that can actually be set on a connection. So, in the CVE...
B
One of the proposed mitigations is actually the iptables lines that you see here, for IPv4 and IPv6. And, you know, hopefully nobody is actually advertising a TCP MSS of 500 on IPv6; I'm not sure why we would consider that legal, but as it turns out, people actually do. So before I went and slammed that mitigation onto several thousand Linux machines that serve millions of connections per day per box, I went and actually looked at two questions. What is a legitimate TCP MSS?
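The decision those iptables rules encode can be sketched in a few lines. This is an illustrative model, not the actual netfilter code; the 500-byte floor is the value from the mitigation discussed in the talk, and the "plausible minimum" constants follow from the header arithmetic discussed next.

```python
# Illustrative model of the mitigation's logic: drop TCP SYNs whose
# advertised MSS option is implausibly small (the iptables rules use
# a "-m tcpmss --mss 1:500" style match).
MIN_PLAUSIBLE_MSS_V4 = 536   # 576-byte default datagram minus 40 bytes of headers
MIN_PLAUSIBLE_MSS_V6 = 1220  # 1280-byte minimum MTU minus 60 bytes of headers

def should_drop_syn(advertised_mss: int, floor: int = 500) -> bool:
    """Return True if a SYN advertising this MSS should be dropped."""
    return advertised_mss < floor

# A malicious client advertising MSS 256 gets filtered; normal clients pass.
assert should_drop_syn(256) is True
assert should_drop_syn(1460) is False
```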
B
We
can
go
back
to
some
fairly
foundational
documents
with
respect
to
ipv4
to
look
for
advice
there.
Obviously,
at
one
point
in
time,
the
maximum
default
maximum
Datagram
size
was
576.
Really
it
still
is
for
v4,
and
you
can
of
course,
obviously
send
a
packet,
that's
smaller
than
that
down
to
the
size
of
your
minimal
amount
of
data,
that's
associated
with
your
tcp
ack,
so
about
60
bytes,
but.
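The MSS values that recur through this talk are all just an MTU minus 40 bytes of IPv4+TCP (or 60 bytes of IPv6+TCP) headers; a quick sketch of that arithmetic:

```python
def mss_from_mtu(mtu: int, ip_header: int = 20, tcp_header: int = 20) -> int:
    """TCP MSS implied by a link MTU: the payload left after IP and TCP headers."""
    return mtu - ip_header - tcp_header

assert mss_from_mtu(576) == 536                  # classic IPv4 default datagram
assert mss_from_mtu(1500) == 1460                # Ethernet, IPv4
assert mss_from_mtu(1500, ip_header=40) == 1440  # Ethernet, IPv6
assert mss_from_mtu(9000) == 8960                # 9K jumbo frames
```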
B
There are some that are pretty common. I think the most common very small one was, unsurprisingly, 536, so 576 minus 40 bytes; there were 3,000-odd packets with that MSS set out of the hundred million. Many of these are pretty rare; you can actually see that in this one.
B
If you do some quick math on what percentage of the TCP SYNs that we see in those POPs is actually v6... but this is actually the percentages for the common MSS sizes, as sampled from a hundred million SYNs. So, fortunately for our expectations, 1460 and 1440 are pretty highly represented in this distribution.
B
8960, interestingly: it's actually hard for me to separate out the portions of my network infrastructure that use that, because we speak 9K MTUs back and forth to each other, from the portion of the internet that optimistically suggests it can support a 9K MTU. I would say that for anything bigger than 1,500 they're basically counting on path MTU discovery to work, and that's interestingly optimistic, but for the most part we clamp those on their behalf so that it doesn't have to.
B
These are obviously just the MSSes that they sent us, not what we sent back, okay. So if we look down here at the bottom, because I wanted to know, if I was going to block small MSSes because malicious clients might be using them, what sizes would I see commonly down at the bottom: out of a hundred million samples, the only ones that gave me pause were this guy at 512 and the one at 256 to boot. So these seemed like interesting choices.
B
Interestingly, I didn't see those in all of our POPs, so many of these oddball ones appear only regionally. So there are particular providers or clusters of devices that seem to be associated with these kinds of behaviors; they're not generalized. I.e., the 256 one was most commonly seen in Northern Virginia, and when I actually specifically filtered that, the rate increased, so I can intuit from that that those clients were not very happy when they got filtered.
B
So, in fact, when I built a mitigation for this and socked it away for future use, we put the threshold below that 256 number, because even though we are down here at 10 to the minus 6 or 10 to the minus 8 in terms of frequency of things that we see, there's an awful lot of those. So I guess...
B
Looking at this, one of the things we have to ask ourselves is what problems were exposed here. There's a generalized problem with poorly understood and infrequently exercised code paths. We see a number of kinds of vulnerabilities like that: IP options handling, IPv6 Packet Too Big handling; and some of these are apparent in more than one operating system and in separate implementations. So some of these things are a product of interpretation of IETF standards.
B
There are implementers out there that are looking at the advice that we provide and producing configurations that are feasible but not necessarily compliant, or maybe they're completely illegal; MSSes lower than 536 or 1240 seem to fit into that category. So when you see those, and they're the product of, say, additional encapsulations, where someone has set their MTU lower deliberately because they're doing PPP, for example, those are all well understood.
B
So if you see an advertised MSS of 1388, it's pretty obvious what's going on there, right? But these 512 and 256 seem like numbers that someone picked out of a hat; you didn't arrive at those because of the structure of your link MTU, for example. So one of the things that's exposed here is that malicious clients are able to control the behavior of servers, right. If you advertise a lower MSS than the server would prefer to use...
B
...the server is going to use that, unless it has a knob to prevent it, and Linux actually has one of those now, as a product of this particular vulnerability. But that's a behavior we try to avoid in lots of things: avoiding cases where the client can specify things that you really don't want to do, and you end up doing them anyway.
B
So, is there some advice we should be offering here? RFC 6691, for example, went back to what we specified for MTUs and how to calculate offsets, and provided better advice, basically because of a lack of clarity, and I think there's some argument to be made that there's more clarity that we could provide to implementers.
D
Good morning, I'm Fujiwara from JPRS. I will talk about attacks on path MTU discovery. This is a follow-up work presentation: the attack on path MTU discovery was presented in "IP fragmentation attack on DNS" at the RIPE 67 meeting, October 2013, and in "Domain Validation++ for MitM-Resilient PKI".
D
These papers show that some implementations accept a crafted ICMP Fragmentation Needed message with DF set carrying a small MTU value, smaller than 576. In particular, the path MTU value can be decreased to 552 on Linux 3.x or later, and the paper says that the path MTU may be decreased to 296.
D
Then I would like to evaluate the attack. This slide shows the evaluation methodology. First, the attacker generates a crafted ICMP packet (details on the next slide). Second, it sends the packet to the target; the attack rewrites the path MTU value between the target server and another machine. And third, we verify the result on the target machine using commands: on Linux, the "ip route get <IP address>" command; on FreeBSD, sysctl or netstat showing the TCP host cache; and on NetBSD, the route command shows the path MTU.
D
And this page shows how to generate a crafted ICMPv6 Packet Too Big packet. The crafted ICMPv6 packet contains an IPv6 header, an ICMPv6 Packet Too Big header with a small MTU value (for example, 280), and then the original IPv6 header and UDP header with a large data size, zero-filled to the end of the packet. This small piece of code shows how to generate the packet.
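The packet layout described above can be sketched with the standard struct module. This builds only the ICMPv6 Packet Too Big portion (RFC 4443: type 2, code 0, checksum, 32-bit MTU, then as much of the invoking packet as fits); the checksum is left zero because computing it requires the enclosing IPv6 pseudo-header, so this is a layout illustration rather than a ready-to-send packet.

```python
import struct

def icmpv6_packet_too_big(mtu: int, invoking_packet: bytes) -> bytes:
    """ICMPv6 Packet Too Big body: type=2, code=0, checksum (zeroed here),
    32-bit MTU field, then the start of the packet that triggered it."""
    header = struct.pack("!BBHI", 2, 0, 0, mtu)
    # The whole ICMPv6 message must fit in a minimum-MTU IPv6 packet:
    # 1280 bytes minus the 40-byte IPv6 header carrying it.
    room = 1280 - 40 - len(header)
    return header + invoking_packet[:room]

# A crafted message advertising MTU 280, quoting a fake IPv6+UDP header.
msg = icmpv6_packet_too_big(280, b"\x60" + b"\x00" * 47)
assert msg[0] == 2                           # ICMPv6 type: Packet Too Big
assert msg[4:8] == (280).to_bytes(4, "big")  # advertised MTU field
```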
D
This slide shows verification of the result. On Linux 2.6.32, the "ip route get" command shows that the path MTU value changed to 1280 on IPv6, and decreased to 552 on IPv4. And this page shows the verification result on FreeBSD and NetBSD after the attack.
D
Linux 2.6 accepts the crafted ICMP Fragmentation Needed with DF set for UDP, and the path MTU decreases to 552 (and to 1280 for IPv6). FreeBSD and NetBSD ignore the crafted ICMP Fragmentation Needed for UDP; they don't have code that acts on the crafted Fragmentation Needed for UDP. However, Linux, FreeBSD and NetBSD all accept the crafted ICMPv6 Packet Too Big for UDP, and the path MTU decreased to 1280.
D
This page shows a summary of the attack. The Linux systems accept the crafted ICMP Fragmentation Needed with DF set for UDP, and the path MTU changed to 552; the BSD systems ignore ICMP Fragmentation Needed for UDP. And the Linux and BSD systems accept the crafted ICMP Fragmentation Needed with DF set and ICMPv6 Packet Too Big for TCP, and the MSS changes to match the TCP session.
D
Then I'd like to propose: don't change path MTU discovery for TCP. ICMP Packet Too Big is expected to work for TCP on IPv4 and IPv6, because the TCP stack uses the ICMP Packet Too Big / Fragmentation Needed messages it accepts to adjust the packet size toward the peer's MSS. And UDP is safe to use with packet sizes up to 1280 on IPv6, so ICMPv6 Packet Too Big is not needed for UDP. And I'm...
D
...proposing recommendations to avoid the fragmentation in DNS. So this page is my draft proposal for DNS: full-service resolvers should set the EDNS0 requestor's UDP payload size to 1220; authoritative servers and full-service resolvers, where possible, should set the EDNS0 responder's maximum payload size to 1232; authoritative servers may send DNS responses with the IP "don't fragment" (IPV6_DONTFRAG) socket option; and full resolvers may drop fragmented UDP responses, dropping the fragments before IP reassembly.
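The payload-size numbers in these recommendations come out of simple header arithmetic: a minimum-MTU IPv6 packet (1280 bytes) leaves 1232 bytes of DNS payload after the IPv6 and UDP headers, and 1220 is RFC 4035's DNSSEC floor, sitting safely under that bound. A sketch:

```python
def max_unfragmented_dns_payload(mtu: int = 1280, ipv6: bool = True) -> int:
    """Largest UDP DNS payload guaranteed not to fragment at this MTU:
    the MTU minus the IP header (40 bytes IPv6, 20 bytes IPv4) and the
    8-byte UDP header."""
    ip_header = 40 if ipv6 else 20
    return mtu - ip_header - 8

assert max_unfragmented_dns_payload() == 1232                # IPv6 minimum MTU
assert max_unfragmented_dns_payload(576, ipv6=False) == 548  # IPv4 default datagram
```

Any EDNS0 payload size at or below that bound avoids depending on fragmentation entirely.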
B
Yeah, Joel Jaeggli. So this is definitely something I was thinking about in the context of: could I get things to go lower than that, right? I don't see a real high value, as a malicious agent, in reducing someone's MTU from, say, 1500 to 1280, right; that's kind of me being an ass, I suppose, but that's not nearly as costly as if you can, say, reduce it to 200. So, I mean, this is it.
B
If we still want path MTU discovery to work, then out-of-band messages are of course necessary. If you want to do it in band, well, yeah, there are other mechanisms for doing that which are less susceptible to this kind of behavior. But unlike the case potentially in v4, particularly with really old architectures where you really could reduce the MSS by a lot, I think this is less dangerous. I mean, if you're using a huge MTU, like say 65K, reducing it to 1280 is obviously quite detrimental to your performance and behavior.
B
But you know, those are not general Internet cases.
F
Eric. Yeah, 1280: the IPv6 decision to do 1280 seems like a better and better idea with every passing presentation today, so this has been interesting. I think the original presentation about the fragmentation DNS attack that you mentioned at RIPE was also very detailed and fascinating. I think it's possible to do more sort of authentication, if you will, of the ICMP packet for TCP, because you have some state tables; for UDP you might not, because the socket is not necessarily connected.
F
Yeah, I think if it's an unconnected socket, it might not ever maintain any state about that, but if it's connected, it would have it. So it could definitely at least try to scan that table; I think that would make sense, yeah. This, I thought, was also a problem with the zero-MTU advertisement.
G
Well, actually, I'm not really worried about TCP or UDP fragmentation attacks on DNS, because we do actually know how to deal with them. We've got... we can do that; we can deal with them at the UDP, at the DNS level. If we have to, you just use TSIG, which generates a cryptographically secure signature over the UDP message, and there's no way a reassembled packet passes that. Now, as I suggested when this was originally brought up, we could use online TKEY and TSIG to completely mitigate this problem.
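TSIG's protection here is, at heart, an HMAC over the whole DNS message with a key shared between the two parties, so a response whose bytes were altered by spoofed fragments cannot verify. A minimal sketch of that property (this is the idea only, not the actual TSIG record format, which also covers a timestamp and key name; the key below is a hypothetical stand-in):

```python
import hashlib
import hmac

def sign(message: bytes, key: bytes) -> bytes:
    """HMAC-SHA256 over the message, as TSIG does in spirit."""
    return hmac.new(key, message, hashlib.sha256).digest()

key = b"shared-secret"  # hypothetical key agreed out of band (or via TKEY)
mac = sign(b"dns-response-bytes", key)

# A response reassembled from spoofed fragments has different bytes,
# so its MAC no longer verifies.
assert hmac.compare_digest(mac, sign(b"dns-response-bytes", key))
assert not hmac.compare_digest(mac, sign(b"dns-response-BYTES", key))
```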
H
The solution we're proposing here is to create an open system, kind of like what RIS Live is for BGP updates, where the data owners push data into the system, and then it replicates the data back out to any of the end users that are interested. That could be the domain owner, the domain users, anyone that relies on those objects, or people just monitoring for security reasons. So, in phase one of what we're trying to do...
H
...we're gonna try and get as much data from the registries as we can to push into this project. So this is essentially a pub/sub system that will send the data. The main goal is to get it back to the registrant, so the people that own the domain should get updates when anything changes in their name going forward. We're also talking about trying to get data from either the gTLD name servers and then possibly the registrars, so that we can cross-check what the registrar sent us.
H
That's one area I forgot to put on this graph: the registrar could receive this data, so that when the registry makes the change, they can notice that they actually got the update; or, if they didn't request the change and they see the update, they know something went horribly wrong.
H
So we modeled this somewhat off of the certificate transparency model. The one major change that we built into it was that we were gonna try and build some monitoring and alerting on top of it. You'll still be able to get to the full log, but we wanted to make it so that you could also get real-time updates for whatever you're interested in, so you're not having to go and crawl the entire log.
H
So, as I was saying, the data we want in is data from registries, mostly the public zones that are out there, kind of like CZDS, the Centralized Zone Data Service from ICANN, but more up-to-date and faster. We have no interest in contact information; these slides are from the ICANN meeting, and I knew that was going to be a hot-button topic if we were wanting Whois information, because I don't want to deal with GDPR. And then we're hoping to output both our raw feed...
H
We're mostly presenting around a whole bunch of different places to see if people are interested in this. If you're a registry and want to provide data, I'm happy to talk, or you can reach us at the contact information at the end of the slides. If you're a registrant and want to subscribe to this sort of thing, let us know what your use case is.
K
Giovane, SIDN. Again: just a nice project; I really like this idea. This is a problem we have. Of course, the problem here is not the technical part; it's getting the registries or the registrars to share data with you. You could also try maybe to build another system in parallel to this one, in which people can submit their own domains, if you will, and then you start tracking them, from the parent and the child delegation, to see if they match. If there's a change, you can just notify the same way.
H
So
tracking
that
sort
of
change
where
we're
getting
data
from
below
the
second
level
is
also
something
very
much
on
our
radar,
and
that
was
we
had
proposed
going
that
method
first,
but
we
actually
have
more
registry
interest
than
we
thought
we
were
going
to
where
there
are
a
handful
of
registries
that
are
willing
to
hand
us
their
full
zone
file
in
essentially
the
same
format.
They
send
it
to
their
till
their
name
servers.
H
The primary reason for that is, if you look at... I'm trying to remember; someone from Verisign may be able to correct the domain name for me, but I think it's trust.verisignlabs.com, where you can see all of the name servers that your delegations rely on. So, like, if you're in a .org, you'll see that you have the .org name servers, you have all the Afilias name servers, and then you have the .net and .com ones from Verisign that you also rely on. So you'd probably want to know about any update.
H
If you're really interested in knowing name changes, you want to know all of the updates to any of those names, and to be able to subscribe to all of that. And then, like, if I'm a customer of... like, if my mail is hosted on Google, I probably want to know if any of their names change, so that I know if my mail is impacted.
M
Randy Bush. The DNS is pretty big, so I think the subscribers are gonna want to deal with scoping in many ways. But could you sometime, somewhere, post a URL, for those of us who do run ccTLDs, with a page on how to push? Yeah.
H
There's your URL. I didn't realize I only have one slide left, so I'll send it out to the IEPG list in a little bit. Right now it's a very basic website; it just has a quick description of what we're doing, and then we have a form where you can go and say "I'm a registry and I'd like to send data." We don't have any of the infrastructure deployed yet; once we do, I will come knocking. We'll do... a question for you, actually, is...
M
So, hi, I'm Randy, from IIJ Research and from Arrcus. How many people here remember the mid-April DNS attacks called Sea Turtle? Please... this is scary; this room's half DNS people, come on. Okay, so what happened was: DNS registration systems are a hierarchy, and a hierarchy is as strong as its weakest link.
M
It has the same security model as the secured deployment of DNS, which is object security as opposed to transport security. Okay. So if you can break things up the chain, you've got it. And, the way it is currently deployed, it has one wonderful additional brilliant weakness, which is that all five of the registries at the root of the registry hierarchies are authoritative for the root; that is, as if d.de was also authoritative for dot.
M
This is brilliant. Okay, in the security universe, the question isn't whether this will be attacked; it is when this will be attacked. Okay. We're starting, more and more... thank you very much to the people who are deploying, and to people like you who are putting a lot of effort into getting deployment out there.
K
So what happened: we had a study last year at IMC on TTLs and denial-of-service attacks, and we showed how longer TTLs protect users when there are denial-of-service attacks on the authoritative servers. And I presented that to the ops folks, and they were like: all right, it's nice to know that; so which TTLs should I use? I was like: oops, that's a different question; I haven't looked into that!
K
So that's what we're trying to do here. This has just been accepted, last Thursday, for publication at IMC this year in Amsterdam, which is perfect timing for this meeting. We have put the submission version online at this URL if you're interested; we're gonna, of course, create a revised version with the comments that we got last week, and that will be online as well. Now, caching is the cornerstone of DNS performance. We know, like, a 15-millisecond query response...
K
...time is really good, but 1 millisecond is far better, coming from a cache hit. And, as I said, we did a study last year where we looked at how caching protects users from denial-of-service attacks, DDoS attacks on authoritative servers. And the thing with TTLs and DNS caching is that the DNS TTLs set at the zones actually control cache duration at the resolver side, so it actually affects latency and resilience indirectly; actually, very directly. And there hasn't been a lot of evaluation on the topic.
K
I just mention here two studies; there are some more in the paper. And no research actually provides recommendations, or, in the context of the IETF, I should use the word "considerations" (it's less controversial), on which values are good for TTLs, because it's a very big challenge to determine what good values are: there are intrinsic trade-offs in the choice of TTLs. Short TTLs allow ops teams to change services quickly; long TTLs lower latencies and server load.
K
So, given that, and the other trade-offs around TTLs, it's no surprise there's no consensus. So let's try to fill this gap with this study, and we break it down into three different research questions. For the first one, we need to know if resolvers in practice, in the wild, are actually parent-centric or child-centric. I'm gonna explain this in more detail, but you can get some information either from the parent or the child, depending on how your zone is configured, and they may have different TTL values.
K
So we wanted to be sure who is actually in charge of the TTLs. For the second question, we wanted to know how the different parts of the fully qualified domain name change the effective TTL lifetime. For example, let's say you use a certain DNS provider that gives you the NS record TTLs for a domain of an hour: what happens if your A record is, like, two hours, and how do those things interact with each other? And the third question that we address in this paper...
K
...we wanted to know, right: we do some background work in questions one and two to see how the resolvers choose these TTLs, but then we wanted to know how they are actually used in the wild, how folks are deploying them. We know that the roots are very conservative, and they have to be, for resilience, so they have longer TTLs; but the CDNs, on the other hand, usually have shorter TTLs, because they want to change things very quickly. And our goal here is to provide recommendations, or considerations, on choosing those values.
K
So, let's start here with the first question, resolver centricity. Let me get one TLD: .cl, from Chile. If you would ask for the NS record of .cl from one of the root servers (I just picked one here, but it doesn't matter, they're all the same), if you ask for the NS record, you're gonna get a response...
K
...that says this, with a TTL of this value here, which is, like, two days; and within that response, that value comes in this authority section part of the response. Now, you can also ask the same question and type, but instead of the roots, you ask directly one of the authoritative servers itself, a.nic.cl, one of them, and then you're gonna get the same answer, but with a different TTL here, of one hour, and that's the child value.
K
So you see there's a difference here: two days versus one hour. And the response of this one actually comes as an authoritative answer, and comes within the answer field of the response. So it's quite confusing in DNS: a DNS response may have up to three parts, answer, authority and additional. So that's why this one here comes in the answer section, and it even has a flag that says it is actually the real authoritative answer.
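The parent-versus-child distinction reduces to which of the two observed TTLs a resolver keeps once it has seen both. A toy model of the child-centric rule (in the spirit of RFC 2181's trust ranking; the record values are the .cl numbers from the example):

```python
def cached_ttl(parent_ttl: int, child_ttl: int, saw_authoritative_answer: bool) -> int:
    """TTL a child-centric resolver ends up caching for an NS RRset:
    data from an authoritative (AA) answer outranks delegation data
    gleaned from the parent's authority section."""
    return child_ttl if saw_authoritative_answer else parent_ttl

# .cl example: the root says 2 days, the child (a.nic.cl) says 1 hour.
assert cached_ttl(2 * 86400, 3600, saw_authoritative_answer=True) == 3600
assert cached_ttl(2 * 86400, 3600, saw_authoritative_answer=False) == 172800
```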
K
So, to investigate that, we chose .uy, which is from Uruguay. Why? Because, well, at the roots, like every single TLD, they had a TTL of two days, but at the time of this analysis they had a child NS record TTL of 300 seconds and an A record of 120 seconds. So these values are very nice (for my experiments, sorry) but not for operations. They changed... let me get to that later.
K
They changed those values later. But we're using RIPE Atlas, which I use all the time (thanks, guys from RIPE, and thanks again for this platform, which is amazing; I use it all the time), and we would measure every 10 minutes. So if you run queries to those records every 10 minutes, every time you do it you never get a cache hit, because it's gonna be expired by then. So we did a bunch of measurements; I'm gonna cover two of them here.
K
The rest is in the paper. We queried for NS records for .uy and for the A record of one of the authoritative servers for Uruguay, and we just asked the probes, which were roughly 15,000 of them, 16,000, to ask their local resolvers to get those records. We don't know where the answer is gonna be coming from, and that's exactly what we want to know. So how do you know where the answers the resolvers are getting actually come from?
K
Well, you see here the CDF: we see here, at, let's say, 60%, there's a huge spike, where most queries for the A record come back with a TTL of 120 seconds, meaning that most resolvers are actually trusting the child value, not the one set by the roots. And the same applies to the NS record. So what we see here is that most resolvers are child-centric, preferring the TTLs of the authoritative answers, and as it should be; there's an RFC for that, 2181. I'm not sure...
K
One of the authors is here in the room; I think Randy is one of them. Maybe in Section 5 of that one, they specify this order in which you should trust records. So, with the two other experiments, which I'm not gonna cover because it's kind of a repetition, we needed to double-check that, and we said: well, let's use another domain name, using google.com, because .uy, those are TLDs.
K
So maybe things would be different; but no, those results are the same, and we analyzed passive data as well. So most resolvers are child-centric, and I think that's a good thing, because that gives the power to whoever actually owns the domain, not the parent. Now, the second question: how do different parts of the fully qualified domain name change TTL lifetime, how long it gets cached? So let me explain; this is a little more tricky, but let me try to break it down.
K
So we have this domain, cachetest.net, that we use for testing, and I created a subdomain of that, sub.cachetest.net, and I configured this domain in two different scenarios. In the first one, as the NS record for sub.cachetest.net I used a name server that is in bailiwick; and in the second scenario we used an out-of-bailiwick name server. In bailiwick means that your name server is under the same zone, in this case under cachetest.net.
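In bailiwick versus out of bailiwick is a purely syntactic property of the delegation, so it can be stated as a one-line check; the server names below are hypothetical stand-ins for the experiment's sub.cachetest.net setup.

```python
def is_in_bailiwick(nameserver: str, zone: str) -> bool:
    """True if the name server's own name falls under the zone it serves,
    i.e. the parent's delegation can carry glue for it."""
    ns = nameserver.rstrip(".").lower()
    z = zone.rstrip(".").lower()
    return ns == z or ns.endswith("." + z)

# The two scenarios from the experiment (server names are hypothetical):
assert is_in_bailiwick("ns1.sub.cachetest.net", "sub.cachetest.net") is True
assert is_in_bailiwick("ns1.example.org", "sub.cachetest.net") is False
```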
K
So it's part of this zone; and out of bailiwick means that it's different. And we intentionally set the TTL of the NS record to be shorter than the A record's: one hour versus two hours. And we wanted to know what happens with the answers in the cache once the NS record expires: is the A gonna expire as well? So we wanted to know how these different components of DNS interact to give answers to the users.
K
So that's pretty much how our setup looks. On the .net zone, that's what we have there, the delegation TTLs, and on cachetest.net, the zone that we actually take care of, we set the NS to one hour and the A record to two hours. And the trick is: we start the measurements, every 10 minutes, with RIPE Atlas, again 15,000 vantage points, and at time equals 9...
K
...we redirect: we just renumber our authoritative nameserver, and in this way we configured the new server to give a different answer. Why am I doing this? Because I wanted to know later where the answer came from, whether it's the new or the old server, because that's the thing I wanted to evaluate: if they're respecting the TTL or not. And I actually asked... the resolvers were asked AAAA queries; they don't matter, but they're not asking directly for those records.
K
I'm asking for records under that tree, and I use those RIPE Atlas vantage points. So these are the two figures: the first one here is for the in-bailiwick experiment, and this one is for out-of-bailiwick. Let me start here: every bar here shows a time bin of 10 minutes, and what we do in both figures is, at time equals 0, from 0 to 10, we allow the probes to ask for this AAAA query, and all the answers come from the original server.
K
That's the one configured at the start, which I think is fine. In both cases here, at time equals 9, when you see this arrow here, in both cases we renumber the authoritative nameserver, and we see what happens. So for this time period here, this color here, in both figures we see the same behavior. What is that? Resolvers that already have the values of these previous records cached still keep on going with them, because both of them are still valid, both A and NS. So nothing really new here. Resolvers that didn't know about that...
K
...go to the new server, because that's the one that's currently available. Now, that's when it gets interesting: here, only the TTL of the NS record expires, because it had a TTL of one hour. So once it expires, a resolver has to re-fetch that information again, and we see here, for the in-bailiwick experiment, most queries are actually going to the new server; but it's a very different scenario here for the out-of-bailiwick one, where most answers come actually from the old servers. So what is actually going on?
K
If you're configured in this way, even though you had an A record which has a duration of two hours, just by asking, in the in-bailiwick configuration you're gonna get also the glue, so the resolver is gonna get that information and update its cache, or figure out where this answer is located later. But if it is out of bailiwick, you don't get that; you just get the authoritative answer. So what matters here is that most resolvers...
K
...would trust cached A records served from different zones, for the out-of-bailiwick scenario. So the independence of the records actually depends on how your zone is configured, whether it is out of bailiwick or in bailiwick; and if it's in bailiwick, whenever the NS expires, it in a sense forces your A to also expire. Okay. So now let's move to the third research question: how do people use TTLs in the wild? So we just, like, got a bunch of hit lists...
K
...the public popular lists. I know they're biased, but we're not interested in that; we're interested, like, in a big picture of what they have. And we also used an entire ccTLD zone and the other TLDs available in the root zone, and we retrieved the TTLs for NS, A, AAAA, MX and DNSKEY records. And we focused only on child TTL values. Why? Because we found out that most resolvers are child-centric, and that matches some discussions with operators.
K
So we see here on this first line, on responses, the number of domains that responded, and the first lesson from this table you can see is that most of the domains, except for the root zone, are only using out-of-bailiwick name servers for their authoritative name servers. So it's like you have example.com...
K
...your name server would be at example.net. So most of them, 95.9 percent, are out of bailiwick — not the root, though. A lot of them have a mixed setup of in-bailiwick and out-of-bailiwick, and some of them are only in bailiwick. And I have here two figures with the CDFs of the TTL distributions for different zones. The way to read this is: for example, the root NS records here, from the child...
K
...you see roughly less than 10% are within 24 hours; but if you look at the other zones, around 40% have an NS TTL smaller than one day. That means the root is more conservative — more people there have longer TTLs — which is what we found. But if you look at the A records, you see the figures...
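The CDF reading described above can be sketched numerically; the TTL samples below are made up for illustration, not taken from the paper's dataset.

```python
# Sketch: one point on an empirical CDF of TTL values (seconds).

def cdf_at(ttls, threshold):
    """Fraction of TTL samples at or below `threshold`."""
    return sum(1 for t in ttls if t <= threshold) / len(ttls)

ns_ttls = [172800, 172800, 86400, 86400, 7200]  # hypothetical NS TTLs: mostly long
a_ttls = [300, 60, 3600, 86400, 300]            # hypothetical A TTLs: mostly short

one_day = 86400
print(cdf_at(ns_ttls, one_day))  # 0.6 — fraction of NS TTLs within one day
print(cdf_at(a_ttls, one_day))   # 1.0 — the A curve sits further to the left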
K
...shifted a little to the left — I'm just going to move back and forth so you can see it. That's because the TTLs of the A records are typically shorter than those of the NS records for the same domain. So that's what we're trying to see here: how people actually deploy those values. And Umbrella — it's a list, and I think it is the only one in our set that doesn't only have second-level domain information.
K
They have fully qualified names, and a lot of them are CDN names with many labels. Those CDN domains tend to be very short-lived — some of them are used only for one-time signaling — and that's why you see the Umbrella list here having way shorter TTLs: something like 60% of the A records have less than 10 minutes, which is a lot. So then we looked at this, and if I have to choose, I'd look...
K
First, looking at the TLDs, we found 34 of them had a TTL for the NS record which was under half an hour, and 22 with one or two hours, on the child delegation. We contacted the ccTLDs on this matter and six responded. Three of them had not considered this question before; others said it was intentional, because they were changing infrastructure.
K
We found that three of these actually increased the TTL of their NS records, because they were not aware of the issue — I didn't expect that to happen. But there were also folks with short TTLs on purpose, and I'm going to give one case here, Uruguay. Sorry — the slide is not advancing. Chris can help me here.
K
R
K
Perfect, yeah, all right — that does the trick. So, as I said, we know three ccTLDs actually changed the TTLs of their records, and one of them was Uruguay; we contacted them and asked permission to disclose that publicly. They had changed the value before, in the past, because they were doing some infrastructure changes.
K
But after that it went down to eight milliseconds, and the 75th percentile went down from 180 to 21 milliseconds — you're shaving off about 160 milliseconds just by changing the TTL. I think that's a great change. Of course it requires the records to be in cache, but that shows the power of TTLs: you can actually improve response times a lot. Next — let's see if it works... yeah, we should probably find some more figures anyway.
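A back-of-the-envelope sketch of why the TTL change shaved so much latency. The 8 ms cache-hit and 180 ms authoritative round-trip figures echo the numbers above; the one-query-per-minute load at the resolver is my own assumption.

```python
# Sketch: roughly the first query in each TTL window misses the cache;
# the rest hit, so a longer TTL amortizes the expensive authoritative fetch.

def expected_latency(ttl, gap, hit_ms, miss_ms):
    """Approximate mean latency: one miss, then ttl // gap hits per window."""
    queries_per_window = max(1, ttl // gap + 1)
    hits = queries_per_window - 1
    return (miss_ms + hits * hit_ms) / queries_per_window

short = expected_latency(60, 60, 8, 180)     # TTL = 1 minute -> 94.0 ms mean
long_ = expected_latency(86400, 60, 8, 180)  # TTL = 1 day -> close to 8 ms
print(short, long_)
```

The exact numbers depend on real query inter-arrival times, but the direction matches what the experiment showed: long TTLs push the mean toward the cache-hit latency.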
K
So, just going back to the research questions. Are resolvers parent-centric or child-centric? We just answered that: most are child-centric. How do the different parts interact? Bailiwick impacts a lot: how caching works depends on how your domain is configured. How are TTLs used in the wild? The values are all over the place — longer on NS answers, shorter on A and quad-A — and most of the domains we see are out of bailiwick. Next, to see if it works.
M
K
Yeah, that one — that's exactly what I said before. In this figure here, on the left, in red, we show the response times of the RIPE Atlas probes towards one authoritative name server at a single unicast location — you see the curve here. And if you change the TTL of that record and run the same unicast server with a TTL of one day, you see the performance improves a lot.
K
Here you would see roughly 60% of the clients under 15 milliseconds if they use a TTL of one day, but if they use a TTL of one minute, they are roughly around 60 milliseconds. In any case, we reran this on anycast: we put the same zone on Route 53 with a TTL of one minute, and my question was, all right, let's see how much anycast benefits the users compared to a longer TTL. Maybe if I have a big anycast...
K
...network, I don't care about the TTLs anymore. But from the point of view of performance, you see here that anycast doesn't improve things that much for clients which are very close; it's for clients further away that you actually start to see the difference. But caching helps more: if the domain is cached, of course, that beats anycast, and that's a lesson here for operators. Next, if it works.
K
M
K
So, reasons for longer and shorter TTLs. Longer TTLs enable longer caching, which means faster responses, lower DNS traffic to the authoritative servers, and more robustness against denial-of-service attacks. Shorter caching, on the other hand, supports operational changes — it's faster for you to change stuff — and can help if you use DNS-based denial-of-service mitigation.
K
It can also cope better with DNS-based load balancing. So the takeaway here is that organizations should weigh these trade-offs to find a good balance. Next: the recommendations, or considerations, that we drew. There is no single optimal TTL for all users, but for general users longer TTLs — hours, or even a day if you will — are great, even for TLDs as well. The exception is if you're running DNS-based DDoS protection services, where you would like to have a shorter TTL.
K
It also allows you to quickly reconfigure your service — but if you do only BGP-based mitigation, you don't care about that. As for A and quad-A records and their relationship to the NS: for out-of-bailiwick name servers the address records are cached independently, so we don't really care much about their TTLs. But if they are in bailiwick, we recommend that the TTL of the A and quad-A records be shorter than or equal to that of the NS records.
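The A/quad-A recommendation above can be phrased as a simple lint check. This is a sketch with my own naming, not code from the talk or the draft.

```python
# Sketch: warn when in-bailiwick glue outlives the NS records it belongs to.
# (For in-bailiwick servers the address records expire with the NS set anyway,
# so a longer glue TTL buys nothing and can mislead.)

def check_glue_ttls(ns_ttl, addr_ttls, in_bailiwick):
    """Return a list of warnings for a zone's delegation TTLs."""
    warnings = []
    if in_bailiwick:
        for name, ttl in addr_ttls.items():
            if ttl > ns_ttl:
                warnings.append(f"{name}: glue TTL {ttl} > NS TTL {ns_ttl}")
    return warnings
```

For example, `check_glue_ttls(7200, {"ns1.example.com": 86400}, True)` flags the glue, while an out-of-bailiwick server with the same TTLs passes.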
K
And you should have at least one — preferably at least one — out-of-bailiwick name server, in case the zone becomes unreachable. Next: conclusions. TTLs in DNS are a complex topic; we all know that. We carried out a bunch of carefully designed experiments to figure out how those factors interact.
K
We show that in the wild there is little consensus on how TTL values are used. And what I really liked about this paper was the discussion with the ops teams of some ccTLDs; that actually led to improved user experience — in short, longer TTLs. So if you can do that, just do it. And this is also a very good time, because we now have a draft at the IETF proposing these recommendations. Sorry.
K
About the way we did the measurements: we used RIPE Atlas, and we are kind of resolver-agnostic, because I don't know which resolvers they are and what they do. I haven't looked at that problem, but I know someone who is looking into it — he works with Roland, who's here, from the University of Twente in Holland. He's actually looking into every single resolver version and what they're actually doing, because that's a different contribution.
I
You know, caches and stuff — the fact that you can tell which answer, because there are different answers in different places: you know which answer you received, and whether or not you were able to infer from that what the resolver stack, or stacks, are doing. So yeah, it seems like it would be an interesting addition.
K
C
S
Morning all, I'm Geoff Huston, I'm with APNIC. As Randy observed, IEPG meetings are exclusively for DNS and BGP and nothing else, so Joel and the MTU work were merely aberrations. But, you know, you hadn't had a BGP dose this morning, and you're not allowed out into the room until midday, so you've got to have your dose of BGP.
S
Now, BGP is wonderful because, of all these pieces of work, BGP is the one protocol that brings the entire internet back to you. Your details might vary from my details, depending on where you sit in the routing mesh, but the whole thing about routing protocols is that you get to see a complete topology of the entire internet.
S
Not perfect, but not bad, and in some ways it gives you some amazing insights as to what the internet did. If you take all of Route Views over all of its data — which starts around '93 in terms of hour by hour, and this is hour by hour, a huge amount of data — you actually see the major events that happened in the internet. You know, the great internet boom and bust — god, I'll need a microscope, it's just around here somewhere — you know it happened, big boom and bust.
S
That's the date at which IANA ran out, then APNIC ran out, then RIPE, then LACNIC, and then we seem to be running on empty — and running at an increased rate in terms of the routing size, even though we're not pushing more addresses into the routing system. There's something going on there; let's have a look at it. This is 24 months in detail of every peer of Route Views and every peer of RIPE RIS. Bizarrely, Route Views sees more prefixes and, oddly enough, actually more addresses than RIS.
S
There are no more new addresses — that's 2011 over there on the left, and that's when we ran out of really pushing large amounts of address space out into the internet. Yet we're growing, at fifty-two thousand prefixes per year in the v4 space, almost clockwork, and what is even odder in some ways is that the AS growth is just as clockwork.
S
So at some point — I don't know how you guys organize this as operators — it's as if there are only ten per day, and once there are ten, no more: you've got to line up tomorrow, even on weekends. It's phenomenally uniform in terms of the growth rate of AS numbers. BGP for traffic engineering is still BGP for traffic engineering; absolutely nothing has changed over that same extraordinarily long period. The amount of more-specifics, which used to be half of the routing table, is now 54 percent of the routing table — i.e., around half.
S
So no matter what we say in the message about aggregate, aggregate, aggregate: whoever's not listening is still not listening, and whoever is listening was always listening, and so the basic message has never really changed. The only thing that's really changed over those last few years is the average size of a routing advertisement, because there are no new addresses, right?
S
The address span that's being advertised has been constant over the last few years. So how do you get all those new routing advertisements in? You advertise smaller and smaller prefixes. The average routing entry now spans 4,000 /32s; it used to span around 7,000. So the average routing element is getting finer and finer as we progress. That's the address span, and you can see from around...
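The "average routing element" arithmetic works like this; the prefix mix below is hypothetical, not the measured table.

```python
# Sketch: total advertised address span divided by the number of entries,
# expressed in equivalent /32s. Each IPv4 prefix of length L covers
# 2**(32 - L) addresses.

def span_in_slash32s(prefix_lengths):
    return sum(2 ** (32 - l) for l in prefix_lengths)

def average_entry_span(prefix_lengths):
    return span_in_slash32s(prefix_lengths) / len(prefix_lengths)

# A hypothetical table: one big block plus mostly fine entries drags
# the average down, which is the trend the talk describes.
print(average_entry_span([8, 16, 24, 24]))  # 4210816.0
```

With constant total span, every extra fine-grained advertisement lowers this average — hence 7,000 /32s per entry falling toward 4,000.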
S
...2016 or so, that was the amount of address space being routed in the internet. We actually haven't unleashed the rest of the unadvertised addresses into the network; that's now relatively constant, sitting there at around 170 /8s being advertised, and no more is coming out. Whoever's sitting on unadvertised addresses is sitting on them; whoever was going to sell has possibly sold — I don't know — but we're not seeing more in the routing space. And the other thing that's pretty constant...
S
So this is another one: the number of adjacencies. Now, the people on this list might vary depending on where you sit. Where I sit, I see Hurricane Electric with the greatest number of adjacencies — that's AS6939. But that's me; you look at yours, you might find a different one. What you will find, though, is that around 9 to 10 ASes have an extraordinarily large number of adjacencies — more than a thousand neighbors — and about 2,000 or so have around...
S
...you know, 10 or more adjacencies, and everyone else is out on the edge with one or two adjacencies. So it almost looks like a power-law distribution. It's not exactly obvious who sits in the top slot as a global constant, because I don't think it is one — it depends on where you are — but that shape is the same no matter where you are. So the internet is very heavily centralized: not even a star network, it's a dense black-hole kind of network, where a very small number of folk...
S
...do the bulk of the transit routing, and everyone else just attaches at the end, and everyone wants to get as close as possible to those magic 10. There are no long AS-path vectors flying around as sort of an industry norm. So this is why the actual date doesn't matter: nothing's changed. You know the growth plots; it's just business as usual. The number of entries has reached, you know, a magic three-quarters of a million, and it's keeping on growing at much the same rate: 52,000 entries a year.
S
The AS count grows just as steadily each year, and, quite frankly, the way we're doing this is shorter and shorter prefixes but exactly the same topology. So what about address exhaustion? Well, you know, ARIN ran out — just really ran out. Everything's going to go in about May next year; LACNIC late this year; APNIC and RIPE both have these last-/8 policies, you know, dribbling it out — APNIC late 2020, maybe; if they change their policies, and I think they have, to a /23 or something, it might last a bit longer — but effectively...
S
...all that's left is sort of dribbling out little bits and pieces. So what's driving growth right now in v4? Is it all transfers? Are these last /8s the factor, or is it leasing and address recovery? What if you take a snapshot of the routing system at the start of the year and a snapshot at the end of the year, and eliminate everything that's in both tables?
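The year-over-year method just described can be sketched as a set difference plus an allocation-age lookup. The prefixes and dates below are documentation examples, not real measurement data.

```python
# Sketch: diff two routing-table snapshots, then ask how recently the newly
# appeared prefixes were allocated by a registry.
from datetime import date

def new_prefixes(start_snapshot, end_snapshot):
    """Prefixes present at year end but not at year start."""
    return set(end_snapshot) - set(start_snapshot)

def fraction_recently_allocated(prefixes, alloc_dates, first_seen, window_days=365):
    """Share of new prefixes allocated within `window_days` of first sighting."""
    recent = sum(
        1 for p in prefixes
        if (first_seen[p] - alloc_dates[p]).days <= window_days
    )
    return recent / len(prefixes)

start = {"192.0.2.0/24"}
end = {"192.0.2.0/24", "198.51.100.0/24", "203.0.113.0/24"}
new = new_prefixes(start, end)
alloc = {"198.51.100.0/24": date(2018, 6, 1), "203.0.113.0/24": date(1995, 1, 1)}
seen = {"198.51.100.0/24": date(2018, 12, 1), "203.0.113.0/24": date(2018, 12, 1)}
print(fraction_recently_allocated(new, alloc, seen))  # 0.5
```

The talk's observation is precisely that this fraction fell from roughly 0.8 in 2010 to roughly 0.2 last year, with much of the remainder being very old allocations.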
S
For the new stuff, you look at it and say: when was that allocated? So, what appeared through the year, and when did the registries claim they gave it out? In 2010, 80 percent of all the new addresses we saw in the routing table at the end of the year had been allocated within the 12 months before they first appeared. In other words, you got an address, you routed it, and that was the major source of entries into the routing system.
S
This is every year since then, and what you notice is that it's a very similar sort of curve, but it keeps on dropping. So last year only 20% of all the new addresses I saw in the routing system had actually been allocated or assigned by an RIR in the last 12 months — which kind of figures. What isn't quite so obvious is that almost half are really, really old — more than 20 years old — so this almost predates the RIR system.
S
We had 50 /8s that weren't in the routing system; we now have 48. So this whole idea that transfers and trading would unleash a huge amount of address space that was otherwise dormant isn't supported by the facts. In fact, the last 12 months were even more amazing, in that the amount of unadvertised addresses actually rose, not fell, and there are a number of large transfer deals where the end result was that addresses that were previously advertised are now not advertised — which, again, is sort of anomalous behavior.
B
You had a question? Joel: I can't speak to the details of any particular transfer, but I would observe that one of the things people tend to require when they do transfers is that the previous prefix announcements go away. So that actually means that some existing advertisements have gone away — that I know of specifically — prior to the address space being transferred.
S
That kind of bears out what we see, too, and the predominant factor we see in the movement of addresses is towards various forms of CDNs and cloud providers. And they do go away; we're probably going to see them again at some point, but in this case the going-away was more than three months — a relatively long quarantine period between previous use and next use.
S
So, some addresses were assigned from the RIR system — relatively small amounts — but 2.1 /8s actually were dropped into the quarantine pool; we'll probably see them again a little while later, and most of that was actually heading towards the cloud providers. So, as far as I can see, the biggest buyers are not ISPs but cloud providers; ISPs don't seem to be doing much. V6? Well, exponential growth. All you v6 folk, you can smile: it's still exponential, it hasn't gone linear yet, which is probably really good news. Jared actually...
S
...commented on that, but right now it's impossible to create a model that says when this exhausts. If you look at that trend — there was a slight downward curve, then it healed back up again — the signal-to-noise kind of says we're going to sit between 45 and 50 for an awfully long time; the stock isn't draining. Now, I don't understand the market signals out there.
M
The second thing is, I just had a little trouble with your "unrouted" and "legacy space doesn't exist" — some of your terminology there seemed a little extreme. But the motion — what's interesting is, I think what we're seeing is a motion from stuff coming from the RIRs to stuff coming from existing holders, and if I were an RIR I'd be very happy that I chose a rental model instead of a sale model.
S
I'll go back to this graph, which is, I think, what you're referring to: this relatively large pool of addresses that appeared in the routing system that weren't there at the start of the year, whose original allocation date was a long time ago — in some cases predating the RIRs — and the observation is just that that number is increasing; the percentage gets bigger each year. We're mining ancient coal, right? This is not recent energy; it's from a long time ago. We're good at this.
E
S
That's a good point. Time has gone into looking at bogons, you know, and routing issues; it makes what I thought was a quick pass even longer. But it is useful to understand the extent to which what's in the routing system is valid, versus "wow, how did that get there?" Yes — now, specifically...
S
I assume — and you might say I'm out on a plank that's just broken behind me — I assume that the dates recorded in the RIR registry are real and that's the date the addresses left the shop. And when ARIN changed the date on a transfer, I wrote it back again, because I want to know the date it first left the shop, not the date when they fussed around with it and changed something. So I'm going for the allocation date, when a previously unheard-of prefix comes into play. But...
T
You know, I'm curious as to the makeup of this unadvertised space. There are a lot of rumors that the U.S. DoD is sitting on a whole bunch of space, and — as I said, there are a lot of rumors around — that people have squatted on it. So I'm just curious: what are your thoughts and opinions on all of that?
P
So the two things that I think might be interesting out of that: the first one is that if the AS paths have no relationship to the holder, that potentially implies, you know, that the allocation is just walking away — and that this is an allowed thing, of course. The second thing I find interesting, based on what some of our security friends tend to say, is that those spaces have been sort of prime pools where hijacks are happening, and this is where cross-correlation with the RPKI stuff gets interesting. That's...
S
...a different talk. I do look at that; it's a different talk, and I'll take it at that. We'll press on. All you v6 folk, you're waiting for this — this is where I'd got to. The exponential growth is still growing there. Interestingly, for v6, RIS and Route Views have a very consistent view. Now, this is a recent routing table.
S
It's only, you know, a decade or so old in reality, so there are fewer ghosts. One of the issues is: to what extent is the v4 table full of prefixes that are actually unadvertised, but the withdrawals haven't fully permeated through the entire system — and no one's going to propagate a second, gratuitous withdrawal. The fact that the v6 table is more concise and tighter, and that RIS and Route Views see exactly the same picture, is interesting; but more than that, I don't know. Ghost-hunting is extremely difficult.
S
Advertised address span: it's linear on a log scale. Remember maths — if it's linear on a log scale, that means on a normal scale it would be exponential, but I couldn't get it on a graph. So that means it's growing, and it's growing at an exponential rate; take my word for it. The interconnection in v6 is weird, and I suspect — this is a relatively long baseline —
S
...it's the slow decline of the v6 tunnel overlay network. The increasing use of native connections actually means that this is behaving the way we would like it to behave: you're actually seeing something closer to the underlying topology. The fact that it's still noisy says there's still more work to do, and part of the reason, I think, why Hurricane Electric is there is that Hurricane did a whole bunch of v6 tunnels.
S
So again, your view will differ depending on where you are. The v4 shape and the v6 shape of connectivity are exactly the same: a small number of players right in the middle. In this case only two AS numbers, from where I sit, have enormous numbers of adjacencies; you will probably see a different number from where you sit, and you might even see different players there — it's all relative. So, overall, it's growing. What to expect? Well, some projections. This is the v4 daily growth rate: 140 routes a day, 52,000 a year.
S
You should prepare, within two years, for up to, you know, a million entries or so. Despite address exhaustion, despite everything, there's no reason to suggest that's turning into a logistic curve. So if you're planning on FIB size — and if you really want to stuff all your routes into a FIB, if that's what you want to do — you're just going to have to grow at this kind of rate. The only other way of doing it is: don't put all your routes in the FIB.
E
S
It could get really fascinating, because these are starting to be big numbers even for big iron out there. The v6 daily growth rate? It's not linear, it's exponential, so those predictions get to be scary, because the entries are 128 bits long. At linear growth you'd kind of go "yawn", but v6 is not growing at a linear rate; it's growing exponentially. We can expect, within five years, a quarter of a million 128-bit entries in the FIB. So it starts to get significant.
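The two projections above can be sketched side by side. The 750,000-entry v4 table and 52,000/year rate echo the talk; the v6 starting size and 30% annual compounding factor are illustrative assumptions of mine, not fitted values.

```python
# Sketch: linear projection for the v4 FIB vs exponential for v6.

def project_linear(current, per_year, years):
    return current + per_year * years

def project_exponential(current, annual_factor, years):
    return current * annual_factor ** years

v4_now, v6_now = 750_000, 62_000
print(project_linear(v4_now, 52_000, 2))           # 854000: near a million soon
print(round(project_exponential(v6_now, 1.3, 5)))  # hypothetical 30%/yr compounding
```

With those made-up v6 parameters the five-year figure lands in the same quarter-of-a-million ballpark the talk mentions, which is the point: compounding growth overtakes linear planning quickly.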
S
So, in absolute terms, the v6 table is rocketing along: it'll have about the same footprint in FIB memory in about five years' time as the v4 table has now. As long as you're prepared to deal with this, the internet will keep on running BGP. There's nothing intrinsically wrong with BGP's scaling properties; it's going just fine. But if you don't want that size, you've got to think long and hard about how you get around it. The other part of this is the performance of BGP, which is the level of updates.
S
Riddle me this one: with more people, more Brownian motion, more noise, more random actors, you'd expect more updates and more withdrawals. No — and withdrawals are a different thing; I don't understand them. Roughly, what is that number? 10,000 per day, constantly: no matter where I look, I see a very similar number. But the one thing about this is that it does have a relationship to convergence performance. Randy?
M
Randy: Note that many of the announcements create implicit withdrawals, so the explicit withdrawals are...
S
That number actually translates to just around 50 seconds. So, on average, within around 50 seconds, whatever state it was going to, it now stays in that state for at least a minute — I think that was the way I'd timed it — so it remains stable after that point for more than two MRAI intervals, and that's been the same forever. This is viewed — no, this is viewed from AS131072. I had a different graph that did all of Route Views all of the time, but it does take a lot of compute.
S
It's the same property — exactly the same property — because it's actually based on average AS path length. What you're looking at is a disturbance in the force that has to propagate: a longer convergence time actually means a longer path to propagate through. Because you're always, in effect, connecting into the core, updates happen really quickly: into the core, back out again, you're done. So this is why it's relatively constant, as far as I can see. Jeff?
P
What you're going to see, especially with your prior slide on the amount of BGP update noise: simply the length of time that something is going to sit in a queue tends to cause that to happen. So when you have a fair amount of stuff churning around, you get a lot of convergence noise as a side effect of that. Where you actually get the pathological stuff is when you have short queues. Okay...
S
...which is where it's really good. That is an amazing thing: the table has grown by an order of 10, but convergence is much the same. So if you're worried about v4 update performance in BGP, don't bother. And whether it's MRAI or whether it's queues, it just suggests the underlying observation: the system is actually quite stable, from where I sit.
S
V6 is entirely not that, and I've always been wondering to what extent it's me and to what extent others see this. When I look further afield, I still see large amounts of operational instability in v6 routing updates. Bizarrely, the number of withdrawals is still low, but it's more variable; the number of updates is unbounded.
S
It just seems that at some times some routers go catatonic: they start announcing updates and never stop. Convergence performance, as a result, is all over the shop: there's no true average that holds from day to day in the amount of updates to converge, and nor is there any real consistency in time. Now, whether this is the impact of tunnels close in to the core, or some other impact...
S
...there is something really different between BGP carrying v6 and BGP carrying v4. The suspicion is that it's something to do with topology — Jeff's nodding, or shaking his head to say no — or something to do with overlays and tunnels and tunnel behavior. And if there's another explanation, someone should enlighten me, because I'm not enlightened.
P
Jeff has made a joke — a partial joke — that there's just too much crud in the system, and I wish I was joking: there is absolutely too much crud in the system. There are, unfortunately, some interesting artifacts of some of the older routing software that was out and about, and it causes a huge amount of noise in the system. When I was doing work some number of years ago for, you know, the four-byte AS transition and stuff...
P
...we found that there were significant metastable routing updates going around, and it was showing up because of incorrect four-byte transition code. After a bit of chasing down — since they ended up being a customer, I had a little bit of visibility — it turned out at the time to be mostly old, crufty boxes. So, literally, I think a lot of this noise is from the fact that a lot of v6 is being done via software routing, and a lot of it's being done across tunnels.
P
It's taking a long time to get there, yes. And, well, I think that partially what you're seeing here is also your observation about these being over tunnels: as we get more and more native v6 connectivity and the tunnels go away, people are shipping their routes across actual routers, which tend to be a little more stable, versus over the things that are problematic. Okay.
M
Randy: Many years ago some Australian nut said that v6 performance absolutely sucks due to tunnels and crazy peering agreements and so on and so forth, but that as more money started resting on v6, this would get fixed. For it to get fixed, those people playing in the game would benefit from even more analysis, so we know what to fix. I think, you know, people are finally coming to the conclusion that tunnels are as evil as NATs, but I think the other Jeff is saying that weak software is a problem.
B
Yeah — Joel Jaeggli. I mean, that certainly is historically the case. Like, I can remember, circa 1998...
B
...the reason our v6 BGP implementation was shittier was literally because it was a different platform: the Sup-zero Cat 6500s that we had running the core didn't forward v6 at that point, so, like, literally, the Cisco 7000 that was sitting in the corner was doing the v6 overlay network. So it was literally shittier. And I have observed, in the process of building adjacencies at exchange points over a long period of time, that...
B
...people apply the principles of benign neglect to their deployment of v6 peering. If you take down a v4 peer, they notice pretty quickly — weirdly, because the traffic that they were saving a lot of money on stopped flowing. If you take down the v6 one, most of them don't notice for some time — dual-stack Happy Eyeballs, yeah.
B
V6 peering with Google tends to get a lot of notice because, like, the YouTubes flow over it, and so that property of benign neglect has been slowly getting pushed out. But it's still readily apparent, in a number of regions where I've operated, that the management of those resources, and the frequency with which they're up, is demonstrably lower than for the v4 ones — even in the case where they're running on the same hardware. Well...
S
Worth more investigation, Jim; no matter how we cut it, we need to understand this better. That's true. Do you need to run a successor protocol to BGP? I don't understand why you would need to. In terms of: is BGP failing? No, it's not. Is it scaling? Well, it seems to be; there are no great cracks appearing, and no great huge holes in performance or even size. As long as you've got the money, you can create the hardware, and it will keep on working as a protocol.
S
There might be other reasons to go to a different protocol, but, you know, BGP itself is still scaling. FIB size, line speeds, equipment cost — that's up to you guys. Nothing's going to get cheaper, I'm afraid; the stuff is still going to grow, and as v6 grows it's going to place more pressure on that. What you put in your cache, what you route, is up to you, but the routing space itself is going to keep on growing, as far as I can see, in both protocols.
S
So, you know, if you're running this stuff, understand what you're putting in your high-speed FIB cards and manage that very carefully. You can't ignore it; it's going to get more and more critical. Another thing that gets ignored is v4/v6 partitioning in your FIB: that allocation, which was normally static in a lot of router configurations — oh, 10% v6, 90% v4 — will not work even today. You need to look at that balance carefully. And, quite frankly, you don't need to carry every route all the time; you can just default.
M
Randy again. One place I'd quibble is the cost of the hardware: it falls radically, and it's held up artificially by market forces, etc. But really — you know, I've been in this field for 50-something years — those damn hardware people are just killers: they drive the cost down and down while scaling up and up and up. Bet on them; don't bet on the software.