From YouTube: IRTF Open Meeting
A: And next, there is going to be a good bit of photography during this session because of the prizes, and if you have a concern about being photographed, we have a policy that you can inform the photographer: "Please don't include me in a photograph." So that's one of the things that I wanted to also mention here.
A: And I'm going to give a little bit more of an extended update, because, with the long panel and so on, I volunteered not to give an update during the plenary last night. In case you were wondering, it's not that we don't exist or that we were dissolved; it's that we actually tried to give some time back, and here we can have a little open-mic session.
A: So you can ask questions about the IRTF as well, before we start the prizes and then at the end. One of the discussion topics at the current IETF is the idea of having some sort of lightning-talk session. That would be like the kinds of lightning talks (these sometimes go by other names) that take place at research meetings, and it would actually be IETF and IRTF lightning talks mixed together, possibly in an evening if we don't get Bits-N-Bites back. So be thinking about that.
A: I'll introduce the ANRP speakers when we get to them, but we have very good speakers; this is the last of the 2017 awardees this time. Okay, so in overview: as you know, because you're here, the IRTF focuses on the kinds of topics that are not standards engineering, basically, but that seem important to the IETF and to the Internet community to be tackled. We tend to work on applied research.
A: We're not likely to have a purely theoretical program here, although I suppose we could. We're organized, in parallel to the working groups, into research groups, as you know, and there is an Internet Research Steering Group, which is all of the research group chairs plus some at-large members, and I'll introduce them. That's the basic picture of the IRTF. It's been around as long as the IETF has, with different organizational relationships; our current relationship is a close tie to the IAB.
A: So the groups can meet; they happen to meet at IETF quite a lot, but they can have meetings co-located with other organizations, and they can have the lengths of meetings they need. We have in the past had closed research groups. We have none now, and I would have to think really hard before we would charter a closed research group. But since all the research groups are open, there is one caveat about meeting anywhere.
A: Wherever that is, they're held to the same process as the IETF of announcing where those meetings will be, with enough time for people to participate if they want to. Similarly, the output of working groups is RFCs, but research groups are different.
A: A research group's output does not have to be RFCs: it could just be code, it could be a hackathon, it could be a series of publications in a journal.
A: It could be an agenda that's used for another body, you know, for a scholarly body; it could be a project of other sorts, and all of that is cool. So if you come to a research group meeting and they look like they're doing kind of IETF process, they don't have to be, and feel free to suggest some other methods if you have a good idea for how to pursue their mission. We're very cool with experimenting and being creative.
A: Some groups aim to do their work well enough to solve some hard problems and then transition to creating a working group; most recently Disruption-Tolerant Networking has done that, and this is certainly one of the modes that we like. But we would not be publishing standards track; we would be publishing informational, experimental, or open source or whatever, up to the time that there's a transition to the IETF. And then the other thing is that some groups do perform roles that are of service to the standards track.
A: One of the key examples of that is that the CFRG does crypto reviews that are sometimes extremely important and normatively required for documents in the IETF, and that is done with very close AD sponsorship. So that is another model. And if anyone has a question as we're going along, I'm also happy to answer questions.
A: The research mailing lists and all the wiki links can be found on irtf.org, and we are restructuring that page, so you'll see some changes. My official email, irtf-chair, and the IRSG's are available. I didn't get enough time to put in the picture where the cats are not just looking at the TV but actually typing, because that's actually what we're looking for; for now you'll have to take them the way they are.
A: Those are my cats; they're at-large members of the IRSG in spirit. Oh, somebody helped me out: this is where the picture of the typing cats will be later. So how do research groups originate? You may wonder about that too. Some of them have been around a long time, so their origins are lost in time.
A: Perhaps I'm going to start requiring research group chairs to know who all the chairs were, going back to the beginning, because some groups have really different feelings and structures now, but it would be good for them to actually have a sense of their past as well as their future. But the main thing about research groups is that they originate more freely than the IETF groups do. We are very interested in making sure we don't block something new that could be important, that we probably don't appreciate yet.
A: By "we" I mean me and the IRSG, but also you. So it turns out that you can propose a group, and with some tweaking to make sure that it has a sensible charter, that it has a mission, and that it has some, you know, vision of what it should be doing, it can run for three meetings in a row as a proposed research group before being considered for a more long-term gig. We have two of those right now.
A: The way that we do the evaluation is that they get their three meetings, and then the chair has a kind of review with the chairs of those groups to talk about how things have gone; but also we will start to have some more requests for a review by the community and by the IRSG there.
A: So here's the set of research groups, and I'm happy to tell you that last meeting everybody met; this meeting everybody but the decentralized internet group met, and they met informally, so they were actually not yet counting as having had one of their meetings, but we won't do that for too long. You can see what they are; they have a wide range of topics. I'm actually quite interested in soliciting a privacy research group, and if you have an interest in that, you should talk to me.
A: Several of these groups still have to meet ahead of us, and I don't have the schedule right in my head, but if any of the chairs would like to pitch their group at the mic for the rest of the meeting, you certainly can do that. HRPC, Avri, would you like to come up and say a word? They're meeting Friday morning. Who else is still to meet? I apologize for not memorizing. Okay, so PANRG, maybe you'd like to mention the PAN proposed research group session.
A: And Avri, do you want to say a little more about HRPC?

C: I'm Avri Doria, one of the co-chairs of the group that Sandra's gonna speak at, so she already said the good stuff. But, you know, another thing about the group is that we just got one RFC out, and it's going to be time to sort of look at: okay, we've got some other things that are in the works, but how do we continue? So it's a great time to get involved in terms of how we continue. Okay.
D: Yeah, we actually did have a problem with attendance by people who were doing active work; we're still sort of in start-up mode, and we just were having problems getting enough active participation. So, yeah, we're talking about doing an interim in conjunction with NDSS, and, of course, we'll be meeting in London. But we thought it was a better idea to not meet rather than have a bad meeting.
A: Okay, and we have had very good meetings of the group so far, and I will put out a report with my observations about the groups on the discuss list; that's something that I need to gather. Brian, would you say something about the PAN proposed research group as well? Path-aware networking.
F: It's like: what could we do in a world in which the endpoints have a more active participation in the selection of the paths their traffic takes? And this came out of an observation about a lot of sort of disconnected work that's happening in this space in the IETF: things like IPv6 segment routing, things like PvDs, multipath transport protocols. How do all of these fit together, and what can we do with this stuff?
A: One thing that I hope to do is move this slot into a lunchtime, so that, since we always have a lot of interest in the prize presentations, we actually have some ability to not conflict with people for that; and in general I think we're just at the same mercy of time as everybody else. The groups cover, you know, quite a span; I don't want to call other things out too much, but there's lots going on.
A: We also have recently talked about having documents which are in two different groups, because network coding and ICNRG have interesting interaction between their work, where they can support each other. So we may have parallel work going on that is really co-sponsored by two groups, and we don't need to have it owned by one group; we actually have the freedom to do that as well. So if you have any questions about other things, I'm happy to entertain them.
A: Okay, you can see the membership: the chairs are all members of the IRSG, so if you write to the IRSG mailing list, you'll get all these people; and then we have some at-large members as well, who especially help to tie us back to, for example, the transport area. Spencer is one of our at-large members, because transport and ICCRG have a lot of relationship.
A: Essentially it's a best-paper prize for all possible published papers in applied networking (and I like to say security too, because we have an interest in applied security topics), and you'll see we have had lots. So they are for previously published papers; we're not very strict about the deadline, but it should be within the last couple of years. Somebody either nominates their own paper and themselves, or a paper and a speaker, and the speaker is specifically nominated in order to come and give a presentation.
A: If they've made lots of good ties, they may like to follow up and spend more time at the IETF and IRTF. The origin story is that Lars had a stroke of genius and created this (I believe that's right, Aaron, that was Lars), and it's a very good way for us to connect the larger research community and the IRTF, and to bring people that are not always here to talk with us.
A: The Internet Society funds it primarily, but there are some sponsors as well, and you might be a sponsor; if you think you could, it's not a large amount, but we certainly would love that, and you can talk to ISOC about it. And thank you to Comcast for being a current sponsor, as well as thank you to ISOC.
A: The process starts with a yearly call for papers. I hope you all saw it, because we tried to get it everywhere, and if we didn't, we need to do better. It completed on November 5th. We actually got our largest number ever, almost 60 submissions, and very good ones to boot, in my estimation; I'm not the only reviewer, obviously, we have a peer-reviewing committee drawn from academia and from industry, and you can check on the link there.
A: That's if you want to see more about the original call for papers, who's on the program committee, things like that. And then before the end of the year (actually before the middle of December) we will be selecting all six and then starting to announce the ones for 2018. So that is how that actually happens; that's where they come from. That's where Roland and Paul came from, from last year's group. There's also some confusion, because we have two "ANR" things, and people sometimes say: but did you mean the W or the P? What's the difference between those? So it seemed reasonable to deconflict that for you today. They both have an annual call.
A: The ANRP calls for nominees; the ANRW calls for papers. The ANRW gives a prize for an already-published paper... I'm sorry, the ANRP does, I'm getting it wrong myself; and the ANRW gathers new papers, new submissions: they're workshop submissions. The prize does two presentations at each IETF.
A: The workshop does presentations at a workshop co-located with a summer IETF. We've chosen the program chairs for the ANRW; there's a steering committee for that, including Lars and Colin Perkins and myself. Sharon Goldberg, who many of you know, and Dave Choffnes, who you may not know, are the co-chairs of the PC, and there will be more information soon about the rest of that.
A: Okay, so hopefully you're no longer confused, and you might want to follow us at our Twitter handle; we also have a Facebook page. You will notice that Roland is in this picture, because he has been an awardee before; it was a great sort of triumphant picture to attract people's attention, and I attribute to this picture the large increase in submissions. So thank you to the people in that picture, and thank you too, Olaf, for taking such a good picture.
G: Hi, Aaron Falk. Some of you may recognize me as the host of the Pecha Kucha, which is actually tonight, but I'm here because there's an idea under discussion to take the lightning-talk idea (the Pecha Kucha has been kind of a fun thing) and try to actually make it available for people to present new ideas: so, more of a serious lightning-talk session. I'm sort of at the center of a small group that's doing some brainstorming on this. So, just to be clear:
G: We're talking about doing a lightning-talk session at the IETF, sometime during the week, that's not going to conflict with working groups or research groups. Let us know if you have ideas or opinions as to whether you think that's a good idea or not, or if you've got some ideas on people who might be interested in participating.
G: The kind of thing that they're doing in DISPATCH, but to a broader and more general audience. I would love to see researchers come and throw up a couple of slides on stuff that they're doing, and so I think this is kind of a natural fit to the ANRP and ANRW discussion, and hopefully some stuff that comes out of this will lead to new research groups. Thanks.
A: Yeah, and so Aaron and Alia are two of the key players there, and the IRSG will take an interest in this as well. So definitely we're supporting this idea; we think it's a good one.
A: Okay, so I'm going to introduce our first ANRP prize winner, although only briefly, because you introduce yourself in your slides as well. Paul Emmerich is from the Technische Universität München in Germany, and he's going to present MoonGen, which is a really interesting high-speed packet generator. I will hand you the dongle now; let's see, you need just the...
H: Okay, yeah, thank you for the introduction. I'm here to talk about my packet generator, MoonGen, and I will just start with a rough introduction of who I am and where I come from, and then I will go over a few aspects of MoonGen. I won't bore you with any details about the implementation, or the performance evaluation, or how we built it.
H: I think I said that last year as well, but yeah: my thesis will be about testing different network devices, where a network device can refer to a typical, classical hardware black box. You send in packets, it does something, you get packets back; but does it do the right thing, and how fast is it? This is kind of boring compared to a complex software system.
H: The things I wanted to do were kind of annoying, and so I went with this idea of building a packet generator. To really be able to do what I wanted to do, I had to build this first, and now it seems to have consumed almost everything about the thesis, and it's mostly about a packet generator now.
H: So, where do I work? The context that this work is being done in is the networking group at the informatics faculty of the Technical University of Munich. We are a relatively large group of about 20 people, plus some external guys, and we do a broad range of network research topics: everything from your usual traffic measurement and analysis, where we look at traffic (we have a mirror port at our Internet uplink where we look for anomalies), to Internet-wide scans.
H: We do everything; we have our own autonomous system just for research stuff and for doing Internet scans. Then we do, of course, all the hot topics, from software-defined networking to, yeah, Internet of Things and the usual. We do a lot of security and privacy research as well, and peer-to-peer networks, and, of course, the performance analysis and modeling part, where I'm at; this is really the subgroup that I'm working in. And what are we doing there?
H: Well, the main question that we have is that packet processing becomes more and more complex, and networks become more and more complex. It's no longer just a few simple switches routing your packets: there are more software components in there, there's a push from software-defined networking to network function virtualization, and even when this is done in hardware, there's often a software component to it; it's often even done in software nowadays, instead of in hardware.
H: Just last year we had a project in the 5G area, where we worked with a big company who were interested in doing some performance research of software components in the 5G backend, where a lot of stuff is being virtualized: virtualized network functions that need to be chained together. It's quite unclear how it impacts the performance if you have different things competing for the same resources.
H: If you have different configurations that then run in software, they can compete for hardware resources, from bandwidth to cache to memory to whatever. And so the research questions start from the simple thing: what are the important performance metrics? Sure, for performance metrics you can just go to, let's say, RFC 2544, which defines "measure these things on your box", but that doesn't really work well for a software device compared to a hardware device; yeah, it's 20 years old and was designed for hardware devices.
H: How do you keep that stuff manageable, and, of course, how can you predict performance with models? If you are planning a network, and you want to know how much hardware or software to buy, what hardware to buy, how can you do that? How can you get a model for the behavior, for the performance, and kind of predict?
H: Predict what you need to buy and what you need to plan, instead of just adjusting after the fact. For this kind of work we are lucky to have this big rack of test servers, which has a lot of 10G ports and some 40G ports. It's quite diverse hardware, from low-end power-saving CPUs to big NUMA nodes with 40 cores and so on; from small portable servers that we can take somewhere to show off in a demo, to these big boxes, and also things like an SDN router. And one key thing here...
H: ...that really makes work easier is that it's a fully automated test workflow. That means, if I want to run a network experiment and benchmark something, I really write a script that defines everything: it starts from "I want to use this and that server", "I want to configure the switch that way", or "I know that they are directly connected", and then there's a management node that allocates the servers exclusively for me, so I'm sure that only I'm using it and only my test is currently running on it.
H: On top of that (this is really nice, especially with the network boot setup) we have a big collection of operating systems and kernel versions. So if I run a test to see how different kernel versions evolved, and then maybe afterwards I think, oh, this might be another good metric, then I can just boot the whole thing again and run the test on the whole thing again, instead of having to cope with downgrading an operating system or anything.
H: So let's get to the main part of the talk; this was just a longer introduction. This is about packet generators. So this is a big packet generator, which you might have seen. There are a few problems with these big hardware boxes: first of all, they are big; second, they are quite expensive; and, as I've heard, some nice guy from Intel put it like this: the problem with them is that shipping around extra boxes doesn't scale.
H: I think that's true, because often you have multiple labs and you don't have a packet generator for each of them, or you might want some hardware features that are not available. In the end, people often go back to this fancy commodity hardware, you know, network cards; they are quite cheap, comparatively, and readily available.
H: You can just tuck them in your server and use them, but then you run into a lot of problems, because, of course, there's a reason why these big hardware generators are so expensive: they are very reliable and precise at what they do. Whereas if you use a software packet generator, it might be slow, it might be imprecise, it might be unreliable, it might just give a different result.
H: I will show a few examples of what a typical MoonGen script looks like. At its core it's always explicit multi-threading and explicit multi-core, because that is really the only way to scale to higher speeds. Sure, a single 10G link: you can fill that up with minimum-size packets, meaning around 15 million packets per second, with a single CPU core; that's not too hard as long as you're not doing too complicated things. But as soon as you go beyond that, you need to be able to run multiple threads at the same time.
H: So what I did for MoonGen was give full control of the main application to the user, meaning that if you use MoonGen, the core idea is really that you write the code for the main transmit loop yourself: every single packet you send out goes through your code, which gets executed in real time for that packet. And for that we are using the scripting language Lua, which has a very, very nice just-in-time compiler that allows us to really run custom script code for each and every single packet.
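The per-packet idea described above can be sketched roughly like this. This is a toy Python model, not MoonGen's real Lua API; `transmit_loop`, `make_packet`, and `make_gap` are illustrative stand-ins for the user code that MoonGen would JIT-compile and run for every packet:

```python
def transmit_loop(send, make_packet, make_gap, n):
    # Toy model of a MoonGen-style transmit loop: the *user's* code
    # (make_packet, make_gap) runs for each and every packet sent,
    # deciding both its contents and the gap before the next packet.
    t = 0.0
    timestamps = []
    for seq in range(n):
        send(make_packet(seq))   # user code crafts this packet
        timestamps.append(t)
        t += make_gap()          # user code picks the next gap
    return timestamps

# Example: trivial packets with a constant 1 microsecond gap
sent = []
ts = transmit_loop(sent.append, lambda s: {"seq": s}, lambda: 1e-6, 4)
```

The point of the design is that any traffic pattern, flow mix, or payload logic is just ordinary user code in the loop, rather than a fixed menu of generator options.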
H: It really integrates very well with lower-level things, and you can get direct access to the packet memory, without pesky things like bounds checks or anything; so just be careful when you write your tests. Another thing that was traditionally very, very challenging for software packet generators is timestamping. If you read some academic papers about a great new whatever router or whatever switch, they often only give you, okay:
H: This is the throughput, this many million packets per second, or even just the bandwidth (though software devices are typically limited by packets per second, not by bandwidth). But you rarely see latency, because it was just so hard to measure with a software packet generator; especially in academia, only people with these big, expensive hardware boxes could do it. So I really wanted to change that, and, of course, it turns out that doing timestamping precisely in software is a challenging problem.
H: It's an aspect that's unfortunately often ignored when investigating software systems. And then, of course, I wanted to make it open source, because what's the point if only I'm using it? I wanted to make it really easy to use and freely available; you can check it out on GitHub. What I'm now going to show you is just a few measurements and a few slides, basically, and I don't want to bore you with the details.
H: You can go to the paper citations down there if you want the gory implementation details, but I really want to show some examples of how to use it, how usable it is, and why a few things are important. And this is traffic patterns; this is really a point that I like, because it was just so much more important than I initially thought, and it's so often just ignored: people just send a burst of packets and say, oh, the average rate is fine.
H: You think, okay, this is a really boring test, but if you dig down into even such a simple software forwarding case, it really shows you what kind of complexity is hidden behind the seemingly simple example. So in this graph, the x-axis is the offered load, meaning I'm increasing the load; in this case it was restricted to one flow, and the forwarding device was restricted to one CPU core, because if you go multi-core or across NUMA nodes, this opens a whole other can of worms. So, the simplest possible thing; and I configured MoonGen to use different burst sizes.
H: By default, I just generated constant-bit-rate traffic, meaning a constant gap between the packets, and this is the baseline of this measurement, meaning one hundred percent; the measured quantity here is the latency relative to that case. What you would expect, if you run your packet generator a few times against the same device, is that the device under test shows the same latency response, because why would it be different? And especially, you would expect to get the same...
H: ...latency result when using different packet generators, if you don't change your device under test. But what we did in the past: we had different packet generators and got completely different results for the latency of the same device under test being investigated. What I'm varying here, the different graphs in that diagram, is just the burst size: the baseline is one packet, sleep for some time, one packet, and so on; and then four packets...
H: ...sixteen packets, thirty-two packets, and so on; and you see how the latency changes relative to the base case. As you can see, even with something like a burst size of 4 or 16, you can quickly get a relative latency that differs by a hundred percent or so. You just get a completely different result, just by changing how the packets are spaced on the wire, without even going into anything about the content of the packets.
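The effect being described can be made concrete with a minimal sketch (hypothetical Python, not MoonGen code): the same average packet rate can be put on the wire either evenly spaced or as back-to-back bursts, and only the spacing differs.

```python
def burst_timestamps(rate_pps, burst, n, wire_time=0.0):
    """Departure times for n packets sent at rate_pps on average,
    in back-to-back bursts of `burst` packets (wire_time apart),
    idling between bursts so the average rate stays constant."""
    ts = []
    t = 0.0
    while len(ts) < n:
        ts.extend(t + i * wire_time for i in range(burst))
        t += burst / rate_pps   # next burst is due at the average rate
    return ts[:n]

cbr = burst_timestamps(1e6, 1, 16)      # evenly spaced, 1 us apart
bursty = burst_timestamps(1e6, 16, 16)  # all 16 leave back-to-back
```

Both streams report the identical average rate of one million packets per second, which is why "the average rate is fine" hides the difference that the device under test actually sees.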
H: That's one thing; and the problem, and why I'm showing this, is that people often send bursts as the default case. It turns out software packet generators are only really fast, if you have a naive implementation, when you send out bursts: all these frameworks are optimized to do burst packet processing, or batching, or vectors, or whatever you want to call it. All of it is optimized for this, and so the typical default burst sizes are between 16 and 256.
H: For software packet generators, as you can see here, that is a really bad idea if you want latency measurements. It doesn't matter so much for the maximum achievable throughput, which was around 2 million packets per second for all these configurations. And it was really tricky to get different packet generators to avoid bursting; we have another paper on this, where we compared a lot of different software packet generators.
H: I'll just talk louder, or closer to the microphone. So, it was really annoying to even get these packet generators to do what we wanted, and even then there were some optimizations in there; sometimes there's a kernel component that batches packets together to make it faster. That was not very helpful, and then they lied to you: even when we managed to configure it, they still sent out bursts, because it turns out it's really hard to send an individual packet to a network card.
H: Okay, why is this even different? Well, one reason is that CBR traffic is not a good case; typically it's not a realistic case. Internet traffic is not CBR traffic, but people test with CBR traffic, because, if you look at, for example, the old RFC 2544, it calls for CBR traffic by default, and people just follow that. I mean, the RFC even says you can test other traffic patterns afterwards, but the default is CBR, and people are just like:
H: Oh well, let's just do CBR, it's good enough. So this graph shows the same measurement as the slide before; basically the baseline, plotting the latency for the CBR case. It looks really weird at first: okay, it seems to be increasing, and there's one weird spike. The spike is completely reproducible across different systems, different things. And right before overload, the latency drops completely; latency gets better. This is also completely reproducible.
H: You can see there are more measurement points in there, because when I first saw it I thought, well, it can't be, I need to measure again, maybe it's slightly different; but it just reliably drops there. I also have a paper about that; it's another deep dive into the details of how the Linux kernel and the driver work.
H: Basically, there are two mechanisms at work. One is trying to prevent the system from locking up from interrupt storms: that's the Linux polling mode, NAPI, which just switches to a fancy polling mode, polls the network card, and disables interrupts. And then there's the interrupt throttling rate, which is typically found in all these drivers, which tries to save power, and then the card goes into power saving; that's a whole other can of worms to open.
H: So basically, what you're seeing here is the polling, which kind of works okay. It's also hard to measure, because, by default, the Linux kernel you find in most distributions doesn't report the CPU time consumed per interrupt: you can be at 100% CPU load, but top reports you are at 10%, because it doesn't account interrupt time properly, unless you set the IRQ time accounting flag while compiling the kernel, or measure directly with the CPU performance counters; and then you get a completely different picture.
H: So now this kind of looks weird, and the reason it looks weird is really that these algorithms, which try to estimate the rate and so on, don't play well with CBR traffic. They kind of get confused; there are a few state machines in there, and they keep switching between two states all the time, because they get slightly confused by the CBR traffic. I don't know exactly what happens, but you get these artifacts.
H: So let's use a Poisson process instead, and that just looks much smoother; the slide is much more what you would expect. The only thing I'm changing between these two measurements is, again, the time between packets, in this case from CBR to Poisson, and you get a much more reasonable result. And if you look at real traffic: of course, you all know that really old SIGCOMM paper about how you shouldn't model your Internet traffic...
H
with a Poisson process, but that's only really true at larger time scales. If you're running your test for a few minutes or whatever, then you can use a Poisson process to reasonably approximate what real traffic looks like, and then you get these nice, smooth results in a more realistic scenario, which is also what this is about. So what does a latency measurement look like?
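The difference between the two load patterns comes down to how the inter-packet gaps are drawn. A minimal sketch in plain Python (not MoonGen's actual Lua API; the 1 Mpps rate and sample count are made up for illustration):

```python
import random

def cbr_gaps(rate_pps, n):
    """Constant bit rate: every inter-packet gap is identical (seconds)."""
    return [1.0 / rate_pps] * n

def poisson_gaps(rate_pps, n, seed=42):
    """Poisson process: exponentially distributed gaps with the same mean,
    so the device under test never locks onto a fixed rhythm."""
    rng = random.Random(seed)
    return [rng.expovariate(rate_pps) for _ in range(n)]

gaps = poisson_gaps(1_000_000, 100_000)  # ~1 Mpps worth of gaps
mean_gap = sum(gaps) / len(gaps)         # close to 1 microsecond
```

The only knob changed between the two measurements in the talk is exactly this: same mean rate, different gap distribution.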
H
If you've now installed MoonGen and you want to drill down into one measurement, you get these nice histograms, which are just a way to represent how the latency is distributed. For many cases you'd want to see the CDF instead, but a histogram is easier to read visually. So what can you see here? These are just measurements of a few systems. The first one is a software forwarder running directly on the machine; you can clearly see there's some interrupt throttling going on.
H
H
Distribution - and I actually cut it off; it actually has a long tail, and there are some worst cases. That's also an interesting thing to measure: if you look at the 99.99th percentile of some latency measurement, you can see some horrible results there. If you are benchmarking a virtual machine or anything like that, this can also be a big problem, and it's another thing where you could probably get a whole PhD out of how this happens, why it happens, how to measure it, and how to work around it.
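Looking at the far tail instead of a single average is easy to do on captured latency samples; a small nearest-rank percentile helper (toy numbers, not measurements from the talk):

```python
def percentile(samples, p):
    """Nearest-rank percentile, p in [0, 100]."""
    s = sorted(samples)
    idx = min(len(s) - 1, int(len(s) * p / 100.0))
    return s[idx]

# toy population: mostly ~2 us with a rare 100 us worst case (top 1%)
latencies_us = [2.0] * 9900 + [100.0] * 100
median = percentile(latencies_us, 50)    # the typical case
tail = percentile(latencies_us, 99.99)   # exposes the outliers
```

The median hides the worst cases entirely; only the high percentile surfaces them.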
H
This is just something to show how precise this really is: note that the x-axis is in microseconds and only goes up to 3.5 microseconds. The precision of the MoonGen hardware timestamping part is typically plus or minus 12 nanoseconds, which is quite good for most things. Typically you get latencies for a hardware box in the range of a microsecond - maybe 500 nanoseconds to a few microseconds - and for a software box around ten microseconds
H
if it's good, and a hundred if it's doing some power-saving stuff. And you can see here a nice bimodal distribution. This is just an example where, if you want to break it down to one value - your latency or anything - it doesn't really work. What's the average of this? The average of this really means nothing, and so does the median of this: it has two clearly distinct paths in the hardware.
H
H
Really, if you look at the example scripts - if you look at the basic example script that generates different UDP flows with different packet sizes - you should quickly get an idea of how to modify it for your needs or how to add more things. The way we do multi-threading is we spawn completely independent virtual machines - virtual machines in the sense of a language implementation, a virtual machine for a just-in-time compiler - and they really are completely independent, and there are nice APIs that allow you to talk between these independent sets. But the main idea is a shared-nothing approach, because in the end what you want to generate is multiple flows, and they are often independent from each other, or you can break them down into a few independent chunks, and that makes it really, really high-performance. You really have to look at the examples to get an idea of what I mean by this. So, a quick example - I don't know how much time I've got left, there's no clock. How much time have I got left?
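The shared-nothing idea is that the workload is partitioned up front so workers never contend on shared state; each MoonGen thread then runs its own LuaJIT VM over its own chunk. A sketch of just the split (plain Python, hypothetical flow IDs):

```python
def shard_flows(flow_ids, n_workers):
    """Shared-nothing split: partition the flows into disjoint chunks,
    one per worker, so the workers never touch each other's state."""
    shards = [[] for _ in range(n_workers)]
    for f in flow_ids:
        shards[f % n_workers].append(f)
    return shards

shards = shard_flows(range(8), 3)
# each flow lands in exactly one shard; no locks, no shared counters
```

Because every flow belongs to exactly one worker, the generators scale without any synchronization on the hot path.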
H
Great, this seems perfect. Okay, I'm going to show this example; in this case it's based on our VXLAN example. This is also something we wanted to test - VXLAN - and most packet generators don't know that protocol; well, bad luck for you. So the first thing you can do is dynamically define a complex stack of headers, and really these are just headers. MoonGen is still a low-level packet generator, or rather a traffic generator, meaning there's no protocol logic behind it.
H
You're just sending out packets, and protocol logic is kept to an absolute minimum: we implement ARP and LACP and whatever you'd expect for basic functionality, there's the hashing algorithm to get the source port for VXLAN and so on, and checksums and checksum offloading. But it's not a traffic generator you can build a TCP stream with.
H
You can just build packets, but it's meant to benchmark devices at the lowest level, and what you can do here is stack together arbitrary headers, like in this case: VXLAN running over IPv4, and inside the VXLAN there's a VLAN tag and another frame with another IPv4 header and UDP.
H
Oh, by the way, everything we have is also IPv6-capable, because the guy who wrote the protocol stack stuff really likes IPv6, so all the examples also do IPv6. And once you have that thing, it gets just-in-time compiled, and the next thing you do is create a memory pool with a packet archetype, meaning this is just some basic packet template that the memory pool then stamps into the buffers.
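The packet-archetype idea - build the header stack once, stamp it into every buffer, then only touch the fields that vary per packet - can be sketched with Python's struct module (a hand-rolled IPv4+UDP template; all field values are arbitrary, and the IP checksum is left at zero here):

```python
import struct

def udp_ipv4_template(src_ip, dst_ip, sport, dport, payload_len):
    """Build a minimal IPv4+UDP header 'archetype' once."""
    total_len = 20 + 8 + payload_len
    ip = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0, total_len, 0, 0, 64, 17, 0,  # checksum 0
                     bytes(map(int, src_ip.split("."))),
                     bytes(map(int, dst_ip.split("."))))
    udp = struct.pack("!HHHH", sport, dport, 8 + payload_len, 0)
    return ip + udp

tmpl = udp_ipv4_template("10.0.0.1", "10.0.0.2", 1234, 319, 64)
buffers = [bytearray(tmpl) + b"\x00" * 64 for _ in range(4)]
# per-packet, vary only the UDP source port (offset 20 in this template)
for i, buf in enumerate(buffers):
    struct.pack_into("!H", buf, 20, 1234 + i)
```

Per-packet work is then just one `pack_into` on the varying field, which is the cheap path that a JIT-compiled template gives you.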
H
H
It's a framework for writing packet generators, and you write your own one based on one of the examples; there's a lot of boilerplate code. So what do you do here? You allocate some buffer array - in this case it's the default packet size - and yes, this example sends out bursts unless we configure it otherwise; I can't go into the details of how the rate control works. Then you have the actual main loop, which just checks whether the process is still running, meaning no one pressed Ctrl-C, sent SIGTERM, or stopped it from another task.
H
Then we tell it: okay, we want some packets from our memory pool, and we just get packets stamped with the packet archetype that we previously filled. We iterate over these packets and cast them to the type that was previously JIT-compiled. This cast operation is completely free - there are no CPU cycles behind it; it's just the equivalent of a C cast, and it doesn't do anything besides telling the compiler: okay, I want to use these offsets for my packet. And then I can just access these packets at the right offsets.
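The main-loop shape described here - loop while no one has pressed Ctrl-C or sent SIGTERM, grab a batch of buffers, send it - looks roughly like this in Python (toy stand-ins for the pool and the NIC; MoonGen itself does this in Lua):

```python
import signal

running = True

def _stop(signum, frame):
    # Ctrl-C / SIGTERM flips the flag; the loop drains out cleanly.
    global running
    running = False

signal.signal(signal.SIGINT, _stop)
signal.signal(signal.SIGTERM, _stop)

def send_loop(alloc_batch, send_batch, max_batches=None):
    """Skeleton transmit loop: check the flag, grab a batch of buffers
    from the pool, hand them to the NIC."""
    batches = 0
    while running and (max_batches is None or batches < max_batches):
        bufs = alloc_batch(32)  # e.g. 32 packets per burst
        send_batch(bufs)
        batches += 1
    return batches

# toy stand-ins for the memory pool and the NIC
sent = []
n = send_loop(lambda k: [b"\x00" * 60] * k, sent.extend, max_batches=10)
```

The `max_batches` cap is only there so the sketch terminates; a real generator runs until the signal arrives.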
H
H
There, I've talked to people from some companies, and they were basically like: oh, but our test engineers, they are not programmers, they can just click around in the GUI and click the start button - we can't use this. Okay, so let's make it somewhat easier. In this case we added a config file to it. This is some new work, still work in progress - it might not be completely buggy, but it might contain bugs.
H
So in this config file we just define flows: we give the flow a name, and we tell it the packet type - there are a few predefined packet types; otherwise you can use the magic protocol stack thing again - and then this is basically the same syntax, just with a bit of syntactic sugar. You need to tell it if it's a MAC address, you need to tell it if it's an IP address, and then you can define things like ranges or random values, which then get - well, not compiled to code, but handled in an efficient way.
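A config front end like the one described can lower declarative field specs (fixed value, range, random) into concrete per-packet values. A hypothetical sketch - this is not the actual MoonGen config syntax:

```python
import random

# hypothetical config: each field is a fixed value, a range, or "random"
flow_cfg = {
    "name": "udp-sweep",
    "dst_ip": "10.0.0.2",
    "udp_sport": {"range": [1000, 1003]},
    "udp_dport": "random",
}

def expand(cfg, n, seed=1):
    """Turn the declarative field specs into concrete per-packet values,
    the way a config front end could lower them onto a fast path."""
    rng = random.Random(seed)
    lo, hi = cfg["udp_sport"]["range"]
    packets = []
    for i in range(n):
        packets.append({
            "dst_ip": cfg["dst_ip"],
            "sport": lo + i % (hi - lo + 1),  # range cycles through values
            "dport": rng.randint(1, 65535)
                     if cfg["udp_dport"] == "random" else cfg["udp_dport"],
        })
    return packets

pkts = expand(flow_cfg, 8)
```

The real implementation would bake these choices into the generated fast path rather than interpreting the dictionary per packet.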
H
In the end it works; there are a lot of anonymous functions and magic in this case. There's your typical SYN flood example - please don't copy-paste and run it, because that IPv6 address is one of my test servers. And then, once you have these flows - there are a few predefined ones, or you can define your own - you can, on MoonGen's simple interface, type in start, the name of your flow, which device to send on, and which device you want to receive on.
H
What you can also do here - because something like this is often quite annoying to debug, since in the end you want to know what it's actually sending out, and you might end up using tcpdump on your destination device or dumping methods - is use a simple debugging interface that can show you: okay, given this config file and this configuration, the packets I'm going to send out would look like this. Here's an example of five packets, and these are the fields that are being randomized or modified. And this is work in progress.
H
H
The downside: I couldn't share the actual code for the DDoS attacks for legal reasons - apparently something about hacking laws in France - but they contributed parts of it, and it should be relatively simple to build a DNS DDoS testing device on top of that. This is interesting because it uses the complex protocol stuff. And then the last thing I want to point out - this is really interesting because they actually used MoonGen how it was intended to be used.
H
Most people just use my standard example script, which sends out randomized UDP packets, and say: okay, it gets me the number of flows I type in there and it gets me a latency, that's good enough - and maybe they change one line in the code. But these guys, in a European research project, really built a nice test harness around MoonGen; they actually support multiple different packet generators, and MoonGen was one of the first. I really like this, because they actually used it.
H
I
On which - actually very nice work, and I'm pleasantly surprised. Packet generation so far has been an issue and the domain of the expensive guys, and seeing an implementation in DPDK now solves many of the problems, especially in the up-to-10-gig space. So thank you for the work.
H
A
I'll ask a question: would it make sense to come to one of our hackathons and generate test traffic on the fly for various people doing testing of their…
J
What would you say to a specification that calibrated the generators - in other words, kind of like policing the police? I'm beginning to think, because of the sensitivities we've seen and you've seen, that that might be a valuable spec to pursue. What are your thoughts?
H
That's an interesting idea. That said, I have some concerns. I believe I have a few graphs here - this is just 4 million packets per second on a few software packet generators, and how they - even MoonGen, when configured to use pure software - don't quite hit this target, which is 250 nanoseconds between packets. There's some significant variance, and packet loss or duplication is not visible here, but there are huge outliers - because printing statistics in the same thread is a really bad idea - and I'm just wondering what such a specification might look like.
L
So my talk today is going to be about the use of elliptic curve cryptography in DNSSEC, which is what the paper that got the ANRP was actually about. I'm going to go into a little bit more detail about some follow-up work that we did after writing that paper, and I'm also going to go into some detail about adoption of these cryptographic algorithms in DNSSEC. And since Paul did a really nice introduction of himself, I decided to add one slide about myself.
J
L
Thank you. This is me, surrounded by my committee. You may recognize some of the people in that committee; some of them are here in the room, so no pressure. But I'm going to go into the meat of the presentation now, and I'm not going to repeat all of the earlier research we did. There are some pointers on the slides - the slides are actually up on the materials site for this meeting, and at the end there is a set of references to all the papers I'm referring to.
L
We actually saw that up to 10% of resolvers on the internet have issues receiving fragmented responses, which causes delays or, in the worst case, actually makes them unable to resolve certain domain names that are DNSSEC-signed. The other issue, of course, is that, because packets are a lot larger, DNSSEC can easily be abused for denial-of-service attacks, and in the past few years it has actually been abused for that purpose; there have been reports about that in the media.
L
So this made us wonder: can't we do anything better? Can we use, for instance, ECC - elliptic curve cryptography? It's a form of asymmetric crypto that achieves the same goals as RSA, so it's public-key cryptography and you can do signatures with it. But the nice thing about ECC is that both the keys and the signatures are much smaller than they are for RSA, while they offer greater cryptographic strength - to give you an example, consider a typical key size used for elliptic curves.
L
So why wouldn't we switch to ECC immediately for DNSSEC? Well, to quote RFC 6605, which is the RFC that standardizes the use of the Elliptic Curve Digital Signature Algorithm in DNSSEC: validating RSA signatures is significantly faster than validating ECDSA signatures - about five times faster.
L
So we did some benchmarking, and it's way worse than that. The goal of the study in this paper was: if we switch DNSSEC from using RSA to using ECC, how does that impact validating DNS resolvers? So rather than recommending to everyone - ECC signatures are much smaller, it's really nice to switch to them - let's work out whether we're not introducing a new problem by giving this recommendation. That was the purpose of this study. So how do we go about doing this?
L
We decided to do a measurement study and some modeling; I'm going to describe our methodology in the next couple of slides. We started out from the intuition that if we knew the number of outgoing queries from a resolver - so not incoming queries from clients, but the queries the resolver sends to authoritative name servers on the internet - we might be able to predict the number of signature validations that it has to perform.
L
Given that load, right - that was our premise, and I'm going to talk you through four factors that influence the number of signature validations that a resolver will have to perform. The first factor is that, for every query a DNS resolver sends to an authoritative name server on the internet, it will not always get a response. So the number of responses that come back is a factor in this.
L
The third factor is the number of signatures in a response that contains signatures, because, while you might expect a response to contain a single signature for the record that you requested, we actually observed that, on average, every response that contains signatures has somewhere in the order of 2.4 to 2.5 signatures. That's because there are signatures in the additional section - there might be extra records in there that require signatures - so the intuition doesn't hold: there are more signatures in a response than just for the record that you requested.
L
We wanted to build a model that we can apply to any resolver, regardless of its client population. I'm not going to go into detail in the presentation, but in the paper you will see that the model is actually a little bit less accurate for resolvers that have a small client population. Those are the resolvers we really don't have to worry about as much, though, because they will be validating far fewer signatures, since they're processing far fewer queries. So how did we measure this?
L
This picture shows you our measurement setup. On the left-hand side you see clients, which is typically you with your laptop - unless you're an idiot like me who runs a resolver on their laptop. What we did was capture traffic that was going to production DNS resolvers and forward this traffic to an instrumented DNS resolver. So we're sending a copy of the exact query traffic that goes to a production resolver to one that we instrumented, and then we measure certain factors on that.
L
We don't want to violate users' privacy, so we took some measures for that. On the instrumented resolver we measure, on the outgoing link towards the internet - so where the resolver talks to authoritative name servers - the number of queries it sends, all of the factors I talked about on the previous set of slides, and the number of signatures that it validates. For that we actually had to alter the code of the resolver, because this is not something that most resolver implementations typically keep statistics for.
L
…the number of signatures per response, and D is the number of signatures that are actually validated - or the fraction of signatures that actually gets validated. Now, if you look at these graphs, your intuition might be that you could model this with a linear model, although especially graph B has a lot of noise in it. But as it turns out, graph B - which is the number of responses that contain signatures - is not the one we need to worry about, because we're actually going to vary that one later on in the model.
L
So the accuracy is not an issue there. For the other ones we tried whether we can approximate this with a linear model, and it turns out that we can. I'm not going to talk you through all of this - the details are in the paper - but we created a simple set of linear equations that you can then combine into a model for a validating resolver, and it has four important parameters.
L
So the first one is the average number of responses per query, and this is something that you need to measure on an operating resolver to actually populate the model. These are the parameters you would need to measure: the fraction of responses with signatures, the average number of signatures per response, and the fraction of signatures that is validated. Oh, is the mic broken again? Oh.
L
The second thing we wanted to validate is whether the model has stable properties over time. Like I said, only the fraction of responses that contain signatures can vary significantly over time, and we varied that parameter to make predictions - so we don't really care whether that one is stable over time, but the other factors should be more or less stable over time.
L
We also did some worst-case estimations, which are in the paper, where we take worst-case estimates for all of the parameters of the model. And finally, we checked whether the model is actually a good predictor of empirically observed data. What we did was populate the model, make predictions of what we thought the number of signature validations would be, and then compare that to what we saw in actual practice - again, the details are in the paper.
L
But we did some statistical goodness-of-fit tests for that. Now, the next thing we have to do is - now that we have a model for predicting the number of signature validations that are required - we of course need to know how ECC performs. Although there are some benchmarks publicly available that we used in an earlier paper, we wanted to make sure that we had up-to-date benchmarks, so we took five implementations of elliptic curve cryptography and benchmarked them ourselves.
L
So that is the algorithm that we expect will be used the most in DNSSEC at this point in time - and actually there is now OpenSSL 1.1.0, which does not perform that differently from 1.0.2, so this is still a good set of benchmarks to use today. Then we looked at the newer elliptic curve algorithms, Ed25519 and Ed448, which have only recently been standardized for use in DNSSEC.
L
And there, again, we took optimized implementations of these two algorithms, because the reference implementations don't perform very well. Then what we did was run a hundred tests of running the algorithm for 10 seconds and measuring how many signature validations it performs in that period; the details of the benchmarks are in the paper, to give you some idea.
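The benchmarking shape described - run an operation for a fixed wall-clock window and count completions - is easy to reproduce. A harness sketch, with a dummy workload standing in for a real ECDSA verify (the window is shortened here so the sketch runs quickly):

```python
import time

def benchmark(validate_once, seconds=10.0):
    """Run the operation for a fixed wall-clock window and report the
    achieved rate (operations per second)."""
    count = 0
    deadline = time.perf_counter() + seconds
    while time.perf_counter() < deadline:
        validate_once()
        count += 1
    return count / seconds

# stand-in workload instead of a real signature validation
rate = benchmark(lambda: sum(range(100)), seconds=0.05)
```

Repeating the window a hundred times, as in the talk, gives you a distribution of rates rather than a single point.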
L
So ECDSA P-256 is an order of magnitude slower than RSA-1024. Why is that a good comparison? You could argue that we're comparing apples and oranges, because ECDSA P-256 is cryptographically much stronger than RSA-1024. But I would argue that you need to make this comparison, because most of the signatures you see in DNSSEC today are signatures with zone-signing keys that are 1024 bits - whether that makes sense in terms of security or not.
L
You can debate that, but that's the case. And then, if you take ECDSA P-384, which is arguably even stronger, and you compare that to something like RSA-2048, you'll see again that there is an order-of-magnitude performance difference. So this is way more than the five times quoted in the RFC. Even Ed25519, which is way faster than the ECDSA algorithms in terms of implementation, is still almost an order of magnitude slower than RSA-1024.
L
This is not the top-end CPU, but it would be a common CPU that you would encounter in a server architecture. This is typically something people have in their data center; we have data centers full of this stuff - unless you're really rich and you're Google, then you have those. And why did we pick these particular benchmarks? Because you're going to see them in the graphs I'll be showing you in the next couple of slides. We picked ECDSA P-384 because this was the worst-case scenario.
L
This is the slowest of all the algorithms that we benchmarked, and it is the strongest broadly supported cipher. What do I mean by that? You could say that Ed448, at the bottom of the slide, is stronger in terms of cryptography, but it's not widely available in implementations, so few people are going to use it. That's why we took P-384 as sort of a benchmark to compare against.
J
L
So let's go back to our original question, which was: what is the impact on validating DNS resolvers? Because that's why we started this. We used our model to estimate future performance, and we looked at two scenarios. The first scenario is: what if we take the current DNSSEC deployment and we switch all of those domains over to ECC overnight - so everything that's now signed with RSA or DSA or whatever, we switch all of that to ECC.
L
Is that an issue? Well, based on the measurements that we did on our resolvers, we argued that you need about 150 signature validations per second for a busy resolver - given a resolver processing around 20,000 queries from clients per second, so this was a busy resolver - and that's not a problem. And even if we take the model and put in the worst-case numbers, we still don't get into trouble in the worst-case scenario, which is using P-384. So that's good - but what…
L
L
What do I mean by popular? By popular I mean the domains for which the resolver sends the most queries to the internet - what if those switch to ECC first? So here we have plotted the query popularity for queries that the resolver sends to authoritative name servers. You can see that the axes are log-log, and this is a classic internet distribution.
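A classic straight line on log-log axes is Zipf-like popularity; a sketch of how "the most popular domains switch to ECC first" translates into a share of the outgoing query traffic (the exponent and domain counts are made up, not the measured distribution):

```python
def zipf_weights(n_domains, alpha=1.0):
    """Zipf-like popularity: the i-th most popular name gets weight
    proportional to 1 / i**alpha (a rough stand-in for the measured
    distribution)."""
    w = [1.0 / (i ** alpha) for i in range(1, n_domains + 1)]
    total = sum(w)
    return [x / total for x in w]

def traffic_covered(weights, k):
    """Fraction of outgoing queries affected if the k most popular
    domains switch to ECC first."""
    return sum(weights[:k])

w = zipf_weights(10_000)
top1pct = traffic_covered(w, 100)  # top 1% of domains, large query share
```

The heavy head of the distribution is why a handful of popular operators switching algorithms moves the validation load much faster than the raw domain count suggests.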
L
L
…and what you can see in this graph on this axis - we can debate what you call it, but I would say it's the y-axis - is the DNSSEC deployment. As you may remember, I said that we would be varying the average fraction of responses with signatures to simulate the DNSSEC deployment. So what we did was calculate, based on the popularity distribution, what happens if we go from left to right in that distribution - how many queries, with how…
L
The takeaway from this is that there's ample room for growth in the DNSSEC deployment and the outgoing query load. We can go up to a hundred percent DNSSEC deployment using ECDSA P-384, and the number of outgoing queries from that resolver could still double, and unbound would still be able to validate those signatures on a single CPU core.
L
And if you run a really busy resolver - you're working at an ISP - you typically don't have a single core assigned to that resolver, so this is something that is easily within the realm of the possible. That's a great result: we can take the worst-case algorithm and unbound will still be able to cope with it. So what does this picture look like for bind? I apologize, because the title of the slide somehow got garbled, but this is bind, and for bind we observe something a little bit different.
L
And there have been suggested reasons why that might be: if it gets negative responses, bind might be trying to chase up other authoritative name servers - it really tries very hard to get a response - and bind may be validating more signatures because of that. But we didn't actually investigate.
L
Take that as a given, but what you can then see is, if we look at the situation for ECDSA P-256 with the long-term support version - which is actually the one we expect people to use if they sign - even there bind has ample room for growth. The green line that intersects the red slope intersects it way beyond the number of queries that we would need to be able to validate signatures for on our busiest resolver.
L
N
L
Things are slightly worse for P-384. Now, after the original paper we did some additional benchmarks, because you could argue that we did our benchmarks on Intel x86, and some of the optimizations that have been implemented are only available for x86 architectures. So what about other architectures? What if I have a home router and I want to do DNSSEC validation?
L
L
Now, I'm not going to go through all the details of those benchmarks, but the key takeaways are that performance is low, yet more than sufficient for home scenarios, and that, interestingly, ECDSA may sometimes be faster than EdDSA, because there are already some optimized versions of ECDSA available for, for instance, ARM CPUs that outperform the stock EdDSA implementations available for them. We also did an n=1 home-router experiment, so take this with a heap of salt.
L
This is not representative, but it's an interesting experiment to run, and one of my students really wanted to do it - and, I mean, he got extra bonus points for the fact that he got informed consent from his roommate before he did the experiment. And I didn't even have to ask, because it turned out he had an ethics course going on at the same time.
L
L
He also measured cache performance on his resolver, and the cache performance was awful, because there are too few users to make good use of the cache - only about 10-20% of his queries received responses from the cache. So I said: well, let's go for the worst case - the cache doesn't do anything and the resolver has to validate everything.
L
So there's maybe some work there, but this is actually only if we have 100% DNSSEC deployment, which today we don't have. So I believe we can safely assume that by the time - if ever - we reach 100% DNSSEC deployment, these devices will be fast enough and there will be optimized implementations of the elliptic curve algorithms.
L
This is the point in time where Cloudflare announces its Universal DNSSEC using ECDSA P-256, and, as I hope you can see, there is an uptake of ECDSA that starts from that point onward: the pink area at the top of the graph starts growing slowly. But actually, ECDSA adoption is now driven completely by other operators that are adopting it en masse, and as of, I think, the beginning of this month, ECDSA is the second signing algorithm after RSA-SHA1 - please, people, change that - replacing
N
L
SHA-256 as the second most popular signing algorithm. That's actually good news; this is getting adopted pretty quickly. In contrast, if we look at the TLDs that have the largest number of signed domains - which is not .nl, not .se - the picture is a little bit different. You can see that adoption of ECDSA P-256 is still quite low there; it's only a fraction of the total number of signed domains. But one takeaway here is that especially .cz is doing really well.
L
It now also does, let's say, less SHA-1 than .com and .org, but also only a little bit of ECDSA. So the takeaway is that the early large-scale adopters of DNSSEC take longer to get a significant share of ECDSA-signed domains. It's not surprising, but it also means that replacing signature algorithms will take time - also because replacing a signature algorithm is actually difficult.
L
If we look at the Alexa top 1 million - a completely different graph, also quite interesting - you can see again that there's quite a bit of adoption of ECDSA P-256: of the signed domains in the Alexa top 1 million, about 1.7 percent are signed with ECDSA, and 61 percent of those use Cloudflare. So there's actually also quite a significant number that are not using Cloudflare but another operator. That's interesting, right?
L
…long-term support versions of the software - but if you want to save on CPU cycles, you might want to deploy newer libraries. And finally, as I showed you in the last couple of slides, adoption is slowly taking off. With that I get to the end. I would like to thank my students who helped me with this: Kaspar Hageman, who just started his PhD in Aalborg, Denmark, and Bruce and JJ, who helped with the ARM and MIPS benchmarks.
L
L
N
I'm Sriram, from NIST. We have been working on BGPsec implementations, which also use ECDSA P-256, together with an SBIR contract - the company's name is Antara Teknik - and together we have developed a high-performance implementation of BGPsec. It was mainly our SBIR contractor, Antara Teknik - his name is Mehmet Adalier - so, excellent work; you should look at that. We presented that paper at NANOG 69 in February this year.
N
It will say "BGPsec: a high-performance BGPsec implementation", but it has a lot of measurement details about ECDSA performance. We compared it with OpenSSL 1.1.0 and saw a significant multiplication-factor improvement of this high-performance implementation over OpenSSL 1.1.0, so I'd be happy to give you the pointer to that paper.
L
J
L
So there's actually some discussion about that in the paper, and in the paper we argue that even if you go for a worst-case scenario - where we take away all of the measurements and just model, assuming the model is accurate enough to put in worst-case parameters - it's arguable whether everybody has CPUs that can deal with this; that's an assumption, and one that we cannot prove or disprove. But even if you put in worst-case data, then, with the way the DNS currently looks,
L
I am confident enough to say that our resolvers would be able to handle the signature validations. However, if suddenly the new gTLDs become wildly popular and we see a fragmentation in the namespace, then this picture might change, because that might blow up the number of cache misses, which might blow up the number of signature validations, and then we are in unknown territory. But it doesn't seem like the new gTLDs are that wildly popular, so I'm not too worried about that yet.
L
One of the other things that we discussed in the paper is a denial-of-service attack where you try to cause CPU starvation by forcing signature validations. This is actually something that someone who was then still at Comcast brought up, and we have verified it - that's also in the paper. What we did was force resolvers to do lots of ECDSA signature validations by making them verify signatures for Cloudflare's sort of "black lies".
L
That gives you a fresh signature for every NXDOMAIN, and that can kill BIND easily; Unbound survives. So there is an issue there, but we also sketch an idea of how you could solve this in resolver implementations by doing some form of rate limiting, and I've discussed this with Wouter from NLnet Labs, who thought it might be feasible to do that. I don't know whether he has had time to implement it yet. Does that answer your question? OK.
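The rate-limiting fix is only sketched in the talk; one hypothetical shape for it is a token bucket that caps signature validations per second. This is a toy illustration of the idea, not Unbound's or any other resolver's actual implementation:

```python
import time

class ValidationRateLimiter:
    """Token bucket: allow at most `rate` signature validations per
    second, with bursts of up to `burst`. What the resolver does when
    the bucket is empty (serve unvalidated, drop, SERVFAIL) is a
    separate policy choice and out of scope here."""

    def __init__(self, rate, burst, clock=time.monotonic):
        self.rate = rate            # tokens refilled per second
        self.burst = burst          # bucket capacity
        self.tokens = burst         # start with a full bucket
        self.clock = clock
        self.last = clock()

    def allow(self):
        # Refill proportionally to the time elapsed since the last call.
        now = self.clock()
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In practice such a limiter would probably be keyed per client or per zone so that one abusive querier cannot starve validation for everyone else.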
P
That's actually interesting because, as the paper states, there's a huge difference between BIND and Unbound, but there are many factors that play a role in that, right? For a platonic resolver, what would you need to specify as requirements? Does it work the hardest to get you an answer? Does it find some middle ground between spending time finding something and then deciding it cannot be found?
L
So there's actually, in my opinion, far too little research on what the optimal resolver is. For instance, we can talk about caching strategies; we can talk about time spent finding responses. So yes, I agree with you: there is interesting research to be done there. Thank you.
L
That's a really good question, and yes, it did nearly drive me insane, but I have a backup slide on that. We actually measured the gallimaufry factor, as you can see in the graph here, and I'll talk you through what you see in the graph.
L
You see the initial ramp down, which is what you would typically expect to see in a gallimaufry distribution, and then some normal noise. But then there is this weird peak, an unexpected rise around 425 queries per second, which we can't explain, and that has puzzled us for the past year and a half. If you know what causes that, I would like to hear it.
A
Concerned
that
the
audience
is
wondering
if
they've
gone
out
of
their
minds
about
this,
so
we
should
explain
that
Willam
has
been
challenged
and
now
he's
taken
rolling
into
a
challenge
to
to
use
the
word
galimov
free
at
three
mic
lines,
and
this
was
a
very
successful
version
of
that
as
well
as
a
you
know,
a
deep,
deep
enlightenment
of
an
important
scientific
factor.
It's.
D
No, yeah, maybe it was the one you said, why not, so anyway, that'll do. Sorry. I'm trying to understand this, because it looks like there was a lot of growth around this time last year and then it's actually been pretty flat in terms of ECC adoption since then. Do you have any reflections on that?
L
Yes, I do. So let me see if the pointer on this works. What you're referring to is here, where the curve gets very steep. We actually looked at that in detail. What happened there is that one operator in particular, who was using RSA/SHA-256 before, decided on their own to switch to ECDSA P-256, for all the right reasons: they wanted to reduce their packet sizes, and they wanted to increase the security of their signature algorithms, I think.
I'm struggling to remember the name; it was Domainnameshop, a Norwegian company, and I can mention their name because I had some email communication with them, and they did this. It's actually not been flat; it's just kind of hard to see in the graphs here.
L
If you look at the slides you can zoom in a little: there is still some adoption, and it's still increasing slowly, but the large peak was when this one organization decided to switch. Then, later on, there's a smaller bump that you see if you zoom in a little bit on the graph, which is another algorithm rollover, and I actually applaud these people, because I monitored their algorithm rollover and they did it completely correctly.
L
Those of you familiar with DNSSEC will know that this is difficult. To give a little bit of an idea to the people that are less familiar: an algorithm rollover requires you to take very specific steps, which is to introduce new signatures first, before you introduce the new key, because DNSSEC actually has this provision that if a key for a certain algorithm exists, signatures for that algorithm must also exist, because otherwise you'd be able to perform downgrade attacks, and most resolver implementations take that quite strictly.
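The provision described above, that a key for an algorithm implies signatures for that algorithm must exist, can be written down as a one-line consistency check. This is a simplified sketch over IANA algorithm numbers, not a full validator:

```python
def rollover_safe(dnskey_algorithms, rrsig_algorithms):
    """DNSSEC downgrade rule (simplified): every algorithm present in
    the DNSKEY RRset must also appear among the zone's signatures.
    Extra signatures (a new algorithm signed before its key is
    published) are fine; that is exactly the correct rollover order."""
    return set(dnskey_algorithms) <= set(rrsig_algorithms)

RSASHA256, ECDSAP256SHA256 = 8, 13  # IANA DNSSEC algorithm numbers

# Correct order: publish the ECDSA signatures first...
assert rollover_safe({RSASHA256}, {RSASHA256, ECDSAP256SHA256})
# ...then add the ECDSA key alongside the old one.
assert rollover_safe({RSASHA256, ECDSAP256SHA256},
                     {RSASHA256, ECDSAP256SHA256})
# Wrong order: key published before its signatures -> downgrade risk.
assert not rollover_safe({RSASHA256, ECDSAP256SHA256}, {RSASHA256})
```

A strict resolver applies the same set-inclusion logic when deciding whether a zone is fully signed for every advertised algorithm.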
L
So they did this correctly, and I think that was the first example at scale of switching to a different algorithm that was done successfully, because we've seen algorithm rollovers for TLDs and ccTLDs in the past, and almost all of them have had some hitch where they, for instance, introduced the key at the wrong time, or made another mistake and introduced the DS at the wrong time. So this really is something that needs attention.
L
And to finish off on that: as I said, the Swedish registry is going to switch from using RSA/SHA-1 to RSA/SHA-256 for DNSSEC, and we will actually be measuring that. I started this other project called Root Canary, which we presented at the maprg meeting earlier this week, and the Swedes actually came to us and said: can you measure our algorithm rollover, because we're kind of scared that something might go wrong? So we're actually going to measure that and see how it works in practice.
A
Thanks, everybody. I think we should end on this high note, so, you know, please feel free to cluster up here and ask any questions or get involved further. Oh, and I wanted to show you the cat who was working, so let me quickly do that, and then, on that note: does anyone else have any questions of a general nature before we go?
A
Do we have certificates to present? Oh, sorry, yes. So if you want to stay, you can see the presentation of the prize certificates and the photographing of same, but thanks very much for being with us. It was really a very good session.