From YouTube: IETF103-MAPRG-20181106-1610
Description
MAPRG meeting session at IETF103
2018/11/06 1610
https://datatracker.ietf.org/meeting/103/proceedings/
A
Yeah, and last time I presented a little bit of the feedback we got from the IAB review, because every research group from time to time actually has a review with the IAB. We had a good discussion, and we also discussed what we did so far and what we could change or improve, and a couple of things came up, which I presented last time. One thing we did change this time is that I did a little bit of spamming: I tried to send a pointer to those working groups which might be interested in the talks we have here today. I got no complaints, but I got a little bit of positive feedback, so that seems to be useful — people told me, "I was not aware of this, thank you for sending it" — so I will continue that.
A
The other thing we discussed with the IAB, or that came up from the IAB, was that this research group has many nice talks which are interesting for a lot of people in and outside the IETF, and there is the possibility to write blog posts. We already had one blog post from people presenting their work here, and if you have a presentation here at MAPRG, or other interesting measurement results, you might be interested in writing an IETF blog post as well.
A
The third point here — where the bullet point is actually missing, somehow — was the thing that I brought up last time and didn't get a lot of feedback on, so I would like to ask again. Is this group interested in only consuming measurement results, or is this group also interested in being a venue for sharing data and collecting data? And if so, if there's an interest, what should we do about it? How can we get the people in here to bring their data, and which data do you want to see?
A
And one simple idea was to push everybody who gives a talk a little bit more towards making their data available, and to make it at least easy to find the data for the talks that we have here. I don't see anybody rushing to the mic, so that's kind of a no — okay, but that's not a yes either. Okay, and somebody is in the remote queue.
D
Hello everyone, so I would be the one to say: yeah, actually we should consider doing this. There was some work that we did in the MAMI project on building a sort of reference of all the databases of intermediate measurement results, which I could possibly talk about in Prague, or we could talk about at a hackathon in Prague — looking for some sets of intermediate measurement results that it makes sense to share over a platform like that.
D
For the data collections we're talking about: what would it take to get their APIs together, so that you could use essentially a single call for all of those particular things. So maybe I could propose a talk on the state of these things in Prague, and then we could take it from there. Okay.
A
Okay, and then the last point on the slide brings me to the next slide. One proposal was also to interact with the community more closely, for example by having a MAPRG-focused hackathon table. Dave, my co-chair, really liked this idea — he's not here today, unfortunately, but he will push this idea for the next meeting in Prague. The two current measurement ideas he brought up were, first, to check the current state of ECN, because there was a hackathon table on that at IETF 101.
A
So this is kind of continuous measurement work which would be nice to do. The other one he's interested in is measuring DNS, and there's also a related paper at IMC, actually, which is linked here. If you're interested in these topics, watch out for a hackathon project at the next meeting — or if you have more topics around measurements that you would like to hack on next time, please talk to us.
E
Tom Jones. I support the idea of a hackathon table — I would sit there. And Comcast have before offered virtual machines on their DOCSIS backbone; this probably needs to be organized in advance, because it wasn't getting utilisation, but it would be a good thing for a measurement platform to have. Okay.
A
That's easy — okay, cool. We will also announce it in the meeting, so if you watch out a little bit, you shouldn't miss it. Okay, and talking about IMC: this is one of the reasons why Dave is not here, because the IMC — the Internet Measurement Conference — was last week in Boston, so he went there and I'm here. We talked a little bit about it, and he enjoyed the meeting a lot; it was a good meeting.
A
There were a lot of people, a packed agenda, more papers than previously, and he put the effort in to pick out some of the papers that might be most interesting for this audience, so you have it easier to find the stuff that might impact your work directly — while the other presentations were probably also very interesting, so check out the agenda.
A
Okay, and that's our agenda today. We have a quick heads-up talk, and then we actually have five presentations, and the last ones are actually IMC papers that were presented last week in Boston and are now here for you, brand new basically. We go ahead and start with Tobias — unless you have any questions, but I don't think we do agenda bashing here; actually, I don't have time for that.
H
Hey, so welcome. This is together with Dave, and it's about some things we found in our research on privacy, one on IPv6 scanning and IPv6 deployment. Basically, we stumbled a lot over EUI-64 addresses. Actually, there's one error on there: RFC 4941 has now been overruled, but I forgot to update the slides, and technically the problem of having EUI-64 addresses on hosts should be kind of solved.
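As background — an illustration, not part of the talk — the EUI-64 interface identifier that causes the privacy problem is derived mechanically from the interface's MAC address, which is why it is stable and trackable across networks. A minimal sketch in Python (the MAC value is a made-up example):

```python
def mac_to_eui64_iid(mac: str) -> str:
    """Derive the 64-bit EUI-64 interface identifier from a 48-bit MAC:
    flip the universal/local bit of the first byte and insert ff:fe
    between the two halves of the MAC (per RFC 4291, Appendix A)."""
    b = bytes.fromhex(mac.replace(":", ""))
    eui = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:6]
    return ":".join(eui[i:i + 2].hex() for i in range(0, 8, 2))

# Made-up example MAC address.
print(mac_to_eui64_iid("00:25:9c:12:34:56"))  # -> 0225:9cff:fe12:3456
```

Because the identifier embeds the hardware address, any host using it is linkable wherever it connects — the motivation for the privacy extensions mentioned above.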
H
However, when we were looking at actual measurements out there, we found that, for example, if you do traceroutes across the internet for v6, you find up to 45% of hosts responding with EUI-64 addresses in ICMP time exceeded, which is quite a lot — but these are mostly CPEs. The other example we had was actually in a work where we looked at how we can enumerate reverse DNS: we could actually track people walking across buildings when their reverse DNS was auto-populated, like from 8 a.m. to 1 p.m.
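The reverse-DNS enumeration mentioned here walks the `ip6.arpa` tree. As a hedged illustration (not the speakers' actual tooling), Python's standard library can already produce the nibble-reversed name that such an enumeration queries:

```python
import ipaddress

def rdns_name(addr: str) -> str:
    """Return the ip6.arpa name for an IPv6 address: the 32 nibbles of
    the fully expanded address, reversed and dot-separated."""
    return ipaddress.IPv6Address(addr).reverse_pointer

# 2001:db8::/32 is the documentation prefix, used here as a stand-in.
print(rdns_name("2001:db8::1"))
```

Walking this tree label by label (checking which sub-names exist) is what makes structured, auto-populated reverse zones enumerable.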
H
In addition, the addressing practices in IPv6 can lead to privacy implications, because now you can actually do scanning in a structured way. So you have a lot less noise in there, which aids topology discovery. For example, in one of our works we could discern the addressing policy of a .mil installation quite accurately.
H
There are two related publications. One was at IMC — I think it was also in the list Mirja showed from Dave — which is mostly about measuring IPv6 adoption, and where they found these 45% of responses from EUI-64 addresses. And then there's the paper where we, for example, found the building-tracking result, together with some other people, which was at Security & Privacy this year.
H
This is basically a call for measurements and observations. So if you have anything which also falls in the context of IPv6 security and privacy implications, we would really like to hear your input — and even more, we would like to see datasets. So if you have any, please drop us an e-mail at Dave's address or my funny institutional address. And that's actually already the last slide, if there are no comments.
E
Hello, I am Tom Jones, at the University of Aberdeen. We've been working on an implementation of UDP Options. UDP Options adds transport options to UDP. It does this by taking advantage of the fact that when UDP is carried inside IP, there are two fields which describe the length of the UDP datagram.
E
There is a length field in the UDP header, which describes the length of the UDP header and the data, and there's a length field in the IP header, which gives the payload length. Normally these two numbers are the same, and so we can create surplus space after the UDP datagram by increasing the IP payload length while keeping the UDP length the same. We get a surplus area, and in the surplus area we can stick transport options.
E
Transport options look like this. We have some options for structuring the space and creating a checksum for the option space, and then the rest of the options are TLVs, where they have a type, a length, and then a value they carry. We've been working on this for about a year in the IETF. At TSVWG in London, back in March, I showed a slide — as an extra slide — about a potential issue.
E
The conclusion from this was: don't offload UDP checksums to hardware, or make sure it doesn't do this wrong. I thought I'd fix it in code and I'd be done — great, you know, fixing FreeBSD, the IETF, making the internet better — and I went to the pub. Of course, that didn't really work out, and so we've been doing measurements for UDP Options for the last couple of months. UDP Options are fun to measure: there are no hosts on the internet yet which support UDP Options, so we've had to come up with other solutions.
E
We have a tool called mobile tracebox, which works as a continuation of a tool called tracebox. It performs a traceroute-style TTL ring search and allows us to see where in the network packets are modified or dropped. We've been using this for doing measurements, and we also find that UDP is quite difficult to measure.
E
We filter out the hosts which don't respond to this, and we get out a larger set, of course, that we can measure against. With this we've done some measurements for UDP Options, and they sort of look like this: this plot shows the chance of a path, for each of these measurement types, to pass a UDP Options datagram end to end.
E
You might see that there is about a 50 percent chance of failure, which is not very good and not very promising for UDP Options. We've thought about this, and we've been trying to figure out why. It has been brought up that when the checksum is wrong for a packet, a lot of stateful boxes drop the packet — because they shouldn't feed more broken stuff into the internet — but these packets are being incorrectly detected as broken. Why is this?
E
And so we interrogated this, and we figured out some pathologies for what's going on. Our favorite pathology is that everything traverses correctly and it works. Then, in descending order of frequency, we find the checksum on the UDP datagram being computed over the full IP payload length — this is the same as the bug we saw in FreeBSD — and so this issue, which we thought would be easy to solve, turns out to be a big chunk of the failures we find. And then the next two cases are weird.
E
We see the full-payload checksum, but with the pseudo-header created using the length in the UDP header; and then we see the correct checksum performed, but with a pseudo-header created from the IP length field. We see UDP Options payloads being passed as long as the option space only contains zeros — and while that's cool, we're not really sure how to use it. And then we see hard-check middleboxes, which compare the IP length against the UDP length, and this one we don't think we can solve.
E
So today I'd like to introduce the CCO. The CCO is an option for UDP Options. It uses a modified pseudo-header and a checksum across the UDP option space, and it creates a checksum such that, when you calculate the checksum across the UDP datagram plus the option space, you get the same checksum as you would get if you correctly calculated the UDP checksum.
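The arithmetic behind such a compensation option can be sketched with the Internet ones'-complement sum (RFC 1071). This is my own simplified illustration of the idea — the actual CCO draft also involves a modified pseudo-header, which is omitted here:

```python
def ones_complement_sum(data: bytes) -> int:
    """16-bit ones'-complement sum used by the Internet checksum."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input
    total = sum(int.from_bytes(data[i:i + 2], "big")
                for i in range(0, len(data), 2))
    while total >> 16:  # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return total

def compensation_word(option_area: bytes) -> int:
    """Value that makes the option area sum to 'minus zero' (0xFFFF),
    so including the options leaves the overall checksum unchanged."""
    return 0xFFFF - ones_complement_sum(option_area)

opts = b"\x12\x34\x56\x78"
cco = compensation_word(opts)
# Option area plus compensation word is neutral under the checksum.
print(hex(ones_complement_sum(opts + cco.to_bytes(2, "big"))))  # -> 0xffff
```

Since 0xFFFF is "negative zero" in ones'-complement arithmetic, a middlebox that wrongly checksums the full IP payload now gets the same result as one that checksums only the UDP datagram — which is why the paths above stop dropping the packets.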
E
We see a huge increase in the number of paths which will successfully pass traffic. Even better than that, it actually works through CPEs. A colleague at the University of Oslo has a testbed made out of the strangest things he can find in flea markets — he goes to second-hand shops and picks up everything.
E
He has a testbed, and he ran his tests through 23 of these devices. At first pass, 17 of the 23 would pass UDP Options, but 6 would drop it. When we add in the CCO, all 23 devices pass it — great. Okay, so this is what we're proposing. We have a draft on this; we'd love it if you read the draft and passed on comments about it, either to the authors directly or at TSVWG. We'd love to hear what you think. Questions?
I
Okay, hello everyone, my name is Nicolas Kuhn, and I'm presenting work we have been doing with all the people listed underneath. Something I need to make clear before I start: we have not been working on IETF QUIC, but rather on the Google QUIC implementation. Basically, we wanted to work on it because, first, QUIC is here already. This is a chart lots of you may know, from the people working on these things — they have a website.
I
Well, you can actually extract these figures quite easily for your talks, so we just picked a point — don't be too hard on me — where QUIC was 25 percent of the traffic; it's based on the MAWI traces. So this is one of the main reasons why we wanted to work on it, but also because, as SATCOM, we split TCP connections: basically, every TCP connection is split into three independent TCP connections. And why do we do that?
I
Basically, when you look at the picture underneath, we have three different websites with different characteristics. The three pages are downloaded with a PEP, with a default initial congestion window of 10, and with an initial congestion window of 60. We just split the connection with the PEP — we just keep CUBIC inside the PEP, we don't do any specific optimization — and just by splitting TCP, we halve the time needed to download the web page.
I
So, basically, that's why, as SATCOM, we will go on splitting TCP connections — but the problem is that we cannot split QUIC at the moment. That's why we work on it. This is a slide showing, basically, an analysis of QUIC for SATCOM. What could be good for us: we have problems making TCP Fast Open work when we split TCP connections, so the 0-RTT handshake is very interesting for us. The problem is that we may not be able to act on the congestion control.
I
The good thing is that with it we can support lots of new congestion control versions, and without PEPs our ground segments would be a lot cheaper — we don't deploy PEPs because we want to split, but because we have to. And basically that's a big problem for us, because we have to follow all the end-to-end trends to actually support the good innovations, and that's a cost for us.
I
And another of the threats we have at the moment — but that's more from an operational point of view — is that when you have all the different kinds of traffic, with different characteristics, under the same port, it gets complicated to do quality-of-service management for the different needs of the different applications. Another threat, which I will show later on, is that we have some issues with end-user quality of experience. So basically, the question we have is: is QUIC doing any better than split TCP for SATCOM public access?
I
Gorry already mentioned at the mic, during some talk — I don't remember when it was this week — that it's sure that split TCP is doing better. But we just wanted to make simple measurements to measure by how much it's doing better. So our testbed for this follows the different questions we had. How can we test our QUIC experiments? We can't replicate any QUIC implementation available today, because we don't have all the smart congestion control that may be embedded in it.
I
Another thing we did, to have an actual end-user perception, is that we have a public SATCOM access: we just have a satellite terminal, we connect our laptop to the terminal, and go to the different web pages. So the good thing is that we have real end-user experience; but then the problem — and I will come back to that later, at multiple moments — is that there are lots of things happening, lots of operator tuning, that we don't actually understand.
I
So we understand that our experiments just cannot be extended to lots of cases; we just wanted to point out our own experience. Although we are a small community, we want to make everything we do available to anyone. So all the scripts run on whatever VM, and using any VM connecting to a server with QUIC enabled, you can reproduce our results.
I
We have noted all the experiments we have, so feel free to use it. And this is just a slide to explain a little bit better how we made our experiments: we focus on W3C metrics to measure the page load time and the time to first byte, and we compare those two. We made different webpage downloads, and then we purge the browser profile; basically, we use Selenium to automate our measurements.
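For readers who want to replicate this: after a Selenium-driven load, the W3C Navigation Timing snapshot can be retrieved with `driver.execute_script("return window.performance.timing")`. The metric derivation below follows that API; treating "time to first byte" as `responseStart - navigationStart` is my reading of the talk, not a detail the speaker spelled out:

```python
def page_metrics(timing: dict) -> dict:
    """Derive page load time and time to first byte (both in ms)
    from a W3C Navigation Timing snapshot."""
    start = timing["navigationStart"]
    return {
        "time_to_first_byte_ms": timing["responseStart"] - start,
        "page_load_time_ms": timing["loadEventEnd"] - start,
    }

# Example values for illustration; in a real run the dict comes from
# driver.execute_script("return window.performance.timing").
snapshot = {"navigationStart": 1000, "responseStart": 1650, "loadEventEnd": 4200}
print(page_metrics(snapshot))  # -> {'time_to_first_byte_ms': 650, 'page_load_time_ms': 3200}
```

Purging the browser profile between runs, as the speaker describes, keeps each download a cold start (no cache, no learned Alt-Svc state).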
I
For the results, we show this kind of diagram. We don't have lots of plots, but still, it shows the disparity and how the measures are distributed. Basically, for the first load we almost always have the same page loading time. For this first part, I will focus on the big page. What we observed is that, in common implementations, we always start with a TCP connection.
I
So, for the first part of our question — is QUIC doing any better than split TCP for a large page on satellite — it's not the case; split TCP does better. That's the CDF, and basically we show that we have two completely independent CDFs, so we are quite sure about what we are showing here.
I
To better understand, we have tried to come up with a sequence-number view. This is the sequence number received, in bytes, as a function of the time since the connection start. The first triangle is when we actually start to download data, and the second triangle at the top is when we finish downloading the page. We will come back to that after, but the first bytes come earlier with QUIC — though not by much.
I
Also, if we look at the derivative — basically, when you look at the slope — we can see that with the PEP we have a very high and stable throughput: we get up to speed directly, because we know our conditions. With QUIC, the Google QUIC congestion control doesn't know that we are on a satellite link, so basically it takes a while to actually come up to the available bandwidth.
I
This is the part of these things we don't understand — and we show it because we want to be honest. When we look at the different loads, we see a huge disparity in the QUIC page loading time, basically everything in the response. What's happening there is strange; it may be due to the channel capacity, or the way this access is done. There are things happening that we don't actually know, and we couldn't dig into the details.
I
The thing is, basically, QUIC is doing better here — I don't know how much time I have left, but if it's fine: we can see that the fact that the first bytes of data come faster with QUIC is a huge gain in this case. And again, there are a lot of things we don't understand; we just wanted to report some results and share the code we have, so that anyone who wants to replicate it, with any kind of access, can take the code and try it out.
I
So, as a conclusion: for small files, QUIC is winning even on the satellite link, because the very first data bytes arrive earlier. For large files, split TCP wins, and the big issue is how to get up to speed with QUIC in this case. The paper is here, and we are here to have any discussions on it — if you want, jump in now. That's my last slide.
J
That's a pretty large number, at least in my experience. Our numbers indicate that a median congestion window for a typical connection is somewhere like 20 to 40 for most users, so you're two orders of magnitude off the typical user. You combine that with the longer RTT, and slow start is going to take a while to fix that, so I'm very interested to see what you might come up with to make this situation better.
J
I think there might be some things on the server side we could potentially do, but yeah — this is sort of the nature of it. This is one of those cases where end-to-end is actually worse, because you just have a network that's so different from the typical network, and connections are relatively short — web, typically.
I
Thanks. For the solutions we may have: we already have measurements showing that just increasing the initial congestion window a lot helps. If we can pace it — our RTT is around 600 milliseconds — and pace a very large congestion window, that already helps. We're not aiming for something that we know would be impossible, but we will try to find some small tricks that would help us — or not. Yeah, I think we did try that with the initial window.
M
So, like 15-plus years ago — which means that the data is no longer valid — I actually looked into studying UDP versus TCP, you know, long before QUIC. One of the things I was specifically looking at was satellite links and things like that, which at the time were really lossy — like 33% lossy — and I suspect that that's gotten better, and my knowledge of the situation has gotten worse.
I
We are working on GEO satellites, fixed access, where the loss ratio is lower than in LTE and Wi-Fi. We have no loss now, with quasi-error-free transmission. But then, when it comes to satellite constellations or mobile users, that's not the same: we have bursts of losses, so the pattern of losses is not the same depending on the case. But we are focusing on GEO broadband internet access, right.
N
I think it would be really useful to also try to take some measurements of the effects that those large initial windows have on low-latency traffic, like on the other side of the hop, because you're going to be very unresponsive to congestion events then. I think that trying to not only document the performance increase that you'll see over the satellite link, but also look at how bad you are making it for everybody else who's sharing the network on the other side — that's going to be part of the story, right.
I
Well — for the large initial congestion window: we can increase it a lot if we pace it, as I just said. If you send one packet every five milliseconds during 500 milliseconds, you have already sent lots of data without introducing big bursts into the network. So that's one thing we work on. It's true that we don't have the same initial congestion window as in LTE. And then, on your point about latency-sensitive traffic:
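The pacing argument above is just arithmetic: one packet every 5 ms for 500 ms moves 100 packets without the network ever seeing a burst larger than a single packet. A sketch (assuming an MSS of 1500 bytes, which is my assumption, not a figure from the talk):

```python
def paced_send(duration_ms: float, interval_ms: float, mss_bytes: int = 1500) -> dict:
    """How much data leaves the sender when pacing one MSS-sized packet
    per interval, and the burst size the network sees."""
    packets = int(duration_ms // interval_ms)
    return {
        "packets": packets,
        "bytes": packets * mss_bytes,
        "burst_pkts": 1,  # pacing spreads packets out: one per interval
    }

# One 1500-byte packet every 5 ms over 500 ms (roughly one GEO RTT).
print(paced_send(500, 5))  # -> {'packets': 100, 'bytes': 150000, 'burst_pkts': 1}
```

This is why a paced large window behaves very differently from a burst of the same size: the instantaneous queue pressure on any bottleneck stays at one packet.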
I
We have end users with SLA contracts and requirements for latency-sensitive traffic, so we have quality-of-service management where we actually prioritize this kind of traffic, such that it has low latency — and, I mean, it is not affected by the problem. Then, how your congestion control is affected by the quality-of-service management you have underneath — that's a generic topic, not only specific to our case, I believe. — Sure. And...
N
Then the question that I had, going to your environment: when you were talking about your test environment, it looked like you said that on the satellite link you weren't actually sure whether they were running some sort of TCP optimization — where your UDP packets, your QUIC packets, go over some TCP optimization. So do you know for a fact that there is some TCP optimization running in there, or were you just saying that you are uncertain?
O
I just thought I'd come up and say — I don't think it's a controversial point — I just wanted to correct the people who are saying that this is a problem in satellite. It's actually a canary in the coal mine for what will be the problem as links get faster, because bandwidth-delay product is about bandwidth as well as delay.
I
Basically — since we have more time, and it's something slightly different; I didn't want to make too much noise about it, but we are here — we are public institutes, so we believe a lot in open source. We have this tool that is basically for experimentation, and we use lots of open source tools available to orchestrate the experiments.
I
If you want — I'm not speaking about having a common TCP evaluation suite document in the IETF, but we are building tools to experiment with lots of things, like TCP fairness. So whenever we have a new congestion control we want to try out, we just put it in this box, and we have lots of scenarios it can run. And I think this is useful for the community, because we always complain about how to evaluate the fairness or performance of different congestion controls.
Q
Hello. So, probably you really hate it when someone is presenting somebody else's work, because they don't know it deep in detail — and this is one of those cases. I will be presenting on behalf of two colleagues, Roberto Morabito and a co-author. The topic is vehicular communications and how different application-layer protocols may affect them. You have all the information in the reference below; this was presented at a conference this year.
Q
So, let's start with the purpose. The main idea is to see whether MQTT, CoAP, or HTTP, depending on the vehicular network, might affect the performance — and also whether using the edge or the cloud to provide the service may affect the performance. We use vanilla MQTT — that means MQTT over TCP — with the QoS settings that also affect the resources, as you will see later on. And with HTTP we do not use QUIC, which probably would have shown much better results.
Q
So: plain MQTT over TCP, CoAP over UDP, and HTTP over TCP as well. The scenario is rather simple: we have a vehicle — a car with an onboard unit — that is contacting an eNodeB, a base station. Relatively close to the base station we have the edge server, which is provisioned with the same services as the cloud. We do not modify the base station in any way.
Q
The setup is done in Finland. For future work, we would also like to check vehicle-to-vehicle communication as well, so we will need more edge entities; at the moment we have only one edge entity, which is located near the path of the vehicle. The car will connect to the edge entity when it is nearby, and it will connect to the cloud through the normal mobile operator when it's not. The data center is in Sweden, about 850 kilometers away.
Q
So, the participating entities: we have the system with the data center — as I said, that's in Sweden — where we run OpenStack, and inside we have a VM with the set of software and protocols that are required to send and receive data. The payload size is rather small, about 11 kilobytes — not so small in an IoT scenario, of course, but relatively small for other environments. The edge entity is a Dell server with 15 gigabytes of RAM — pretty powerful.
Q
We are running it on the normal network infrastructure in Finland, and the same thing goes for the mobile side, which is the mobile operator DNA — no specific tweaks there. The onboard unit is a Raspberry Pi 3 running Raspbian, again running CoAP, MQTT, and HTTP. It is connected to a shield with a 4G LTE module, and the onboard unit is transmitting through this. So: a very simple setup — edge server, onboard unit, and the cloud side.
Q
Also, as I will continue later on, we do a comparison of the various application-layer protocols, but in the future we will also look into somewhat larger payload sizes, and at how the edge operates in an indoor environment — we haven't finished the empirical evaluation there — and we will also test with radio interfaces other than 4G.
Q
Locally, we would like to use Wi-Fi, and for vehicle-to-vehicle, Wi-Fi 802.11p. Moreover, later on we will also test with other application-layer protocol modifications, just like we did for the MQTT QoS settings. So, other than what I already mentioned, we would like to check all the factors, like the vehicle speed, the number of clients, and again the QoS — kind of repeating the same thing.
Q
Some background — I didn't check this slide before, but here you can see basically the setup we have for HTTP, CoAP, and MQTT. The architecture of CoAP is a client-to-server, RESTful type of interaction: the CoAP server is running on the device, the CoAP client on the cloud, although in practice it is device-to-device with a peer-to-peer setup. It has a bit of a larger header size compared with HTTP as well —
Q
no, sorry — compared with MQTT. The communication paradigms of MQTT, HTTP, and CoAP are a bit different: MQTT is designed for a pub/sub type of communication, while CoAP also supports the same kind of pattern with the Observe option, and HTTP is also RESTful. Then, on the semantics: there are different methods to do similar operations. We do the basic ones — GET, POST, PUT, DELETE, basic operations on resources on the devices — and similarly basic operations on the MQTT side: connect/disconnect, publish/subscribe, and so on.
Q
There is a bit of a larger handshake if you're using TCP, which is not contemplated here, but yeah — that would basically be it. On the QoS side, as I mentioned, I can go on about it later, but basically: QoS 0 has no delivery guarantee; QoS 1 basically guarantees that the message has arrived at least once; and with QoS 2 there are very high reliability guarantees that the message has arrived exactly one time — but it implies
Q
that there is a four-message overhead — and that's pretty much it; it is well known already. For the evaluation: the setup on the protocol side is that we're running Mosquitto, which is very well known. It has a broker, it has a client and server, and it has a benchmark tool that allows you to set the QoS, plus source code to do modifications on the broker as well.
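The per-QoS overhead mentioned above follows directly from the MQTT packet exchanges. A small lookup table (the exchange sequences are from the MQTT 3.1.1 specification; the helper name is mine):

```python
# Packet exchange per MQTT QoS level to deliver one application
# message (per the MQTT 3.1.1 specification).
QOS_EXCHANGE = {
    0: ["PUBLISH"],                                 # at most once: fire and forget
    1: ["PUBLISH", "PUBACK"],                       # at least once: one ack
    2: ["PUBLISH", "PUBREC", "PUBREL", "PUBCOMP"],  # exactly once: four-way
}

def messages_for(qos: int) -> int:
    """Number of MQTT packets exchanged to deliver one message."""
    return len(QOS_EXCHANGE[qos])

print(messages_for(2))  # -> 4
```

That four-packet exchange at QoS 2 is the "four-message overhead" the speaker refers to, and it is why raising the QoS level costs throughput in the results that follow.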
Q
So then, as far as empirical results go: for the vehicle speed, sadly, we couldn't test at really high speeds — the variability was only from 30 to 50 kilometers per hour because of the speed limitations — so obviously there was no correlation, I believe. We would need to run this in another place where the speed limits are a bit higher, but at least you can see some interesting results as far as performance goes, in which, basically, CoAP outperforms the others.
Q
On the left side you have the throughput in messages per second, and on the right side you have the latency in milliseconds, and at the bottom you can see how the edge and the cloud perform in each of the cases. So you can see there that the performance, when it comes to CoAP — yes, just...
R
Q
We used confirmable messages — and that's it, confirmable messages; the acknowledgment was not confirmable. So you'd be matching, I would say, quality of service 1, but we don't really have that terminology in CoAP. That would be something interesting for further work as well: to see how CoAP could match the MQTT quality-of-service terminology. Essentially, you could send confirmable acknowledgments, so that you receive the confirmation back and you know that it has been received only one time. So yeah — again, CoAP over UDP with the Observe option...
Q
Sorry, with the observe option, and with the confirmable option on, so that you receive the acknowledgment, yeah. So the throughput was, in the edge case for CoAP, about 25 messages per second, which in one of the cases was almost twice the messages per second for HTTP; and MQTT performed a bit better than HTTP; and in terms of latency, again, CoAP...
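The CoAP confirmable (CON) exchange referred to above rests on a tiny fixed header; a minimal sketch of encoding it per RFC 7252 (the field layout is from the spec; the sample message ID is made up):

```python
import struct

# Minimal encoder for the 4-byte CoAP header (RFC 7252).  Type 0 is a
# confirmable (CON) message, which the peer must answer with an ACK
# carrying the same Message ID -- the reliability knob discussed above.
def coap_header(msg_type, code, message_id, token_len=0, version=1):
    byte0 = (version << 6) | (msg_type << 4) | token_len
    return struct.pack("!BBH", byte0, code, message_id)

CON, ACK = 0, 2
GET = 0x01
hdr = coap_header(CON, GET, 0x1234)
assert hdr == bytes([0x40, 0x01, 0x12, 0x34])  # version 1, CON, GET, MID 0x1234
```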
Q
The difference is not as dramatic as in this case, and, as was expected, the cloud also had a bit of a higher latency and a bit of a lower throughput, so no surprises here. And then the next one we did: so, assuming that you have this kind of onboard unit and this kind of vehicle connected to the Internet, you may have different types of services: you have an infotainment system with video, you may have some telemetry being sent, so they have different requirements.
Q
So we wanted to test, not those requirements in particular, but having multiple clients connected at the same time, and to see how the performance varies. So again: on the left side the throughput, on the right side the latency; and as we increase the number of clients the throughput greatly decreases, especially in the case of HTTP, so that at the tenth client the throughput is actually almost one third of the first one.
Q
In the case of CoAP the throughput was much higher and it actually had very good efficiency, because there was only a 10% difference between the first and the last client. Latency-wise it was pretty high for HTTP; again, I believe the fact that we were using TCP has something to do with that; I'm not sure why, actually, but anyway, it's something to look into. But again, the latency was much better, well, slightly better, for CoAP than for MQTT, and much better for both of them than for HTTP. In the next one...
Q
We could have used QoS 1, and we did; I will continue on that. To be a bit more fair, since we were using CoAP over UDP with plain confirmable messages and no equivalent guarantee of delivery: QoS 2, actually, is a thing that was designed for satellite communications, so it has quite a lot of message retransmission and therefore performs a bit worse, but it's the safest, obviously.
Q
So these were the results: the throughput was greatly increased when using QoS 1, so QoS 2 greatly reduces the throughput, and, conversely, the same thing happens with the latency. Still, if you remember the results from the previous slide: here the throughput for QoS 1 was about 17 messages per second, and in the case of CoAP it was 24 to 25 messages per second, so CoAP still outperforms MQTT even with QoS 1. Yes?
R
Yeah, you have an unsaturated link, so if you actually have capacity on the link, then this is actually the quickest protocol, I mean. But on the other hand, what we see, I mean, we were using IoT scenarios with 802.15.4 links, what we actually see is that the confirmable layers in both protocols basically fail if you have link saturation, yeah.
Q
We didn't actually have any specifics on the 4G, I mean, we were using 4G/LTE, so it's a completely different case than 15.4, but that is probably affected as well. And again, we didn't use MQTT-SN, which could also have been interesting. Actually, we got a lot of feedback when preparing this on how we could improve later on, having more tests and more variations.
Q
It seems that CoAP did outperform MQTT when it comes to QoS 1 and 2, and HTTP, of course; both CoAP and MQTT outperform HTTP in this case, both in terms of throughput and latency, yeah. And basically we also confirmed that the edge case performs better than the cloud case. That is interesting, actually, because we have started comparing for larger payload sizes, and the preliminary results showed that the cloud performs a bit worse than the... sorry:
Q
The edge performs a bit worse than the cloud; we haven't figured out what the reason could be, but that is also for future work. In fact, alright, so for future work we would like to work with larger payload sizes or workloads, and also maybe test some streaming cases, like IPTV for example. Yesterday, in fact, in CoRE we had a very interesting presentation on that, for video with a very low frame rate; so it's not full-fledged video streaming, but a bit of a lower-quality one, with H.264 encoding.
Q
Then we would also like to test other security mechanisms, OSCORE for example, so application-layer security, and see if that affects the throughput as well; and other network interfaces, as I mentioned at the beginning. When we have a couple more entities, we will also test vehicular communication and see how the placement of the edge entity, the distance to the UE, affects even the best-case scenario. We will even modify a base station to offer an added-value service there. And that's pretty much it, thanks. Yeah, please go, yes.
K
So on the access, practically, you have an error-correction mechanism in which a frame may be sent multiple times, with error-correction schemes. I'm asking because the latency numbers that you put up there are very ideal, so I would expect, at least in the context where your vehicle is moving, that those latencies may change. Why I say that is that at a minimum you're going to have eight milliseconds, but overall, I mean, the latency...
K
On a typical operator, yeah, on a data bearer, we would expect to have on average four or five retransmissions, which are going to make the access be at least 40 to 50 milliseconds of overall latency just for the access. So, if you look at your numbers on the edge, practically, that number represents only the overall access. And the last question is: do you make requests for data, or do you go with semi-persistent connections? Yes, semi-persistent connections.
Q
I mean, I believe that there were no specific modifications on how the communication was done, and I don't think there were specific modifications below the IP layer, to be more precise; it was the same setup for all of them, a plain out-of-the-box setup, we didn't do any modifications. Also, on the edge side, we didn't do a local breakout between the edge server and the base station; it was a normal network connection.
P
Kilobytes of size, I believe. Yes, okay. And then about MQTT, and it uses TCP: are they using a persistent TCP connection, or are they opening a new TCP connection for each message exchange?
Q
That I do not know; I would be guessing the answer. But the results seem to be showing that you open a new connection for each, because the performance difference is roughly twice: so two round-trips for the TCP-based stuff and just one round-trip for the CoAP-based. Okay?
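The "roughly twice" reasoning above can be written out as a back-of-the-envelope latency model: a fresh TCP connection per request pays one extra RTT for the handshake on top of the request/response RTT, while a UDP exchange pays only the latter. The RTT value is illustrative, not from the talk.

```python
# Simplified round-trip model of the argument made above.
def request_latency(rtt_ms, new_tcp_connection):
    handshake = rtt_ms if new_tcp_connection else 0.0  # SYN/SYN-ACK cost
    return handshake + rtt_ms                          # + request/response

rtt = 50.0
udp_like = request_latency(rtt, new_tcp_connection=False)
tcp_like = request_latency(rtt, new_tcp_connection=True)
assert udp_like == 50.0 and tcp_like == 100.0
assert tcp_like / udp_like == 2.0  # the "roughly twice" factor observed
```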
L
So, hello everybody. I'm going to be talking about a measurement study that we did on the upcoming deployment of certificate transparency; the short version of this talk was already presented last week at ACM IMC, together with my co-authors. So, the rise of certificate transparency: before I get into the details, let's briefly recap what CT, certificate transparency, is and why we need it. So, I mean, if you're the owner of a name, you can get a certificate.
L
Unfortunately, anyone else can ask for a certificate for the same name, and if something goes wrong, he or she actually gets a certificate. The problem is that the name owner does not have any chance to verify whether another certificate for his or her name exists. So how to solve this problem? One option is that you actually make all existing certificates publicly available.
L
Certificates that are issued after a certain date and are not in a certificate transparency log will trigger a warning in the web browser. So from this point of view, I mean, there might be some chance that CT will experience some deployment, and this is something that we want to analyze here, and also the implications of having CT deployed.
So there are two basic questions around this new approach. One is: does CT introduce new dependencies, and to what extent? Because you have the log server infrastructure.
L
Why? Because, what is CT actually doing? It is exposing names, right, and having access to names, and an easy way to search them, you can think about, for example, finding something like malicious domain names, phishing domains; but we can also think about attackers leveraging this public repository to find victims. These are the questions we try to answer in our measurement study. So let's first look at the increase of deployment of CT. What you see here is, on the x-axis, the time, and on the y-axis...
L
...the number of log entries in the CT logs, by different CT log, and the vertical line indicates the date from which Google Chrome gives you a warning if a certificate is not part of a CT log. What you see is that the number of log entries across the different operators increases over time, and in particular, close to the deadline, there's a huge increase.
L
Let's Encrypt was publishing several hundred million certificates, so there's an increase in CT deployment, and from the per-log data you also see that Let's Encrypt more or less dominates the certificate ecosystem; so there's a strong rise. Now the question is: do the CAs who issue certificates distribute those certificates over many CT logs?
L
So, do they care about reliability? What you see here is, on the x-axis, the most popular CAs, Let's Encrypt and so on, and on the y-axis you see the log servers; a company might even operate several log servers. The color in this heat map indicates the number of log entries per log server per CA, and what you would expect is that a column is somehow homogeneous, which would indicate that a CA is using several log operators; and this is not the case.
L
What you see is, for example, that Let's Encrypt first of all dominates the market already, but also focuses on two log operators: Let's Encrypt is publishing its certificates mainly at Google and Cloudflare, and this is also valid for the other CAs; some of the log operators are very rarely used. So you have a very sparse matrix here. So overall, the CT ecosystem more or less relies on a few log servers instead of using all the available log servers.
L
So this picture indicates some limitation in reliability, and it should actually be changed in the future. So now let's think about implications of CT, because CT is exposing names. The first question that you can ask is: can CT be used to actually identify malicious domain names, malicious domain names meaning phishing domain names?
L
So what we are doing is: we fetch all publicly available CT logs from the log servers, then take five popular services, which are Apple, PayPal, Hotmail, Google and eBay, exclude all valid domain names, something like apple.com, and then do a pattern matching that takes the valid domain name and looks for domain names that include it somewhere in the name, something like appleid-apple.com. We only matched on the name itself and not simply on other suffixes like .net, because, I mean, then you would get too many domain names.
L
And
then
we,
after
we
created
these
artificial
names,
we
verified
Reza,
says
and
processes
value
domain
names
in
the
sense
that
there's
an
available
and
based
on
this,
we
actually
ended
up
with
18.8
million
for
qualified
domain
names
based
on
this
simple
mechanism
and
compared
to
other
publicly
available
domain.
This
something
like
from
a
sauna
measurement
project.
We
have
found
seventeen
million
more
domain
names.
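The brand-embedding search described above can be sketched with a small matcher; the brand list mirrors the five services named in the talk, but the legitimate-zone set and sample names are illustrative, not the study's actual rules.

```python
import re

# Sketch of the squatting-name search described above: flag names that
# embed a well-known brand but are not in the brand's own zone.
BRANDS = ["apple", "paypal", "hotmail", "google", "ebay"]
LEGIT = {"apple.com", "paypal.com", "hotmail.com", "google.com", "ebay.com"}

def suspicious(fqdn):
    fqdn = fqdn.lower().rstrip(".")
    base = ".".join(fqdn.split(".")[-2:])   # registrable-ish tail
    if base in LEGIT:                        # the brand's own zone is fine
        return False
    return any(re.search(re.escape(b), fqdn) for b in BRANDS)

assert suspicious("appleid.apple.com.example.net")
assert suspicious("secure-paypal-login.info")
assert not suspicious("www.apple.com")
```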
L
So the last question that we try to answer is whether an attacker, having the knowledge of the CT logs, having a DNS name, actually misuses this name; which basically means: does he resolve the A record or AAAA record for this name, and does he then also scan the corresponding IP address? And for this we actually introduced a CT honeypot, which basically consists of four components. First, we create a random DNS name.
L
So, a name that is hard to guess, based on a hash value. Then we created a certificate, we logged this hard-to-guess DNS name, published it in the CT logs, and then we monitored the logs of the authoritative DNS server on the one hand, and we also captured the incoming traffic on the corresponding IP address that relates to the artificial domain name; and yeah, then we tried to correlate requests on the DNS...
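The "hard to guess" name generation above can be sketched as deriving a subdomain label from a hash over fresh random bytes, so the only practical way to learn the name is to watch the CT logs. The zone name here is hypothetical.

```python
import hashlib
import secrets

# Sketch of the honeypot's unguessable-name generation described above.
def honeypot_name(zone="ct-honeypot.example"):
    label = hashlib.sha256(secrets.token_bytes(32)).hexdigest()[:32]
    return f"{label}.{zone}"

name = honeypot_name()
label = name.split(".")[0]
assert len(label) == 32 and all(c in "0123456789abcdef" for c in label)
```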
L
...for this name, and the incoming traffic later on the IP address. And as an attacker might easily use public recursive DNS servers to hide the source address, we also inspected the EDNS client subnet field, which gives you an indication of the original stub resolver of the DNS request. So, first: as soon as we published the certificate to the log, we saw DNS lookups for these hard-to-guess names in less than one minute; it depends a bit, but I mean, we created 11 names.
L
But what is much more interesting is that we also found one scanner that, immediately after resolving the DNS name that was exposed by the CT log, was also scanning 30 ports of the corresponding IP address. And tracking back the source IP address of the scanner to an autonomous system: this autonomous system is actually somewhat known for hosting such services.
L
So there are some indications that CT is misused, and can easily be misused, by attackers to more easily find potential victims. So, to sum up: this measurement study is based on data from April this year; the CT ecosystem is dominated by a few stakeholders; most of the CAs are logging to just a few log servers operated by a few companies; and CT might help to find malicious domains such as phishing domains.
L
By simply applying something like pattern matching; you can get much better data if you do much more sophisticated searching on this. On the downside, CT also helps attackers to much more easily identify potential victims, either by constructing previously unknown domains or by doing targeted scanning on the domain names that are visible in the CT logs. If you are interested in more details, there is also the paper link on the slide. Thank you.
A
...could this improve the system?
L
I mean, yeah, if you think about the phishing domains, I mean, you can apply some machine learning algorithms, yes. What is, from my personal point of view, actually most interesting, is this analysis of the CT honeypot, right: you publish a certificate with a name, and as the CT log is monitored the name is immediately resolved, and then you see a scan on the corresponding A record.
L
What is a little bit tricky, actually, is how to maintain this long-term, because you need to create a new name again and again: as soon as the name is published, I mean, it will be cached somewhere, and then you cannot rely on it the next week. It rests, I mean, on the scanning of the logs rather than on getting the status elsewhere. So.
L
I mean, I think the privacy concerns, for example, that relate to CT are not new, right? We just give measurements that this is not a hypothetical concern, it is actually a concern; but we did not discuss ways to improve this, okay. And personally I'm not a big fan of this anyway, because I think it's the wrong idea to solve the problem; but that's a different question.
G
...Apple, and this is just kind of a funny comment: thank you for sharing how many phishing domains you see under Apple as well. I'm pretty sure I have seen, you know, a good half of those in text messages, so you know where they're going. Yes, so improvements in the area would be lovely.
M
Thank you, it's an interesting study. Have you thought further on sort of next directions to go, and in particular measuring the effectiveness of certificate transparency? There's always been the debate of: is this helping, you know, the big players, while the mom-and-pop shops don't really get helped much, because they're not looking to see if their domains are being misused? Or are you actually able to measure whether users are actually being protected by this, that type of measurement?
S
Actually, I don't see why there is not a simple solution to this, because instead of you doing the scanning on the logs, the owners can do the scan: you just provide the information, you know, and they come back with the matches, rather than exposing everything. That seems to me like the flaw, yeah.
L
But I mean, this is somewhat running in circles, because, I mean, your argument assumes authentication between the name owner and the log server; but I mean, this is similar to saying that only the name owner should get a certificate for the name, right? And if such mistakes never happened, then we would not need a CT log; and similarly, a CA can be mistaken and issue a certificate.
L
It was a low-interaction honeypot, so it was basically, I mean, not completing any handshake with the attacker. But I think there is some evidence that someone who first requests the name and then starts scanning ports on my IP address, that this is not a legitimate access, actually; but there was no actual payload involved, okay.
E
Shall we go? Okay. Well, thanks everyone for staying; this is pretty late in the day, a long day. But this is a measurement study that I did with a group of researchers from various different places to study OCSP and OCSP Must-Staple. So what is that? A lot of you may know: HTTPS is the basis of, you know, browsing the web; it's HTTP over TLS; everybody here at the IETF knows this, okay.
E
So we've got a lot of time, but I'll try to rush through the obvious details and get to the meat of the study. But yeah, so when a browser connects to a website, to securely communicate, the website introduces a private and public key pair to encrypt the channel, and just off the bat...
E
...there's no way for the browser to trust the website, so you bring a third-party certificate authority into the mix. So the website will send its public key to this CA and in return get back a certificate, a certificate signed by the CA's key, and this is what's presented, with the public key, to the browser; and this is the basis of the web PKI. So the browser checks the certificate chain: the certificate is signed by an intermediate certificate, then some root certificate, and that root certificate is trusted by the browser, and...
E
...bing bada boom, secure communication on the Internet, right, at least on the web. So what happens when a certificate is no longer valid, say if the private key has been stolen by an attacker? Then this attacker can impersonate the website if they're, say, on path, and this is a bad thing. So this is basically how it works.
E
So if the site doesn't realize that this compromise has happened, then there's not much they can do; but if they do realize that the compromise has happened, they can talk to the certificate authority and tell the CA to revoke the certificate. So now, when the browser receives a compromised certificate, it's the browser's responsibility to check to see if the certificate is revoked, and then essentially no longer trust this attacker.
E
So there are several mechanisms that have been standardized for this: the CRL, the certificate revocation list, and OCSP, which is a query protocol that we are going to be studying in this research. So the browser checks, and voila, it's not good. So how does the CRL work? This is sort of simple background here. A CRL is just a list of revoked certificates' serial numbers, and the browser will periodically download it; the URL containing the CRL can either be embedded in the certificate, or it's something that the operating system repeatedly fetches from the certificate authority, and yeah.
E
You just check to see whether the certificate is part of the CRL, and if it is, then it's revoked. So this is not necessarily efficient: we've seen CRLs that can be up to 76 megabytes; after Heartbleed in particular there were a lot of scaling issues with the CRL. So this is where OCSP comes in, as a slightly more efficient protocol.
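The CRL check described above reduces, conceptually, to set membership on revoked serial numbers; a minimal sketch with made-up serials:

```python
# A CRL is conceptually a set of revoked serial numbers; the client-side
# check is just membership.  Serial numbers here are illustrative.
revoked_serials = {0x0A1B2C, 0xDEADBEEF, 0x42}

def is_revoked(serial, crl):
    return serial in crl

assert is_revoked(0x42, revoked_serials)
assert not is_revoked(0x43, revoked_serials)
```

The scaling problem mentioned in the talk follows directly: the whole set must be downloaded and kept fresh even to answer a single membership query.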
E
So OCSP is more of a query protocol: when the certificate gets back from the website to the browser, the browser will specifically query a server stood up by the CA to ask "is this certificate revoked or not?", and it will respond with revoked, good, or unknown, and this is valid for a specific time period; and if it's revoked, the connection is closed. So what are, I guess, the challenges of this? Specifically the OCSP responders: the CA needs to provide a service, a high-availability service with low latency.
E
So if you're going to be blocking your connection on doing this remote HTTP request, that might actually slow down your connection; so this is kind of a bad thing, it has to be low latency. And it also has privacy concerns as well: this CA is not really involved in the user's connection to the website, but this communication reveals the traffic patterns of the browser. So this is not great, and this is where OCSP stapling comes in; this is why it was introduced, and the basic idea is very simple.
E
The web server itself will do the certificate revocation check, obtain the OCSP response, and then include it in the TLS handshake, if requested by the browser. So in this method, these OCSP responses are typically valid for seven days or so, so the website can just fetch one for the site, put it in, and whenever the browser says "I support OCSP stapling" it gets an OCSP response and can check the validity. So there's no additional latency, no blocking on remote HTTP calls, and the CA is unable to track. So this is...
E
...a pretty good thing compared to the standard OCSP mechanism. So what are the challenges here? OCSP stapling doesn't solve all the problems with OCSP. One major problem is that clients will accept a certificate even if they're unable to check revocation. So a lot of browsers have kind of made the optimization that revocation checking is really not worth the hassle; specifically with OCSP checking, almost all browsers at this point, because of the latency hit, just won't do it.
E
So if they don't obtain the OCSP response within a certain amount of time, then they'll consider this revocation check as "okay, this is just too much of a pain, we'll just consider it trusted"; and so an attacker could potentially block this. And this is where the idea of OCSP Must-Staple comes into play. This is a new X.509 certificate extension; it's based on RFC 7633, TLS Feature; it's essentially a flag in your certificate.
E
It says: if you understand this extension and you see the certificate and it does not come with an OCSP response, then consider it invalid. And so everything else is essentially the same; this certificate just cannot be used without a staple by a browser that understands OCSP Must-Staple. So: no additional latency, no privacy issues, and no soft-fail either. This is the high-level overview of the state of revocation online. So, to support...
E
...OCSP Must-Staple, the CA must include the Must-Staple extension and run a reliable, error-free OCSP responder; I guess we'll discuss a little bit later what that means. Clients must also support and understand the extension, as well as present the request for OCSP, and, really, if they see a certificate that does not follow this TLS feature, then they must refuse. So, to support OCSP Must-Staple, web servers need to fetch and cache OCSP responses.
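The hard-fail versus soft-fail distinction drawn above can be captured in a few lines; this is a sketch of the decision logic implied by the talk, not any browser's actual code.

```python
# Decision logic implied by OCSP Must-Staple: with the flag set, a
# missing or invalid stapled response is a hard failure; without it,
# browsers in practice soft-fail and accept the certificate anyway.
def accept_certificate(must_staple, stapled_response_valid):
    if must_staple:
        return stapled_response_valid  # hard fail when absent/invalid
    return True                        # soft fail: accept regardless

assert accept_certificate(False, False)     # legacy soft-fail behaviour
assert not accept_certificate(True, False)  # Must-Staple blocks the attacker
assert accept_certificate(True, True)
```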
E
They must configure the server to use OCSP stapling; and yes, so there are these three components, and they all have certain properties and things that they must satisfy to make this ecosystem work; and that's what this study was measuring, all three pieces together. So, generally, yeah: we looked at the certificate authority side, we looked at the website side, and we looked at the browser side. So, in this measurement study, we were hoping to understand how close the web is to being ready for OCSP...
E
...Must-Staple, whether it's universal deployment or just even a small deployment, because there are currently very, very, very few certificates that have this property, somewhere around ten or so in use according to the certificate transparency logs. Alright, so let's look at the OCSP responders: availability, validity, and consistency with CRLs.
E
From this group we ended up having 536 unique OCSP responders; so of those 77 million certificates there were, you know, around 15,000 or so with distinct responder URLs, and we went to these responders and set up a measurement client: gathering these certificates, we would just do OCSP queries to each certificate's OCSP server. In order to get a better sense of how this works globally, we deployed this in six different AWS regions, and, I mean, the reason for this...
E
...is that it's hard to know the exact status of OCSP responders and how they're configured worldwide. This is not the greatest, most distributed test, but it at least covers different continents and different availability zones, and it did reveal some interesting things. So we sent requests every hour to monitor the availability and status of these OCSP responders for about four months, and we were able to analyze around 50 million OCSP responses. There are three main observations from this. Yep, so April to September we're...
E
So it's probably a good chart or a bad chart depending on how you think of things, but it turned out that the first observation is that we were actually never able to successfully receive responses from all OCSP responders in a given hour, for any of the measurements. On average, 1.7 percent of requests failed, so we're not even into the two nines of availability with regard to our sample set, which we think is relatively representative of the web. And during the measurement period there was at least one measurement where the client was never able to make a request to an OCSP responder, even though we sent 750 or so. There are a couple of spikes here and there; we'll dig into those right now. So yeah, here you go: 29 responders had at least one that failed.
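The availability figures quoted above come from aggregating hourly probe results; a minimal sketch of that aggregation, with a synthetic probe series matching the 1.7% average failure rate:

```python
# Turn a list of hourly probe outcomes into the availability figure
# discussed above, and check it against the "two nines" bar.
def availability(results):
    """results: list of booleans, one per hourly probe (True = success)."""
    return sum(results) / len(results)

probes = [True] * 983 + [False] * 17  # synthetic series: 1.7% failures
avail = availability(probes)
assert abs((1 - avail) - 0.017) < 1e-9
assert avail < 0.99  # short of even two nines, as observed in the study
```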
E
Some of these ended up being fixed, because we told the OCSP responders that something was going funky, but the failure rate varies across the different locations: the average rate was between 2.2 percent in the Virginia data center, AWS East, and 5.7 percent from São Paulo in Brazil. So one of the interesting angles was one particular responder; this was a specific example: during the first three months of our measurement period, the measurement in São Paulo could not get any responses from this OCSP responder, which serves, actually, the Wells Fargo certificate.
E
So after we contacted them, they fixed this; that's this chart, right, this spike right here: the issue was fixed on August 31st. So yes, there were 404s for that entire period, right there, till it was fixed. Alright, so, availability: transient failures, that's this blob right here on the left. Sometimes OCSP responders that serve OCSP responses for a large percentage of the web would just go down; we observed that some were temporarily down for at least a few hours, even multiple days in some circumstances. So this is an example, around April 25th:
E
This actually only happened for the measurements in Seoul, Sydney and Oregon, which are all in the Asia-Pacific region, and this turned out to be due to OCSP servers maintained by Comodo, the Comodo CA, which, I believe, just changed its name this week after an acquisition. But yeah, so all OCSP requests to this service were not served.
E
And, interestingly, we also observed that some OCSP servers that are related to Comodo were also down: for example, Gandi and a number of other providers actually CNAME to ocsp.comodoca.com, and these other CAs' OCSP is just provided by Comodo. So all of these went down, and that's your spike down to 97 percent.
E
So what's the impact on the web? How did this outage really affect OCSP online? If popular OCSP responders experience an outage, it could be a serious problem, because so many certificates actually rely on OCSP, and the clients will not be able to check. So, to measure this impact, we estimated how many popular websites, from, say, the Alexa top million...
E
...were unable to fetch fresh OCSP responses due to this specific outage, and this is the result: we observed a couple of spikes during our measurement period, which means that many popular sites were unable to fetch OCSP responses during that time. These spikes were due to single OCSP servers going offline.
E
So you're looking at tens of thousands of sites out of the top million, in the least case. So this is Comodo down for two hours, StartCom's servers down, DigiCert's were down; some of these CAs are very popular and used by lots and lots and lots of different sites. So, availability of OCSP, general conclusion: OCSP responders are not fully reliable. There are reasons you can kind of intuit as to why that is the case, and we'll kind of get to that at the end.
E
So the second thing that we measured here, and you can see that the y-axis is actually a lot lower in this case, you can barely see it, is the validity of the responses. This is, you know, pretty okay. So let's see how many of the OCSP responses that were successfully received are actually valid. OCSP responses can be wrong for multiple reasons; the most representative ones are:
The
format
is
not
a
SN
one,
which
happened
sometimes
or
the
serial
number
of
the
OCSP
response
is
different
from
the
OCSP
request,
or
sometimes
the
signature
was
not
valid.
There
are
a
lot
of
different
ways
that
this
can
go
wrong
and
we
found
them.
We
saw
them
all
so
here's
the
result
generally
most
of
the
responses
are
valid,
but
we
often
see
the
consistent
error
from
some
Oh
CSB
responders,
which
returns
only
zero.
That's
the
the
red
line
here
around
two
percent
so
mostly
valid,
but
there
are
some
specific
errors.
Okay.
E
So the last thing that we looked into here was consistency. The CRL is, I guess, widely used as well, so OCSP versus CRL: let's see if they return the same results, and which one is correct. We know that CRLs and OCSP responders should return consistent results: if a certificate is revoked in one, it should be revoked in the other. All right, so...
E
To measure this consistency, we need to obtain CRLs. So we went to the Alexa top million again, took only the certificates that supported both OCSP and CRLs, and extracted the CRLs; from this process we were able to extract 1,500 CRLs, which contained two million members, or revoked certificates. And before asking for the revocation status of the serial numbers, first we eliminated, right: so we went to the ones that were posted in both OCSP and CRL, got...
E
Some of these entries were expired, so we eliminated the expired ones and cross-checked with Censys. Filtering this down, we ended up with right around 700,000 or so unexpired serial numbers derived from CRLs for which we could check OCSP. So what's the expectation? It should be that every single serial that appears in the CRL returns revoked when checked via OCSP. This turned out to not be the case.
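The cross-check described above can be sketched in a few lines. This is an illustrative reconstruction of the logic, not the study's actual pipeline; the OCSP query is a stand-in callable.

```python
from datetime import datetime, timezone

def cross_check(crl_entries, ocsp_status, now=None):
    """crl_entries: iterable of (serial, cert_expiry) pairs taken from CRLs.
    ocsp_status: callable mapping a serial to 'revoked'/'good'/'unknown'.
    Returns the serials the OCSP responder disagrees about."""
    now = now or datetime.now(timezone.utc)
    # Drop entries whose certificates have already expired.
    unexpired = [s for s, expiry in crl_entries if expiry > now]
    # Every unexpired serial listed in a CRL should be 'revoked' via OCSP.
    return [s for s in unexpired if ocsp_status(s) != "revoked"]
```

Any serial returned here is an inconsistency of the kind the talk reports: listed as revoked in a CRL, but not reported revoked by OCSP.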
E
Okay, and after we contacted them, they told us they would fix the issue. All right: web servers. Let's step down to the next link in the ecosystem chain, web servers. A web server that is OCSP must-staple compliant should fetch the OCSP response, cache it, and serve it during its full validity period, and it should handle errors: if it is unable to successfully fetch a fresh OCSP response, it should reuse the previous one, as long as it's not expired. That's the expectation for your server.
E
We want to see that the web server proactively fetches those OCSP responses. If web servers do not prefetch OCSP responses and instead fetch them on demand, there can be unnecessary latency. It's kind of the latency problem in reverse: rather than the client having to fetch OCSP and check it, you connect to the server and the server then goes and fetches OCSP. This is unnecessary latency. From a caching perspective, you have to remove expired things from the cache; this is just what you should do in any sort of caching situation. And the third is availability.
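The prefetch/cache/error-handling expectations above amount to a small stapling cache. This is a minimal sketch under those stated expectations (names and the fetch callable are illustrative, not any server's actual implementation): refresh before nextUpdate, keep serving a stale-but-unexpired response when a refresh fails, and never serve an expired one.

```python
import time

class StapleCache:
    """Toy OCSP staple cache matching the talk's expectations."""
    def __init__(self, fetch, margin=300):
        self.fetch = fetch        # callable -> (der_bytes, next_update_ts)
        self.margin = margin      # refresh this many seconds early
        self.resp = None
        self.next_update = 0.0

    def get(self, now=None):
        now = now if now is not None else time.time()
        # Proactively refresh before nextUpdate (with a safety margin).
        if self.resp is None or now >= self.next_update - self.margin:
            try:
                self.resp, self.next_update = self.fetch()
            except Exception:
                pass              # responder down: keep the old response
        # Serve the cached response only while it is still valid.
        if self.resp is not None and now < self.next_update:
            return self.resp
        return None               # expired and refresh failing
```

Note this is exactly where the talk later faults Apache: keeping a response past its nextUpdate violates the expiry check in the last step.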
E
Web servers should periodically ask for fresh OCSP responses before they expire; this is just the only way this is ever going to work, right. So how did they do? Well, on prefetching OCSP responses: neither nginx nor Apache does this, which means that the very first connection to a server running Apache or nginx with OCSP stapling will not serve you an OCSP response.
E
They don't block, because of the performance delay; instead they do kind of what browsers do, in that they just send off an asynchronous request to go fetch it. We found that they both cache the OCSP response, but Apache did not respect the nextUpdate field in its cache, so it did keep around some old OCSP responses, whereas nginx did fetch a fresh one. And handling the case where there is no OCSP response, or an error, is something else that we found was slightly lacking in the Apache web server.
E
Alright, so to support OCSP must-staple there are three things a client needs to do: understand the extension when it is present in the certificate, send the certificate status request extension, and then reject the certificate if the stapled response is not provided. So basically, this is the methodology: send the status request, see if the client rejects the certificate when no staple comes back, and also check whether it sends an additional OCSP request of its own, just in case.
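The client-side decision being tested can be written as a tiny rule. This is a hypothetical sketch of the behavior the methodology checks for (the behavior only Firefox exhibited), not any browser's actual code:

```python
def accept_connection(cert_has_must_staple, staple_present, staple_valid):
    """Decide whether a TLS connection should proceed, per must-staple."""
    if cert_has_must_staple and not staple_present:
        return False          # must-staple cert, no staple -> hard fail
    if staple_present and not staple_valid:
        return False          # a bad or revoked staple is never acceptable
    return True               # otherwise proceed (client may still do OCSP)
```

Browsers that accept the first case silently are the ones the talk calls "not ready" for must-staple.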
E
So if you have an OCSP must-staple certificate, you probably don't want clients to also fetch OCSP, because then you lose all the privacy guarantees, as well as potentially the latency guarantees. Okay, so first we tested multiple browsers, as well as multiple mobile browsers. First, we noticed that all of them ask for stapled responses, which is good; this means that all of them support OCSP stapling. However, we did observe that only Firefox displays a certificate error message.
E
That is, if the stapled OCSP response that the server is required to provide doesn't come. Sadly, all the other browsers simply just accept the certificate and don't send their own request for an OCSP response, or really just don't do any OCSP checking at all after this point. So these results indicate that clients are largely not ready for OCSP must-staple.
E
They don't respect the OCSP must-staple extension at all, and generally most browsers don't do OCSP to any capacity: they request OCSP responses, but they don't respect this extension. So clients are largely not ready, and the additional coding work is likely not too significant, but it's worth thinking through why browsers haven't invested in this technology yet. Okay, so in conclusion: on the bright side, only a few players need to take action to make it possible for web servers to begin relying on certificates with OCSP must-staple.
E
But every single piece of the ecosystem needs to make a change. The OCSP servers need to be more reliable; web server software needs to be capable of consistently serving OCSP responses, handling responder failures, expiring their caches, and doing all these sorts of things; and only then, once servers actually support this fully, can browsers be updated to support OCSP must-staple. We're almost at the end, so I'll take questions.
N
I'll make it very brief. I just want to point out that you looked at popular web servers, but that doesn't quite match up with the origins for a lot of popular web content; you know, CDNs are the origin. So their support for doing the OCSP prefetching and caching and stuff like that is probably going to have a big effect on what bits are actually sent over the Internet. Yep.
E
So let me just continue with the rest of the conclusions. Only a few players need to take action to make it possible to enable OCSP must-staple. The web server software, as I mentioned, can do this: it's possible to have a reliable server that does OCSP must-staple.
E
Sometimes it might require a slightly more complicated infrastructure to keep these responses in sync. There are questions about whether the web server should have external HTTP access, or whether it should be preloaded with configuration; there are some questions about this, but I don't think these are insurmountable problems. So I guess the general conclusion is that there are problems all over here, but they're in the small-percentage range, and a much wider deployment of OCSP must-staple is, I guess,
E
the result of the study: we think it's a realistic and achievable goal, not today, but sometime soon. The data is available at securepki dot org, and we're still measuring; we're going to publish new data every month or so, and there should be a slash IMC2018 path. This is research that was presented at IMC this year, and so that's sort of it.
E
The other spike was in the cache-busting version of the OCSP measurement. Specifically, Comodo and some of these other ones are using CDNs in front to cache the responses, and that wouldn't mask this specific issue for our cache-busting requests. But yeah, that's another reason to be optimistic: if these failures do happen, they're less impactful, because of how much caching is typically used by these OCSP responders.
U
Great work, I love it. Have you looked into IoT environments? Today we are deploying, you know, devices with certificates for which we need to verify revocation; it's very important. So it's outside the browser space, but it's the same architecture: TLS into the cloud, and validating the certificates. Today, as far as I understand, most of these devices, as typically deployed, completely ignore revocation, but at some point we will have to do it.
U
Are you planning on expanding your work to see how this affects all of these? Because at this point it would be on the server side, right? The client would rely on stapling, but the servers have to actually fetch these OCSP responses themselves, and one of the problems that I have seen in many environments is that these devices might not be able to reach the OCSP responder because of network limitations.
E
Yeah, this work is mainly focused on web servers and services that are handled on the web, but in a lot of these IoT scenarios you're talking to a centralized server, so these results still apply with respect to, you know, making it easier for servers in a traditional context to get copies of the OCSP response. And we saw that over HTTP it's not entirely reliable to do so.
E
I tend to think that if must-staple becomes something that people are attempting to deploy, it'll be more likely that these OCSP responders will invest in having more reliability; it's not that hard of a service to really run. But, you know, it kind of would make sense to put it in DNS or some other mechanism, to just have another place to obtain it. I mean, it has a specific lifetime, which matters when it comes to clients actually having to demonstrate revocation status, for example on a client certificate.