From YouTube: IETF102-ANRW-20180716-0930
Description: ANRW meeting session at IETF 102, 2018/07/16 09:30
https://datatracker.ietf.org/meeting/102/proceedings/
A: OK, good morning, everyone — let's get started. I'd like to welcome everyone to ANRW '18, which is not the first ANRW, but it's the first time we've run it the way we've run it this year. I'm really, personally, extremely excited to be in the room with amazing academics and amazing IETF folks. The whole reason we did this was to get everybody together, so I'm extremely excited just to look around the room and see who's here.
A: For the academics: if this is your first time at the IETF, you're sitting with some of the people who make decisions about the protocols that we study in the lab, so this is your chance to talk to them, get their feedback, work with them and so on. For the IETFers: you're sitting in the room with the researchers who do some of the leading work on network protocol measurement and design, so this is your chance to actually work with them and convince them to get their stuff into the standard.
A: So I'm extremely happy and excited to see everyone here, on behalf of myself and my co-chair Dave Choffnes, who unfortunately couldn't make it. OK, so I just wanted to go over what our goals were when we rebooted this conference this year. Our goal is around academic research in wide-area networking and wide-area security — in particular with a focus on the core Internet, and also with a bit of interesting cryptography — sort of mirroring the topics that are focused on here at the IETF.
A: Ultimately, what we'd like to achieve by bringing this group together year after year — which is what we hope to do — is to transition more research results into practice. So I wanted to spend a few minutes this morning on that. I've come to several IETFs — I think this may be my fifth or sixth one, I'm not sure — and it's really hard to understand exactly what goes on here.
A: If you show up for the first time, that is. So I'll take the liberty of trying to expose the academics to what I've learned after five years of coming here. One thing that we as academics really know how to do is disseminate our results: we know how to write papers, give talks, and present them. So here's your chance to give your talk and present it in front of the people who design the core Internet, and that's great. That's the first thing.
A: The second thing I really want to encourage people to do is to not view this so much as a black box. I remember when I first started coming, I thought that if I told people enough about what I was working on, it would somehow magically get into standards and such — and some of it sort of did, a little bit. But it's not that easy.
A: This is your chance to actually connect with the people who know how to work through these processes. So if you're someone who's interested in TLS, DNS, BGP, NTP, RPKI, or certificate transparency: we have the people who work on all of that, and I think most of them are actually inside this room right now. You can talk to them, tell them what you're working on, and try to get them to work with you.
A: The other goal we had was to make this place a home for longer-term networking research. Some conferences are really focused on novelty; this conference is not focused on novelty. This conference is focused on presenting your results and seeing how you make them practical — even if that means you have to write the second or third paper on a topic, because the first one you wrote was completely wrong and completely impractical.
A: We want this conference to be a home for that kind of work, and that's important, because if you look at how things work in the IETF, there's iteration over iteration over iteration. So this is my plea to academics: please don't just hope that others will standardize your work; try to get involved in the process. Now, a little bit about the stats for the conference. Here's the program committee — thank you, everyone, for your hard work.
A: The steering committee did a lot of work other than just steering, including organizing all of this, so thank you to everyone on this list — especially Matt, who responds to my emails within half an hour and does everything I ask. We had 44 papers submitted; we selected 11 talks. We also accepted 18 posters, so there are going to be two poster sessions.
A: It's the same posters at both sessions, in that room over there, so you have access to them during lunch, and then after the last talk session we'll have the poster room open as well — so hopefully we'll have continuous discussion around the posters. Then, to kick off the process and make the conference interesting for the world, we also invited five talks, so some of the talks are invited.
A: The process we used for that: the PC nominated papers, and then the PC selected the speakers, so these invited talks were invited collectively by the whole PC. So those are the stats. OK — I know there are people here who have been doing this for a long time, so probably what I'm saying is not exactly right, but this is my view of how the IETF standards process works, and I wanted to spend two minutes on it.
A: Just telling you a little bit about how you can potentially work through this process, if that's something you want to do. The IETF has this thing called Internet-Drafts. These are not RFCs; they are documents that you write. Anyone can write one — you can write one right now and submit it. There's no requirement that you be at an IETF, or ever go to one. You just write one and you submit it. That's it. So that's an Internet-Draft.
A: The thing is, this document doesn't necessarily carry much weight until you do a lot of other things, which allow you to get your Internet-Draft adopted by a working group. The IETF is divided into working groups; there are lots of different things that people work on: DNS has a working group, TLS has a working group, BGP and so on. So what you want to do is submit your Internet-Draft and somehow get it adopted by the working group.
A: So, for instance, with our draft here, we found a home in the CFRG, the Crypto Forum Research Group — but we actually started in the security area's SAAG group, I forget the exact acronym — so we didn't know exactly where this draft should go. Find a home: post your draft, post to the mailing list, and hopefully you'll be given a slot to present to the working group; then you'll get a lot of feedback from the working group. This will go on for a while, and if you're lucky, your draft will be adopted.
A: You present it to the working group, and eventually you get to something called working-group last call. If you get to that, and you get past that, then you're really on the road to becoming an RFC. I don't really understand what happens after this point, because I've never done it — but if you're here, you're winning.
A: So what I really want to say is: as an academic, if you want to write a draft, just go write it. It's fine; you can do it; nobody can prevent you from doing that. The issue is just getting your draft adopted and finding a home for it, so I wanted to spend five minutes on how you might do that. And just so you see the iteration: this is a draft that was started by Fujiwara — at the top you can see its 0th version.
A: It went through three iterations, and then it was adopted by the working group in June 2016. Then it went through all these iterations — ten iterations — and then it became an RFC, RFC 8198. So that's the process you might have to go through: you have a draft that you revise yourself before adoption; then it gets adopted; then you revise it with input from the working group; and then it becomes an RFC. It's pretty long, but that's how it works.
A: OK, so my last point: how do you actually get through this process? This is my personal trick. My first rule: find at least one IETF native who will help you through the process, and make this person your co-author on the draft. You can write the draft yourself, but they become your co-author. Why is this good? First of all, they'll tell you how to write the draft — they'll help you write it — and they can help...
A: ...you get a presentation slot in front of a working group, and even figure out which working group that should be. The presentation in front of a working group, honestly — for the academics in the room — is just like an academic presentation; there's nothing particularly unusual there. But then it starts to get different.
A: You have to post to the mailing list, keep up with responses on the mailing list, and revise your draft in response to the mailing list; then you go through a process of basically getting to consensus that your draft will be adopted by the working group. This is where the IETF native is key. Getting through that process is complicated, because there are a lot of views in the room, and you have no idea who these people are if it's your first IETF.
A: If you have someone who actually understands the different viewpoints about the protocol and what's going on in the working group, you can get through that process. The other thing is: there are three IETFs a year, and most academics cannot go to three IETFs a year. So if you have this co-author, they can go, they can keep you in the process, and they can make sure the working group is paying attention — and they're also really, really helpful for getting your code deployed in open-source projects and things like that.
A: OK, last thing: drafts need to be watered and fed — they expire after six months. Don't let them: just change something and resubmit so it doesn't expire. There's also a submission deadline two weeks before each IETF; don't miss the deadline. And another really important thing: being here and talking to people is really the way you get consensus around your draft.
A: So it's not just coming in and listening — make sure you're talking to people. Try to get introductions to the people who are influential in the working group you want to influence, and get them to pay attention to your draft; that's a way to push things forward. So that's my advice. Thank you for listening — and the last thing I wanted to say...
B: OK, thanks Sharon, that was great. I'm Nick Sullivan, and I'm chairing the TLS section of this conference. TLS is Transport Layer Security; it's one of the main protocols that the IETF worries about on the security front. So we're going to have four great talks about TLS, the first of which is being set up right now — let's see, you have everything you need? Yeah. I'm very excited for this group of talks.
B: I think there's a lot to learn about studying TLS in the wild, and there's a lot to learn about how TLS has been deployed. It's been an exciting couple of years for TLS in terms of deployment: HTTPS adoption has grown significantly, and we're on the verge of the publication of the latest version of TLS, TLS 1.3. So, without further ado, I would like to welcome Quirin from the Technical University of Munich. Thanks, Quirin.
C: Thank you, Nick. Hello, everyone. I'm a PhD candidate at the Technical University of Munich; it's my first IETF and my first ANRW, and I'm really happy to be here. I'll tell you a bit about measuring adoption of security additions to the HTTPS ecosystem. It's a fairly broad range of additions that we've measured over the past year.
C: In the first paper, we said: the IETF is really bringing a lot of additions to the HTTPS ecosystem over the years, and we set out to measure quality and quantity — how many people use them, and do they use them in the right way, or do they have typos and wrong copy/paste from tutorials when using these security additions? What we did is a large-scale scanning campaign: almost 200 million domains, two vantage points, IPv4 and IPv6.
C: It's really good to use a large target population to get a good sample of the Internet — and not, for example, a top list, but that's a different topic. We also did passive observations from three continents, to also mirror the view of the Internet that people really use, not just a long list of domains that may not be in use. So, straight to the results: we measured all these domains, and a lot of things per domain, and the first thing we looked at is when these additions were standardized.
C: Typically in recent years. How big is deployment? A hundred percent would be about 55 million, and you can see some are quite well deployed. Then we said: we think deployment is correlated with the effort it takes an administrator to set something up, and we had a metric for effort, which also included whether they have to make decisions — do they simply have to say on/off, or do they actually need to think about it and configure something — and also the risk to their website.
C: Because, for example, if you use HPKP and you pin the wrong key, your website is not available anymore — that's the availability risk if you get it wrong. And we find there's pretty much a correlation across these things. SCSV, for example, which is a downgrade-protection pseudo cipher suite, has the highest deployment, and that's because it simply comes with an update of OpenSSL, or whatever SSL library is in use — so most site admins probably don't even know they're using it.
C: We also looked at what typically goes wrong, and which sites adopted things — whether an extension is more adopted by top sites or by sites outside the top 1000. And, as you see, HPKP is high-risk; around IMC last year, when we first published this, Chrome also announced that they will remove HPKP, because it's just too dangerous for people to get it wrong.
C: Then we did a follow-up paper where we looked deeper into CAA. CAA was standardized in 2013, but it only became effective last September. It's a DNS extension — it looks like this — where a site owner can enter which CAs are permitted to issue certificates for their domain. So as a site owner, you can control which CAs you trust to correctly issue certificates for your domain. Per the CA/Browser Forum, checking CAA became mandatory last September.
C: So we said: let's take a deep look into this. Let's see how it gets started on the Internet, what goes wrong, what goes well, and what we can learn from that. We did that from three angles. The first is a controlled experiment, where we set up some test domains, buy certificates, and see which checks the CAs really do, and whether they are in line with the standard.
C: Then we also wanted to see market adoption: how many domains actually use this new extension, or is it just something that got introduced that no one uses? And the third angle: if you're adding something to the DNS, you must be aware that a large portion of the DNS space is not managed by the owners themselves — people host their DNS on some hosting platform, and if those platforms don't support the new DNS extension, it will not gain a big market share.
C: So we also bought domains with the big hosters and had a look at how many support setting and configuring CAA. For the first angle, we set up six test domains. We don't have to get into the details, but you can imagine there are a number of dimensions along which you can do things differently, and we set up those six accordingly. For example, the first one is correctly DNSSEC-signed, and it simply says that no CA may issue any certificate — a very basic test that any CA should get right.
C: The second one, for example, is DNSSEC-signed, but it times out — our server will not respond to your query. In that case you're also not permitted to issue. And there are some more of these. Some are informational: that's where the standard is ambiguous, so CAs may or may not issue depending on how they want to go about it. We added these mainly to get more insight into how decisions are taken in these corner cases.
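The permitted/forbidden decision these test domains probe can be captured in a few lines. Here is a minimal sketch — my illustration, not the speakers' tooling — of the core CAA rule from RFC 6844/8659: a CA may issue only if the relevant record set is empty or contains an "issue" tag naming that CA. The tree-climbing lookup and the DNSSEC/timeout handling discussed in the talk are deliberately out of scope.

```python
# Hypothetical sketch of the core CAA issuance decision (RFC 6844/8659).
# records: CAA (tag, value) tuples of the relevant RRset for a domain.

def caa_permits(records, ca_domain):
    """Return True if `ca_domain` may issue a (non-wildcard) certificate."""
    issue_values = [v for t, v in records if t == "issue"]
    if not issue_values:
        return True  # no "issue" records at all: any CA may issue
    # 'issue ";"' yields an empty issuer domain, which matches no CA,
    # so it forbids issuance by everyone.
    return any(v.split(";")[0].strip() == ca_domain for v in issue_values)

# The talk's test domains map onto this logic:
print(caa_permits([], "letsencrypt.org"))                    # no CAA: True
print(caa_permits([("issue", ";")], "letsencrypt.org"))      # forbid all: False
print(caa_permits([("issue", "comodoca.com")], "letsencrypt.org"))  # False
```

A CA that mishandles the ";" case or a lookup timeout ends up in the red cells of the result matrix described next.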
C: So we set up these six test domains and then did two rounds of testing, one month apart, in which we bought certificates from, I think, 12 CAs. After each round we basically told the CAs which cases went wrong and which went well; typically they were very happy about the free testing and fixed things. If we look at the results, this is basically it: a red X means a certificate was issued that should not have been issued.
C: The columns are the test domains and the rows are the CAs, and I think there are two quite interesting takeaways. First, in whichever column you look, you will find a red X: it doesn't matter which case you're testing, some CA will give you a certificate although they should not. And the "nice" thing, as an attacker, is that you can test an almost infinite number of CAs.
C: So from an attacker's perspective, this is clearly not good: no matter how your domain is set up, you will find some CA that will not adhere to that specific portion of the standard. You can also see that some columns have quite a lot of red. For example, timeouts on DNSSEC-signed domains are apparently not that simple to recognize if you use standard libraries, so a lot of CAs had issues with that.
C: You can also see — though this is not so much a name-and-shame game — that some rows are really green and some rows are really red, so there were also different levels of rigor that the CAs displayed. Good. Then, for adoption, we basically used that large-scale scan from earlier: we have this list of, by now, I think 200 million domains, and we scanned them all to see which ones have CAA records and what the configuration is. And what we did, which is a really nice thing...
C: ...is that these scans run twice a day, are automatically post-processed, and are put on a website. This is the website, and it gets rewritten twice a day; until that server dies, this will keep running and updating the status of CAA. You can also see the growth from last March, when we had three thousand domains, to now — there was a big chunk yesterday, two hundred thousand — so there's a clear uptick in adoption in the market.
C: You can also look at all the nitty-gritty details, like how many use "issue" and how many use "issuewild". One thing that's really interesting is nameserver inconsistencies. I think no one is surprised that name servers are inconsistent on the Internet, but in this case there really shouldn't be any: it's a security feature. It shouldn't depend on which of your authoritative nameservers I'm asking whether I'm permitted to issue a certificate or not.
C: Frequently it was something like: the name servers used different software versions, and some didn't support the record type yet, for example. But that would also mean the trend should be decreasing, so I don't think that's the key element; it's really hard to find out what's wrong with these name servers. You can also see these anomalies — that's typically some hoster changing something and affecting a lot of domains at once.
C: Good. And then DNS provider support: we identified the 31 biggest DNS providers in the com/net/org zone files — these biggest 31 cover 54% of all domains — and for those we bought domains and checked whether CAA records can be configured with these DNS operators. It's looking somewhat good: the biggest one does let you configure CAA, and overall about 50% actually support it, but there's also quite a long tail of hosters that do not support it yet.
C: Also, for example, one of the hosters had it wrong in the beginning — they would set the wrong bits in the flags field — but that hopefully got fixed. Good. So, to quickly summarize: HTTPS security extensions differ vastly in scope and deployment, and the quality of deployment is also quite different. Especially with HPKP, we found quite a few domains that had simply copied tutorials from the Internet and put in the key from the tutorial, which is really bad. There's a bunch of these details in the paper, where you really look at...
C: ...how people set it up — and typically, if you can google a key and find some tutorial, you know this didn't go well. What also stands out is that low-risk, low-effort technologies are widely deployed. So really, if you standardize something, make it as simple as possible, because people maybe won't have the time or the interest; they will just copy-paste something and hope it works. For CAA, we did a deep dive, and we really found mixed rigor...
C: ...let's call it that, on the CA side. We see that market adoption is picking up, and that DNS provider support is a real critical factor if you want to change something in DNS that end users are supposed to configure. We also make the data, software, and tools available — and keep them running — with all the details. Good. Thank you.
C: Thank you. Yeah — and I fully agree: for CAA there is a great generator tool on the web, where you can really just tick boxes and it will output a copy-paste config for your sites. I think these tools really help, and I don't think they are that hard to set up, to put as an addition to an RFC or something.
D: [inaudible]
B: ...and a bunch of other things. So, how have things changed since the measurements?
C: These measurements were from April 2017; we keep running them, but we haven't re-evaluated all of them yet. I know that HSTS is really rising — we had a look last week or so, and it was at nine million or something now, so HSTS seems to be quite well deployed. CT, of course, has seen strong adoption.
C: For CAA, it's clearly hosters enabling it by default for all their customers. For CT, we haven't done a chart over time yet, but I would assume that CT has significantly gone up, mostly driven by the Chrome enforcement.
B: And the next talk is going to be, again, by Quirin. This is the first of the invited talks — another presentation of a talk that was submitted elsewhere — and this one had many more co-authors as well. So I'll give it back to Quirin.
C: Thank you. Sorry that you have to endure me again, but I promise this one's also really interesting. This was joint work with Matthias and Georg; we published it at TMA last year, and it describes one of the ways TLS can really go wrong — like, really badly.
C: What you probably all know — and this has been known for a long time; some reviewers were also friendly enough to point out that this is not new — is that the TLS 1.2 handshake does not encrypt certificates. That's maybe not that much of a problem on the server side, but it definitely is on the client side. Client certificates — if you look at them, the stuff that is in there is just incredible. They're used by VPNs, they often carry your clear name, and they're used by governments for voting and the like.
C: Thankfully, this is fixed in TLS 1.3, but I guess it will take some time until TLS 1.3 is actually the majority of handshakes — and back when we discovered this, TLS 1.3 was not standardized yet. Just as context: where is TLS client-certificate authentication used? It's used in network authentication; it's used a lot by VPNs — OpenVPN is using it, edge connectors are using it; HTTPS is using it; and there are also IoT protocols — MQTT was at least planning to use it.
C: So there's a huge user base. And just as a brief recap of what a push notification service is: basically, they exist to save energy. Instead of all your apps pulling data from all their servers, you have this one channel in the backend where just Apple is telling your phone what's up — there's a new message. These services basically exist in all the ecosystems out there, and they're typically tightly integrated, so it's not possible to disable them, and they're always connected. And the Apple Push Notification service...
C: ...connects every time you connect to a Wi-Fi. On a mobile connection it will remain connected over long times, but if you move between Wi-Fi networks you will have new connections every time you join a new Wi-Fi, so there are a lot of logins to be observed. And what we found is that they used client certificates for logging in to this push notification service. So every time you connect to a Wi-Fi — and don't have cellular Internet — it sends the certificate. The certificate is generated...
C: ...at device setup, with a second-precision timestamp, so I can actually tell when a phone was set up for the first time. And there's very unique cryptographic material in there, like the serial number — and that setup time, which is pretty unique. So you can clearly identify a device if you see it several times. Now we looked at: can this be used to track users?
C: Obviously it can only track devices, but if you can track my iPhone, you can pretty much track me. We looked at two of the four attacker types you could come up with. If you're Apple, you have better means to track your users; also, if you run the Wi-Fi, you have better means to track your users. The interesting attackers are regional attackers that have access to large networks, like a university campus or a city network, and global adversaries that can potentially access several core networks.
C: For the regional adversary, we looked at the Munich Scientific Network and asked: can we track users in this network? It has about a hundred thousand users, so it's a fairly big regional network. For the global adversary, we looked at the routing of these logins — through which core networks they are being routed. Good.
C: So we did some passive capturing on two weeks of traffic, in which we captured these logins. This is private data, so we followed a lot of regulation, from our IRB and also self-imposed: we would not attempt to identify users, we will not publish the identifiable data, and the measurement machine is actually not on the Internet. A lot of precaution, just so this doesn't go wrong. And what we found...
C: ...well, this one is the exception, of course, because I gave informed consent. Then we looked at what percentage of certificates is traceable, and we see most certificates on three or more days. There are thirty-five percent or so that we only see once, on one day, but most certificates you see recurringly — sixty percent or more, or forty percent or more, depending on the category.
C: You can really trace over several days: when people are coming to work, when they are working, all these things. From the certificate type you can tell whether it's an iOS or a desktop certificate, but I won't get into the details; it's just a slightly different kind. Good. The next thing we looked into is whether global tracking is feasible. From the past slides, I would say: clearly, if you're a regional attacker, you can do a pretty good job of tracing individual users. But what can you do on a global scale?
C: What we did: from basically observing all these handshakes, you know pretty well which backend servers these iPhones, macOS, and other devices talk to, and the majority of APNs logins goes through a few central IXPs and ISPs. The APNs backend is, as you would expect, globally quite well distributed, and we ran global measurement campaigns through RIPE Atlas.
C: We looked at how many central networks you would need access to in order to trace users globally, and basically — this is also not surprising, but interesting to see — if you get the top 10 IXPs or ISPs, you can trace 80 percent of devices, and I think there are attackers out there that might have that capability.
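The "top 10 networks see 80% of devices" style of number boils down to a coverage computation. Here is a toy sketch with invented data — not the paper's measurements — of that question: given which transit networks each device's push logins traverse, greedily pick networks until a target share of devices is observable.

```python
# Toy greedy max-coverage: how many networks must an adversary tap to
# observe a given share of devices? Device/network names are fabricated.

def networks_needed(paths, target_share):
    """paths: dict device -> set of networks its logins traverse."""
    all_networks = {n for p in paths.values() for n in p}
    uncovered = set(paths)
    chosen = []
    while len(uncovered) > (1 - target_share) * len(paths):
        # pick the network that sees the most still-unseen devices
        best = max(all_networks,
                   key=lambda n: sum(1 for d in uncovered if n in paths[d]))
        chosen.append(best)
        uncovered -= {d for d in uncovered if best in paths[d]}
    return chosen

paths = {
    "dev1": {"AS1", "AS2"},
    "dev2": {"AS1"},
    "dev3": {"AS3"},
    "dev4": {"AS1", "AS3"},
}
print(networks_needed(paths, 0.75))  # AS1 alone already sees 3 of 4 devices
```

On real topology data, the steep drop-off the speaker describes falls out of exactly this kind of greedy selection.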
C: Given that there are a billion devices and lots of backend servers affected, I was also quite impressed that this was handled so quickly and so professionally: it was fixed in January 2017, in iOS 10.2.1, and in whatever the corresponding iTunes on Windows and so on was. All devices were affected — we didn't test it, but this presumably also applied to watchOS, tvOS, everything; basically everything has the push notification service in it today.
C: Good. And what's happening now? TLS 1.3 is luckily encrypting certificates. There are still some things unencrypted that shouldn't be — SNI, and I'm hearing people are working on that — and also some application-specific data, like ALPN, which I think is under discussion. For me, the takeaway is really — and a lot of people told us this is not new; everyone knows this is unencrypted, it's a privacy leak, it's bad — that still, Apple, who is probably employing engineers who know these things, still used it.
C: So I think there was really a lack of awareness of the scope of the problem: you can easily embed a few traceable identifiers in your protocol handshakes, especially in this always-on scenario, where it's not one login per year but ten logins per day. So be very careful: if you would build any traceable identifiers into your protocol, really try to avoid this.
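The linking attack being warned about is trivially cheap for the observer. A minimal illustration with fabricated log entries — not the paper's data: because the certificate's serial number and setup timestamp never change, grouping passive sightings by that pair reconstructs one device's appearances across networks and days.

```python
# Why a static handshake identifier is traceable: group sightings by the
# client certificate's (serial, setup-time) pair. All entries are invented.

from collections import defaultdict

sightings = [  # (day, network, cert_id) as a passive observer would log them
    (1, "campus-wifi", "serial=0xA1|setup=1467380001"),
    (1, "cafe-wifi",   "serial=0xB7|setup=1481112233"),
    (2, "home-dsl",    "serial=0xA1|setup=1467380001"),
    (4, "campus-wifi", "serial=0xA1|setup=1467380001"),
]

tracks = defaultdict(list)
for day, net, cert in sightings:
    tracks[cert].append((day, net))

for cert, seen in tracks.items():
    print(cert, "->", seen)  # one line per device: its movement profile
```

With an ephemeral or encrypted identifier, the grouping key disappears and the sightings cannot be joined.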
C: We did this at the TMA PhD school: we ran a half-day lab, gave people the paper, and said: in this half day, use RIPE Atlas and try to replicate the results. That was quite a fun exercise, and it also boosts the confidence in your result if you can say that 20 people came to a similar result. So this is also a fun thing you can think about: go to PhD schools and offer exercises; you'll get interested people to work on your stuff. Good.
E: [inaudible]
A: [inaudible]
C: I really don't know how to solve this; at that scale, maybe it's just a thing where we say: let's hope for a better future, and hope there are not too many protocols that do this as well. We looked at some reasonably close protocols like SSH and found the same there too — but there are so many protocols out there. And, sorry...
C: They never told us how they actually fixed it, but they've done away with client certificates. What you can observe now is some ALPN key that just says "I'm talking the new version of the APNs protocol", and then there's something in the encrypted payload, and we don't know what's happening there.
F: [inaudible]
G: ...which provides some analysis and guidance about linkability in your protocol. It doesn't go so far as to say "when designing your protocol, you must not have one global identifier", because at the time it was published — just a month after the Snowden disclosures — it didn't seem feasible to put that requirement on all new protocols. Perhaps that's worth revisiting. So there is some guidance available, but there's nothing strict about it.
B: [inaudible]
H: Thank you, everybody, for coming and for being here. It's a pleasure to talk about some of my past research — specifically this work, which was published at NDSS this year, looking at the security risks that domain-validated certificates, or rather TLS in general and giving out TLS certificates to everybody, might pose, and what we can do about that. The motivation is basically looking at what can happen if there's a takeover attack.
H: For example, in this case of Uber: they forgot to actually remove a DNS entry for one of their uber.com subdomains pointing to EC2, so to Amazon. Somebody figured that out, took it over, and registered it — "hey, I'm actually that host" — and because of the way they had set up single sign-on, it actually caused everybody who went there to disclose their cookies, which then allowed impersonation attacks on Uber users.
H
An attacker could just order rides wherever they wanted on whatever Uber account they stole the credentials from. The reason this actually happens is, largely, stale DNS records. You might have a domain, for example our "cloudstrife" demo domain, which points to an IP address, in this case an EC2 IP address. That one is actually still allocated to us, so no need to try to get it.
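The stale-record situation described here can be sketched as a simple check, under a simplified model where we compare what names resolve to against the set of addresses an organization still holds. All hostnames, addresses, and the `find_dangling` helper below are made up for illustration, not from the talk or any real tool.

```python
# Flag DNS records whose target IP is no longer held by the organization:
# those are takeover candidates in the sense discussed above.

def find_dangling(records, owned_ips):
    """records: hostname -> resolved IP; owned_ips: IPs still allocated to us."""
    return {host: ip for host, ip in records.items() if ip not in owned_ips}

records = {
    "static.example.com": "34.215.0.10",   # long-forgotten entry
    "www.example.com": "198.51.100.7",
}
owned = {"198.51.100.7"}  # addresses the organization still controls

print(find_dangling(records, owned))  # static.example.com is a takeover candidate
```

A real check would of course need live DNS resolution and an authoritative inventory of owned addresses; the hard part in practice is the inventory, not the comparison.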
H
The question here is, of course: what are you going to do when you actually want to migrate away? When are you going to remove an IP address? How are you going to migrate gracefully? Are you going to remove the IP address immediately, or only eventually, at least once the TTL expires?
H
This can actually become a security issue. Just to recap and have everybody on the same page: domain-validated certificates are basically standard TLS certificates, and they are trusted by major browsers and operating systems. For example, Let's Encrypt has been a major player here, and they are credited with much of the rise in HTTPS adoption, and TLS adoption in general. Domain validation is generally either cheap or even free: Let's Encrypt is usually free, while Comodo usually charges some amount of money. Most importantly, it doesn't involve any identity verification.
H
So it doesn't verify that you're actually the person you claim to be; it's not extended validation. It's really just about who controls the domain. One of the major ways this is done today is HTTP-based domain validation, and it works in a pretty simple way. You, as the client or user requesting a certificate, go to a certificate authority, in this case an ACME CA, and you say: hey, I would like to have a certificate, for example for example.com.
H
Then the certificate authority comes back to you and says: okay, please host this challenge and show that you actually have ownership, that you can actually control this specific domain you're claiming to be the owner of. You, as the client or user, then usually say: okay, I'm hosting this specific nonce on my web server, and you instruct the certificate authority to verify that it's actually there, which the certificate authority then does; that's basically step four.
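The challenge-response step described above corresponds, in ACME's HTTP-01 challenge, to serving a "key authorization": the CA's token combined with a hash of the account key (RFC 8555, with the thumbprint from RFC 7638). A minimal sketch follows; the token and the JWK values are shortened dummies, not a real account key.

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    # base64url without padding, as used throughout ACME
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def jwk_thumbprint(jwk: dict) -> str:
    # RFC 7638: SHA-256 over the JSON of the required members,
    # keys sorted lexicographically, no whitespace
    canonical = json.dumps(jwk, sort_keys=True, separators=(",", ":"))
    return b64url(hashlib.sha256(canonical.encode()).digest())

def key_authorization(token: str, jwk: dict) -> str:
    # This string is what gets served at /.well-known/acme-challenge/<token>
    return f"{token}.{jwk_thumbprint(jwk)}"

account_jwk = {"e": "AQAB", "kty": "RSA", "n": "dummy-modulus"}  # illustrative only
print(key_authorization("evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA", account_jwk))
```

The CA then fetches that well-known URL over plain HTTP and compares the response against what it expects, which is exactly step four above.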
H
But if you think about this, it actually doesn't prove that you control the domain itself; what it proves, by proxy, is that you control the host behind the domain. You're not showing that you can control whatever the domain points to; you're showing that you control what the domain currently points to, which might simply be an IP address that somebody forgot to remove the DNS entry for.
H
This, of course, can become a problem, because once an attacker is able to do this, they're getting a trusted TLS certificate, which they could use for man-in-the-middle attacks, for example on the wire or on your local Wi-Fi. Or, if they have access to a large user pool, they could use it in cases where they want to do malicious remote code loading; for example, Chrome nowadays only loads JavaScript over HTTPS on secure pages.
H
The subdomain attacks that I mentioned in the case of Uber might allow you to steal cookies, which might only be sent over secure connections. If it's email, you can exploit the residual trust that might exist in a domain: if you're getting a spam or phishing mail from, say, that old static subdomain of uber.com, or some subdomain you just make up, you're probably more inclined to believe it's from some kind of security office at Uber. The question, of course, then becomes:
H
is this actually a problem? For the Comodo part, you could get a certificate that would be valid for three years. So are there actually a lot of websites where this could have an impact? We decided to evaluate whether this is a problem at scale, so we looked at how many active domains point to IP addresses that could actually be taken over.
H
That could be the case if the addresses are free: anybody who has an EC2 or AWS account, or an Azure account, or a DigitalOcean account, could just try to get that IP address. Specifically, we decided to look at active domains, so domains that are actively being resolved. We looked at the Farsight data set, with data from, I think, around two months.
H
How quickly can you iterate through all of the IP addresses that a specific AWS pool might have? We did this by allocating one IP address every two seconds; we cycled through around 14 million allocations and got around 1.6 million unique IP addresses. Even just allocating one IP address every two seconds, you cycle through the entire pool
H
that a specific AWS region has in less than two weeks. So just by cycling through, you might be able to get a specific IP address in less than two weeks. And an attacker doesn't need to limit themselves to one allocation every two seconds; they can possibly do 10,000 requests a second, so for them it's much easier and much quicker. There were actually around 7,000 domains for which, currently, or rather back in April 2017, the AWS IP address they pointed to was free and unresponsive.
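The cycling experiment above can be turned into a back-of-the-envelope estimate: the coupon-collector bound n·H(n) approximates how many allocations are needed to see every address in a pool of size n at least once. The pool size and allocation rate below are made-up illustration values, not the numbers from the measurement.

```python
from math import log

def expected_allocations(pool_size: int) -> float:
    # Coupon collector: n * H(n), approximating H(n) ~ ln(n) + 0.5772
    return pool_size * (log(pool_size) + 0.5772)

def days_to_cycle(pool_size: int, seconds_per_allocation: float) -> float:
    # Wall-clock time to expect full pool coverage at a fixed allocation rate
    return expected_allocations(pool_size) * seconds_per_allocation / 86400

# e.g. a hypothetical pool of 100,000 addresses at one allocation every 2 s
print(f"{days_to_cycle(100_000, 2.0):.1f} days")
```

The point of the estimate is the same as in the talk: at 10,000 requests per second instead of one every two seconds, these timescales collapse from weeks to minutes.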
H
Somebody could just cycle through the pool, get such an address, and then claim they own some specific domain that they have no control over, beyond the IP address it pointed to at some point in the past. What we kind of need to do at this point is assume that these kinds of attacks will happen in the future, and they might even already be happening;
H
we just don't know about it. At the same time, there's no way we can fix this by requiring some major new deployment or changes to DNS. Just looking at the way DNSSEC has been adopted, and how long that has taken, we can't do anything rapidly there. So what we're trying to do is prevent the abuse a little bit higher up, and specifically we're focusing on TLS services.
H
We want to leverage existing standards whenever possible, so that we're not just making up a new solution that might then take another five to ten years to actually be adopted. For HTTPS there's actually a pretty simple idea: you just use HTTPS with trusted certificates, use Strict Transport Security, and then you also throw in public key pinning, which basically means: hey,
H
now you've pinned the key, which means you don't really need to do anything more, because when a takeover attack happens, the attacker actually needs to have the pinned certificate. Otherwise the browser just says: okay, no, I'm not going to connect to this. That at least reduces impersonation or takeover attacks down to denial-of-service attacks.
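The combination described above is just two response headers. A sketch of building them follows; the pin values are dummy base64 strings, not real key hashes, and, as noted a little later in the talk, HPKP has since been deprecated by browsers.

```python
def hsts_header(max_age: int = 31536000, include_subdomains: bool = True):
    # Strict-Transport-Security (RFC 6797): force HTTPS on returning visitors
    value = f"max-age={max_age}"
    if include_subdomains:
        value += "; includeSubDomains"
    return ("Strict-Transport-Security", value)

def hpkp_header(pins, max_age: int = 5184000):
    # Public-Key-Pins (RFC 7469): pin SPKI hashes; the RFC required at least
    # one backup pin in addition to the live key
    pin_directives = "; ".join(f'pin-sha256="{p}"' for p in pins)
    return ("Public-Key-Pins", f"{pin_directives}; max-age={max_age}")

print(hsts_header())
print(hpkp_header(["d6qzRu9zOECb90Uez27xWltNsj0e1Md7GkYYkVoZWmM=",
                   "E9CZ9INDbd+2eRQozYqqbQ2yXLVKB9+xcprMF+44U1g="]))
```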
H
The problem is, of course, that this doesn't work for SMTP and other protocols that might rely on TLS certificates. In addition, for domain-validated certificates, public key pinning has actually been deprecated since May of this year, since Chrome 67, and we're now at Chrome 68, so that doesn't really work anymore. We need to do at least a little bit more than that, so we kind of need to work on a better solution.
H
So, looking at a better idea: the setup is similar to what we just saw. For HTTP and HTTPS we still use trusted certificates, but now, rather than just issuing certificates, we're asking: hey, is this domain, or the IP address behind it, possibly taken over? And if so, can we prevent a certificate from being issued? In this case it also works for SMTP and other protocols that rely on TLS, because if there's no certificate, then the client hopefully says: okay, I'm not going to connect to this.
H
The question, of course, is how we actually prevent certificates from being issued. How do we identify that this might be a takeover situation, where the IP address is not actually controlled by the previous, legitimate owner? Fortunately, we can rely on something that the certificate ecosystem has had for a while now, specifically Certificate Transparency logs. Certificate Transparency logs, for those of you who don't know, are basically a public, append-only log of any certificate that has been issued. The idea is that you monitor them:
H
in case you own a domain, you check whether any certificate was issued for your domains, and then you can react. They kind of provide you with a near-real-time audit trail of certificates as they are issued; it usually takes a few minutes, or up to 15-20 minutes, until they actually show up. But by themselves they are still problematic, because this is reactive: the certificate is still being issued, which means the attacker's window of opportunity remains. You, as a website owner or domain owner,
H
actually need to go to the certificate authority and say: hey, please revoke this certificate, it should not have been issued. That's usually a manual process, with manual validation that you are actually the legitimate owner, so it's going to take a while. On top of that, every single domain owner needs to monitor this by themselves,
H
and the majority of them are not going to do that. At the same time, we can actually use the logs for lookups at issuance time, since they already exist: we can have a certificate authority look up whether a certificate previously existed that might still be valid, and only issue a certificate if that's not the case, or if you can prove that you still have the old certificate. That's exactly what we did with preventive HTTP-based domain validation, and the way this works is that, for the user, it doesn't really change anything.
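The lookup a CA would do against Certificate Transparency data can be sketched as filtering log entries for certificates still valid "now". The entry shape below is modeled loosely on the JSON that services like crt.sh return, which is an assumption on my part; the serials and dates are invented.

```python
from datetime import datetime

def still_valid(entries, now):
    # Keep only certificates whose validity window contains `now`
    return [e for e in entries
            if datetime.fromisoformat(e["not_before"]) <= now
            <= datetime.fromisoformat(e["not_after"])]

entries = [
    {"serial": "01", "not_before": "2016-01-01T00:00:00", "not_after": "2016-04-01T00:00:00"},
    {"serial": "02", "not_before": "2018-05-01T00:00:00", "not_after": "2018-08-01T00:00:00"},
]
now = datetime(2018, 7, 16)
print([e["serial"] for e in still_valid(entries, now)])
```

If this list is non-empty, the CA knows a still-valid certificate exists and can demand proof of possession of it, as described next.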
H
As a user, you still go to the certificate authority and request a certificate. But now, rather than the certificate authority immediately responding with "hey, please host this challenge", it first gathers some background information on the domain that you want the certificate for. It basically looks up: is there any certificate that currently exists and might still be valid, that I should be checking for, to see whether whoever claims to be this domain owner actually still has a certificate that should currently
H
exist. Then the certificate authority responds in the same way: here's a nonce response with a challenge, and you host the challenge. But this time you are not hosting it on the HTTP version of your website, or whatever service you're using; rather, you're doing this with the TLS certificate, so you're using HTTPS. Now the little twist is that the certificate authority is not only verifying that the specific nonce is there, but it also verifies the certificate that you're using to actually serve the website. Specifically, it means: hey,
H
if there is an old certificate that's still valid, which whoever is claiming to be that website, for example example.com, should actually still have, then we require that certificate to actually be the currently served certificate. So, basically, from the very start, from when the first certificate is issued, we are requiring a chain of trust through the old certificates going forward.
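The extra check, that the certificate presented during validation matches the one previously logged for the domain, amounts to comparing fingerprints. A minimal sketch, where the byte strings stand in for real DER-encoded certificates:

```python
import hashlib

def fingerprint(der_bytes: bytes) -> str:
    # SHA-256 over the DER encoding, the usual certificate fingerprint
    return hashlib.sha256(der_bytes).hexdigest()

logged_der = b"...old certificate DER..."      # what the CT logs say the domain had
presented_der = b"...old certificate DER..."   # what the host serves right now

if fingerprint(presented_der) == fingerprint(logged_der):
    print("chain of trust holds: proceed with issuance")
else:
    print("mismatch: fall back to a stronger (manual) challenge")
```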
H
So if we have a takeover situation, then whoever is taking over that specific domain should not have access to that certificate, while any legitimate owner should actually have it, because they usually renew it when it's close to expiration; hopefully, I mean, hopefully nobody is letting their certificates expire. But of course there are cases where you might have a disaster-recovery scenario: somebody might have lost the certificate, they might have thrown it away, the server died, for whatever reason.
H
In these cases we can use stronger domain-validation challenges, ones that require more manual work. An example is the DNS-based challenges that already exist; there are also cases with WHOIS-based challenges, which I guess with GDPR is now less of an option in the majority of cases. But overall this basically means we can identify these kinds of takeover attacks fairly accurately.
H
We can then prevent TLS certificates from being issued, which means we can downgrade takeover attacks that would have been full impersonation attacks to, at worst, denial-of-service attacks. At the same time, this doesn't really change anything for the end user. It increases some amount of work for the certificate authorities, but it doesn't really have any drawbacks for the user, except in the disaster-recovery case I mentioned, when you need to re-bootstrap the chain of trust. So where do we go from here next?
H
For us, we hope to actually get this out there, maybe as an ACME validation challenge. So if anybody from the ACME working group is here, we'd be happy to chat; we'll actually be attending the session, so hopefully we'll get a little bit more input there. And with that, I would like to thank you for your attention, and I'm happy to take any questions.
B
So on that point, the brittleness that was pointed out by Richard: what are the use cases you thought about for a provider that has a certificate but is unable to hook it into the actual certificate validation? What challenges are there for someone to use this, to use a certificate, which in this case is typically for TLS, for a kind of brand-new protocol? What sort of challenges have you considered?
H
B
The one with the big loop, right? So the newly requested certificate has to match the current HTTPS certificate, and in a lot of hosting-provider scenarios this is not something where you can just send a certificate somewhere else for it to be the certificate used to prove the ACME challenge.
B
H
And generally, I mean, I'm not sure if I'm understanding the question correctly, but the way the current ACME spec works is that even if that challenge fails, you can basically fall back to a different challenge by offering multiple ones at the same time; you only need one of them to succeed. So you could, for example, go with this one and also have the DNS-based validation challenge in there, which might require you to modify DNS records, so that you have a nicer failover between the two.
B
H
You could also do that; it's just that in the majority of cases, at least from what I've seen, and I'm sure Let's Encrypt has statistics on what the defaults are and which kinds of challenges people opt for, people are deploying this on their web server, and they don't necessarily have immediate access to change the DNS. Maybe they can't adjust that themselves, there's no direct API access, for example, or they don't want to deploy changes there immediately.
H
It doesn't really change the problem, right? Just because a CAA record says "only Let's Encrypt is allowed to issue a certificate" doesn't mean the attacker could not just go to Let's Encrypt and say: okay, I would like to do the HTTP-based validation. Right now, at least, CAA records don't restrict which kinds of challenges can be used. That's currently being discussed on the ACME mailing list; there are some open issues about exactly what you're allowed to put in there.
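The CAA semantics just described can be sketched in a few lines: an "issue" record names which CA may issue for a domain, but, as noted, says nothing about which validation method that CA then uses. The record tuples below are illustrative, not parsed from real DNS.

```python
def ca_allowed(caa_records, ca_domain):
    """caa_records: list of (flags, tag, value) tuples, as in RFC 8659."""
    issue = [value for (flags, tag, value) in caa_records if tag == "issue"]
    if not issue:
        return True          # no CAA "issue" restriction: any CA may issue
    return ca_domain in issue

records = [(0, "issue", "letsencrypt.org")]
print(ca_allowed(records, "letsencrypt.org"))  # True
print(ca_allowed(records, "evil-ca.example"))  # False
```

The proposed ACME extension discussed above would effectively add a "which challenge" dimension that this record type currently lacks.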
K
Thank you very much for the introduction. I'm going to talk about studying TLS usage in Android apps. This is joint work of people at Stony Brook University, and more people at the International Computer Science Institute, Princeton, and the University of Massachusetts Amherst. The work I'm presenting here was mostly done by Abbas, who is sadly not able to attend the workshop today. So, nowadays, as you know, encryption is basically everywhere, especially after the Snowden releases in 2013.
K
You've seen that more and more people know that their data should be protected when it's on the Internet, and are aware of the privacy challenges they face when data is not protected. You can see that, generally, more and more web pages on the Internet are encrypted. In 2017, according to a lot of sources, we reached the point where more than 50% of the web is encrypted, and if you look at the quality of the encryption, it has been increasing too.
K
Google has been doing things like downgrading sites that don't use HTTPS, and so on. But now the question is: what does all of this look like on mobile devices? On mobile devices TLS is also important, but the ecosystem looks a little bit different. On Android, about 88% of applications use TLS, but unlike on the desktop, where most people use the browser to access the Internet, built by big groups of people who should hopefully know what they are doing, on
K
Android lots and lots of people use applications that each have to communicate with the Internet themselves, and there are many opportunities to make mistakes when you implement TLS. So what we aim to do in our work is to understand how TLS is being used on Android. Typically, when you want to do something like this, there are two approaches you would first look at: static analysis and dynamic analysis of the code that's running. Both of those aren't really that great
K
in this circumstance, because with static analysis you never get a really full picture, and with dynamic analysis you first have to get the code paths that do TLS to actually execute, and you might not be able to do that. So what we did for this work is use a piece of software called Lumen, with which we do user-space traffic monitoring on Android.
K
For research, we anonymize the data, and generally we are not interested in anything that the people are doing; we're not interested in the users, we are just interested in the software. When someone installs our application, they have to go through quite a few screens that make it clear what we are doing and what they are agreeing to, and you can obviously stop contributing your data at any point in time by just uninstalling the application.
K
Okay, next thing: what exactly do we collect? We collect three key items: the client hello, the server hello including the certificate, and we also include failures of our TLS proxy, which reveals certificate pinning.
K
This study was first presented at ACM CoNEXT last year. In it we have data from 5,000 users across many countries, with more than 1.4 million connections, more than 7,000 applications, more than 34,000 domains, and more than 800 unique device and OS combinations. The data goes from the end of 2015 to mid-2017, so you will not see data in this talk about any later dates. So what do we do with this data?
K
So what are applications using? It turns out that 84% of the applications in our data set just use the default SSL implementation of Android with the default settings and don't change anything, which is a lot. But now the interesting question is: what does the rest do, and why do some applications on Android not use the default settings? There are several answers.
K
In some cases the answer is to improve security, because there are some people who have strong feelings about security and know exactly what they are doing, like, for example, Facebook. They use their own TLS setup, they remove weaker cipher suites, and they use the ALPN extension to negotiate, I think, a variant of HTTP/2 that they are speaking; you cannot do any of that with the default SSL implementation on Android. Twitter uses the default implementation and removes a few weaker cipher suites.
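The kind of check we can run over an app's offered cipher suites can be sketched as follows. The classification is a simplification based on suite-name prefixes (export-grade, anonymous/NULL, and non-forward-secure suites), not the exact methodology of the study.

```python
def weak_reasons(suite: str):
    """Return a list of reasons a named TLS cipher suite is considered weak."""
    reasons = []
    if "EXPORT" in suite:
        reasons.append("export-grade")
    if "_anon_" in suite or "_NULL_" in suite:
        reasons.append("unauthenticated/NULL")
    if not suite.startswith(("TLS_ECDHE_", "TLS_DHE_")):
        reasons.append("no forward secrecy")
    return reasons

offered = [
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
    "TLS_RSA_WITH_AES_128_CBC_SHA",
    "TLS_RSA_EXPORT_WITH_RC4_40_MD5",
]
for s in offered:
    print(s, "->", weak_reasons(s) or "ok")
```

Applied to the client hellos collected by the proxy, a check like this is what separates the Facebook-style hardened configurations from the short, non-forward-secure lists mentioned next.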
K
But naturally, then you have some people who do it wrong: we have some private-messaging apps and VoIP apps that use a really short cipher-suite list that, for some reason, does not include any forward-secure ciphers. And then there's other software that uses its own libraries for different reasons; for example, Firefox uses NSS, which is their own cryptography library that they use on all operating systems where they can, and it is generally really good.
K
So once you buy your phone, even if Google brings out a new operating-system version and it's available for that phone, you might not get it, because your operator also has to make it available for you; and for a lot of phones it's kind of unlikely that the new operating-system version will even be available. Then there are other applications, like some EA game apps.
K
When we did our measurement, Android had SSLv3 enabled on all versions, and we had some applications that allowed NULL or anonymous ciphers; apps with hundreds of millions of installs in app stores. There you can potentially just do a man-in-the-middle attack, where you put yourself in the middle and your server says "I speak this anonymous cipher" and you let them connect to you. If they don't actually enforce that the certificate is present later during verification, which we do not test,
K
you have a man-in-the-middle attack. Then there are export ciphers, which should not be used any more at all and can generally be man-in-the-middled; they are still available by default on Android 4 and below, which as of yesterday was still used on more than 10% of devices, and some big apps offer them. Generally, what we observe is that most apps with weak cipher suites use poorly configured OpenSSL, so they are doing things themselves. So, solutions:
K
what can we do about this? In our opinion, one good thing Google could do is to basically decouple TLS updates from operating-system updates, because, as I noted, the big majority of applications just use the operating system's default implementation, and when that becomes outdated, like with Android 5, which tons of people still use, you speak outdated TLS primitives. So that would be one nice solution.
K
They actually already kind of do that for their own services: Google Play Services bundles its own TLS library, and it is updated independently of the operating system. In general, it would also be nice to give more options to developers, because some developers apparently opted to use their own TLS implementation instead of the operating-system version to be able to set TLS extensions, which you currently cannot do with the operating system's TLS implementation. Another thing we looked at, which builds on earlier work, is that Android root stores often have impurities.
K
Your phone ships with a root store, and we have shown in some of our earlier work that vendors often insert their own certificates into the root store: if you have a phone that's built by LG or another company, they might insert a certificate into your root store because they are using it. Some apps on Android also don't use the root store of the phone, but just ship their own root store instead, or pin server certificates, or use their own certificates.
K
For example, Firefox has its own CA store, and Google and Facebook use certificate pinning, and so on. That is all good and can actually be really great for security, but you can also make interesting mistakes when using something like this. When we look at how these things are being used, most applications just attach to the trust store provided by the operating system.
K
That means that any CA in this store is trusted, including any CAs that your vendor might have inserted into it, which you might not actually want to trust. Then some apps pin certificates to mitigate certain attacks, which is really good. But we also found one app that does this really poorly: there is one major system-recovery app, with root access and so on, that downloaded its root store over HTTP, so in the clear.
K
So perhaps Google could make primitives available to make that easier. Another thing they could do about people using their own libraries is to at least make sure that people are properly educated about what they are doing, and potentially do some basic checking that people use OpenSSL, for example, correctly, or that a basic connection made by the app is at least reasonably secure. They have kind of done something like this in the past:
K
there was a security issue in GnuTLS, where they prevented you from uploading applications that bundled a vulnerable version of GnuTLS to the store, and you got a notification that basically told you how you can fix it. So, in summary, we made the first study of TLS usage in Android apps at scale.
K
The majority of apps just use what the operating system provides by default, and applications are then often potentially vulnerable to attacks when the operating system gets outdated, which happens a lot. With third-party libraries, a lot of people make mistakes during configuration. We found that a small number of apps are doing certificate pinning for security, and we have shown a few potential solutions. And now, if you have any questions, I would be happy to answer them.
K
There are automation tools, but then again you have the problem that the automation tools typically aren't all that great at getting into the right code paths, because once there's a sign-up screen, what do you do? The ones that I know basically tap on the screen more or less randomly and hope that something happens. Our approach, by contrast, actually shows you what is being used in practice. So do…