From YouTube: IETF101-SAAG-20180322-1330
Description
SAAG meeting session at IETF101
2018/03/22 1330
https://datatracker.ietf.org/meeting/101/proceedings/
OK, and before we get started: I'm a new face up here, and we want to say thank you to the old face who has been a great AD for the past several years. So, Kathleen, come up. We got you a couple of things. Now that you have some more free time you won't be reading drafts, so maybe we'll have some light reading material.
So many of you have been involved in, or are at least aware of, what I call the TLS 1.3 experiment, where for several years the TLS working group engaged in a collaboration with a group of academic researchers. The idea was to do, hand in hand, formal security analysis of the protocol while it was being designed and specified. There's a list of it there, an incomplete list of papers I'm citing. For every significant draft of TLS 1.3 there was usually some cryptographic analysis.
P
Some security analysis of various kinds, and various papers were published. This was very productive for researchers, because you could get a lot of papers out of this at top venues, and I'm hoping it was also a good thing for TLS 1.3, because we can have more confidence in the final version of the spec. For those of you who are not really deeply into the TLS working group and how this whole thing went, you might be curious about some things, like: well, there are so many papers out there.
P
You say there were that many analyses; well, what do they all prove? How are they different? How much work did it take, and is it too much work? And, for example, if you have a shiny new protocol, can we do the same thing for that protocol? In particular, we are trying to do the same kind of thing for the MLS protocol.
P
You'll be seeing the BoF for that later today, but if you want to see how this kind of thing works, I'm going to give you a little bit of a flavor. I was told this is a 20-minute talk, so I'm going to speak for 20 minutes and we'll see whether we can answer some of these questions. Next slide, please. Well, before the what and the how of these various verification techniques, it's good to remember the why. Next slide, please. So, consider the four years before TLS 1.3 was designed.
P
You might have expected attacks like these, and the last category is implementation bugs, which sort of feels like: well, of course, any piece of software can have implementation bugs. But, next slide: it turns out that often a lot of the attacks that became famous were relying on a combination of all of these. They were implementation bugs, crypto weaknesses, and protocol flaws all coming together in some kind of perfect storm to create all these attacks. Next slide.
P
So here's an example. I'm going to walk you through the kinds of problems we had to understand: what are the kinds of attacks that these various tools and techniques are trying to prevent? OK, so this is the classic one, crypto 101: the unauthenticated Diffie-Hellman key exchange. Alice sends g^x, Bob sends g^y, they compute g^(xy), and that's going to be the key. And we all know that this is vulnerable to a man-in-the-middle attack.
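The exchange just described can be sketched in a few lines. This is a toy sketch with an artificially small prime and illustrative variable names, not something from the talk and certainly not a usable implementation:

```python
import secrets

# Toy unauthenticated Diffie-Hellman with an artificially small prime.
# Real deployments need a ~2000-3000 bit prime (or an elliptic curve).
p = 4294967291          # 2^32 - 5, a small prime, for illustration only
g = 2                   # generator (unchecked here; this is a sketch)

def keygen():
    """Pick a private exponent x and compute the public share g^x mod p."""
    x = secrets.randbelow(p - 2) + 1
    return x, pow(g, x, p)

# Alice sends g^x, Bob sends g^y...
alice_priv, alice_pub = keygen()
bob_priv, bob_pub = keygen()

# ...and each side computes g^(xy) mod p from the other's public share.
alice_key = pow(bob_pub, alice_priv, p)
bob_key = pow(alice_pub, bob_priv, p)
assert alice_key == bob_key  # both ends derive the same shared secret
```

Nothing here authenticates either party, which is exactly why the man-in-the-middle attack works.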
P
That is, the assumption is that if I don't know x and I don't know y, it's going to be hard for me to compute g^(xy). OK, and this assumption crucially depends on how well this group has been generated, how strong that prime p, the modulus, is. I'm going to talk about this as if it is modular Diffie-Hellman, in some sense, but similar arguments hold for elliptic curves as well. OK, next slide, please. So what if the primes were small?
P
So obviously the crypto proof that we did for SIGMA only holds if that assumption is true for the particular prime and group that we chose. And if you go and look at the records for how small this prime can be: well, right now, people have been able to break it, in academic settings, up to 768 bits. We don't know about nation states yet; at scale people can maybe do better, but basically we can break at least that.
P
Actually, I went one slide too far forward, I guess: we can at least break up to 768 bits. So if you want the proof for SIGMA to hold, you must instantiate it with a group which, cryptographers will say, should have at least 2,000 bits, maybe even 3,000 bits, if you are in the finite-field setting. Otherwise the proof doesn't work.
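To see why group size matters, here is a toy brute-force discrete-log search (not from the talk; all names are illustrative). Against a tiny prime it recovers the private exponent instantly, which is what index-calculus attacks do, far more cleverly, against 512- or 768-bit primes:

```python
# Brute-force discrete log in a deliberately tiny group, to illustrate
# why a proof instantiated with a small prime gives no real security.
p = 101                 # toy prime; a secure finite-field group needs ~2000+ bits
g = 2                   # 2 generates the full multiplicative group mod 101

secret_x = 53
public_share = pow(g, secret_x, p)

def brute_force_dlog(h, g, p):
    """Try every exponent until g^x mod p == h. Feasible only for tiny p."""
    for x in range(p):
        if pow(g, x, p) == h:
            return x
    return None

recovered = brute_force_dlog(public_share, g, p)
assert recovered == secret_x   # the "hard" problem is trivial at this size
```

The whole security argument collapses to the cost of this search, so the proof's assumption must be backed by a group where the search (and its clever variants) is infeasible.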
P
OK, so that's how you interpret the crypto proof: the crypto proof is allowing you to uncover the necessary cryptographic assumptions on the primitives under which the proof holds, and now it's your job to go and instantiate it in a way that the proof is not broken. OK, next slide, please. But protocols don't just support one group, right? I mean, all interoperable Internet protocols are going to support multiple groups, and some of them may be strong and some of them may be weak.
P
So Alice is going to say to Bob: listen, I support these two groups. Bob is going to say: oh, let's do the 2000-bit one, because that's the strongest group we have in common. Then they are going to continue with the protocol and, if all goes well, they're never going to use the 512-bit group. So you might ask: well, why do they support that 512-bit group in the first place? Well, the usual answer is backwards compatibility, right?
P
So if Alice is connecting to, say, an old printer or something which has never been updated and is still supporting some 512-bit group, maybe for that particular connection she is willing to accept that weak group. But when she's connecting to Google, which has a 2000-bit group, she certainly wants to use 2,000 bits. And if all goes well, every connection like this, between the clients and servers we do care about, is going to use a strong Diffie-Hellman group.
P
Hence it is going to satisfy the assumption that we used in our crypto proof, so the crypto theorem still holds. OK, so if all goes well, that holds. But if you go to the next slide: there's a protocol flaw in the protocol I put on the previous slide, which is that there is a downgrade attack. It is a kind of man-in-the-middle attack, again back to the same pattern, over and over again, for Diffie-Hellman.
P
As the person in the middle, what the attacker is going to do is this: when Alice says "I support these two groups", a strong group and a weak group, the attacker is just going to delete the strong group from that message and just let the weak group through. So Bob is going to think that Alice only supports the weak group, that it's some old Windows XP machine or something, and it's going to say: OK, let's do the weak group. So it's going to send back the 512-bit group.
P
Now Alice gets this weak group back from Bob and says: well, maybe Bob is an old printer or something, so maybe I will accept this. So both of them have been fooled into thinking the other one is a legacy device with whom they need to downgrade to the 512-bit group, and now the whole protocol goes through using 512 bits. Obviously the theorem doesn't hold, the assumption is broken, and everything goes bad.
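The downgrade just described can be simulated in a few lines. This sketch uses hypothetical names and a toy message format; it only shows how a man-in-the-middle that deletes offers drives two honest parties to the weak group:

```python
STRONG, WEAK = "ffdhe2048", "export512"    # illustrative group labels

def client_hello():
    """Alice offers both groups, strong preferred."""
    return {"groups": [STRONG, WEAK]}

def server_choose(hello):
    """Bob picks the strongest group the client appears to support."""
    for group in (STRONG, WEAK):
        if group in hello["groups"]:
            return group
    raise ValueError("no common group")

def mitm(hello):
    """The attacker silently deletes the strong offer from Alice's message."""
    return {"groups": [g for g in hello["groups"] if g != STRONG]}

# Without the attacker, the strong group is negotiated...
assert server_choose(client_hello()) == STRONG
# ...but with the attacker in the middle, both sides settle on the weak one.
assert server_choose(mitm(client_hello())) == WEAK
```

Both parties behave correctly at every step; it is the unauthenticated negotiation itself that is the flaw.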
P
The thing to remember here is that even if you try to protect this negotiation, like TLS does, using a MAC or something, it might well be the case, and this is what happened in TLS, that the mechanism being used to protect the protocol from downgrade was itself downgradeable. So you could use the downgrade attack to completely break even the downgrade protection. That's the attack called Logjam, which came out in 2015.
P
It is a combination of this crypto flaw, which is that there are some weak modes, 512-bit export Diffie-Hellman modes, and a protocol flaw that allows us to go from a strong mode to a weak one; compose them together and we get an attack. Next slide, please. Of course, you could just say: let's eliminate all bad Diffie-Hellman groups from all implementations of TLS, and that would be a very nice thing to do, so we have only strong groups everywhere.
P
We end up in this nice world, and you still have one more thing to worry about, which is implementation bugs. What if you can disable, in your OpenSSL configuration, all weak Diffie-Hellman, but maybe there's a bug in the code which will allow the attacker to trigger the code path that leads to weak Diffie-Hellman groups, even though you have officially disabled them? This is the kind of bug that led to a bunch of these attacks, like FREAK and Logjam.
P
There could be other bugs in the implementation of the Diffie-Hellman itself, in the crypto library, so to speak: your Diffie-Hellman computation might be functionally incorrect, it may have memory-safety bugs, it may have side channels, and that can completely bypass the security that you intend for your protocol. So obviously verifying the implementation is going to be important. So we've got crypto weaknesses, protocol flaws, and implementation bugs, and they are coming together to break our security. Next slide, please.
P
So how can we identify and prevent these attacks? Well, we are advocating the use of formal verification tools for this, and formal analysis techniques, whether they are done manually or automatically. There are three kinds of work that we are seeing in this domain, which are things that you can use. The first kind of thing you can do is identify the core protocol that you're interested in and actually prove its cryptographic security.
P
What that means is that you'll have to uncover all the crypto assumptions that you rely on and then show, using a reduction or a game-hopping proof, that your protocol is secure if all these assumptions are met. To do this kind of proof, typically you need to include some kind of cryptographer in your team; it takes a significant amount of time.
P
A second technique, and this is one I will advocate for almost everyone in the room, is that before you even start doing your crypto proof, maybe you should analyze the protocol symbolically to find all kinds of logical protocol flaws that you might have in your protocol. For this there is a whole bunch of really good tools available now; they are fully automated, and you can specify your full model.
P
Of course, after having done all that, I would encourage you also, if you have a prototype implementation or a reference implementation, to verify that it doesn't have any bugs, so that you have at least one clear way of showing that your protocol can be implemented in a secure way. For that as well there are a bunch of tools available that you can use. Next slide, please. The one I will focus on here is the one in the middle. Next slide, please.
P
So all of this effort, like I said, can take months or weeks to do these kinds of proofs, which is not much if you think about the gains that you get from them. It can all be made much simpler if the protocol itself were designed from the very beginning to be verifiable. And what do I mean by this? If you're writing a new protocol spec, what can you do to make this kind of formal method more accessible, more applicable to your protocol?
P
Well, I would give you at least five recipes, five things that you can do. There are many more, of course, but these are five I think are quite important. The first thing, which is kind of obvious, is that your protocol spec or standard should precisely define your threat model and security goals. Now, this is like: obviously, doesn't every RFC do this? No, not precisely enough, actually, because we researchers spend a lot of time actually just writing those things down.
P
Just that is almost a majority of our time, so finding a way by which we can communicate this between spec authors and researchers would be very good, because this is not precise enough right now. The second thing you can do is use standard, well-understood constructions, so we don't have to invent new security definitions for the way you use some combination of encryption and MACing in your protocol, when you could just say "AEAD". If you use well-understood constructions, we can use well-understood proof patterns.
P
The third thing is to break the protocol into composable sub-protocols, and "composable" is actually a key word here, because it means something very technical: if you can prove things about the sub-protocols, the proofs and the definitions meld together well in order to give us a proof of the main protocol. This was a problem with earlier versions of TLS which is solved in TLS 1.3, and it is actually very important for a lot of the proof techniques to work well.
P
A fourth thing is that if you have two different versions or two different modes of the protocol, it's best to completely avoid any key reuse between the different modes. But if you can't avoid all key reuse, then very strictly limiting how a key that is used in one mode can also be used in a different mode is critical. Otherwise, you have to verify all of these things together.
P
The fifth is implementations that are easy to prove correct, and therefore easy to be correct. To do that: there might be some standard data structures and state machines that you're using in the protocol, and it's best to specify them, because without that, people are just inventing their own ways of doing this and getting it wrong. Next slide, please. So the TLS 1.3 experiment: what did it do right? I would say that it actually did a lot of these.
P
This was a big part of the effort: to actually implement many of these patterns, to actually make the proofs even feasible. So all the papers at the bottom were only possible because the design team agreed to make many changes and redesigns in order to make the cryptographic analysis easier. In some cases, well, there was just nothing to be done.
P
For example, zero-RTT: despite many complaints, it remains in the standard. But in many other cases the protocol actually changed a lot, even sometimes at the cost of performance. So I'm going to show you a little bit more about one particular analysis technique, the one I've highlighted there, which is the symbolic protocol analysis technique, which I think can be more widely useful and has already been applied to many other protocols, not just to TLS.
P
Let's set a target. Let's assume that we've got tons and tons of proofs of TLS 1.3, that all the modes of TLS 1.3 are provably secure, as solid as can be; we don't have to worry about them. But we do know, because of all the red stuff before, that there are some modes of TLS 1.2 which are broken.
P
There are some obscure modes, maybe, which are broken. And you also know that, for the foreseeable future, TLS 1.3 is going to be deployed and run on servers side-by-side with TLS 1.2, just for backwards compatibility; for maybe 5 to 10 years this is going to be the case. So how can we be sure, despite all our proofs of TLS 1.3, that an attacker will not just do the same man-in-the-middle attack that we talked about: when somebody is trying to connect with TLS 1.3, just downgrade it to TLS 1.2?
P
Now we are in TLS 1.2. We know there are some broken modes; exploit the broken modes, or just jump down the ladder until we can get to an attack, and then break the protocol. So is there a downgrade attack on TLS 1.3 that would completely avoid all the proofs that we did for it? Hopefully not, because we actually tried to put downgrade-protection countermeasures into TLS 1.3. But how do we verify that they work? Next slide, please.
P
It's not super complicated. There's also, of course, a zero-RTT mode, and in this case we also want to check for downgrades to TLS 1.2, so we will also have to model TLS 1.2 at some point, which is equally complicated. However complex these protocols are, when you actually write down the cryptographic model, or the protocol model, for these protocols, 1.3, 1.2, 0-RTT, 1-RTT, everything put together is about 500 lines of ProVerif. It can be even more compact if you like. OK, so it's like a few hundred lines.
P
You can just model the whole protocol; that's not the hard bit, and I'll show you in the next slide what the model looks like. The hard bit is: how do you specify what your threat model is and what your security goals are, and how do you understand what the results of the analysis are actually telling you? So how do you map your desired goals for TLS down to this tool, and map what it's saying back to what you actually want to prove? Next slide, please.
P
Well, this is again probably unreadable, but this is what the protocol model itself looks like in ProVerif. All I wanted to show you was that if you're used to programming in something like, I don't know, Haskell or OCaml or Python or any of these high-level languages, it won't look very different. You just write down, basically, the sequence of cryptographic operations that is happening in the protocol. It's just in its own syntax, and it has its own formal semantics.
P
There are some threads and some messaging, but OK, it's not going to be rocket science to write down that model. What's interesting, like I said, is how you model the threats. So, next slide, please. In a tool like ProVerif or Tamarin, what you're doing is symbolic protocol analysis, which means that you have a symbolic threat model. What that means is this model, which was originally designed by Needham and Schroeder and then elaborated a bit by Dolev and Yao, is basically that of a network adversary.
P
So the attacker is someone who can read, write, and tamper with any message on the public network. The attacker can also play the role of a client and play the role of a server in any session. The attacker can also compromise certain servers and certain clients by stealing their long-term keys. And when all of this is said and done, we ask: well, can an attacker now break into a session where it has not actually started playing
P
the client or the server, where both the client and the server are honest? Are those sessions safe or not? The one limitation we put on the attacker in this model is that, when we think about strong crypto algorithms, we treat them as completely perfect, so the attacker cannot break strong crypto, cannot guess long keys. There's no probabilistic argument; we just say the attacker simply cannot do it. It's a black box, and that kind of simplifies the analysis a lot.
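The Dolev-Yao model just described treats strong crypto as a black box: the attacker can project pairs and decrypt only with keys it already knows. A minimal sketch of that deduction, computing the closure of the attacker's knowledge over a set of observed messages (the term encoding here is hypothetical, not ProVerif's):

```python
# Tiny Dolev-Yao deduction sketch: terms are tuples, encryption is opaque.
# ("enc", payload, key) can be opened only if the attacker knows `key`.

def close(knowledge):
    """Saturate the attacker's knowledge under projection and decryption."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for term in list(known):
            if isinstance(term, tuple) and term[0] == "pair":
                new = {term[1], term[2]}          # split a pair
            elif (isinstance(term, tuple) and term[0] == "enc"
                  and term[2] in known):
                new = {term[1]}                   # decrypt with a known key
            else:
                new = set()
            if not new <= known:
                known |= new
                changed = True
    return known

# The attacker observes a ciphertext under k1, the key k2, and a pair.
observed = {("enc", "secret", "k1"), "k2", ("pair", ("enc", "m", "k2"), "n")}
derived = close(observed)
assert "m" in derived           # ciphertext under a known key leaks its payload
assert "secret" not in derived  # strong crypto under an unknown key stays opaque
```

Tools like ProVerif and Tamarin explore this kind of deduction, interleaved with all possible protocol runs, exhaustively and far more cleverly.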
P
So in order to deal with downgrade attacks, we also have to extend this classic model a little bit. We have to extend it with the idea that some algorithms might be weak, because if nothing is weak, there's no point in doing downgrades, right? So what we do is we say that every primitive has two versions, a strong version and a weak version. If you use the strong version, it's perfect: the attacker cannot break into it at all.
P
If you use a weak version, you're completely hosed: I mean, that version of the crypto primitive gives you no guarantees whatsoever. OK, and now we want to ask the question: can an attacker who can break weak ciphers, and who can do all those network attacks that we talked about and be a man-in-the-middle in any protocol session, break the security goals of TLS? Next slide, please. And what are these security goals? You have to write them down, right? So in ProVerif you will have to write down: well, what is your secrecy goal?
P
What is your authenticity goal? Do you want to prevent replays? Do you want forward secrecy? Each of these goals you can write down; some of them will be true, some of them will be false. So replay protection does not hold for zero-RTT data, but it holds for one-RTT data. These are all goals you can write, and you can evaluate the protocol and say which messages enjoy these properties and which messages don't.
P
There is a sequence of actions in the protocol model, this 500-line model we wrote, at the end of which the attacker has this message. And if you look at what's actually happening there, you'll find that in that particular trace the attacker has compromised the server's private key and is therefore able to break the secrecy of the message: it is able to impersonate the server and get the message that the client thinks it is sending to the server, but the attacker gets it.
P
If we run this again, ProVerif will go through our whole model, find an entire trace of messages, and say: yes, I found a trace where in fact the attacker can get the message even though it has not compromised the server, because that particular trace uses a weak AEAD algorithm, or a weak encryption algorithm like MAC-then-encrypt or something. Next slide, please. So you can say: OK, now I know that in TLS 1.3 all my algorithms are strong, so I want to remove that case as well, and say: OK,
P
is there any other situation in which this happens? And so on. So you can keep iterating with your goals, and ProVerif will still keep searching through all possible man-in-the-middle attacks and try to find new ways in which that goal could fail. And if you keep going on, next slide: finally you get to this, the strongest goal that we can prove for TLS 1.3 secrecy.
P
It lists all the particular cases in which the message from the client to the server could be known to the attacker, and at this point ProVerif says: no, I cannot find any symbolic, which means network-based, attack that would break this. OK, next slide, please. So how do we interpret that query? Well, if you read through what it actually means, what it says is that all messages on TLS 1.3 connections between an honest client and an honest server are secret as long as the following conditions hold true: the server is honest,
P
the client is honest, plus four conditions. This particular connection should not use a weak AEAD algorithm, which is OK: TLS 1.3 shouldn't have any weak AEAD algorithms. This connection should not use a weak Diffie-Hellman group, which is again OK: TLS 1.3 should not have weak Diffie-Hellman groups. The third one is that the server promises it never uses a weak hash function when signing with the same key that it's using on this connection,
P
on any other session. So even on some other session, if it uses a weak hash function to sign with the same certificate key, that other session could be used to break the session we care about. OK, so this is actually a global property: the server should not use a weak hash function with any other protocol if it's using that key.
P
So essentially, if you look at these, it gives you the minimal set of conditions, or preconditions, under which TLS 1.3 satisfies the secrecy goal that we set, and therefore is downgrade-resilient in some sense, as long as those conditions hold true. Another way of looking at it: it is giving us a limit on all the weak algorithms that we have to watch out for in TLS 1.2, because they might break TLS 1.3. They might be breaking TLS
P
1.2, but maybe you don't want them to break TLS 1.3. So I have focused most of what I've been saying here on TLS, because, you know, they're giving out TLS t-shirts in the back, it's kind of a success story and so on. But of course a lot of the techniques I'm talking about don't just apply to TLS; they apply to any security protocol that you care about, right?
P
So let's look a little bit at OAuth, which is a very different protocol from TLS, just to see how it looks. OAuth is what we are using, most of you probably without knowing it, when we go to, say, Pinterest and say: I want to log in with Facebook. So Pinterest is going to forward me, via an HTTP redirect, to Facebook; I'm going to log into Facebook, and Facebook is going to generate an access token.
P
It's going to redirect me back, via an HTTP redirect, to Pinterest, which now gets the access token from the URL, and I'm logged in. Yeah, it's a double-redirection protocol. It uses HTTPS underneath, but in the protocol itself there is hardly any crypto; I mean, there's no crypto, actually. Inside the access token there could be some crypto; in OpenID Connect, which is on top of OAuth,
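The double-redirect flow just described can be sketched as a toy sequence of function calls. All names and URLs here are hypothetical, and real OAuth adds state parameters, scopes, and a proper token exchange; this only traces the shape of the flow:

```python
import secrets

# Toy model of the OAuth-style double redirect: RP -> IdP -> RP.
issued_tokens = {}   # what the identity provider ("Facebook") handed out

def rp_login_redirect():
    """The relying party ("Pinterest") redirects the browser to the IdP."""
    return "https://idp.example/authorize?client_id=pinterest"

def idp_authorize(user):
    """The IdP authenticates the user and redirects back with a token."""
    token = secrets.token_hex(8)
    issued_tokens[token] = user
    return f"https://rp.example/callback#access_token={token}"

def rp_callback(redirect_url):
    """The RP extracts the token from the URL and logs the user in."""
    token = redirect_url.split("access_token=")[1]
    return issued_tokens.get(token)   # stand-in for validating at the IdP

# The browser walks the flow: RP redirect, IdP login, redirect back.
rp_login_redirect()
back = idp_authorize("alice")
assert rp_callback(back) == "alice"
```

Notice that the security-critical object, the token, travels through browser redirects, which is exactly where the web attacker discussed below gets its leverage.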
P
there is an ID token, which has a little signature inside it, which is kind of important, but really it's very lightweight on crypto. So if you just look at that protocol, it's just, you know, a couple of secure channels with two messages or four messages back and forth. It's very simple, and people have analyzed this for many years now, so we think we should understand this protocol reasonably well, right? But to analyze this protocol there's a big hole.
P
The hole is the following: the protocol is running in the browser, and the browser is a different beast from TLS clients and servers. I might be connecting to Pinterest on one tab and doing this OAuth protocol, while on another tab I might be connected to, I don't know, dancing-bunny.com, and dancing-bunny.com may
P
try to interfere with my OAuth flow. So this is a web attacker: an attacker who is able to run JavaScript on other tabs on the same machine where I'm running my OAuth protocol. And this requires a completely different kind of analysis; it's a way more powerful attacker, in a certain way, than the Dolev-Yao symbolic attacker I showed you. Even modeling this attacker requires you to give some meaning to the term "same-origin policy", which, as many of you might know, is kind of a very fuzzy term at the best of times.
P
So, just to give you an idea: if you look at the OAuth RFC, it is of course 76 pages, and the security considerations, which list a long list of web attacks which could be possible, how they can be ameliorated, and how some of them are ameliorated by the spec, are another 70 pages. So the threat model is almost as large, or I would say much larger in a certain way, than the protocol. So a bunch of researchers in Stuttgart did a formal analysis.
P
A proper security analysis of OAuth 2.0. And a large part of their model is actually just modeling web security: web threats, cross-origin and same-origin stuff, so that they can even state what the security goals are. The paper is 95 pages long, OK, and the model is really, really huge, and they did most of their proofs by hand, but they're trying to automate it now. But that's an example of where the protocol itself looks
P
trivially simple, but its threat model and security goals can actually take a lot of effort to formalize and prove things about. And that is actually important, because in doing that proof these guys found attacks, many attacks. One classic one, which some of you may have heard of, is the IdP mix-up attack.
P
If you squint and look, it's a man-in-the-middle attack, between a client and two servers in this case; it's a classic symbolic attack in the network-protocol sense, but it only appears if you model exactly which features of the web are being used here. There are other attacks which rely on actual details of what code you're using for the HTTP redirect and so on, and these are the kinds of attacks that won't appear otherwise.
P
So I'm almost done. What I wanted to impart to you is that there is this bunch of tools and methodologies that we used for TLS, and that we're hoping to use for MLS, that people can use for a variety of protocols. These kinds of analyses can find bugs and flaws in your protocol and in the underlying crypto, and if you do the proof, they can provide you some pretty strong cryptographic security guarantees. Doing these kinds of proofs certainly requires some expertise.
P
The first thing that we all have to do anyway, whether we are the verification folks or the protocol folks, is to write a spec, right? And if you can write a formal spec, that really eases the way into doing any of these proofs. So I'm going to encourage all of you to at least write one formal spec for your latest favorite shiny new protocol. This spec you can write in any of the languages, in whatever format you like; there's a whole bunch of them that are coming out.
P
Writing down the security goals and the protocol model already sort of exposes tons of things that you may not have thought about, even though you've been thinking about the protocol very hard for a long time. So I think I'm done. There are some links online; I just gave you some pointers to where you can go find these tools. Thank you for your patience. Questions are welcome.
Q
One observation we made was that researchers are motivated to start fairly late in the process, ideally when the stuff is already widely deployed, so that any vulnerability they find has a significant impact, which is kind of understandable from their incentives. Of course, it's not very helpful for us, though. So luckily it worked out differently in TLS 1.3, and now, more recently, with our continuous engagement with the MLS community, but in general I think the only approach that works is to get IETF participants to actually do the formal verification themselves.
Q
This does, however, raise another challenge. As you mentioned, most of the formal analysis takes actually more time than the protocol work itself, and it's difficult to say whether people are really prepared to do that. And the tools that we have are also a little bit varied: there are a number of different tools, you mentioned some of them, but there are many more, and all of them use a slightly different syntax and different properties, and they have certain limitations.
Q
The support is different, and if different groups work on different tools, then reuse is very difficult in general. Likewise, composing different protocols, as we often do, doesn't translate nicely to these tools; so you can't easily say: I'm using TLS, I'm plugging in a specific TLS configuration, I'm adding some other protocol on top of it. It doesn't work that way.
Q
So I think we have to potentially look at what is specifically needed for the IETF protocol community, which tools are available, and which ones are actually still supported, because otherwise you can easily run into a lot of frustration there. But in general, I have noticed myself that most of the IETF protocols are not really at the level where you could immediately produce a formal model out of them, because they are lacking a description of the security goals; what you need for interoperability is often very different from what you need for the protocol analysis. That's...
P
Right, thank you. I don't think there was a question there, but I will just make a comment, which is that indeed this has been a problem: the fact that there are so many different tools with so many different languages. The languages for these tools are designed so that it's easy to do certain kinds of proofs automatically, so they're geared towards proof rather than towards being usable by the most number of people, and that's kind of unfortunate.
P
We identified this, at least in the context of crypto primitives, so I gave a presentation at this year's HACS workshop on hacspec, where we are trying to design one language in which you can write your crypto primitives, and which will compile down to all the various tools, so that everybody doing crypto primitive verification can start from it. Maybe one day we'll have the same for protocols. We will probably try to write an informational RFC on that language which can be used by the IETF, and hopefully for protocols as well we can come up with something.
R
All right, Ted Lemon. So you had a slide, kind of way back towards the beginning, that had a list of five things; I really loved this slide. That's the one, yeah!
R
P
Well, first, there are different levels, actually, right? The last one is fairly protocol-specific. But indeed, let's assume that you wanted to do crypto proofs for some protocol which you're going to be developing; you haven't fully developed it yet, and say you just picked EasyCrypt. EasyCrypt will have a library that will give you
P
these twenty definitions which have been defined, which we understand, which the tool does very well: it'll be UF-CMA for signatures, INT-CTXT, whatever; there are these ten constructions that it already has, so you won't have to add anything. As long as you pick from the standard constructions there, the tools are going to do very well for you; plus you're less at risk of screwing up in actually modeling the stuff, right? So that's the kind of way I would.
P
R
R
So I'm just thinking that if there were a document that was like a toolkit, somebody like me, who's not really an expert, or even remotely an expert, on this stuff, could use it to get close, and then maybe at least know the right questions to ask of somebody who is an expert. That's good feedback, thanks, thanks.
S
I am very keen on formal methods as a pedagogical exercise, because the thing that it teaches you more than anything else is the value of simplifying your design: if you don't, well, it is always easier to simplify your design than to construct a proof for a complicated one. I am a little bit cautious in that one of the effects that I've seen is people going for things that we can prove are correct over things that are robust in the face of a poor implementation, and those things are not always the same.
S
That said, one of the things that this might help us get over is that we tend to use TLS a lot as the hammer that we know, because it's the one that we use and we're comfortable with; and it's often not the right hammer to use, because it's big, it's complicated, it has a lot of moving parts, and it can go wrong.
S
S
Q
I have a question for you, which is: I was wondering whether you think it's worthwhile to entertain a discussion about the different formal analysis tools and which ones are most suitable for the IETF protocol context. Maybe there are some that shine here. I know, of course, you have some preferences, because you've worked on some of them yourself or defined them yourself, but maybe they are all not quite there in terms of what we need for the IETF community; there are obviously trade-offs here.
P
I mean, yeah, my experience with protocols and my comments can only be based on the few protocols I've looked at, and I know that the IETF is standardizing such a variety of things that it'll be useful to understand what the scope is. The reason we have what I would call two or three leading tools in each of these zones is that each does some things better than the others, both ways, so there isn't a clear winner that I can pick out.
P
Q
To be specific: for example, if I compare a tool like Scyther with F star, we are talking about a completely different type of complexity and level of understanding that anyone here needs to have in order to write something. And it also depends whether we just want to do a formal analysis or also want to have a sort of backend to do, let's say, test case generation, or maybe even generate code.
Q
P
But indeed, we should make the easy stuff easy, I think, and I think that's a good challenge for the research community, and they can be motivated by the fact that there will be people consuming our stuff, you know. And that's in answer to an earlier question you had, which was all about the motivations of the research community.
M
Tobias. You said you have a 97-page paper for a formal model of a browser (yeah, not mine, but): how long would that be for an operating system, and you work on a big one, right? Or even for a platform, like if you're talking about IoT and stuff, where you actually have a physical presence?
P
To be clear, that's a very limited model, yeah. This is just doing the browser security mechanisms, so it's just cookies and CORS and, you know, redirects and HTTP fetch and this kind of stuff; just the things that get exercised by that zone of things. Indeed, there is a project called seL4 out of Australia, and if you look at their work, it's taken them years and years to build a spec for the operating system, and it's a really small operating system element. So this stuff can get very large.
P
You have to, I think, pragmatically choose the component that you care about, and I don't recommend doing whole-system verification; I don't believe in it, because it's kind of nonsensical. I mean that you should pick the component that you care about and try to make sure that one is robust; and the cleaner it is, and the more you can separate it from the rest of the system, the more confidence you can have that what you prove actually means something.
K
T
It's okay, okay. So now for something completely different, although somewhat related, which is a packet format. How many people here have heard of the Sphinx packet format? Just wondering; very few. So, next slide. I apologize that the designer of the packet format unfortunately cannot escape his teaching duties. It's a provably secure and compact mix networking format from George Danezis and Ian Goldberg; so, to relate it to Karthik's talk:
T
it has security proofs. The goal is: how do we send a packet from one point in a network to another while making these packets completely unlinkable? That means completely privacy-preserving against both a sort of large-scale passive global adversary and an active adversary, which perhaps is corrupting the network nodes that are sending the packet around. And it has a bunch of interesting properties.
T
It's about 10 years old; it's withstood the test of time, and alternative designs have in general had some sort of breakage. Keep going, next. So the intuition is, first, that all the packets have to be the same size. This isn't really a security intuition; it's sort of a privacy intuition, so that each packet is indistinguishable from the others. And we need a wide block cipher for this. Why wide block ciphers?
T
LIONESS is the original one; now we have AEZ, now Farfalle. Otherwise, an adversary could corrupt a packet and link the two packets en route. The core cryptographic intuition behind Sphinx is that it's sort of an extended Diffie-Hellman. You have a network, people have keys, you're a client, and you want to send a packet to another node in the network.
T
You do a sort of Diffie-Hellman with that other node, but you can take that Diffie-Hellman and break it up into lots of little Diffie-Hellmans, and then use those little Diffie-Hellmans to pass the packet past each node in an encrypted way, between a bunch of different relays. So this is very similar to onion routing to some extent, but it has a few differences. One difference is that the goal of the Sphinx packet format is to keep the routing information,
T
the Sphinx header, essentially constant size and encrypted, such that at each hop you basically decrypt the header, not the whole packet, and, after checking a MAC to make sure it hasn't been corrupted en route, you can look at the information for the next hop; and no one knows all of the hops except the client. That's, I think, quite a useful trick. We'll go to the next slide, and this is just a sort of large-scale version of how it works. Essentially, the alphas are group elements.
T
The betas are the rest of the packet, the routing information, which is encrypted, and the gamma is the MAC; and there is a payload, as you can see, which is really separate from the header. At each hop in the network you use the shared secret you've established, using whatever PKI method you want (people have very different preferences on this one), and you just decrypt it: you basically de-blind.
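The blinding trick described here can be sketched with toy group arithmetic. This is a minimal illustration, not the real Sphinx construction: the modulus, generator, key values and hash-to-exponent step below are all assumptions for the sketch (real Sphinx uses a proper elliptic-curve group such as Curve25519 and keyed hash functions).

```python
import hashlib

# Toy Sphinx-style alpha blinding over Z_p^*.
# ASSUMPTION: illustrative parameters only; 2**127 - 1 is a Mersenne
# prime, but this is not a safe group for real use.
P = 2**127 - 1
G = 3

def H(*vals):
    # hash group elements down to a nonzero exponent (blinding factor)
    h = hashlib.sha256(b"|".join(str(v).encode() for v in vals)).digest()
    return int.from_bytes(h, "big") % (P - 1) or 1

node_privs = [1234567, 7654321, 2468135]        # mix-node secret keys
node_pubs = [pow(G, x, P) for x in node_privs]  # published y_i = g^x_i

# Client side: from one secret, derive a chain of blinded group
# elements alpha_i and the per-hop shared secrets.
x = 999331
alphas, secrets, exp = [], [], x
for y in node_pubs:
    alpha = pow(G, exp, P)   # alpha_i = g^(x * b_0 * ... * b_{i-1})
    s = pow(y, exp, P)       # shared secret with this hop
    alphas.append(alpha)
    secrets.append(s)
    exp = (exp * H(alpha, s)) % (P - 1)  # fold in the blinding factor

# Node side: each hop recovers the same secret from alpha_i alone,
# then re-blinds alpha for the next hop; no hop learns the others.
alpha = alphas[0]
for priv, s_expected in zip(node_privs, secrets):
    s = pow(alpha, priv, P)             # node's Diffie-Hellman share
    assert s == s_expected              # matches the client's derivation
    alpha = pow(alpha, H(alpha, s), P)  # blinded alpha_{i+1}
```

Each hop sees only its own alpha and secret, yet the client can precompute every hop's secret from the node public keys alone, which is the "one big Diffie-Hellman broken into little ones" intuition above.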
T
T
That's the core intuition. Next slide. And you can see, you know, those are the details; if you want the details, I highly recommend you look at the paper, or we have a draft spec. But now, as for why you would want a standard for this (this seems a bit academic), next slide: well, there are a lot of use cases, and even though the original work happened quite a while ago, there's been increased interest in anonymous communication technologies. Some of this is actually coming from the blockchain world.
T
So now there's a lot of interest in next-generation encrypted and privacy-enhanced messaging; you see the MLS work booting up here at the IETF. But, you know, encrypted messaging is only half the battle: how can you make sure nobody knows who's communicating with whom? And this is where the Sphinx packet format is absolutely necessary.
T
Strangely enough, the same general intuition is used in voting systems: the state of Greece is already deploying Sphinx in their e-voting systems through the work of GRNET, and Estonia, with probably the most advanced e-voting work, is also looking at this format, although they don't currently use it. And you're seeing some adoption in connection with the General Data Protection Regulation; so, for example, SAP has started looking at working with mix nets and Sphinx on privacy-enhanced statistical analysis of data.
T
T
You know, all this work basically no longer maintains the critical assumption of Sphinx, which is that your routing information is fixed; it's made dynamic, and there's a lot of work on how to do that that we haven't really settled. This is not really a crypto problem; this is a network-level engineering problem. Also, the cryptographic requirements have changed since the work was originally put out there; for example, what we're seeing due to all the discussions around elliptic curves. In the original paper,
T
you could use RSA and elliptic curves, but now we're seeing, you know, some people use Curve25519 and other people use different curves. The wide block cipher space is a bit of a mess: we don't have any CFRG recommendations on it, there are known attacks today on AEZ, and there are efficiency concerns. A lot of the latest changes we're seeing come through the work of, for example, Adrian Perrig on SCION, which also uses Sphinx as its underlying packet format; it's pretty exciting work that we believe will be deployed.
T
The issue is that there are different assumptions about, you know, how many times you should encrypt or decrypt (basically different threat models), and these lead to essentially different extensions to the core. And so we would hope, next slide, that we could eventually get a spec. We believe this incompatibility is caused because we don't actually know many of these people, particularly with all the blockchain work going on: what they do is they Google "Sphinx", they read the paper.
E
U
All right, so this is a very quick talk on Capsule, a protocol for secure collaborative document editing. Next slide, please. Haven't you all been there? You open Google Docs and you're typing up your plan to conquer the world, or your very secret work secrets, collaboratively (no, wait, hold on) collaboratively editing. But you don't know where that document is going. You need to collaborate with your coworkers.
U
However, it's being shared on Google servers, where it's available in plain text, or on Microsoft servers. Don't you wish there was a better solution? Next slide, please. Introducing Capsule, a better solution. Capsule is a protocol for end-to-end encrypted collaborative document editing. It has security goals like participant list integrity, confidentiality (wow), integrity (amazing), authentication (whoa), transcript consistency (incredible); all in one protocol. Next slide, please. You see, in Capsule we take the patented, and you've really gotten tired of it, but not in the way we do it:
U
blockchain! The document is essentially a hash chain of diffs. Access to a document is obtained by sharing a simple ID. Incredible! Users must prove knowledge of the ID to participate in the document. Wow, I used the word "proof"; I must know what I'm talking about. Primitives: BLAKE2 for symmetric operations, and Curve25519 (DJB designed it; it's fashionable). Great, so I'm really in with the group, aren't I? So, all right, looks like, yes, the protocol level is formally verified with ProVerif. Whoa.
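The "hash chain of diffs" idea can be sketched in a few lines. This is a toy in the spirit of the description above, not the actual Capsule wire format: the JSON encoding and field names are assumptions for the sketch, and only the use of BLAKE2 follows the talk's stated primitives.

```python
import hashlib
import json

# Toy hash chain of document diffs. Each entry commits to the previous
# entry's hash, so reordering, inserting, or editing history is detectable.
GENESIS = "0" * 128  # length of a blake2b hex digest

def digest(entry):
    return hashlib.blake2b(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_diff(chain, diff):
    prev = chain[-1]["hash"] if chain else GENESIS
    entry = {"prev": prev, "diff": diff}
    chain.append({"entry": entry, "hash": digest(entry)})

def verify(chain):
    prev = GENESIS
    for node in chain:
        if node["entry"]["prev"] != prev:
            return False  # link broken: history reordered or spliced
        if digest(node["entry"]) != node["hash"]:
            return False  # entry tampered with after the fact
        prev = node["hash"]
    return True

chain = []
for diff in ["+hello", "+world", "-world", "+IETF"]:
    append_diff(chain, diff)

assert verify(chain)
chain[1]["entry"]["diff"] = "+tampered"  # editing past history breaks the chain
assert not verify(chain)
```

Verification walks the chain from genesis, so all participants replaying the same diffs agree on the final hash, which is one way to get the transcript-consistency goal mentioned above.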
U
The guy with the most minutes talked about that; there are also hand proofs. At the implementation level we use F star. Maybe five people in the hall know how to program in F star, that's how incredibly difficult and complex it is, but we use it. And so we have formally verified primitives, we have formally verified the protocol, there's a hand proof for the protocol, and on the implementation level we also, from the get-go, have an implementation with functional correctness and resistance against side-channel attacks, and so on. Next slide, please.
U
So this is an open standard for end-to-end encrypted collaborative document editing. This is obviously a use case that, I mean, this is clearly something we all want. The protocol, I managed to design it in a way that's pretty straightforward; it's really, I think, a simple, elegant protocol. We can prove it's secure very easily, and there's an implementation that's readily available, with nice primitives and clean code. This is an open standard; I think this should be an IETF thing, and I think that's really a no-brainer.
U
So let's talk about that. Everyone, I think, agrees that the use case is important, and it's an easy problem to solve; it's not like TLS, where you have to worry about a bunch of stuff, it's actually pretty straightforward. So you can go to the website, symbolic.software/capsule; there's also the ePrint paper, which is also on the website. Or you can just email me or talk to me, and then we can make an IETF RFC, and then everyone benefits and we solve an obvious problem. So thank you. Anyone?
W
I'm the sacrificial routing guy sent to the lion's mouth here. So, next slide, please. We've got a problem in routing: the control planes we use are often carried over very simple transport layers such as UDP or TCP. Control planes are good targets for attack, and their disruption or subversion can have serious operational consequences for our networks.
W
W
LDP currently uses TCP MD5 for authentication, which is no longer considered secure, and this is frequently pointed out to us when we send one of our documents for publication, including documents that are just a minor extension of LDP itself. So, next slide, please. We've done a small survey (not a particularly scientific one, but a tiny survey just to see what the world looked like), so we asked some operators about TCP-AO, which is the obvious alternative way of securing a TCP protocol.
W
So what we hear from the operators we did speak to is: "We've got no real plans to deploy TCP-AO as long as the vendors support their MD5 implementations." And some of them told us that, actually, there are very few authenticated LDP sessions in use: they use a lot of directly adjacent LDP, where MACsec is quite commonly used anyway. But we also have targeted LDP, which goes over several hops; there the concern is the cost of reconfiguring their network and rolling out TCP-AO.
W
Now, the vendors also answered, saying, naturally enough, "Most of us don't have TCP-AO in our products," though one did say it would be available later. And what they tell us is that they will not implement it until they hear from the operators that it's needed, so this is kind of a just-in-time development strategy that the product managers run. Next slide, please. So what we think we need: we need a security suite that is more secure than MD5. I should say this is for authenticating packets; we don't worry about the routing stuff being encrypted.
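The contrast being drawn here, packet authentication without encryption, can be sketched as the legacy TCP MD5 keyed digest versus the keyed-MAC style that TCP-AO uses. This is a toy illustration: the "segment" below is an arbitrary byte string, whereas the real options cover specific TCP header and pseudo-header fields that are not modeled here.

```python
import hashlib
import hmac

# Toy packet authentication: an RFC 2385 (TCP MD5) style keyed digest
# versus an HMAC, the construction family TCP-AO (RFC 5925) uses.
key = b"shared-session-key"
segment = b"pseudo-header|tcp-header|payload"

# RFC 2385 style: plain MD5 over the segment material with the key
# appended; this is the construction now considered too weak.
legacy_tag = hashlib.md5(segment + key).digest()

# TCP-AO style: a proper keyed MAC. HMAC-SHA-256 is used here for
# illustration; RFC 5926 actually registers HMAC-SHA-1 and AES-128-CMAC.
ao_tag = hmac.new(key, segment, hashlib.sha256).digest()

def accept(tag, seg, k):
    # recompute and compare in constant time, as a receiver would
    return hmac.compare_digest(tag, hmac.new(k, seg, hashlib.sha256).digest())

assert accept(ao_tag, segment, key)
assert not accept(ao_tag, segment + b"tampered", key)
```

The point of the MAC is exactly what the speaker asks for: the receiver detects any tampering with the segment, while the payload itself stays in the clear.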
W
We just want to make sure that no one is tampering with it. So we want something that's more secure than MD5 when used over the longest sessions that support routing. There was someone this morning in the MPLS session saying that they had a routing session that was up for a decade, so we're not talking about short, web-page-style grab-it-and-throw-it-away connections; we're talking about very, very long-running sessions between routers.
W
It needs to be able to run on at least most existing route processors. Now, route processors are not like the processors you have in server complexes; they're not the latest hotshot technology. They're often really quite a Cinderella child in terms of being begrudgingly given some CPU horsepower; that's the way the world has been.
W
It needs to start up and run fast, so that it doesn't impact reconvergence even from a cold start. That is to say, if I have a power-down on the router and I bring it back up, it's got to come up sufficiently quickly that this is not the long pole in the tent in terms of reconverging my network over the optimum paths. Again, it needs to be sufficiently operationally complete and easy to use.
W
So what we're doing: we're looking at TCP-AO as the obvious solution, but we think we probably need to look a bit wider and see if there's anything else that's even more obvious, given the needs that we have. We're coordinating with a number of the protocol groups that will use this (BGP, MSDP and PCEP), and we're talking to a number of the routing groups: IDR, PIM, BESS, RTGWG, PALS and MPLS. A lot of groups are in this same position.
W
There was a discussion this morning in the MPLS working group, and we're here to ask for long-term guidance from the security area as we work on the problem. Next slide, please. So there's more detail on that in the draft, and attached to this slide deck is an excellent slide deck that Jeff has put together, which he gave at the IEPG meeting on Sunday to get their perspective on it. So there's more detail in it, but I'm not going to discuss it today. Next slide, please. So why am I here?
W
I need help, or we need help. The routing teams cannot do this on their own. We need long-term help from security designers, because although we know about routing protocols, we are not security experts. A Rolls-Royce solution is unlikely to find its way into the RFPs that set the implementation specifications.
W
X
Hello, I'm Jared Mauch; I work for Akamai, but I also spent, you know, probably about two decades running and configuring routers, and so I have my short list of things that I've brought up to the mic to make sure that I remember all of them. So, you know, the first thing is: I definitely don't want any of my comments to be construed as a defense of MD5.
X
X
But when we talk about the router vendors and the implementations that exist, there tends to not be good support for this. I have a full mesh of iBGP sessions with, let's say, 130 or 230 devices, and if I need to change the keys on those sessions, I can't just take them all down; it would become a global outage, in some cases, for many of the networks involved. So we need to have a method to do that. As you mentioned, encryption is not needed.
X
We're not concerned about concealing the datagram inside of it; we just need to make sure that the transport is reliable. And things like GTSM are helpful, because we can use GTSM to ensure that at least on a one-hop link things work; but when we have multi-hop sessions (you mentioned targeted LDP), or if we're doing eBGP multihop to feed information to some third party, we have needs around that, and we may not know the distance across the network for setting the TTL.
X
X
At session turn-up, often the person who set it up will have left the organization and not have those keys, and they end up being cut and pasted into future devices. And the final comment is the time horizon: any sort of upgrade here is really out in that five-to-seven-year window. So when we talk about the IETF saying, "yes, we want to stop anything from going forward that includes the current MD5 algorithm,"
X
we need to have time as operators to upgrade to something else. Equipment life cycle is in the five-to-seven-year window, and we may get some software upgrades in there, but in some cases we may not be able to do anything else. So these are some of the key comments I want to make sure that the people who are going to come up to the mic behind me, as well, have in their minds as bounding some of the challenges that we have, from an operations perspective, of running a network.
W
X
Yeah, I have them written down; thank you. But this is a big concern, because we do need that transport-level security: people do go and point packet cannons at the routers and replay well-formatted BGP messages, with the right source ports, etc., against the routers, and they do need to be able to survive that type of attack. This has been known to happen in the wild.
L
Dan Harkins. There seems to be a disconnect here between what you said your problem was and what you're doing, and your call for help. It seems like the problem is the classic security catch-22, where the operators don't want to do anything because the vendors don't support it, yet the vendors aren't going to support it unless the operators demand it.
L
So nothing ever happens. So now you're doing some work in the IETF, and you're calling for help for a less-than-Rolls-Royce solution; but even if that did exist, it's not going to break the jam of your security catch-22. So what you should probably do is ask somebody to hack into one of these networks, to put the fear of God into the operators, who will then light a fire underneath the vendors.
Y
W
So if we go with TCP-AO (and we need to confirm that this is really the right solution), if we were to go with it, we would need the rekeying stuff to be standardised, which I don't think is done yet; last time I looked at the spec, it said this is kind of a work in progress. And we need good agreement on a commonly implemented algorithm; otherwise people will implement different security algorithms and everyone will fall back to MD5.
W
Y
So I would just remind everyone that TCP-AO was developed for routing protocols. It did take into account many of the requirements that the gentleman mentioned, including the ability to rekey long-lived sessions, and it was built with algorithm agility. Now the question really is: what is good-enough security here, and what are the real blockages with the current default algorithms that are there?
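The rekeying property mentioned here rests on overlapping key lifetimes: a key chain lets both ends roll to a new key without tearing the session down. The sketch below is a toy illustration of that idea; the field names and selection rule are assumptions for the sketch, not the actual TCP-AO or RFC 8177 key-chain definitions.

```python
from dataclasses import dataclass

# Toy key chain with overlapping send/accept lifetimes, in the spirit of
# how TCP-AO key chains allow rekeying long-lived sessions without a reset.
@dataclass
class Key:
    key_id: int
    secret: bytes
    send_start: int    # when we begin signing with this key
    send_end: int      # when we stop signing with it
    accept_start: int  # the accept window is wider than the send window,
    accept_end: int    # so peers rolling at slightly different times still match

def send_key(chain, now):
    # pick the newest key whose send lifetime covers `now`
    live = [k for k in chain if k.send_start <= now < k.send_end]
    return max(live, key=lambda k: k.send_start) if live else None

def accept_ok(chain, key_id, now):
    return any(k.key_id == key_id and k.accept_start <= now < k.accept_end
               for k in chain)

chain = [
    Key(1, b"old-secret", send_start=0,   send_end=100, accept_start=0,  accept_end=110),
    Key(2, b"new-secret", send_start=100, send_end=200, accept_start=90, accept_end=210),
]

assert send_key(chain, 50).key_id == 1
assert send_key(chain, 150).key_id == 2
# During the overlap, a peer still sending with the old key is accepted:
assert accept_ok(chain, key_id=1, now=105)
# ...but well after the rollover the old key is rejected:
assert not accept_ok(chain, key_id=1, now=150)
```

Because the accept windows overlap, neither side has to flap the session during the rollover, which addresses the "can't just take them all down" concern raised earlier.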
W
We certainly need to be confident that the many, many generations of route processor that we will meet in the field can do it. Why do I say that? When I worked for an earlier router vendor, the route processors were often Cinderella's children, with rather less CPU and memory than you would have wished.
W
No, I'm not sure it would. Why so? Because we need to get this deployed network-wide, and that means that the old stuff, which is migrating from core to edge and from aggregation to the real edge, needs to be able to take part in the discussion. So I don't know that it's true to say it will all be on a forklift upgrade. Okay.
Y
Z
There are operational practices and operational hygiene approaches that solve this practical problem to a good-enough extent, so there is no urgent push from the user community onto the vendors to implement it. So the vendors themselves don't see much point, at least at this time, in going proactive and implementing TCP-AO. To my knowledge, there is one commercial implementation of TCP-AO shipping.
AA
Kyle Rose from Akamai. So TCP-AO definitely seems like the obvious choice here from a technical standpoint. I don't want to go too much into the details of that, because there may be some crypto modernization that needs to happen around the original algorithms in the second RFC. Okay, great.
AA
So SHA-1 in this application hasn't actually been broken, but we probably want to not standardize on SHA-1 anyway. One of the things that I brought up with Jared before, when we were talking about this at lunch, was: you may want to consider, if you're going to do a rev of this protocol, making the protocol itself a little bit more resilient to individual connection
AA
breakage. Now, I know that you want to bring down sessions when a connection breaks, right? But you could have two parallel connections, for instance, over the same link. That's just an idea I'm throwing out there as some way to say: well, if this connection breaks, or if you need to break this connection because we need to rekey with TCP MD5, we could bring one connection down, rotate the key, bring it back up, and then bring the other connection down.
AA
W
Jared will proceed to tell me why I'm wrong. If we were to do a thing such as that, that is very much in the class of thing where we need help, right? Because we can write routing protocols, but at least the authors of the document are not experts in security protocols, and, likewise, security people are not experts in routing. So this is a good case where the two areas, or three areas, need to get together for a common good.
AA
AA
E
Hi, Eric Rescorla. I was partly responsible for AO, but I'm perfectly happy to concede that it sucks, if you say so. (No, I don't know whether it sucks; it's just that it's not been deployed and
W
E
never got the chance to decide.) Sure, yeah. So I guess what's not entirely clear to me is what, if any, the perceived technical deficiencies are, as opposed to lack of demand. That is to say, as I understand it from your survey, the operators aren't very interested in upgrading, right? And I guess I'm not quite sure
E
why. Is it that the operators are happy with MD5, which is fine by me, I guess; or is it that they're interested because they don't like MD5, but there's something about AO that they don't like? I would want to understand what that was: is MD5 too weak, or what? So I guess.
E
Sure. I guess what I want to understand is: if we designed some other protocol that had different properties, is there some reason to believe that people would like that better? Who knows; that's part of the discussion we've got to have.
W
As a protocol person, I'm not quite sure if they do. Nor am I; that's why I'm here. And if it turns out we've got an impossible problem, we'll write a note and say, "please don't revisit LDP security until something or other happens," and we won't keep getting our usual protocol documents bounced. So I don't care what we do; we want to do the right thing for the Internet. Hi.
AB
AC
AC
So the issue that we have here (keep going, all the way, all the way, to that one, there we go: "What should we do?"): the problem here is not TCP-AO. The issue here, and this is covered in the general presentation, which you can follow here or go listen to in the recording from the IEPG, is that security people are not routing people and routing people are not security people. What you really have, with Stuart standing up there representing the MPLS and other related chairs, is that they're not security
AC
people. They've been told that, as part of their work, the security is not adequate in the protocols; it keeps blocking their documents; they need to do something about it. What do they do about it? TCP-AO is a perfectly valid set of mechanisms to allow them to get their job done. The meta-issue that we really have here is that we have existing protocols that get security bolted on after the fact. You all know, in this room, that the work is best done up front.
AC
So that's one of our problems. The second problem is coming up with a common set of mechanisms in the IETF to allow us to get past some of these misunderstandings that we tend to see. So the thing that's on this slide here is really what I'd like to have the group take as action items. Now, we know that transport security considerations have to be discussed up front.
AC
This means we need a little bit of process help to make sure that, as new things are being introduced, we take care of that. We also need help for existing mechanisms; in this case, the plea from Stuart is: how do we actually do both of these things after the fact, ideally without completely having to rewrite the protocol? There's a lot of code that we have to deal with, and a lot of operational considerations, not to mention operational deployment considerations for the servers using these things: it has to work, it has to scale,
AC
it has to be usable. The other thing we could use significant help with, and this is within your power, similar to what the ops group has done previously for the MIB documents, is some common boilerplate for common profiles of how we could actually use security mechanisms such as IPsec, TLS or TCP-AO.
B
AC
Each of these things, depending on the protocol and the mechanism, may be appropriate for our work. But, you know, the least efficient way for everyone concerned to actually do this work is for us to have somebody write something down, bring it up to the security group at the very end, and then you say, "what the hell were you thinking?"
AC
Obviously, early security review is helpful when it's done at the right spot. We know this is already available to us; a lot of the chairs are just not aggressive about seeking it early enough. This is something where we probably could do a little bit more chair coordination to take care of it. The other thing that we need (and if you scroll back until you see the unicorn): one of the problems that we actually have is, great, we have this nice, wonderful mechanism out there, and draft-bonica actually solves a lot of this problem.
AC
Juniper actually has an early deployment of this; Cisco has a later version of this in the process of being deployed. Now, obviously, code can be written, but until we actually have implementations available, it doesn't do any good to boilerplate "TCP-AO is the solution" if it isn't practically deployable. And that's partially a matter of solving it by encouraging the vendors to go do something about this. A lot of us use stuff that has Linux at the core.
AC
The last thing I'm familiar with is a set of patches from 2010 that I haven't actually dug into myself; it's also emphasized in this deck. You know, one of the conversations that Stephen Farrell and I had over the years: his term is, you know, the routing guys are not the security guys. In terms of even using security mechanisms: if I can't do an API or a syscall or something else to actually make use of this as I'm writing stuff that's going to open up a socket and bang bits out.
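The complaint above, that a security mechanism goes unused unless it is reachable from the same API where the application opens its socket, can be illustrated with Python's `ssl` module, one existing API where secure defaults are a single call away. This is an illustrative sketch, not anything presented in the session:

```python
import socket
import ssl

# The point made above: security only gets used when it is one call
# away at socket-creation time.  Python's ssl module has that shape:
# create_default_context() enables certificate verification and
# hostname checking by default.
ctx = ssl.create_default_context()

# Verification is on by default; the application author does not
# have to remember to turn it on.
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True

# Opening a protected connection is then a thin wrapper around the
# plain socket call (no live connection is made in this sketch):
def open_secure(host: str, port: int) -> ssl.SSLSocket:
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_hostname=host)
```

The design point is that the secure path costs the programmer no more keystrokes than the insecure one.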
AC
B
I have been running around and asking: okay, will you do an implementation? I noted the MD5 deprecation thing was out there, and it was supposed to come with the suggestion that a replacement was defined. When I'm looking at it right now, this seems to be one of the cases where, during the development, some running code for the Bonica draft was actually produced, but the final product seems not to have ended up delivering running code.
B
This is really the right thing to do. I have asked the question once in a while and I didn't get good answers, but when I think about what could drive a rollout in my environment, I would think that the kind of complementary functionality which can be rolled out at the same time, and gives me an overall improvement, could be a very important thing. And for me, as a BGP inter-domain guy, the question would be, well, okay:
B
There is a YANG model for doing the keychain that seemed to be related to the TCP-AO definition out there. Are we in a state where we would expect that the key management could be done using that, so that we have something to look forward to as we improve our installations, for BGP too?
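For readers unfamiliar with the keychain model being referred to: its central idea is that each key carries separate send and accept lifetimes, so that neighbors can roll keys without a flag day. A toy sketch of that selection logic might look like the following; the field and function names are invented for illustration and are not taken from the YANG model:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Key:
    key_id: int
    secret: bytes
    send_start: int    # seconds since epoch
    send_end: int
    accept_start: int
    accept_end: int

def send_key(chain: List[Key], now: int) -> Optional[Key]:
    """Pick the key currently valid for sending (newest wins)."""
    candidates = [k for k in chain if k.send_start <= now < k.send_end]
    return max(candidates, key=lambda k: k.send_start, default=None)

def accept_keys(chain: List[Key], now: int) -> List[Key]:
    """All keys a receiver should still accept; overlap permits rollover."""
    return [k for k in chain if k.accept_start <= now < k.accept_end]

chain = [
    Key(1, b"old", send_start=0,   send_end=150, accept_start=0,   accept_end=200),
    Key(2, b"new", send_start=100, send_end=400, accept_start=100, accept_end=400),
]
# At t=120 both keys are acceptable, but key 2 is used for sending,
# so peers that have not rolled yet are still understood.
print(send_key(chain, 120).key_id)                  # 2
print([k.key_id for k in accept_keys(chain, 120)])  # [1, 2]
```

The overlapping accept window is what removes the need for coordinated, simultaneous key changes between BGP neighbors.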
B
Actually, I would very much like to inject a link so that the authentication part between the domains could actually be hooked to the good stuff that we have now, which is RPKI, which is supposed to enable authentication between ASes. If I could take that package and have the proof that my management of the whole thing gets easier, in a way that is going to work well over the next decade, that would actually be, I think, a fairly good argument.
AE
I
F
Z
I'm a fresh new management AD and I also have a call for help from the security area. It's about one of the topics that was raised at NETCONF. This is all related to modeling, YANG and configuration: how to approach the configuration aspects of keys, keystores and crypto configuration in general. I'm relaying the slides; the authors and the chairs of the NETCONF working group were not able to be here, so next I'll just give a brief overview, skipping through some of the details.
Z
What you see here are the slides from another session, which have way too many details. The problem space: NETCONF uses SSH as a transport; RESTCONF, the other configuration protocol, uses HTTPS as a transport. For that reason we need to somehow manage, in a centralized way, the keys between the agent and the client, and it started as a simple mechanism for storing the SSH host keys.
Z
Now, with RESTCONF, the certificates were also added, and one of the problems was that with every possible transport for configuring the configuration, the main part is that I need to have a configuration mechanism for the initial transport bootstrap. This model has a lot of history. Next one, please. The way current implementations look at it: this model is called the keystore, but in fact it doesn't store the keys; it stores references.
Z
The trust anchors you see here are of certificates and other things related to how they can be obtained. A question for the security group here: how much of the configuration work, of the YANG modelling work that you are doing here, depends on the current keystore drafts, and is any of the other YANG modelling work being done in an active way?
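As a rough illustration of the keystore idea being described, holding references to trust anchors such as host keys and certificates rather than the raw keys themselves, a client-side sketch might pin host-key fingerprints in one central store. The device name and key bytes below are made up for illustration:

```python
import base64
import hashlib

def fingerprint(host_key: bytes) -> str:
    """OpenSSH-style SHA256 fingerprint of a raw host-key blob."""
    digest = hashlib.sha256(host_key).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Centralized pin store: device name -> expected fingerprint.
# This is the trust-anchor reference, not the key material itself.
pins = {"router-a": fingerprint(b"example-host-key-bytes")}

def trusted(device: str, presented_key: bytes) -> bool:
    """Check the key a device presents against its pinned fingerprint."""
    return pins.get(device) == fingerprint(presented_key)

print(trusted("router-a", b"example-host-key-bytes"))  # True
print(trusted("router-a", b"some-other-key"))          # False
```

Centralizing the pins is what lets many NETCONF or RESTCONF clients share one place where trust anchors are configured and rotated.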
Z
So, from what I could find out looking through the documents, there was an IPsec YANG model which expired a couple of years ago, and there are also a couple of documents on the TLS side, which are by the same author, from the NETCONF working group. Next one, please. There are similar groupings for other objects which are much more advanced in the modeling aspect, from the routing areas.
Z
Mostly it's about the interfaces, the routing types, the network types. Next one, please. So, looking into what is in the keystore document: it has some groupings for some fragments of the crypto-related configuration elements. Next one, please. Cryptography overall is a big area, and configuration of cryptography is operationally needed; that's probably only the start of what is covered by those documents.
Z
I'd have a question: should this be used as the start, or should it be refactored into something else which is more applicable to the view of the security area, such that it could also be used for the NETCONF transports? Next one, please. Three possible options that the NETCONF working group was considering. One: leave it as it is.
Z
Use these models only for the NETCONF and RESTCONF transport, knowing that others will in turn duplicate the configuration; rename those modules; or build something more end-to-end here. The main thing is to ask your advice on how to proceed. Again, this is the management area, not necessarily having deep insight into the security aspects. Next one, please. That's it.
Z
O
Hi, Sean Turner. I guess we had an IPsec MIB way back in the day; that was the one that got done, and I don't know if it ever got used. To the best of my knowledge, no one has done any kind of YANG for any of the stuff that gets done in the security area. I may be getting up and getting corrected here, but I don't think anything other than the stuff that Kent did, or attempted to, really got anywhere.
O
AD
Michael Richardson. I've tried to use the keystore, back when it was keychain, to model the rekeying of 15.4 networks. It didn't work the way that I wanted it to, and that was fine, because it wasn't intended to do that. But I have read it.
A
Kent is also involved in this, I think, right? So he and I have talked about this.
D
Wes Hardaker, USC. So, a long, long time ago, in the same galaxy, we had an IPsec security policy working group that was a partner with a DMTF task force, I don't remember the name of it, to do modeling of IPsec configuration. I am the author of one of the few RFCs that came out of that. That was MIB-based.
D
The DMTF side had this gigantic UML structure kind of thing that took fifteen printed pages to be able to actually see the little lines, because the diagram was that complex. You might go look at whether the DMTF is still using their side of it; as for the MIB side of it, I wouldn't recommend you look at that. There was also an RFC that describes sort of the architecture, but it ended up being very complex; that's what happens when you get standards organisations trying to help each other.
D
Z
A question, then, for the security ADs: what's your view, do you see a need for working on YANG models for configuring the security aspects, especially looking from a routing perspective: tunnels, IPsec tunnels, other types of tunnels? From the management area's perspective, things that are not automatable are questionable from a deployment perspective. Therefore, a large activity that happened in the routing area, in cooperation with the operations area, was to develop the core set of modules for configuring at least the basics of routing, and that is actually in pretty decent shape.
Z
But writing a YANG module takes two parts. One is the syntax, which is in the management area, and that is done. The other is the subject area, and that clearly needs to be done in those working groups which deal with the protocols. That's how it worked with routing: all of the routing-related modules are produced in the relevant routing working groups. Oh, I see, I don't.
E
AE
J
E
AG
The routing area, just talking in the hallway right now: it's in the routing area right now, but I think the ADs among ops and security are going to discuss this, and possibly we'll even have a design team to help work on it, so that we get this document, and more documents, jointly among all three or four areas.