From YouTube: IETF107-SECDISPATCH-20200327-1950
Description
SECDISPATCH meeting session at IETF 107
2020/03/27 1950
https://datatracker.ietf.org/meeting/107/proceedings
A: First, just to remind everybody: please turn off your video and also your microphone unless you're speaking. Use the WebEx chat to join the mic queue, and only to join the mic queue; any other conversation, please move it to the Jabber. If you type +q I will add you to the queue, and -q to remove you. Please remember to sign your name and affiliation in the Etherpad; Kathleen also posted the link. And yes, please do join the Jabber room.
A: We are looking at the Jabber chat, so if you use a +1 in the Jabber chat, please make sure to write down what your +1 is for, so that we can keep track. But also, let's try to keep the conversation you think is important on the microphone, so that the presenter doesn't miss what's being said in the Jabber chat; if you have clarification questions, of course you can post them there.
A: Also, let's try to keep the discussion to the end of each presentation. This will hopefully help the presenters get through their slides in a shorter time. For the speakers today, we were thinking that if you can keep the presentation to 10-15 minutes and leave the rest for discussion, that would be great. Next slide.
A: This is just meant to help us have a useful discussion: there are guidelines for effective participation in the SECDISPATCH working group. This is mostly for people who are not aware of how a dispatch working group works, but it may also be good to have some form of template, in one place, for asking for a presentation slot. Please take a look, and if you have feedback, comments, or something you might want to add there, please let the chairs know, or feel free to edit it yourself.
A: [Possible outcomes include that] the IETF should not work on this topic, or that additional discussion is required. So please keep these outcomes in mind when you listen to the presenters today and when you ask your questions. Next slide.
A: So here is the agenda for today. We are in the first slot, the intro, right now. Then we have Stefan presenting SVT; then Brian will talk about the client certificate HTTP header; Kirsty will talk about IoCs and their role in attack defence; and finally Rick will present adding SASL to HTTP.
D: Hi, this is Roman. I just wanted to go back to where you were talking about the wiki, and thank you and the other co-chairs for putting that up. I think it's really critical for us that the IETF can make it easy to find the right entry way to get work started, or to talk about work, and I appreciate that some of the content there is, like you said, targeted at folks that don't already know how the process works.
G: Stefan Santesson is my name, and I'm here to present the Signature Validation Token (SVT) and offer it as an input to the IETF process. I personally think it's a very, very exciting topic. The basic idea behind this was initiated, or started, more than 10 years ago, when ETSI tried to address long-term signature validation in what I found was a very backwards manner, one that I think was doomed to fail, and it still has not succeeded. Until now many people haven't cared about long-term validation, and I have to say that most of the time I have been one of them, but recently this has become a very, very important topic. You can move to the next slide. So this is all about being able to validate signatures in a distant future. And can you switch to the next slide, please, if it's possible?
G: Many times it's just about having some kind of log saying that this document, with this number, actually was verified and signed, and that's all they have. But there is pressure to get something that works, and the complexity of current solutions is a big deployment blocker. Next slide.
G: To give some history: we have always approached the problem with the time-machine approach. That is, we have defined the standards so that we can do the validation in current time, and in order to do that current-time validation in the future, we have to build a time machine that brings us back to a moment in time where the signature was fresh, the certificates were trusted and the algorithms were secure. Go to the next slide.
G: There are three things you want to achieve in order to build that time machine. One is to establish a time when the signature actually existed, where all of these checks can actually be done. That means that at that time you can prove that the certificates were valid, and you can also prove that the document you're looking at right now matches the signature that existed at that time.
G: At that time, the algorithms in use were still considered secure. And it's important that you can actually prove that the document you're looking at is the document that was covered by the signature, because if you're looking at it at a time when the algorithms are no longer secure, someone may present another document, found in recent times, that would match the old signature, and fool the system that way.
G: The obvious problem with the time-machine approach is that it is very, very complex. None of the timestamp services actually do any validation of the signature itself; it's all designed to bring you to the point where you can validate the original signature. And the current standards are incomplete: for example, there is no requirement to store certificates or supporting revocation data, it is voluntary, and you may actually need that data in order to prove a full chain of evidence.
G: So the idea here is that we need a new paradigm; we need to do this much, much simpler. Next slide. There are three basic ideas for achieving this. First of all, remove the time machine: the idea is to have a token that does not do a time-machine trick, but is actually an assertion that the signature was checked according to a certain policy at a time when the signature was fresh, and this token can be signed with a secure algorithm that survives beyond the certificate validity period. Second, it also removes the need to validate any of the original cryptographic primitives: the original signature algorithms and original hash algorithms do not have to be used anymore once you have the SVT. The third thing is that you achieve this with one single signed statement: the SVT itself, signed by one currently trusted key and using one currently trusted algorithm. And when, in the future, an SVT might become old, you can replace it with a new one. So, next slide.
G: This is quite a simple structure. We have a typical signed document which has a signature, and the signature contains some kind of signature context, which is an excerpt of what is signed: the hash over the signed data, what algorithms are used, the transforms, and so forth. Then there is a signature value, actually computed over the signature context.
G: Then there are the certificates supporting the signature. So in the signature validation token, which is a JSON Web Token with claims, there are hashes over the signed document; there is a hash over the signature context; there is a hash of the signature value; and there is a hash over the certificates used to do the verification. Because of those hashes, you don't have to use the crypto algorithms of the original signature.
G: The token also includes a statement about the verified times that were checked, and statements about the validation results that were produced by a trusted validation service. So it's as if you do the validation by a trusted service once, then you store that validation result in the token, and you no longer have to validate the original signature. Next slide.
G: This can be done very, very small and very, very efficient. Well, I made the hash values a little bit shorter for the presentation, but this is actually the complete claim set of a signature validation token. It contains the time of issue and the issuer, and it has two claims about one signature, and that's pretty much all it is. Next slide.
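The claim set Stefan describes can be sketched in code. This is a minimal illustration only: the claim names, the policy and result strings, and the use of an HMAC in place of a real asymmetric JWT signature are all assumptions for the sketch, not the draft's actual definitions. The point it shows is the core SVT idea: hashes over the signed artifacts plus a validation result, wrapped in one signed statement.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # Base64url without padding, as used in JWT serialization.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def sha256_ref(data: bytes) -> str:
    # The SVT replaces re-validation of the original crypto with
    # simple hash references over the signed artifacts.
    return b64url(hashlib.sha256(data).digest())

def issue_svt(document: bytes, sig_context: bytes, sig_value: bytes,
              certs: bytes, issuer: str, key: bytes) -> str:
    # Claim names here are illustrative, not the draft's exact names.
    claims = {
        "iss": issuer,
        "iat": int(time.time()),
        "sig": {
            "doc_hash": sha256_ref(document),
            "ctx_hash": sha256_ref(sig_context),
            "sig_hash": sha256_ref(sig_value),
            "cert_hash": sha256_ref(certs),
            "policy": "example-policy",
            "result": "PASSED",
        },
    }
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (b64url(json.dumps(header).encode()) + "." +
                     b64url(json.dumps(claims).encode()))
    tag = hmac.new(key, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(tag)

token = issue_svt(b"contract", b"context", b"sigval", b"certs",
                  "https://validator.example", b"secret-key")
```

Replacing an aging SVT, as described later in the talk, would then just mean issuing a fresh token (with a current key and algorithm) over the same hash references.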
G: We have basically two profiles, with a suggestion also for possibly a third one, for how to do this: in PDF documents and in XML signatures. In the PDF document case, where the next signature covers the old signature and makes it complex to add data to previous signatures, we suggest adding a document timestamp where the signature validation token, the SVT, is just an extension inside the timestamp. And we have running code; this works perfectly fine, with no problems, in XML as well.
G: And it's important to remember that the SVT does not claim that the signature is valid. It only claims that trust service A performed validation process B to validate the signature and came to conclusion C. And the important difference is that that claim never changes: the fact that A did validation process B and came up with result C is true.
G: Whatever happens in the future. While signature validity can be something that changes in the future, we are aiming at making a signed statement about something that will never change in the future. That's something we can always say: once you have issued this statement, that statement will never be false. It never has to expire or have any expiration date; actually, it can be valid for as long as the cryptographic algorithms can support it. Next slide.
G: You have one trusted service doing a trustworthy validation process when the evidence is fresh, and that is better evidence to a judge than coming up with all this complex collection of signed objects. So what if the SVT gets too old? We see no problem with issuing a new SVT based on an old one; you can simply replace it. Some people say it's obviously better to be able to validate the original signature than to rely on a statement that it was valid. We ran this through lawyers, and I...
G: And why would I trust your validation service? It's also important that it is typically not the signer that issues the SVT. It is the relying party, after receiving the document, verifying the document and wanting to archive its own evidence, that is going to turn to an SVT issuer it itself trusts, and then add the token to the document after the signature was created. So it's not about "I create my SVT and everybody needs to trust the ones that the signer provided".
G
It's
the
relying
part
in
the
verifier,
typically
that
that's
this
do
I
have
to
use
special
tools
in
order
to
make
this
document
work.
No,
the
solutions
we
have
for
PDF
and
xml
will
work
with
any
standard
xml
process
and
any
standard
PDF
and
a
standard
PDF
reader
would
just
see
this.
As
a
document,
timestamp
will
see
those
strange
things
with
it.
Any
applications
who
actually
understand
it
can
take
it
it's
further
and
use
it
as
a
validation
tool.
So
next
slide.
H: This is Ben Kaduk. I found it kind of interesting that at the start of the presentation you talked about being able, in the future, to validate a signature, and then in the latter half of the talk we sort of transitioned to the SVT proposal, which is more about knowing in the future that this signature was valid now. And I think these are related concepts, but they're slightly different, and they have slightly different semantics.
H: So I think it's unclear whether we should be doing this work until we know what the actual requirements are for the people that would be using this: whether they're more concerned about knowing in the future that the signature is in fact still a valid signature issued at that original time, or whether they only need to know that the signature was valid at the time it was made, or at the time the SVT was made. And, with my apologies to the people behind me in the queue, Mike Bishop had mentioned in the Jabber...
H: ...a very interesting point that I will repeat on his behalf, which is that if we treat this not as a proof that the signature is good, but instead understand it like making a note to ourselves, or to people that trust us, that we have checked the signature, that's a third, subtly different semantic that we could be assigning, and it might actually be more palatable to reason about.
G: But we do have the hashes over all the signature elements in order to prove that they are accurate, and we have a statement that the signature itself has been validated; you can also check the crypto primitives, if you want to, using the old algorithms. The idea is, of course, to establish the fact that the signature was valid and is valid. As the lawyers we are talking to stated to us: every signature is valid once signed; they never become unvalidated.
H: I was just going at this from an example that was also mentioned in the Jabber room: if I have the title to a house, or the deed to a house, and I go to sell the house in 20 years, what is the actual property that I need? Do I need the signature on the deed to be valid while I'm selling it? Intuitively these seem like the key properties to have, but if...
G: This turns into more of a legal than a technical discussion. I'm not a lawyer, but all the lawyers we are talking to claim that a signature never becomes invalid. It may be that you bring proof to the table that the signature was never in fact valid, that it was never in fact signed by the person who claimed it, and I can bring that evidence; but you can never come to the conclusion that it was valid when signed but in some mysterious way became invalid.
G: These were the things that we checked, and you can add all of that data into it if you want, but the most important thing is the conclusion, which is a lot better than what a lot of government agencies are doing today, because they just have a log with a number referencing the document, with no cryptographic integrity of the document whatsoever. Right.
G: You would believe that is the case, but I claim no, because in the original case you still need to validate the original signatures. So you need the whole chain of proof to go back to the time where you actually could validate; to prove that this was the date at which I did validate, and so forth. In this case, we have a statement that the validation process took place, so I can replace it completely with a new one.
G: I fully understand that you're coming to this profiling issue, where you want a strict profile. It could be something, in my thought, like, I'm losing the word, a protected bearer token, as in other standards as well. You might have the basic profiles in one group, and you might have the other profiles in other groups, if the need arises.
G: Yeah, exactly. The standard itself does not produce, claim or enforce any policy; it just gives you a blank sheet to say: I did the validation according to this policy, and this was the conclusion. And if you have a very low-level policy, where we did this validation of this very insecure signature and came to this conclusion, you can do that, but the point of doing that is rather small.
B: I just have one quick question, Stefan. It seems like in the use cases you described, this was mainly the same entity producing and consuming the SVT, which kind of hints to me that this may not need a specification. So what I'm wondering is: why is there a need for an interoperability specification here? Are there multiple vendors, or something like that? Yeah.
G: In the simplest of cases you actually don't need any standard at all, not even an agreement, if you do your own implementation. But first of all, we may want to foster off-the-shelf solutions that can be handed over with a limited price tag and that would be supported by standards. You may also, as a community, want to be able to verify each other's SVT tokens, so that you can increase the size of the community that would benefit from the same tokens. But it's a fair and valid question.
D: Yeah, I was clicking around a little bit to unmute. I concur that we need more discussion here. I don't see an obvious place within an existing working group; perhaps we could suggest, if there's interest, starting up a mailing list to get more discussion on what it is that we're talking about here. If it turns out that the scope of work is as broad as it seems here, it seems like a BoF might be a next step, but I think we need a lot more discussion on it.
F: Thanks, SECDISPATCH, and thanks for having me today. I'm going to talk about a modest proposal here for a client certificate HTTP header. The idea is basically to convey client certificate information from a TLS-terminating reverse proxy to origin server applications on the back end. Those of you that know me probably know my penchant for putting photographs in presentations, so even though we're not in Vancouver, I thought I'd include a little shot of Vancouver here.
F: You see this in a lot of different architectures: the old-fashioned kind of reverse proxy and origin server architecture; more and more you're seeing CDN-as-a-service type offerings, or application load-balancing services, that do the same thing, effectively terminating TLS and proxying the traffic back to the actual application, sometimes even in a different domain of ownership; and even more so lately with things like ingress controllers in the new hotness of the microservices world. And sometimes TLS client certificate authentication is used.
F: What those needs are will vary, but the application usually needs to know something about the authentication to act on it, log the data, whatever it is. And in the absence of some standardized method of conveying that client certificate from the proxy to the back-end applications, different implementations have done it differently, or in some cases not at all, and that is the state of things right now. You'll see in systems like Apache there are some de facto ways to do this, with some recommended header names, and similar things in nginx, but there's nothing standard.
F: In other systems, something like this just isn't supported at all. So I wrote a draft over the last couple of months trying to basically standardize what a header would look like that would allow the reverse proxy to convey the client certificate from itself to the backend application, so the application would have the data it needs to work with. Rather than going into the details of that, I tried to draw a basic little picture to show what it is. The idea here is:
F: This would be a simple mechanism that could potentially enable turnkey-style, interoperable integration between independently developed components. How it works: you have a client making a call to the server over HTTP, over a client-certificate mutually-authenticated TLS connection. The client has no idea what the architecture of the backend is; it's just making a call to the proxy and sees that as the actual server itself. The proxy sanitizes its headers and will then pass the client certificate, as a new header with a defined name and encoding, along to the origin server, and the origin server can then do whatever its application-specific needs are with that certificate. That's all of it in a nutshell; there's not that much more to it.
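The proxy behavior Brian describes, sanitize incoming headers, then attach the certificate from the TLS layer, can be sketched as below. This is an illustration only: the header name `Client-Cert` and the plain base64-of-DER encoding are assumptions for the sketch; the draft's actual field name and encoding may differ.

```python
import base64
from typing import Dict, Optional

# Hypothetical header name; the draft may define a different one.
CERT_HEADER = "Client-Cert"

def proxy_forward(incoming_headers: Dict[str, str],
                  client_cert_der: Optional[bytes]) -> Dict[str, str]:
    # Sanitize: drop any cert header a client tried to inject, then
    # attach the certificate seen by the proxy's own TLS layer (if
    # mutual TLS was actually used on the front connection).
    headers = {k: v for k, v in incoming_headers.items()
               if k.lower() != CERT_HEADER.lower()}
    if client_cert_der is not None:
        headers[CERT_HEADER] = base64.b64encode(client_cert_der).decode("ascii")
    return headers

def origin_extract(headers: Dict[str, str]) -> Optional[bytes]:
    # The origin recovers the DER bytes for its application-specific checks.
    value = headers.get(CERT_HEADER)
    return base64.b64decode(value) if value else None

fake_der = b"0\x82\x01\x00dummy-der-bytes"  # stand-in for a real DER cert
spoofed = {"Host": "app.internal", CERT_HEADER: "Zm9yZ2Vk"}
forwarded = proxy_forward(spoofed, fake_der)
```

The sanitization step is the security-critical part: without it, a client could smuggle a forged certificate header past the proxy to the trusting origin.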
F: There are certainly specifics that could be worked through, but that's the main idea: conveying that certificate from the reverse proxy to the origin server through a well-known, defined, standardized header name and encoding. A little bit of backstory about how we got here: I was one of the co-authors on RFC 8705, which was about using mutual TLS client authentication in conjunction with some OAuth functionality. The details of it aren't really important here, but in the course of discussing that, and some other things on the mailing list...
F: ...paraphrasing my own thoughts, it was like: well, that's really beyond the scope of that work. Although it's a potential problem and something that could be improved upon, it certainly doesn't belong in the scope of that particular document, which is just about using mutual TLS and OAuth specifically. But some more conversation passed, and basically the question came up: was it possible to define something like this, maybe get it pushed out into a working group where it'd be more appropriate, and it would actually be very helpful to have.
F: The consensus out of the SECDISPATCH meeting at IETF 106 was basically: please come back later with an actual draft. So that's why I'm here. Following on from that, I basically wrote the draft and shared it here and with SECDISPATCH. Thinking about it more, maybe it should have been a more general dispatch thing, but that's how we got here. The draft itself has received some positive, if somewhat underwhelming, reception. Here are just a few quotes: somebody said it was useful.
F: Some people support the effort, and said the lack of it is going to be a pain point migrating applications with client certificates or different auth mechanisms to cloud offerings. Somebody said "good luck on the effort; if you need a vote, please let me know", which, strangely enough, came from somebody that works in the IETF and knows we don't vote, but he mentioned it anyway. One went so far as to say "I'm surprised it wasn't already a thing."
F
I
can't
see
why
it
would
be
any
other
place
in
an
HTTP
and
also
off,
let's
the
coworker,
my
mention
that
would
have
been
really
useful
to
have.
Although
he
said
it
would
the
nice
two
years
ago,
since
we've
already
done
some
integration
like
this
without
a
standard,
so
there
seems
to
be
interest
in
it.
It
like
I,
said
somewhat
underwhelming
hasn't
been
a
lot
of
response,
but
largely
everything,
that's
been
said
has
been
at
least
mildly
positive.
There
was
some
more
technical
feedback
that
came
through
just
slightly
before
this
meeting.
F
I
haven't
had
a
chance
to
fill
it
up
cat,
but
I
think
some
bits
of
it
may
be
covered
or
at
least
discussed
further
in
the
presentation.
Despite
that,
I
have
my
own
trepidation
about
the
work
or
it
is
proceeding
with
it.
You
know
there
are
lots
of
district
solutions.
Already
on
the
market
that
exists
for
doing
this
and
retroactive
adoption
of
late
coming
standards
like
this
is,
it
sort
is
really
uncertain
at
best
when
things
are
already
working.
F
Sort
of
trying
to
retrofit
a
standard
into
to
replace
existing
ad
hoc
solutions
is
often
I,
ignore
it
or
not,
well-received
or
just
doesn't
get
a
lot
of
uptake
I'm.
Also
a
little
worried
that,
despite
what
seems
like
a
pretty
simple
proposal,
consensus
here
might
prove
surprisingly
elusive
on
the
thread
I
mentioned
previously
on
that
sort
of
led
to
me
doing
this
work.
After
that,
consensus
is
sort
of
sketches
of
the
work
elsewhere.
F
The
whole
thing
is
generated
into
really
strong
opinions
about
how
to
properly
secure
or
not
the
communication
between
the
proxy
in
the
backend
and
even
got
into
borderline
personal
attacks,
which
isn't
really
relevant,
but
it
seems
people
have
a
lot
of
strong
feelings
about
this
sort
of
thing
at
a
previous
IETF
I
was
attacked,
although
it
was
not
personal
I
was
attacked
in
a
meeting
by
a
navy
and
discussing
a
very
similar
proposal
around
conveying
this
kind
of
information
with
respect
to
tow,
combining
there's
likely
to
be
contention
about
exactly
what
pieces
of
the
certificate
and
how
much
of
the
certificate
and/or
certificate
chain
actually
get
conveyed.
F
I
know
some
people
have
felt
like
something
like
this
would
be
more
proper
inside
of
the
framework
offered
by
the
forwarded,
HP
extension,
but
that
brings
its
own
complications.
There's
similar
ish
type
of
work
going
on,
but
at
different
layers.
So
it's
not
a
direct
analogy,
but
it
also
is
somewhat
competitive
to
this
sort
of
thing
and,
frankly,
there's
all
the
other
things
that
I
don't
know
that
I
don't
know
about
in
the
1i
I
guess
I
do
know
that
I
don't
know
about
here.
I
mentioned
is
the
secondary
cific
at
authentication
hb2.
F
That
would
at
least
potentially
have
some
ramification
against
something
like
this
and
what
actual
data
goes
in
there.
So,
while
there's
the
potential
for
something
like
this
to
be
really
useful,
there
may
be
difficulty
in
actually
getting
to
a
consensus
approach,
and
even
if
we
did
there's
some
questions
about
whether
it
would
actually
be
useful
at
this
point
since
there's
already
a
lot
of
people
already
doing
something
similar
just
not
in
a
standardized
fashion
and
I,
don't
know
I'm
sort
of
on
the
fence
about
it.
F
Hope
we
actually
do
get
to
go
to
Bangkok,
but
even
that
may
not
happen,
but
I'm
here
suck
disguise
to
try
to
sense
of
whether
to
dispatch
this
or
not,
and
if
so,
where
would
be
the
most
appropriate
place
for
the
work,
TLS
or
ACP
seem
like
potentially
obvious,
maybe
the
wrong
word,
but
potential
candidates,
or
maybe
somewhere
else
or
and
then
honestly,
not
at
all,
is
a
not
an
entirely
unexpected
answer
either
with
that,
that's
all
I've
got
here.
So
thanks
for
listening
and
take
it.
A: Thank you, Brian. You've got a lot of people in the queue, and before we start with the queue I just wanted to highlight what you just said. It seems like the options, also considering the positive feedback we've seen, might be: HTTP-based, TLS, a focused working group, or maybe something else. So whoever is on the mic, please think about these possible options when you comment.
B: This is Mike Bishop. First off, from the secondary-certs standpoint, there's no conflict with this, because secondary certificates is intended for delivering the cert to that first TLS-terminating reverse proxy, whereas this is for the back end; so, no conflict there. Use-case-wise, I really like this, because it lets you pass things to the back end.
B
Ironically,
it
builds
very
similar
use
case
to
the
previous
presentation,
in
that
you
want
an
attestation
that
yes,
I,
have
validated
this
signature
and
semantically
the
same
we're
just
not
looking
to
do
it
twenty
years
from
now
we're
looking
to
do
it
one
out
further
than
go
on
as
far
as
where
to
locate
it,
probably
HTTP,
but
I
can
see
reasonable
arguments
for
putting
it
in
yet
those
places.
Thank
you.
E
Brian,
thanks
for
the
presentation,
I'd
like
to
just
first
say
that
I
support
this
work
and
wherever
it
happens,
Montclair
Wood
is
very
interested
in
implementing
it.
You
had
mentioned
that
this
is
potentially
a
retrofit.
I
would
say
from
from
our
perspective.
It
hasn't
been
implemented
yet,
and
this
solves
a
need
so
I'm
very
much
in
favor
of
finding
a
home
for
this.
L: Hi. I'd suggest, as a next step, that you come to HTTPbis and give a presentation. It's come up there before in the past; I'd have to dig around and find out what happened, but it just didn't get enough energy. I think getting involvement from CDNs and from reverse proxy vendors is probably what you want.
L: I think it's probably best to do it there, or to get those folks there. And I think the tricky bits of this are that HTTP is a hop-by-hop protocol and you have to account for that in the design of the header and so forth. So at least involving that community a little bit would probably be the next step.
F
That
makes
a
lot
of
sense
done
point
about
how
this
was
very
much,
at
least
initially
designed
to
only
account
for
sort
of
the
client
to
origin
connection
as
a
single
piece
of
sort
of
30
note
to
call
it
up
not
allowing
for
this
sort
of
information.
We
conveyed
hot
buy
hot,
but
that's
it's
certainly.
The
sort
of
thing
I
would
want
to
discuss
further
and
make
sure
I
really
have
a
handle
on
going
forward.
So
with
that
respect
going
to
HP
fess
seems
like
it
makes
sense.
E: Eric Rescorla. I really agree this is an important application. You know, we don't have as much TLS client auth as we'd like, but we do have some, and that's going to help; maybe now that we've fixed the privacy problem we'll get more. I think I'm the person that sent the technical feedback: getting this right is substantially more subtle than this draft reflects, so we probably need to actually think about it a little harder. I agree it's not that easy to get right. HTTPbis seems like the place for that, though.
F: I mean, the first answer to that is: I've had people interested in only the subject identifier; some were interested only in the public key; some are interested in the issuer and the subject identifier; and some, like Eric, who just spoke, suggested that in fact the entire certificate chain is necessary. So there are different needs for different back-end applications, depending both on what their functionality is and on what the trust model is, who's expected to validate, and which particular pieces of data are relevant to the backend application. But that doesn't mean that that's the final or the right answer; that was just my thinking at the time.
M: I'll just relay what I was going to say in response to Mike Bishop's point: we need more than what he suggested, and quite a few people in the Jabber said about 27 different things. The most interesting part was the suggestion that we actually need a working group on back-end stuff, and that it should be split out of HTTPbis. I wouldn't have thought we needed that much, but if that's the right direction, then maybe that's the right way to go.
D: So the primary thing I've heard is that there's positive feedback across the board that we need to look into this a little bit, but we need to think harder about the venue. Further, based on the feedback, overwhelmingly there's been talk that we need to at least talk to HTTPbis about this, and we probably also want to make sure that there's adequate coordination with TLS. So my suggestion, and Ben, jump in if you disagree, is that we get a conversation started in HTTPbis.
J: What I have been doing over the past few years is designing an identity system which basically says: we start with the domain, and clients should have, at their web hosts, a lightweight identity management solution that allows them to basically access services everywhere, hopefully, where they can bring their own identity. And the identity will be "me at my domain".
J: [Users come from their] web host, and they will turn up at a foreign portal server wanting to authenticate as someone; the check lands back at the identity provider for the domain, and because it's an identity at that domain, it can basically trust whatever client identity is given there. I've been looking at a number of protocols for doing that, and there are a few very, very interesting proposals one could make now, and I'm here to present one part of it.
J: Pardon me, I was just dealing with an audio problem. The back-end protocol for doing that might, for example, be Diameter. Now, what protocol can be used to authenticate? There are many proposals for making a single sign-on system that's specific to the web, but as far as I'm concerned, that sits a little bit uneasy.
J: So three mechanisms came up. Basically: SAML, which is particularly heavyweight, with public keys and all the refined semantics for verifying it; both the first speakers actually spoke about this in a lot of detail. It would be a possibility, but it's probably too heavy to get people on board with very easily, especially when the end users are on small domain hosting. Kerberos might actually work: just as you can have Kerberos centralized in an organization, you can have Kerberos at your domain host.
J
There are no strict problems with that, except perhaps that some people find it offensive to have to log on with Kerberos every day; I mean, some people are really accustomed to their passwords and don't want to give them up. So the third option, SASL, actually gives a large range of choice: we can go from easy access with a password and gradually grow to Kerberos.
J
The nice thing is you get to grow at your own pace, and if it's a set of clients run by the end user, and the SASL validation server runs at their IdP, then what you're having is a free choice to have a local policy for your domain and say: I only want to do Kerberos, or I only want to do SCRAM, or maybe I just want to do plain authentication.
J
You get a lot of choice, and because the portal server is not really involved in the actual authentication in this design, they can't tie you down; basically, they can't say "we only support this or that". Now, this requires a few crossover mechanisms, and for Kerberos I've found a way to do that, and for SASL I've found a way to do that: to pass the authentication through the intermediate server.
J
You can then rely on the result from the IdP. Comparing the two other options, I basically think that's the winning solution, except on HTTP, where there is no embedding for SASL. There has been work on that fifteen or twenty-five years ago, and that never got anywhere, but that was before HTTP authentication was more clearly defined; and the things it ran into, I've looked into those, and those are gone in the current proposal.
J
So what I'm basically proposing is: let's add SASL to HTTP, so we can have these benefits of realm crossover for virtually all the protocols, because HTTP runs along with the other protocols, or at least can run along with the other protocols, with a pluggable authentication system. Next slide, please.
J
This is a bit crowded, I'm afraid. On the left is HTTP SASL, very briefly. I don't care so much about the how, I think that's all in the draft; I would just like to explain why it's useful. The request basically contains a SASL token that's being sent. It mentions a realm, so that the intermediate server knows where to look, back through Diameter or something, to look up a yes or no for the authentication, and basically the first time around it will select a mechanism.
J
There's some provisioning for caching, so that you don't have to go through this interaction for every single resource you're addressing; that's just an extension, basically. On the right I've drawn what this means in terms of realm crossover. So we have the client here, which is a lovely lady, or somebody with a skirt, as you could say, and this client issues SASL authentication, and it selects the mechanism SXOVER, the SASL crossover mechanism, and it mentions the realm where it wants to do that.
J
The SXOVER exchange is basically encrypted with a key the user shares with their realm, so it's end-to-end encryption, and the only thing the purple server can do is look up the realm, find the back-end, make a Diameter connection and pass on the SXOVER request, and then pass back and forth whatever the client sends and whatever the realm sends back, until the client is authenticated. The foreign server just sees this traffic pass through.
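As a hedged illustration of that relay role (the blob contents, round count and class names here are all invented for the sketch), the purple server never decrypts the tokens; it only shuttles them between client and home realm and trusts the realm's final verdict:

```python
# Toy model of the SXOVER relay described above: the intermediate server
# forwards opaque blobs it cannot read, and accepts the realm's yes/no.

def relay_sxover(client, realm_backend):
    """Shuttle opaque tokens until the home realm returns a verdict."""
    blob = client.first_token()
    while True:
        verdict, blob = realm_backend.step(blob)   # e.g. carried over Diameter
        if verdict is not None:                    # realm said yes or no
            return verdict
        blob = client.respond(blob)                # client's next opaque token

class OneRoundClient:
    def first_token(self): return b"opaque-c2s-0"
    def respond(self, s2c): return b"opaque-c2s-1"

class OneRoundRealm:
    def __init__(self): self.rounds = 0
    def step(self, blob):
        self.rounds += 1
        return ("yes", b"") if self.rounds == 2 else (None, b"opaque-s2c-0")

result = relay_sxover(OneRoundClient(), OneRoundRealm())
```

The design point is that the relay holds no credentials and stores no passwords; compromise of the purple server reveals only opaque traffic.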
J
It knows that, eventually, it's a validated realm; I mean, you can have DNSSEC and DANE, and all those things are quite helpful, and certificates, of course, are quite helpful to establish the validity of the realm, and anything at that realm is then reliable. So when this validated realm says that it's Mary, for example, then the purple server will know that it's Mary at that realm, and will proceed interacting with Mary at that realm, without having ever seen Mary, or without ever having stored the password.
J
So there's no set of passwords needed anymore either. This is how the SXOVER crossover mechanism works. There's one added thing that's definitely something to work out in more detail, to explain in more detail; I don't think I've done that sufficiently in the draft. Channel binding can actually be used to give this the form of end-to-end authentication: SXOVER is always a SASL mechanism with channel binding, and that means that Mary knows who she's talking to.
J
Because of the TLS she uses, Mary knows that she's talking to the IdP, and the IdP also knows it's done for a particular connection, even though it's all passed through an extra intermediate server. A couple of attacks are possible there, but they can be mitigated if this is done well. Next slide, please.
J
This is the other way of doing realm crossover, based on Kerberos. By the way, the essential pieces of all this have been implemented, except that they haven't been brought together into one whole yet; but basically we know that this works. In this mechanism, the client at the top left wants to contact a service in another realm, and basically what the client has is a hostname, so the client runs up to its own KDC.
J
It says: I would like to have that particular service, for that particular hostname. The local KDC goes and looks into the database, that's the 1.1 arrow, and the database says: I don't know that hostname. So what the KDC then does is look into DNS, using DNSSEC, to look up the realm for the remote party, and then it contacts the KDC for that remote realm and its service.
J
The KDCs engage in a key exchange, which is basically a TLS connection with key-exchange messages going back and forth, and at that point there is a shared key between the two KDCs. This is a standard facility in Kerberos, except that normally it's done by hand; in this case the keys can be set up automatically and, say, deleted after a month.
J
This key can now be used to construct a referral, or a redirection, that's sent back to the client; that's the 2-dot arrow going back to the client, and that's a standard shape that all the Kerberos software basically understands how to interpret. So the client software doesn't have to change to accommodate this; only the KDCs need to support this extra protocol and these extra lookups. The client now has a way to contact the ticket-granting service for the service.
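A sketch of that KDC decision, under loudly stated assumptions: the dictionaries below stand in for the local principal database, the DNSSEC realm lookup, and the automatic KDC-to-KDC key establishment. None of this is real Kerberos code; it only mirrors the control flow just described:

```python
# Simplified control flow of the Kerberos realm-crossover lookup:
# 1.1: consult the local database; if unknown, find the remote realm in
# (DNSSEC-signed) DNS, establish a shared key with that realm's KDC on the
# fly, and hand the client a standard referral it already understands.

LOCAL_PRINCIPALS = {"imap.example.com": "EXAMPLE.COM"}      # local database
DNS_REALM_RECORDS = {"mail.remote.net": "REMOTE.NET"}       # DNSSEC-backed
CROSSOVER_KEYS = {}   # realm -> shared symmetric key (auto-expired in reality)

def ticket_or_referral(hostname):
    """Return a ticket for local services, or a referral for remote ones."""
    if hostname in LOCAL_PRINCIPALS:
        return ("ticket", LOCAL_PRINCIPALS[hostname])
    realm = DNS_REALM_RECORDS[hostname]                 # DNSSEC realm lookup
    CROSSOVER_KEYS.setdefault(realm, b"fresh-shared-key")  # KDC-to-KDC setup
    return ("referral", realm)           # 2.: standard referral to the client

kind, realm = ticket_or_referral("mail.remote.net")
```

Because only the KDC side changes, an unmodified client follows the referral exactly as it would a manually configured cross-realm trust.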
J
This works with standard Kerberos clients as they are today; it's only an extension to the KDC, with keys exchanged on the fly when they're needed. It uses DNSSEC, again, and TLS as the basic security foundation. The one thing that's really important is that the keys being exchanged between the two KDCs are symmetric keys, so once you know those, you can unravel the entire derived-key system; so it's vitally important that the exchange is done with quantum-proof cryptography.
J
So this definitely is a puzzle, but there are other systems; I think lattices, for example, might be used, and we'll need to see what standardization brings, but this really requires quantum-proofness; the TLS to the KDC can help, but that has to come in some form. So this is basically the thing I wanted to present: this HTTP SASL, which I think is an enabler to have the same authentication mechanism for just about any protocol, and it can include the other two versions that I presented as alternatives.
J
This can help to unleash realm crossover, in particular in two different ways, which I think is really helpful in getting clients more control of their online presence: their online identities and aliases and pseudonyms and all that suddenly become possible. So I believe this is useful work. This is actually three different aspects of one design, so I'm always looking for how to present this.
J
And how to pull this off: the specs are concentrated, they each describe a single thing, and I don't know exactly what the best path is to bring this into the IETF, and where to bring it in. HTTPbis has responded, but it has been a lukewarm response, so I thought I should also come here and talk to some people involved in security about what they feel about this. That's what I wanted to say. If there are questions, please.
D
This is Roman speaking. I want to jump in and echo kind of what Mark said: the criterion of having implementers and interest is something we also have in security, so I'm equally interested in hearing if there is that class of interest, to also help us decide what to do next.
L
It gets a little more tricky, I might say. If it's just a server-side component, if people want to do things server side, like add headers, by extending a server using CGI or whatever facility, that's one thing; but if it requires server-client coordination, then we need to see that. We generally would like to see breadth of interest across the different components that need to implement it, yeah.
H
Let's see if I can successfully unmute; this is Ben Kaduk. Can you hear me? Yeah, okay. I have some severe technical difficulties out here, so I missed one of the talks. I just wanted to point out that some of the discussion in the jabber room has been sort of centering on whether authentication happens at the HTTP layer versus the application layer, and how the user experience is not great. I just wanted to sort of emphasize that.
H
The user experience is not great in, like, the current HTTP basic auth, that sort of thing, and for Negotiate as well, for that matter. So some of the considerations in that area, about how we can design something that is actually usable, seem relevant to whether or not this work is going to succeed. I'm sure I did a terrible job of summarizing the jabber room, but I welcome other people to chime in as well.
J
Yeah, if I may respond to that: the most scary thing to me is credentials in the same space where JavaScript code is running, loaded from dynamic places, including advertisements that have key-logging capabilities. That is what I very much try to get away from in this design: I very much try to get away from applications performing authentication, and I know that's not everybody's approach, but I would really like to push it into a lower layer.
J
Also, that is reasoning specifically from HTTP. What I'm trying to get at is that I would very much like to have the same authentication mechanism across all the protocols. It's almost like a rift: there's web authentication on one side and the authentication for all the other protocols on the other, and they somehow hardly ever meet, and that really surprises me; I think it's a very impractical situation.
E
And I think that I was sort of alluding to this, though perhaps we don't quite agree on the details. The underlying story with authentication on the web, as far as I can tell, is that sites don't want to defer to the browser and its authentication primitives and so on, and that sort of means you're stuck with either something which is, essentially, you know, web forms.
E
Like, you know, web forms, or something transparent like WebAuthn; and on things that are HTTP but not web applications, obviously there are a lot of alternatives for controlling that, for other modalities.
N
Thank you. So, here is where you can find the draft; it's been co-authored by Ollie Whitehouse of NCC Group and myself. You can find it on the datatracker, the link is there, and you can also find it on GitHub. I've already had some feedback; do please keep it coming, especially from a range of people, both those who are familiar and those who are not familiar with the topic.
N
It's really good to get that breadth of views and stakeholders in all standards work, and of course this is no different. I'll just say at this point, before I talk about IoCs, or indicators of compromise, that I'd like to briefly outline what they are, so that you're not lost through this whole presentation. IoCs are features or artifacts of attacks or attackers. That's just a very high-level view, and we'll dive into it through the presentation. Next slide, please.
N
That's why I'll use both terms throughout the presentation, and I think in the draft as well. The aim is to share knowledge with protocol engineers and, to my second point, this knowledge sharing should, I hope, prevent the technique being accidentally ignored, so engineers can make protocol design choices knowing how they affect the availability of IoCs, which are those artifacts you observe about an attacker.
N
Both Ollie and I would like the IETF community at large to consider the impact on IoC availability in either direction, whether they become more or less available, and the related impact that can have. For an example, for those of you who are familiar with MITRE's ATT&CK framework: there's quite a lot of momentum in the industry at the moment about this. It's a useful framework that helps categorize and classify attackers based on how they act in a victim's network, and the framework is massive.
N
Next slide, please, thank you. So this is taken straight from the draft; it's the abstract, as an introduction. To be clear about what this draft is not: it's not defining a protocol, it's not a format for IoC sharing, it's not a threat-feed format or anything like that. This is describing an important technique in attack defence, for reference and for information. The draft, as it goes through, outlines different types of IoCs.
N
It discusses their effective use, their limitations and their benefits. Some IoCs are directly relevant to the work of the IETF, and those are called protocol artifacts. But, importantly, we're not presupposing, as it says in the draft abstract, where you would find IoCs or detect them in the first place; just that engineers should be aware that they need to be detectable to fulfil the functions described in the draft. Next slide, please.
N
Thank you. So, I love a good table of contents, and I hope you do too, because that's this slide. This draft is aiming to describe and illustrate the purposes of IoCs, which are widely used. The way it's structured is that first it goes through what an IoC is, and then the benefits of them; you can see there are seven sections there. Then we introduce the Pyramid of Pain, and we have had quite a lot of jokes about this, but no, sadly, it's not named after me.
N
This is not something I created; it's often referenced in the cybersecurity community, so I sort of wish I had invented it, actually, but I didn't. It's just there to show the broad properties, the broad range of defences, that IoCs can provide. Then you'll see, as section five, we relate that to defence in depth, and then, finally, talk about a real threat group, APT33, for which some IoCs were identified and used for defence, just to give a case study to contextualize what's going on. Next slide, please.
N
So what are IoCs? Well, here's a list of them; this is taken from the draft as well. Clearly they include some protocol-relevant things, and that's what's relevant as the link to the IETF, but there are other things in there too, like hashes of malicious binaries or scripts, and you'll see IP addresses, domain names, TLS SNI values and certificate information. There's a section in the draft that talks about why IoCs are just great.
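A toy illustration of matching observed artifacts against IoCs of the kinds just listed (hashes, IP addresses, domain names, TLS SNI). All the indicator values below are made up for the sketch:

```python
import hashlib

# Hypothetical IoC sets of the kinds listed above, keyed by indicator type.
IOCS = {
    "sha256": {hashlib.sha256(b"malicious payload").hexdigest()},
    "ip":     {"192.0.2.66"},
    "domain": {"bad.example.net"},
    "sni":    {"c2.bad.example.net"},
}

def match(kind, value):
    """Return True if an observed artifact matches a known IoC."""
    return value in IOCS.get(kind, set())

# Observations: a known-bad file hash, a benign IP, a known-bad domain.
hits = [
    match("sha256", hashlib.sha256(b"malicious payload").hexdigest()),
    match("ip", "198.51.100.7"),
    match("domain", "bad.example.net"),
]
```

This exact-match style is what makes IoCs cheap to deploy and automate, which is the "win for the underdog" point the talk makes next.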
N
I'll go through those benefits briefly, because I think it's important to recognize what IoCs bring. The first is that they are a big win for the underdog: they're cheap and achievable for lots of organizations, for charities, for small companies, for schools. If you're a small manufacturing subcontractor, perhaps in the supply chain for a big manufacturer, you have quite a big threat attached to you, but perhaps you just don't have much resource to manage that risk.
N
However, you will likely have a firewall, and so you can use things like blocklists pretty easily and still get quite a good base level of defence. And IoCs aren't just made for network defenders with ordinary firewalls; they are also used by government departments and big tech companies, and because of this they have a widespread and huge multiplier effect on attack-defence effort.
N
It's this big multiplier effect which is really useful. IoCs are also very shareable, and some of you may be aware of methods by which you can share IoCs at the moment; but the point is just that the shareability and reproducibility of IoCs is really quite top-notch. You can capture one, keep using it consistently, look for things and automate that, and it allows, again, the underdog to benefit from the resources of bigger players. So, through that shareability, you're protecting a whole community and permitting this cybersecurity uplift, which is great.
They
can
also
help
with
attributions,
which
means
and
that
an
organization
can
sort
of
prioritize
or
perhaps
accept
some
false
positive
trade
offs
when
they're
looking
at
particular
subsets
of
malicious
actors,
and
that
gives
organizations
this
kind
of
technical
freedom
and
capability
to
choose
their
own
risk.
Posture
and
defence
methods.
And
you
also
have
big
time
savings
with
overseas.
So
it
avoids
duplicating
your
investigative
effort
and
by
conducting
the
same
investigation
in
separate
organisations
just
to
find
the
same
IOC,
because.
N
And, of course, there are other, alternative techniques, like machine learning, and they do have their place; but compared to IoCs they're generally more expensive and can require manual intervention. You might have more false positives, or lower confidence in each event, which can require sort of manual investigation; whereas with IoCs hardly any human intervention is required. They do provide this protection against known threats, and you can also use IoCs to investigate and discover previous attacks.
N
Of course, antivirus can fail, and for very good reasons: perhaps it's tuned for a low false-positive rate, or perhaps it's a never-before-seen executable; there are plenty of good reasons. So, rather than relying only on AV, we aim for a sort of layered defence in depth. Can I get the next slide, please? Thank you.
N
So this is the Pyramid of Pain, which is often referenced, and just to note that "TTPs" at the top stands for tactics, techniques and procedures, the ones you might see associated with an attacker group. I got asked: why is this a pyramid, not a list? And it's not just because the ASCII art was super fun. It's there to build the idea that each layer builds on the artifacts below it, so it's quite hard to start at TTPs and work down; you sort of start from the ground up.
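The pyramid can be sketched as data, bottom to top; the level names follow the commonly drawn version of the Pyramid of Pain, and the pain descriptions are informal paraphrases of the talk:

```python
# The Pyramid of Pain as a list, ordered bottom (easy for the adversary to
# change) to top (painful for the adversary to change).
PYRAMID_OF_PAIN = [
    ("hash values",       "trivial: recompile and the hash changes"),
    ("IP addresses",      "easy: move infrastructure"),
    ("domain names",      "simple: register a new domain"),
    ("network artifacts", "annoying: change protocol fingerprints"),
    ("tools",             "challenging: rebuild or replace tooling"),
    ("TTPs",              "tough: change how you operate entirely"),
]

def pain_rank(indicator):
    """Higher rank = more pain for the adversary when defenders use it."""
    return [name for name, _ in PYRAMID_OF_PAIN].index(indicator)
```

So, for instance, `pain_rank("TTPs")` is the highest rank, matching the point that defending on TTPs forces the adversary to change how they operate.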
N
Every layer has value and, as you'll see from the axes on the right-hand side, the layers vary in pain, fragility and precision. So the first thing to talk about is how much pain is associated with these IoCs, and it's actually not related to pain of deployment or anything like that: it's how much pain it causes an adversary if you defend at that point. That will vary from recompiling to totally losing your persistence, and it correlates with how much it costs an adversary to change that thing.
N
So, changing a hash value, as was said, is just recompiling; it's not too difficult to change. Your IP address and things like that are a bit more difficult, and so it goes, right up to changing your entire tactics and your techniques and the way that you infiltrate, which is much, much harder.
N
How much pain it is to change is correlated with fragility: the easier an IoC is to change, the more fragile it is. Again, taking the example of hash values, because that's quite a common and easy one to understand: it's very easy to change a hash, and once you've changed it, that IoC is totally fragile; it's gone, it has changed. And that is also correlated with precision. More precise is, you know, obviously better, but it's usually linked to fragility as well: using that hash example, you get no false positives, but it's easy to change.
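The precision-versus-fragility trade-off for hash IoCs can be shown in a few lines; the "binaries" here are just made-up byte strings standing in for two builds of the same malware:

```python
import hashlib

# A hash IoC is perfectly precise (exact match, no false positives) but
# perfectly fragile: a one-byte change, e.g. from recompiling, evades it.

original   = b"\x7fELF...malware build 1"
recompiled = b"\x7fELF...malware build 2"   # trivially different build

ioc = hashlib.sha256(original).hexdigest()

precise_hit  = hashlib.sha256(original).hexdigest() == ioc    # exact match
fragile_miss = hashlib.sha256(recompiled).hexdigest() == ioc  # evaded
```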
N
Then you go up this pyramid, and you've got IPs next, and access control lists, which can go on firewalls, and DNS filtering services for domain names, and that will blanket-defend, kind of, all your endpoints; but of course it comes with false positives. So the idea is that you can employ a range of these things, depending on your risk posture. To illustrate that, in the draft we discuss a real case study, APT33; it's not a comprehensive study of that group.
N
It's intended to be read alongside open-source material, and it shows where IoCs were actually used to defend against an advanced threat. Now, there are many, many more case studies available, and I'd appreciate contributions or ideas for those if they bring some other nuance, but I wouldn't want it to bloat into a big list of all the other possible case studies and all the times IoCs were used, because that would be absolutely massive.
N
In the case study there were five IP addresses and seven domains and, as you can see from the pyramid, they vary between levels. Next slide, please, thank you. So I'm looking for input. I've already sent the draft to a few people and had some feedback and comments so far; thank you to everyone who has read the draft and commented. So far they seem to indicate that this is useful work, or helpful informational material in some way, and that comes from a variety of people with a variety of backgrounds, which is quite encouraging to me.
N
I'll just address those points and then go to open mic, if that's okay. One question asked today was: what's the benefit of publishing this as an RFC? My view is that some IoCs are directly relevant to the work of the IETF, like those protocol artifacts I listed before, so it's to make this information available where it's relevant.
N
People can also reference and comment on a draft, so I think it's valuable for that. Then there are another two questions from the SAAG list that are dispatch-relevant, so I'll just answer those and then wrap up. The first was whether the draft would remain a summary draft and, if so, whether it would fit into MILE. I'm not a MILE chair, so I don't know about that second part; essentially, I can see the draft evolving with input from others.
N
That leads to the second question, which is whether the draft would evolve to state how IETF protocols relate, with new thoughts on how IoCs may be used. I think, yeah, it could be both one and two. I would like authors to join me and Ollie, and I'd welcome developments by people who know the industry and what's protocol-relevant. I could also see it remaining as a summary draft, in which case perhaps MILE fits; I don't know.
C
So I put myself, I guess, right before Ben in the queue; I hope that's okay. My question would be to see if you plan to evolve this beyond the summary: if there's some way that, say, if it were to go to MILE, it could be expanded into how you might integrate it into protocols, or even OPSEC?
N
At the moment, the feedback I've had is that it's quite a good summary and that there's definitely space to go into more specifics; this is just a -00 draft, so I kind of liked putting it out there, and no one so far has said they already knew everything in it, which is nice. So I think it would be good to maybe just see what, well, maybe I'll get you to go through the people in the queue, but also, yeah.
H
I mean, there's the actual document itself, in addition to a home for publication. If we did want to frame it in terms of "this is the sort of information that we might want to convey in a protocol", such as, you know, MILE or IODEF work, that would be one approach; but it would also be possible to frame this as a sort of introduction, or maybe tutorial-style, document.
N
Oh no, it's not the latter. I think it's notoriously difficult to bring new work in there, yeah, so it's not an easy avenue. I think it's just that this is kind of well known in other organizations, and so the idea is to bring it to where it would be useful. So yeah, I think it's definitely more the former.
F
Thanks. As far as the existential question goes, I think we do this: we do publish informational documents that are about operational experience and bring that operational experience to bear on protocol design. The relatively recent document that comes to mind is RFC 8517, which is about the effect of middleboxes on transports. So this is something that we do as a community, and I would certainly support this going forward.
F
I agree with the general comment here that how the document evolves will determine where it goes, but as it's currently outlined, it really seems like OPSEC might be a good place for it, to try to answer the dispatch question; or at least that could be the next best home for it, in a way. But my first question, I mean, one of the questions Kirsty brought up, well, one of the comments, has been:
F
Should this even be an RFC? I reflected on that, and on what we actually do: we do produce documents that provide information about operational experience, and that's what this is, in my mind. This is guidance to protocol designers about current operational experience, and that's one of the reasons I support it; and if I were asked the dispatch question, I'd think about OPSEC first. Thanks.
D
Looking for the double mute; yeah, I wanted to make a couple of comments. Oh, I was going to say OPSEC is a possibility, but that was just mentioned, thanks. The other one is MILE. To be more precise about how this could fit in MILE: it occurs to me that there's a particular architecture that MILE has with IoCs; there's a particular workflow being suggested about how they're used.
D
Could we marry up the workflow that's being proposed for using IoCs with some of the IETF technology in MILE? And then the last one to mention, this was teed up in the jabber session, is privacy considerations. Could this be inverted a little bit, to remind protocol designers that they expose features in their protocols, and so how could attackers potentially use those exposed features for more pervasive monitoring?
N
So it's just about design choices that affect the availability of IoCs, making no judgment on what those choices should be, or, you know, what would be best for whatever your use case is and the stakeholders that you're thinking of. But yeah, I hear the concern; I'd just like to redirect those kinds of questions back to the motivation point that I thought I covered in my talk. Okay, Alissa.
C
Alissa Cooper. Just on that point, Kirsty: when you say "prevent the technique from being accidentally ignored", that kind of implies some follow-on documents that would actually try to, you know, for specific protocols, enforce the availability of these artifacts; because just by publishing this document, it doesn't actually prevent the technique from being ignored. You would want to do something that's normative in order for that to happen. Right now...
N
So, I think that, you know, it's very easy, when you think some expertise isn't there, to be quite forceful in pushing it through. My view is that it's better to have the information available, and I don't think that this will at all precipitate a follow-on document.
N
What I've written is complete, in my view, for what it covers, and it will evolve, of course, with contributions from other people; but yeah, it's not intended to be the start of any bigger thing. This is just one piece of knowledge that I really would like to share; it's an important technique in attack defence, and so the idea is just to share it.
E
I share some of those concerns. I think the question really is whether it's worth trying to get an RFC out of this: the document could be just as useful as a non-RFC document, where, you know, we could say what we want to say, hit the points you think are important, and you can still get feedback; but getting an RFC would be effort. So I think that's the question I'd be working out first.
D
It depends on where the authors might want to take it, and I would recommend, you know, further community discussion about how to appropriately tailor this and where to insert it, if at all. I mean, there was a broad set of recommendations made here, and it's a -00 draft, so...
D
So I think the two working groups that were mentioned were MILE and OPSEC. It would be worthwhile, I think, to have conversations with both of those to figure it out; but I think the difference between those working groups, and again, it's a different approach, is that, you know, MILE is talking about a specific set of technologies, as I think we've talked about, while OPSEC is about a generalized set of practices. So which one of those do we want to pursue? That, I think, affects which working group.
H
My recommendation is to not immediately try to get adopted by a working group, but to get some more feedback and evolve the draft through a few more revisions; and I would not be opposed to having that discussion take place on SAAG, but I'm also open to other suggestions for discussion forums.
A
Great, thank you. So we have one minute left in our session, which is perfect, so I can give a short summary of the dispatch decisions that were taken today. For the SVT, the signature validation token, the decision was an action point for the AD to set up a mailing list and start a discussion there and then, following the discussion, possibly start a BoF. The client certificate HTTP header was dispatched to the HTTPbis working group.
A
So: start a discussion there, and possibly consider a working group focused on the back-end stuff. For adding SASL to HTTP, no actions were taken; it was already planned to have a discussion in HTTPbis. And finally, our last presentation was IoCs and their role in attack defence, and the dispatch action is to gather feedback on the document on the MILE and OPSEC mailing lists. I hope that sounds fair. And anybody who hasn't signed the blue sheet, please do.