From YouTube: IETF98-TOKBIND-20170327-1520
Description
TOKBIND meeting session at IETF98
2017/03/27 1520
A: Good day, everyone. You have found yourself in the Token Binding working group. If you believe that you have boarded the wrong working group meeting, please make your way to the exit. Now, the blue sheets are being circulated — everyone should sign them — and up on the screen we have the notes. Please note them well.
B: So the way we thought we'd do this is to get the people who have been active contributors to the discussion to take an active part today, and see if we can work out as much as possible of these issues. There are no slides for the core documents, and for good reason: we don't have a lot of material to cover other than this, the face-to-face discussion.
B: These are the issues that the chairs could glean from the discussion. That may not be a complete list — I'm sort of assuming it's not going to turn out to be a complete list — but it's a starting point. There have been two main sets of issues, other than the editorial issues, and the editorial stuff we propose not to cover at all.
B: We can deal with that on the list, as normal. As I alluded to, there are two main sets of issues. One of them has to do with the security considerations: there hasn't been a lot of support for making the change that Denise proposed around how to determine — essentially a terminology issue in that area of the security considerations — and we propose to actually leave it as is, so, make no change to the document. And then that leads to Martin Thomson's review, and I think Martin gets up to...
B: Exactly — and, you know, if some of these don't need any more discussion, that's fine, but we want to put them up here so everybody can bring them to some sort of finality.
D: Go at them... Hi, Eric Rescorla speaking. I'm about to be a person who has to approve this document, so with respect to Denise's point, it would be great if the chairs could run through the process carefully, so that you can tell me that there's consensus not to do this and that you've considered it, rather than just sort of letting it pass in silence — because that way you all...
B: Right, that's a fair point. We actually did ask a question on the list whether there was support for making the change, and there wasn't. Right — yes, we can absolutely sort of wrap that up by humming the room. So I'd like to ask the question: anybody who supports making the change of terminology suggested by Denise, please hum now.
B: And anybody who thinks we should leave the text as is, please hum now. All right — to me, that sounds like pretty clear consensus, and since we sort of pre-confirmed this on the list, I think we'll call this right now. All right — thank you, Eric. And I guess the floor is open; please settle these issues now.
B: Right — if you believe that work should be put into the draft towards eliminating the TLS extensions, hum now. Right. And the third question: if this doesn't make sense — if the question doesn't make sense to you, or if you need more information to answer it — please hum now.
E: Hi, I'm Dirk — I'm in the pink box. So the idea was: the referred token bindings are there for a situation where the client is basically asked to go to an identity provider, and the identity provider then issues some sort of assertion or identity token or something like that, which the client is then supposed to hand to a relying party.
E: The identity token or assertion is supposed to be bound to a key that the client controls, and the key is presumably something that the client doesn't normally use with the identity provider, because the client will use different keys for different websites. So normally the identity provider wouldn't know the key pairs that the client uses with the relying party. But there are provisions in the spec that sort of make exceptions to this rule and allow the client to show the key pair that it's normally using with the relying party to the identity provider.
E: There's this... the referred token binding in there, where the client can say: look, I am the client that you already know — here is proof that I'm in possession of this key pair, and here is also my cookie that is bound to that key, so you know who I am. Can you now please mint me this token that I'm going to use with the relying party? — so as to get to a sort of equivalent level of security between the client and the relying party.
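To make the structure Dirk describes concrete, here is a rough sketch — every field name here is hypothetical, and an HMAC over a shared demo secret stands in for the real asymmetric token binding signature (the actual protocol signs the exporter value with the private half of a key pair):

```python
import hashlib
import hmac

# Toy stand-in for the token binding signature: the real protocol uses an
# asymmetric signature made with the token binding private key; HMAC over
# a per-key demo secret plays that role in this sketch.
def sign(key_secret: bytes, ekm: bytes) -> bytes:
    return hmac.new(key_secret, ekm, hashlib.sha256).digest()

def make_message(ekm: bytes, idp_key: bytes, rp_key: bytes) -> dict:
    """Client -> IdP message: a 'provided' binding for the key the client
    uses with the IdP, plus a 'referred' binding for the key it uses with
    the relying party -- each carrying its own proof of possession."""
    return {
        "provided_token_binding": {
            "id": hashlib.sha256(idp_key).hexdigest(),  # stands in for the public key
            "signature": sign(idp_key, ekm),
        },
        "referred_token_binding": {
            "id": hashlib.sha256(rp_key).hexdigest(),
            # The point under discussion: the referred binding also carries a
            # signature, proving the client controls the RP key as well.
            "signature": sign(rp_key, ekm),
        },
    }

def verify(msg: dict, ekm: bytes, idp_key: bytes, rp_key: bytes) -> bool:
    """Check both proofs of possession against the connection's exporter value."""
    ok_provided = hmac.compare_digest(
        msg["provided_token_binding"]["signature"], sign(idp_key, ekm))
    ok_referred = hmac.compare_digest(
        msg["referred_token_binding"]["signature"], sign(rp_key, ekm))
    return ok_provided and ok_referred
```

The open question in the room is exactly whether that second signature, on the referred binding, is necessary.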
E: We don't always know exactly what kind of environment the token binding will be deployed in — whether it's really going to be a web browser, whether it's really going to be the Sec-Token-Binding header, and so forth. So, to make sure that clients can't be tricked into mentioning a key that they don't really control in this message, we thought it would be prudent, at this particular layer, to also make the client actually prove that they possess the key.
C: "Please bind the following key, that the client doesn't control, to this" — as long as the client is the one generating this message, we can guarantee that. And you can do that with a single signature, which is: you have a referred token binding, which is a token binding for the token binding ID that the client is using with this origin, where the same signature also covers the token binding ID for the other origin. Then you have a single signature.
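Martin's alternative can be sketched in the same toy notation (hypothetical names, HMAC standing in for the real asymmetric signature): one signature, made with the key used at this origin, computed over the exporter value plus the other origin's binding ID, so the second key needs no proof of its own:

```python
import hashlib
import hmac

def single_signature_binding(ekm: bytes, origin_key: bytes, referred_id: str) -> dict:
    """One signature by the key the client uses at this origin, covering
    both the exporter value and the referred origin's token binding ID."""
    signed_input = ekm + referred_id.encode()
    return {
        "id": hashlib.sha256(origin_key).hexdigest(),  # stands in for the public key
        "referred_id": referred_id,
        # The referred ID is attested by inclusion under this signature,
        # not by a second private-key operation with the referred key.
        "signature": hmac.new(origin_key, signed_input, hashlib.sha256).digest(),
    }
```

Because the client is the party generating the message, including the referred ID under the one signature attests to it; a side effect, raised later in the session, is that only one signature algorithm is ever involved, so the mixed-algorithm edge case cannot arise.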
E: Right — if you can be sure that the client can't be tricked into...
C: Yeah, yeah. Well, presumably the client would know that it's doing this — and the way we've written it, the referred token binding does exactly this, right? So when you have the referred token binding header in the redirect, you're effectively tricking the client into sending this information. So it's all automated, yeah.
C: I find that relatively convincing, and I'm kind of willing to step back on this one, but I would ask that you write that down, because it's not in the draft at the moment. There's a long explanation of why you have this, and the conclusion from that is that you don't need the signature, right? The actual text goes through — there's quite a lengthy section explaining why you have the two things in the referred binding, and the attack — but not the particular attack that you're talking about. That was my conclusion from reading it.
A: Does somebody else want to put the counter position? Because otherwise I can make a couple of other points sort of in favor of Martin's argument, even though I don't personally agree with it — we may as well get them out on the table rather than bring them up later. So: the problem that we have with the two signatures...
A: The current state is that there is an edge case where the relying party uses different crypto algorithms that aren't understood by the server. So in principle you could have a referred token binding where the browser supports both algorithms, the server supports algorithm A, the client supports algorithm B, and the server won't be able to actually understand that referred token binding. That wouldn't happen under the case of Martin's one signature over both token binding IDs.
I: So — Andrei Popov, Microsoft. My argument against this is that, in practice, if you have RPs and IdPs, then it is reasonable to expect that the IdP and the RPs that relate to it actually have a communication channel between them, and that they actually coordinate deployment of the token binding parameters.
I: I think that's a reasonable thing to expect from them. And my other argument against eliminating the signature on the referred bindings is that, today, when we look at the actual redirect mechanism that we've currently defined, we can maybe convince ourselves that there is no way to dupe the client into sending a referred binding for a key that the client does not control — but from the protocol perspective, I would like the token binding implementation to be independent of the details of the federation mechanism. So, you know...
K: Hi, I'm Derrell Piper, sort of from DEC... I think I'm following the Microsoft guy. You know, you can tell a lot about working groups from the kind of language they use — the domain-specific languages they're inventing, the terms they feel a need to redefine, for example — it tells you a lot. I'm hearing language...
K: ...that sounds a lot like the language that was used once upon a time in GSS-API to bind tokens securely, with things like cryptographic-algorithm-independent protocols and assertions into the API about what levels of protection you want these bound tokens to carry. And I brought up Microsoft because I know they have it, and I know I used it in IKE to authenticate cross-realm — to authenticate domain controllers — and I'm just wondering why this working group seems to be reinventing the wheel. I was told it...
B: All right — I don't know if anybody wants to comment on that. Again, as I said, I don't think there is duplication of effort; I think there's more... so, there's...
B: So, Martin, did you want to say anything about the eTLD+1 issue, or is that another...? Okay, look — so I'm just going to do a pro forma hum on that, to make sure that there's nobody else who sort of thinks that actually that is really important.
B: All right — so, rephrased: if you believe that marking...
N: Hi, Jeff Hodges. So this is in regards to the HTTPS token binding spec. Just a slight clarification on Andrei Popov of Microsoft's point: there are two levels of things that the specs say. The protocol spec says that the token binding keys should be scoped no larger than the scope of the tokens in the application protocol, and then the HTTPS token binding spec says that the scope is eTLD+1. Right.
J: To be a little bit clearer, the current text is slightly more complicated, in that it has one case that is a MUST for eTLD+1 and another case that is a SHOULD for eTLD+1. So it's: key pairs for first-party and federation use cases defined in this specification and used for binding HTTP cookies is the MUST; and then, if you want to bind other things — like OAuth tokens or OpenID Connect ID tokens — that's the SHOULD.
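To illustrate the scoping rule being discussed, here is a toy sketch of computing the eTLD+1 (registrable domain) for a key-pair scope. This is illustrative only: a real client would consult the full Public Suffix List rather than the three-entry table assumed here.

```python
# Toy public-suffix table; real implementations use the Public Suffix List.
_SUFFIXES = {"com", "org", "co.uk"}

def etld_plus_1(host: str) -> str:
    """Return the registrable domain (eTLD+1) for a hostname.

    Per the discussion: scoping token binding key pairs at eTLD+1 is a MUST
    for cookies in browsers, and a SHOULD (i.e. other scopes are possible)
    for other tokens such as OAuth or OpenID Connect ID tokens."""
    labels = host.lower().split(".")
    # Walk from the full name down: the first suffix match is the longest one,
    # so "co.uk" wins over "uk"; then keep one additional label.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in _SUFFIXES:
            if i == 0:
                raise ValueError(f"{host} is itself a public suffix")
            return ".".join(labels[i - 1:])
    raise ValueError(f"no known suffix for {host}")
```

So `accounts.example.com` and `www.example.com` would share one key-pair scope, `example.com`, which is what lets a bound cookie set on the parent domain keep working across subdomains.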
C: So — Martin Thomson — since you have the text in front of you: does that allow you to go wider in scope? What does it say? ...That's the MUST. I mean, the SHOULD there suggests that you could choose to ignore that rule and do something else — in which case, does that mean that you could do wider-scope tokens, that you're going to give the same ID to every single site on the Internet? So that's permitted by the spec? Okay — that's a no from me.
O: Brian Campbell, Ping Identity. So the eTLD+1 is a MUST maximum for web browsers. It's come up a few times — the wording is very confusing — but the intent, as I understand it, is that a broader scoping is permitted for other applications which use HTTPS, such as OAuth. How that shakes out is still a little unclear, but that's the intent: to allow that. The text could maybe be improved. Yeah — there's actually...
J: There's another part that I can read you, then, which says that applications other than web browsers may use different key-pair scoping rules. Sorry.
I: Andrei Popov, Microsoft — just confirming that this is indeed the case. So originally we were going to say it's a MUST — you know, that this is the scope and you cannot be broader than that — and then the point was brought up that in some application domains there is inherent knowledge in the client of certain domains... you know, the unlinkability problem does not apply in some domains. Basically, that's why we kind of made it a SHOULD.
N: Hi — Jeff Hodges. One could argue that we may want to document the design rationale on this point.
B: So, Martin, let me ask you this: do you believe that there is language to be clarified such that your issue effectively disappears? Or is there some way for you to sort of formulate your issue in a...
C: So my preference would be that all new specifications actually used origin, because we actually understand what that means — it aligns with the security boundaries on the web, and it's pretty straightforward. But I understand that we have cookies here, and we'll actually want to protect cookies, so I appreciate the position that people have taken on this one. It's just that I wanted to make sure that we had a discussion about it — and we had a discussion about it, I think. That's... and when...
C: Language that could be... I think that actually doing as Jeff suggested — saying "we're doing this; we understand that it is bad, but it's necessary to protect cookies, because cookies are bad, and maybe this makes cookies somewhat less bad" — is probably something that would be reasonable to put in a document. And fix the SHOULD... at the same time, I suggest that you...
C: The case where you're using HTTPS and you're driving it through an API, and you're not operating inside a web browser, and you understand who you're talking to, and you want to maintain some sort of consistent identity because you have a token that's shared across a disparate set of servers — then fine, do whatever you want to do, and I think the specification is pretty clear on that one. No?
B: Very helpful — thank you. With that, I think we have exhausted what we identified as the list of issues. Just before we... I just want to ask if anybody has identified anything else in the last-call review that you believe we haven't covered, other than editorial issues.
O: Yeah — I raised it a little while ago: the ramifications of allowing for longer EKM values. That generated a little bit of discussion that felt like maybe it was tending towards "no, let's go back to just having a fixed 32-byte EKM that's signed over all the time", but it sort of trailed off and didn't come to any clear consensus that I'm aware of. So I'd love to get to an answer, one way or the other — or not.
I: Andrei Popov, Microsoft. So I would just like to try to explain the issue again — the EKM, right, and the history of that. So originally, in our token binding spec, we said that the EKM will always be 32 bytes for token binding protocol version one. You know, once you have a new version you could have a different EKM, but basically it was fixed.
I: Then an argument was brought up in this working group saying: you know, why don't we make the length of the EKM dependent on the type of the token binding key? So if you add a new algorithm — using a different hash, for example, in your signature scheme — then you can use an EKM of a different length. That was the argument, and we did that: we said that for the currently defined token binding key types we will be using 32-byte EKMs, and in the future...
I: ...this may change with a new definition of a token binding key type. Now, it is a valid point that this creates issues — for example, when there is a TLS terminator which basically forwards the token binding message and the EKM to some back-end server for verification. In this case, you know, without the TLS terminator knowing all the possible token binding key types, it doesn't necessarily have the ability to generate all the necessary EKMs to be forwarded to the back-end server for verification.
I: So there is this little bit of a corner case, but it's an interesting kind of issue to discuss. And, you know, in principle our choices are: to go back to saying that in token binding 1.0 you always have a 32-byte EKM, and if you want something else then you need a new token binding version; or we can say that, for example, the TLS terminator is going to generate the EKMs of various lengths and put them all in that message.
O: More bits, basically. But the argument was that the signature schemes defined now have a strength that's somewhat equivalent to being able to sign over 32 bytes — sorry, bytes, not bits, but it's practically the same, right — and that there might be a signature scheme, defined as a key parameter type in the future...
O: ...that needs a longer input into what's being signed over in order to achieve its full strength. And I was actually the one that made that argument — although it was less of an argument and more of a "should we maybe consider this" — and I think that, practically speaking, there aren't algorithms known, or likely to be adopted here, that require that. So I think the discussion on the list was that...
D: ...aware of any reason why you need to go larger than that. If you did, you'd have to tie it to a bigger function — which tells you the maximum is 48, because, at least in the current version of TLS, it's tied to the cipher suite, and the largest hash in the suites we currently have defined is SHA-384. So that's the biggest possible, and of course you'd also have to actually be using that hash.
C: So — Martin Thomson — I have a suggestion here; it was just mentioned: we can make the EKM as long as the output of the hash function — that is, the TLS PRF hash — and you avoid the problem.
C: If you have the entropy, then you get the entropy that you have; and if you don't have the entropy, then you don't get much more entropy. And then you're in the situation where you want to go to P1342 — which is an awesome prime, by the way, because I just made it up — and then you have to pick a size for it. That is actually... yeah — the best prime, unlike 2^521 − 1... you'd have to pick a size this week.
C: So, to be clear, the consequence of this change for people who are negotiating AES-GCM with SHA-256 is nothing; and for those people who happen to be using AES-256 with the SHA-384s — and I think that's probably one of the only cipher suites that we actually have that's got the wider hash — those people would be affected: they would be using a larger EKM, and so it would be incompatible. So if people are routinely negotiating those things, it would break.
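The incompatibility Martin describes can be made concrete with a small sketch (the suite-to-hash table is an illustrative subset, and `tie_to_hash` is a hypothetical switch between the two proposals): tying the EKM length to the negotiated suite's hash changes nothing for SHA-256 suites, but widens it to 48 bytes for SHA-384 ones.

```python
import hashlib

# Negotiated TLS 1.3 cipher suite -> its HKDF hash (illustrative subset).
SUITE_HASH = {
    "TLS_AES_128_GCM_SHA256": hashlib.sha256,
    "TLS_AES_256_GCM_SHA384": hashlib.sha384,
}

def ekm_length(suite: str, tie_to_hash: bool) -> int:
    """EKM length in bytes under the two proposals on the table."""
    if not tie_to_hash:
        # Current rule: a fixed 32-byte EKM for the defined key types.
        return 32
    # Proposed rule: as long as the output of the negotiated hash.
    return SUITE_HASH[suite]().digest_size
```

SHA-256 suites are unaffected by the change; a SHA-384 suite would start producing a 48-byte EKM, which an unmodified peer still expecting 32 bytes would not accept — the breaking case raised in the room.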
I: Andrei Popov, Microsoft. So this does not sound like a bad idea. There are a few — not a lot, exactly — a few potential implications from this. So one implication is that it would mean that whoever implements the token binding protocol specification — if you did this change — needs to figure out what cipher suite was negotiated, right, and request an EKM of that length.
B: Just a process thing: you know, the chairs sort of did a quick huddle, and we both feel that if we do this, it's more than just a clarification and, you know, an editorial change, and that would probably require us to do a quick — but still another — round of working group last call, to make sure everybody thinks it's worth it. You know, you'd absolutely do a one-week one, or if that's called for... but still, you know, we're gonna do that.
J: Ben Kaduk, Akamai — just noting that this idea generally seems reasonable as we bring it up right now, in the abstract, and the only concern I would have that's not obvious right here in the room is — just as Andrei was sort of talking about — how is the plumbing in a framework going to actually get the information to where it needs to be? So hopefully someone will think about that harder in the next day or two. Yep.
M: Just wanted to stand up and say I think it's a bad idea — because someone should stand up and say it. It's excess complexity for no particular reason, because 256 bits... even if you're worried about collisions, like at 128 bits, you'd need 2^128 different connections to generate a collision in this space. I would say: keep it simple.
B: Absolutely — we're not committing any future versions of the protocol to anything, and you're allowed to change the text with, you know, the right amount of arguing. So the other approach would be to make no change, and clarify that if you actually run into this problem, you'd have to revisit it. That's another option. And the third option is to make it the size of the hash function, which is a breaking change. That's the third option, right. Anybody — anybody think of other options?
B: All right, next question: if you support making no change, other than a clarification saying that this is something you would need to revisit in the protocol if it becomes necessary in the future — please hum now. All right. And the third option — make the breaking change, making the size of the EKM dependent on the size of the hash function — hum now. All right.
B: Alright — so, anybody else? Any other issues that we had hoped to bring up that anybody has remembered? Not seeing anybody rush for the mics, I think in that case we're ready to do the following: if the authors could ship another edition of their documents with all of these issues addressed, including the editorial nits that we have discussed on the list — at that point we'll ship it upstairs and hope for the best.
R: Also, I changed the client to switch from the 0-RTT exporter to the normal exporter during the connection, to satisfy some concerns about using the 0-RTT exporter; and I also clarified that, when we use the early exporter, there is a token binding extension with that. It's just an indicator extension — zero-length data — to indicate to the server which exporter is being used, so that the server does not have to do a trial verification of the signature.
R: So, on to the next slide. This is an overview of what a client malware attack could look like in the token binding protocol. So you have the attacker on the left here: it does a TLS connection to a server. The attacker uses their access to the client machine, in the center, to generate a signature over that connection's EKM, and then sends the token binding with that signature over the connection from the attacker to the server, with the bound token stolen from the client.
R: The key point here is that once the connection between the attacker and the client has been severed, the attacker cannot create any new connections to the server. So the proof of possession in normal token binding is that whoever sent the token binding message had possession of the private key — was able to do a private-key operation — at the time the connection was established. So, the next slide: this is what it looks like in zero-RTT.
R: The attacker can make several TLS connections to the server and get new session tickets that can be used for zero-RTT, then forward requests to the client and generate token binding signatures for future connections using these new session tickets. At a later point in time, the attacker can now initiate a TLS connection as a 0-RTT connection, and the early data includes the token binding signature over the early exporter — and all of this can be pre-computed without any input from the server. So this is where the concern is.
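The precomputation property can be sketched as follows — toy HMAC stand-ins again, with hypothetical function names; the only point being illustrated is that every input to the early-exporter signature is known before the final connection is ever opened:

```python
import hashlib
import hmac

def early_exporter(psk: bytes, client_hello: bytes) -> bytes:
    # Sketch: the 0-RTT (early) exporter depends only on the resumption
    # secret from the ticket and the ClientHello -- nothing the server
    # contributes on the new connection.
    return hmac.new(psk, b"early exporter" + client_hello, hashlib.sha256).digest()

def precompute_attack(psk: bytes, client_hello: bytes, victim_sign) -> bytes:
    """While the attacker still controls the victim, it can use the victim's
    token binding signing capability as an oracle over the early exporter
    of a connection it has not yet made."""
    return victim_sign(early_exporter(psk, client_hello))

# Later -- with no remaining access to the victim -- the attacker opens a
# 0-RTT connection with the same ticket/PSK and ClientHello and attaches
# the precomputed signature, which the server will verify successfully.
```

Contrast with the full handshake, where the exporter mixes in fresh server randomness, so the signature cannot be produced ahead of time.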
R: There is a lack of proof of possession, because after the attacker loses access to the client machine — as long as it has a new session ticket that is still valid and it went through these steps previously — it can then send a token binding signature that is still valid. By "valid" I mean the server can verify it. So there's this sort of lifetime concern, of up to seven days.
R: And the next slide — this is just outlining a different possible attack that can be done. We still have this seven-day — or new-session-ticket-lifetime — concern here. In this case, the attacker is actually replaying the token binding message and ClientHello from a previous connection. In practice, this one is probably less likely to be workable, just because it needs to replay that exact message, and the server is hopefully going to enforce some amount of lifetime on the ticket age, which will be a shorter time period.
R: So that's sort of the overview of the proof-of-possession concerns here. I think, even with the sort of seven-day time frame, this form of token binding — which has these security concerns — is still something that is valuable. It is especially valuable in the use cases where client malware is not in the threat model that's being considered: for example, protecting OAuth tokens from XSS — there's no client malware involved there.
R: Basically, when a client starts preparing a request, there's going to be some time delta between when it starts preparing that request and when it sends it on the wire. In the diagram I've sketched out here, for request C, it starts preparing that request before the early exporter — or before the normal exporter — is available, but then sends it after that's available. So having this hard limit adds a lot of complexity to the server. This also creates issues with fragmentation of TLS records.
R: The stack could decide that it needs to fragment a request for whatever reason — it could be something related to the 1/n−1 split for CBC IV randomization, or it could exceed the maximum TLS record size if it's a large POST request. There are several reasons that fragmentation could cause an issue here; and then, if the token binding message gets split across records, pre- and post-handshake, then I don't think anyone has any explanation for what that means.
R: So this mechanism of having the client change the exporter as soon as possible, I think, is more reasonable. "As soon as possible" is kind of vague language, but essentially what I mean here is: a client sends some request — maybe sends it with the early exporter; the server doesn't like that it used the early exporter for whatever it sent, says "this is not secure enough for me, I want to see this with the normal exporter", and sends back a response that says: please try again with the full exporter.
R: Again, with the full... yes, that's a good point. So TLS 1.3 doesn't have any mechanism to say "I didn't like what you sent in early data" after it has accepted the early data, so this would have to be an application-specific signal. For HTTP, this signal could be a 307 redirect — to keep the method the same — redirecting to the same location, with possibly an added query parameter to say that it performed this redirect.
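A minimal server-side sketch of that application-level signal — the function name and the `retry` query parameter are hypothetical; only "307 keeps the method and points at the same location" comes from the discussion:

```python
def check_binding_exporter(path: str, used_early_exporter: bool,
                           require_full_exporter: bool):
    """Return (status, headers) deciding whether to ask the client to retry.

    A 307 redirect keeps the request method intact, so the client can
    resend the same request once the normal (post-handshake) exporter
    is available, with a query parameter marking why it was redirected."""
    if used_early_exporter and require_full_exporter:
        sep = "&" if "?" in path else "?"
        return 307, {"Location": f"{path}{sep}retry=full-exporter"}
    # Either the binding already used the full exporter, or this server
    # is willing to accept the early one for this resource.
    return 200, {}
```

The query parameter lets the server distinguish a genuine retry from a client that simply loops, which matters given that TLS itself offers no "reject after accept" signal for early data.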
F: Subodh Iyengar — yeah, so why is this different from, for instance, a server allowing the 0-RTT exporter for some amount of time and then expiring that...
R: ...later. This is basically the same thing I said — this is sort of codifying the client rules for when to switch. There isn't a sort of time-based thing here, nor is there a ratchet mechanism explicitly indicated. So one could imagine that the timing of two requests is such that the client starts both of those requests at the same time, and then it flips the order in which it sends them on the wire — and ends up sending one with the early exporter right after one with the full exporter. So... okay.
F: I'm just suggesting that, on the server side, you allow both for some amount of time, and you say: hey, it's possible that one request was sent over 0-RTT and another one was sent over 1-RTT, just close to each other, each with its exporter. But then later you ratchet it, on the server side — like, after a certain...
A: So what is the security consequence of accepting that request with the early exporter in the token binding — in, say, an access token? Is that going to give that request access to some information under the early exporter? Or are you proposing that that request just be thrown away, because it doesn't meet the server's security requirement?
A: I guess we would have to explain that there are different security properties of these differently keyed requests to whatever is the resource server, or authorization server, or client — it sounds like. So first, the question is: what is the difference? Is there a meaningful difference in the security properties that would cause one to be vulnerable to meaningful attacks by accepting the...
I: So — Andrei Popov, Microsoft. On the first question: I do think that token binding with zero-RTT creates replayable tokens, in a way, right? They can be either replayed directly, with the same message, or, you know, shown again, as in the attacks. So all of those attacks with token replay become possible if you do 0-RTT. So I do believe that it involves a degradation of security, in a way, and it takes away some of the guarantees of the token binding protocol that we have.
I: My second point is that trying to ratchet up from the early EKM to the appropriate EKM, with proper proof of possession, is a pretty complicated mechanism as currently defined. But I also think that, you know, once the server has accepted a weakly bound token from 0-RTT, the horse is kind of out of the barn, so trying to secure it after that is a little bit... So I'm just not convinced that the complexity is worth it. Okay — like, either you accept 0-RTT token binding, and you are willing to deal with it...
R: If the server receives that same request post-handshake, it can verify it — it's fine. What if it still gets the 0-RTT exporter with that? Because it was fine receiving it pre-handshake with the 0-RTT exporter, that request can have the 0-RTT exporter at any point. And then the next part of this argument is that a TLS 1.3 server cannot reject early data based on the contents of the early data: it needs to decide whether to accept it before looking at it.
D: Or you could tell me I'm a lot wrong — so I'm going to stand here. So my understanding is that the security property you were attempting to obtain is that I can't bootstrap temporary access to the signing key on the client's device into permanent... into a permanent proof, right? That's part of the principle: I temporarily controlled the client's device, and I signed a bunch of crap, and then I lose access, right? And so the... what the problem is — the reason it doesn't retroactively bless...
D: ...the 0-RTT data — is that ordinarily the thing you're signing includes the server's randomness — an anti-replay nonce — and now this does not. So I can basically mint as many tokens as I want, and then, because I complete the handshake with the server... it's only the TLS anti-replay you have — it's not anti-replay from the token binding signature. And so...
D: I was going to say what you said, and then I realized halfway through that I think it's not true — because the 1-RTT token binding covers that server contribution at that level, even though you're right to say that, for the TLS data alone, the 1-RTT handshake can't retroactively protect it... if you've confirmed anti-replay.
R: Even anti-replay on that doesn't help. So: the scenario on slide four shows that the attacker generates its own session tickets to use, so it's never replaying a new session ticket — it is just using the ones that it's generated, using the client as an oracle to sign, to use after the fact. Yeah — I mean, my...
R: Yeah — so, I think I had mentioned this on the mailing list. An original proposal for doing this was that, basically, in terms of how the processing rules work for the token binding extension on 0-RTT: if a client does not want to do the combination of 0-RTT and token binding, and it has a new session ticket that it could use for 0-RTT, it just doesn't do 0-RTT, and that's fine.
R
I think a better mechanism would be to propose another TLS extension, just an indicator extension, where the client says "I support doing token binding over 0-RTT" and the server echoes that back if it supports it. And it sounds like something that could also be put in the new session ticket, maybe just as a reminder, yeah.
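The indicator-extension negotiation just described could be sketched roughly as follows (a minimal sketch; the extension name `token_binding_over_0rtt` and these data structures are hypothetical illustrations, not from any draft):

```python
# Sketch of the negotiation described above: the client advertises a
# hypothetical "token binding over 0-RTT" indicator extension, and the
# server echoes it back only if it supports the combination too.

def client_hello_extensions(supports_tb_over_0rtt: bool) -> set[str]:
    """Extensions the client offers; names are illustrative only."""
    exts = {"token_binding"}
    if supports_tb_over_0rtt:
        exts.add("token_binding_over_0rtt")
    return exts

def server_hello_extensions(offered: set[str], server_supports: bool) -> set[str]:
    """The server echoes the indicator only when both sides support it."""
    echoed = set()
    if "token_binding" in offered:
        echoed.add("token_binding")
    if "token_binding_over_0rtt" in offered and server_supports:
        echoed.add("token_binding_over_0rtt")
    return echoed
```

If the indicator never gets echoed (or is absent from the session ticket), both sides simply fall back to plain token binding without the 0-RTT behavior.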
R
So if this flag isn't there in the new session ticket, the client is still safe to put token binding in its ClientHello, and because that extension isn't in the new session ticket, the server will not assume that it's doing this newfangled thing; it will assume that it's just, you know, the existing behavior. This seems intricate enough, which is...
R
Is this replay-protection TLS extension sort of the "let's make sure that a server won't accept the same new session ticket twice" one? Based on the discussion of the threat model, and the attacks that an attacker can do that don't involve actually replaying a new session ticket, I don't think this is useful, so I'd like to remove this.
R
Looks like... so, upcoming changes. This first bullet point was fixing up some language around changing exporters. Sounds like, based on that, I'd like to get a better sense of the consensus: should we stick with switching exporters, in which case this is a language fix, since I got the lingo wrong; or should we just go back to using the 0-RTT exporter for the whole time? And...
R
This argument is sort of looking at this while ignoring what the application protocol is, and assuming that TLS gives us no mechanism for a server, after accepting early data, to basically reject the early data, yeah. The 307 argument is an argument that says: with HTTP, we do have this mechanism to sort of reject it and ask for the request again later. So...
M
There's always a danger of building complexity out of the fear that somebody may need something. Do we have concrete cases? I'm looking sort of at Dirk, and maybe somebody from Cloudflare there: do we have concrete cases where servers would want to 307 requests if they got them on a zero-RTT exporter?
R
So is the idea there that your token binding key would be in a place on the client machine that's not accessible by malware, but does have an API to sign with it? Yes, so the idea is that the key on the machine might be, sort of, backed by some hardware module or whatever, but there is some API for signing with it that the browser, or whatever client software, has access to, which the client malware can run at the same privileges as that software.
I
Andrey Popov, Microsoft. Definitely, this depends on who you ask. In IE and Edge we use stronger protections for the token binding key than for the cookies, so malware that can access the cookies cannot necessarily get at the token binding key. But they do have access to sign, because the application process, you know, is supposed to sign. So, yep, the...
F
My question is: why isn't there a third option, which is that, if you have access to the 1-RTT exporter, you use the 1-RTT exporter; otherwise it's 0-RTT, but the server can choose how long the 0-RTT exporter is actually valid for. So, for example, instead of sending a 307, the server could implicitly use its RTT estimate to say: hey, I expected a 1-RTT request; this should have come in on the 1-RTT exporter by now.
F
I'm not suggesting an explicit signal at all. I'm suggesting just that the server ratchets: it tries the 0-RTT and the 1-RTT exporters, both, on each HTTP message that it gets. So it tries to validate against the 0-RTT exporter as well as the 1-RTT exporter, and then, after some amount of time, it says: sorry, the 0-RTT exporter is not valid anymore, go away. Like, if there's an error, it sends a 307, which it can do anyway; it doesn't need token binding support for that. So it has this.
F
Yeah, I'm saying that, well, it's an argument against saying always do the 0-RTT exporter. That's a third option, right? So if we're humming on a fourth option... sorry, I mean, if we're humming on options, why isn't that an option to choose from?
A
I think that we proved that we don't actually have any kind of consensus. Okay, further conversation about this on the list would be useful because, as I had sensed, not everybody is there yet. I don't know that we can actually formulate binary questions on the first one; and there was another one that we wanted to get to.
O
All right, yeah.
O
Oftentimes applications are deployed behind some kind of reverse proxy that actually does the TLS termination for these applications. To actually do all the kinds of things you'd want to do with token binding, something needs to be passed from the front end terminating TLS to the back end, in the general case anyway. There are a few things you could do differently, but in the general case of supporting this...
O
You need to pass some information between the two, and, in the absence of some standardized or documented way of doing this, everybody's going to do it differently.
O
This is bad for interoperability and for security, and, you know, an opportunity to get things wrong. Just as an example of it: I was recently working on mutual TLS client authentication, and passing that between a front-end and a back-end component, and everyone does it differently. And now we've built it into one product, and our other product has to do it differently than that one, to match up with the way the first one did it, because it was based on a de facto implementation of how Apache did it. It's just a mess, and I'm...
O
...looking to avoid that kind of mess for these kinds of things going forward with token binding, however that may be. And so, next slide, please. So we got to a consensus to work on the problem; that was as far as we got, because I didn't have a draft or anything, but I took that and ran with it. I wrote up a -00 draft, and basically how this works is: there's a new HTTP header introduced, to be passed between the reverse proxy and the backend application, called Token-Binding-Context.
O
It's a base64url-encoded byte sequence, and basically that byte sequence is a concatenation of: the token binding protocol version that was negotiated; the token binding key parameters value that was negotiated, which is just one byte; and the EKM value. This is a sufficient amount of information for the back-end application to actually validate the token binding in the Sec-Token-Binding header that's provided and forwarded.
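That concatenation can be sketched like this (a sketch based only on the description above; the two-byte version and 32-byte EKM here are assumptions for illustration, not the draft's wire format):

```python
import base64

# Sketch of the Token-Binding-Context header described above: a base64url
# encoding of (protocol version || key parameters byte || EKM). Field
# sizes other than the one-byte key parameters are assumed.

def encode_context(version: bytes, key_params: int, ekm: bytes) -> str:
    raw = version + bytes([key_params]) + ekm
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")

def decode_context(header: str) -> tuple[bytes, int, bytes]:
    padded = header + "=" * (-len(header) % 4)
    raw = base64.urlsafe_b64decode(padded)
    return raw[:2], raw[2], raw[3:]

# Example: draft version 0.13, key parameter 2 (ECDSA P-256), dummy EKM.
hdr = encode_context(bytes([0, 13]), 2, b"\x00" * 32)
```

The backend splits the value back apart the same way to recover the negotiated version, the key parameters, and the EKM it needs to validate the forwarded Sec-Token-Binding header.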
O
It requires some level of trust between the reverse proxy and the back-end application, that they're not sending it to the wrong people or accepting it inbound from somebody else, and the reverse proxy also needs to sanitize this header, so it can't just be passed straight through from, you know, the actual original client. Quick example: it looks like that. If you break it out, you've got, you know, the version there, draft 13 (this came from a live example); 2 is ECDSAP256; and then the EKM follows that.
O
Next, please. And so, in real life, someone actually implemented this already. I have it shown up here; the bottom one is the Token-Binding-Context. This is a mod_token_binding plugin for Apache, by a former colleague of mine, Hans. I think he is on the line, so thank you, Hans. He's got it working, which is nice; it's not too difficult.
O
Next slide, please. But he and some others recently have raised some questions about whether this is really the right approach, and my goal, really... this is just kind of one way of doing it; I'm not married to one or the other. But there are kind of two general ways to tackle this. One is the current one, where the backend application validates the token binding message itself, and it does this by being provided this additional context data from the front end. This keeps the TTRP...
O
...the TLS-terminating reverse proxy, relatively lightweight. It doesn't have to do any extra cryptographic work on the token binding header, and it doesn't have to parse the HTTP header and pull it apart, which is, I think, a goal that many have stated either explicitly or implicitly for this: to keep that layer as light as possible. I think at one point Dirk even said that Google's implementation does validate the token binding at the front end, and that, in retrospect, they might consider doing that differently.
O
Is that still true? Yeah, maybe. Okay, so that's kind of the main goal here. It does introduce maybe some problems reconciling and updating supported key parameter types: if you have a lot of different applications behind one reverse proxy and you want to add a new key parameter type, all those different applications have to be able to support it in order to make this work, which might make both the management of it and the rolling out of new ones rather difficult.
O
The alternative approach, at some level, would be to have the reverse proxy validate the token binding and pass back some information about it: either the token binding ID itself, the whole message, maybe a hash of the message, I'm not sure exactly, but something, basically, where the work happens on the front end and some sort of digested version of the token binding gets passed back to the back end. This is way simpler for applications.
O
I think it's a lot easier to deal with the supported key parameter types, because they're all isolated at the front layer: you can upgrade that and everything works; you just pass the token binding back to the back end. The downside of it is that it's definitely less light on the TLS-terminating reverse proxy, and that violates a goal that at least some have definitely said...
O
...they're interested in. And I've had some people go so far as to say that the draft should describe how to do both, whereas I would really prefer to have just a single way of doing it, whatever it is, rather than both; but it's been thrown out there as a compromise of sorts. So the current draft is the first approach, but the second one, I think, is totally valid; there are trade-offs between the two. Yeah. Is now a good time? Yeah, okay.
D
So I'm not particularly fussed about who does the parsing and validation, but I am fussed about the fact that if the proxy fails to strip these properly, you have a really serious problem. And so, as I said in email, it would really be nice if the proxy gave you some cryptographic mechanism, something cryptographically binding it to this value, so that you can check it.
P
I don't think we... yeah, I agree, we don't have time. So: Justin Richer. I'm in favor of the first approach, of sort of doing the calculation and passing it back to the applications, because that allows the applications to do sort of the full processing of these things, and figuring out what they want to do with that...
P
...information and whatnot. And I would also like to point out that this does not preclude the gateway from also doing validation of the TLS token binding, if it wants to dig into the messages and all of that stuff as well. The question, then, is: what can it then pass back to the backend system to say that?
P
Yes, I also did this, and this is where I think that what was just brought up by Eric was a really good point: if the gateway, if the terminator, is doing something, then the client sitting behind it should have some indication that it really did come from the terminator. We see the exact same problem at the application layer, with API gateways and all sorts of other messy things that people are... you're absolutely right in your intro.
P
People are going to deploy these things anyway, so we should get the tools to do this correctly. So I would like to see something that mixes in keying material from the gateway, or something like that, in generating this, to prevent injection attacks and lots of other things that we're already seeing in other similar spaces. So...
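One way to realize the "mix in keying material from the gateway" suggestion might be for the proxy to attach a keyed MAC over the context value that only the backend can verify (a sketch; the key distribution and the idea of a separate tag are assumptions, not anything proposed in the draft):

```python
import hashlib
import hmac

# Sketch of the idea above: the gateway computes an HMAC over the
# Token-Binding-Context value with a key shared only with the backend,
# so an outside client cannot inject a forged context header even if
# header stripping at the proxy fails.

def gateway_sign(context_header: str, shared_key: bytes) -> str:
    return hmac.new(shared_key, context_header.encode("ascii"),
                    hashlib.sha256).hexdigest()

def backend_verify(context_header: str, tag: str, shared_key: bytes) -> bool:
    expected = gateway_sign(context_header, shared_key)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, tag)
```

The backend rejects any request whose context header does not carry a valid tag, which gives the cryptographic binding to the gateway that the previous speakers asked for.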
S
Nedelin. We've implemented this different ways within Microsoft, and one of the issues that we have is that this doesn't work on untrusted endpoints very well at all; so that's a use case where this thing just doesn't work. We're also not seeing any real need for the interoperability aspects of this: mainly what we're doing is talking to our own back ends, and we want to be able to do what we want to do, our own way, and sometimes we have validation on either side of this thing. So we're not...