From YouTube: IETF115-SAAG-20221111-0930
Description
SAAG meeting session at IETF115
2022/11/11 0930
https://datatracker.ietf.org/meeting/115/proceedings/
B: Good morning, everyone. It's Friday, we made it. Welcome. This is the Security Area Advisory Group. My name is Roman Danyliw, I am one of the security ADs, and sitting next to me is Paul.
B: Let's get started. All of the slides are in the Datatracker if you want to follow along there, and we also have collaborative meeting minutes, so if you want to help edit those, that's a possibility as well. On the screen is the Note Well, just a reminder: you've seen this at every meeting you've attended. We have various policies and procedures in place; please make yourself aware of them.
B: We probably have as many remote participants as we have in this very small room here. I'm not sure why we got sized the way we did, so it will be kind of intimate. What we have up are a few tips about how to make the experience most productive. If you are in the room, please scan the QR code to bring up the Meetecho light client, and please keep your mask on.
B: This is the agenda we previously published. We're going to go through the administrative items and summaries of the area. We're going to have Margaret Cullen give us a talk on significant adoption of a couple of key IETF technologies. Then we're going to talk about formal verification, a topic that has popped up in a couple of other places, and then Justin Richer is going to talk to us about HTTP message signatures, which I believe is in working group last call in the HTTP working group. Pausing there for any agenda bashing.
B: Not hearing any, let's press forward. I wanted to call out a couple of things. Participating in the security area means a lot of different things, and we welcome hearing more from interested participants about how you would like to participate. There is, of course, sitting on this side of the room: if you're interested in being a chair, please let us know. Another great way to get involved in the real business of how the working groups do things is to be a document shepherd.
B: This gives you a lot of insight into how to do document reviews and the IETF process, and it is a tremendous help to the working group, the working group authors, and to Paul and me. We're going to talk about this later: there are all sorts of errata in the security area where we need to do a little bit better job.
B: At closing them, that is. This is another way to learn about how we do maintenance on IETF protocols that we publish, some in active working groups, some not so much, and that's certainly a place where we need a lot of help. And then, of course, there's participating virtually if you can't make it in person: virtual participants are first-order participants as well, so please join, especially in BoFs, to help us understand what work we want to bring into the IETF.
B: So I will say you're going to see something a little bit different. Paul introduced an innovation here: maybe we don't need to run through all the different working groups, so from now on we are going to largely manage the queue by exception, if folks want to come forward and talk about things. So, first order of things: we've had some changes in working groups and some talk of new work.
B: Since we last met at IETF 114, we had a virtual BoF between 114 and 115, originally titled JSON Web Proofs. We had an in-person BoF at 114, and then we had a virtual BoF, and coming out of that virtual BoF there was consensus to move forward with chartering to reopen the JOSE working group. That is in flight; it's going to be looked at by the IESG in early December.
B: A thing to call out is what's at the bottom. The IETF overall introduced a new way to do BoFs, or brought it back, I'm told, for folks that have participated a long time: this notion of virtual interim BoFs. That is, you don't have to wait for the face-to-face meeting to have a BoF. We've now used that successfully three times: TIGRESS, which is in the ART area with security considerations;
B
We
used
it
to
to
move
skit
forward
and
we
also
used
it
to
move
forward
to
what
what's
getting
chartered
now
we'll
see
on
the
community
feedback
in
Jose
and
so
kind
of
bottom
line.
It
looks
like
it's
cutting
out
at
least
a
half
meeting
cycle
for
work
to
get
new
work
kind
of
Chartered,
and
it
looks
like
a
successful
approach
and
I
see
us
continuing
to
to
use
that.
B
So
this
is
this
is
the
compression
of
what
normally
would
be
20
slides.
Thank
you
kind
of
Paul.
A
lot
of
working
groups
met
him
this
last
week,
I
believe
almost
all
of
them
sent
a
summary
to
the
sagless.
That's
where
all
the
details
are.
We
invite
any
working
group
chairs
or
participants
to
come
up
to
the
mic
and
now
tell
us
about
Hot,
Topics
or
anything.
They
think
Sac
should
know
about
those
previous
meetings
this
week.
B: All right, well, perfect. I'm sure we're going to read everything that's on the mailing list. Okay, so moving on to some of the administrative updates that the ADs would like to share with you. First we want to highlight that one of the ways in which working groups are successful is really the leadership of the chairs facilitating the technical work. So, to highlight some changes: we want to thank Mohit and Richard for chairing SECDISPATCH.
B
They
have
rotated
out
if
you're
on
the
SEC
dispatch
kind
of
mailing
list.
One
of
the
things
that
we
did
is
you
know,
really
rethought
what
to
do
with
standing
working
groups
so
with
working
groups
that
don't
have
an
end
to
their
work
like
SEC
dispatch.
When
a
working
group
chairs
signs
up
the
exit
plan
is,
unless
we
say
you
know,
let's
rethink
this
or
they
tell
us
they're
out,
they
will
be
working
group
chairs
kind
of
indefinitely.
B
We
wanted
to
set
up
a
rotation
process,
and
so
we've
roughly
created
a
policy
insect
dispatch
that
it's
about
a
four-year
term.
Unless
you
know,
we
think
there's
an
issue
or
a
day,
or
they
would
like
to
step
down,
and
so
Richard
has
been
in
in
that
role
for
four
and
a
half
four
and
a
half
years
and
really
launched
the
process
for
us,
rifat
is
stepping
in
for
him.
Kathleen
is,
is
three
years
into
the
term.
The
next
I'm
sorry
this
coming
summer
in
iatf17.
B
She
is
going
to
rotate
out
and
we
would
expect
that
the
Ed's
kind
of
at
the
time
Paul
and
whoever
is
the
ad
at
the
time-
will
kind
of
make
that
make
that
selection.
So
this
is
kind
of
this
idea
of
withstanding
working
groups.
We
should
rethink
what
happened
friends
and
then
we
spun
up
the
new
skit
working
group
and
so
kind
of
thanks
to
John
and
honest
for
for
for
stepping
in
to
to
lead
us.
B: We didn't create any non-working-group mailing lists. Our queue of AD-sponsored documents is getting a little bit deeper. This is real time as of at least SECDISPATCH yesterday, where we added the draft "legit SP CAC" for AD sponsorship, and then we have not yet closed the action item from SECDISPATCH at 114 to land draft-eastlake-fnv, but we are going to work on it.
B
Are
there
any
questions
about
the
ad
sponsor
drafts
we
are
holding?
We
especially
Point
them
out
here
because
of
for
folks
a
little
less
familiar
with
the
process.
80
sponsorship
means
it's
not
going
through
a
working
group,
so
the
review
process
is
the
formal
ITF
last
call
and
whenever
we
as
ads
trigger.
So
these
documents
are
here
primarily,
so
you
have
visibility
assat
and
we
would
beg
for
your
reviews
on
those
documents
to
make
sure
that
they
leave
the
ITF
and
is
the
best
quality
as
we
can
get
them
foreign.
B: One unique thing that we did, that isn't part of this structural update you've seen a number of times, is reconvene what we call the PQC next steps, or additional next steps, side meeting on Monday. The thinking there was: we saw a tremendous amount of energy on the PQC mailing list that we spun up a little bit before IETF 114.
B: Additionally, we had a SECDISPATCH result at 114 to pick up what we considered a transition support document around PQC, and after thinking about the energy on the list and what we wanted to do with that document, we shopped around a charter for what we're calling the PQC transition support working group. This is a working group that is not intended to change existing protocols.
B: It is a forum in which to discuss design and transition choices that are relevant across the security working groups, and probably across the IETF, and all it's permitted to do is facilitate that discussion. Where a discussion is probably reusable, or it makes sense for it to be durable for archival purposes, that working group would have the ability to publish informational documents.
B: It appeared we had consensus for that from the mailing list, and there was really strong support for it from the side meeting, so at this point we are likely to proceed with chartering. The big blocker, as with everything in the IETF, is that one of the most important things is choosing a name, which we have not, and we've got to dial in the deliverables. So we welcome additional feedback and polish; please put that on the PQC list.
C: You just said something that you didn't say in the BoF, or the side meeting, that I'm a little bit concerned about. You said informational documents, whereas some of the discussion was about best current practices.
B: And we're not making changes to protocols. Please check the charter text; I believe we do say that, but if not, let's fix it. We asked two other questions, and this is a little bit of: we see energy, and we've been coming back to you in SAAG, double-checking where we want to be as a community. We were trying to check where we think we have gaps. There is a bunch of working groups that are active doing PQC kinds of things, and we wanted to check.
B: We heard Kerberos, and we heard XML signature. If you think there are more, please let us know. If you go to that GitHub site, which we're going to migrate to the IETF wiki, there's a place where we're capturing that list. And then the third question we asked was about directorates: we often rely on the IETF directorates to help us do document reviews for specialized things. By specialized I mean we have the security directorate, but we also have niche things like YANG.
B
So
if
you
have
a
Yang
document,
you
can
go
to
the
Yang
doctors
to
help.
You
specifically
do
that
the
question
was
asked:
do
we
think
we
need
something
for
just
post
one
cryptography
reviews,
so
a
pqc
directorate,
the
feedback
from
the
side
beating
was
no.
It
is
we're
not
ready,
and
now
is
not
the
time
to
do
that.
If
you
want
kind
of
more
information,
there
are
some
references
there.
B
Okay
other
highlights
we
try
to
Us
in
the
security
directory,
try
to
keep
a
running
list
of
things
that
we
are
likely
to
discuss
on
at
the
isg
telechat,
and
this
is
a
little
bit
of
a
service
to
the
rest
of
the
ietf
of
hey,
write,
good
security
considerations,
but
here's
another
list
to
double
check,
because
even
after
you
know
having
all
the
guidance
we
have
about
writing
security
considerations,
Paul
and
I
often
find
is
find,
find
common
themes
and
so
that
URL
up
there
is
what
what
is
the
common
themes.
B
We
see
the
sector
review,
putting
in
kind
of
feedback
hey.
You
need
to
think
a
little
bit
more
about
that
or
issues
that
Paul
and
I
commonly
discuss
on
this
is
kind
of
a
running
list.
This
is
you
know
something
that
that
we
started
it
could
have
a
number
of
years
ago.
I
just
highlight
that
if
you
have
feedback
for
other
things,
perhaps
you
are
doing
if
you
are
reviewing
documents
that
have
been
working
in
the
working
groups.
B
One
thing
we
just
added
added
to
that
list
because
we
are
seeing
it
in
a
number
of
recent
and
kind
of
child
chats
is
that
working
that
documents
are
coming
to
the
telechat
review
and
they
are
saying
we
are
Fielding
this
in
a
limited
domain
and
beyond
that
there
isn't
a
lot
of
specification
about
what
are
the
security
properties
of
that
domain.
How
are
you
bounding
that
domain
and
how
are
you
controlling
things
are
in
there
and
so
that's
an
example
of
a
flavor
of
kind
of
common
themes
that
we
would
put
on
that.
B
If
you
want
to
get
a
sense
for
where
we
are
with
your
documents.
Those
are
the
pointers
to
the
queue
there
is
fairly
detailed
kind
of
inside.
You
know
when
we
take
the
document
for
isg
processing,
where
we
are
kind
of
with
that
process
and
how
far
we
have
Advanced
it.
Just
a
reminder:
we
have
a
sec
area,
SEC
area
kind
of
Wiki
The
igf
is
migrating
its
Wiki
technology
from
track
to
Wiki
JS,
so
that
URL
is
where
we
are
right
now.
B
I
would
expect
in
a
couple
of
weeks
that
URL
will
be
will
be
something
different
after
that
migration
occurs,
we'll
make
sure
we
set
on
your
pointer
around,
and
we
are
in
that
in
that
time
of
year,
in
the
ITF,
where
we
have
spun
up
the
nomcom
to
choose
the
new
leadership
for
the
ietf
as
as
every
year,
there
are
kind
of
candidates
for
the
isg,
so
with
the
security
ad
all
the
rest
of
the
aeds
for
the
IAB
and
kind
of
the
other
key
leadership
positions,
the
nomcom
really
relies
on
the
community
for
feedback
on
how
to
make
their
choices.
B
So
please,
if
you
want
to
have
a
say
in
how
leadership
is
chosen
at
the
ietf,
go
to
that
URL
and
provide
feedback
on
those
candidates.
B
In
close,
in
close,
we
can't
emphasize
enough
how
helpful
the
security
director
is
to
the
ietf
to
help
the
working
groups.
Polish,
the
documents
find
issues
and
really
get
us
to
the
point
where,
when
we
go
to
RFC,
the
documents
aren't
as
good
as
they
can
be
and
with
as
well-rounded
of
any
kind
of
an
assessment.
So
thank
you
to
the
security
directorate.
B
They
made
kind
of
this
possible,
and
these
are
all
the
folks
that
that
helped
us
review
documents
across
all
of
the
working
groups
for
everything
that
went
to
ietf
last
call,
since
we
got
together
and
also
a
big
shout
out
to
tuchero,
who
is
our
secretary
for
the
sector.
Reviews
and
literally
every
week
to
every
kind
of
two
weeks
is
making
the
assignments
of
those
documents
and
What
needs
to
meet.
If
you
are
interested
in
participating
in
security
directorate,
please
email,
Paul
and
may,
and
then
we
can
have
a
comment.
B: The queue is clear, so in that case we're going to switch over to Margaret, who's going to tell us about eduroam and some adoption of IETF technologies.
E: So I'm going to talk about an implementation report on the use of EAP and RADIUS for US eduroam. eduroam is a service that's been around for a long time, 20-plus years, and US eduroam has been around for a long time. We did a new implementation of US eduroam in AWS, and it was new to us to operate and deploy this service, so we're going to offer some observations we had about those technologies while we were doing that. So, next slide.
E
We
worked
with
internet
too,
who
operates
the
U.S
Edge
Rome
service
to
implement
and
deploy
a
new
Amazon
web
services
based
U.S
Edge,
Rome
infrastructure.
It
went
live
in
November
of
2021,
so
we've
been
operating
it
monitoring
and
maintaining
it
for
about
a
year.
It's
free
radius
based
it's
geographically
redundant
and
then
within
each
geographic
region
it
is
load
balanced
and
I
will
talk
more
about
that.
The
goal
of
this
presentation
is
to
share
our
experiences.
E
Implementing
and
operating
this
service,
and
much
of
this
presentation
focuses
on
Need
for
improvement,
but
I
want
to
emphasize
that
a
lot
of
those
things
are
small
things
and
actually
the
overall
experience
has
been
very
positive
and
highly
successful.
So
it's
up
and
running
and
it's
stable
and
it's
working
well,
but
there
are
a
lot
of
small
things
that
we
think
either
could
be
modernized
or
added.
That
would
make
it
a
lot
a
lot
easier
to
deal
with
so
next
slide.
E
So
this
is
a
little
outline
of
the
talk.
The
first
thing
I'm
going
to
do
is
a
quick
review
of
what
Edge
room
is
just
because
there
may
be
people
in
the
room
who
don't
know
what
I'm
talking
about
at
all
and
try
to
bring
them
up
to
speed
and
then
we'll
do
an
overview
of
the
US
said
your
own
deployment,
including
how
we
deployed
the
infrastructure
and
some
facts
and
figures
about
it,
then
talk
about
how
requests
are
routed
in
entero
and
some
things
about
how
that
works.
E
That
have
been
a
struggle.
Some
security
challenges
that
we've
run
into,
and
also
some
operational
challenges
that
we've
run
into
so
next
slide
at
your
room
is
a
widely
used
international
roaming
service
for
Higher,
Education
and
Research.
As
I
said,
it's
Josh
sitting
in
the
front
row
could
probably
tell
us
exactly
how
long
it's
been
out
because
I
believe
he
did
the
first
deployment
of
it.
But
it's
been
out
for
a
long
time
decades.
E: Sometimes it can provide more privileges or better performance than public guest access, but even when it doesn't, the fact that it's free and seamless is in itself a better service. There's a home institution, which is the identity provider for the user.
E
So
a
student
would
have
a
home
institution
at
the
University
they
attend
and
they
actually
that
University
will
have
verified
the
user's
identity
and
provided
credentials
to
them
to
use
Edge
room,
then
that
student
can
go
to
other
locations
at
your
own
Service
locations,
relying
parties
right
and
visit
those
locations
and
get
access
to
the
internet.
E
With
that
location,
proxying
an
authentication
request
into
the
central
infrastructure
which
goes
back
to
the
home
institution
and
authenticates,
the
user,
millions
of
students
from
thousands
of
Home
institutions,
access,
Edge,
Aroma,
tens
of
thousands
of
service
locations
throughout
the
world.
It's
a
very
large
scale
service,
and
if
you
want
to
hear
more
about
edurome
itself,
you
can
go
to
that
URL
there
or
talk
to
class,
because
I
think
he
he
works
at
that
organization
and
knows
everything
about
the
central
Ledger
room
service.
E
But
if
you
go
to
the
next
slide,
I
have
a
little
picture
of
how
it
works
here.
If
you
look
on
the
right
hand,
side
well,
let's
start
with
there's
a
user
and
the
user
is
John
at
institution.home.
Okay.
He
once
was
at
his
home
over
on
the
left
and
he
got
identified.
There
got
issued
a
student
ID.
He
got
issued
credentials.
He
then
traveled
to
another
institution
institution.visit
where
he
wanted
to
get
on
the
network.
E: There's a bunch of underlying technologies. I could have made a much longer list of RFCs here, but these are the core ones: RADIUS; EAP; how you run EAP over RADIUS; and then the various methods that you can use for EAP authentication. Then also the 802.11 and 802.1X specs from the IEEE are among the core technologies that are used in eduroam. Next slide.
E
There's
a
proxy
hierarchy
in
engine
room
that
lets
it
run
all
over
the
world
and
the
way
that
this
proxy
hierarchy
works
is
that
the
individual
edurome
institutions
have
their
own
radius.
Servers
could
be
the
same
radius
servers
that
they
use
to
have
people
log
in
at
other
ssids
or
could
be
dedicated
to
eduro.
That's
totally
up
to
them.
E
So
now
we're
going
to
talk
about
the
US
said
your
own
deployment.
If,
on
the
previous
slide,
I
had
thought
to
mention
it.
I
could
have
said.
One
of
those
National
servers
is
the
U.S
national
server
and
that
server
is
run
or
that
proxy.
The
U.S
national
proxy
is
run
by
internet
2
in
common,
and
we
work
with
them
to
to
run
that
service.
E
The
U.S
Edge
robe
service
has
greater
than
a
thousand
home
institutions
or
idps
greater
than
3
000
service
access
points
or
RPS,
and
somewhere
in
the
area
of
2
million
eligible
students
and
staff
to
use
the
service.
We
can't
say
how
many
people
are
actually
using
it,
because
some
people
only
use
it
locally.
We,
we
can
only
see
from
running
the
the
national
proxy,
the
people
who
roam
so
we
we
don't
know
exactly
how
many
people
out
of
those
eligible
students
are
using
it.
There's
a
URL
here.
E
If
you
want
to
see
more
about
U.S,
Edge
Rome,
but
I,
don't
want
to
go
into
a
huge
amount
of
like
it's
a
edge
room
took
tutorial,
so
that's
basically
U.S
Central
and
next
slide.
E
The
way
we're
currently
deploying
the
architecture,
as
I
said
it's
in
Amazon
web
services.
This
picture
has
a
lot
of
details
on
it,
only
some
of
which
really
matter.
E
First
off,
this
is
half
of
our
deployment.
This
is
the
East
Coast
deployment.
There's
a
mirror
image
on
the
West
Coast.
We
have
a
server
in
the
internet2
data
center
that
the
traffic
comes
into
at
an
internet
to
address.
E
It
is
there's
a
sort
of
traffic
engineering
activity
that
happens
there,
where
the
the
software,
the
traffic,
isn't
added
and
sent
over
VPN
tunnel
to
AWS,
where
we
run
the
actual
VPN
a
router
to
be
the
endpoint
of
the
VPN
and
then
the
actual
radius
proxies.
This
picture
shows
a
primary
and
a
backup
because
it
hasn't
been
updated
yet,
but
actually,
at
this
point,
we're
load
sharing
between
the
two
servers
running
any
on
each
coast
and
as
I
said,
it's
then
duplicated.
E: We see somewhere greater than 12,000 unique authentication requests coming through per minute. An authentication request involves several packet exchanges, so there isn't any sort of one-to-one mapping between messages and authentication requests. Of the authentication requests that complete, about two-thirds, 65 percent, of the authentication requests made in the US get an Access-Accept, and about 35 percent get an Access-Reject.
E
This
is
of
authentication
requests
that
complete
without
some
sort
of
internal
error,
about
18
of
the
requests
we
receive
are
rejected
or
discarded
by
our
proxy
and
some
of
the
biggest
reasons
that
that
happens
are
request,
looping,
where
basically
proxy
looping,
where
basically
we're
receiving
back
a
request
that
we've
already
touched
missing
or
malformed,
username
or
Realm
unknown
client
just
isn't
registered
with
us.
Or
is
it
known
to
us
an
invalid
authenticator
or
a
malformed
message?
E
Those
are
the
errors
that
we
most
often
C
that
cause
us
to
reject
or
discard
any
incoming
request.
Next
slide.
E
The
eat
method,
distribution,
I
thought
this
was
interesting
when
I
first
saw
it,
we
see
about
75
percent
peep,
fifteen
percent
epls
and
12
ttls.
This
is
from
a
report
in
q1
of
2022..
We
have
I,
haven't
like
gone
through
and
tried
to
look
at
it
again
to
see
if
we
have
any
Trends,
but
but
that's
where
we
were
at
that
time.
E
There
are
two
sets
of
top
level
Edge
room
operators,
one
in
Europe
and
one
in
Asia
and
there's
a
national
roaming
operator
in
each
country,
who's
responsible
for
enrolling
the
edurome
institutions
in
that
country
and
also
proxying
between
them,
so
only
requests
that
go
from
one
country
to
another:
go
through
those
top
level,
Edge
Rome
servers
within
a
country
they
go
through
the
national
ledgerum
server
and
then
locally
within
an
institution
they
would
just
go
to
the
institution
server.
E
Each
of
the
nros
provides
a
Json
formatted
list
of
their
enrolled
institutions
to
their
top
level
provider.
For
example,
U.S
federal
provides
a
list
to
jayant
in
Europe.
E
If
that
sounds
a
lot
like
a
1980s
internet
host
file,
that's
would
be
because
it
is
a
lot
like
a
1980s
internet
host
file.
Okay,
we
say
we
know
these
hosts
and
basically
the
whole
thing
is
being
routed
by
files
that
are
being
exchanged
once
a
day
saying
the
all
these
people
are
under
me.
Please
send
their
traffic
to
me.
If
so,
if
an
institution,
radio
server
receives
a
non-local
request,
it
goes
to
the
nro
radius
proxy.
The
nro
radius
proxy
does
not
have
a
matching
realm
registered.
E
It
forwards
it
up
to
a
top
level
server.
The
way
our
servers
work.
If
the
country
code
is
included
in
the
Rel
name,
we
forward
the
request.
Accordingly,
we
send
Asian
countries
to
Asia
and
European
countries
to
Europe,
but
what
we
do
if
the
country
code
is
not
in
Europe
for
Asia
or
if
there's
no
country
code,
it's
just
example.com
is
that
we
do
some
round
robin
between
the
top
level
servers.
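The forwarding decision just described can be sketched roughly as follows. This is a hypothetical illustration, not the real eduroam or FreeRADIUS configuration; the country-code lists and server names are made up, and a real deployment keys off the registered realm tables exchanged in those daily files.

```python
import itertools

# Assumption: partial, illustrative country-code lists
EUROPEAN_CCS = {"uk", "de", "nl", "fr"}
ASIAN_CCS = {"jp", "au", "cn", "kr"}

# The two top-level operator pools; names are made up
TOP_LEVEL = ["tld-europe", "tld-asia"]
_round_robin = itertools.cycle(TOP_LEVEL)

def next_hop(realm, local_realms):
    """Pick the next RADIUS proxy for a request, as an NRO proxy might."""
    if realm in local_realms:
        return "local"                # realm registered in our federation
    cc = realm.rsplit(".", 1)[-1]     # last label of the realm name
    if cc in EUROPEAN_CCS:
        return "tld-europe"           # country code says Europe
    if cc in ASIAN_CCS:
        return "tld-asia"             # country code says Asia
    return next(_round_robin)         # e.g. example.com: just alternate
```

For a realm like example.com the proxy can only guess, which is exactly how the inefficient path in the next example arises.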
E: There are multiple servers in each of Europe and Asia, and if the top-level operator doesn't have a registration for the realm in the request, they need to forward it over to the other top-level operator, so basically things can be forwarded several times. This isn't a completely flat structure. So if you look at the next slide,
E: And inefficient; this is about the most inefficient you could be. A real example is that a user from example.com, which happens to be a Canadian institution in my example, visits a US eduroam service location and attempts to join eduroam. The service location's RADIUS server, that's the first RADIUS server that touches it, determines that example.com is not in the US, it's not local to that institution, and it forwards the request to one of the US eduroam proxies.
E: The US eduroam proxy, the second RADIUS server to touch the packet, determines that example.com is not a realm registered in the US, finds no country code, and forwards the request to an Asian top-level RADIUS proxy, just by round robin. The Asian proxy, the third proxy to touch this packet, this request, this message, determines that the IdP realm example.com is not registered in Asia, so it forwards the request to a European top-level server.
E: The European top-level server determines that example.com is registered by Canada, who I am presuming registers with Europe, but I could be wrong, and forwards the request to Canada's RADIUS proxy. The Canadian proxy, that being the fifth proxy, forwards the request to one of example.com's RADIUS servers, and that is the sixth RADIUS server to actually see this request. So it can go through six servers, five hops, before it gets to the ultimate destination, and that could be a normal, successful authentication exchange.
B: I'll forward your slides, Margaret. Can you retry your video? We just lost your video feed.
E: So you could say, well, six hops, or, you know, five hops through six servers, is not that much, but it's actually the case that a successful eduroam authentication also requires several request/response exchanges. I've seen them from three to seven, although I don't think we get three using the methods I just listed, and it depends on the EAP method, the size of the credentials, and a few other things.
E
But
we
see
several
usually
message,
exchanges
so
request
resp,
like
like
access,
request,
access,
challenge,
access,
request,
access
challenge.
You
know,
request
response
exchanges
before
we
get
to
an
answer
in
a
typical
radius
request,
so
we're
seeing
a
multiplicative
value
there
right
we're
going
through
six
hops.
We've
got,
let's
say
five
messages,
and
every
time
a
message
goes
through
a
hop
there's,
a
cryptographic
message,
authentication
performed
so
more
efficient
routing
than
this
would
be
highly
desirable.
A
lot
of
times.
People
aren't
waiting
to
get
on
the
network.
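The multiplicative cost just described can be put in back-of-the-envelope numbers. The figures below are taken from the talk's worst-case example (five hops through six servers, roughly five request/response round trips); treat them as an illustration, not measured data.

```python
# Worst-case path from the talk: five forwarding hops through six servers
hops = 5
# A typical EAP conversation: ~3-7 request/response pairs; take five
round_trips = 5
# Each round trip is one request plus one response on the wire
msgs_per_trip = 2

# Every message crosses every hop, and each hop performs an MD5-based
# message authentication on it, so the per-hop operations multiply out.
per_hop_auth_operations = hops * round_trips * msgs_per_trip
```

So a single roaming login in this scenario can involve on the order of fifty per-hop message handling and authentication steps before the user gets on the network.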
E: But if you are waiting to get on the network, the delay from using RADIUS can be quite noticeable, and it's hard to know how to get more efficient routing, because we don't do any sort of dynamic routing, and we don't even have the equivalent of ICMP redirects. So even if Asia knew that example.com should have gone to Europe, there's no way for them to tell us, and there's no standard mechanism for loop detection or prevention.
E: The other thing we found difficult is that there are very few testing or debugging tools for this type of multi-level proxy fabric. When a remote EAP/RADIUS request is dropped or rejected, it can be very difficult to figure out why it didn't work, or even to figure out which server dropped or rejected it. A lot of things are silently dropped, and Access-Reject messages don't typically contain a useful error code.
E
Even
if
you
do
get
an
access,
reject
method,
back
a
message
back
so
you're
you're,
often
trying
to
figure
out
what
went
wrong.
There's
a
status
server
request
in
radius,
but
that
only
goes
one
hop.
You
can
only
send
it
to
that
that
first
server
that
you
talk
to
that
you
have
a
shared
secret
with.
So
you
can
query
the
health
of
that
server
of
that
proxy.
There's
no
way
to
query
the
health
of
the
more
remote
proxy.
That's
multiple
hops
away
from
you!
E: So there's no ping that would go across multiple hops, and there's also no way to trace the path that a request would take, to see if the path is valid or looping or whatever. There's no traceroute-like functionality that we can use to debug these problems, which ends up with people literally calling each other on the phone and asking them to check their logs. That's how you debug a multi-hop RADIUS routing problem today. So, next slide.
E: There are also some security challenges. eduroam is a security service, and it's based on RADIUS, and RADIUS is good and works well, but its message protection is pretty antiquated by today's standards, and this comes up sometimes when you talk to people about eduroam. It consists of pairwise shared secrets and an MD5 hash for message protection, and the shared secrets are often typed by administrators into a UI or a plain text file.
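The MD5-based message protection being referred to is the RFC 2865 Response Authenticator: an MD5 digest over the packet header, the request's random authenticator, the attributes, and the shared secret. A minimal sketch of that computation, with illustrative values only:

```python
import hashlib
import struct

def response_authenticator(code, ident, attributes, request_auth, secret):
    """RFC 2865: MD5(Code + Identifier + Length
                     + Request Authenticator + Attributes + Secret)."""
    length = 20 + len(attributes)            # 20-byte RADIUS header + attrs
    header = struct.pack("!BBH", code, ident, length)
    return hashlib.md5(header + request_auth + attributes + secret).digest()

# Illustrative call: code 2 is Access-Accept; the 16-byte request
# authenticator and the secret here are placeholders, not real values.
auth = response_authenticator(2, 1, b"", b"\x00" * 16, b"s3cret")
```

The whole integrity of the hop rests on MD5 and on the strength of that typed-in shared secret, which is the antiquation being pointed out.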
E: There's no consistently enforced minimum length for those keys, nor is there any requirement for cryptographic generation of those shared secrets, and there's no algorithm agility for the MD5 hash. I think this came up in the BoF on Monday, although when I first started working on these slides and talking to you, the BoF wasn't even a glimmer in Alan's eye, as far as I know. So, next slide. There's also a trade-off that our subscribing institutions run into, between privacy and secondary credentials.
E
Now,
there's
no
reason
in
the
world
that
there's
an
inherent
trade-off
between
privacy
and
secondary
credentials,
but
there
is
in
Eep
over
radius
and
the
reason
it
has
to
do
more
with
what's
available
for
methods
than
it
does,
with
any
technological
reason
why
it
has
to
be
this
way.
E: User privacy is absolutely essential in eduroam, because the risk is exposing the physical location of an end user. You're not going to expose the fact that they went to this website; you're going to expose the fact that they're in a particular room at this time. Some of the people in the SAAG room are on eduroam.
B: You can't see the signaling in the room: I was raising my hand, as were a couple of others in the room, on that wireless network.
E: So, you know, you wouldn't want possible attackers to be able to figure that out, possibly not even the IT people at your home institution, but there are issues with that, which I'll talk about. And then the other desirable thing that you want in eduroam is the use of secondary credentials, such as certificates or maybe derived credentials. I'm not talking about multi-factor.
E
You
might
want
that
too,
but
that's
a
whole
nother
question
that
maybe
I
should
have
put
on
these
slides,
but
I
did
not,
but
secondary
credentials
such
as
certificates
can
be
valuable
to
allow
passwordless
Authentication
or
to
protect,
protect
primary
credentials
from
being
used
in
an
environment
where
the
where
they
might
be
exposed
out
on
the
internet.
So
so
you
know,
your
user
may
have
primary
credentials.
E
They
use
for
logging
into
University
services,
but
you
don't
want
them
to
use
that
same
credential
to
use
Edge
your
own,
because
they're
going
to
go
out
and
use
that
credential
and
Starbucks
or
whatever
so
people
want
both
of
those
things,
but
there's
a
trade-off.
Peep
and
ttls
are
pretty
good
on
the
Privacy
side.
You
well
as
long
as
you
configure
it
right.
You
can
use
an
anonymous
username
in
the
outer
method
so
that
the
username
portion
is
only
ever
transmitted
over
an
encrypted
tunnel.
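The outer/inner identity split just described can be illustrated with a tiny sketch: the cleartext outer identity replaces the username but keeps the realm, since the realm is what the proxies route on, while the real username only travels inside the TLS tunnel. The identity values are made-up examples.

```python
def outer_identity(inner_identity):
    """Build the anonymous outer identity for a PEAP/TTLS exchange:
    drop the username, keep the realm so proxies can still route."""
    realm = inner_identity.split("@", 1)[1]
    return "anonymous@" + realm
```

So a supplicant configured this way sends only anonymous@institution.home across the visited network, and the proxies along the path never see who is authenticating.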
E: One option is that you could support TLS 1.3 in EAP-TLS, so that the certificate would be encrypted, or have wider use of RadSec to encrypt the whole session, and those would meet both requirements. But those are also both PKI solutions, and there are some people who don't want PKI solutions, which is why we see PEAP and TTLS combined being a much larger deployment, at least in the US, than EAP-TLS. But I don't know of any combination of currently available methods that would give you both of those things in a non-PKI solution.
E
So that's where we are. We don't actually have any of those things fully today: RadSec is Informational, and EAP-TLS with TLS 1.3, I think there's been some talk about that, but it's not in implementations, and I don't even think we have a draft that would offer a non-PKI solution that would let you do both of these things at once. So, next slide: we ran into some operational challenges as well.
E
Also, you don't want the response to end up looping as well. We have a way that we notice looping, with a vendor attribute; other people have other ways of noticing looping, usually requiring a vendor-specific attribute, but there's no standard method for detecting looping, or for letting another server know that anything is looping. Also, separate issue: many clients will retry a failed connection request immediately with the same credentials, so they get back an Access-Reject and they just try again.
E
They had a comma-edu at the end of their realm instead of a dot-edu, and their supplicant just tried to authenticate as fast as it could, 37,000 times every five minutes, until somebody went and blocked that particular user. I don't know why that's the case, and I would like to figure out if there's something better we could say that would get them to back off or stop.
E
We get a lot of people who have long-expired or obsolete credentials still configured on their devices. You might say, why a 35% error rate? Those aren't proxying errors; those are the IdP saying no, those credentials are not valid, and most of the time that's because they're obsolete or expired, and there's no way for the IdP to signal to the device that those credentials need to be invalidated.
E
Supplicants will try to use obviously bogus credentials: realms with comma-edu in them, things with no realm, things with whitespace or special characters in the username or the realm, missing realms, expired certs. Supplicants aren't doing anything to try to decide whether this is a reasonable request before sending it, which is frustrating, because we get a lot of those as well.
E
That, I think, was third down on the list of reasons why we have to throw away requests. And then the last one I have, and I've tried to research this to some extent, is that we receive many requests per second with realms of the form wlan.mnc&lt;3 digits&gt;.mcc&lt;3 digits&gt;.3gppnetwork.org, and as far as I can tell we are somehow being mistaken for a 3GPP carrier network, but I can't figure out how to make it stop. So those are my random thoughts.
E
Some of them are in the RADEXTRA BoF and the RADEXT re-chartering effort, and then also Josh talked about EAP work in EMU. But some of the things that we're struggling with aren't actually being worked on anywhere, and I would be interested, at least, in trying to contribute if somebody had a good way to solve some of those problems. Next slide.
E
So, these are the three people who talked about this presentation as it was under development. It's just our own observations and opinions. It doesn't represent the views of Internet2, certainly not of eduroam.org, who none of us work for, or any other company or organization. So there you have it: those are thoughts on deploying EAP and RADIUS in a great big, multi-level proxy application.
A
F
Yeah, hi, Jan-Frederik, eduroam enthusiast and national roaming operator in Germany. Margaret, you talked about load balancing and failover mechanisms. In Germany we had tried that once, but failed spectacularly, because of course there are the two layers, RADIUS and EAP, and if you don't recognize the EAP layer and you just distribute the RADIUS, then you end up with failed authentication requests. How do you deal with that?
E
We are just load balancing based on IP address, so a particular, you know, SP will...
E
Yeah, a particular SP: all their requests will go to the same internal server, which does mean we don't get perfect load balancing, because some are busier than others, and if we're unlucky that day we may get two-thirds of the traffic going to one and one-third of the traffic going to the other until some event occurs. Both of those machines are constantly monitored and restarted.
E
If something goes wrong with them, or sometimes they were rebooted for operational reasons, updates, stuff like that. So we don't get perfect load balancing, but we do it all on IP address, rather than trying to pull anything out of the RADIUS or EAP, or even the ports. I should mention, for people who aren't aware of what he's talking about, that there are several exchanges for each authentication.
E
Those exchanges have to go through the same proxy, because the proxy has state. So basically we're doing just a much more coarse-grained load balancing, in order to avoid those problems.
F
E
A larger portion, especially on the East Coast, because more people use the East Coast servers than the West Coast server, for some reason. Having all of the traffic sent to the East Coast IP address going to one AWS instance meant we were running into non-protocol-related problems, load problems, and what we're doing now is enough to actually make the load low enough that we're not running into those problems.
G
So, as Margaret said, we've determined that the source IP address is sufficient for us. Okay.
H
So what I was going to come up here and say is that, if we're looking at things where the IETF could actually help in this space, there's another, even more pressing problem, and that's the lack of onboarding and provisioning mechanisms for endpoints in large-scale deployments of Wi-Fi. Eliot actually brought this up in a couple of places, including in SCIM of all places, because he has this idea of using SCIM as an onboarding protocol for endpoints, which might seem a little bit odd, but there are...
H
There are reasons for that that he's talked about. But in any case, today, if you're deploying Wi-Fi at scale and you're not doing mobile device management, doing something cross-domain, cross-everything, it is a challenge, because platforms do not align here. They don't share common APIs, common models, or even anywhere remotely common capabilities, and that's a problem for logical rollout.
E
Yes. I guess I didn't mention it because, you know, I don't play product managers on TV or anything. But if you talk to our subscribers, the biggest problem that they have is how to onboard users to eduroam, and if we could come up with a solution that would actually make that easy, that would help with deployment. It might help RADIUS be used in other large-scale deployments where that might be an even bigger issue.
E
I might have mentioned on a slide, but didn't say, that in the US at least we're moving to K-12, and libraries, museums, etc., not research-institution libraries, but just regular libraries, and that also has challenges in that area, because those places have even less IT support, help-desk support, etc., than a small university.
C
I haven't been inside one in a long time, but boy, would I encourage people who are thinking about where the hard problems are that touch end users, as compared to the network, to work on this. If this comes to them, there will be an expectation, because they already do free internet service through the local internet service provider and such. Helping them out would be a huge thing.
C
E
This came up a lot with children out of school for COVID, at least in the U.S., as well: they would end up outside, like at a Taco Bell or something, trying to get on the internet. So there's a big desire to get all those school children on eduroam and to have a lot of eduroam service points, so that if something like that happened again, they'd have somewhere to go to get free internet access.
A
Okay, so there are no further questions. Thank you very much, Margaret, and we'll go on to our next item.
B
So we're switching to the next topic, which we hope will be a little more interactive, because we want to get feedback from the group. The setup is that we in the security area have relied on outside help to do formal verification of security properties across a number of working groups, frankly to a lot of great success.
B
Think about the TRON workshop, which helped us bring in academics doing formal verification for TLS 1.3; think about LAKE, for the EDHOC protocol; think about components of IPsec; think about MLS; I'm sure in OAuth, in GNAP. I'm sure I've forgotten a couple of them.
B
We've had really great success with one of two things. First, it has helped us find deep vulnerabilities that we would not have otherwise found. It has also helped us the other way, which is giving us a little bit more confidence that what we're about to publish for broad-scale adoption and operation has the properties
B
we hope it has, that we wouldn't have otherwise confirmed even with large-scale interop. Something that happened after IETF 114 was really the recognition by the ADs that this practice, while prevalent in SEC for big things, is actually not something that occurs all that often outside of the security area.
B
So really the question is: okay, it's not unusual for areas to have different practices, but we have a really nice method to give us a little bit more security assurance, and when would we want to apply it fully, recognizing that we don't have a lot of that expertise here in the IETF? So how do we bridge to the communities that can do that formal verification?
B
The other motivation for this is that there's an activity in flight in the IRTF. Very much like how we have a charter and a process to create new working groups, there is a proposal in flight in the IRTF to spin up a group, kind of more broadly, that helps us think about how to formally specify IETF protocols, and one sub-component of that would be specification that would allow us to do this class of formal verification, though, you know, that is still very much future stuff.
B
Justin Richer is here because he's going to be talking to us about HTTP signatures, which is actually in HTTPbis, and which is actually the document that started this conversation: during working group last call there was a question of whether this was formally verified, is that a requirement, and we said, well, in fact it is done quite commonly, but it is not a requirement. And we really wanted to get some feedback at the mic.
B
We welcome opinions and feedback. I can tell you, we asked the same question at secdispatch, if you're thinking about whether you have an opinion: there is the recognition that this is helpful, concern about how we get the help that's needed, and the caution that formal verification is not a magic bullet that will find everything in anything.
D
B
You've got your video? Right, sorry.
J
F
J
I'm a big fan of formal methods; I did formal methods for a doctorate once upon a time. The thing that does worry me a little bit is that, when we're using formal methods in the cryptographic-algorithm space, we're actually using one set of tools, and that set of tools is not necessarily the set of tools that will be relevant to the protocol work that we do. In some ways it's a more advanced set of tools, but we're looking at: can somebody break this? Not:
J
how does this go wrong? And so it's slightly different. I do have a bunch of tools that I use that are not quite formal methods. I use domain-specific languages to design a lot of my stuff, and it might be that we could find a way of going from a domain-specific language that is optimized for IETF protocol design.
J
That is probably more practical as an expectation than expecting people to do mathematical proofs of protocols, which, to be honest, I've not done in 25 years; I've not done that since I got my doctorate, and I don't think many other folk do that willingly, you know, unless you've got a grad student.
B
We have heard similar feedback, on both themes. One: when we say formal methods and formal verification, it covers quite a lot of different practices, and you've got to be really clear on what properties you're really talking about. And we have equally heard that this is not a set of expertise we have in the IETF, and we completely appreciate that. I'll speak for myself.
K
Yeah, so, Justin Richer. I actually wanted to talk about some of the experience that we've had with this with OAuth and GNAP and things like that, which, to Phil's point, was through grad students.
K
You know, there are some great master's theses that have come out of this work. One of the difficult things with these multi-party, multi-step, and especially stateful protocols, and trying to model them, is that a lot of times the assumptions about who knows what piece of information at any given time are really difficult to model within the tools, because the tools generally tend to have a much more simplified model of: A connects to B,
K
therefore A can talk to B, and things like that. The other challenge that we found is we've had grad students come running up to us saying things like: did you know that bearer tokens can be shared between people and then you can use them? And the entire working group said: yes.
K
And so that type of thing: we know that that is a very significant trade-off in the space. Being able to communicate that, so that a grad student doesn't spend a month modeling that bearer tokens can be shared between parties, and instead actually spends time finding the more subtle interconnection problems, which have, I will say, been very, very useful to get that kind of feedback on.
K
So I support there being more of an ongoing liaison, or onboarding, or some type of relationship, whereby the institutional knowledge that's in the IETF, of "these are our assumptions about the space that the protocol lives in," the kind of stuff that you write down in security considerations, gets connected with the institutional knowledge of what the tools can model and how they model those different things.
B
So, just managing strictly by who's in the queue: I think, Joe, you are next, so swap with Mike.
L
Joe Salowey. I think Justin said a lot of the things I was going to say, but one of the values of doing some of these things isn't even necessarily in the formal analysis, which is important, but just getting to the point of knowing what assumptions you're making, and documenting what security properties you want, can be incredibly useful, and it's something that we don't always do, even in the security area, let alone outside of the security area.
L
M
Mike Ounsworth. So I have exactly the same comment that Joseph just made, but from the "I don't really know how to do this" angle. If you're going to flirt with mandating this in security standards, a prerequisite is going to have to be a really good how-to document: to what depth do you need to document your assumptions, to what depth do you need to document your requirements, what's the formal language for doing so. And for people who've never done formal modeling, that's going to have to be...
M
B
So, if I quickly summarize, I think what you're saying is: be very careful with how much formalism you put on top of it; there is an onboarding problem.
N
Pieter Kasselman. So I participate in the OAuth working group and a few other places. Similar to Justin's comment, we've used this in a number of places, and we're looking at using it for analyzing some of the issues that we see with what we call cross-device protocols. One of the things that I would caution, or a few things to think about, is around education, also for people who are relying on, or hearing, that there was a formal analysis of this protocol.
N
They don't necessarily understand that there's a set of assumptions, a set of models, even around the attacker, etc., and if you don't understand that, the proof can very easily be taken out of context or misunderstood. So I think some education on that side. The other thing is the limitations of the process itself.
N
Some of the problems that we see: I think it's not very good at modeling human behavior, where we have humans that participate in the protocol; or at least I haven't seen good instances. Maybe that's a challenge and an opportunity for practitioners, to start thinking about how we also model human decision-making in this process as well. So maybe there's an opportunity to help advance the field, but be cautious about the limits of the mechanisms. Overall, though, very supportive of this.
B
Thank you. And just to reiterate, I think the unique caution you just gave us is to remind folks on the IETF side who are developing the protocols to really consider what the proof and the models will actually bring back to you, and not to make assumptions there. And then the other thing that Mike and Justin were talking about...
O
My personal experience on this is: you need a professor who is interested in your work and in formal proofs, and it happens in a time frame when it's the beginning of the term, when he can tag a master's student or two to do it. We are now actually having this happen in DRIP: Dr. Gurtov has a student actually doing the formal proof, because he needs things for his students, of course, and he's been working with us for years.
O
He knows his stuff, so we're getting it done, but it was because of all these circumstances falling together. Back in the spring he didn't have a student assigned to it; now, here in the fall, at the beginning of the term, he was able to assign a student to it. So you have to have someone who's aware of it, you've got to have someone who's capable of doing it, and have the resources to do it, and the timing to do it, and maybe you'll get it done.
B
Yeah, the big takeaway I'm hearing is the reminder that we don't have that expertise as native in the IETF. Right, Florence?
P
Hi. So, yeah, broadly in favor of doing something, and I think a lot of really good things have been said, like: this may encourage us to document our security assumptions when writing protocols, which is a good thing, and we shouldn't see this as a silver bullet. The thing I was going to add is: if we really want this to happen,
P
if we really want formal verification of standards, then there's probably also something that we could do at the protocol-writing stage: how does the IETF write protocols that are conducive to being formally verified? I don't know anything about that, but I'm sure there's a big community out there that does.
B
Yeah, good point: there's additional work we can do to set the stage here. Next in the queue is Paul.
C
So, with the last couple of comments, this may be a very small thing, but a formal-proof directorate, similar to the YANG doctors, might be of interest. And when Bob said "grad students" and such, I always cringe, because I never was one; but if there is a pool of grad students who are looking for interesting work on formal proofs, even if they have nothing to do with the IETF, it would help to have a place to put them where we say: we will be offering you things that we're pretty sure are interesting.
C
B
It's a very concrete suggestion about how to integrate with our processes. Peter?
N
Yeah, just building on what Bob and Paul said (this is Peter, not Paul): when Bob was talking, I was thinking, this is a pipeline. In a way, we need to think about building a process that integrates with where the resources are, the grad students who are doing this work, and that may take a bit of planning, but maybe that's another way to think about it.
N
It is that integration mechanism: teeing up these projects, having institutions that we can engage with, so that when fall comes, there's that opportunity for the analysis. So maybe that's just one thought.
H
All right, so, two things. If you're going to play with academics, you have to understand how the academic work world works, and there are things the IETF does not provide that they need: we need to pay for work, in the sense of providing citable references and stuff that you can use in your academic career. And my understanding, from talking to computer science professors back home,
H
is we do not today provide any of those things. The ANRP prize and stuff like that is going in that direction, but that's IRTF, not IETF. If we're bringing this stuff into the operational practice of the IETF, we have to change some things, and those are fairly fundamental things: the fact that contributing to a draft, which is a temporary thing, is something you can cite and get academic brownie points for is not something we do today. And on the directorate idea,
H
the risk here is that we bring in, you know, the odd academic who happens to be interested in our work, and then we place these requirements on our work and are unable to satisfy them. The other thing I was going to say is: beware of the scoping issue in formal verification, as other people have said. There's this meme that goes something like: "out of scope," said no attacker ever.
B
Fair enough, and I'm going to jump in. Between the recognition that credit is really key, and that credit as we think about it in the IETF is not commensurate with what often happens on the academic side: my understanding, and Joe may correct me here, is that's why the TRON workshop was actually created, I think with IEEE, because there was a ton of academic interest in TLS 1.3, but again they wanted to publish at a citable venue that helps with tenure and credit. And I believe that was... Joe?
B
L
Yeah, so with TLS 1.3 we did hold several workshops in coordination with different conferences, NDSS and a couple of other ones, and that helped give a venue for people to be able to present their work in a more academic setting versus a standards meeting.
B
If you're talking, we don't hear you; you're at the top of the mic queue. Please turn on your audio and pose your comment or question.
B
I
Okay, so maybe increasing the visibility would help. Some of the projects could be bachelor's or master's projects; things used in class may also be very good, because many of the people who graduate from bachelor's and master's programs go straight into the workforce, and it may not necessarily be work that would be considered publishable, or high-credit publishable.
I
So allowing for those kinds of easier opportunities, and making those visible, would be, I think, a very good thing, especially now that remote participation is much easier.
B
Okay, we're going to close this topic. We very much thank the community for the feedback; this was an open-ended call for feedback. Largely what I heard, and you can correct me if you heard it differently, Paul, is: we've had past experience; we heard folks at the mic talk about the goodness; we heard the need for education, bootstrapping, and onboarding on both sides if we're going to leverage
B
more of that. We may need to do something a little bit different with how we write the specs. We were also cautioned to be really careful about what the expectation would be for what you get on the other side. We heard a reminder that a lot of those participants are here at the IETF; also, they may need some help understanding what exactly they would need
B
as a formal verifier, whether it's the technical thing to catch, or even how, mechanically, they would reach into the organization, recognizing that we here at the IETF may not know much about the workflow of that larger community of academics and researchers that are participating. What I did not hear is anyone going to the mic and saying we should absolutely not do this, and I also didn't hear a very strong statement saying
B
this should be table stakes for, you know, anything in particular. I heard something in the middle, so we really appreciate that feedback.
B
No? Okay, all right, fair enough. Again, we appreciate the feedback. I think, from our perspective, there is a groundswell of activity in the IRTF that I think will very much help us. All the feedback you heard, your excitement around this but also your caution, would be great things to give to Colin, as IRTF chair, who is evaluating whether to spin such a group up, and I'm sure there are a number of proponents pushing that forward.
B
Some pointers, if folks aren't aware, in the IRTF. Anything else? Okay, excellent, all right. That brings us to our last formal topic. Justin, if you want to come on up and get your slides queued up.
D
K
So I'm now going to talk about the draft that caused all those problems. HTTP Message Signatures is a draft that is sort of going through the throes of working group last call in HTTP; I think that's an accurate description, some chuckles from the ADs, if those didn't make it over the mic. But basically, one of the interesting things about this document for this group is that, even though this is very much a security-focused thing, it was done in the HTTP space.
K
The goal of this document is to provide detached signatures for HTTP messages, and specifically signatures that are robust against common changes that you see in the HTTP space: you know, headers being reordered and all sorts of stuff like that. I want to be very clear that this is not message encapsulation. This is not OHAI. This is not S-HTTP. This is not TLS. It is solving a different security problem at a different point of the stack.
K
(Yeah, the next slide and "let go of all control" buttons are right next to each other, and they're both huge.) All right. So the reason that this is in the HTTP group, and not in any security working group, is that it turns out signing a bunch of bits isn't the hard part; dealing with HTTP is the hard part. HTTP is a super weird protocol, especially as it exists in the wild. Like, I did not fully appreciate how weird HTTP is until I got involved with this work
K
a couple of years ago. I knew that it was hard; I had invented various versions of HTTP message signing three different times in the past, before starting work on this with the HTTP working group, and now, going back,
K
I realize that every single one of those has absolutely horrible gaping holes in it, in just different ways that HTTP messages can be legitimately chopped up and moved around without changing any of the semantics of the message, but changing the bytes that actually go across the wire. And, as we all know, signatures like dealing with bytes; they don't like dealing with semantics.
K
So what we needed was a mechanism; the goal was to have a mechanism where the signer and the verifier are seeing largely semantically equivalent messages that don't look the same. Maybe it hit an intermediary, or an application layer, or any number of things that could move stuff around. So, instead of doing any sort of strict canonicalization, or doing an encapsulation, for those familiar with JOSE or, tangentially, the stuff that they're doing with OHAI, all of that is great stuff,
K
what we wanted to do instead was to have a common signature base that could be generated from both of these messages, from the signer's side and from the verifier's side. I'm not going to go into great detail right now, unless we have a lot of questions about the mechanics of how this works, but basically what you end up with is: you create a signature base string from the message that you're signing, and then you write down, in a very deterministic format,
K
"these are the things that I signed, in this order." Then you hand that whole thing to the verifier, who can look at it and say: I'm going to take this same message and attempt to create the same signature base string. And so anything that was signed, even if it's moved around in the message, I should be able to get the same value out and put it in the base string in the right spot, so that all of our crypto primitives actually work. All right, so, very quickly:
K
here's what this looks like. We've got an HTTP message here, and we've got a few bits that we want to sign, so we've got parts of what's called the control data. (I learned there's all this esoteric terminology in HTTP; it's absolutely amazing. If you really, really want to know some weird stuff, buy me a beer and ask me about trailers later. It's bizarre.) Anyway,
K
so what we're going to do here is sign part of the control data, and we're going to sign one of the header fields. The way that we do that is we say, oh, and we're signing the method (the font color just didn't show up very well), so we're signing the method, and we're signing the target URI, which has a sort of strict, formal definition in HTTP-ish,
K
in most cases; again, HTTP is weird. And then we're signing a particular header; in this very trivial example it's the Content-Type header. Why you would want to sign only that, I don't know, but we did here. But the important bit is this last line, the @signature-params.
K
This is an ordered list of everything that I just signed, including a set of parameters, in this case a creation timestamp and a key identifier, and what I do is actually send that string across the wire. Now, we're making use of something called HTTP Structured Fields, so that even if this gets whitespace moved around and stuff like that, this has a formal data model to it, in a way that the rest of the HTTP message kind of doesn't.
K
So we can rely on being able to get this Signature-Input line in a state that we can actually parse and make sense of, and then the signature is just a binary blob, again using HTTP Structured Fields, in this case to send (I think that's an RSA signature or something like that). And that's pretty much it: you add these two headers, and because they are what are called dictionary-type headers,
K
...you can add multiple signatures to a single message, which means an intermediary that is in the process can take a look at this, validate this signature, add its own headers, sign this signature and any additional stuff, and pass it along. Which means that inside of our data centers, all of these origin servers that are several hops removed from the outside world can actually validate the message that's coming through. So it's a really, really powerful pattern; you can play around with it and see how it works.
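A minimal illustration of that dictionary behavior, with invented labels and key identifiers: each signature gets its own member of the Signature-Input field, so an intermediary can append its own entry without disturbing the client's:

```python
def append_signature_input(existing, label, covered, keyid, created):
    # Signature-Input is a structured-field Dictionary: comma-separated
    # 'label=(...covered components...);parameters' members.
    items = " ".join('"%s"' % c for c in covered)
    entry = '%s=(%s);created=%d;keyid="%s"' % (label, items, created, keyid)
    return existing + ", " + entry if existing else entry

# Client signs first; a proxy then adds its own entry, covering the
# client's signature field as well so the chain is bound together.
header = append_signature_input("", "sig1",
                                ["@method", "content-type"],
                                "client-key", 1618884475)
header = append_signature_input(header, "proxy_sig",
                                ["@method", "signature"],
                                "proxy-key", 1618884480)
print(header)
```

An origin server deep inside the data center can then pick whichever label it trusts and verify just that member.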
K
If you go to httpsig.org (I think it's mostly up to date, except for the trailer stuff; like I said, the trailers are an absolute mess), you can play around with signing and verification, if you want to see how it really works and sort of step through the different parts of the process.
K
But what we really need at this point is more eyes from the security side of things. Both Annabelle and I, the authors on this, do most of our work in the security domain.
K
We have an extensive security and privacy considerations section in this document, but we're enough of security professionals to know that we probably missed some things. There are some subtleties here that we may not have picked up on, and we are calling out to the wider security community, even beyond sort of the formal secdir review. We want more eyes to look at this; we want more people to build and implement this and figure out...
K
...if there are any gotchas that we need to call out, any sharp corners that need to be rounded off. It is not an easy space to work in, signing HTTP, but people have been doing it in the wild in sort of dangerous and incomplete ways for well over a decade at this point. Just about every AWS call goes through a whole chain of signed HTTP messages using their own proprietary mechanism.
K
That was one of the inputs to this work in the HTTP working group, so we know that it works within certain domains and boundaries, and we want to make sure that this works in as wide a space as we can. So this is our call out to the wider security area: come take a look at this, play around with it, see how it works, and help us make this the best it can be.
K
So you can use it now, actually. There are a number of implementations on a bunch of different platforms; I've personally written one in Java, and a probably-not-as-good one in Python, so it's out there. Yaron Sheffer has one, I think it's in Go (it's in Go, okay). And browsers: it is not built into browsers yet. This is something that Annabelle and I have talked about a lot.
K
Ideally, what we would love to see is this as part of the fetch API, so sort of post-RFC that's something that we want to pursue, so that a developer doesn't have to think about this: they call fetch and hand it a key, and magic happens. That's what I want. And we've done a little bit in trying to sort of write wrappers around fetch in a couple of implementations that I've worked on.
K
We've kind of done that kind of thing on the Java side. In my library I have adapted it so that you can wrap it around a Spring RestTemplate, which is basically the Java Spring version of fetch, and so you basically make a RestTemplate, hand it a key, and then it handles all of the signature stuff and putting it in the right spot. And then there's a verifier on the HTTP servlet side, again in Java, that does the reverse of that, to make sure it's validated. So, yeah.
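In the same spirit as those fetch and RestTemplate wrappers, here is a hypothetical Python sketch of the pattern: hand a wrapper a key once and have it attach the two headers to every outgoing request (the base-string assembly is deliberately simplified, and HMAC is only a stand-in for the real key's algorithm):

```python
import base64
import hashlib
import hmac
import urllib.request

class SigningClient:
    """Hypothetical wrapper: callers hand over a key once, and every
    outgoing request gets Signature-Input and Signature headers."""

    def __init__(self, key, keyid):
        self.key = key
        self.keyid = keyid

    def prepare(self, req):
        # Cover just the method and target URI in this sketch.
        params = ('("@method" "@target-uri");created=1618884475;keyid="%s"'
                  % self.keyid)
        base = ('"@method": %s\n"@target-uri": %s\n"@signature-params": %s'
                % (req.get_method(), req.full_url, params))
        tag = hmac.new(self.key, base.encode(), hashlib.sha256).digest()
        req.add_header("Signature-Input", "sig1=%s" % params)
        req.add_header("Signature",
                       "sig1=:%s:" % base64.b64encode(tag).decode())
        return req

client = SigningClient(b"shared-secret", "demo-key")
req = client.prepare(urllib.request.Request("https://example.com/resource"))
print(req.get_header("Signature-input"))
```

The verifier side does the reverse: rebuild the base from the received message plus the Signature-Input member, and check the signature value.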
K
This is out there. I will say one caution, though, when you're looking at this: like I mentioned before, people have been doing this kind of thing in the wild, in loosely weird ways, for a bunch of years. There was an individual draft called Cavage signatures, which a lot of people here are probably familiar with, at least in passing, especially with the rise of Mastodon over the last couple of weeks, for reasons we won't get into here.
K
Mastodon uses a fork of a very particular version of the individual draft of Cavage signatures from many years ago. So a lot of people have written implementations of that, and that is not the same as what's in the HTTPbis draft, and we would like to push that community, a lot of developers in that community, to sort of get up to speed here. So it takes a little bit of caution when you pull in an HTTP signatures draft, because it might mean several different things.
K
So this is sort of the grandchild of SigV4. So the Cavage draft: Cavage himself wrote SigV4 at Amazon and then left Amazon and sort of dropped an I-D out into the wild, and that was the Cavage signatures draft that floated around without a working group for many years. Meanwhile, there were a bunch of other attempts to also do HTTP signatures, and we pulled all of that as input into the HTTPbis work, including...
E
I don't remember what that's called, actually.
K
Yeah, like, I have full confidence that she's got that, you know, yeah.
K
Yeah, absolutely agree. I will say one thing: like I said, I was not joking when I said that I've reinvented this three times. The problem is that it seems to be really easy to do. It's one of those "how hard can it be?" things, until you sit down to actually do it, and it's really hard.
B
Philip, you're up next. And Brendan, I think you dropped off, so if you didn't mean to do that, please put yourself back in. Yeah.
J
Yeah, I think that this is useful in the HTTP world, but really I think that we should recognize that HTTP is not optimal for web services, and we should build something that is optimal for web services and doesn't start from all the bizarre legacy and so on...
J
...that HTTP now has. The way that I move signed messages over HTTP is, I put a wrapper around it and I ship the payload, that is, the HTTP payload, and then you get around the need for all this canonicalization, whatever, because your proxies aren't going to mess with it. And that's probably a better way of going about it than trying to get HTTP infrastructure to do things that it should do, but never will, because, well, people.
K
We've deployed different versions of that, and there are some really, really big trade-offs in going in that direction as well. Yes, it makes the crypto a little bit easier; it actually makes the messaging portion a lot harder. I do also want to point out that, unlike previous versions, this actually works with H2 and H3 just kind of natively. The definitions in the draft are based on the semantic definitions of HTTP, so that's as close as HTTP gets to having a formal model.
K
This is what everything is defined in, as opposed to previous versions, where it was "take these bytes from this string as part of the HTTP/1.1 message". Yeah, we could just...
J
Come back there for a second: there are some applications where you absolutely need this, and when you've got one proxy that's signing a message and then handing it over to another, and handing it over to another, absolutely you definitely need to have it in the HTTP there, or bad things happen.
Q
Brendan. So, to be clear, I haven't read the draft and I'm just trying to understand exactly what the target is here. Is this essentially providing message authenticity between a pre-established relationship, client to server, server to client? Where does this fit?
K
So it works for both server-to-client and client-to-server, okay, and key establishment and key negotiation are out of scope of this draft. Okay.
Q
And it's, like, identity-based, not content authenticity?
K
So content authenticity can be added if you use the HTTP digest draft, which translates sort of the content into a header field, which you can then cover with the signature.
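For illustration, that body-into-a-header step looks roughly like this: hash the content, carry the hash in a Content-Digest field, and let the signature cover that field (a sketch of the idea, not the digest draft's full mechanics):

```python
import base64
import hashlib

def content_digest(body):
    # Content-Digest carries a hash of the body as a structured field,
    # e.g. sha-256=:...:, so a signature that covers this header
    # indirectly covers the content itself.
    digest = base64.b64encode(hashlib.sha256(body).digest()).decode()
    return "sha-256=:%s:" % digest

print(content_digest(b'{"hello": "world"}'))
```

Including "content-digest" in the covered components of the signature then binds the body without the signature scheme having to canonicalize the body itself.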
K
All right, thank you, yeah. Please read the draft; we want as wide a set of experiences as we can on this. Thank you.
A
So, that's the end of the agenda, so it is open mic. Does anyone have any other things they want to talk about related to security?
A
Okay, so then see you online, on the various tools that we have, and maybe in Yokohama physically again. Thank you very much.