From YouTube: IETF115-PRIVACYPASS-20221111-1200
Description
PRIVACYPASS meeting session at IETF115
2022/11/11 1200
https://datatracker.ietf.org/meeting/115/proceedings/
A
All right folks, welcome to Privacy Pass at IETF 115, here in London, and Ben is online. Thank you all for coming. Let's see, just a few things. I know this is the bitter end of the meeting, and so the Note Well: you've all seen this before, about what you should consider before you're contributing to the IETF, and the code of conduct. I'm just gonna leave that up there for a brief second, but if you're not familiar with it, obviously you should be by this time in the week. This group is in general really good, but we want to make sure we treat everybody with respect and keep focused on engineering solutions.
A
Next, let's see. When you're in the meeting room, please sign into the on-site meeting tool so that you can join the queue; it makes it a little easier to manage the queues between remote participants and local participants. You know this already, you've been here a while. All right, for our agenda: we have a couple of discussions of working group drafts. Right now we have our core documents either through working group last call or in working group last call, and for the ones that are in it, the issuance document and the HTTP auth scheme, we really want to make sure we get good review. I haven't seen much activity on GitHub lately or on the list. We want to make sure these documents are ready to move forward, so please put your reviews in. Tommy?
B
Yeah, just to comment on that: for the core docs, we did get some good review on the auth scheme, and we sent that over to HTTP and made some adjustments based on that.
B
Yeah, but I think definitely emphasizing the call for review on the issuance protocol, since that one has not received as much.
A
That would be good. I mean, these documents have been around for a little while, so even if you've read them and you feel like they're ready to move forward, letting us know on the list would be helpful. There may not be much to comment on at this point, but we'd like to make sure that these are being reviewed.
A
We did have a plan for some time to talk about key consistency, because that's a newly adopted draft, but we did not have anybody take us up on it, so we're going to strike that from the agenda today. But if anybody wants to make any comments on that, that would be fine. Other than that, we have a few other presentations. Is there any other business anybody thinks we need to conduct at this time, or modifications to the agenda?
A
All right, given that, I think, Tommy, you're up, and I'll assume that you'll take control. Let me just get rid of this, and you should be able to share the slides, right?
B
Yeah, if you just grant... yeah, perfect, thank you.
B
All right, hello everyone. I'll be giving the update on the rate-limited issuance document, which was adopted since last time. One of our co-authors, Stephen, is in the room; he'll be presenting some stuff later, but he can also help chime in.
B
The two things we wanted to go through today: one is a recap of the differences in rate-limited token issuance as compared to the basic token issuance, just to remind people, since this is the first time we're presenting this as a working group document. So take a look, make sure you understand and are happy with the current shape it's in. Then we'll go through some of the open issues; there's essentially one major open issue, and then we'll mention some of the other smaller ones.
B
All right, starting with the recap: what are we doing here? This is essentially an extension to the basic issuance protocol.
B
It is based on the publicly verifiable variant, which in the basic protocol is Type 2, and the difference here is that this is always done in the split attester and issuer model, whereas with the basic type you can have them either split or combined. In this model the attester maintains some state on behalf of the client, essentially counting the number of tokens that a client has received for an anonymized origin bucket, and this is a bucket that is defined by the client.
B
The ID is indeed in a one-to-one correspondence with some origin, but the attester doesn't know which. Upon requesting a token through the attester, the request goes to the issuer, and then the issuer, in its responses to the attester, adds an extra bit of information, which is a rate limit to enforce. So it gives out a token and then says, "this is the number of tokens that I'm going to allow for this origin." The issuer does not learn whether that rate limit was reached or not, but the attester uses it to drop the request if the limit is reached.
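As a rough illustration of the bookkeeping just described, here is a minimal sketch of an attester counting tokens per anonymized origin bucket and enforcing the issuer-communicated limit. The class and field names are invented for illustration; the draft does not specify this data structure.

```python
from collections import defaultdict

class AttesterState:
    """Toy model of attester-side rate-limit enforcement."""
    def __init__(self):
        # (client_id, anonymized origin bucket) -> tokens issued this window
        self.counts = defaultdict(int)

    def allow_request(self, client_id: str, anon_origin_id: bytes, limit: int) -> bool:
        """Return True if the client may receive another token for this bucket.

        `limit` is the per-origin rate limit the issuer communicated.
        """
        key = (client_id, anon_origin_id)
        if self.counts[key] >= limit:
            return False  # limit reached: the attester drops the request
        self.counts[key] += 1
        return True
```

Note that the issuer never sees these counters; only the attester learns whether the limit was hit.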
B
Looking at this visually again: there are two halves of all of the different Privacy Pass protocols. There is the side that actually talks to the origin, that's the challenge and redemption flow, and then there's the side that is the issuance flow. There's a very minor change to the challenge and redemption flow.
B
Well, there's actually no change to the redemption, and the challenge only has to include one new key. This is an HPKE key that corresponds to the issuer, so that the client knows how to encapsulate the actual origin info and the origin name when requesting a token from the issuer.
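To make that delta concrete, here is a minimal sketch of a challenge structure carrying the one new field. The field names, ordering, length-prefix encoding, and the token type value are illustrative assumptions for this sketch, not the draft's normative wire format.

```python
import struct
from dataclasses import dataclass

@dataclass
class RateLimitedChallenge:
    token_type: int          # 2-byte token type (0x0003 assumed here)
    issuer_name: bytes
    origin_info: bytes
    issuer_encap_key: bytes  # the new HPKE public key: the only addition
                             # relative to the basic challenge

    def serialize(self) -> bytes:
        out = struct.pack("!H", self.token_type)
        for field in (self.issuer_name, self.origin_info, self.issuer_encap_key):
            out += struct.pack("!H", len(field)) + field  # length-prefixed
        return out
```

The client would use `issuer_encap_key` to encrypt the origin name so the attester relaying the request cannot read it.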
B
There is the token rate limit per time window that the issuer communicates to the attester, and then there's also a set of signatures that the client provides, and that the issuer also provides input on, to prevent the client from cheating: taking what is in reality one origin and trying to make it look like it's two.
B
The origin can provide multiple challenges, but any particular request for a token is bound to an issuer, and that's the same as the basic types; it's no different. So, looking at the actual on-the-wire bits: this is how the basic token request looks for the publicly verifiable type. It has the token type, key ID, and the blinded message, and then there is the difference for the rate-limited type.
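The basic Type 2 request layout just mentioned can be sketched as follows. The one-byte truncated key ID and the field sizes follow my reading of the issuance draft, so treat them as assumptions rather than the normative encoding.

```python
import hashlib
import struct

def build_basic_token_request(token_key: bytes, blinded_msg: bytes) -> bytes:
    """Sketch of the basic publicly verifiable (Type 2) token request:
    token type || truncated key ID || blinded message."""
    token_type = 0x0002                                         # publicly verifiable
    truncated_key_id = hashlib.sha256(token_key).digest()[-1:]  # 1-byte key ID
    return struct.pack("!H", token_type) + truncated_key_id + blinded_msg
```

The rate-limited request adds its encapsulated origin information on top of this base shape.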
B
All right, so that is the recap of the protocol and its delta from the basic type. Going into the issues: the main issue that we've been discussing recently is number six on GitHub, which is about a linkability attack that was not detected earlier on, that has to do with a malicious issuer.
B
This is the secret that is used in the responses from the issuer to the attester that helps the attester verify that the client is not cheating in terms of what it's claiming as its anonymous origin ID.
B
The point is that the issuer can essentially confirm, when it's working well, that what the client claims are two different origins really are two different origins.
B
A malicious issuer can abuse this to essentially make it look like the clients are cheating or malicious. Based on the current text in the document, when the attester detects this collision, since it was assuming that that is a malicious client, it currently just says you drop that request on the floor. That is kind of the root of this particular problem, because if you drop that on the floor, then that is potentially a signal going out to whoever requested the token.
B
The reason that this request didn't get a successful token back may be because you had hit this collision, which implies that you had previously requested a token for a different origin. So you could have various origins colluding with their issuer to say, "hey, make me collide with this other origin," and then the issuer tries to guess, when a token fails to be generated, that it was because that client had previously talked to a different origin.
B
It's not a particularly direct signal. There are many reasons that the token issuance could fail: you could have hit your rate limit, the attester could have not liked you for some other reason. But it is a bit of information that you could leak if the attester naively just drops requests.
B
So the proposed fix, which we think is the smallest delta and the correct thing to do for the protocol, is to not just naively, silently drop a request if you see this type of collision. Not only does that allow a malicious issuer to leak a signal, it also isn't particularly useful.
B
Let's say we do have a malicious client that is intentionally trying to make one origin look like multiple different origins. Silently dropping the requests on which they're trying to cheat, and not doing anything else, is probably not the right thing to do. The whole point of the attester vouching for the client is that it is attesting that the client meets some bar, that it is a legitimate client.
B
So if the attester has detected cheating, it should probably instead penalize the client, or say, "you're no longer someone I'm willing to serve tokens for." Overall, we need to work on the text here, but the proposal is that, instead of just dropping a request silently, the attester would flag the collision events and use the collision patterns to reevaluate trust in the issuers or clients. In general, with non-malicious implementations, you should never have these collisions.
B
If the attester detects this situation, rather than just dropping a request, it should essentially remove the issuer as a trusted partner, or at the very least put them in a penalty box. Similarly, if it's a singular client that is doing this, and it's not happening across many different clients for a particular issuer, then the heuristic can say we're not going to issue any more tokens in the future.
B
To that client, that is. So, overall, rather than dropping a particular request in the moment, the approach should be rejecting future requests from that client or, if you eventually detect that this is more likely an issuer problem than a client problem, you stop using that issuer, which effectively stops letting any client talk to that issuer through that attester.
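The heuristic proposed above could be sketched as follows. This is my illustrative reading of the proposal, not text from the draft: the threshold, names, and the client-vs-issuer decision rule are all assumptions.

```python
from collections import Counter

class CollisionTracker:
    """Sketch of attester-side collision handling: instead of silently
    dropping a colliding request (which leaks a signal), record the event
    and use the pattern to penalize either the client or the issuer."""
    def __init__(self, issuer_threshold: int = 3):
        self.by_issuer = Counter()
        self.blocked_clients = set()
        self.distrusted_issuers = set()
        self.issuer_threshold = issuer_threshold

    def record_collision(self, client_id: str, issuer: str) -> None:
        self.by_issuer[issuer] += 1
        if self.by_issuer[issuer] >= self.issuer_threshold:
            # Collisions across many clients point at a malicious issuer:
            # stop letting any client talk to it through this attester.
            self.distrusted_issuers.add(issuer)
        else:
            # An isolated case is treated as a cheating client: reject its
            # future requests rather than dropping this one in the moment.
            self.blocked_clients.add(client_id)
```

With honest implementations on both sides, `record_collision` should simply never fire.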
B
As a bit of a sidebar, we filed another issue that we should have more text in this entire area, as we were talking about it. For the basic types we already have some experience with attester and issuer deployment; we're getting a lot more issuers being deployed for the basic types, and we're working with our attester there.
B
There is already a fair amount of validation that needs to go on between an attester and an issuer. There are things beyond just its pattern of "are you being malicious or not": are the lengths of the windows that you have for enforcing a rate limit acceptable? You shouldn't have windows that are one minute long; you shouldn't have a window that's a year long. So there are things that the attester needs to validate before it's willing to trust an issuer.
B
For the different rate limits there may be bounds on that: you may not want to accept a rate limit of one; you may not want to accept a rate limit of a...
B
So there are values that you need to be able to validate there, and of course you also need to make sure, if this is kind of a generic case, that the issuer you're working with has enough different origins being served to create an anonymity set.
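A small sketch of the attester-side validation discussed above: before trusting an issuer, check that its rate-limit window and limit fall within acceptable bounds. The specific bounds and the minimum-anonymity-set check are illustrative assumptions; the transcript only says one-minute and one-year windows, and a limit of one, would be unacceptable.

```python
from datetime import timedelta

def validate_issuer_policy(window: timedelta, rate_limit: int,
                           origins_served: int) -> bool:
    """Sketch: accept an issuer's policy only within sane bounds."""
    # Reject degenerate windows (e.g. one minute or one year long).
    if not timedelta(hours=1) <= window <= timedelta(days=30):
        return False
    # Per the discussion, a rate limit of one is not acceptable either.
    if rate_limit < 2:
        return False
    # The issuer must serve enough origins to form an anonymity set.
    if origins_served < 10:
        return False
    return True
```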
B
There was an alternative approach that Nikita brought up and we discussed, and I know he's on the call here, which would involve a completely different approach to the signature that currently prevents the clients from cheating. Instead of using that CFRG work, you would use a zero-knowledge proof that the client would present to both the attester and the issuer, and it would be kind of a one-way thing.
B
So this is something that, were we to go in this direction, would need a lot more work, kind of starting from scratch in CFRG and other discussion, and we're not sure of all the details of how it works.
B
So I think my recommendation, and our recommendation as the authors, is: for the basic rate-limited type that we're defining now, do the fixes that we previously mentioned, but have the zero-knowledge proof direction be a candidate for future token types and future work in this area, once it's gone through CFRG, other groups, and other analysis. I think there are other features that people have discussed, beyond just a basic count for the rate limit, that could involve state or these more complex relationships, and bundling those with the zero-knowledge proof...
B
That would make sense to me. So yeah, I see some comments in the chat. Yeah, it does seem like a big change, but I think it's very cool and we should keep pursuing it. I think one of the benefits of the architecture that we are using is that we have flexibility on token types. We can define new token types that people can move to, that can bring us new cryptographic algorithms, that can bring us new capabilities and properties.
A
And we do have somebody in the queue, if you'd like.
D
Not a question, just a note that the JOSE working group is in the process of being rechartered by the IESG to formalize JWP, which is pretty much exactly what you were just talking about. The JOSE working group is intended to look at and formalize how to do JSON tokens with zero-knowledge proofs, with approved algorithms from the CFRG. So that is in process.
B
Next meeting? Okay, got it, yeah. We will definitely want to track that, and I'll also see what common bits are present in CFRG then. Thank you.
B
There's a discussion around ways to potentially hide the issuer rate limit from the attester. That is something that we think may work better in a future token type, kind of like the zero-knowledge proof.
B
There is an issue, just to remember this: yes, I think we need more discussion and text around how you deal with rate-limited tokens when you have an origin info that lists multiple origins. This is like when you have a third party...
B
Let's say you have a reCAPTCHA embedded on some other website and it wants to do a rate limit: how do you handle which one is the one you're limiting on? And then there's this last one I mentioned, where we want to expand the text on the trust relationship between the attesters and issuers and what they need to validate against each other.
B
So that's the end of the slides I have here. I think our main next step is to update the document for these issues, based on what we discussed here. If people have opinions, we'd love to hear them. We are still tracking the CFRG dependency for the signature key blinding that helps us do the proof that the clients are not cheating. Other than that, we're gonna keep working on this. Any questions?
C
Yeah, thanks for the overview. I was asking about the...
C
I think it's an encapsulation key, right, that the origin provides: "I want you to go get a token from this issuer," and it gives you a particular key to use to encap your request. That just seems like a potential tracking vector. I know there are other direct tracking vectors if they collude, but that seemed like a potentially significant one, and it seems like there are some mitigations in the document, where the attester can help mitigate by trying to make sure that the key isn't unique or something. Is the assumption that there should be a single key for each origin, like there should be a universal, consistent key per origin?
B
There would be one, or one slowly rotating key, for that issuer, and so you need to make sure that you have the same key as everyone else. Working with the attester is one way to do that: if you're doing a key consistency approach, you have one source, which is the origin telling you "here's the key to use," and the attester can also potentially be a vector to validate that that is the same as what it sees across all other clients.
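The cross-check just described can be sketched very simply: the client accepts the origin-provided issuer key only if it matches what the attester has seen across all other clients. Function and variable names are illustrative assumptions.

```python
def key_is_consistent(origin_provided_key: bytes,
                      attester_observed_keys: set) -> bool:
    """Sketch of a key consistency check: accept the issuer key from the
    origin only if the attester reports seeing the same key globally.
    A key unique to this user would be a tracking vector, so reject it."""
    return origin_provided_key in attester_observed_keys
```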
B
Yeah, it definitely doesn't need to be, and I don't recall offhand, but I think it is optional, because even in the basic token requests the origins include, for example, the RSA blind signature public key that they're going to verify against.
B
That is something that doesn't strictly need to be in the challenge if the client has another way to get it, so origins could leave it off and essentially force the clients to fetch it some other way, but it is defined as a parameter that can be included in the challenge.
C
Yeah, that's helpful. I mean, it does seem like we should have as little information as possible coming from the origin that could be used to directly communicate with the issuer.
E
I'm very naive about all this stuff, but I've been trying to figure out if the key blinding is just equivalent to taking the public key and encrypting it under symmetric encryption, and passing the symmetric key around as needed. Is that all that the key blinding is actually doing here?
F
Stephen, Google. So the scheme there is slightly more complicated than just the symmetric thing: it's using the public key of the issuer, but then we have the extra binding so that the attester can verify some anonymous ID that the issuer provides, without learning the actual origin or key information.
B
Yeah, I would rather that Chris Wood and some of the other authors on that comment authoritatively; I don't wanna perjure myself.
A
Right, okay. Any other questions?
A
All right, so it sounds like we have kind of a way forward for some of the issues raised. Some of the other ones still maybe need a little bit of work, but seem relatively straightforward.
A
I think... I've never tried it. Can you share a slide through that? Maybe you need to do the sharing then. Or do I share? Do you share?
F
Cool. So I'm Stephen, from Google. I'm gonna give a quick overview of the current state of Privacy Pass work in the W3C and where some of the next steps might be there. There are a few relevant works, some of them in the W3C and some of them not quite there.
F
The things that you need to do in the W3C to support this: there's the Fetch API, which is different from just the JavaScript fetch function people use, but this describes how web requests and web responses attach various information, such as the WWW-Authenticate and Authorization headers that we see in Privacy Pass in the authorization draft.
F
For this particular API, I think it currently doesn't exist in any community group in the W3C, but it would likely end up in either the WICG, which is the Web Platform Incubator Community Group, or the Anti-Fraud CG, which is a group working on a bunch of anti-fraud technology that Private Access Tokens sort of falls under. As a quick aside, the W3C has a bunch of different groups: there are interest groups, which are for exchange of ideas, community groups, business groups, and working groups.
F
The important distinction here is that community groups tend to do early work on various ideas, but don't actually produce deliverables or standardized documentation. Eventually, once something is in a good enough state, it'll move into some working group, I think, for Privacy Pass related technology.
F
To talk a little bit about how Private Access Tokens usage works right now: token issuance goes through a trusted attester; in the Apple case it's the Apple platform, and I think that's the general idea for Private Access Tokens. Then any website is allowed to redeem these tokens to have some proof that you can access this resource because you have a real device or a legitimate platform. And then, to do rate limiting with the rate limiting draft...
F
There are rate limits on both sides: each origin is only allowed a fixed number of tokens, so you don't take a hundred thousand tokens from one device and spread them to a bunch of malicious devices, and the attester can also rate limit, since you don't want to be spitting out a ton of tokens for one specific device. These sorts of token farming attacks are quite common, at least in the web space around cookies, and we'd expect similar things with Private Access Tokens and other Privacy Pass related technology.
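A toy model of those two limits, a per-origin budget plus a per-device issuance cap enforced together at issuance time, might look like this. All names and numbers are illustrative assumptions.

```python
from collections import Counter

class IssuancePolicy:
    """Sketch of two-sided rate limiting: a per-origin token budget for each
    device, and an overall per-device cap against token farming."""
    def __init__(self, per_origin_limit: int, per_device_limit: int):
        self.per_origin = Counter()
        self.per_device = Counter()
        self.per_origin_limit = per_origin_limit
        self.per_device_limit = per_device_limit

    def issue(self, device: str, origin: str) -> bool:
        if self.per_origin[(device, origin)] >= self.per_origin_limit:
            return False  # this device exhausted its budget for this origin
        if self.per_device[device] >= self.per_device_limit:
            return False  # stop a single device from farming tokens
        self.per_origin[(device, origin)] += 1
        self.per_device[device] += 1
        return True
```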
F
Another API undergoing work in the W3C is Private State Tokens. It was formerly known as Trust Tokens, but the name "trust" there didn't quite reflect what it actually did, and it was better to try getting it to a closer, more coherent story with all the other tokens that are under work. It is based on a very old version of Privacy Pass, the VOPRF draft from, I think, three years ago, and a non-standardized PMB tokens draft, which is a version of Privacy Pass that has private metadata.
F
It is currently in the WICG; we're hoping to move it to the Anti-Fraud Community Group, just because in the Web Platform Incubator group there are lots of documents and you don't get as much feedback and response there, and anti-fraud, I think, is the core use case of that API. So we're hoping to move there.
F
Let's see, the model there, unlike Private Access Tokens, is that any website, either first party or third party, would be issuing tokens if they have some concept of the user's legitimacy. I think the reCAPTCHA case is the primary one here. Also, if you have a strong first-party identity and you're willing to share that information, that might be another case where you want to be issuing tokens, and then other origins can redeem tokens from those particular users.
F
There's likely going to need to be some sort of partnership between the issuers and the folks redeeming, just because they need to know what a token means. If you get a token from some random issuer and you don't know what it's actually attesting to, what it's actually promising, that's less useful, at least for these sorts of ecosystems.
F
For the, like, 50 requests you might make on one top-level page, there's a concept of a redemption record, which is basically a stored, locally cached version of the redemption, though this is sharded by the top level to avoid that redemption record acting as a cross-site tracking vector. So some other deltas from the Privacy Pass protocol, which hopefully will go away as things get standardized or we update to the new RFCs: for Private State Tokens, in order to get key commitments and to avoid different users getting presented different keys, we're using the database discovery model, which is described in Chris's draft, which is basically a central service that fetches all the key commitments, and those are provided to clients. Having some sort of standard around how this actually functions...
F
Beyond the random thing that's written for Private State Tokens, having a standard would be good, and hopefully some of that work can come out of the key consistency draft. Currently Private State Tokens, instead of using the application/private-token-request method, where you use an entire POST, instead runs the Privacy Pass protocol via headers on an existing fetch request or existing XHR request. I think this was partially an optimization, because having a completely separate request was extra overhead.
F
Another thing, though this is unlikely to be specified in the IETF, is that there's a way of triggering all this using the fetch JavaScript API, instead of just HTTP authentication. The idea here is that there are various websites that won't want to run a separate request, or have to embed a separate iframe on the page, in order to get tokens, or that want to do it in the middle of other activity happening on the page.
F
I think we hope to also add HTTP authentication to Private State Tokens, so it would be strictly a superset of what's currently provided in Privacy Pass and what Private Access Tokens does. And, as I mentioned before, there are the redemption records, which help solve some of the latency issues where you have a bunch of requests from the same top-level origin.
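The redemption-record caching just described, sharded by top-level origin so the cached record cannot become a cross-site tracking vector, could be sketched like this. The structure and names are hypothetical, not the Private State Tokens design itself.

```python
from typing import Optional

class RedemptionRecordCache:
    """Sketch of a locally cached redemption record, keyed by the
    (top-level origin, issuer) pair."""
    def __init__(self):
        self._records = {}

    def store(self, top_level_origin: str, issuer: str, record: bytes) -> None:
        # The top-level origin is part of the key: a record stored while on
        # one site is never visible to requests made from another site.
        self._records[(top_level_origin, issuer)] = record

    def lookup(self, top_level_origin: str, issuer: str) -> Optional[bytes]:
        # A cache hit lets repeated requests on the same page reuse the
        # redemption instead of paying its latency again.
        return self._records.get((top_level_origin, issuer))
```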
F
So, moving on to slightly less fleshed-out solutions: a common thing that's come up in the W3C anti-fraud group has been the idea of having some sort of device attestation, similar to Private Access Tokens, but some way to attest to facts about the device or to attest to facts about the client.
F
But this needs some sort of mechanism, some sort of anonymous-credential-style mechanism, to do this without leaking information about the particular device ID. Some variants of Privacy Pass might be useful here, either building up on Private Access Tokens, Private State Tokens, or some other variant, if there are other types or other parts of that attestation work that either need slightly different features or don't need a lot of the extra complexity here.
F
This is basically clients sending reports, but there needs to be some way to authenticate this, to avoid attacks from fake devices or devices that aren't authenticated in some way. You don't need a real user identifier attached to the report; you just need "someone, somewhere, was fine with this," and attaching something like Privacy Pass might be useful there.
F
There are a few complications here, where you want to bind tokens to particular origins, or include some amount of public metadata beyond just your choice of keys, so we'd need some of the more complicated public metadata forms of Privacy Pass for that case. But yeah, the Privacy CG has been visiting other sorts of privacy-preserving technology, so there might be some work there. And then at TPAC, the WebAuthn and Web Payments folks had a joint session with the anti-fraud group, and there was some vague interest in various blind signature methods and anonymous credential methods there, to attest to something about, like: you've worked with some payment instrument providers, some credit card providers, a bank, and you want to attest to some facts about that. Privacy Pass or other variants of this might be useful in that space.
I
Yeah, Martin Thomson. I just want to avoid creating a mistaken impression about some of these things. I think this might be misrepresenting a little bit what PATCG is actively working on. There are proposals in our community group to adopt... I think the aggregate reporting API is just one of the candidates that's being discussed. There are some designs that don't require, at least don't require to the same extent, those sorts of capabilities, as a result of their design.
F
To be clear, with the device attestation and the ones here, these are places where Privacy Pass-like technology has come up. It's not necessarily a hard requirement, or even likely to happen for these, but these are places where we might want to look for future use cases or other extensions that might be useful in the space.
F
So yeah, I think for a lot of these, it's still in the early stages, so there are no strong expectations. I think, depending on how the standardization process goes and the input we get in the various CGs, for some of these things, particularly the key consistency work, it might be useful to have some baseline sorts of things in the IETF.
A
Okay, Ben.
E
Hey, so it sounds like the work in the W3C is really pretty nascent; none of these proposals have actually moved into a working group. So what are the odds that, by the time...
E
...by the time any of this work actually is moved into a working group, it'll turn out that the specifications that we've written now are actually not what's needed, and that, for example, what the working group wants is PMB tokens or some other arrangement with properties a little bit different from what we're specifying?
F
I think that's one of the benefits of the way the core protocol models are structured: if we need slightly different properties, we can come up with new token types and work on standardizing those. I think the general issuance and redemption format of the core protocols is pretty in line with what we've seen, and I think it's mostly just going to be a question of additional token types to expose whatever features we need in the space.
B
Yeah, just to answer a little bit also to Ben's question there, I think...
B
People are able to deploy it and use it independently. The W3C is not our only customer directly, right; people can benefit from it independently. I think there's also some very minimal work that can be done, like we're talking about...
B
Can you mark on an iframe specifically which iframes or third parties are allowed to issue token challenges or not, and what the browsers will accept? That's something that you could do very surgically and minimally within W3C work, and that would integrate well with what we're defining here in Privacy Pass. And then there are these other use cases, which may be able to use Privacy Pass as we're defining it, or may need a new token type, but I don't think that's invalidating the basic types.
B
I think the basic types are still a good foundation that will be used regardless of how complex the adoption in the W3C is on top of them.
A
Okay, any other comments or questions?
C
Yeah, is there documentation or a specific proposal on the Private Access Tokens?
C
I personally wasn't aware that it was using the rate-limiting version of this, and I'm sure that's my fault. Just, if there's somewhere I could be reading that, or if that is likely to go through standardization at the W3C, that would be helpful. Thanks.
B
All that is, is just the basic publicly verifiable Privacy Pass token, and saying that we support it. We also intend to support the type 3, and other attesters and platforms can support those as well. So I think the parts on top that this is referring to are: do we have changes to Fetch or Permissions Policy?
B
Those really probably wouldn't talk about Private Access Tokens; they would just talk about Privacy Pass, or the PrivateToken auth scheme type, essentially saying what resources are allowed to include the PrivateToken auth scheme challenge. That's all they would amount to; it's just kind of giving boundaries on who you accept these from.
B
In production for us, with different issuers, we support kind of a beta testing with the rate-limited types, but I don't think the type actually directly impacts this.
B
I view this more as just a way to enable some communication about Privacy Pass in general at the Permissions Policy layer.
A
All right. Next, I think we want to have kind of a chat on where Privacy Pass is and what sort of next steps might be.
E
Hi everybody. So we've got plenty of time left in the session, and that sometimes means that we're running out of things to do; I don't know that that's true in this case, but I want to ask the working group where they think this group should be going. So first, just to recap where we are right now: we have the architecture document that has completed last call; we're holding that just so that the architecture, issuance, and auth documents can be considered together in the post-working-group reviews.
E
We have the recently adopted key consistency informational draft; it seems like we don't have any major changes in flight there. We have the rate limits document, which has a lot of attention on it, and Tommy walked us through some of the interesting things that are happening there, but it's making good progress. And I think the question is: what else do we want to work on?
E
You know, is this... are we essentially wrapping up here, or are there other things that we want to do?
F
E
So that's something that people might want to work on. There's... I have a draft on a specific consistency protocol, and there's an endless number of possible topics, but I want to know what the working group thinks is interesting. And also, just to remind people: we have a charter, and so if you want to talk about the progress we've made, we can compare it against the charter.
E
I
Yeah, I think on the consistency side of things, it seems to me like there's a very strong need to have something in this space. Concretely, in addition to having sort of just a general guide of what consistency is, what it's good for, and why it's important, I think there's a concrete need in some of these deployments to have a strong consistency protocol or system in place, in order to provide the sort of privacy guarantees that we're hoping to achieve; some of the proposals that I've seen really just don't work without something like that. There has been some more discussion this week about the exact shape of consistency protocols. I think Richard Barnes had some interesting ideas about applying something that's basically... how did he phrase it... something like CT, but without all of the warts.
I
So having that work happen here would be, I think, a good thing; this is, for better or worse, the group that has taken on that sort of work. I think it's integral to what we're doing here, but at the same time, I also think that that discussion is fairly nascent, and I don't know that we're really in a position to make some strong decisions yet about the exact nature of those protocols, despite the fact that people might want to deploy things that depend on them.
E
Thanks. I see Stephen in the queue. In addition to whatever you're planning on saying, I'd appreciate it if you could share with us your sense of consistency in the W3C context; you mentioned that there is already, in a sense, a sort of proposed or draft consistency solution there.
F
I think Chris's draft is a good start, but I think we want something more structured coming out of that, either as part of that draft or as additional drafts. And then, to my point, I was just going to mention that, as an additional token type, I think there might be some potentially interesting stuff coming out of the ZKP solution to issue six from Tommy's proposal, and it might be worth having folks talk a little bit more about that and see.
E
B
I'm not the best person to talk about it; Chris Wood was driving more of it. I know Nikita's here, but I think for the basic types, as well as the rate-limited types, they've had formal verification.
G
Yeah, I just wanted to... maybe I could say a bit more: there is a formal verification of the current protocol using ProVerif, and, other than the unlinkability issue that it shows, which we discussed, it shows the security properties. I think there will be a write-up of it in the next month or two that could be made public.
J
On the topic of future work and potential concerns: right now, when an attestation mechanism is compromised, any abuse that arises from that compromise will be felt by the relying parties, but the attester has no way of narrowing in on which devices are compromised and should be ignored or debugged, and obviously that's in tension with the blinding property. It's also unclear how issuers can choose attesters, and how relying parties assess the relative strength of these attesters and issuers; some of that might come to bear in application over time.
E
Thanks,
you
know
one
thing
that
that
this
reminds
me
of
is
thanks:
Steven,
for
the
overview
of
the
w3c
side.
I
think
it
would
be
potentially
helpful
if
we
had
a
draft
here
that
laid
out
in
a
sense
one
level
above
architecture,
not
just
how
the
Privacy
pass
system
works,
but
how
how
it's
actually
used
or
how
some
different
ways
that
we
expect
it
to
be
used
that
allow
us
to
think
through.
Some
of
the
details.
Like
you
know,
Canna
can
one
origin
drain.
E
Your
entire
collection
of
of
privacy
pass
tokens
or
you
know.
Are
we
expecting
that?
There's
some
upper
layer,
defense
against
that
and
also
walking
through
some
of
the
difficulties
that,
in
those
models,
I,
would
love
to
see
an
informational
on
something
like
that.
If,
if
anybody's
interested.
E
H
Yeah, so since Martin mentioned my name, I thought I might recap some of the discussion that we had in OHAI for folks who weren't there. OHAI obviously has a similar key consistency challenge to Privacy Pass's, so there was some discussion about whether to do consistency work there as well, and presented in that discussion was his proposal. It came up that there's actually a slightly more general consistency property that one might like for HTTP resources.
H
You
know
things
like
some
applications
might
be
interested
in
binary
transparency
like
guarantees
that
you're
getting
the
same.
You
know
web
resources
that
other
folks
are
getting.
So
it's
possible.
This
consistency
property
might
be
a
little
bit
more
General.
The
the
conclusion
that
discussion
in
Ohio
was
that
you
know
if
consistency
work
is
going
to
be
done.
You
know
privacy
pass.
Is
you
know
a
an
instance
where
there's
clearly
a
need
for
this?
H
It's
probably
the
most
acute
need
for
these
consistency
guarantees,
and
so
this
would
be
a
a
good
place
to
work
on
it,
but
it
may
be
their
Solutions
out
there.
A
little
bit
are
a
little
more
General,
so
I
don't
know
if
we
dive
directly
into
the
work
here,
or
maybe
we
run
some
stuff
through
sick
dispatch,
but
yeah
I
think
it's
a
doing
doing
a
little
bit
more
consistency.
Stuff
here
is
a
sensible
idea.
E
Okay, I hope that you've been inspired to contribute more interesting work to the working group. If you're working on formal analysis, it would be great to maybe see the results of that at the next session. And I'll turn it over to Joe for closing.
A
Yeah, I think that comes to the end of our regularly scheduled agenda. If there's anybody who has any additional topics, we can bring those up now, and if not, we can adjourn till the next meeting.