From YouTube: IETF110-PRIVACYPASS-20210312-1600
Description
PRIVACYPASS meeting session at IETF110
2021/03/12 1600
https://datatracker.ietf.org/meeting/110/proceedings/
A: Hello folks, I think it's about time to start the meeting. Looks like they've saved the best for last. Welcome to the Privacy Pass meeting. I'm just going to go quickly through the Note Well, which, probably by Friday, you've all seen already, so we'll spend just a minute on that.
B: I have a quick comment, Joe. So Sabode and I and some other people on the drafts were talking about potentially swapping out the last presentation with one that's focused on API implications and metadata. I sent in slides for that very recently. I was wondering if folks would be okay with that.
A: Yeah, I have your slides, and I think that's a good thing. You're at the end, so hopefully we can give you enough time to go through yours.
C: Nice — yes. So I'm Alex. I'm one of the authors of the protocol and architecture drafts, and today I'm going to be talking about a specific aspect of the architecture draft: issuer cardinality, and how that relates to client unlinkability in Privacy Pass ecosystems. So next slide, please — just as a general overview.
C: If you're not familiar with the document: it provides an architectural framework for analyzing how we build privacy-preserving ecosystems around the Privacy Pass protocol. And I say that because the unlinkability requirement — that client issuances and client redemptions cannot be linked — is determined by the ecosystem in which the protocol exists, rather than just being a cryptographic property of the protocol itself.
C: So the architecture document is similar to the architecture document used in the MLS working group, and is based around formalizing these ecosystems. We're currently on the second iteration of the architecture document in this working group, and there have been minor changes in the last iteration around clarifying token metadata considerations, which, as Chris mentioned, is going to be discussed later. Next slide, please.
C: So in this particular talk I just want to raise a particular issue which is still open in the draft, relating to the number of issuers that are allowed in a Privacy Pass ecosystem — the issue of issuer cardinality.
C: So the unlinkability of a client essentially rests upon the redemptions that it participates in, because a redemption, if it succeeds, reveals a valid token that the client holds. The number of issuers in the ecosystem transitively leads to a number of different tokens, and all kinds of different keys in the ecosystem as well. So when a client redeems tokens from multiple issuers, they build a profile with which they can be linked, and that client identity is then linked to the anonymity set of clients that possess all the tokens that that client has redeemed. And not only that, but there are mechanisms in the Privacy Pass protocol and architecture for clients to redeem tokens via third parties, whereby either the redemption is proxied through a third party to the eventual issuer that verifies the redemption, or via publicly verifiable cryptography.
C: So in this scenario, what we're imagining is that there are adversarially controlled issuers, and they're trying to learn the identities of clients. So next slide, please. In the issuance phase, clients are issued tokens — the clients are these blobs on the left-hand side — and these issuers, I1 and I2, issue tokens to some clients.
C: The clients then redeem their tokens, and this is the crucial stage where unlinkability comes in, because when a client redeems a token with an issuer, it reveals to that issuer whether it has issued a token before. So in this case, client C and client B have redeemed tokens with I1 and I2 respectively. Next slide, please.
C: So when the adversary then comes to try and make a guess — when it's trying to link together the issuance and redemption sessions — it can make inferences. So with client C:
C: It knows that client C was participating in either the issuance session related to client A or the one related to client C; but in B's case there was only one issuance session. So this is a very synthetic example, but it illustrates how this unlinkability property is related to redemptions, and how, if you can partition clients in such a way, there's a lot of scope for de-anonymizing clients.
C: So next slide. In the worst case, each issuer in the system reduces the privacy of the client by a single bit, so this results in an exponential decrease in the privacy of a client. Next slide.
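To make that worst case concrete: if each issuer a client redeems against leaks one bit, the client's anonymity set halves per issuer. A minimal numeric sketch of that halving — the client count U here is a hypothetical example value, not a figure from the draft:

```python
def anonymity_set(total_clients: int, issuers_seen: int) -> int:
    """Worst case described in the talk: one bit leaked per issuer a
    client has redeemed with, so the candidate set halves each time."""
    return max(1, total_clients // (2 ** issuers_seen))

U = 1_000_000  # hypothetical ecosystem size
for issuers in (0, 5, 10, 20):
    print(issuers, anonymity_set(U, issuers))
```

With 20 issuers the candidate set collapses to a single client, which is the exponential decrease the slide refers to.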
C: And what's crucial to remember is that this number of issuers, and this privacy decrease, is completely independent of any other privacy-reducing features — including token metadata, and also how key rotations are built in. All these impacts are cumulative in the privacy loss of the client, which can reduce client privacy quite massively, which is something that we're going to talk about on the next slide.
C: I shouldn't have done so many transitions. So, section 10 of the document talks about the parametrization of what a generic Privacy Pass ecosystem looks like. We have this equation, which determines the number of issuers you can allow. The logarithmic expression in the equation relates to the exponential relationship between the number of issuers and the decrease in client privacy.
C: So you have the number of users there, and then a minimum anonymity set size; then we also build in that you subtract the number of metadata bits which you allow in the protocol; and then you have this over-two denominator, which comes in because of allowing issuers to rotate keys and have two active redemption keys at one time.
C: So next slide, please. If we take a really massive ecosystem — say a billion clients — and an anonymity set size of 5000, which is what's currently in the document, kind of plucked out of thin air (so if we have some better guidance around what that minimum anonymity set size should be, that would be useful), we get a maximum number of issuers of 17. And this doesn't even allow for any metadata bits; the more metadata bits you add, the more you reduce this.
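The 17-issuer figure can be reproduced from the quantities named in the talk. This is only a sketch under one simple reading of the equation — a privacy budget of log2(U/A) bits, with metadata bits spending from the same budget; it omits the over-two key-rotation factor also mentioned, and the authoritative form is in section 10 of the architecture draft:

```python
import math

def max_issuers(users: int, min_anonymity_set: int, metadata_bits: int = 0) -> int:
    """Bits of linkable information the ecosystem can tolerate before
    clients' anonymity sets shrink below the minimum; each issuer and
    each metadata bit spends one bit of this budget."""
    budget = math.log2(users / min_anonymity_set)
    return int(budget) - metadata_bits

# The example from the talk: a billion clients, minimum set of 5000.
print(max_issuers(1_000_000_000, 5000))  # 17, as on the slide
```

Adding metadata bits subtracts directly from the issuer count, which matches the point made next: every metadata bit costs one issuer.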
C: You reduce it by one for every metadata bit. And obviously, if we take a smaller ecosystem — the Privacy Pass browser extension currently only has a number of users in the hundreds of thousands, I believe — then you're going to have a much smaller allowed number of issuers. So this is really going to impact the privacy of clients quite considerably, and it's also going to impact the practicality of the ecosystem by not allowing many issuers. So next slide, please.
C: So one of the things we started talking about in the document, but haven't really fleshed out a concrete solution to, is moving the unlinkability dependency from the number of issuers to a different property which can still ensure a practical ecosystem. Note that privacy bits are only lost when redemptions occur — that's important! So next slide.
C: The alternative option is to reduce the occurrences of these redemptions or — next slide — to limit the number of issuers that a client can redeem tokens for. In this scenario, a client would only hold tokens for, say, four different issuers, and so when a redemption is triggered it can only redeem tokens for those issuers. Then, when the client receives tokens from a new issuer, it would drop old tokens off the stack and, in doing so, keep that small set of tokens.
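The drop-oldest token store described here might look roughly like the following. A sketch only — the cap of four issuers and the eviction policy are illustrative choices that the draft leaves open, not specified behaviour:

```python
from collections import OrderedDict

class BoundedTokenStore:
    """Client-side store holding tokens for at most `cap` issuers;
    tokens from a new issuer evict the oldest issuer's tokens."""

    def __init__(self, cap: int = 4):
        self.cap = cap
        self._tokens = OrderedDict()  # issuer -> list of tokens

    def receive(self, issuer: str, token: str) -> None:
        if issuer not in self._tokens and len(self._tokens) >= self.cap:
            self._tokens.popitem(last=False)  # drop the oldest issuer
        self._tokens.setdefault(issuer, []).append(token)

    def can_redeem(self, issuer: str) -> bool:
        # Redemption is only possible against issuers still in the set.
        return bool(self._tokens.get(issuer))

store = BoundedTokenStore(cap=4)
for i in ("i1", "i2", "i3", "i4", "i5"):
    store.receive(i, f"token-from-{i}")
print(store.can_redeem("i1"), store.can_redeem("i5"))  # False True
```

Note that this naive policy exhibits exactly the flushing attack raised in the open questions later in the talk: an adversary can evict a client's useful tokens simply by issuing from fresh issuers.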
C: So then, next slide, please. The idea would be that a client at any one time can only redeem tokens for a profile which is quite limited in the number of issuers it refers to, but the wider ecosystem can have as many issuers as you want. The privacy requirement is moved to the set of tokens that a client holds — and the issuers that can be inferred from that set of tokens — rather than the total number of issuers in the ecosystem.
C: So this, I think, represents a much more practical way of building a Privacy Pass ecosystem. But, like I said, we don't have a concrete way of doing this. So next slide, please. I'd just like to finish up here with some potentially obvious open questions around how we would do this.
C: These are open questions that I have, and I haven't gone any further in trying to specify this, because I haven't come up with a good way of solving them in the discussions I've had with others. The key one is: how does a client manage which issuers it should interact with? That has been an ongoing discussion point for a few meetings now. In particular, for example: if a client has a bound on the number of tokens it holds for different issuers, what happens when the client receives tokens from a new issuer?
C: Does it just drop all its old tokens? Could this lead to attacks whereby adversaries could keep flooding or flushing the client's token storage just by sending tokens from new issuers? There are also questions around how you build these redemption contexts in which a client holds tokens for different issuers, exactly.
C: How would we do that? I think this is going to be covered in Steven's talk just after this, where he'll propose a solution at the HTTP layer. And then, finally, the question that comes then: are these questions all application-specific? Can they all be solved at the HTTP layer? And, if so, what architectural guidance in the architecture document is suitable for guiding implementers and constructors of Privacy Pass ecosystems in making sane decisions around this topic? And that's everything I had.
E: Hi, this is Ben Schwartz, as chair. We're only taking brief clarifying questions right now, so we ask that substantive questions and comments be held until the end of this.
F: So, following up on that talk, I'm going to talk about the idea of introducing these redemption contexts to Privacy Pass, possibly at the protocol/architecture level. Next slide. When talking about contexts, I mean places that have shared anonymity and privacy properties. In Privacy Pass, currently, there is this one global context where any redemption identifies that client with every other redemption: we treat it all as one massive context and do all the privacy math there as one massive thing in the HTTP world.
F: We sort of have this with site boundaries: top-level sites, which are first parties with each other, have their own context, and the limit on information shared across different top-level sites to some extent exists as a privacy boundary. These are mostly boundaries, but there are issues, such as cross-site information transfer and third-party cookies, that introduce complexities here — though there's work to try reducing those sorts of information transfers, to make these harder boundaries. On devices, you see these sorts of contexts in applications, which have their own information but don't share information between applications.
F: They also have some leakage here, through device identifiers and other fingerprints that allow attackers to see some sort of links between these contexts. Next slide. In the global-context case, the problem we have — which Alex mostly covered — is that all these issuers together can build up a massive fingerprint or ID. In the worst case, you assume that each issuer is one bit of your identifier, and then very quickly you build up an identifier that's enough to identify most users in your space. Next slide.
F: Introducing the concept of these contexts, you're able, for the same client, to split up how much identifying information is in a specific context. Using the same eight issuers here, the amount of information for each context — assuming one of these inner squares is something like a top-level site, in the web case — allows you to have all these issuers without leaking quite as much information. Next slide.
F: And the math here is very similar to the global-context case, except that instead of focusing on the number of issuers in the global ecosystem, you're only focusing on the issuers that have done some sort of interaction with the context — which can be limited in certain ways by keeping track of what issuers are present and appear within a context.
F: So, some of the requirements: to actually be able to use these sorts of contexts to subdivide our identity spaces, you need these strong privacy boundaries. If there's any way of leaking information from one context to another, you get the issue where, over time, that leakage is enough to leak all the issuer redemptions in one case to another; you really need this strong boundary, such that redemptions don't affect another context in any way.
F: This includes caching or reusing previous redemptions. And it is true that there are a lot of application-specific challenges, depending on the layer — fingerprinting and cross-site tracking — but I think, to some extent, that's hopefully being solved through these other application-layer things; and while we should take into account that these aren't perfectly leak-proof contexts, I don't know if Privacy Pass is the right layer to try solving or enforcing solutions in that space. Next slide.
F: So, to actually add the concept of the context into Privacy Pass, there are a lot of architectural things that it would be useful to talk about — how the privacy math looks, and the sorts of recommendations based on the size of your context and what your privacy goals are — but there are also some protocol changes that would make it easier to maintain contexts.
F: In this example, there's the addition of a new layer above redemption that keeps track of what sorts of issuers are used in particular contexts — using, say, hard-coded limits that we suggest in the architecture draft to limit how many different issuers get used — and informing people who are trying to develop here that, whether or not you have a token, you still need to consider any sort of interaction with an issuer to count against this context.
F: Yeah — so if you base your context only on when you successfully redeem a token, you enable sorts of attacks where you attempt to redeem against a ton of different issuers; and if a user was given a single token from one of those issuers, then the issuer that finally is successfully redeemed against leaks more information than just the
F: true/false of that one issuer, since you also have the fact that that client wasn't given tokens from any other issuer. This unfortunately adds some complexity to the sorts of privacy calculations we need to do, depending on how we implement the counting of issuers in a particular context. The straightforward thing is that the first issuer you think about and talk to, you pin: you either got a token or you didn't get a token, but now you can no longer try using another issuer
F: in that context. You can increase this number to various amounts to get two to the r, where r is the number of different attempts you're willing to do — and the more attempts to issue, or attempts to redeem, that you have, the lower the privacy in this particular context. There are alternative ways of counting against your limits, such as just
F: counting the number of individual redemptions themselves, rather than caring about the particular issuers that are used — and with this you have slightly different math.
F: I think there are even more complexities when you add the number of active keys that a particular issuer can use, and the number of metadata bits; and I think a lot of that complexity probably needs to be written down somewhere in the Privacy Pass protocol, rather than allowing each application to try to rediscover its own privacy math. In the meantime, you don't want a redemption within a particular context to be stuck forever.
F: This is a bad footgun: an issuer that you decided on, that you don't really want, sticks around forever, and it doesn't let you switch to other, maybe more useful, issuers in the future. How we allow a specific context to rotate the issuers that it uses is, I think, an open question. There's "never", which is this bad footgun of getting stuck with issuers; there's tying it to key rotation — since there are already attacks where an issuer rotates their key and each new epoch represents different data from a previous epoch — and there's tying it to something else.
F: I think, if we leave it to the application, this does mean that there are more footguns in terms of calculating privacy for each application that has its own definition of a context; and it also means that Privacy Pass would be recommending a very small number of issuers, which might not actually be a requirement the protocol needs if you have these sorts of subdivisions into contexts. There's also how much the architecture document should go into these various sorts of privacy math, and how detailed we want to go there — there's an open issue there.
F: There are discussions about how much should be specified about what a context should provide and the sorts of unlinkability guarantees that we need to require; how to deal with the fact that, in the real world, we don't really have these perfectly separate contexts in most cases; and how we deal with this issuer stickiness issue, or pinning problem — even if we ignore redemption contexts, it does become a wider issue that needs to be resolved somewhere.
G: Thanks. So this is a draft on the centralization problem. Next slide, please. Where this came from: the charter has a milestone on centralization, and here are the words that are in the charter. I think this actually comes from when the working group was established — there was a lot of discussion about the issue of centralization. And if you add to that what's happened this week, there are a lot of independent conversations going on, for instance in the plenary and IAB Open.
G: So it's a good time to talk about centralization, and since it's in the charter, it's a meaningful thing to do. I will say that this draft doesn't take into consideration the idea of redemption contexts — I'll come back to that at the end, just to say a few words about it — but I think, in the 30 minutes that we have to talk about centralization issues, this would be a good thing to talk about. Next slide, please.
G: So the goal of the draft is really to address the centralization issues surrounding Privacy Pass. It's sort of a supporting piece of documentation for the architecture and protocol drafts.
G: The idea is to identify potential privacy concerns, then try to write a crisp problem statement for Privacy Pass in regards to centralization, and then talk about potential mitigations. I'll say again that, of the potential mitigations, redemption contexts are not in the current draft, although when we come to the end I'll talk about being happy to add them. Next slide.
G: Alex did a great job in the first presentation of laying out what the fundamental problem is, and rather than repeat everything he said — which is what I would do if I spoke from these slides — I'll simply say that one of the fundamental problems in the architecture draft is that we have to really limit the number of issuers.
G: That's a key recommendation of the architecture draft, and it talks about why. It's actually not just an arbitrary limitation: section 10 of the architecture draft talks very specifically about why that limitation is there. I've given on this slide the example of losing one bit: what happens is that verifiers learn a bit of information for every redemption, and so, over time, the anonymity of the client is at risk.
G: Because of that, that's one of the reasons why section 10 of the architecture draft talks about limiting the number of servers in the ecosystem. Again, I will say one more time that this is without the concept of redemption contexts — this is the original concept of what the ecosystem would provide. Next slide, please.
G: So that's a fundamental problem from a centralization point of view: if we limit very drastically the number of servers that can provide tokens, then we have a problem where control of issuance is really centralized in a very, very small number of organizations — those that would run the issuing servers. And so the natural thing is to wonder whether or not there's an alternative. Steve provided the example of a redemption context, which is one of the alternatives in the architecture draft.
G: There's another alternative, and that is actually limiting the number of tokens held at the client. But it seems, on analysis — both in section 10 of the architecture draft and also in the consolidation draft here — that that's not a very practical alternative; it's not an alternative that would be easy to make real. And the reason for that is here in the last bullet: it implies having significant control over the client.
G: So it's actually much more difficult when you consider the number of clients than it is in terms of restricting the number of servers. Next slide, please. The problem statement in the consolidation draft, then, actually takes a look at what the architecture document provides as guidance. The basic problem statement is that, if there's an upper bound on the number of Privacy Pass servers available in the ecosystem, then there's a very practical problem for deployment of the protocol, because you have a very small number of issuing servers.
G: So you have control located in a very small number of servers, and then we'd call into question who was running those servers, and all the usual questions we have about centralization.
G: So having a very small upper limit on the number of servers brings into question traditional centralization problems — associated here not with economics or size of network; it's simply part of the protocol that enforces this problem. And so in the consolidation draft there's an attempt at a reasonably crisp problem statement. Next slide, please.
G: Now, towards the end of the consolidation draft, there's a discussion of what the architectural problems are for Privacy Pass, what the engineering problems are, and what the practical deployment problems are. One of the things that's good about the architecture draft is that its authors are very upfront about these; it's very clear about so much of this stuff. The problem, in the end, is that the mitigations in the architecture document don't seem to be mitigations at all.
G: If we limit the ecosystem to a very, very small number of servers, we have consolidated control of issuing into a very small group of operators of those servers; and constraining the clients as an alternative doesn't seem to be a solution either.
G: So what's needed is a mitigation that's consistent with the goals of the underlying protocol — the privacy goals of the underlying protocol — but also addresses the issue of centralization. Next slide, please. This is, of course, a -00 draft — it's the first time it's come to the working group — so I'm very interested in comments; I'm looking forward to that.
H: Good morning. I have been listening to this presentation and I have a question for the group. I mean, this problem of anonymity and server knowledge and all that looks an awful lot like the oblivious DNS problem, and the way that the oblivious DNS architecture solves the problem is by having several layers of servers — I could describe that as a "turtles" architecture.
E: Christopher Wood — Chris Wood — would you like to respond?
B: I guess a clarifying question: they seem to be very different problems — Privacy Pass versus oblivious-foo, OHTTP, ODoH — so I think I'm just struggling to see the connection. Privacy Pass, at a very high level, is effectively anonymous authorization, whereas ODoH, OHTTP and whatnot are not about authorization at all; they're just about hiding the identity of the client. And I guess — yeah, can you clarify?
I: Yeah — I didn't particularly want to reply to that in any way, but I think this could use a more formal definition of what the problem is, in a more formalized way. But I think the problem, if I understand it, comes from the fact that the authorization context might be bound to certain attributes that you might know about, and then you can join the attributes from multiple sources.
I: You actually have those attributes that, essentially, the credential or the anonymous token was issued to you for. So, yeah.
I: I feel like I would love to see more formalism of this problem — how this is affected — and maybe some ideas from differential privacy might also be helpful here. Any time a hard cap on the number of issuers or something comes in, then it ends up being — I guess, really, the failure modes are very unknown, or something like that.
I: It would be awesome to be able to say that we don't need to have a hard cap, but that we have, maybe, an alternative way to think about it: we have some differential privacy guarantee. Instead of thinking about just the number of bits of privacy that we're losing here, maybe we can formalize it in terms of differential privacy as well. Sorry — yeah, I'm done.
E: So — Ben Schwartz, hat off. First, on Christian's question, in case there's some confusion here: I just want to emphasize that the issuer, whatever it knows about the identity of the client, cannot link that information to the redemption.
E: Even when the redemption comes back to the same issuer later — that's the crucial property of blinding that we rely on. So that step is safe, and the concern is just whether the number of different issuers is sufficient to mark a client somehow. I wanted to focus on the value k that Stephen mentioned in his presentation. In the applications for Privacy Pass that I'm aware of, the goal is something roughly like: can we prove that this user is not a bot?
E: So, for example, imagine setting k equal to one, which in other words means saying that the redemption context — like a website — can only get one positive token and then it's cut off: it can't do any more redemption checks after the first successful redemption.
E: In that case, it seems to me that this math would tell us that, instead of 10 issuers, we could have a million issuers. That seems like it would go a long way to addressing the centralization concerns. k equals two is essentially saying that, as a site, you get to double-check: you can check each user against two different issuers, and you can try all the issuers you can think of until you find two that they support.
J: Thank you. I was just waiting to see if the authors wanted to come back on that question, but I'm happy to ask. Yes — so I'm really glad to see this topic being addressed. I think it's been an issue for Privacy Pass ever since it was first suggested it should be created, so I'm really glad to see it being treated and given the discussion time it deserves. Thanks to everyone who's presented already today. I have a few questions about the anonymity set size.
J: I think this is such a complex problem, and such a fundamental issue in Privacy Pass — that it may not actually give you that much privacy once we account for all of these issues — that it's a substantial piece of work. To me, it sounds like there's no simple solution; we don't have a good way to solve it right now.
J: And just that question about anonymity set size: as we vary it, what is acceptable? Because 5000 — I know it was acknowledged that it's pretty arbitrary — but that, I think, is in itself a whole discussion. So yeah, thanks.
E: Thank you, Kirsty. On Kirsty's point, I'll briefly note as chair that we are chartered to describe the risk and possible ramifications of issuer centralization and explore possible mechanisms to mitigate these risks — which is significant, but perhaps not quite the same as "must solve it now".
K: All right — Watson Ladd, Cloudflare. This is very similar to Ben's point: we're talking about clients and issuers, but the redemption process, at least in the applications that I'm most familiar with, is carried out by a website, and that website has an incentive to use very few issuers, because if it's a bot-or-not decision, it's the weakest link that gets you access.
K: So if it's going to accept the token, it's going to most likely accept it from one place — one token — and then maybe it switches provider. So that makes you think that very small values of k are good; and then, with Steven's compartmentalization idea, we're sort of done. There are going to be very few queries; you're never going to see the set of tokens a client has — you're only going to see whether it has one, or maybe two.
F: Yeah, so I agree a lot with what Watson said. I guess one of the problems with setting k equals one is that now we've moved the centralization question from the global ecosystem to this specific-context ecosystem: all sites will probably just choose the same top set of issuers. So you want to have some flexibility, because a site will want to get the best signal it can, and things on that site will want their specific choice of issuer.
F: So I don't think every issuer can be swapped out for any other issuer, and I think we still have a sort of centralization problem here that we need to think about. I think k greater than one is probably required, unless we really are okay with this permanent stickiness, or think that sites are going to have sufficiently different thoughts about what they want to try first.
B: Actually, let Richard go — he was skipped.
L: Yeah, thanks, Chris. I think I proactively dequeued myself before I took the mic, so I will dequeue myself now. Yeah — this seems like a hard problem, and my usual reaction to hard problems is to try not to work on them, so I'm wondering if we can kind of do that here.
L: It seems like we're taking on a huge amount of complexity here by having this very flexible, very open interoperation between issuers and redemption contexts, whereas I think the majority of the cases people have in mind have a fairly tight coupling between redemption contexts and issuers, as I think Watson elaborated and put quite well. So it seems like most of the practical cases have that property, and I also think that, even if you have that property, you don't really have a centralization problem.
L
You know, cookies, for example, are specific to a single site. The redemption context for a cookie is tied very closely with its issuance context, and yet it's a decentralized mechanism. So I am wary of centralization concerns in the abstract here. I think we can have a flexible, decentralized system that still has a fairly tight binding between redemption context and issuance context, and is able to avoid a bunch of this complexity by not having the openness and flexibility at that level.
B
I just wanted to follow up on the complexity point. It's not clear to me what protocol-level changes are necessary to impose the sort of constraint that Richard suggests, because right now it seems like you can instantiate the system in any such way: issuers are different from redeemers, or they are the same, and the core Privacy Pass protocol specification is basically unaffected. So the question is:
B
Do we want to change that, such that the protocol constrains this decoupling of issuing and redemption, and if so, how? But if we don't want to change the core protocol in any particular way, then this just seems like a higher-level concern that we should certainly talk about and document, and maybe only focus on deployments that have matching issuers and redeemers. But I'm just not sure what concrete,
B
tangible thing we could do here to either accept Richard's proposal or not, if that makes sense.
D
Richard's proposal here sounds kind of tempting, but then I think about the point that Watson made, and I'm reminded now that we have these cases where, for instance, you have the bot detection thing that's run by a third party. So the check was performed on some other site, and the token was issued on some other site, maybe through a third-party iframe or something along those lines, and it is still that one single entity consuming and receiving that thing. So the framing seems a little bit wrong.
D
So the third-party iframe on the redemption site is still doing the redemption, with the relying party becoming the issuer. The thing that I'm concerned with there is that you potentially have lock-in when it comes to that, and as far as I understand, the only concern that comes from that is the increase in information transfer between those two contexts. And if the number is two, I don't see that being significantly worse than having one.
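The information-transfer framing above can be made rough and quantitative. The sketch below (illustrative only; the candidate-issuer count and the interpretation as a log-of-choices upper bound are my assumptions, not figures from the discussion) bounds the linkable bits a site's issuer choice can convey as the log of the number of distinguishable issuer sets, showing why going from one issuer to two roughly doubles, rather than explodes, the bound.

```python
import math

def issuer_choice_bits(total_issuers: int, chosen: int) -> float:
    """Upper bound on the bits of linkable information revealed by a
    site's choice of `chosen` issuers out of `total_issuers` candidates:
    log2 of the number of distinguishable issuer sets."""
    return math.log2(math.comb(total_issuers, chosen))

# Illustrative numbers only: with 100 candidate issuers, letting a site
# pick 2 instead of 1 raises the bound from ~6.6 to ~12.3 bits.
one_bound = issuer_choice_bits(100, 1)
two_bound = issuer_choice_bits(100, 2)
```

This is a coarse upper bound; real deployments would see far fewer usable bits, since most sites cluster on the same popular issuers.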
M
I'm sure, yeah, I'm happy to do so; we've been chatting a little bit in the chat. One thing I just wanted to point out with the issuer cardinality question is that there might be some nice things that could be leveraged from multi-party
M
signature schemes, noting that the issue of issuer cardinality essentially comes down to signatures being identifiable by the public key that issues them. So I wrote up some notes in the chat, but essentially there are some nice properties of threshold schemes that I think might make this problem easier.
M
It does transfer some overhead to the issuers, because essentially they then have to do key management together, but it might make the ecosystem management question easier and also more constrained as well. So yeah, I put some notes there; I can follow up with ideas for the authors as well.
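To make the threshold-scheme idea concrete: the building block underneath most multi-party signature schemes is a t-of-n secret sharing of the signing key, so that no single issuer holds the key and any t of them can cooperate. The toy below sketches Shamir secret sharing over a prime field; it is purely illustrative (a real threshold issuance design needs a distributed key generation and a threshold signing protocol, neither of which is shown).

```python
import random

# Toy Shamir secret sharing: the t-of-n primitive that threshold
# signature schemes build on. Demo only, not a signing protocol.
P = 2**127 - 1  # a Mersenne prime; fine as a demo field modulus

def split(secret: int, n: int, t: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(secret=123456789, n=5, t=3)
assert reconstruct(shares[:3]) == 123456789  # any 3 of 5 suffice
```

The appeal for issuer cardinality is that clients would see one joint public key regardless of how many issuers stand behind it, at the cost of the shared key management the speaker mentions.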
J
Hi, yeah. So, while we've got a bit of time: I looked at the charter for the group, and one of the things listed as a deliverable is a risk assessment of centralization in Privacy Pass deployments for multiple design options. So I didn't know if now might be a good point just to put in folks' heads what that risk assessment would include. We've spoken about what anonymity set size is reasonable, and about the number of issuers. Well, maybe this isn't the place for it.
O
Yeah, that's me; I always just wait. Okay, thank you. So, thank you for these. My name is Sofía Celi, I'm from Cloudflare, and today I'll be talking about a little bit of feedback from the use cases for Privacy Pass.
O
So next slide, please. Okay. Just to give a little bit of context for why this presentation exists: we held a meeting, named the anonymous credentials meeting, co-located with Real World Crypto 2021 in January. The reason we decided to hold this meeting was to actually hear the feedback from the different use cases that have used Privacy Pass, or from other use cases that are not using Privacy Pass specifically as it's written but are either derived from it, or are a different protocol based on anonymous credentials. We wanted to hear what specific drawbacks or feedback those cases have on the Privacy Pass documents, the three drafts that exist, and what was missing and could potentially be looked into.
O
The only problem was that there were a lot of questions for most of the presentations that were given, but there was not much space to actually have a chat around those questions, so that could be feedback for myself for a new meeting. There were presentations from the Tor Project, from Google, from Facebook, from hCaptcha, from Brave, and also from the working group that we're in right now. Next slide, please.
O
One of the points was obviously the problem of public and private metadata and how to actually add it into an implementation of Privacy Pass. There are already a lot of proposals around this matter, about adding public or private metadata, so I will not go into details around that. Indeed, in the architecture document there are some mentions of how to add metadata, but sometimes not as specifically as implementations would actually want.
O
Something that was mentioned is that, in general, there was a lot of talk about what kinds of attacks can be seen in practice against Privacy Pass. One of them was the hoarding attack: an attack in which an attacker collects tokens in small doses across a large number of clients, funnels them to a single spender, and then spends them all at once.
O
There were some presentations that also talked about the accessibility concern, and whether there's a place for that in the draft. Right now there's only one mention of accessibility itself, in the architecture draft; I think it is specifically in the sections talking about implementations themselves, around, for example, what happens if an implementation shows a challenge at the beginning, but this challenge is sometimes down, or is really difficult for accessibility purposes.
O
Then, of course, accessibility for the client degrades. So there was a question about whether there could be a section in the draft addressing this kind of accessibility, or talking in general about these practical considerations. There was also a lot of talk about the double-spend storage mechanism. The double-spend storage mechanism is basically a database in which you store all of the tokens that have been spent.
O
So you can later check against this storage to see if a token has indeed been spent in the past, in which case it cannot be spent anymore; this sort of tries to solve the problem of double spending. In practice, it seems that it's really difficult to implement this storage mechanism in a safe way; at least that was the feedback. It's specifically difficult to implement in the case of a distributed architecture, and there are also some different attacks that can be mounted against this double-spend storage mechanism.
O
So, for example, what if the double-spend storage mechanism is down, or if you can mount a denial-of-service attack against it? What are the practical implications for security and privacy if this indeed happens?
O
Another problem that was brought into this meeting is that you usually have to update this double-spend storage all the time. What happens in between, when the storage is not yet updated but a request to update it is coming in? Obviously, during that window of the updating process, there could be an attacker double-spending, so that should be taken into account.
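The race described above comes down to the check and the write not being one atomic step. A minimal sketch of the double-spend storage idea (an in-memory toy; a real deployment needs durable, replicated storage, which is exactly where the distributed-architecture difficulty comes in):

```python
import threading

class DoubleSpendStore:
    """Toy double-spend store: an atomic check-and-insert over spent
    token identifiers. The lock makes 'already spent?' and 'mark as
    spent' a single step, closing the update-window race discussed
    above. In-memory only; not durable or distributed."""

    def __init__(self):
        self._spent = set()
        self._lock = threading.Lock()

    def try_spend(self, token_id: bytes) -> bool:
        # Checking first and inserting later, as two separate steps,
        # would let a token be spent twice in between.
        with self._lock:
            if token_id in self._spent:
                return False
            self._spent.add(token_id)
            return True

store = DoubleSpendStore()
assert store.try_spend(b"token-1") is True
assert store.try_spend(b"token-1") is False  # replay is rejected
```

Across multiple redemption servers the same atomicity has to come from the shared database (for example a conditional insert), which is much harder to do without a single point of failure.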
O
Last week we were speaking about this with Alex, and one of the things we decided is to try to put forward not a whole new draft directly, but a section in the document in which we address this double-spend storage mechanism more directly: the problems and attacks that you can have against this storage itself, and what kinds of attacks against the properties Privacy Pass has could degrade them if this is not implemented in a good way.
O
Yes, another thing that was addressed, and I'm really happy that Steven already addressed it a little bit, is that a lot of people asked what would be an ideal key rotation policy. This is indeed already addressed in the architecture drafts, but a lot of people were asking, from an implementation perspective and for their implementation needs, what would be an accurate, good key rotation policy.
O
I think it would be really difficult, from a draft perspective, to actually say: indeed, this is what your implementation should do. In the draft itself there are only very broad statements of what would be a good key rotation policy.
O
Another thing that I was thinking about while having this meeting is that it's sometimes really difficult to have reliable mechanisms to measure what attacks indeed happen in practice. Something that one of the Facebook presentations actually showed is that there are not many attacks involving token key reuse.
O
So maybe that's not seen in the wild, but it's not that clear to me whether this is not happening in the wild much because people have not found out that this attack is possible, or whether it's indeed just not happening in the wild. So maybe good measurements would be something to put into place.
O
Next slide, please. Okay, so, the path forward. I know that the architecture draft already has some sections about implementations, but it would be really nice to have maybe more in-depth sections around that, and to describe the practical considerations in place. Maybe, as I said, it would also be nice to have a section about accessibility; maybe not something that goes that deep into it, but at least mentioning it somewhere there. Something that was also discussed in the meeting
O
is that it's sometimes difficult, between the different proposals and the different implementations, to see what kind of properties they are indeed giving: whether they give the same unlinkability as Privacy Pass, for example, or what kind of unlinkability they are giving.
O
The meeting was really nice because we had perspectives from different groups of people interested in anonymous tokens in general. Sometimes the feedback is that other kinds of meetings can be too restrictive, and this meeting was nice because it sort of allowed everybody to have a say in what they're trying to implement. So maybe another meeting would be nice to have in place, and also to see what other applications of Privacy Pass, or derivative protocols, are out there. Yeah, next slide, please. And with that, thank you.
B
All right. So this isn't officially a document for this particular working group; it's sort of a document that applies generally to various protocols that are being looked at and worked on in the IETF, be it Oblivious DoH, Privacy Pass, and whatnot. But given that the topic of consistency of keys, correctness of keys, and discovery of keys is of importance to Privacy Pass, we thought it would be good to present it here. So next slide, please.
B
Yeah, so, as I said, there are a number of parallel use cases in emerging protocols that require some mechanism to get a key and then figure out that it is authentic, that it is the correct key for a particular service: be it a Privacy Pass issuer verification key, an Oblivious DoH or Oblivious HTTP encryption key, or, in the case of Tor, maybe a relay public key.
B
This seems to be a common requirement across all of these protocols, and the point of this particular document is to describe these requirements and the various ways
B
we could go about building systems for achieving these two requirements. Next slide, please. So, to go into the unlinkability problem, or requirement, a little bit more: imagine you had a system with N clients, and you had a server with a particular key K, and this server somehow delivers its key to one of the clients in this anonymity set. Next slide. And then later on, someone happens to use that particular key, so they send some function f of K over to the server. If the server doesn't have any other sort of linking information (assuming it doesn't have IP addresses or whatever; perhaps a tall order, but bear with me),
B
the server concludes that the key was just used by one of the N clients in this set, and that's sort of the unlinkability property, I guess, that we want. Next slide, please. And this is important because, in an alternative world, the server gives a specific key K1 to, say, client C1, and then a totally different key to all the other clients.
B
Next slide. You can imagine usage of a particular key directly revealing which client it belongs to. So in this particular case, client C1 gave server S a function of K1 and therefore linked the two together, which is not great. I guess in the Privacy Pass world we refer to this as key tagging, but just generally, unlinkability is how we're capturing it here. Next slide, please.
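The shared-key versus tagged-key contrast above can be shown in a few lines. The toy below (entirely illustrative; the hash stands in for "some function f of K" sent back to the server, and the client names are made up) shows that with one shared key a redemption narrows the server down only to the whole anonymity set, while per-client keys identify the client exactly.

```python
import hashlib

def f(key: bytes) -> bytes:
    # Stand-in for "some function of K" that reaches the server.
    return hashlib.sha256(key).digest()

clients = [f"c{i}" for i in range(100)]

# Honest server: the same key K for everyone in the set.
shared = {c: b"K" for c in clients}
seen = f(shared["c7"])
candidates = [c for c, k in shared.items() if f(k) == seen]
assert len(candidates) == len(clients)  # anonymity set: the whole set

# Key-tagging server: a distinct key per client.
tagged = {c: c.encode() for c in clients}
seen = f(tagged["c7"])
candidates = [c for c, k in tagged.items() if f(k) == seen]
assert candidates == ["c7"]  # usage links directly to one client
```

The consistency systems discussed in this talk exist precisely to let a client verify it is in the first situation rather than the second.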
B
So authenticity is also important. Imagine again a similar scenario: N clients, a server, and an adversary between the clients and the server. The server publishes some key K, or has some key K that it makes available, and then the adversary, in this particular case, imagine, just happened to swap in its own key K' and gave it to one of the clients. Next slide.
B
Usage of that particular key would allow the adversary to potentially learn something that was meant for S. So in this case, if it's a public encryption key, as in the case of Oblivious DoH or Oblivious HTTP, the adversary, someone this client did not intend to encrypt its message for, could easily learn information about the client's usage of this particular key, which is not great. So, ideally, when a client... oh, you can go ahead. Sorry, next one, yeah.
B
So, ideally, when a client discovers a key, it's authentic, meaning that it's correct: it corresponds to the right server that the client intended. And use of the key is unlinkable from other people who happen to have the same key; we sort of say in this case that all clients in the same set have a consistent view of this key. So then the task at hand is to see if we can build systems
B
and protocols that allow us to get this sort of consistency and correctness of these keys. Next slide, please. So we refer to such a system as a key consistency and correctness system, or KCCS. I don't know, not great with acronyms, I guess, but okay. The particulars of one of these systems obviously vary quite a lot depending on what the threat model is, what sort of cryptographic dependencies you have, and what your trust model is.
B
What PKI you have, how difficult these things are to run, operationally speaking, whether or not you have other external dependencies: it totally varies depending on use cases. That is to say, there's not, right now, one one-size-fits-all sort of system. Next slide.
B
And because there are so many parameters, different systems might naturally fall out depending on how you tune these knobs. So, for example, if you assume a trusted third party, then perhaps your system for actually getting keys consistently, and getting correct keys, becomes quite simple, for some definition of simple.
B
But if you don't have a trusted third party, maybe you need to rely on less-trusted parties or proxies, or maybe you need to rely on external systems like append-only logs or whatnot. And you can also vary the cryptographic techniques under the hood that are used to produce
B
proofs of correctness: you can rely on classical digital signatures, as is probably the most common case here, or perhaps there are other types of cryptography that might be applicable, somehow making novel use of identity-based signatures or identity-based encryption, for these reasons, as alluded to earlier. Next slide, please. So I'm going to go through a couple of examples of the different systems that might fall out, depending on your trust model and your threat model.
B
In this particular case, there's some good-samaritan proxy P that all of the clients in this set trust, and it syncs with the server to get the key K. Next slide. And then it simply gives that key to all clients in the set. Next one. Oh shoot, I'm missing an animation; anyway (yeah, thanks Joe), anyone using a particular key in this case would not leak which specific client got it, because the server doesn't know which client actually fetched it.
B
You know, Tor-like: IP addresses are not linkable vectors here, and whatnot. Excellent. In the multi-proxy case,
B
there's not this good-samaritan trusted proxy acting on behalf of all clients. Maybe there are multiple less-trusted proxies or relays, in this particular case R1 through Rm,
B
that a client can connect to, and then, next slide,
B
through these proxies ask the server for the key. You can imagine that R1 through Rm are CONNECT proxies or something like that, and through these CONNECT proxies the client says: I wish a connection to S; asks for the key; and then checks whether or not it gets back the same view of the key for each request. It seems like this would potentially work. Now, from the server's perspective,
B
the server could try to cheat. So in this particular example, it hands out K' to the first relay, which is then forwarded on back to the client, and the client discovers: I didn't receive the same view of the key for each request, so perhaps there's a problem. Now, there is somewhat of an edge case here, where you can imagine this:
B
the server rotates the key in between queries, such that this is actually not malicious behavior but benign behavior, and the client just got super unlucky. I think Chelsea proposed this on the list quite a while ago, but there are various ways to work around that, perhaps by retrying, just to make sure that things eventually stabilize. So next one, please.
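The multi-proxy check plus the retry workaround can be sketched in a few lines. Everything here is an assumption for illustration: `fetch_via_proxies` is a hypothetical callable standing in for fetching the server's key through each relay, and the retry count is arbitrary.

```python
import hashlib

def consistent_key(fetch_via_proxies, retries: int = 2):
    """Fetch the server's key through several relays and accept it only
    if every view agrees, retrying a few times so that a benign key
    rotation mid-check doesn't look like misbehavior.
    `fetch_via_proxies` is a hypothetical callable returning the key
    bytes as seen through each proxy."""
    for _ in range(retries + 1):
        views = fetch_via_proxies()
        digests = {hashlib.sha256(k).hexdigest() for k in views}
        if len(digests) == 1:  # all relays saw the same key
            return views[0]
    return None  # persistent disagreement: treat the key as suspect

assert consistent_key(lambda: [b"K", b"K", b"K"]) == b"K"
assert consistent_key(lambda: [b"K", b"K'", b"K"]) is None
```

A real client would also need the relays to be operated independently of the server, otherwise a colluding relay set can present a consistent but targeted key.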
B
Right. So if you don't have a sort of trusted entity, or a set of proxies through which you can connect to the server, there are perhaps other models in which the server publishes the key to a bulletin board, or to a database in this particular case. Next slide. And then clients fetch this key from this database.
B
It could be the output of consensus protocols, similar to the Tor case. And if clients are really concerned about privacy here, they might use some special mechanism to actually connect and fetch the keys from the database; maybe they might use PIR or something like that,
B
if that was applicable, depending on the size of the table. I guess the important distinction here is that, rather than clients directly fetching, or servers directly providing keys to some client, either directly or indirectly, servers publish their keys elsewhere for clients to fetch. Next slide, please. Yeah.
B
So the purpose of this document was really to allow us to punt on discovery for the other protocols that we need this thing for, specifically Oblivious DoH and Oblivious HTTP, because discovery is a challenging problem, and the specific mechanism will vary depending, again, on the threat model, on the specific clients and servers in question, and on many other things.
B
So we wanted to at least have a common place where we could talk about the different trade-offs and different properties of each variant of the system, and the things to consider. Right now, we definitely did not intend for this to be published as an RFC; it's simply a place to collect a bunch of information and hopefully facilitate conversation
B
and discussion. We'll note that most of the schemes I just walked through, at a very, very high level, could technically be deployed without any new technology.
B
So, for example, you could use MASQUE to connect through CONNECT proxies to get keys. The trusted-proxy case requires no new technology: just use HTTPS to fetch a key from a proxy. And in the external-database case, you can imagine repurposing append-only logs; I won't get into the details, I guess, but how those logs are audited is a topic of lively discussion.
B
There's some discussion of things like gossip in the chat room. So we're certainly not recommending any particular thing, just discussing the various use cases. So I guess, to wrap up, there are really only two questions for the group.
B
First: is this at all useful to you? I mean, personally speaking, it was helpful to just get everything down in a single place, so we could point to specific systems and point to the particular properties that they have, rather than continually talking around it in the context of other protocols. And I guess, specifically for the Privacy Pass case here, I'm wondering how this document might help accommodate
B
these types of consistency and discovery protocols and systems, either in the architecture spec or in eventual future deployments of the protocol. So, all that is to say, there's no clear goal; we leave this here as food for thought, and contributions in any form to help improve it are certainly welcome.
B
Yeah, and I guess with that I'll take questions, if there are any, or we can move on; I don't know what's preferred. Thank you, Chris.
E
The next block of time is for discussion of Sofía's presentation, Chris's presentation, and any other topics that people want to discuss, in terms of bringing new topics into Privacy Pass, or other next steps on their mind that they'd like to see happen in this working group.
H
I mean, I was listening to Sofía's presentation and to the discussion on concentration, and I wonder whether anyone is interested in doing a study of the economics of these things. I mean, what are the flows of revenue that sustain the provision of servers and the things incurred? Do we have that?
J
Oh yeah, I just wanted to pick up on Christian's point. I think that's a really interesting angle to explore, the economics behind this, so yeah, I'd be interested; I'd certainly review anything that gets written on that. One question I had was just about the future of the group: we had a good discussion about consolidation and centralization.
C
Hey, yeah. I'd just also like to reiterate Kirsty's point that we haven't really had a concrete solution, at least from the architectural perspective, for how we would mitigate these problems around issuer cardinality and issuer centralization.
C
I think Mark's done a really good job with the draft he's written so far, and I'm happy to continue reviewing that. But in terms of how we build a generic specification for adding these redemption contexts that Steven mentioned in his talk to the architecture document, I think it'd be good to hear some more feedback.
G
Yeah, thanks. Every time I get up to speak, I end up agreeing with Alex. I do think it would be really helpful to get some more conversation going on the idea behind the redemption context.
G
I think that the consolidation draft can be improved as part of that conversation and that discussion. So I guess what I would intend, based on comments and suggestions that we get on the mailing list, and also the conversation that we have about the context idea, is to do a revision of the draft that I wrote. Thanks.
D
Martin Thomson. So, I liked the redemption context concept when we spoke about this with Steven and others a few weeks ago. It completely solved the centralization question for me: that you can narrow things down so that, for any given website, the number of things it can talk to is severely limited.
D
It does require that you accept the concept that there is some information transfer going on. And so, for me, the real question comes down to cardinality, in terms of the number of issuers and the number of bits that are carried with each thing. So we talked about the private information bit, and we talked about publicly verifiable metadata as well, and when you combine those things with other things from the context, that's where it gets interesting.
D
So there's some more analysis required in order to work out what the specific numbers are. But I think this is definitely the direction this needs to go; it's much more limited, but I think that's the only way this can possibly work.
D
It is an architectural structuring. I'm not sure that I can turn it into words for you at, you know, 4:30 a.m. on a Friday (sorry, Saturday) morning, but the direction that Steven outlined, and what Alex was getting at, is, I think, the sort of thing that we'd want to have in the architecture document.
D
The way that I had imagined this working in the web context specifically, which might help frame it for you, is that any given website nominates a set of token issuers that are able to operate and essentially redeem tokens on that website, and the browser would enforce a limit on the number of third parties that could be listed in that fashion.
D
And so you would have, you know, the bot detection, or the ad fraud detection, or some small number of use cases for which this would work. They would go into a bucket; the browser would police the size of the bucket, to say that there are only two, or only three, or some relatively small number of those. Then the website would be able to frame in those third-party origins, and they would be able to invoke an API. Other origins that weren't listed in that list,
D
or if the list were too long, would be blocked, and so they would receive nothing.
E
So please try to at least state your name, and stick to the queue if possible.
L
Richard Barnes, and I think I am in the queue. A minor friendly amendment on Martin's point: it seems like in a scheme like that, where the relying party or the redeemer is stating the trusted issuers, the redeemer could also specify what public keys it trusts for those issuers, which would clear up some of the key consistency stuff here.
F
Yeah, I think, in terms of actionable things: updating the architecture doc to have these sorts of privacy analyses, and, as an explicit implementation of this, updating the HTTP API to take into account a lot of Martin's points and how we would implement this in that particular world. I think it would be good to get alignment on what sorts of contexts there are and how this practically works.
C
Yeah, I guess, based on Martin's points, it seems like we're moving to an architecture (forgive the dual usage of the term) where we have a permanent proxy-verifier state, whereby we have these entities that are always sending tokens to entities that can verify redemptions. And so, currently, in the architecture document,
C
we kind of allow multiple different ways of redeeming tokens, but it seems like in that scenario we would never have a direct client-to-issuer redemption. That's not something I'm particularly wedded to, but I just wanted to clarify that that was the direction people were imagining.
D
If you think about this in the web context, there's the site that's operating and using the fraud detection or bot detection service. I don't imagine that to be the thing that is consuming or redeeming the token. It is probably more likely the case that they have code that's framed in, or scripts that they load, that create a frame for the third party that issued the token, and that third party would redeem it itself. So there's no proxying.
D
I imagine, in that scenario, it's more a case of the issuer being the redeemer at that point.
P
So maybe I'm just not following you properly, but as I understand it, the limitation on the number of issuers is a key part of your suggestion, because you want the client to enforce it. What stops the website from providing a different set of candidate issuers to every client?
D
Martin: so, I haven't run the analysis on that one, but it would have to be the case that you would... yeah. So I'm not sure; perhaps Steven's run the analysis for this one, but it could be that that doesn't matter in that case: either the site is already able to identify you, and therefore you don't have any additional privacy loss, or there's nothing else there. So I'll let Steven answer.
P
I haven't gone that far; I guess I'm just saying that the root of the problem here, as far as I understand, is verifying that everyone has a consistent view of what keys are in play. And so it seems like creating a situation where the client enforces a fixed cardinality of keys it sees is not the same as restricting the total number of keys in play, and I'm just trying to make sure of that.
F
Yeah, so Martin's point is that if the site is able to identify the client, then the ability to choose a set of issuers ahead of time doesn't matter. I think the important bit is that different sites aren't able to converge on the same set of issuers for a specific client, and I think, to be able to do that...
K
So, I understood Steven's proposal, and sort of Martin Thomson's idea and the combination, a little differently, which is: the site would only be able to interact with tokens it pre-declared. So it would never be able to see what tokens a client had, except for the ones from the declared list of issuers. So if it goes through one set and then changes the set, it wouldn't learn any information about the client, because it's a different set from the ones it handed out before, and not every client has the new issuer.
B
Yeah, so this is a slight detour from the original agenda, but, as mentioned earlier in discussion with everyone, we thought it'd be useful to spend some time discussing how metadata is expressed in the protocol, from an API perspective as well as from, I guess, a bits-on-the-wire perspective.
B
There's an issue linked here that captures the general request to accommodate metadata, because currently it's not something we have in the protocol document, although it's discussed in the architecture document.
B
So let's move on. Next slide, please. Right, so there are different types of metadata. Both the client and the server can have metadata, either public or private. The metadata might also be verifiable or non-verifiable, depending on what type of data it is, and the existing cryptographic primitives that we have allow different types of metadata to some degree. So, for example, the VOPRF construction that Privacy Pass is currently built on does not allow any public metadata from the client or server, with the exception of actually using,
B
you know, per-metadata-bit private keys, but that doesn't scale well; we talked about this in the CFRG meeting a bit earlier. The PMB token work from Google, which the Trust Token API is based on, permits private metadata from the server side, but only a single bit. And then the recent Private Stats anonymous token work, which is linked here, permits an arbitrary amount of client or server data; it doesn't actually say whether or not it has to be the client or the server.
B
I think, ideally, what we would like from sort of an API perspective is the ability to accommodate all variants of metadata, provided that the underlying cryptographic construction supports it, and then applications would constrain exactly how much metadata is possible. So, for example, it would not be great if there were more than 32 bits of public metadata for any of these applications, given that 33 bits is sufficient to track most people in the world, and certainly we want much less than 32. But we don't want to overly constrain, I guess, the API or the protocol so that it does not allow or accommodate future cryptographic schemes under the hood. So, next slide, please.
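The 33-bit figure mentioned here can be checked with a quick back-of-the-envelope calculation. The population number below is an assumption (roughly the 2021 world population), not something stated in the drafts:

```python
import math

# Roughly the world population at the time of the meeting (2021); an assumption.
WORLD_POPULATION = 7_900_000_000

# Smallest number of bits b such that 2**b >= population, i.e. enough
# metadata values to give every person a distinct one.
bits_needed = math.ceil(math.log2(WORLD_POPULATION))
print(bits_needed)  # 33

# Conversely, b bits of metadata can split a user base into 2**b buckets,
# shrinking the expected anonymity set by that factor.
def anonymity_set(users: int, metadata_bits: int) -> float:
    return users / (2 ** metadata_bits)

print(anonymity_set(1_000_000, 10))  # 976.5625
```

This is why even a modest number of public metadata bits is enough to single users out, and why the speaker wants "much less than 32".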
B
This is basically an overview of how the current protocol works. It was previously just a single-round issuance request and response, but in trying to prepare for other constructions that have three messages, specifically Schnorr blind-signature-based schemes, we added a sort of additional round trip that may or may not actually take place, depending on what cryptographic scheme is being invoked.
B
So there are, at a very high level, two flows: the first of which is to get a commit request and response, and then the second flow, which uses that commitment request and response to get some tokens. And before all of this runs, the client is expected to get the view of the server's public key out of band, somehow, somewhere. It could also be transferred inline, but for simplicity's sake this is how we're describing it. Next slide.
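As a rough sketch of the two flows described here, with an optional commit round that only happens for schemes needing three messages. All names here are illustrative, not the draft's actual API:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical message types; the real draft defines its own encodings.
@dataclass
class CommitResponse:
    commitment: bytes

@dataclass
class IssuanceRequest:
    blinded_tokens: list
    commitment: Optional[CommitResponse] = None

class Issuer:
    def __init__(self, needs_commit: bool):
        # VOPRF- and PMB-style schemes need only request/response;
        # blind-Schnorr-style schemes need a prior commit round.
        self.needs_commit = needs_commit

    def commit(self) -> CommitResponse:
        return CommitResponse(commitment=b"nonce-commitment")

    def issue(self, req: IssuanceRequest) -> list:
        return [b"signed:" + t for t in req.blinded_tokens]

def run_issuance(issuer: Issuer, blinded: list) -> list:
    # Flow 1 (optional): commit request/response.
    commitment = issuer.commit() if issuer.needs_commit else None
    # Flow 2: issuance request/response, carrying the commitment if any.
    return issuer.issue(IssuanceRequest(blinded, commitment))
```

For a VOPRF-style issuer, `run_issuance(Issuer(needs_commit=False), [b"t1"])` performs a single round trip, matching the point made later that the extra round is skipped entirely in that case.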
B
So from the client's perspective, it seems like there's really only one way, or one sensible place, to fold in public metadata here, and that is in the generation step. So in asking for, or requesting, the generation of a series of tokens, m tokens in this particular case, the client would provide, you know, its arbitrary metadata, and the specific instantiation might limit how long that metadata is, or what its type is, or whatever. Next slide.
B
Interestingly, the server has a couple of opportunities to fold in the metadata. So, as mentioned earlier, it can, you know, use a different key for each variant or each value of that data; it might express it that way. Alternatively, it might provide the metadata inline as the issuance response is being generated. And so it's unclear exactly where and how we want to accommodate both of these different slots, but I'm just noting that the two different possibilities exist. Next slide, please.
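The two server-side slots described here can be sketched as follows. This is purely illustrative (hash-based stand-ins for real key derivation and token evaluation), not what the draft specifies:

```python
import hashlib

MASTER_SECRET = b"server-master-secret"  # stand-in for a real private key

# Option 1: a different key per metadata value, fixed at key-generation time.
def key_for_metadata(metadata: bytes) -> bytes:
    return hashlib.sha256(MASTER_SECRET + b"|key|" + metadata).digest()

# Option 2: fold the metadata in while generating the issuance response.
def evaluate_with_metadata(blinded_token: bytes, metadata: bytes) -> bytes:
    return hashlib.sha256(
        MASTER_SECRET + b"|eval|" + blinded_token + b"|" + metadata
    ).digest()
```

Option 1 means the set of published keys grows with the number of metadata values, which is why per-bit keys were said not to scale well; option 2 keeps one key but moves the metadata into the online issuance flow.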
B
Right, so I guess the question for the people thinking about the protocol and the API and trying to build on top of it is, first and foremost: should there be arbitrary support for client and server metadata, that is, unbounded, application-defined, cryptographic-construction-defined metadata at the API level? Obviously that would be limited by the underlying crypto. And if so, how do we want to accommodate that? Like, what are the functions that we need to touch in order to add these new fields? Secondly, as I've mentioned a number of times, the cryptographic construction underneath constrains exactly what sort of metadata is possible. So in the PMB token case you can only have a single private bit of metadata, so you can't arbitrarily add any more, but the other variants, like Private Stats and so on, are much more flexible and permit more bits.
B
So the question is: should the protocol, and sort of the API, impose these limits, such that it's not left up to the applications to, you know, use arbitrarily long metadata that would effectively defeat the entire privacy properties of Privacy Pass? And if so, how should it constrain it? What should the length be? And, I guess, finally, for the architecture document in particular: what sort of additional guidance should we put in place around the usage of this type of metadata? There's already some text there around the different flavors of metadata, be it public and private, for clients and servers, and whether or not it's verifiable, but maybe there's more we can add there, and more guidance for applications that are wishing to use this sort of thing.
B
So that's pretty much it. I believe it's in the charter to support metadata; I guess we need to iron out what "small amount" means here, and figure out whether or not that's something we want to bake into the protocol or just parameterize and let applications choose. So that's pretty much it. If folks have questions or opinions on how we should do this, I'd love to hear them.
I
It's about, yeah... I feel like how the metadata is actually constructed is probably left to applications, but I think constraining the size of the metadata would require the applications to also supply some sort of mapping table. So, for example, consider applications that think of metadata as strings.
I
If you say that the metadata is only 10 bits long, or that the maximum-size metadata has, like, 16 bits or 32 bits, then applications would have to supply some sort of mapping table from the string metadata to the small metadata sizes, versus applications using their strings directly as the metadata itself.
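The mapping-table idea could look something like this. This is a hypothetical helper, assuming a fixed bit budget; the application registers each string it cares about and gets back a small integer that fits in the allowed width:

```python
class MetadataTable:
    """Maps application-level string metadata to small fixed-width integers."""

    def __init__(self, bits: int):
        self.bits = bits
        self.capacity = 2 ** bits  # number of distinct values the budget allows
        self._ids: dict[str, int] = {}

    def register(self, value: str) -> int:
        # Reuse an existing id, or assign the next free one.
        if value in self._ids:
            return self._ids[value]
        if len(self._ids) >= self.capacity:
            raise ValueError(
                f"more than {self.capacity} values for {self.bits} bits"
            )
        self._ids[value] = len(self._ids)
        return self._ids[value]

table = MetadataTable(bits=2)  # only 4 distinct values allowed
print(table.register("trusted"))  # 0
print(table.register("suspect"))  # 1
print(table.register("trusted"))  # 0 again
```

The table makes the trade-off concrete: a tight bit budget caps how many distinct string values the application can ever express.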
I
The other thing is, I think just constraining the metadata size, although it reduces the overall amount... let's say, even if we choose three bits or two bits, if only one person has actually been issued a credential with a given bit value, that one bit of metadata did not protect that one person. I think the anonymity set size is kind of what we should understand here as well.
I
I'm wondering whether some of this metadata discussion should be folded into some of the discussion we were having about anonymity set size and things like that as well. It also seems like a really hard problem just to solve, to define "hey, this metadata size is okay, this is not okay," without taking the application into account. It seems like a really hard problem to solve in a generic way, and yeah, that's kind of my note. Maybe Richard Barnes has some thoughts on this as well.
E
So I have some questions. First, are you proposing adding a network round trip here? Does this add a round trip to the Privacy Pass issuance process?
E
Or, yeah, so I'm asking: does adding this metadata support add a round trip to the issuance flow?
B
We don't believe it does. You can fold it in quite nicely into the generate and issuance APIs right now, so it shouldn't affect the number of round trips. The additional round trip that you saw, if you can go back to the protocol slide, Joe: the additional round trip for the commit request is skipped entirely for the VOPRF case; it would be skipped entirely for the PMB token case and whatnot. It's only there to accommodate cryptographic protocols that require more than two messages to actually mint tokens, which is the case for things like blind Schnorr and whatnot.
E
Okay, thank you, yeah. And my other question here is: is there enough commonality among the cryptographic primitives to share a common API? And I'll give you an example of a concern here. It seems like some of the primitives here support essentially unlimited amounts of client-generated metadata efficiently, and some of them would be very inefficient with very large amounts of client-generated metadata; it would impose significant load on the server.
E
Is that assessment... well, I'm asking; I don't know if that's accurate. And I'm also wondering, more broadly: is there enough commonality among these different primitives that they can hide behind a uniform API?
B
I don't believe so... so, say, for example, compare the... I mean, Sobo can probably speak more to this, so maybe I should let him answer, but in the Private Stats case, in the anonymous token case, you can either fold the metadata into the public keys, such that it does not affect the online issuance flow whatsoever, or you could fold it into the issuance evaluation in real time. And, I guess, one of the differences between the two is that the client could, in theory, provide an arbitrarily long piece of metadata for the server to fold in. I'm not sure why it would do that, but it could. However, for that particular variant the performance cost is not bad; it's just hashing. All it requires is hashing this input to a scalar in the corresponding field. So, okay, thank you.
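The "hash this input to a scalar in the corresponding field" step can be sketched like this. This is an illustrative modular reduction, not the hash-to-field construction a real spec would mandate; the P-256 group order is used here only as an example field size:

```python
import hashlib

# Order of the P-256 group, used here as an example scalar field size.
GROUP_ORDER = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

def metadata_to_scalar(metadata: bytes) -> int:
    """Hash arbitrary-length metadata down to a scalar in [0, GROUP_ORDER).

    Using a hash output roughly twice the field size (SHA-512 for a
    256-bit order) keeps the modular reduction close to uniform.
    """
    digest = hashlib.sha512(metadata).digest()
    return int.from_bytes(digest, "big") % GROUP_ORDER

# Arbitrarily long input still costs only one hash pass.
s = metadata_to_scalar(b"x" * 100_000)
assert 0 <= s < GROUP_ORDER
```

This illustrates the point made above: even very long client-supplied metadata imposes only hashing cost on the server before it is folded into the evaluation.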
I
Yeah, I think maybe... I think we wouldn't want the client to generate arbitrary metadata that was not acceptable to the server. Maybe, as part of the config and initial preparation stage, the server could say, "this is the maximum amount of metadata I support," and the client would... that is how I think it would work. So essentially that would constrain the algorithms that the server might want to use, based on the maximum metadata it is willing to accept.
A
One of the things, I think, you know, with respect to the metadata: it would be good to have a better understanding of the effects of the length of the metadata, or the number of options of the metadata.
B
I'm just going to say that, yeah, every additional bit of metadata is potentially a halving of the anonymity set, effectively. So we need to make clear, I guess, what the implications are. I don't believe the privacy calculus in the draft right now, in the architecture draft, goes into detail or, like, accommodates metadata accordingly, but that's certainly something we could easily do.
D
So I wanted to just point out here that the bits of metadata that you provide here can be combined, at the discretion of the attacker, here being the server, to be re-identifying, because they have complete control over the value of those bits. And so, if we imagine a scenario where there is some amount of, say, fingerprinting information available, say 10 bits of that, and we have three different issuers that any given context will tolerate...
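Plugging numbers into the scenario sketched here (10 bits of fingerprinting plus attacker-controlled metadata bits from each of three tolerated issuers) shows how quickly the combined bits approach full re-identification. The population figure is an assumption for illustration:

```python
import math

WORLD_POPULATION = 7_900_000_000  # assumed, roughly 2021
BITS_TO_IDENTIFY = math.log2(WORLD_POPULATION)  # about 32.9 bits

fingerprint_bits = 10
issuers = 3

def combined_bits(metadata_bits_per_issuer: int) -> int:
    # The server controls the metadata values, so bits from different
    # issuers and from fingerprinting simply add up.
    return fingerprint_bits + issuers * metadata_bits_per_issuer

for b in (2, 4, 8):
    total = combined_bits(b)
    print(b, total, total >= BITS_TO_IDENTIFY)
# 2 -> 16 bits, 4 -> 22 bits, 8 -> 34 bits (enough to single out anyone)
```

Even per-issuer limits that look small on their own can cross the identification threshold once an attacker combines them across issuers and with other signals.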
B
I think that's definitely true; I don't disagree with the need to constrain it. It's just a question of, I guess, logistically, how we constrain it, either in the protocol itself or at the applications that are using it. You make a good point, though, that perhaps... I was... I had a box around sort of the ideal functionality here, that is, the tokens that are produced are a function of all these different flavors of metadata, and we might be able to fold different types of metadata in at different slots in the protocol.
B
But maybe we only want to allow the server to include public metadata as part of, like, the key generation step, and only the client should be able to provide metadata during the token issuance step. You can imagine a crazy scenario where, during the issuance step, the client, or the server, if it was able to provide public metadata, somehow, you know, puts in a tracking cookie or something like that in the token. Yeah, yeah.
B
So I'm just trying... I think I'm agreeing with you.
D
Yeah, back again. So the thing that I really don't want to see here is for this to be completely unbounded and not understood. But there are cases where you can potentially put metadata in that's not directly something the server's got control over, and that changes the calculus slightly. So if the client is willing to carry some information between contexts itself, on its own recognizance, say, for instance, it wants to do a sign-in process and the information that's being carried is its identity...
D
Now, that's very much identifying at that point, but because the choice is visible to the client, and deliberate on the part of the client, there's no real risk there. I think primarily what we're concerned with here is information the client doesn't know about that the server provides, and so this pertains mostly to those private bits that have been proposed in the different solutions.
I
So, both, yeah. I think I was going to make the same point. I think there are two types of metadata: one of them is client-understandable, and the other kind is non-understandable.
I
So if the client understands that this is a part of the protocol, or that this is required, this is actually understandable for the security of the application. If the client is actually in on the protocol, if it makes meaningful sense for the application protocol to have those 32 bits, and the client can understand what those 32 bits are useful for, versus it just being a server blob of data which the client has no idea about.
A
Now, I'd just like to thank everybody for coming and participating. I think this was a good discussion, and, you know, hopefully we continue the activity on the mailing list to kind of resolve these issues and start, you know, getting the drafts updated to incorporate some of this discussion.