From YouTube: IETF113-PRIVACYPASS-20220324-1200
Description
PRIVACYPASS meeting session at IETF113
2022/03/24 1200
https://datatracker.ietf.org/meeting/113/proceedings/
A
Welcome to Privacy Pass at IETF 113. This session is being recorded, as usual.
A
You're probably all familiar with the Note Well by this time. If anybody isn't, you can quickly get a refresher here or look it up online.
A
For participants in the room: if you want to speak at the mic, please add yourself to the queue. You also need to log in to the meeting using the barcode or the QR code up there, so that we can get you counted on the blue sheets, the attendance.
A
All right, some more information about the agenda; we don't think we need to go into that. For our agenda today we do need note takers. Is there anybody who's willing to take notes in the CodiMD, either online or in the room here?
A
All right. On the agenda today, we have a look at our adopted drafts, going through some of the changes that have been made there. We have a draft on rate-limited tokens that we may adopt, so we'll have a presentation on that. And then, if we have time (I don't know if Mark is here), there are some updates on the centralization problem. We weren't sure if we were going to have time for that presentation, so I don't know whether Mark is going to attend, but if he is, we can get him to give us an update. Are there any other additions or modifications to the agenda? Ben, did I leave anything off?
B
All right, we only have one hour, so let's see how fast we can move.
A
All right, well then, let's go ahead and get started with the core drafts.
C
Okay, good morning, everyone; good afternoon, everyone, I guess.
C
For the architecture, the biggest change that went into the last revision was an exploration of deployment considerations: the ways in which you would use Privacy Pass in practice, and what the implications are for the privacy posture of the protocol with respect to clients. We'll go through the main highlights of those deployment considerations here and talk about next steps for the draft after that.
C
As a reminder, as of the last meeting the architecture draft was updated so that it's now split into two sub-protocols: the issuance protocol, which clients run with an attester and issuer for the purposes of acquiring tokens for use, and the redemption protocol, for interacting with an origin.
C
Later on, the architecture document describes what the roles of the attester and the issuer are in the issuance protocol, as well as what the role of the origin is in the challenge and redemption protocol. We tried to orchestrate things such that the redemption protocol is really simple and lightweight and doesn't put a lot of burden on the origin.
C
Conversely, the issuance protocol encapsulates all the complexity of new token types and the relevant privacy properties of those different issuance protocols. That pretty much sums it up. The overall flow of the protocol looks like this, if you recall: the origin, the server on the right-hand side, would generally ask the client, you know, when it wants to.
C
That is, attesting to particular properties on behalf of the client. In the latest version of the architecture document we've introduced this notion of contexts. So, for example, during the redemption protocol there's a redemption context, which encapsulates all the things that the origin server would see about the client during its interaction with the client for the purposes of redeeming a token.
C
That might be the origin name itself, because the origin has information about that particular redemption context; it might be the timestamp of the redemption event; it might be the client IP address: whatever information there is about a particular client interaction when redeeming a token with a particular server.
C
Likewise, there's an attestation context that encapsulates all of the per-client information that the attester sees during the issuance side of the interaction.
C
It's the exact same sort of thing: all the information that the attester sees about the client, including the timestamp of the event, the client's IP address, and whatever other relevant metadata or information is representative of that particular issuance event.
C
If you take a step back and ask yourself what meaningful privacy is in a deployment of Privacy Pass: for most uses of Privacy Pass we're trying to use it for the purposes of improving client privacy. Mind you, there are applications of Privacy Pass where you may not care about privacy, but for this particular scenario we do. So the meaningful privacy we claim in this particular setup
C
is that there's no single entity in the system that can link per-client information and per-server information across these contexts,
C
this attestation context and redemption context. And of course, the way in which you deploy Privacy Pass has an impact on who sees what in the interaction, and also an impact on what the client has to do when it's interacting with these different parties in the system in order to achieve this goal. Take, for example, what we call the joint deployment model, which is basically how Privacy Pass is deployed today: the client interacts with an attester and a server for the purposes of solving captchas as an attestation mechanism, and then spends tokens as a way to demonstrate that it solved captchas at some point in the past.
C
In this deployment model the attester and server are the same entity, and so they have basically a shared view of the client during both interactions. They see the same IP address, or they will.
C
They will share their view of the client during the captcha-solving process, as well as during the token-presentation, or redemption, process. As a result, meaningful privacy in this particular arrangement means that the client has to separate itself across interactions, either over time, so that across different redemption events or across different attestation events
C
it appears as a different client, or over space, such that when interacting with the attester or the server it appears as a different client, or is unlinkable with respect to an attestation and redemption event.
C
If you're looking at the time-separation aspect of this, this is where things like unlinkable tokens, sort of the basis of the Privacy Pass protocol, come into play.
C
On the space-separation side, a client would separate itself in space by using, for example, a proxy when connecting to the attester or connecting to the server. Indeed, in the initial motivating use case for Privacy Pass, Tor users would go and interact with, you know...
C
Users of Tor were being faced with captchas over and over again, repeatedly being forced to prove themselves before presenting tokens was an option; they were already separating themselves over space. But you can use other proxies if that's reasonable for your particular deployment model.
B
Hey, you said time or space here. I guess I would have expected time and space, because whether I see a consistent IP address over a long period of time, or I don't know your IP address but you're the only client who's requesting and using tokens in some short period of time, it seems like either one of those alone is a problem.
C
Well, I think implicit here is that there's more than one client interacting with the system, right? Otherwise none of this really holds; there's no real privacy to be gained.
C
Sure, and that's why, in this particular model, non-interactive tokens are the most sensible variant, in particular because spending or redeeming a token does not mean that you went off and fetched it in real time; it just means that at some point in the past you fetched a token. So there's a sort of natural separation between the two over time.
C
So recall, the meaningful privacy here is that you're not able to link per-client and per-server information. So although you're able to link these two events together, presumably by the timestamp, you're not revealing any per-client information, by virtue of using a proxy when interacting with the attester or server in that particular case.
C
Okay, moving on. There are other deployment models in the draft as well. Another one we talk about is the split deployment model, and this is useful for attestation mechanisms that are less privacy-friendly: say, for example, the client is demonstrating that it has ownership over some specific type of application account, and it's specifically logging in with that account in this particular model.
C
By split, we mean that the attester and the server are run by two different non-colluding entities, and as a result they don't share the same context with respect to attestation and redemption. So the bar to meaningful privacy is kind of lowered, in a sense: it just means that the attestation in this particular case can't reveal any per-server information, because the attester doesn't have the shared view of the redemption side that it otherwise would in the joint deployment model, and likewise the redemption doesn't reveal any per-client information.
C
In particular, the issuance protocol should not reveal anything that is particular to the origin during issuance. But thankfully all the issuance protocols, like the blind signatures and the OPRFs and whatnot, naturally hide this information by virtue of being blind signatures or oblivious pseudorandom functions.
C
So yeah, for next steps: there's an open issue right now for addressing the double-spend prevention requirements when you're using cross-origin tokens.
C
Which means that if, say, for example, you have two origins that both accept cross-origin tokens, they both have to share double-spend prevention state; otherwise a client could spend a token at either one of these particular servers. I don't think we explicitly make that obvious in the draft right now, so there's just an issue to call that out.
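The shared double-spend state described here can be sketched in Python. This is a toy illustration only: the class and variable names are invented, and a real deployment would use a distributed, persistent store rather than an in-memory set.

```python
# Toy sketch of shared double-spend prevention for cross-origin tokens.
# Every origin in the cross-origin set consults the SAME store; otherwise
# a client could redeem one token once at each origin.
class SharedDoubleSpendStore:
    def __init__(self):
        self._seen = set()  # nonces of tokens already redeemed anywhere in the set

    def check_and_mark(self, token_nonce: bytes) -> bool:
        """Return True if the token is fresh, and mark it as spent."""
        if token_nonce in self._seen:
            return False  # already redeemed at some origin in the set
        self._seen.add(token_nonce)
        return True

store = SharedDoubleSpendStore()             # shared by origin A and origin B
assert store.check_and_mark(b"nonce-1")      # first redemption (origin A) succeeds
assert not store.check_and_mark(b"nonce-1")  # replay at origin B is rejected
```

The point is only that the check-and-mark must be atomic against one logical store shared by all accepting origins.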
C
There is also some existing privacy parameterization in the draft, which describes: if this is the size of the anonymity set that you want for particular clients, here's how you should arrange your issuers and your attesters and whatnot. But it's still highly dependent on the previous incarnation of the architecture draft.
C
So we just need to go through and update that. And then there's been a long-standing issue to address centralization, although with Mark's draft we may consider simply punting the discussion there, or conversely folding some of his text into this architecture document and closing out the issue.
C
I think at that point we'll have discussed and covered all of the different architectural properties of the system that are relevant to how you would deploy it, how you use it, and what the resulting privacy posture is for clients.
C
I think we could either park it or move it into working group last call and focus our efforts elsewhere, on issuance protocols. But that's pretty much it. Anyone have any questions before I hand it over to Tommy to talk about the authentication scheme?
C
If not: Tommy, Jeremy can stop sharing so you can pull it up, or tell me and I'll just advance for you.
E
I can just say "next slide"; that's fine, if you don't mind. Yep, that's fine. Okay, all right, next slide, great. So I'm going to talk about the authentication scheme. This is the document that we discussed at the interim meeting. It is newly adopted, and we did actually publish a -01 version just this week, with a couple of the changes that I will talk about today.
E
Okay, so for the changes in the -01 document, the main thing was renaming some of the terminology.
E
There was a field in the challenge structure before which was a redemption nonce. Oh, Ben in the chat is saying that the audio is not very clear. Is that true?
E
All right, I will just try as best I can; I'll speak a little bit more slowly. So the redemption nonce is renamed to the redemption context, generally.
E
This is because this field was not necessarily a nonce; it's really just some server-chosen context that they want a token bound to. And I want to point out that the fact that you have this redemption context in a challenge doesn't actually make the token issuance interactive, as in saying that the client needs to fetch a token immediately.
E
It's just saying that this token is bound to something that the server knows, for purposes of double-spend prevention, and it's something that isn't exposed to the issuance protocol. So it's really just between the client and the origin that's doing the redeeming, to make sure that you're not spending a token that someone else got. And then one other minor rename is that there was another confusing "context" name in the actual token struct.
E
All right, then the other thing we want to do is talk about stabilizing the format of this challenge and response. We have several implementations that have been testing interop, and to encourage deployment of, and experimentation between, these, the authors would like to hear any issues with the format now, so that we don't have to worry about changing it too much later. Next slide. So, just to review what the current status is of these structures: in the challenge,
E
it has the issuer name, which tells you who is allowed to actually give you the tokens. On the other side, it has the newly renamed redemption context, which is optional and is essentially just some random server-chosen context that they want to bind this token to, and then also an origin name, to scope this token to a particular origin upon redemption.
E
There is a client-chosen value that is used to make sure that this is a unique token, and, as part of the issuance protocol, a hash of the challenge structure.
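The challenge fields discussed above (issuer name, optional redemption context, origin name) and the challenge hash used by issuance can be sketched as follows. This is a rough illustration, not the normative wire format from the draft: the length-prefix sizes and the token type value here are assumptions.

```python
import hashlib
import struct

def encode_token_challenge(token_type: int, issuer_name: bytes,
                           redemption_context: bytes, origin_info: bytes) -> bytes:
    """Serialize the challenge fields with simple length prefixes.

    A sketch only; the draft defines the exact layout and size limits.
    """
    return (struct.pack("!H", token_type)
            + struct.pack("!H", len(issuer_name)) + issuer_name
            + struct.pack("!B", len(redemption_context)) + redemption_context
            + struct.pack("!H", len(origin_info)) + origin_info)

challenge = encode_token_challenge(
    token_type=2,                     # illustrative value
    issuer_name=b"issuer.example",
    redemption_context=b"\x00" * 32,  # optional server-chosen context
    origin_info=b"origin.example")

# The issuance protocol binds the token to a hash of the whole challenge,
# so the token can't be redeemed against a different challenge.
challenge_digest = hashlib.sha256(challenge).digest()
assert len(challenge_digest) == 32
```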
F
Ted Hardie speaking. If you wouldn't mind going back a slide; I was typing slowly, so my question is actually about this. For the opaque origin name, which is optional: if you wanted to have something that covered both YouTube and, you know, Google Search at the moment, you could leave this out and the redemption context would handle it.
F
You could present it and either one could accept it. But I was wondering whether another option to consider (and maybe it's not needed) is to allow the opaque origin name to have more than one appearance in the struct, so that you could specify a list of origin names that are covered, for cases like that where the redemption mechanics in the back end are likely to be the same but the origin names are not. So I'm not sure this is worth doing.
E
Got it, yeah. That's a great point about an interesting feature this could have. As you point out, it could of course just use cross-origin tokens, but then, depending on what your issuer is, that could be a much broader pool of origins that it would be shared with, and not just Google and YouTube.
E
The other approach I imagine could be taken is that the client could know essentially when it is safe to do cross-origin for a single origin, similar to how I can do HTTP/2 connection reuse if two origins are covered in the same certificate of the server that I'm talking to, potentially.
F
So I think you're probably right there, that there could be something else it uses to know whether it's safe to do the cross-origin. But the mechanics of this in the back ends of some of these are going to be a little bit wonky, because in some of these cases the same redemption mechanics are going to be used for something like GCP, so you could have instances...
F
So if we're not going to use multiple origin names, then I think one of two things will happen: either cross-origin is going to be very, very common for major services, because many of them have more than one name from the point of view of the HTTP origin, or you're going to have to have some other system to figure out "okay, what I'm actually going to use to decide whether it's cross-origin safe is my last contact with them,
F
what all of the subject names were in the certificate", or something like that. So there are some trade-offs here, and a simpler trade-off might actually just be to say the opaque origin name can occur multiple times, because then, if somebody wants to scope it to a specific set, they don't have to rely on either previous contact by the client, or maybe a try-and-fail with a cross-origin token that was to the same set of servers but to a different actual redemption context. So, just a thought.
E
All right, thank you. Okay, let's go forward a little bit. All right; and yes, that was Steven. So the origin behavior, essentially the server behavior if you want to challenge, should be very, very simple. You essentially just need to choose who your issuer is (one or more of them) and what token type you want to use, so which issuance protocol. You can choose to be per-origin or cross-origin and, to the discussion we were just having, maybe there is an in-between. Essentially, you need to choose what the value is that you are binding your challenge to, and then you can choose the optional context that you want to have.
E
So, of course, you can have an empty context, but you could have some very simple mechanics, like doing a hash of the client IP address or the client IP address subnet, so that you can just compare these tokens to other clients that fall within that subnet. Or you could have something associated with your state with the client: say the client has a long-lived connection with you that's doing HTTP/2 or HTTP/3.
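The subnet-hash idea just mentioned can be sketched in a few lines. This is an illustration of the concept only (the function name and the server secret are invented); mixing in a server-side secret keeps the derived context unguessable to parties who only see the IP address.

```python
import hashlib
import ipaddress

def redemption_context_for_subnet(client_ip: str, prefix_len: int = 24,
                                  server_secret: bytes = b"per-deployment secret") -> bytes:
    """Derive a redemption context from the client's subnet.

    Clients in the same subnet get the same context, so the origin can
    relate their tokens; the secret keeps the value opaque to outsiders.
    A sketch, not a normative mechanism from the drafts.
    """
    net = ipaddress.ip_network(f"{client_ip}/{prefix_len}", strict=False)
    return hashlib.sha256(server_secret + str(net).encode()).digest()

# Two clients in the same /24 share a context; a different subnet does not.
a = redemption_context_for_subnet("198.51.100.7")
b = redemption_context_for_subnet("198.51.100.200")
c = redemption_context_for_subnet("203.0.113.5")
assert a == b and a != c
```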
E
When you don't have a context, then it's very easy to cache tokens. If they are context-bound, then you generally have shorter caching lifetimes, and we had a recent issue where Chris pointed out that you probably want to clear any cached tokens whenever you clear your cookie state, or anything else that would otherwise be changing the client state.
E
And then the other thing that you want to do on the client side is to verify the origin name information, to make sure that it matches. This, again, to the point that Ted brought up, is where, if this is expanded to include multiple things, there's some verification that what the challenge was bound to actually represents the state you think you have with the server, to make sure that you're not going to give a token to the wrong server.
E
All right. So, based on this discussion, it sounds like we have one issue where we want to dive in a bit more, on this origin name. If other people see other changes to the formats that would be useful, that would be great to hear now. Other than that, we plan to continue polishing the document and doing interop testing; if people are interested in testing with this, let us know. Any other questions?
E
Yeah, we'll go back and forth. All right, so next we'll talk about a secondary issuance protocol. The main core issuance document covers blind RSA signatures and the OPRF, and these are just very basic usages of those protocols, but we have another issuance protocol that has been defined specifically to allow per-origin rate limiting. Next slide, please.
E
So, rate limiting is a very common part of fraud prevention and anonymous access, across the web and in apps, and often it's something that relies on tracking cookies or the client IP address, so it's not a great thing for user privacy. First, to give some background on how this is commonly done with something called token buckets, Chris is going to walk through some examples.
C
Yeah, so, token buckets or leaky buckets; choose whichever one you like. I'm using token buckets here, as I think it maps better to the analogy of the use case. As a reminder or refresher for people who may not know: a token bucket is just a process, or an algorithm, for enforcing rate limits, and you can think of it as a composition of two independent processes. One of them is a process for replenishing tokens in the bucket: it adds a new token to the bucket at a fixed rate, and the bucket has a particular size, or capacity, after which, if it's full, you can't add any more tokens.
C
If it's not full, you can add tokens. And then you have another process that removes tokens from this bucket. This process that is removing tokens is typically representative of something that wants to access a resource: send a packet out on the network, make an API call, or do whatever. The token bucket's internal check is basically: are there tokens available to service this particular request?
C
If the answer is yes, as in the number of tokens in the bucket is not zero, the request is serviced; if not, it's dropped on the floor. Leaky buckets are sort of the mirror image of this and also commonly used to implement rate limiting, but, as I said, I think this is a simpler mental model. If you open this up and take a look at the token-replenishing process first...
C
When a token bucket is replenished, the very first obvious thing is that the bucket that is being replenished has to be identified. In this particular case you can think of it like there's a hash table inside: the hash table has a particular index, and the value associated with that particular index maintains the count.
C
So in this particular slide, the bucket with the index "one two five blah blah blah" is replenished with t tokens, and where it previously had n it now has n plus t. Pretty straightforward. On the redemption side (sorry, the resource-request side), similarly, the bucket has to be identified, and that involves going into the hash table. Then, depending on how many tokens your particular request corresponds to (maybe, if it's a packet of some size in bytes, there need to be n tokens, or whatever; here we're just saying that each request counts as one token), the algorithm identifies the bucket,
C
decrements the count by one, and if it's greater than zero it services the request; if not, it just drops it on the floor. And that's basically it; it's pretty straightforward. You have to identify the bucket, and either you increment tokens or decrement tokens, and service requests accordingly. Back to you, Tommy.
E
All right, thank you. So that's how these schemes normally work. So why is this interesting for Privacy Pass, specifically?
E
This is because of Tor, or proxies, or VPNs, or just being on a shared IP on a public network. A basic Privacy Pass token is useful for the cases where I'm just going to get really gratuitous captchas, but it's not always enough, both for some functional use cases like the metered paywall, and even for some of the captcha-prevention cases.
E
So I could have a bunch of legitimate devices that are being used as a click farm or captcha farm, or I'm just trying to get around something like a metered paywall. And so we have a concern that in many of these cases we're still going to degenerate to people being blocked, even if they're using basic Privacy Pass. Next slide.
E
The rate-limited token variant does that, but it also attests that your access rate for this origin was below a certain threshold. This adds mitigations (it doesn't completely solve it, but it adds a lot of mitigations) against devices being used as a click farm or captcha farm, and it also allows you to work with things like metered paywalls, even without giving away user privacy. Next slide.
E
This is used with RSA blind signatures; it shares most of the same structure and essentially adds a bit at the end that the attester and issuer will use. So this does rely on the attester and issuer being separate, which is not a requirement for basic Privacy Pass. In this model the attester, which is the thing that sees the client identity, maintains a counter for how many times a given client has accessed an anonymized identifier for that origin. So these attesters are trying to learn a stable mapping between the client and the origin, based on a per-client view, and it's anonymized using a per-client secret and a per-origin secret.
E
By the way, for the thing that this client is accessing, they should only be allowed to access it, let's say, three times per hour, or five times per month; it could be a fairly wide range of rate limits. And the attester is the one responsible for failing the request if that rate limit is exceeded. Next slide.
C
Yep, yeah, thanks, Tommy. Right, so, as Tommy said, the main challenge in the protocol is to compute this stable mapping, because we want to do it in such a way that the bucket identifier is private to the attester: private in the sense that the attester doesn't learn the specific origin to which the bucket corresponds, but it does have assurance that it is the same origin for a given client over time.
C
Otherwise, it's difficult to reason about, or difficult to say, that this is meaningful rate limiting on a per-client and per-origin basis. To do this, part of the way the rate-limited issuance protocol works is that it computes this stable mapping, which you can think of as just a deterministic function of a client secret and an origin secret, where the client has and maintains the client secret, and the issuer has and maintains the origin secret. The attester has access to neither and therefore can't compute the mapping on its own. And this mapping is basically usable as an index into whatever data structure you'd like to use for enforcing rate limits. So, if you think back to the token bucket example, you would use it as, for example, the hash table index used to store and associate your token counts for that particular per-client, per-origin bucket. And so the flow would be that the attester obtains the mapping.
C
If you map this to the issuance flow between the client, the attester, and the issuer (hand-waving massively over the underlying details of how this stable mapping is computed; I'll reserve those details to the document), basically the attester's job, in completing this protocol between the client and the issuer, is to compute the stable mapping and decrement the count (or rather, that should say increment the count).
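The property being described, a deterministic per-(client, origin) index that keys the attester's counters, can be illustrated like this. Important caveat: in the actual protocol the attester obtains this index through a blinded cryptographic exchange without ever seeing either secret or the origin name; the HMAC below is only a stand-in to show the determinism the rate limiter relies on, and all names here are invented.

```python
import hashlib
import hmac

def stable_mapping(client_secret: bytes, origin_secret: bytes) -> bytes:
    """Deterministic function of the client secret (held by the client)
    and the origin secret (held by the issuer). Stand-in only: the real
    protocol computes this via a blinded exchange, so the attester never
    holds either input."""
    return hmac.new(origin_secret, client_secret, hashlib.sha256).digest()

# Attester-side counters, keyed by the opaque mapping (not the origin name).
rate_limits: dict = {}
LIMIT = 3

def attester_check(index: bytes) -> bool:
    """Increment the bucket for this index; allow while under the limit."""
    rate_limits[index] = rate_limits.get(index, 0) + 1
    return rate_limits[index] <= LIMIT

idx = stable_mapping(b"client-A-secret", b"origin-1-secret")
assert [attester_check(idx) for _ in range(4)] == [True, True, True, False]
# Same client, different origin: a different, independent bucket.
assert stable_mapping(b"client-A-secret", b"origin-2-secret") != idx
```

The attester thus enforces "same client, same origin, bounded rate" while seeing only an opaque index.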
C
You'll find that the slide is hilariously wrong in terms of the algorithm, but hopefully the idea is clear: basically, compute the mapping, apply the algorithm, and either accept the request and forward the token response back down to the client, or drop it on the floor, sending the client an appropriate error code. The close reader will understand why this is wrong in terms of what the check is actually doing.
C
As Tommy said, it does require a split deployment for meaningful privacy, in particular because the issuer does need to learn the origin name in order to use the right origin secret for computing the stable mapping. As a result, the attester, which cannot learn per-origin information (thinking back to what we consider meaningful privacy), needs to be a separate entity.
C
There are today a couple of different interoperable implementations of this particular variant, using the signature scheme with ECDSA. Previous incarnations of this draft used EdDSA for the crypto, but we've since changed that; we could bring it back. I guess we're not particular about the type of crypto here. And there is a security analysis underway, for both the new underlying cryptographic scheme and how it plugs into the larger rate-limited issuance protocol, and what the resulting privacy properties are.
C
So at this point, as I said, without going into the crypto details: I think those are perhaps more useful for offline discussion, or CFRG in particular.
B
Hi, this is just as an individual. I want to understand a little bit more about the proposal and use case. So let me ask: why can't we do this with the standard, or previous, Privacy Pass construction? For example, let's talk about the rate-limiting paywall case.
B
If I'm a magazine, I could act as both issuer and validator, and users could make an account with my origin, and then I would say: you have a free account, so I'm going to issue you three tokens a month, and then I'm going to execute a redemption, a token-spend event, every time you attempt to view an article, and so you'll run out of tokens.
E
Yeah, that's fine. So, yes, you're right that if I'm willing to create an account with the origin, you could definitely use it for that case.
E
I think the more interesting case here is where you're trying to rate limit for fraud prevention, say at actual account-creation time. If what I'm trying to do is prevent device farms from creating a bunch of abusive accounts, that's a case where today they may use captcha plus rate limiting on IP addresses, etc., and the basic token issuance there isn't always going to be enough, because that will still not give you confidence that these aren't abusive devices.
B
By "an account" here, I didn't mean an account in the origin.
E
Okay, that's essentially what this is, then. But in that case, if I have an account that says "oh, you get to read articles" or "you get to do this or that thing", and it wants to rate limit you, to say "I will give you only three", it kind of needs to know what those are for.
E
Unless you're setting up a new one of those accounts for every service; so do you think I'd have a one-to-one mapping to some other rate-limiting service? I think the issue then is that at that point it sounds like it would be fairly easy to know, when I signed up for that service, "oh, this is the service that gives me tokens for the New York Times", and so that thing does know when I am getting those.
B
It's a way to take that state, shard it out, and push it into an entity that is accountable to and funded by the origin, as opposed to having it all essentially centralized in a single attester, which has to maintain this potentially unbounded amount of state for all of the different origins that the user is potentially active on.
B
Okay, so, stepping back to the question at hand, I'll just say, you know, my feeling as an individual here is that I wish I were a little more confident that this is the simplest solution.
H
I just wanted to quickly respond to Ben's comment. I would 100% agree with you that if there were something simpler, we should absolutely do the simpler thing, and I would encourage folks to think about how this could be made simpler and share that either on the list or in GitHub.
C
Yes, I assume you're done; thank you. The key challenge here was that, in trying to build in a mechanism for keeping the state, we did not want the thing maintaining the state to be able to effectively reconstruct any browsing information, or any per-origin information about the thing that it's enforcing rate limits for. Indeed, earlier designs were a lot simpler, but they enabled the attester...
C
I don't know if it was called the attester at that particular point, but they enabled the attester to learn that information. In trying to address that, this was not the first solution that we came up with, but I think it is the one that has the desired functional properties as well as the desired privacy properties. And just to heavily plus-one what Jonah said:
C
if there is a simpler solution, we would love to see it, but I think the core challenge remains the same.
A
All right, thank you very much. And for folks in the room, if you didn't check in, I think you might still be able to check in; I want to make sure we count everybody. Thank you.