From YouTube: IETF114 PRIVACYPASS 20220728 2000
A
Okay,
welcome
to
privacy
pass
at
iatf
114
thanks
for
joining
us,
just
real,
quick,
here's.
The
note
well,
I
think
you're
probably
well
acquainted
with
this,
but
if
you
need
a
refresher,
you
can
take
a
quick
look
at
this
reminder.
A: A couple of reminders for this meeting: it's really helpful to join the meeting with the onsite tool, so you can join the queue and so we record your attendance as well. So please join the onsite meeting.
A: Thank you. Also, please wear your mask except when speaking. I think the main thing I want to do right now in terms of administrative trivia is to get a note taker.
A: So we have a pretty full agenda today, with initial discussion about deployment experience, status of the core documents, then rate-limited tokens, and then key consistency and discovery. Are there any additions or modifications to this agenda that anybody would like to make?
B: I want to note that we have a full agenda for our very short session, so I will be running a timer that includes both the presentation and the questions on each topic.
D: Hi everyone, I'm Tommy Pauly from Apple, one of the co-authors on some of the documents we have here, and I'll be presenting some of our current status on implementation and deployment testing for Privacy Pass.
D: So first I wanted to point to some implementations. These are of the base specs, so the token issuance and the authentication scheme to request and redeem tokens. There are a couple of different open source implementations if you want to try these out. There's one written in Go that Cloudflare has, which implements client, origin, attester, and issuer, either as one thing or as separate things. You can use it for all the basic variants, and you can also test out the rate-limited variant.
D: If you're building stuff, these are good resources to interop with. But beyond that, we are also doing some non-open-source implementations and actually getting some deployment experience, starting at Apple's developer conference.
D: In June, we announced that we want to replace CAPTCHAs with Private Access Tokens, which is really just Privacy Pass, but that's what people wanted to call it for branding reasons. So it is just Privacy Pass, doing the basic type 2, publicly verifiable tokens. We introduced a broader developer community to this concept and pointed back to Privacy Pass if they wanted to get more engaged.
D: So hopefully that will kick up some interest, and I've already seen people looking at this a little bit more, so yeah, we're trying to increase awareness. We are allowing developers of apps to do token issuance and redemption, and anything running within WebKit environments on iOS and macOS will automatically be able to support this now. And so we have attestation and token issuance that we're working with, to try to get some tests going in this ecosystem.
D: There are deployments that are actually starting to use this and try it out in non-forced cases. Before I get into that, just what is supported here on our client: this is only doing type 2, and I believe that's also the case for the issuers, so they are doing the publicly verifiable basic token types that use RSA blind signatures.
D: For our client, we are doing a split origin, attester, and issuer model, and our client supports either origin-specific or cross-origin tokens. There are some limitations around what we'll do in the web context: we only accept token challenges and redeem tokens to things that are first-party domains within a web context, so that random ads and other stuff can't start trying to poke at you and get tokens out of you with
...requests for tokens. So currently, with these beta builds, we're seeing 35,000 tokens being issued per day and 16,000 tokens being redeemed. So you're seeing the client fetching batches and not necessarily spending them all, but then actually spending them later to say: okay, get around this CAPTCHA, because I've already done the attestation. I don't have detailed latency information, but it's all very, very minimal; it's not really impacting anything user-facing so far. I believe that's it for these slides.
G: Nick Doty, CDT. Thanks for both translating and giving us some implementation experience. I'm curious whether... well, I have lots of questions, but I'll just ask one. I'm curious whether we've considered the problem of over-reliance: that some developer might just say, hey, I hear Apple can attest to all the devices, therefore I'm just going to block access permanently to anyone who doesn't get one of these tokens. And whether we've considered either protocol or implementation changes for that.
D: That's a very, very good point. We absolutely don't want reliance like that. In our communication publicly, we're saying: do not do this... I mean, you should only do this in lieu of a CAPTCHA, and you always need to fall back. But of course advice does not give you everything.
D: One of the nice things is that this will not happen all the time, and I didn't go into those details, but this may be something we want to add advice on in the documents, probably in the architecture document. Let's talk about the whole ecosystem. So first, there's actually a human-level toggle, where some users can disable this.
D: But beyond that, our attestation system will rate-limit you, and even on the device we have limits on when we are willing to get a token or not. If we see more than a couple of token requests in a minute, or based on what process you're in, we will just ignore some requests. So you certainly would not be able to have a site that did a hundred of these and have it work.
D: But I agree that this is something we should talk about more, and I think having something randomly not work some percentage of the time may be one mitigation for this.
D: All right, I am still Tommy Pauly, but now I am speaking on behalf of more co-authors: Chris Wood, Jana Iyengar, Steven Valdez, amongst others. So we have three base drafts here. We have the architecture, which essentially describes the different models for using Privacy Pass and deploying Privacy Pass.
D: So this is just a slide to go into some of the recent changes. Overall, the authors think these are pretty mature, and from a technical interop standpoint we think they're done and we'd like to get this moved along. But I think there are things around wording that we still need a bit of work on. We hope to consider a last call soonish.
D: Some of the recent changes include, for the architecture: clarifying the trust model, removing some old text around parameterization that no longer really applied, and adding a bit more text about centralization, although I know there are other documents proposed that will go into more depth there. For the auth scheme, based on the discussions at previous meetings:
D
Clarifications
around
how
to
support
multiple
origin
names
being
related
to
a
given
token
were
added.
We
added
guidance
around
how
to
handle
and
process
token
challenges
clarified
what
you
need
to
do
in
order
to
prevent
double
spending
depending
on
what
type
of
token
context
was
being
added,
and
we
also
added
test
vectors.
So
you
can
confirm
your
implementations
and
for
the
basic
issuance
very
little
changed.
There
were
a
couple
things
really
to
clarify
fully
how
the
token
verification
worked.
D: So, just the couple of open issues we have that I want to bring up today. One is actually based on an early IANA review. IANA has been doing a great thing of reviewing, before we go through the full process, all of the working group documents being presented that have IANA considerations in them. So thank you, IANA, for doing that.
D: The auth scheme document defines a registry of token types, and the basic issuance protocol defines two types in there: a privately verifiable and a publicly verifiable token. The auth scheme also reserves some types to be able to do greasing, making sure that you can accept unknown types within your auth challenges and redemptions. IANA did have a couple of questions.
D: We were not clear where this new registry should live; they assumed it would probably be on a new Privacy Pass specific page. I think that makes sense, so the document now says that. I would like to hear if anyone disagrees with that, or what we should call it; that's very much a bikeshed. And then the other thing is: what do we want the review policy to be for new entries?
D: What we put in for now in the PR is Specification Required, and so that would involve designating experts and then requiring that you have a stable spec in order to allocate a type. I don't have particularly strong opinions; this seemed like the right thing for entirely new token types, because you have to have a pretty full specification to be able to implement against it. But I'd love to hear opinions from the working group.
H: Yeah, this seems fine. I don't know if we need to say anything more about the type of analysis that goes into a token type before it actually winds its way into the registry. HPKE, for comparison, establishes a registry for different algorithms, and it just says: if you want to be on this list, you need to meet certain properties, certain security properties. We could say the same exact thing here and then require that the documents demonstrate that in some way.
D: Right, I don't have the text I proposed in front of me right now, but I think there is a sentence there recommending what the experts review, and I think it says something along the lines of: it needs to meet the properties. But I think that's the interesting sentence to make sure is correct, that it's the right advice to the experts.
C: Thank you. Hey, it's Kenazi here, QUIC varint enthusiast. How is this encoded?
C: All right. The really nice thing about those is that even though you have 2^62 values, you only ever use ten. But what it means is that you can do whatever you want for experimentation, and it makes life a lot easier, so you don't have to do Specification Required for everything. I try to push back on Specification Required for the entire space, because it encourages people to just skip IANA altogether and squat on values if they don't want to have a publicly accessible spec.
C: So rather, have somewhere where you can just have, like... what's the term?
C: Try to make this a bit more like the QUIC one. You don't have to go full varint, even though I love that, but say some of the space is Specification Required, and in some of the space you can do a provisional registration, where provisional means the IETF and IANA reserve the right to yank it from you at any time. That way, you avoid all these problems.
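The comparison here is to QUIC's variable-length integers (RFC 9000, Section 16), whose 62-bit space leaves plenty of room for greasing and experimentation; note the Privacy Pass token type itself is a 16-bit value, so this is only an analogy. A minimal sketch of that varint encoding:

```python
def encode_varint(v: int) -> bytes:
    """Encode an integer as a QUIC variable-length integer (RFC 9000).
    The two high bits of the first byte give the length: 00=1, 01=2, 10=4, 11=8 bytes."""
    if v < 2**6:
        return v.to_bytes(1, "big")
    if v < 2**14:
        return (v | (0b01 << 14)).to_bytes(2, "big")
    if v < 2**30:
        return (v | (0b10 << 30)).to_bytes(4, "big")
    if v < 2**62:
        return (v | (0b11 << 62)).to_bytes(8, "big")
    raise ValueError("value exceeds 2^62 - 1")

def decode_varint(data: bytes) -> tuple[int, int]:
    """Decode a QUIC varint; return (value, bytes consumed)."""
    length = 1 << (data[0] >> 6)  # 1, 2, 4, or 8 bytes
    value = int.from_bytes(data[:length], "big") & ((1 << (8 * length - 2)) - 1)
    return value, length
```

For example, 15293 encodes as the two bytes 0x7b 0xbd, matching the worked example in RFC 9000.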
D: Oh, the other issue, number 141: it was more of an architectural question about what the ecosystem does if the attestation service, which could be the issuance service as well, stops doing the right thing. So, like, it just starts giving out tokens willy-nilly to people who aren't trusted, or who didn't actually solve their CAPTCHA, or didn't meet the bar. And I think Chris made this hilarious little Spider-Men-pointing-at-each-other slide for it.
D: The modes here are: origins are saying, I'm getting some bad traffic coming in that clearly shouldn't actually have passed the attestation checks; the origin says to the issuer, why are you giving me these bad tokens? The issuer says, hey, attester, you're not doing the right thing. And the attester can be like: oh, my checks are fine; how do you know that's actually fraudulent? So we definitely need a way out of this, and this was Chris's suggestion for how to address it in general.
D: Essentially, the ecosystem needs to come to some recognition of this: give a way for origins to report back to the issuer that things are not working as they would expect. So I think we do need some extra text on this. I don't know, Chris, if you wanted to comment on what you were imagining here.
H: I didn't have anything specific in mind right now, because, as you say, and as I sort of jotted down here, I do think this will likely vary based on your particular deployment: whether it's a joint attester-issuer or a split attester-issuer, and what the configuration of the different parties is.
H: We might just note that, you know, if things go awry, signals start firing for reasons that they shouldn't, or whatever, this is an exceptional event, and applications should deal with it in some way. I don't have a very eloquent way of stating that right now.
D: Yeah, it eventually needs to get to the point where, if some party like an attester or issuer starts doing bad things, they kind of need to be kicked out of who's trusted by the other parties: the origins say, I no longer trust this issuer, or the issuer says, I no longer trust this attester, because I need to make sure that my origins keep trusting me.
D: So essentially, the role of running a token issuance service involves maintaining your credibility, and you need to make sure you have some way of getting feedback, to make sure you're not letting bad clients in.
H: And it's not like, I guess, if attestation fails once, that that particular attester should indefinitely be untrusted. I can imagine scenarios where, I don't know, there's, like...
H
Like
somehow,
for
some
reason
like
at
that
station
is
like
subverted
or
whatever
compromise,
but
once
that's
patched
everything's
back
to
normal,
so
I
mean
we
can
work
on
the
text
offline.
I
think,
but
I
don't
think
the
the
implication
needs
to
be
like
permanent
adjustments
to
the
trust
model,
just
like
temporary
or
grounds
or
mitigations,
or
something.
D: Cool, all right. I think that's it; those are essentially the main two open issues. We already have the IANA pull request out, so we just need to refine that based on what we have here, and then we want to add this architectural text. Beyond that, we think these base documents are pretty good to go.
I: Is this waiting also on the standardization, or the last call at least, of the documents that are currently sitting in the CFRG, like the VOPRF or the blind RSA, which are currently kind of dependencies of the Privacy Pass main drafts? And the second thing: the new current architecture of the Privacy Pass protocol was introduced around December of last year, and now there's an introduction of different issuers and attesters and origins, and how they can interact with each other. There are different ways that they can either collude with each other or, as you just mentioned in one of the issues, the different entities can be compromised.
D: Cool. For the first one, maybe Chris will have opinions on the relationship with the CFRG documents. I think that's up to the chairs of Privacy Pass and CFRG to coordinate. I think those other specs are stable enough that it would be fine to run them in either order, but maybe shipping them off to IESG or IRSG review around the same time would be fine. And then, regarding the other one, I think the document does,
D
You
know,
try
to
talk
a
bit
about
how
you
do
the
analysis.
If
there
are
specific
things,
you
think
the
architecture
should
add
for
extra
analysis,
then
we
could
add
it,
but
I
I
think
it
would
also
be
perfectly
fine
to
have
further
documents.
Analyzing
different
aspects
of
this.
I
think
in
general,
these
split
models
that
we
have
for
privacy
pass.
D
We
have
for
oblivious
http,
we
have
for
using
mask
proxies,
they
all
share
some
interesting
properties,
and
I
know
there's
even
discussion
in
perigee
about
what
are
the
terminologies
we
use
for
doing
privacy
analysis
there,
so
this
certainly
should
not
be
the
last
word
on
how
we
talk
about
this
architecture
and
think
about
analyzing.
These
deployments.
F: Yeah, a quick question about the credibility of the entities that you mentioned. Can you say more about those? Is that going to be part of the solution? Which entities are we talking about?
D: They don't necessarily get to see all the information about the clients themselves, and so they have a trust dependency on the entity that issues tokens. And that entity's credibility comes from the fact that either they are doing their own attestation or checks, like maybe they issued CAPTCHAs or did some verification, or they are working with one or more other services that can do that attestation of the clients. So essentially it is a transitive trust and credibility chain there, and it's essentially up to...
H: Yeah, just to respond to the CFRG thing, because you've kind of called me out: I think both the blind RSA and the VOPRF documents are basically ready for research group last call in that group anyway. Now, the process does, unfortunately, take longer in the IRTF than it does in the IETF. So I guess it's up to the chairs, as you suggested, to figure out how they want to stack things and what the pipeline looks like. I don't think we need to block on that.
H: We expect protocol participants to implement the protocol honestly, just like we expect TLS servers to implement that protocol correctly and not post keys to Twitter or whatever, and things break down when they don't behave correctly. This is a little bit different in that we're trying to design against potentially malicious parties in the protocol, but for the most part, for the behavior as specified, we just sort of assume that they're following the specification.
B: Thank you, Tommy. I believe Chris Wood is presenting the next section.
H: Okay, all right. So this is an update on the rate-limited Privacy Pass issuance protocol that we presented last time; it's actually undergoing an adoption call right now. I kind of wanted to take a step back from the internal technical details of the protocol, because it is admittedly somewhat complex. We tried to spend a lot of time as the editors of the document clarifying what the different functions of the protocol are, what the different steps are, and whatnot.
H: So here I just want to talk about the high-level motivation for the protocol, what its properties are in terms of how much state the different entities in the protocol hold, and what the desired end-resulting privacy properties are, as well as touch on some open issues.
H: So, having said that, as a recap, because it wasn't covered in the base protocol presentation: Privacy Pass as an architecture is composed of two sub-protocols, one of which is the redemption protocol. This is the protocol run between client and origin for the purposes of redeeming tokens, and it's based on well-established HTTP authentication mechanisms.
H: The origin will challenge the client to present a token, possibly with certain parameters in that challenge, and the client, if it can satisfy the challenge, presents a token in response with its retried request. Issuance is the complementary protocol, which effectively takes one of those token challenges from the origin and runs a protocol between the client, attester, and issuer for the purposes of producing a token bound to that particular token challenge. And we've arranged things such that all of the complexity in Privacy Pass
is encapsulated in the issuance protocol, because there are expected to be very few implementations of it, like one client implementation that serves many clients, and so on. Whereas on the flip side, on the redemption side, we wanted adoption to be incredibly easy. So, for example, for the basic type 2 tokens that Tommy was referring to earlier, consuming and verifying a token is as simple as verifying an RSA signature. It's really straightforward.
H: Okay, I'm going to walk through how these two things work in concert.
H: So on the left we have the redemption protocol, where, as I said before, the origin produces a public key and a challenge for the client, and in response to the challenge the client produces a token. The desired property here is that the origin learns nothing about the client beyond whether or not it was able to present a token satisfying the challenge; that's just one bit. On the right-hand side, the issuance protocol again takes as input
this public key and challenge and turns it into a token, with this dance between client, attester, and issuer, which for the basic issuance protocol is just a blind signature protocol or a VOPRF protocol or whatever. The desired property here is that the issuer learns nothing about the client; in particular, it cannot link successive requests from clients together. They each appear as independent, unlinkable requests.
H: The origin cannot do so because, by definition, the only thing it learns from the token is whether or not the client was able to present a valid token, and the only thing the issuer learns is whether or not token issuance succeeded. There's no concept of state anywhere in the issuance flow.
H: This is problematic because there are a lot of interesting applications where rate limiting is quite useful, and we don't want to fall back to things like rate limiting based on shared context across clients, like IP addresses. You have metered paywalls, where you might want to limit the number of times a particular client can access some content. You might want to dampen the damage or the activities that bots from a particular bot farm are doing, and so on. Rate limiting is used in many, many places.
H: Before talking about the mechanism, I want to give a quick reminder of how rate limiting is commonly implemented in practice. Typically it's done with an algorithm called a token bucket or a leaky bucket, depending on which textbook you read, where the bucket is driven by two independent processes.
H: There's one process, the party trying to access the particular resource, that consumes tokens from this bucket, and it's only able to access the resource if there are tokens available to consume. The other process, the token replenishment process, puts tokens back in the bucket at a fixed recurring rate, and the rate at which you replenish tokens in the bucket determines the overall rate limit.
H: The dynamics of the interaction between requesting and consuming tokens determine effectively how clients are rate limited. Internally, you can think of this as being implemented with a simple hash table. So when a token replenish event comes in, the rate limiter identifies the context associated with the rate limit and then increments a counter associated with that particular context. In this example, the context is an entry in a hash table.
H: There's a previous count associated with it, and the replenish event just bumps up the count by t, where t is the replenish amount. Requesting a resource does exactly the same thing: identify the rate-limiting context, go into the hash table, but now decrement the counter associated with it; if tokens are available, process the request, and if not, drop the request on the floor.
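The bucket mechanics described above are the textbook algorithm, nothing specific to the draft. A minimal sketch, with timestamps passed in explicitly so the behavior stays deterministic:

```python
class TokenBucket:
    """Classic token bucket: replenished at `rate` tokens per second up to
    `capacity`; each request consumes one token, and requests are dropped
    when the bucket is empty."""

    def __init__(self, capacity: float, rate: float, now: float = 0.0):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity  # start full
        self.last = now

    def _replenish(self, now: float) -> None:
        # Credit tokens for the time elapsed since the last event.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def try_consume(self, now: float) -> bool:
        self._replenish(now)
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # rate limit exceeded: drop the request
```

With capacity 2 and a rate of 1 token/second, two requests at t=0 succeed, a third at t=0 is dropped, and a request at t=1 succeeds again once a token has been replenished.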
H: Okay, so we wanted to implement basically this functionality under certain constraints. As I was saying before, we want to maintain the security and privacy properties that we originally had in the protocol.
H: In particular, we don't want the origin to learn anything beyond a bit, and the bit in this case is not only that the client was able to produce a token, but that the client was able to produce a token without exceeding the rate limit. That is effectively the bit. Any other state on the origin side could potentially be used to track clients, and so we didn't even want to explore that avenue.
H: We also wanted adoption to be as simple as it was for Privacy Pass. In fact, the tokens produced by the rate-limited version are indistinguishable from the tokens of the basic issuance protocol; both are verified with an RSA signature. We wanted the issuance protocol, the thing that actually produces tokens and necessarily deals with the state aspect of rate limiting, to be as close to stateless as possible.
H: Of course, it can't be fully stateless, because then you can't really be rate limiting; someone has to hold state about how many tokens were issued for a particular client-origin pair, and the question is where that state is kept, how it is enforced, and whatnot. But we want to minimize the state, because we want to make it easy to actually operate an issuer or an attester. And importantly, for certain deployment models of Privacy Pass,
we want to make sure that neither attester nor issuer is able to link client-origin pairs together. Right now, in the basic issuance protocol, there's no information exposed to the issuer during issuance. But if we now consider a rate limit applied on a per-origin basis, intuitively someone has to see or enforce some state based on a specific client and a specific origin. We did not want that state to reveal anything about a specific client going to a specific origin,
like, you know, my laptop going to example.com or whatever. So we're aiming for something close to OHTTP and ODoH in style, where one party sees half of the equation.
H: Okay, so functionally, what the issuance protocol in this document does is extend the basic issuance protocol with a couple of new properties and features. The first is that the issuer actually producing a token necessarily learns the origin associated with the token challenge. This is necessary because the rate limits enforced in the system potentially differ on a per-origin basis, so the issuer needs to learn which origin this token request is for, so it can pick the right limit and respond accordingly.
H: Attesters in this protocol have much more responsibility than they do in the basic issuance protocol. Beyond just relaying requests and responses back and forth between client and issuer, they now learn what we refer to as a stable mapping, or, as described earlier, the rate-limiting context, which is specific to a per-client secret and a per-origin secret. And that word "secret" is important: the attester doesn't learn that client A is going to example.com.
H: It learns that client A is going to some random thing, and client B is going to a different random thing. This state is necessary because, in this particular design, the attester is the entity responsible for enforcing the rate limits on a per-client, per-origin basis, which we think is a reasonable trade-off, all things considered. And, also differently from the basic issuance protocol, token requests can now fail, because you might request a token when you've exceeded your rate limit.
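To make the attester's bookkeeping concrete: the draft's actual stable mapping is computed with public-key operations so that the attester never handles origin names directly. The HMAC below is only a toy stand-in for that mapping, not the draft's construction; it just illustrates how per-client and per-origin secrets can yield an opaque index that the attester counts against:

```python
import hashlib
import hmac
from collections import defaultdict

def anonymous_index(client_secret: bytes, origin_secret: bytes) -> bytes:
    # Toy stand-in for the draft's stable mapping: a PRF of two opaque
    # secrets. The attester can bucket requests by this value without
    # learning which origin the client is actually visiting.
    return hmac.new(client_secret, origin_secret, hashlib.sha256).digest()

class Attester:
    """Keeps one counter per (client, origin) pair, keyed only by the opaque
    index, and rejects token requests once the limit is reached."""

    def __init__(self, limit: int):
        self.limit = limit
        self.counts: dict[bytes, int] = defaultdict(int)

    def allow_token_request(self, client_secret: bytes, origin_secret: bytes) -> bool:
        idx = anonymous_index(client_secret, origin_secret)
        if self.counts[idx] >= self.limit:
            return False  # rate limit exceeded: the issuance request fails
        self.counts[idx] += 1
        return True
```

Note how the same client visiting two origins lands in two unrelated buckets, which is the unlinkability property described above, though the real protocol achieves it without the attester ever holding an origin-derived secret in this direct form.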
H: Okay, so functionally, let me just walk you through what the issuance protocol does at a very high level, without the internal technical details; those are best kept to the draft. This is just the issuance protocol; the redemption protocol remains the same. You'll notice on the left, as input to the client,
there are some public keys, there's a challenge, and there are some other things, specifically a secret key skC that the client maintains for all of the rate-limited requests it wants to make through a particular attester, because that's the per-client secret. Anyway, there's a request-response flow between client, attester, and issuer, and what the attester does using this request-
response flow is basically compute the stable mapping, the rate-limiting context, decrement the count associated with that context, and then respond to the client with an error or with the token response accordingly. And there's a lot of complexity, a lot of stuff that happens behind the scenes,
to actually do this stable mapping computation, and to do it in such a way that the attester can't try to figure out which origin the client was after without the client's active participation. Miriam, if it's okay, can I take questions at the end?
H: She's in the room; I can't hear. Okay, thank you, Miriam. As a result of this particular design, the way state is split across the various participants is as follows. As before, in the basic issuance protocol, the origin keeps a constant amount of state; in particular, just the public key necessary to validate tokens.
H: Clients maintain state on the order of the number of servers they want to request tokens for. Attesters now maintain the most state, on the order of the number of clients multiplied by the number of servers those clients are accessing. And issuers only maintain state on the order of the number of servers actively configured for that particular issuer.
H: Looking at the privacy properties: as before, it's not possible for origin or attester to link any two requests to the same client, so we maintain parity with basic Privacy Pass here. But, as I said, the attester does learn this rate-limiting context, based on a per-client and a per-origin secret.
H: The rationale for why this was reasonable is that the attester is already in a privileged position, holding the client's IP address in the split attester-issuer model, and as a result it's already trusted to handle this kind of sensitive information and has less incentive to misbehave, because if it wanted to do bad things to the client, it could just forward the IP address on to the issuer.
H: Okay, so there are two open issues; one of them is fundamental and worth some analysis and consideration. The first is that this state the attester learns is effectively equivalent to the access patterns that clients might have in terms of, you know, using a browser on the web. It's unclear how much leakage is actually possible or revealed through this access pattern, in particular if an attester could combine this information with certain auxiliary information for the purposes of potentially learning which origin a specific client was after. There's another open issue that we think we have a technical solution for; we would have a better technical solution if we had a partially blind signature, but we do not yet have one, so we have a different solution. Details are in issue 218.
H: As a status update, to assist with the adoption call: we have two interoperable implementations, one of which is the open source implementation that Tommy referred to earlier. You can test it out; it supports all the basic issuance protocols as well as the rate-limited one. And we do have security analysis that was done and submitted for peer review. Unfortunately, it did not capture
H
This issue, which I just described on the previous page: we're working right now to update the analysis to make sure it is taken into account and that our mitigation does actually prevent it, and we'll hopefully update the working group with that analysis when complete. Okay, that's it. Thank you for letting me run over. I'm sorry, Miria, I can take your question now.
E
H
Yeah, so one of the assumptions is that the attestation that's done between the client and the attester prevents that from happening, so you can't just endlessly spin up new identities as a malicious client and do that sort of Sybil attack. Of course, if your attestation does not ensure that, then there's an obvious problem with this. So the protocol assumes that the client-attester relationship enforces or prevents that from happening.
H
J
Mia, Columbia. Thanks for the overview. I have a question: in the architecture you propose, the rate limit is provided from the issuer to the attester, and...
J
H
Yeah, actually, that was how the draft was previously written: we had the attester send the count to the issuer, but we determined that that leaks too much information. In particular, based on the counts that the issuer sees, it could figure out whether or not two requests are from the same client. As a simple example, imagine that the issuer just saw n requests in a row, in sequence, and every single time the count kept going up: it would just conclude that these are obviously from the same client.
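To make that linking concrete, here is a small hypothetical sketch (not from the draft; the function name and data are invented) of how an issuer that sees raw redemption counts could cluster requests: a run of counts that keep going up is strong evidence of a single client.

```python
# Hypothetical sketch: why sending raw counts to the issuer leaks linkability.
# Each observation is the count value attached to a redemption request.

def link_by_counter(observed_counts):
    """Cluster consecutive observations whose counts increase by one,
    as an issuer might do to guess which requests share a client."""
    clusters, current = [], []
    for c in observed_counts:
        if current and c == current[-1] + 1:
            current.append(c)
        else:
            if current:
                clusters.append(current)
            current = [c]
    if current:
        clusters.append(current)
    return clusters

# Four requests with counts 1..4 followed by a fresh count of 1:
# the issuer can guess these came from two distinct clients.
print(link_by_counter([1, 2, 3, 4, 1, 2]))  # [[1, 2, 3, 4], [1, 2]]
```

This is the intuition behind keeping the rate-limiting state at the attester rather than exposing per-client counters to the issuer.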
J
H
G
Is there evidence, from people doing this sort of account creation or something like that, that this metering will be effective? I can see how this is effective for the New York Times paywall, if you want to come up with a complicated scheme to do that.
G
Maybe that's fine, but I sort of get the impression that in the abuse case there's not going to be a single threshold where a yes/no answer is useful. You might want to know the actual number, because you could, you know, escalate or something; it's not always going to be simply good or bad in those abuse cases, but they would want to use that as a signal. So...
B
G
...are there use cases where a single threshold is going to be helpful?
H
I think this is still something that's under discussion in the anti-fraud community group. There's a lot of uncertainty with respect to, you know, what the useful signals are that can feed into an anti-abuse system, and what role different types of attestation and rate limiting play in those scenarios. So I hope the conversation, and an answer to your question, evolves in that group, which is where the practitioners are iterating on...
H
You know, what the value of this is as an application-specific thing. But I don't have an immediate answer for you, because that group (I mean, you're actively participating in it) has many conflicting opinions from different people. So we'll see how things evolve.
D
Yeah, to chime in for a minute, just as another author we have here: obviously we can't know how everything will play out, but when we have discussed this with various anti-fraud systems that would be interested in using Privacy Pass in general, essentially there's a spectrum, starting from basic Privacy Pass, which has no rate-limit guarantees at all other than client implementation, like what they would have today.
D
It will not allow them to do all of the things that they could do if they got an exact count for a client and re-identified someone, but it increases the number of sites that can use Privacy Pass meaningfully today, and we'll see if we need to go further than that. But this is a useful jump.
H
Yeah, I sort of view it as another tool for them to use, one that discourages use of existing tools that might have a bad privacy posture with respect to the client, like tracking based on the IP address, for example.
B
Thanks for that discussion. Yeah, just reminding the group: this document is subject to an open adoption call that will end in about two weeks, so please do comment on the mailing list and share with the group your opinion of whether this document is suitable for adoption.
H
Okay, thank you. So this is just a reminder of a draft that a couple of us have written on the side, after realizing that there's this common problem that's emerging in many, many different application areas inside the IETF, OHAI and Privacy Pass in particular, in an attempt to distill all the different ways that, you know, people deal with key consistency and discovery in practice. That's the intent; that's the motivation.
H
So this is not, like, a tremendously new update to that document, but just a reminder, and hopefully at the end we'll talk about what we can do with this, if we want to do anything with it. Okay, so, as I said, there are a number of protocols in the IETF right now that actually depend on, you know, clients having some consistent view of their keys, so that when actually using these systems they don't reveal information that they shouldn't reveal to a malicious service provider.
H
Privacy Pass is one particular case, where you want the issuer verification key to be consistent across all clients that are interacting with that issuer. Oblivious HTTP is another, where you want the gateway's public encryption key to be consistent; it's in the key configuration, the URL, and all these things that provide that information. Tor is another example, where you want relay public keys to be consistent across all clients. Fundamentally, there are two common requirements for these different types of systems, which I'll call unlinkability and authenticity.
H
Unlinkability is, informally, that servers can't link usage of a specific key to an individual user. Authenticity is that when clients are using a key, it's a key that was intended for that specific server, not a key that someone else, who is not the intended server, owns.
H
So why is this important? Imagine you had a scenario where there's a bunch of clients interacting with some server. The server makes its key k available to those clients, and a client uses that key, computes a function based on the key, and sends the output of the function to the server.
H
The server, assuming all clients have a consistent view of this key, learns only that this was sent from one of these n clients in this particular set.
H
There's also authenticity, the other informal principle. So imagine you now had an adversary that sits between the server and the clients: the server makes its authentic key available, but the adversary slips in its own malicious key k' and gives that to the clients. If the clients don't know any better, any function they compute based on this particular key k' reveals, to the adversary, something that was originally intended only for the server.
H
And, simplifying things a lot, basically what the draft advocates for is that these two properties, unlinkability and authenticity, mean that every single client has a consistent view of the server's intended key and that that view is correct. Systems that actually enforce these two things we call key consistency and correctness systems, and the design space for this sort of thing is huge.
H
I mean, there was a discussion on key consistency in OHAI earlier in the week, based on one of Ben's drafts, and that touched on one solution in this design space. There are other solutions that have different complexity in terms of operation, trust model and so on.
H
So the actual system that's used to enforce consistency and correctness can vary based on all these factors, some of them external, I guess. Roughly speaking, the design space boils down to, you know, these four things, the first of which is fetching a key through a trusted proxy: clients that belong to the same anonymity set would ask a trusted proxy.
H
That's roughly what the spec does. You can fetch through multiple untrusted, or less trusted, proxies and verify that you get back the same answer; or you could stick all these keys on a bulletin board, audit that bulletin board, and make sure that, you know, you're getting the latest contents of the audited bulletin board. A bulletin board here...
H
It's just, you know, an authentic append-only log or something like that. And for these approaches I'm going to walk through the pictures, just to really drive the point home. These are pictures of these different solutions. Again, they have different applicability based on your application configuration, your deployment model, your threat model, your trust model and whatnot, and so there's no one-size-fits-all or correct solution here; it's a pile of trade-offs.
H
So in the trusted-proxy case, as I was saying, there's a proxy in the middle that gets the key and then vends that same copy to all clients; pretty straightforward. In the multi-proxy case, where clients might go through different untrusted proxies to ask the server to present its view of the key, clients can check to see whether or not a server is trying to lie and change keys.
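A minimal sketch of that multi-proxy check, under invented assumptions (the proxies here are plain dicts standing in for independent fetch paths; this is not any spec's API):

```python
# Hypothetical sketch of the multi-proxy consistency check: accept the
# server's key only if every independent fetch path reports the same view.

import hashlib

def fetch_key_via(proxy, server):
    # Stand-in for fetching the server's published key through one proxy.
    return proxy[server]

def consistent_key(proxies, server):
    """Return the key if all proxy views match, else refuse it."""
    views = [fetch_key_via(p, server) for p in proxies]
    if len({hashlib.sha256(v).hexdigest() for v in views}) != 1:
        raise ValueError("inconsistent key views: possible tagging attempt")
    return views[0]

honest = [{"issuer.example": b"key-A"} for _ in range(3)]
print(consistent_key(honest, "issuer.example"))  # b'key-A'

# If one path serves a different key, the client refuses it.
tampered = honest[:2] + [{"issuer.example": b"key-B"}]
try:
    consistent_key(tampered, "issuer.example")
except ValueError as err:
    print(err)
```

The security of the check rests on the proxies being unlikely to all collude, which is exactly the trust trade-off being discussed.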
H
You know, in a malicious attempt to tag clients. I have consistency double-check listed here as sort of an example of this; it's slightly different, but close enough in spirit that I just kind of threw it down here. And then there's the bulletin board, an external database which stores the keys; clients just pull it down, and there's some assumption that there's an ecosystem built around this bulletin board for verifying that everything is correct and consistent. CONIKS or Key Transparency are basically instantiations of this concept.
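As a rough, toy illustration of the bulletin-board variant (a deliberate simplification: real transparency systems like CONIKS or Key Transparency use Merkle trees and gossip, not this invented hash chain), the append-only log can be modeled as a chain whose head auditors agree on out of band:

```python
# Hypothetical toy model of a bulletin board as an authentic append-only
# log: a hash chain whose head clients compare against an audited value.

import hashlib

def chain_head(entries):
    h = b"\x00" * 32
    for e in entries:
        h = hashlib.sha256(h + e).digest()
    return h

log = [b"issuer.example: key-A", b"issuer.example: key-B"]
audited_head = chain_head(log)  # agreed on out of band by auditors

def key_in_audited_log(entry, entries, expected_head):
    # A server can't hand one client a special key without changing the
    # head that everyone else audited.
    return entry in entries and chain_head(entries) == expected_head

print(key_in_audited_log(b"issuer.example: key-B", log, audited_head))  # True
print(key_in_audited_log(b"issuer.example: key-X", log, audited_head))  # False
```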
H
The question is, you know, whether or not there's value in this complementary document that sort of describes the rest of the design space, or some other pieces of the design space (it may not be exhaustive). There is no specific section in the Privacy Pass charter that addresses the key consistency, or key tagging, issue.
H
So, after discussion with the chairs, we are basically proposing to bring this document in as informational, as a working-group supporting document or something to be published (I don't think it matters that much), to sort of help inform the discussion around consistency double-check or other solutions as they emerge. That's our proposal; curious to hear what folks think.
D
Yeah, thanks for doing this; this is clearly very important. I have no idea what working group this belongs in; I don't think it matters too much, and we don't have one to do it in, but we should do this.
D
The one comment I had is: I wonder if the approach of double-check, not the specific protocol instantiation but the approach of it, could be mentioned more explicitly here. Again, it kind of seems implicit when you're talking about, oh, you have proxies you can go through and you can go direct; maybe laying that out and bringing some of the double-check in, because it feels like it applies across different protocols, like OHAI...
H
That sounds like a reasonable approach, and certainly we didn't mean to intentionally omit the double-check variant. It is, as you say, a slightly different technique than the multi-proxy discovery; I just, like, vastly oversimplified things on this particular slide. So we could add a section there. Alternatively, depending on Ben's preference here, we could just take all this content and put it in an appendix in his draft, like:
H
here are alternative solutions, if you didn't want to use double-check or whatever. I don't think it matters that much, although, to Tommy's point, it does seem like it might be useful to have somewhere to point and say: you know, this is the consistency mechanism I'm using in a specific deployment or in my specific protocol.
B
So the authors of this draft have requested an adoption call for this draft in the Privacy Pass working group. I don't believe such an adoption call has been made yet, but the chairs will consider that, and if an adoption call is started you'll see the announcement on the list. So please do review the draft, so that you're prepared in case such an announcement appears.