From YouTube: IETF114 PPM 20220728 1400
A
And if you're speaking at the microphone, keep your mask on. I think we're ready for Shivan; Shivan's going to talk about STAR, and then we're going to move on to that.
D
Yes — I'm in the queue to bash the agenda; there was no opportunity to do so. I would like to request that this be moved to the end of the session, so we give an appropriate amount of time for all the DAP-related issues that we have. We need to make sure we get through those prior to discussing non-working-group items at this time.
A
All right — it is, it is.
E
Okay — it's just me, Chris Wood. Can you cover me if I'm not able to communicate with people? I'll figure it out on my end; I'll be back, but — yep.
C
Hello, my name is Shivan, and Alex, Pete, and I have been talking about STAR for a while. STAR was a research paper; it's going to appear in the upcoming CCS, I think. And yeah — looking forward to discussing it with folks.
C
The central idea is that we would like to have k-anonymity for clients reporting potentially sensitive measurements to an untrusted server. And a pretty important goal that we have is that it should be cheap, because Brave is a small organization, and there are other similar small organizations who would like to do privacy-preserving measurement but don't have infinite AWS money to spend on that. So low computational overhead and network usage, for clients and servers, is pretty important.
C
Similarly, it should also be easy to implement using well-known cryptographic techniques, and obviously it should also be private.
C
I won't go into too much detail on the central idea here — I think Alex's presentation at the last IETF did a good job — but essentially we're going to use Shamir's secret sharing. So you get a symmetric key through some deterministic operation on the initial measurement, and then you encrypt the measurement using that key. And so the idea now is that you can only get x, the measurement, when you have the key k.
C
So you generate a secret share of k, and you send the encrypted measurement and the secret share to the server. The server gets a bunch of these, but it doesn't know what m is until it can perform the recovery operation on the secret shares of k, and it can only do that once there are n of those. So in this way you get k-anonymity.
C
Where
n
is
the
number
of
minimum
number
of
shares
you
need,
and
then
you
can
use
k
once
you
recover
it
to
decrypt
them
to
and
get
the
original
measurement
back,
and
it's
really
important
to
have
an
anonymizing
proxy
here.
So
I'll
talk
a
little
bit
about
this
later
on,
but
essentially
once
the
measurement
is
decrypted,
you
want.
You
want
to
be
sure
that
you
still
don't
have
access
to
the
the
ip
address
of
the
people.
C
...submitting it. And also you have to use a randomness server. You could also do this locally, but we decided that it's just a lot better for privacy reasons if you use one. So the idea is that the client sends a blinded input value to the randomness server and gets a salt back, and if clients have the same input value, then all of those clients would get the same...
C
Salt
pack-
and
this
is
to
do
this-
is
done
to
like
mitigate
the
server
brute,
forcing
all
possible
input
values,
because
if
the,
if
the
space
for
the,
if
the
initial
measurement
is
not
except
doesn't
have
enough
entropy,
then
the
so
the
server
can
very
easily
brute
force,
all
possible
values
and
then
just
see
what
the
values
match
up
with
the
encrypted
value
and
then
yeah.
We
also
use
an
opr
to
make
sure
that
the
randomness
server
does
not
learn
the
input
value
as
well.
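The brute-force risk is easy to see concretely. A hypothetical sketch (domain names and the candidate list are made up): if the key were derived from the measurement alone, with no randomness-server salt, a server holding a recovered key could simply enumerate a small measurement space.

```python
# Hypothetical sketch of the brute-force risk: unsalted key derivation
# over a low-entropy measurement space is trivially enumerable offline.
import hashlib

def key_no_salt(measurement: bytes) -> bytes:
    # Deterministic and unsalted: anyone can recompute this per candidate.
    return hashlib.sha256(measurement).digest()

# Suppose the server recovered this key once the share threshold was met...
recovered = key_no_salt(b"example.com")

# ...it can now identify the measurement by trying every plausible value.
candidates = [b"google.com", b"example.com", b"brave.com"]
match = next(m for m in candidates if key_no_salt(m) == recovered)
```

A per-epoch salt from the randomness server — obtained through an OPRF so that the randomness server never learns the input either — is what blocks this offline enumeration.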
C
So the aggregation server can't ask the randomness server at this point. And yes — and then the client sends the encrypted message over an anonymizing proxy to the aggregation server, and then there's an aggregation phase where you can reveal the original message, if it's sent by enough clients.
C
Okay, cool. So the implementation status is that we're shipping this in Brave for some telemetry there.
C
Yeah, and we made a bunch of changes given the feedback that we got at the last IETF, on the mailing list, and through conversations. So we don't use puncturable OPRFs anymore; we just simply rotate the keys for the randomness server, like I mentioned. And the client can only send the encrypted value to the aggregation server in the epoch subsequent to the randomness phase. And yeah.
C
We also now require the use of a randomness server, just to, I guess, keep things simple — you don't have to make that decision yourself: "Oh, do I need one? Do I not need one?" And apart from that, just a bunch of documentation of the risk of collusion between the various entities, and a lot more detail on leakages in the security considerations section, and stuff like that. And yeah — we think it's ready for adoption.
C
We spoke a bunch with Chris Wood and he had some good comments, and I think we will be addressing those comments. But I think at this point it would be great if the draft were a working-group thing that the working group works on, instead of just a couple of people. So yeah — happy to hear any comments about that.
H
So, first question: how long do you imagine the epochs being? Is it — again, sorry.
C
Right — so right now, the time period we use is one day. Okay — that's what we imagine.
H
Okay. My second question is: what happens if I, as a client, generate a bogus share? What I mean is, I generate a share that corresponds to a different encryption key.
C
So — but it is a secret share. You're trying to attack a particular...
H
Well — what I want is to make it impossible for you to construct a given value. So, for instance: you're collecting the top URLs, right, and I want to make it impossible for you to collect google.com in the top URLs. So what I do is I generate a thing that has the encrypted value for google.com, but has a secret share that corresponds to a random point.
C
I think generally we are keeping Sybil attacks out of scope, because I think that's a common problem with many systems. But one thing that we do — that we were talking about with Chris; that's one aspect of it — is that if, on decryption, you see a value that is different from everyone else's, then you...
H
No, no, no! Because — if I understand it, you sort the inputs on the encrypted value, right? Good. So you bucket them up on encrypted value, you take the corresponding secret shares, right, and now you can reconstruct the key — and now you get a key and you try to decrypt them, and none of the values will decrypt.
H
And
so
because,
if
you
consider
one
key,
you've
considered
a
random
key
right,
so
you've
got
to
say
so.
You've
got
to
find
somebody
to
reject
the
bogus
key.
The
book
is
inputs
right,
and
so,
if
I
remember
correctly,
there
was
some
technique
that
apple
used
in
their
in
their
in
their
system
system.
For
this,
but,
like
I
don't
know
how
to
do
it
with
this
design.
So
so
I
think
this
actually
doesn't
answer
for
this.
I
think
it's
not
just
I
mean
so
civil.
H
I
mean
like
it's
like
one
thing
to
be
like
we
don't
have
grand
civil
attacks,
but
like
the
situation
where
you
know
the
situation
is
that
like,
if
I
can
get
any
small
fraction
of
the
keys
that
you
can't
decrypt,
that's
like
actually
very
serious.
So
I
think
you
think
you
need
a
way
to
reject
bad
input.
Shares.
C
Okay, yeah — I've got to think more about that. But did you have another?
H
I don't know — I don't know how to fix it. I think, as I said, my pleasure would be that there was — you know, in the Apple CSAM thing, they had a Shamir secret sharing scheme and they had some way to reject bogus values. So maybe you can steal that; maybe you can't.
B
And Alex — it's your — you're the last speaker.
I
Hey,
can
you
hear
me?
Okay,
yes,
alex
yeah
yeah,
I'm
alex
I'm
one
of
the
authors
of
the
star
draft,
a
co-author
of
the
star
draft.
So
in
answer
to
eric's
question,
that's
what
we
do
in
brave
is
we
take
the
threshold,
a
threshold
number,
the
threshold
number
of
ship,
the
minimum
threshold
number
of
shares
and
tries
to
reconstruct
and
then,
if
we
can't
reconstruct,
we
keep
taking
random
sets
of
shares.
So
that's
like
one
thing
that
we
don't.
H
Say
your
threshold
see
a
threshold
is
zero
threshold
is
500
shares
right
and
one
percent
of
the
shares
are
bogus.
What
is
what
is
the
chance
that
not?
What
is
the
chance
that
99,
so
0.99
to
the
500
is
like
an
incredibly
small
number
so,
like
it
doesn't
work
it
doesn't
it
doesn't
work?
It
can't
attack
with
any
kind
of
power.
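The arithmetic here checks out: with a 500-share threshold and 1% bogus shares, the probability that a uniformly random subset of 500 shares avoids every bogus share is 0.99^500, so retrying random share sets almost never finds a clean set.

```python
# Probability that a random set of 500 shares contains no bogus share,
# when 1% of submitted shares are bogus (independence assumed for simplicity).
p_clean = 0.99 ** 500
# p_clean is about 0.0066, so roughly 99.3% of reconstruction attempts fail.
```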
B
Thanks for — thanks for highlighting that point. I think we're out of time; we need to move to the next speaker, which is Tim. Thanks for mentioning the question of adoption — we might come back to that if there happens to be time at the end of the session.
B
I see Tim is in the queue. Tim, this is your presentation — do you want to present slides?
K
Here we go — okay, let's see... and I've got buttons. All right, great, let's get started. Okay. So, in this particular deck we're going to cover the current status of implementations of DAP; then I want to talk about a small number of the notable changes in the most recent draft, 01, of DAP; and then we're going to use that to segue into a discussion of one or two open problems that the chairs have encountered and that we're interested in discussing in the working group. Before I move on —
K
Sounds good. All right — so, implementation status. As of right now we have two implementations of draft-ietf-ppm-dap-01 that are up on GitHub. First there's Daphne, which implements a DAP leader, helper, and collector, and is written in pure Rust. Then Janus is another implementation of DAP server components, also written in pure Rust. So Daphne and Janus are independent implementations of DAP, though they do share some common dependencies, which we'll get to in a minute. And finally there's divviup-ts, which is a client.
K
So it only has enough of the protocol and cryptography bits to do report uploads, and it's written mostly in pure TypeScript, although some of the cryptography dependencies are transpiled from Rust. So, as Chris Patton would have explained had he gone before me: the DAP protocol is defined in terms of VDAFs — Verifiable Distributed Aggregation Functions — which are being standardized through the CFRG, and draft 02 of that just dropped a few weeks ago now. So we have an implementation of VDAF draft 01 in libprio-rs, which is up on GitHub and also published as the crate prio on crates.io.
K
So, as I mentioned before, Daphne and Janus are independent implementations of DAP, but they both use libprio to implement VDAFs. So it certainly would be nice to see more implementations, especially ones in some languages besides Rust, if anyone out there is interested. So yeah — all this stuff is up on GitHub; please, you know, go check it out, maybe deploy them.
K
Let
us
know
how
it
works
out
for
you,
so
we
have
been
doing
some
measure
of
manual
testing
of
interoperability
between
daphne
and
giannis,
which
has
gone
all
right
and
going
forward
we're
looking
at
designing
an
interoperability
test
framework
inspired
by
the
quick,
interrupt
runner,
the
the
aim
of
which
is,
you
know
in
a
nutshell,
you
could
take
a
dp
implementation,
stick
it
inside
a
docker
container
and
then,
besides
the
endpoints
specified
by
the
protocol,
you
would
have
a
handful
of
extra
sort
of
control,
endpoints
that
allow
the
automated
setup
and
execution
of
interoperability
tests.
K
The idea being that, hopefully, at some point we'll have some tests running in, like, a continuous-integration setup somewhere that, you know, gives us ongoing results about whether implementations are working and can talk to each other.
K
So, in DAP a report gets uniquely identified by its nonce, and a nonce consists of the time at which the measurement was taken and then a random component that's intended to make the nonces unique. So, nonces have to be unique because they are used for anti-replay by the aggregators, and they are timestamped so that the aggregators can decide whether a given report falls into a particular batch interval. So, up until draft 01...
K
The
notch
was
the
number
of
seconds
since
the
unix,
epic
and
then
eight
random
bikes,
as
it
turns
out
that
timestamp
is
high
enough
resolution
to
leak
some
meaningful
information
about
the
client.
So,
as
of
the
most
recent
draft,
we
now
expect
clients
to
round
the
timestamp
down
to
the
minimum
batch
duration,
which
is
one
of
the
long
lived
task
parameters
and
we
widen
the
random
component
to
16
bytes.
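The revised nonce construction might be sketched like this, with assumed values: the minimum batch duration is a per-task parameter, and the 300 seconds used here is made up for illustration.

```python
# Sketch: nonce = (timestamp rounded down to the minimum batch duration,
# 16 random bytes). Rounding coarsens the timestamp so it leaks less about
# the client; the widened random part keeps nonces unique for anti-replay.
import secrets
import time

MIN_BATCH_DURATION = 300  # seconds; illustrative task parameter

def make_nonce(now=None):
    t = int(time.time()) if now is None else now
    rounded = t - (t % MIN_BATCH_DURATION)   # coarsened timestamp
    return rounded, secrets.token_bytes(16)  # widened random component
```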
K
Moving
on,
but
okay,
next
big
idea
is
aggregation
jobs.
So
first,
let's
recall
what
the
aggregation
sub
protocol
is
about,
and
I
suppose
here
I
should
take
a
brief
parenthesis
to
note
that
dfp
consists
of
three
sub-protocols
upload
where
clients
are
transmitting
reports
to
the
aggregators.
Then.
K
...aggregate, where the aggregators jointly prepare inputs and aggregate them; and finally collect, where the aggregate shares are transmitted to the collector so that it can get the eventual aggregate result. Chris Patton is going to get into this a little bit more in just a few more minutes.
K
Okay, turning back to the aggregation sub-protocol, let's just unpack a bit more what it actually does. So, at some point aggregators are going to be holding some large number of input shares that have been uploaded by clients, and they want to aggregate them together. But in the taxonomy of DAP you can't actually aggregate input shares; you first have to obtain an output share from each input share, and that process of going from input share to output share is what VDAF refers to as preparation.
K
Hence
we
end
up
with
the
somewhat
unhelpfully
generic
term
of
preparation,
but
one
thing
that
we
do
expect
is
going
to
hold
across
all
or
most
vdafs
is
that
preparation
is
embarrassingly
parallel
since
verifying
the
proof
of
one
input's
validity
should
generally
be
completely
independent
from
another.
So
we
want
to
enable
the
leader
to
be
able
to
schedule
the
preparation
of
lots
and
lots
of
inputs
in
parallel
for
efficiency,
and
this
is
why
we
introduced
this
notion
of
the
aggregation
job
into
the
protocol.
K
Okay. So, as we discussed, at some point the leader is going to want to schedule the preparation of a big set of shares. In the Prio family of VDAFs, preparation can begin as soon as the aggregators receive inputs from the clients, because there isn't an aggregation parameter in those VDAFs — so maybe in that setting, every time the leader receives a thousand inputs, it'll dispatch an aggregation job, right. But in something like Poplar1...
K
The
aggregate
results
sorry
aggregate
shares
as
quickly
as
possible,
so
either
way
what
the
leader
will
do
is
generate
random
aggregation,
job
ids
and
assign
a
set
of
reports
to
each
job
that
mapping
one
job
id
to
many
shares
gets
transmitted
to
the
to
the
helper
in
the
aggregate.
Initialize
request
illustrated
here
on
the
slide
and
that
job
id
is
going
to
be
referenced
in
subsequent
messages.
K
...in the aggregate protocol, right. So then the helper can use the job ID to index into its own storage, to fetch the state and execute the next step of VDAF preparation.
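The bookkeeping described here might look like the following — an illustrative sketch of the helper side, not the DAP wire format; all names and the "prepared" state tag are made up.

```python
# Sketch: the leader mints a random aggregation-job ID for a batch of report
# shares; the helper keys its storage by that ID, so later messages in the
# same job can fetch the stored state and resume preparation.
import secrets

class HelperJobStore:
    def __init__(self):
        self.jobs = {}  # job ID -> per-report preparation state

    def aggregate_init(self, job_id, report_shares):
        # First preparation step for every share assigned to this job.
        self.jobs[job_id] = [("prepared", s) for s in report_shares]

    def aggregate_continue(self, job_id):
        # Subsequent requests reference the same ID to fetch stored state.
        return self.jobs[job_id]

# Leader side: mint a random job ID and assign reports to the job.
job_id = secrets.token_bytes(16)
helper = HelperJobStore()
helper.aggregate_init(job_id, ["share-1", "share-2", "share-3"])
```

Because each job is independent, a leader can dispatch many such jobs concurrently, and any helper instance with access to the shared store can service a continuation.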
K
So this enables many helpers to work in parallel, provided they can share some storage — like, you know, a database or key-value store or what have you. And the other virtue of the scheme is that the job IDs aren't secret, and neither do they need any complicated anti-replay protections, because all the sensitive state is in either aggregator's trusted data store.
K
So
again,
you
can
check
out
the
linked
issue
and
pull
request
on
the
slide.
If
you
want
to
learn
more
about
like
the
context
behind
this
change,
all
right.
Moving
on
now,
let's
discuss
how
the
aggregators
authenticate
to
each
other
in
this
aggregate
sub
protocol
that
we've
just
been
discussing
so
in
in
dp
aggregation
is
coordinated
by
the
leader
aggregator
though,
actually
in
the
aggregate
sub
protocol,
the
leader
is
acting
as
a
client
to
the
helper's
http
server.
Now
this
channel
between
the
two
aggregators
has
to
be
mutually
authenticated.
K
To
prevent
network
attackers
from
impersonating
either
aggregator,
we
might
assume
that
they
talk
over
tls,
which
helps
to
an
extent
with
server
authentication,
but
we
also
need
client
auth
here
so
what's
in
the
spec
now,
as
of
the
pull
request
linked
in
the
slide
is
a
requirement
that
the
leader
has
to
set
a
a
bearer
token
under
this
dap
auth
token
header
that
we
invented
in
the
request
it
makes,
and
the
value
of
that
token
is
a
secret
pre-negotiated
between
the
aggregators
before
the
start
of
the
protocol.
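In outline, the bearer-token check looks like this — a sketch only; the DAP-Auth-Token header name is from the draft, while the function names and token format are illustrative.

```python
# Sketch: leader and helper hold a secret pre-negotiated out of band; the
# leader attaches it as a bearer token on every request it makes, and the
# helper compares in constant time before acting on the request.
import secrets

shared_token = secrets.token_urlsafe(32)  # negotiated before the protocol

def leader_headers():
    # Attached to every request the leader makes to the helper.
    return {"DAP-Auth-Token": shared_token}

def helper_accepts(headers):
    # Constant-time comparison against the helper's copy of the secret.
    presented = headers.get("DAP-Auth-Token", "")
    return secrets.compare_digest(presented, shared_token)
```

This is exactly the long-term shared secret flagged as undesirable below, which is part of the motivation for swapping it out for standard HTTP authentication mechanisms in a future draft.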
K
Now
we
did
this
because
it
enables
the
deployments
that
we
have
in
mind
right
now,
but
this
isn't
really
a
workable
solution
for
the
protocol.
So,
first
off
you
know,
long-term
shared
secrets
between
the
participants
is
not
desirable.
We
should
do
our
best
to
avoid
that,
of
course,
and
more
to
the
point,
because
it's
such
a
specific
prescription,
it
makes
it
impossible
for
deployments
to
use
any
number
of
existing
well-established
offense
or
offset
mechanisms
used
widely
used
in
http
apis.
K
So
this
in
particular,
is
something
that
we
definitely
want
to
change
in
a
future
draft.
But
at
this
point
we
should
take
a
step
back
from
the
specific
interaction
between
the
two
aggregators
and
look
more
broadly
at
how
protocol
participants
are
authenticating
to
each
other
when
they
communicate
in
dap.
K
And we have a summary in this slide here. So — right. So here we have the different communication interactions between DAP participants. So, in the first row, we're looking at how the client interacts with the aggregators to upload its input shares.
K
So
in
this
case
the
design
requirements
are
confidentiality
which
we
achieve
by
having
the
client
encrypt.
Either
input
share
to
an
hpk
public
key
advertised
by
either
aggregator.
Of
course,
this
is
necessary
because
if
a
network
observer
could
see
the
input
shares
in
the
clear,
then
that
would
defeat
all
the
privacy
goals
of
the
protocol.
K
The colors are intended to match up the requirements to the specified mechanism that satisfies each requirement, and red highlights just the stuff that's going to change — hence the motivation for bringing this to the working group today. Okay, where was I? Right, confidentiality — right, so we protect the input shares in flight by HPKE-encrypting them to a public key advertised by either aggregator. Okay. Then, of course, we need server authentication in this setting, because we want to make sure that the input shares are being transmitted to the authentic aggregators participating in a DAP deployment.
K
So we do this by having the client fetch the HPKE configuration that it's going to encrypt to over TLS, so that the server identity can be verified that way. Excuse me... where was I? All right.
K
Finally, we have — DAP allows, but does not require, the client to authenticate to the aggregators. So it's tempting to require client auth as a mitigation for Sybil attacks, and in those deployments where this is possible, that's going to be extremely effective. But it's not going to be the case that every deployment will be able to have a meaningful client-side identity with which it could authenticate.
K
So for that reason we don't want to require it in all cases. Some deployments are going to have to allow unauthenticated input uploads, and we'll have to figure out some other means of mitigating Sybil attacks.
K
We
we
also,
I,
in
my
view,
don't
want
to
specify
how
a
deployment
would
do
client
authentication
if
it
chose
to
which
I'm
going
to
come
back
to
all
right.
The
next
row
is
the
communication
between
the
leader
and
the
helper
during
the
aggregate
sub
protocol,
which
we
just
covered,
so
I
don't
want
to
spend
a
ton
of
time
on
it
again,
but
yeah
confidentiality
is
generally
cheap
because
they
are
communicating
over
tls
and
mutual
authentication
through
that,
through
this
current
pre-negotiated
barrier,
token
scheme
and
the
server's
tls
certificate.
K
So these aren't exactly the same, because while the collector makes direct HTTP requests to the leader, it never actually talks directly to the helper; the communication between collector and helper is tunneled through the leader, which coordinates the collect and aggregate protocols. So we achieve confidentiality in both cases.
K
What's in the text right now is the same bearer-token scheme as between the aggregators, and we currently have nothing to specify whether or how the collector and helper should authenticate to each other — which is necessary in the one direction because we don't want the leader to be able to present forged collect-request parameters to the helper, and in the other direction because we'd like the collector to have confidence that it's receiving an aggregate share from the authentic helper aggregator.
K
Okay. So clearly we have a bunch of cases where what we do say about authentication needs to change, and others where we say nothing at all — and maybe we should. Oh, excuse me — before I move on, I forgot one interesting piece of red text in the slide, which is in the collector–helper case. As it turns out, both of those actors already advertise an HPKE configuration and public key, so maybe the problem of mutual authentication there could be solved by using HPKE's mutual-authentication mode.
K
Okay — where was I? Right. So clearly we have some inconsistent guidance and some missing recommendations about authentication in this protocol. So the question here, broadly, is: what should DAP say about request or response authentication? To advance a strawman claim and stimulate some discussion, I'm going to claim that, as much as possible, we should say nothing, and stick to enumerating requirements for the security of the channels rather than solutions.
K
So, in my view, we should be aiming for composability with existing authentication schemes widely deployed with HTTP APIs, with an eye towards sort of integrating nicely with the schemes already deployed by vendors who might want to operate DAP servers — so stuff like AWS request signatures, OAuth2, or even TLS client certs, which a lot of people do use for authentication.
K
Now,
of
course,
the
exception.
There
is
the
cases
that
I
discussed
where
we
mandate
the
use.
Excuse
me
where
dap
mandates
the
use
of
hpke
the
distinction
to
keep
in
mind
there
is
that
we
mandate
that,
in
those
cases
where
we're
channeling
a
secure
channel
through
some
protocol
participant
and
so
in
those
cases,
we
can't
rely
on
an
under
excuse
me
on
a
security
property
of
an
underlying
transport.
K
So
continuing
from
the
topic
of
like
good
use
of
http,
we're
thinking
about
rewriting
the
http,
the
api
mandated
by
dap
to
be
a
little
more
resource
oriented,
if
not
full-on
restful.
So,
for
instance,
instead
of
the
upload
endpoint
being
just
upload,
with
all
the
meaningful
parameters
being
encoded
into
the
body
of
the
request.
K
You
know
into
into
the
uri
that
you're
uploading
to
we're,
also
interested
in
looking
at
the
relevant
best
current
practices
documents
and
aligning
with
their
guidance
where
it
makes
sense,
for
instance,
to
get
to
make
use
of
better
http
semantics
and
maybe
doing
something
like
extending
the
http
config
and
point
into
something
like
acme's
api
directory
and
as
we
were
just
discussing
or
interested
in
revisiting
what
the
requirements
are
for
authentication
and
what?
If
any
prescriptions
we
make
okay,
so
that's
it
for
me.
K
We're looking forward to discussing all these topics in the working group here today and, you know, in the coming weeks and months on the mailing list and so on.
H
Tim, can you go back a few slides, to this authentication point? Thank you — nope, the next one.
H
So, knowing what I know now — which is not much; well, I know how this document works, but I mean more generally — this seems like a good approach. But I think what perhaps we should do is reach out to the HTTPAPI working group, because they are specifying best practices for this. This is just, like, a straight-up HTTP web app, right — an HTTP web service — and so I think we should take their guidance on how we do this, which I think would quite likely be this, but I think we should get their guidance on that rather than reinventing the wheel. I don't know who would be responsible for that — I suppose we could do it privately or not — but that would be my recommendation for this. As I said, I think these are —
H
I
think
that
your
your
intuition
here
that,
like
people,
are
going
to
have
their
own
mechanisms
and
we
don't
want
to
interfere
with
those
like.
I
think
it's
entirely
entirely
correct.
I
think
that's
also
true
for
the
for
the
next
thing
you
said
about,
like
the
acme,
you
know
the
you
know
the
directory
and
stuff
like
that.
Those
are
also
questions
which,
like
that
one
might
help
us
with
so
starting
our
recommendation.
K
Yeah
yeah,
I
agree
eric.
Thank
you.
I
think
we
should
also
talk
to
some
of
the
prominent
operators
of
acme.
I
I
know
a
couple
of
them
to
see
what
their
experience
has
been
with
like
this.
You
know,
acme
specifically
mandates
the
use
of
jwts
and
the
directory
and,
like
I
know,
the
people
who
run
let's
encrypt,
have
opinions
about
those
things.
I'd
love
to
hear
from
other
acme
operators
how
they've
what
their
experience
of
that
has
been.
E
I kind of see the motivation, though, because we have basically this collect request from the collector that goes through the helper via the leader, and if the leader is attacking privacy, then this is a problem. You should also stipulate, though, that the collector is also part of the threat model, so privacy should hold as long as one aggregator is honest. That said, I think the authentication would be useful.
K
Yes, yes — I think that's a good point, Chris. And yeah, on the topic of direct communication with either aggregator: that's also something we've been batting around on the upload side of the protocol. Right — like, at the moment, the way uploads work is that the client sends one message... Sorry, Chris, we can hear you — we can hear your typing; you might want to do some muting. Thank you.
K
At
the
moment,
clients
will
create
one
message
that
contains
both
input
shares
transmit
that
to
the
leader
and
the
leader
is
responsible
for
relaying
the
helper's
chair
to
the
helper.
So
this
has
some
problems
like
the
the
the
main
problem
with
that
which
I
think
we
discussed
the
last
ietf
is
that
it
means
a
leader
may
incur
like
significant
costs
for
network
egress,
but
yeah,
but
it
also
forces
us
to
deal
with
like
this
tunnel
channel
through
the
leader
so
yeah.
M
I want to reinforce the concerns, Tim, that you're raising about the dependence of this on proper behavior of some of the centralized players. In particular, the leader has worried me with this design.
M
You
know
the
goal
here
is
to
make
it
so
that,
as
I
think
chris
said,
it
should
be,
the
privacy
should
be
preserved
as
long
as
there's
one
aggregator
who's
playing
fair,
and
I
worry
that
the
leader
has
a
tremendous
amount
of
control
here
and
could
potentially
de-anonymize
or
remove
the
privacy
protections
based
on
it
being
capable
of
just
controlling
which
messages
get
rounded,
where
both
from
the
collector
to
the
other
helpers
and
from
the
reporter
to
the
collectors.
K
Yeah,
I
agree.
This
is
why
I
highlighted
the
problem
of
a
collector.
The
collect
request
being
authenticated
all
the
way
through
to
the
helper.
Otherwise
we
do
have
a
threat
model
in
the
back
of
the
document
that
tries
to
enumerate
like
what
exactly
the
leader
can
do,
but
the
helper
can't,
I
think
it's
out
of
date,
though,
and
it
certainly,
I
think
it
needs
some
attention.
H
So
on
overscroll,
if
we
do
have
that
problem,
however,
if
the
protocol
is
lines
away,
the
leader
can't
leader
can
independently
break
the
privacy
protocol.
H
Then
then
there's
a
protocol
design
failure,
because
the
because
the
collector,
if
you
take
the
collector
and
leader
and
you
split
them
apart,
the
collector
talks
directly
to
the
helper
and
then
the
collector
closes
the
leader
you're
back
in
the
soup.
So
the
protocol
must
resist
that
must
resist
it
must
have,
but
really
designer
design.
Was
that
neither
case
so
I
actually
don't
believe
it's
like
so
like.
So,
while
I'm
open
to
having
the
open
to
having
the
collector
talk
directly
to
the
helpers,
I
do
not
believe
they
address
the
problem.
H
The
gtg
is
addressing
so
one
thing
that
sorry,
I'm
finding
something:
there's
a
bunch
of
backup
machine.
H
Great
okay,
so
with
that
said,
I
I
think
I'm
not
not
averse
to
having
the
collectors
directly
to
like
the
the
helpers,
the
you
know
in
our
implementation.
You
know
we
looked,
we
literally
like
to
send
to
both
helpers
independently,
and
that
made
us
pretty
sad.
H
So
I
think
if
we
do
decide
to
do
that,
we
have
to
have
a
mechanism
that
also
allows
you
to
have
a
an
ingest
server
because
like
otherwise
there
are
all
kinds
of
problems
where,
like
we
send
like
only
one
chair,
not
the
other,
and
you
could
deal
with
that.
So
I
think
there's
less
of
an
issue
for
the
helper,
though
it
is
like
a
lot
of
burden
on
the
helper
to
like
you
know,
make
it
happen.
L
So, I also want to echo some of dkg's concerns, but separately: I noticed in the draft that you specifically say that only one collector — or, sorry, only one helper — is supported. Is that still the case? Because I did not pick up on any real blockers for that, but I'm curious what's motivating it. Thanks.
K
All
right,
so
my
understanding
is
that
there
is
nothing
in
like
the
underlying
crypto
constructions,
which
is
to
say
the
vdas
precludes
additional
helpers,
although
chris
patton
is
about
to
because
I
think
that's
not
very
popular,
but
in
prior
you
can
have
arbitrarily
many
helpers,
dap
kind
of
makes
the
soft
assumption.
But
there's
exactly
one
helper,
though
we're
a
little
inconsistent.
I
think
throughout
the
draft
about
whether
there's
exactly
would
help
or
not
but
yeah.
So
in
my
view,
there's
a
trade-off
between.
K
If you add more helpers, you get, in some sense, more privacy — because more actors have to collude to defeat the privacy of the protocol — but that's a trade-off against the resulting complexity of the protocol, because you have so many more actors to coordinate.
K
So I think where we're at at the moment — and I suppose when I say "we" I just mean the editors of the document — is that there is exactly one helper. Yeah, I think that's where we're at.
E
If anyone's ever played that video game — okay. So, yeah, just to echo Tim's point: right now we don't support more than one helper. However, we intended to design the protocol in a way that we could go in that direction, if that's what people wanted to do. One leader, one helper is kind of the simplest thing; it adds protocol complexity to add additional helpers, but I don't think that complexity is impossible to address.
E
So, if folks want to add support for more aggregators, I think we can do it. I wanted to go back to dkg's point: collector-to-helper authentication doesn't have anything to do with the extra power that the leader has. The extra power that the leader has has to do with Sybil attacks, because the leader gets to pick the set of reports that are aggregated.
E
We don't have a generic defense for Sybil attacks. That would probably be pretty hard, but it's definitely something we should find solutions for. I wanted to point out that the helper can also mount Sybil attacks; anyone who can upload reports to the leader can mount a Sybil attack. They have to collude with the collector, because, as Tim pointed out, the aggregate shares are encrypted under the collector's public key. So as long as one server's honest, they don't.
E
Actually, the attacker doesn't see the result, but we want to be able to deal with the case where the collector is malicious. So, because reports are unauthenticated, anyone can do a Sybil attack, and I think the leader's relative strength is kind of minor. I would like to see defenses be more generic and not just apply to that particular situation.
A
J
J
Okay, I would be concerned about not specifying, or only specifying requirements and not specifying an effective way to do request authentication. Apologies, I didn't introduce myself.
B
J
I'm Nick Doty, Center for Democracy and Technology. My concern would be that there would end up being an extra, effectively silent spec needed for actually enabling interoperability between clients and servers, and we would like to make that easier.
N
J
It would be good if we had a sort of recommended way to do that, even if, yes, there are going to be cases where someone would deploy this with their own custom authentication scheme. Thanks.
D
On the topic of one helper versus multiple helpers, and I'm sorry to keep bouncing back and forth between different things: I wouldn't be surprised if we find out that Poplar in practice is just too expensive to run, given how many rounds it requires for every single bit of input that you actually want to aggregate.
D
So I don't know what that says about the fate of DAP as a, you know, generic thing for all VDAFs versus DAP as a Prio-specific, or whatever-specific, protocol. But I could see a future wherein DAP gets less general and more specific to Prio, and maybe a heavy-hitters-like solution gets its own thing; maybe that's STAR, maybe that's something else. But if that were the case, then accommodating multiple helpers would be rather straightforward in a DAP for Prio.
D
But if it's super general, and we have Poplar with its constraint that it only works with one particular helper, it's not clear what the result in terms of complexity would be on the protocol. I just wanted to note that we're not set in stone here; we might see that things get less general, or not, as we go forward.
L
I think, so, on the topic of the leader's power and control over the system: can you just clarify, just so I know I'm understanding it? The leader has the ability to reject shares, so that they are never processed by the helper, and therefore a colluding leader and collector can basically single out individual uploads. Is that correct?
K
Yeah, but then you're also going to select which shares get paired, like we saw earlier, right, since it's the leader that assigns reports to job IDs. However, the helper can also do this on a per-share basis, because the response the helper delivers to a leader's aggregation request is going to include essentially a list of per-input preparation messages, so the helper could simply choose to fail to prepare any individual input, right? So I'm not saying that that's good.
K
The other half-baked thought I have in response to that question is that I think ingestion servers, anonymizing ingestion servers, are probably a helpful mitigation here, right, in that one of the issues is that in a deployment where clients are uploading directly to a leader, the leader gets to see all sorts of interesting metadata about a report, you know, client IP, stuff like that, on the basis of which it could choose to drop reports.
K
So we anticipate that a lot of deployments are going to use some kind of intervening ingestion server. Hopefully you could just stick an Oblivious HTTP (OHAI) relay in front of the leader, such that, I don't want to say it would be impossible, but certainly it ought to be harder for the leader to be able to selectively drop reports.
E
All right, so you planned for me to go first. I was going to start with kind of an overview of... hold on, let me turn off this. There we go. We were going to start with an overview of the DAP protocol.
E
I'm curious if people would find that useful at this point. Chairs, can you basically just tell me if I should do a four-to-five-minute overview of DAP? Thumbs?
A
E
All right, cool. So what this talk is going to be about is how people have used DAP recently since adoption: a couple of use cases that have come up that we don't support very well.
E
So what I want to ask the room about is what protocol changes should be made, if any, to accommodate these use cases. And as I'm talking, I think it would be helpful if folks would sort of think about how they intend to use DAP and whether the protocol really suits their needs.
E
So, as Tim mentioned, DAP centers around a particular class of multi-party computation schemes that we call VDAFs. These all have basically the same shape: we have a large number of clients, each with a measurement; clients split their measurements into what we call input shares and upload these to a small number of aggregation servers.
E
The aggregation servers interact with one another in order to verify and aggregate the reports. At the end of this process, each computes a share of the aggregate result; then, later on, a collector comes along, pulls aggregate shares from the aggregators, and computes the final result. So yeah, VDAFs are being worked on in the CFRG.
E
There is a link to the document there if you'd like to learn more, but this broadly covers things like Prio, Poplar and other schemes that we've found in the literature, and we hope this becomes a target for cryptographers to go design solutions for the problems that we have in this working group.
E
So the first piece is the upload protocol: clients take their measurement, generate input shares, and then upload these in a report to one of the aggregators, the leader.
E
The input shares are encrypted under the public key of each of the aggregators in order to protect them. At the same time, the aggregators are aggregating reports. This process begins with the leader, who gets all the reports: it picks some set, pulls out the helper's encrypted input shares, and sends these to the helper in an HTTP request. Then, after some number of rounds, they have computed aggregate shares for the set of reports they were able to verify.
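To make the share-splitting idea concrete, here is a hypothetical sketch of Prio-style additive secret sharing over a prime field. The names `FIELD`, `split_measurement`, and `aggregate` are invented for illustration; real VDAFs add verification on top of this and use a different encoding and wire format.

```python
import secrets

# Illustrative only: a client splits its measurement into two input shares
# over a prime field. Each aggregator sums the shares it receives, and the
# collector adds the two aggregate shares to recover the sum (and nothing
# about any individual measurement).
FIELD = 2**61 - 1  # a Mersenne prime, chosen here only for illustration

def split_measurement(measurement: int) -> tuple[int, int]:
    """Split a measurement into a leader share and a helper share."""
    helper_share = secrets.randbelow(FIELD)
    leader_share = (measurement - helper_share) % FIELD
    return leader_share, helper_share

def aggregate(shares: list[int]) -> int:
    """Each aggregator sums the shares it holds, mod the field."""
    return sum(shares) % FIELD

# Example: three clients report measurements 3, 5 and 7.
reports = [split_measurement(m) for m in (3, 5, 7)]
leader_agg = aggregate([r[0] for r in reports])
helper_agg = aggregate([r[1] for r in reports])
result = (leader_agg + helper_agg) % FIELD  # collector combines: 15
```

Each share on its own is uniformly random, which is why a single aggregator learns nothing about an individual measurement.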
E
And finally, eventually the collector comes along and grabs data.
E
It does so by sending this thing called a collect request. In general, the leader is not prepared to respond to a collect request right away; it has to interact with the helper first in order to compute the correct aggregate shares to return to the collector. So what it does immediately is send the collector a URI that the collector can poll later on in order to get the result. So that's kind of the shape of the thing that we're working on. The problem I want to talk about is this: how do we choose?
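The asynchronous collect flow just described can be sketched as a toy state machine. The class, method, and URI names here are invented for illustration and are not the DAP HTTP API: the point is only that the leader answers a collect request with a URI to poll, and the result appears there once the aggregation with the helper finishes.

```python
import itertools

class Leader:
    """Toy model of the leader's side of the collect sub-protocol."""

    def __init__(self):
        self._jobs = {}
        self._ids = itertools.count()

    def collect(self, query) -> str:
        """Accept a collect request; the result is not ready yet, so
        return a URI the collector can poll later."""
        job_id = next(self._ids)
        self._jobs[job_id] = None
        return f"/collect_jobs/{job_id}"

    def finish(self, uri: str, result: int):
        """Called once the leader and helper have produced the aggregate."""
        self._jobs[int(uri.rsplit("/", 1)[1])] = result

    def poll(self, uri: str):
        """Collector polls the URI; None means 'still aggregating'."""
        return self._jobs[int(uri.rsplit("/", 1)[1])]

leader = Leader()
uri = leader.collect(query={"interval": (0, 100)})
first_poll = leader.poll(uri)   # None: leader is still working with the helper
leader.finish(uri, result=42)
ready = leader.poll(uri)        # the aggregate result, once available
```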
E
How do we choose a set of reports to aggregate? When you think about it, the most basic requirement for this is: well, the batch of reports needs to be sufficiently large that the set of measurements remains private. What this means is kind of application-dependent, but at least intuitively, the larger the batch, the more privacy you get. But think about this from a sort of usability perspective: what are the expectations of the collector who's grabbing data?
E
E
E
So a collect request specifies a batch interval, which determines a sequence of time windows, and what the collector expects is that the reports aggregated all fall into one of these time windows. Now, we have certain restrictions on batch intervals. On the one hand, this is about operational stuff: we want it to be possible for both aggregators to efficiently pre-compute aggregate shares in advance of getting a collect request. And there are also privacy considerations here.
E
Basically, what we say today is that batch intervals must not overlap, in order to avoid leaking small batches. Chris Wood is going to get into this problem a little bit more in the next talk.
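The non-overlap rule just stated can be sketched in a few lines. This is an illustrative check with invented names, not the draft's pseudocode; it assumes batch intervals are half-open `[start, end)` pairs.

```python
# Illustrative only: enforce "batch intervals must not overlap" by
# rejecting any collect request whose interval intersects one that has
# already been collected.
def overlaps(a: tuple[int, int], b: tuple[int, int]) -> bool:
    """Two half-open intervals [start, end) overlap iff each starts
    before the other ends."""
    return a[0] < b[1] and b[0] < a[1]

def batch_interval_allowed(new: tuple[int, int],
                           previous: list[tuple[int, int]]) -> bool:
    """An aggregator would reject a batch interval that intersects any
    previously collected interval."""
    return not any(overlaps(new, old) for old in previous)

collected = [(0, 100), (100, 200)]
ok = batch_interval_allowed((200, 300), collected)   # disjoint: allowed
bad = batch_interval_allowed((150, 250), collected)  # intersects: rejected
```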
E
But the problem we're working on right here is that there are a couple of use cases that this scheme doesn't support very well. For starters, you might want to select a batch based on some client property. This was brought up in issue 183. Basically, what the collector might want is for the reports to be grouped by, say, user agent or location, so the collector would specify some predicate that defines the set of reports that go in the batch.
E
Basically, the properties of reports that go in the batch. You can imagine this could be quite simple, like "give me the aggregate for all Chrome users" or "all Safari users", or a little bit more complicated, like "give me the aggregate for all Chrome users in the US" or "all Firefox users that aren't in Canada", or something like that. The problem, though, is that even a very simple version of this kind of grouping strategy is not well supported in the protocol.
E
You can kind of hack around it, but we don't expect that any solution we have today will scale very well. Now, issue 273 brought up what's arguably an even simpler use case. Maybe you actually don't care that reports have anything to do with each other; maybe what you need, basically, is that the batches are disjoint and that they all have the same size, or at least approximately the same size.
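One way to picture the fixed-size use case from issue 273 is as follows. This is a hypothetical sketch, not anything from the draft: uploaded report IDs are partitioned into disjoint, equal-size batches, and an undersized trailing batch stays pending rather than being aggregated.

```python
# Illustrative only: partition report IDs into disjoint fixed-size batches.
def partition_into_batches(report_ids: list[str], batch_size: int):
    batches = [report_ids[i:i + batch_size]
               for i in range(0, len(report_ids), batch_size)]
    # Hold back an undersized trailing batch instead of aggregating it.
    if batches and len(batches[-1]) < batch_size:
        pending = batches.pop()
    else:
        pending = []
    return batches, pending

ids = [f"report-{i}" for i in range(10)]
batches, pending = partition_into_batches(ids, 4)
# Two full batches of four reports; two reports remain pending.
```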
E
I think it would be good to finish, because I want to sort of talk about the generalization of this, but we can talk about these now; I mean, I guess it's up to you.
E
Okay, so fixed-size batches are useful for, you know, statistical analysis where you need to control the sample size, and for applications that want to compose DAP with differential privacy this is also going to be important for tuning noise. And then there's also the fact that waiting for the current time window to expire before you compute an aggregate can add latency to the system that might not actually be necessary.
E
E
Okay, so, all right. Where we are, we think, is that we need more flexibility. The question is how much, and we need to stipulate that collectors in DAP are going to be more constrained than in a traditional database or telemetry system. This has to do with some privacy issues which Chris will talk about in the next presentation.
E
Lots of open questions there, but even from a functional perspective we need to figure out what we need. One question is: what are all the query types? I've talked about three here: basically the time-series thing that we have today, grouping by client properties, or partitioning things into fixed-size chunks. What else do we need?
E
Another question is: do we need to be able to compose different query types? This can get quite complicated; I imagine not all query types would necessarily compose. And finally, would every DAP deployment need to implement all query types, or is this something that we can allow folks to implement incrementally, or not at all? So yeah, I guess I'll leave this slide up for the discussion; I can also go back and forth as needed.
E
My proposal for draft 2 would be to take an incremental step that is minimal but sufficient for our use cases. I think this would involve enumerating all the possible query types that we want to support, in a way that's extensible, and then I would add some additional requirements to this. Basically, the idea would be that the collector would include a query in its collect request, and the leader would use that query to choose a batch of reports that satisfies it.
E
There are some additional requirements here to think about, yeah. So my question, I guess, for the room is: does a protocol change that satisfies these requirements fit your use case? Do you think we need something else? And I see Ekr is in the queue, in person. Yep.
H
I want to just talk this over before I start saying what I think we've got to do. So I guess I want to make two observations. One: I know Chris is going to talk about the privacy implications, but I think we're already kind of out of the zone where we can plausibly make privacy assertions. The property that's nice about the current design is that you can look at the possible queries.
H
You can look at the possible queries and the possible outputs and draw conclusions about the privacy properties of the system, right? You get some k-anonymity conclusion, you draw some conclusion about, say, intersection attacks; you can just conclude it's safe or unsafe; you can just analyze it, right? And so I suspect that the minute we get to the point...
B
H
But, I mean, sorry, you know, all the screwing around we've done is trying to make that single-processing requirement easier to implement on the helper, right? And so as soon as you get out of that mode and you say you can make multiple queries on the same submission, which is not quite what this allows but which it implicitly might allow, and which would certainly be much harder to implement, you know, if it doesn't...
H
If it doesn't allow that, then the situation's much more complicated implementation-wise. Another way to think about it: I'm not saying you can't generalize it, but I'm saying that trying to analyze what the protocol does and doesn't permit will be almost impossible.
H
You'd have to look at the policy construction instead. So that's the first thing I want to say. Now, to look at a different example:
H
The way IPA works, the Meta/Mozilla proposal for Interoperable Private Attribution: the selection of the submissions is entirely within the control of, effectively, what in this case is the collector. Namely, the collector collects all the submissions and then shoves them into the helper for analysis, right? So it can do anything it wants, modulo whatever mechanisms are provided for privacy. And so what I wonder is whether or not that kind of design, not necessarily that specific design, is what we want here; in particular, instead of trying to create some language here that is sort of restricted in what you can say,
H
I wonder if we want something much fancier instead, right? What I mean by fancier is effectively to say: well, the collector selects any subset it wants, by any mechanism it wants, and then the usual analysis happens, and we have some other mechanism for ensuring privacy under those conditions.
H
But anything could be... I'm not quite sure. But to say: once we have any kind of query method that allows overlapping queries, we're already in the soup, and we're going to have a flexibility-versus-analysis question. So here's, like, my dumb version of this, which is effectively that the collector gets to upload a piece of JavaScript that the leader and helper execute to determine whether to include a given submission.
H
Another version of that would be for the leader and helper to provide the collector with the entire inventory of every possible submission, and the collector simply says: aggregate these ones, these ones, these ones, right? The reason I'm saying this is not to make the problem harder but to make it easier, and to sort of admit the fact that we already are off the fairway and try to solve the problem on the far end of the fairway.
E
E
One thing I'd point out, though: this fixed-size chunk use case here is simple in that there's no reason to ever have overlapping batches; what we want is that every chunk is disjoint.
E
So we could support something like this that is simple and already kind of constrained. But your point is well taken: whatever we do here, I think at a minimum we can try to prevent overlapping batches; I think that could always be defined. Although with the grouping thing, what that is kind of about is "I want to explore the data". Yeah, yeah, well...
E
Well, they could be overlapping, because the intersection between Chrome users and everybody else, which would be an overlapping batch, might tell you something about everybody.
H
Okay, that's what I'm saying, that's what I'm saying, yeah. So I think, again, I don't know; I guess what I'm saying is:
H
When we initially designed this, the kind of idea was that we designed a grammar that would basically not let you say things that were illegal, right? And the grammar, it wasn't even really a query grammar, just "this is how it works", right? And that grammar inherently enforced one query per submission. So I think we need to go back and say: what's the underlying rule that guarantees privacy? Then let's implement that rule on the helpers, and not worry about trying to minimize state on the helpers and leaders: just enforce this rule, and then, within the limits of that rule, you can do any queries.
H
H
Then you'd have, like, a counter on every submission, and, you know, I don't know about one query, right; maybe there are some other rules. But then I think the question is: what's the cheapest way to allow arbitrary queries, rather than trying to design a whole new language for that? So that's where I might punt on this.
H
M
All right, Daniel Gillmor. So I wanted to say something similar to what Ekr was saying, but maybe looking at it from a different perspective.
M
The reason that people are comfortable participating in this scheme is because they want to give feedback that will help the group that's developing their software, or they want to report some telemetry, without risking their own privacy, right? And some of these types of disaggregation mechanisms require me, as a user, to report some specific things where I don't actually know how they're going to be used to differentiate me from the rest of the crowd. Yeah.
E
M
If the goal is to convince people that they can do this safely, and not everyone's going to do the full analysis here, but they might read analysis from other people, then the more complicated you make this, the harder it is for somebody to analyze it and say "you cannot be disaggregated", right? (Yeah, totally.) And that seems to be defeating the purpose of all of this, right? If people are willing to just throw their hands in the air and say, well, we trust that the telemetry collector is not going to disaggregate me,
M
then we don't need any of this protocol. So I would be very wary about the extent to which we are asking people to tag their submissions, or opt into something complicated with their submissions. So, you know, on Ekr's point: if we can say that the helper obeys the right rules,
M
has this limit, which is like "each query can only be put into one aggregate response", that's much easier to analyze, and much easier to convince someone that they should participate, than this kind of "well, you might be disaggregatable if you happen to use a browser that, you know, more than 75 percent of other people don't use", or something like that. How do I know that that's going to happen? Whereas...
E
Yeah, I totally agree with that. I think where we might be heading is: I would like to be able to support at least use case number three. So, in the original design, one thing we were contending with is: what if you don't have enough data? Say your batch interval is t0 to t1, and you don't have enough data to actually get an aggregate, to actually get over the minimum batch size.
E
Well, if I have enough data between t0 and t1, then that's good enough, as long as I'm prevented from ever doing an intersection over that larger interval. But I think, yeah, this is a problem that we can deal with. Simplicity is key here.
M
Sorry, let me just add one more thing. The fact that this is looking like it might be proposed as an in-protocol negotiation also makes me more worried, right? Whereas if you could say: if you're doing PPM, you make a decision whether you're doing this kind of grouping or that kind of grouping, and the whole deployment makes that decision, so it's not in the wire format, right? (Yeah.) You know, as part of the configuration of your system, you decide that you're going to be doing this type of grouping.
M
That makes it easier for someone who's considering "do I want to deploy this? Do I want to participate as a helper? Do I want to report as a client?" to know what they're getting themselves into, instead of "well, it could change while we're going". And it looks like here, in this proposal for DAP draft 2, you've got it in the wire format, which suggests dynamic transformation of any particular collection over time, and that seems much harder to believe in.
E
Yeah, yeah, totally. I think that initially we would just say that the query type is configured out of band, as part of the task configuration. That's kind of punting, but I think it at least kind of addresses your concern.
O
Jim. I also agree with the simplicity point. I think the most important thing is that we deliver a batch that satisfies some privacy guarantees; how the batch is analyzed is totally up to the user, and it doesn't have to be part of the initial requirement, because, like others have said, there are infinite possibilities for how a user wants to define these selection groups or query groups. And another point: these predicates you define for grouping the data
O
can also be achieved by encoding your input shares, or your measurements, in a way such that every client participating is subject to the same predicate. So everyone knows what kind of task they are participating in, but later there's no way you can slice the data to pick out some user group. You could, for example, encode whether a user is using Chrome or Safari without actually picking out the group of users that are actually using Safari.
E
D
A couple of things. So I think, Chris, you already mentioned this, or someone mentioned it, but the current restrictions that we have in the draft right now for validating or verifying batches basically impose the limit that Ekr suggested, where you have basically one query per given report; there's no intersection allowed, and that allows us to very reasonably conclude certain privacy properties about the resulting scheme, if you're using it in a specific way. It is perhaps overly rigid, particularly because it doesn't enable the chunk-based variant here.
D
So I would be in support of, you know, perhaps looser enforcement that did enable that use case. But I guess what I'm concerned about is whether or not the enforcements that we put in place would yield a protocol, or a system, that's useful in practice. There's certainly a large gap between DAP, with all these sorts of query constraints, and the other general-purpose data-collection systems that are used today.
D
DAP is not a drop-in replacement for these things, and if it's not a drop-in replacement for these things, what is the incentive for people to use this protocol? The drill-down use case that was mentioned, which I think Ekr originally brought up a while back, is particularly interesting to consider, and I don't know if that's something that people will want to have in order to use DAP to enable privacy-preserving collection.
D
So I'm kind of conflicted here. I very much support guardrails, where appropriate and reasonable, such that we can reason about the resulting privacy, but I'm worried about the inflexibility that that yields for the resulting system, and I don't know how to square that right now.
E
Yeah, I mean, maybe there's a way. If anyone is insisting on supporting this, and it doesn't sound like anybody is, I'll just say that I want to be able to use this use case. I think it's going to be really important, in particular, for differential privacy, which is something we haven't totally worked out, but I would like to at least take a step in draft 2 that deals with this use case.
N
No, thanks. So I always hesitate to speak about a protocol where I haven't read the draft, but...
N
But I'm going to speak at a really high level. I mean, one, to rephrase some of the things that other people have said that are extremely important: it's not even so much the complexity, but the instant you add two parties into determining whether you're getting the proper privacy aspects or not, it greatly changes things, right? So I think about this protocol being deployed in a wide range of circumstances, everything from, you know, in my house, collecting data about my wife, where, you know, we have this mutual agreement...
N
N
So if you do end up publishing this document, I greatly suggest putting in some guidance on when it shouldn't be used: like, you know, what the error message should be if I refuse to actually resolve this conflict in this negotiation, or if I just don't do negotiation because it's hard-coded and, you know, there's legal auditing and representation behind it.
H
But, I mean, the drill-down case is important. Being able to drill down based on client demographics is something, I can tell you, we use all the time. And yes, you get to the point where you're like, hey, some statistic is bizarrely high overall, and now you want to know where it is. That's absolutely important, and it's not just a matter of time windows.
H
But that said, I think it'd be okay, certainly okay in this version of the draft and the next version of the draft, to only cover a smaller set of use cases, and it might even be okay if we had a sensible system that allowed for drill-down later. But at the end of the day, this is going to be necessary for a lot of collection cases.
E
Thanks, Ekr. Yeah, I mean, given the known unknowns, I think we should try to take as small a step as possible. I think differential privacy is something that we need to figure out the story for pretty soon; I would say it's higher priority than other things. But what would you say is higher priority: figuring out differential privacy, or drill-down? At this point, do you have a preference?
H
Sure, yeah, let me give you a concrete example. We take measurements regularly of TLS deployment, of the fraction of connections that are TLS, and so we have a graph you can see.
H
It shows, you know, how many connections are HTTPS versus HTTP, and that was up and to the right until about nine months ago, and then for some reason it started going down for the world as a whole, and we were like, what the hell? So I asked somebody to go and look, and they were like, let's bucket by country, and they discovered that there were two countries where...
H
They had bizarrely high numbers of reports, and that's what was going down, and if you remove those it's back up and to the right. So that's a great case of drill-down: we'd like demographic drill-down to figure out what's going on in a statistic. And you can't do it temporally; you've got to do it by the demographics. And so, you know, now again we have to...
H
We have to do it while preserving privacy, which is the complicated part, but that requires repeated queries, because we're repeatedly sampling the same data set to solve that problem. And I guess if we can't solve that problem, we'll have a less useful protocol than we otherwise would. But it's a really important use case that we do all the time.
E
I think the connection to differential privacy, whether differential privacy is going to be sufficient, or even necessary, I'm not sure about, but we can get into that.
D
Well, I just wanted to summarize my takeaway, for Chris, and I'd love to hear yours as well. I think addressing issue 273, with constraints equivalent to those currently in the draft, one report per query, is a good next step for the next version, and then we can sort out separately how we want to deal with drill-down and the related differential privacy issue.
E
Perfect. I'm going to file an issue, and then I'll start working on a PR to discuss. Thank you, everybody.
A
D
Okay. So we talked a lot about privacy in the previous presentation, specifically for the collect sub-protocol and what that means for DAP. The attempt here is to sort of take a step back and try to reason about what the threat model is for DAP, and make sure we have sort of agreement there: what are the relevant attacks that we want to consider?
D
What is possible to consider in the protocol itself as a first-class thing, and what are the attacks that we need to sort of punt to deployment-specific mitigations? Just a reminder, you know, we just saw this, but I'm going to repeat it anyway: the collect protocol basically allows the collector to issue a batch predicate for a particular query and get an aggregate result as the output.
D
The details of what happens internally are not really that important, beyond the stuff that was talked about in chat: the leader can choose which reports correspond to a particular batch that satisfies the batch predicate, and that relates to, you know, Sybil attacks and stuffing attacks and whatnot. But at the end of the day, the collector issues a query with some predicate and gets back an aggregate result, and the question is: what is the right way of validating that batch predicate? Right now in the draft,
D
we're extremely constrained in terms of what is permitted as a valid batch predicate. There are a number of conditions; I should have linked to the specific section, but the first and most obvious one is that the number of reports must be at least the min batch size, so you get the anonymity guarantees that you want from the particular instantiation of DAP you have. There's also that a report must not have been included more than max batch lifetime times.
D
We need to change that particular variable name, that constant name or whatever, but a report hasn't been included in more batches than is allowed. And importantly, to deal with intersection attacks, which Chris was alluding to previously, we require that no batches can intersect.
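The three conditions just described (minimum batch size, per-report lifetime, no intersecting batches) could be sketched roughly like this. This is a toy illustration, not the draft's actual validation logic; the function and parameter names here are mine.

```python
def validate_batch(batch, prior_batches, min_batch_size, max_queries_per_report, query_counts):
    """Hypothetical batch-predicate check, loosely following the draft's conditions.

    batch / prior_batches are sets of report IDs; query_counts maps a report ID
    to how many queries it has already been included in.
    """
    # Anonymity: the batch must reach the minimum batch size.
    if len(batch) < min_batch_size:
        return False
    # Lifetime: no report may already be used in too many queries.
    for report_id in batch:
        if query_counts.get(report_id, 0) >= max_queries_per_report:
            return False
    # Intersection: the batch may not overlap any previously queried batch.
    for prior in prior_batches:
        if batch & prior:
            return False
    return True
```

The last check is the restrictive one being discussed here: requiring zero overlap with every prior batch is what rules out the drill-down and chunk-based use cases.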
D
D
But, as noted, this is not really flexible. It doesn't allow the other use cases that Chris was going through, in particular group-based — or maybe we need a better name for that — more chunk-based collection, and the motivation for this restriction was primarily an abundance of caution.
D
I think, since this landed, we've had lots of discussions about what are reasonable ways to enforce the underlying fundamental requirement. Ekr just proposed a new one that we might move to in the next version, but the gist is that we had this huge gap, we plugged the gap, but we plugged it with perhaps too big of a band-aid, or too big of a patch.
D
So it's probably worth saying: all right, let's take a step back. If we wanted to identify the minimal enforcement that needed to take place, and the minimal patch that we needed to apply in order to allow DAP to be queried correctly, it's worth looking at what DAP is doing under the hood. Chris already mentioned this, but DAP is a multi-party computation protocol.
D
D
Whatever — all these inputs get fed in, the collector issues a query and gets an aggregate as output, and the privacy goal that we want is that the aggregate output does not leak anything beyond the aggregate itself. In particular, the person who views, or is able to interactively and adaptively query, the system and get aggregates doesn't learn anything about honest client inputs beyond the aggregate that is computed from those honest client inputs.
D
So, as Chris said, that means you want the minimum batch size to be high — the higher the better, for more privacy. It's an application-specific or deployment-specific parameter, but it isn't, intuitively or fundamentally, the privacy definition for DAP.
D
The threat model that we consider — as a reminder, for privacy, not for robustness — is that there's some fraction of clients that are assumed to be malicious and others that are assumed to be honest. If every client were malicious, the system wouldn't really make sense. So you assume you have some number of honest clients that are contributing to the protocol, contributing to individual aggregates, and the number of malicious entities is bounded.
D
All but one of the aggregators are dishonest. That is, we assume there's at least one honest aggregator that is implementing and abiding by the protocol as specified, and everything else is malicious. We also assume that the collector is malicious, from the perspective of actually interacting with the system as it adaptively queries it. This is kind of interesting because, in practice —
D
I can see scenarios where the collector is the one actually configuring the system, deciding whether or not to use DAP in the first place. So a malicious collector could just as easily not use it, or configure the system with parameters that are pretty awful. But we're assuming that that was done in an honest way.
D
Clients are configured with good parameters and are actually using DAP, and we want to protect against a collector that wants to subvert this honest bootstrap, or honest configuration, for the purposes of learning individual information about client inputs. And, as noted at the bottom, the robustness threat model is different: it does assume that all aggregators are honest.
D
Okay. So there are a number of attacks that we've already identified, and that we have either text in the document to deal with or open issues to address. The first of which is a stuffing attack — your classic Sybil attack — wherein the attacker, which could be a combination of leader, helper, or compromised clients, is injecting things into the aggregate to basically skew the result and allow the attacker to learn information about individual client inputs.
D
So in this example, all but one of the clients contributing to a particular aggregate are malicious, and the honest client is submitting its honest value. It would be very easy for someone looking at the aggregate to determine what this honest input was, which is obviously something we want to protect against in actual deployments of the system.
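The arithmetic behind this attack is simple enough to show as a toy example: if the attacker controls every report in a batch except one and knows the values it injected, subtracting them from the aggregate sum recovers the honest input exactly.

```python
# Toy stuffing/Sybil illustration: the attacker fills the batch with
# reports whose values it knows, so the aggregate leaks the one honest input.
malicious_inputs = [0, 0, 0, 0, 0, 0, 0]  # attacker-chosen report values
honest_input = 1
aggregate = sum(malicious_inputs) + honest_input  # what the collector sees
recovered = aggregate - sum(malicious_inputs)     # attacker subtracts its own values
assert recovered == honest_input
```

This is why the minimum batch size alone is not a defense: it bounds how many reports are aggregated, not how many of them are honest.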
D
There's also what I'm calling an oversampling attack. In the differential privacy literature it's something like continual release, or over-exposure — maybe that's not the correct technical term — but basically the idea is that you have clients contributing honest inputs over and over and over again, up to a point where they've revealed or contributed that input too many times. It's been folded into aggregates, and the intersection, or combination, of those aggregates therefore reveals information about the honest client's input.
D
So in the sketch here I have multiple runs of the aggregation function F. The honest client's x1 is contributed at the same value each time, but every other client is contributing a different value, maybe either honestly or maybe maliciously.
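A toy version of why resubmitting the same value across many runs is dangerous: if the other clients' contributions vary independently from run to run (modeled here as zero-mean noise, which is an assumption of this sketch, not a claim about real deployments), averaging many aggregates washes the other contributions out and isolates the repeated input.

```python
import random

random.seed(0)
x1 = 7.0       # honest client's fixed input, resubmitted in every run
runs = 10000
aggregates = []
for _ in range(runs):
    # The other 99 clients contribute fresh, independent values each run.
    others = [random.gauss(0, 1) for _ in range(99)]
    aggregates.append(x1 + sum(others))

# The other clients' zero-mean contributions average out across runs,
# leaving an estimate of the one value that never changed.
estimate = sum(aggregates) / runs
assert abs(estimate - x1) < 0.5
```

Each individual aggregate here is large enough to hide x1; it is only the repetition across runs that leaks it, which is why per-batch checks alone do not address this attack.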
D
D
The other attack, referred to as an intersection attack, was also discussed in the previous presentation, and it's why we have the very restricted batch predicate enforcement right now. It's where the collector adaptively queries the system with different query parameters, trying to yield different aggregates that may have overlapping underlying batches, and then uses the aggregate results to compute something.
D
Okay, so I don't know if this list is exhaustive. In fact, that's one of the questions for the group: have we sufficiently identified all the privacy-relevant problems for DAP? But the question that we're asking ourselves now is: what are reasonable mitigations for these particular issues? Take a stuffing attack as an example.
D
It might be reasonable to say that this is a very deployment-specific problem. You could deal with it if you had, for example, client authentication that ensured that every single client input was honest and not maliciously generated, or you could address it with some application of differential privacy — local or central or otherwise.
D
I think it kind of depends on the specific deployment, and I don't know to what extent DAP wants to mandate or require anything about dealing with this particular problem. The same goes for oversampling as well, because that's very closely adjacent to the stuffing and Sybil attacks. The intersection attack, however, we can deal with in the protocol.
D
In fact, we do deal with it in the protocol right now, but as issue 273 discusses, it can be improved, and we aim to do that.
D
Okay, so we kind of already talked about this, fortunately, in the previous presentation, and we kind of have a proposed solution for moving forward. But this slide was meant to ask the question: what is the fundamental requirement that we have for mitigating intersection attacks? The informal goal is to basically not allow —
D
— the privacy definition that we described earlier to be violated. So all aggregates are based on the minimum batch size, and the enforcement is basically to ensure that every single report contributes to exactly one query, in this particular case, as we were describing. And the question that was raised previously was: what is a reasonable way of expressing queries such that this privacy requirement is met?
D
But I think the conclusion that we reached was: maybe don't constrain ourselves with how we express queries; just enforce the fundamental invariant for query validation, or batch predicate validation, and allow whatever queries make sense for the time being. Separately, we can figure out what the drill-down solution would be. Yeah, dkg — I assume now's a good time to take questions.
M
So I'm thinking about these underlying constraints, right, that ekr proposed in the previous talk — yeah.
M
Sorry, there's also something a few slides back that I just don't want to go into the record if it was miswritten — and you might want to update the slides, or maybe it's right and I'm confused. One more back, one more back — that one: "all but one of the aggregators is honest."
M
Okay, yeah, please update the slides in whatever archive we have. I just want to make sure: that should be "at least one of the aggregators is honest," right? Yes? Okay, just wanted to put that on the record. Okay, we can —
A
M
In the situation where everyone submits one report on their own, the types of constraints that ekr was describing sound pretty plausible to me. I mean, I don't have a clear analysis of it exactly — I don't know for sure. But if we say you can only use each report in one query, and each query needs a reasonable-size batch, then I'm fairly confident about those protections in the event that each client might report more than once over time.
M
I am much less confident in that defense in the event that the clients' reports might themselves be aggregatable — then I don't know how to evaluate this. So, yeah.
D
Yeah, I was chatting with Martin Thomson about this earlier in the week in the context of IPA. This is fundamentally related to the concept of over-release, or continual release. In that setting, it's not clear what the best way to deal with this is in practice. Maybe you do it on the client side: you bound the number of times a given client can contribute its input across different aggregations, or different tasks. So I don't —
D
I don't know the answer there, or the best answer there. But I agree that the current query enforcement mechanism, and even the one that was proposed, is only helpful in the context of a single aggregate — or a single task, rather. It does not consider leakage that might occur across tasks that have the same client as input.
H
Go ahead — yeah, so just to that point first: I don't believe we can actually do anything about that. The clients are continuously reporting; I suppose you could have the clients report only a small number of times, but I don't see any way to do anything about that that doesn't also require some sort of reasonably strong client identification. If you can't identify the clients, obviously there's no way to enforce, say, that you can't bucket up all the clients.
H
You know, other clients over a month, right? So I'm not sure — I don't have a fix, but that's my initial observation. The second is: the version of the constraint that dkg just suggested — one query per submission, and every batch at least the minimum batch size — is like playing on easy mode, right? And I guess I don't —
H
I don't actually know whether there's an algorithm that enforces the invariant you describe up there informally. If you just give me the matrix of which reports are in which batches, I actually don't know if there's an efficient algorithm to determine whether it conforms to this requirement. Maybe you do know, but I don't.
D
Do you know? No — I don't think we have one either, and this is something we were talking about with some folks in the Slack channel, especially as you start adding multiple dimensions to how you express certain queries. How do you enforce this non-overlapping?
H
H
You know, in enumerated form rather than in generative form — can you simply determine the validity of this constraint? And if we had an algorithm — I guess where I'm going with this, as I sort of indicated earlier, is that I think the easiest way to do whatever we do is going to be to require the servers to maintain an inventory of exactly which queries each client was involved in, and then to prescribe an algorithm that determines whether the n-plus-first query is a valid query based on the previous n queries. And it doesn't require —
H
It doesn't say anything about how queries are expressed, because the thing the server is required to track is membership in each query, right? And so I think that if we had that, then we'd be able, for this case, to ask the simple math question of: is there an algorithm for looking at a matrix and determining conformance? I suspect there is, but I just don't know, because I'm not a math guy.
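For the simple invariant under discussion (each report in at most one query, every query at least the minimum batch size), the matrix conformance check is in fact a linear scan. This is a sketch of that easy case only — it says nothing about richer invariants like the multi-dimensional queries mentioned above.

```python
def conforms(membership, min_batch_size):
    """membership[i][j] == 1 if report i was included in query j.

    Checks the simple invariant: each report appears in at most one
    query, and every query contains at least min_batch_size reports.
    """
    # Each report row may contain at most a single 1.
    for row in membership:
        if sum(row) > 1:
            return False
    # Each query column must reach the minimum batch size.
    n_queries = len(membership[0]) if membership else 0
    for j in range(n_queries):
        if sum(row[j] for row in membership) < min_batch_size:
            return False
    return True
```

Because each row and each column is checked independently, the whole scan is O(reports × queries); the open question in the discussion is whether an equally cheap check exists for more flexible query languages.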
D
Yeah, I mean, the thinking that I have right now is that each aggregator marks reports that contributed to batches as dirty or not, based on whether or not they were included in particular queries, and then we would express the enforcement criteria based on that dirty bit for every single report. That seems like the simplest thing right now. But I agree that, as written, I don't see an obvious algorithm. The simpler one that I think you're describing just makes sense.
N
E
We can find a solution, I think. I would like to be able to.
D
G
E
Yeah, I also don't have a solution for this, but we've been thinking about this on my team for a little while, and I think I'm a little more confident that we can find a solution. But yeah — I don't think we should rule anything in or out at this point; let's just work on the problem.
E
One more — I actually wanted to add something for folks interested in working on differential privacy. We are definitely going to have to make accommodations in the DAP spec itself, but over in the CFRG we think that we're going to need to say something in the VDAF document itself about how to compose differential privacy. So if anyone has expertise there and wants to contribute, we would love to have your help on the VDAF document. And that's all I've got.
D
All right, so just to wrap up, I wanted to circle back to the high-level questions that I was trying to identify, and hopefully get some discussion around them.
D
First, I guess: is the threat model clear, with the edit that dkg pointed out and that we discovered while presenting it? Second: are there attacks that we're not considering — actively or otherwise — that we think we should address, either as first-class citizens in the protocol itself or as a deployment-specific thing? And the third one:
D
Do folks agree with, for right now, constraining ourselves to mitigating the intersection attack using the proposal that has now been floated, and punting on the other ones for the time being? My sense — based on nothing in the room, but on seeing chatter and hearing people talk — is that yes, let's deal with the intersection attacks, and let's separately, in parallel, talk about how we might consider these stuffing attacks —
D
— and, you know, the over-release of data across tasks, or the oversampling attacks, separately.
H
I think that's the right answer. So: one, yes. Two, I don't know of any, but I'm sure we'll find some. Three, yeah. I think what we should do right now is effectively what you said, what I said, and what dkg said: enforce the minimum batch size, and any submission can only be in one query.
H
A report could only ever be in one query — and then maybe you could relax that requirement later — but that will get you pretty far, and I think it's compatible with actually quite a flexible set of queries, short of drill-down. That will get you pretty far, and then once you have some experience — I think this is a complicated enough thing.
H
It's a new enough thing that we're going to get some experience either pre-RFC or post-RFC. So I think this would get us far enough to make some real progress. And then, as was being discussed earlier, you could have addressed that problem by doing the aggregate on day one and then the drill-down for day two, and that would have also solved the problem — a slightly clunky solution, but a solution.
C
K
Click the button — okay, sorry. We were talking just now about the notion of tracking how many queries a given report had been used in. I think this notion does actually exist in the draft: there's this concept of batch lifetime in there, and that's intended to accommodate Poplar. In the Poplar setting —
K
— it's expected that the collector would make multiple iterative queries against an aggregate — essentially with longer and longer string prefixes — to eventually figure out what the heavy hitters in a population are. So yeah: one, I think we already have this, and I'm pretty sure, having written the code, that the honest implementation already handles it. And we should keep in mind that, I'm pretty sure, in order to make Poplar useful, you have to allow multiple queries against the same set of reports.
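The batch-lifetime bookkeeping just described could be sketched as follows. This is an illustrative model, not the draft's data structures: in a Poplar-style heavy-hitters run the collector queries the same report set once per prefix length, and each query consumes one unit of the batch's allowed lifetime.

```python
class Batch:
    """Toy model of a report batch with a bounded query lifetime."""

    def __init__(self, report_ids, max_queries):
        self.report_ids = report_ids
        self.max_queries = max_queries  # analog of the draft's max batch lifetime
        self.queries_served = 0

    def serve_query(self, prefix):
        # Each iterative Poplar-style query against the same reports
        # consumes one unit of the batch's lifetime.
        if self.queries_served >= self.max_queries:
            raise RuntimeError("batch lifetime exhausted")
        self.queries_served += 1
        return f"aggregate for prefix {prefix!r}"


batch = Batch({"r1", "r2", "r3"}, max_queries=4)
for prefix in ["0", "01", "010", "0101"]:
    batch.serve_query(prefix)
# A fifth query against the same batch would raise RuntimeError.
```

This shows why a strict one-query-per-report rule would break Poplar: the iterative prefix refinement needs several queries over the same reports, bounded by the lifetime counter rather than forbidden outright.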
J
Nick Doty. Seriously, I think this is a great start on the privacy threat model. I don't think we should be confident that we've considered every privacy threat, and in particular I wanted to raise something that I think has come up in the chat, or that Sofia had mentioned in another presentation.
J
There might be some privacy threats that are about groups rather than individuals, and I think that'd be particularly important with small groups. Maybe I don't learn that this particular person had this particular report, but I do learn that, you know —
B
Unfortunately, we're not able to get your audio very clearly. Maybe you can write your question in the chat, and we can invite Sean to come up and show his slides, so we can get started there.
O
O
So this is a list of the task parameters defined in the current draft. Today we are focusing on the parameters that are particularly important to a task.
O
Here we have parameters like the VDAF verification key, which is not necessarily related to a task, and is also a shared secret between the leader and helpers. So what we are talking about today doesn't necessarily apply to those kinds of parameters, but it definitely applies to, say, the minimum batch size.
O
There are no detailed specifics; the draft assumes it's deployment-specific. But there are some potential privacy issues with this. First of all, what if the leader that configures these parameters is dishonest? This is not necessarily just the leader — it could be the collector, or sometimes the leader and the collector could be the same organization. If, when the task is constructed, either the leader or the collector defines a minimum batch size that is too small for the task and then communicates that to the helper, it's impossible for the helper to know.
O
Now, this applies to other parameters too. Especially if you adopt differential privacy, there could be parameters like the differential privacy epsilon and so on. These parameters can also be VDAF-specific. And, like I mentioned: how does DAP earn clients' trust? How does the client know that the task it's participating in is indeed configured with the right parameters, and that they're enforced on the server side?
O
So what I think needs to be addressed in DAP at the protocol level is, first, transparency: the client, or the user, should know the privacy guarantee they are getting — they should know the parameters that define the privacy guarantee of the task. Secondly, there needs to be some enforcement on the server side, in the aggregators.
O
O
This can also allow the client to have some of these parameters hard-coded on the client side. For example, if you are using differential privacy, the client could decide: I want to use local differential privacy with a particular epsilon or lower. The second thing is that the client needs to send these task parameters back to the server. We want to use the extension in the current report, because the extension is supposed to be extending what the report is providing.
O
So this is a way to make it extensible for different VDAFs. We can have one particular extension data type for a VDAF with a particular DP guarantee, or a particular privacy guarantee. But also, because the extension is used in the AEAD, a malicious aggregator cannot change these parameters later on — they would fail to decrypt the report. And thirdly, the aggregators should check that the parameters coming from the extension match what they have stored for that task.
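The tamper-evidence property being described — binding the task parameters into the sealed report so an aggregator cannot later substitute different ones — can be illustrated with a toy sketch. DAP actually uses HPKE for this; the stdlib HMAC construction below is only a stand-in to show why authenticating the serialized parameters alongside the share makes any later parameter change detectable. All names here are hypothetical.

```python
import hashlib
import hmac
import json


def seal(key, input_share, task_params):
    # Serialize the parameters deterministically and authenticate them
    # together with the share, playing the role of AEAD associated data.
    aad = json.dumps(task_params, sort_keys=True).encode()
    tag = hmac.new(key, aad + input_share, hashlib.sha256).digest()
    return input_share, tag


def open_share(key, sealed, task_params):
    share, tag = sealed
    aad = json.dumps(task_params, sort_keys=True).encode()
    expected = hmac.new(key, aad + share, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("task parameters do not match what the client sealed")
    return share


key = b"k" * 32
params = {"min_batch_size": 100, "dp_epsilon": 1.0}
sealed = seal(key, b"share-bytes", params)
open_share(key, sealed, params)  # succeeds with the original parameters
# open_share(key, sealed, {"min_batch_size": 1, "dp_epsilon": 1.0}) would raise
```

The point of the construction is that decryption itself fails when the parameters disagree, so the aggregator's check is not optional goodwill — it falls out of opening the share.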
O
As a quick example: here you have a leader that sends the task parameters to the client and helper out of band. Each client will seal its input shares with the extension that contains the task parameters, and then send those back to the leader in the report. The leader will verify the parameters from the extension, and when the aggregation flow starts, it will send the same extension in the report share to the helper, which will do the same verification.
O
O
What this enables is that we can actually create the task on demand, automatically, when we receive a new combination of task ID and task parameters. So in this diagram we no longer have the out-of-band task parameter distribution between leader and helper; the parameters are only given to the client. The client does exactly the same thing as in task enforcement, but when the leader receives the report, if it's an unseen task ID and task parameters tuple, the leader can create a new task on demand.
O
In that case, your task object becomes kind of an index that just groups reports together. Now, this last slide is a small optimization. Like I said, in this scheme the task is identified by the tuple of task ID and parameters.
O
We can optimize that further by creating the task IDs not randomly as a UUID, but as a hash of some shared info among all the clients participating, plus the extension that includes all the parameters. In this way, the ID can be sent to the server, and it's one more thing the server can verify to check it's a genuine task. But this does have some implications on whether the task ID can be defined on the server side before the task starts.
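The derivation being proposed could look roughly like this. It is a hypothetical sketch — the draft does not specify this scheme, and the hash input layout here is illustrative — but it shows the property that matters: any party holding the same shared info and parameters recomputes the same ID, so a server can verify that a claimed task ID is genuine instead of treating it as an opaque UUID.

```python
import hashlib
import json


def derive_task_id(shared_info, task_params):
    # Deterministic ID: hash the shared info together with a canonical
    # serialization of the task parameters.
    payload = shared_info + json.dumps(task_params, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


params = {"min_batch_size": 100, "vdaf": "Prio3Sum"}
tid = derive_task_id(b"example-task", params)

# Same inputs always give the same ID; changing any parameter changes it.
assert tid == derive_task_id(b"example-task", params)
assert tid != derive_task_id(b"example-task", {**params, "min_batch_size": 1})
```

The trade-off mentioned in the talk falls out directly: since the ID is a function of the parameters, the server cannot pick the ID ahead of time independently of the task configuration.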
O
G
G
There we go. Thanks, John, for the presentation. I just have one note on the notion of dynamic configuration, based on our experience with Exposure Notifications.
K
In that system, the task analog is configured dynamically, just based on what inputs are getting uploaded by that system's ingestion servers anyway. And it turns out that has been quite valuable, because it has enabled the mobile OS vendors to add new aggregations to the system without needing to explicitly coordinate with the three organizations running the other servers. So yeah, I just wanted to —
E
E
— a change, but it would hopefully be nice to look at a PR for this, for —
G
E
— to implement on the server side. So I would say: if you wanted to, start drafting.
M
I appreciate that you're thinking about this, because I think it pushes us to really think about how things are going to be deployed. When I think about how things are going to be deployed, I like to ask a couple of questions by taking the perspective of some of the participants. As a client:
M
I don't know how we would expect the client to be able to choose the consent options here — like, "well, I'm okay with this kind of parameters, but not that kind of parameters." That seems very complicated and difficult for clients to do, and I'm wary of asking clients to make that kind of configuration choice. And when I think about asking someone to operate as a helper —
M
M
— not just, you know, "oh, here's a new set of parameters, I'll just adopt them" — that means the helper is going to need to actually put some constraints on their system about what requests they're going to accept, and now you're asking the helper to make some pretty sophisticated decisions as well.
M
I think the way we want people to step up and say "yes, I'm willing to be a helper" is because they think the measurement is valuable and they also want to protect the clients' privacy. I don't know how you're going to author those kinds of constraints if this dynamic configuration is happening.
O
Yeah, can I just quickly address the question? So I agree that for clients to understand all the privacy parameters and decide to opt in or not is not realistic for most clients. But I think the transparency needs to be there, so the few who do understand can see what kind of collection scheme they are participating in. I think this is very valuable, and sometimes these kinds of opt-in/opt-out policies could be defined by the organization that provides the client software to the individual clients.
O
That's one thing. The other thing: you mentioned that the helper has a responsibility to know that the parameters it receives make sense privacy-wise. That is true, and in issue 271 we also said something about this: if you have differential privacy guarantees, then we could implement some sanity checks on the helper side, and this can be extended to the client side as well — that the differential privacy parameter
O
you received indeed makes sense for the kind of batch size you are defining. But I think the key here is: one, transparency to the client — today we simply don't have that on that side; and two, like you said, when you deploy something you have to worry about how you communicate these parameters between leader and helper.
O
You could find other secure ways to deliver these, but I think it's better if you don't have to worry about that, and there is one option: you configure these using the same route that you use to upload the report to the aggregators, and you achieve both transparency and enforcement in one solution.
B
Thank you, Sean. That concludes our meeting of Privacy Preserving Measurement. Thank you, everyone, for participating and for getting through our whole agenda.
A
I noticed continuing discussion, particularly about STAR, in the Zulip room; feel free to take that to the list. We haven't issued a call for adoption, but feel free to comment on that on the list, because I see that some people are interested. If there are topics that warrant an interim, feel free to come to the chairs; we could schedule one.