From YouTube: IETF115-PPM-20221108-1630
Description
PPM meeting session at IETF115
2022/11/08 1630
https://datatracker.ietf.org/meeting/115/proceedings/
A: Given the short session — my clock shows that it's time to start — I think we're not going to wait; we're just going to dive right in. So this is Privacy Preserving Measurement at IETF 115; I hope you're in the right room.
A: Not seeing any requests for changes to the agenda, so let's get started with the DAP editors. I'm going to give you a 30-minute timer, and you can divide that time as you like among these topics. Please do leave plenty of time for discussion; if you take all the time with your slides, you're not going to get any feedback.
B: No, we did. I've requested to... oh wait, I'm trying to request — I wanted to request to share slides, sorry.
B: All right. This is just going to be a very quick update on recent changes to the DAP spec. I'm going to talk a little bit about the issues we want to address in the immediate future, say a little bit about what our long-term objectives are, and then quickly update folks on implementation status.
B: The major change in the current draft, DAP-02, is this new notion of query types. Folks might remember from the previous version of the spec that the way we would pick a set of reports to aggregate was: the collector would specify a time interval, and then the aggregators would go find the set of reports with timestamps that fall in that interval and aggregate those.
B: Basically, what we do now is: the aggregators partition reports into what we call buckets, and the collector's query specifies a sequence of buckets — this is what we now call the batch. You can think of a bucket, under the old semantics, as one interval in a sequence of non-overlapping time intervals.
B: So now we have two query types. One preserves the semantics from the previous version; we call it "time interval". Then we have this new one called "fixed size", where we don't actually care how the aggregators partition the reports; all we care about is that the batches are all roughly the same size. That's what we accomplish with that.
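The two query types described above can be sketched roughly as follows. This is an illustrative model only — the names and types here are mine, not DAP's wire format — but it shows the contrast: the collector names the bucket(s) for a time-interval query, while a fixed-size query only constrains batch sizes.

```python
from dataclasses import dataclass


@dataclass
class Report:
    timestamp: int  # report timestamp, seconds since epoch


def time_interval_batch(reports, start, duration):
    """Time-interval query: the collector names the interval; the batch is
    every report whose timestamp falls in [start, start + duration)."""
    return [r for r in reports if start <= r.timestamp < start + duration]


def fixed_size_batches(reports, batch_size):
    """Fixed-size query: the aggregators may partition reports however they
    like, as long as every batch is roughly the same size."""
    return [reports[i:i + batch_size]
            for i in range(0, len(reports), batch_size)]
```
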
B: We're anticipating wanting a lot of flexibility here, so as folks start to use the DAP spec, I encourage you to think about whether or not your use case fits, and we'll figure out what we can do to accommodate it — hopefully by spelling out a new query type. Next slide.
B: One minor change I wanted to highlight: we now have this notion of task expiration. The consideration here is primarily operational.
B: The final major change is to HTTP client authentication. We know we're going to need client authentication for a couple of channels. One is between the aggregators: the leader makes requests to the helper, and we want to authenticate those requests. Similarly, the collector requests aggregates from the leader, and we want to authenticate that channel as well.
B: In DAP-01 we had spelled out a kind of simple, dinky bearer-token-based approach. In DAP-02 we decided that instead of specifying a concrete scheme, we would just spell out requirements for establishing the secure channel. The idea is to permit lots of flexibility for implementations, because HTTP client auth can be tricky. Next slide.
B: The plan for the next spec, DAP-03, is basically to address some minor bugs we noticed while working on our implementations of DAP-02. For example, in issue 362: we ended up making the anti-replay requirements a bit stricter than in the previous draft, and we're wondering if that's actually a regression.
B: There's some discussion happening there, and it'd be great to get people's feedback. Another issue to highlight is extension processing semantics. We haven't actually fully spelled this out, and, as we'll see later, we're going to propose a new extension, so I think it's time to figure this out. Beyond that, Tim is going to talk about an API rework proposal.
B: That's the only major change we foresee. Beyond that, integrating Poplar1 is something we want to get to in the medium term. On editorial changes: I think the presentation of the spec is wanting a lot, as many have pointed out. And beyond that, we want to start doing all the things that make a good spec good: experimentation, security analysis, and so on. Next slide — very quickly:
B: There are two implementations of DAP-02 that we're aware of: one by Cloudflare and another by the ISRG. We've been working together closely on our implementations, and we're quite confident we're at a point where we could really start experimenting, so if there's interest, please ping the list. Also, the PPM channel in the IETF Slack is quite active; we'd love to see you there. Finally, I wanted to flag this draft that David Cook, from the ISRG, is working on.
B: If not, we can move on to Tim, who I believe is next on the agenda.
C: Okay. It'd be great if somebody could take over note-taking while I'm speaking. Okay — it looks like you're going to be driving the slides.
C: Let me pull up my notes... all right, let's dive in. For the last few weeks I've been working on a new version of the HTTP API for the Distributed Aggregation Protocol. Currently this exists only as a memo, written up in a pull request on GitHub, that sketches out the proposal, so there's a lot of work to do before we'll even have a PR that could actually be merged into the protocol text.
C: What I want to do at this stage is socialize the idea with the working group and gather feedback, to make sure that we've identified the right problems and that we're headed in the right direction to solve them. I think it's important to do this kind of thing sooner rather than later, the idea being to minimize disruption to the DAP implementations that would have to adopt a new API. As Chris just highlighted, there are only two of them known publicly, so hopefully it's cheaper to have
C: us update our implementations now than to do this months or maybe years down the line. So, on this slide we see the current API surface specified in DAP-02: these seven HTTP requests drive the upload, aggregate, and collect sub-protocols.
C: Next slide, please. One thing we noticed right away — oh, thank you — is that the thing described by the relative path is variably a noun or a verb, which I think makes it awkward to resolve the path against the verb in the HTTP method.
C: We see this reflected in the fact that more than half of the API paths here use POST, which to me suggests "I want to do something to the thing at this path" — but the semantics aren't super clear. Next slide, please. Crucially, POST also means that the requests are not idempotent, which makes it unclear to protocol participants how they're supposed to go about recovering from faults.
C: This is especially a challenge in the aggregate sub-protocol, since that's all about the stateful, multi-round execution of the VDAF verification algorithm. Finally, a problem with the current API layout is that there are cases where servers have to partially parse a message to extract a value — say, the task ID — that is needed to parse the rest of the message: you have to look up the task to figure out what VDAF or query type is in use, and that informs the structure of the remainder of the message in the request body.
C: So that's awkward at best, and risky, I think, in some cases.
C: Okay, next slide, please. With those problems in mind, let's look at what I'm now proposing to do instead. In designing this, I started by enumerating the resources this API is managing — what are the things? — the idea being to let the HTTP methods be the verbs. That's enumerated in this table, and the resources ought to be familiar from DAP-02: HPKE configurations, reports, aggregation jobs, aggregate shares, and collections.
C: Notice that the new paths contain much more information. In particular, the task ID and, in some cases, the unique identifier for the resource are there in the relative path, and most resources are now subordinate to a task. The exception is the HPKE config, because those can be global, so it would be a bit awkward to make it look like everything else.
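To make that layout concrete, here is one way the task-scoped paths could look. These template strings are hypothetical illustrations of the shape described (resources subordinate to a task, with the HPKE config global), not the exact paths from the pull request.

```python
# Hypothetical path templates for the proposed resource layout.
# The exact strings in the actual proposal may differ.
TEMPLATES = {
    "hpke_config":      "/hpke_config",  # global, not under a task
    "reports":          "/tasks/{task_id}/reports",
    "aggregation_jobs": "/tasks/{task_id}/aggregation_jobs/{job_id}",
    "aggregate_shares": "/tasks/{task_id}/aggregate_shares",
    "collections":      "/tasks/{task_id}/collections",
}


def resource_path(kind: str, **ids: str) -> str:
    """Fill in a template; the task ID (and sometimes a resource ID)
    lives in the path rather than in the message body."""
    return TEMPLATES[kind].format(**ids)
```
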
C: Next slide, please. I also tried to pay attention to making better use of PUT, so that we can get idempotence. For example, for an aggregation job you do a PUT request to create one; since there's a unique identifier in the path, the server — the helper, in this case — can disambiguate repeated requests for creating or advancing the state of an aggregation job, which, again, is the stateful VDAF verification algorithm.
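The idempotence argument can be illustrated with a toy handler: because the job ID is in the path and chosen by the requester, a retried PUT either creates the same state again or is recognized as a replay. A minimal sketch, with status codes chosen for illustration:

```python
# Toy in-memory store standing in for the helper's aggregation-job state.
jobs: dict[str, dict] = {}


def put_aggregation_job(job_id: str, body: dict) -> int:
    """Handle `PUT .../aggregation_jobs/{job_id}` idempotently:
    replaying the identical request has no new side effects."""
    existing = jobs.get(job_id)
    if existing is None:
        jobs[job_id] = body
        return 201  # Created
    if existing == body:
        return 200  # Replay of the same request: safe to retry
    return 409  # Same ID, conflicting contents: reject
```
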
C: Next slide, please. For some resources, the unique identifier appears in the URI; these are the cases where the identifier is assigned by the requester. For a report, the client assigns the report ID; for aggregation jobs, the ID is assigned by the leader. Next slide, please. In other cases it's the message handler that assigns the unique ID of the resource, so the request is made to the plural resource — "aggregate shares" or "collections".
C
Then
it's
the
Handler
of
the
messages,
responsibility
to
sort
of
assigning
identifier
and
construct
a
URI.
So
for
that
reason,
in
these
cases
we
don't
specify
what
the
URI
for
the
individual
resource
is
just
what
methods
it
supports
and
what
the
you
know
what's
and
what
the
semantics
of
those
calls
are
next
slide,
please.
C: So, despite being significantly different — and, I hope, better — the migration from the DAP-02 API to this proposal ought to be relatively smooth for implementations.
C: This table enumerates the old API endpoints alongside the corresponding new ones, and there's actually a one-to-one analog in almost every case. The migration, hopefully, would just consist of adopting the new message types — in some cases the messages no longer need to contain information like a task ID or some other unique identifier, as that's been hoisted up into the URI — and updating the path at which a message is handled. The actual message-handling code shouldn't change
C: much. The exception is the handling of aggregate shares, which in DAP-02 was one synchronous POST request from the leader to the helper to initiate the construction of an aggregate share and then obtain it. In this proposal I've changed the handling of aggregate shares to align it with the leader's collect resource, primarily for symmetry, since the two resources — the helper's aggregate share and the leader's collection — are very much the same.
C: This also enables the helper to compute aggregate shares asynchronously, since that process can be expensive and take some time. Okay, next slide, please.
C: There's a lot more to all this that we didn't have time to discuss today, and a lot more work to do. All of this needs more analysis to show that we can do error recovery in all the cases we care about, and there are some open design questions.
C: For one thing, does it make sense to align the aggregate share resource on the helper even further with the collection resource on the leader? Also, in my opinion, the collection resource right now is rather awkward; I'm not sure "collection" is the right noun. Chris Patton suggested that we could call this a "collection job" instead, which I think is better. But maybe we should think about this as a "query", in the sense that the collector is making a query against an aggregate that's been compiled by the aggregators.
C: The other thing is that there's some awkwardness in the collect API, which we didn't discuss just now because we don't have time. It's there because we're trying to write one API for the collection flow that accommodates both the time-interval and fixed-size query types that Chris discussed earlier, so I think we should hash out:
C: is this a good goal, or should we accept that these two things are fundamentally different and surface two different APIs, each better fit to its specific task? I'll close with a call to action: if you've been following all this DAP and PPM work and find it interesting, but you don't have a good handle on the zero-knowledge proofs and so on — yet you do know your PUTs from your POSTs — please come and help us out with this.
C: There's a pull request linked here, and otherwise we'd love to hear from you — on GitHub, on the PPM mailing list, on Slack, wherever. All right, that's it. Thanks very much.
E: Martin Thomson. Thanks for walking through this, Tim — that's a hell of a walk. Can you go back a slide or two?
E: Right. So I'm seeing a bunch of, effectively, URI templates, which is all reasonable. My question is: does the client determine any of the things that are in those curly braces in those URLs?
C: Yes — the client chooses the report ID when it uploads a report. Additionally... well, I'll let Chris and Shan speak to the task provisioning extension later, but I think they'll explain how, in that setting, the task ID can have—
E: —something to do with the client. That tends to produce interesting problems. It may be that you're far enough down the path that it doesn't matter as much in this context, but if you have multiple clients for the same task ID, they might, for instance, produce the same report ID — or whatever other pieces you have there — which creates potential collisions and other things.
E: The usual practice here is to have the client request the creation of a resource, and the server tell it where it is. That means you're losing some of your idempotency, but in exchange you get a lot more resilience on the server side against things like clients that might accidentally or maliciously try to create collisions in the resource identifiers. It also gives the server a little bit more control over how it structures identifiers for its own purposes.
E
So
if
the
server
is
in
a
position
where
it
needs
to
know
how
to
route
some
of
the
the
queries
from
to
each
one
of
these
resources,
if
it
gets
to
choose
the
identifiers,
then
it
can
do
things
to
optimize
and
change
the
way
it
processes
these
requests,
so
I
would
recommend.
Maybe
in
the
cases
where
you're
trying
to
create
a
resource,
you
you
look
at
using
post
with,
say
a
201
response
that
contains
the
location
of
the
resources
created.
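The server-assigned-identifier pattern being suggested here typically looks like the following in HTTP terms. This is a toy sketch of the generic pattern; the resource name and ID format are hypothetical, not from the proposal.

```python
import itertools

# Toy store and counter standing in for server-side state.
_counter = itertools.count(1)
_store: dict[str, dict] = {}


def post_collection(task_id: str, body: dict):
    """Handle `POST /tasks/{task_id}/collections`: the server mints the
    identifier and returns it in a Location header on a 201 response."""
    resource_id = f"c{next(_counter)}"  # server-chosen identifier
    _store[resource_id] = body
    location = f"/tasks/{task_id}/collections/{resource_id}"
    return 201, {"Location": location}
```
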
C: One of the nice properties of systems like Prio is that the client only speaks once, and I think that's something we should strive to maintain. But I do take your point about the risks of clients choosing report IDs; it's well taken.
E: Yeah. The other thing is: it's totally okay to use POST and treat it like it has idempotent semantics, because you know what the semantics of the operation are. So in the case where you're posting and you didn't get a response from the server, and you're reasonably confident it's got some other information that can be used to prevent a replay or something along those lines, then it's totally okay to do another POST. It's fine.
E: You won't necessarily engage all the automated logic that might be in some stacks to do the POST again — but then again, if you're doing it in a browser, you might get retries on POSTs anyway. Just saying.
A: Okay, let's keep going. Chris, do you want to try sharing slides again?
B: Oh yeah, here we go. So we're on to differential privacy, right?
B: All right, everyone can see that? Cool. Oh — actually, let me set my timer real quick. Okay. So this is actually not going to be that specific; this is more about the land of PPM in general. We've been talking about the composition of differential privacy with protocols like DAP in various venues, and we're reasonably confident there's a lot we can do here.
B: What I want to pitch today is that we need to start working on a draft that provides some concrete guidelines for integrating specific differential privacy mechanisms with specific PPM protocols. To get at the motivation, I think the starting point should be: what does a protocol like DAP provide, and what does it not provide?
B: DAP specifically provides MPC-style security guarantees: basically, you want the collector to be able to compute some aggregate over some measurements without seeing the individual measurements themselves. In our threat model for DAP, this is certainly a necessary property for privacy, but there's really no reason to think it's going to be sufficient for every application. The canonical example is one Chris Wood brought up at the last IETF; I have a link here to the slides.
B: You should go check it out. There's this risk of overexposing a user if, say, some automated system measures the client multiple times in a single batch, or across multiple batches over time. Either way, there's this risk of overexposing information about an individual user.
B: So what can we do about this? Well, I think that mechanically, at the level of DAP, there's really not much we can do that's going to be good for all situations.
B: An alternative way to approach this is to formalize what we mean by privacy, and one answer to that question is this notion of differential privacy, which has been around for a long time. I'll give a quick overview of my own understanding of how DP works. You basically imagine you have some randomized mechanism
B: that exposes aggregates over measurements, and the property we want is that the distribution of the output should not depend significantly on any one individual's measurement. We can formalize this by considering the difference in the distribution of the output between two databases that differ in exactly one measurement.
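The standard way to write this condition (the classic ε-differential-privacy definition; added here for reference, not taken from the slides) is:

```latex
\Pr[\,M(D) \in S\,] \;\le\; e^{\varepsilon} \cdot \Pr[\,M(D') \in S\,]
\quad
\text{for every outcome set } S \text{ and all databases } D, D'
\text{ differing in exactly one measurement,}
```

where \(M\) is the randomized mechanism and smaller \(\varepsilon\) means stronger privacy.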
B: If we have some secure method for computing the aggregate, then to make it differentially private, what we do, basically, is randomly sample some noise from an appropriate distribution, and instead of handing the collector the aggregate, we hand the collector the sum of the aggregate and the random noise. Intuitively, if we perturb the output enough, the idea is that we hide the contribution of any one individual measurement.
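One standard instance of this additive-noise idea is the Laplace mechanism, which also illustrates the later point about mapping uniform random bits to a noise distribution (via inverse-CDF sampling). This is an illustrative sketch only; the parameter names (`sensitivity`, `epsilon`) and the choice of Laplace noise are mine, not something DAP specifies.

```python
import math
import random


def laplace_scale(sensitivity: float, epsilon: float) -> float:
    # Laplace mechanism: scale b = sensitivity / epsilon.
    return sensitivity / epsilon


def sample_laplace(b: float, rng: random.Random) -> float:
    # Inverse-CDF sampling: draw u uniform in (-1/2, 1/2), then
    # x = -b * sign(u) * ln(1 - 2|u|) is Laplace(0, b)-distributed.
    u = rng.random() - 0.5
    return -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def noisy_aggregate(true_sum: float, sensitivity: float, epsilon: float,
                    rng: random.Random) -> float:
    """Release the aggregate plus Laplace noise instead of the exact sum."""
    return true_sum + sample_laplace(laplace_scale(sensitivity, epsilon), rng)
```
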
B: That's a really nice, intuitive idea, but differential privacy has a lot of subtleties. The main consideration when you're applying DP is the privacy budget: basically, the degree of privacy you get from the system depends on how many queries you allow in the system, as well as on the exact nature of those queries.
B: Despite this complexity, I think there's a clear win here: being able to compose PPM protocols that have MPC-style security goals — DAP, or even the security properties we get from something like STAR — with differential privacy is going to be interesting for a lot of different applications.
B: So the question is how, and in all of the discussions, I think the main thing to take away is that there's not one clear—
B
There's
not
one
solution,
that's
going
to
fit
every
protocol,
the
the
the
the
the
the
the
the
the
kind
of
the
ideal
mechanism,
or
even
like
the
set
of
suitable
mechanisms,
is
going
to
depend
first
of
all,
first
off
on
your
base
protocol,
so
differential
privacy
is
going
to
look
very
different,
but
for
star
versus
versus
dap,
and
also
you
have
to
be
very
careful
about
considering
the
application
and
and
the
nature
of
the
data
that
you're
collecting,
so
I
think
I.
B: One thing that would be really useful — something the IETF is quite good at — is spelling out algorithms for sampling noise from a given distribution: basically, take uniform random bits and map them to a random point in a Laplace distribution, or whatever. Then there are guidelines for enforcing privacy budgets; as a cryptographer, my analogy is something like safety margins for an AEAD encryption scheme.
B
Is
there
an
analogous
kind
of
set
of
guidelines
we
can
develop
for
differential
privacy
and
then
another
idea
is,
you
know
like
something
we
can
do
is
spell
up
concrete
mechanisms.
There
are
lots.
There
are,
like
you
know,
lots
of
Prior
work
on
this,
that
that
that
apply
more
or
less
directly
to
protocols
that
we
have
already
so
yeah
I
think
there's
a
lot
of
interesting
stuff.
B: we can do here. I'm kind of just throwing out ideas; I would love it if someone in the room who has expertise in differential privacy had a strong opinion about what we should do here. And with that, we have five minutes for questions.
H: All right — Eric Rescorla. As I say, more a comment than a question: we should do nothing here.
H
We
have
like
already
a
very
complicated
specification
that
we're
trying
to
get
out
the
door
these
are
on.
These
are
they're
independent
pieces
of
work,
and
we
should
not
do
this
one
until
the
first
one
is
done,
is
consuming
the
exact
same.
Excuse
me,
the
exact
same
resources
so,
like
I,
think
it's
I
I,
don't
mean
to
give
the
impression
I,
don't
think
it's
important
I
think
we
should
sequence
it
next,
but
I.
H
Don't
think
I
think
that
by
trying
to
do
them
both
like
the
only
thing
we
should
be
doing
right
now,
is
making
appropriate
changes
to
adapt
to
make
this
possible
if
necessary,
and
if
there
are
none,
we
should
do
nothing
I,
I,
say
and
again.
I
don't
mean
to
make
light
of
this
problem.
I
think
it's
a
very
real
problem,
but
I
think
it's
also
a
problem
that
is
not
like
very
straightforward
and
so
I'd
like
to
like
keep
like
got
my
eye
on
the
ball.
I: Thanks, ekr. Jonathan Hoyland. This is more, I guess, related to the first presentation in a way: when you have different query types, does that break differential privacy, or make it harder or worse? Because there are different groupings of queries that are now possible, and you have to consider the interaction between them and how it leaks information.
B: Yeah. One thing we're trying to guarantee in the spec is that, regardless of the query type, you have a proper partitioning of the reports, so no report is going to be used in more than one batch. That's something we try to guarantee.
B
However,
like
to
your
point,
though,
there
are
considerations
for
differential
privacy,
the
main
one
it
would
be
the
size
of
the
batch.
So
my
understanding-
and
this
is
like
I-
you
know
I
hope,
I
hope,
I-
see
Charlie
Harrison
in
the
room
and
I'm
calling
him
out
to
to
correct
me
here.
B
G: Yeah, I can quickly speak to that. I think the batch size is important for some of the deployments we're considering for differential privacy — namely, deployments similar to the COVID privacy-preserving analytics work, where noise is added on the client.
G: In that case, the total amount of privacy is based on how many clients you're adding together, because you're summing a lot of noise from a lot of clients. In the central case, where the aggregators themselves are adding noise, the batch size is not related to the privacy of the output; it's only related to the relative error you're going to get.
I: Maybe this is just completely mistaken, but if I make a query that says "give me the average of all things" — I want to know, over every single record, what the average is — does that mean I can no longer do any more queries, because every record has now appeared in one batch?
B: At some point you have to stop. You don't get to do a rolling average — "give me the average of everything so far". At least in DAP, what we're requiring you to do is, at some point, say "this is the end of my batch". So yes, the intent is that batches never overlap.
H
Right
I
mean
this
goes
back
to
the
discussion.
We
had
sir
Erica
discussion.
We
had
last
time
about
drill
down,
right
and,
and
so
once
again
we're
going
to
address
that
problem.
At
some
point,
though,
I
understand,
we've
said
not
address
it
like
at
this
exact
moment
and
that's
another
reason
why
the
DP
problem
would
be
so
difficult.
A: Great. I see ekr back in the queue, but Chris and Shan, this is your time for in-band task provisioning.
F: Sorry, can anyone remind me how I share slides? Obviously Chris is passing me control for sharing slides, but I don't know where else to click.
A: There you go.

F: Let me turn on my camera as well.
F: Hi, everyone. Next I'll talk about in-band task provisioning. Right now it's an individual draft for an extension to the core DAP protocol. First of all, let's talk about the motivation. Today, the DAP protocol doesn't actually define how a task should be provisioned or configured; it basically says it will be done out of band.
F
We
delete
our
helpers
agree
that
a
particular
mechanism
to
share
the
parameters
for
configuration
as
an
provision
tasks
securely,
but
here
we
want
to
introduce
a
new
mechanism
for
provisioning,
a
task
purely
through
the
existing
flows,
especially
the
upload
flow
and
the
aggregator
share
flow
and
use
the
extension
mechanism
without
introducing
any
extra
flows.
And
hopefully
this
will
be
useful
for
many
deploy
deployments
and
many
return.
Helper
diplomas
can
just
provision
tasks
in
the
same
way
without
defining
any
extra
candidates.
F: Now, the basic protocol architecture. Here we introduce a concept called the task author. The author is basically a logical participant that defines the task — it defines what configuration goes into a task — and we assume it has the ability to send the task configuration object (I'll show later what it contains) to the clients. In reality, the author could be implemented by the leader or the collector, so we don't place any more trust in the task author. Then we have the aggregators, and the basic flow is the following.
F
The
author
will
set
the
task
config
to
the
clients.
The
clients
will
verify
the
task,
config
make
sure
it
makes
sense,
and
then,
when
it
decides
to
opt
into
a
task,
it
will
send
the
report
as
usual,
but
contains
task
config
as
part
of
the
extension
data,
and
this
report
will
be
received
by
the
leader
which
is
going
to
do
its
own
procedures
for
checking
the
task.
F
F: The task config object itself is basically the set of task-specific parameters described in the core protocol. Here we group them into different structs based on their purpose. There's a query config, which includes things like the minimum batch size and the maximum batch size (for example, for the fixed-size query), and the VDAF config contains the VDAF-specific configuration, like the type of the VDAF and the buckets needed
F
For
for
the
normal
preo3
histogram,
and
here
we
because
the
tasks
are,
the
task
is
created
on
the
fly,
so
it's
necessary
to
have
a
mechanism
for
all
parties
to
derive
the
same
task
ID
based
on
the
same
task
configuration.
So
a
task
ID
here
is
simply
created
by
a
shadow
56
hash
on
the
serialized
task.
Config
object
just
go
into
some
details
of
the
client
side.
Clan
simply
receive
the
task.
F
Config
sets
existing
type
to
task
proof
and
then
encode
the
task
config
in
part
of
extension
data,
which
itself
is
part
of
report
metadata
and
all
the
aggregator
side.
You
need
an
helper.
They
both
will
check
the
task,
configs
received
and
run
the
same
task,
ID
generation
mechanism
and
make
sure
the
task
ID
costing
measures
the
generative
task,
ID
and
here
often
basically
means
the
the
provision
task
is
either
unrecognized,
which
means
the
leader
or
helper
needs
to
provision.
This
task,
or
the
config
received
matches,
are
already
configured
task.
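The task ID derivation described above can be sketched in a few lines. This is illustrative: the actual draft hashes a specific serialization of the task config, and the serialization details here are left to the caller.

```python
import hashlib


def task_id(serialized_task_config: bytes) -> bytes:
    """Derive the task ID as SHA-256 over the serialized task config, so
    every party holding the same config derives the same 32-byte ID."""
    return hashlib.sha256(serialized_task_config).digest()
```

Since the ID is a deterministic function of the config, any party can verify that a claimed task ID matches the config it was given, and any change to the config yields a different task.
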
F: I haven't mentioned the collector side, but hopefully the collector changes will be very simple; the collector is very much oblivious to the use of taskprov. It needs the ability to receive the task config from the task author, but after that it simply sends the collect request, including a task ID generated from the same task ID hashing mechanism. Here are some links to the draft and the GitHub repo; there is one reference implementation, in Daphne, which is still in progress.
F: The main purpose of this presentation is to show everybody that taskprov is useful for many deployments. It may not be useful for all deployments, but we want to gather more feedback and see whether the group wants to adopt it as a working group extension. That's it, so I'll use the remaining time for questions, please.
H: Eric Rescorla. I guess I'm not really persuaded by the problem statement. I certainly understand why you might want to have dynamic configuration, but I don't understand why tunneling it through the client is a good idea. It seems to me that there's a relationship between the collector and the leader and helper, and between the collector and the client, and that's the appropriate place for this information to be carried; I don't understand why you're tunneling it through the client.
F: That's basically what the core protocol suggests, but I think the problem is that there isn't a specified mechanism for doing that. So if you're a leader or a helper — if your organization operates with many of these leaders or helpers — then you either need a unified mechanism for all the other organizations working with you as an aggregator, or you have to maintain different ways of distributing task configs to each of them.
H
Other
projects,
you
misunderstanding
me
I'm
not
opposing
having
a
protocol
for
for
provisioning
the
test
config.
What
I'm
saying
is
that
protocol
should
be
two
separate
protocols,
one
for
the
well
really.
The
only
protocols
actually
needed
is
the
one
from
The
Collector
to
leader
and
Helper,
and
that
should
be
standardized
and
the
client
should
not
be
part
of
the
picture.
F
You
certainly
could
stand
arise
from
collectors
or
your
helper,
but
I
think
there
are
extra
benefits
when
it
comes
from
the
client
side.
So
in
this
case
the
client
knows
exactly
how
the
task
will
be
created.
So
the
task
the
current
report
will
be
aggregated
in
will
have
exactly
the
same
task
config
as
the
client
have
seen
it.
If
you
started
out
just
between
collector
I
need
a
helper,
so
you're
basically
asking
the
client
to
trust
whatever
the
leader
helper,
the
leader
or
or
task
author
sending.
F: Yeah — no, I'm talking about the task config.
H
Okay,
but
like
if
you
want
that
this
is
your,
this
is
your
problem,
then
we
should
just
have
checksums
on
all
the
configs,
which
we
in
fact
discussed
previously
so
but
like
here's,
the
thing
the
client
cannot
implement
this
cannot.
The
task.
Config
is
not
even
remotely
sufficient
for
the
client
to
implement
this,
because
a
client
needs
instrumentation
in
the
client
code
to
collect
the
data.
That
goes.
H
The
testing
thing
so
like
the
client
needs
all
kinds
of
garbage
and
not
just
the
test,
config
and
so
like
that,
and
that
needs
to
be
delivered
by
some
other
channel.
The
client
has
all
that
stuff,
so
like
yeah
like
again
like
again
I'm
on
board
with
Dynamic
configuration,
but
this
is
like
not
the
right
approach.
H
Do you want to respond to me, or...?
H
Yeah, sure. I mean, the thing to understand is that this is the reason we didn't do this in the first place: in order to make this work, you've got to modify the client code, and so even remotely provisioning the client is almost impossible, because...
D
I will be brief. I think my question was about whether I could use the benefits of doing in-band provisioning, and the draft seems to suggest that the client will decide, based on some of that configuration, whether this seems like an acceptable task or not.
F
There could be more advanced checks, for example if we introduce DP. Whether we do that is another question, but if we do introduce DP, we might want to check on the client side whether the client is willing to participate in a task with a certain DP guarantee.
A
Hi. What I want to say is that, speaking as somebody who implemented a client that reports metrics, I appreciate ekr's point that it takes code to implement new metrics, but it does seem to me like there is room here for essentially dynamic reconfiguration of how numeric values, especially, are reported. So if I decide that I've been measuring the average of some value, but actually I want to switch and start measuring the histogram of that value, do I have to push out new code to all of my clients?
A
Do I have to reach all of my clients through some sort of external control system, or can that actually be done in band through DAP? For that kind of narrow use case, I think there could be value here.
F
Yeah, I think there are two different things here: one is the client implementation and the other is the task distribution. I think in today's DAP, task distribution is not defined; it's very much deployment specific, or outside of the scope. But for the client side, I think there is a lot of scope for the client to do things to ensure the privacy guarantees and the transparency it provides. For example, we would like clients to log all the task configs they have received.
F
So, you know, that's true: yes, the client will have to do extra work to implement these. But if you consider that you're already receiving the task config from the server, and you might also want to log them, then just putting it into the metadata and sending it to the server side, I don't think it's a massive leap from what's already being done.
H
Go ahead. No, I mean, Ben, you're certainly right that it might be nice to reconfigure the client without having to load new code, but that's not going to be done here. First of all, this doesn't provide the channel for doing that, so you still need a channel, and with high probability that channel is not going to be this.
H
Any new task config is going to come with whatever remote configuration mechanism your product already has, and that's again a much richer mechanism than this thing. So I just don't think it's really feasible; the task config is not really enough to reconfigure the client. But again, I guess I just think...
H
That's missing the main point, which is that you haven't adduced a good reason to tunnel this data through the client to the helper. What would be much more sensible is to configure that directly in both locations, and then, if you want to compare them, compare hashes. I just don't understand why you want to tunnel this through the client.
B
So I'll just quickly say: I want to point out, first of all, that there is much more to the thing that is actually serialized in the extension than just, say, the min batch size.
B
In particular, the VDAF that you're going to use, like Prio3 sum or histogram, etc., is going to be part of that configuration. And the second thing, just taking a step back: we're not asking for an architecture change to DAP.
B
The question here is really: is this in the scope of a protocol extension? Is this the sort of behavior change that folks think is useful to make? Okay, that's it, thanks.
J
Yeah, should I start? I think I can control my slides using my phone. It's Shivan here; I'll be talking about STAR, which is Distributed Secret Sharing for Threshold Aggregation Reporting. The main idea is that we're getting K-anonymity for clients as they report measurements to an untrusted server.
J
The goals are that it should be cheap, fast, simple, and obviously private; that's what we're doing here. Just a very quick overview, similar to what I did last time: the idea is that the client wants to send a telemetry value to the server, but only wants the server to see it if there are at least K other submissions of the same value by other clients. So as an example, a JSON blob like city: Vancouver.
J
So if multiple clients have the same value, the same measurement, then they will get the same key, and they all generate a secret share of that key, and they send the server the encrypted message and the secret share of the key. On the other side, if and only if the server gets K shares, it can recover the original key and then decrypt the encrypted message. This is not a new idea, but we are basically using it for privacy-preserving measurement.
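The K-of-N recovery just described can be sketched with Shamir secret sharing over a prime field. This is a toy illustration, not the STAR draft's actual construction: the field size, the hash-based coefficient derivation, and the function names are all placeholders made up for this sketch. The one property it shows is the important one from the talk: clients holding the same measurement derive the same polynomial, so any K of their shares recover the same key f(0), while fewer than K shares reveal nothing about it.

```python
import hashlib
import secrets

# Toy field: a Mersenne prime, a stand-in for the draft's real parameters.
P = 2**127 - 1

def coeffs(seed: bytes, k: int):
    """Derive the degree-(k-1) polynomial deterministically from the
    measurement-derived seed, so clients with the same measurement use
    the same polynomial (and therefore hide the same key f(0))."""
    return [int.from_bytes(hashlib.sha256(seed + bytes([j])).digest(), "big") % P
            for j in range(k)]

def key(seed: bytes, k: int) -> int:
    """The key protected by the threshold scheme is the constant term f(0)."""
    return coeffs(seed, k)[0]

def share(seed: bytes, k: int):
    """Each client evaluates f at a fresh random point: that pair is one share."""
    c = coeffs(seed, k)
    x = secrets.randbelow(P - 1) + 1  # random nonzero x-coordinate
    y = 0
    for a in reversed(c):  # Horner evaluation of f(x) mod P
        y = (y * x + a) % P
    return (x, y)

def recover(shares):
    """Lagrange-interpolate f(0) from any k shares with distinct x-coordinates."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total
```

With K = 3, any three independently generated shares for the same measurement interpolate back to the same key; shares derived from a different measurement come from a different polynomial and recover a different key.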
J
It's also really important that there is a proxy in between; the plan is to use OHAI, but you can use your favorite network for that, you can use Tor. And there's also this idea of a randomness server, in case the measurement has low entropy.
J
If it's a low-entropy space, then to prevent the server from brute-forcing all possible measurements, you use the randomness server to get the randomness, and the randomness server uses a VOPRF so that it doesn't learn the input value but can still provide the randomness.
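The draft specifies a VOPRF over a prime-order group for this. As a rough stand-in that fits in the standard library, here is Chaum-style RSA blinding, which has the same shape of flow: the client blinds its hashed measurement, the randomness server evaluates under its secret key without ever seeing the input, and the client unblinds a deterministic output. This is not the primitive the draft uses, and every name and parameter below is illustrative; as a side note, the unblinded value here is also an RSA blind signature, the same primitive mentioned later in the session as something the aggregation server could verify against the randomness server's public key.

```python
import hashlib
import secrets

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits: int) -> int:
    while True:
        n = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(n):
            return n

def keygen(bits: int = 512):
    """Randomness server's keypair (toy size; real deployments use larger)."""
    p, q = random_prime(bits // 2), random_prime(bits // 2)
    n, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))
    return (n, e), d

def h2i(msg: bytes, n: int) -> int:
    """Hash the measurement into the ring Z_n."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def blind(msg: bytes, pub):
    """Client: hide H(msg) under a random blinding factor r."""
    n, e = pub
    r = secrets.randbelow(n - 2) + 2
    return (h2i(msg, n) * pow(r, e, n)) % n, r

def evaluate(blinded: int, d: int, n: int) -> int:
    """Server: apply its secret key; it never learns H(msg)."""
    return pow(blinded, d, n)

def unblind(sig_blind: int, r: int, pub) -> int:
    """Client: strip the blinding to obtain the deterministic output H(msg)^d."""
    n, _ = pub
    return (sig_blind * pow(r, -1, n)) % n
```

The unblinded value equals the server's direct evaluation on the hashed input, so all clients with the same measurement obtain the same randomness, yet the server only ever sees a uniformly blinded element.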
J
There was some feedback on the list, and we've been getting a bunch. One idea was a DoS attack using corrupt reports, where essentially the client wants to prevent recovery of a given telemetry value, so it sends a random secret share for a given tag. We worked on this, and we addressed it using verifiable secret sharing.
J
So we have this idea of a share commitment, which now becomes the tag, and verifiable secret sharing allows checking whether a particular share is valid, importantly, before you do recovery, so you're not wasting cycles. This adds a little bit of computational cost; it's about a big-O of K in added computation. Moving on, I just wanted to give a note on implementation: it's shipping in the Brave browser. The Rust implementation was the original one.
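The share-validity check described above can be sketched with Feldman-style verifiable secret sharing: each polynomial coefficient is published as a commitment g^(a_j), and a share (x, y) is accepted only if g^y equals the product of the commitments raised to the powers x^j. The parameters below are deliberately tiny toy values (p = 2039, q = 1019) chosen only so the arithmetic is visible; the draft's actual VSS construction and group are different, and the function names are invented for this sketch.

```python
import secrets

# Toy group: P = 2Q + 1 is a safe prime and G = 4 generates the order-Q
# subgroup of quadratic residues mod P. Far too small to be secure.
Q = 1019
P = 2 * Q + 1  # 2039
G = 4

def deal(secret: int, k: int, n: int):
    """Shamir-share `secret` with threshold k and publish Feldman commitments."""
    coeffs = [secret % Q] + [secrets.randbelow(Q) for _ in range(k - 1)]
    commitments = [pow(G, a, P) for a in coeffs]  # C_j = G^(a_j)
    shares = [(x, sum(a * pow(x, j, Q) for j, a in enumerate(coeffs)) % Q)
              for x in range(1, n + 1)]
    return shares, commitments

def verify(x: int, y: int, commitments) -> bool:
    """Accept (x, y) only if G^y == prod_j C_j^(x^j), before any recovery work."""
    rhs = 1
    for j, c in enumerate(commitments):
        rhs = (rhs * pow(c, pow(x, j, Q), P)) % P
    return pow(G, y, P) == rhs
```

A corrupt share, the DoS vector mentioned earlier, fails the check immediately, so the aggregation server can discard it without attempting recovery.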
J
We have some bindings for that, but there's also a new one that Chris Wood, who is also a co-author on the draft now, wrote in Go, and I think that one is a lot more up to date with the draft. What's new in the newest version is that we specify both verifiable and unverifiable secret sharing, and we're refactoring the document to be easier to implement; Chris did a bunch of work helping us with defining the cryptographic APIs and functions.
J
We also have defined the protocol message types for IANA. And we also talk about garbage reports, which is this idea that the client generates a key from one message but encrypts and sends a different message. In this case the recovery happens correctly, but the value will be garbage. There are a couple of different ways to address this. ekr pointed out that throwing out the whole batch causes a simple DoS again, so we definitely don't want to do that.
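A minimal sketch of why garbage reports are hard to catch, using a made-up XOR-keystream "encryption" purely for illustration (nothing like the draft's actual encryption): the threshold recovery yields the correct key either way, so a report whose key was derived from one message but whose ciphertext carries another decrypts "successfully" to garbage, and nothing in the report itself flags it.

```python
import hashlib

def derive_key(measurement: bytes) -> bytes:
    # Stand-in for the key clients derive from the (randomized) measurement.
    return hashlib.sha256(b"star-key|" + measurement).digest()

def keystream(key: bytes, n: int) -> bytes:
    # Hash-counter keystream; a toy, not a real cipher.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def seal(key: bytes, msg: bytes) -> bytes:
    # XOR "encryption": sealing and opening are the same operation.
    return bytes(a ^ b for a, b in zip(msg, keystream(key, len(msg))))

measurement = b"city:Vancouver"
key_bytes = derive_key(measurement)

honest_report = seal(key_bytes, measurement)
garbage_report = seal(key_bytes, b"zzzz:!garbage!")  # key from one message, body another
```

Once K shares let the server recover `key_bytes`, both reports decrypt cleanly; only the garbage one yields a value that never matched the key derivation, which is what the blind-signature idea discussed next aims to detect.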
J
You can do a majority vote, but one idea that we had was that you could use blind signatures instead of an OPRF. The idea here is that you would bundle the signature that you get from the blind signature operation and send it in your encrypted message, and then, when the aggregation server recovers the message, it can also check the signature, verify it against the public key of the randomness server.
J
We haven't defined this yet in the draft, but we at least describe the problem. Anyway, I guess we're calling it Superstar now: the idea that you can pick your secret-sharing scheme of choice and pick your signature scheme or protocol of choice, and they give you more or less protection against client threats. So I guess we're kind of recommending the third option for a lot of implementations...
J
Oh sorry, the second one: with verifiable secret sharing and the regular OPRF you can prevent the trivial DoS attack, which I think is really important. But if you also want to prevent the bad-ciphertext attack, you can use verifiable secret sharing and blind signatures, and there's increasing implementation complexity and cost to this.
A
Okay, you're up. Sorry, Martin Thomson is first; I'm reading from the bottom, so Martin gets the first cut.
E
Thanks for doing this; I think this sort of helps a lot. I am seriously concerned about the computation cost of recovering values once the value of K gets big. You said it was O(K); I think Alex and I in the chat have...
G
E
We believe it's K squared, because each submission requires that you perform a computation across K values in order to generate the confirmation value, and every submission contains K field elements. So that is scary.
E
It would be interesting to see how this compares to some of the alternatives; for example, how does this compare to Poplar in terms of the computation cost involved?
J
I agree; I think some more analysis on the performance aspect of it, and how it compares to that especially, would be pretty interesting, yeah.
E
I don't particularly care as much about the bad-ciphertext thing, but it seems to me that in the case of K equals 100, you're going to get 100 submissions, and most of them will probably be good; and if most of them aren't good, which is the point where the verifiable secret sharing breaks down anyway, you won't be able to recover them.
H
I'll echo what Martin said. I think the primary value proposition for this work over Poplar is performance, and so if it's not faster than Poplar...
H
...then it's kind of hard to justify. So I think we need to see some analysis of that, and I agree that the K-squared case is pretty scary. I think maybe you meant it was O(K) multiplied by the existing thing, which is true, because right now you have to do O(K) computations and now you have to do O(K) computations per client; so, whatever, but it is O(K squared). I agree with Martin. Second, as I said on the list...
H
...the crypto here needs to be cut apart into two pieces: one piece needs to go to CFRG, and one piece needs to happen here.
J
So would you say we should block adoption until we do that? ("Yeah, absolutely.") So there is a draft in CFRG right now, the FROST one, that also uses and defines verifiable secret sharing.
J
So I think, you know, if CFRG folks are interested, I think we could refactor that document so that it can serve as a pointer to both documents. Sure.
H
I guess my point is, this contains a lot of stuff that is GF(p) arithmetic; it has things like scalar multiplication, and that is way, way too much for this. The way this needs to work is that there needs to be a box that is called verifiable secret sharing that is sucked in and consumed, the same way as TLS consumes X25519, the same way as DAP consumes...
H
...you know, Poplar. And the reason for this isn't just persnicketiness about the purity of the IETF; it's about who can do the review and where it has to happen. I skimmed the VSS thing, and it's a question of how much time it would take me to persuade myself this is correct; the purpose of this insistence is to really ensure that it's correct.
K
Splitting the document, or at least making it really clear, would only help the working group, because it's going to get to the back end of working group last call, it's going to come to pubreq for me, and the first question I'm going to have is: what's the process by which the crypto was verified? Did you go to the crypto panel, and are you trying to get resources from the crypto panel? And if you have that as a work product from CFRG, it probably will go faster.
K
Yeah, I don't want to belabor the point. I haven't looked at the draft in sufficient detail, but for me, just as a very crude metric, there are going to be a lot of things that look like crypto. If it comes with a draft that says IRTF, when we go through the IESG review it's going to be: good to go, it came from CFRG, it obviously got reviewed. It's gonna...
A
Okay, thank you, everybody, for the lively and efficient session, and I think we are all done for this edition of Privacy Preserving Measurement.