From YouTube: IETF112-PRIV-20211110-1600
Description
PRIV meeting session at IETF112
2021/11/10 1600
https://datatracker.ietf.org/meeting/112/proceedings/
C: Second from the left, Adam, will get you the repo from the datatracker. The second from the left for me is gallery view, underneath your name in the upper left-hand corner.
B: So it's my first time sharing with Meetecho and I'm not sure exactly what I'm supposed to be seeing. The icon is lit up, but I'm not actually seeing any slides myself.
A: And we can see the slides, so I think you're good to go. If you want to turn on video, you can.
D: Adam, you probably need to click on the view icon in the upper right.
B: This is basically saying: if you know of any patent rights associated with anything that you discuss, there are obligations that are described in BCP 79. Please go read it, and also behave yourselves. That's basically the general gist here.
B: So we have Peter Saint-Andre, who has volunteered to take notes. Initially he was going to take it for half of the session, but he just very kindly stepped up for the entire session, so we're good there. Traditionally we have a Jabber scribe for those folks who are not…
All
right
so,
overall
agenda,
we
have
we're
gonna
start
out
with
25
minutes
from
ecker
talking
about
ppm
and
how
it's
different
from
what
we've
done
before
we're
going
to
go
through
use
cases
from
a
handful
of
folks
and
then
talk
about
how
this
relates
to
some
work
going
on
in
the
cfrg
research
group.
B: There are potential solutions here, but we're not going to try to, like, improve those solutions or anything along those lines. We'll be taking questions for clarification after the first presentation, which explains the problem, and then we're going to have four presentations on associated use cases.
F: Fantastic. Actually, despite what it says on the agenda, I'm going to merge the discussion of the things we're trying to accomplish with the discussion of the technology, because I think it's easier to understand with the motivating cases, so I'll just fold them right in rather than having a separate use-cases presentation. And I'll say I sent a link—I've written up quite a bunch of material on this.
F: So if people want to read that, I sent a link in the Jabber chat. And obviously, if something is unclear, please stop me and, you know, ask. So, okay, let's move on. Let's just start. So here's just, like, the traditional "this is what I'm going to talk about," kind of in order. First, I want to talk about, like, the kinds of things one actually wants to measure in these typical settings.
F: Then I'll talk about something called anonymous measurement, which is, like, one approach you might take to it, and then I want to talk about the kind of multi-party cryptographic techniques that are the focus of this BoF, and then I'll sort of talk about the technical architecture.
F: …for the protocol that we sort of developed and are hoping to, you know, pull into this work. That's the whole set of slides I'm planning to do. So, there are a lot of situations where one would like to learn about people, right? You know, we have the census, or, you know, there's a lot of public research, and you learn things like demographics, and, you know, people's income, and, you know, maybe whether they have medical issues.
F: You know, companies want to do product development, so they see what features, you know, people use and don't use, how much they use them—are the products not working in some way? And then you also want to take, like, behavioral measurements. So, you know, say you want to discover, like, new websites that no one knows about, or what information people care about, so you can tune your product to be more like what people actually want.
F: So there was a good example of this the other day, when Brave did some posting about, like, their search engine and how they want to discover websites for the search engine. So all these problems involve, like, collecting data. The information, of course, is very useful, but it also, you know, can be very, very sensitive. You know, people often don't want to, and shouldn't have to, disclose, you know, their medical issues in order…
F: We want to learn how many people have some medical condition—you know, that's good.
F: We want to know, you know, where funding should be targeted, for instance, but we don't want to know which individuals have, you know, medical conditions—that's exactly the issue, right? The same thing is true for your income, sexual orientation, all those things. Someone is unmuted—maybe Peter, if you could? I know it's not me, because I'm not typing. And, you know, it turns out that not only are the things you naturally think of as sensitive sensitive, but even, like, much less sensitive data can be very revealing.
F: And of course this is, like, how ad targeting works, and it turns out that there's, like, a lot of evidence that you can put, like, less sensitive data together—this is often called high-dimensional, sparse data sets—and figure out things that people would be surprised by. And so I had this—this is, you know, from an article a few years ago about Target inferring, you know, that a girl was pregnant by, like, looking at her other purchasing behavior.
F: So there's been a lot of, you know, research on this, and one would hope to do better. And so, like, the historical way that, you know, one does these things is that you just gather all the data and then you promise not to disclose it, and, you know, this is, like, not working out super well—you know, data breaches…
F: There's the famous case of the census information being used for targeting Japanese Americans during World War II. And so, generally, you'd better have a system that does not involve just, like, trusting someone to, like, handle the data appropriately. So the good news is that actually, the data that you want to know is not necessarily sensitive. The data you want to know is usually what's called aggregates.
F: So say you wanted the distribution of people's income, maybe in a particular region, or maybe you want to look at the relationship between, like, income and height—which, by the way, there is one—or you want to know what the most popular websites are. But I don't care, like, what sites any individual goes to; what I care about is, like, what sites people in aggregate go to. And, in fact, it's often not useful to learn the websites that an individual goes to, because maybe a lot of them are, like, low-cardinality.
F: Often you need to, like, slice the data in multiple ways. So you say, look, I just want to look at a given region, or, when I compare two variables, I want to regress them against each other. So in these cases, you know, as I say, it's actually not useful to have the individual data beyond what lets you compute these aggregate metrics. And, of course, individual data is very harmful…
F: …if you misuse it. And from the perspective of a researcher, it's not just a matter of it being harmful, it's a matter of it being dangerous, because now you have to have all sorts of controls and procedures around handling the data that make it very hard to work with. And, of course, it also makes people unwilling to, you know, share the data with you if they think you're not going to handle it responsibly.
F: So, you know, while there are situations in which it is necessary to gather full data and then just say, look, you have to trust me—those are ones we should look at with suspicion rather than foster.
F: I want to tell you a little bit about the kinds of output measurements you might want. There are a number of common measurement tasks that we're hoping to achieve in this working group. So the first is what's often called simple aggregates. This is the stuff you would, like, learn in an intro stats class, you know: single-figure group statistics that capture some data—so mean, median, sum, histograms.
F: Then there are, sort of, like, you know, relationships between values: correlation coefficients, ordinary least squares, that kind of stuff. And, you know, people talk about, like, federated machine learning—that's kind of out of scope for this, but simple stuff is in scope. And then there's this specific problem that's often called heavy hitters, which is collecting common strings that a lot of people have.
F: That turns out to be a very useful technique for a number of settings. So, like I said, you can notice these are all, like, aggregate things that don't depend on anybody's individual data. So we'd just like to find some way to gather these aggregates without having to be infected by people's individual data, which is, like, basically toxic waste at some level.
F: So let me give you, like, one motivating use case. It's very useful to know what kind of sites users visit, because then, if you're, like, a web browser—and that's our use case—you'd like to know what kinds of sites people visit, so you can make your web browser work well on those sites. And so you can spend time saying: okay, people really watch a lot of videos, so making video work well is important.
F: Now, we have, like, some data on this, because we collect, like, telemetry data, but, like, for obvious reasons, it's unattractive to know, like, what topics any individual is interested in, because some of those topics, you know, implicate information they don't want us to have. And, you know, it's obviously problematic to know exactly what sites people go to, but even knowing the interests people have is problematic.
F: Basically, you can infer further things about them. And so there's been a lot of work on trying to figure out, like, exactly what information is sensitive and what information is not—it's extremely difficult, because in some cases, like, some interests are sensitive, and in some cases people think they're just not sensitive. And so there's been a lot of talk about this in the ads context, of, like, what topics are safe. But in an ideal world…
F: If you didn't care about privacy, what you'd want to do is, like, bucket the sites by topic and then count the number of, like, minutes spent on each topic. But, like I said, we can't just do that. So this is another problem statement: collect the distribution of time spent on each type of site without actually seeing the individual sites people are on.
F: So that's, like, one motivating use case. Another use case, again for a browser, is to see which websites are having problems of one kind or another. In some cases, these problems are, I guess, innocuous—so, like, web compatibility is a big problem. Some sites just don't render properly, and, like, Mozilla operates, like, a thing which you can press in the upper right-hand corner of your browser somewhere…
F: …that says, like, "this site is broken for me." But we depend pretty heavily on people volunteering information, because we obviously don't want to collect what URL everybody's going to. This is, like, a bigger problem for browsers with smaller market shares, because things will often work on, like, one engine and not another. So in many cases, we can detect breakage on the client.
F: We know that something's wrong—like, they try to use a property we know doesn't exist, or the user is hitting reload constantly, like rage-clicking—but we can't do anything about it, because, like, the browser knows, but it can't tell us. And so that's one example. Another example is that we know there's a lot of, like, what's called fingerprinting going on.
F: So a lot of web tracking happens with cookies, but there's a lot of it which happens without cookies. And so what you do is, you, like, have a bunch of JavaScript APIs, and you can measure how the browser behaves under the JavaScript APIs. So an example people often talk about is what's called canvas fingerprinting, where you, like, render some fonts and then you read back from the canvas, and that gives you information about, like, the GPU that machine uses.
F: So you can use this to build up, like, a single value which you can use to, like, follow the user around the internet. So this is, like, a big form of tracking that is not addressed by the kinds of, you know, third-party cookie blocking that browsers like Firefox and Chrome do. This is, again, often detectable on the client, because you're like: why is this person doing a lot of canvas…
F: …read-back when, like, they're not actually displaying anything meaningful? Or, like, they're actually loading some, like, script which you know is a tracking script—but you just can't, like, report it back, for exactly the same privacy reason.
F: So we're stuck in the situation where, if you had, like, sort of accurate information about which sites are having problems, you could just think about it—you could go to the site, and you could, like, download the script and find it yourself. But doing that would entail collection that is still very problematic for browsers. One thing I would say is, people often say, like: why don't you just run a scraper? And you certainly can do that sometimes.
F: But there are two problems. One is that building a scraper that collects that much information is very expensive, and the second is that it's very easy to detect when someone has a scraper, and if so—especially in these fingerprinting cases—they could just send you different data. There are a lot of fingerprinters. So, again, the problem statement is to collect data on the sites where the client is seeing some issue, but only to see the hot ones, and not to see individuals.
F: And I'm just going to preview the rest of the people here, who are talking about a number of use cases. I guess I hate this first one, so I should have, like, removed that—my slides got changed—but there's a bunch of talks, I think, on other use cases, involving some advertising work and also some work on COVID exposure-notification measurement. So you'll see those later, but, I mean, this should give you a sense of the breadth…
F: …of the kind of problem we have here, which is: there are all kinds of sensitive measurements you want to collect that, unfortunately, are difficult to collect with proper privacy without fancy technology of the kind we hope to develop here.
F: So it's important to recognize there actually are two kinds of privacy threats with these kinds of data collections. The first is when you collect sensitive data and it's directly tied to identifying information. So you say: look, I, like, you know, did a survey, and, like, I called people on the phone, and they told me, you know, this sensitive stuff, and now, like, I have the phone number, and, like, you know, they told me, like, you know, whether they have a particular medical condition.
F: The second is that you can use the non-sensitive information to figure out who the person is. And so there's this, like, famous observation due to Latanya Sweeney that, like, if you just have the five-digit ZIP code, gender, and date of birth, you can identify, like, most of the population of the United States. And so, generally—and there was also a bunch of work by Arvind Narayanan about this Netflix data set, where they were able to find, like, people's…
F: So the naive thing that people often talk about doing is basically what's called anonymized data collection, and this is absolutely a viable technique—there's a bunch of work going on about it in the OHAI working group—and the basic idea is to strip off all the identifying information. And so what the client does…
F: The client, like, encrypts the data to the collector, and then you have some proxy in the middle that removes all the metadata, like the IP address. And so this avoids the collector, like, seeing that meta-information, but it still gets the report; and because the report is encrypted, the proxy never sees the report. So these things are split up—and we've talked about the trust model for that, so I won't go into that in much detail. And so there are a number of ways to do this.
F: You can do this with connection-level proxies, like MASQUE or IPsec or Tor, or you can do an application proxy, like OHAI. So, like, this is a very good technique for a number of cases.
F: It's really good for, like, sort of boosting the privacy of semi-sensitive data—like, data you collect anyway, where you say, well, like, I wish it didn't have the IP address; you can get rid of it. And so it's very common now for, like, browsers to collect telemetry, and we have the IP address, which we just throw away, and we agree not to have that involved. There are also a bunch of cases where you want to collect individual values—these free-form data blobs that you want to really dig into.
F: It's also, like, the only way to do things that need an answer. So if you, like, actually want to have not just data collection, but you also want to send data back in response—you can't do that with these techniques; you can use OHAI for that—and the techniques we're talking about…
F: …don't do that. But there are a bunch of cases where it doesn't work well, and the most common cases are: it's not good for, like, these high-dimensionality data sets, where you need to, like, take multiple values and put them together. And the problem is—this goes back to what I was saying before.
F: If I know your income… so I can't do it—I can't use this if I want to do those kinds of correlations, because, unfortunately, the demographic information itself is identifying. And the same thing is true if you want to do, like, cross-tabs and subgroups. It's also not good for collecting this kind of heavy-hitter stuff, and the reason is because, even though it is anonymized, you get all the values—even the low-cardinality values—and a lot of the low-cardinality values are problematic. So they might be…
F: …for instance, you know, a Google search query or URL. So the good news is that, over the past 10 years, there's been cryptographic work on all these problems, and there's a bunch of fancy crypto, which I'm going to really only vaguely sketch.
F: But the basic idea is a multi-party computation. So what the client does is: the client wants to report some value, and it takes the value and splits it up into two shares with a secret-sharing technique of some kind. And the way the shares are constructed is that knowing only one share doesn't give you any information about the value—and that splitting is what's called information-theoretically secure.
F: Meaning it doesn't depend on computational assumptions. But when you put the shares together, they of course represent the entire value. So what you do is: each client sends, like, one share to one server and another share to another server, and then the servers take the shares themselves and they aggregate them. They compute the aggregate value—but, again, you're just working on the partial data, so you're not learning anything—and then you can take the aggregate…
F: So, say, for instance, you want the sum: you've got a partial sum, and then you take each partial sum and you put them together and you get the final output. And so the key point here is that you can do all this work, you know, without ever having anybody see anybody's individual value.
F: So let me just pause for a second, before I talk about the crypto, on the trust model, because this is, like, really important—I know this comes up a lot when we talk about these systems. So the client's requirement is that the two servers do not collude. If the servers collude, they can reassemble the individual values and it's game over, right. And so this has to be operated between independent parties, obviously. And the client has to trust exactly one of them—it's great if…
F: …the client trusts both, but as long as one of them doesn't cheat, it's fine. And you could do N servers, but two is the most common number. Obviously, the servers also have to, for various reasons, do a little bit of enforcement about, like, minimum batch sizes and query limits and stuff like that, to avoid some attacks that we'll talk about later. For the collector's requirement, both servers have to actually execute the protocol correctly, because either server can, like, distort the results if they don't.
F: But, again, this is only a correctness requirement—only one server is required to behave correctly from the client's privacy perspective. So I just want to recognize, right up front: it's, like, difficult to verify, from the client's perspective, that the servers aren't colluding. That collusion could happen through side channels; depending on the architecture, sometimes the side channels are small, sometimes they're big. You can do point-in-time audits to verify that someone is behaving correctly, but it's, like, not possible to prove that they're…
F: …not colluding. And so I want to highlight that. But I also want to say that, like, this is, like, a very, very common scenario on the internet, where people have data and you have to trust them to behave correctly. I mean, if you think about it—like, you know, your data is in Gmail.
F: Like, you know, Google has, like, your entire email record, right, and you're trusting them to behave correctly. And, you know, even if the software's running on your machine, like, you know, generally people think of that as behaving correctly, but, like, your ability to verify the software running on your device is extraordinarily limited. And so, like, while it would be great to have a situation in which you never had to trust anybody, that's simply not the situation…
F: …most of us find ourselves in. So what we're talking about here is trying to, like, alleviate the situation of trusting people, and make the number of people you have to trust smaller, or the number of people who would have to cheat larger. I guess we're not talking about eliminating trust entirely—that's not possible in the current state of our technological development—so we're trying to improve the situation, but we're not trying to, like, boil the ocean.
F: So I want to talk, like, very briefly about, like, one cryptographic protocol, to give you a sense of, like, the situation. This is sort of the one that started it all off. It's called Prio, and it's useful for computing, like, numeric aggregates, like sum and mean, that kind of thing. And, like, this is the one that's going to be most comprehensible—like, we're going to punt the crypto to CFRG, but this one is, like, understandable by normal humans.
F: So we assume each client has some value—like, a numeric value, call it x_i, right. And so what the client does is: the client splits up that value in the following way. It generates a random value—sorry about the fancy math, but basically a random value smaller than a prime—and then it basically sends server one, like, the value minus the random value modulo the prime, and it sends server two the random value.
F: And, like, you'll see it's quite easy to convince yourself that knowing only the random value is not enough, and knowing only the subtraction is not enough, but together they are sufficient. So now each server takes all the shares it gets from everybody, and they add them up, right. And, again, because these are, like, information-theoretically hiding, they're not learning anything. And then they basically exchange the sums—or, really, like, one sends to the other, probably.
F: And then, if you take the sums and you add them up, and you, like, do a bunch of, like, relatively simple, like, you know, middle-school math, you can convince yourself—because addition is commutative—that when you add the sums up, you actually get the sum of the initial values.
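The splitting-and-summing just described can be sketched in a few lines of Python. This is only a toy illustration of additive secret sharing, not the actual Prio wire protocol; the prime, function names, and example values are invented:

```python
import secrets

P = 2**61 - 1  # a public prime modulus (illustrative choice)

def split(x):
    """Split x into two additive shares mod P; either share alone is a
    uniformly random value and so reveals nothing about x."""
    r = secrets.randbelow(P)
    return (x - r) % P, r

def partial_sum(shares):
    """What each server does: sum the shares it holds, mod P."""
    total = 0
    for s in shares:
        total = (total + s) % P
    return total

heights = [170, 155, 182]                 # clients' secret values
pairs = [split(x) for x in heights]
s1 = partial_sum(p[0] for p in pairs)     # server 1 only ever sees its shares
s2 = partial_sum(p[1] for p in pairs)     # server 2 only ever sees its shares
combined = (s1 + s2) % P                  # adding the partial sums recovers the total
assert combined == sum(heights)
```

Because addition commutes with the mod-P splitting, combining the two partial sums yields the true sum even though neither server saw any individual value.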
F: So this, like, seems really boring and, like, kind of obvious, but it's actually fantastically powerful, and the reason is because there are a lot of things you can actually compute with just the sum. As long as you encode the data properly, you can compute all kinds of things as sums. So, like, arithmetic mean is obvious: that's sum divided by count. Product: you compute a product by doing a sum of the logs; geometric mean comes from product.
F: You can do variance and standard deviation by computing the sum of the values and the sum of the squares. And there's a bunch of fancy stuff for doing, like, AND and OR, and min and max, and even ordinary least squares. So the trick is just finding the right encoding, and there are, like, papers now about how to do all this stuff.
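The "right encoding" point can be made concrete without any crypto at all: if each client reports the pair (x, x²), the aggregator can recover both mean and variance from the two sums alone. A plain-Python sketch with made-up values, leaving out the secret-sharing step:

```python
# Each client would contribute the pair (x, x*x); the servers would
# aggregate each coordinate separately, so only the two totals are revealed.
values = [3.0, 5.0, 7.0, 9.0]
n = len(values)

sum_x = sum(values)                  # first aggregate: sum of values
sum_x2 = sum(x * x for x in values)  # second aggregate: sum of squares

mean = sum_x / n
variance = sum_x2 / n - mean ** 2    # E[X^2] - (E[X])^2
```

Nothing beyond the two sums (and the count) is needed, which is exactly why these statistics fit the sum-only model.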
F: Unless you do something about it, you just live with it—you have to live with the noisy data, right—because people don't report consistently. And then there's, like, completely ridiculous data, where I, like, say, like, I'm a kilometer tall, or, worse, I say, like, I'm negative one kilometer tall, right. And, ordinarily, what you do is you just, like, have some filter mechanism where you say, you know, well, I just reject anything that claims to be a kilometer tall, right. But with Prio, it is encrypted.
F: You can't do that, right. And so, instead, what you do is—and this is the fancy math part—each submission comes with a zero-knowledge proof of validity, and the proof says something like: this height report is, like, between, like, 100 and 200 centimeters, right. The servers work together to validate the proof, and you only aggregate the submissions that have valid proofs, right. So, like, you have to trust me…
F: …that this part works. But it's important to remember that believing this part works is not necessarily required for believing the privacy claim; the proofs being zero-knowledge is what ensures they don't leak the data. So, okay, going back to my use cases, right: say I want to collect these user interests, right.
F: So, basically, the way you collect user interests with something like Prio is that every user interest is a bucket, and you have, like—I don't know—100, 200, 500 buckets, right. And the client individually reports time spent in each bucket—you have to report the ones that are zero, too, by the way; otherwise you can just look at which buckets are reported. And then you use Prio to sum, and you end up with a bunch of sums, one for each bucket, and now you know exactly how much time was spent on each bucket.
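A minimal sketch of that bucket encoding, with invented topic names and minute counts, and the secret-sharing step omitted. Note how every report is a dense vector with explicit zeros, so the set of reported buckets itself leaks nothing:

```python
BUCKETS = ["news", "video", "shopping", "sports"]  # hypothetical topic list

def encode_report(minutes_by_topic):
    """Encode one client's report as a dense per-bucket vector,
    zeros included, in a fixed bucket order."""
    return [minutes_by_topic.get(b, 0) for b in BUCKETS]

reports = [
    encode_report({"video": 90, "news": 10}),
    encode_report({"video": 30, "sports": 5}),
]

# In the real system each vector would be secret-shared between the two
# servers; all anyone would learn is this per-bucket column of sums.
totals = [sum(col) for col in zip(*reports)]
```

The aggregate `totals` gives total minutes per topic across all clients, which is the distribution the browser vendor actually wants.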
F: But you don't learn anybody's individual time spent. And, as I noted—oh, you can also report t-squared, so you can get standard deviation as well. So this is, like, a pretty straightforward application of something like Prio, but there's, like, a whole pile of use cases that basically come down to this. Okay. So, even more fancy is a protocol called heavy hitters—well, actually, they didn't name it, so, like, we've been calling it "hits"—and the idea…
F: So the idea is that, like, each client submits a string, like a URL, and you want to output the most frequent strings. And I'm going to, like, very, very aggressively hand-wave this, but the basic idea is that the servers can jointly compute the number of strings with any given prefix p. And so what you do is, you start with, like—
F
Basically,
you
just
do
binary
search,
so
you
start
with
like
strings
start
with
zero,
and
you
say
how
many
strings
are
zero
or
start
with
one
and
then
okay,
fine,
now
I'll
keep
going
down
the
tree
until
I've
gotten
down
to
whatever
threshold
I
want
to,
and
so
you
can
just
just
find
all
the
values
and
effects
all
the
important
values
and
basically
log
n
tries-
and
I
will
not
even
remotely
attempt
to
explain
how
this
works.
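The tree walk itself can be illustrated with a plain, non-private counter standing in for the two-server prefix computation. The function names and bit strings below are invented; all the cryptography (how the servers count prefixes without seeing the strings) is hand-waved away, exactly as in the talk:

```python
def count_prefix(strings, prefix):
    """Stand-in for the private protocol: how many submitted bit-strings
    start with this prefix? (Here we can simply look at them.)"""
    return sum(s.startswith(prefix) for s in strings)

def heavy_hitters(strings, bitlen, threshold):
    """Walk the binary prefix tree level by level, keeping only prefixes
    shared by at least `threshold` strings."""
    candidates = [""]
    for _ in range(bitlen):
        candidates = [p + bit
                      for p in candidates for bit in "01"
                      if count_prefix(strings, p + bit) >= threshold]
    return candidates

subs = ["0110", "0110", "0111", "1000"]
popular = heavy_hitters(subs, 4, 2)  # only prefixes with >= 2 supporters survive
```

Because unpopular prefixes are pruned at every level, the search visits only the branches that could still contain a heavy hitter, which is where the log-n behavior comes from.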
F: So, again, this is, like, an example of how you can use this for, like, a real-world application that we actually really care about. One thing I do want to, like, flag is that there is an issue, of course, called subset query, which is that submissions…
F: I was just giving you examples where submissions have, like, one piece of information, but you can also tag the submissions with demographic data—like the birth date and ZIP code stuff we were just talking about, right—and those get passed all the way to the aggregators, right. And this is, like, notionally safe, because you say the non-sensitive information is safe—you only collect non-sensitive information in the clear, and the rest is encrypted. But then you can say: okay, compute the aggregate over these subsets.
F: So that's, like, very powerful—that's one reason why this is, like, a powerful technique in ways that, like, OHAI is not. But, of course, it means that repeated queries can be used to deduce values, by, like, querying for a subset that includes them and then one that excludes them. There are defenses against this: having minimum batch sizes, anti-replay, randomization for differential privacy. This is a piece of work…
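The include/exclude attack is easy to see with a toy aggregator (tags and values below are invented, and the secret values are shown in the clear only so the attack is visible); this differencing is exactly what minimum batch sizes are meant to prevent:

```python
# (clear-text demographic tag, secret value) — only one person has tag "1985-us"
reports = [("1985-us", 40), ("1990-de", 70), ("1990-de", 55)]

def query(tags):
    """Aggregate the secret values over reports whose tag is in `tags`.
    The querier only ever sees the sum, never the individual values."""
    return sum(v for t, v in reports if t in tags)

# Differencing attack: two perfectly legitimate aggregate queries
# isolate the lone "1985-us" person's supposedly private value.
everyone = query({"1985-us", "1990-de"})
all_but_target = query({"1990-de"})
leaked = everyone - all_but_target
```

A minimum batch size would reject the second query, since the difference between the two query sets covers only a single contributor.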
F
The
working
group's
gonna
have
to
work
on
that's
part
of
why
this
is
completely
done,
but
I
think
we
will
understand
how
to
do
pieces
of
this.
For
for
some
some
measurements
and
then
other
ones
that
you
need
some
more.
F: So, what is the state of play here? A number of us—some of the people you'll see presenting—have basically developed a generic protocol that is designed for doing privacy-preserving measurement. What I mean by "generic" is that it's a framework protocol into which you can plug, you know, individual cryptographic technologies. So it's compatible with these things called verifiable distributed aggregation functions (VDAFs), which we'll be talking about, but initially it's tuned to work with Prio and heavy hitters.
F
It's
built
on
top
of
https.
It's
going
to
smell
a
lot
like
you
know
any
rest
kind
of
protocol
like
acme
or
whatever
you've
seen
before,
and
so
it's
easy
to
implement
with
physical
services,
infrastructure
and
and
other
people
working
on
this,
like
you
know,
having
work
from
regular
infrastructure
to
design
and
work.
Well
with
this,
so
the
so.
F
Let
me
just
give
you
like
system
architecture
picture,
that's
what
it
looks
like
you
know,
there's
a
bunch
of
clients
they
send
their
shares
and
there's
like
a
leader
and
and
I'm
the
leader
like
basically
takes
the
shares
and
parcels
about
the
helpers
and
orchestrates
the
whole
computation,
and
then
the
data
is
like
sent
to
a
collector
and
so
the
clutch,
because
I
look
give
me
like
the
output
for
this,
this
subset
of
the
system,
and
then
it
splits
the
results
out
right,
and
so
one
way
to
think
about
this.
F
I
think
there's
a
number
of
different
playing
modes
it's
compatible
with
one
of
which
is
that
you
know
the
collector
and
the
leader
are
the
same
person
and
they're
trying
to
do
it.
The
data
collection
and
they
outsource
the
helper
job
to
one
other
person
so
that
they
can
make
guarantees
about
the
privacy
of
the
system.
Another
possibility
is
the
whole
is
a
whole.
Like
leader,
collector
helper
box
is
like
a
service.
That's
provided
people.
F
The
trust
model
here
you'd,
be
assuming
is
that
the
clients
have
to
know
what
the
helpers
leader
are,
and
so
the
clients
know
who
the
who,
who
the
data
is
being
encrypted
for,
and
so
they
can
make
their
own
assessment
of
whether
or
not
they
trust.
One
of
those
people,
though,
of
course
in
a
real
world
scenario.
F
So
you
know
people
are
actually
going
to
ask
like
what
like
the
situation.
Oh
hi
is
because
oh
hi,
as
I
mentioned,
is
useful
for
many
of
these
kinds
of
settings.
These
are
complements
and
not
substitutes.
So
you
know
good
cases
for
ohi
are
like,
as
I
say,
sort
of
standing
sensitive
data,
this
kind
of
rich
freeform
data
that,
like
you,
couldn't
really
you
know,
aggregate
this
way.
Anything.
F
Of
course
it
needs
any
kind
of
responsible
right
because,
like
none
of
this
technology
gives
you
a
response
that
just
measures,
good
cases
for
ppm
are
like
really
sensitive
data,
hopefully
simpler
data
because,
like
as
you
sort
of
find
any
idea,
it's
not
like
really
easy
to
like
do
really
complicated
stuff
and
settings
where
you
just
kind
of
do
drill
down
you
do
regression
or
you
do
do
subset
correspondence
like
kinds
of
things
so
like
these
are
like
you
know,
these
are
two
great
hits
that
tastes
great
together
right.
F
You can use OHAI to talk to a PPM server, and that boosts the privacy of the system. So you say, okay, I do want to collect this data, but it would otherwise arrive with an IP address, and you can remove that as well. In fact, I think you'll see some of that in the later talks. You can also use a front-end proxy server to do a bunch of, you know, misuse
F
detection, of spamming attacks and stuff like that. So I think these are complementary techniques, not competitive techniques, which is why you see some of the same people working on them. I don't know why... oh yeah, right, I was wondering why I had two more slides. So I'm now done; I think I hit my target time quite well, and I have a little time for questions, which I'd be happy to take.
F
Oh, I did want to say one more thing while I'm waiting to see if anybody will say anything, which is: our assumption is that the VDAF functions we standardize are defined in CFRG, not in the IETF. So the idea is that we'll build the framework here and we'll defer the work of defining the VDAFs to CFRG, and that work is already going on, or is being presented, there.
F
E
Thanks, good presentation; I enjoyed it, you laid it out really well. The one thing that I'd like to hear more about is sort of the deployment scenario. For example, on slide 14 (don't go back), you specifically said the client wants to report some value, right. So would your expectation be that application authors and servers would make use of this the same way that OHAI is considering being deployed by various organizations, or the way that DoH is being used?
E
F
I think so. I think the most likely settings for this initially will be the kinds of measurements that people are already taking via the software they disseminate, so things like browser telemetry. The case of the system Tim will talk about next involves measurement of COVID exposures, and those things are all being done, you know, automatically,
F
potentially by asking, or automatically by the software. I think there's some possibility that in the future you'd see this used for surveying, for direct surveying, where you say, okay, are you willing to participate in surveys, and then we'll sign up the client and you can do it. But, I mean, these are pull
F
techniques, not push techniques, fundamentally. And obviously the cryptography is complicated enough that the user has to involve some piece of software to do it.
F
I think so. So certainly the use case of measuring misbehaving websites basically cannot be handled with OHAI, because you would learn, and you don't want to learn, any of the low-cardinality sites.
F
Those are very dangerous, right, because if you collect every URL somebody goes to when something misbehaves, then there's a real probability that you learn a bunch of, say, Google Docs capability URLs, or you learn some document title like "my plan to buy company A". You know, so yeah.
F
So basically anything that involves that kind of string data collection is very problematic without aggregation. Also, even the sort of bucketized "give me counters" approach, say 80 counters, is really problematic with OHAI, because you basically have to take every single value and report it separately; otherwise you end up in a situation where you have these high-dimensional data sets and can use them to do reconstruction.
H
Just by way of clarity, since OHAI has come up quite a lot in the chat during this presentation: when you said complementary, there's no dependency, yeah?
B
Okay, chairs, I'll hang around, but I think then we're done, right? Yes, thank you very much. So, just for avoidance of confusion, the Mozilla use cases were folded into Ekr's, correct?
F
B
Right, so we're going to roll into Tim now, if you go ahead and grab the slides. Excellent... oh, the wrong slides, oh.
I
All right, thank you. Okay, so let's get started. My name is Tim, and I'm an engineer at the Internet Security Research Group; we're the non-profit that operates the Let's Encrypt certificate authority, not to be confused with the IRSG, incidentally. The dilemma that Mr. Rescorla just introduced, in which the essential function of gathering telemetry from the field introduces significant privacy risks for users, is of great interest to the ISRG,
I
given our mission of reducing barriers to secure and private communications on the internet. Next slide, please. So Ekr covered the ways in which telemetry is a privacy risk for users, and the implied benefits to users of these new technologies. But I think it's worth noting that many data collectors also want to do the right thing and respect the privacy of their users, besides that being a decent thing to do.
I
The large amount of personally identifying information stored by conventional telemetry systems is a significant liability for the data collector. There are new privacy regulations emerging in various jurisdictions all the time, which require expensive and complicated controls around user data, and all that PII makes for a very, very tempting target for attackers.
I
So ISRG is interested in providing private measurement aggregation as a public service to the internet, for a lot of the same reasons that we built Let's Encrypt: we want to make it easy to do the right thing. Next slide, please.
I
So we envision running a standards-compliant aggregator as a service, with the same focus on automation and ease of integration that drives Let's Encrypt. We expect that some customers will want to run their own aggregator and have it work with ours, but others will want to avoid running any servers at all and will instead choose two existing aggregators, say, one run by ISRG and one run by some other organization or company that chooses to participate.
I
So, in support of that, we are hoping to provide an open-source implementation of a PPM aggregator, with the aim of making it easy for a data collector to interoperate with ISRG's aggregator or anybody else's. So hopefully it ends up being a matter of grabbing a container image from some public registry.
I
You deploy it into your existing Kubernetes cluster, or whatever you have, and you can start gathering private telemetry cheaply and easily, somewhat akin to how the EFF's Certbot can be easily deployed alongside your existing web server to have it manage Let's Encrypt TLS certificates.
I
We are also aiming to provide open-source client libraries targeting a variety of languages and frameworks, chosen to facilitate adoption for the most likely interested parties. So you can imagine a Swift SDK for iOS apps, JavaScript for a single-page application on the web, and so on.
I
So, right, with this goal in mind, an open standard through the IETF is acutely valuable, because a proprietary single-vendor solution wouldn't be terribly useful, since the privacy guarantees are contingent upon independent and non-colluding aggregators. Next slide, please. Thank you. So we talked a lot about the benefits of these new technologies, but there are some trade-offs, some drawbacks, to these systems. For one thing, there are more moving parts,
I
so it's more likely to fail, somewhat necessarily. Second, the verification of the proofs introduced earlier does introduce some computational and network overhead; in particular, depending on which protocol, which VDAF, is in use, it will potentially introduce multiple rounds of communication between the aggregating servers.
I
Finally, metrics gathered under these schemes are necessarily less flexible than with conventional telemetry systems. You can't make arbitrary post-hoc queries; sorry, you can't make arbitrary queries post hoc against your corpus of data. You have to know up front, before you begin collecting any data, what the aggregations are that you're interested in computing. This has to do with the construction of the proofs, as well as with enforcing some of the privacy guarantees of the system.
I
Fortunately, in spite of all these challenges, we do have some evidence that this stuff actually works, and at scale. In December of 2020, a collaboration between Apple, Google, we at the ISRG, the Linux Foundation Public Health Initiative, the MITRE Corporation, and the National Cancer Institute at the National Institutes of Health in the United States launched the Exposure Notifications Private Analytics system, and this is the back end to Apple and Google's Exposure Notifications Express, which is a system, of course, for COVID exposure notifications.
I
So EN, Exposure Notifications, of course, is the system by which mobile devices can sort of anonymously alert each other to a COVID exposure. ENPA allows anonymously and privately backhauling that data to your regional public health authority, so that they can get information on how many people are getting the notifications, how many people have it on their mobile devices, how many people are interacting with them, and all sorts of interesting metrics about the spread of COVID itself, as well as the effectiveness of the EN system.
I
So this is currently deployed in 13 U.S. states and the District of Columbia, and at the moment it's gathering 2.1 million measurements per hour. We also heard last night one more interesting number: sometime over the night, we passed 12 billion individual metrics that have been aggregated since the system launched. We're also about to deploy this internationally, beyond the United States: we're hoping to soon turn this on in four states in Mexico. Okay, I'm already well over time, so I will cede the floor.
B
J
Yeah, it takes some time. Where are we?
J
Okay. So I'm going to talk very briefly about some of the use cases in advertising, specifically the conversion measurement one; I think Charlie's going to follow up with some more details on this one. So conversion measurement is something that happens on the web quite a bit: when someone shows an advertisement, it's kind of nice to know if that advertisement is having the intended effect.
J
So the way this works today is pretty simple: we assign a user an identifier, so everyone gets their own unique identifier, and every time they visit a website and an advertisement is shown, or they do something, we just create a little log record that records all the details from the context, the user identifier, timestamps, all those sorts of things.
J
And then you look at the log, and you can answer all sorts of questions about what people have done. It's really great for getting the information that you need. It's all quite precise, and, leaving aside all of the complications of anti-fraud and all those sorts of other things, you can answer all the questions that you have fairly precisely.
J
However, that doesn't really respect people's privacy, and so the idea behind some of the efforts that we're talking about here is to produce aggregate statistics about conversions without relying on user-specific logs.
J
The current status of this work is that there are lots and lots of requirements coming through. This is obviously a lot more complicated when it comes into practice; there are lots of ideas, people have all sorts of wonderful proposals and some competing requirements, but a lot of the really promising ideas include something like what Ekr described earlier.
J
They buy an item, and maybe that's connected to that particular ad; those are two independent events. You can imagine those events going into some sort of opaque box that's operated by, say, their browser or something like that, and you add up some numbers: maybe you add up a one if there was an ad shown and the person bought the thing, and if there was no ad shown, you add up a zero. You get those reports from thousands and thousands of people and feed them through a system like Prio or one of those other things, and you get back a count of the number of people who saw the ad and bought the product, and that allows you to draw some conclusions about how your business has been operating and the advertising campaigns you're running.
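The add-up-ones-and-zeros scheme just described can be sketched in a few lines. This is an illustration only, with invented names; real systems like Prio additionally prove that each report is a valid 0 or 1, which this toy omits. Each user's report is split into two random-looking shares, one per server, and the servers only ever combine sums.

```python
# Toy sketch of split 0/1 conversion reports; neither server sees raw bits.
import secrets

MOD = 2**32  # share arithmetic is done modulo this value

def make_report(converted: bool) -> tuple[int, int]:
    """Split a 0/1 report into one share per server."""
    a = secrets.randbelow(MOD)
    b = ((1 if converted else 0) - a) % MOD
    return a, b

reports = [make_report(c) for c in [True, False, True, True, False]]
sum_a = sum(a for a, _ in reports) % MOD  # server A's aggregate share
sum_b = sum(b for _, b in reports) % MOD  # server B's aggregate share
conversion_count = (sum_a + sum_b) % MOD  # 3
```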
J
Obviously this gets a lot more complicated, but we'll let Charlie explain some of that.
B
K
Let's see if this works. Great, you can see it? Yep, okay, great. Hello, everyone. My name is Charlie Harrison; I'm a software engineer working at Google, looking at PPM-like solutions for doing ads measurement on the web, to satisfy similar use cases to the ones Martin was just talking about. So I'll kind of breeze through some of the background,
K
just because there's a lot of overlap with Martin's slides. But I think the gist here is that here at Google, we think third-party cookies are not great for users' privacy, but right now they're kind of critical infrastructure that powers online ads, for exactly the reason Martin mentioned. The problem that we're trying to grapple with is whether we can build something like a third-party-cookie alternative that gives users good privacy while still supporting this critical infrastructure to some extent. And I think the key insight here, at least as it relates to PPM,
K
is that many ads use cases are actually totally fine with aggregate data; they don't actually need to track you around the web and learn all the data exactly. We could be fine with aggregate data in many cases. So I want to go over, in a little bit more detail, how attribution measurement, which is also called conversion measurement, happens today with cookies, essentially, and I'm sure this is going to be familiar for many people.
K
So you'll read this cookie when an ad is placed, and you'll also read this cookie when there's a conversion, like when you buy something later on down the road after you've seen an ad for it. Right now, this cookie ID is used as the join key to join these two cross-site events, right. But they join arbitrary events, so all of your browsing can be linked up to this one cookie, in theory.
K
How could it be improved? We could internally join this data in the browser. So when you see an ad, we could register something in, say, custom new browser storage, and when you buy something that was pointed to by that ad, that would join up with that event in the internal browser storage. And you could have a communication path from the browser to something like PPM, where, you know, we have this data and we want to compute something like a histogram.
K
Maybe we want to learn something like: what are the counts of conversions, like purchases, per ad campaign? So the x-axis is ad campaign, the y-axis is the number of conversions. We can use PPM to generate a data-share split that encodes that histogram contribution, send that up to PPM, and the ad tech could learn aggregate statistics, just a histogram of the counts; and this doesn't reveal any user data directly.
K
It only reveals aggregates. So there are a whole bunch of cool use cases to think about. I know there's a lot of talk in the chat about differential privacy, and that's some formal privacy that we could add within the PPM system to ensure that the output of PPM is private.
K
So that's something we're looking into. There's a lot of interesting research related to the heavy-hitter stuff about how we report these histograms when they're really, really, really big, like when you are running millions of campaigns or you want to do all sorts of different crosses. We're really interested in systems that can help train machine learning models; in some cases there are results that show that even just aggregate histograms could be used to train, say, logistic models.
K
But we're looking into more sophisticated mechanisms. And rather than conversion measurement, we could look at reach measurement, which is asking how many distinct users saw my ad across many different websites; this is kind of like a remove-duplicates operation. And, you know, we're interested in exploring:
K
we have this conversion measurement thing, but maybe there's something more generic that we could use as a more basic browser primitive. And I think that's my time, but yeah, happy to answer questions after everyone else is done. Thanks.
B
M
L
All right, so yeah, this is pretty different. Ekr talked about protocol, we had a couple of talks about use cases, and I'm here to talk about kind of the substrate. The label we are using for this is Verifiable Distributed Aggregation Functions; thanks to Chris Patton for that name. The context here is: like many things in the IETF, we need some complicated crypto for this, and so we're doing some parallel work in CFRG, and it goes alongside
L
the protocol work in the IETF, to specify the complicated crypto bits in a place where we can get cryptographers' eyes on them, as opposed to just protocol nerds'. Just highlighting some parallels here: TLS depends on cryptographic primitives that CFRG defines; MLS relies on HPKE, which was done in CFRG; and so for this PPM work, we're defining this VDF, or VDAF, depending on how you want to pronounce it, as the kind of cryptographic abstraction that PPM relies on, in CFRG.
L
Now, if you look at the draft, it's kind of in two parts. We define an API; that is the abstraction PPM is supposed to rely on. So the idea is that we have multiple instantiations that do various flavors of this private aggregated measurement dance, which all behave in a close-enough way that we can define a common API over them and build the protocol around that API. And this provides a way,
L
you know, that we can build a protocol without having to care about the details of the cryptography, and it provides a target that cryptographers can look at when they design new schemes; if those plug into this API, then they can presumably be used inside of PPM more quickly and more easily than if they'd been designed completely from scratch, not reusing that construct.
L
You know that the computation of that aggregate is distributed over those aggregators, and the privacy properties of the individual measurements are assured by non-collusion among those aggregators. And finally, it's verifiable in the sense that the aggregators can check that the inputs meet some properties. As Ekr pointed out, there's risk in this sort of distributed scenario that the measurement results reported in by the measurement points could be corrupt, could be invalid, and that could flow through and lead to a garbled aggregate.
L
So we will have some checks in here that allow the aggregators to verify that the measurements they're getting meet some definition of correctness. Now, exactly which definitions you can apply varies a little bit by instantiation.
L
So, looking at how this kind of plays out: this is the classic measurement scenario, this is how a lot of telemetry works today, where you have a client out there (actually many clients, but I'm only going to portray one on this slide) collecting measurements of something, you know, where people clicked in an application or whatnot, and reporting that back to a collector. And, as you know, you've seen various takes on this in the last few talks.
L
But the idea here is that we're introducing this aggregator tier in between the client and the collector. The idea of these aggregators is that the client shards each measurement out to the individual aggregators, such that you can only make sense of that measurement if you have all the shares; that's how we get that non-collusion guarantee. The aggregators then go through this process of what I've called "prepare"; the term we're using in the draft is preparing the measurements.
L
So that includes verifying that the measurements are acceptable, that they're correct, and, you know, transforming them into an aggregatable encoding, where that's necessary in some realizations. So once you've taken the individual measurements and prepared them, you aggregate them; this is where you do that step that Ekr went over, where you add up the shares into a share of the sum. And then, finally, the aggregators send their aggregate shares over to the collector, who unshards them to get the final result.
L
So this is the data-flow view of the same thing. I think the interesting thing here, from a kind of protocol point of view, is that you'll notice there's a back-and-forth at the preparation stage. There is a need for some chattiness, some interaction between the aggregators in that verification process, to enable a distributed verification of the correctness of the input shares. But otherwise, hopefully, this explains the overall process.
L
Also, the little gray squares parallel to the output shares are meant to indicate other output shares, the idea being that an aggregate share represents the aggregation of the output shares over a batch of measurements. And, as was pointed out before, it's up to the aggregators to do things like enforce minimum batch sizes and define what a batch is.
L
So that's kind of the data flow. This is how we've described that API in the draft, again just using notation here. So, basically, what the VDAF draft defines is this API, and how a couple of instantiations fulfill that API; and then PPM's job is to do the plumbing, to get the inputs and outputs of these local functions to the right places at the right times, so that, at the end of the day, you can unshard into a measurement that's meaningful to the collector.
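As a rough rendering of that split (the real draft's type names, function signatures, and preparation rounds differ; this toy collapses preparation to a no-op and is purely illustrative), the VDAF layer defines local functions, and PPM's plumbing moves their inputs and outputs between parties:

```python
# Illustrative shape of a VDAF-like API; not the draft's actual interface.
from dataclasses import dataclass

@dataclass
class SumVdaf:
    """Toy VDAF instance: sum of integer measurements, two aggregators."""
    modulus: int = 2**61 - 1

    def measurement_to_input_shares(self, measurement, randomness):
        share0 = randomness % self.modulus
        share1 = (measurement - share0) % self.modulus
        return [share0, share1]

    def prepare(self, input_share):
        # Real VDAFs run an interactive verification here; the toy
        # version accepts the share unchanged as the output share.
        return input_share

    def aggregate(self, output_shares):
        return sum(output_shares) % self.modulus

    def unshard(self, aggregate_shares):
        return sum(aggregate_shares) % self.modulus

vdaf = SumVdaf()
batches = [vdaf.measurement_to_input_shares(m, r)
           for m, r in [(10, 123456), (20, 654321)]]
agg = [vdaf.aggregate([vdaf.prepare(b[i]) for b in batches])
       for i in range(2)]
result = vdaf.unshard(agg)  # 30
```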
L
In the draft, we define a couple of constructions. I think Ekr probably described these in a little bit more detail. Prio is the one that I think ENPA is based on.
L
The version in the draft has, I think, a few more changes on top of what's in ENPA, but that's one where we have a fair bit of deployment experience. And then the hits protocol is the one that lets you get a distribution on strings, and there's been some discussion in chat about whether, possibly, we could fit something like STAR in here. That's, I mean, it's kind of...
L
There are a few implementations so far. We have the one that supports ENPA, with Prio v2; we've got some early implementations of distributed point functions, which are what support the inner loop of hits; and there's some work on various Prio instantiations.
L
If you look at the papers in which they're published, it might look intimidating in terms of the amount of data that's getting sent around and the amount of computation being done, but with these implementations we have some early data about the expense, in terms of computation and communications overhead, of doing these private schemes; and, you know, your mileage may vary, but it seems tolerable, to first order, let's say. So yeah, I think that's all I had, plus a couple of references.
B
Thanks much; that puts us right on schedule again, much appreciated. So, any questions for Richard about any of that?
G
Eric here. Hey, so in one of your slides you had the aggregators with lines between them, and in the PPM spec I mostly see sort of the helpers not connected.
G
Are we thinking about a protocol where we might imagine enabling a topology that could have helpers communicating with each other, like some MPC protocols allow for? Yeah, this slide.
L
Yeah, I admit that I'm not an expert on what the PPM protocol is doing right now; I've mainly been stuck down at this layer. But my understanding is that you can emulate it: this kind of looks like a broadcast channel, and if you have a leader that is connected to multiple helpers, you basically have a star topology over which you can emulate a broadcast channel. So if, uh...
L
To add to it, for us, for instance, in the API we've sketched out, we structure the preparation process in rounds, where the input to each round is the output of the previous round from all aggregators.
L
So I think, in the PPM communications model, you can envision all helpers submitting their outputs from the previous round, and then the leader distributing those outputs out to all of the helpers for the next round. Kind of, yeah.
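The round structure just described can be sketched as a star topology emulating a broadcast channel. Everything here is invented for illustration (the actual PPM messages, roles, and round contents are defined by the draft): the leader gathers each helper's output for a round and redistributes the full set as the input to the next round.

```python
# Toy sketch of leader-mediated broadcast rounds among helpers.
def run_rounds(helpers, num_rounds, initial_inputs):
    """helpers: callables taking (round_index, broadcast) -> message."""
    broadcast = initial_inputs
    for rnd in range(num_rounds):
        # Leader collects each helper's message for this round...
        messages = [h(rnd, broadcast) for h in helpers]
        # ...and redistributes the full set to everyone for the next round.
        broadcast = messages
    return broadcast

# Toy helpers that just tag their id and round onto what they saw.
helpers = [lambda rnd, b, i=i: (i, rnd, tuple(b)) for i in range(3)]
final = run_rounds(helpers, 2, ["init"])
```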
F
G
F
We have a... this is a straw-man thing; it's not final at all, and I imagine it will change radically.
B
Okay, seeing no one else in the queue, thank you again very much, Richard. We're going to go ahead and move on to... well, first we're going to bring the slides up here.
B
So we're going to move on to a call for expressions of interest from folks, just to make sure that there is enough critical mass behind this aside from the people who have presented here. We'd like to ask who in attendance is interested in working on this technology. We probably don't need people to speak to this, necessarily, although we're very happy to hear what you have to say on the topic; I'm also going to throw up a quick show of hands.
O
Hello. All right, I did the show of hands; I just wanted to speak up a little bit to mention, from my perspective: we definitely are already using stuff like this, and seeing something like this be standardized and taken on by the working group would be a positive thing. So we didn't do any of the presentations here, but we're definitely interested.
F
What we did get is two things: one, I think everybody who has spoken is interested in working on this, just so we're all clear. And second, if you look at the chat, there are a bunch of people in the Jabber chat saying likewise, including Facebook and others.
B
P
In other words, what systematic approach can you adopt for the design and innovation process that ensures that ethical factors and values are considered at each relevant step? I think that would help clarify a lot of the comments in the chat along the lines of "yes, but this happens already" or "well, this only works if you can assume there's no collusion", and so on, because what it does is flush those assumptions out and let...
B
Okay, thanks. Jari?
Q
Yeah, I just wanted to briefly mention that I do think this is exciting technology and we should work on it. I can see other types of applications: we mostly talked about the application-type, browser-type things, but in many types of networks we could probably also use this technology to do a better job at collecting the information that we collect for various debugging and other reasons.
Q
The only piece that I was a little bit concerned about was this advertising piece, and, just as a personal opinion, I had some reservations about browsers working with advertisers, even though I do recognize that we need to do something better than we do today. But, you know, maybe there are also other paths for the advertisement problems, trying to prevent information flow rather than collaboration; but I don't work in the advertisement industry, so anyway, I like this, and we should go ahead.
Q
B
There may be an incorrect input device selected, or something along those lines.
B
Okay, I think we probably need to let you try to work that out and get back in the queue. Thanks.
R
...member of that set may be different from the best answer for that member of the set. In particular, the set of problems you're dealing with, with something like identifying bad URLs, etc., doesn't really involve tying two actions together in the same way that an ad conversion does.
R
It may be that there are simpler mechanisms, or only a partial use of this system, that would still satisfy those, where invoking the full thing might be more difficult and not necessarily buy you everything that it would need to buy you in the conversion case. So I think, as a set of problems, it's interesting to work on, but I would prefer that we take it as a set when we take the work in.
B
Thank you, Ted. Phillip?
S
Yeah, I like this stuff. I don't think that we understand the problem, but I want to do it anyway.
S
I am a bit worried that we seem to have gone straight for the real high-falutin cryptography, rather than measures like "let's just encrypt log files as they're produced", rather than having them sit in plain text ready to be stolen. So there's a bunch of real bread-and-butter issues that I think we need to address as the IETF before we do the stuff that really excites us. The other thing that I'd point to is that this has been presented in terms of a network realization.
S
I think the more immediate and more useful use for this would be to assist work like the stuff that my wife does, analyzing workers' compensation claims, where you've got an enormous amount of really privacy-sensitive data that is aggregated together and then analyzed. That's not a network application, but there is that transition from "I have this large amount of data,
S
and I would like to pre-process it into a form from which it can't then leak in a dangerous way, and analyze it in that form." I think that technology would be very useful, disconnected from the whole network case, and it might make for some starting problems that are rather simpler than trying to think about how we secure the web.
B
E
Thanks. So, you know, in general I think this is an interesting problem to look at, and it's certainly something that we could consider doing. As Ted said, it meets all of the BoF criteria, except for one oddity: this solution is really designed to help us protect ourselves, or protect end users, from good people, right? Because it has no ability to prevent all of the evil sites and the evil
A
E
mechanisms and evil ad trackers that don't want to use aggregation, because they're deliberately trying to track individual people. And, you know, I think it would be sort of fair to say that this cool new technology really helps protect you from companies that may already be behaving reasonably in the first place. That doesn't mean that it's not worth doing, but there's a security considerations section I'd be considering writing.
N
Yeah, sorry about that. I just wanted to echo something that Richard was saying in the chat, and perhaps ask us to take a step back from the advertising use cases and recognize that this pattern that we're seeing across the industry, across different problem domains, of collecting these aggregates, be it in telemetry for browsers, or exposure notification for COVID, or even bandwidth measurements in the case of Tor, is very, very common. And I think it's certainly true that we have confidence in the general shape of the problem.
N
That
is
to
say,
we
we
want
to.
We
have.
We
need
to
collect
these
aggregate
statistics
to
answer
certain
questions
and
it's
whether
or
not
you
know
this
is
harmful
helpful
for
the
purposes
of
web
advertisements
and
conversion
measurements
in
a
privacy.
Provisional
way,
I
think,
is
a
good
question
to
ask,
but
that
does
not.
I
don't
think
that
takes
away
from
the
the
very
valid
use
cases
that
were
also
presented
here.
N
That
could
certainly
be
improved
by
a
more
privacy
preserving
protocol.
So
I
I'm
very
strongly
supportive
of
this
work.
It
cuts
across
so
many
different
problem,
domains
and
use
cases,
and
it's
kind
of
inevitable
that
you
know
it'd
be
standardized
somewhere,
given
how
important
it
is.
So
that's
what
I
want
to
say.
Thank
you.
B
Thanks. We've closed the queue in the interest of making certain we have enough time to discuss the proposed charter, so let's go ahead and move on. Ekr?
F
Yeah, so I would make a number of points. First, on Ted's point: yes, this is a toolbox that has a variety of tools. One way to think about this is that the PPM protocol is the toolbox and the VDAFs are the tools. The idea is not to define any particular measurement; it is to define a set of mechanisms which can be used for taking different kinds of measurements, and then individual applications layer on top of those tools.
F
That will be true both in terms of which VDAFs exist and in terms of how you build applications on top of a VDAF, because one thing about this is that you've got a mechanism that basically says "I can collect counts", but what you do with those counts is an important question, and one that we don't pretend to define. I do see some over-indexing on the ad use case here as well.
F
There are a lot of these use cases that are not ads, and in particular there's nothing ad-specific to any of this; the ad-specific work that exists is somewhere else, in PATCG. This is about building a set of tools for measurement, and ads are really one application of measurement. With that said, I do think there's this question about whether this is just for letting good people do good things.
F
I
think
there's
two
points
about
that.
One
is
that
there
are
a
lot
of
applications
where
I
as
wes
says:
good
people
want
to
take
measurements
and
want
to
be
able
to
like
bind
themselves.
So
they
can't
cheat
you,
and
so
it
can't
be
collected
the
day
they
don't
want
to
have
so
that's
application
case.
One.
I
think
application
case
two
is
that
there's
been
a
lot
of
like
attempts
to
look
at
what
it
would
take
to,
like.
F
You
know,
reproduce
partially
out
ecosystem
with
better
privacy
because,
like
I
think,
we'll
agree
as
they're
not
going
to
go
anytime
soon
and
one
of
the
pushbacks
that
one
gets
once
once
it's
doing,
that
is,
it
will
have
a
negative
impact
on
the
ecosystem
as
a
whole.
And,
like
you
know,
I'm
not.
You
know,
I
think
you
know.
Firefox
has
already
played
a
bunch
of
anti-tracking
technologies
so,
like
I'm
not
like
all
in
on
that,
but
I
think
there's
a
real
concern.
F
People
have,
and
so
the
idea
is
not
that
when
we
simply
offer
you
know
these
alternative
technologies
and
then
people
would
use
them
and
third-party
cookies.
The
idea
is
that
the
idea
is
that
zulu
offers
alternative
and
that
would
eventually
have
to
deprecate
the
existing
privacy-based
technologies.
B
Right, thanks, Ekr. Charlie?
K
Yeah, I also wanted to respond to Wes and some of the comments in the chat, mostly to plus-one what Ekr said. On the Google end, we're definitely looking at tackling this tracking problem on the web with a two-pronged approach. One is having a well-lit path where we can recover use cases in a way that we know is privacy-preserving; something like PPM could fit into this story. And once we have a foundation where some of the use cases that we think are important to maintaining the ecosystem are there, then we can go ahead and remove the stuff that we think is bad for user privacy.
K
I think the platform can really mediate this, and if we think the platform provides a good enough foundation, then we can remove the stuff that we think is bad for user privacy. This is the strategy we're pursuing on Chrome: we're investing all this time trying to come up with new browser primitives, and we also have a timeline on which we want to disable third-party cookies and deprecate them.
B
Thank you very much, Charlie. I do want to give Wendy an opportunity to come back in, if we think we've solved the audio issues.
B
Okay, so let's go ahead and move on to the charter, then. I have it up here in front of us; it's broken into two sections. It's probably not terribly useful for me to try to read through it directly, as it's on your screens, but we do want to go ahead and take some comments here. I see our area director has stepped up.
B
All right, thank you. So this is the current proposed charter, or at least the first half of it.
T
Thank you, and thanks for all the presentations; it's been really interesting. Most presentations today were about use cases, but I don't think there's anything about that currently in the charter draft, although I can only see the second slide now. I think that would be a useful thing to document somewhere.
F
I think we're up for that. One thing we're trying to do is treat the use cases as motivation, because these are generic techniques, but I'd be more than happy to write down enough motivation to understand why you would want to collect the aggregate statistics. And to address something Siobhan asked in the chat, whether there's room for multiple instantiations of the same basic task: I think absolutely so. For instance, if there's a better mechanism for collecting heavy hitters than hits, which is relatively expensive, there would be room for something like that.
F
I want to build something where, as new cryptography gets developed, we can add it, and where we can have multiple mechanisms that support the same basic thing. The only question I want to be sure about is this: if we have something radically different, some lightweight mechanism that doesn't match up, then we have to ask whether it would be better to make a new protocol or to fit it into this one. But I'm definitely pro fitting it in.
U
Yeah, I just wanted to say that that's encouraging. I just want to make sure that the charter as currently written wouldn't rule out some alternative ideas for achieving the same goals, so I think the charter would need some tweaking.
U
I
think
if
you
go
to
the
previous
one
previously
sorry,
I
think,
like
splitting
measurements
between
multiple
non-including
servers
like
I
think
that
is
like
a
class
of
for
techniques
for
achieving
this
goal
of
privacy
preserving
measurements.
But
I
think
you
could
do
it
in
other
ways
as
well,
and
I
see
there's
some
support
to
rewrite
text
and
I'm.
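[Editor's note: as a rough illustration of the "splitting measurements between non-colluding servers" class of techniques mentioned above, here is a minimal additive secret-sharing sketch. This is a toy, not the actual PPM protocol; field size and values are illustrative.]

```python
import secrets

P = 2**61 - 1  # prime modulus; all arithmetic is in the integers mod P

def split(measurement: int) -> tuple[int, int]:
    """Split one measurement into two additive shares, one per server.
    Each share on its own is a uniformly random field element."""
    r = secrets.randbelow(P)
    return r, (measurement - r) % P

# Five clients each split their measurement between server A and server B.
measurements = [3, 1, 4, 1, 5]
shares_a, shares_b = zip(*(split(m) for m in measurements))

# Each server sums only the shares it holds...
sum_a = sum(shares_a) % P
sum_b = sum(shares_b) % P

# ...and only combining the two sums reveals the aggregate, never any
# individual value (assuming the two servers do not collude).
total = (sum_a + sum_b) % P
assert total == sum(measurements)  # 14
```

The point of the sketch is exactly the one made at the mic: the split-between-servers design is one way to get privacy-preserving aggregates, not the only one.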
V
I
think
the
charter
needs
to
say
something
about
use
cases
and
threat
models.
I
don't
have
any
formal
words
for
that
at
this
point,
but
I
do
think
that's
the
important
thing
to
capture
here,
because
if
we're
going
to
be
developing
multiple
solutions,
as
ted
was
alluding
to
edward,
there
might
be
different
threat
models
that
apply
to
them
rather
than
a
single
overarching
one,
and
I
think
it
would
be
good
to
make
sure
that
information
is
clearly
captured
when
things
are
developed
and
obviously
that
needs
to
fit
into
the
charter.
Cheers.
B
Thank you, Eric.
G
On a similar line, actually: there were comments in the chat earlier about Sybil attacks and similar attacks, and comments that differential privacy would be required and that the VDAFs have scope for that. Is it meaningful to say that these will support privacy-respecting aggregation of values without having some definition of what we mean by a private value? And should that be here, or at the VDAF level?
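[Editor's note: for context on the differential-privacy point, here is a minimal sketch of adding Laplace noise to a released aggregate. The function name and parameters are illustrative, not anything defined in the PPM drafts.]

```python
import random

def dp_noisy_sum(values, epsilon, sensitivity=1.0):
    """Release a sum with Laplace noise of scale sensitivity/epsilon.
    Sensitivity is the most a single contribution can move the sum."""
    scale = sensitivity / epsilon
    # The difference of two independent Exp(scale) draws is Laplace(0, scale).
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return sum(values) + noise

# Larger epsilon means less noise (weaker privacy); the released value is
# close to, but deliberately not exactly, the true sum.
released = dp_noisy_sum([1] * 1000, epsilon=1.0)
```

Where such noise gets added (at each client, at an aggregator, or at the collector) is exactly the kind of question the charter discussion is circling.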
K
Yeah, I just wanted to, I guess, drill a little deeper into what aggregation actually means in the charter; it might be good to have a definition there.
K
I see language like "the server can't learn the value of individual measurements", and maybe that suffices as a definition. But some of the use cases that we're considering don't fall very neatly into this realm of learning only an aggregate: for example, if you learn a private ML model that you can verify is differentially private, or something like that.
H
Yeah, just adding to the earlier point: I completely agree with the idea of adding in the use cases, but just to echo a point that's come up in the chat, I think we should also consider adding abuse cases, and ideally mitigations as well, which might help alleviate some of the concerns about some of the dark practices in this general area. Thanks.
W
Watson here, relaying a message from Wendy in the chat. She wrote: "I'm just gonna first voice support from W3C for this work, as there's work in incubation that could use it."
X
Yeah, just on the abuse cases versus use cases thing: given that this is probably pretty good technology that could be used well, I'm not too worried about documenting use cases, and that could be a bit of a time sink. But given that this good technology could be abused, I think effort spent in that direction would be much more valuable, particularly if we find ways of mitigating the abuses that we see.
M
Nick Doty, CDT. Thanks for presenting all this, everyone. On the charter, I had two questions or concerns for now. One: we're talking about abuse and things like that, but I also think a lot of the privacy depends on assumptions about non-collusion, or about how the client is going to find the different servers or configure them, and I'm a little bit worried that that's all getting marked out of scope in the hope that maybe we can just ignore that problem or someone else will fix it.
M
I
I'm,
I
guess
I
think
we
should
be
discussing
it
if
we're
going
to
put
a
lot
of
effort
into
into
the
work
of
the
protocol.
That
depends
on
those
things.
The
other
concern
is
about
the
name.
I
think.
Maybe
it's
come
up
a
little
bit
already
in
the
chat.
I
I
think
priv
is
is
both
very
confusing,
does
not
describe
the
work,
that's
happening
about
privacy,
preserving
measurement
and
actively
misleading
that
this
is
the
group.
M
B
All right, thank you. So, Nick, I'm going to ask that you send some concrete suggestions for your first bullet point to the list, if you have some time to do so. Thanks. Chris?
Y
Plus one to that; I think "priv" does seem overly broad. But I also wanted to address the abuse cases people are talking about in chat. One of the things that was brought up is: what about clients trying to corrupt the computation by sending bogus inputs, or something like that? That form of abuse is something we're explicitly trying to rule out: a VDAF is verifiable in the sense that invalid inputs can be detected and removed from the output of the computation.
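[Editor's note: a toy sketch of that idea. Real VDAFs use zero-knowledge proof systems on secret-shared data, but even a bare-bones two-server multiplication, with a trusted dealer supplying a Beaver triple (an assumption the actual protocols do not make), shows how servers can check that a shared input is a valid 0/1 value while opening only masked check values, never the input itself.]

```python
import secrets

P = 2**61 - 1  # prime field modulus

def share(x):
    """Additively share x between two servers."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

def beaver_triple():
    """Trusted-dealer triple: shares of random a, b and of c = a*b."""
    a, b = secrets.randbelow(P), secrets.randbelow(P)
    return share(a), share(b), share((a * b) % P)

def shared_input_is_bit(x_sh, triple):
    """The servers jointly evaluate x*(x-1) on shares and open only the
    result: it is 0 exactly when the hidden x is a valid 0/1 input."""
    (a0, a1), (b0, b1), (c0, c1) = triple
    # Open the masked values d = x - a and e = (x - 1) - b; each is
    # uniformly random, so opening them leaks nothing about x.
    d = (x_sh[0] - a0 + x_sh[1] - a1) % P
    e = (x_sh[0] - 1 - b0 + x_sh[1] - b1) % P
    # Local shares of x*(x-1) = d*e + d*b + e*a + c, then open the product.
    z0 = (d * b0 + e * a0 + c0) % P
    z1 = (d * e + d * b1 + e * a1 + c1) % P
    return (z0 + z1) % P == 0

assert shared_input_is_bit(share(0), beaver_triple())
assert shared_input_is_bit(share(1), beaver_triple())
assert not shared_input_is_bit(share(7), beaver_triple())  # bogus input caught
```

For an honest input the opened check value is always 0, so the check reveals nothing beyond validity; a bogus input is caught and can be dropped before aggregation.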
B
All right, thanks. So, just for clarification (this is addressed at Stephen): when we're talking about abuse cases, is that the kind of abuse you're talking about? Are you talking about abuse of the actual data being collected?
X
So again, pardon my ignorance, which is vast, but for example: this technology could be used to measure things in aggregates that are too small and thereby become exposing; or, if some application had some kind of opt-in mechanism, it might be possible to change what gets measured in ways that are damaging for users. Those, I think, would be the kinds of abuses that are a bit...
Q
Thank
you
so,
plus
one
on
the
name,
confusion,
issue,
plus
one
on
talking
about
abuse
cases,
and
I
I
think,
that's
mostly
by
the
you
know.
Whoever
is
doing
the
measurements,
not
not
so
much
about
the
abuse
by
by
the
user.
Users
should
still
be
able
to
produce
the
data
that
they
want
to
produce
as
long
as
it's
sort
of
within
bounds.
Q
The other thing is that I think maybe there's some room for a discussion of opt-in and opt-out types of solutions, and we can learn from some other protocol cases. For instance, in QUIC there's the spin bit, where there's an arrangement that some fraction of the users are automatically always excluded from spinning, in order to create a set of users that don't do a particular action. Similar techniques might actually apply here: you automatically exclude some set of users, you create this opportunity to not participate, and then users can opt out without being targeted as opt-outers. I think that kind of thing would be important. I don't have the specific language for the charter; maybe something about enabling realistic opt-in and opt-out possibilities.
F
Generally, on the opt-in thing: these are configuration points of some kind in clients, and the question of exactly how users decide whether that actually gets engaged is not generally something we do too much of in the IETF. But the points you're making are absolutely correct: there's a whole pile of material that has to be done about ensuring that the box the system sits in does not allow for abuse by the collector or by the various aggregators, and that's a big topic that we absolutely have to make sure we hit properly. There's some work already in the document about it, but it's insufficient.
F
Somebody
suggest
some
text,
but
I
think
absolutely
having
like
some,
like,
I
think,
having
zion
texas.
If
we
had
to
work
on,
you
know
actually
like
addressing
those
topics,
would
be
really
important.
B
Okay, all right. Martin?
J
Just to Jari's point: a lot of the systems that are being described here allow people to opt out of the system without apparently opting out. They can generate inputs to the system that appear, to all intents and purposes, to be valid inputs, but that don't actually contribute any values to the system, and that's actually a useful property that is exploited in various ways. So I think that capability exists; I'm not sure that we need to put those sorts of things into a charter.
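[Editor's note: Martin's observation can be seen directly in the secret-sharing arithmetic. Again a toy sketch, not the actual protocol: a client that shares a zero contribution produces shares that look exactly like anyone else's.]

```python
import secrets

P = 2**61 - 1  # prime modulus for the additive shares

def split(measurement):
    """Additively share a measurement between two servers."""
    r = secrets.randbelow(P)
    return r, (measurement - r) % P

real = split(1)      # a participating client
opt_out = split(0)   # an opted-out client: adds nothing to the aggregate

# Each individual share is a uniform field element, so a single server
# cannot tell the two clients apart; only the reconstructed values differ.
assert (real[0] + real[1]) % P == 1
assert (opt_out[0] + opt_out[1]) % P == 0
```

This is the "appear valid but contribute nothing" property: the zero contribution passes every check a real one does and simply leaves the sum unchanged.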
J
I do think that we need to include something generic about abuse. There is something very specific about abuse in here, about proofs of validity, but I've spent quite a lot of time thinking about things like Sybil attacks on various manifestations of these sorts of protocols, and those would be interesting things to have at least under the auspices of the group.
E
Can Stephen and I stand up our own helpers and leaders and expect everyone to accept data that we submit, because the two of us agree that we trust each other? Or are the collectors only going to accept data from certain places? We end up with either a very small, very centralized ecosystem or a much more flexible framework.
Q
Thanks
yuri,
yes,
so
just
a
quick
response
to
martin.
So
that's
that's
really
good
news
and
I
wasn't
looking
for
documenting
any
of
that
detail
in
in
the
charter.
Perhaps
the
high
level
bit
about
being
effective,
opt-out
mechanisms
could
could
be
in
the
charter.
Thank
you.
Q
C
F
To answer this question: I think there are a bunch of compatible models. The important thing to remember, going back to the trust model I put up earlier, is that the collector has to trust the aggregators, but the clients also trust the aggregators, because the aggregators are responsible for safeguarding the clients' privacy. So, as a practical matter, I would anticipate that there'll be a non-gigantic number of different aggregators, because fundamentally you're trusting them to behave correctly. What you and Stephen propose would probably work, except that people will probably end up relying on the reputations of the collectors (sorry, the aggregators). But this is compatible with an arbitrarily large number.
F
I
would
expect
to
see
really
two
models
that
we
talked
about,
one
of
which
is
you
know,
people
who
operate
just
as
just
as
aggregators
and
are
basically
their
job
is
to
ensure
the
safety
system,
and
then
people
who
operate
sort
of
like
a
more
global
system
where
it's
like.
You
know
I
like,
if
you
see
amplitude,
but
it's
like
you
know,
look.
F
I
am
like
data
collection
of
the
service
and,
like
you
know
I
I
I
I
contract
with
several
several
different
aggregators
and
that's
how
I
provide
the
system
in
a
box.
So
I
put
this
to
those
two
models,
but
there's
nothing
really
centralized
here.
There's
something
there's
like
no
there's
no
reason.
You
know
like
every
you
know
like
plenty
of
like
trustworthy
entities
in
the
world
and
and
so
there's
nothing
that
basically
requires
like
that,
like
a
good,
the
collector,
a
work
with
it
same
set
of
aggregators
as
quantum
b.
Y
To
bump
what
eckerd
just
said,
but
also
also
kind
of
point
out
that
something
that
we've
been
working
on
in
the
protocol
is
lowering
the
bar
of
entry
to
running
this
this
system
as
much
as
we
can
so
there's
a
sort
of
asymmetry
between
leader
and
helper.
So
the
leader
is,
you
know,
and
and
this
could
change
depending
on
what
p,
what
people's
needs
turn
out
to
be.
Y
But
what
we
have
been
thinking
about
so
far
is
the
helper
should
be
very,
very
cheap
to
run
and
operate,
and
a
leader
is
inherently
more
expensive
because
it's
getting
measurements
directly
from
clients,
it
has
to
store
them
for
some
amount
of
time
before
they
can
begin
processing
them.
How
much
you
have
to
store
and
how
long
you
have
to
store.
Y
It
depends
on
the
vda
if
you're
running
so,
but
I
think
this
is
one
of
our
goals,
and
I
think
this
is
important
thing
to
keep
in
mind,
regardless
of
like
how
the
ecosystem
pans
out.
We
want
it
to
make.
We
want
to
make
the
bar
bar
to
entry
as
low
as
possible.
I
think.
B
All
right,
thank
you
very
much
so
at
this
point,
we'd
like
to
go
ahead
and
move
on
to
some
questions,
to
inform
the
the
area,
directors
and
the
iasg
about
community
interest
for
this,
I'm
going
to
go
ahead
and
bring
up
polls
for
each
of
these,
but
also
at
certain
points,
we're
going
to
ask
that
if
you
are
not
in
support
of
forming
the
working
group
that
you
go
ahead
and
put
yourself
in
the
queue.
B
The
next
question
we're
going
to
ask
is:
do
we
think,
do
you
have
something
to
say
alyssa.
A
Well, maybe we should take them one at a time, so we can hear from the folks who did not raise their hand or explicitly left their hand down.
B
So again, if you have concerns about the scoping here and have not yet spoken on them, please add yourself to the microphone queue, so we can understand your position better.
M
Just... okay, probably, yeah. Doing polls like this is tricky, because we're chatting about possible variations.
M
I
I
agree,
but
I
think
we
have
this
like
very
open
question
about,
are
use
cases
and
abuse
cases
going
to
be
put
into
the
charter,
or
are
they
going
to
be
a
work
item
and
and
until
until
some
of
that
is
settled
until
we
actually
are
going
to
define
them,
then
I
think
that
it's
it's
harder
to
conclude
that
the
problem
statement
is
is
completely
clear
if
we're
not
actually
agreed
on
the
use
and
abuses.
P
And
again
a
personal
statement
not
on
behalf
of
isoc.
I
think
the
discussion
in
the
chat
about
even
the
name
of
the
group
and
most
of
the
proposed
names
did
not
have
the
word
value
in
values
raises
questions
about
the
the
scope
and
intent
to
the
group
that
are
fundamental
enough.
That
question
two
is
very
difficult
to
answer.
As
nick
said,
okay.
B
All
right
so
ecker
go
ahead.
F
I got punchy trying to figure out how to make it say "priv"; that's really what's going on here. So if you want to, suggest something else, because every letter in there was constructed to make it say "priv", not for any actual reason other than that.
B
Okay
and
then
the
final
two
questions
that
we're
going
to
have.
Typically,
we
would
have
raised
of
hands
of
this
in
a
physical
room
because
we
do
like
to
keep
track
of
who
exactly
answered
yes,
but
to
do
that
effectively
here
we're
going
to
ask
people
in
the
chat
if
you
are
willing
to
review
documents
associated
with
this
working
group.
Please
respond
with
review.
B
Okay,
so
I've
seen
on
the
order
of
20
people
respond
so
far.
This
is
a
very
strong
signal
and,
secondly,
who
here
plans
to
be
an
editor
for
related
documents.
Please
respond
with
edit
in
the
chat.
B
All
right
great
well,
thank
you
very
much
that
takes
us
to
the
end
of
the
things
we
wanted
to
discuss
for
this
buff.
I
think
this
has
been
very
useful
and
I
want
to
give
a
few
moments
over
to
our
area
director
to
speak.
C
Hi
everyone
I'd
like
hi,
everyone,
I'd
like
to
repeat,
I
think
we've
had
a
successful
buff
and
I
want
to
thank
everyone
for
kind
of
all
their
input.
I
think
the
chat
was
just
as
lively
as
as
the
mic
line.
If
not
more,
I
mean
generally.
What
I
heard
was
there
is
a
critical
mass
of
interest
in
working
in
this
kind
of
particular
problem.
There
was
repeated
the
tension
I
heard
around.
This
idea
of
you
know
are
we
okay
working
on
ad
tech
and
the
ad
use
cases?
C
There were a number of really helpful suggestions about how to polish that, and we'll double-check and confirm them. Specific things I heard, reading off my list: let's generalize the charter text to make sure we're sufficiently flexible, so we can swap approaches, so it's not just Prio and heavy hitters; and we need to make sure that how we define aggregation doesn't restrict other alternatives.
C
We
talked
about
the
need
for
work,
items
to
document
the
abuse
cases
and
talk
about
the
threat
models
against
all
the
portions
of
of
the
architecture,
and
then
we
had
quite
a
lot
of
conversations
that
we
need
to
tune
the
working
group
name.
I
don't
know
what
the
solution
is,
but
that
we
would
like
a
change
there,
and
so
I
think
the
proponents
seem
open
to
making
kind
of
those
changes
and
that
will
those
changes
will
be
made
and
then
we'll
bring
that
back
for
confirmation.
C
But
otherwise
I
think
we
do
have
a
critical
mass
and
we
should
we
should
kind
of
polish
this
to
consensus,
and
so
we
can
move
forward.
Thank
you.