From YouTube: Management Techniques in Encrypted Networks (M-TEN) Workshop Day 3: How We Get There (2022-10-19)
Description
User privacy and security are constantly being improved by increasingly strong and more widely deployed encryption. This workshop aims to discuss ways to improve network management techniques in support of even broader adoption of encryption on the Internet.
Workshop page: https://datatracker.ietf.org/team/mtenws/about/
Day 1: Where We Are: https://youtu.be/Kizk_QrIc3s
Day 2: Where We Want to Go: https://youtu.be/aV1pzuCduLo
Day 3: How We Get There: https://youtu.be/p4NlZJlactE
A
I think we all know the goals at this point, and so I won't go over them in detail. The important thing is that we are now at day three: previously we discussed where we are and what the state is, and then where we want to go, and the real question now is, well, what do we do? How do we get there? I think some of yesterday's talks actually dived into this too; "where we want to go" and "how do we get there" do overlap quite a bit, and that's just fine.

So we're going to start today with a presentation by Michael Collins on improving network monitoring through contracts, another by Paul Grubbs on zero-knowledge middleboxes, and one by Tommy Pauly on Red Rover, a collaborative approach to content filtering. There's a fair amount of space, especially because this is the last day of the workshop.

I really didn't want to fill it with too many presentations, because I think we need a good discussion at the end, and we'll divide it into two parts a little bit: we have a lot of space for talking about what comes out of today's presentations and topics, but I'm hoping we'll do a little bit of workshop wrap-up too and think about things as a bigger picture, and maybe next steps and things like that. So with that, I'm going to go on to Michael's presentation.
B
So, before I came to ISI, which was about two years ago, I worked for a company called RedJack, and before that I was at the CERT at Carnegie Mellon University. A lot of my work there was in large-scale network monitoring, primarily for the Department of Defense. So if you've ever heard of CENTAUR or SiLK, I was heavily involved in building those, and if you've heard of Einstein, I can claim no culpability for that whatsoever.
B
Okay, so I want to talk in particular about this concept of a network contract. Now, the core idea here, which is to have assets publish a behavioral envelope, is something we've talked about in various ways in presentations over the past few days, so I'm going to slightly change the angle of how I'm talking about it to focus in particular on the role of security operations in this, and the idea of the human in the loop, because I think we've talked a fair amount about automated publication.
B
I think one of the core things when we're talking about security is that we're dealing with, for lack of a better term, the problem of hostility, which is to say that we are going to see not only unexpected but very bizarre traffic, run intentionally to mess with us. That said, I found the discussion about MUD yesterday particularly relevant to this: the core idea is that we would have a traffic descriptor, but the goal in this case is that the descriptor enables operators to focus their attention on unknown or suspicious behavior.
B
A lot of what drove me in building this was ideas that came from Nancy Leveson, the author of Safeware. She's a non-traditional reliability researcher, and in particular her sections on monitoring reliable systems in that book are, I think, very insightful. She talks about the need for monitoring systems to support crises, and for monitoring systems to help develop expertise among operators, so they get a deeper intuition of how systems operate under normal and abnormal circumstances. Next slide, please.
B
As part of this, I've been working for the past few years on what I call attention-based response. The idea is that if you take a look at the history of security operations and intrusion detection, for many years the operators were considered a kind of appendage to the IDS, and certainly in my previous job I ran into situations where I would talk to people, C-suites, whose primary goal was to figure out how to get rid of operators.
B
The idea of operators having expertise and having their own independent agency is, I think, a relatively recent concept, and I would say the division point is MITRE. A couple of years ago they wrote a book called Ten Strategies of a World-Class Cybersecurity Operations Center, and one of the points they make there is that when SIEMs, sort of our baseline monitoring software, were introduced, the first response of most organizations was "great, I can automate this and get rid of all of my analysts," followed by the realization:
B
Oh
there's
an
awful
lot
more
hostile
stuff
going
on
on
my
network
than
I
expected
I'm
going
to
need
more
analysts
and
well
this,
and
as
this
sort
of
acknowledgment
and
increased
professionalization
comes
on,
there
are
things
coming
on
the
research
side.
I
will
briefly
pause
to
plug
the
workshop
that
I'm
going
to
be
running
on
ndss
on
sock
operations
cfp
to
come
out
soon,
but
we've
also
seen
bottom
up
from
the
operational
World
metrics
appear.
One
of
the
core
things
we
often
see
people
talking
about
is
the
idea
of
events
per
analyst
hour.
B
This is basically just a statement of how much an analyst can handle in that period, and given that the term "event" is itself fuzzy, the standards are a bit off; but if you get people in a corner, they'll say about ten an hour. Now, the core thing here is not the ten an hour. The core thing is that I've dealt with systems where I get 30,000 alerts every 30 seconds, so that events-per-analyst-hour value is usually dwarfed by what we're actually getting on the network.
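The mismatch between alert volume and triage capacity is easy to make concrete. A minimal sketch using the figures quoted here (everything else is plain arithmetic):

```python
# Back-of-the-envelope comparison of alert volume vs. analyst capacity.
# The 30,000-alerts-per-30-seconds and ~10-events-per-analyst-hour figures
# come from the talk; the rest is simple arithmetic.

ALERTS = 30_000          # alerts observed
WINDOW_SECONDS = 30      # in this many seconds
EPAH = 10                # events one analyst can triage per hour

alerts_per_hour = ALERTS * (3600 / WINDOW_SECONDS)
analysts_needed = alerts_per_hour / EPAH

print(f"{alerts_per_hour:,.0f} alerts/hour would need {analysts_needed:,.0f} analysts to triage")
```

The point of the sketch is the gap: triaging everything by hand would take hundreds of thousands of analysts, which is why attention has to be treated as the scarce resource.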
B
So, given that, an attention-based framework is based around the idea that operator attention is your most valuable resource. Operators need to be creative; attackers are a constantly moving target; things that are identifiable should be automated and moved away. I've broken it into three basic points. The first is that all security organizations are cost centers, and so they have to work within the constraints imposed by their parent organization.
B
Given
those
constraints,
if
you're
given
two
design
options,
pick
the
one
that's
going
to
frustrate
the
attacker
goals
more
and
then
given
design
Options
under
those
categories
pick
the
one
that's
going
to.
Let
the
operators
manage
more
events,
view
your
defenses
as
how
am
I
going
to
be
able
to
spend
my
analyst
attention
more
wisely
if
I
can
result
in
and
to
sort
of
get
on
to
that,
and
what
that
means.
Let's
move
to
the
next
slide,
so,
okay,
so
the
history
of
intrusion,
detection
follows
the
history
of
artificial
intelligence.
B
To a large extent, what operators tend to be doing is validating alerts, and when I say validating, there are actually two things going on here: it's not simply a matter of false positive or false negative. There are a lot of alerts that we get that are not necessarily threatening. One of the things that I've been doing is analyses of the ISI dark spaces.
B
You
know
thousands
of
scans
every
day
for
telnet,
which,
at
least
on
the
dark
space
is
not
a
significant
problem
for
us,
because
you
know
we
have
no
machines
there,
but
the
point
is:
there's
an
enormous
amount
of
undirected
hostile
traffic
and
anyone
who's
directly
connected.
You
know
anyone
who
gets
unfiltered
email
because
they're
running
the
spam
filters
or
anyone
who's
running
a
large
network
is
all
aware
of
this.
B
What operators tend to be doing is comparing and cross-referencing data. I've generally described it as doing left joins across crappy databases: we'll take network traffic and compare it to server logs, or we'll take an endpoint agent and compare it to network traffic. These are not easy things to do, because elements of identity and elements of normalcy are all a mess here. The other thing to keep in mind is that any network is a contested domain; network instrumentation is unreliable.
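The "left joins across crappy databases" pattern can be sketched in a few lines. The flow and endpoint records below are invented for illustration; real identity matching (DHCP churn, NAT, cloud assets) is far messier, which is the speaker's point:

```python
# Sketch of cross-referencing flow records against an endpoint inventory
# keyed on IP. A left join keeps every flow and attaches endpoint data
# when a matching record exists.

flows = [
    {"src_ip": "10.0.0.5", "dst_port": 443},
    {"src_ip": "10.0.0.9", "dst_port": 23},    # no matching endpoint record
]
endpoints = {
    "10.0.0.5": {"agent": "osquery", "owner": "alice"},
}

# Left join: keep every flow, attach endpoint data when present.
joined = [{**flow, "endpoint": endpoints.get(flow["src_ip"])} for flow in flows]

# Flows with no endpoint record are exactly the "what is this host?" cases.
unaccounted = [row for row in joined if row["endpoint"] is None]
print(f"{len(unaccounted)} flow(s) from hosts the endpoint agent doesn't know about")
```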
B
The first things that I see most operators write for their systems are checks on the integrity of the data they're collecting, because they don't trust it. And, going back to the point I made about Leveson and crises: in a crisis, instrumentation is often the first thing to go. One of the ways, for example, in the early days, that we knew DDoSes were happening on some of the networks I was looking at was that the flow feeds from the routers stopped, because flow export is a lower-priority process than moving packets.
B
The other thing, of course, is that if you're running any non-trivial network, parts of that network are controlled by people who are actively hostile to you. That's just the reality. We talk an awful lot about zero trust and how the perimeter model is dead, but oftentimes we treat response, we treat the management of our systems, as if the perimeter model still existed. Next slide, please.
B
Now, I made this point previously, but I want to get into it: the classic thing in intrusion detection and anomaly detection has been to take whatever is current in artificial intelligence and plug it in. So take a look at one of the earliest intrusion detection systems, NIDES: literally, the acronym is Network Intrusion Detection Expert System.
B
One of the points I make to people is that there's a lot of research where people still try to use, say, the 1999 LARIAT data set, and that has no relationship to what a modern network looks like. It's also true that networks in different countries have different profiles, not only for the software they're running or the tools they're using, but even for the types of attacks they get; I found an interesting paper about the types of scans that actually happen in different countries. And individual sites building things like deep learning models are doing statistical analysis with very weak power tests: there's no guarantee that an organization is going to have enough information to train a model, in particular if you're dealing with rare, high-threat events like, say, spear phishing or a watering hole.
B
The other thing, of course, is that the highest compliment paid to any effective defensive system is that the attackers figure out a way to get around it. This is a moving target all over the place. The other issue is that normal traffic is not necessarily innocent. This has been a classic assumption in a lot of the machine learning work on this: we used to see papers where people say, "we train on two weeks of data and work from there."
B
Somebody just quoted a Vern Paxson paper, which is always a good thing to do in this situation.
B
I will also say that a lot of this was informed by a paper written several years ago by Carrie Gates and Carol Taylor that challenges the anomaly detection paradigm; I think that was at the 2006 NSPW workshop, and you can just kind of wave that around like a pamphlet. There's also the base rate fallacy, which is Axelsson's particular observation that with rare phenomena and commonly repeated tests, you'll just get dominated by the false positives. And then, going from my previous slide, there's the other thing about operators.
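Axelsson's base-rate point can be made concrete with Bayes' rule. The rates below are illustrative, not measurements:

```python
# The base-rate fallacy, worked numerically: even a detector with a 1%
# false-positive rate is swamped when attacks are rare among events.

P_ATTACK = 1e-5                # base rate: fraction of events that are attacks
P_ALARM_GIVEN_ATTACK = 0.99    # detection rate
P_ALARM_GIVEN_BENIGN = 0.01    # false-positive rate

# Bayes' rule: P(attack | alarm)
p_alarm = (P_ALARM_GIVEN_ATTACK * P_ATTACK
           + P_ALARM_GIVEN_BENIGN * (1 - P_ATTACK))
p_attack_given_alarm = P_ALARM_GIVEN_ATTACK * P_ATTACK / p_alarm

print(f"P(attack | alarm) = {p_attack_given_alarm:.4%}")
```

With these numbers, fewer than one alarm in a thousand corresponds to a real attack, which is exactly the "dominated by false positives" outcome.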
B
They have to explain this, because security decisions are often restrictive and opaque, and they make people's lives harder, which means that if you're talking to someone and you're saying "I need to do this and I can't explain why," there has got to be a spectacular justification, or it won't happen. Next slide, please, Wes.
B
That
is
the
idea
that
sort
of
passive
machine
learning
driven
data
collection
is
hard,
and
so,
instead
of
working
from
the
idea
that
we're
going
to
passively
monitor
traffic
and
build
up
a
model
of
normalcy,
an
alternative
approach
is
to
publish
a
model
of
your
acceptable
behavior
for
an
asset
and
then
use
that
information
to
let
the
let
the
network
decide
if
it's
going
to
allow
the
traffic
and
let
the
operator
know
what
expected
traffic
is
going
to
look
like.
So
the
idea
here
is
that
the
contract
is
an
envelope.
B
It's
a
description
of
acceptable
behavior
and
that
may
include
things
like
IP
addresses
and
domain
names,
and
what
I
use
is
a
jumping
off
point
here
is
there's
a
standard
developed
by
the
Luxembourg
cert
called
Miss,
which
is
short
for
malware
information
sharing
platform
as
part
of
the
general
professionalization
of
the
sock
floors.
There's
been
a
lot
of
work
in
the
last
few
years
towards
standards
for
for
threat,
intelligence
sharing
and
what's
more
interesting
from
my
perspective,
is
that
these
standards
are
now
sticking.
B
There
were
a
number
before
this,
but
between
sticks
and
Mis
sticks
is
the
Oasis
standard
between
in
those
two
standards.
There's
actual
teeth
to
them
now
and
people
are
talk,
are
publishing,
misfeeds
and
the
like.
So
if
you
take
a
look
at
misspit's
gigantic
key
value
store
with
a
whole
bunch
of
different
classes
of
information
that
could
potentially
be
stored,
they're
the
kind
of
things
that
operational
analysts
are
interested
in,
so
we
could
do
that
information
and
then
what
we
can
also
do
is
we
can
break
these
into
different
states.
B
The
idea
here,
particularly
for
the
different
states
is
security,
is
about
crisis,
and
so
we
need
the
ability
to
describe
how
the
system
is
going
to
operate
in
a
crisis.
So
the
idea
here
is
I.
Put
some
basic
States
down
here
default.
What
happens
during
a
control?
What
I
might
do
if
there's
an
emergency
patch
Behavior
like
that
and
then
what
these
would
do
is
they
would
describe
the
traffic
that
you
could
expect
to
see?
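As a concrete illustration of the states idea, a hypothetical contract might look like the following sketch. The field names, domains, and state set are invented for illustration; nothing here is a published schema:

```python
# Hypothetical network contract: one envelope per state, each describing
# the traffic an operator should expect from the asset in that state.

contract = {
    "asset": "av-agent-01",
    "states": {
        "default": {
            "expected": [
                {"dst": "updates.example-av.com", "proto": "https",
                 "frequency": "daily"},          # routine dial-home
            ],
        },
        "control": {
            "expected": [],                      # asset quiesced by the operator
        },
        "emergency_patch": {
            "expected": [
                {"dst": "updates.example-av.com", "proto": "https",
                 "frequency": "on-demand"},      # burst of update traffic
            ],
        },
    },
}

def expected_destinations(contract, state):
    """Destinations the monitoring system should treat as in-envelope."""
    return {rule["dst"] for rule in contract["states"][state]["expected"]}

print(expected_destinations(contract, "default"))
```

Traffic outside the envelope for the current state is what gets operator attention.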
B
For
example,
I'm
going
to
I'm
antivirus
software
I'm,
going
to
update
my
system
once
a
day
by
dialing
home
to
the
update
site
and
one
of
the
first
things
that
malware
authors
often
do
disable
the
antivirus
software.
So
when
I
see
the
absence
of
that
signal,
that
is
for
the
operator
a
point
of
attention
next
slide,
please
Wes,
okay,
so
and
basically
I
just
explained
this
slide
in
the
last
slide.
So
the
point
here,
though,
the
the
other
point
here,
though,
is
to
make
a
note
that
you
know.
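That absence-of-signal check (the antivirus agent that should dial home daily, and doesn't) can be sketched minimally; the timestamps and interval here are invented:

```python
# If the contract says the agent dials home daily and a day passes with no
# such flow, the absence itself is the point of attention.

from datetime import datetime, timedelta

EXPECTED_INTERVAL = timedelta(days=1)

def beacon_missing(last_seen: datetime, now: datetime) -> bool:
    """True when the contracted daily dial-home has not been observed in time."""
    return now - last_seen > EXPECTED_INTERVAL

now = datetime(2022, 10, 19, 9, 0)
print(beacon_missing(datetime(2022, 10, 18, 22, 0), now))  # seen last night: fine
print(beacon_missing(datetime(2022, 10, 17, 8, 0), now))   # two days silent: flag it
```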
B
While
there
are
some
security
events
where
we
have
sort
of
an
immediate
jumping
in
front
of
the
bullet
situation,
the
most
common
response
in
a
long-standing
problem
and
keep
in
mind
when
we're
talking
about
things
like
botnet
occupations
or
Insider
threat,
or
things
like
that,
these
are
investigations
that
can
go
on
for
weeks
or
months.
The
most
common
initial
response
to
an
incident
is
to
reorganize
data
collection.
So
you
focus
your
attention
on
an
affected
area.
You
reroute
traffic.
B
Next
slide,
please
thank
U.S,
so
the
general
mechanism
for
this
I,
don't
think
at
the
mechanical
level,
is
a
shock.
The
asset
comes
online,
it
publishes
a
contract,
the
monitoring
system,
May
accept,
reject
or
constrain
it.
The
operational
element
is
that
the
operators
can
use
us
to
extrapolate
Behavior.
So
one
of
the
like
common
tasks
that
you'll
see
going
on
right
now
is
organizations
will
be
coordinating
network
monitoring
and
endpoint
assets.
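The accept/reject/constrain step can be sketched as follows. The policy rule (an allowed-protocol list) and the contract layout are invented assumptions for illustration:

```python
# Sketch of the mechanical flow: an asset publishes a contract and the
# monitoring system accepts it, rejects it, or constrains it to local policy.

ALLOWED_PROTOCOLS = {"https", "dns"}

def review_contract(contract: dict) -> tuple[str, dict]:
    """Return ('accept'|'constrain'|'reject', possibly-narrowed contract)."""
    rules = contract.get("expected", [])
    if not rules:
        return "reject", contract            # nothing declared: nothing to monitor against
    ok = [r for r in rules if r["proto"] in ALLOWED_PROTOCOLS]
    if len(ok) == len(rules):
        return "accept", contract
    if ok:
        return "constrain", {**contract, "expected": ok}   # strip disallowed rules
    return "reject", contract

decision, narrowed = review_contract(
    {"asset": "cam-7", "expected": [{"dst": "hub.example", "proto": "https"},
                                    {"dst": "peer.example", "proto": "telnet"}]})
print(decision, [r["proto"] for r in narrowed["expected"]])
```

Here the camera's telnet rule is stripped and the contract is accepted in constrained form.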
B
So
you
have
something
like
Titanium
or
Os
query
installed
on
your
assets,
but
you
don't
know
how
many
assets
you
actually
have
and
in
fact
the
definition
of
an
asset
can
be
extremely
fuzzy,
especially
when
you're
dealing
in
Cloud
environments.
So
so
what
you'll
see
is
Active
network
monitoring
will
be
used
to
compare.
This
is
what
the
network
tells
me.
I
see
versus
what
my
endpoint
assets
tell
me,
I
see
and
then
I
go.
Oh
here's
an
IP
I,
don't
recognize.
Let's
go
over!
Oh
it's
a
printer!
There's!
Nothing!
I
can
do
about
that.
I!
B
I guess we're doomed. By publishing the contracts, the operator can do things like extrapolate behavior: I know, for example, that I'm going to see a large number of updates to antivirus every day, and it provides reasons and justifications. And again, this is driven by the idea that we would be using this to improve how analyst attention is spent. The contracts can manage operator response by telling them: this is something I'm not particularly concerned about.
B
At
this
point,
I
won't
say
it's
safe
because
I'm
an
operational
security
person
and
nothing
is
safe,
but
it
gives
us
a
way
to
think
about
different
classes
of
reaction
and,
ultimately
giving
our
people
the
ability
to
have
different
classes
of
reaction
is
going
to
provide
more
power
to
them.
Next
slide,
please,
okay!
B
Okay,
so
again,
if
I
haven't
emphasized
it
enough,
I
tend
to
wave
around
the
copies
of
Safeware
like
they
were
scripture,
Nancy
Levinson's
work
on
monitoring
there
and,
in
particular
the
role
of
human
in
the
loop,
the
development
of
expertise
and
the
use
of
Crisis
States
is
I,
think
very
useful
for
us
to
think
about
here.
B
So
the
idea
of
multiple
States
is
I
think
important
here,
because
it's
not
a
case
of
Simply
binary
classification.
There
are
different
aberrant,
behaviors
and
I
can
tell
you,
based
on
my
own
experience,
building
such
systems
even
something
as
simple
as
detecting
a
threshold.
You
can
find
out
that
there's
all
sorts
of
specialized
Corner
cases
associated
with
those
thresholds.
B
The
idea
of
analyst
attention
and
attention-driven
secure
the
idea
that
what
we
want
to
do
is
defenses
is
free
up
our
analysts
to
think
more
about
what
they
need
to
do
or
what
more
the
problems
they
can
face
because
look,
there's
an
endless
array
of
creative
and
hostile
people
out
there
who
are
always
looking
for
new
and
exciting
ways
to
mess
with
our
networks.
We
can't
automate
against
that
level
of
creative
perversity.
B
We
have
to
have
people
who
are
able
to
respond
and
free
them
up
so
that
they
can
and
then
the
idea
of
supplementing
anomaly
detection
with
these
explicit
behavioral
elements
in
terms
of
outstanding
questions,
these
are
the
kind
of
things
I
was
going
to
ask
the
idea
of
different
types
of
States.
What
is
there
a
dictionary
of
them?
B
The
question
that
I
asked
yesterday
about
iot,
because
I've
been
thinking
about
this.
In
particular,
you
know
on
a
general
purpose
laptop.
This
is
very
constrained
for
iot,
where
we
got
largely
mechanical
traffic
traffic.
B
I
think
there's
a
lot
more
advantage
in
talking
about
very
tight
bounding
for
there
and
the
question
of
negotiation
between
the
assets,
monitoring
and
negotiation,
and
in
particular
you
know
like
how
far
do
we
want
to
go
there
and
what
are
the
potentials
for
abuse
because
you
know
ultimately
packet
inspection
of
some
kind
is
the
elephant
in
the
room
on
this,
and
you
know
I
can
easily
see
an
unrestricted
version
of
this
basically
being
destructive,
which
is
not
my
goal
so,
okay,
anyway,
so
those
are.
A
C
One query that I have, which I'm not able to understand, is specifically about the traffic descriptor and the indicator of compromise. How do you detect that? Our issue is that this is all encrypted traffic, and now these things are harder to detect in the traffic. Could you explain how, in your proposal, you achieve that? If the data is hidden, we can't detect these things; even having the contract and an envelope, how does this help?
B
So, in particular, the crux of what I'm thinking about there is monitoring across multiple domains. Let's take, for example, an indicator of compromise that is a DNS name. My baseline idea for a contract here is that a system has to publish that it's going to contact an antivirus site every morning for a download, okay?
B
So
what
we
would
expect
to
see
is
if
they're
going
to
communicate
with
our
internal
DNS
server,
then
that
DNS
server
should
be
able
to
log
that
particular
request,
and
then
we
would
have
to
cross-correlate
that
information.
Now,
if
you
put
a
gun
to
my
head,
I
will
say
that
I
think
there
are
certain
classes
of
traffic
that
I
would
prefer
to
have
encrypted
and
not
authenticated
or
sorry
other
way.
Around
authenticated.
B
But
I
also
recognize
that
that's
not
a
fight
I'm
going
to
win
so
the
classic
defensive
responses
you
have
to
cross,
correlate
across
service
or
host
based
information
for
that
purpose,
and
one
of
the
things
the
contract
can
do.
Is
it
can
tell
you
how
you
have
to
direct
your
instrumentation.
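The cross-correlation described in this answer can be sketched as filtering the internal DNS log against the contracted domain. The log entries and domain are invented for illustration:

```python
# The contract says the asset will query the AV update domain each morning,
# so the internal DNS server's log should contain a matching request.
# Everything else from that asset is where operator attention should go.

CONTRACTED_DOMAIN = "updates.example-av.com"

dns_log = [
    {"client": "10.0.0.5", "qname": "updates.example-av.com"},
    {"client": "10.0.0.5", "qname": "evil-c2.example"},
]

def contract_queries(log, client_ip, domain):
    """DNS requests from this asset that match its contracted behavior."""
    return [e for e in log if e["client"] == client_ip and e["qname"] == domain]

def off_contract_queries(log, client_ip, domain):
    """Requests outside the envelope: candidates for operator attention."""
    return [e for e in log if e["client"] == client_ip and e["qname"] != domain]

print(len(contract_queries(dns_log, "10.0.0.5", CONTRACTED_DOMAIN)))
print([e["qname"] for e in off_contract_queries(dns_log, "10.0.0.5", CONTRACTED_DOMAIN)])
```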
C
Yeah, thanks, Michael. So basically, I think it would mainly work when you kind of control the IoT devices, you control the network, you control the DNS, and then you can correlate. So in those kinds of systems, what you are proposing could work, but it may not work on the open internet and other kinds of systems.
B
Yes, I think that's very much right. I think if there's a core idea here, it is the idea of situationality: I do not see this as a universal solution for the core internet. I see this in particular contexts; I mean, keep in mind that I've spent my time working on edge networks, which are pretty flipping enormous, and so that is also biasing me.
C
I don't know if you can hear me or not; sorry.

A
We can, but you're very quiet.
C
It was just about thinking about the scalability of this approach, and the flexibility, actually, of the envelopes to describe the different behaviors. I understand now that you're thinking about, let's say, smaller networks, or were you actually thinking about larger networks for these approaches? But yeah, then thinking about the mix of these envelopes, and how to actually make some sense out of the myriad of them.
B
So, part of this, keep in mind, is that if you had asked me on Monday about this, I would have been much more heavily focused on automation. But having listened to the talks for the past few days, I think there's this idea of a published behavioral element that's popping up in various areas; there's clearly the MUD standard.
B
You
know,
and
so
the
I
think
the
Corey
thing
I
was
thinking
about,
for
this
is
how
do
we
deal
with
the
human
operator,
and
the
answer
is
that
a
lot
of
the
automation
should
be
focused
on
reducing
the
information
that
the
operator
has
to
spend
their
attention
on,
and
so
what
that
may
mean
is
we
have
an
enormous
number
of
automated
filtering
and
controls
and
blocks
which
are
intended
to
handle
the
crude
and
obvious
elements.
I,
don't
need
a
lot
of
intelligence
to
tell
I'm
being
DDOS
and
I.
B
So
I
think
the
question
really
boils
down
to,
and
this
is
a
problem
that
I
really
like
to
figure
out,
because
I
think
it's
I
think
it
is
the
core
for
whether
this
is
viable
and
how
much
of
it
can
be
applied
in
the
Enterprise
environment
is
figuring
out
how
much
the
automated
versus
the
human
role
is
played
in
this
and
I've
got
some
work
on
evaluating
and
wargaming
out.
These
situations
that
I'm
happy
to
discuss
offline
about
that.
But
I
do
think
it's
a
core
problem
here.
E
I can't put anything in chat; you're using a ten-year-old version of WebEx, so the chat is broken for me, so I raised my hand. Sorry. So: love this talk, love this direction, and I'm very keen on this type of description of what's happening for cloud applications as well, to be able to publish and then explain to their network, or for applications that were downloaded through the app store or something. I feel like the whole IoT space is a much harder space.
E
It's just such a slow-moving space; it doesn't seem like the biggest problem we have to deal with right now. And I view this as something that can help us get some handle on detecting rapid ransomware spread, which is fundamentally a much bigger issue than DDoS or spam right now, right? Yeah, so I'd love to see this go forward. But the thing that I wanted to ask you about, the thing that was really making me think, was about the different states.
B
So I'll make a general point here, which is that I tend to think of software as a complement to people, and so my general feeling is that, instead of the software being all-encompassing, let's create a baseline set of states. I mean, we've been writing and maintaining software for 60 years now, so I think we've got a good idea of what the basic set of states would be. We know we've got remote updates.
B
We
know
the
system
might
be
down
and
we
can
come
up
with
that
category
and
then,
if,
for
example,
somebody
was
to
March
me
into
the
ITF
and
start
creating
standard
for
this,
which
I
got
nothing
better
to
do,
the
the
the
base
set
of
States
I
think
we
could
probably
nail
it
down
to
maybe
eight
to
ten
of
them
and
then,
if
we
included
space
for
additional
public
for
additional
controls
there.
B
But
you
know,
there's
this
question
sort
of.
Let's
say:
sim,
slash,
zeke-ish
interaction
that
I
think
has
to
be
handled
as
well,
but
I
can
see
that
that
would
be
my
first
stab
at
this
and
again,
like
I
said
it
goes
back
to
the
sort
of
levison
idea
of
the
crisis
and
the
role
of
monitoring
there.
So
all.
A
Right, thanks very much. We definitely need to move on, so, Paul, you're up. And thanks, Michael, for your talk; appreciate it.
D
Great, all right, thanks. Thanks, everyone, and thanks to the organizers of the workshop for inviting me to present this work and for organizing and running everything pretty smoothly.
D
So today I'm going to be telling you about some ongoing research work that my collaborators and I are doing, and it fits really well in the context of this workshop. The goal of the work is to resolve the challenging tensions between privacy and security that come up in networks that carry encrypted traffic. I don't need to belabor the context here, but just so that we're on the same page: encryption is becoming really ubiquitous.
D
These
days,
using
like
a
protocol
like
TLS
1.3,
to
use
a
handshake
to
establish
a
shared
key
and
then
use
it
to
crypto
traffic.
This
pattern
is
becoming
like
super
super
ubiquitous
and
not
only
is
it
encrypting
the
traffic
in
networks,
it's
also
encrypting
the
metadata
like
DNS,
as
we
all
know,
traditionally,
networks
enforce
policies
by
scanning
plain
text
traffic
directly,
so
the
network
will
sort
of
check
some
policy
represented
by
this
sort
of
checklist.
D
Here
on
the
traffic
and
if
the,
if
the
traffic
violates
the
policy,
then
it'll
block,
it
else,
it'll,
let
it
through.
So
this.
This
pattern
is
like
super
super
ubiquitous
in
all
different
kinds
of
network
management,
tasks
like
filtering
and
data
loss
prevention
and
Extrusion
detection
and
those
kinds
of
things.
So
the
motivating
question
of
this
work,
which
is
in
line
with
the
goal
of
the
workshop
I
think
is
it
is:
can
we
sort
of
resolve
these
tensions
between
privacy
and
policy
enforcement
that
arise
in
networks?
D
So
this
question,
at
least
in
my
community,
where
I
come
from
the
applied
cryptography
Community
is
very,
very
fraught
because
a
lot
of
cryptographers
argue
that
they're
very
sort
of
like
crypto
maximalist,
and
they
say
that
we
should
just
all
we
like.
We
should
encrypt
everything
and
you
should
only
enforce
policies
that
don't
require
you
to
decrypt
traffic,
whereas
I
think
many
Network
administrators
would
argue
that
you
should
only
use
encryption
where
it
doesn't
kind
of
conflict
with
policy
enforcement.
D
So
this
is
kind
of
a
very
polarized
debate,
especially
in
certain
areas
of
that
ITF,
like
in
the
in
the
standardization
process
of
TLS
1.3.
There
was
a
very,
very
acrimonious
debate
between
Network
operators
and
and
the
TLs
working
group
about
sort
of
protocol
level
ways
of
allowing
these
kinds
of
monitoring.
So
the
goal
of
this
work
is
to
try
to
like
get
both
of
these
at
the
same
time.
So
before
we
talk
about
Solutions,
we
need
to
sort
of
outline
what
the
goals
of
a
solution
would
be.
D
The
first
thing
we
want
to
do
is
to
not
weaken
encryption
at
all.
We
shouldn't
have
to
downgrade
encryption
protocols
or
reveal
some
information
in
plain
text
or
really
reveal
anything
except
to
the
network,
except
whether
the
traffic
is
or
isn't
compliant
with
the
policy,
and
we
also
need
to
give
networks
the
ability
to
enforce
their
policies,
which
in
particular
means
they
need
to
be
able
to
identify
traffic
that
isn't
compliant
and
sort
of
take
some
action
like
blocking
the
traffic
or
blocking
the
user,
or
something
like
that.
D
And
finally,
we
would
really
like
to
ensure
that,
like
external
web
servers,
don't
need
to
change,
so
this
is
sort
of
something
that
needs
to
take
place
between
the
user
in
a
network
and
the
network
operators,
but
the
destination
server
shouldn't
necessarily
need
to
know
about
this
interaction,
and
this
also
makes
it
much
easier
to
deploy
and
as
a
core
layer.
D
We
also
don't
want
to
introduce
additional
trust
assumptions
like
like
installing
root
certificates
or
like
like
sgx,
or
anything
like
that,
and
so
there's
been
just
a
little
bit
of
work
in
the
academic
Community
around
this
trying
to
resolve
these
tensions.
But
all
these
Solutions
fail
in
at
least
one
of
these
points.
D
So
one
important
non-requirement
that
we
wanted
in
this
work
is
to.
We
don't
want
to
prevent
all
circumvention
of
network
monitoring,
because
we
want,
we
really
don't
want
to
build
a
tool
that
can
be
misused
for
censorship
of
traffic.
So
what
we
want
to
do
is
allow
circumvention
by
sort
of
advanced
users
using
like
Tor
or
a
VPN
or
something,
but
it
still
established
a
sort
of
safe
Baseline
that
most
people
are
going
to
abide
by.
D
In
the
in
the
network,
so
as
in
short,
what
we
want
to
do
is
reveal
just
enough
information
about
the
traffic
to
allow
the
network
to
gain
an
assurance
that
the
policy
is
being
followed
but
reveal
nothing
else,
and
if
it
sounds
impossible
to
do
this,
it
turns
out
that
there's
a
cryptographic
primitive
that
can
come
to
our
rescue
here
called
a
zero
knowledge
proof.
D
So
this
is
zero
knowledge
proof
this.
This
funny
slide
with
Superman,
with
the
zkp
on
it's
just
I
spent
a
long
time
making
some
PowerPoint
so
I
just
want
everyone
to
appreciate
how
nice
it
is
with
the
the
lettering
and
everything.
D
We can think of it as a single-message protocol where the prover generates some bit string, which we call the zero-knowledge proof, and sends it to the verifier, and the verifier runs some special verification algorithm to check whether the statement is true. This protocol has two important properties. The first is that it doesn't reveal why the statement is true; somehow the reasoning for the statement's truth is hidden from the verifier. This is the zero-knowledge property. And second, it only allows the prover to convince the verifier of true statements.
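As a toy illustration of the single-message prover/verifier pattern just described, here is a sketch of a Schnorr proof of knowledge of a discrete log, made non-interactive with the Fiat-Shamir heuristic. To be clear, this is not the proof system the middlebox work uses, and the parameters are illustrative only, offering no real security:

```python
# Toy non-interactive zero-knowledge proof: "I know x such that G**x % P == y"
# without revealing x. Fresh per-proof randomness k hides x (zero knowledge);
# the algebra forces the check to pass only for a genuine x (soundness).

import hashlib
import secrets

P = 2**127 - 1   # a Mersenne prime; we work in the multiplicative group mod P
G = 3            # an arbitrary base for the demo

def _challenge(*vals: int) -> int:
    # Fiat-Shamir: derive the challenge by hashing the transcript.
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (P - 1)

def prove(x: int) -> tuple[int, tuple[int, int]]:
    """Prover side: returns the public value y and the proof (t, s)."""
    y = pow(G, x, P)
    k = secrets.randbelow(P - 1)       # fresh randomness masks the secret
    t = pow(G, k, P)                   # commitment
    c = _challenge(G, y, t)            # non-interactive challenge
    s = (k + c * x) % (P - 1)          # response
    return y, (t, s)

def verify(y: int, proof: tuple[int, int]) -> bool:
    """Verifier side: checks the statement without ever learning x."""
    t, s = proof
    c = _challenge(G, y, t)
    return pow(G, s, P) == (t * pow(y, c, P)) % P

y, proof = prove(secrets.randbelow(P - 1))
print(verify(y, proof))
```

A valid proof always verifies, because G**s equals G**k times (G**x)**c; tampering with the response makes verification fail.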
D
So, with this zero-knowledge proof tool, we built a solution for resolving these tensions called a zero-knowledge middlebox. The idea of a zero-knowledge middlebox is that it's a system between a client and a network, specifically some piece of network middleware like a middlebox.
D
When
the
client
joins
the
network,
they
get
a
description
of
the
policy
that
they
need
to
abide
by
in
the
network
and
then,
when
the
client
wants
to
establish
a
connection
to
some
remote
server,
they
use
normal
encryption
protocols
like
TLS
1.3
and
just
encrypt
their
traffic
normally,
but
in
addition
to
sending
encrypted
traffic
the
client
generates
and
sends
to
the
network
or
the
middle
box
a
zero
knowledge
proof
for
the
truth
of
the
public
statement.
Roughly
speaking,
my
ciphertext
contains
compliant
traffic.
D
So
if
you
decrypted
my
traffic
with
a
key
that
I'm
going
to
hide
it,
what
you
get
is
traffic
that
is
compliant
with
this
policy
that
you
gave
me
and
then
the
middle
box
can
act
as
the
verifier
in
this
protocol
and
verify
the
proof.
And
if
the
proof
verifies
correctly,
it
gains
an
assurance
that
the
traffic
that's
still
hidden
from
it
is
compliant
with
the
policy.
And
if
the
proof
fails
to
verify,
then
it
can
just
block
the
traffic
or
take
whatever
action
it.
It
wants
to
take
with
the
with
the
claim.
D
So, with this architecture, I'll just briefly convince you that this satisfies our desiderata here. The first is that it doesn't weaken encryption, because we can use standard encryption protocols like TLS 1.3, and combined with the zero-knowledge property of the zero-knowledge proof system, that means the traffic is hidden. But we're still using the standard encryption protocol, so we don't have to change the encryption, and we haven't weakened it.
D
Importantly, we haven't weakened the encryption, because of the zero-knowledge property. The middlebox can still enforce policies, because the soundness property of this proof system means that when the middlebox verifies that the proof is correct, it always knows that the underlying traffic is policy-compliant. And finally, because the server doesn't even really know about this interaction between the client and the middlebox, or the network, the server doesn't really need to change at all.
D
So there are no changes required to the server, and the middlebox doesn't even really need to forward the proof to the server; it could strip it out of the connection and forward the traffic on. As for building a zero-knowledge middlebox: I won't belabor the technical details in this talk, but I just want to convince you that building a zero-knowledge middlebox is quite non-trivial.
D
So, for example, this is the key schedule of TLS 1.3, which is the newest version of TLS, and this hideously complicated diagram is something that we actually had to digest in building our prototype. But surprisingly to us, and I hope this will surprise you as well, zero-knowledge proofs of properties of TLS 1.3 traffic are close to practical.
D
So in this work we were able to develop a proof for a DNS filtering policy that, while it's not practical yet, is within striking distance of practicality; it's really on the cusp of practical. I'll talk about the evaluation a little bit later in the talk, but I just want to pause here and tell you that, if you tune out after this slide and take away only one takeaway from my talk,
D
I just want it to be that zero-knowledge proofs are a tool that is in our toolbox for solving these kinds of challenging tensions between privacy and policy enforcement in networks. Far from being a sort of theoretical curiosity, zero-knowledge proofs are a real tool that we can really use to solve problems in real networks, and they're on the cusp of practicality right now.
D
So in the rest of the talk, I'll just briefly talk about the architecture of a zero-knowledge middlebox, and talk a little bit about a core subprotocol, which is doing the cryptographic part of TLS in a zero-knowledge proof. Then I'll briefly touch on a zero-knowledge middlebox that we built for encrypted DNS, for filtering encrypted DNS, and I'll talk a little bit about future work.
D
So a zero-knowledge proof is a protocol that takes as input a public statement represented as a circuit. This circuit is not a digital circuit; it's an arithmetic circuit, which is sort of like a circuit over a finite field, so its gates are addition and multiplication gates.
D
But the important thing is that we can represent really any computation in these arithmetic circuits. We imagine these circuits as having public inputs and private witnesses; the circuit takes these two inputs and outputs a zero or a one. If it outputs one, this means that the inputs and the witnesses satisfy the circuit, and if it outputs zero,
D
the circuit's not satisfied. So the zero-knowledge proof protocol, in a little bit more detail, takes as input the circuit, the inputs, and the witnesses, and outputs a bit string, which is the zero-knowledge proof. The prover then sends the inputs, which are public, and the proof to the verifier; the verifier can take the description of the public circuit, the public inputs from the prover, and the zero-knowledge proof string, and run the verify algorithm, which just outputs a zero or a one.
D
So how do we actually build a circuit for a zero-knowledge middlebox? What we need to do, at a high level, is decrypt the traffic using a hidden key, which acts as a witness, and then check somehow that the underlying plaintext of the traffic is compliant with the policy. In the paper we introduced a three-step framework for this, to make it a little bit more modular and simpler to understand.
D
So we have a channel-opening step, which just does the decryption part; then we have a parse-and-extract step, which outputs the policy-relevant data from the underlying traffic; and then a policy check, which checks whether the policy-relevant data is compliant with the policy. For TLS 1.3, the channel-opening sub-circuit has an interesting subtlety here that, for time reasons, I think I'm actually going to skip.
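The three-step framework just described can be sketched in the clear as three composed functions. This is only an illustration of the structure: in the real system each step is compiled into an arithmetic circuit and the key stays a private witness, and the XOR "cipher" below is a stand-in for TLS 1.3 record decryption, not the actual construction.

```python
# Minimal sketch of the framework: channel open -> parse/extract ->
# policy check, run in the clear with an illustrative XOR "cipher".

def channel_open(ciphertext: bytes, key: bytes) -> bytes:
    # Stand-in decryption: XOR against a repeating key (symmetric).
    return bytes(c ^ key[i % len(key)] for i, c in enumerate(ciphertext))

def parse_extract(plaintext: bytes) -> str:
    # Pull out the policy-relevant field; here, a queried domain name.
    return plaintext.decode().strip()

def policy_check(domain: str, blocklist: set) -> int:
    # Output 1 (compliant) iff the domain is not on the blocklist.
    return 1 if domain not in blocklist else 0

key = b"sessionkey"
ct = channel_open(b"example.com", key)           # "encrypt"
domain = parse_extract(channel_open(ct, key))    # decrypt and parse
assert policy_check(domain, {"blocked.test"}) == 1
assert policy_check("blocked.test", {"blocked.test"}) == 0
```

In the zero-knowledge middlebox, the composition of these three steps is the circuit, the session key is the hidden witness, and the ciphertext is the public input.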
D
If people are interested, you can ask me after the talk, but for time reasons I'm going to skip over it. The high-level point here is that with TLS 1.3 there's an interesting subtlety with the record layer: the record layer of TLS 1.3 lacks an important security property for zero-knowledge middleboxes, and so we have to design our circuit in such a way that we can design around it.
D
Basically, this is the lack of a key-committing property in TLS 1.3. So what we do is take the TLS 1.3 handshake and produce an initial key-consistency check that the client uses to convince the middlebox that it knows a hash of the session key, basically. Then, when the client wants to generate a proof, it can basically refer back to this key-consistency check that it did before.
D
For long-lived TLS connections, the work of generating this key-consistency check amortizes over all the proofs generated in the session. So, for encrypted DNS, we built a zero-knowledge middlebox, and I'll briefly describe how it works. I don't think I need to explain how encrypted DNS works; I'll just say that, by design, encrypted DNS prevents the local network from seeing what you're querying, and so it means that enforcing a filtering policy on DNS traffic in the local network is basically impossible within DNS.
D
So this is causing huge tensions in networks, and when Mozilla Firefox tried to roll out encrypted DNS by default in 2019, it faced huge, huge blowback from a lot of technology groups and network operators, and also governments, because this was going to prevent a lot of DNS filtering that people were doing in their networks. And so Firefox built in a sort of downgrade that allows networks to tell the browser not to use encrypted DNS by default.
D
So we took this as a sort of motivation and as a case study, and we built the zero-knowledge middlebox for filtering encrypted DNS. The way it works is that, at the beginning of time, the network creates a circuit that represents a sort of decryption-and-blocklist-check policy.
D
So the circuit decrypts the DNS query using the TLS 1.3 sub-circuit I described before, then extracts the domain name from the underlying bytes of the DNS query, and then finally verifies that the domain name is not in a set of blocked domains that the network creates. And so, when the client joins the network,
D
the middlebox will give the client the circuit and the list of blocked domains. Then, when the client makes a TLS 1.3 connection to the external DNS server and establishes a shared key, they do this normally; and then, when the client sends a DNS query to the server, they include, to the middlebox, a zero-knowledge proof for the circuit pertaining to the blocklist. And so, by verifying this proof,
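The "extract the domain name from the query bytes, then check it against the blocklist" step can be illustrated in plain Python. Inside the zero-knowledge middlebox this logic is expressed as an arithmetic circuit over the decrypted query; this version only shows the computation being proven, using standard DNS wire format for the query name.

```python
# In-the-clear illustration of the DNS-filtering relation: parse the
# QNAME from a DNS query message and check it against a blocklist.

def extract_qname(query: bytes) -> str:
    """Parse the QNAME from a DNS query in wire format: labels start
    at offset 12 (after the fixed header), each prefixed by a length
    byte, terminated by a zero byte."""
    i, labels = 12, []
    while query[i] != 0:
        n = query[i]
        labels.append(query[i + 1 : i + 1 + n].decode())
        i += 1 + n
    return ".".join(labels)

def compliant(query: bytes, blocked: set) -> bool:
    return extract_qname(query) not in blocked

# Minimal query for "bad.test": 12 zero header bytes, then the labels
# 3"bad" 4"test" and the terminating zero.
q = bytes(12) + b"\x03bad\x04test\x00"
assert extract_qname(q) == "bad.test"
assert not compliant(q, {"bad.test"})
assert compliant(q, {"other.test"})
```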
D
the middlebox is assured that the underlying DNS query, which it can't see, is not on this list of blocked domains. There are a couple of details here: there are some minor differences between DoT and DoH that affect runtime a little bit, and we can actually achieve privacy for the blocklist with some fancier crypto, at some efficiency cost. So our experimental results for our initial prototype of the proof generation and verification are on the slide.
D
So for the baseline circuit that we developed for TLS, the proving time was very, very bad: just for setting up a TLS session, the one-time per-session cost was about 94 seconds, which is humongous.
D
So in our future work we're switching to a different zero-knowledge proof system. In the first paper we used a proof system called Groth16, but we're switching to a newer one called Spartan, and the proof times here are much, much more competitive than with Groth16; we see something like a 10x speedup in prover time. So the work that the client has to do to open a TLS session has basically an additional 1.7-second cost,
D
you can think, with Spartan, which is still maybe a little bit too high for some settings. But arguably this is really getting pretty close to practical.
D
We think the trade-off here is that the proofs are a little bit larger, but since this is a one-time cost, like a 49-kilobyte proof for every TLS session, which the middlebox doesn't even really need to store persistently, this doesn't seem too prohibitive. And after the session-setup cost, we can do the decryption step of the circuit in about two tenths of a second of prover time.
D
So we expect that with the full circuit here, these numbers would be maybe six or seven hundred milliseconds. So think: after a 1.7-second setup cost, you can do a DNS filtering proof in about 700 milliseconds on the client, and you can verify it on the middlebox in about 28 to 40 milliseconds.
D
So this is where we're at now in terms of prover time. We're still working on building a full system here; in the full system, actually connecting this to the external application involves a little bit of plumbing cost that incurs some inefficiency, but these numbers are very promising. So we're working on the system right now, and on bringing these numbers down further, and we hope to have a paper submitted on
D
this follow-up work in December or so. So, just in conclusion: in this work, my collaborators and I initiated a new line of work on zero-knowledge middleboxes, which use advanced cryptography, called zero-knowledge proofs, to resolve tensions between privacy and policy enforcement in networks. We built an application of zero-knowledge middleboxes for DNS filtering, and we also designed a couple of other applications, which you can see in our paper.
D
Here's the link, if you want to see these other case studies; we're working on additional case studies as well, and we're very, very excited about this work.
D
We think zero-knowledge middleboxes, and zero-knowledge proofs in general as a tool, have a ton of potential for resolving these challenging tensions and solving these problems that arise when we need to manage encrypted networks. And so I hope you're as excited as we are, and that you think this is worth pursuing. So with that I'll conclude and take any questions.
A
Right, thanks Paul, that was fascinating, and I think it's definitely an interesting tool to throw into our toolbox. I think the first question is from Nalini.
F
Yeah, hi guys, so yeah, very, very interesting work. So I have a bunch of questions, but I'll just throw one in. So I think, if you're talking about blocklists, that's all fine and good, but the problem sometimes is you don't know ahead of time what should be blocked; for example, malware changes the URL all the time, so it's not possible to block it.
F
D
So the question is: if you don't know the blocklist ahead of time, then you can't necessarily block the traffic, like you can't block the DNS queries? Is that it?
D
F
Well, so what happens today, if there are intelligent firewalls and intelligent malware-protection and malware-detection kinds of systems, is they'll look for nonsense domain names, or domain names which, you know, were registered, like, five seconds ago.
F
You know what I mean, and so that's kind of the problem with blocklists. I know a little something about the particular controversy that happened with TLS 1.3, and there are reasons why people decrypt traffic. I mean, as I said, this is very interesting work, and for a certain class of things this is maybe very interesting, but there are many classes of problems, I guess; let me put it that way.
D
Yeah, yeah, that's true. I think we've really only scratched the surface of what these zero-knowledge proof tools are capable of, and to your point: is it possible to do more advanced things in a privacy-preserving way with zero-knowledge proofs? I think the answer is yes, but I would just maybe need to understand a little bit more about the application to understand what the desired behavior is.
D
So, for example, if you wanted to prevent people from querying nonsense domains or very new domains, you could use an allow list instead of a block list, and sort of give an allow list of the domains that you like, say the top 1 million domains. And because of the way the zero-knowledge proofs scale, it isn't very much more expensive to generate a proof for an allow list than for a block list.
D
So these numbers wouldn't really change if you did an allow list of, like, the top 1 million domains or something.
A
All right, thank you. Let's move on to Richard.
G
The question I had for you is that some of the data of interest for security is in the TLS handshake, looking at things like encrypted certificates or Encrypted Client Hello even, as some new stuff. I was wondering if you had intuition as to whether looking at that stuff might involve lighter-weight zero-knowledge proofs than proving things about the encrypted content, the actual traffic.
D
So my sense is that the costs aren't meaningfully going to change if you're looking at legacy TLS 1.3 versus proving things about ECH. However, I think looking at Encrypted Client Hello, so doing zero-knowledge proofs about the Client Hello, like the SNI, and also doing zero-knowledge proofs about encrypted certificates, I think this is a fantastic direction.
D
It's one that I'm actually thinking about right now, so if you want to chat more about it offline, I'd be happy to. Yeah, awesome, okay. So the cool thing here would be splitting the enforcement of authentication from needing to decrypt the traffic. You could have a kind of separation of concerns where you have a system that just checks client authentication, for example, without actually having the ability to decrypt any traffic at all, which is very cool.
H
D
H
You're good. All right, I'll try to make this relatively quick; this is a simpler one. So Red Rover is something that Richard and I have been talking about for a while, and it's aimed largely at solving some of the same use cases that the last talk was covering, but in a certainly less intensive manner and a less provable manner. Still, it's something that we think may be an interesting direction for certain types of use cases.
G
H
You know, overall, the question that we're trying to look at here is: is there a conflict between filtering on a network and privacy? Certainly people have been perceiving that as a conflict as things like encrypted DNS and other things have come about. But to kind of dig into it more, I want to look at what the actual motivations and incentives of the different parties here are, those being the network operator on the one hand, and the client devices and applications on the other.
H
There may be a need to enforce regulations that are required by a country, to block access to certain content for particular users, or just to enforce the terms and conditions that a public network has given. And then, when we look at the client side and the user, I'm talking here about client devices and applications, which are, you know, presumably working on behalf of a user, in whatever configuration the user set up, whether or not they understand all the ramifications of that.
H
But these pieces of software are going to be trying to protect the user's security and protect the user's privacy from various entities, either servers or the network operators, learning too much about their browsing history and selling information, etc. And, of course, the clients want to be able to give users access to the content that they want to have access to.
H
So how much are these two sides of things in conflict when we look at the mechanisms that we have to actually achieve these goals?
H
On one hand, we have the network operators traditionally using things like DNS filtering, or DNS redirection to get you to some landing page or block page; TLS SNI filtering; or just firewalling IPs. And on the client side, applications are starting to use more encrypted DNS; certainly most kind of user-facing web traffic now is TLS-protected; we expect TLS Encrypted Client Hello to come about at scale sometime soon, that's been under development for a while; and also users may choose to enable VPNs or proxies to get added security and privacy.
H
So at this level, you know, we certainly do see that there can be a conflict: encrypted DNS, as we've brought up already, can interfere with DNS filtering; TLS Encrypted Client Hello is directly in conflict with TLS SNI filtering; and VPNs and proxies prevent some types of IP firewalling.
H
So at this level, these do seem to be in conflict. However, I think when we look at it, the intents that we had originally don't inherently conflict. Sometimes they will, but for a lot of the cases, like "I'm just trying to do parental controls on my home network," or "I have a public network at a cafe that's trying to enforce the regulations and the terms and conditions, but doesn't necessarily need to snoop on all of my users," they don't conflict. What conflicts is the protocol mechanisms that are traditionally being used there.
H
And these are broad statements, but I think, in general, most clients and most client applications and operating systems don't want to expose users to harmful content; they don't want to be downloading malware, and they don't want to be violating the terms and conditions that networks have. And a lot of networks don't want to interfere with user privacy and security.
H
I think, collectively, if we could get no malware, upholding parental controls, and still have user privacy and security, that would be a good situation that both sides of this would be happy with.
H
So for this particular aspect of collaboration, one of the things we looked to was an existing model, where we have a collaborative approach for providing malware filtering or other types of objectionable-content filtering, and this is something that's called Safe Browsing. I imagine a lot of us may be familiar with it, but a lot of us may not be. So Safe Browsing is something that many, many browsers are already doing today, most of the main ones do this, and there are a few major Safe Browsing
H
services; I think the primary one is a service that Google runs that a lot of the browsers talk to. The way this works is that there's a server-side known and maintained list of malware, phishing sites, etc. that need to be filtered out, and the browsers will communicate with that server periodically and download a list of partial, truncated hashes of URLs that are bad content, for different categories; this can be malware, phishing,
H
or other types of content that the browsers don't want to be able to show. And then, when a browser is about to load a page or a URL that matches one of those partial hashes, it communicates again with that server to get the more complete hash list for the bucket that corresponds to the partial hash, and then it learns whether or not that particular URL is indeed on the block list, and it will block that page if there's a match.
H
So this is done, and is used successfully at large scale, to block malware.
H
So the collaborative approach that we have with Red Rover is saying: can we use this model, and instead of the browser talking to one main server, bring it to a smaller scale of just clients and client OSes talking to their network provider? And the proposal here is that, essentially, you need two pieces. You need a discovery mechanism by which the network can tell the client that there is this blocklist service.
H
So this is client-enforced blocking behavior, but done in coordination with a list that's generated by the network, where the client doesn't get to learn the entire contents of the list.
H
So we have a lot of different building blocks that are already IETF RFCs that we could use, or things that we could modify from Safe Browsing. On the discovery side we have a lot of different options: this kind of looks like it could fit into something like the Captive Portal API, where you learn extra information about a network; similarly, there are Provisioning Domain options; or you could just put this information directly into DHCP or RAs. And then, for the blocklist service,
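As a purely hypothetical sketch of the discovery piece, the Captive Portal API (RFC 8908) already has the network serve a small JSON document to clients; one could imagine an extra key in that document pointing at the network's blocklist service. The `blocklist-service` field and its URL below are invented for illustration; nothing like them is standardized.

```python
# Hypothetical: a Captive Portal API response extended with a pointer
# to the network's blocklist service. Clients that understand the
# extension discover the service; legacy clients ignore the extra key.
import json

capport_response = json.loads("""
{
  "captive": false,
  "user-portal-url": "https://example.net/portal",
  "blocklist-service": "https://example.net/redrover/v1"
}
""")

service_url = capport_response.get("blocklist-service")
assert service_url == "https://example.net/redrover/v1"
```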
H
H
we could look at having longer, more complete hashes that allow you to not have to check in with the service quite as much, or look at techniques from private information retrieval.
H
Now, with this type of approach, there obviously can be issues with how easily circumventable this is, and also with how well this works in a situation where you have both clients that can be updated to work with a system like this and legacy clients.
H
So, since this is on-device filtering, there are trade-offs. Some aspects are actually better: when we're doing on-device filtering using hash lists like this, we can provide stronger assurances than DNS or SNI blocking alone.
H
So it can be more complete in that regard, and be, like, truly on-device mechanisms, but with this network assistance. However, on the other hand, a malicious or compromised client could certainly cheat: it could say "oh yes, I'm doing this," but in fact not actually apply the blocking. And, of course, legacy endpoints, clients that haven't been updated, wouldn't know about this service. So I think the model that comes out of this is that you would probably need some sort of selective enablement.
H
So here we have different categories of clients, and then what the network will do. If you have a legacy client, it would just continue doing traffic as normal, and maybe this network is going to apply its current policy like it has been: doing DNS filtering, and blocking DNS encryption by doing the markers that we mentioned earlier, trying to disable what Firefox does, or trying to block other proxies or VPNs, because it wants to be able to enforce its terms and conditions or required filtering.
H
And then you could have modern clients that could be updated for this, and they would know how to check in with the service. Likely we'd also want some sort of attestation of these devices, or of the software running on them, so that when they are checking in, you know this is kind of a legitimate client, doing some level of proof.
H
This is similar to what's already done for Safe Browsing, where, when clients are checking in to Safe Browsing with the Google service, they provide some proof about what version of the client software, the browser, is being used. That doesn't prevent all types of attacks, certainly, but it means that the bar to circumventing this is higher.
H
H
And when the network recognizes that a client is doing this and has provided enough proof, then it could allow DNS encryption, proxies, etc. And presumably, if we had some good-enough attestation, then a malicious client that is trying to check in to the service but not actually apply it, because the software has been compromised or something like that, would fail the attestation, and then the network would go back to the stricter stance, where it filters or blocks encrypted DNS.
H
So, you know, obviously this is not going to work for all kinds of networks; to Nalini's point on the last talk, there are many categories of networks here that this would not be nearly strong enough for. If you rely on a TLS-intercepting firewall, for example, and you need to decrypt all the traffic,
H
this is not good enough for you by any means. But I think there is a set of common use cases, for either public networks or home networks, where you're just trying to enforce best-effort guarantees, where this could actually solve a lot of problems without having a huge amount of overhead, and could maybe be a lighter-weight way to get to some of these things than, like, doing a full zero-knowledge proof, etc.
H
Today, in particular, for cases where people are just trying to enable parental controls on their home network, this probably actually meets the bar, because by the point where you can compromise the software enough that it's able to get around this, you can probably get around a lot of the things anyway that the current network and DNS filtering mechanisms are vulnerable to.
H
So, in summary: we believe that, fundamentally, filtering content and user privacy do not need to conflict, and we can have collaborative solutions that work in the situations where the networks and the clients and the users have goals and incentives that all align. That's not always the case, but we do believe that exists in many cases, and mainly the gap here is that we need to talk about more standardized protocols to enable these collaborative models.
H
And there are ways that we could do this that aren't a huge amount of effort and new work, so we believe it's achievable. And that's all I have.
C
D
H
Right, I mean, I think you could have either a model where there's a safe-browsing service, essentially, that runs very close to you in the network, or your network points you to the safe-browsing service that it wants you to apply, which may be one that's shared across many different networks.
D
Yeah, so I'm just wondering: the really dumb solution to this problem is to just basically have a safe-browsing service, but all it is is a cache of the safe-browsing URLs, plus whatever the network wants to add, and then clients just download a gzip of the list when they join the network, right?
D
H
Of the actual list to block, right. Right, so, I mean, yes, that's also kind of the initial thing that I had proposed in some of our conversations around this. I think the two-fold concerns there are: these lists can be quite large and quite dynamic, oftentimes. They would be larger than you'd want to just download and store on every device, and there's...
D
H
I'd have to look it up, but, yeah, I think if you wanted to get the full list of URLs, I don't know, it's half a gigabyte or something; you know, larger than you want every device to just have.
H
And I think the other aspect is, you know, particularly with normal Safe Browsing, due to how dynamic this is, there is concern with just having the entire list be published, because then it'd be very easy to just keep generating names that are not on the list. It's very much an arms race: you know, a new bad site pops up, it gets on the list; once they learn they're on the list, then they modify their names.
G
A
C
Yeah, hi Tommy. That's a really interesting talk. I suppose, really for clarification: Safe Browsing, as implemented by the Safe Browsing API, only deals with URLs. What you're potentially suggesting here is something that's going to intervene, necessarily, with the DNS resolution process. And bearing in mind, also with Safe Browsing, that you could have perfectly reasonable content on a website but also have, within that same website, a page that's been dedicated to a phishing attack; you know, those two things can coexist.
C
H
Yeah, I mean, I think the proposal here is that we would take what is currently a URL-based, browser-based thing talking to a server and make it a real OS service, in which, if we know full URLs, yes, we can have those too. But I think, yes: filtering even just on domain names, at DNS resolution time, and essentially hooking this into getaddrinfo on all of the devices, and just operating there, would be the proposal.
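A minimal sketch of that idea: wrap the OS-level name lookup (getaddrinfo) so a network-provided blocklist check runs at DNS resolution time. The `check_blocked()` function below is a placeholder for the Red Rover service lookup, and the blocked name is invented; only the `socket` calls are standard.

```python
# Sketch: a getaddrinfo wrapper that applies a blocklist check before
# resolving, mimicking a resolution failure for blocked names.
import socket

def check_blocked(hostname: str) -> bool:
    # Placeholder for the network's blocklist-service check.
    return hostname in {"blocked.example"}

def filtered_getaddrinfo(host, port, *args, **kwargs):
    if check_blocked(host):
        # Surface the block as a normal resolution error.
        raise socket.gaierror(f"{host} blocked by network policy")
    return socket.getaddrinfo(host, port, *args, **kwargs)

try:
    filtered_getaddrinfo("blocked.example", 443)
    was_blocked = False
except socket.gaierror:
    was_blocked = True
assert was_blocked
```

An OS-level hook like this would apply to all applications on the device, not just the browser, which is what distinguishes it from today's browser-only Safe Browsing checks.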
G
I actually think this is one of the cool things about this proposal. Unlike, say, filtering in a DNS resolver, a safe-browsing filter, at least the way Safe Browsing is applied today, operates on a full URL. So you can do precisely the distinction you're talking about, where you allow the innocuous parts of a domain name's web content but block the malicious parts, and all of that can be done even before DNS resolution, before there's any interaction with the site, just based on vetting the URL the user has entered.
A
I think that is the queue, all right. So thank you, everybody, for the three invited talks and the really wonderful questions. At this point, you know, we would like to just have the larger discussion, both about this day in particular and, I'll say, why don't we just roll all three days into a single topic at this point as well?
A
Are there people that want to discuss, you know, how do we get from hearing all these talks to actually making progress and making a movement in this space? I think the proposals today are certainly interesting; are they actionable, are they, you know, usable, and things like that? Does anybody have...
A
So, I mean, one of the... let me seed it a little bit, then. I think one of the common themes is that all three of the presentations had some scalability issues.
A
A
The dynamic nature of it, which actually, I think, is one of the things that makes me worry about MUD and things like that, is that behaviors change. How do you prevent an actor from getting in the middle of that and, you know, publishing a new behavior? And, you know, behavioral analysis in general has always led to way too many incidents of compromising, or, you know, signals and things like that that just overwhelm the operators.
A
That's what Michael was talking about in the first talk, for example, and that apparently worked. So, Richard.
G
Yeah, it looks like you got a few entries in the queue while you were advancing. So I wanted to kind of expand on Tommy's thoughts a little bit and throw a question out to the network operators in the crowd. The premise of something like Red Rover is that, you know, there is collaboration there, in the sense that the filtering system only works
G
G
if they let you dial back on some other security mechanism; say, you might allow VPN connections out of your network from a host that agreed to apply your filter
G
in this safe-browsing sort of scheme. So the question is, you know, from a network operator point of view: is that actually a trade-off anyone would be willing to make?
G
Would you be willing to dial back some of the protections that you're applying today, be a little less paranoid in terms of enforcing things in the network, in exchange for offloading some of these functions to the client, trusting the client to do the right thing?
A
Whoops, sorry. I'll leave it to somebody to answer that. That's a very good question. Rob?
C
Yeah, I wouldn't ask that question, but I think it's an interesting question. So I really liked... can you hear me? Am I audible? So, I really liked the Red Rover presentation, and I liked the thing in Richard's presentation yesterday, in terms of it being a collaboration between the network and also the user of that network: effectively, both of them have needs, and it's trying to get the balance between those two needs, both the network's and the user's. So I like that. I think one key question I have on my mind is...
C
...relatively quickly, and my question is really about the timing of these sorts of mechanisms for this sort of selective filtering: should we allow ECH to progress to the point of being standardized before we have these mechanisms, or should we be trying to ensure that these mechanisms are available before we go the whole hog to ECH? That would be my question.
A
...networking questions, because, having worked on both sides, it's always a strain between users and networks. I think the multiple questions around whether there is a balance that both sides can agree to are kind of fascinating. Nalini?
F
So I'm going to pull back a little bit and think about kind of two questions: what have we done and where are we going, and what's the problem? And I think the problem is...
F
...this is a very large problem, and I don't think we even have a scope for it, because there are multiple different kinds of implementations. I mean, there's the wider internet, there's the IoT world, there are private managed networks, and then there's the other problem of very targeted solutions versus some kind of overarching solution.
F
I have to think a little bit more, but everything is very, very limited in scope, and there's no question that that's certainly a route to go down, because I have to think about what is practical. In some senses, the last presentation, from Tommy and, I believe, Richard Barnes, is a very practical and probably doable solution, because there's a limited number of people who need to implement it, and I believe probably one of the people who could implement it is a co-author.
H
Well, I was just going to try to give my two cents on what Nalini was asking there. I think it's a very good observation that there doesn't seem to be an overarching solution, and I don't think this space will have an overarching solution. If we look at the previous state, with unencrypted traffic, everything was just kind of out there and available, and so you had the DNS to look at and filter or modify.
H
There were a lot of different approaches, but they didn't have to coordinate on them, because all of the information was there, and so it seemed like one simple space that now was disrupted; but really there were many different things. So I expect, as we have more encryption, and if we want to have explicit collaboration, we'll realize that there are 20 different use cases and different things that networks want to do to manage traffic, and those may each have a completely separate mechanism to support them. So, like in the Red Rover case, it's: how do we solve just this?
H
You know: block the things I'm required to block by regulation or parental controls. That can be one mechanism, and the mechanism for doing zero-rating or optimization on a network, or other things like that, can be a completely different one. I guess that makes more work for the people writing the standards and doing the things, but I don't think we'll have one solution that will fit every network and every use case they need to apply.
D
Yeah, kind of following on what Tommy asked, I'm also wondering, Nalini, to your point: do you think it would be useful to try to come up with a proposal for some kind of holistic privacy-preserving managed-network architecture? I mean, the concern I have is, like, if someone proposed this...
F
If that's for me, I can go ahead and answer. Okay, so I do have some thoughts on a unified structure. Of course, my problem, and we can talk, maybe I'll talk to Mirja and some of the other folks offline, is that it's encumbered, and so that makes it problematic. Or maybe not. But again, I have had some thoughts on a more unified structure.
F
The problem, though, is implementation. So there are two things, right? One: what is a possible re-architecture? And two: who will actually do this? And then we have to live with the realities of: will Cisco cooperate with Microsoft and Huawei and Apple? There's the perfect pattern, there are the practical solutions, and then we have the problem of non-greenfield implementations: anytime you propose a holistic architecture, it really needs to be transparent and seamless.
A
Thanks. Let me interject sort of an example I was thinking of earlier, about one of the reasons why what people are describing is difficult, right? The different places where you might need to deploy something can be very, very different in what they need. I mean, if you think about the solutions that we've talked about today: enterprises want to be very strict in what they allow in and out of their network, and they have legal reasons for doing so, both for data leakage as well...
A
...as, you know, potential lawsuits, whereas home users actually want to be protected. Those two different types of scenarios have very different end goals that firewalls and things like that have been able to handle generically before, and there's a question, I think, that people are bringing up: those two use cases are quite different. So I'll just throw that out there as examples people may or may not want to talk about.
C
True. Thanks, Liz. I kind of agree with what Tommy and you guys are saying, but at the same time, during this discussion I also realize that, for instance, the title of our workshop, the whole of network management itself, is such a vast topic, and then people use terms like: okay, how do we not decrypt, or do this particular solution?
C
So one of the outputs that I could see from this workshop is that, whenever we are thinking of this, we try to categorize and come up with sort of a framework, a categorization of the various different use cases, where we categorize network monitoring into the things that we talked about today: you know, the block-list one, the safe-list one, TLS and DNS, all of these are different, and malware.
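A tiny sketch of the categorization being suggested: a use-case-to-mechanism matrix seeded only with examples raised during the workshop. The entries are illustrative, not an agreed taxonomy:

```python
# Toy framework: management use cases mapped to candidate mechanisms
# mentioned in the workshop discussion (illustrative, not exhaustive).
FRAMEWORK = {
    "content filtering (regulatory/parental)":
        ["block list", "safe list", "client-side enforcement"],
    "malware detection":
        ["DNS-based filtering", "behavioral analysis"],
    "traffic optimization / zero-rating":
        ["flow classification", "application-declared needs"],
    "debugging / diagnostics":
        ["endpoint logs", "explicit network signals"],
}

def mechanisms_for(use_case: str) -> list:
    """Look up which mechanisms have been proposed for a use case."""
    return FRAMEWORK.get(use_case, [])

print(mechanisms_for("malware detection"))
```

Even a rough table like this makes it visible that the use cases do not share a single mechanism, which is the point several speakers go on to make.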
A
Thanks. Jari?
I
Yeah, thank you. So maybe I'm repeating a little bit of what other people have already said, but I guess I'm agreeing with Nalini that there's a big space out there, both in terms of solutions and even problems. I think if you're trying to optimize something, you're trying to debug something, you're trying to express your preferences, you're trying to filter something...
I
...these are different things, and we should not, in my opinion, assume that there's one solution that solves all of them, and we should not chase that path. So if you find a nice solution for a particular part of the problems, then go for that. For instance, some of the filtering things today were quite interesting and probably fruitful avenues for research.
I
They will be able to provide a function, and even some server equipment could provide some of the lists and facilities for that, and get the thing that the users actually want executed and done; so that's great. But there might be some other situations where that's not as easy. I don't know about the enterprise, for instance; there could be a different problem there, or maybe not, maybe others can comment on that. Also, governments sometimes put requirements on other people and other entities, and good governance, I guess, should be technology-agnostic, so that if you can do it with DNS-based filtering or with some more clever trick with zero-knowledge proofs, for instance, then both should be fine; but in reality it's not always as simple as that.
J
So my one takeaway is that there's not one solution that fits everything; we have to go on a case-by-case basis. But the good news is there are families of solutions, and I think when we have a case and we find a solution, we can learn something from it and maybe apply a similar approach to different problems. So what we could do is look at the solutions that we have here and figure out what the patterns are.
J
What is different from what we do today, and how can we apply it to other problems as well? But it's probably a one-by-one study, and it's a lot of work, and that's why we have this workshop here. And then, very quickly, I wanted to reply to this other point Jari was also talking about: content filtering.
J
This is mostly done by network operators because they have legal obligations to do so, and so they would need a very strong proof that this is done correctly in order to fulfill their legal obligations. But if you look at what Tommy presented, effectively you don't need the network operator in this picture, right? Because why should the network operator provide you a block list? Why does the network operator know anything about the content?
J
It's only this way because of these legal obligations. So if the legal obligations changed, and it were not the operator who has to do the blocking but the operating system, you wouldn't need this step at all. So this is the dilemma we have here, and maybe also something to consider.
E
I was just trying to think of what we could pull out of this as sort of steps forward, and it seems to me that a lot of the proposals we've had so far group broadly into categories: things that, like Michael Collins's sort of work, start from the application describing what it needs, upward; and some of them come from the idea of the network having some sort of list of, you know, adult browsing or that type of thing, or something like that...
E
...that's coming down, and the network can pass it on down. Ignore the up-and-down framing; there are just two different ends here, and I think we should approach this from both directions simultaneously, with small, simple solutions, the simplest ones we can, and they won't be perfect, and we'll have to make many small increments. It goes back to what Tommy was saying: this problem was previously solved in a lot of different ways that came together.
E
They might have used some common mechanisms, but they were actually very different systems, so we shouldn't be bothered that we have solutions to parts of this problem that slowly fit together to make a broader solution over time. In particular, I would be very excited to see some work that helped applications describe what they use on the network, so that that could be used to help filter network events and better understand what they did.
E
I think that'd be great work to have. And then broader ways of describing both block and allow lists, not only block lists; having the network be able to describe those lists in some compact way; being able to bring those into policy enforcement points, wherever those are; and some way for the network to communicate those types of things. I think we need better solutions to all of that.
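A toy sketch of the two directions described here: an application declaring what it needs "upward", and the network supplying block/allow lists "downward", meeting at a policy enforcement point. The field names are invented for illustration (loosely MUD-inspired), not a proposed format:

```python
# Toy sketch: an application's declared network needs (hypothetical
# fields) combined with a network's allow/block lists at a policy
# enforcement point. An explicit block always wins.
app_manifest = {
    "app": "example-updater",
    "allowed_destinations": ["updates.example.com"],
    "protocols": ["https"],
}

network_policy = {
    "block": {"tracker.example.net"},
    "allow": {"updates.example.com", "docs.example.org"},
}

def permit(app: dict, dest: str, proto: str) -> bool:
    """Permit a flow only if the app declared it AND the network
    allows it; a destination on the block list is always refused."""
    if dest in network_policy["block"]:
        return False
    return (dest in app["allowed_destinations"]
            and proto in app["protocols"]
            and dest in network_policy["allow"])

print(permit(app_manifest, "updates.example.com", "https"))  # True
print(permit(app_manifest, "tracker.example.net", "https"))  # False
```

The interesting design question raised in the discussion is exactly where `permit` runs: in the network, on the client, or split between them.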
A
Thank you, Colin. I put myself in the queue in part to respond to the AI-related question earlier, and actually it follows really nicely onto what Colin was just talking about: the need to have both block and allow lists and to figure out a way to mix them. When you have AI- and machine-learning-related techniques, you have to consider what the actions of those will end up being. I'm doing research right now into traffic identification to allow network prioritization of particular flows.
A
And so you can say: oh well, this flow is quite possibly an interactive SSH channel, and therefore you might want to prioritize its latency over, say, some video channel that is being buffered, or something like that. But the important thing is the end use case, right?
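A toy heuristic standing in for the ML classifier being described: interactive flows such as SSH tend to show small packets separated by long idle gaps, while buffered video pushes large packets back to back. The thresholds below are invented purely for illustration:

```python
# Toy flow classifier (a stand-in for the ML techniques discussed):
# decide whether a flow looks interactive (prioritize latency) or
# bulk (prioritize throughput) from simple packet statistics.
# Thresholds are invented for illustration only.
def classify_flow(packet_sizes: list, gaps_ms: list) -> str:
    avg_size = sum(packet_sizes) / len(packet_sizes)
    avg_gap = sum(gaps_ms) / len(gaps_ms)
    if avg_size < 200 and avg_gap > 50:
        return "interactive"   # small packets, long think-time gaps
    return "bulk"              # large packets sent back to back

print(classify_flow([90, 60, 120], [300.0, 800.0, 150.0]))  # interactive
print(classify_flow([1400, 1400, 1400], [1.0, 2.0, 1.0]))   # bulk
```

The speaker's caveat applies directly: a misclassification here only degrades priority, whereas the same kind of classifier driving a block decision could cut off traffic the user actually needs.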
A
...you actually block something that needed to get access. I mentor high schoolers all the time, and the high schools decided to have a rather expansive block list that blocks things the students actually need. I understand it, but, I mean, they block DuckDuckGo but not Google; the decisions don't always make a whole lot of sense. GitHub, actually, they had to ask for it to be released every year because the students use it heavily.
A
So when you end up having those two use cases, there's still got to be some negotiation for how a client adds or removes their needs individually, which is also something multiple people have talked about. So with that, I think the next person in the queue is Paul Grubbs.
D
Yeah, so I just had a question going back to the point Mirja brought up about regulation. I was just wondering...
D
...for people who have more experience with regulation: when regulations dictate content filtering, does the regulation say the networks have to inspect traffic, or does it just say they have to enforce content filtering? Because seemingly, if they only have to enforce it, then some kind of client-side solution would work; but if they actually are legally required to decrypt, then there's really nothing...
D
...we can do. And as a follow-up question: do people think it's feasible to talk to regulators and explain these privacy solutions to them? Even if we develop the best privacy solution in the world, if the regulators don't understand it and won't use it, if the regulations don't allow it, then it's a moot point. So I guess I'm just wondering where people view the regulatory environment as being.
F
Yeah, I'd like to hear Paul's question answered too; as far as I know, there are a bunch of different regulators. But what I wanted to come back to, and I'm a little conscious of the time too, is thinking about next steps. I really like a number of the ideas; I think Drew brought something up, and Jari...
F
One concrete output that might be interesting is: what are the environments that we're talking about, and what are the kinds of problems that are solved today and that probably need to be solved tomorrow? The problem, of course, is that we are a small subset of people, and it's hard to know, within ourselves, whether we all know everything that's being done. So maybe a crowd-sourced approach, with people throwing in: what are all the things that you need?
F
I mean, what do you do today? And it can be roughly categorized: performance monitoring, diagnostics, malware detection. You can at least categorize that. But if at least we know that, okay, we've got six concrete environments we need to deal with, and they have X specific characteristics: IoT networks are like this, by and large, or whatever; enterprise regulated networks are like this;...
F
...the wide internet is like this, or something like that. And then have another list of: okay, what I need to do is detect malware, diagnose problems of X kind. I don't know if that would be of use for us to have as an output.
I
Overall, I haven't seen any statement in the IETF about the need to support, with appropriate IETF technologies, securing not only the privacy aspect but also other interests that we feel we want to support, right? So do we think we do not need to care about how parents can protect their children from abuse on the internet or unwanted content, or, equally, enterprises protecting their content and access to their network?
I
So, I think malware was brought up, right? All these things, I haven't seen a good statement from the IETF about them. It just seems that we're never getting out of our corner of saying privacy above all else, and that just means that what I'm seeing is an even worse ramp-
I
up of the different interest groups, right? For example, the more we encrypt on the network without providing any form of a way to build policies to do this, the more I see the intrusiveness against the privacy of the users move into the endpoints. I mean, endpoints in enterprises are having all these terrible MDMs that are so intrusive, that are kind of reaching into every application doing stuff, allowing the fascists in whatever department of the enterprise to inspect everything. And so I just don't think we can win it; we're just moving the picture around, unless we actually have a broader set of goals that we are following to do something in the IETF besides just privacy. I mean, the whole concept of a firewall has no official representation in the IETF, which, you know, acknowledges it's there, but we have no plan about it; the only plan we had at some point in time was BEHAVE, right? So I think that's...
A
Oh, very good points. Since the queue seems to be empty, minus some plus-ones to what people have said, one of the things that I would love to hear, given all of this: it seems like there are a few people that have come up with a number of small ideas for going forward and things like that. Is there anybody that wants to propose some thinking that you've gotten out of this?
A
We could turn to some brainstorming on things to do next, and maybe even build some collaborations. Or does anybody have projects that they're working on that they want help with, or want collaboration on, and things like that, given the papers? Hopefully people have read more of the papers than just what the talks covered today. Does anybody want to speak up about things that they might want to do?
I
Yeah, so I was bringing up in the chat just now this discussion, where Mark also chimed in, with respect to the following: I really liked what we had before TLS 1.3 came in, namely being able to know, in a somewhat cryptographic fashion, whom you're talking to by looking at the server certificate. That solves, I think, a lot of requirements. Of course, it wasn't doing it well, right? It was removed because it enabled intrusiveness.
I
So the question is really: could we come up with a cryptographic scheme that would allow a three-party relationship, where a client would learn: okay, you need to allow the network to see the authenticated certificate of the server you're talking to, but that's all, so it's not able to intrude on the content.
I
You'd have privacy on the content, but be able to have filtering based on a firewall-authenticated certificate of the server. So at least it would be, I think, a cool cryptographic and session-establishment problem to solve. Then, of course, comes the whole question: is this a useful compromise? Which goes back to my prior statement that we don't have any IETF position on that beyond "everything needs to be protected from any third party." So that's what I would suggest, because it's hard, and I wouldn't know all the cryptographic aspects to make a three-party transaction like that work well.
G
Let me just inject a brief response. I just want to observe that, on the objective of having filtering applied by the network, one of the core points of both Red Rover and the zero-knowledge middlebox work is that, to do filtering in the network, you don't need to expose to the network all of the names of the services that people are connecting to. So I think that's a property that we should aspire to in any work that's done in this kind of next round of collaborative solutions.
I
I would say that's maybe a good solution for some problems, but I don't think it's the correct solution for all the problems, right? That would be a longer discussion, but I'd like to raise a counterpoint on that: we are using authentication for everything that we do want to talk to. Why don't we want to use authentication for decisions made on policies about whether or not somebody is allowed to talk?
G
The point of both Red Rover and the zero-knowledge middlebox work is that the information that is being used for the policy decision is in fact authenticated, and authenticated, at least in the middlebox case, to the network.
I
Sure, but then it goes back to: where do you derive whether it is allowed or not, right? And I'm saying that a very simple way to do this is by being able to know who you're connecting to, by cryptographic certificates. If you insert another policy layer that basically says there is another complex entity that decides whether or not that's allowed, in order to hide the identity of the service you're connecting to, like classifying them, that's fine, but that's an even more complex solution.
A
One of the fundamental tenets that we are struggling with in this work is trust, right? We've talked a lot about trust in these different cases: does the network trust the user? The flip side of that is: does the user trust the network? And sometimes it's nice when we all trust each other, but in many adversarial networks, in coffee shops and things, you probably don't want to trust the network, and the coffee shop probably doesn't trust you. And finding solutions that work across all of those is, I think, one of the things that Nalini was saying would be hard to do.
A
It'll be challenging to do that. So I think at this point we have drained the queue; lots of good, interesting discussion. Do any of the other chairs, or Mirja, or I guess any other IAB members, want to ask some follow-on questions about things that we could take out of this? Certainly the IAB thanks everybody for participating in this workshop. It's been valuable to at least learn what people have come up with.
J
Yeah, maybe I want to at least add that, for me, it was very useful to see all these inputs together in one workshop and have the discussion. I'm not sure if we have any concrete follow-up items other than to just keep working on our stuff somewhere in the IETF, but that's not too bad an outcome.
A
We seem to have timed these perfectly every day; I don't know how in the world that happened, but the discussion has definitely petered out at just the right time. So I'd like to thank everybody, and, as I said, the IAB thanks everybody for participating. This has been fascinating.
A
We will probably send out mail at some point about whether this will turn into a workshop RFC for a report. I've taken lots of notes, and we will also look at the recordings and things like that, so you should be hearing from us again. Great. Thank you, everybody, and we will solve this problem one day.