From YouTube: IAB workshop on Management Techniques in Encrypted Networks (M-TEN) Day 1: Where We Are (2022-10-17)
Description
User privacy and security are constantly being improved by increasingly strong and more widely deployed encryption. This workshop aims to discuss ways to improve network management techniques in support of even broader adoption of encryption on the Internet.
Workshop page: https://datatracker.ietf.org/team/mtenws/about/
A
Can everyone see this? Does it look okay? You're good? Great. Well, thank you, everyone, for joining, and welcome to our first day of the IAB M-TEN Workshop, which stands for Management Techniques in Encrypted Networks. On behalf of everyone on the IAB who is helping to arrange this, we're excited to have you all here, and we appreciate all of the contributions you've been making. As one point of order to note: this is being recorded, and the other sessions will be recorded as well, and these are planned to be posted to YouTube. If there are any concerns or issues with that, please let us know.
A
That's the end-to-end encryption that's being used to protect user data and user privacy, and we want to see if, through this discussion and this exploration, we can initiate new work, specifically on collaborative approaches that can promote security and user privacy while also supporting and enabling operational requirements that networks may have.
A
Our goals are, you know, not broad ones like "oh, we need to see all the bits," but practical ones: goals that we think people can agree on, and that we can see being viable — having viable paths forward in encrypted networks.
A
To structure the discussion, we have three days, and we've generally grouped these into topics. "Where we are" — the state of things for network management and encryption, not yet looking at the next steps — that'll be today. Tomorrow we'll go over some of the principles and ideas of where we want to go, and then, on the third day, we're going to talk about some of the proposals that we've gotten in the contributions: ideas for directions, or practical next steps on how we increase collaboration for network management and encrypted traffic. For each of these, we'll start out with about half the session being presentations of some of the work that was contributed, and then the second half will be open discussion that anyone can contribute to. So I'll go into the agenda for today. I'm Tommy Pauly; I'm chairing this session.
A
Then we'll have an invited talk from Laurent Vanbever on the current state of some of the techniques we have for preventing traffic analysis — or, more like, what are the extreme angles of how we can encrypt and prevent networks from learning too much user information. Then Mallory will take us through some of the state and current thoughts about user privacy and how it interacts with safe measurement on the internet, and then, for the rest of the session, we'll have open discussion from a practical standpoint.
A
We are using WebEx, so I'd like to propose that, as we are doing questions and queuing, people use the WebEx chat and type a "+q" in there, and then we can manage the queue that way. All right — any other things I missed, or that we should cover before we get going?
A
Yep, we can do that. Do you want to present your own slides, or should I present them here?
C
You can present them, yeah. Okay.
C
Okay. Hello, everyone, my name is Chase. I'll be talking about how to design robust and efficient classifiers for encrypted traffic in the modern internet — especially going over some of the challenges associated with it, and exploring some potential directions that we can move towards.
C
So, as we all know, network traffic classification is a fairly common network management task. It usually involves inferring services and applications, efficiently and accurately.
C
That's the key component that allows network operators to perform a fairly wide range of essential tasks, such as capacity and resource planning, QoS monitoring, etc. There are some conventional approaches to traffic classification, which often use network features that are hand-extracted from expert knowledge.
C
But more recently, people have been trying to use machine learning to perform classification — both classical-learning-based as well as deep-learning-based methods — and these methods have generally performed quite well when they are applied to curated data sets and evaluated in very specific contexts. They have frequently depended on domain-specific features, such as IP addresses and information that is readily available in, say, unencrypted packet payloads.
C
However, given the rise of network traffic encryption, such long-established network classification methods may no longer be effective. In this position paper, we look at some of the challenges associated with designing classifiers that are robust and efficient in the face of pervasive traffic encryption, and we also suggest some possible research directions to look into. If you want to go to the next one.
C
Okay, so one of the things that we are looking at is why current encrypted traffic classifiers are not enough. First, the increasing use of different network traffic encryption schemes tends to alter the feature space of machine-learning-based classifiers. This happens, first, through reducing the usefulness of the effective features: inherently, some of the features are becoming encrypted, and they no longer provide sufficient information to the classifiers. Second, the feature importance distribution is shifting as well. The majority of the existing classifiers attempt to address these issues by relying on complex deep-learning-based models, to avoid manually articulating the features. But unlike traditional methods that are heuristic-based or classical-machine-learning-based, which really depend on a few selected components of the traffic flows, these deep learning models often learn representations of the traffic from very lengthy raw traffic inputs, such as the entirety of the packet headers.
C
This is to make traffic classification decisions accurately, but the drawback is that, in a real-world deployment setting such as an ISP, capturing and storing large portions of the traffic flows at a large scale can introduce very high overheads, in terms of system cost as well as inference cost. Moreover, it is crucial for network operators to make classification decisions quickly enough that the appropriate follow-up actions can be taken, while still considering a broad set of network traffic features.
C
So, while most of the existing classifiers designed for encrypted network traffic do show promising results when they are evaluated on closed-world data sets, these classifiers often fail to remain robust when they are given new network traffic received at a different location or time — across different domains. This is largely because of the heterogeneity and evolution of network infrastructure: the installation of new equipment, new software updates, or an increasing number of devices in the network. To look at this issue, we conducted a sample study to collect TLS-encrypted traffic across a wide range of applications at two different locations and times, and we split the collected traffic into two different data sets: an old one and a new one.
C
Our results show that, while we can train a simple ML-based traffic classifier to perform really well — with an F1 score near 0.99 — on the old data set, the performance of the classifier decreases severely when applied directly to the new data set, even though both data sets contain traffic from the same set of applications.
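The effect Chase describes — near-perfect accuracy on the capture a model was trained on, collapsing when the same model is applied to a later capture — can be illustrated with a toy sketch. Everything here is hypothetical (synthetic "flows", a nearest-centroid stand-in for a real classifier); the actual study used real TLS traffic and stronger models:

```python
import random

random.seed(0)

def make_dataset(shift=0.0, n=200):
    """Synthetic flows for two 'applications'; `shift` models network drift."""
    data = []
    for _ in range(n):
        # app 0: small packets, short inter-arrival; app 1: large, long
        label = random.randint(0, 1)
        pkt_size = random.gauss(300 if label == 0 else 900, 50) + shift
        iat_ms = random.gauss(5 if label == 0 else 30, 2)
        data.append(((pkt_size, iat_ms), label))
    return data

def train_centroids(data):
    """'Train' by computing a per-class mean feature vector."""
    sums = {0: [0.0, 0.0, 0], 1: [0.0, 0.0, 0]}
    for (x1, x2), y in data:
        sums[y][0] += x1; sums[y][1] += x2; sums[y][2] += 1
    return {y: (s[0] / s[2], s[1] / s[2]) for y, s in sums.items()}

def accuracy(centroids, data):
    def predict(x):
        return min(centroids,
                   key=lambda y: sum((a - b) ** 2
                                     for a, b in zip(x, centroids[y])))
    return sum(predict(x) == y for x, y in data) / len(data)

old = make_dataset(shift=0.0)
new = make_dataset(shift=500.0)  # e.g. new equipment changed packet sizes

model = train_centroids(old)
print(f"old-dataset accuracy: {accuracy(model, old):.2f}")   # near-perfect
print(f"new-dataset accuracy: {accuracy(model, new):.2f}")   # collapses
```

Both data sets contain the same two "applications"; only the feature distribution moved, yet the model trained on the old capture falls to roughly chance on the new one.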
C
More generally speaking, while a lot of the existing encrypted traffic classifiers are evaluated using well-known data sets — for example, the ISCX VPN/non-VPN data set and the UNB IDS data set — these classifiers are not necessarily robust when they are transferred to new data sets or environments, because closed-world data sets are not necessarily sufficient to describe what the most up-to-date internet traffic actually looks like. Let me move on to the next one.
C
So, while deep-learning-based approaches appear to be the mainstream approach for designing classifiers for encrypted network traffic, we found that we can utilize non-black-box models — classical machine learning methods like decision trees — or interpretable machine learning techniques, for example permutation-based importance or SHAP-based importance, to reduce the number of features to consider, while still obtaining reasonably good classification results.
C
Reducing the feature space while maintaining the classification accuracy can effectively lower the relevant assessment costs for classifier implementers, because they need to preserve less traffic than before. A plausible way to reduce the feature space is to rank network-level features according to the feature importance as interpreted by the model, and choose to drop features that are less informative, or that sometimes have a negative impact on classifier performance.
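Permutation-based importance, one of the interpretable techniques just mentioned, can be sketched in a few lines: shuffle one feature column and measure how much accuracy drops. Everything here is illustrative — the threshold "model" and the synthetic features are stand-ins, not the talk's actual classifier:

```python
import random

random.seed(1)

def make_data(n=300):
    """Toy flow records: ([features], label). Feature 0 is informative
    (e.g. packet size); feature 1 is noise (e.g. a now-encrypted field)."""
    rows = []
    for _ in range(n):
        y = random.randint(0, 1)
        informative = random.gauss(200 if y == 0 else 800, 80)
        noise = random.random()          # carries no class information
        rows.append(([informative, noise], y))
    return rows

def predict(x):
    # Trivial threshold "model" standing in for a trained classifier.
    return 0 if x[0] < 500 else 1

def acc(rows):
    return sum(predict(x) == y for x, y in rows) / len(rows)

def permutation_importance(rows, feat_idx, repeats=5):
    """Average accuracy drop when feature `feat_idx` is shuffled."""
    base = acc(rows)
    drops = []
    for _ in range(repeats):
        col = [x[feat_idx] for x, _ in rows]
        random.shuffle(col)
        shuffled = [([*x[:feat_idx], col[i], *x[feat_idx + 1:]], y)
                    for i, (x, y) in enumerate(rows)]
        drops.append(base - acc(shuffled))
    return sum(drops) / repeats

rows = make_data()
print(f"importance(informative) = {permutation_importance(rows, 0):.3f}")
print(f"importance(noise)       = {permutation_importance(rows, 1):.3f}")
```

Features whose shuffling barely moves the accuracy (like the noise field here) are exactly the ones a classifier implementer could drop to shrink the feature space.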
C
If you want to go to the next slide — so we evaluated this using some of the prominent data sets. This includes a QUIC data set that we obtained, and then the ISCX VPN/non-VPN data sets, and we also collected—
A
Chase — okay, we can see you now, but we lost your audio for pretty much all of this slide.
C
I mean — so, essentially, we are trying to evaluate our previous claim that we can just use a few features, compared to what most deep learning models use: the entire bit stream of the flow. We evaluated this using very prominent data sets, including a QUIC data set and the ISCX VPN/non-VPN data set, and we also collected our own TLS-encrypted traffic flows, which include a bunch of video streaming, video conferencing, and social media applications. Our results here show that we can arrive at roughly similar performance when providing the models with just the top few features — we are talking about the packet header fields — compared to just using all of the features. At the same time, we observe a very big reduction in the inference time needed to arrive at the classification decisions, because there are fewer features to be considered — you have fewer matrix multiplications. Yeah, next slide.
C
The second thing that we are considering is that, while training and evaluating models based on a single closed-world data set leads to classifiers that are not robust in terms of model transferability, we can try to identify features that remain consistently robust across data sets, and exploit those features when designing classifiers.
C
Here we define a set of features to be robust if models trained and validated using this set of features can achieve similar performance when they are tested on a new data set that has never been seen before. One reasonable way to obtain this set of features is through statistical analysis, or comparison across data sets: finding network-level features that have relatively consistent values and distributions for predicting each application or service across the data sets. If I go to the next slide—
C
So here we give an example of a possible way to do this kind of analysis, by computing a JS-test-based drift score across two data sets at the header-field level. Here we use the nPrint encoder, which encodes the raw packets at the bit level, and then we aggregate the bit level into the field level through either the mean or the max — providing the model with this set of robust features.
C
Selecting features according to the drift score allows us to avoid context-specific features that are overfitted to a particular data set, and that can easily be rendered less effective when the model is given new instances of traffic generated in a different network environment or domain. Let's go to the next slide, to wrap things up.
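The drift-score idea — compare a header field's distribution across two captures with Jensen-Shannon divergence, then keep only low-drift fields — might look like this minimal sketch. The bin range and the example field values are assumptions for illustration, not taken from the talk:

```python
import math
from collections import Counter

def js_divergence(sample_a, sample_b, bins=16, lo=0, hi=1500):
    """Jensen-Shannon divergence between two samples of a header field,
    over a shared histogram (in nats; 0 = identical, ln 2 = disjoint)."""
    def hist(sample):
        counts = Counter(min(bins - 1, int((v - lo) / (hi - lo) * bins))
                         for v in sample)
        total = sum(counts.values())
        return [counts.get(i, 0) / total for i in range(bins)]
    p, q = hist(sample_a), hist(sample_b)
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# A field that is stable across the two captures -> low drift score.
old_ttl = [64] * 500 + [128] * 500
new_ttl = [64] * 480 + [128] * 520
# A field that changed (say, new equipment rewrote sizes) -> high score.
old_len = [300] * 1000
new_len = [1200] * 1000

print(f"drift(ttl) = {js_divergence(old_ttl, new_ttl):.3f}")
print(f"drift(len) = {js_divergence(old_len, new_len):.3f}")
```

Ranking fields by this score and thresholding is one plausible way to build the "robust feature set" described above: the stable field scores near 0, while the shifted one hits the ln 2 ceiling.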
C
The conclusion that we reach is that, although the topic of encrypted traffic classification has been extensively studied, we point out that there is still room for improvement, because existing classifiers lack efficiency for practical deployment and experience low model transferability. Based on the observations that we made, we presented opportunities for the network research community to re-examine this space and attempt to develop new methods for traffic classification that are robust in the face of encryption, and more accurate and efficient for real deployment.
A
So what we're going to do is have five minutes of questions after each of these talks. If anyone has questions, please drop a "+q" or something into the chat — or, I see some people raising their hand. Yeah, let's use the "+q". Wes.
D
Please go ahead. — Oh, thanks, Chase, for the good presentation. It aligns well with sort of my own research results. Have you studied the temporal drop-off of these approaches, and how long they last, as you sort of—
C
So what we find is that there are essentially two scenarios, right? One is that you train a model and you expect the model to last as long as possible without retraining — given that you trained your weights on the features. The second scenario is that you consider retraining to have a low cost, but you have a pre-selected set of features, and then you retrain based on that set of features.
C
How long does it take for that set of features to become less informative? We essentially have data over a year of span, and we find that, even across two data sets that are a week apart, if you just train a model and then use the exact identical model to infer applications on a different data set, you are getting something like 50% — no better than random-guessing performance. However, the set of features tends to stay fairly robust across time, with only a few features becoming less informative.
C
So it really depends on the context — whether your retraining is costly or not. If you're talking about model aging without retraining, then yes, the aging part does not hold up very well; but if you consider retraining to not be a big problem, then yeah.
A
All right, next up: Michael.
B
So actually it's Michael Collins here — fascinating talk, and I've got a question for you that's sort of been bothering me for a long time, thinking about statistical training, or statistical processes on network data. I think the question I'm really dealing with is: have you thought much about how much precision we can realistically get out of any of these models? Because I sometimes feel that we might be better served by saying we can only get to this very rough level, and then deciding what kinds of actions we can take based on those rough levels of precision.
C
So we've evaluated models based on just one closed-world data set. For example, if we train a gradient boosting machine on one particular data set, we can arrive at something like 99% accuracy, and if you look at some of the machine learning and deep learning models, they achieve similar performance, high in the 99% range — and every new paper says "we incremented it a little bit." There's variation in performance: you can't really provide a guarantee on how much performance you can get. But broadly, if your network doesn't vary that much, you can get into the high 80s and high 90s — and, surprisingly, this even applies to QUIC and VPN traffic.
C
If you look at the ISCX VPN traffic, we can get somewhere like 85% accuracy just using a simple gradient boosting machine, and for QUIC traffic you get into the high 90s. Yeah.
A
All right — Richard, and then I think we'll move on after that question.
E
Hi Chase, thanks for this talk. I kind of come from the application or security side of the world, and I find these talks really interesting because they show how leaky our application security properties are — they're leaking all this information you can pick up. Can you chat for a moment about the broader context for these classification tasks? Why, from a network management point of view, do I care about classifying traffic this way? What are the uses for this sort of classification, and what are the harms that would arise if, you know, the application or the encryption were updated, so that these sorts of classification approaches became even less effective?
C
Why do we care about this kind of traffic classification? People are looking at things like resource planning and capacity planning, and at the same time consider quality-of-service monitoring and traffic prioritization. For example, there's a difference between the prioritization of video streaming traffic, video conferencing traffic, and then, say, just browsing on the web: some traffic inherently relies more on low latency. So if I can correctly infer that your traffic is a Zoom call, I can prioritize that traffic to make sure that you get minimized latency. And with the increasing level of encryption — for example, DNS is getting encrypted.
C
For example, if you use encrypted DNS over TLS, or if you're just using a VPN, how hard is it for an ISP to infer that your traffic is video conferencing and then actually try to prioritize it? And then you can also have applications like malicious traffic detection, etc. So: anything, really, where you try to isolate traffic into different flows and then do something about them individually. Yeah.
A
All right, thank you again, Chase. And next up — are you going to be covering this?
A
Can you hear me? — Yes, we can. Did you want me to click through the slides again, or do you want to present?
F
Yeah, you present them for me, yeah.
To begin this discussion, I want to set some context for this talk. As we know, the IETF has developed a bunch of protocols at the network layer, transport layer, and application layer, and these protocols have been enriched with many security features — for example, IPsec at layer 3, QUIC at layer 4, HTTPS at layer 7. In the meanwhile, we have also developed MACsec, to provide confidentiality and authenticated encryption. And we also see the trend that user privacy and security is getting a lot of attention in the IETF — for example, PPM (privacy-preserving measurement), Oblivious HTTP, Oblivious DNS, and MASQUE, etc. Next.
F
So the traffic can actually be encrypted at the different layers. In this picture, we also provide examples of traffic encryption at the MAC layer and the Wi-Fi layer. In all of these examples, the fields in the packet format for each layer that are in red are the encrypted part. One observation we have is that, at the network layer, we have IPsec: ESP provides encryption, but AH provides only authentication — it doesn't provide encryption. Next.
F
So, for traffic encryption in general, we all know there are two breakpoints. The first is in 2006, when TLS 1.1 was introduced, so we could provide traffic encryption for application data — the TLS payload. The second breakpoint is TLS 1.3 getting published, where we can provide fuller encryption, not only for the packet header but also for the packet payload.
F
Diving deeper into traffic encryption at the different layers, we can see the commonality: IPsec, MACsec, and WPA allow traffic protection between network and network, or between device and device within the network, while TLS and QUIC provide traffic protection between endpoint and endpoint, or host to host, in an end-to-end manner. Secondly, security protocols rely more and more on cryptographic innovation and progress: you can see that TLS 1.3 introduces a lot of good security features, and provides more secure cipher suites.
F
So here we give an overview of the network management standards. In this overview, you can see that network management standards span the whole life cycle of service management and device management: from device onboarding and bootstrapping, to IP address management, to DNS name resolution; from network access control, to subnet management and identity management; from network configuration protocols such as NETCONF, to network monitoring such as telemetry, IPFIX, and syslog. They also cover network maintenance and troubleshooting, using OAM tools, mechanisms, and protocols.
F
For network monitoring, these can be classified into passive monitoring and active monitoring. Typical examples of active monitoring are TWAMP and the IPPM protocols, which allow you to establish a dedicated control channel to initiate the measurements. Network monitoring can also be classified into poll-based mechanisms and push-based mechanisms. Poll-based mechanisms are more related to polling and represent the low-speed management interface, while push-based mechanisms represent the high-speed management interface. Next.
F
So we have so many network management standards — which protocols are impacted, and which are not? Based on our observations, we can see that protocols for accounting, security management, QoS management, and network access control are impacted a lot by traffic encryption. In addition, traffic monitoring such as IPPM or IPFIX also gets a little bit of impact, but it's not a bigger concern. And we think that AI-based network management could serve as a good solution to deal with this traffic encryption challenge. So we list several challenges.
F
So, since traffic encryption imposes a great challenge on network management, how can networks be managed in the presence of traffic encryption? We think there are two directions. The first direction is that we just take what we can get from the network, and rely on the network management plane. The second is that we can encourage more collaboration with the user and the service provider, from the intermediate proxy's perspective. For the first direction, we need to acquire a lot of metadata from the network and use this metadata, combined with AI and machine learning mechanisms, to do traffic classification and application identification. I think our position is that DPI is not recommended, and also that you don't need to decrypt the traffic when you use this kind of network management mechanism — AI can play a key role in this kind of network management solution. For the second direction, we need to encourage more collaboration between the network community and the application community. Next.
F
For the first direction, we think that, to support this AI-based network management, the important part is to get the metadata. Metadata represents traffic characteristics — for example, it records the what, when, where, and whom of the network communication. This metadata can be captured from the packet header, or captured using out-of-band mechanisms, and it can be at the session level, packet level, or flow level. It can also be captured indirectly from the host. Next.
F
So here we show how AI-based network management works: you get the metadata, you process the metadata, and then you can use AI-based mechanisms to classify the traffic and identify the application. But application identification or traffic classification is not the end — you can further use the identified application to do QoS management or security management, for example to detect malicious traffic. Next.
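As a rough illustration of that pipeline — metadata in, application class out, a QoS action as the follow-up — here is a hypothetical sketch. The thresholds, class names, and queue names are invented for illustration; a real deployment would use a trained model rather than hand-written rules, and the point is only the shape of the pipeline (no payload decryption involved):

```python
def classify(flow):
    """Stand-in for the AI/ML classifier: infer an application class
    from flow-level metadata (mean packet size, duration, packet rate)."""
    if flow["mean_pkt_size"] > 1000 and flow["duration_s"] > 60:
        return "video-streaming"
    if flow["pkt_rate_pps"] > 20 and flow["mean_pkt_size"] < 600:
        return "video-conferencing"
    return "web-browsing"

# Classification is not the end: feed the result into QoS management.
QOS_POLICY = {
    "video-conferencing": "low-latency queue",
    "video-streaming": "high-bandwidth queue",
    "web-browsing": "best-effort queue",
}

flows = [
    {"mean_pkt_size": 1300, "duration_s": 600, "pkt_rate_pps": 80},
    {"mean_pkt_size": 400, "duration_s": 1800, "pkt_rate_pps": 50},
    {"mean_pkt_size": 700, "duration_s": 5, "pkt_rate_pps": 3},
]
for f in flows:
    app = classify(f)
    print(f"{app} -> {QOS_POLICY[app]}")
```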
F
Here we also give another example, of the collaboration method. This use case is more related to TLS traffic identification: in this case, TLS 1.3 or Encrypted Client Hello is being used, so it's hard to capture the in-band metadata, such as the cipher suites, the TLS version, or the client public keys.
F
So how do we get this equivalent data? A collaboration method can be used: either you can collaborate between the user device and the intermediate proxy, or you can establish a collaboration between the intermediary proxy and the server in the cloud. In both cases, you need to establish a trust relationship between the intermediary proxy and the other end, and you can use certificate management to help establish that kind of trust relationship. So here we just give the example.
F
So I'll conclude my discussion here. I think we have two takeaways. First: we can use AI-based network management mechanisms to deal with traffic encryption — I think this is the best choice, and it will keep evolving. One thing we can think about is whether this needs to be standardized in the IETF; I think we could actually develop an architecture for AI-based network management of encrypted traffic. The second takeaway: although we face so many impacts and challenges, we also have a lot of opportunities. For example, for network access control management, we usually use IP-based access control to provide coarse-granularity access control, but this can be further evolved to support policy-based access control; to support these kinds of cases, protocols can be extended to carry the ACL attribute or user access control list attribute, for network security management.
F
We can also see some work ongoing in the IETF, especially in the OPSAWG, on MUD TLS profiles, which define TLS profiles for malware traffic detection. And the last one is network-application collaboration — here we just gave an example; I think this may need more exploration, to find more use cases and come up with a generic solution. So that's it. Thanks for listening.
F
Yeah — I think I discussed this last time. Actually, the NMRG chairs reported to the IAB about this, and there were some suggestions for this, but I think potentially this can be discussed in NMRG first, and we can see whether it is needed to deal with this challenge. Yeah.
A
Yeah — Toerless, yeah.
G
Is there an idea of how much encrypted network management traffic itself can be analyzed? Because I'm looking at things like routing protocols, or others, where maybe I can see more or less, but I wouldn't know how much more could be seen from the AI side. So it would be very interesting.
F
Yeah, I think for this — I compare network layer security with transport layer security, and I see we actually have more innovation at the transport layer; IPsec has also been developed and has had several iterations to add more features, but not as much innovation as transport layer security. For routing protocols, I'm not sure currently — what I heard is that they may still use things like TCP MD5 and TCP-AO, and maybe consider evolving some routing protocols toward TLS 1.3. There is some ongoing discussion there; I think routing protocols may need to be further evolved to support this kind of security.
A
Okay. And now we're going to take a different angle. Those first two talks were covering, essentially, the efforts around trying to classify or detect traffic even with encryption, and where those efforts are currently going. Now we'll have a very different angle: talking about how we make sure that traffic can be better obfuscated and better protected — kind of the other end of this arms race. So, Laurent, do you want to begin?
H
All right — so, now for something completely different. Indeed, what we would like to speak about is a little bit of how we can prevent traffic analysis of encrypted traffic, and how to do so in a way that doesn't hamper performance too much — or, ideally, at all. In a nutshell, just to give you the gist: what we leverage here is this new, next generation of programmable data planes — programmable hardware that nowadays allows us to do obfuscation at very high speed. I don't need to motivate this to this crowd.
H
We already had several discussions on that in the last few minutes. Even if the traffic is encrypted, as you know, attackers — like a man in the middle looking at your traffic — can figure out a lot of interesting properties about it. For instance, there's been a lot of work in this space showing that people can infer which video somebody is watching; the characteristics of the endpoints — what type of endpoints, which operating systems they are running, what kinds of applications are running, the version numbers, etc.
H
So there is a flurry of attacks there that I won't go into. A few that I always find, let's say, surprising are, for instance, the attacks on VoIP traffic: being able to figure out whether you are speaking French or English in an encrypted conversation, or which given words are being pronounced in a conversation. All of these have been shown to be possible.
H
So what we're looking at in this work is: what about other contexts? We know this problem is true, of course, on the internet — but what about other contexts in which this problem happens? What we realized is that, for instance, in wide area networks this problem happens as well.
H
What I mean by wide area networks are these large-scale, sometimes planet-spanning networks that interconnect, for instance, large data centers — the kind of network that Google, Microsoft, or Amazon runs between their data centers. In these networks, what you have is, of course, fibers that interconnect these different sites, and these fibers, by design—
H
So that's the kind of specific application and network context that we have been considering. And, of course, it goes without saying that in these wide area networks you tend to see very high-throughput links — hundreds of gigabits, even 400G links.
H
Of course, WAN operators are already aware of this problem. For instance, you can see here two screenshots from Microsoft Azure and Amazon AWS: these entities are actually encrypting all the traffic between their data centers. So even though it's a private network — they own the physical infrastructure, they are using dark fibers — they are still very worried about possible tampering with their fibers. But, in order to prevent analysis of the traffic — as I've said, encrypting is not enough, and traffic analysis is still possible.
H
So if you think about traffic-analysis prevention systems and the kinds of challenges that apply there, we see three of them. Of course, the first and most important one is security: when you're obfuscating your traffic, you would like to get some security guarantees out of that.
H
So the system that I will speak about in the next few minutes is called Ditto, and it fulfills these three properties. Ditto provides high security guarantees in the sense that the obfuscated traffic it produces does not contain any information about the input: the output of Ditto is completely independent of the input traffic.
H
It runs at very high speed, and it does so while minimizing the overhead — I will speak here about how we minimize the overhead of obfuscation. Of course there is an overhead, but the good thing for us is that we can actually do the obfuscation in hardware, by leveraging, as I said, this new generation of line cards that have become programmable.
H
So there has been a lot of work on traffic obfuscation — I'm only mentioning a few here on these slides. These works are great and very useful, but they tend to fail one or more of the properties I've just mentioned. For instance, in the case of security, they might not protect against all the attacks, which is of course a problem.
H
So how does Ditto work? We deploy our solution at the edge of the WAN. You can see here the edge switches that interconnect these different sites; we deploy Ditto in each of them — these are programmable switches — and then the Ditto switches will protect the traffic that goes along the WAN links. So all these links will now be protected, meaning that an attacker tapping them should not be able to infer anything about the traffic.
H
So what properties do we provide? We provide three of them. The first one is volume anonymity: the idea is that an attacker looking at Ditto-obfuscated traffic should not be able to infer anything about the sizes of the packets or the flows of the real traffic. We also provide timing anonymity.
H
So again, an attacker should not be able to infer anything about the timing of the packets created by the endpoints. And of course, the attacker should also not be able to track packets across multiple links — that's what we call path anonymity: it should be impossible for the attacker to know that a given packet is the same one across multiple links.
H
We use a very classical technique, I must say — the technique here is not radically new. What I think is really new is how we actually make it possible. So if you think about your natural traffic here — this is, for instance, the traffic that a voice-call application like Skype would generate.
H
What enables these traffic-analysis attacks is typically that, even in encrypted traffic, you leak information about the packet sizes and the timing between packets. So an obvious idea for obfuscating this is just to ensure that the traffic you send on any link is perfectly constant: for instance, you will have only max-size packets, and these max-size packets will always be separated by exactly the same inter-packet delay.
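The constant-rate idea just described can be sketched in a few lines of Python. This is purely illustrative — the MTU value, the slot model, and the tuple format are assumptions for the example, not part of Ditto's actual implementation.

```python
# Illustrative sketch (not Ditto's implementation): shape a queue of real
# packet sizes into a perfectly constant stream of max-size packets.
MTU = 1500  # assumed maximum packet size in bytes

def constant_rate_slots(real_sizes, num_slots):
    """Emit num_slots wire packets, all MTU-sized: real packets are padded
    up to the MTU; empty slots are filled with chaff packets."""
    out = []
    queue = list(real_sizes)
    for _ in range(num_slots):
        if queue:
            size = queue.pop(0)
            out.append(("real", MTU, MTU - size))   # kind, wire size, padding bytes
        else:
            out.append(("chaff", MTU, MTU))         # the whole packet is overhead
    return out

slots = constant_rate_slots([40, 1500, 200], 5)
```

On the wire, an observer sees only identical MTU-sized packets at a fixed rate, regardless of what the input looked like.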
H
And here, of course, there is an overhead to be paid — that's something you cannot avoid. In this case, for instance, you need to make the small packets large, and when there is a gap between two packets that is too big, you need to insert another packet in order to make the traffic look perfectly constant. So here you can see in yellow, on the right-hand side, the overhead that we have to pay, either as padding or as essentially fake packets.
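The two overhead components just mentioned can be accounted for separately, as in this hypothetical sketch (all concrete numbers are assumptions for illustration):

```python
# Illustrative sketch: account separately for the two overhead components --
# padding bytes added to real packets, and whole chaff packets inserted to
# keep the packet rate constant.
def overhead(real_sizes, pattern_size, slots):
    """Return (padding_bytes, chaff_bytes) when shaping len(real_sizes)
    real packets into `slots` wire packets of `pattern_size` bytes each."""
    padding = sum(pattern_size - s for s in real_sizes)
    chaff = (slots - len(real_sizes)) * pattern_size
    return padding, chaff

pad, chaff = overhead([40, 200, 1500], 1500, 5)
```

For a trace dominated by small packets, the padding term dominates — which is what motivates the optimized patterns described next.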
H
So the trick to minimize this overhead is to avoid always obfuscating with max-size packets — that would be very wasteful. For instance, the distribution of packet sizes tends to be bimodal: you have the ACKs, which are very tiny, and then the full-size packets. If you had to make all the ACKs full-size, you would pay a big price. So here we are actually modulating the traffic according to a pattern which is not always max-size packets, and I will explain how.
H
I will come back to how we do that a bit later. As I said, we do this entirely in the data plane. For those of you who do not know about programmable data planes, you can think of them as essentially a new generation of line cards that can run very simple programs on every single incoming packet and modify the forwarding logic of a network device according to these programs. This is exactly what we leverage in Ditto.
H
So briefly, what I would like to speak about now, in the next three parts of the talk, is: first, how we compute this efficient pattern; then, briefly, how we actually shape the traffic according to this pattern in the data plane; and then a few experimental results. So, first, the computation of the pattern. Again, if we look at the example I was mentioning, you can see that the overhead is of two types. The first one, here in red, is the padding.
H
This is the overhead we have to pay when we take a small packet and make it bigger so that it matches, in this case, the max size. That's the first type of overhead. The second one, as I said, is the chaff packets: the fake packets that we have to insert between the real (padded) ones so that we keep a constant packet rate.
H
So we want to minimize the amount of padding and chaff packets we have to insert, according to a repeating pattern, and here the intuition is rather simple. We don't want to pay the price too often — again, padding small packets to full-size packets is a big overhead — so what we would like is to take the input traffic characterization, the traffic distribution in your network, analyze it, and then infer an output pattern that is adapted to your traffic.
H
Here it's actually quite simple: we base ourselves on the percentiles of the packet sizes of an input trace from your network traffic. Of course, you might wonder: if you do that, aren't you leaking information to the attacker? And the answer is yes, you are. So it's a trade-off: if you don't want to do that, you don't have to, and you can just use a pattern which is always max-size packets — that will work as well.
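The percentile idea can be sketched as follows. This is a simplified guess at the approach, not the paper's actual algorithm; the choice of percentiles and the pattern length are assumptions for illustration.

```python
# Illustrative sketch: derive a repeating pattern of packet sizes from the
# percentiles of an observed packet-size trace.
def pattern_from_trace(trace_sizes, percentiles=(50, 75, 100, 100)):
    """Pick one pattern state (a wire packet size) per chosen percentile."""
    sizes = sorted(trace_sizes)
    return [sizes[max(0, int(len(sizes) * p / 100) - 1)] for p in percentiles]

# A bimodal trace, as described in the talk: tiny ACKs plus full-size packets.
trace = [40] * 50 + [1500] * 50
pattern = pattern_from_trace(trace)
```

For this bimodal trace the pattern keeps a small-packet state, so ACKs no longer have to be padded all the way to the MTU.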
H
But if you ask me, the traffic distribution is a very aggregate type of information, and so I think leaking it is actually a relatively okay price to pay in order to minimize the overhead. Again, you're not forced to do it if you're very concerned about leaking anything. So, to repeat: the way we minimize the overhead is by mapping the input traffic onto an optimized pattern which repeats itself ad infinitum.
H
So let me now briefly speak about how we actually map input traffic onto this repeating pattern. Essentially, there are three operations we need to do. The first is that we need to delay some packets so that they fit the pattern: if we receive a packet a little early, and the pattern tells us that we have to send it in 10 nanoseconds, we need to buffer that packet for 10 nanoseconds — so we pay a price in buffering.
H
We also, as I said, need to pad packets to make them larger, according to the pattern — of course, we would like this padding to be minimized. And then we have to insert the chaff packets: we insert a packet whenever there is a gap which is too big according to the pattern. These are the three operations Ditto needs to perform in order to obfuscate the traffic perfectly.
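The three operations can be combined into one shaping loop, sketched below. The fixed slot interval, the single-state pattern, and the tuple format are all assumptions made for this example, not Ditto's real data-plane logic.

```python
# Illustrative sketch of the three shaping operations: delaying packets to
# their slot, padding them to the pattern size, and inserting chaff when a
# slot would otherwise be empty.
SLOT_NS = 10     # assumed fixed inter-packet interval, in nanoseconds
STATE = 1500     # assumed single-state pattern (max-size packets)

def shape(packets, num_slots):
    """`packets` is a list of (arrival_ns, size). Each output slot at time
    i * SLOT_NS carries either a delayed+padded real packet or chaff."""
    out = []
    pending = sorted(packets)
    for i in range(num_slots):
        t = i * SLOT_NS
        if pending and pending[0][0] <= t:          # a real packet has arrived
            arrival, size = pending.pop(0)
            out.append((t, "real", t - arrival, STATE - size))  # (time, kind, delay, pad)
        else:                                       # gap too big: insert chaff
            out.append((t, "chaff", 0, STATE))
    return out

wire = shape([(0, 40), (25, 1500)], 4)
```

Whatever the arrival times and sizes, the output is one packet per slot, every slot.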
H
So, at a high level, if you look at the architecture of the data plane we've built, it can be divided into four building blocks. The real packets arrive on the left-hand side of the slide. First we insert the chaff packets; then we do the buffering, delaying the packets according to the pattern; then we pad the non-chaff packets to the right size.
H
We do encryption as well, of course, and then we send all this traffic out. For the encryption, I'm assuming that the switch we're running on supports encryption, for instance using MACsec. That's an assumption we make, but there are many production switches that support this, so it's reasonable to assume that encryption is taken care of by MACsec modules. For the padding, what we leverage is the capability of these programmable data planes to add extra information on top of our packets, in the form of extra headers.
H
These are pieces of headers that we can slap onto our packets, and we can make a packet larger thanks to them. These extra headers are fake headers, and they will then be encrypted, so you won't see anything — you will see random bits after the encryption. Then, towards the end of the Ditto network, we decrypt, remove these fake headers, and the real packets get forwarded. For the buffering,
H
what we are using is a simple round-robin scheduler, and this round-robin scheduler will cycle through the different states of the pattern. For instance, here on the slides you can see four states: a state with 500-byte packets, one with 1000 bytes, and 1500 bytes twice. Essentially, each state maps to a queue, and then we just ask the switch to round-robin through these queues.
H
What is really important for Ditto to work at all is that we can guarantee that there is always a packet in each of these queues. Otherwise the pattern will be broken — we won't actually implement the pattern correctly, and we will start to see gaps between packets on the output link.
H
This is where we do the chaff-packet insertion, in order to ensure this, and as you can see, we use a hierarchical scheduler for it. Here you have the first level of the scheduler and then the second level. The second level is the round robin, as I mentioned, and at the first level we use priority queues — you can see we have two queues per state.
H
So if you look at this state, the 500-byte one, we have these two queues here: we use the highest-priority queue for the production traffic that is mapped to that state, and we use the lowest-priority queue for the chaff traffic. So this chaff traffic will be 500 bytes, this one will be 1000 bytes, and so on. What we ensure in Ditto is that we always have chaff packets in these
H
second-priority queues, so we always have something to send for any possible state in the pattern, thanks to the chaff traffic. And what we also ensure is that, whenever we have a production packet to send, we put it in the highest-priority queue, ensuring that it will go out before the chaff traffic, in order to minimize the added latency.
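The hierarchical scheduler just described can be modeled in a few lines. This is a software caricature of the hardware behavior, with assumed queue contents and state sizes; in particular, modeling chaff as "always available" stands in for the recirculation mechanism described next.

```python
# Illustrative sketch of the two-level scheduler: the second level round-robins
# over the pattern states; within each state, the production queue has strict
# priority over a chaff queue that is never empty.
from collections import deque

def run_schedule(states, production, rounds):
    """`production` maps a state size to a deque of queued real packets.
    Chaff is modeled as always available, so every slot emits something."""
    sent = []
    for _ in range(rounds):
        for size in states:                         # level 2: round robin over states
            q = production.get(size)
            if q:                                   # level 1: production first
                sent.append(("real", size, q.popleft()))
            else:
                sent.append(("chaff", size, None))  # chaff queue never runs dry
    return sent

prod = {500: deque(["p1"]), 1000: deque()}
out = run_schedule([500, 1000, 1500, 1500], prod, 2)
```

Note how the output always follows the state sequence, with production packets simply taking the place of chaff when they are queued.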
H
For the chaff traffic, we actually use the capabilities of the switch to generate and recirculate traffic. Each switch has some ports that can be dedicated to recirculation, and we use that to create fake traffic of the right size that we loop inside the switch, in order to ensure that we always have these chaff packets around. And these packets, in case you wonder, can be generated in hardware, directly by the switch — you don't need a software entity to do that.
H
So, in terms of the properties we provide: again, we provide volume anonymity, timing anonymity, and path anonymity. How do we provide volume anonymity? We guarantee, as I said, that the traffic will follow the pattern all the time, so the volume, if you wish, will be constant — the rate of the packets will be the same all the time, forever. In the case of timing anonymity, we guarantee the absence of any timing leaks because we blast the link at full speed all the time.
H
So we send the traffic at a fixed rate, and if the output link is a 100-gig link, we will send 100 gig of traffic all the time. This is, of course, not great in the context of energy consumption — that's something we are well aware of. Perhaps ironically, though, existing devices are not great at, let's say, consuming less when there is no traffic and more only when there is a lot of traffic.
H
So here, even though there is an overhead in terms of traffic, there is not a huge overhead in terms of energy. That said, I really wish for a lot of progress in the coming years on reducing the consumption of network devices. In terms of path anonymity, we provide that by encrypting the traffic on a per-link basis.
H
This prevents the attacker from mapping packets that are the same across different links, just because the encryption will be different on each link. All right — a few words now about the experimental results. We actually built this and loaded it onto real hardware. You can see here, on the right-hand side, a picture of our lab: it's composed of, as of now, eight programmable switches.
H
One experiment I will briefly mention is how much throughput Ditto can achieve, which is probably the most pressing question that everyone has in the back of his or her head. You can see traffic entering here, and the question is how much obfuscated traffic we can get out.
H
Here you can see the evolution of the output rate — how much obfuscated traffic we're able to produce — as a function of the input rate, between 0 and 100 gig. Ideally, the Ditto traffic would follow this line, which would mean we have no overhead whatsoever: if I send in 10 gig of production traffic, I get 10 gig of obfuscated traffic out. But of course we have an overhead.
H
So it's not like we can send 100 gig without any overhead: we have to do the padding and the chaff-traffic insertion, so we do pay a price. But as you can see, we are almost ideal up to 60 gig per link.
H
This is the point where the curve starts to decrease a little bit, because at that point we start to pay the price of the chaff and the padding. But you can see that if you send 100 gig in, we still manage to get 80 gig out. Of course, we are working on making this curve look as ideal as possible and pushing the boundary on the right-hand side as far as possible, but still — up to 60 gig,
H
there is no effect whatsoever on the production traffic, which I think is actually quite good — again, on a per-port basis. We also performed experiments in the paper on the applications, so we ran real traffic through this. Perhaps some of you have questions about reordering — these are effects that can happen, of course, because we make some packets wait.
H
We were, of course, worried about this, so we measured it, and we saw essentially no effect on application performance — again, up to a given point, because at some point the overhead starts to kick in and we begin to see an impact on performance. But still, it's a relatively modest impact, and of course one can tune the system to reduce that overhead according to one's traffic.
H
For instance, here you can see that there was no effect on website load times up to 60 gig, and after that we start to see an increase of up to half a second when we send 100 gig of traffic in. You can see here that we were using a pattern of length 6 — the pattern was composed of six packets — and using different patterns really has an impact on the performance. As you can see, here we have a pattern of length
H
3 only, and here a pattern of length 1, which is the equivalent of the all-max-size packets I was mentioning. So having an optimized pattern really pays off and really helps. All right, that's it — I hope I wasn't too long. Just to summarize what Ditto is: Ditto is a system which can obfuscate traffic directly in hardware, in the line cards. It provides strong security guarantees, because it makes the obfuscated traffic completely independent of the production traffic, and it provides high performance, again because it runs in hardware.
H
It also minimizes the overhead, thanks to the optimized patterns I've mentioned, and it's deployable: you can buy these devices today — some network providers have them already. So it's not like I'm talking about a new ASIC that might hit the market ten years from now; you can really try these things today.
H
If you are interested, let me just refer you to our GitHub repo, where we put all the code, including for the hardware switch — everything is open and available over there — and we have plenty more details in the paper. And I need to acknowledge my PhD student here, who I wish could have given
H
this talk — he just graduated; no, he's not on holidays, so you're stuck with me instead, but hopefully I can do him justice. With these thanks, I would be happy to take any questions you may have. All right, thank you so much.
A
G
I think I didn't catch which programmable forwarding plane you were using — is this something like P4? I'd be interested, because I haven't seen operations like modifying packet payloads or concatenating packets at line rate.
H
Yeah, you're right, I didn't mention that — that's a good question. It is indeed P4-based: we use a P4 program to obfuscate the traffic, and the hardware we tested this program on is the Intel Tofino. As I said, the way we make the packet bigger, for instance, is by abusing one of the P4 features which allows you to add extra headers on top of the packet.
H
This is, by the way, a limitation of the system: these P4 programs are limited in the number of headers they can add onto a packet, and so in some cases — for instance, if you want to bump up a 40-byte packet to 1500 bytes — this would be very hard to do. The solution here — it's more of a hack than a solution — is that you can also recirculate packets and then do the padding with these headers multiple times.
H
Correct — as far as I know, to the best of my knowledge, P4 does not allow you to do that. As I said, the motivation of this project was: okay, we can do things in P4 and in hardware now — can we do this? And now we are hitting all these limitations that are preventing us from having this very nice curve, as I was saying, and one of them is the one you mentioned.
H
So
we
we
have
like
heavy
limitations
on
how
we
do
the
How
We
Do,
the
padding,
and
so
you
could
be
actually
imagining
just
having
a
next
generation
of
programmable
Hardware
that
comes
with
I
mean
either
like
the
ability
to
put
packets
together.
That
would
be
tough,
I
think,
because
you
need
to
buffer
things
right,
but
also
just
with
the
ability
to
create
kind
of
like
a
random
amount
of
bits
stream
of
bits,
and
then
to
slap
that
on
the
packets
that
you
can
delete
afterwards,
and
that
is
something
for
instance.
H
Right — this is another, alternative design: you can use the endpoints, you can use SmartNICs or something running in the host. One issue here is that, if the endpoints generate the traffic, it's very hard to guarantee what the traffic inside the network will look like if you are creating your obfuscated traffic at the edge, because it depends on all the different queues, links, and devices the traffic crosses before it hits the link that is supposed to be obfuscated. So the price you need to pay,
H
if you go there, is that you really send a lot of traffic, so that you guarantee that all the links are essentially full. We saw that as a limitation of endpoint-based designs. That said, you might imagine that SmartNICs can help by giving you, let's say, traffic that you can use to obfuscate the production traffic better.
H
A
Right — an interesting point. Let's have the next questions be relatively brief, so we leave time for discussion. Sure — yes, that's fine, no worries. Wes is next, I think. All right.
D
Thanks — yeah, thanks for the talk; it was fascinating to hear about the
D
issues of deploying constant-bit-rate-type flows at scale. Can you speak to whether you do content management of flows before they go into Ditto? In other words, the purpose of this workshop is really to talk about how to improve management, and Ditto is functionally designed to prohibit that — which is fine, it's its own layer — but do you have any sort of prioritization that can go into your buffering layer?
H
Yeah, that's a good question. Indeed, our goal was not to try to discriminate between different types of traffic, so we did not consider that — the context where Ditto sits is simply: we receive traffic in, and we want to obfuscate everything. That said, it's actually possible to do exactly what you said, because since the data plane is programmable, we can complement Ditto with classification primitives or classification programs that run before Ditto and would then adapt.
A
Over to you, Richard.
E
Just a couple of quick clarifying questions. You talked about applying a distribution to packet sizes, but I was wondering if you were also applying distributions to other aspects of the flow, like inter-packet timings — that was the first quick question. The other one was whether it seems like you could impose maybe any distribution here.
E
If I wanted my military network to look like a corporate one, say — it seems like that's possible, but at a cost in efficiency. I just wanted to confirm that intuition is right.
H
So let me answer the second one and then the first one — if I haven't forgotten the first one by the time I've answered. To answer your second question: that is indeed correct. You are in control of whatever you want to leak out through your distribution. If you want to avoid leaking, you can just use the constant uniform distribution with all max-size packets; but you can indeed make your military network look like an enterprise network or an ISP network, at the price of an overhead.
H
And this is something you have to decide — it's a design decision at that point. Then, for the first question — if I'm not mistaken, it was about why we only look at packet sizes and not packet timings in the distribution. It's because of how we maintain constant timing between packets: by ensuring that we always send at 100 gig. We always send flat-out.
H
So the distribution of the incoming packet timings does not really matter for us, because in the end it's all about being constant in time by blasting the link. What does matter, of course, is the size of the packets, because this is where we have to pay the price in padding. That's why we care about the sizes and not the timings. If you wanted to run Ditto without blasting the link, then you would need to start looking at the exact timing distribution.
B
I was going to ask — that's okay. So my question is sort of similar to what Richard was asking, but in terms of timing: did you consider, say, injecting randomly shaped traffic
B
on the link? That would still, I think, inject quite a lot of noise and different shapes, without doing quite as much as you're doing here — potentially still with extending some packets. Is that something you looked at or thought about, or would that not give enough security in terms of what you're trying to achieve?
H
So, as you said, the big question then is: if you have an intermingling of randomly sized traffic that you generate with the production traffic somehow mixed in, you really need to be able to guarantee that there is no way to distinguish the random traffic more easily than the production traffic, or vice versa — because I can just try to discover the complement of what I'm looking for, take the difference, and then I get the production traffic out.
H
So here again, the big question is how to do that while guaranteeing security. The fact that we have a repeating pattern really helps ensure that we don't leak anything. If you have randomness in the mix, then the big question is how much randomness you need — and this I don't know; I don't have a good mental model of that. It depends on your input traffic, so I guess it's a network-dependent question. But it's a good suggestion; we haven't considered it yet.
A
All right, I think that takes us to the end for this one — thank you so much for sharing this talk. And then, last today, I believe Mallory is going to take us through yet another perspective on where we land in some of the discussions about user interests and user privacy, and after that, I think the rest of the time will just be used for discussion and next steps.
J
You never know exactly what to do with the view, but maybe that's a slight improvement. Okay — so thanks for having me present. This actually zooms out quite a bit from what folks have presented today; it's kind of a reminder of where we're headed, or what the intention is, around user-centric approaches to measurement. It doesn't necessarily speak directly to the issue of encrypted traffic, but I do think that it—
J
One of the reasons why I submitted it is that I find, often when talking about encryption in general — which is a really big part of my advocacy as a staff member at the Center for Democracy and Technology — that the metadata-versus-content conversation happens quite a lot. And I think in the case of encrypted traffic, where you have less access to the traffic itself, there's a tendency then to lean quite a bit on metadata, which is what a lot of folks have presented today.
J
But there are good reasons why one should maybe exercise some restraint around that, or be a little bit more thoughtful in the approach to using metadata to make up for the fact that there isn't so much content available. So let's get into it. These are just some of the reasons why we're taking this principled approach.
J
There's a document in the Privacy research group at the IRTF that presents some guidelines for this. This is really for any measurement, like I said, in any environment: if you're concerned about user safety, you should follow these guidelines — so again, not necessarily specific to encrypted systems. It takes a user perspective, and I would say its biggest contribution in the text is around data minimization; it goes into several different approaches to that. I do recognize
J
that data minimization is a topic that's going forward in other parts of the IETF, but here it's within the context of safe measurement. The document also reiterates something that I wanted to point out.
J
I made this point just a minute ago, but I wanted to say it really explicitly: the very act of detecting or measuring traffic is itself a proliferation of data, so it works at cross purposes with data-minimization principles. When measurement is done, especially in encrypted environments that are concerned about security, privacy, and user safety, not enhancing the metadata — not proliferating that data — should be an explicit goal. And if you do proliferate it, then what do you do about it?
J
So this document talks a little bit about what to do — you can link from here; these links are in my slides, which are uploaded. It's an active internet draft, and discussion of it has been happening on the research group's list.
J
A review just came in the other day, so it's definitely an active topic of conversation, and we're using GitHub to manage it — so if you have comments after this, you can go there. It was actually started by folks at the Tor project, and then I've sort of been shepherding the document since it was more or less abandoned, along with Gurshabad Grover, who actually isn't at CIS anymore.
J
That's out of date — he's working at OONI, the Open Observatory of Network Interference. And so the goal of the document — I'll get into its structure relatively soon — is that we're really hoping that folks both in industry and academia, noting that there's quite a lot of measurement of the network happening by third parties, or by folks who are interested in studying the network, with these measurements being used to research
J
the use of the internet and so on, follow some guidelines, just to ensure that those measurements can be carried out without violating user privacy. It's a document, for example, that we feel could be useful when ethics review boards are looking at academic studies that measure network traffic: this could be a document that comes out of the IRTF, discusses this, and maybe informs the outcome of a review board's decision.
J
There are some important scope issues that we are always careful to disclose. I did mention ethics review, but the document isn't a replacement for one — it should be informative of one and help influence it. It's also not legal advice, even though it talks about things like consent and terms of service. But it's also not restricted in scope just to the network, so it could have implications elsewhere — explicitly for user traffic, for example.
J
And then we talk about internet users, and this is always a tricky one, because I think this is a really broad term. We talk about users as if they need to be engaged with the thing that they're doing, logged in, quote-unquote "online," and all that.
J
But we know, increasingly, that data about people is really what we mean, and so it may not be the case that you actually have a user subscribed to your service for their data — and the person behind it — to be affected.
J
So a person could maybe not even be logged in or online for their data to be implicated, and so we have to be, I think, specific about what we mean by "internet user" — what it does and doesn't mean.
J
So those are the scoping issues around the draft. Within that scope, we have a document with three parts, essentially. The first one talks about consent and its layers — literally, because consent may be a little too abstract, we've broken it into three different strata.
J
There are safety considerations, which are actually more like the guidance piece of this, so it gives some suggestions around how to do measurement in a way that doesn't implicate user privacy more than you need to. And then there's a last and final piece that, so far in this document, is rather unexplored and unwritten. And just to restate, because it's an internet draft, it is a work in progress; but I think the point of the risk analysis is to be able to do some decent amount of trade-offs when making various decisions around these other pieces.
J
These other pieces need to be considered. I'm not going to go over the entire paper for you; I wanted to present enough for discussion purposes, and you can, of course, read it on your own time and give comments. So I might end up finishing this presentation early, which I think is a good thing, because it's really just meant to inform the larger discussion.
J
So the main pieces I'm going to talk about are the consent piece and the safety piece. On consent: there are three main approaches to consent that are somewhat agreed upon. "Meaningful consent" is a term that just tries to be specific about what that means. The first is, obviously, informed consent: you're able to actually obtain consent. One example we use in the draft is a researcher who uses volunteer-owned mobile devices to collect information about local internet censorship, where the connections will be made from the volunteer's device toward known or suspected blocked web pages. This is often a case where you actually are asking people to participate: there's a specific reason why you're taking these measurements, to find out something specific, and you've asked people to participate, either by buying hardware, or installing software or an add-on, or something like that. Then there's proxy consent.
J
Proxy consent is probably the one we're thinking about as happening in the terms of service, or where you already have a user. The example we give is a researcher who performs packet capture to determine the TCP options, and their values, used by clients on a corporate wireless network. Proxy consent is probably not informed or explicit consent, but you can assume that consent exists because of the way you've designed the project. And then implied consent is separate.
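As an aside on that packet-capture example: extracting TCP options doesn't require keeping payloads around at all. A minimal sketch of the idea (my own illustration, not something from the draft) that parses the raw options field of a TCP header using the kind/length/value encoding from RFC 793:

```python
def parse_tcp_options(data: bytes):
    """Parse a TCP header's options field into (kind, value) pairs.

    Option encoding per RFC 793: kind 0 ends the list, kind 1 is a
    one-byte no-op, and every other option is kind, length, then
    (length - 2) value bytes.
    """
    options = []
    i = 0
    while i < len(data):
        kind = data[i]
        if kind == 0:        # End of Option List
            break
        if kind == 1:        # No-Operation (padding)
            i += 1
            continue
        length = data[i + 1]
        options.append((kind, bytes(data[i + 2:i + length])))
        i += length
    return options
```

For example, an options field advertising MSS 1460 and window scale 7 decodes to `[(2, b'\x05\xb4'), (3, b'\x07')]`; once the option statistics are tallied, the captured bytes can be discarded.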
J
So you've designed your telemetry to exclude the obvious stuff, and beyond that, you're pushing updates to users at random. As long as you're not implicating their data, that consent is implied, because they've signed up for the auto-updates. Just to give some thoughts.
J
So then I'm going to go through the safety section. There are four separate subsections that talk about how to reduce risk to users or user data.
J
This one is around test beds, and this makes sense in a place where you have something specific you want to find out. So it's not measuring all the traffic all the time: you have a test bed, essentially, so you've put some parameters around it, and that's, I think, important. You can maybe find out the effects that changes to your network management are going to have by doing this, without having to monitor all the traffic all the time.
J
So it's something to consider. The second one is more guidance for folks who are probably third parties, where this is not their infrastructure; but if you are measuring the network for a variety of different reasons, there are going to be places where you would want to exercise some restraint around what others' infrastructure is doing.
J
I think this is probably pretty common sense for folks, but it's worth stating, because I think we sometimes forget that there are places where, as Richard put it earlier, things are leaky; and exploiting that for the purposes of one's experiment is maybe not all that great of an approach. So just be mindful that you're technically on other people's infrastructure in that way, and again, think about restraint.
J
This gets really tricky, of course, for folks who are just sort of interested in what the network looks like, but are crossing legal jurisdictions, for example; and that can be a really difficult consideration. But I think addressing that, or trying to mitigate it, sooner rather than later is always a prudent approach: not just for you, and whether or not you are breaking the law or doing something within legal boundaries, but so that your users, and the data that you're measuring, are also not implicated.
J
That's probably the larger consideration. And of course these sections go on, so if there's something in here that you haven't seen, please do raise it; it might also be in the draft and just not have made it into my summary.
J
The last one I'll bring up in the context of the slides is around minimizing data. This goes on to four additional subsections in its approach to how data minimization can be done when you are creating data, proliferating it, capturing it, or taking measurements of it. Discarding data is always very nice: don't keep it around; it can be ephemeral. You can learn something from it.
J
Machines can learn something from it, and then you just get rid of it. Masking data is another approach, so that it's not just a fire hose: you only need what you need, and if you're going to keep it around, you can certainly hide certain parts of it.
J
Reducing accuracy is there to minimize the ability to target individual points, and there are obviously lots of techniques for that; I don't have to detail them for you. And then similar, I think, is the aggregation of data: another approach to minimizing specific data about the folks whose traffic you're measuring. On risk: like I said, this section is rather unwritten.
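To make the masking and aggregation ideas concrete, here is a small sketch of the general technique (my illustration, with arbitrary prefix lengths, not guidance taken from the draft): host bits are discarded before storage, and per-host records are collapsed into counts per prefix.

```python
import ipaddress
from collections import Counter

def mask_ip(addr: str, v4_prefix: int = 24, v6_prefix: int = 48) -> str:
    """Mask an address to a network prefix, discarding the host bits."""
    ip = ipaddress.ip_address(addr)
    prefix = v4_prefix if ip.version == 4 else v6_prefix
    network = ipaddress.ip_network(f"{addr}/{prefix}", strict=False)
    return str(network.network_address)

def aggregate(addresses):
    """Keep only counts per masked prefix instead of per-host records."""
    return Counter(mask_ip(a) for a in addresses)
```

So `mask_ip("192.0.2.77")` yields `"192.0.2.0"`, and the original per-host address never needs to be retained.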
J
I think the intention here is to give folks interested in this topic the ability to make trade-offs.
J
I don't think it's necessarily redundant with the first two sections, on consent and on safety mitigations, but I do think it's a necessary section. It's just difficult, so far, to know exactly what the guidance might be. So if folks have strong feelings about risk assessments, please chime in; we would appreciate your expertise on this.
J
So, if you're interested in the direction of travel for this draft, we do have some open issues that we know we need to include. Because this is maybe going to have implications for the ethics of measurement, we want to have something on responsible disclosure of vulnerabilities: say you're measuring something and you find out all is not well; how, then, do you approach it, and what do you do in response?
J
We also want to think about availability as a risk; we don't want folks to be losing data, either. That's another dimension of cybersecurity.
J
We haven't explicitly considered IP addresses, which we might want to do. There's guidance, potentially, on other forms of metadata that we could get more explicit about; that's an open question. It also doesn't discuss what measurement looks like in the future, when computing capabilities are more robust. And then there are just a few issues that are like: we need to make citations of work.
J
Some work is very obviously in this realm but isn't yet cited, or those learnings aren't brought forward into the text, which they should be. Then I would just say the last two pieces: the data minimization section, I think, is good in its skeletal form, but it probably doesn't have enough guidance there as such yet.
J
And then I already talked about how the risk assessment piece is unwritten but, I think, still valuable; so I would be in favor of keeping it, but we don't have a lot there to talk about so far.
J
Just to remind you again, this was my first slide; it's kind of why we're taking this principled approach, and I do still think it's rather valuable. So I don't need to go into the last slide; I'll just leave it here, maybe, if there are questions along these lines, or if anybody else wants to chime in with feedback. I think I did actually go over time. Apologies!
A
Right, okay. We have time for one or two quick questions before we get to the more open discussion, if anyone has anything for the queue.
A
I'm not seeing anything immediately, so thank you, Mallory. We can kind of wrap everything together now in the broader discussion. Just one point of order here: I think for the next two days we're probably going to have a bit more of the broad, open discussion; but today, since we're just kind of gathering the different perspectives that we're coming from, I think we wanted to just hear more of these different angles.
A
So tomorrow we're going to talk about where we want to go, and then we're going to get into what the different collaboration techniques are for how we actually improve the relationship here between network management and encrypted traffic. But I'd like to just open the floor now for people to comment.
A
Let's keep the comments relatively succinct and try to keep it to five minutes or less, certainly for any given point; but I'm curious to hear people's thoughts synthesizing the different angles we've heard today.
B
So I thought that the various flow-hiding pieces were kind of interesting. In the effort back about 15 or 20 years ago, people came to the IPsec working group and said: if you could just show us your TCP headers, we could do wonderful things for you. And at the time, around 2002 or something like that,
I
You know, the group's attitude was pretty much "show me the money": okay, what are you going to do for us? Tell us what you can do. And then there were various mutters about how it was proprietary, secret 3G information, and they couldn't tell us what they were going to do; and we were all just like, well, if you can't tell us what the benefit is, then I don't think we want to take the risk.
I
The group, I guess, later changed a bit and published, what was it, RFC 5840, which is the Wrapped Encapsulating Security Payload, and I would give a six-pack of beer to anyone who could tell me it was actually ever deployed by anyone, ever. And I think that's a little bit telling to me: we get lots of information from people that say, if only I could see stuff, then I could do things for you. And what I conclude about the 3G people, and some of them were not 3G:
I
It was really their proprietary solutions to what we now call bufferbloat, some kind of ACK pacing that they wanted to do to deal with a problem that they'd actually misdiagnosed. As Jim Gettys said, 3G networks are massively bufferbloated; and if they, or the IPsec working group, had recognized that what they were really dealing with was a congestion problem, not a flow problem, then you really didn't need to see the headers; you just needed to eliminate buffers, right?
I
Then we could have had a better conversation. We could have said: well, no, actually, you're completely misdiagnosing the problem, and maybe you're misdiagnosing it because you can't actually see what the traffic is. But, you know, there's a lot there.
I
There wasn't a lot of encrypted traffic on that network to begin with, so it was kind of a bit weird; but people were really, really concerned about HTTPS Everywhere and other stuff like this. And I think that, in large part, there's actually lots and lots of data, as the various presenters have said, that you can get out of encrypted traffic.
I
If you want to get it, sometimes it's a little bit harder; but I think it's actually healthier, because you have more certainty that you've actually diagnosed the real problem, rather than the symptoms, which in this case were that the window was too big. I think that being able to see the window was a mistake.
I
I'd also say that I think one of the failures caused by NAT44 was the belief that your transport-layer headers were not subject to wiretap. I believe we should have encrypted them from the beginning, and other people have said that as well. But from a legal standpoint, you know, I don't know if non-Americans know this, and I'm not an American, but I've read some of the stuff:
There's
these
things
called
pen
Registries,
which
is
the
old
list
of
telephone
numbers
that
the
the
operator
would
write
down
right
when
you
phone
somebody,
they
would
write
down
what
number
you
phoned
in
order
to
bill
you
and,
and
that
went
into
like
as
a
different
kind
of
a
level
of
Warrant.
At
some
point,
the
police
just
walked
into
the
telephone,
Center
and
said.
Show
you
show
me
your
pen
registry,
and
there
was
no
no
oversight
at
all,
but
they
couldn't
tap
the
line
without
a
warrant
and
I.
I
I think that's a really important thing that we missed on the internet: yes, you have to reveal your destination address to the routers, and that's the pen register; but everything beyond layer three should have been sacrosanct. The fact that NAT44 essentially made us start allowing middleboxes to see those headers is actually part of where we went wrong, and we need to roll that part back. So I'm not that upset about, you know, QUIC making everything look that way.
G
So I think, first of all, I'm a little bit annoyed at seeing this broad naming of "network management" for all these different things; that's kind of very misleading, I think. We already had a much nicer term, "perpass", for a good amount of this. But obviously, I think we should be cognizant of the opposite side, when you do have enterprise networks
G
where the traffic classification to figure out what's going on in the business is a really important thing, and I don't think we have given that a lot of thought in the IETF. Maybe a good starting point is to figure out how we can create different buckets of what's going on; and if an application encrypts things so that they're not meant to be seen by the network, then they shouldn't be seen, right? So, exactly along those lines, I think I saw two very good presentations here.
G
The first two, I think, are very important; but likewise, I think we should also consider exactly what metadata we want to break out from the application payload when there is a valid business need, and it is not user privacy that is required for it. So I think the scope of what we need to look into really needs to become broader,
G
if we want to capture broader use-case requirements. And as Michael said, I think the one thing we really need to distinguish is insight into what is happening on the application side versus insight into how we deal with congestion, because I think the latter is really at risk of becoming worse and worse.
G
At this point in time, the more we are removing the transport header insight, we still don't have good ways, at a big aggregation point in the internet, to easily figure out which badly performing flows are taking an unjustifiable amount of bandwidth away from others, without having non-scalable per-flow state in the forwarding actions. And I think that is still hitting us. A lot of service providers have been trying to do this through the application layer, which is very misguided. I mean, even 10 years ago,
G
it was all these back-then 3G network providers that had problems with peer-to-peer file sharing: not because it was doing things that, you know, Hollywood was concerned about, but because they didn't really have congestion control well worked out. And that's still the case: whenever people are reinventing transport protocols by themselves, we're running into congestion issues. So really moving, in the TSV work, more towards manageable congestion control in a scalable fashion
G
is, I think, really very much related to this; because when we have worked that out much better than I think we have now, then, in the public internet, there is no need anymore, in my opinion, to snoop into traffic for good reasons. Then it goes back to all the bad reasons, which we know is why we are working against perpass.
A
I jumped in the queue next. Just listening to all of these talks today: the first two touched on the idea that, essentially, the direction for passive classification and categorization is AI-based management here; and I definitely get that that's a natural direction if you're trying to just replace your existing passive classification.
A
But it very much concerns me as a direction, because it really just plays into this arms race between the traffic and the classification. I think if we resort to the AI/ML models, we are assuming that the traffic doesn't want to be classified, to a large degree, and saying:
A
oh, we need to pull this information out that's not otherwise available. And if that's the case, I think we're just going to see more of the techniques like the ones presented today, of saying we can just make it harder and harder to analyze; and so it's going to end up in a bit of a dead end.
A
Also, if you're assuming that the traffic is hidden, then it brings up the concerns that Mallory raised about consent, and what you are actually trying to show. So when we get to the "why", I think there's an interesting question Richard brought up: why are we trying to do this? One of the examples there was that we need a classifier to be able to say we should treat video traffic with low latency.
A
I think, like what Michael was pointing out just a little bit ago: why not use other techniques to fix the latency and bufferbloat? Why not use explicit signals? In the case of, you know, video chat or low latency, those applications have a clear incentive to try to get better behavior.
A
We have techniques like ECN, and people are working on L4S now, that are a much, much better way to achieve the goal and don't require classifiers. So, you know, I know there are different reasons that networks want to identify traffic, but for cases like this, I just want to leave us with that thought.
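For context on what "explicit signals" means here: the ECN field is just the two low-order bits of the IP traffic-class byte (RFC 3168), with L4S (RFC 9331) repurposing the ECT(1) codepoint, so congestion can be signaled and read without any classifier inspecting payloads. A tiny decoding sketch, as an illustration rather than anything proposed in the session:

```python
# The two low-order bits of the IP traffic-class (TOS) byte carry ECN
# (RFC 3168); L4S traffic identifies itself with ECT(1) (RFC 9331).
ECN_CODEPOINTS = {
    0b00: "Not-ECT",  # sender does not support ECN
    0b01: "ECT(1)",   # ECN-capable transport; used by L4S
    0b10: "ECT(0)",   # ECN-capable transport, classic
    0b11: "CE",       # Congestion Experienced, marked by a router
}

def ecn_codepoint(traffic_class_byte: int) -> str:
    """Return the name of the ECN codepoint in an IP traffic-class byte."""
    return ECN_CODEPOINTS[traffic_class_byte & 0b11]
```

A router marks CE instead of dropping, and the receiver echoes that back to the sender; none of this requires seeing past layer three.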
A
All right, Wes, what's next?
D
And I think you and I were thinking along the same lines. I think one of the problem spaces that was sort of unexplored in today's presentations, and that we need to consider over the course of the workshop, is: where is the boundary between traffic that needs to be encrypted at some level but can still allow for analysis, in determining what type of traffic there is, as needed for QoS prioritization, versus traffic that truly wants and needs to be entirely hidden, for whatever reasons, good or bad?
D
You know, I doubt that anybody would mind being put into a priority queue, even accidentally. But how do we ensure that we get that maximum chance of being put into the right queue, when we still otherwise want to let everything else remain unknown? And in other cases, there are things like Tor that actually protect your privacy much better, and that are well worth using at the expense of latency. So how do we draw the boundary, you know?
D
How do we bound these two types of use cases, and, specifically, how do we offer these choices to the end user in a way that they can understand, without harming their priority?
A
I believe Nalini is next. Thank you.
K
Sure, can you guys hear me okay? Yeah, okay. So, I think somebody had made the point about needing the authority: I mean, who's entitled to see this data? I'm going to speak from the point of view of large, private, managed enterprise networks, and I'm going to tell you that, you know, we have requirements for fraud monitoring, malware detection, and not passing out personally identifiable health data.
A
I don't see anyone else in the queue currently; there's definitely a lot of chat going on in the WebEx chat.
A
Please, fill our time. Yes.
E
Yes. The talk I would love to see, in general, from some network operators is this: I think a lot of folks, and I'll include myself in this, who live up at the application layer kind of have this idea that the network delivers my packets; it's just kind of a pipe. It's just there; it's not applying any intelligence. And I think the folks on the network side note that there's actual intelligence: the network is trying to provide some value. And so there's this kind of disconnect as to the way the application-level folks
E
think of the network. When you come from that application point of view, it seems obvious that you can just encrypt everything, because you're not losing any value, because the network is not providing you any value. So I would love to see the talk, from the network provider side, about what value the network operators think they can provide by understanding more about application traffic. As I'll rant about a bit tomorrow, I think we need to articulate the user-level benefits of these things,
E
the application-level benefits of these network-layer changes. I think that would be a useful kind of substrate on which to have these discussions: to articulate the benefit side, if we view exposing more information, leaking more information, as a cost.
E
We need to understand what the benefit side of that operation is as well, so that, with my application-developer hat on, we can make intelligent decisions in applications, and also so that we, as the internet standards community, can make intelligent decisions about what trade-offs make sense in terms of exposing more information versus keeping things private.
D
Thanks. So, one of the other places that I don't think we've explored, and it feels a little bit almost on the borderline of scope, is that we haven't talked about how to deal with discussions of user privacy within managed networks, like corporate networks, where they quite possibly have control over a lot of your privacy.
D
So, for things like forced installation of CAs or software, and things like that, that actually allow sniffing of user behavior: I certainly talked to somebody, 10 years ago at least, when Facebook was just spinning up, and rather than tell the users that they shouldn't use Facebook on their corporate machines, they were actually just sniffing it, logging in, and browsing their accounts after hours, and things like that.
D
That's how the user got tipped off: because somebody was logged in when they shouldn't have been. And so it brings up very interesting cases, and I don't know how to deal with privacy in those cases where the corporation really does have essentially authorized control, because that's your agreement for working there, but to do that without...
B
Yeah, I wanted to react to Richard's comment, and it's not like I'm just disagreeing, but I think it takes a very one-sided view, right? It's not that the application is encrypting everything and now it's on the network operators to show that there's some benefit in doing something differently. There are also cases where the applications actually cannot infer the information they want in order to optimize the traffic, and one of the examples in the chat is, for example, in mobile networks:
B
you can suddenly have more bandwidth available, and if you have any kind of video or whatever, it's hard for you to scale up, because you don't know how long it will be available, whether you should actually scale up, or how much more is available. So you would need to send some kind of fake padding traffic, or whatever, to figure out how much bandwidth is there, and when you can scale up or change your audio coding. So this is a known problem where you could
B
actually get information from the network that could help you. So I think it's really a collaborative approach, where both sides need to sit together and figure out where it is most beneficial to do something, and how we can work together on that. That's all I wanted to say.
E
Just in response to the question about network operators and some of these things: a pretty large number of our customers at Comcast opt in to a sort of managed home network, where they might turn on parental controls for their kids, as an example. It might be time-of-day based, or it may be destination or application or type-of-content based; and you really can only derive that if you can see the FQDN. And to the extent that, you know, DoH to some extent disrupted that, ECH will become a bigger problem, obviously.
E
So there's a lot of that being used. And then people that aren't using parental controls want to have things like malware and ransomware types of protection; and again, that's derived from well-known FQDNs of malware and C&C servers, so that's a way to do that at the network layer. And some users also want to prioritize, for particular devices, certain classes of application; that's sort of another thing. But really, the biggest use case is sort of that parental control and malware protection.
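The FQDN-based controls described here typically reduce to suffix matching: a resolved name is blocked if it, or any parent domain, is on a policy list. A minimal sketch of that matching step (hypothetical domain names; not Comcast's actual implementation):

```python
def domain_blocked(fqdn, blocklist):
    """True if the FQDN or any parent domain appears in the blocklist.

    For example, "cdn.ads.example.test" matches a blocklist entry
    "example.test". Comparison is case-insensitive and ignores a
    trailing root dot.
    """
    labels = fqdn.lower().rstrip(".").split(".")
    # Check the full name, then each successively shorter suffix.
    return any(".".join(labels[i:]) in blocklist for i in range(len(labels)))
```

A resolver-integrated filter would apply this per query; as noted in the discussion, once the name itself is hidden (ECH, or encrypted DNS to a third-party resolver), there is nothing left at this layer to match on.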
E
That kind of security protection. And oftentimes the answer is, well, oh, it just has to happen on the endpoint. The problem then ends up being that you have an ecosystem of really three plausible providers: Microsoft, Google, and Apple. Number one, that's probably not satisfactory from the standpoint of centralization; but also, users have very mixed environments, and then they have lots of IoT and other things which may not have the ability to be managed
E
at that endpoint. And so, therefore, some management point in their home LAN that they control is a better point, as an example. And then just one last comment: within the operator network, it's helpful to generally know what the destinations are and what the general types of application class are, because it can make you change network planning decisions, you know, at a very, very high level.
L
I think this is a very interesting topic that hasn't received enough attention. And just as one data point: my mom recently got a new job, and her job is at a bring-your-own-device company, so she just uses her personal laptop for work purposes. I think this kind of bring-your-own-device setup is increasingly common, and so I think we have to reflect on what it means for non-enterprise
L
privacy. It seems like, with bring your own device, there's a really important bleed-over for personal privacy that doesn't have to do with enterprise networks, if you're using your home computer: if you install some monitoring thing on it, then the company just knows everything you're doing, all the time. So, yeah, I didn't have a conclusion there; I just wanted to add another data point.
A
That's good; all right, thank you. So that brings us to the top of the hour, and we are out of time. But thank you to everyone who presented or commented today, and just for listening and thinking about this. We're going to carry on tomorrow and Wednesday with more discussion; I'm looking forward to exploring more of those directions for how we go forward from here.